Arm queue; some of the simpler stuff, things others have reviewed (thanks!), etc.

-- PMM

The following changes since commit a97978bcc2d1f650c7d411428806e5b03082b8c7:

  Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.1-20210603' into staging (2021-06-03 10:00:35 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210603

for you to fetch changes up to 1c861885894d840235954060050d240259f5340b:

  tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed (2021-06-03 16:43:27 +0100)

----------------------------------------------------------------
target-arm queue:
 * Some not-yet-enabled preliminaries for M-profile MVE support
 * Consistently use "Cortex-Axx", not "Cortex Axx" in docs, comments
 * docs: Fix installation of man pages with Sphinx 4.x
 * Mark LDS{MIN,MAX} as signed operations
 * Fix missing syndrome value for DAIF and PAC check exceptions
 * Implement BFloat16 extensions
 * Refactoring of hvf accelerator code in preparation for aarch64 support
 * Fix some coverity nits in test code

----------------------------------------------------------------
Alexander Graf (12):
      hvf: Move assert_hvf_ok() into common directory
      hvf: Move vcpu thread functions into common directory
      hvf: Move cpu functions into common directory
      hvf: Move hvf internal definitions into common header
      hvf: Make hvf_set_phys_mem() static
      hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
      hvf: Split out common code on vcpu init and destroy
      hvf: Use cpu_synchronize_state()
      hvf: Make synchronize functions static
      hvf: Remove hvf-accel-ops.h
      hvf: Introduce hvf vcpu struct
      hvf: Simplify post reset/init/loadvm hooks

Damien Goutte-Gattat (1):
      docs: Fix installation of man pages with Sphinx 4.x

Jamie Iles (4):
      target/arm: fix missing exception class
      target/arm: fold do_raise_exception into raise_exception
      target/arm: use raise_exception_ra for MTE check failure
      target/arm: use raise_exception_ra for stack limit exception

Peter Maydell (15):
      target/arm: Add isar feature check functions for MVE
      target/arm: Update feature checks for insns which are "MVE or FP"
      target/arm: Move fpsp/fpdp isar check into callers of do_vfp_2op_sp/dp
      target/arm: Add MVE check to VMOV_reg_sp and VMOV_reg_dp
      target/arm: Fix return values in fp_sysreg_checks()
      target/arm: Implement M-profile VPR register
      target/arm: Make FPSCR.LTPSIZE writable for MVE
      target/arm: Allow board models to specify initial NS VTOR
      arm: Consistently use "Cortex-Axx", not "Cortex Axx"
      tests/qtest/bios-tables-test: Check for dup2() failure
      tests/qtest/e1000e-test: Check qemu_recv() succeeded
      tests/qtest/hd-geo-test: Fix checks on mkstemp() return value
      tests/qtest/pflash-cfi02-test: Avoid potential integer overflow
      tests/qtest/tpm-tests: Remove unnecessary NULL checks
      tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed

Richard Henderson (13):
      target/arm: Mark LDS{MIN,MAX} as signed operations
      target/arm: Add isar_feature_{aa32, aa64, aa64_sve}_bf16
      target/arm: Unify unallocated path in disas_fp_1src
      target/arm: Implement scalar float32 to bfloat16 conversion
      target/arm: Implement vector float32 to bfloat16 conversion
      softfpu: Add float_round_to_odd_inf
      target/arm: Implement bfloat16 dot product (vector)
      target/arm: Implement bfloat16 dot product (indexed)
      target/arm: Implement bfloat16 matrix multiply accumulate
      target/arm: Implement bfloat widening fma (vector)
      target/arm: Implement bfloat widening fma (indexed)
      linux-user/aarch64: Enable hwcap bits for bfloat16
      target/arm: Enable BFloat16 extensions

 docs/conf.py                    | 1 +
 docs/system/arm/aspeed.rst      | 4 +-
 docs/system/arm/nuvoton.rst     | 6 +-
 docs/system/arm/sabrelite.rst   | 2 +-
 include/fpu/softfloat-types.h   | 4 +-
 include/hw/arm/allwinner-h3.h   | 2 +-
 include/hw/arm/armv7m.h         | 2 +
 include/hw/core/cpu.h           | 3 +-
 include/sysemu/hvf_int.h        | 58 +++++
 target/arm/cpu.h                | 48 +++-
 target/arm/helper-sve.h         | 4 +
 target/arm/helper.h             | 15 ++
 target/i386/hvf/hvf-accel-ops.h | 23 --
 target/i386/hvf/hvf-i386.h      | 33 +--
 target/i386/hvf/vmx.h           | 24 +-
 target/i386/hvf/x86hvf.h        | 2 -
 target/arm/neon-dp.decode       | 1 +
 target/arm/neon-shared.decode   | 11 +
 target/arm/sve.decode           | 19 +-
 target/arm/vfp.decode           | 2 +
 accel/hvf/hvf-accel-ops.c       | 471 ++++++++++++++++++++++++++++++++++++++++
 accel/hvf/hvf-all.c             | 47 ++++
 hw/arm/armv7m.c                 | 7 +
 hw/arm/aspeed.c                 | 6 +-
 hw/arm/mcimx6ul-evk.c           | 2 +-
 hw/arm/mcimx7d-sabre.c          | 2 +-
 hw/arm/npcm7xx_boards.c         | 4 +-
 hw/arm/sabrelite.c              | 2 +-
 hw/misc/npcm7xx_clk.c           | 2 +-
 linux-user/elfload.c            | 2 +
 target/arm/cpu.c                | 13 ++
 target/arm/cpu64.c              | 3 +
 target/arm/cpu_tcg.c            | 1 +
 target/arm/m_helper.c           | 5 +-
 target/arm/machine.c            | 20 ++
 target/arm/mte_helper.c         | 12 +-
 target/arm/op_helper.c          | 32 ++-
 target/arm/sve_helper.c         | 2 +
 target/arm/translate-a64.c      | 155 +++++++++++--
 target/arm/translate-neon.c     | 91 ++++++++
 target/arm/translate-sve.c      | 112 ++++++++++
 target/arm/translate-vfp.c      | 164 ++++++++++----
 target/arm/vec_helper.c         | 140 +++++++++++-
 target/arm/vfp_helper.c         | 21 +-
 target/i386/hvf/hvf-accel-ops.c | 146 -------------
 target/i386/hvf/hvf.c           | 464 +++++----------------------------------
 target/i386/hvf/x86.c           | 28 +--
 target/i386/hvf/x86_descr.c     | 26 +--
 target/i386/hvf/x86_emu.c       | 62 +++---
 target/i386/hvf/x86_mmu.c       | 4 +-
 target/i386/hvf/x86_task.c      | 12 +-
 target/i386/hvf/x86hvf.c        | 222 +++++++++----------
 tests/qtest/bios-tables-test.c  | 8 +-
 tests/qtest/e1000e-test.c       | 3 +-
 tests/qtest/hd-geo-test.c       | 4 +-
 tests/qtest/pflash-cfi02-test.c | 2 +-
 tests/qtest/tpm-tests.c         | 12 +-
 tests/unit/test-vmstate.c       | 5 +-
 fpu/softfloat-parts.c.inc       | 6 +-
 MAINTAINERS                     | 8 +
 accel/hvf/meson.build           | 7 +
 accel/meson.build               | 1 +
 target/i386/hvf/meson.build     | 1 -
 63 files changed, 1666 insertions(+), 935 deletions(-)
 create mode 100644 include/sysemu/hvf_int.h
 delete mode 100644 target/i386/hvf/hvf-accel-ops.h
 create mode 100644 accel/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/hvf-all.c
 delete mode 100644 target/i386/hvf/hvf-accel-ops.c
 create mode 100644 accel/hvf/meson.build
Add the isar feature check functions we will need for v8.1M MVE:
 * a check for MVE present: this corresponds to the pseudocode's
   CheckDecodeFaults(ExtType_Mve)
 * a check for the optional floating-point part of MVE: this
   corresponds to CheckDecodeFaults(ExtType_MveFp)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-2-peter.maydell@linaro.org
---
 target/arm/cpu.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
     }
 }
 
+static inline bool isar_feature_aa32_mve(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) > 0;
+}
+
+static inline bool isar_feature_aa32_mve_fp(const ARMISARegisters *id)
+{
+    /*
+     * Return true if MVE is supported (either integer or floating point).
+     * We must check for M-profile as the MVFR1 field means something
+     * else for A-profile.
+     */
+    return isar_feature_aa32_mprofile(id) &&
+        FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 2;
+}
+
 static inline bool isar_feature_aa32_vfp_simd(const ARMISARegisters *id)
 {
     /*
-- 
2.20.1
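
As an aside (not part of the patch itself), the intended consumers of these
new checks are decodetree trans functions. A minimal sketch of how an
MVE-only instruction would be gated looks like the following, where
trans_SOME_MVE_INSN and arg_SOME_MVE_INSN are placeholder names and the
real MVE decode only arrives in later series:

    /* Illustrative sketch only, not from this series. */
    static bool trans_SOME_MVE_INSN(DisasContext *s, arg_SOME_MVE_INSN *a)
    {
        if (!dc_isar_feature(aa32_mve, s)) {
            return false;    /* MVE not present: the insn UNDEFs */
        }
        if (!vfp_access_check(s)) {
            return true;     /* access check raised an exception instead */
        }
        /* ... emit TCG ops for the operation here ... */
        return true;
    }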
Some v8M instructions are present if either the floating point
extension or MVE is implemented. Update our implementation of them
to check for MVE as well as for FP.

This is all the insns which use CheckDecodeFaults(ExtType_MveOrFp) or
CheckDecodeFaults(ExtType_MveOrDpFp) in their pseudocode, which are
essentially the loads and stores, moves and sysreg accesses, except
for VMOV_reg_sp and VMOV_reg_dp, which we handle in subsequent
patches because they need a refactor to provide a place to put the
new MVE check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 48 +++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
     /* VMOV scalar to general purpose register */
     TCGv_i32 tmp;
 
-    /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
-    if (a->size == MO_32
-        ? !dc_isar_feature(aa32_fpsp_v2, s)
-        : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return false;
+    /*
+     * SIZE == MO_32 is a VFP instruction; otherwise NEON. MVE has
+     * all sizes, whether the CPU has fp or not.
+     */
+    if (!dc_isar_feature(aa32_mve, s)) {
+        if (a->size == MO_32
+            ? !dc_isar_feature(aa32_fpsp_v2, s)
+            : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
+            return false;
+        }
     }
 
     /* UNDEF accesses to D16-D31 if they don't exist */
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
     /* VMOV general purpose register to scalar */
     TCGv_i32 tmp;
 
-    /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
-    if (a->size == MO_32
-        ? !dc_isar_feature(aa32_fpsp_v2, s)
-        : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return false;
+    /*
+     * SIZE == MO_32 is a VFP instruction; otherwise NEON. MVE has
+     * all sizes, whether the CPU has fp or not.
+     */
+    if (!dc_isar_feature(aa32_mve, s)) {
+        if (a->size == MO_32
+            ? !dc_isar_feature(aa32_fpsp_v2, s)
+            : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
+            return false;
+        }
     }
 
     /* UNDEF accesses to D16-D31 if they don't exist */
@@ -XXX,XX +XXX,XX @@ typedef enum FPSysRegCheckResult {
 
 static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
 {
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return FPSysRegCheckFailed;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
 {
     TCGv_i32 tmp;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
 {
     TCGv_i32 tmp;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
      * floating point register. Note that this does not require support
      * for double precision arithmetic.
      */
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     uint32_t offset;
     TCGv_i32 addr, tmp;
 
-    if (!dc_isar_feature(aa32_fp16_arith, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     uint32_t offset;
     TCGv_i32 addr, tmp;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
     TCGv_i64 tmp;
 
     /* Note that this does not require support for double arithmetic. */
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
     TCGv_i32 addr, tmp;
     int i, n;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
     int i, n;
 
     /* Note that this does not require support for double arithmetic. */
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
         return false;
     }
 
-- 
2.20.1
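
Every hunk above applies the same two-feature gate. Written once as a
helper it would be the one-liner sketched below; mve_or_fpsp_present is a
hypothetical name and the patch deliberately open-codes the check at each
callsite so that each trans function mirrors its pseudocode directly:

    /* Equivalent predicate, shown only for clarity; not part of the patch. */
    static bool mve_or_fpsp_present(DisasContext *s)
    {
        /* The insn is allowed if either single-precision VFP or MVE exists */
        return dc_isar_feature(aa32_fpsp_v2, s) || dc_isar_feature(aa32_mve, s);
    }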
The do_vfp_2op_sp() and do_vfp_2op_dp() functions currently check
whether floating point is supported via the aa32_fpdp_v2 and
aa32_fpsp_v2 isar checks. For v8.1M MVE support, the VMOV_reg trans
functions (but not any of the others) need to update this to also
allow the insn if MVE is implemented. Move the check out of the do_
function and into its callsites (which are all implemented via the
DO_VFP_2OP macro), so we have a place to change the check for the
VMOV insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-4-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
     int veclen = s->vec_len;
     TCGv_i32 f0, fd;
 
-    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
-        return false;
-    }
+    /* Note that the caller must check the aa32_fpsp_v2 feature. */
 
     if (!dc_isar_feature(aa32_fpshvec, s) &&
         (veclen != 0 || s->vec_stride != 0)) {
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
      */
     TCGv_i32 f0;
 
+    /* Note that the caller must check the aa32_fp16_arith feature */
+
     if (!dc_isar_feature(aa32_fp16_arith, s)) {
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
     int veclen = s->vec_len;
     TCGv_i64 f0, fd;
 
-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
-        return false;
-    }
+    /* Note that the caller must check the aa32_fpdp_v2 feature. */
 
     /* UNDEF accesses to D16-D31 if they don't exist */
     if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
     return true;
 }
 
-#define DO_VFP_2OP(INSN, PREC, FN) \
+#define DO_VFP_2OP(INSN, PREC, FN, CHECK) \
     static bool trans_##INSN##_##PREC(DisasContext *s, \
                                       arg_##INSN##_##PREC *a) \
     { \
+        if (!dc_isar_feature(CHECK, s)) { \
+            return false; \
+        } \
         return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
     }
 
-DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32)
-DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64)
+DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)
+DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64, aa32_fpdp_v2)
 
-DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh)
-DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss)
-DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd)
+DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
+DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
+DO_VFP_2OP(VABS, dp, gen_helper_vfp_absd, aa32_fpdp_v2)
 
-DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh)
-DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs)
-DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd)
+DO_VFP_2OP(VNEG, hp, gen_helper_vfp_negh, aa32_fp16_arith)
+DO_VFP_2OP(VNEG, sp, gen_helper_vfp_negs, aa32_fpsp_v2)
+DO_VFP_2OP(VNEG, dp, gen_helper_vfp_negd, aa32_fpdp_v2)
 
 static void gen_VSQRT_hp(TCGv_i32 vd, TCGv_i32 vm)
 {
@@ -XXX,XX +XXX,XX @@ static void gen_VSQRT_dp(TCGv_i64 vd, TCGv_i64 vm)
     gen_helper_vfp_sqrtd(vd, vm, cpu_env);
 }
 
-DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp)
-DO_VFP_2OP(VSQRT, sp, gen_VSQRT_sp)
-DO_VFP_2OP(VSQRT, dp, gen_VSQRT_dp)
+DO_VFP_2OP(VSQRT, hp, gen_VSQRT_hp, aa32_fp16_arith)
+DO_VFP_2OP(VSQRT, sp, gen_VSQRT_sp, aa32_fpsp_v2)
+DO_VFP_2OP(VSQRT, dp, gen_VSQRT_dp, aa32_fpdp_v2)
 
 static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
 {
-- 
2.20.1
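
To see what the extra macro argument buys, here is roughly what one
instantiation expands to after this patch (an approximation of the
preprocessor output from the DO_VFP_2OP definition above, shown only for
illustration):

    /* Approximate expansion of DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2) */
    static bool trans_VABS_sp(DisasContext *s, arg_VABS_sp *a)
    {
        if (!dc_isar_feature(aa32_fpsp_v2, s)) {
            return false;    /* the feature gate now lives in the caller */
        }
        return do_vfp_2op_sp(s, gen_helper_vfp_abss, a->vd, a->vm);
    }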
Split out the handling of VMOV_reg_sp and VMOV_reg_dp so that we can
permit the insns if either FP or MVE are present.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-5-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
         return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
     }
 
-DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)
-DO_VFP_2OP(VMOV_reg, dp, tcg_gen_mov_i64, aa32_fpdp_v2)
+#define DO_VFP_VMOV(INSN, PREC, FN) \
+    static bool trans_##INSN##_##PREC(DisasContext *s, \
+                                      arg_##INSN##_##PREC *a) \
+    { \
+        if (!dc_isar_feature(aa32_fp##PREC##_v2, s) && \
+            !dc_isar_feature(aa32_mve, s)) { \
+            return false; \
+        } \
+        return do_vfp_2op_##PREC(s, FN, a->vd, a->vm); \
+    }
+
+DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32)
+DO_VFP_VMOV(VMOV_reg, dp, tcg_gen_mov_i64)
 
 DO_VFP_2OP(VABS, hp, gen_helper_vfp_absh, aa32_fp16_arith)
 DO_VFP_2OP(VABS, sp, gen_helper_vfp_abss, aa32_fpsp_v2)
-- 
2.20.1
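
Expanded, the new DO_VFP_VMOV macro gives the VMOV_reg trans functions the
combined gate directly; approximately:

    /* Approximate expansion of DO_VFP_VMOV(VMOV_reg, sp, tcg_gen_mov_i32) */
    static bool trans_VMOV_reg_sp(DisasContext *s, arg_VMOV_reg_sp *a)
    {
        if (!dc_isar_feature(aa32_fpsp_v2, s) &&
            !dc_isar_feature(aa32_mve, s)) {
            return false;    /* neither VFP nor MVE: the insn UNDEFs */
        }
        return do_vfp_2op_sp(s, tcg_gen_mov_i32, a->vd, a->vm);
    }

That is the point of the split: the FP register file exists whenever MVE
does, so register-to-register VMOV should be legal even without scalar FP.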
The fp_sysreg_checks() function is supposed to be returning an
FPSysRegCheckResult, which is an enum with three possible values.
However, three places in the function "return false" (a hangover from
a previous iteration of the design where the function just returned a
bool). Make these return FPSysRegCheckFailed instead (for no
functional change, since both false and FPSysRegCheckFailed are
zero).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-6-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
         break;
     case ARM_VFP_FPSCR_NZCVQC:
         if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         break;
     case ARM_VFP_FPCXT_S:
     case ARM_VFP_FPCXT_NS:
         if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         if (!s->v8m_secure) {
-            return false;
+            return FPSysRegCheckFailed;
         }
         break;
     default:
-- 
2.20.1
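
For context, callers of fp_sysreg_checks() switch on its tri-state result,
roughly as sketched below; FPSysRegCheckFailed appears in the patch above,
while the other two enumerator names and the exact caller shape are assumed
from the surrounding translate-vfp.c code rather than shown in this series:

    /* Hypothetical caller excerpt showing why "false" (== 0) happened to work */
    switch (fp_sysreg_checks(s, regno)) {
    case FPSysRegCheckFailed:
        return false;        /* access is UNDEF */
    case FPSysRegCheckDone:
        return true;         /* the check already handled everything */
    case FPSysRegCheckContinue:
        break;               /* carry on and generate the access */
    }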
If MVE is implemented for an M-profile CPU then it has a VPR
register, which tracks predication information.

Implement the read and write handling of this register, and
the migration of its state.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-7-peter.maydell@linaro.org
---
 target/arm/cpu.h | 6 ++++++
 target/arm/machine.c | 19 +++++++++++++++++++
 target/arm/translate-vfp.c | 38 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 63 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t cpacr[M_REG_NUM_BANKS];
         uint32_t nsacr;
         int ltpsize;
+        uint32_t vpr;
     } v7m;
 
     /* Information associated with an exception about to be taken:
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_FPCCR, ASPEN, 31, 1)
                            R_V7M_FPCCR_UFRDY_MASK | \
                            R_V7M_FPCCR_ASPEN_MASK)
 
+/* v7M VPR bits */
+FIELD(V7M_VPR, P0, 0, 16)
+FIELD(V7M_VPR, MASK01, 16, 4)
+FIELD(V7M_VPR, MASK23, 20, 4)
+
 /*
  * System register ID fields.
  */
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_fp = {
     }
 };
 
+static bool mve_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+
+    return cpu_isar_feature(aa32_mve, cpu);
+}
+
+static const VMStateDescription vmstate_m_mve = {
+    .name = "cpu/m/mve",
127
+ if (!(s->gintsts & intr)) {
128
+ s->gintsts |= intr;
129
+ trace_usb_dwc2_raise_global_irq(intr);
130
+ dwc2_update_irq(s);
131
+ }
132
+}
133
+
134
+static inline void dwc2_lower_global_irq(DWC2State *s, uint32_t intr)
135
+{
136
+ if (s->gintsts & intr) {
137
+ s->gintsts &= ~intr;
138
+ trace_usb_dwc2_lower_global_irq(intr);
139
+ dwc2_update_irq(s);
140
+ }
141
+}
142
+
143
+static inline void dwc2_raise_host_irq(DWC2State *s, uint32_t host_intr)
144
+{
145
+ if (!(s->haint & host_intr)) {
146
+ s->haint |= host_intr;
147
+ s->haint &= 0xffff;
148
+ trace_usb_dwc2_raise_host_irq(host_intr);
149
+ if (s->haint & s->haintmsk) {
150
+ dwc2_raise_global_irq(s, GINTSTS_HCHINT);
151
+ }
152
+ }
153
+}
154
+
155
+static inline void dwc2_lower_host_irq(DWC2State *s, uint32_t host_intr)
156
+{
157
+ if (s->haint & host_intr) {
158
+ s->haint &= ~host_intr;
159
+ trace_usb_dwc2_lower_host_irq(host_intr);
160
+ if (!(s->haint & s->haintmsk)) {
161
+ dwc2_lower_global_irq(s, GINTSTS_HCHINT);
162
+ }
163
+ }
164
+}
165
+
166
+static inline void dwc2_update_hc_irq(DWC2State *s, int index)
167
+{
168
+ uint32_t host_intr = 1 << (index >> 3);
169
+
170
+ if (s->hreg1[index + 2] & s->hreg1[index + 3]) {
171
+ dwc2_raise_host_irq(s, host_intr);
172
+ } else {
173
+ dwc2_lower_host_irq(s, host_intr);
174
+ }
175
+}
176
+
177
+/* set a timer for EOF */
178
+static void dwc2_eof_timer(DWC2State *s)
179
+{
180
+ timer_mod(s->eof_timer, s->sof_time + s->usb_frame_time);
181
+}
182
+
183
+/* Set a timer for EOF and generate SOF event */
184
+static void dwc2_sof(DWC2State *s)
185
+{
186
+ s->sof_time += s->usb_frame_time;
187
+ trace_usb_dwc2_sof(s->sof_time);
188
+ dwc2_eof_timer(s);
189
+ dwc2_raise_global_irq(s, GINTSTS_SOF);
190
+}
191
+
192
+/* Do frame processing on frame boundary */
193
+static void dwc2_frame_boundary(void *opaque)
194
+{
195
+ DWC2State *s = opaque;
196
+ int64_t now;
197
+ uint16_t frcnt;
198
+
199
+ now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
200
+
201
+ /* Frame boundary, so do EOF stuff here */
202
+
203
+ /* Increment frame number */
204
+ frcnt = (uint16_t)((now - s->sof_time) / s->fi);
205
+ s->frame_number = (s->frame_number + frcnt) & 0xffff;
206
+ s->hfnum = s->frame_number & HFNUM_MAX_FRNUM;
207
+
208
+ /* Do SOF stuff here */
209
+ dwc2_sof(s);
210
+}
211
+
212
+/* Start sending SOF tokens on the USB bus */
213
+static void dwc2_bus_start(DWC2State *s)
214
+{
215
+ trace_usb_dwc2_bus_start();
216
+ s->sof_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
217
+ dwc2_eof_timer(s);
218
+}
219
+
220
+/* Stop sending SOF tokens on the USB bus */
221
+static void dwc2_bus_stop(DWC2State *s)
222
+{
223
+ trace_usb_dwc2_bus_stop();
224
+ timer_del(s->eof_timer);
225
+}
226
+
227
+static USBDevice *dwc2_find_device(DWC2State *s, uint8_t addr)
228
+{
229
+ USBDevice *dev;
230
+
231
+ trace_usb_dwc2_find_device(addr);
232
+
233
+ if (!(s->hprt0 & HPRT0_ENA)) {
234
+ trace_usb_dwc2_port_disabled(0);
235
+ } else {
236
+ dev = usb_find_device(&s->uport, addr);
237
+ if (dev != NULL) {
238
+ trace_usb_dwc2_device_found(0);
239
+ return dev;
240
+ }
241
+ }
242
+
243
+ trace_usb_dwc2_device_not_found();
244
+ return NULL;
245
+}
246
+
247
+static const char *pstatus[] = {
248
+ "USB_RET_SUCCESS", "USB_RET_NODEV", "USB_RET_NAK", "USB_RET_STALL",
249
+ "USB_RET_BABBLE", "USB_RET_IOERROR", "USB_RET_ASYNC",
250
+ "USB_RET_ADD_TO_QUEUE", "USB_RET_REMOVE_FROM_QUEUE"
251
+};
252
+
253
+static uint32_t pintr[] = {
254
+ HCINTMSK_XFERCOMPL, HCINTMSK_XACTERR, HCINTMSK_NAK, HCINTMSK_STALL,
255
+ HCINTMSK_BBLERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR,
256
+ HCINTMSK_XACTERR
257
+};
258
+
259
+static const char *types[] = {
260
+ "Ctrl", "Isoc", "Bulk", "Intr"
261
+};
262
+
263
+static const char *dirs[] = {
264
+ "Out", "In"
265
+};
266
+
267
+static void dwc2_handle_packet(DWC2State *s, uint32_t devadr, USBDevice *dev,
268
+ USBEndpoint *ep, uint32_t index, bool send)
269
+{
270
+ DWC2Packet *p;
271
+ uint32_t hcchar = s->hreg1[index];
272
+ uint32_t hctsiz = s->hreg1[index + 4];
273
+ uint32_t hcdma = s->hreg1[index + 5];
274
+ uint32_t chan, epnum, epdir, eptype, mps, pid, pcnt, len, tlen, intr = 0;
275
+ uint32_t tpcnt, stsidx, actual = 0;
276
+ bool do_intr = false, done = false;
277
+
278
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
279
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
280
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
281
+ mps = get_field(hcchar, HCCHAR_MPS);
282
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
283
+ pcnt = get_field(hctsiz, TSIZ_PKTCNT);
284
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
285
+ assert(len <= DWC2_MAX_XFER_SIZE);
286
+ chan = index >> 3;
287
+ p = &s->packet[chan];
288
+
289
+ trace_usb_dwc2_handle_packet(chan, dev, &p->packet, epnum, types[eptype],
290
+ dirs[epdir], mps, len, pcnt);
291
+
292
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
293
+ pid = USB_TOKEN_SETUP;
294
+ } else {
295
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
296
+ }
297
+
298
+ if (send) {
299
+ tlen = len;
300
+ if (p->small) {
301
+ if (tlen > mps) {
302
+ tlen = mps;
303
+ }
304
+ }
305
+
306
+ if (pid != USB_TOKEN_IN) {
307
+ trace_usb_dwc2_memory_read(hcdma, tlen);
308
+ if (dma_memory_read(&s->dma_as, hcdma,
309
+ s->usb_buf[chan], tlen) != MEMTX_OK) {
310
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_read failed\n",
311
+ __func__);
312
+ }
313
+ }
314
+
315
+ usb_packet_init(&p->packet);
316
+ usb_packet_setup(&p->packet, pid, ep, 0, hcdma,
317
+ pid != USB_TOKEN_IN, true);
318
+ usb_packet_addbuf(&p->packet, s->usb_buf[chan], tlen);
319
+ p->async = DWC2_ASYNC_NONE;
320
+ usb_handle_packet(dev, &p->packet);
321
+ } else {
322
+ tlen = p->len;
323
+ }
324
+
325
+ stsidx = -p->packet.status;
326
+ assert(stsidx < sizeof(pstatus) / sizeof(*pstatus));
327
+ actual = p->packet.actual_length;
328
+ trace_usb_dwc2_packet_status(pstatus[stsidx], actual);
329
+
330
+babble:
331
+ if (p->packet.status != USB_RET_SUCCESS &&
332
+ p->packet.status != USB_RET_NAK &&
333
+ p->packet.status != USB_RET_STALL &&
334
+ p->packet.status != USB_RET_ASYNC) {
335
+ trace_usb_dwc2_packet_error(pstatus[stsidx]);
336
+ }
337
+
338
+ if (p->packet.status == USB_RET_ASYNC) {
339
+ trace_usb_dwc2_async_packet(&p->packet, chan, dev, epnum,
340
+ dirs[epdir], tlen);
341
+ usb_device_flush_ep_queue(dev, ep);
342
+ assert(p->async != DWC2_ASYNC_INFLIGHT);
343
+ p->devadr = devadr;
344
+ p->epnum = epnum;
345
+ p->epdir = epdir;
346
+ p->mps = mps;
347
+ p->pid = pid;
348
+ p->index = index;
349
+ p->pcnt = pcnt;
350
+ p->len = tlen;
351
+ p->async = DWC2_ASYNC_INFLIGHT;
352
+ p->needs_service = false;
353
+ return;
354
+ }
355
+
356
+ if (p->packet.status == USB_RET_SUCCESS) {
357
+ if (actual > tlen) {
358
+ p->packet.status = USB_RET_BABBLE;
359
+ goto babble;
360
+ }
361
+
362
+ if (pid == USB_TOKEN_IN) {
363
+ trace_usb_dwc2_memory_write(hcdma, actual);
364
+ if (dma_memory_write(&s->dma_as, hcdma, s->usb_buf[chan],
365
+ actual) != MEMTX_OK) {
366
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_write failed\n",
367
+ __func__);
368
+ }
369
+ }
370
+
371
+ tpcnt = actual / mps;
372
+ if (actual % mps) {
373
+ tpcnt++;
374
+ if (pid == USB_TOKEN_IN) {
375
+ done = true;
376
+ }
377
+ }
378
+
379
+ pcnt -= tpcnt < pcnt ? tpcnt : pcnt;
380
+ set_field(&hctsiz, pcnt, TSIZ_PKTCNT);
381
+ len -= actual < len ? actual : len;
382
+ set_field(&hctsiz, len, TSIZ_XFERSIZE);
383
+ s->hreg1[index + 4] = hctsiz;
384
+ hcdma += actual;
385
+ s->hreg1[index + 5] = hcdma;
386
+
387
+ if (!pcnt || len == 0 || actual == 0) {
388
+ done = true;
389
+ }
390
+ } else {
391
+ intr |= pintr[stsidx];
392
+ if (p->packet.status == USB_RET_NAK &&
393
+ (eptype == USB_ENDPOINT_XFER_CONTROL ||
394
+ eptype == USB_ENDPOINT_XFER_BULK)) {
395
+ /*
396
+ * for ctrl/bulk, automatically retry on NAK,
397
+ * but send the interrupt anyway
398
+ */
399
+ intr &= ~HCINTMSK_RESERVED14_31;
400
+ s->hreg1[index + 2] |= intr;
401
+ do_intr = true;
402
+ } else {
403
+ intr |= HCINTMSK_CHHLTD;
404
+ done = true;
405
+ }
406
+ }
407
+
408
+ usb_packet_cleanup(&p->packet);
409
+
410
+ if (done) {
411
+ hcchar &= ~HCCHAR_CHENA;
412
+ s->hreg1[index] = hcchar;
413
+ if (!(intr & HCINTMSK_CHHLTD)) {
414
+ intr |= HCINTMSK_CHHLTD | HCINTMSK_XFERCOMPL;
415
+ }
416
+ intr &= ~HCINTMSK_RESERVED14_31;
417
+ s->hreg1[index + 2] |= intr;
418
+ p->needs_service = false;
419
+ trace_usb_dwc2_packet_done(pstatus[stsidx], actual, len, pcnt);
420
+ dwc2_update_hc_irq(s, index);
421
+ return;
422
+ }
423
+
424
+ p->devadr = devadr;
425
+ p->epnum = epnum;
426
+ p->epdir = epdir;
427
+ p->mps = mps;
428
+ p->pid = pid;
429
+ p->index = index;
430
+ p->pcnt = pcnt;
431
+ p->len = len;
432
+ p->needs_service = true;
433
+ trace_usb_dwc2_packet_next(pstatus[stsidx], len, pcnt);
434
+ if (do_intr) {
435
+ dwc2_update_hc_irq(s, index);
436
+ }
437
+}
438
+
439
+/* Attach or detach a device on root hub */
440
+
441
+static const char *speeds[] = {
442
+ "low", "full", "high"
443
+};
444
+
445
+static void dwc2_attach(USBPort *port)
446
+{
447
+ DWC2State *s = port->opaque;
448
+ int hispd = 0;
449
+
450
+ trace_usb_dwc2_attach(port);
451
+ assert(port->index == 0);
452
+
453
+ if (!port->dev || !port->dev->attached) {
454
+ return;
455
+ }
456
+
457
+ assert(port->dev->speed <= USB_SPEED_HIGH);
458
+ trace_usb_dwc2_attach_speed(speeds[port->dev->speed]);
459
+ s->hprt0 &= ~HPRT0_SPD_MASK;
460
+
461
+ switch (port->dev->speed) {
462
+ case USB_SPEED_LOW:
463
+ s->hprt0 |= HPRT0_SPD_LOW_SPEED << HPRT0_SPD_SHIFT;
464
+ break;
465
+ case USB_SPEED_FULL:
466
+ s->hprt0 |= HPRT0_SPD_FULL_SPEED << HPRT0_SPD_SHIFT;
467
+ break;
468
+ case USB_SPEED_HIGH:
469
+ s->hprt0 |= HPRT0_SPD_HIGH_SPEED << HPRT0_SPD_SHIFT;
470
+ hispd = 1;
471
+ break;
472
+ }
473
+
474
+ if (hispd) {
475
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 8000; /* 125000 */
476
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_HS) {
477
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_HS; /* 10.4 */
478
+ } else {
479
+ s->usb_bit_time = 1;
480
+ }
481
+ } else {
482
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
483
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
484
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
485
+ } else {
486
+ s->usb_bit_time = 1;
487
+ }
488
+ }
489
+
490
+ s->fi = USB_FRMINTVL - 1;
491
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_CONNSTS;
492
+
493
+ dwc2_bus_start(s);
494
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
495
+}
496
+
497
+static void dwc2_detach(USBPort *port)
498
+{
499
+ DWC2State *s = port->opaque;
500
+
501
+ trace_usb_dwc2_detach(port);
502
+ assert(port->index == 0);
503
+
504
+ dwc2_bus_stop(s);
505
+
506
+ s->hprt0 &= ~(HPRT0_SPD_MASK | HPRT0_SUSP | HPRT0_ENA | HPRT0_CONNSTS);
507
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_ENACHG;
508
+
509
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
510
+}
511
+
512
+static void dwc2_child_detach(USBPort *port, USBDevice *child)
513
+{
514
+ trace_usb_dwc2_child_detach(port, child);
515
+ assert(port->index == 0);
516
+}
517
+
518
+static void dwc2_wakeup(USBPort *port)
519
+{
520
+ DWC2State *s = port->opaque;
521
+
522
+ trace_usb_dwc2_wakeup(port);
523
+ assert(port->index == 0);
524
+
525
+ if (s->hprt0 & HPRT0_SUSP) {
526
+ s->hprt0 |= HPRT0_RES;
527
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
528
+ }
529
+
530
+ qemu_bh_schedule(s->async_bh);
531
+}
532
+
533
+static void dwc2_async_packet_complete(USBPort *port, USBPacket *packet)
534
+{
535
+ DWC2State *s = port->opaque;
536
+ DWC2Packet *p;
537
+ USBDevice *dev;
538
+ USBEndpoint *ep;
539
+
540
+ assert(port->index == 0);
541
+ p = container_of(packet, DWC2Packet, packet);
542
+ dev = dwc2_find_device(s, p->devadr);
543
+ ep = usb_ep_get(dev, p->pid, p->epnum);
544
+ trace_usb_dwc2_async_packet_complete(port, packet, p->index >> 3, dev,
545
+ p->epnum, dirs[p->epdir], p->len);
546
+ assert(p->async == DWC2_ASYNC_INFLIGHT);
547
+
548
+ if (packet->status == USB_RET_REMOVE_FROM_QUEUE) {
549
+ usb_cancel_packet(packet);
550
+ usb_packet_cleanup(packet);
551
+ return;
552
+ }
553
+
554
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, false);
555
+
556
+ p->async = DWC2_ASYNC_FINISHED;
557
+ qemu_bh_schedule(s->async_bh);
558
+}
559
+
560
+static USBPortOps dwc2_port_ops = {
561
+ .attach = dwc2_attach,
562
+ .detach = dwc2_detach,
563
+ .child_detach = dwc2_child_detach,
564
+ .wakeup = dwc2_wakeup,
565
+ .complete = dwc2_async_packet_complete,
566
+};
567
+
568
+static uint32_t dwc2_get_frame_remaining(DWC2State *s)
569
+{
570
+ uint32_t fr = 0;
571
+ int64_t tks;
572
+
573
+ tks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - s->sof_time;
574
+ if (tks < 0) {
575
+ tks = 0;
576
+ }
577
+
578
+ /* avoid muldiv if possible */
579
+ if (tks >= s->usb_frame_time) {
580
+ goto out;
581
+ }
582
+ if (tks < s->usb_bit_time) {
583
+ fr = s->fi;
584
+ goto out;
585
+ }
586
+
587
+ /* tks = number of ns since SOF, divided by 83 (fs) or 10 (hs) */
588
+ tks = tks / s->usb_bit_time;
589
+ if (tks >= (int64_t)s->fi) {
590
+ goto out;
591
+ }
592
+
593
+ /* remaining = frame interval minus tks */
594
+ fr = (uint32_t)((int64_t)s->fi - tks);
595
+
596
+out:
597
+ return fr;
598
+}
599
+
600
+static void dwc2_work_bh(void *opaque)
601
+{
602
+ DWC2State *s = opaque;
603
+ DWC2Packet *p;
604
+ USBDevice *dev;
605
+ USBEndpoint *ep;
606
+ int64_t t_now, expire_time;
607
+ int chan;
608
+ bool found = false;
609
+
610
+ trace_usb_dwc2_work_bh();
611
+ if (s->working) {
612
+ return;
613
+ }
614
+ s->working = true;
615
+
616
+ t_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
617
+ chan = s->next_chan;
618
+
619
+ do {
620
+ p = &s->packet[chan];
621
+ if (p->needs_service) {
622
+ dev = dwc2_find_device(s, p->devadr);
623
+ ep = usb_ep_get(dev, p->pid, p->epnum);
624
+ trace_usb_dwc2_work_bh_service(s->next_chan, chan, dev, p->epnum);
625
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, true);
626
+ found = true;
627
+ }
628
+ if (++chan == DWC2_NB_CHAN) {
629
+ chan = 0;
630
+ }
631
+ if (found) {
632
+ s->next_chan = chan;
633
+ trace_usb_dwc2_work_bh_next(chan);
634
+ }
635
+ } while (chan != s->next_chan);
636
+
637
+ if (found) {
638
+ expire_time = t_now + NANOSECONDS_PER_SECOND / 4000;
639
+ timer_mod(s->frame_timer, expire_time);
640
+ }
641
+ s->working = false;
642
+}
643
+
644
+static void dwc2_enable_chan(DWC2State *s, uint32_t index)
645
+{
646
+ USBDevice *dev;
647
+ USBEndpoint *ep;
648
+ uint32_t hcchar;
649
+ uint32_t hctsiz;
650
+ uint32_t devadr, epnum, epdir, eptype, pid, len;
651
+ DWC2Packet *p;
652
+
653
+ assert((index >> 3) < DWC2_NB_CHAN);
654
+ p = &s->packet[index >> 3];
655
+ hcchar = s->hreg1[index];
656
+ hctsiz = s->hreg1[index + 4];
657
+ devadr = get_field(hcchar, HCCHAR_DEVADDR);
658
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
659
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
660
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
661
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
662
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
663
+
664
+ dev = dwc2_find_device(s, devadr);
665
+
666
+ trace_usb_dwc2_enable_chan(index >> 3, dev, &p->packet, epnum);
667
+ if (dev == NULL) {
668
+ return;
669
+ }
670
+
671
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
672
+ pid = USB_TOKEN_SETUP;
673
+ } else {
674
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
675
+ }
676
+
677
+ ep = usb_ep_get(dev, pid, epnum);
678
+
679
+ /*
680
+ * Hack: Networking doesn't like us delivering large transfers, it kind
681
+ * of works but the latency is horrible. So if the transfer is <= the mtu
682
+ * size, we take that as a hint that this might be a network transfer,
683
+ * and do the transfer packet-by-packet.
684
+ */
685
+ if (len > 1536) {
686
+ p->small = false;
687
+ } else {
688
+ p->small = true;
689
+ }
690
+
691
+ dwc2_handle_packet(s, devadr, dev, ep, index, true);
692
+ qemu_bh_schedule(s->async_bh);
693
+}
694
+
695
+static const char *glbregnm[] = {
696
+ "GOTGCTL ", "GOTGINT ", "GAHBCFG ", "GUSBCFG ", "GRSTCTL ",
697
+ "GINTSTS ", "GINTMSK ", "GRXSTSR ", "GRXSTSP ", "GRXFSIZ ",
698
+ "GNPTXFSIZ", "GNPTXSTS ", "GI2CCTL ", "GPVNDCTL ", "GGPIO ",
699
+ "GUID ", "GSNPSID ", "GHWCFG1 ", "GHWCFG2 ", "GHWCFG3 ",
700
+ "GHWCFG4 ", "GLPMCFG ", "GPWRDN ", "GDFIFOCFG", "GADPCTL ",
701
+ "GREFCLK ", "GINTMSK2 ", "GINTSTS2 "
702
+};
703
+
704
+static uint64_t dwc2_glbreg_read(void *ptr, hwaddr addr, int index,
705
+ unsigned size)
706
+{
707
+ DWC2State *s = ptr;
708
+ uint32_t val;
709
+
710
+ assert(addr <= GINTSTS2);
711
+ val = s->glbreg[index];
712
+
713
+ switch (addr) {
714
+ case GRSTCTL:
715
+ /* clear any self-clearing bits that were set */
716
+ val &= ~(GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH | GRSTCTL_IN_TKNQ_FLSH |
717
+ GRSTCTL_FRMCNTRRST | GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
718
+ s->glbreg[index] = val;
719
+ break;
720
+ default:
721
+ break;
722
+ }
723
+
724
+ trace_usb_dwc2_glbreg_read(addr, glbregnm[index], val);
725
+ return val;
726
+}
727
+
728
+static void dwc2_glbreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
729
+ unsigned size)
730
+{
731
+ DWC2State *s = ptr;
732
+ uint64_t orig = val;
733
+ uint32_t *mmio;
734
+ uint32_t old;
735
+ int iflg = 0;
736
+
737
+ assert(addr <= GINTSTS2);
738
+ mmio = &s->glbreg[index];
739
+ old = *mmio;
740
+
741
+ switch (addr) {
742
+ case GOTGCTL:
743
+ /* don't allow setting of read-only bits */
744
+ val &= ~(GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
745
+ GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
746
+ GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
747
+ /* don't allow clearing of read-only bits */
748
+ val |= old & (GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
749
+ GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
750
+ GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
751
+ break;
752
+ case GAHBCFG:
753
+ if ((val & GAHBCFG_GLBL_INTR_EN) && !(old & GAHBCFG_GLBL_INTR_EN)) {
754
+ iflg = 1;
755
+ }
756
+ break;
757
+ case GRSTCTL:
758
+ val |= GRSTCTL_AHBIDLE;
759
+ val &= ~GRSTCTL_DMAREQ;
760
+ if (!(old & GRSTCTL_TXFFLSH) && (val & GRSTCTL_TXFFLSH)) {
761
+ /* TODO - TX fifo flush */
762
+ qemu_log_mask(LOG_UNIMP, "Tx FIFO flush not implemented\n");
763
+ }
764
+ if (!(old & GRSTCTL_RXFFLSH) && (val & GRSTCTL_RXFFLSH)) {
765
+ /* TODO - RX fifo flush */
766
+ qemu_log_mask(LOG_UNIMP, "Rx FIFO flush not implemented\n");
767
+ }
768
+ if (!(old & GRSTCTL_IN_TKNQ_FLSH) && (val & GRSTCTL_IN_TKNQ_FLSH)) {
769
+ /* TODO - device IN token queue flush */
770
+ qemu_log_mask(LOG_UNIMP, "Token queue flush not implemented\n");
771
+ }
772
+ if (!(old & GRSTCTL_FRMCNTRRST) && (val & GRSTCTL_FRMCNTRRST)) {
773
+ /* TODO - host frame counter reset */
774
+ qemu_log_mask(LOG_UNIMP, "Frame counter reset not implemented\n");
775
+ }
776
+ if (!(old & GRSTCTL_HSFTRST) && (val & GRSTCTL_HSFTRST)) {
777
+ /* TODO - host soft reset */
778
+ qemu_log_mask(LOG_UNIMP, "Host soft reset not implemented\n");
779
+ }
780
+ if (!(old & GRSTCTL_CSFTRST) && (val & GRSTCTL_CSFTRST)) {
781
+ /* TODO - core soft reset */
782
+ qemu_log_mask(LOG_UNIMP, "Core soft reset not implemented\n");
783
+ }
784
+ /* don't allow clearing of self-clearing bits */
785
+ val |= old & (GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH |
786
+ GRSTCTL_IN_TKNQ_FLSH | GRSTCTL_FRMCNTRRST |
787
+ GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
788
+ break;
789
+ case GINTSTS:
790
+ /* clear the write-1-to-clear bits */
791
+ val |= ~old;
792
+ val = ~val;
793
+ /* don't allow clearing of read-only bits */
794
+ val |= old & (GINTSTS_PTXFEMP | GINTSTS_HCHINT | GINTSTS_PRTINT |
795
+ GINTSTS_OEPINT | GINTSTS_IEPINT | GINTSTS_GOUTNAKEFF |
796
+ GINTSTS_GINNAKEFF | GINTSTS_NPTXFEMP | GINTSTS_RXFLVL |
797
+ GINTSTS_OTGINT | GINTSTS_CURMODE_HOST);
798
+ iflg = 1;
799
+ break;
800
+ case GINTMSK:
801
+ iflg = 1;
802
+ break;
803
+ default:
804
+ break;
805
+ }
806
+
807
+ trace_usb_dwc2_glbreg_write(addr, glbregnm[index], orig, old, val);
808
+ *mmio = val;
809
+
810
+ if (iflg) {
811
+ dwc2_update_irq(s);
812
+ }
813
+}
814
+
815
+static uint64_t dwc2_fszreg_read(void *ptr, hwaddr addr, int index,
816
+ unsigned size)
817
+{
818
+ DWC2State *s = ptr;
819
+ uint32_t val;
820
+
821
+ assert(addr == HPTXFSIZ);
822
+ val = s->fszreg[index];
823
+
824
+ trace_usb_dwc2_fszreg_read(addr, val);
825
+ return val;
826
+}
827
+
828
+static void dwc2_fszreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
829
+ unsigned size)
830
+{
831
+ DWC2State *s = ptr;
832
+ uint64_t orig = val;
833
+ uint32_t *mmio;
834
+ uint32_t old;
835
+
836
+ assert(addr == HPTXFSIZ);
837
+ mmio = &s->fszreg[index];
838
+ old = *mmio;
839
+
840
+ trace_usb_dwc2_fszreg_write(addr, orig, old, val);
841
+ *mmio = val;
842
+}
843
+
844
+static const char *hreg0nm[] = {
845
+ "HCFG ", "HFIR ", "HFNUM ", "<rsvd> ", "HPTXSTS ",
846
+ "HAINT ", "HAINTMSK ", "HFLBADDR ", "<rsvd> ", "<rsvd> ",
847
+ "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ",
848
+ "<rsvd> ", "HPRT0 "
849
+};
850
+
851
+static uint64_t dwc2_hreg0_read(void *ptr, hwaddr addr, int index,
852
+ unsigned size)
853
+{
854
+ DWC2State *s = ptr;
855
+ uint32_t val;
856
+
857
+ assert(addr >= HCFG && addr <= HPRT0);
858
+ val = s->hreg0[index];
859
+
860
+ switch (addr) {
861
+ case HFNUM:
862
+ val = (dwc2_get_frame_remaining(s) << HFNUM_FRREM_SHIFT) |
863
+ (s->hfnum << HFNUM_FRNUM_SHIFT);
864
+ break;
865
+ default:
866
+ break;
867
+ }
868
+
869
+ trace_usb_dwc2_hreg0_read(addr, hreg0nm[index], val);
870
+ return val;
871
+}
872
+
873
+static void dwc2_hreg0_write(void *ptr, hwaddr addr, int index, uint64_t val,
874
+ unsigned size)
875
+{
876
+ DWC2State *s = ptr;
877
+ USBDevice *dev = s->uport.dev;
878
+ uint64_t orig = val;
879
+ uint32_t *mmio;
880
+ uint32_t tval, told, old;
881
+ int prst = 0;
882
+ int iflg = 0;
883
+
884
+ assert(addr >= HCFG && addr <= HPRT0);
885
+ mmio = &s->hreg0[index];
886
+ old = *mmio;
887
+
888
+ switch (addr) {
889
+ case HFIR:
890
+ break;
891
+ case HFNUM:
892
+ case HPTXSTS:
893
+ case HAINT:
894
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
895
+ __func__);
896
+ return;
897
+ case HAINTMSK:
898
+ val &= 0xffff;
899
+ break;
900
+ case HPRT0:
901
+ /* don't allow clearing of read-only bits */
902
+ val |= old & (HPRT0_SPD_MASK | HPRT0_LNSTS_MASK | HPRT0_OVRCURRACT |
903
+ HPRT0_CONNSTS);
904
+ /* don't allow clearing of self-clearing bits */
905
+ val |= old & (HPRT0_SUSP | HPRT0_RES);
906
+ /* don't allow setting of self-setting bits */
907
+ if (!(old & HPRT0_ENA) && (val & HPRT0_ENA)) {
908
+ val &= ~HPRT0_ENA;
909
+ }
910
+ /* clear the write-1-to-clear bits */
911
+ tval = val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
912
+ HPRT0_CONNDET);
913
+ told = old & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
914
+ HPRT0_CONNDET);
915
+ tval |= ~told;
916
+ tval = ~tval;
917
+ tval &= (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
918
+ HPRT0_CONNDET);
919
+ val &= ~(HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
920
+ HPRT0_CONNDET);
921
+ val |= tval;
922
+ if (!(val & HPRT0_RST) && (old & HPRT0_RST)) {
923
+ if (dev && dev->attached) {
924
+ val |= HPRT0_ENA | HPRT0_ENACHG;
925
+ prst = 1;
926
+ }
927
+ }
928
+ if (val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_CONNDET)) {
929
+ iflg = 1;
930
+ } else {
931
+ iflg = -1;
932
+ }
933
+ break;
934
+ default:
935
+ break;
936
+ }
937
+
938
+ if (prst) {
939
+ trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old,
940
+ val & ~HPRT0_CONNDET);
941
+ trace_usb_dwc2_hreg0_action("call usb_port_reset");
942
+ usb_port_reset(&s->uport);
943
+ val &= ~HPRT0_CONNDET;
944
+ } else {
945
+ trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old, val);
946
+ }
947
+
948
+ *mmio = val;
949
+
950
+ if (iflg > 0) {
951
+ trace_usb_dwc2_hreg0_action("enable PRTINT");
952
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
953
+ } else if (iflg < 0) {
954
+ trace_usb_dwc2_hreg0_action("disable PRTINT");
955
+ dwc2_lower_global_irq(s, GINTSTS_PRTINT);
956
+ }
957
+}
958
+
959
+static const char *hreg1nm[] = {
960
+ "HCCHAR ", "HCSPLT ", "HCINT ", "HCINTMSK", "HCTSIZ ", "HCDMA ",
961
+ "<rsvd> ", "HCDMAB "
962
+};
963
+
964
+static uint64_t dwc2_hreg1_read(void *ptr, hwaddr addr, int index,
965
+ unsigned size)
966
+{
967
+ DWC2State *s = ptr;
968
+ uint32_t val;
969
+
970
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
971
+ val = s->hreg1[index];
972
+
973
+ trace_usb_dwc2_hreg1_read(addr, hreg1nm[index & 7], addr >> 5, val);
974
+ return val;
975
+}
976
+
977
+static void dwc2_hreg1_write(void *ptr, hwaddr addr, int index, uint64_t val,
978
+ unsigned size)
979
+{
980
+ DWC2State *s = ptr;
981
+ uint64_t orig = val;
982
+ uint32_t *mmio;
983
+ uint32_t old;
984
+ int iflg = 0;
985
+ int enflg = 0;
986
+ int disflg = 0;
987
+
988
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
989
+ mmio = &s->hreg1[index];
990
+ old = *mmio;
991
+
992
+ switch (HSOTG_REG(0x500) + (addr & 0x1c)) {
993
+ case HCCHAR(0):
994
+ if ((val & HCCHAR_CHDIS) && !(old & HCCHAR_CHDIS)) {
995
+ val &= ~(HCCHAR_CHENA | HCCHAR_CHDIS);
996
+ disflg = 1;
997
+ } else {
998
+ val |= old & HCCHAR_CHDIS;
999
+ if ((val & HCCHAR_CHENA) && !(old & HCCHAR_CHENA)) {
1000
+ val &= ~HCCHAR_CHDIS;
1001
+ enflg = 1;
1002
+ } else {
1003
+ val |= old & HCCHAR_CHENA;
1004
+ }
1005
+ }
1006
+ break;
1007
+ case HCINT(0):
1008
+ /* clear the write-1-to-clear bits */
1009
+ val |= ~old;
1010
+ val = ~val;
1011
+ val &= ~HCINTMSK_RESERVED14_31;
1012
+ iflg = 1;
1013
+ break;
1014
+ case HCINTMSK(0):
1015
+ val &= ~HCINTMSK_RESERVED14_31;
1016
+ iflg = 1;
1017
+ break;
1018
+ case HCDMAB(0):
1019
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
1020
+ __func__);
1021
+ return;
1022
+ default:
1023
+ break;
1024
+ }
1025
+
1026
+ trace_usb_dwc2_hreg1_write(addr, hreg1nm[index & 7], index >> 3, orig,
1027
+ old, val);
1028
+ *mmio = val;
1029
+
1030
+ if (disflg) {
1031
+ /* set ChHltd in HCINT */
1032
+ s->hreg1[(index & ~7) + 2] |= HCINTMSK_CHHLTD;
1033
+ iflg = 1;
1034
+ }
1035
+
1036
+ if (enflg) {
1037
+ dwc2_enable_chan(s, index & ~7);
1038
+ }
1039
+
1040
+ if (iflg) {
1041
+ dwc2_update_hc_irq(s, index & ~7);
1042
+ }
1043
+}
1044
+
1045
+static const char *pcgregnm[] = {
1046
+ "PCGCTL ", "PCGCCTL1 "
1047
+};
1048
+
1049
+static uint64_t dwc2_pcgreg_read(void *ptr, hwaddr addr, int index,
1050
+ unsigned size)
1051
+{
1052
+ DWC2State *s = ptr;
1053
+ uint32_t val;
1054
+
1055
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1056
+ val = s->pcgreg[index];
1057
+
1058
+ trace_usb_dwc2_pcgreg_read(addr, pcgregnm[index], val);
1059
+ return val;
1060
+}
1061
+
1062
+static void dwc2_pcgreg_write(void *ptr, hwaddr addr, int index,
1063
+ uint64_t val, unsigned size)
1064
+{
1065
+ DWC2State *s = ptr;
1066
+ uint64_t orig = val;
1067
+ uint32_t *mmio;
1068
+ uint32_t old;
1069
+
1070
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1071
+ mmio = &s->pcgreg[index];
1072
+ old = *mmio;
1073
+
1074
+ trace_usb_dwc2_pcgreg_write(addr, pcgregnm[index], orig, old, val);
1075
+ *mmio = val;
1076
+}
1077
+
1078
+static uint64_t dwc2_hsotg_read(void *ptr, hwaddr addr, unsigned size)
1079
+{
1080
+ uint64_t val;
1081
+
1082
+ switch (addr) {
1083
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1084
+ val = dwc2_glbreg_read(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, size);
1085
+ break;
1086
+ case HSOTG_REG(0x100):
1087
+ val = dwc2_fszreg_read(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, size);
1088
+ break;
1089
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1090
+ /* Gadget-mode registers, just return 0 for now */
1091
+ val = 0;
1092
+ break;
1093
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1094
+ val = dwc2_hreg0_read(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, size);
1095
+ break;
1096
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1097
+ val = dwc2_hreg1_read(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, size);
1098
+ break;
1099
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1100
+ /* Gadget-mode registers, just return 0 for now */
1101
+ val = 0;
1102
+ break;
1103
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1104
+ val = dwc2_pcgreg_read(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, size);
1105
+ break;
1106
+ default:
1107
+ g_assert_not_reached();
1108
+ }
1109
+
1110
+ return val;
1111
+}
1112
+
1113
+static void dwc2_hsotg_write(void *ptr, hwaddr addr, uint64_t val,
1114
+ unsigned size)
1115
+{
1116
+ switch (addr) {
1117
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1118
+ dwc2_glbreg_write(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, val, size);
1119
+ break;
1120
+ case HSOTG_REG(0x100):
1121
+ dwc2_fszreg_write(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, val, size);
1122
+ break;
1123
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1124
+ /* Gadget-mode registers, do nothing for now */
1125
+ break;
1126
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1127
+ dwc2_hreg0_write(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, val, size);
1128
+ break;
1129
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1130
+ dwc2_hreg1_write(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, val, size);
1131
+ break;
1132
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1133
+ /* Gadget-mode registers, do nothing for now */
1134
+ break;
1135
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1136
+ dwc2_pcgreg_write(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, val, size);
1137
+ break;
1138
+ default:
1139
+ g_assert_not_reached();
1140
+ }
1141
+}
1142
+
1143
+static const MemoryRegionOps dwc2_mmio_hsotg_ops = {
1144
+ .read = dwc2_hsotg_read,
1145
+ .write = dwc2_hsotg_write,
1146
+ .impl.min_access_size = 4,
1147
+ .impl.max_access_size = 4,
1148
+ .endianness = DEVICE_LITTLE_ENDIAN,
1149
+};
1150
+
1151
+static uint64_t dwc2_hreg2_read(void *ptr, hwaddr addr, unsigned size)
1152
+{
1153
+ /* TODO - implement FIFOs to support slave mode */
1154
+ trace_usb_dwc2_hreg2_read(addr, addr >> 12, 0);
1155
+ qemu_log_mask(LOG_UNIMP, "FIFO read not implemented\n");
1156
+ return 0;
1157
+}
1158
+
1159
+static void dwc2_hreg2_write(void *ptr, hwaddr addr, uint64_t val,
1160
+ unsigned size)
1161
+{
1162
+ uint64_t orig = val;
1163
+
1164
+ /* TODO - implement FIFOs to support slave mode */
1165
+ trace_usb_dwc2_hreg2_write(addr, addr >> 12, orig, 0, val);
1166
+ qemu_log_mask(LOG_UNIMP, "FIFO write not implemented\n");
1167
+}
1168
+
1169
+static const MemoryRegionOps dwc2_mmio_hreg2_ops = {
1170
+ .read = dwc2_hreg2_read,
1171
+ .write = dwc2_hreg2_write,
1172
+ .impl.min_access_size = 4,
1173
+ .impl.max_access_size = 4,
1174
+ .endianness = DEVICE_LITTLE_ENDIAN,
1175
+};
1176
+
1177
+static void dwc2_wakeup_endpoint(USBBus *bus, USBEndpoint *ep,
1178
+ unsigned int stream)
1179
+{
1180
+ DWC2State *s = container_of(bus, DWC2State, bus);
1181
+
1182
+ trace_usb_dwc2_wakeup_endpoint(ep, stream);
1183
+
1184
+ /* TODO - do something here? */
1185
+ qemu_bh_schedule(s->async_bh);
1186
+}
1187
+
1188
+static USBBusOps dwc2_bus_ops = {
1189
+ .wakeup_endpoint = dwc2_wakeup_endpoint,
1190
+};
1191
+
1192
+static void dwc2_work_timer(void *opaque)
1193
+{
1194
+ DWC2State *s = opaque;
1195
+
1196
+ trace_usb_dwc2_work_timer();
1197
+ qemu_bh_schedule(s->async_bh);
1198
+}
1199
+
1200
+static void dwc2_reset_enter(Object *obj, ResetType type)
1201
+{
1202
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1203
+ DWC2State *s = DWC2_USB(obj);
1204
+ int i;
1205
+
1206
+ trace_usb_dwc2_reset_enter();
1207
+
1208
+ if (c->parent_phases.enter) {
1209
+ c->parent_phases.enter(obj, type);
1210
+ }
1211
+
1212
+ timer_del(s->frame_timer);
1213
+ qemu_bh_cancel(s->async_bh);
1214
+
1215
+ if (s->uport.dev && s->uport.dev->attached) {
1216
+ usb_detach(&s->uport);
1217
+ }
1218
+
1219
+ dwc2_bus_stop(s);
1220
+
1221
+ s->gotgctl = GOTGCTL_BSESVLD | GOTGCTL_ASESVLD | GOTGCTL_CONID_B;
1222
+ s->gotgint = 0;
1223
+ s->gahbcfg = 0;
1224
+ s->gusbcfg = 5 << GUSBCFG_USBTRDTIM_SHIFT;
1225
+ s->grstctl = GRSTCTL_AHBIDLE;
1226
+ s->gintsts = GINTSTS_CONIDSTSCHNG | GINTSTS_PTXFEMP | GINTSTS_NPTXFEMP |
1227
+ GINTSTS_CURMODE_HOST;
1228
+ s->gintmsk = 0;
1229
+ s->grxstsr = 0;
1230
+ s->grxstsp = 0;
1231
+ s->grxfsiz = 1024;
1232
+ s->gnptxfsiz = 1024 << FIFOSIZE_DEPTH_SHIFT;
1233
+ s->gnptxsts = (4 << FIFOSIZE_DEPTH_SHIFT) | 1024;
1234
+ s->gi2cctl = GI2CCTL_I2CDATSE0 | GI2CCTL_ACK;
1235
+ s->gpvndctl = 0;
1236
+ s->ggpio = 0;
1237
+ s->guid = 0;
1238
+ s->gsnpsid = 0x4f54294a;
1239
+ s->ghwcfg1 = 0;
1240
+ s->ghwcfg2 = (8 << GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT) |
1241
+ (4 << GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT) |
1242
+ (4 << GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT) |
1243
+ GHWCFG2_DYNAMIC_FIFO |
1244
+ GHWCFG2_PERIO_EP_SUPPORTED |
1245
+ ((DWC2_NB_CHAN - 1) << GHWCFG2_NUM_HOST_CHAN_SHIFT) |
1246
+ (GHWCFG2_INT_DMA_ARCH << GHWCFG2_ARCHITECTURE_SHIFT) |
1247
+ (GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST << GHWCFG2_OP_MODE_SHIFT);
1248
+ s->ghwcfg3 = (4096 << GHWCFG3_DFIFO_DEPTH_SHIFT) |
1249
+ (4 << GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT) |
1250
+ (4 << GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT);
1251
+ s->ghwcfg4 = 0;
1252
+ s->glpmcfg = 0;
1253
+ s->gpwrdn = GPWRDN_PWRDNRSTN;
1254
+ s->gdfifocfg = 0;
1255
+ s->gadpctl = 0;
1256
+ s->grefclk = 0;
1257
+ s->gintmsk2 = 0;
1258
+ s->gintsts2 = 0;
1259
+
1260
+ s->hptxfsiz = 500 << FIFOSIZE_DEPTH_SHIFT;
1261
+
1262
+ s->hcfg = 2 << HCFG_RESVALID_SHIFT;
1263
+ s->hfir = 60000;
1264
+ s->hfnum = 0x3fff;
1265
+ s->hptxsts = (16 << TXSTS_QSPCAVAIL_SHIFT) | 32768;
1266
+ s->haint = 0;
1267
+ s->haintmsk = 0;
1268
+ s->hprt0 = 0;
1269
+
1270
+ memset(s->hreg1, 0, sizeof(s->hreg1));
1271
+ memset(s->pcgreg, 0, sizeof(s->pcgreg));
1272
+
1273
+ s->sof_time = 0;
1274
+ s->frame_number = 0;
1275
+ s->fi = USB_FRMINTVL - 1;
1276
+ s->next_chan = 0;
1277
+ s->working = false;
1278
+
1279
+ for (i = 0; i < DWC2_NB_CHAN; i++) {
1280
+ s->packet[i].needs_service = false;
1281
+ }
1282
+}
1283
+
1284
+static void dwc2_reset_hold(Object *obj)
1285
+{
1286
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1287
+ DWC2State *s = DWC2_USB(obj);
1288
+
1289
+ trace_usb_dwc2_reset_hold();
1290
+
1291
+ if (c->parent_phases.hold) {
1292
+ c->parent_phases.hold(obj);
1293
+ }
1294
+
1295
+ dwc2_update_irq(s);
1296
+}
1297
+
1298
+static void dwc2_reset_exit(Object *obj)
1299
+{
1300
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1301
+ DWC2State *s = DWC2_USB(obj);
1302
+
1303
+ trace_usb_dwc2_reset_exit();
1304
+
1305
+ if (c->parent_phases.exit) {
1306
+ c->parent_phases.exit(obj);
1307
+ }
1308
+
1309
+ s->hprt0 = HPRT0_PWR;
1310
+ if (s->uport.dev && s->uport.dev->attached) {
1311
+ usb_attach(&s->uport);
1312
+ usb_device_reset(s->uport.dev);
1313
+ }
1314
+}
1315
+
1316
+static void dwc2_realize(DeviceState *dev, Error **errp)
1317
+{
1318
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
1319
+ DWC2State *s = DWC2_USB(dev);
1320
+ Object *obj;
1321
+ Error *err = NULL;
1322
+
1323
+ obj = object_property_get_link(OBJECT(dev), "dma-mr", &err);
1324
+ if (err) {
1325
+ error_setg(errp, "dwc2: required dma-mr link not found: %s",
1326
+ error_get_pretty(err));
1327
+ return;
1328
+ }
1329
+ assert(obj != NULL);
1330
+
1331
+ s->dma_mr = MEMORY_REGION(obj);
1332
+ address_space_init(&s->dma_as, s->dma_mr, "dwc2");
1333
+
1334
+ usb_bus_new(&s->bus, sizeof(s->bus), &dwc2_bus_ops, dev);
1335
+ usb_register_port(&s->bus, &s->uport, s, 0, &dwc2_port_ops,
1336
+ USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL |
1337
+ (s->usb_version == 2 ? USB_SPEED_MASK_HIGH : 0));
1338
+ s->uport.dev = 0;
1339
+
1340
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
1341
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
1342
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
1343
+ } else {
1344
+ s->usb_bit_time = 1;
1345
+ }
1346
+
1347
+ s->fi = USB_FRMINTVL - 1;
1348
+ s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
1349
+ s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
1350
+ s->async_bh = qemu_bh_new(dwc2_work_bh, s);
1351
+
1352
+ sysbus_init_irq(sbd, &s->irq);
1353
+}
1354
+
1355
+static void dwc2_init(Object *obj)
1356
+{
1357
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1358
+ DWC2State *s = DWC2_USB(obj);
1359
+
1360
+ memory_region_init(&s->container, obj, "dwc2", DWC2_MMIO_SIZE);
1361
+ sysbus_init_mmio(sbd, &s->container);
1362
+
1363
+ memory_region_init_io(&s->hsotg, obj, &dwc2_mmio_hsotg_ops, s,
1364
+ "dwc2-io", 4 * KiB);
1365
+ memory_region_add_subregion(&s->container, 0x0000, &s->hsotg);
1366
+
1367
+ memory_region_init_io(&s->fifos, obj, &dwc2_mmio_hreg2_ops, s,
1368
+ "dwc2-fifo", 64 * KiB);
1369
+ memory_region_add_subregion(&s->container, 0x1000, &s->fifos);
1370
+}
1371
+
1372
+static const VMStateDescription vmstate_dwc2_state_packet = {
1373
+ .name = "dwc2/packet",
1374
+ .version_id = 1,
57
+ .version_id = 1,
1375
+ .minimum_version_id = 1,
58
+ .minimum_version_id = 1,
59
+ .needed = mve_needed,
1376
+ .fields = (VMStateField[]) {
60
+ .fields = (VMStateField[]) {
1377
+ VMSTATE_UINT32(devadr, DWC2Packet),
61
+ VMSTATE_UINT32(env.v7m.vpr, ARMCPU),
1378
+ VMSTATE_UINT32(epnum, DWC2Packet),
1379
+ VMSTATE_UINT32(epdir, DWC2Packet),
1380
+ VMSTATE_UINT32(mps, DWC2Packet),
1381
+ VMSTATE_UINT32(pid, DWC2Packet),
1382
+ VMSTATE_UINT32(index, DWC2Packet),
1383
+ VMSTATE_UINT32(pcnt, DWC2Packet),
1384
+ VMSTATE_UINT32(len, DWC2Packet),
1385
+ VMSTATE_INT32(async, DWC2Packet),
1386
+ VMSTATE_BOOL(small, DWC2Packet),
1387
+ VMSTATE_BOOL(needs_service, DWC2Packet),
1388
+ VMSTATE_END_OF_LIST()
62
+ VMSTATE_END_OF_LIST()
1389
+ },
63
+ },
1390
+};
64
+};
1391
+
65
+
1392
+const VMStateDescription vmstate_dwc2_state = {
66
static const VMStateDescription vmstate_m = {
1393
+ .name = "dwc2",
67
.name = "cpu/m",
1394
+ .version_id = 1,
68
.version_id = 4,
1395
+ .minimum_version_id = 1,
69
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
1396
+ .fields = (VMStateField[]) {
70
&vmstate_m_other_sp,
1397
+ VMSTATE_UINT32_ARRAY(glbreg, DWC2State,
71
&vmstate_m_v8m,
1398
+ DWC2_GLBREG_SIZE / sizeof(uint32_t)),
72
&vmstate_m_fp,
1399
+ VMSTATE_UINT32_ARRAY(fszreg, DWC2State,
73
+ &vmstate_m_mve,
1400
+ DWC2_FSZREG_SIZE / sizeof(uint32_t)),
74
NULL
1401
+ VMSTATE_UINT32_ARRAY(hreg0, DWC2State,
75
}
1402
+ DWC2_HREG0_SIZE / sizeof(uint32_t)),
76
};
1403
+ VMSTATE_UINT32_ARRAY(hreg1, DWC2State,
77
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
1404
+ DWC2_HREG1_SIZE / sizeof(uint32_t)),
78
index XXXXXXX..XXXXXXX 100644
1405
+ VMSTATE_UINT32_ARRAY(pcgreg, DWC2State,
79
--- a/target/arm/translate-vfp.c
1406
+ DWC2_PCGREG_SIZE / sizeof(uint32_t)),
80
+++ b/target/arm/translate-vfp.c
1407
+
81
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
1408
+ VMSTATE_TIMER_PTR(eof_timer, DWC2State),
82
return FPSysRegCheckFailed;
1409
+ VMSTATE_TIMER_PTR(frame_timer, DWC2State),
83
}
1410
+ VMSTATE_INT64(sof_time, DWC2State),
84
break;
1411
+ VMSTATE_INT64(usb_frame_time, DWC2State),
85
+ case ARM_VFP_VPR:
1412
+ VMSTATE_INT64(usb_bit_time, DWC2State),
86
+ case ARM_VFP_P0:
1413
+ VMSTATE_UINT32(usb_version, DWC2State),
87
+ if (!dc_isar_feature(aa32_mve, s)) {
1414
+ VMSTATE_UINT16(frame_number, DWC2State),
88
+ return FPSysRegCheckFailed;
1415
+ VMSTATE_UINT16(fi, DWC2State),
89
+ }
1416
+ VMSTATE_UINT16(next_chan, DWC2State),
90
+ break;
1417
+ VMSTATE_BOOL(working, DWC2State),
91
default:
1418
+
92
return FPSysRegCheckFailed;
1419
+ VMSTATE_STRUCT_ARRAY(packet, DWC2State, DWC2_NB_CHAN, 1,
93
}
1420
+ vmstate_dwc2_state_packet, DWC2Packet),
94
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
1421
+ VMSTATE_UINT8_2DARRAY(usb_buf, DWC2State, DWC2_NB_CHAN,
95
tcg_temp_free_i32(sfpa);
1422
+ DWC2_MAX_XFER_SIZE),
96
break;
1423
+
97
}
1424
+ VMSTATE_END_OF_LIST()
98
+ case ARM_VFP_VPR:
99
+ /* Behaves as NOP if not privileged */
100
+ if (IS_USER(s)) {
101
+ break;
102
+ }
103
+ tmp = loadfn(s, opaque);
104
+ store_cpu_field(tmp, v7m.vpr);
105
+ break;
106
+ case ARM_VFP_P0:
107
+ {
108
+ TCGv_i32 vpr;
109
+ tmp = loadfn(s, opaque);
110
+ vpr = load_cpu_field(v7m.vpr);
111
+ tcg_gen_deposit_i32(vpr, vpr, tmp,
112
+ R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
113
+ store_cpu_field(vpr, v7m.vpr);
114
+ tcg_temp_free_i32(tmp);
115
+ break;
1425
+ }
116
+ }
1426
+};
117
default:
1427
+
118
g_assert_not_reached();
1428
+static Property dwc2_usb_properties[] = {
119
}
1429
+ DEFINE_PROP_UINT32("usb_version", DWC2State, usb_version, 2),
120
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
1430
+ DEFINE_PROP_END_OF_LIST(),
121
tcg_temp_free_i32(fpscr);
1431
+};
122
break;
1432
+
123
}
1433
+static void dwc2_class_init(ObjectClass *klass, void *data)
124
+ case ARM_VFP_VPR:
1434
+{
125
+ /* Behaves as NOP if not privileged */
1435
+ DeviceClass *dc = DEVICE_CLASS(klass);
126
+ if (IS_USER(s)) {
1436
+ DWC2Class *c = DWC2_CLASS(klass);
127
+ break;
1437
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
128
+ }
1438
+
129
+ tmp = load_cpu_field(v7m.vpr);
1439
+ dc->realize = dwc2_realize;
130
+ storefn(s, opaque, tmp);
1440
+ dc->vmsd = &vmstate_dwc2_state;
131
+ break;
1441
+ set_bit(DEVICE_CATEGORY_USB, dc->categories);
132
+ case ARM_VFP_P0:
1442
+ device_class_set_props(dc, dwc2_usb_properties);
133
+ tmp = load_cpu_field(v7m.vpr);
1443
+ resettable_class_set_parent_phases(rc, dwc2_reset_enter, dwc2_reset_hold,
134
+ tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
1444
+ dwc2_reset_exit, &c->parent_phases);
135
+ storefn(s, opaque, tmp);
1445
+}
136
+ break;
1446
+
137
default:
1447
+static const TypeInfo dwc2_usb_type_info = {
138
g_assert_not_reached();
1448
+ .name = TYPE_DWC2_USB,
139
}
1449
+ .parent = TYPE_SYS_BUS_DEVICE,
1450
+ .instance_size = sizeof(DWC2State),
1451
+ .instance_init = dwc2_init,
1452
+ .class_size = sizeof(DWC2Class),
1453
+ .class_init = dwc2_class_init,
1454
+};
1455
+
1456
+static void dwc2_usb_register_types(void)
1457
+{
1458
+ type_register_static(&dwc2_usb_type_info);
1459
+}
1460
+
1461
+type_init(dwc2_usb_register_types)
1462
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
1463
index XXXXXXX..XXXXXXX 100644
1464
--- a/hw/usb/Kconfig
1465
+++ b/hw/usb/Kconfig
1466
@@ -XXX,XX +XXX,XX @@ config USB_MUSB
1467
bool
1468
select USB
1469
1470
+config USB_DWC2
1471
+ bool
1472
+ default y
1473
+ select USB
1474
+
1475
config TUSB6010
1476
bool
1477
select USB_MUSB
1478
diff --git a/hw/usb/Makefile.objs b/hw/usb/Makefile.objs
1479
index XXXXXXX..XXXXXXX 100644
1480
--- a/hw/usb/Makefile.objs
1481
+++ b/hw/usb/Makefile.objs
1482
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_USB_EHCI_SYSBUS) += hcd-ehci-sysbus.o
1483
common-obj-$(CONFIG_USB_XHCI) += hcd-xhci.o
1484
common-obj-$(CONFIG_USB_XHCI_NEC) += hcd-xhci-nec.o
1485
common-obj-$(CONFIG_USB_MUSB) += hcd-musb.o
1486
+common-obj-$(CONFIG_USB_DWC2) += hcd-dwc2.o
1487
1488
common-obj-$(CONFIG_TUSB6010) += tusb6010.o
1489
common-obj-$(CONFIG_IMX) += chipidea.o
1490
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
1491
index XXXXXXX..XXXXXXX 100644
1492
--- a/hw/usb/trace-events
1493
+++ b/hw/usb/trace-events
1494
@@ -XXX,XX +XXX,XX @@ usb_xhci_xfer_error(void *xfer, uint32_t ret) "%p: ret %d"
1495
usb_xhci_unimplemented(const char *item, int nr) "%s (0x%x)"
1496
usb_xhci_enforced_limit(const char *item) "%s"
1497
1498
+# hcd-dwc2.c
1499
+usb_dwc2_update_irq(uint32_t level) "level=%d"
1500
+usb_dwc2_raise_global_irq(uint32_t intr) "0x%08x"
1501
+usb_dwc2_lower_global_irq(uint32_t intr) "0x%08x"
1502
+usb_dwc2_raise_host_irq(uint32_t intr) "0x%04x"
1503
+usb_dwc2_lower_host_irq(uint32_t intr) "0x%04x"
1504
+usb_dwc2_sof(int64_t next) "next SOF %" PRId64
1505
+usb_dwc2_bus_start(void) "start SOFs"
1506
+usb_dwc2_bus_stop(void) "stop SOFs"
1507
+usb_dwc2_find_device(uint8_t addr) "%d"
1508
+usb_dwc2_port_disabled(uint32_t pnum) "port %d disabled"
1509
+usb_dwc2_device_found(uint32_t pnum) "device found on port %d"
1510
+usb_dwc2_device_not_found(void) "device not found"
1511
+usb_dwc2_handle_packet(uint32_t chan, void *dev, void *pkt, uint32_t ep, const char *type, const char *dir, uint32_t mps, uint32_t len, uint32_t pcnt) "ch %d dev %p pkt %p ep %d type %s dir %s mps %d len %d pcnt %d"
1512
+usb_dwc2_memory_read(uint32_t addr, uint32_t len) "addr %d len %d"
1513
+usb_dwc2_packet_status(const char *status, uint32_t len) "status %s len %d"
1514
+usb_dwc2_packet_error(const char *status) "ERROR %s"
1515
+usb_dwc2_async_packet(void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "pkt %p ch %d dev %p ep %d %s len %d"
1516
+usb_dwc2_memory_write(uint32_t addr, uint32_t len) "addr %d len %d"
1517
+usb_dwc2_packet_done(const char *status, uint32_t actual, uint32_t len, uint32_t pcnt) "status %s actual %d len %d pcnt %d"
1518
+usb_dwc2_packet_next(const char *status, uint32_t len, uint32_t pcnt) "status %s len %d pcnt %d"
1519
+usb_dwc2_attach(void *port) "port %p"
1520
+usb_dwc2_attach_speed(const char *speed) "%s-speed device attached"
1521
+usb_dwc2_detach(void *port) "port %p"
1522
+usb_dwc2_child_detach(void *port, void *child) "port %p child %p"
1523
+usb_dwc2_wakeup(void *port) "port %p"
1524
+usb_dwc2_async_packet_complete(void *port, void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "port %p packet %p ch %d dev %p ep %d %s len %d"
1525
+usb_dwc2_work_bh(void) ""
1526
+usb_dwc2_work_bh_service(uint32_t first, uint32_t current, void *dev, uint32_t ep) "first %d servicing %d dev %p ep %d"
1527
+usb_dwc2_work_bh_next(uint32_t chan) "next %d"
1528
+usb_dwc2_enable_chan(uint32_t chan, void *dev, void *pkt, uint32_t ep) "ch %d dev %p pkt %p ep %d"
1529
+usb_dwc2_glbreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1530
+usb_dwc2_glbreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1531
+usb_dwc2_fszreg_read(uint64_t addr, uint32_t val) " 0x%04" PRIx64 " HPTXFSIZ val 0x%08x"
1532
+usb_dwc2_fszreg_write(uint64_t addr, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " HPTXFSIZ val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1533
+usb_dwc2_hreg0_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1534
+usb_dwc2_hreg0_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1535
+usb_dwc2_hreg1_read(uint64_t addr, const char *reg, uint64_t chan, uint32_t val) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08x"
1536
+usb_dwc2_hreg1_write(uint64_t addr, const char *reg, uint64_t chan, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1537
+usb_dwc2_pcgreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1538
+usb_dwc2_pcgreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1539
+usb_dwc2_hreg2_read(uint64_t addr, uint64_t fifo, uint32_t val) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08x"
1540
+usb_dwc2_hreg2_write(uint64_t addr, uint64_t fifo, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1541
+usb_dwc2_hreg0_action(const char *s) "%s"
1542
+usb_dwc2_wakeup_endpoint(void *ep, uint32_t stream) "endp %p stream %d"
1543
+usb_dwc2_work_timer(void) ""
1544
+usb_dwc2_reset_enter(void) "=== RESET enter ==="
1545
+usb_dwc2_reset_hold(void) "=== RESET hold ==="
1546
+usb_dwc2_reset_exit(void) "=== RESET exit ==="
1547
+
1548
# desc.c
1549
usb_desc_device(int addr, int len, int ret) "dev %d query device, len %d, ret %d"
1550
usb_desc_device_qualifier(int addr, int len, int ret) "dev %d query device qualifier, len %d, ret %d"
1551
--
2.20.1
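The dwc2 controller added above is a plain sysbus device: dwc2_realize() insists on a "dma-mr" link for its DMA address space, and dwc2_init() exposes one MMIO container and one IRQ. A board or SoC model therefore wires it up roughly as in the sketch below; this is illustrative only, with TYPE_DWC2_USB assumed from "hw/usb/hcd-dwc2.h", the usual "hw/sysbus.h"/"qapi/error.h" headers, and the MMIO base, IRQ and dma_mr region invented for the example:

    static void example_soc_add_dwc2(MemoryRegion *dma_mr, qemu_irq usb_irq)
    {
        DeviceState *dev = qdev_new(TYPE_DWC2_USB);
        SysBusDevice *sbd = SYS_BUS_DEVICE(dev);

        /* Must be linked before realize: dwc2_realize() resolves "dma-mr"
         * to build the address space it DMAs guest buffers from/to. */
        object_property_add_const_link(OBJECT(dev), "dma-mr", OBJECT(dma_mr));

        sysbus_realize_and_unref(sbd, &error_fatal);

        /* Map the register/FIFO container and hook up the interrupt line
         * (base address and IRQ routing are board-specific). */
        sysbus_mmio_map(sbd, 0, 0x3f980000);
        sysbus_connect_irq(sbd, 0, usb_irq);
    }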
1
Convert the VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree.
(These are the last instructions in the group that are vectorized;
the rest all require looping over each element.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-4-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 35 ++++++++++++++++++++++
target/arm/translate-neon.inc.c | 7 +++++
target/arm/translate.c | 52 +++------------------------------
3 files changed, 46 insertions(+), 48 deletions(-)

The M-profile FPSCR has an LTPSIZE field, but if MVE is not
implemented it is read-only and always reads as 4; this is how QEMU
currently handles it.

Make the field writable when MVE is implemented.

We can safely add the field to the MVE migration struct because
currently no CPUs enable MVE and so the migration struct is never
used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-8-peter.maydell@linaro.org
---
target/arm/cpu.h | 3 ++-
target/arm/machine.c | 1 +
target/arm/vfp_helper.c | 9 ++++++---
3 files changed, 9 insertions(+), 4 deletions(-)
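For the 2-reg-shift conversion above, the DO_2SH() macro used in translate-neon.inc.c is what replaces the hand-written switch in translate.c: each invocation defines a trans_* function that the decodetree-generated decoder calls. Roughly, as a sketch of the expansion (do_vector_2sh() and the gvec helper are assumed from the surrounding code):

    /* Sketch of what DO_2SH(VSRA_S, gen_gvec_ssra) provides: the generated
     * decoder extracts the fields into arg_2reg_shift and the shared helper
     * emits the vectorized accumulate-shift-right operation. */
    static bool trans_VSRA_S_2sh(DisasContext *s, arg_2reg_shift *a)
    {
        return do_vector_2sh(s, a, gen_gvec_ssra);
    }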
13
19
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
22
--- a/target/arm/cpu.h
17
+++ b/target/arm/neon-dp.decode
23
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
24
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
19
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
25
uint32_t fpdscr[M_REG_NUM_BANKS];
20
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
26
uint32_t cpacr[M_REG_NUM_BANKS];
21
27
uint32_t nsacr;
22
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
28
- int ltpsize;
23
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
29
+ uint32_t ltpsize;
24
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
30
uint32_t vpr;
25
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
31
} v7m;
32
33
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
34
35
#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
36
#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
37
+#define FPCR_LTPSIZE_LENGTH 3
38
39
#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
40
#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
41
diff --git a/target/arm/machine.c b/target/arm/machine.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/machine.c
44
+++ b/target/arm/machine.c
45
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_mve = {
46
.needed = mve_needed,
47
.fields = (VMStateField[]) {
48
VMSTATE_UINT32(env.v7m.vpr, ARMCPU),
49
+ VMSTATE_UINT32(env.v7m.ltpsize, ARMCPU),
50
VMSTATE_END_OF_LIST()
51
},
52
};
53
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/vfp_helper.c
56
+++ b/target/arm/vfp_helper.c
57
@@ -XXX,XX +XXX,XX @@ uint32_t vfp_get_fpscr(CPUARMState *env)
58
59
void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
60
{
61
+ ARMCPU *cpu = env_archcpu(env);
26
+
62
+
27
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
63
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
28
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
64
- if (!cpu_isar_feature(any_fp16, env_archcpu(env))) {
29
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
65
+ if (!cpu_isar_feature(any_fp16, cpu)) {
30
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
66
val &= ~FPCR_FZ16;
31
+
67
}
32
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
68
33
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
69
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
34
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
70
* because in v7A no-short-vector-support cores still had to
35
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
71
* allow Stride/Len to be written with the only effect that
36
+
72
* some insns are required to UNDEF if the guest sets them.
37
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
73
- *
38
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
74
- * TODO: if M-profile MVE implemented, set LTPSIZE.
39
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
75
*/
40
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
76
env->vfp.vec_len = extract32(val, 16, 3);
41
+
77
env->vfp.vec_stride = extract32(val, 20, 2);
42
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
78
+ } else if (cpu_isar_feature(aa32_mve, cpu)) {
43
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
79
+ env->v7m.ltpsize = extract32(val, FPCR_LTPSIZE_SHIFT,
44
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
80
+ FPCR_LTPSIZE_LENGTH);
45
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
81
}
46
+
82
47
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
83
if (arm_feature(env, ARM_FEATURE_NEON)) {
48
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
49
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
50
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
51
+
52
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_d
53
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_s
54
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_h
55
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_b
56
+
57
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
58
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
59
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
60
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/translate-neon.inc.c
63
+++ b/target/arm/translate-neon.inc.c
64
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
65
66
DO_2SH(VSHL, tcg_gen_gvec_shli)
67
DO_2SH(VSLI, gen_gvec_sli)
68
+DO_2SH(VSRI, gen_gvec_sri)
69
+DO_2SH(VSRA_S, gen_gvec_ssra)
70
+DO_2SH(VSRA_U, gen_gvec_usra)
71
+DO_2SH(VRSHR_S, gen_gvec_srshr)
72
+DO_2SH(VRSHR_U, gen_gvec_urshr)
73
+DO_2SH(VRSRA_S, gen_gvec_srsra)
74
+DO_2SH(VRSRA_U, gen_gvec_ursra)
75
76
static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
77
{
78
diff --git a/target/arm/translate.c b/target/arm/translate.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate.c
81
+++ b/target/arm/translate.c
82
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
83
84
switch (op) {
85
case 0: /* VSHR */
86
+ case 1: /* VSRA */
87
+ case 2: /* VRSHR */
88
+ case 3: /* VRSRA */
89
+ case 4: /* VSRI */
90
case 5: /* VSHL, VSLI */
91
return 1; /* handled by decodetree */
92
default:
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
94
shift = shift - (1 << (size + 3));
95
}
96
97
- switch (op) {
98
- case 1: /* VSRA */
99
- /* Right shift comes here negative. */
100
- shift = -shift;
101
- if (u) {
102
- gen_gvec_usra(size, rd_ofs, rm_ofs, shift,
103
- vec_size, vec_size);
104
- } else {
105
- gen_gvec_ssra(size, rd_ofs, rm_ofs, shift,
106
- vec_size, vec_size);
107
- }
108
- return 0;
109
-
110
- case 2: /* VRSHR */
111
- /* Right shift comes here negative. */
112
- shift = -shift;
113
- if (u) {
114
- gen_gvec_urshr(size, rd_ofs, rm_ofs, shift,
115
- vec_size, vec_size);
116
- } else {
117
- gen_gvec_srshr(size, rd_ofs, rm_ofs, shift,
118
- vec_size, vec_size);
119
- }
120
- return 0;
121
-
122
- case 3: /* VRSRA */
123
- /* Right shift comes here negative. */
124
- shift = -shift;
125
- if (u) {
126
- gen_gvec_ursra(size, rd_ofs, rm_ofs, shift,
127
- vec_size, vec_size);
128
- } else {
129
- gen_gvec_srsra(size, rd_ofs, rm_ofs, shift,
130
- vec_size, vec_size);
131
- }
132
- return 0;
133
-
134
- case 4: /* VSRI */
135
- if (!u) {
136
- return 1;
137
- }
138
- /* Right shift comes here negative. */
139
- shift = -shift;
140
- gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
141
- vec_size, vec_size);
142
- return 0;
143
- }
144
-
145
if (size == 3) {
146
count = q + 1;
147
} else {
148
--
2.20.1
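The VRSHR/VRSRA patterns added in the decode file above are backed by the gen_gvec_srshr/urshr/srsra/ursra style helpers. As a rough standalone sketch (not QEMU code; 32-bit lanes, a shift of 1..32, and arithmetic >> on negative values are assumed), the per-lane arithmetic they implement looks like this:

  #include <stdint.h>
  #include <stdio.h>

  /* Rounding shift right: add 1 << (shift-1) before shifting. */
  static int32_t rshr_s32(int32_t x, unsigned shift)   /* VRSHR.S32 style */
  {
      int64_t round = (int64_t)1 << (shift - 1);
      return (int32_t)(((int64_t)x + round) >> shift);
  }

  static uint32_t rshr_u32(uint32_t x, unsigned shift) /* VRSHR.U32 style */
  {
      uint64_t round = (uint64_t)1 << (shift - 1);
      return (uint32_t)(((uint64_t)x + round) >> shift);
  }

  /* The accumulating form (VRSRA) adds the rounded result to the
   * existing destination lane instead of replacing it. */
  static int32_t rsra_s32(int32_t acc, int32_t x, unsigned shift)
  {
      return acc + rshr_s32(x, shift);
  }

  int main(void)
  {
      printf("%d\n", rshr_s32(-7, 2));              /* (-7 + 2) >> 2 = -2 */
      printf("%u\n", (unsigned)rshr_u32(7, 2));     /* (7 + 2) >> 2 = 2 */
      printf("%d\n", rsra_s32(100, -7, 2));         /* 100 + (-2) = 98 */
      return 0;
  }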
Old patch

Convert the VCVT fixed-point conversion operations in the
Neon 2-regs-and-shift group to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-9-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 11 +++++
target/arm/translate-neon.inc.c | 49 +++++++++++++++++++++
target/arm/translate.c | 75 +--------------------------------
3 files changed, 62 insertions(+), 73 deletions(-)

New patch

Currently we allow board models to specify the initial value of the
Secure VTOR register, using an init-svtor property on the TYPE_ARMV7M
object which is plumbed through to the CPU. Allow board models to
also specify the initial value of the Non-secure VTOR via a similar
init-nsvtor property.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-10-peter.maydell@linaro.org
---
include/hw/arm/armv7m.h | 2 ++
target/arm/cpu.h | 2 ++
hw/arm/armv7m.c | 7 +++++++
target/arm/cpu.c | 10 ++++++++++
4 files changed, 21 insertions(+)
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
17
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
19
--- a/include/hw/arm/armv7m.h
16
+++ b/target/arm/neon-dp.decode
20
+++ b/include/hw/arm/armv7m.h
17
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
21
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(ARMv7MState, ARMV7M)
18
@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
22
* devices will be automatically layered on top of this view.)
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
23
* + Property "idau": IDAU interface (forwarded to CPU object)
20
24
* + Property "init-svtor": secure VTOR reset value (forwarded to CPU object)
21
+# We use size=0 for fp32 and size=1 for fp16 to match the 3-same encodings.
25
+ * + Property "init-nsvtor": non-secure VTOR reset value (forwarded to CPU object)
22
+@2reg_vcvt .... ... . . . 1 ..... .... .... . q:1 . . .... \
26
* + Property "vfp": enable VFP (forwarded to CPU object)
23
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i5
27
* + Property "dsp": enable DSP (forwarded to CPU object)
24
+
28
* + Property "enable-bitband": expose bitbanded IO
25
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
29
@@ -XXX,XX +XXX,XX @@ struct ARMv7MState {
26
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
30
MemoryRegion *board_memory;
27
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
31
Object *idau;
28
@@ -XXX,XX +XXX,XX @@ VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
32
uint32_t init_svtor;
29
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
33
+ uint32_t init_nsvtor;
30
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
34
bool enable_bitband;
31
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
35
bool start_powered_off;
32
+
36
bool vfp;
33
+# VCVT fixed<->float conversions
37
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
+# TODO: FP16 fixed<->float conversions are opc==0b1100 and 0b1101
35
+VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
36
+VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
37
+VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
38
+VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
39
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
40
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/translate-neon.inc.c
39
--- a/target/arm/cpu.h
42
+++ b/target/arm/translate-neon.inc.c
40
+++ b/target/arm/cpu.h
43
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
41
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
44
};
42
45
return do_vshll_2sh(s, a, widenfn[a->size], true);
43
/* For v8M, initial value of the Secure VTOR */
46
}
44
uint32_t init_svtor;
47
+
45
+ /* For v8M, initial value of the Non-secure VTOR */
48
+static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
46
+ uint32_t init_nsvtor;
49
+ NeonGenTwoSingleOPFn *fn)
47
50
+{
48
/* [QEMU_]KVM_ARM_TARGET_* constant for this CPU, or
51
+ /* FP operations in 2-reg-and-shift group */
49
* QEMU_KVM_ARM_TARGET_NONE if the kernel doesn't support this CPU type.
52
+ TCGv_i32 tmp, shiftv;
50
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
53
+ TCGv_ptr fpstatus;
51
index XXXXXXX..XXXXXXX 100644
54
+ int pass;
52
--- a/hw/arm/armv7m.c
55
+
53
+++ b/hw/arm/armv7m.c
56
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
54
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
57
+ return false;
55
return;
56
}
57
}
58
+ if (object_property_find(OBJECT(s->cpu), "init-nsvtor")) {
59
+ if (!object_property_set_uint(OBJECT(s->cpu), "init-nsvtor",
60
+ s->init_nsvtor, errp)) {
61
+ return;
62
+ }
58
+ }
63
+ }
59
+
64
if (object_property_find(OBJECT(s->cpu), "start-powered-off")) {
60
+ /* UNDEF accesses to D16-D31 if they don't exist. */
65
if (!object_property_set_bool(OBJECT(s->cpu), "start-powered-off",
61
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
66
s->start_powered_off, errp)) {
62
+ ((a->vd | a->vm) & 0x10)) {
67
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
63
+ return false;
68
MemoryRegion *),
69
DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
70
DEFINE_PROP_UINT32("init-svtor", ARMv7MState, init_svtor, 0),
71
+ DEFINE_PROP_UINT32("init-nsvtor", ARMv7MState, init_nsvtor, 0),
72
DEFINE_PROP_BOOL("enable-bitband", ARMv7MState, enable_bitband, false),
73
DEFINE_PROP_BOOL("start-powered-off", ARMv7MState, start_powered_off,
74
false),
75
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/cpu.c
78
+++ b/target/arm/cpu.c
79
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
80
env->regs[14] = 0xffffffff;
81
82
env->v7m.vecbase[M_REG_S] = cpu->init_svtor & 0xffffff80;
83
+ env->v7m.vecbase[M_REG_NS] = cpu->init_nsvtor & 0xffffff80;
84
85
/* Load the initial SP and PC from offset 0 and 4 in the vector table */
86
vecbase = env->v7m.vecbase[env->v7m.secure];
87
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
88
&cpu->init_svtor,
89
OBJ_PROP_FLAG_READWRITE);
90
}
91
+ if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
92
+ /*
93
+ * Initial value of the NS VTOR (for cores without the Security
94
+ * extension, this is the only VTOR)
95
+ */
96
+ object_property_add_uint32_ptr(obj, "init-nsvtor",
97
+ &cpu->init_nsvtor,
98
+ OBJ_PROP_FLAG_READWRITE);
64
+ }
99
+ }
65
+
100
66
+ if ((a->vm | a->vd) & a->q) {
101
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property);
67
+ return false;
68
+ }
69
+
70
+ if (!vfp_access_check(s)) {
71
+ return true;
72
+ }
73
+
74
+ fpstatus = get_fpstatus_ptr(1);
75
+ shiftv = tcg_const_i32(a->shift);
76
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
77
+ tmp = neon_load_reg(a->vm, pass);
78
+ fn(tmp, tmp, shiftv, fpstatus);
79
+ neon_store_reg(a->vd, pass, tmp);
80
+ }
81
+ tcg_temp_free_ptr(fpstatus);
82
+ tcg_temp_free_i32(shiftv);
83
+ return true;
84
+}
85
+
86
+#define DO_FP_2SH(INSN, FUNC) \
87
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
88
+ { \
89
+ return do_fp_2sh(s, a, FUNC); \
90
+ }
91
+
92
+DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
93
+DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
94
+DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
95
+DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
101
int q;
102
int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
103
int size;
104
- int shift;
105
int pass;
106
int u;
107
int vec_size;
108
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
109
return 1;
110
} else if (insn & (1 << 4)) {
111
if ((insn & 0x00380080) != 0) {
112
- /* Two registers and shift. */
113
- op = (insn >> 8) & 0xf;
114
-
115
- switch (op) {
116
- case 0: /* VSHR */
117
- case 1: /* VSRA */
118
- case 2: /* VRSHR */
119
- case 3: /* VRSRA */
120
- case 4: /* VSRI */
121
- case 5: /* VSHL, VSLI */
122
- case 6: /* VQSHLU */
123
- case 7: /* VQSHL */
124
- case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
125
- case 9: /* VQSHRN, VQRSHRN */
126
- case 10: /* VSHLL, including VMOVL */
127
- return 1; /* handled by decodetree */
128
- default:
129
- break;
130
- }
131
-
132
- if (insn & (1 << 7)) {
133
- /* 64-bit shift. */
134
- if (op > 7) {
135
- return 1;
136
- }
137
- size = 3;
138
- } else {
139
- size = 2;
140
- while ((insn & (1 << (size + 19))) == 0)
141
- size--;
142
- }
143
- shift = (insn >> 16) & ((1 << (3 + size)) - 1);
144
- if (op >= 14) {
145
- /* VCVT fixed-point. */
146
- TCGv_ptr fpst;
147
- TCGv_i32 shiftv;
148
- VFPGenFixPointFn *fn;
149
-
150
- if (!(insn & (1 << 21)) || (q && ((rd | rm) & 1))) {
151
- return 1;
152
- }
153
-
154
- if (!(op & 1)) {
155
- if (u) {
156
- fn = gen_helper_vfp_ultos;
157
- } else {
158
- fn = gen_helper_vfp_sltos;
159
- }
160
- } else {
161
- if (u) {
162
- fn = gen_helper_vfp_touls_round_to_zero;
163
- } else {
164
- fn = gen_helper_vfp_tosls_round_to_zero;
165
- }
166
- }
167
-
168
- /* We have already masked out the must-be-1 top bit of imm6,
169
- * hence this 32-shift where the ARM ARM has 64-imm6.
170
- */
171
- shift = 32 - shift;
172
- fpst = get_fpstatus_ptr(1);
173
- shiftv = tcg_const_i32(shift);
174
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
175
- TCGv_i32 tmpf = neon_load_reg(rm, pass);
176
- fn(tmpf, tmpf, shiftv, fpst);
177
- neon_store_reg(rd, pass, tmpf);
178
- }
179
- tcg_temp_free_ptr(fpst);
180
- tcg_temp_free_i32(shiftv);
181
- } else {
182
- return 1;
183
- }
184
+ /* Two registers and shift: handled by decodetree */
185
+ return 1;
186
} else { /* (insn & 0x00380080) == 0 */
187
int invert, reg_ofs, vec_size;
188
102
189
--
103
--
190
2.20.1
104
2.20.1
191
105
192
106
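For the init-nsvtor patch above: on an M-profile reset the vector table base is the (N)VTOR value with the low seven bits cleared, and the initial SP and PC come from the first two words of that table. A minimal standalone sketch of that behaviour, assuming a fake flat guest memory array (not the QEMU implementation):

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical guest memory: a tiny vector table at address 0. */
  static uint32_t guest_mem[64] = {
      [0] = 0x20001000, /* word 0: initial Main SP */
      [1] = 0x00000101, /* word 1: reset handler address, Thumb bit set */
  };

  static uint32_t ldl(uint32_t addr) { return guest_mem[addr / 4]; }

  int main(void)
  {
      uint32_t init_nsvtor = 0x00000000;            /* board-provided property */
      uint32_t vecbase = init_nsvtor & 0xffffff80;  /* VTOR is 128-byte aligned */

      uint32_t sp = ldl(vecbase + 0);               /* initial stack pointer */
      uint32_t pc = ldl(vecbase + 4) & ~1u;         /* reset vector, clear T bit */

      printf("VTOR=0x%08x SP=0x%08x PC=0x%08x\n",
             (unsigned)vecbase, (unsigned)sp, (unsigned)pc);
      return 0;
  }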
New patch

The official punctuation for Arm CPU names uses a hyphen, like
"Cortex-A9". We mostly follow this, but in a few places usage
without the hyphen has crept in. Fix those so we consistently
use the same way of writing the CPU name.

This commit was created with:
git grep -z -l 'Cortex ' | xargs -0 sed -i 's/Cortex /Cortex-/'

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210527095152.10968-1-peter.maydell@linaro.org
---
docs/system/arm/aspeed.rst | 4 ++--
docs/system/arm/nuvoton.rst | 6 +++---
docs/system/arm/sabrelite.rst | 2 +-
include/hw/arm/allwinner-h3.h | 2 +-
hw/arm/aspeed.c | 6 +++---
hw/arm/mcimx6ul-evk.c | 2 +-
hw/arm/mcimx7d-sabre.c | 2 +-
hw/arm/npcm7xx_boards.c | 4 ++--
hw/arm/sabrelite.c | 2 +-
hw/misc/npcm7xx_clk.c | 2 +-
10 files changed, 16 insertions(+), 16 deletions(-)
27
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
28
index XXXXXXX..XXXXXXX 100644
29
--- a/docs/system/arm/aspeed.rst
30
+++ b/docs/system/arm/aspeed.rst
31
@@ -XXX,XX +XXX,XX @@ The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
32
Aspeed evaluation boards. They are based on different releases of the
33
Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
34
AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
35
-with dual cores ARM Cortex A7 CPUs (1.2GHz).
36
+with dual cores ARM Cortex-A7 CPUs (1.2GHz).
37
38
The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, USB, SPI, I2C,
39
etc.
40
@@ -XXX,XX +XXX,XX @@ AST2500 SoC based machines :
41
42
AST2600 SoC based machines :
43
44
-- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
45
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex-A7)
46
- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC
47
48
Supported devices
49
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
50
index XXXXXXX..XXXXXXX 100644
51
--- a/docs/system/arm/nuvoton.rst
52
+++ b/docs/system/arm/nuvoton.rst
53
@@ -XXX,XX +XXX,XX @@ Nuvoton iBMC boards (``npcm750-evb``, ``quanta-gsj``)
54
55
The `Nuvoton iBMC`_ chips (NPCM7xx) are a family of ARM-based SoCs that are
56
designed to be used as Baseboard Management Controllers (BMCs) in various
57
-servers. They all feature one or two ARM Cortex A9 CPU cores, as well as an
58
+servers. They all feature one or two ARM Cortex-A9 CPU cores, as well as an
59
assortment of peripherals targeted for either Enterprise or Data Center /
60
Hyperscale applications. The former is a superset of the latter, so NPCM750 has
61
all the peripherals of NPCM730 and more.
62
63
.. _Nuvoton iBMC: https://www.nuvoton.com/products/cloud-computing/ibmc/
64
65
-The NPCM750 SoC has two Cortex A9 cores and is targeted for the Enterprise
66
+The NPCM750 SoC has two Cortex-A9 cores and is targeted for the Enterprise
67
segment. The following machines are based on this chip :
68
69
- ``npcm750-evb`` Nuvoton NPCM750 Evaluation board
70
71
-The NPCM730 SoC has two Cortex A9 cores and is targeted for Data Center and
72
+The NPCM730 SoC has two Cortex-A9 cores and is targeted for Data Center and
73
Hyperscale applications. The following machines are based on this chip :
74
75
- ``quanta-gsj`` Quanta GSJ server BMC
76
diff --git a/docs/system/arm/sabrelite.rst b/docs/system/arm/sabrelite.rst
77
index XXXXXXX..XXXXXXX 100644
78
--- a/docs/system/arm/sabrelite.rst
79
+++ b/docs/system/arm/sabrelite.rst
80
@@ -XXX,XX +XXX,XX @@ Supported devices
81
82
The SABRE Lite machine supports the following devices:
83
84
- * Up to 4 Cortex A9 cores
85
+ * Up to 4 Cortex-A9 cores
86
* Generic Interrupt Controller
87
* 1 Clock Controller Module
88
* 1 System Reset Controller
89
diff --git a/include/hw/arm/allwinner-h3.h b/include/hw/arm/allwinner-h3.h
90
index XXXXXXX..XXXXXXX 100644
91
--- a/include/hw/arm/allwinner-h3.h
92
+++ b/include/hw/arm/allwinner-h3.h
93
@@ -XXX,XX +XXX,XX @@
94
*/
95
96
/*
97
- * The Allwinner H3 is a System on Chip containing four ARM Cortex A7
98
+ * The Allwinner H3 is a System on Chip containing four ARM Cortex-A7
99
* processor cores. Features and specifications include DDR2/DDR3 memory,
100
* SD/MMC storage cards, 10/100/1000Mbit Ethernet, USB 2.0, HDMI and
101
* various I/O modules.
102
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/hw/arm/aspeed.c
105
+++ b/hw/arm/aspeed.c
106
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_ast2600_evb_class_init(ObjectClass *oc, void *data)
107
MachineClass *mc = MACHINE_CLASS(oc);
108
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
109
110
- mc->desc = "Aspeed AST2600 EVB (Cortex A7)";
111
+ mc->desc = "Aspeed AST2600 EVB (Cortex-A7)";
112
amc->soc_name = "ast2600-a1";
113
amc->hw_strap1 = AST2600_EVB_HW_STRAP1;
114
amc->hw_strap2 = AST2600_EVB_HW_STRAP2;
115
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_tacoma_class_init(ObjectClass *oc, void *data)
116
MachineClass *mc = MACHINE_CLASS(oc);
117
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
118
119
- mc->desc = "OpenPOWER Tacoma BMC (Cortex A7)";
120
+ mc->desc = "OpenPOWER Tacoma BMC (Cortex-A7)";
121
amc->soc_name = "ast2600-a1";
122
amc->hw_strap1 = TACOMA_BMC_HW_STRAP1;
123
amc->hw_strap2 = TACOMA_BMC_HW_STRAP2;
124
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_rainier_class_init(ObjectClass *oc, void *data)
125
MachineClass *mc = MACHINE_CLASS(oc);
126
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
127
128
- mc->desc = "IBM Rainier BMC (Cortex A7)";
129
+ mc->desc = "IBM Rainier BMC (Cortex-A7)";
130
amc->soc_name = "ast2600-a1";
131
amc->hw_strap1 = RAINIER_BMC_HW_STRAP1;
132
amc->hw_strap2 = RAINIER_BMC_HW_STRAP2;
133
diff --git a/hw/arm/mcimx6ul-evk.c b/hw/arm/mcimx6ul-evk.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/hw/arm/mcimx6ul-evk.c
136
+++ b/hw/arm/mcimx6ul-evk.c
137
@@ -XXX,XX +XXX,XX @@ static void mcimx6ul_evk_init(MachineState *machine)
138
139
static void mcimx6ul_evk_machine_init(MachineClass *mc)
140
{
141
- mc->desc = "Freescale i.MX6UL Evaluation Kit (Cortex A7)";
142
+ mc->desc = "Freescale i.MX6UL Evaluation Kit (Cortex-A7)";
143
mc->init = mcimx6ul_evk_init;
144
mc->max_cpus = FSL_IMX6UL_NUM_CPUS;
145
mc->default_ram_id = "mcimx6ul-evk.ram";
146
diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/hw/arm/mcimx7d-sabre.c
149
+++ b/hw/arm/mcimx7d-sabre.c
150
@@ -XXX,XX +XXX,XX @@ static void mcimx7d_sabre_init(MachineState *machine)
151
152
static void mcimx7d_sabre_machine_init(MachineClass *mc)
153
{
154
- mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex A7)";
155
+ mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex-A7)";
156
mc->init = mcimx7d_sabre_init;
157
mc->max_cpus = FSL_IMX7_NUM_CPUS;
158
mc->default_ram_id = "mcimx7d-sabre.ram";
159
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/hw/arm/npcm7xx_boards.c
162
+++ b/hw/arm/npcm7xx_boards.c
163
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_machine_class_init(ObjectClass *oc, void *data)
164
165
npcm7xx_set_soc_type(nmc, TYPE_NPCM750);
166
167
- mc->desc = "Nuvoton NPCM750 Evaluation Board (Cortex A9)";
168
+ mc->desc = "Nuvoton NPCM750 Evaluation Board (Cortex-A9)";
169
mc->init = npcm750_evb_init;
170
mc->default_ram_size = 512 * MiB;
171
};
172
@@ -XXX,XX +XXX,XX @@ static void gsj_machine_class_init(ObjectClass *oc, void *data)
173
174
npcm7xx_set_soc_type(nmc, TYPE_NPCM730);
175
176
- mc->desc = "Quanta GSJ (Cortex A9)";
177
+ mc->desc = "Quanta GSJ (Cortex-A9)";
178
mc->init = quanta_gsj_init;
179
mc->default_ram_size = 512 * MiB;
180
};
181
diff --git a/hw/arm/sabrelite.c b/hw/arm/sabrelite.c
182
index XXXXXXX..XXXXXXX 100644
183
--- a/hw/arm/sabrelite.c
184
+++ b/hw/arm/sabrelite.c
185
@@ -XXX,XX +XXX,XX @@ static void sabrelite_init(MachineState *machine)
186
187
static void sabrelite_machine_init(MachineClass *mc)
188
{
189
- mc->desc = "Freescale i.MX6 Quad SABRE Lite Board (Cortex A9)";
190
+ mc->desc = "Freescale i.MX6 Quad SABRE Lite Board (Cortex-A9)";
191
mc->init = sabrelite_init;
192
mc->max_cpus = FSL_IMX6_NUM_CPUS;
193
mc->ignore_memory_transaction_failures = true;
194
diff --git a/hw/misc/npcm7xx_clk.c b/hw/misc/npcm7xx_clk.c
195
index XXXXXXX..XXXXXXX 100644
196
--- a/hw/misc/npcm7xx_clk.c
197
+++ b/hw/misc/npcm7xx_clk.c
198
@@ -XXX,XX +XXX,XX @@
199
#define NPCM7XX_CLOCK_REF_HZ (25000000)
200
201
/* Register Field Definitions */
202
-#define NPCM7XX_CLK_WDRCR_CA9C BIT(0) /* Cortex A9 Cores */
203
+#define NPCM7XX_CLK_WDRCR_CA9C BIT(0) /* Cortex-A9 Cores */
204
205
#define PLLCON_LOKI BIT(31)
206
#define PLLCON_LOKS BIT(30)
207
--
208
2.20.1
209
210
New patch

From: Damien Goutte-Gattat <dgouttegattat@incenp.org>

The 4.x branch of Sphinx introduces a breaking change, as generated man
pages are now written to subdirectories corresponding to the manual
section they belong to. This results in `make install` erroring out when
attempting to install the man pages, because they are not where it
expects to find them.

This patch restores the behavior of Sphinx 3.x regarding man pages.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/256
Signed-off-by: Damien Goutte-Gattat <dgouttegattat@incenp.org>
Message-id: 20210503161422.15028-1-dgouttegattat@incenp.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/conf.py | 1 +
1 file changed, 1 insertion(+)

diff --git a/docs/conf.py b/docs/conf.py
index XXXXXXX..XXXXXXX 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -XXX,XX +XXX,XX @@
['Stefan Hajnoczi <stefanha@redhat.com>',
'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
]
+man_make_section_directory = False

# -- Options for Texinfo output -------------------------------------------

--
2.20.1
Old patch

From: Richard Henderson <richard.henderson@linaro.org>

Rather than passing an opcode to a helper, fully decode the
operation at translate time. Use clear_tail_16 to zap the
balance of the SVE register with the AdvSIMD write.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 5 ++++-
target/arm/crypto_helper.c | 24 ++++++++++++++++++------
target/arm/translate-a64.c | 21 +++++----------------
3 files changed, 27 insertions(+), 23 deletions(-)

New patch

From: Richard Henderson <richard.henderson@linaro.org>

The operands to tcg_gen_atomic_fetch_s{min,max}_i64 must
be signed, so that the inputs are properly extended.
Zero extend the result afterward, as needed.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/364
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210602020720.47679-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
15
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
23
void, ptr, ptr, ptr, i32)
24
25
-DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
26
+DEF_HELPER_FLAGS_4(crypto_sm3tt1a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(crypto_sm3tt1b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(crypto_sm3tt2a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(crypto_sm3tt2b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
31
void, ptr, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
33
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/crypto_helper.c
36
+++ b/target/arm/crypto_helper.c
37
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
38
clear_tail_16(vd, desc);
39
}
40
41
-void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
42
- uint32_t opcode)
43
+static inline void QEMU_ALWAYS_INLINE
44
+crypto_sm3tt(uint64_t *rd, uint64_t *rn, uint64_t *rm,
45
+ uint32_t desc, uint32_t opcode)
46
{
47
- uint64_t *rd = vd;
48
- uint64_t *rn = vn;
49
- uint64_t *rm = vm;
50
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
51
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
52
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
53
+ uint32_t imm2 = simd_data(desc);
54
uint32_t t;
55
56
assert(imm2 < 4);
57
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
58
/* SM3TT2B */
59
t = cho(CR_ST_WORD(d, 3), CR_ST_WORD(d, 2), CR_ST_WORD(d, 1));
60
} else {
61
- g_assert_not_reached();
62
+ qemu_build_not_reached();
63
}
64
65
t += CR_ST_WORD(d, 0) + CR_ST_WORD(m, imm2);
66
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
67
68
rd[0] = d.l[0];
69
rd[1] = d.l[1];
70
+
71
+ clear_tail_16(rd, desc);
72
}
73
74
+#define DO_SM3TT(NAME, OPCODE) \
75
+ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
76
+ { crypto_sm3tt(vd, vn, vm, desc, OPCODE); }
77
+
78
+DO_SM3TT(crypto_sm3tt1a, 0)
79
+DO_SM3TT(crypto_sm3tt1b, 1)
80
+DO_SM3TT(crypto_sm3tt2a, 2)
81
+DO_SM3TT(crypto_sm3tt2b, 3)
82
+
83
+#undef DO_SM3TT
84
+
85
static uint8_t const sm4_sbox[] = {
86
0xd6, 0x90, 0xe9, 0xfe, 0xcc, 0xe1, 0x3d, 0xb7,
87
0x16, 0xb6, 0x14, 0xc2, 0x28, 0xfb, 0x2c, 0x05,
88
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
89
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate-a64.c
18
--- a/target/arm/translate-a64.c
91
+++ b/target/arm/translate-a64.c
19
+++ b/target/arm/translate-a64.c
92
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
93
*/
21
int o3_opc = extract32(insn, 12, 4);
94
static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
22
bool r = extract32(insn, 22, 1);
95
{
23
bool a = extract32(insn, 23, 1);
96
+ static gen_helper_gvec_3 * const fns[4] = {
24
- TCGv_i64 tcg_rs, clean_addr;
97
+ gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
25
+ TCGv_i64 tcg_rs, tcg_rt, clean_addr;
98
+ gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
26
AtomicThreeOpFn *fn = NULL;
99
+ };
27
+ MemOp mop = s->be_data | size | MO_ALIGN;
100
int opcode = extract32(insn, 10, 2);
28
101
int imm2 = extract32(insn, 12, 2);
29
if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
102
int rm = extract32(insn, 16, 5);
103
int rn = extract32(insn, 5, 5);
104
int rd = extract32(insn, 0, 5);
105
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
106
- TCGv_i32 tcg_imm2, tcg_opcode;
107
108
if (!dc_isar_feature(aa64_sm3, s)) {
109
unallocated_encoding(s);
30
unallocated_encoding(s);
110
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
31
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
111
return;
32
break;
33
case 004: /* LDSMAX */
34
fn = tcg_gen_atomic_fetch_smax_i64;
35
+ mop |= MO_SIGN;
36
break;
37
case 005: /* LDSMIN */
38
fn = tcg_gen_atomic_fetch_smin_i64;
39
+ mop |= MO_SIGN;
40
break;
41
case 006: /* LDUMAX */
42
fn = tcg_gen_atomic_fetch_umax_i64;
43
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
112
}
44
}
113
45
114
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
46
tcg_rs = read_cpu_reg(s, rs, true);
115
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
47
+ tcg_rt = cpu_reg(s, rt);
116
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
48
117
- tcg_imm2 = tcg_const_i32(imm2);
49
if (o3_opc == 1) { /* LDCLR */
118
- tcg_opcode = tcg_const_i32(opcode);
50
tcg_gen_not_i64(tcg_rs, tcg_rs);
119
-
51
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
120
- gen_helper_crypto_sm3tt(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr, tcg_imm2,
52
/* The tcg atomic primitives are all full barriers. Therefore we
121
- tcg_opcode);
53
* can ignore the Acquire and Release bits of this instruction.
122
-
54
*/
123
- tcg_temp_free_ptr(tcg_rd_ptr);
55
- fn(cpu_reg(s, rt), clean_addr, tcg_rs, get_mem_index(s),
124
- tcg_temp_free_ptr(tcg_rn_ptr);
56
- s->be_data | size | MO_ALIGN);
125
- tcg_temp_free_ptr(tcg_rm_ptr);
57
+ fn(tcg_rt, clean_addr, tcg_rs, get_mem_index(s), mop);
126
- tcg_temp_free_i32(tcg_imm2);
58
+
127
- tcg_temp_free_i32(tcg_opcode);
59
+ if ((mop & MO_SIGN) && size != MO_64) {
128
+ gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
60
+ tcg_gen_ext32u_i64(tcg_rt, tcg_rt);
61
+ }
129
}
62
}
130
63
131
/* C3.6 Data processing - SIMD, inc Crypto
64
/*
132
--
65
--
133
2.20.1
66
2.20.1
134
67
135
68
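The LDS{MIN,MAX} fix in the new-series patch above is essentially an extension bug: a 32-bit signed atomic min/max modelled on 64-bit register values must sign-extend both inputs before comparing and zero-extend the result written back. A hedged standalone illustration (plain C, not the TCG atomic helpers):

  #include <stdint.h>
  #include <stdio.h>
  #include <inttypes.h>

  /* 32-bit signed minimum on values held in 64-bit registers, as a
   * W-register LDSMIN would compute. Illustrative only. */
  static uint64_t smin32(uint64_t rs, uint64_t rt)
  {
      int64_t a = (int32_t)rs;      /* sign-extend the 32-bit operands */
      int64_t b = (int32_t)rt;
      int64_t res = a < b ? a : b;
      return (uint32_t)res;         /* result is zero-extended into Xt */
  }

  int main(void)
  {
      uint64_t old = 0xffffffff;    /* 32-bit -1, held zero-extended */
      uint64_t new = 5;

      /* Without sign extension the compare would see 0xffffffff > 5 and
       * pick 5; with it, -1 is correctly the minimum. */
      printf("0x%" PRIx64 "\n", smin32(old, new)); /* prints 0xffffffff */
      return 0;
  }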
New patch

From: Jamie Iles <jamie@nuviainc.com>

The DAIF and PAC checks used raise_exception_ra to raise an exception
and unwind CPU state but raise_exception_ra is currently designed for
handling data aborts as the syndrome is partially precomputed and
encoded in the TB and then merged in merge_syn_data_abort when handling
the data abort. Using raise_exception_ra for DAIF and PAC checks
results in an empty syndrome being retrieved from data[2] in
restore_state_to_opc and setting ESR to 0. This manifested as:

kvm [571]: Unknown exception class: esr: 0x000000 –
Unknown/Uncategorized

when launching a KVM guest when the host qemu used a CPU supporting
EL2+pointer authentication and enabling pointer authentication in the
guest.

Rework raise_exception_ra such that the state is restored before raising
the exception so that the exception is not clobbered by
restore_state_to_opc.

Fixes: 0d43e1a2d29a ("target/arm: Add PAuth helpers")
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: added comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/op_helper.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
uint32_t target_el, uintptr_t ra)
{
- CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
- cpu_loop_exit_restore(cs, ra);
+ CPUState *cs = env_cpu(env);
+
+ /*
+ * restore_state_to_opc() will set env->exception.syndrome, so
+ * we must restore CPU state here before setting the syndrome
+ * the caller passed us, and cannot use cpu_loop_exit_restore().
+ */
+ cpu_restore_state(cs, ra, true);
+ raise_exception(env, excp, syndrome, target_el);
}

uint64_t HELPER(neon_tbl)(CPUARMState *env, uint32_t desc,
--
2.20.1
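For context on the "Unknown exception class: esr: 0x000000" symptom above: in ESR_ELx the exception class (EC) sits in bits [31:26], the IL bit in [25] and the ISS in [24:0], and EC 0 means "unknown reason". A simplified sketch of that decomposition (the non-zero field values below are made-up examples, not taken from the patch):

  #include <stdint.h>
  #include <stdio.h>

  /* Minimal ESR_ELx split: EC in [31:26], IL in [25], ISS in [24:0]. */
  static unsigned esr_ec(uint32_t esr)  { return (esr >> 26) & 0x3f; }
  static unsigned esr_il(uint32_t esr)  { return (esr >> 25) & 1; }
  static uint32_t esr_iss(uint32_t esr) { return esr & 0x1ffffff; }

  int main(void)
  {
      uint32_t empty = 0;   /* the all-zero syndrome the guest complained about */
      uint32_t example = (0x24u << 26) | (1u << 25) | 0x11; /* example only */

      printf("EC=%#x IL=%u ISS=%#x\n",
             esr_ec(empty), esr_il(empty), (unsigned)esr_iss(empty));
      printf("EC=%#x IL=%u ISS=%#x\n",
             esr_ec(example), esr_il(example), (unsigned)esr_iss(example));
      return 0;
  }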
Old patch

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The ADC region size is 256B, split as:
- [0x00 - 0x4f] defined
- [0x50 - 0xff] reserved

All registers are 32-bit (thus when the datasheet mentions the
last defined register is 0x4c, it means its address range is
0x4c .. 0x4f).

This model implementation is also 32-bit. Set MemoryRegionOps
'impl' fields.

See:
'RM0033 Reference manual Rev 8', Table 10.13.18 "ADC register map".

Reported-by: Seth Kintigh <skintigh@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200603055915.17678-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/adc/stm32f2xx_adc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

New patch

From: Jamie Iles <jamie@nuviainc.com>

Now that there are no other users of do_raise_exception, fold it into
raise_exception.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/op_helper.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
14
26
diff --git a/hw/adc/stm32f2xx_adc.c b/hw/adc/stm32f2xx_adc.c
15
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
27
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/adc/stm32f2xx_adc.c
17
--- a/target/arm/op_helper.c
29
+++ b/hw/adc/stm32f2xx_adc.c
18
+++ b/target/arm/op_helper.c
30
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps stm32f2xx_adc_ops = {
19
@@ -XXX,XX +XXX,XX @@
31
.read = stm32f2xx_adc_read,
20
#define SIGNBIT (uint32_t)0x80000000
32
.write = stm32f2xx_adc_write,
21
#define SIGNBIT64 ((uint64_t)1 << 63)
33
.endianness = DEVICE_NATIVE_ENDIAN,
22
34
+ .impl.min_access_size = 4,
23
-static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
35
+ .impl.max_access_size = 4,
24
- uint32_t syndrome, uint32_t target_el)
36
};
25
+void raise_exception(CPUARMState *env, uint32_t excp,
37
26
+ uint32_t syndrome, uint32_t target_el)
38
static const VMStateDescription vmstate_stm32f2xx_adc = {
27
{
39
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_adc_init(Object *obj)
28
CPUState *cs = env_cpu(env);
40
sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
29
41
30
@@ -XXX,XX +XXX,XX @@ static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
42
memory_region_init_io(&s->mmio, obj, &stm32f2xx_adc_ops, s,
31
cs->exception_index = excp;
43
- TYPE_STM32F2XX_ADC, 0xFF);
32
env->exception.syndrome = syndrome;
44
+ TYPE_STM32F2XX_ADC, 0x100);
33
env->exception.target_el = target_el;
45
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
34
-
35
- return cs;
36
-}
37
-
38
-void raise_exception(CPUARMState *env, uint32_t excp,
39
- uint32_t syndrome, uint32_t target_el)
40
-{
41
- CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
42
cpu_loop_exit(cs);
46
}
43
}
47
44
48
--
45
--
49
2.20.1
46
2.20.1
50
47
51
48
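The stm32f2xx_adc patch above sizes the region at 0x100 bytes even though only offsets 0x00..0x4f are defined. A rough sketch of how such a register block is typically dispatched, with reserved or unaligned offsets logged rather than treated as fatal (plain C stand-in, not the MemoryRegionOps machinery):

  #include <stdint.h>
  #include <stdio.h>

  #define REGION_SIZE  0x100   /* 256-byte register block */
  #define LAST_DEFINED 0x4c    /* last defined 32-bit register: 0x4c..0x4f */

  static uint32_t regs[REGION_SIZE / 4];

  static uint32_t adc_read(uint32_t offset)
  {
      if (offset > LAST_DEFINED || (offset & 3)) {
          /* Reserved or unaligned: log and read as zero, don't abort. */
          fprintf(stderr, "adc: bad read offset 0x%02x\n", (unsigned)offset);
          return 0;
      }
      return regs[offset / 4];
  }

  int main(void)
  {
      regs[0] = 0x12345678;
      printf("0x%08x\n", (unsigned)adc_read(0x00));  /* defined register */
      printf("0x%08x\n", (unsigned)adc_read(0x80));  /* reserved: logged, 0 */
      return 0;
  }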
New patch

From: Jamie Iles <jamie@nuviainc.com>

Now that raise_exception_ra restores the state before raising the
exception we can use raise_exception_ra to perform the state restore +
exception raising without clobbering the syndrome.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: Keep the one line of the comment that is still relevant]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/mte_helper.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
17
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/mte_helper.c
20
+++ b/target/arm/mte_helper.c
21
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
22
23
switch (tcf) {
24
case 1:
25
- /*
26
- * Tag check fail causes a synchronous exception.
27
- *
28
- * In restore_state_to_opc, we set the exception syndrome
29
- * for the load or store operation. Unwind first so we
30
- * may overwrite that with the syndrome for the tag check.
31
- */
32
- cpu_restore_state(env_cpu(env), ra, true);
33
+ /* Tag check fail causes a synchronous exception. */
34
env->exception.vaddress = dirty_ptr;
35
36
is_write = FIELD_EX32(desc, MTEDESC, WRITE);
37
syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0,
38
is_write, 0x11);
39
- raise_exception(env, EXCP_DATA_ABORT, syn, exception_target_el(env));
40
+ raise_exception_ra(env, EXCP_DATA_ABORT, syn,
41
+ exception_target_el(env), ra);
42
/* noreturn, but fall through to the assert anyway */
43
44
case 0:
45
--
46
2.20.1
47
48
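Background for the tag-check-fail path touched above: MTE carries a 4-bit logical tag in bits [59:56] of the pointer, which is compared against the allocation tag of the memory granule being accessed. A small illustrative helper for the pointer side only (assumed layout, unrelated to QEMU's MTEDESC plumbing):

  #include <stdint.h>
  #include <stdio.h>

  /* MTE logical tag: pointer bits [59:56]. Illustrative helpers only. */
  static unsigned ptr_tag(uint64_t ptr)
  {
      return (ptr >> 56) & 0xf;
  }

  /* Strip the tag to recover the untagged address bits. */
  static uint64_t ptr_clear_tag(uint64_t ptr)
  {
      return ptr & ~(0xfull << 56);
  }

  int main(void)
  {
      uint64_t tagged = 0x0a00ffff12345678ull;
      printf("tag=%u addr=0x%016llx\n", ptr_tag(tagged),
             (unsigned long long)ptr_clear_tag(tagged));
      return 0;
  }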
Old patch

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

hw_error() calls exit(). This is a bit overkill when we can log
the accesses as unimplemented or guest error.

When fuzzing the devices, we don't want the whole process to
exit. Replace some hw_error() calls by qemu_log_mask()
(missed in commit 5a0001ec7e).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200525114123.21317-2-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/input/pxa2xx_keypad.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

New patch

From: Jamie Iles <jamie@nuviainc.com>

The sequence cpu_restore_state() + raise_exception() is equivalent to
raise_exception_ra(), so use that instead. (In this case we never
cared about the syndrome value, because M-profile doesn't use the
syndrome; the old code was just written unnecessarily awkwardly.)

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: Retain edited version of comment; rewrite commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/m_helper.c | 5 +----
target/arm/op_helper.c | 9 +++------
2 files changed, 4 insertions(+), 10 deletions(-)
18
diff --git a/hw/input/pxa2xx_keypad.c b/hw/input/pxa2xx_keypad.c
19
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/input/pxa2xx_keypad.c
21
--- a/target/arm/m_helper.c
21
+++ b/hw/input/pxa2xx_keypad.c
22
+++ b/target/arm/m_helper.c
22
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
23
*/
24
limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
24
25
25
#include "qemu/osdep.h"
26
if (val < limit) {
26
-#include "hw/hw.h"
27
- CPUState *cs = env_cpu(env);
27
+#include "qemu/log.h"
28
-
28
#include "hw/irq.h"
29
- cpu_restore_state(cs, GETPC(), true);
29
#include "migration/vmstate.h"
30
- raise_exception(env, EXCP_STKOF, 0, 1);
30
#include "hw/arm/pxa.h"
31
+ raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
31
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_keypad_read(void *opaque, hwaddr offset,
32
}
32
return s->kpkdi;
33
33
break;
34
if (is_psp) {
34
default:
35
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
35
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
36
index XXXXXXX..XXXXXXX 100644
36
+ qemu_log_mask(LOG_GUEST_ERROR,
37
--- a/target/arm/op_helper.c
37
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
38
+++ b/target/arm/op_helper.c
38
+ __func__, offset);
39
@@ -XXX,XX +XXX,XX @@ void HELPER(v8m_stackcheck)(CPUARMState *env, uint32_t newvalue)
39
}
40
* raising an exception if the limit is breached.
40
41
*/
41
return 0;
42
if (newvalue < v7m_sp_limit(env)) {
42
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_keypad_write(void *opaque, hwaddr offset,
43
- CPUState *cs = env_cpu(env);
43
break;
44
-
44
45
/*
45
default:
46
* Stack limit exceptions are a rare case, so rather than syncing
46
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
47
- * PC/condbits before the call, we use cpu_restore_state() to
47
+ qemu_log_mask(LOG_GUEST_ERROR,
48
- * get them right before raising the exception.
48
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
49
+ * PC/condbits before the call, we use raise_exception_ra() so
49
+ __func__, offset);
50
+ * that cpu_restore_state() will sort them out.
51
*/
52
- cpu_restore_state(cs, GETPC(), true);
53
- raise_exception(env, EXCP_STKOF, 0, 1);
54
+ raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
50
}
55
}
51
}
56
}
52
57
53
--
58
--
54
2.20.1
59
2.20.1
55
60
56
61
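The op_helper.c hunk above is the v8M stack-limit check: an SP update that would move below MSPLIM/PSPLIM raises a stack-overflow (STKOF) exception. A minimal sketch of the check itself, with exit() standing in for the exception machinery:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* v8M stack limit check, illustrative: the stack grows downwards, so a
   * new SP below the active limit register is a stack overflow. */
  static void stack_check(uint32_t new_sp, uint32_t sp_limit)
  {
      if (new_sp < sp_limit) {
          fprintf(stderr, "STKOF: new SP 0x%08x below limit 0x%08x\n",
                  (unsigned)new_sp, (unsigned)sp_limit);
          exit(1);   /* stand-in for raising the exception */
      }
  }

  int main(void)
  {
      uint32_t msplim = 0x20000400;
      stack_check(0x20000800, msplim);  /* fine */
      stack_check(0x200003f0, msplim);  /* trips the limit check */
      return 0;
  }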
Old patch

Convert the VSHR 2-reg-shift insns to decodetree.

Note that unlike the legacy decoder, we present the right shift
amount to the trans_ function as a positive integer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-3-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 25 ++++++++++++++++++++
target/arm/translate-neon.inc.c | 41 +++++++++++++++++++++++++++++++++
target/arm/translate.c | 21 +----------------
3 files changed, 67 insertions(+), 20 deletions(-)

New patch

From: Richard Henderson <richard.henderson@linaro.org>

Note that the SVE BFLOAT16 support does not require SVE2,
it is an independent extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
16
--- a/target/arm/cpu.h
18
+++ b/target/arm/neon-dp.decode
17
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
18
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
20
######################################################################
19
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
21
&2reg_shift vm vd q shift size
22
23
+# Right shifts are encoded as N - shift, where N is the element size in bits.
24
+%neon_rshift_i6 16:6 !function=rsub_64
25
+%neon_rshift_i5 16:5 !function=rsub_32
26
+%neon_rshift_i4 16:4 !function=rsub_16
27
+%neon_rshift_i3 16:3 !function=rsub_8
28
+
29
+@2reg_shr_d .... ... . . . ...... .... .... 1 q:1 . . .... \
30
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3 shift=%neon_rshift_i6
31
+@2reg_shr_s .... ... . . . 1 ..... .... .... 0 q:1 . . .... \
32
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 shift=%neon_rshift_i5
33
+@2reg_shr_h .... ... . . . 01 .... .... .... 0 q:1 . . .... \
34
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 shift=%neon_rshift_i4
35
+@2reg_shr_b .... ... . . . 001 ... .... .... 0 q:1 . . .... \
36
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i3
37
+
38
@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
39
&2reg_shift vm=%vm_dp vd=%vd_dp size=3
40
@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
41
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
42
@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
43
&2reg_shift vm=%vm_dp vd=%vd_dp size=0
44
45
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
46
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
47
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
48
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
49
+
50
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
51
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
52
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
53
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
54
+
55
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
56
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
57
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
58
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.inc.c
61
+++ b/target/arm/translate-neon.inc.c
62
@@ -XXX,XX +XXX,XX @@ static inline int plus1(DisasContext *s, int x)
63
return x + 1;
64
}
20
}
65
21
66
+static inline int rsub_64(DisasContext *s, int x)
22
+static inline bool isar_feature_aa32_bf16(const ARMISARegisters *id)
67
+{
23
+{
68
+ return 64 - x;
24
+ return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
69
+}
25
+}
70
+
26
+
71
+static inline int rsub_32(DisasContext *s, int x)
27
static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
28
{
29
return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
30
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
31
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
32
}
33
34
+static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
72
+{
35
+{
73
+ return 32 - x;
36
+ return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
74
+}
75
+static inline int rsub_16(DisasContext *s, int x)
76
+{
77
+ return 16 - x;
78
+}
79
+static inline int rsub_8(DisasContext *s, int x)
80
+{
81
+ return 8 - x;
82
+}
37
+}
83
+
38
+
84
/* Include the generated Neon decoder */
39
static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
85
#include "decode-neon-dp.inc.c"
40
{
86
#include "decode-neon-ls.inc.c"
41
/* We always set the AdvSIMD and FP fields identically. */
87
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
42
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
88
43
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
89
DO_2SH(VSHL, tcg_gen_gvec_shli)
44
}
90
DO_2SH(VSLI, gen_gvec_sli)
45
91
+
46
+static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
92
+static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
93
+{
47
+{
94
+ /* Signed shift out of range results in all-sign-bits */
48
+ return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
95
+ a->shift = MIN(a->shift, (8 << a->size) - 1);
96
+ return do_vector_2sh(s, a, tcg_gen_gvec_sari);
97
+}
49
+}
98
+
50
+
99
+static void gen_zero_rd_2sh(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
51
static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
100
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
52
{
101
+{
53
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
102
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, oprsz, maxsz, 0);
103
+}
104
+
105
+static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
106
+{
107
+ /* Shift out of range is architecturally valid and results in zero. */
108
+ if (a->shift >= (8 << a->size)) {
109
+ return do_vector_2sh(s, a, gen_zero_rd_2sh);
110
+ } else {
111
+ return do_vector_2sh(s, a, tcg_gen_gvec_shri);
112
+ }
113
+}
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate.c
117
+++ b/target/arm/translate.c
118
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
119
op = (insn >> 8) & 0xf;
120
121
switch (op) {
122
+ case 0: /* VSHR */
123
case 5: /* VSHL, VSLI */
124
return 1; /* handled by decodetree */
125
default:
126
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
127
}
128
129
switch (op) {
130
- case 0: /* VSHR */
131
- /* Right shift comes here negative. */
132
- shift = -shift;
133
- /* Shifts larger than the element size are architecturally
134
- * valid. Unsigned results in all zeros; signed results
135
- * in all sign bits.
136
- */
137
- if (!u) {
138
- tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
139
- MIN(shift, (8 << size) - 1),
140
- vec_size, vec_size);
141
- } else if (shift >= 8 << size) {
142
- tcg_gen_gvec_dup_imm(MO_8, rd_ofs, vec_size,
143
- vec_size, 0);
144
- } else {
145
- tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
146
- vec_size, vec_size);
147
- }
148
- return 0;
149
-
150
case 1: /* VSRA */
151
/* Right shift comes here negative. */
152
shift = -shift;
153
--
54
--
154
2.20.1
55
2.20.1
155
56
156
57
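The trans_VSHR_{S,U}_2sh handlers above encode the out-of-range rule quoted from the deleted legacy decoder: a signed right shift clamps the shift amount at element_bits-1 (an all-sign-bits result), while an unsigned shift of element_bits or more produces zero. A scalar illustration on 32-bit lanes (assumes arithmetic >> for signed values):

  #include <stdint.h>
  #include <stdio.h>

  #define ESIZE 32  /* element size in bits for this illustration */

  static int32_t vshr_s32(int32_t x, unsigned shift)
  {
      if (shift > ESIZE - 1) {
          shift = ESIZE - 1;      /* all-sign-bits result */
      }
      return x >> shift;
  }

  static uint32_t vshr_u32(uint32_t x, unsigned shift)
  {
      if (shift >= ESIZE) {
          return 0;               /* every bit is shifted out */
      }
      return x >> shift;
  }

  int main(void)
  {
      printf("%d\n", vshr_s32(-1000, 32));                /* -1 */
      printf("%u\n", (unsigned)vshr_u32(0xdeadbeef, 32)); /* 0 */
      return 0;
  }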
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
16
int rd = extract32(insn, 0, 5);
17
18
if (mos) {
19
- unallocated_encoding(s);
20
- return;
21
+ goto do_unallocated;
22
}
23
24
switch (opcode) {
25
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
26
/* FCVT between half, single and double precision */
27
int dtype = extract32(opcode, 0, 2);
28
if (type == 2 || dtype == type) {
29
- unallocated_encoding(s);
30
- return;
31
+ goto do_unallocated;
32
}
33
if (!fp_access_check(s)) {
34
return;
35
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
36
37
case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
38
if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
39
- unallocated_encoding(s);
40
- return;
41
+ goto do_unallocated;
42
}
43
/* fall through */
44
case 0x0 ... 0x3:
45
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
46
break;
47
case 3:
48
if (!dc_isar_feature(aa64_fp16, s)) {
49
- unallocated_encoding(s);
50
- return;
51
+ goto do_unallocated;
52
}
53
54
if (!fp_access_check(s)) {
55
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
56
handle_fp_1src_half(s, opcode, rd, rn);
57
break;
58
default:
59
- unallocated_encoding(s);
60
+ goto do_unallocated;
61
}
62
break;
63
64
default:
65
+ do_unallocated:
66
unallocated_encoding(s);
67
break;
68
}
69
--
70
2.20.1
71
72
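The new-series patch below adds BFCVT and VCVT{B,T}.BF16.F32, which narrow binary32 to bfloat16 through the softfloat helper. As a format refresher only: bfloat16 keeps the top sixteen bits of a binary32 value (1 sign, 8 exponent, 7 fraction bits), and a common shorthand for the narrowing is round-to-nearest-even on those bits. This sketch ignores the NaN and overflow handling a real conversion needs:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  static uint16_t f32_to_bf16(float f)
  {
      uint32_t bits;
      memcpy(&bits, &f, sizeof(bits));
      uint32_t lsb = (bits >> 16) & 1;   /* round to nearest, ties to even */
      bits += 0x7fff + lsb;
      return (uint16_t)(bits >> 16);
  }

  int main(void)
  {
      printf("0x%04x\n", f32_to_bf16(1.0f));   /* 0x3f80 */
      printf("0x%04x\n", f32_to_bf16(-2.5f));  /* 0xc020 */
      return 0;
  }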
Old patch

Convert the VSHLL and VMOVL insns from the 2-reg-shift group
to decodetree. Since the loop always has two passes, we unroll
it to avoid the awkward reassignment of one TCGv to another.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-8-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 16 +++++++
target/arm/translate-neon.inc.c | 81 +++++++++++++++++++++++++++++++++
target/arm/translate.c | 46 +------------------
3 files changed, 99 insertions(+), 44 deletions(-)

New patch

From: Richard Henderson <richard.henderson@linaro.org>

This is the 64-bit BFCVT and the 32-bit VCVT{B,T}.BF16.F32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 1 +
target/arm/vfp.decode | 2 ++
target/arm/translate-a64.c | 19 +++++++++++++++++++
target/arm/translate-vfp.c | 24 ++++++++++++++++++++++++
target/arm/vfp_helper.c | 5 +++++
5 files changed, 51 insertions(+)
13
16
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
19
--- a/target/arm/helper.h
17
+++ b/target/arm/neon-dp.decode
20
+++ b/target/arm/helper.h
18
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
22
20
shift=%neon_rshift_i3
23
DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
21
24
DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
22
+# Long left shifts: again Q is part of opcode decode
25
+DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
23
+@2reg_shll_s .... ... . . . 1 shift:5 .... .... 0 . . . .... \
26
24
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0
27
DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
25
+@2reg_shll_h .... ... . . . 01 shift:4 .... .... 0 . . . .... \
28
DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
26
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0
29
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
27
+@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
30
index XXXXXXX..XXXXXXX 100644
28
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
31
--- a/target/arm/vfp.decode
32
+++ b/target/arm/vfp.decode
33
@@ -XXX,XX +XXX,XX @@ VCVT_f64_f16 ---- 1110 1.11 0010 .... 1011 t:1 1.0 .... \
34
35
# VCVTB and VCVTT to f16: Vd format is always vd_sp;
36
# Vm format depends on size bit
37
+VCVT_b16_f32 ---- 1110 1.11 0011 .... 1001 t:1 1.0 .... \
38
+ vd=%vd_sp vm=%vm_sp
39
VCVT_f16_f32 ---- 1110 1.11 0011 .... 1010 t:1 1.0 .... \
40
vd=%vd_sp vm=%vm_sp
41
VCVT_f16_f64 ---- 1110 1.11 0011 .... 1011 t:1 1.0 .... \
42
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/translate-a64.c
45
+++ b/target/arm/translate-a64.c
46
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
47
case 0x3: /* FSQRT */
48
gen_helper_vfp_sqrts(tcg_res, tcg_op, cpu_env);
49
goto done;
50
+ case 0x6: /* BFCVT */
51
+ gen_fpst = gen_helper_bfcvt;
52
+ break;
53
case 0x8: /* FRINTN */
54
case 0x9: /* FRINTP */
55
case 0xa: /* FRINTM */
56
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
57
}
58
break;
59
60
+ case 0x6:
61
+ switch (type) {
62
+ case 1: /* BFCVT */
63
+ if (!dc_isar_feature(aa64_bf16, s)) {
64
+ goto do_unallocated;
65
+ }
66
+ if (!fp_access_check(s)) {
67
+ return;
68
+ }
69
+ handle_fp_1src_single(s, opcode, rd, rn);
70
+ break;
71
+ default:
72
+ goto do_unallocated;
73
+ }
74
+ break;
29
+
75
+
30
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
76
default:
31
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
77
do_unallocated:
32
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
78
unallocated_encoding(s);
33
@@ -XXX,XX +XXX,XX @@ VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
79
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
34
VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
80
index XXXXXXX..XXXXXXX 100644
35
VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
81
--- a/target/arm/translate-vfp.c
36
VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
82
+++ b/target/arm/translate-vfp.c
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
84
return true;
85
}
86
87
+static bool trans_VCVT_b16_f32(DisasContext *s, arg_VCVT_b16_f32 *a)
88
+{
89
+ TCGv_ptr fpst;
90
+ TCGv_i32 tmp;
37
+
91
+
38
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
92
+ if (!dc_isar_feature(aa32_bf16, s)) {
39
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
40
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
41
+
42
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
43
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
44
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
45
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate-neon.inc.c
48
+++ b/target/arm/translate-neon.inc.c
49
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
50
DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
51
DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
52
DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
53
+
54
+static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
55
+ NeonGenWidenFn *widenfn, bool u)
56
+{
57
+ TCGv_i64 tmp;
58
+ TCGv_i32 rm0, rm1;
59
+ uint64_t widen_mask = 0;
60
+
61
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
62
+ return false;
63
+ }
64
+
65
+ /* UNDEF accesses to D16-D31 if they don't exist. */
66
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
67
+ ((a->vd | a->vm) & 0x10)) {
68
+ return false;
69
+ }
70
+
71
+ if (a->vd & 1) {
72
+ return false;
93
+ return false;
73
+ }
94
+ }
74
+
95
+
75
+ if (!vfp_access_check(s)) {
96
+ if (!vfp_access_check(s)) {
76
+ return true;
97
+ return true;
77
+ }
98
+ }
78
+
99
+
79
+ /*
100
+ fpst = fpstatus_ptr(FPST_FPCR);
80
+ * This is a widen-and-shift operation. The shift is always less
101
+ tmp = tcg_temp_new_i32();
81
+ * than the width of the source type, so after widening the input
82
+ * vector we can simply shift the whole 64-bit widened register,
83
+ * and then clear the potential overflow bits resulting from left
84
+ * bits of the narrow input appearing as right bits of the left
85
+ * neighbour narrow input. Calculate a mask of bits to clear.
86
+ */
87
+ if ((a->shift != 0) && (a->size < 2 || u)) {
88
+ int esize = 8 << a->size;
89
+ widen_mask = MAKE_64BIT_MASK(0, esize);
90
+ widen_mask >>= esize - a->shift;
91
+ widen_mask = dup_const(a->size + 1, widen_mask);
92
+ }
93
+
102
+
94
+ rm0 = neon_load_reg(a->vm, 0);
103
+ vfp_load_reg32(tmp, a->vm);
95
+ rm1 = neon_load_reg(a->vm, 1);
104
+ gen_helper_bfcvt(tmp, tmp, fpst);
96
+ tmp = tcg_temp_new_i64();
105
+ tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
97
+
106
+ tcg_temp_free_ptr(fpst);
98
+ widenfn(tmp, rm0);
107
+ tcg_temp_free_i32(tmp);
99
+ if (a->shift != 0) {
100
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
101
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
102
+ }
103
+ neon_store_reg64(tmp, a->vd);
104
+
105
+ widenfn(tmp, rm1);
106
+ if (a->shift != 0) {
107
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
108
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
109
+ }
110
+ neon_store_reg64(tmp, a->vd + 1);
111
+ tcg_temp_free_i64(tmp);
112
+ return true;
108
+ return true;
113
+}
109
+}
114
+
110
+
115
+static bool trans_VSHLL_S_2sh(DisasContext *s, arg_2reg_shift *a)
111
static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
112
{
113
TCGv_ptr fpst;
114
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/vfp_helper.c
117
+++ b/target/arm/vfp_helper.c
118
@@ -XXX,XX +XXX,XX @@ float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
119
return float64_to_float32(x, &env->vfp.fp_status);
120
}
121
122
+uint32_t HELPER(bfcvt)(float32 x, void *status)
116
+{
123
+{
117
+ NeonGenWidenFn *widenfn[] = {
124
+ return float32_to_bfloat16(x, status);
118
+ gen_helper_neon_widen_s8,
119
+ gen_helper_neon_widen_s16,
120
+ tcg_gen_ext_i32_i64,
121
+ };
122
+ return do_vshll_2sh(s, a, widenfn[a->size], false);
123
+}
125
+}
124
+
126
+
125
+static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
127
/*
126
+{
128
* VFP3 fixed point conversion. The AArch32 versions of fix-to-float
127
+ NeonGenWidenFn *widenfn[] = {
129
* must always round-to-nearest; the AArch64 ones honour the FPSCR
128
+ gen_helper_neon_widen_u8,
129
+ gen_helper_neon_widen_u16,
130
+ tcg_gen_extu_i32_i64,
131
+ };
132
+ return do_vshll_2sh(s, a, widenfn[a->size], true);
133
+}
134
diff --git a/target/arm/translate.c b/target/arm/translate.c
135
index XXXXXXX..XXXXXXX 100644
136
--- a/target/arm/translate.c
137
+++ b/target/arm/translate.c
138
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
139
case 7: /* VQSHL */
140
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
141
case 9: /* VQSHRN, VQRSHRN */
142
+ case 10: /* VSHLL, including VMOVL */
143
return 1; /* handled by decodetree */
144
default:
145
break;
146
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
147
size--;
148
}
149
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
150
- if (op == 10) {
151
- /* VSHLL, VMOVL */
152
- if (q || (rd & 1)) {
153
- return 1;
154
- }
155
- tmp = neon_load_reg(rm, 0);
156
- tmp2 = neon_load_reg(rm, 1);
157
- for (pass = 0; pass < 2; pass++) {
158
- if (pass == 1)
159
- tmp = tmp2;
160
-
161
- gen_neon_widen(cpu_V0, tmp, size, u);
162
-
163
- if (shift != 0) {
164
- /* The shift is less than the width of the source
165
- type, so we can just shift the whole register. */
166
- tcg_gen_shli_i64(cpu_V0, cpu_V0, shift);
167
- /* Widen the result of shift: we need to clear
168
- * the potential overflow bits resulting from
169
- * left bits of the narrow input appearing as
170
- * right bits of left the neighbour narrow
171
- * input. */
172
- if (size < 2 || !u) {
173
- uint64_t imm64;
174
- if (size == 0) {
175
- imm = (0xffu >> (8 - shift));
176
- imm |= imm << 16;
177
- } else if (size == 1) {
178
- imm = 0xffff >> (16 - shift);
179
- } else {
180
- /* size == 2 */
181
- imm = 0xffffffff >> (32 - shift);
182
- }
183
- if (size < 2) {
184
- imm64 = imm | (((uint64_t)imm) << 32);
185
- } else {
186
- imm64 = imm;
187
- }
188
- tcg_gen_andi_i64(cpu_V0, cpu_V0, ~imm64);
189
- }
190
- }
191
- neon_store_reg64(cpu_V0, rd + pass);
192
- }
193
- } else if (op >= 14) {
194
+ if (op >= 14) {
195
/* VCVT fixed-point. */
196
TCGv_ptr fpst;
197
TCGv_i32 shiftv;
198
--
130
--
199
2.20.1
131
2.20.1
200
132
201
133
1
Convert the VSHL and VSLI insns from the Neon 2-registers-and-a-shift
1
From: Richard Henderson <richard.henderson@linaro.org>
2
group to decodetree.
2
3
3
This is BFCVT{N,T} for both AArch64 AdvSIMD and SVE,
4
and VCVT.BF16.F32 for AArch32 NEON.
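For readers unfamiliar with the operation, here is a rough standalone C sketch of what the new bfcvt/bfcvt_pair helpers compute. It is illustrative only, not QEMU's softfloat path: it hard-codes round-to-nearest-even (the real helper honours the FP status), uses a simplified NaN rule, and the function names are made up for this note.

#include <stdint.h>
#include <string.h>

/* Convert one float32 to bfloat16 with round-to-nearest-even. */
static uint16_t f32_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));
    if ((bits & 0x7fffffff) > 0x7f800000) {
        return (uint16_t)((bits >> 16) | 0x0040);   /* quiet the NaN */
    }
    uint32_t bias = 0x7fff + ((bits >> 16) & 1);    /* ties to even */
    return (uint16_t)((bits + bias) >> 16);
}

/* Pack two conversions the way bfcvt_pair does: low element in bits [15:0]. */
static uint32_t bf16_pair(float lo, float hi)
{
    return f32_to_bf16(lo) | ((uint32_t)f32_to_bf16(hi) << 16);
}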
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-5-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200522145520.6778-2-peter.maydell@linaro.org
7
---
10
---
8
target/arm/neon-dp.decode | 25 ++++++++++++++++++++++
11
target/arm/helper-sve.h | 4 ++++
9
target/arm/translate-neon.inc.c | 38 +++++++++++++++++++++++++++++++++
12
target/arm/helper.h | 1 +
10
target/arm/translate.c | 18 +++++++---------
13
target/arm/neon-dp.decode | 1 +
11
3 files changed, 71 insertions(+), 10 deletions(-)
14
target/arm/sve.decode | 2 ++
12
15
target/arm/sve_helper.c | 2 ++
16
target/arm/translate-a64.c | 17 ++++++++++++++
17
target/arm/translate-neon.c | 45 +++++++++++++++++++++++++++++++++++++
18
target/arm/translate-sve.c | 16 +++++++++++++
19
target/arm/vfp_helper.c | 7 ++++++
20
9 files changed, 95 insertions(+)
21
22
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper-sve.h
25
+++ b/target/arm/helper-sve.h
26
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
27
void, ptr, ptr, ptr, ptr, i32)
28
DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
29
void, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_5(sve_bfcvt, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
33
DEF_HELPER_FLAGS_5(sve_fcvtzs_hh, TCG_CALL_NO_RWG,
34
void, ptr, ptr, ptr, ptr, i32)
35
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
36
void, ptr, ptr, ptr, ptr, i32)
37
DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
38
void, ptr, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_5(sve_bfcvtnt, TCG_CALL_NO_RWG,
40
+ void, ptr, ptr, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
43
void, ptr, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/helper.h b/target/arm/helper.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper.h
47
+++ b/target/arm/helper.h
48
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
49
DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
50
DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
51
DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
52
+DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, ptr)
53
54
DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
55
DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
56
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
57
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
58
--- a/target/arm/neon-dp.decode
16
+++ b/target/arm/neon-dp.decode
59
+++ b/target/arm/neon-dp.decode
17
@@ -XXX,XX +XXX,XX @@ VRECPS_fp_3s 1111 001 0 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
60
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
18
VRSQRTS_fp_3s 1111 001 0 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
61
VRINTZ 1111 001 11 . 11 .. 10 .... 0 1011 . . 0 .... @2misc
19
VMAXNM_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
62
20
VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
63
VCVT_F16_F32 1111 001 11 . 11 .. 10 .... 0 1100 0 . 0 .... @2misc_q0
21
+
64
+ VCVT_B16_F32 1111 001 11 . 11 .. 10 .... 0 1100 1 . 0 .... @2misc_q0
22
+######################################################################
65
23
+# 2-reg-and-shift grouping:
66
VRINTM 1111 001 11 . 11 .. 10 .... 0 1101 . . 0 .... @2misc
24
+# 1111 001 U 1 D immH:3 immL:3 Vd:4 opc:4 L Q M 1 Vm:4
67
25
+######################################################################
68
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
26
+&2reg_shift vm vd q shift size
69
index XXXXXXX..XXXXXXX 100644
27
+
70
--- a/target/arm/sve.decode
28
+@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
71
+++ b/target/arm/sve.decode
29
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3
72
@@ -XXX,XX +XXX,XX @@ FNMLS_zpzzz 01100101 .. 1 ..... 111 ... ..... ..... @rdn_pg_rm_ra
30
+@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
73
# SVE floating-point convert precision
31
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2
74
FCVT_sh 01100101 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
32
+@2reg_shl_h .... ... . . . 01 shift:4 .... .... 0 q:1 . . .... \
75
FCVT_hs 01100101 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
33
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1
76
+BFCVT 01100101 10 0010 10 101 ... ..... ..... @rd_pg_rn_e0
34
+@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
77
FCVT_dh 01100101 11 0010 00 101 ... ..... ..... @rd_pg_rn_e0
35
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0
78
FCVT_hd 01100101 11 0010 01 101 ... ..... ..... @rd_pg_rn_e0
36
+
79
FCVT_ds 01100101 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
37
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
80
@@ -XXX,XX +XXX,XX @@ RAX1 01000101 00 1 ..... 11110 1 ..... ..... @rd_rn_rm_e0
38
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
81
FCVTXNT_ds 01100100 00 0010 10 101 ... ..... ..... @rd_pg_rn_e0
39
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
82
FCVTX_ds 01100101 00 0010 10 101 ... ..... ..... @rd_pg_rn_e0
40
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
83
FCVTNT_sh 01100100 10 0010 00 101 ... ..... ..... @rd_pg_rn_e0
41
+
84
+BFCVTNT 01100100 10 0010 10 101 ... ..... ..... @rd_pg_rn_e0
42
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
85
FCVTLT_hs 01100100 10 0010 01 101 ... ..... ..... @rd_pg_rn_e0
43
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
86
FCVTNT_ds 01100100 11 0010 10 101 ... ..... ..... @rd_pg_rn_e0
44
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
87
FCVTLT_sd 01100100 11 0010 11 101 ... ..... ..... @rd_pg_rn_e0
45
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
88
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
89
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/sve_helper.c
48
--- a/target/arm/translate-neon.inc.c
91
+++ b/target/arm/sve_helper.c
49
+++ b/target/arm/translate-neon.inc.c
92
@@ -XXX,XX +XXX,XX @@ static inline uint64_t vfp_float64_to_uint64_rtz(float64 f, float_status *s)
50
@@ -XXX,XX +XXX,XX @@ static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
93
51
DO_3S_FP_PAIR(VPADD, gen_helper_vfp_adds)
94
DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, sve_f32_to_f16)
52
DO_3S_FP_PAIR(VPMAX, gen_helper_vfp_maxs)
95
DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, sve_f16_to_f32)
53
DO_3S_FP_PAIR(VPMIN, gen_helper_vfp_mins)
96
+DO_ZPZ_FP(sve_bfcvt, uint32_t, H1_4, float32_to_bfloat16)
54
+
97
DO_ZPZ_FP(sve_fcvt_dh, uint64_t, , sve_f64_to_f16)
55
+static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
98
DO_ZPZ_FP(sve_fcvt_hd, uint64_t, , sve_f16_to_f64)
56
+{
99
DO_ZPZ_FP(sve_fcvt_ds, uint64_t, , float64_to_float32)
57
+ /* Handle a 2-reg-shift insn which can be vectorized. */
100
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc) \
58
+ int vec_size = a->q ? 16 : 8;
101
} while (i != 0); \
59
+ int rd_ofs = neon_reg_offset(a->vd, 0);
102
}
60
+ int rm_ofs = neon_reg_offset(a->vm, 0);
103
61
+
104
+DO_FCVTNT(sve_bfcvtnt, uint32_t, uint16_t, H1_4, H1_2, float32_to_bfloat16)
62
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
105
DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
106
DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, , H1_4, float64_to_float32)
107
108
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
109
index XXXXXXX..XXXXXXX 100644
110
--- a/target/arm/translate-a64.c
111
+++ b/target/arm/translate-a64.c
112
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
113
tcg_temp_free_i32(ahp);
114
}
115
break;
116
+ case 0x36: /* BFCVTN, BFCVTN2 */
117
+ {
118
+ TCGv_ptr fpst = fpstatus_ptr(FPST_FPCR);
119
+ gen_helper_bfcvt_pair(tcg_res[pass], tcg_op, fpst);
120
+ tcg_temp_free_ptr(fpst);
121
+ }
122
+ break;
123
case 0x56: /* FCVTXN, FCVTXN2 */
124
/* 64 bit to 32 bit float conversion
125
* with von Neumann rounding (round to odd)
126
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
127
}
128
handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
129
return;
130
+ case 0x36: /* BFCVTN, BFCVTN2 */
131
+ if (!dc_isar_feature(aa64_bf16, s) || size != 2) {
132
+ unallocated_encoding(s);
133
+ return;
134
+ }
135
+ if (!fp_access_check(s)) {
136
+ return;
137
+ }
138
+ handle_2misc_narrow(s, false, opcode, 0, is_q, size - 1, rn, rd);
139
+ return;
140
case 0x17: /* FCVTL, FCVTL2 */
141
if (!fp_access_check(s)) {
142
return;
143
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/target/arm/translate-neon.c
146
+++ b/target/arm/translate-neon.c
147
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
148
return true;
149
}
150
151
+static bool trans_VCVT_B16_F32(DisasContext *s, arg_2misc *a)
152
+{
153
+ TCGv_ptr fpst;
154
+ TCGv_i64 tmp;
155
+ TCGv_i32 dst0, dst1;
156
+
157
+ if (!dc_isar_feature(aa32_bf16, s)) {
63
+ return false;
158
+ return false;
64
+ }
159
+ }
65
+
160
+
66
+ /* UNDEF accesses to D16-D31 if they don't exist. */
161
+ /* UNDEF accesses to D16-D31 if they don't exist. */
67
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
162
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
68
+ ((a->vd | a->vm) & 0x10)) {
163
+ ((a->vd | a->vm) & 0x10)) {
69
+ return false;
164
+ return false;
70
+ }
165
+ }
71
+
166
+
72
+ if ((a->vm | a->vd) & a->q) {
167
+ if ((a->vm & 1) || (a->size != 1)) {
73
+ return false;
168
+ return false;
74
+ }
169
+ }
75
+
170
+
76
+ if (!vfp_access_check(s)) {
171
+ if (!vfp_access_check(s)) {
77
+ return true;
172
+ return true;
78
+ }
173
+ }
79
+
174
+
80
+ fn(a->size, rd_ofs, rm_ofs, a->shift, vec_size, vec_size);
175
+ fpst = fpstatus_ptr(FPST_STD);
176
+ tmp = tcg_temp_new_i64();
177
+ dst0 = tcg_temp_new_i32();
178
+ dst1 = tcg_temp_new_i32();
179
+
180
+ read_neon_element64(tmp, a->vm, 0, MO_64);
181
+ gen_helper_bfcvt_pair(dst0, tmp, fpst);
182
+
183
+ read_neon_element64(tmp, a->vm, 1, MO_64);
184
+ gen_helper_bfcvt_pair(dst1, tmp, fpst);
185
+
186
+ write_neon_element32(dst0, a->vd, 0, MO_32);
187
+ write_neon_element32(dst1, a->vd, 1, MO_32);
188
+
189
+ tcg_temp_free_i64(tmp);
190
+ tcg_temp_free_i32(dst0);
191
+ tcg_temp_free_i32(dst1);
192
+ tcg_temp_free_ptr(fpst);
81
+ return true;
193
+ return true;
82
+}
194
+}
83
+
195
+
84
+#define DO_2SH(INSN, FUNC) \
196
static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
85
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
197
{
86
+ { \
198
TCGv_ptr fpst;
87
+ return do_vector_2sh(s, a, FUNC); \
199
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
88
+ } \
200
index XXXXXXX..XXXXXXX 100644
89
+
201
--- a/target/arm/translate-sve.c
90
+DO_2SH(VSHL, tcg_gen_gvec_shli)
202
+++ b/target/arm/translate-sve.c
91
+DO_2SH(VSLI, gen_gvec_sli)
203
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVT_hs(DisasContext *s, arg_rpr_esz *a)
92
diff --git a/target/arm/translate.c b/target/arm/translate.c
204
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_hs);
93
index XXXXXXX..XXXXXXX 100644
205
}
94
--- a/target/arm/translate.c
206
95
+++ b/target/arm/translate.c
207
+static bool trans_BFCVT(DisasContext *s, arg_rpr_esz *a)
96
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
208
+{
97
if ((insn & 0x00380080) != 0) {
209
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
98
/* Two registers and shift. */
210
+ return false;
99
op = (insn >> 8) & 0xf;
211
+ }
100
+
212
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_bfcvt);
101
+ switch (op) {
213
+}
102
+ case 5: /* VSHL, VSLI */
214
+
103
+ return 1; /* handled by decodetree */
215
static bool trans_FCVT_dh(DisasContext *s, arg_rpr_esz *a)
104
+ default:
216
{
105
+ break;
217
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_fcvt_dh);
106
+ }
218
@@ -XXX,XX +XXX,XX @@ static bool trans_FCVTNT_sh(DisasContext *s, arg_rpr_esz *a)
107
+
219
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_sh);
108
if (insn & (1 << 7)) {
220
}
109
/* 64-bit shift. */
221
110
if (op > 7) {
222
+static bool trans_BFCVTNT(DisasContext *s, arg_rpr_esz *a)
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
223
+{
112
gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
224
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
113
vec_size, vec_size);
225
+ return false;
114
return 0;
226
+ }
115
-
227
+ return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve_bfcvtnt);
116
- case 5: /* VSHL, VSLI */
228
+}
117
- if (u) { /* VSLI */
229
+
118
- gen_gvec_sli(size, rd_ofs, rm_ofs, shift,
230
static bool trans_FCVTNT_ds(DisasContext *s, arg_rpr_esz *a)
119
- vec_size, vec_size);
231
{
120
- } else { /* VSHL */
232
if (!dc_isar_feature(aa64_sve2, s)) {
121
- tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
233
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
122
- vec_size, vec_size);
234
index XXXXXXX..XXXXXXX 100644
123
- }
235
--- a/target/arm/vfp_helper.c
124
- return 0;
236
+++ b/target/arm/vfp_helper.c
125
}
237
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(bfcvt)(float32 x, void *status)
126
238
return float32_to_bfloat16(x, status);
127
if (size == 3) {
239
}
240
241
+uint32_t HELPER(bfcvt_pair)(uint64_t pair, void *status)
242
+{
243
+ bfloat16 lo = float32_to_bfloat16(extract64(pair, 0, 32), status);
244
+ bfloat16 hi = float32_to_bfloat16(extract64(pair, 32, 32), status);
245
+ return deposit32(lo, 16, 16, hi);
246
+}
247
+
248
/*
249
* VFP3 fixed point conversion. The AArch32 versions of fix-to-float
250
* must always round-to-nearest; the AArch64 ones honour the FPSCR
128
--
251
--
129
2.20.1
252
2.20.1
130
253
131
254
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
For Arm BFDOT and BFMMLA, we need a version of round-to-odd
4
that overflows to infinity, instead of the max normal number.
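The rounding step itself is simple "jamming": any discarded non-zero bits force the result odd. A minimal sketch with hypothetical names is below; the new float_round_to_odd_inf mode rounds exactly the same way and differs from float_round_to_odd only in sending overflows to infinity instead of saturating at the largest normal number.

#include <stdint.h>

/* Keep the top bits of frac; if anything non-zero was discarded, force the
 * result's low bit to 1 (assumes 0 < discard_bits < 64). */
static uint64_t round_to_odd(uint64_t frac, unsigned discard_bits)
{
    uint64_t discarded = frac & ((UINT64_C(1) << discard_bits) - 1);
    uint64_t kept = frac >> discard_bits;
    return discarded ? (kept | 1) : kept;
}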
5
6
Cc: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210525225817.400336-6-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/fpu/softfloat-types.h | 4 +++-
13
fpu/softfloat-parts.c.inc | 6 ++++--
14
2 files changed, 7 insertions(+), 3 deletions(-)
15
16
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/fpu/softfloat-types.h
19
+++ b/include/fpu/softfloat-types.h
20
@@ -XXX,XX +XXX,XX @@ typedef enum __attribute__((__packed__)) {
21
float_round_up = 2,
22
float_round_to_zero = 3,
23
float_round_ties_away = 4,
24
- /* Not an IEEE rounding mode: round to the closest odd mantissa value */
25
+ /* Not an IEEE rounding mode: round to closest odd, overflow to max */
26
float_round_to_odd = 5,
27
+ /* Not an IEEE rounding mode: round to closest odd, overflow to inf */
28
+ float_round_to_odd_inf = 6,
29
} FloatRoundMode;
30
31
/*
32
diff --git a/fpu/softfloat-parts.c.inc b/fpu/softfloat-parts.c.inc
33
index XXXXXXX..XXXXXXX 100644
34
--- a/fpu/softfloat-parts.c.inc
35
+++ b/fpu/softfloat-parts.c.inc
36
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
37
g_assert_not_reached();
38
}
39
40
+ overflow_norm = false;
41
switch (s->float_rounding_mode) {
42
case float_round_nearest_even:
43
- overflow_norm = false;
44
inc = ((p->frac_lo & roundeven_mask) != frac_lsbm1 ? frac_lsbm1 : 0);
45
break;
46
case float_round_ties_away:
47
- overflow_norm = false;
48
inc = frac_lsbm1;
49
break;
50
case float_round_to_zero:
51
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
52
break;
53
case float_round_to_odd:
54
overflow_norm = true;
55
+ /* fall through */
56
+ case float_round_to_odd_inf:
57
inc = p->frac_lo & frac_lsb ? 0 : round_mask;
58
break;
59
default:
60
@@ -XXX,XX +XXX,XX @@ static void partsN(uncanon)(FloatPartsN *p, float_status *s,
61
? frac_lsbm1 : 0);
62
break;
63
case float_round_to_odd:
64
+ case float_round_to_odd_inf:
65
inc = p->frac_lo & frac_lsb ? 0 : round_mask;
66
break;
67
default:
68
--
69
2.20.1
70
71
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Do not yet convert the helpers to loop over opr_sz, but the
3
This is BFDOT for both AArch64 AdvSIMD and SVE,
4
descriptor allows the vector tail to be cleared, which fixes
4
and VDOT.BF16 for AArch32 NEON.
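Per 32-bit lane, BFDOT accumulates the products of two bfloat16 pairs, one pair from each source element. A standalone C model of one lane follows (hypothetical names; ordinary host float arithmetic stands in for the fixed flush-to-zero/round-to-odd float_status the real bfdotadd helper uses).

#include <stdint.h>
#include <string.h>

/* A bfloat16 is just the top half of a float32. */
float bf16_as_f32(uint16_t b)
{
    uint32_t bits = (uint32_t)b << 16;
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}

/* acc + n.lo * m.lo + n.hi * m.hi for one 32-bit element pair. */
float bfdot_lane(float acc, uint32_t n_pair, uint32_t m_pair)
{
    return acc
        + bf16_as_f32((uint16_t)n_pair) * bf16_as_f32((uint16_t)m_pair)
        + bf16_as_f32(n_pair >> 16) * bf16_as_f32(m_pair >> 16);
}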
5
an existing bug vs SVE.
6
5
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-4-richard.henderson@linaro.org
7
Message-id: 20210525225817.400336-7-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper.h | 15 +++++++-----
11
target/arm/helper.h | 3 +++
13
target/arm/crypto_helper.c | 37 +++++++++++++++++++++++-----
12
target/arm/neon-shared.decode | 2 ++
14
target/arm/translate-a64.c | 50 ++++++++++++--------------------------
13
target/arm/sve.decode | 3 +++
15
3 files changed, 55 insertions(+), 47 deletions(-)
14
target/arm/translate-a64.c | 20 ++++++++++++++++++
15
target/arm/translate-neon.c | 9 ++++++++
16
target/arm/translate-sve.c | 12 +++++++++++
17
target/arm/vec_helper.c | 40 +++++++++++++++++++++++++++++++++++
18
7 files changed, 89 insertions(+)
16
19
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
22
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
23
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_ummla_b, TCG_CALL_NO_RWG,
22
DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
25
DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
23
DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
26
void, ptr, ptr, ptr, ptr, i32)
24
27
25
-DEF_HELPER_FLAGS_3(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
28
+DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
26
-DEF_HELPER_FLAGS_3(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
29
+ void, ptr, ptr, ptr, ptr, i32)
27
-DEF_HELPER_FLAGS_2(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr)
30
+
28
-DEF_HELPER_FLAGS_3(crypto_sha512su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
31
#ifdef TARGET_AARCH64
29
+DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
#include "helper-a64.h"
30
+DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
#include "helper-sve.h"
31
+DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
32
+DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
36
-DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
37
-DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
38
+DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, i32)
42
43
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
46
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/crypto_helper.c
36
--- a/target/arm/neon-shared.decode
48
+++ b/target/arm/crypto_helper.c
37
+++ b/target/arm/neon-shared.decode
49
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
38
@@ -XXX,XX +XXX,XX @@ VUDOT 1111 110 00 . 10 .... .... 1101 . q:1 . 1 .... \
50
#define CR_ST_WORD(state, i) (state.words[i])
39
vm=%vm_dp vn=%vn_dp vd=%vd_dp
51
#endif
40
VUSDOT 1111 110 01 . 10 .... .... 1101 . q:1 . 0 .... \
52
41
vm=%vm_dp vn=%vn_dp vd=%vd_dp
53
+/*
42
+VDOT_b16 1111 110 00 . 00 .... .... 1101 . q:1 . 0 .... \
54
+ * The caller has not been converted to full gvec, and so only
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
55
+ * modifies the low 16 bytes of the vector register.
44
56
+ */
45
# VFM[AS]L
57
+static void clear_tail_16(void *vd, uint32_t desc)
46
VFML 1111 110 0 s:1 . 10 .... .... 1000 . 0 . 1 .... \
58
+{
47
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
59
+ int opr_sz = simd_oprsz(desc);
48
index XXXXXXX..XXXXXXX 100644
60
+ int max_sz = simd_maxsz(desc);
49
--- a/target/arm/sve.decode
50
+++ b/target/arm/sve.decode
51
@@ -XXX,XX +XXX,XX @@ FMLALT_zzzw 01100100 10 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
52
FMLSLB_zzzw 01100100 10 1 ..... 10 1 00 0 ..... ..... @rda_rn_rm_e0
53
FMLSLT_zzzw 01100100 10 1 ..... 10 1 00 1 ..... ..... @rda_rn_rm_e0
54
55
+### SVE2 floating-point bfloat16 dot-product
56
+BFDOT_zzzz 01100100 01 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
61
+
57
+
62
+ assert(opr_sz == 16);
58
### SVE2 floating-point multiply-add long (indexed)
63
+ clear_tail(vd, opr_sz, max_sz);
59
FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
64
+}
60
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
65
+
66
static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
67
uint64_t *rm, bool decrypt)
68
{
69
@@ -XXX,XX +XXX,XX @@ static uint64_t s1_512(uint64_t x)
70
return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
71
}
72
73
-void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
74
+void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm, uint32_t desc)
75
{
76
uint64_t *rd = vd;
77
uint64_t *rn = vn;
78
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
79
80
rd[0] = d0;
81
rd[1] = d1;
82
+
83
+ clear_tail_16(vd, desc);
84
}
85
86
-void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
87
+void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm, uint32_t desc)
88
{
89
uint64_t *rd = vd;
90
uint64_t *rn = vn;
91
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
92
93
rd[0] = d0;
94
rd[1] = d1;
95
+
96
+ clear_tail_16(vd, desc);
97
}
98
99
-void HELPER(crypto_sha512su0)(void *vd, void *vn)
100
+void HELPER(crypto_sha512su0)(void *vd, void *vn, uint32_t desc)
101
{
102
uint64_t *rd = vd;
103
uint64_t *rn = vn;
104
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su0)(void *vd, void *vn)
105
106
rd[0] = d0;
107
rd[1] = d1;
108
+
109
+ clear_tail_16(vd, desc);
110
}
111
112
-void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
113
+void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm, uint32_t desc)
114
{
115
uint64_t *rd = vd;
116
uint64_t *rn = vn;
117
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
118
119
rd[0] += s1_512(rn[0]) + rm[0];
120
rd[1] += s1_512(rn[1]) + rm[1];
121
+
122
+ clear_tail_16(vd, desc);
123
}
124
125
-void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
126
+void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm, uint32_t desc)
127
{
128
uint64_t *rd = vd;
129
uint64_t *rn = vn;
130
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
131
132
rd[0] = d.l[0];
133
rd[1] = d.l[1];
134
+
135
+ clear_tail_16(vd, desc);
136
}
137
138
-void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
139
+void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
140
{
141
uint64_t *rd = vd;
142
uint64_t *rn = vn;
143
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
144
145
rd[0] = d.l[0];
146
rd[1] = d.l[1];
147
+
148
+ clear_tail_16(vd, desc);
149
}
150
151
void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
152
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
61
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
153
index XXXXXXX..XXXXXXX 100644
62
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate-a64.c
63
--- a/target/arm/translate-a64.c
155
+++ b/target/arm/translate-a64.c
64
+++ b/target/arm/translate-a64.c
156
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
157
int rn = extract32(insn, 5, 5);
66
}
158
int rd = extract32(insn, 0, 5);
67
feature = dc_isar_feature(aa64_fcma, s);
159
bool feature;
160
- CryptoThreeOpFn *genfn = NULL;
161
gen_helper_gvec_3 *oolfn = NULL;
162
GVecGen3Fn *gvecfn = NULL;
163
164
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
165
switch (opcode) {
166
case 0: /* SHA512H */
167
feature = dc_isar_feature(aa64_sha512, s);
168
- genfn = gen_helper_crypto_sha512h;
169
+ oolfn = gen_helper_crypto_sha512h;
170
break;
171
case 1: /* SHA512H2 */
172
feature = dc_isar_feature(aa64_sha512, s);
173
- genfn = gen_helper_crypto_sha512h2;
174
+ oolfn = gen_helper_crypto_sha512h2;
175
break;
176
case 2: /* SHA512SU1 */
177
feature = dc_isar_feature(aa64_sha512, s);
178
- genfn = gen_helper_crypto_sha512su1;
179
+ oolfn = gen_helper_crypto_sha512su1;
180
break;
181
case 3: /* RAX1 */
182
feature = dc_isar_feature(aa64_sha3, s);
183
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
184
switch (opcode) {
185
case 0: /* SM3PARTW1 */
186
feature = dc_isar_feature(aa64_sm3, s);
187
- genfn = gen_helper_crypto_sm3partw1;
188
+ oolfn = gen_helper_crypto_sm3partw1;
189
break;
190
case 1: /* SM3PARTW2 */
191
feature = dc_isar_feature(aa64_sm3, s);
192
- genfn = gen_helper_crypto_sm3partw2;
193
+ oolfn = gen_helper_crypto_sm3partw2;
194
break;
195
case 2: /* SM4EKEY */
196
feature = dc_isar_feature(aa64_sm4, s);
197
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
198
199
if (oolfn) {
200
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
201
- } else if (gvecfn) {
202
- gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
203
} else {
204
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
205
-
206
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
207
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
208
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
209
-
210
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
211
-
212
- tcg_temp_free_ptr(tcg_rd_ptr);
213
- tcg_temp_free_ptr(tcg_rn_ptr);
214
- tcg_temp_free_ptr(tcg_rm_ptr);
215
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
216
}
217
}
218
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
220
int opcode = extract32(insn, 10, 2);
221
int rn = extract32(insn, 5, 5);
222
int rd = extract32(insn, 0, 5);
223
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
224
bool feature;
225
- CryptoTwoOpFn *genfn;
226
- gen_helper_gvec_3 *oolfn = NULL;
227
228
switch (opcode) {
229
case 0: /* SHA512SU0 */
230
feature = dc_isar_feature(aa64_sha512, s);
231
- genfn = gen_helper_crypto_sha512su0;
232
break;
68
break;
233
case 1: /* SM4E */
69
+ case 0x1f: /* BFDOT */
234
feature = dc_isar_feature(aa64_sm4, s);
70
+ switch (size) {
235
- oolfn = gen_helper_crypto_sm4e;
71
+ case 1:
236
break;
72
+ feature = dc_isar_feature(aa64_bf16, s);
73
+ break;
74
+ default:
75
+ unallocated_encoding(s);
76
+ return;
77
+ }
78
+ break;
237
default:
79
default:
238
unallocated_encoding(s);
80
unallocated_encoding(s);
239
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
240
return;
81
return;
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
83
}
84
return;
85
86
+ case 0xf: /* BFDOT */
87
+ switch (size) {
88
+ case 1:
89
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
90
+ break;
91
+ default:
92
+ g_assert_not_reached();
93
+ }
94
+ return;
95
+
96
default:
97
g_assert_not_reached();
241
}
98
}
242
99
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
243
- if (oolfn) {
100
index XXXXXXX..XXXXXXX 100644
244
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
101
--- a/target/arm/translate-neon.c
245
- return;
102
+++ b/target/arm/translate-neon.c
246
+ switch (opcode) {
103
@@ -XXX,XX +XXX,XX @@ static bool trans_VUSDOT(DisasContext *s, arg_VUSDOT *a)
247
+ case 0: /* SHA512SU0 */
104
gen_helper_gvec_usdot_b);
248
+ gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
249
+ break;
250
+ case 1: /* SM4E */
251
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
252
+ break;
253
+ default:
254
+ g_assert_not_reached();
255
}
256
-
257
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
258
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
259
-
260
- genfn(tcg_rd_ptr, tcg_rn_ptr);
261
-
262
- tcg_temp_free_ptr(tcg_rd_ptr);
263
- tcg_temp_free_ptr(tcg_rn_ptr);
264
}
105
}
265
106
266
/* Crypto four-register
107
+static bool trans_VDOT_b16(DisasContext *s, arg_VDOT_b16 *a)
108
+{
109
+ if (!dc_isar_feature(aa32_bf16, s)) {
110
+ return false;
111
+ }
112
+ return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
113
+ gen_helper_gvec_bfdot);
114
+}
115
+
116
static bool trans_VFML(DisasContext *s, arg_VFML *a)
117
{
118
int opr_sz;
119
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/translate-sve.c
122
+++ b/target/arm/translate-sve.c
123
@@ -XXX,XX +XXX,XX @@ static bool trans_UMMLA(DisasContext *s, arg_rrrr_esz *a)
124
{
125
return do_i8mm_zzzz_ool(s, a, gen_helper_gvec_ummla_b, 0);
126
}
127
+
128
+static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
129
+{
130
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
131
+ return false;
132
+ }
133
+ if (sve_access_check(s)) {
134
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot,
135
+ a->rd, a->rn, a->rm, a->ra, 0);
136
+ }
137
+ return true;
138
+}
139
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
140
index XXXXXXX..XXXXXXX 100644
141
--- a/target/arm/vec_helper.c
142
+++ b/target/arm/vec_helper.c
143
@@ -XXX,XX +XXX,XX @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
144
DO_MMLA_B(gvec_smmla_b, do_smmla_b)
145
DO_MMLA_B(gvec_ummla_b, do_ummla_b)
146
DO_MMLA_B(gvec_usmmla_b, do_usmmla_b)
147
+
148
+/*
149
+ * BFloat16 Dot Product
150
+ */
151
+
152
+static float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
153
+{
154
+ /* FPCR is ignored for BFDOT and BFMMLA. */
155
+ float_status bf_status = {
156
+ .tininess_before_rounding = float_tininess_before_rounding,
157
+ .float_rounding_mode = float_round_to_odd_inf,
158
+ .flush_to_zero = true,
159
+ .flush_inputs_to_zero = true,
160
+ .default_nan_mode = true,
161
+ };
162
+ float32 t1, t2;
163
+
164
+ /*
165
+ * Extract each BFloat16 from the element pair, and shift
166
+ * them such that they become float32.
167
+ */
168
+ t1 = float32_mul(e1 << 16, e2 << 16, &bf_status);
169
+ t2 = float32_mul(e1 & 0xffff0000u, e2 & 0xffff0000u, &bf_status);
170
+ t1 = float32_add(t1, t2, &bf_status);
171
+ t1 = float32_add(sum, t1, &bf_status);
172
+
173
+ return t1;
174
+}
175
+
176
+void HELPER(gvec_bfdot)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
177
+{
178
+ intptr_t i, opr_sz = simd_oprsz(desc);
179
+ float32 *d = vd, *a = va;
180
+ uint32_t *n = vn, *m = vm;
181
+
182
+ for (i = 0; i < opr_sz / 4; ++i) {
183
+ d[i] = bfdotadd(a[i], n[i], m[i]);
184
+ }
185
+ clear_tail(d, opr_sz, simd_maxsz(desc));
186
+}
267
--
187
--
268
2.20.1
188
2.20.1
269
189
270
190
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Rather than passing an opcode to a helper, fully decode the
3
This is BFDOT for both AArch64 AdvSIMD and SVE,
4
operation at translate time. Use clear_tail_16 to zap the
4
and VDOT.BF16 for AArch32 NEON.
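The by-element form differs from the vector form only in operand selection: within each 128-bit segment, one pair from the second source, chosen by the index, is reused for every lane. A sketch of that loop, reusing the hypothetical bfdot_lane() from the previous note:

#include <stdint.h>
#include <stddef.h>

float bfdot_lane(float acc, uint32_t n_pair, uint32_t m_pair); /* earlier sketch */

static void bfdot_indexed(float *d, const float *acc, const uint32_t *n,
                          const uint32_t *m, size_t elements, unsigned index)
{
    for (size_t i = 0; i < elements; i += 4) {      /* 4 lanes per 128 bits */
        uint32_t m_pair = m[i + index];             /* index is 0..3 */
        for (size_t j = i; j < i + 4; j++) {
            d[j] = bfdot_lane(acc[j], n[j], m_pair);
        }
    }
}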
5
balance of the SVE register with the AdvSIMD write.
6
5
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-6-richard.henderson@linaro.org
7
Message-id: 20210525225817.400336-8-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper.h | 5 +-
11
target/arm/helper.h | 2 ++
13
target/arm/neon-dp.decode | 6 +-
12
target/arm/neon-shared.decode | 2 ++
14
target/arm/crypto_helper.c | 99 +++++++++++++++++++++------------
13
target/arm/sve.decode | 3 +++
15
target/arm/translate-a64.c | 29 ++++------
14
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++--------
16
target/arm/translate-neon.inc.c | 46 ++++-----------
15
target/arm/translate-neon.c | 9 ++++++++
17
5 files changed, 93 insertions(+), 92 deletions(-)
16
target/arm/translate-sve.c | 12 ++++++++++
17
target/arm/vec_helper.c | 20 +++++++++++++++++
18
7 files changed, 80 insertions(+), 9 deletions(-)
18
19
19
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.h
22
--- a/target/arm/helper.h
22
+++ b/target/arm/helper.h
23
+++ b/target/arm/helper.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
24
DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
25
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
26
27
void, ptr, ptr, ptr, ptr, i32)
27
-DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
28
+DEF_HELPER_FLAGS_4(crypto_sha1su0, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+ void, ptr, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(crypto_sha1c, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
30
+DEF_HELPER_FLAGS_4(crypto_sha1p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
#ifdef TARGET_AARCH64
31
+DEF_HELPER_FLAGS_4(crypto_sha1m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
#include "helper-a64.h"
32
DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
33
DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
35
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
36
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/neon-dp.decode
35
--- a/target/arm/neon-shared.decode
38
+++ b/target/arm/neon-dp.decode
36
+++ b/target/arm/neon-shared.decode
39
@@ -XXX,XX +XXX,XX @@ VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
37
@@ -XXX,XX +XXX,XX @@ VUSDOT_scalar 1111 1110 1 . 00 .... .... 1101 . q:1 index:1 0 vm:4 \
40
@3same_crypto .... .... .... .... .... .... .... .... \
38
vn=%vn_dp vd=%vd_dp
41
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
39
VSUDOT_scalar 1111 1110 1 . 00 .... .... 1101 . q:1 index:1 1 vm:4 \
42
40
vn=%vn_dp vd=%vd_dp
43
-SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
41
+VDOT_b16_scal 1111 1110 0 . 00 .... .... 1101 . q:1 index:1 0 vm:4 \
44
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
42
+ vn=%vn_dp vd=%vd_dp
45
+SHA1C_3s 1111 001 0 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
43
46
+SHA1P_3s 1111 001 0 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
44
%vfml_scalar_q0_rm 0:3 5:1
47
+SHA1M_3s 1111 001 0 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
45
%vfml_scalar_q1_index 5:1 3:1
48
+SHA1SU0_3s 1111 001 0 0 . 11 .... .... 1100 . 1 . 0 .... @3same_crypto
46
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
49
SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
50
SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
51
SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
52
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
53
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/crypto_helper.c
48
--- a/target/arm/sve.decode
55
+++ b/target/arm/crypto_helper.c
49
+++ b/target/arm/sve.decode
56
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
50
@@ -XXX,XX +XXX,XX @@ FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
57
};
51
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
58
52
FMLSLB_zzxw 01100100 10 1 ..... 0110.0 ..... ..... @rrxr_3a esz=2
59
#ifdef HOST_WORDS_BIGENDIAN
53
FMLSLT_zzxw 01100100 10 1 ..... 0110.1 ..... ..... @rrxr_3a esz=2
60
-#define CR_ST_BYTE(state, i) (state.bytes[(15 - (i)) ^ 8])
61
-#define CR_ST_WORD(state, i) (state.words[(3 - (i)) ^ 2])
62
+#define CR_ST_BYTE(state, i) ((state).bytes[(15 - (i)) ^ 8])
63
+#define CR_ST_WORD(state, i) ((state).words[(3 - (i)) ^ 2])
64
#else
65
-#define CR_ST_BYTE(state, i) (state.bytes[i])
66
-#define CR_ST_WORD(state, i) (state.words[i])
67
+#define CR_ST_BYTE(state, i) ((state).bytes[i])
68
+#define CR_ST_WORD(state, i) ((state).words[i])
69
#endif
70
71
/*
72
@@ -XXX,XX +XXX,XX @@ static uint32_t maj(uint32_t x, uint32_t y, uint32_t z)
73
return (x & y) | ((x | y) & z);
74
}
75
76
-void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
77
+void HELPER(crypto_sha1su0)(void *vd, void *vn, void *vm, uint32_t desc)
78
+{
79
+ uint64_t *d = vd, *n = vn, *m = vm;
80
+ uint64_t d0, d1;
81
+
54
+
82
+ d0 = d[1] ^ d[0] ^ m[0];
55
+### SVE2 floating-point bfloat16 dot-product (indexed)
83
+ d1 = n[0] ^ d[1] ^ m[1];
56
+BFDOT_zzxz 01100100 01 1 ..... 010000 ..... ..... @rrxr_2 esz=2
84
+ d[0] = d0;
85
+ d[1] = d1;
86
+
87
+ clear_tail_16(vd, desc);
88
+}
89
+
90
+static inline void crypto_sha1_3reg(uint64_t *rd, uint64_t *rn,
91
+ uint64_t *rm, uint32_t desc,
92
+ uint32_t (*fn)(union CRYPTO_STATE *d))
93
{
94
- uint64_t *rd = vd;
95
- uint64_t *rn = vn;
96
- uint64_t *rm = vm;
97
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
98
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
99
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
100
+ int i;
101
102
- if (op == 3) { /* sha1su0 */
103
- d.l[0] ^= d.l[1] ^ m.l[0];
104
- d.l[1] ^= n.l[0] ^ m.l[1];
105
- } else {
106
- int i;
107
+ for (i = 0; i < 4; i++) {
108
+ uint32_t t = fn(&d);
109
110
- for (i = 0; i < 4; i++) {
111
- uint32_t t;
112
+ t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
113
+ + CR_ST_WORD(m, i);
114
115
- switch (op) {
116
- case 0: /* sha1c */
117
- t = cho(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
118
- break;
119
- case 1: /* sha1p */
120
- t = par(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
121
- break;
122
- case 2: /* sha1m */
123
- t = maj(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
124
- break;
125
- default:
126
- g_assert_not_reached();
127
- }
128
- t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
129
- + CR_ST_WORD(m, i);
130
-
131
- CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
132
- CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
133
- CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
134
- CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
135
- CR_ST_WORD(d, 0) = t;
136
- }
137
+ CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
138
+ CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
139
+ CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
140
+ CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
141
+ CR_ST_WORD(d, 0) = t;
142
}
143
rd[0] = d.l[0];
144
rd[1] = d.l[1];
145
+
146
+ clear_tail_16(rd, desc);
147
+}
148
+
149
+static uint32_t do_sha1c(union CRYPTO_STATE *d)
150
+{
151
+ return cho(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
152
+}
153
+
154
+void HELPER(crypto_sha1c)(void *vd, void *vn, void *vm, uint32_t desc)
155
+{
156
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1c);
157
+}
158
+
159
+static uint32_t do_sha1p(union CRYPTO_STATE *d)
160
+{
161
+ return par(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
162
+}
163
+
164
+void HELPER(crypto_sha1p)(void *vd, void *vn, void *vm, uint32_t desc)
165
+{
166
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1p);
167
+}
168
+
169
+static uint32_t do_sha1m(union CRYPTO_STATE *d)
170
+{
171
+ return maj(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
172
+}
173
+
174
+void HELPER(crypto_sha1m)(void *vd, void *vn, void *vm, uint32_t desc)
175
+{
176
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1m);
177
}
178
179
void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
180
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
57
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
181
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
182
--- a/target/arm/translate-a64.c
59
--- a/target/arm/translate-a64.c
183
+++ b/target/arm/translate-a64.c
60
+++ b/target/arm/translate-a64.c
184
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
61
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
185
62
return;
186
switch (opcode) {
63
}
187
case 0: /* SHA1C */
188
+ genfn = gen_helper_crypto_sha1c;
189
+ feature = dc_isar_feature(aa64_sha1, s);
190
+ break;
191
case 1: /* SHA1P */
192
+ genfn = gen_helper_crypto_sha1p;
193
+ feature = dc_isar_feature(aa64_sha1, s);
194
+ break;
195
case 2: /* SHA1M */
196
+ genfn = gen_helper_crypto_sha1m;
197
+ feature = dc_isar_feature(aa64_sha1, s);
198
+ break;
199
case 3: /* SHA1SU0 */
200
- genfn = NULL;
201
+ genfn = gen_helper_crypto_sha1su0;
202
feature = dc_isar_feature(aa64_sha1, s);
203
break;
64
break;
204
case 4: /* SHA256H */
65
- case 0x0f: /* SUDOT, USDOT */
205
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
66
- if (is_scalar || (size & 1) || !dc_isar_feature(aa64_i8mm, s)) {
206
if (!fp_access_check(s)) {
67
+ case 0x0f:
68
+ switch (size) {
69
+ case 0: /* SUDOT */
70
+ case 2: /* USDOT */
71
+ if (is_scalar || !dc_isar_feature(aa64_i8mm, s)) {
72
+ unallocated_encoding(s);
73
+ return;
74
+ }
75
+ break;
76
+ case 1: /* BFDOT */
77
+ if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
78
+ unallocated_encoding(s);
79
+ return;
80
+ }
81
+ break;
82
+ default:
83
unallocated_encoding(s);
84
return;
85
}
86
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
87
u ? gen_helper_gvec_udot_idx_b
88
: gen_helper_gvec_sdot_idx_b);
207
return;
89
return;
90
- case 0x0f: /* SUDOT, USDOT */
91
- gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
92
- extract32(insn, 23, 1)
93
- ? gen_helper_gvec_usdot_idx_b
94
- : gen_helper_gvec_sudot_idx_b);
95
- return;
96
-
97
+ case 0x0f:
98
+ switch (extract32(insn, 22, 2)) {
99
+ case 0: /* SUDOT */
100
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
101
+ gen_helper_gvec_sudot_idx_b);
102
+ return;
103
+ case 1: /* BFDOT */
104
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
105
+ gen_helper_gvec_bfdot_idx);
106
+ return;
107
+ case 2: /* USDOT */
108
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
109
+ gen_helper_gvec_usdot_idx_b);
110
+ return;
111
+ }
112
+ g_assert_not_reached();
113
case 0x11: /* FCMLA #0 */
114
case 0x13: /* FCMLA #90 */
115
case 0x15: /* FCMLA #180 */
116
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate-neon.c
119
+++ b/target/arm/translate-neon.c
120
@@ -XXX,XX +XXX,XX @@ static bool trans_VSUDOT_scalar(DisasContext *s, arg_VSUDOT_scalar *a)
121
gen_helper_gvec_sudot_idx_b);
122
}
123
124
+static bool trans_VDOT_b16_scal(DisasContext *s, arg_VDOT_b16_scal *a)
125
+{
126
+ if (!dc_isar_feature(aa32_bf16, s)) {
127
+ return false;
128
+ }
129
+ return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
130
+ gen_helper_gvec_bfdot_idx);
131
+}
132
+
133
static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
134
{
135
int opr_sz;
136
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
137
index XXXXXXX..XXXXXXX 100644
138
--- a/target/arm/translate-sve.c
139
+++ b/target/arm/translate-sve.c
140
@@ -XXX,XX +XXX,XX @@ static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
208
}
141
}
209
-
142
return true;
210
- if (genfn) {
211
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
212
- } else {
213
- TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
214
- TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
215
- TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
216
- TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
217
-
218
- gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
219
- tcg_rm_ptr, tcg_opcode);
220
-
221
- tcg_temp_free_i32(tcg_opcode);
222
- tcg_temp_free_ptr(tcg_rd_ptr);
223
- tcg_temp_free_ptr(tcg_rn_ptr);
224
- tcg_temp_free_ptr(tcg_rm_ptr);
225
- }
226
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
227
}
143
}
228
144
+
229
/* Crypto two-reg SHA
145
+static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
230
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
146
+{
147
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
148
+ return false;
149
+ }
150
+ if (sve_access_check(s)) {
151
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot_idx,
152
+ a->rd, a->rn, a->rm, a->ra, a->index);
153
+ }
154
+ return true;
155
+}
156
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
231
index XXXXXXX..XXXXXXX 100644
157
index XXXXXXX..XXXXXXX 100644
232
--- a/target/arm/translate-neon.inc.c
158
--- a/target/arm/vec_helper.c
233
+++ b/target/arm/translate-neon.inc.c
159
+++ b/target/arm/vec_helper.c
234
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
160
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfdot)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
235
DO_VQRDMLAH(VQRDMLAH, gen_gvec_sqrdmlah_qc)
236
DO_VQRDMLAH(VQRDMLSH, gen_gvec_sqrdmlsh_qc)
237
238
-static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
239
-{
240
- TCGv_ptr ptr1, ptr2, ptr3;
241
- TCGv_i32 tmp;
242
-
243
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
244
- !dc_isar_feature(aa32_sha1, s)) {
245
- return false;
246
+#define DO_SHA1(NAME, FUNC) \
247
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
248
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
249
+ { \
250
+ if (!dc_isar_feature(aa32_sha1, s)) { \
251
+ return false; \
252
+ } \
253
+ return do_3same(s, a, gen_##NAME##_3s); \
254
}
161
}
255
162
clear_tail(d, opr_sz, simd_maxsz(desc));
256
- /* UNDEF accesses to D16-D31 if they don't exist. */
163
}
257
- if (!dc_isar_feature(aa32_simd_r32, s) &&
164
+
258
- ((a->vd | a->vn | a->vm) & 0x10)) {
165
+void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
259
- return false;
166
+ void *va, uint32_t desc)
260
- }
167
+{
261
-
168
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
262
- if ((a->vn | a->vm | a->vd) & 1) {
169
+ intptr_t index = simd_data(desc);
263
- return false;
170
+ intptr_t elements = opr_sz / 4;
264
- }
171
+ intptr_t eltspersegment = MIN(16 / 4, elements);
265
-
172
+ float32 *d = vd, *a = va;
266
- if (!vfp_access_check(s)) {
173
+ uint32_t *n = vn, *m = vm;
267
- return true;
174
+
268
- }
175
+ for (i = 0; i < elements; i += eltspersegment) {
269
-
176
+ uint32_t m_idx = m[i + H4(index)];
270
- ptr1 = vfp_reg_ptr(true, a->vd);
177
+
271
- ptr2 = vfp_reg_ptr(true, a->vn);
178
+ for (j = i; j < i + eltspersegment; j++) {
272
- ptr3 = vfp_reg_ptr(true, a->vm);
179
+ d[j] = bfdotadd(a[j], n[j], m_idx);
273
- tmp = tcg_const_i32(a->optype);
180
+ }
274
- gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp);
181
+ }
275
- tcg_temp_free_i32(tmp);
182
+ clear_tail(d, opr_sz, simd_maxsz(desc));
276
- tcg_temp_free_ptr(ptr1);
183
+}
277
- tcg_temp_free_ptr(ptr2);
278
- tcg_temp_free_ptr(ptr3);
279
-
280
- return true;
281
-}
282
+DO_SHA1(SHA1C, gen_helper_crypto_sha1c)
283
+DO_SHA1(SHA1P, gen_helper_crypto_sha1p)
284
+DO_SHA1(SHA1M, gen_helper_crypto_sha1m)
285
+DO_SHA1(SHA1SU0, gen_helper_crypto_sha1su0)
286
287
#define DO_SHA2(NAME, FUNC) \
288
WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
289
--
184
--
290
2.20.1
185
2.20.1
291
186
292
187
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Do not yet convert the helpers to loop over opr_sz, but the
3
This is BFMMLA for both AArch64 AdvSIMD and SVE,
4
descriptor allows the vector tail to be cleared, which fixes
4
and VMMLA.BF16 for AArch32 NEON.
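Each 128-bit segment here behaves as a small matrix multiply: the destination holds a 2x2 float32 matrix, each source segment a 2x4 bfloat16 matrix, and the result accumulates n * transpose(m). A sketch of one segment, again in terms of the hypothetical bfdot_lane() from the BFDOT note (a reference model of the architectural operation, not QEMU's helper):

#include <stdint.h>

float bfdot_lane(float acc, uint32_t n_pair, uint32_t m_pair); /* earlier sketch */

/* d and a are 2x2 row-major float32; n and m are 2x4 row-major bfloat16
 * matrices, stored as two 32-bit pairs per row. */
static void bfmmla_segment(float d[4], const float a[4],
                           const uint32_t n[4], const uint32_t m[4])
{
    for (int r = 0; r < 2; r++) {
        for (int c = 0; c < 2; c++) {
            float t = bfdot_lane(a[2 * r + c], n[2 * r], m[2 * c]);
            d[2 * r + c] = bfdot_lane(t, n[2 * r + 1], m[2 * c + 1]);
        }
    }
}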
5
an existing bug vs SVE.
6
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-5-richard.henderson@linaro.org
8
Message-id: 20210525225817.400336-9-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper.h | 12 ++--
11
target/arm/helper.h | 3 +++
13
target/arm/neon-dp.decode | 12 ++--
12
target/arm/neon-shared.decode | 2 ++
14
target/arm/crypto_helper.c | 24 +++++--
13
target/arm/sve.decode | 6 +++--
15
target/arm/translate-a64.c | 34 ++++-----
14
target/arm/translate-a64.c | 10 +++++++++
16
target/arm/translate-neon.inc.c | 124 +++++---------------------------
15
target/arm/translate-neon.c | 9 ++++++++
17
target/arm/translate.c | 24 ++-----
16
target/arm/translate-sve.c | 12 ++++++++++
18
6 files changed, 67 insertions(+), 163 deletions(-)
17
target/arm/vec_helper.c | 42 ++++++++++++++++++++++++++++++++++-
18
7 files changed, 81 insertions(+), 3 deletions(-)
19
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
25
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
26
26
void, ptr, ptr, ptr, ptr, i32)
27
DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
28
-DEF_HELPER_FLAGS_2(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr)
28
+DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
29
-DEF_HELPER_FLAGS_2(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr)
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
#ifdef TARGET_AARCH64
32
32
#include "helper-a64.h"
33
-DEF_HELPER_FLAGS_3(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
33
#include "helper-sve.h"
34
-DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
35
-DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
36
-DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
37
+DEF_HELPER_FLAGS_4(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
45
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/neon-dp.decode
36
--- a/target/arm/neon-shared.decode
47
+++ b/target/arm/neon-dp.decode
37
+++ b/target/arm/neon-shared.decode
48
@@ -XXX,XX +XXX,XX @@ VPADD_3s 1111 001 0 0 . .. .... .... 1011 . . . 1 .... @3same_q0
38
@@ -XXX,XX +XXX,XX @@ VUMMLA 1111 1100 0.10 .... .... 1100 .1.1 .... \
49
39
vm=%vm_dp vn=%vn_dp vd=%vd_dp
50
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
40
VUSMMLA 1111 1100 1.10 .... .... 1100 .1.0 .... \
51
41
vm=%vm_dp vn=%vn_dp vd=%vd_dp
52
+@3same_crypto .... .... .... .... .... .... .... .... \
42
+VMMLA_b16 1111 1100 0.00 .... .... 1100 .1.0 .... \
53
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
54
+
44
55
SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
45
VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
56
vm=%vm_dp vn=%vn_dp vd=%vd_dp
46
vn=%vn_dp vd=%vd_dp size=1
57
-SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... \
47
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
58
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
59
-SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... \
60
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
61
-SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
62
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
63
+SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
64
+SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
65
+SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
66
67
VFMA_fp_3s 1111 001 0 0 . 0 . .... .... 1100 ... 1 .... @3same_fp
68
VFMS_fp_3s 1111 001 0 0 . 1 . .... .... 1100 ... 1 .... @3same_fp
69
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
70
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/crypto_helper.c
49
--- a/target/arm/sve.decode
72
+++ b/target/arm/crypto_helper.c
50
+++ b/target/arm/sve.decode
73
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
51
@@ -XXX,XX +XXX,XX @@ SQRDCMLAH_zzzz 01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5 ra=%reg_movprfx
74
rd[1] = d.l[1];
52
USDOT_zzzz 01000100 .. 0 ..... 011 110 ..... ..... @rda_rn_rm
75
}
53
76
54
### SVE2 floating point matrix multiply accumulate
77
-void HELPER(crypto_sha1h)(void *vd, void *vm)
55
-
78
+void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
56
-FMMLA 01100100 .. 1 ..... 111001 ..... ..... @rda_rn_rm
79
{
57
+{
80
uint64_t *rd = vd;
58
+ BFMMLA 01100100 01 1 ..... 111 001 ..... ..... @rda_rn_rm_e0
81
uint64_t *rm = vm;
59
+ FMMLA 01100100 .. 1 ..... 111 001 ..... ..... @rda_rn_rm
82
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1h)(void *vd, void *vm)
60
+}
83
61
84
rd[0] = m.l[0];
62
### SVE2 Memory Gather Load Group
85
rd[1] = m.l[1];
63
86
+
87
+ clear_tail_16(vd, desc);
88
}
89
90
-void HELPER(crypto_sha1su1)(void *vd, void *vm)
91
+void HELPER(crypto_sha1su1)(void *vd, void *vm, uint32_t desc)
92
{
93
uint64_t *rd = vd;
94
uint64_t *rm = vm;
95
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1su1)(void *vd, void *vm)
96
97
rd[0] = d.l[0];
98
rd[1] = d.l[1];
99
+
100
+ clear_tail_16(vd, desc);
101
}
102
103
/*
104
@@ -XXX,XX +XXX,XX @@ static uint32_t s1(uint32_t x)
105
return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
106
}
107
108
-void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
109
+void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm, uint32_t desc)
110
{
111
uint64_t *rd = vd;
112
uint64_t *rn = vn;
113
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
114
115
rd[0] = d.l[0];
116
rd[1] = d.l[1];
117
+
118
+ clear_tail_16(vd, desc);
119
}
120
121
-void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
122
+void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm, uint32_t desc)
123
{
124
uint64_t *rd = vd;
125
uint64_t *rn = vn;
126
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
127
128
rd[0] = d.l[0];
129
rd[1] = d.l[1];
130
+
131
+ clear_tail_16(vd, desc);
132
}
133
134
-void HELPER(crypto_sha256su0)(void *vd, void *vm)
135
+void HELPER(crypto_sha256su0)(void *vd, void *vm, uint32_t desc)
136
{
137
uint64_t *rd = vd;
138
uint64_t *rm = vm;
139
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su0)(void *vd, void *vm)
140
141
rd[0] = d.l[0];
142
rd[1] = d.l[1];
143
+
144
+ clear_tail_16(vd, desc);
145
}
146
147
-void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
148
+void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm, uint32_t desc)
149
{
150
uint64_t *rd = vd;
151
uint64_t *rn = vn;
152
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
153
154
rd[0] = d.l[0];
155
rd[1] = d.l[1];
156
+
157
+ clear_tail_16(vd, desc);
158
}
159
160
/*
161
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
64
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
162
index XXXXXXX..XXXXXXX 100644
65
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-a64.c
66
--- a/target/arm/translate-a64.c
164
+++ b/target/arm/translate-a64.c
67
+++ b/target/arm/translate-a64.c
165
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
68
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
166
int rm = extract32(insn, 16, 5);
69
}
167
int rn = extract32(insn, 5, 5);
70
feature = dc_isar_feature(aa64_fcma, s);
168
int rd = extract32(insn, 0, 5);
71
break;
169
- CryptoThreeOpFn *genfn;
72
+ case 0x1d: /* BFMMLA */
170
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
73
+ if (size != MO_16 || !is_q) {
171
+ gen_helper_gvec_3 *genfn;
74
+ unallocated_encoding(s);
172
bool feature;
75
+ return;
173
76
+ }
174
if (size != 0) {
77
+ feature = dc_isar_feature(aa64_bf16, s);
175
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
78
+ break;
79
case 0x1f: /* BFDOT */
80
switch (size) {
81
case 1:
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
83
}
176
return;
84
return;
85
86
+ case 0xd: /* BFMMLA */
87
+ gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
88
+ return;
89
case 0xf: /* BFDOT */
90
switch (size) {
91
case 1:
92
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate-neon.c
95
+++ b/target/arm/translate-neon.c
96
@@ -XXX,XX +XXX,XX @@ static bool trans_VUSMMLA(DisasContext *s, arg_VUSMMLA *a)
97
return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
98
gen_helper_gvec_usmmla_b);
99
}
100
+
101
+static bool trans_VMMLA_b16(DisasContext *s, arg_VMMLA_b16 *a)
102
+{
103
+ if (!dc_isar_feature(aa32_bf16, s)) {
104
+ return false;
105
+ }
106
+ return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
107
+ gen_helper_gvec_bfmmla);
108
+}
109
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/target/arm/translate-sve.c
112
+++ b/target/arm/translate-sve.c
113
@@ -XXX,XX +XXX,XX @@ static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
177
}
114
}
178
179
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
180
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
181
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
182
-
183
if (genfn) {
184
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
185
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
186
} else {
187
TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
188
+ TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
189
+ TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
190
+ TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
191
192
gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
193
tcg_rm_ptr, tcg_opcode);
194
- tcg_temp_free_i32(tcg_opcode);
195
- }
196
197
- tcg_temp_free_ptr(tcg_rd_ptr);
198
- tcg_temp_free_ptr(tcg_rn_ptr);
199
- tcg_temp_free_ptr(tcg_rm_ptr);
200
+ tcg_temp_free_i32(tcg_opcode);
201
+ tcg_temp_free_ptr(tcg_rd_ptr);
202
+ tcg_temp_free_ptr(tcg_rn_ptr);
203
+ tcg_temp_free_ptr(tcg_rm_ptr);
204
+ }
205
}
206
207
/* Crypto two-reg SHA
208
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
209
int opcode = extract32(insn, 12, 5);
210
int rn = extract32(insn, 5, 5);
211
int rd = extract32(insn, 0, 5);
212
- CryptoTwoOpFn *genfn;
213
+ gen_helper_gvec_2 *genfn;
214
bool feature;
215
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
216
217
if (size != 0) {
218
unallocated_encoding(s);
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
220
if (!fp_access_check(s)) {
221
return;
222
}
223
-
224
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
225
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
226
-
227
- genfn(tcg_rd_ptr, tcg_rn_ptr);
228
-
229
- tcg_temp_free_ptr(tcg_rd_ptr);
230
- tcg_temp_free_ptr(tcg_rn_ptr);
231
+ gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
232
}
233
234
static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
235
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
236
index XXXXXXX..XXXXXXX 100644
237
--- a/target/arm/translate-neon.inc.c
238
+++ b/target/arm/translate-neon.inc.c
239
@@ -XXX,XX +XXX,XX @@ DO_3SAME_CMP(VCGE_S, TCG_COND_GE)
240
DO_3SAME_CMP(VCGE_U, TCG_COND_GEU)
241
DO_3SAME_CMP(VCEQ, TCG_COND_EQ)
242
243
-static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
244
- uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
245
-{
246
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz,
247
- 0, gen_helper_gvec_pmul_b);
248
-}
249
+#define WRAP_OOL_FN(WRAPNAME, FUNC) \
250
+ static void WRAPNAME(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs, \
251
+ uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz) \
252
+ { \
253
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, 0, FUNC); \
254
+ }
255
+
256
+WRAP_OOL_FN(gen_VMUL_p_3s, gen_helper_gvec_pmul_b)
257
258
static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
259
{
260
@@ -XXX,XX +XXX,XX @@ static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
261
return true;
115
return true;
262
}
116
}
263
117
+
264
-static bool trans_SHA256H_3s(DisasContext *s, arg_SHA256H_3s *a)
118
+static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
265
-{
119
+{
266
- TCGv_ptr ptr1, ptr2, ptr3;
120
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
267
-
121
+ return false;
268
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
122
+ }
269
- !dc_isar_feature(aa32_sha2, s)) {
123
+ if (sve_access_check(s)) {
270
- return false;
124
+ gen_gvec_ool_zzzz(s, gen_helper_gvec_bfmmla,
271
+#define DO_SHA2(NAME, FUNC) \
125
+ a->rd, a->rn, a->rm, a->ra, 0);
272
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
126
+ }
273
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
127
+ return true;
274
+ { \
128
+}
275
+ if (!dc_isar_feature(aa32_sha2, s)) { \
129
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
276
+ return false; \
130
index XXXXXXX..XXXXXXX 100644
277
+ } \
131
--- a/target/arm/vec_helper.c
278
+ return do_3same(s, a, gen_##NAME##_3s); \
132
+++ b/target/arm/vec_helper.c
133
@@ -XXX,XX +XXX,XX @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
134
* Process the entire segment at once, writing back the
135
* results only after we've consumed all of the inputs.
136
*
137
- * Key to indicies by column:
138
+ * Key to indices by column:
139
* i j i j
140
*/
141
sum0 = a[H4(0 + 0)];
142
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
279
}
143
}
280
144
clear_tail(d, opr_sz, simd_maxsz(desc));
281
- /* UNDEF accesses to D16-D31 if they don't exist. */
145
}
282
- if (!dc_isar_feature(aa32_simd_r32, s) &&
146
+
283
- ((a->vd | a->vn | a->vm) & 0x10)) {
147
+void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
284
- return false;
148
+{
285
- }
149
+ intptr_t s, opr_sz = simd_oprsz(desc);
286
-
150
+ float32 *d = vd, *a = va;
287
- if ((a->vn | a->vm | a->vd) & 1) {
151
+ uint32_t *n = vn, *m = vm;
288
- return false;
152
+
289
- }
153
+ for (s = 0; s < opr_sz / 4; s += 4) {
290
-
154
+ float32 sum00, sum01, sum10, sum11;
291
- if (!vfp_access_check(s)) {
155
+
292
- return true;
156
+ /*
293
- }
157
+ * Process the entire segment at once, writing back the
294
-
158
+ * results only after we've consumed all of the inputs.
295
- ptr1 = vfp_reg_ptr(true, a->vd);
159
+ *
296
- ptr2 = vfp_reg_ptr(true, a->vn);
160
+ * Key to indicies by column:
297
- ptr3 = vfp_reg_ptr(true, a->vm);
161
+ * i j i k j k
298
- gen_helper_crypto_sha256h(ptr1, ptr2, ptr3);
162
+ */
299
- tcg_temp_free_ptr(ptr1);
163
+ sum00 = a[s + H4(0 + 0)];
300
- tcg_temp_free_ptr(ptr2);
164
+ sum00 = bfdotadd(sum00, n[s + H4(0 + 0)], m[s + H4(0 + 0)]);
301
- tcg_temp_free_ptr(ptr3);
165
+ sum00 = bfdotadd(sum00, n[s + H4(0 + 1)], m[s + H4(0 + 1)]);
302
-
166
+
303
- return true;
167
+ sum01 = a[s + H4(0 + 1)];
304
-}
168
+ sum01 = bfdotadd(sum01, n[s + H4(0 + 0)], m[s + H4(2 + 0)]);
305
-
169
+ sum01 = bfdotadd(sum01, n[s + H4(0 + 1)], m[s + H4(2 + 1)]);
306
-static bool trans_SHA256H2_3s(DisasContext *s, arg_SHA256H2_3s *a)
170
+
307
-{
171
+ sum10 = a[s + H4(2 + 0)];
308
- TCGv_ptr ptr1, ptr2, ptr3;
172
+ sum10 = bfdotadd(sum10, n[s + H4(2 + 0)], m[s + H4(0 + 0)]);
309
-
173
+ sum10 = bfdotadd(sum10, n[s + H4(2 + 1)], m[s + H4(0 + 1)]);
310
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
174
+
311
- !dc_isar_feature(aa32_sha2, s)) {
175
+ sum11 = a[s + H4(2 + 1)];
312
- return false;
176
+ sum11 = bfdotadd(sum11, n[s + H4(2 + 0)], m[s + H4(2 + 0)]);
313
- }
177
+ sum11 = bfdotadd(sum11, n[s + H4(2 + 1)], m[s + H4(2 + 1)]);
314
-
178
+
315
- /* UNDEF accesses to D16-D31 if they don't exist. */
179
+ d[s + H4(0 + 0)] = sum00;
316
- if (!dc_isar_feature(aa32_simd_r32, s) &&
180
+ d[s + H4(0 + 1)] = sum01;
317
- ((a->vd | a->vn | a->vm) & 0x10)) {
181
+ d[s + H4(2 + 0)] = sum10;
318
- return false;
182
+ d[s + H4(2 + 1)] = sum11;
319
- }
183
+ }
320
-
184
+ clear_tail(d, opr_sz, simd_maxsz(desc));
321
- if ((a->vn | a->vm | a->vd) & 1) {
185
+}
322
- return false;
323
- }
324
-
325
- if (!vfp_access_check(s)) {
326
- return true;
327
- }
328
-
329
- ptr1 = vfp_reg_ptr(true, a->vd);
330
- ptr2 = vfp_reg_ptr(true, a->vn);
331
- ptr3 = vfp_reg_ptr(true, a->vm);
332
- gen_helper_crypto_sha256h2(ptr1, ptr2, ptr3);
333
- tcg_temp_free_ptr(ptr1);
334
- tcg_temp_free_ptr(ptr2);
335
- tcg_temp_free_ptr(ptr3);
336
-
337
- return true;
338
-}
339
-
340
-static bool trans_SHA256SU1_3s(DisasContext *s, arg_SHA256SU1_3s *a)
341
-{
342
- TCGv_ptr ptr1, ptr2, ptr3;
343
-
344
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
345
- !dc_isar_feature(aa32_sha2, s)) {
346
- return false;
347
- }
348
-
349
- /* UNDEF accesses to D16-D31 if they don't exist. */
350
- if (!dc_isar_feature(aa32_simd_r32, s) &&
351
- ((a->vd | a->vn | a->vm) & 0x10)) {
352
- return false;
353
- }
354
-
355
- if ((a->vn | a->vm | a->vd) & 1) {
356
- return false;
357
- }
358
-
359
- if (!vfp_access_check(s)) {
360
- return true;
361
- }
362
-
363
- ptr1 = vfp_reg_ptr(true, a->vd);
364
- ptr2 = vfp_reg_ptr(true, a->vn);
365
- ptr3 = vfp_reg_ptr(true, a->vm);
366
- gen_helper_crypto_sha256su1(ptr1, ptr2, ptr3);
367
- tcg_temp_free_ptr(ptr1);
368
- tcg_temp_free_ptr(ptr2);
369
- tcg_temp_free_ptr(ptr3);
370
-
371
- return true;
372
-}
373
+DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
374
+DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
375
+DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
376
377
#define DO_3SAME_64(INSN, FUNC) \
378
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
379
diff --git a/target/arm/translate.c b/target/arm/translate.c
380
index XXXXXXX..XXXXXXX 100644
381
--- a/target/arm/translate.c
382
+++ b/target/arm/translate.c
383
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
384
int vec_size;
385
uint32_t imm;
386
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
387
- TCGv_ptr ptr1, ptr2;
388
+ TCGv_ptr ptr1;
389
TCGv_i64 tmp64;
390
391
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
392
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
393
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
394
return 1;
395
}
396
- ptr1 = vfp_reg_ptr(true, rd);
397
- ptr2 = vfp_reg_ptr(true, rm);
398
-
399
- gen_helper_crypto_sha1h(ptr1, ptr2);
400
-
401
- tcg_temp_free_ptr(ptr1);
402
- tcg_temp_free_ptr(ptr2);
403
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
404
+ gen_helper_crypto_sha1h);
405
break;
406
case NEON_2RM_SHA1SU1:
407
if ((rm | rd) & 1) {
408
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
409
} else if (!dc_isar_feature(aa32_sha1, s)) {
410
return 1;
411
}
412
- ptr1 = vfp_reg_ptr(true, rd);
413
- ptr2 = vfp_reg_ptr(true, rm);
414
- if (q) {
415
- gen_helper_crypto_sha256su0(ptr1, ptr2);
416
- } else {
417
- gen_helper_crypto_sha1su1(ptr1, ptr2);
418
- }
419
- tcg_temp_free_ptr(ptr1);
420
- tcg_temp_free_ptr(ptr2);
421
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
422
+ q ? gen_helper_crypto_sha256su0
423
+ : gen_helper_crypto_sha1su1);
424
break;
425
-
426
case NEON_2RM_VMVN:
427
tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
428
break;
429
--
186
--
430
2.20.1
187
2.20.1
431
188
432
189
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
With this conversion, we will be able to use the same helpers
3
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
4
with sve. This also fixes a bug in which we failed to clear
4
and VFMA{B,T}.BF16 for AArch32 NEON.
5
the high bits of the SVE register after an AdvSIMD operation.
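
A side note on the BFMLAL{B,T} / VFMA{B,T}.BF16 forms mentioned above (standalone sketch, not the QEMU helper): a bfloat16 value is simply the top 16 bits of an IEEE float32, so widening an element is a 16-bit left shift of its raw bits, after which the multiply-add is ordinary float32 arithmetic on the selected even (B) or odd (T) elements.

#include <stdint.h>
#include <string.h>

static float bf16_to_f32(uint16_t bf)
{
    uint32_t bits = (uint32_t)bf << 16;   /* bf16 occupies the high half */
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}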
6
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-3-richard.henderson@linaro.org
8
Message-id: 20210525225817.400336-10-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper.h | 2 ++
11
target/arm/helper.h | 3 +++
13
target/arm/translate-a64.h | 3 ++
12
target/arm/neon-shared.decode | 3 +++
14
target/arm/crypto_helper.c | 11 +++++++
13
target/arm/sve.decode | 3 +++
15
target/arm/translate-a64.c | 59 ++++++++++++++++++++------------------
14
target/arm/translate-a64.c | 13 +++++++++----
16
4 files changed, 47 insertions(+), 28 deletions(-)
15
target/arm/translate-neon.c | 9 +++++++++
16
target/arm/translate-sve.c | 30 ++++++++++++++++++++++++++++++
17
target/arm/vec_helper.c | 16 ++++++++++++++++
18
7 files changed, 73 insertions(+), 4 deletions(-)
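
One of the conversions below reimplements RAX1 as an inline gvec expansion, a rotate followed by an xor. A per-lane sketch of the semantics (this standalone function is only illustrative):

#include <stdint.h>

static uint64_t rax1_lane(uint64_t n, uint64_t m)
{
    uint64_t m_rol1 = (m << 1) | (m >> 63);   /* rotate left by one bit */
    return n ^ m_rol1;
}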
17
19
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
22
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
23
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
23
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
24
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
void, ptr, ptr, ptr, ptr, i32)
25
27
26
+DEF_HELPER_FLAGS_4(crypto_rax1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, ptr, i32)
27
+
30
+
28
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
31
#ifdef TARGET_AARCH64
29
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
32
#include "helper-a64.h"
30
33
#include "helper-sve.h"
31
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
34
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
32
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-a64.h
36
--- a/target/arm/neon-shared.decode
34
+++ b/target/arm/translate-a64.h
37
+++ b/target/arm/neon-shared.decode
35
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
38
@@ -XXX,XX +XXX,XX @@ VUSMMLA 1111 1100 1.10 .... .... 1100 .1.0 .... \
36
39
VMMLA_b16 1111 1100 0.00 .... .... 1100 .1.0 .... \
37
bool disas_sve(DisasContext *, uint32_t);
40
vm=%vm_dp vn=%vn_dp vd=%vd_dp
38
41
39
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
42
+VFMA_b16 1111 110 0 0.11 .... .... 1000 . q:1 . 1 .... \
40
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
43
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
41
+
44
+
42
#endif /* TARGET_ARM_TRANSLATE_A64_H */
45
VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
43
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
46
vn=%vn_dp vd=%vd_dp size=1
47
VCMLA_scalar 1111 1110 1 . rot:2 .... .... 1000 . q:1 . 0 .... \
48
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
44
index XXXXXXX..XXXXXXX 100644
49
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/crypto_helper.c
50
--- a/target/arm/sve.decode
46
+++ b/target/arm/crypto_helper.c
51
+++ b/target/arm/sve.decode
47
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
52
@@ -XXX,XX +XXX,XX @@ FMLALT_zzzw 01100100 10 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
48
}
53
FMLSLB_zzzw 01100100 10 1 ..... 10 1 00 0 ..... ..... @rda_rn_rm_e0
49
clear_tail(vd, opr_sz, simd_maxsz(desc));
54
FMLSLT_zzzw 01100100 10 1 ..... 10 1 00 1 ..... ..... @rda_rn_rm_e0
50
}
55
56
+BFMLALB_zzzw 01100100 11 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
57
+BFMLALT_zzzw 01100100 11 1 ..... 10 0 00 1 ..... ..... @rda_rn_rm_e0
51
+
58
+
52
+void HELPER(crypto_rax1)(void *vd, void *vn, void *vm, uint32_t desc)
59
### SVE2 floating-point bfloat16 dot-product
53
+{
60
BFDOT_zzzz 01100100 01 1 ..... 10 0 00 0 ..... ..... @rda_rn_rm_e0
54
+ intptr_t i, opr_sz = simd_oprsz(desc);
61
55
+ uint64_t *d = vd, *n = vn, *m = vm;
56
+
57
+ for (i = 0; i < opr_sz / 8; ++i) {
58
+ d[i] = n[i] ^ rol64(m[i], 1);
59
+ }
60
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
61
+}
62
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
62
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
63
index XXXXXXX..XXXXXXX 100644
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/translate-a64.c
64
--- a/target/arm/translate-a64.c
65
+++ b/target/arm/translate-a64.c
65
+++ b/target/arm/translate-a64.c
66
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
67
tcg_temp_free_ptr(tcg_rn_ptr);
67
}
68
feature = dc_isar_feature(aa64_bf16, s);
69
break;
70
- case 0x1f: /* BFDOT */
71
+ case 0x1f:
72
switch (size) {
73
- case 1:
74
+ case 1: /* BFDOT */
75
+ case 3: /* BFMLAL{B,T} */
76
feature = dc_isar_feature(aa64_bf16, s);
77
break;
78
default:
79
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
80
case 0xd: /* BFMMLA */
81
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
82
return;
83
- case 0xf: /* BFDOT */
84
+ case 0xf:
85
switch (size) {
86
- case 1:
87
+ case 1: /* BFDOT */
88
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
89
break;
90
+ case 3: /* BFMLAL{B,T} */
91
+ gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, false, is_q,
92
+ gen_helper_gvec_bfmlal);
93
+ break;
94
default:
95
g_assert_not_reached();
96
}
97
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
98
index XXXXXXX..XXXXXXX 100644
99
--- a/target/arm/translate-neon.c
100
+++ b/target/arm/translate-neon.c
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMMLA_b16(DisasContext *s, arg_VMMLA_b16 *a)
102
return do_neon_ddda(s, 7, a->vd, a->vn, a->vm, 0,
103
gen_helper_gvec_bfmmla);
68
}
104
}
69
105
+
70
+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
106
+static bool trans_VFMA_b16(DisasContext *s, arg_VFMA_b16 *a)
71
+{
107
+{
72
+ tcg_gen_rotli_i64(d, m, 1);
108
+ if (!dc_isar_feature(aa32_bf16, s)) {
73
+ tcg_gen_xor_i64(d, d, n);
109
+ return false;
110
+ }
111
+ return do_neon_ddda_fpst(s, 7, a->vd, a->vn, a->vm, a->q, FPST_STD,
112
+ gen_helper_gvec_bfmlal);
113
+}
114
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate-sve.c
117
+++ b/target/arm/translate-sve.c
118
@@ -XXX,XX +XXX,XX @@ static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
119
}
120
return true;
121
}
122
+
123
+static bool do_BFMLAL_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
124
+{
125
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
126
+ return false;
127
+ }
128
+ if (sve_access_check(s)) {
129
+ TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
130
+ unsigned vsz = vec_full_reg_size(s);
131
+
132
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
133
+ vec_full_reg_offset(s, a->rn),
134
+ vec_full_reg_offset(s, a->rm),
135
+ vec_full_reg_offset(s, a->ra),
136
+ status, vsz, vsz, sel,
137
+ gen_helper_gvec_bfmlal);
138
+ tcg_temp_free_ptr(status);
139
+ }
140
+ return true;
74
+}
141
+}
75
+
142
+
76
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
143
+static bool trans_BFMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
77
+{
144
+{
78
+ tcg_gen_rotli_vec(vece, d, m, 1);
145
+ return do_BFMLAL_zzzw(s, a, false);
79
+ tcg_gen_xor_vec(vece, d, d, n);
80
+}
146
+}
81
+
147
+
82
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
148
+static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
83
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
84
+{
149
+{
85
+ static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
150
+ return do_BFMLAL_zzzw(s, a, true);
86
+ static const GVecGen3 op = {
87
+ .fni8 = gen_rax1_i64,
88
+ .fniv = gen_rax1_vec,
89
+ .opt_opc = vecop_list,
90
+ .fno = gen_helper_crypto_rax1,
91
+ .vece = MO_64,
92
+ };
93
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
94
+}
151
+}
152
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/vec_helper.c
155
+++ b/target/arm/vec_helper.c
156
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
157
}
158
clear_tail(d, opr_sz, simd_maxsz(desc));
159
}
95
+
160
+
96
/* Crypto three-reg SHA512
161
+void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
97
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
162
+ void *stat, uint32_t desc)
98
* +-----------------------+------+---+---+-----+--------+------+------+
163
+{
99
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
100
bool feature;
165
+ intptr_t sel = simd_data(desc);
101
CryptoThreeOpFn *genfn = NULL;
166
+ float32 *d = vd, *a = va;
102
gen_helper_gvec_3 *oolfn = NULL;
167
+ bfloat16 *n = vn, *m = vm;
103
+ GVecGen3Fn *gvecfn = NULL;
168
+
104
169
+ for (i = 0; i < opr_sz / 4; ++i) {
105
if (o == 0) {
170
+ float32 nn = n[H2(i * 2 + sel)] << 16;
106
switch (opcode) {
171
+ float32 mm = m[H2(i * 2 + sel)] << 16;
107
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
172
+ d[H4(i)] = float32_muladd(nn, mm, a[H4(i)], 0, stat);
108
break;
173
+ }
109
case 3: /* RAX1 */
174
+ clear_tail(d, opr_sz, simd_maxsz(desc));
110
feature = dc_isar_feature(aa64_sha3, s);
175
+}
111
- genfn = NULL;
112
+ gvecfn = gen_gvec_rax1;
113
break;
114
default:
115
g_assert_not_reached();
116
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
117
118
if (oolfn) {
119
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
120
- return;
121
- }
122
-
123
- if (genfn) {
124
+ } else if (gvecfn) {
125
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
126
+ } else {
127
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
128
129
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
130
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
131
tcg_temp_free_ptr(tcg_rd_ptr);
132
tcg_temp_free_ptr(tcg_rn_ptr);
133
tcg_temp_free_ptr(tcg_rm_ptr);
134
- } else {
135
- TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
136
- int pass;
137
-
138
- tcg_op1 = tcg_temp_new_i64();
139
- tcg_op2 = tcg_temp_new_i64();
140
- tcg_res[0] = tcg_temp_new_i64();
141
- tcg_res[1] = tcg_temp_new_i64();
142
-
143
- for (pass = 0; pass < 2; pass++) {
144
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
145
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
146
-
147
- tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1);
148
- tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
149
- }
150
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
151
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
152
-
153
- tcg_temp_free_i64(tcg_op1);
154
- tcg_temp_free_i64(tcg_op2);
155
- tcg_temp_free_i64(tcg_res[0]);
156
- tcg_temp_free_i64(tcg_res[1]);
157
}
158
}
159
160
--
176
--
161
2.20.1
177
2.20.1
162
178
163
179
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
With this conversion, we will be able to use the same helpers
3
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
4
with sve. In particular, pass 3 vector parameters for the
4
and VFMA{B,T}.BF16 for AArch32 NEON.
5
3-operand operations; for advsimd the destination register
6
is also an input.
7
5
8
This also fixes a bug in which we failed to clear the high bits
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
of the SVE register after an AdvSIMD operation.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20200514212831.31248-2-richard.henderson@linaro.org
8
Message-id: 20210525225817.400336-11-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
10
---
16
target/arm/helper.h | 6 ++--
11
target/arm/helper.h | 2 ++
17
target/arm/vec_internal.h | 33 +++++++++++++++++
12
target/arm/neon-shared.decode | 2 ++
18
target/arm/crypto_helper.c | 72 +++++++++++++++++++++++++++-----------
13
target/arm/sve.decode | 2 ++
19
target/arm/translate-a64.c | 55 ++++++++++++++++++-----------
14
target/arm/translate-a64.c | 15 ++++++++++++++-
20
target/arm/translate.c | 27 +++++++-------
15
target/arm/translate-neon.c | 10 ++++++++++
21
target/arm/vec_helper.c | 12 +------
16
target/arm/translate-sve.c | 30 ++++++++++++++++++++++++++++++
22
6 files changed, 138 insertions(+), 67 deletions(-)
17
target/arm/vec_helper.c | 22 ++++++++++++++++++++++
23
create mode 100644 target/arm/vec_internal.h
18
7 files changed, 82 insertions(+), 1 deletion(-)
24
19
25
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
26
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.h
22
--- a/target/arm/helper.h
28
+++ b/target/arm/helper.h
23
+++ b/target/arm/helper.h
29
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip8, TCG_CALL_NO_RWG, void, ptr, ptr)
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
30
DEF_HELPER_FLAGS_2(neon_qzip16, TCG_CALL_NO_RWG, void, ptr, ptr)
25
31
DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
26
DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
32
27
void, ptr, ptr, ptr, ptr, ptr, i32)
33
-DEF_HELPER_FLAGS_3(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
34
+DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+ void, ptr, ptr, ptr, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
30
36
31
#ifdef TARGET_AARCH64
37
DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
#include "helper-a64.h"
38
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
33
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
39
DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
40
DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
41
42
-DEF_HELPER_FLAGS_2(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr)
43
-DEF_HELPER_FLAGS_3(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
44
+DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
47
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
48
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
49
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
50
new file mode 100644
51
index XXXXXXX..XXXXXXX
52
--- /dev/null
53
+++ b/target/arm/vec_internal.h
54
@@ -XXX,XX +XXX,XX @@
55
+/*
56
+ * ARM AdvSIMD / SVE Vector Helpers
57
+ *
58
+ * Copyright (c) 2020 Linaro
59
+ *
60
+ * This library is free software; you can redistribute it and/or
61
+ * modify it under the terms of the GNU Lesser General Public
62
+ * License as published by the Free Software Foundation; either
63
+ * version 2 of the License, or (at your option) any later version.
64
+ *
65
+ * This library is distributed in the hope that it will be useful,
66
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
67
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
68
+ * Lesser General Public License for more details.
69
+ *
70
+ * You should have received a copy of the GNU Lesser General Public
71
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
72
+ */
73
+
74
+#ifndef TARGET_ARM_VEC_INTERNALS_H
75
+#define TARGET_ARM_VEC_INTERNALS_H
76
+
77
+static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
78
+{
79
+ uint64_t *d = vd + opr_sz;
80
+ uintptr_t i;
81
+
82
+ for (i = opr_sz; i < max_sz; i += 8) {
83
+ *d++ = 0;
84
+ }
85
+}
86
+
87
+#endif /* TARGET_ARM_VEC_INTERNALS_H */
88
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
89
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/crypto_helper.c
35
--- a/target/arm/neon-shared.decode
91
+++ b/target/arm/crypto_helper.c
36
+++ b/target/arm/neon-shared.decode
92
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@ VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 0 . 1 index:1 ... \
93
38
rm=%vfml_scalar_q0_rm vn=%vn_sp vd=%vd_dp q=0
94
#include "cpu.h"
39
VFML_scalar 1111 1110 0 . 0 s:1 .... .... 1000 . 1 . 1 . rm:3 \
95
#include "exec/helper-proto.h"
40
index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp q=1
96
+#include "tcg/tcg-gvec-desc.h"
41
+VFMA_b16_scal 1111 1110 0.11 .... .... 1000 . q:1 . 1 . vm:3 \
97
#include "crypto/aes.h"
42
+ index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp
98
+#include "vec_internal.h"
43
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
99
44
index XXXXXXX..XXXXXXX 100644
100
union CRYPTO_STATE {
45
--- a/target/arm/sve.decode
101
uint8_t bytes[16];
46
+++ b/target/arm/sve.decode
102
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
47
@@ -XXX,XX +XXX,XX @@ FMLALB_zzxw 01100100 10 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
103
#define CR_ST_WORD(state, i) (state.words[i])
48
FMLALT_zzxw 01100100 10 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
104
#endif
49
FMLSLB_zzxw 01100100 10 1 ..... 0110.0 ..... ..... @rrxr_3a esz=2
105
50
FMLSLT_zzxw 01100100 10 1 ..... 0110.1 ..... ..... @rrxr_3a esz=2
106
-void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
51
+BFMLALB_zzxw 01100100 11 1 ..... 0100.0 ..... ..... @rrxr_3a esz=2
107
+static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
52
+BFMLALT_zzxw 01100100 11 1 ..... 0100.1 ..... ..... @rrxr_3a esz=2
108
+ uint64_t *rm, bool decrypt)
53
109
{
54
### SVE2 floating-point bfloat16 dot-product (indexed)
110
static uint8_t const * const sbox[2] = { AES_sbox, AES_isbox };
55
BFDOT_zzxz 01100100 01 1 ..... 010000 ..... ..... @rrxr_2 esz=2
111
static uint8_t const * const shift[2] = { AES_shifts, AES_ishifts };
112
- uint64_t *rd = vd;
113
- uint64_t *rm = vm;
114
union CRYPTO_STATE rk = { .l = { rm[0], rm[1] } };
115
- union CRYPTO_STATE st = { .l = { rd[0], rd[1] } };
116
+ union CRYPTO_STATE st = { .l = { rn[0], rn[1] } };
117
int i;
118
119
- assert(decrypt < 2);
120
-
121
/* xor state vector with round key */
122
rk.l[0] ^= st.l[0];
123
rk.l[1] ^= st.l[1];
124
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
125
rd[1] = st.l[1];
126
}
127
128
-void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
129
+void HELPER(crypto_aese)(void *vd, void *vn, void *vm, uint32_t desc)
130
+{
131
+ intptr_t i, opr_sz = simd_oprsz(desc);
132
+ bool decrypt = simd_data(desc);
133
+
134
+ for (i = 0; i < opr_sz; i += 16) {
135
+ do_crypto_aese(vd + i, vn + i, vm + i, decrypt);
136
+ }
137
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
138
+}
139
+
140
+static void do_crypto_aesmc(uint64_t *rd, uint64_t *rm, bool decrypt)
141
{
142
static uint32_t const mc[][256] = { {
143
/* MixColumns lookup table */
144
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
145
0xbe805d9f, 0xb58d5491, 0xa89a4f83, 0xa397468d,
146
} };
147
148
- uint64_t *rd = vd;
149
- uint64_t *rm = vm;
150
union CRYPTO_STATE st = { .l = { rm[0], rm[1] } };
151
int i;
152
153
- assert(decrypt < 2);
154
-
155
for (i = 0; i < 16; i += 4) {
156
CR_ST_WORD(st, i >> 2) =
157
mc[decrypt][CR_ST_BYTE(st, i)] ^
158
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
159
rd[1] = st.l[1];
160
}
161
162
+void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t desc)
163
+{
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
165
+ bool decrypt = simd_data(desc);
166
+
167
+ for (i = 0; i < opr_sz; i += 16) {
168
+ do_crypto_aesmc(vd + i, vm + i, decrypt);
169
+ }
170
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
171
+}
172
+
173
/*
174
* SHA-1 logical functions
175
*/
176
@@ -XXX,XX +XXX,XX @@ static uint8_t const sm4_sbox[] = {
177
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
178
};
179
180
-void HELPER(crypto_sm4e)(void *vd, void *vn)
181
+static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
182
{
183
- uint64_t *rd = vd;
184
- uint64_t *rn = vn;
185
- union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
186
- union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
187
+ union CRYPTO_STATE d = { .l = { rn[0], rn[1] } };
188
+ union CRYPTO_STATE n = { .l = { rm[0], rm[1] } };
189
uint32_t t, i;
190
191
for (i = 0; i < 4; i++) {
192
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4e)(void *vd, void *vn)
193
rd[1] = d.l[1];
194
}
195
196
-void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
197
+void HELPER(crypto_sm4e)(void *vd, void *vn, void *vm, uint32_t desc)
198
+{
199
+ intptr_t i, opr_sz = simd_oprsz(desc);
200
+
201
+ for (i = 0; i < opr_sz; i += 16) {
202
+ do_crypto_sm4e(vd + i, vn + i, vm + i);
203
+ }
204
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
205
+}
206
+
207
+static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
208
{
209
- uint64_t *rd = vd;
210
- uint64_t *rn = vn;
211
- uint64_t *rm = vm;
212
union CRYPTO_STATE d;
213
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
214
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
215
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
216
rd[0] = d.l[0];
217
rd[1] = d.l[1];
218
}
219
+
220
+void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
221
+{
222
+ intptr_t i, opr_sz = simd_oprsz(desc);
223
+
224
+ for (i = 0; i < opr_sz; i += 16) {
225
+ do_crypto_sm4ekey(vd + i, vn + i, vm + i);
226
+ }
227
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
228
+}
229
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
56
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
230
index XXXXXXX..XXXXXXX 100644
57
index XXXXXXX..XXXXXXX 100644
231
--- a/target/arm/translate-a64.c
58
--- a/target/arm/translate-a64.c
232
+++ b/target/arm/translate-a64.c
59
+++ b/target/arm/translate-a64.c
233
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn4(DisasContext *s, bool is_q, int rd, int rn, int rm,
60
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
234
is_q ? 16 : 8, vec_full_reg_size(s));
61
unallocated_encoding(s);
235
}
62
return;
236
63
}
237
+/* Expand a 2-operand operation using an out-of-line helper. */
64
+ size = MO_32;
238
+static void gen_gvec_op2_ool(DisasContext *s, bool is_q, int rd,
239
+ int rn, int data, gen_helper_gvec_2 *fn)
240
+{
241
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
242
+ vec_full_reg_offset(s, rn),
243
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
244
+}
245
+
246
/* Expand a 3-operand operation using an out-of-line helper. */
247
static void gen_gvec_op3_ool(DisasContext *s, bool is_q, int rd,
248
int rn, int rm, int data, gen_helper_gvec_3 *fn)
249
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
250
int rn = extract32(insn, 5, 5);
251
int rd = extract32(insn, 0, 5);
252
int decrypt;
253
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
254
- TCGv_i32 tcg_decrypt;
255
- CryptoThreeOpIntFn *genfn;
256
+ gen_helper_gvec_2 *genfn2 = NULL;
257
+ gen_helper_gvec_3 *genfn3 = NULL;
258
259
if (!dc_isar_feature(aa64_aes, s) || size != 0) {
260
unallocated_encoding(s);
261
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
262
switch (opcode) {
263
case 0x4: /* AESE */
264
decrypt = 0;
265
- genfn = gen_helper_crypto_aese;
266
+ genfn3 = gen_helper_crypto_aese;
267
break;
268
case 0x6: /* AESMC */
269
decrypt = 0;
270
- genfn = gen_helper_crypto_aesmc;
271
+ genfn2 = gen_helper_crypto_aesmc;
272
break;
273
case 0x5: /* AESD */
274
decrypt = 1;
275
- genfn = gen_helper_crypto_aese;
276
+ genfn3 = gen_helper_crypto_aese;
277
break;
278
case 0x7: /* AESIMC */
279
decrypt = 1;
280
- genfn = gen_helper_crypto_aesmc;
281
+ genfn2 = gen_helper_crypto_aesmc;
282
break;
283
default:
284
unallocated_encoding(s);
285
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
286
if (!fp_access_check(s)) {
287
return;
288
}
289
-
290
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
291
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
292
- tcg_decrypt = tcg_const_i32(decrypt);
293
-
294
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_decrypt);
295
-
296
- tcg_temp_free_ptr(tcg_rd_ptr);
297
- tcg_temp_free_ptr(tcg_rn_ptr);
298
- tcg_temp_free_i32(tcg_decrypt);
299
+ if (genfn2) {
300
+ gen_gvec_op2_ool(s, true, rd, rn, decrypt, genfn2);
301
+ } else {
302
+ gen_gvec_op3_ool(s, true, rd, rd, rn, decrypt, genfn3);
303
+ }
304
}
305
306
/* Crypto three-reg SHA
307
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
308
int rn = extract32(insn, 5, 5);
309
int rd = extract32(insn, 0, 5);
310
bool feature;
311
- CryptoThreeOpFn *genfn;
312
+ CryptoThreeOpFn *genfn = NULL;
313
+ gen_helper_gvec_3 *oolfn = NULL;
314
315
if (o == 0) {
316
switch (opcode) {
317
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
318
break;
65
break;
319
case 2: /* SM4EKEY */
66
case 1: /* BFDOT */
320
feature = dc_isar_feature(aa64_sm4, s);
67
if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
321
- genfn = gen_helper_crypto_sm4ekey;
68
unallocated_encoding(s);
322
+ oolfn = gen_helper_crypto_sm4ekey;
69
return;
70
}
71
+ size = MO_32;
72
+ break;
73
+ case 3: /* BFMLAL{B,T} */
74
+ if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
75
+ unallocated_encoding(s);
76
+ return;
77
+ }
78
+ /* can't set is_fp without other incorrect size checks */
79
+ size = MO_16;
323
break;
80
break;
324
default:
81
default:
325
unallocated_encoding(s);
82
unallocated_encoding(s);
326
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
83
return;
327
return;
84
}
328
}
85
- size = MO_32;
329
86
break;
330
+ if (oolfn) {
87
case 0x11: /* FCMLA #0 */
331
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
88
case 0x13: /* FCMLA #90 */
332
+ return;
89
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
90
gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
91
gen_helper_gvec_usdot_idx_b);
92
return;
93
+ case 3: /* BFMLAL{B,T} */
94
+ gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, 0, (index << 1) | is_q,
95
+ gen_helper_gvec_bfmlal_idx);
96
+ return;
97
}
98
g_assert_not_reached();
99
case 0x11: /* FCMLA #0 */
100
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/target/arm/translate-neon.c
103
+++ b/target/arm/translate-neon.c
104
@@ -XXX,XX +XXX,XX @@ static bool trans_VFMA_b16(DisasContext *s, arg_VFMA_b16 *a)
105
return do_neon_ddda_fpst(s, 7, a->vd, a->vn, a->vm, a->q, FPST_STD,
106
gen_helper_gvec_bfmlal);
107
}
108
+
109
+static bool trans_VFMA_b16_scal(DisasContext *s, arg_VFMA_b16_scal *a)
110
+{
111
+ if (!dc_isar_feature(aa32_bf16, s)) {
112
+ return false;
333
+ }
113
+ }
114
+ return do_neon_ddda_fpst(s, 6, a->vd, a->vn, a->vm,
115
+ (a->index << 1) | a->q, FPST_STD,
116
+ gen_helper_gvec_bfmlal_idx);
117
+}
118
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/target/arm/translate-sve.c
121
+++ b/target/arm/translate-sve.c
122
@@ -XXX,XX +XXX,XX @@ static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
123
{
124
return do_BFMLAL_zzzw(s, a, true);
125
}
334
+
126
+
335
if (genfn) {
127
+static bool do_BFMLAL_zzxw(DisasContext *s, arg_rrxr_esz *a, bool sel)
336
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
128
+{
337
129
+ if (!dc_isar_feature(aa64_sve_bf16, s)) {
338
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
130
+ return false;
339
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
340
bool feature;
341
CryptoTwoOpFn *genfn;
342
+ gen_helper_gvec_3 *oolfn = NULL;
343
344
switch (opcode) {
345
case 0: /* SHA512SU0 */
346
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
347
break;
348
case 1: /* SM4E */
349
feature = dc_isar_feature(aa64_sm4, s);
350
- genfn = gen_helper_crypto_sm4e;
351
+ oolfn = gen_helper_crypto_sm4e;
352
break;
353
default:
354
unallocated_encoding(s);
355
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
356
return;
357
}
358
359
+ if (oolfn) {
360
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
361
+ return;
362
+ }
131
+ }
132
+ if (sve_access_check(s)) {
133
+ TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
134
+ unsigned vsz = vec_full_reg_size(s);
363
+
135
+
364
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
136
+ tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
365
tcg_rn_ptr = vec_full_reg_ptr(s, rn);
137
+ vec_full_reg_offset(s, a->rn),
366
138
+ vec_full_reg_offset(s, a->rm),
367
diff --git a/target/arm/translate.c b/target/arm/translate.c
139
+ vec_full_reg_offset(s, a->ra),
368
index XXXXXXX..XXXXXXX 100644
140
+ status, vsz, vsz, (a->index << 1) | sel,
369
--- a/target/arm/translate.c
141
+ gen_helper_gvec_bfmlal_idx);
370
+++ b/target/arm/translate.c
142
+ tcg_temp_free_ptr(status);
371
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
143
+ }
372
if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
144
+ return true;
373
return 1;
145
+}
374
}
146
+
375
- ptr1 = vfp_reg_ptr(true, rd);
147
+static bool trans_BFMLALB_zzxw(DisasContext *s, arg_rrxr_esz *a)
376
- ptr2 = vfp_reg_ptr(true, rm);
148
+{
377
-
149
+ return do_BFMLAL_zzxw(s, a, false);
378
- /* Bit 6 is the lowest opcode bit; it distinguishes between
150
+}
379
- * encryption (AESE/AESMC) and decryption (AESD/AESIMC)
151
+
380
- */
152
+static bool trans_BFMLALT_zzxw(DisasContext *s, arg_rrxr_esz *a)
381
- tmp3 = tcg_const_i32(extract32(insn, 6, 1));
153
+{
382
-
154
+ return do_BFMLAL_zzxw(s, a, true);
383
+ /*
155
+}
384
+ * Bit 6 is the lowest opcode bit; it distinguishes
385
+ * between encryption (AESE/AESMC) and decryption
386
+ * (AESD/AESIMC).
387
+ */
388
if (op == NEON_2RM_AESE) {
389
- gen_helper_crypto_aese(ptr1, ptr2, tmp3);
390
+ tcg_gen_gvec_3_ool(vfp_reg_offset(true, rd),
391
+ vfp_reg_offset(true, rd),
392
+ vfp_reg_offset(true, rm),
393
+ 16, 16, extract32(insn, 6, 1),
394
+ gen_helper_crypto_aese);
395
} else {
396
- gen_helper_crypto_aesmc(ptr1, ptr2, tmp3);
397
+ tcg_gen_gvec_2_ool(vfp_reg_offset(true, rd),
398
+ vfp_reg_offset(true, rm),
399
+ 16, 16, extract32(insn, 6, 1),
400
+ gen_helper_crypto_aesmc);
401
}
402
- tcg_temp_free_ptr(ptr1);
403
- tcg_temp_free_ptr(ptr2);
404
- tcg_temp_free_i32(tmp3);
405
break;
406
case NEON_2RM_SHA1H:
407
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
408
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
156
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
409
index XXXXXXX..XXXXXXX 100644
157
index XXXXXXX..XXXXXXX 100644
410
--- a/target/arm/vec_helper.c
158
--- a/target/arm/vec_helper.c
411
+++ b/target/arm/vec_helper.c
159
+++ b/target/arm/vec_helper.c
412
@@ -XXX,XX +XXX,XX @@
160
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_bfmlal)(void *vd, void *vn, void *vm, void *va,
413
#include "exec/helper-proto.h"
161
}
414
#include "tcg/tcg-gvec-desc.h"
162
clear_tail(d, opr_sz, simd_maxsz(desc));
415
#include "fpu/softfloat.h"
163
}
416
-
164
+
417
+#include "vec_internal.h"
165
+void HELPER(gvec_bfmlal_idx)(void *vd, void *vn, void *vm,
418
166
+ void *va, void *stat, uint32_t desc)
419
/* Note that vector data is stored in host-endian 64-bit chunks,
167
+{
420
so addressing units smaller than that needs a host-endian fixup. */
168
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
421
@@ -XXX,XX +XXX,XX @@
169
+ intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1);
422
#define H4(x) (x)
170
+ intptr_t index = extract32(desc, SIMD_DATA_SHIFT + 1, 3);
423
#endif
171
+ intptr_t elements = opr_sz / 4;
424
172
+ intptr_t eltspersegment = MIN(16 / 4, elements);
425
-static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
173
+ float32 *d = vd, *a = va;
426
-{
174
+ bfloat16 *n = vn, *m = vm;
427
- uint64_t *d = vd + opr_sz;
175
+
428
- uintptr_t i;
176
+ for (i = 0; i < elements; i += eltspersegment) {
429
-
177
+ float32 m_idx = m[H2(2 * i + index)] << 16;
430
- for (i = opr_sz; i < max_sz; i += 8) {
178
+
431
- *d++ = 0;
179
+ for (j = i; j < i + eltspersegment; j++) {
432
- }
180
+ float32 n_j = n[H2(2 * j + sel)] << 16;
433
-}
181
+ d[H4(j)] = float32_muladd(n_j, m_idx, a[H4(j)], 0, stat);
434
-
182
+ }
435
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
183
+ }
436
static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
184
+ clear_tail(d, opr_sz, simd_maxsz(desc));
437
int16_t src3, uint32_t *sat)
185
+}
438
--
186
--
439
2.20.1
187
2.20.1
440
188
441
189
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20210525225817.400336-12-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
linux-user/elfload.c | 2 ++
9
1 file changed, 2 insertions(+)
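
A guest-side usage sketch (assumed example code, not part of the patch): once these bits are reported, a Linux process running under linux-user mode can detect them with getauxval(). The HWCAP2_BF16 value below is an assumption kept only as a fallback; the authoritative definition is the kernel's asm/hwcap.h.

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_BF16
#define HWCAP2_BF16 (1UL << 14)   /* assumed value; prefer asm/hwcap.h */
#endif

int main(void)
{
    unsigned long hwcap2 = getauxval(AT_HWCAP2);

    printf("BF16 %ssupported\n", (hwcap2 & HWCAP2_BF16) ? "" : "not ");
    return 0;
}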
10
11
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/linux-user/elfload.c
14
+++ b/linux-user/elfload.c
15
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
16
GET_FEATURE_ID(aa64_sve_i8mm, ARM_HWCAP2_A64_SVEI8MM);
17
GET_FEATURE_ID(aa64_sve_f32mm, ARM_HWCAP2_A64_SVEF32MM);
18
GET_FEATURE_ID(aa64_sve_f64mm, ARM_HWCAP2_A64_SVEF64MM);
19
+ GET_FEATURE_ID(aa64_sve_bf16, ARM_HWCAP2_A64_SVEBF16);
20
GET_FEATURE_ID(aa64_i8mm, ARM_HWCAP2_A64_I8MM);
21
+ GET_FEATURE_ID(aa64_bf16, ARM_HWCAP2_A64_BF16);
22
GET_FEATURE_ID(aa64_rndr, ARM_HWCAP2_A64_RNG);
23
GET_FEATURE_ID(aa64_bti, ARM_HWCAP2_A64_BTI);
24
GET_FEATURE_ID(aa64_mte, ARM_HWCAP2_A64_MTE);
25
--
26
2.20.1
27
28
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Disable BF16 again for !have_neon and !have_vfp during realize.
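
The mechanism below is a read-modify-write of the ID registers during realize. A rough standalone equivalent of what the FIELD_DP32()/FIELD_DP64() calls do (the field position and width here are placeholders, not the architectural ones):

#include <stdint.h>

/* deposit 'val' into bits [pos, pos+len) of 'reg' */
static uint32_t deposit32_sketch(uint32_t reg, int pos, int len, uint32_t val)
{
    uint32_t mask = ((1u << len) - 1u) << pos;
    return (reg & ~mask) | ((val << pos) & mask);
}

/* hide a 4-bit feature field again when its prerequisite is absent */
static uint32_t maybe_hide_feature(uint32_t id_reg, int pos, int have_prereq)
{
    return have_prereq ? id_reg : deposit32_sketch(id_reg, pos, 4, 0);
}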
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210525225817.400336-13-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/cpu.c | 3 +++
11
target/arm/cpu64.c | 3 +++
12
target/arm/cpu_tcg.c | 1 +
13
3 files changed, 7 insertions(+)
14
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.c
18
+++ b/target/arm/cpu.c
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
20
21
u = cpu->isar.id_isar6;
22
u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
23
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
24
cpu->isar.id_isar6 = u;
25
26
u = cpu->isar.mvfr0;
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
28
29
t = cpu->isar.id_aa64isar1;
30
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 0);
31
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 0);
32
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 0);
33
cpu->isar.id_aa64isar1 = t;
34
35
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
36
u = cpu->isar.id_isar6;
37
u = FIELD_DP32(u, ID_ISAR6, DP, 0);
38
u = FIELD_DP32(u, ID_ISAR6, FHM, 0);
39
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 0);
40
u = FIELD_DP32(u, ID_ISAR6, I8MM, 0);
41
cpu->isar.id_isar6 = u;
42
43
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/cpu64.c
46
+++ b/target/arm/cpu64.c
47
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
48
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
49
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
50
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
51
+ t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
52
t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
53
t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
54
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
55
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
56
t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
57
t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
58
t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
59
+ t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
60
t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
61
t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
62
t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
63
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
64
u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
65
u = FIELD_DP32(u, ID_ISAR6, SB, 1);
66
u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
67
+ u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
68
u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
69
cpu->isar.id_isar6 = u;
70
71
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/cpu_tcg.c
74
+++ b/target/arm/cpu_tcg.c
75
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
76
t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
77
t = FIELD_DP32(t, ID_ISAR6, SB, 1);
78
t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
79
+ t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
80
t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
81
cpu->isar.id_isar6 = t;
82
83
--
84
2.20.1
85
86
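Clearing ID_ISAR6.BF16 / ID_AA64ISAR1.BF16 at realize time is enough to hide the extension because the translator gates BF16 decode purely on these ID register fields. A minimal sketch of such a feature test, assuming the FIELD_EX32 accessor from hw/registerfields.h and the ARMISARegisters layout used in target/arm; the helper name here is illustrative, not necessarily the one the series adds:

/*
 * Sketch only: read back the ID_ISAR6.BF16 field that the patch above
 * clears when neon/vfp are disabled, so such a CPU reports "no BF16".
 */
static inline bool sketch_isar_feature_aa32_bf16(const ARMISARegisters *id)
{
    return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
}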
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Add BCM2835 SOC MPHI (Message-based Parallel Host Interface)
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
emulation. It is very basic, only providing the FIQ interrupt
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
needed to allow the dwc-otg USB host controller driver in the
5
prepare for support for multiple architectures, let's start moving common
6
Raspbian kernel to function.
6
code out into its own accel directory.
7
7
8
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
8
This patch moves assert_hvf_ok() and introduces generic build infrastructure.
9
Acked-by: Philippe Mathieu-Daude <f4bug@amsat.org>
9
10
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-2-agraf@csgraf.de
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20200520235349.21215-2-pauldzim@gmail.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
15
---
14
include/hw/arm/bcm2835_peripherals.h | 2 +
16
include/sysemu/hvf_int.h | 18 +++++++++++++++
15
include/hw/misc/bcm2835_mphi.h | 44 ++++++
17
accel/hvf/hvf-all.c | 47 ++++++++++++++++++++++++++++++++++++++++
16
hw/arm/bcm2835_peripherals.c | 17 +++
18
target/i386/hvf/hvf.c | 33 +---------------------------
17
hw/misc/bcm2835_mphi.c | 191 +++++++++++++++++++++++++++
19
MAINTAINERS | 8 +++++++
18
hw/misc/Makefile.objs | 1 +
20
accel/hvf/meson.build | 6 +++++
19
5 files changed, 255 insertions(+)
21
accel/meson.build | 1 +
20
create mode 100644 include/hw/misc/bcm2835_mphi.h
22
6 files changed, 81 insertions(+), 32 deletions(-)
21
create mode 100644 hw/misc/bcm2835_mphi.c
23
create mode 100644 include/sysemu/hvf_int.h
22
24
create mode 100644 accel/hvf/hvf-all.c
23
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
25
create mode 100644 accel/hvf/meson.build
24
index XXXXXXX..XXXXXXX 100644
26
25
--- a/include/hw/arm/bcm2835_peripherals.h
27
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
26
+++ b/include/hw/arm/bcm2835_peripherals.h
27
@@ -XXX,XX +XXX,XX @@
28
#include "hw/misc/bcm2835_property.h"
29
#include "hw/misc/bcm2835_rng.h"
30
#include "hw/misc/bcm2835_mbox.h"
31
+#include "hw/misc/bcm2835_mphi.h"
32
#include "hw/misc/bcm2835_thermal.h"
33
#include "hw/sd/sdhci.h"
34
#include "hw/sd/bcm2835_sdhost.h"
35
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
36
qemu_irq irq, fiq;
37
38
BCM2835SystemTimerState systmr;
39
+ BCM2835MphiState mphi;
40
UnimplementedDeviceState armtmr;
41
UnimplementedDeviceState cprman;
42
UnimplementedDeviceState a2w;
43
diff --git a/include/hw/misc/bcm2835_mphi.h b/include/hw/misc/bcm2835_mphi.h
44
new file mode 100644
28
new file mode 100644
45
index XXXXXXX..XXXXXXX
29
index XXXXXXX..XXXXXXX
46
--- /dev/null
30
--- /dev/null
47
+++ b/include/hw/misc/bcm2835_mphi.h
31
+++ b/include/sysemu/hvf_int.h
48
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@
49
+/*
33
+/*
50
+ * BCM2835 SOC MPHI state definitions
34
+ * QEMU Hypervisor.framework (HVF) support
51
+ *
35
+ *
52
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
36
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
53
+ *
37
+ * See the COPYING file in the top-level directory.
54
+ * This program is free software; you can redistribute it and/or modify
38
+ *
55
+ * it under the terms of the GNU General Public License as published by
56
+ * the Free Software Foundation; either version 2 of the License, or
57
+ * (at your option) any later version.
58
+ *
59
+ * This program is distributed in the hope that it will be useful,
60
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
61
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
62
+ * GNU General Public License for more details.
63
+ */
39
+ */
64
+
40
+
65
+#ifndef HW_MISC_BCM2835_MPHI_H
41
+/* header to be included in HVF-specific code */
66
+#define HW_MISC_BCM2835_MPHI_H
42
+
67
+
43
+#ifndef HVF_INT_H
68
+#include "hw/irq.h"
44
+#define HVF_INT_H
69
+#include "hw/sysbus.h"
45
+
70
+
46
+#include <Hypervisor/hv.h>
71
+#define MPHI_MMIO_SIZE 0x1000
47
+
72
+
48
+void assert_hvf_ok(hv_return_t ret);
73
+typedef struct BCM2835MphiState BCM2835MphiState;
74
+
75
+struct BCM2835MphiState {
76
+ SysBusDevice parent_obj;
77
+ qemu_irq irq;
78
+ MemoryRegion iomem;
79
+
80
+ uint32_t outdda;
81
+ uint32_t outddb;
82
+ uint32_t ctrl;
83
+ uint32_t intstat;
84
+ uint32_t swirq;
85
+};
86
+
87
+#define TYPE_BCM2835_MPHI "bcm2835-mphi"
88
+
89
+#define BCM2835_MPHI(obj) \
90
+ OBJECT_CHECK(BCM2835MphiState, (obj), TYPE_BCM2835_MPHI)
91
+
49
+
92
+#endif
50
+#endif
93
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
51
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/hw/arm/bcm2835_peripherals.c
96
+++ b/hw/arm/bcm2835_peripherals.c
97
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
98
OBJECT(&s->sdhci.sdbus));
99
object_property_add_const_link(OBJECT(&s->gpio), "sdbus-sdhost",
100
OBJECT(&s->sdhost.sdbus));
101
+
102
+ /* Mphi */
103
+ sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
104
+ TYPE_BCM2835_MPHI);
105
}
106
107
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
108
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
109
110
object_property_add_alias(OBJECT(s), "sd-bus", OBJECT(&s->gpio), "sd-bus");
111
112
+ /* Mphi */
113
+ object_property_set_bool(OBJECT(&s->mphi), true, "realized", &err);
114
+ if (err) {
115
+ error_propagate(errp, err);
116
+ return;
117
+ }
118
+
119
+ memory_region_add_subregion(&s->peri_mr, MPHI_OFFSET,
120
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->mphi), 0));
121
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->mphi), 0,
122
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
123
+ INTERRUPT_HOSTPORT));
124
+
125
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
126
create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
127
create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
128
diff --git a/hw/misc/bcm2835_mphi.c b/hw/misc/bcm2835_mphi.c
129
new file mode 100644
52
new file mode 100644
130
index XXXXXXX..XXXXXXX
53
index XXXXXXX..XXXXXXX
131
--- /dev/null
54
--- /dev/null
132
+++ b/hw/misc/bcm2835_mphi.c
55
+++ b/accel/hvf/hvf-all.c
133
@@ -XXX,XX +XXX,XX @@
56
@@ -XXX,XX +XXX,XX @@
134
+/*
57
+/*
135
+ * BCM2835 SOC MPHI emulation
58
+ * QEMU Hypervisor.framework support
136
+ *
59
+ *
137
+ * Very basic emulation, only providing the FIQ interrupt needed to
60
+ * This work is licensed under the terms of the GNU GPL, version 2. See
138
+ * allow the dwc-otg USB host controller driver in the Raspbian kernel
61
+ * the COPYING file in the top-level directory.
139
+ * to function.
62
+ *
140
+ *
63
+ * Contributions after 2012-01-13 are licensed under the terms of the
141
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
64
+ * GNU GPL, version 2 or (at your option) any later version.
142
+ *
143
+ * This program is free software; you can redistribute it and/or modify
144
+ * it under the terms of the GNU General Public License as published by
145
+ * the Free Software Foundation; either version 2 of the License, or
146
+ * (at your option) any later version.
147
+ *
148
+ * This program is distributed in the hope that it will be useful,
149
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
150
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
151
+ * GNU General Public License for more details.
152
+ */
65
+ */
153
+
66
+
154
+#include "qemu/osdep.h"
67
+#include "qemu/osdep.h"
155
+#include "qapi/error.h"
68
+#include "qemu-common.h"
156
+#include "hw/misc/bcm2835_mphi.h"
157
+#include "migration/vmstate.h"
158
+#include "qemu/error-report.h"
69
+#include "qemu/error-report.h"
159
+#include "qemu/log.h"
70
+#include "sysemu/hvf.h"
160
+#include "qemu/main-loop.h"
71
+#include "sysemu/hvf_int.h"
161
+
72
+
162
+static inline void mphi_raise_irq(BCM2835MphiState *s)
73
+void assert_hvf_ok(hv_return_t ret)
163
+{
74
+{
164
+ qemu_set_irq(s->irq, 1);
75
+ if (ret == HV_SUCCESS) {
165
+}
166
+
167
+static inline void mphi_lower_irq(BCM2835MphiState *s)
168
+{
169
+ qemu_set_irq(s->irq, 0);
170
+}
171
+
172
+static uint64_t mphi_reg_read(void *ptr, hwaddr addr, unsigned size)
173
+{
174
+ BCM2835MphiState *s = ptr;
175
+ uint32_t val = 0;
176
+
177
+ switch (addr) {
178
+ case 0x28: /* outdda */
179
+ val = s->outdda;
180
+ break;
181
+ case 0x2c: /* outddb */
182
+ val = s->outddb;
183
+ break;
184
+ case 0x4c: /* ctrl */
185
+ val = s->ctrl;
186
+ val |= 1 << 17;
187
+ break;
188
+ case 0x50: /* intstat */
189
+ val = s->intstat;
190
+ break;
191
+ case 0x1f0: /* swirq_set */
192
+ val = s->swirq;
193
+ break;
194
+ case 0x1f4: /* swirq_clr */
195
+ val = s->swirq;
196
+ break;
197
+ default:
198
+ qemu_log_mask(LOG_UNIMP, "read from unknown register");
199
+ break;
200
+ }
201
+
202
+ return val;
203
+}
204
+
205
+static void mphi_reg_write(void *ptr, hwaddr addr, uint64_t val, unsigned size)
206
+{
207
+ BCM2835MphiState *s = ptr;
208
+ int do_irq = 0;
209
+
210
+ switch (addr) {
211
+ case 0x28: /* outdda */
212
+ s->outdda = val;
213
+ break;
214
+ case 0x2c: /* outddb */
215
+ s->outddb = val;
216
+ if (val & (1 << 29)) {
217
+ do_irq = 1;
218
+ }
219
+ break;
220
+ case 0x4c: /* ctrl */
221
+ s->ctrl = val;
222
+ if (val & (1 << 16)) {
223
+ do_irq = -1;
224
+ }
225
+ break;
226
+ case 0x50: /* intstat */
227
+ s->intstat = val;
228
+ if (val & ((1 << 16) | (1 << 29))) {
229
+ do_irq = -1;
230
+ }
231
+ break;
232
+ case 0x1f0: /* swirq_set */
233
+ s->swirq |= val;
234
+ do_irq = 1;
235
+ break;
236
+ case 0x1f4: /* swirq_clr */
237
+ s->swirq &= ~val;
238
+ do_irq = -1;
239
+ break;
240
+ default:
241
+ qemu_log_mask(LOG_UNIMP, "write to unknown register");
242
+ return;
76
+ return;
243
+ }
77
+ }
244
+
78
+
245
+ if (do_irq > 0) {
79
+ switch (ret) {
246
+ mphi_raise_irq(s);
80
+ case HV_ERROR:
247
+ } else if (do_irq < 0) {
81
+ error_report("Error: HV_ERROR");
248
+ mphi_lower_irq(s);
82
+ break;
83
+ case HV_BUSY:
84
+ error_report("Error: HV_BUSY");
85
+ break;
86
+ case HV_BAD_ARGUMENT:
87
+ error_report("Error: HV_BAD_ARGUMENT");
88
+ break;
89
+ case HV_NO_RESOURCES:
90
+ error_report("Error: HV_NO_RESOURCES");
91
+ break;
92
+ case HV_NO_DEVICE:
93
+ error_report("Error: HV_NO_DEVICE");
94
+ break;
95
+ case HV_UNSUPPORTED:
96
+ error_report("Error: HV_UNSUPPORTED");
97
+ break;
98
+ default:
99
+ error_report("Unknown Error");
249
+ }
100
+ }
101
+
102
+ abort();
250
+}
103
+}
251
+
104
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
252
+static const MemoryRegionOps mphi_mmio_ops = {
253
+ .read = mphi_reg_read,
254
+ .write = mphi_reg_write,
255
+ .impl.min_access_size = 4,
256
+ .impl.max_access_size = 4,
257
+ .endianness = DEVICE_LITTLE_ENDIAN,
258
+};
259
+
260
+static void mphi_reset(DeviceState *dev)
261
+{
262
+ BCM2835MphiState *s = BCM2835_MPHI(dev);
263
+
264
+ s->outdda = 0;
265
+ s->outddb = 0;
266
+ s->ctrl = 0;
267
+ s->intstat = 0;
268
+ s->swirq = 0;
269
+}
270
+
271
+static void mphi_realize(DeviceState *dev, Error **errp)
272
+{
273
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
274
+ BCM2835MphiState *s = BCM2835_MPHI(dev);
275
+
276
+ sysbus_init_irq(sbd, &s->irq);
277
+}
278
+
279
+static void mphi_init(Object *obj)
280
+{
281
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
282
+ BCM2835MphiState *s = BCM2835_MPHI(obj);
283
+
284
+ memory_region_init_io(&s->iomem, obj, &mphi_mmio_ops, s, "mphi", MPHI_MMIO_SIZE);
285
+ sysbus_init_mmio(sbd, &s->iomem);
286
+}
287
+
288
+const VMStateDescription vmstate_mphi_state = {
289
+ .name = "mphi",
290
+ .version_id = 1,
291
+ .minimum_version_id = 1,
292
+ .fields = (VMStateField[]) {
293
+ VMSTATE_UINT32(outdda, BCM2835MphiState),
294
+ VMSTATE_UINT32(outddb, BCM2835MphiState),
295
+ VMSTATE_UINT32(ctrl, BCM2835MphiState),
296
+ VMSTATE_UINT32(intstat, BCM2835MphiState),
297
+ VMSTATE_UINT32(swirq, BCM2835MphiState),
298
+ VMSTATE_END_OF_LIST()
299
+ }
300
+};
301
+
302
+static void mphi_class_init(ObjectClass *klass, void *data)
303
+{
304
+ DeviceClass *dc = DEVICE_CLASS(klass);
305
+
306
+ dc->realize = mphi_realize;
307
+ dc->reset = mphi_reset;
308
+ dc->vmsd = &vmstate_mphi_state;
309
+}
310
+
311
+static const TypeInfo bcm2835_mphi_type_info = {
312
+ .name = TYPE_BCM2835_MPHI,
313
+ .parent = TYPE_SYS_BUS_DEVICE,
314
+ .instance_size = sizeof(BCM2835MphiState),
315
+ .instance_init = mphi_init,
316
+ .class_init = mphi_class_init,
317
+};
318
+
319
+static void bcm2835_mphi_register_types(void)
320
+{
321
+ type_register_static(&bcm2835_mphi_type_info);
322
+}
323
+
324
+type_init(bcm2835_mphi_register_types)
325
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
326
index XXXXXXX..XXXXXXX 100644
105
index XXXXXXX..XXXXXXX 100644
327
--- a/hw/misc/Makefile.objs
106
--- a/target/i386/hvf/hvf.c
328
+++ b/hw/misc/Makefile.objs
107
+++ b/target/i386/hvf/hvf.c
329
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_OMAP) += omap_l4.o
108
@@ -XXX,XX +XXX,XX @@
330
common-obj-$(CONFIG_OMAP) += omap_sdrc.o
109
#include "qemu/error-report.h"
331
common-obj-$(CONFIG_OMAP) += omap_tap.o
110
332
common-obj-$(CONFIG_RASPI) += bcm2835_mbox.o
111
#include "sysemu/hvf.h"
333
+common-obj-$(CONFIG_RASPI) += bcm2835_mphi.o
112
+#include "sysemu/hvf_int.h"
334
common-obj-$(CONFIG_RASPI) += bcm2835_property.o
113
#include "sysemu/runstate.h"
335
common-obj-$(CONFIG_RASPI) += bcm2835_rng.o
114
#include "hvf-i386.h"
336
common-obj-$(CONFIG_RASPI) += bcm2835_thermal.o
115
#include "vmcs.h"
116
@@ -XXX,XX +XXX,XX @@
117
118
HVFState *hvf_state;
119
120
-static void assert_hvf_ok(hv_return_t ret)
121
-{
122
- if (ret == HV_SUCCESS) {
123
- return;
124
- }
125
-
126
- switch (ret) {
127
- case HV_ERROR:
128
- error_report("Error: HV_ERROR");
129
- break;
130
- case HV_BUSY:
131
- error_report("Error: HV_BUSY");
132
- break;
133
- case HV_BAD_ARGUMENT:
134
- error_report("Error: HV_BAD_ARGUMENT");
135
- break;
136
- case HV_NO_RESOURCES:
137
- error_report("Error: HV_NO_RESOURCES");
138
- break;
139
- case HV_NO_DEVICE:
140
- error_report("Error: HV_NO_DEVICE");
141
- break;
142
- case HV_UNSUPPORTED:
143
- error_report("Error: HV_UNSUPPORTED");
144
- break;
145
- default:
146
- error_report("Unknown Error");
147
- }
148
-
149
- abort();
150
-}
151
-
152
/* Memory slots */
153
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
154
{
155
diff --git a/MAINTAINERS b/MAINTAINERS
156
index XXXXXXX..XXXXXXX 100644
157
--- a/MAINTAINERS
158
+++ b/MAINTAINERS
159
@@ -XXX,XX +XXX,XX @@ M: Roman Bolshakov <r.bolshakov@yadro.com>
160
W: https://wiki.qemu.org/Features/HVF
161
S: Maintained
162
F: target/i386/hvf/
163
+
164
+HVF
165
+M: Cameron Esfahani <dirty@apple.com>
166
+M: Roman Bolshakov <r.bolshakov@yadro.com>
167
+W: https://wiki.qemu.org/Features/HVF
168
+S: Maintained
169
+F: accel/hvf/
170
F: include/sysemu/hvf.h
171
+F: include/sysemu/hvf_int.h
172
173
WHPX CPUs
174
M: Sunil Muthuswamy <sunilmut@microsoft.com>
175
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
176
new file mode 100644
177
index XXXXXXX..XXXXXXX
178
--- /dev/null
179
+++ b/accel/hvf/meson.build
180
@@ -XXX,XX +XXX,XX @@
181
+hvf_ss = ss.source_set()
182
+hvf_ss.add(files(
183
+ 'hvf-all.c',
184
+))
185
+
186
+specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
187
diff --git a/accel/meson.build b/accel/meson.build
188
index XXXXXXX..XXXXXXX 100644
189
--- a/accel/meson.build
190
+++ b/accel/meson.build
191
@@ -XXX,XX +XXX,XX @@ specific_ss.add(files('accel-common.c'))
192
softmmu_ss.add(files('accel-softmmu.c'))
193
user_ss.add(files('accel-user.c'))
194
195
+subdir('hvf')
196
subdir('qtest')
197
subdir('kvm')
198
subdir('tcg')
337
--
199
--
338
2.20.1
200
2.20.1
339
201
340
202
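For the MPHI model added above, the only behaviour guests depend on is the interrupt driven from the swirq_set/swirq_clr registers (offsets 0x1f0 and 0x1f4 in mphi_reg_write()). A qtest-style sketch of poking them, assuming libqtest's qtest_writel() helper; MPHI_BASE is a made-up placeholder, not a constant from the patch (the model is mapped at MPHI_OFFSET inside the peripheral region):

/* Illustration only: exercise the swirq registers of the bcm2835-mphi model. */
#include "qemu/osdep.h"
#include "libqtest.h"

#define MPHI_BASE 0x3f006000   /* placeholder guest-physical address */

static void mphi_swirq_pulse(QTestState *qts)
{
    qtest_writel(qts, MPHI_BASE + 0x1f0, 1);  /* swirq_set: model raises its IRQ */
    qtest_writel(qts, MPHI_BASE + 0x1f4, 1);  /* swirq_clr: model lowers it again */
}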
New patch

From: Alexander Graf <agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.

This patch moves the vCPU thread loop over.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-3-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 {target/i386 => accel}/hvf/hvf-accel-ops.h | 0
 {target/i386 => accel}/hvf/hvf-accel-ops.c | 0
 target/i386/hvf/x86hvf.c                   | 2 +-
 accel/hvf/meson.build                      | 1 +
 target/i386/hvf/meson.build                | 1 -
 5 files changed, 2 insertions(+), 2 deletions(-)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.h (100%)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.c (100%)

diff --git a/target/i386/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.h
rename to accel/hvf/hvf-accel-ops.h
diff --git a/target/i386/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.c
rename to accel/hvf/hvf-accel-ops.c
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -XXX,XX +XXX,XX @@
 #include <Hypervisor/hv.h>
 #include <Hypervisor/hv_vmx.h>
 
-#include "hvf-accel-ops.h"
+#include "accel/hvf/hvf-accel-ops.h"
 
 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr)
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/meson.build
+++ b/accel/hvf/meson.build
@@ -XXX,XX +XXX,XX @@
 hvf_ss = ss.source_set()
 hvf_ss.add(files(
   'hvf-all.c',
+  'hvf-accel-ops.c',
 ))
 
 specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
diff --git a/target/i386/hvf/meson.build b/target/i386/hvf/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/meson.build
+++ b/target/i386/hvf/meson.build
@@ -XXX,XX +XXX,XX @@
 i386_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
   'hvf.c',
-  'hvf-accel-ops.c',
   'x86.c',
   'x86_cpuid.c',
   'x86_decode.c',
--
2.20.1
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Add the dwc-hsotg (dwc2) USB host controller state definitions.
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
Mostly based on hw/usb/hcd-ehci.h.
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
5
7
6
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
8
This patch moves CPU and memory operations over. While at it, make sure
7
Message-id: 20200520235349.21215-4-pauldzim@gmail.com
9
the code is consumable on non-i386 systems.
10
11
Signed-off-by: Alexander Graf <agraf@csgraf.de>
12
Reviewed-by: Sergio Lopez <slp@redhat.com>
13
Message-id: 20210519202253.76782-4-agraf@csgraf.de
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
16
---
11
hw/usb/hcd-dwc2.h | 190 ++++++++++++++++++++++++++++++++++++++++++++++
17
include/sysemu/hvf_int.h | 4 +
12
1 file changed, 190 insertions(+)
18
target/i386/hvf/hvf-i386.h | 2 -
13
create mode 100644 hw/usb/hcd-dwc2.h
19
target/i386/hvf/x86hvf.h | 2 -
20
accel/hvf/hvf-accel-ops.c | 308 ++++++++++++++++++++++++++++++++++++-
21
target/i386/hvf/hvf.c | 302 ------------------------------------
22
5 files changed, 311 insertions(+), 307 deletions(-)
14
23
15
diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
24
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
16
new file mode 100644
25
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX
26
--- a/include/sysemu/hvf_int.h
18
--- /dev/null
27
+++ b/include/sysemu/hvf_int.h
19
+++ b/hw/usb/hcd-dwc2.h
20
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
21
+/*
29
22
+ * dwc-hsotg (dwc2) USB host controller state definitions
30
#include <Hypervisor/hv.h>
23
+ *
31
24
+ * Based on hw/usb/hcd-ehci.h
32
+void hvf_set_phys_mem(MemoryRegionSection *, bool);
25
+ *
33
void assert_hvf_ok(hv_return_t ret);
26
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
34
+hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
27
+ *
35
+int hvf_put_registers(CPUState *);
28
+ * This program is free software; you can redistribute it and/or modify
36
+int hvf_get_registers(CPUState *);
29
+ * it under the terms of the GNU General Public License as published by
37
30
+ * the Free Software Foundation; either version 2 of the License, or
38
#endif
31
+ * (at your option) any later version.
39
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
32
+ *
40
index XXXXXXX..XXXXXXX 100644
33
+ * This program is distributed in the hope that it will be useful,
41
--- a/target/i386/hvf/hvf-i386.h
34
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
42
+++ b/target/i386/hvf/hvf-i386.h
35
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
43
@@ -XXX,XX +XXX,XX @@ struct HVFState {
36
+ * GNU General Public License for more details.
44
};
37
+ */
45
extern HVFState *hvf_state;
38
+
46
39
+#ifndef HW_USB_DWC2_H
47
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
40
+#define HW_USB_DWC2_H
48
void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
41
+
49
-hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
42
+#include "qemu/timer.h"
50
43
+#include "hw/irq.h"
51
#ifdef NEED_CPU_H
44
+#include "hw/sysbus.h"
52
/* Functions exported to host specific mode */
45
+#include "hw/usb.h"
53
diff --git a/target/i386/hvf/x86hvf.h b/target/i386/hvf/x86hvf.h
46
+#include "sysemu/dma.h"
54
index XXXXXXX..XXXXXXX 100644
47
+
55
--- a/target/i386/hvf/x86hvf.h
48
+#define DWC2_MMIO_SIZE 0x11000
56
+++ b/target/i386/hvf/x86hvf.h
49
+
57
@@ -XXX,XX +XXX,XX @@
50
+#define DWC2_NB_CHAN 8 /* Number of host channels */
58
#include "x86_descr.h"
51
+#define DWC2_MAX_XFER_SIZE 65536 /* Max transfer size expected in HCTSIZ */
59
52
+
60
int hvf_process_events(CPUState *);
53
+typedef struct DWC2Packet DWC2Packet;
61
-int hvf_put_registers(CPUState *);
54
+typedef struct DWC2State DWC2State;
62
-int hvf_get_registers(CPUState *);
55
+typedef struct DWC2Class DWC2Class;
63
bool hvf_inject_interrupts(CPUState *);
56
+
64
void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
57
+enum async_state {
65
SegmentCache *qseg, bool is_tr);
58
+ DWC2_ASYNC_NONE = 0,
66
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
59
+ DWC2_ASYNC_INITIALIZED,
67
index XXXXXXX..XXXXXXX 100644
60
+ DWC2_ASYNC_INFLIGHT,
68
--- a/accel/hvf/hvf-accel-ops.c
61
+ DWC2_ASYNC_FINISHED,
69
+++ b/accel/hvf/hvf-accel-ops.c
70
@@ -XXX,XX +XXX,XX @@
71
#include "qemu/osdep.h"
72
#include "qemu/error-report.h"
73
#include "qemu/main-loop.h"
74
+#include "exec/address-spaces.h"
75
+#include "exec/exec-all.h"
76
+#include "sysemu/cpus.h"
77
#include "sysemu/hvf.h"
78
+#include "sysemu/hvf_int.h"
79
#include "sysemu/runstate.h"
80
-#include "target/i386/cpu.h"
81
#include "qemu/guest-random.h"
82
83
#include "hvf-accel-ops.h"
84
85
+HVFState *hvf_state;
86
+
87
+/* Memory slots */
88
+
89
+hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
90
+{
91
+ hvf_slot *slot;
92
+ int x;
93
+ for (x = 0; x < hvf_state->num_slots; ++x) {
94
+ slot = &hvf_state->slots[x];
95
+ if (slot->size && start < (slot->start + slot->size) &&
96
+ (start + size) > slot->start) {
97
+ return slot;
98
+ }
99
+ }
100
+ return NULL;
101
+}
102
+
103
+struct mac_slot {
104
+ int present;
105
+ uint64_t size;
106
+ uint64_t gpa_start;
107
+ uint64_t gva;
62
+};
108
+};
63
+
109
+
64
+struct DWC2Packet {
110
+struct mac_slot mac_slots[32];
65
+ USBPacket packet;
111
+
66
+ uint32_t devadr;
112
+static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
67
+ uint32_t epnum;
113
+{
68
+ uint32_t epdir;
114
+ struct mac_slot *macslot;
69
+ uint32_t mps;
115
+ hv_return_t ret;
70
+ uint32_t pid;
116
+
71
+ uint32_t index;
117
+ macslot = &mac_slots[slot->slot_id];
72
+ uint32_t pcnt;
118
+
73
+ uint32_t len;
119
+ if (macslot->present) {
74
+ int32_t async;
120
+ if (macslot->size != slot->size) {
75
+ bool small;
121
+ macslot->present = 0;
76
+ bool needs_service;
122
+ ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
123
+ assert_hvf_ok(ret);
124
+ }
125
+ }
126
+
127
+ if (!slot->size) {
128
+ return 0;
129
+ }
130
+
131
+ macslot->present = 1;
132
+ macslot->gpa_start = slot->start;
133
+ macslot->size = slot->size;
134
+ ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
135
+ assert_hvf_ok(ret);
136
+ return 0;
137
+}
138
+
139
+void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
140
+{
141
+ hvf_slot *mem;
142
+ MemoryRegion *area = section->mr;
143
+ bool writeable = !area->readonly && !area->rom_device;
144
+ hv_memory_flags_t flags;
145
+
146
+ if (!memory_region_is_ram(area)) {
147
+ if (writeable) {
148
+ return;
149
+ } else if (!memory_region_is_romd(area)) {
150
+ /*
151
+ * If the memory device is not in romd_mode, then we actually want
152
+ * to remove the hvf memory slot so all accesses will trap.
153
+ */
154
+ add = false;
155
+ }
156
+ }
157
+
158
+ mem = hvf_find_overlap_slot(
159
+ section->offset_within_address_space,
160
+ int128_get64(section->size));
161
+
162
+ if (mem && add) {
163
+ if (mem->size == int128_get64(section->size) &&
164
+ mem->start == section->offset_within_address_space &&
165
+ mem->mem == (memory_region_get_ram_ptr(area) +
166
+ section->offset_within_region)) {
167
+ return; /* Same region was attempted to register, go away. */
168
+ }
169
+ }
170
+
171
+ /* Region needs to be reset. set the size to 0 and remap it. */
172
+ if (mem) {
173
+ mem->size = 0;
174
+ if (do_hvf_set_memory(mem, 0)) {
175
+ error_report("Failed to reset overlapping slot");
176
+ abort();
177
+ }
178
+ }
179
+
180
+ if (!add) {
181
+ return;
182
+ }
183
+
184
+ if (area->readonly ||
185
+ (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
186
+ flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
187
+ } else {
188
+ flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
189
+ }
190
+
191
+ /* Now make a new slot. */
192
+ int x;
193
+
194
+ for (x = 0; x < hvf_state->num_slots; ++x) {
195
+ mem = &hvf_state->slots[x];
196
+ if (!mem->size) {
197
+ break;
198
+ }
199
+ }
200
+
201
+ if (x == hvf_state->num_slots) {
202
+ error_report("No free slots");
203
+ abort();
204
+ }
205
+
206
+ mem->size = int128_get64(section->size);
207
+ mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
208
+ mem->start = section->offset_within_address_space;
209
+ mem->region = area;
210
+
211
+ if (do_hvf_set_memory(mem, flags)) {
212
+ error_report("Error registering new memory slot");
213
+ abort();
214
+ }
215
+}
216
+
217
+static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
218
+{
219
+ if (!cpu->vcpu_dirty) {
220
+ hvf_get_registers(cpu);
221
+ cpu->vcpu_dirty = true;
222
+ }
223
+}
224
+
225
+void hvf_cpu_synchronize_state(CPUState *cpu)
226
+{
227
+ if (!cpu->vcpu_dirty) {
228
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
229
+ }
230
+}
231
+
232
+static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
233
+ run_on_cpu_data arg)
234
+{
235
+ hvf_put_registers(cpu);
236
+ cpu->vcpu_dirty = false;
237
+}
238
+
239
+void hvf_cpu_synchronize_post_reset(CPUState *cpu)
240
+{
241
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
242
+}
243
+
244
+static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
245
+ run_on_cpu_data arg)
246
+{
247
+ hvf_put_registers(cpu);
248
+ cpu->vcpu_dirty = false;
249
+}
250
+
251
+void hvf_cpu_synchronize_post_init(CPUState *cpu)
252
+{
253
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
254
+}
255
+
256
+static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
257
+ run_on_cpu_data arg)
258
+{
259
+ cpu->vcpu_dirty = true;
260
+}
261
+
262
+void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
263
+{
264
+ run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
265
+}
266
+
267
+static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
268
+{
269
+ hvf_slot *slot;
270
+
271
+ slot = hvf_find_overlap_slot(
272
+ section->offset_within_address_space,
273
+ int128_get64(section->size));
274
+
275
+ /* protect region against writes; begin tracking it */
276
+ if (on) {
277
+ slot->flags |= HVF_SLOT_LOG;
278
+ hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
279
+ HV_MEMORY_READ);
280
+ /* stop tracking region*/
281
+ } else {
282
+ slot->flags &= ~HVF_SLOT_LOG;
283
+ hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
284
+ HV_MEMORY_READ | HV_MEMORY_WRITE);
285
+ }
286
+}
287
+
288
+static void hvf_log_start(MemoryListener *listener,
289
+ MemoryRegionSection *section, int old, int new)
290
+{
291
+ if (old != 0) {
292
+ return;
293
+ }
294
+
295
+ hvf_set_dirty_tracking(section, 1);
296
+}
297
+
298
+static void hvf_log_stop(MemoryListener *listener,
299
+ MemoryRegionSection *section, int old, int new)
300
+{
301
+ if (new != 0) {
302
+ return;
303
+ }
304
+
305
+ hvf_set_dirty_tracking(section, 0);
306
+}
307
+
308
+static void hvf_log_sync(MemoryListener *listener,
309
+ MemoryRegionSection *section)
310
+{
311
+ /*
312
+ * sync of dirty pages is handled elsewhere; just make sure we keep
313
+ * tracking the region.
314
+ */
315
+ hvf_set_dirty_tracking(section, 1);
316
+}
317
+
318
+static void hvf_region_add(MemoryListener *listener,
319
+ MemoryRegionSection *section)
320
+{
321
+ hvf_set_phys_mem(section, true);
322
+}
323
+
324
+static void hvf_region_del(MemoryListener *listener,
325
+ MemoryRegionSection *section)
326
+{
327
+ hvf_set_phys_mem(section, false);
328
+}
329
+
330
+static MemoryListener hvf_memory_listener = {
331
+ .priority = 10,
332
+ .region_add = hvf_region_add,
333
+ .region_del = hvf_region_del,
334
+ .log_start = hvf_log_start,
335
+ .log_stop = hvf_log_stop,
336
+ .log_sync = hvf_log_sync,
77
+};
337
+};
78
+
338
+
79
+struct DWC2State {
339
+static void dummy_signal(int sig)
80
+ /*< private >*/
340
+{
81
+ SysBusDevice parent_obj;
341
+}
82
+
342
+
83
+ /*< public >*/
343
+bool hvf_allowed;
84
+ USBBus bus;
344
+
85
+ qemu_irq irq;
345
+static int hvf_accel_init(MachineState *ms)
86
+ MemoryRegion *dma_mr;
346
+{
87
+ AddressSpace dma_as;
347
+ int x;
88
+ MemoryRegion container;
348
+ hv_return_t ret;
89
+ MemoryRegion hsotg;
349
+ HVFState *s;
90
+ MemoryRegion fifos;
350
+
91
+
351
+ ret = hv_vm_create(HV_VM_DEFAULT);
92
+ union {
352
+ assert_hvf_ok(ret);
93
+#define DWC2_GLBREG_SIZE 0x70
353
+
94
+ uint32_t glbreg[DWC2_GLBREG_SIZE / sizeof(uint32_t)];
354
+ s = g_new0(HVFState, 1);
95
+ struct {
355
+
96
+ uint32_t gotgctl; /* 00 */
356
+ s->num_slots = 32;
97
+ uint32_t gotgint; /* 04 */
357
+ for (x = 0; x < s->num_slots; ++x) {
98
+ uint32_t gahbcfg; /* 08 */
358
+ s->slots[x].size = 0;
99
+ uint32_t gusbcfg; /* 0c */
359
+ s->slots[x].slot_id = x;
100
+ uint32_t grstctl; /* 10 */
360
+ }
101
+ uint32_t gintsts; /* 14 */
361
+
102
+ uint32_t gintmsk; /* 18 */
362
+ hvf_state = s;
103
+ uint32_t grxstsr; /* 1c */
363
+ memory_listener_register(&hvf_memory_listener, &address_space_memory);
104
+ uint32_t grxstsp; /* 20 */
364
+ return 0;
105
+ uint32_t grxfsiz; /* 24 */
365
+}
106
+ uint32_t gnptxfsiz; /* 28 */
366
+
107
+ uint32_t gnptxsts; /* 2c */
367
+static void hvf_accel_class_init(ObjectClass *oc, void *data)
108
+ uint32_t gi2cctl; /* 30 */
368
+{
109
+ uint32_t gpvndctl; /* 34 */
369
+ AccelClass *ac = ACCEL_CLASS(oc);
110
+ uint32_t ggpio; /* 38 */
370
+ ac->name = "HVF";
111
+ uint32_t guid; /* 3c */
371
+ ac->init_machine = hvf_accel_init;
112
+ uint32_t gsnpsid; /* 40 */
372
+ ac->allowed = &hvf_allowed;
113
+ uint32_t ghwcfg1; /* 44 */
373
+}
114
+ uint32_t ghwcfg2; /* 48 */
374
+
115
+ uint32_t ghwcfg3; /* 4c */
375
+static const TypeInfo hvf_accel_type = {
116
+ uint32_t ghwcfg4; /* 50 */
376
+ .name = TYPE_HVF_ACCEL,
117
+ uint32_t glpmcfg; /* 54 */
377
+ .parent = TYPE_ACCEL,
118
+ uint32_t gpwrdn; /* 58 */
378
+ .class_init = hvf_accel_class_init,
119
+ uint32_t gdfifocfg; /* 5c */
120
+ uint32_t gadpctl; /* 60 */
121
+ uint32_t grefclk; /* 64 */
122
+ uint32_t gintmsk2; /* 68 */
123
+ uint32_t gintsts2; /* 6c */
124
+ };
125
+ };
126
+
127
+ union {
128
+#define DWC2_FSZREG_SIZE 0x04
129
+ uint32_t fszreg[DWC2_FSZREG_SIZE / sizeof(uint32_t)];
130
+ struct {
131
+ uint32_t hptxfsiz; /* 100 */
132
+ };
133
+ };
134
+
135
+ union {
136
+#define DWC2_HREG0_SIZE 0x44
137
+ uint32_t hreg0[DWC2_HREG0_SIZE / sizeof(uint32_t)];
138
+ struct {
139
+ uint32_t hcfg; /* 400 */
140
+ uint32_t hfir; /* 404 */
141
+ uint32_t hfnum; /* 408 */
142
+ uint32_t rsvd0; /* 40c */
143
+ uint32_t hptxsts; /* 410 */
144
+ uint32_t haint; /* 414 */
145
+ uint32_t haintmsk; /* 418 */
146
+ uint32_t hflbaddr; /* 41c */
147
+ uint32_t rsvd1[8]; /* 420-43c */
148
+ uint32_t hprt0; /* 440 */
149
+ };
150
+ };
151
+
152
+#define DWC2_HREG1_SIZE (0x20 * DWC2_NB_CHAN)
153
+ uint32_t hreg1[DWC2_HREG1_SIZE / sizeof(uint32_t)];
154
+
155
+#define hcchar(_ch) hreg1[((_ch) << 3) + 0] /* 500, 520, ... */
156
+#define hcsplt(_ch) hreg1[((_ch) << 3) + 1] /* 504, 524, ... */
157
+#define hcint(_ch) hreg1[((_ch) << 3) + 2] /* 508, 528, ... */
158
+#define hcintmsk(_ch) hreg1[((_ch) << 3) + 3] /* 50c, 52c, ... */
159
+#define hctsiz(_ch) hreg1[((_ch) << 3) + 4] /* 510, 530, ... */
160
+#define hcdma(_ch) hreg1[((_ch) << 3) + 5] /* 514, 534, ... */
161
+#define hcdmab(_ch) hreg1[((_ch) << 3) + 7] /* 51c, 53c, ... */
162
+
163
+ union {
164
+#define DWC2_PCGREG_SIZE 0x08
165
+ uint32_t pcgreg[DWC2_PCGREG_SIZE / sizeof(uint32_t)];
166
+ struct {
167
+ uint32_t pcgctl; /* e00 */
168
+ uint32_t pcgcctl1; /* e04 */
169
+ };
170
+ };
171
+
172
+ /* TODO - implement FIFO registers for slave mode */
173
+#define DWC2_HFIFO_SIZE (0x1000 * DWC2_NB_CHAN)
174
+
175
+ /*
176
+ * Internal state
177
+ */
178
+ QEMUTimer *eof_timer;
179
+ QEMUTimer *frame_timer;
180
+ QEMUBH *async_bh;
181
+ int64_t sof_time;
182
+ int64_t usb_frame_time;
183
+ int64_t usb_bit_time;
184
+ uint32_t usb_version;
185
+ uint16_t frame_number;
186
+ uint16_t fi;
187
+ uint16_t next_chan;
188
+ bool working;
189
+ USBPort uport;
190
+ DWC2Packet packet[DWC2_NB_CHAN]; /* one packet per chan */
191
+ uint8_t usb_buf[DWC2_NB_CHAN][DWC2_MAX_XFER_SIZE]; /* one buffer per chan */
192
+};
379
+};
193
+
380
+
194
+struct DWC2Class {
381
+static void hvf_type_init(void)
195
+ /*< private >*/
382
+{
196
+ SysBusDeviceClass parent_class;
383
+ type_register_static(&hvf_accel_type);
197
+ ResettablePhases parent_phases;
384
+}
198
+
385
+
199
+ /*< public >*/
386
+type_init(hvf_type_init);
200
+};
387
+
201
+
388
/*
202
+#define TYPE_DWC2_USB "dwc2-usb"
389
* The HVF-specific vCPU thread function. This one should only run when the host
203
+#define DWC2_USB(obj) \
390
* CPU supports the VMX "unrestricted guest" feature.
204
+ OBJECT_CHECK(DWC2State, (obj), TYPE_DWC2_USB)
391
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
205
+#define DWC2_CLASS(klass) \
392
index XXXXXXX..XXXXXXX 100644
206
+ OBJECT_CLASS_CHECK(DWC2Class, (klass), TYPE_DWC2_USB)
393
--- a/target/i386/hvf/hvf.c
207
+#define DWC2_GET_CLASS(obj) \
394
+++ b/target/i386/hvf/hvf.c
208
+ OBJECT_GET_CLASS(DWC2Class, (obj), TYPE_DWC2_USB)
395
@@ -XXX,XX +XXX,XX @@
209
+
396
210
+#endif
397
#include "hvf-accel-ops.h"
398
399
-HVFState *hvf_state;
400
-
401
-/* Memory slots */
402
-hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
403
-{
404
- hvf_slot *slot;
405
- int x;
406
- for (x = 0; x < hvf_state->num_slots; ++x) {
407
- slot = &hvf_state->slots[x];
408
- if (slot->size && start < (slot->start + slot->size) &&
409
- (start + size) > slot->start) {
410
- return slot;
411
- }
412
- }
413
- return NULL;
414
-}
415
-
416
-struct mac_slot {
417
- int present;
418
- uint64_t size;
419
- uint64_t gpa_start;
420
- uint64_t gva;
421
-};
422
-
423
-struct mac_slot mac_slots[32];
424
-
425
-static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
426
-{
427
- struct mac_slot *macslot;
428
- hv_return_t ret;
429
-
430
- macslot = &mac_slots[slot->slot_id];
431
-
432
- if (macslot->present) {
433
- if (macslot->size != slot->size) {
434
- macslot->present = 0;
435
- ret = hv_vm_unmap(macslot->gpa_start, macslot->size);
436
- assert_hvf_ok(ret);
437
- }
438
- }
439
-
440
- if (!slot->size) {
441
- return 0;
442
- }
443
-
444
- macslot->present = 1;
445
- macslot->gpa_start = slot->start;
446
- macslot->size = slot->size;
447
- ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
448
- assert_hvf_ok(ret);
449
- return 0;
450
-}
451
-
452
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
453
-{
454
- hvf_slot *mem;
455
- MemoryRegion *area = section->mr;
456
- bool writeable = !area->readonly && !area->rom_device;
457
- hv_memory_flags_t flags;
458
-
459
- if (!memory_region_is_ram(area)) {
460
- if (writeable) {
461
- return;
462
- } else if (!memory_region_is_romd(area)) {
463
- /*
464
- * If the memory device is not in romd_mode, then we actually want
465
- * to remove the hvf memory slot so all accesses will trap.
466
- */
467
- add = false;
468
- }
469
- }
470
-
471
- mem = hvf_find_overlap_slot(
472
- section->offset_within_address_space,
473
- int128_get64(section->size));
474
-
475
- if (mem && add) {
476
- if (mem->size == int128_get64(section->size) &&
477
- mem->start == section->offset_within_address_space &&
478
- mem->mem == (memory_region_get_ram_ptr(area) +
479
- section->offset_within_region)) {
480
- return; /* Same region was attempted to register, go away. */
481
- }
482
- }
483
-
484
- /* Region needs to be reset. set the size to 0 and remap it. */
485
- if (mem) {
486
- mem->size = 0;
487
- if (do_hvf_set_memory(mem, 0)) {
488
- error_report("Failed to reset overlapping slot");
489
- abort();
490
- }
491
- }
492
-
493
- if (!add) {
494
- return;
495
- }
496
-
497
- if (area->readonly ||
498
- (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
499
- flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
500
- } else {
501
- flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
502
- }
503
-
504
- /* Now make a new slot. */
505
- int x;
506
-
507
- for (x = 0; x < hvf_state->num_slots; ++x) {
508
- mem = &hvf_state->slots[x];
509
- if (!mem->size) {
510
- break;
511
- }
512
- }
513
-
514
- if (x == hvf_state->num_slots) {
515
- error_report("No free slots");
516
- abort();
517
- }
518
-
519
- mem->size = int128_get64(section->size);
520
- mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region;
521
- mem->start = section->offset_within_address_space;
522
- mem->region = area;
523
-
524
- if (do_hvf_set_memory(mem, flags)) {
525
- error_report("Error registering new memory slot");
526
- abort();
527
- }
528
-}
529
-
530
void vmx_update_tpr(CPUState *cpu)
531
{
532
/* TODO: need integrate APIC handling */
533
@@ -XXX,XX +XXX,XX @@ void hvf_handle_io(CPUArchState *env, uint16_t port, void *buffer,
534
}
535
}
536
537
-static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
538
-{
539
- if (!cpu->vcpu_dirty) {
540
- hvf_get_registers(cpu);
541
- cpu->vcpu_dirty = true;
542
- }
543
-}
544
-
545
-void hvf_cpu_synchronize_state(CPUState *cpu)
546
-{
547
- if (!cpu->vcpu_dirty) {
548
- run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
549
- }
550
-}
551
-
552
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
553
- run_on_cpu_data arg)
554
-{
555
- hvf_put_registers(cpu);
556
- cpu->vcpu_dirty = false;
557
-}
558
-
559
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
560
-{
561
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
562
-}
563
-
564
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
565
- run_on_cpu_data arg)
566
-{
567
- hvf_put_registers(cpu);
568
- cpu->vcpu_dirty = false;
569
-}
570
-
571
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
572
-{
573
- run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
574
-}
575
-
576
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
577
- run_on_cpu_data arg)
578
-{
579
- cpu->vcpu_dirty = true;
580
-}
581
-
582
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
583
-{
584
- run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
585
-}
586
-
587
static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
588
{
589
int read, write;
590
@@ -XXX,XX +XXX,XX @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
591
return false;
592
}
593
594
-static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
595
-{
596
- hvf_slot *slot;
597
-
598
- slot = hvf_find_overlap_slot(
599
- section->offset_within_address_space,
600
- int128_get64(section->size));
601
-
602
- /* protect region against writes; begin tracking it */
603
- if (on) {
604
- slot->flags |= HVF_SLOT_LOG;
605
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
606
- HV_MEMORY_READ);
607
- /* stop tracking region*/
608
- } else {
609
- slot->flags &= ~HVF_SLOT_LOG;
610
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
611
- HV_MEMORY_READ | HV_MEMORY_WRITE);
612
- }
613
-}
614
-
615
-static void hvf_log_start(MemoryListener *listener,
616
- MemoryRegionSection *section, int old, int new)
617
-{
618
- if (old != 0) {
619
- return;
620
- }
621
-
622
- hvf_set_dirty_tracking(section, 1);
623
-}
624
-
625
-static void hvf_log_stop(MemoryListener *listener,
626
- MemoryRegionSection *section, int old, int new)
627
-{
628
- if (new != 0) {
629
- return;
630
- }
631
-
632
- hvf_set_dirty_tracking(section, 0);
633
-}
634
-
635
-static void hvf_log_sync(MemoryListener *listener,
636
- MemoryRegionSection *section)
637
-{
638
- /*
639
- * sync of dirty pages is handled elsewhere; just make sure we keep
640
- * tracking the region.
641
- */
642
- hvf_set_dirty_tracking(section, 1);
643
-}
644
-
645
-static void hvf_region_add(MemoryListener *listener,
646
- MemoryRegionSection *section)
647
-{
648
- hvf_set_phys_mem(section, true);
649
-}
650
-
651
-static void hvf_region_del(MemoryListener *listener,
652
- MemoryRegionSection *section)
653
-{
654
- hvf_set_phys_mem(section, false);
655
-}
656
-
657
-static MemoryListener hvf_memory_listener = {
658
- .priority = 10,
659
- .region_add = hvf_region_add,
660
- .region_del = hvf_region_del,
661
- .log_start = hvf_log_start,
662
- .log_stop = hvf_log_stop,
663
- .log_sync = hvf_log_sync,
664
-};
665
-
666
void hvf_vcpu_destroy(CPUState *cpu)
667
{
668
X86CPU *x86_cpu = X86_CPU(cpu);
669
@@ -XXX,XX +XXX,XX @@ void hvf_vcpu_destroy(CPUState *cpu)
670
assert_hvf_ok(ret);
671
}
672
673
-static void dummy_signal(int sig)
674
-{
675
-}
676
-
677
static void init_tsc_freq(CPUX86State *env)
678
{
679
size_t length;
680
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
681
682
return ret;
683
}
684
-
685
-bool hvf_allowed;
686
-
687
-static int hvf_accel_init(MachineState *ms)
688
-{
689
- int x;
690
- hv_return_t ret;
691
- HVFState *s;
692
-
693
- ret = hv_vm_create(HV_VM_DEFAULT);
694
- assert_hvf_ok(ret);
695
-
696
- s = g_new0(HVFState, 1);
697
-
698
- s->num_slots = 32;
699
- for (x = 0; x < s->num_slots; ++x) {
700
- s->slots[x].size = 0;
701
- s->slots[x].slot_id = x;
702
- }
703
-
704
- hvf_state = s;
705
- memory_listener_register(&hvf_memory_listener, &address_space_memory);
706
- return 0;
707
-}
708
-
709
-static void hvf_accel_class_init(ObjectClass *oc, void *data)
710
-{
711
- AccelClass *ac = ACCEL_CLASS(oc);
712
- ac->name = "HVF";
713
- ac->init_machine = hvf_accel_init;
714
- ac->allowed = &hvf_allowed;
715
-}
716
-
717
-static const TypeInfo hvf_accel_type = {
718
- .name = TYPE_HVF_ACCEL,
719
- .parent = TYPE_ACCEL,
720
- .class_init = hvf_accel_class_init,
721
-};
722
-
723
-static void hvf_type_init(void)
724
-{
725
- type_register_static(&hvf_accel_type);
726
-}
727
-
728
-type_init(hvf_type_init);
211
--
729
--
212
2.20.1
730
2.20.1
213
731
214
732
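The memory-slot code being moved above hinges on a standard half-open interval test in hvf_find_overlap_slot(). Written out on its own as a sketch (plain C, names are mine, not the accelerator's; the real function additionally skips slots whose size is zero):

#include <stdbool.h>
#include <stdint.h>

/* Two half-open ranges [start, start + size) overlap iff each starts before the other ends. */
static bool ranges_overlap_sketch(uint64_t a_start, uint64_t a_size,
                                  uint64_t b_start, uint64_t b_size)
{
    return a_start < b_start + b_size && b_start < a_start + a_size;
}

For example, a 4 KiB slot at 0x1000 overlaps a lookup for [0x800, 0x1800) but not one for [0x3000, 0x4000); that is what lets hvf_set_phys_mem() reuse or reset an existing slot instead of always allocating a new one.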
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Add a check for functional dwc-hsotg (dwc2) USB host emulation to
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
the Raspi 2 acceptance test
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
5
7
6
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
8
This patch moves a few internal struct and constant defines over.
7
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
9
8
Message-id: 20200520235349.21215-8-pauldzim@gmail.com
10
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-5-agraf@csgraf.de
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
15
---
11
tests/acceptance/boot_linux_console.py | 9 +++++++--
16
include/sysemu/hvf_int.h | 30 ++++++++++++++++++++++++++++++
12
1 file changed, 7 insertions(+), 2 deletions(-)
17
target/i386/hvf/hvf-i386.h | 31 +------------------------------
18
2 files changed, 31 insertions(+), 30 deletions(-)
13
19
14
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
20
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/acceptance/boot_linux_console.py
22
--- a/include/sysemu/hvf_int.h
17
+++ b/tests/acceptance/boot_linux_console.py
23
+++ b/include/sysemu/hvf_int.h
18
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):
24
@@ -XXX,XX +XXX,XX @@
19
25
20
self.vm.set_console()
26
#include <Hypervisor/hv.h>
21
kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
27
22
- serial_kernel_cmdline[uart_id])
28
+/* hvf_slot flags */
23
+ serial_kernel_cmdline[uart_id] +
29
+#define HVF_SLOT_LOG (1 << 0)
24
+ ' root=/dev/mmcblk0p2 rootwait ' +
30
+
25
+ 'dwc_otg.fiq_fsm_enable=0')
31
+typedef struct hvf_slot {
26
self.vm.add_args('-kernel', kernel_path,
32
+ uint64_t start;
27
'-dtb', dtb_path,
33
+ uint64_t size;
28
- '-append', kernel_command_line)
34
+ uint8_t *mem;
29
+ '-append', kernel_command_line,
35
+ int slot_id;
30
+ '-device', 'usb-kbd')
36
+ uint32_t flags;
31
self.vm.launch()
37
+ MemoryRegion *region;
32
console_pattern = 'Kernel command line: %s' % kernel_command_line
38
+} hvf_slot;
33
self.wait_for_console_pattern(console_pattern)
39
+
34
+ console_pattern = 'Product: QEMU USB Keyboard'
40
+typedef struct hvf_vcpu_caps {
35
+ self.wait_for_console_pattern(console_pattern)
41
+ uint64_t vmx_cap_pinbased;
36
42
+ uint64_t vmx_cap_procbased;
37
def test_arm_raspi2_uart0(self):
43
+ uint64_t vmx_cap_procbased2;
38
"""
44
+ uint64_t vmx_cap_entry;
45
+ uint64_t vmx_cap_exit;
46
+ uint64_t vmx_cap_preemption_timer;
47
+} hvf_vcpu_caps;
48
+
49
+struct HVFState {
50
+ AccelState parent;
51
+ hvf_slot slots[32];
52
+ int num_slots;
53
+
54
+ hvf_vcpu_caps *hvf_caps;
55
+};
56
+extern HVFState *hvf_state;
57
+
58
void hvf_set_phys_mem(MemoryRegionSection *, bool);
59
void assert_hvf_ok(hv_return_t ret);
60
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
61
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/i386/hvf/hvf-i386.h
64
+++ b/target/i386/hvf/hvf-i386.h
65
@@ -XXX,XX +XXX,XX @@
66
67
#include "qemu/accel.h"
68
#include "sysemu/hvf.h"
69
+#include "sysemu/hvf_int.h"
70
#include "cpu.h"
71
#include "x86.h"
72
73
-/* hvf_slot flags */
74
-#define HVF_SLOT_LOG (1 << 0)
75
-
76
-typedef struct hvf_slot {
77
- uint64_t start;
78
- uint64_t size;
79
- uint8_t *mem;
80
- int slot_id;
81
- uint32_t flags;
82
- MemoryRegion *region;
83
-} hvf_slot;
84
-
85
-typedef struct hvf_vcpu_caps {
86
- uint64_t vmx_cap_pinbased;
87
- uint64_t vmx_cap_procbased;
88
- uint64_t vmx_cap_procbased2;
89
- uint64_t vmx_cap_entry;
90
- uint64_t vmx_cap_exit;
91
- uint64_t vmx_cap_preemption_timer;
92
-} hvf_vcpu_caps;
93
-
94
-struct HVFState {
95
- AccelState parent;
96
- hvf_slot slots[32];
97
- int num_slots;
98
-
99
- hvf_vcpu_caps *hvf_caps;
100
-};
101
-extern HVFState *hvf_state;
102
-
103
void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
104
105
#ifdef NEED_CPU_H
39
--
106
--
40
2.20.1
107
2.20.1
41
108
42
109
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
The dwc-hsotg (dwc2) USB host depends on a short packet to
3
The hvf_set_phys_mem() function is only called within the same file.
4
indicate the end of an IN transfer. The usb-storage driver
4
Make it static.
5
currently doesn't provide this, so fix it.
6
5
7
I have tested this change rather extensively using a PC
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
emulation with xhci, ehci, and uhci controllers, and have
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
9
not observed any regressions.
8
Message-id: 20210519202253.76782-6-agraf@csgraf.de
10
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
12
Message-id: 20200520235349.21215-6-pauldzim@gmail.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
hw/usb/dev-storage.c | 15 ++++++++++++++-
12
include/sysemu/hvf_int.h | 1 -
16
1 file changed, 14 insertions(+), 1 deletion(-)
13
accel/hvf/hvf-accel-ops.c | 2 +-
14
2 files changed, 1 insertion(+), 2 deletions(-)
17
15
18
diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
16
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/usb/dev-storage.c
18
--- a/include/sysemu/hvf_int.h
21
+++ b/hw/usb/dev-storage.c
19
+++ b/include/sysemu/hvf_int.h
22
@@ -XXX,XX +XXX,XX @@ static void usb_msd_copy_data(MSDState *s, USBPacket *p)
20
@@ -XXX,XX +XXX,XX @@ struct HVFState {
23
usb_packet_copy(p, scsi_req_get_buf(s->req) + s->scsi_off, len);
21
};
24
s->scsi_len -= len;
22
extern HVFState *hvf_state;
25
s->scsi_off += len;
23
26
+ if (len > s->data_len) {
24
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
27
+ len = s->data_len;
25
void assert_hvf_ok(hv_return_t ret);
28
+ }
26
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
29
s->data_len -= len;
27
int hvf_put_registers(CPUState *);
30
if (s->scsi_len == 0 || s->data_len == 0) {
28
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
31
scsi_req_continue(s->req);
29
index XXXXXXX..XXXXXXX 100644
32
@@ -XXX,XX +XXX,XX @@ static void usb_msd_command_complete(SCSIRequest *req, uint32_t status, size_t r
30
--- a/accel/hvf/hvf-accel-ops.c
33
if (s->data_len) {
31
+++ b/accel/hvf/hvf-accel-ops.c
34
int len = (p->iov.size - p->actual_length);
32
@@ -XXX,XX +XXX,XX @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
35
usb_packet_skip(p, len);
33
return 0;
36
+ if (len > s->data_len) {
34
}
37
+ len = s->data_len;
35
38
+ }
36
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
39
s->data_len -= len;
37
+static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
40
}
38
{
41
if (s->data_len == 0) {
39
hvf_slot *mem;
42
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
40
MemoryRegion *area = section->mr;
43
int len = p->iov.size - p->actual_length;
44
if (len) {
45
usb_packet_skip(p, len);
46
+ if (len > s->data_len) {
47
+ len = s->data_len;
48
+ }
49
s->data_len -= len;
50
if (s->data_len == 0) {
51
s->mode = USB_MSDM_CSW;
52
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
53
int len = p->iov.size - p->actual_length;
54
if (len) {
55
usb_packet_skip(p, len);
56
+ if (len > s->data_len) {
57
+ len = s->data_len;
58
+ }
59
s->data_len -= len;
60
if (s->data_len == 0) {
61
s->mode = USB_MSDM_CSW;
62
}
63
}
64
}
65
- if (p->actual_length < p->iov.size) {
66
+ if (p->actual_length < p->iov.size && (p->short_not_ok ||
67
+ s->scsi_len >= p->ep->max_packet_size)) {
68
DPRINTF("Deferring packet %p [wait data-in]\n", p);
69
s->packet = p;
70
p->status = USB_RET_ASYNC;
71
--
41
--
72
2.20.1
42
2.20.1
73
43
74
44
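The dev-storage.c hunks above amount to two rules: never let a packet consume more than the remaining CBW data length, and complete a partially filled packet (so the host controller sees the short packet that ends the IN transfer) unless the endpoint forbids short packets or the SCSI layer can still fill a whole max-size packet. A stand-alone sketch of those two decisions, with illustrative names rather than the driver's:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Clamp a copy/skip length to what is left of the transfer. */
static size_t msd_clamp_len(size_t len, uint32_t data_len)
{
    return len > data_len ? data_len : len;
}

/* Defer (keep filling) the packet instead of completing it short? */
static bool msd_defer_short_packet(bool short_not_ok, size_t scsi_len,
                                   size_t max_packet_size)
{
    return short_not_ok || scsi_len >= max_packet_size;
}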
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Import the dwc-hsotg (dwc2) register definitions file from the
3
The ARM version of Hypervisor.framework no longer defines these two
4
Linux kernel. This is a copy of drivers/usb/dwc2/hw.h from the
4
types, so let's just revert to standard ones.
5
mainline Linux kernel, the only changes being to the header, and
6
two instances of 'u32' changed to 'uint32_t' to allow it to
7
compile. Checkpatch throws a boatload of errors due to the tab
8
indentation, but I would rather import it as-is than reformat it.
9
5
10
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Message-id: 20200520235349.21215-3-pauldzim@gmail.com
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20210519202253.76782-7-agraf@csgraf.de
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
include/hw/usb/dwc2-regs.h | 899 +++++++++++++++++++++++++++++++++++++
12
accel/hvf/hvf-accel-ops.c | 6 +++---
16
1 file changed, 899 insertions(+)
13
1 file changed, 3 insertions(+), 3 deletions(-)
17
create mode 100644 include/hw/usb/dwc2-regs.h
18
14
19
diff --git a/include/hw/usb/dwc2-regs.h b/include/hw/usb/dwc2-regs.h
15
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
20
new file mode 100644
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX
17
--- a/accel/hvf/hvf-accel-ops.c
22
--- /dev/null
18
+++ b/accel/hvf/hvf-accel-ops.c
23
+++ b/include/hw/usb/dwc2-regs.h
19
@@ -XXX,XX +XXX,XX @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
24
@@ -XXX,XX +XXX,XX @@
20
macslot->present = 1;
25
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
21
macslot->gpa_start = slot->start;
26
+/*
22
macslot->size = slot->size;
27
+ * Imported from the Linux kernel file drivers/usb/dwc2/hw.h, commit
23
- ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
28
+ * a89bae709b3492b478480a2c9734e7e9393b279c ("usb: dwc2: Move
24
+ ret = hv_vm_map(slot->mem, slot->start, slot->size, flags);
29
+ * UTMI_PHY_DATA defines closer")
25
assert_hvf_ok(ret);
30
+ *
26
return 0;
31
+ * hw.h - DesignWare HS OTG Controller hardware definitions
27
}
32
+ *
28
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
33
+ * Copyright 2004-2013 Synopsys, Inc.
29
/* protect region against writes; begin tracking it */
34
+ *
30
if (on) {
35
+ * Redistribution and use in source and binary forms, with or without
31
slot->flags |= HVF_SLOT_LOG;
36
+ * modification, are permitted provided that the following conditions
32
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
37
+ * are met:
33
+ hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
38
+ * 1. Redistributions of source code must retain the above copyright
34
HV_MEMORY_READ);
39
+ * notice, this list of conditions, and the following disclaimer,
35
/* stop tracking region*/
40
+ * without modification.
36
} else {
41
+ * 2. Redistributions in binary form must reproduce the above copyright
37
slot->flags &= ~HVF_SLOT_LOG;
42
+ * notice, this list of conditions and the following disclaimer in the
38
- hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
43
+ * documentation and/or other materials provided with the distribution.
39
+ hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
44
+ * 3. The names of the above-listed copyright holders may not be used
40
HV_MEMORY_READ | HV_MEMORY_WRITE);
45
+ * to endorse or promote products derived from this software without
41
}
46
+ * specific prior written permission.
42
}
47
+ *
48
+ * ALTERNATIVELY, this software may be distributed under the terms of the
49
+ * GNU General Public License ("GPL") as published by the Free Software
50
+ * Foundation; either version 2 of the License, or (at your option) any
51
+ * later version.
52
+ *
53
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
54
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
55
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
56
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
57
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
58
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
59
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
60
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
61
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
62
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
63
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
64
+ */
65
+
66
+#ifndef __DWC2_HW_H__
67
+#define __DWC2_HW_H__
68
+
69
+#define HSOTG_REG(x)    (x)
70
+
71
+#define GOTGCTL                HSOTG_REG(0x000)
72
+#define GOTGCTL_CHIRPEN            BIT(27)
73
+#define GOTGCTL_MULT_VALID_BC_MASK    (0x1f << 22)
74
+#define GOTGCTL_MULT_VALID_BC_SHIFT    22
75
+#define GOTGCTL_OTGVER            BIT(20)
76
+#define GOTGCTL_BSESVLD            BIT(19)
77
+#define GOTGCTL_ASESVLD            BIT(18)
78
+#define GOTGCTL_DBNC_SHORT        BIT(17)
79
+#define GOTGCTL_CONID_B            BIT(16)
80
+#define GOTGCTL_DBNCE_FLTR_BYPASS    BIT(15)
81
+#define GOTGCTL_DEVHNPEN        BIT(11)
82
+#define GOTGCTL_HSTSETHNPEN        BIT(10)
83
+#define GOTGCTL_HNPREQ            BIT(9)
84
+#define GOTGCTL_HSTNEGSCS        BIT(8)
85
+#define GOTGCTL_SESREQ            BIT(1)
86
+#define GOTGCTL_SESREQSCS        BIT(0)
87
+
88
+#define GOTGINT                HSOTG_REG(0x004)
89
+#define GOTGINT_DBNCE_DONE        BIT(19)
90
+#define GOTGINT_A_DEV_TOUT_CHG        BIT(18)
91
+#define GOTGINT_HST_NEG_DET        BIT(17)
92
+#define GOTGINT_HST_NEG_SUC_STS_CHNG    BIT(9)
93
+#define GOTGINT_SES_REQ_SUC_STS_CHNG    BIT(8)
94
+#define GOTGINT_SES_END_DET        BIT(2)
95
+
96
+#define GAHBCFG                HSOTG_REG(0x008)
97
+#define GAHBCFG_AHB_SINGLE        BIT(23)
98
+#define GAHBCFG_NOTI_ALL_DMA_WRIT    BIT(22)
99
+#define GAHBCFG_REM_MEM_SUPP        BIT(21)
100
+#define GAHBCFG_P_TXF_EMP_LVL        BIT(8)
101
+#define GAHBCFG_NP_TXF_EMP_LVL        BIT(7)
102
+#define GAHBCFG_DMA_EN            BIT(5)
103
+#define GAHBCFG_HBSTLEN_MASK        (0xf << 1)
104
+#define GAHBCFG_HBSTLEN_SHIFT        1
105
+#define GAHBCFG_HBSTLEN_SINGLE        0
106
+#define GAHBCFG_HBSTLEN_INCR        1
107
+#define GAHBCFG_HBSTLEN_INCR4        3
108
+#define GAHBCFG_HBSTLEN_INCR8        5
109
+#define GAHBCFG_HBSTLEN_INCR16        7
110
+#define GAHBCFG_GLBL_INTR_EN        BIT(0)
111
+#define GAHBCFG_CTRL_MASK        (GAHBCFG_P_TXF_EMP_LVL | \
112
+                     GAHBCFG_NP_TXF_EMP_LVL | \
113
+                     GAHBCFG_DMA_EN | \
114
+                     GAHBCFG_GLBL_INTR_EN)
115
+
116
+#define GUSBCFG                HSOTG_REG(0x00C)
117
+#define GUSBCFG_FORCEDEVMODE        BIT(30)
118
+#define GUSBCFG_FORCEHOSTMODE        BIT(29)
119
+#define GUSBCFG_TXENDDELAY        BIT(28)
120
+#define GUSBCFG_ICTRAFFICPULLREMOVE    BIT(27)
121
+#define GUSBCFG_ICUSBCAP        BIT(26)
122
+#define GUSBCFG_ULPI_INT_PROT_DIS    BIT(25)
123
+#define GUSBCFG_INDICATORPASSTHROUGH    BIT(24)
124
+#define GUSBCFG_INDICATORCOMPLEMENT    BIT(23)
125
+#define GUSBCFG_TERMSELDLPULSE        BIT(22)
126
+#define GUSBCFG_ULPI_INT_VBUS_IND    BIT(21)
127
+#define GUSBCFG_ULPI_EXT_VBUS_DRV    BIT(20)
128
+#define GUSBCFG_ULPI_CLK_SUSP_M        BIT(19)
129
+#define GUSBCFG_ULPI_AUTO_RES        BIT(18)
130
+#define GUSBCFG_ULPI_FS_LS        BIT(17)
131
+#define GUSBCFG_OTG_UTMI_FS_SEL        BIT(16)
132
+#define GUSBCFG_PHY_LP_CLK_SEL        BIT(15)
133
+#define GUSBCFG_USBTRDTIM_MASK        (0xf << 10)
134
+#define GUSBCFG_USBTRDTIM_SHIFT        10
135
+#define GUSBCFG_HNPCAP            BIT(9)
136
+#define GUSBCFG_SRPCAP            BIT(8)
137
+#define GUSBCFG_DDRSEL            BIT(7)
138
+#define GUSBCFG_PHYSEL            BIT(6)
139
+#define GUSBCFG_FSINTF            BIT(5)
140
+#define GUSBCFG_ULPI_UTMI_SEL        BIT(4)
141
+#define GUSBCFG_PHYIF16            BIT(3)
142
+#define GUSBCFG_PHYIF8            (0 << 3)
143
+#define GUSBCFG_TOUTCAL_MASK        (0x7 << 0)
144
+#define GUSBCFG_TOUTCAL_SHIFT        0
145
+#define GUSBCFG_TOUTCAL_LIMIT        0x7
146
+#define GUSBCFG_TOUTCAL(_x)        ((_x) << 0)
147
+
148
+#define GRSTCTL                HSOTG_REG(0x010)
149
+#define GRSTCTL_AHBIDLE            BIT(31)
150
+#define GRSTCTL_DMAREQ            BIT(30)
151
+#define GRSTCTL_TXFNUM_MASK        (0x1f << 6)
152
+#define GRSTCTL_TXFNUM_SHIFT        6
153
+#define GRSTCTL_TXFNUM_LIMIT        0x1f
154
+#define GRSTCTL_TXFNUM(_x)        ((_x) << 6)
155
+#define GRSTCTL_TXFFLSH            BIT(5)
156
+#define GRSTCTL_RXFFLSH            BIT(4)
157
+#define GRSTCTL_IN_TKNQ_FLSH        BIT(3)
158
+#define GRSTCTL_FRMCNTRRST        BIT(2)
159
+#define GRSTCTL_HSFTRST            BIT(1)
160
+#define GRSTCTL_CSFTRST            BIT(0)
161
+
162
+#define GINTSTS                HSOTG_REG(0x014)
163
+#define GINTMSK                HSOTG_REG(0x018)
164
+#define GINTSTS_WKUPINT            BIT(31)
165
+#define GINTSTS_SESSREQINT        BIT(30)
166
+#define GINTSTS_DISCONNINT        BIT(29)
167
+#define GINTSTS_CONIDSTSCHNG        BIT(28)
168
+#define GINTSTS_LPMTRANRCVD        BIT(27)
169
+#define GINTSTS_PTXFEMP            BIT(26)
170
+#define GINTSTS_HCHINT            BIT(25)
171
+#define GINTSTS_PRTINT            BIT(24)
172
+#define GINTSTS_RESETDET        BIT(23)
173
+#define GINTSTS_FET_SUSP        BIT(22)
174
+#define GINTSTS_INCOMPL_IP        BIT(21)
175
+#define GINTSTS_INCOMPL_SOOUT        BIT(21)
176
+#define GINTSTS_INCOMPL_SOIN        BIT(20)
177
+#define GINTSTS_OEPINT            BIT(19)
178
+#define GINTSTS_IEPINT            BIT(18)
179
+#define GINTSTS_EPMIS            BIT(17)
180
+#define GINTSTS_RESTOREDONE        BIT(16)
181
+#define GINTSTS_EOPF            BIT(15)
182
+#define GINTSTS_ISOUTDROP        BIT(14)
183
+#define GINTSTS_ENUMDONE        BIT(13)
184
+#define GINTSTS_USBRST            BIT(12)
185
+#define GINTSTS_USBSUSP            BIT(11)
186
+#define GINTSTS_ERLYSUSP        BIT(10)
187
+#define GINTSTS_I2CINT            BIT(9)
188
+#define GINTSTS_ULPI_CK_INT        BIT(8)
189
+#define GINTSTS_GOUTNAKEFF        BIT(7)
190
+#define GINTSTS_GINNAKEFF        BIT(6)
191
+#define GINTSTS_NPTXFEMP        BIT(5)
192
+#define GINTSTS_RXFLVL            BIT(4)
193
+#define GINTSTS_SOF            BIT(3)
194
+#define GINTSTS_OTGINT            BIT(2)
195
+#define GINTSTS_MODEMIS            BIT(1)
196
+#define GINTSTS_CURMODE_HOST        BIT(0)
197
+
198
+#define GRXSTSR                HSOTG_REG(0x01C)
199
+#define GRXSTSP                HSOTG_REG(0x020)
200
+#define GRXSTS_FN_MASK            (0x7f << 25)
201
+#define GRXSTS_FN_SHIFT            25
202
+#define GRXSTS_PKTSTS_MASK        (0xf << 17)
203
+#define GRXSTS_PKTSTS_SHIFT        17
204
+#define GRXSTS_PKTSTS_GLOBALOUTNAK    1
205
+#define GRXSTS_PKTSTS_OUTRX        2
206
+#define GRXSTS_PKTSTS_HCHIN        2
207
+#define GRXSTS_PKTSTS_OUTDONE        3
208
+#define GRXSTS_PKTSTS_HCHIN_XFER_COMP    3
209
+#define GRXSTS_PKTSTS_SETUPDONE        4
210
+#define GRXSTS_PKTSTS_DATATOGGLEERR    5
211
+#define GRXSTS_PKTSTS_SETUPRX        6
212
+#define GRXSTS_PKTSTS_HCHHALTED        7
213
+#define GRXSTS_HCHNUM_MASK        (0xf << 0)
214
+#define GRXSTS_HCHNUM_SHIFT        0
215
+#define GRXSTS_DPID_MASK        (0x3 << 15)
216
+#define GRXSTS_DPID_SHIFT        15
217
+#define GRXSTS_BYTECNT_MASK        (0x7ff << 4)
218
+#define GRXSTS_BYTECNT_SHIFT        4
219
+#define GRXSTS_EPNUM_MASK        (0xf << 0)
220
+#define GRXSTS_EPNUM_SHIFT        0
221
+
222
+#define GRXFSIZ                HSOTG_REG(0x024)
223
+#define GRXFSIZ_DEPTH_MASK        (0xffff << 0)
224
+#define GRXFSIZ_DEPTH_SHIFT        0
225
+
226
+#define GNPTXFSIZ            HSOTG_REG(0x028)
227
+/* Use FIFOSIZE_* constants to access this register */
228
+
229
+#define GNPTXSTS            HSOTG_REG(0x02C)
230
+#define GNPTXSTS_NP_TXQ_TOP_MASK        (0x7f << 24)
231
+#define GNPTXSTS_NP_TXQ_TOP_SHIFT        24
232
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_MASK        (0xff << 16)
233
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_SHIFT        16
234
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_GET(_v)    (((_v) >> 16) & 0xff)
235
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_MASK        (0xffff << 0)
236
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_SHIFT        0
237
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_GET(_v)    (((_v) >> 0) & 0xffff)
238
+
239
+#define GI2CCTL                HSOTG_REG(0x0030)
240
+#define GI2CCTL_BSYDNE            BIT(31)
241
+#define GI2CCTL_RW            BIT(30)
242
+#define GI2CCTL_I2CDATSE0        BIT(28)
243
+#define GI2CCTL_I2CDEVADDR_MASK        (0x3 << 26)
244
+#define GI2CCTL_I2CDEVADDR_SHIFT    26
245
+#define GI2CCTL_I2CSUSPCTL        BIT(25)
246
+#define GI2CCTL_ACK            BIT(24)
247
+#define GI2CCTL_I2CEN            BIT(23)
248
+#define GI2CCTL_ADDR_MASK        (0x7f << 16)
249
+#define GI2CCTL_ADDR_SHIFT        16
250
+#define GI2CCTL_REGADDR_MASK        (0xff << 8)
251
+#define GI2CCTL_REGADDR_SHIFT        8
252
+#define GI2CCTL_RWDATA_MASK        (0xff << 0)
253
+#define GI2CCTL_RWDATA_SHIFT        0
254
+
255
+#define GPVNDCTL            HSOTG_REG(0x0034)
256
+#define GGPIO                HSOTG_REG(0x0038)
257
+#define GGPIO_STM32_OTG_GCCFG_PWRDWN    BIT(16)
258
+
259
+#define GUID                HSOTG_REG(0x003c)
260
+#define GSNPSID                HSOTG_REG(0x0040)
261
+#define GHWCFG1                HSOTG_REG(0x0044)
262
+#define GSNPSID_ID_MASK            GENMASK(31, 16)
263
+
264
+#define GHWCFG2                HSOTG_REG(0x0048)
265
+#define GHWCFG2_OTG_ENABLE_IC_USB        BIT(31)
266
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_MASK        (0x1f << 26)
267
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT        26
268
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_MASK    (0x3 << 24)
269
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT    24
270
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_MASK    (0x3 << 22)
271
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT    22
272
+#define GHWCFG2_MULTI_PROC_INT            BIT(20)
273
+#define GHWCFG2_DYNAMIC_FIFO            BIT(19)
274
+#define GHWCFG2_PERIO_EP_SUPPORTED        BIT(18)
275
+#define GHWCFG2_NUM_HOST_CHAN_MASK        (0xf << 14)
276
+#define GHWCFG2_NUM_HOST_CHAN_SHIFT        14
277
+#define GHWCFG2_NUM_DEV_EP_MASK            (0xf << 10)
278
+#define GHWCFG2_NUM_DEV_EP_SHIFT        10
279
+#define GHWCFG2_FS_PHY_TYPE_MASK        (0x3 << 8)
280
+#define GHWCFG2_FS_PHY_TYPE_SHIFT        8
281
+#define GHWCFG2_FS_PHY_TYPE_NOT_SUPPORTED    0
282
+#define GHWCFG2_FS_PHY_TYPE_DEDICATED        1
283
+#define GHWCFG2_FS_PHY_TYPE_SHARED_UTMI        2
284
+#define GHWCFG2_FS_PHY_TYPE_SHARED_ULPI        3
285
+#define GHWCFG2_HS_PHY_TYPE_MASK        (0x3 << 6)
286
+#define GHWCFG2_HS_PHY_TYPE_SHIFT        6
287
+#define GHWCFG2_HS_PHY_TYPE_NOT_SUPPORTED    0
288
+#define GHWCFG2_HS_PHY_TYPE_UTMI        1
289
+#define GHWCFG2_HS_PHY_TYPE_ULPI        2
290
+#define GHWCFG2_HS_PHY_TYPE_UTMI_ULPI        3
291
+#define GHWCFG2_POINT2POINT            BIT(5)
292
+#define GHWCFG2_ARCHITECTURE_MASK        (0x3 << 3)
293
+#define GHWCFG2_ARCHITECTURE_SHIFT        3
294
+#define GHWCFG2_SLAVE_ONLY_ARCH            0
295
+#define GHWCFG2_EXT_DMA_ARCH            1
296
+#define GHWCFG2_INT_DMA_ARCH            2
297
+#define GHWCFG2_OP_MODE_MASK            (0x7 << 0)
298
+#define GHWCFG2_OP_MODE_SHIFT            0
299
+#define GHWCFG2_OP_MODE_HNP_SRP_CAPABLE        0
300
+#define GHWCFG2_OP_MODE_SRP_ONLY_CAPABLE    1
301
+#define GHWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE    2
302
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_DEVICE    3
303
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_DEVICE    4
304
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_HOST    5
305
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST    6
306
+#define GHWCFG2_OP_MODE_UNDEFINED        7
307
+
308
+#define GHWCFG3                HSOTG_REG(0x004c)
309
+#define GHWCFG3_DFIFO_DEPTH_MASK        (0xffff << 16)
310
+#define GHWCFG3_DFIFO_DEPTH_SHIFT        16
311
+#define GHWCFG3_OTG_LPM_EN            BIT(15)
312
+#define GHWCFG3_BC_SUPPORT            BIT(14)
313
+#define GHWCFG3_OTG_ENABLE_HSIC            BIT(13)
314
+#define GHWCFG3_ADP_SUPP            BIT(12)
315
+#define GHWCFG3_SYNCH_RESET_TYPE        BIT(11)
316
+#define GHWCFG3_OPTIONAL_FEATURES        BIT(10)
317
+#define GHWCFG3_VENDOR_CTRL_IF            BIT(9)
318
+#define GHWCFG3_I2C                BIT(8)
319
+#define GHWCFG3_OTG_FUNC            BIT(7)
320
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_MASK    (0x7 << 4)
321
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT    4
322
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_MASK    (0xf << 0)
323
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT    0
324
+
325
+#define GHWCFG4                HSOTG_REG(0x0050)
326
+#define GHWCFG4_DESC_DMA_DYN            BIT(31)
327
+#define GHWCFG4_DESC_DMA            BIT(30)
328
+#define GHWCFG4_NUM_IN_EPS_MASK            (0xf << 26)
329
+#define GHWCFG4_NUM_IN_EPS_SHIFT        26
330
+#define GHWCFG4_DED_FIFO_EN            BIT(25)
331
+#define GHWCFG4_DED_FIFO_SHIFT        25
332
+#define GHWCFG4_SESSION_END_FILT_EN        BIT(24)
333
+#define GHWCFG4_B_VALID_FILT_EN            BIT(23)
334
+#define GHWCFG4_A_VALID_FILT_EN            BIT(22)
335
+#define GHWCFG4_VBUS_VALID_FILT_EN        BIT(21)
336
+#define GHWCFG4_IDDIG_FILT_EN            BIT(20)
337
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_MASK    (0xf << 16)
338
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_SHIFT    16
339
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK    (0x3 << 14)
340
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_SHIFT    14
341
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8        0
342
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_16        1
343
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8_OR_16    2
344
+#define GHWCFG4_ACG_SUPPORTED            BIT(12)
345
+#define GHWCFG4_IPG_ISOC_SUPPORTED        BIT(11)
346
+#define GHWCFG4_SERVICE_INTERVAL_SUPPORTED BIT(10)
347
+#define GHWCFG4_XHIBER                BIT(7)
348
+#define GHWCFG4_HIBER                BIT(6)
349
+#define GHWCFG4_MIN_AHB_FREQ            BIT(5)
350
+#define GHWCFG4_POWER_OPTIMIZ            BIT(4)
351
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_MASK    (0xf << 0)
352
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_SHIFT    0
353
+
354
+#define GLPMCFG                HSOTG_REG(0x0054)
355
+#define GLPMCFG_INVSELHSIC        BIT(31)
356
+#define GLPMCFG_HSICCON            BIT(30)
357
+#define GLPMCFG_RSTRSLPSTS        BIT(29)
358
+#define GLPMCFG_ENBESL            BIT(28)
359
+#define GLPMCFG_LPM_RETRYCNT_STS_MASK    (0x7 << 25)
360
+#define GLPMCFG_LPM_RETRYCNT_STS_SHIFT    25
361
+#define GLPMCFG_SNDLPM            BIT(24)
362
+#define GLPMCFG_RETRY_CNT_MASK        (0x7 << 21)
363
+#define GLPMCFG_RETRY_CNT_SHIFT        21
364
+#define GLPMCFG_LPM_REJECT_CTRL_CONTROL    BIT(21)
365
+#define GLPMCFG_LPM_ACCEPT_CTRL_ISOC    BIT(22)
366
+#define GLPMCFG_LPM_CHNL_INDX_MASK    (0xf << 17)
367
+#define GLPMCFG_LPM_CHNL_INDX_SHIFT    17
368
+#define GLPMCFG_L1RESUMEOK        BIT(16)
369
+#define GLPMCFG_SLPSTS            BIT(15)
370
+#define GLPMCFG_COREL1RES_MASK        (0x3 << 13)
371
+#define GLPMCFG_COREL1RES_SHIFT        13
372
+#define GLPMCFG_HIRD_THRES_MASK        (0x1f << 8)
373
+#define GLPMCFG_HIRD_THRES_SHIFT    8
374
+#define GLPMCFG_HIRD_THRES_EN        (0x10 << 8)
375
+#define GLPMCFG_ENBLSLPM        BIT(7)
376
+#define GLPMCFG_BREMOTEWAKE        BIT(6)
377
+#define GLPMCFG_HIRD_MASK        (0xf << 2)
378
+#define GLPMCFG_HIRD_SHIFT        2
379
+#define GLPMCFG_APPL1RES        BIT(1)
380
+#define GLPMCFG_LPMCAP            BIT(0)
381
+
382
+#define GPWRDN                HSOTG_REG(0x0058)
383
+#define GPWRDN_MULT_VAL_ID_BC_MASK    (0x1f << 24)
384
+#define GPWRDN_MULT_VAL_ID_BC_SHIFT    24
385
+#define GPWRDN_ADP_INT            BIT(23)
386
+#define GPWRDN_BSESSVLD            BIT(22)
387
+#define GPWRDN_IDSTS            BIT(21)
388
+#define GPWRDN_LINESTATE_MASK        (0x3 << 19)
389
+#define GPWRDN_LINESTATE_SHIFT        19
390
+#define GPWRDN_STS_CHGINT_MSK        BIT(18)
391
+#define GPWRDN_STS_CHGINT        BIT(17)
392
+#define GPWRDN_SRP_DET_MSK        BIT(16)
393
+#define GPWRDN_SRP_DET            BIT(15)
394
+#define GPWRDN_CONNECT_DET_MSK        BIT(14)
395
+#define GPWRDN_CONNECT_DET        BIT(13)
396
+#define GPWRDN_DISCONN_DET_MSK        BIT(12)
397
+#define GPWRDN_DISCONN_DET        BIT(11)
398
+#define GPWRDN_RST_DET_MSK        BIT(10)
399
+#define GPWRDN_RST_DET            BIT(9)
400
+#define GPWRDN_LNSTSCHG_MSK        BIT(8)
401
+#define GPWRDN_LNSTSCHG            BIT(7)
402
+#define GPWRDN_DIS_VBUS            BIT(6)
403
+#define GPWRDN_PWRDNSWTCH        BIT(5)
404
+#define GPWRDN_PWRDNRSTN        BIT(4)
405
+#define GPWRDN_PWRDNCLMP        BIT(3)
406
+#define GPWRDN_RESTORE            BIT(2)
407
+#define GPWRDN_PMUACTV            BIT(1)
408
+#define GPWRDN_PMUINTSEL        BIT(0)
409
+
410
+#define GDFIFOCFG            HSOTG_REG(0x005c)
411
+#define GDFIFOCFG_EPINFOBASE_MASK    (0xffff << 16)
412
+#define GDFIFOCFG_EPINFOBASE_SHIFT    16
413
+#define GDFIFOCFG_GDFIFOCFG_MASK    (0xffff << 0)
414
+#define GDFIFOCFG_GDFIFOCFG_SHIFT    0
415
+
416
+#define ADPCTL                HSOTG_REG(0x0060)
417
+#define ADPCTL_AR_MASK            (0x3 << 27)
418
+#define ADPCTL_AR_SHIFT            27
419
+#define ADPCTL_ADP_TMOUT_INT_MSK    BIT(26)
420
+#define ADPCTL_ADP_SNS_INT_MSK        BIT(25)
421
+#define ADPCTL_ADP_PRB_INT_MSK        BIT(24)
422
+#define ADPCTL_ADP_TMOUT_INT        BIT(23)
423
+#define ADPCTL_ADP_SNS_INT        BIT(22)
424
+#define ADPCTL_ADP_PRB_INT        BIT(21)
425
+#define ADPCTL_ADPENA            BIT(20)
426
+#define ADPCTL_ADPRES            BIT(19)
427
+#define ADPCTL_ENASNS            BIT(18)
428
+#define ADPCTL_ENAPRB            BIT(17)
429
+#define ADPCTL_RTIM_MASK        (0x7ff << 6)
430
+#define ADPCTL_RTIM_SHIFT        6
431
+#define ADPCTL_PRB_PER_MASK        (0x3 << 4)
432
+#define ADPCTL_PRB_PER_SHIFT        4
433
+#define ADPCTL_PRB_DELTA_MASK        (0x3 << 2)
434
+#define ADPCTL_PRB_DELTA_SHIFT        2
435
+#define ADPCTL_PRB_DSCHRG_MASK        (0x3 << 0)
436
+#define ADPCTL_PRB_DSCHRG_SHIFT        0
437
+
438
+#define GREFCLK                 HSOTG_REG(0x0064)
439
+#define GREFCLK_REFCLKPER_MASK         (0x1ffff << 15)
440
+#define GREFCLK_REFCLKPER_SHIFT         15
441
+#define GREFCLK_REF_CLK_MODE         BIT(14)
442
+#define GREFCLK_SOF_CNT_WKUP_ALERT_MASK     (0x3ff)
443
+#define GREFCLK_SOF_CNT_WKUP_ALERT_SHIFT 0
444
+
445
+#define GINTMSK2            HSOTG_REG(0x0068)
446
+#define GINTMSK2_WKUP_ALERT_INT_MSK    BIT(0)
447
+
448
+#define GINTSTS2            HSOTG_REG(0x006c)
449
+#define GINTSTS2_WKUP_ALERT_INT        BIT(0)
450
+
451
+#define HPTXFSIZ            HSOTG_REG(0x100)
452
+/* Use FIFOSIZE_* constants to access this register */
453
+
454
+#define DPTXFSIZN(_a)            HSOTG_REG(0x104 + (((_a) - 1) * 4))
455
+/* Use FIFOSIZE_* constants to access this register */
456
+
457
+/* These apply to the GNPTXFSIZ, HPTXFSIZ and DPTXFSIZN registers */
458
+#define FIFOSIZE_DEPTH_MASK        (0xffff << 16)
459
+#define FIFOSIZE_DEPTH_SHIFT        16
460
+#define FIFOSIZE_STARTADDR_MASK        (0xffff << 0)
461
+#define FIFOSIZE_STARTADDR_SHIFT    0
462
+#define FIFOSIZE_DEPTH_GET(_x)        (((_x) >> 16) & 0xffff)
463
+
464
+/* Device mode registers */
465
+
466
+#define DCFG                HSOTG_REG(0x800)
467
+#define DCFG_DESCDMA_EN            BIT(23)
468
+#define DCFG_EPMISCNT_MASK        (0x1f << 18)
469
+#define DCFG_EPMISCNT_SHIFT        18
470
+#define DCFG_EPMISCNT_LIMIT        0x1f
471
+#define DCFG_EPMISCNT(_x)        ((_x) << 18)
472
+#define DCFG_IPG_ISOC_SUPPORDED        BIT(17)
473
+#define DCFG_PERFRINT_MASK        (0x3 << 11)
474
+#define DCFG_PERFRINT_SHIFT        11
475
+#define DCFG_PERFRINT_LIMIT        0x3
476
+#define DCFG_PERFRINT(_x)        ((_x) << 11)
477
+#define DCFG_DEVADDR_MASK        (0x7f << 4)
478
+#define DCFG_DEVADDR_SHIFT        4
479
+#define DCFG_DEVADDR_LIMIT        0x7f
480
+#define DCFG_DEVADDR(_x)        ((_x) << 4)
481
+#define DCFG_NZ_STS_OUT_HSHK        BIT(2)
482
+#define DCFG_DEVSPD_MASK        (0x3 << 0)
483
+#define DCFG_DEVSPD_SHIFT        0
484
+#define DCFG_DEVSPD_HS            0
485
+#define DCFG_DEVSPD_FS            1
486
+#define DCFG_DEVSPD_LS            2
487
+#define DCFG_DEVSPD_FS48        3
488
+
489
+#define DCTL                HSOTG_REG(0x804)
490
+#define DCTL_SERVICE_INTERVAL_SUPPORTED BIT(19)
491
+#define DCTL_PWRONPRGDONE        BIT(11)
492
+#define DCTL_CGOUTNAK            BIT(10)
493
+#define DCTL_SGOUTNAK            BIT(9)
494
+#define DCTL_CGNPINNAK            BIT(8)
495
+#define DCTL_SGNPINNAK            BIT(7)
496
+#define DCTL_TSTCTL_MASK        (0x7 << 4)
497
+#define DCTL_TSTCTL_SHIFT        4
498
+#define DCTL_GOUTNAKSTS            BIT(3)
499
+#define DCTL_GNPINNAKSTS        BIT(2)
500
+#define DCTL_SFTDISCON            BIT(1)
501
+#define DCTL_RMTWKUPSIG            BIT(0)
502
+
503
+#define DSTS                HSOTG_REG(0x808)
504
+#define DSTS_SOFFN_MASK            (0x3fff << 8)
505
+#define DSTS_SOFFN_SHIFT        8
506
+#define DSTS_SOFFN_LIMIT        0x3fff
507
+#define DSTS_SOFFN(_x)            ((_x) << 8)
508
+#define DSTS_ERRATICERR            BIT(3)
509
+#define DSTS_ENUMSPD_MASK        (0x3 << 1)
510
+#define DSTS_ENUMSPD_SHIFT        1
511
+#define DSTS_ENUMSPD_HS            0
512
+#define DSTS_ENUMSPD_FS            1
513
+#define DSTS_ENUMSPD_LS            2
514
+#define DSTS_ENUMSPD_FS48        3
515
+#define DSTS_SUSPSTS            BIT(0)
516
+
517
+#define DIEPMSK                HSOTG_REG(0x810)
518
+#define DIEPMSK_NAKMSK            BIT(13)
519
+#define DIEPMSK_BNAININTRMSK        BIT(9)
520
+#define DIEPMSK_TXFIFOUNDRNMSK        BIT(8)
521
+#define DIEPMSK_TXFIFOEMPTY        BIT(7)
522
+#define DIEPMSK_INEPNAKEFFMSK        BIT(6)
523
+#define DIEPMSK_INTKNEPMISMSK        BIT(5)
524
+#define DIEPMSK_INTKNTXFEMPMSK        BIT(4)
525
+#define DIEPMSK_TIMEOUTMSK        BIT(3)
526
+#define DIEPMSK_AHBERRMSK        BIT(2)
527
+#define DIEPMSK_EPDISBLDMSK        BIT(1)
528
+#define DIEPMSK_XFERCOMPLMSK        BIT(0)
529
+
530
+#define DOEPMSK                HSOTG_REG(0x814)
531
+#define DOEPMSK_BNAMSK            BIT(9)
532
+#define DOEPMSK_BACK2BACKSETUP        BIT(6)
533
+#define DOEPMSK_STSPHSERCVDMSK        BIT(5)
534
+#define DOEPMSK_OUTTKNEPDISMSK        BIT(4)
535
+#define DOEPMSK_SETUPMSK        BIT(3)
536
+#define DOEPMSK_AHBERRMSK        BIT(2)
537
+#define DOEPMSK_EPDISBLDMSK        BIT(1)
538
+#define DOEPMSK_XFERCOMPLMSK        BIT(0)
539
+
540
+#define DAINT                HSOTG_REG(0x818)
541
+#define DAINTMSK            HSOTG_REG(0x81C)
542
+#define DAINT_OUTEP_SHIFT        16
543
+#define DAINT_OUTEP(_x)            (1 << ((_x) + 16))
544
+#define DAINT_INEP(_x)            (1 << (_x))
545
+
546
+#define DTKNQR1                HSOTG_REG(0x820)
547
+#define DTKNQR2                HSOTG_REG(0x824)
548
+#define DTKNQR3                HSOTG_REG(0x830)
549
+#define DTKNQR4                HSOTG_REG(0x834)
550
+#define DIEPEMPMSK            HSOTG_REG(0x834)
551
+
552
+#define DVBUSDIS            HSOTG_REG(0x828)
553
+#define DVBUSPULSE            HSOTG_REG(0x82C)
554
+
555
+#define DIEPCTL0            HSOTG_REG(0x900)
556
+#define DIEPCTL(_a)            HSOTG_REG(0x900 + ((_a) * 0x20))
557
+
558
+#define DOEPCTL0            HSOTG_REG(0xB00)
559
+#define DOEPCTL(_a)            HSOTG_REG(0xB00 + ((_a) * 0x20))
560
+
561
+/* EP0 specialness:
562
+ * bits[29..28] - reserved (no SetD0PID, SetD1PID)
563
+ * bits[25..22] - should always be zero, this isn't a periodic endpoint
564
+ * bits[10..0] - MPS setting different for EP0
565
+ */
566
+#define D0EPCTL_MPS_MASK        (0x3 << 0)
567
+#define D0EPCTL_MPS_SHIFT        0
568
+#define D0EPCTL_MPS_64            0
569
+#define D0EPCTL_MPS_32            1
570
+#define D0EPCTL_MPS_16            2
571
+#define D0EPCTL_MPS_8            3
572
+
573
+#define DXEPCTL_EPENA            BIT(31)
574
+#define DXEPCTL_EPDIS            BIT(30)
575
+#define DXEPCTL_SETD1PID        BIT(29)
576
+#define DXEPCTL_SETODDFR        BIT(29)
577
+#define DXEPCTL_SETD0PID        BIT(28)
578
+#define DXEPCTL_SETEVENFR        BIT(28)
579
+#define DXEPCTL_SNAK            BIT(27)
580
+#define DXEPCTL_CNAK            BIT(26)
581
+#define DXEPCTL_TXFNUM_MASK        (0xf << 22)
582
+#define DXEPCTL_TXFNUM_SHIFT        22
583
+#define DXEPCTL_TXFNUM_LIMIT        0xf
584
+#define DXEPCTL_TXFNUM(_x)        ((_x) << 22)
585
+#define DXEPCTL_STALL            BIT(21)
586
+#define DXEPCTL_SNP            BIT(20)
587
+#define DXEPCTL_EPTYPE_MASK        (0x3 << 18)
588
+#define DXEPCTL_EPTYPE_CONTROL        (0x0 << 18)
589
+#define DXEPCTL_EPTYPE_ISO        (0x1 << 18)
590
+#define DXEPCTL_EPTYPE_BULK        (0x2 << 18)
591
+#define DXEPCTL_EPTYPE_INTERRUPT    (0x3 << 18)
592
+
593
+#define DXEPCTL_NAKSTS            BIT(17)
594
+#define DXEPCTL_DPID            BIT(16)
595
+#define DXEPCTL_EOFRNUM            BIT(16)
596
+#define DXEPCTL_USBACTEP        BIT(15)
597
+#define DXEPCTL_NEXTEP_MASK        (0xf << 11)
598
+#define DXEPCTL_NEXTEP_SHIFT        11
599
+#define DXEPCTL_NEXTEP_LIMIT        0xf
600
+#define DXEPCTL_NEXTEP(_x)        ((_x) << 11)
601
+#define DXEPCTL_MPS_MASK        (0x7ff << 0)
602
+#define DXEPCTL_MPS_SHIFT        0
603
+#define DXEPCTL_MPS_LIMIT        0x7ff
604
+#define DXEPCTL_MPS(_x)            ((_x) << 0)
605
+
606
+#define DIEPINT(_a)            HSOTG_REG(0x908 + ((_a) * 0x20))
607
+#define DOEPINT(_a)            HSOTG_REG(0xB08 + ((_a) * 0x20))
608
+#define DXEPINT_SETUP_RCVD        BIT(15)
609
+#define DXEPINT_NYETINTRPT        BIT(14)
610
+#define DXEPINT_NAKINTRPT        BIT(13)
611
+#define DXEPINT_BBLEERRINTRPT        BIT(12)
612
+#define DXEPINT_PKTDRPSTS        BIT(11)
613
+#define DXEPINT_BNAINTR            BIT(9)
614
+#define DXEPINT_TXFIFOUNDRN        BIT(8)
615
+#define DXEPINT_OUTPKTERR        BIT(8)
616
+#define DXEPINT_TXFEMP            BIT(7)
617
+#define DXEPINT_INEPNAKEFF        BIT(6)
618
+#define DXEPINT_BACK2BACKSETUP        BIT(6)
619
+#define DXEPINT_INTKNEPMIS        BIT(5)
620
+#define DXEPINT_STSPHSERCVD        BIT(5)
621
+#define DXEPINT_INTKNTXFEMP        BIT(4)
622
+#define DXEPINT_OUTTKNEPDIS        BIT(4)
623
+#define DXEPINT_TIMEOUT            BIT(3)
624
+#define DXEPINT_SETUP            BIT(3)
625
+#define DXEPINT_AHBERR            BIT(2)
626
+#define DXEPINT_EPDISBLD        BIT(1)
627
+#define DXEPINT_XFERCOMPL        BIT(0)
628
+
629
+#define DIEPTSIZ0            HSOTG_REG(0x910)
630
+#define DIEPTSIZ0_PKTCNT_MASK        (0x3 << 19)
631
+#define DIEPTSIZ0_PKTCNT_SHIFT        19
632
+#define DIEPTSIZ0_PKTCNT_LIMIT        0x3
633
+#define DIEPTSIZ0_PKTCNT(_x)        ((_x) << 19)
634
+#define DIEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
635
+#define DIEPTSIZ0_XFERSIZE_SHIFT    0
636
+#define DIEPTSIZ0_XFERSIZE_LIMIT    0x7f
637
+#define DIEPTSIZ0_XFERSIZE(_x)        ((_x) << 0)
638
+
639
+#define DOEPTSIZ0            HSOTG_REG(0xB10)
640
+#define DOEPTSIZ0_SUPCNT_MASK        (0x3 << 29)
641
+#define DOEPTSIZ0_SUPCNT_SHIFT        29
642
+#define DOEPTSIZ0_SUPCNT_LIMIT        0x3
643
+#define DOEPTSIZ0_SUPCNT(_x)        ((_x) << 29)
644
+#define DOEPTSIZ0_PKTCNT        BIT(19)
645
+#define DOEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
646
+#define DOEPTSIZ0_XFERSIZE_SHIFT    0
647
+
648
+#define DIEPTSIZ(_a)            HSOTG_REG(0x910 + ((_a) * 0x20))
649
+#define DOEPTSIZ(_a)            HSOTG_REG(0xB10 + ((_a) * 0x20))
650
+#define DXEPTSIZ_MC_MASK        (0x3 << 29)
651
+#define DXEPTSIZ_MC_SHIFT        29
652
+#define DXEPTSIZ_MC_LIMIT        0x3
653
+#define DXEPTSIZ_MC(_x)            ((_x) << 29)
654
+#define DXEPTSIZ_PKTCNT_MASK        (0x3ff << 19)
655
+#define DXEPTSIZ_PKTCNT_SHIFT        19
656
+#define DXEPTSIZ_PKTCNT_LIMIT        0x3ff
657
+#define DXEPTSIZ_PKTCNT_GET(_v)        (((_v) >> 19) & 0x3ff)
658
+#define DXEPTSIZ_PKTCNT(_x)        ((_x) << 19)
659
+#define DXEPTSIZ_XFERSIZE_MASK        (0x7ffff << 0)
660
+#define DXEPTSIZ_XFERSIZE_SHIFT        0
661
+#define DXEPTSIZ_XFERSIZE_LIMIT        0x7ffff
662
+#define DXEPTSIZ_XFERSIZE_GET(_v)    (((_v) >> 0) & 0x7ffff)
663
+#define DXEPTSIZ_XFERSIZE(_x)        ((_x) << 0)
664
+
665
+#define DIEPDMA(_a)            HSOTG_REG(0x914 + ((_a) * 0x20))
666
+#define DOEPDMA(_a)            HSOTG_REG(0xB14 + ((_a) * 0x20))
667
+
668
+#define DTXFSTS(_a)            HSOTG_REG(0x918 + ((_a) * 0x20))
669
+
670
+#define PCGCTL                HSOTG_REG(0x0e00)
671
+#define PCGCTL_IF_DEV_MODE        BIT(31)
672
+#define PCGCTL_P2HD_PRT_SPD_MASK    (0x3 << 29)
673
+#define PCGCTL_P2HD_PRT_SPD_SHIFT    29
674
+#define PCGCTL_P2HD_DEV_ENUM_SPD_MASK    (0x3 << 27)
675
+#define PCGCTL_P2HD_DEV_ENUM_SPD_SHIFT    27
676
+#define PCGCTL_MAC_DEV_ADDR_MASK    (0x7f << 20)
677
+#define PCGCTL_MAC_DEV_ADDR_SHIFT    20
678
+#define PCGCTL_MAX_TERMSEL        BIT(19)
679
+#define PCGCTL_MAX_XCVRSELECT_MASK    (0x3 << 17)
680
+#define PCGCTL_MAX_XCVRSELECT_SHIFT    17
681
+#define PCGCTL_PORT_POWER        BIT(16)
682
+#define PCGCTL_PRT_CLK_SEL_MASK        (0x3 << 14)
683
+#define PCGCTL_PRT_CLK_SEL_SHIFT    14
684
+#define PCGCTL_ESS_REG_RESTORED        BIT(13)
685
+#define PCGCTL_EXTND_HIBER_SWITCH    BIT(12)
686
+#define PCGCTL_EXTND_HIBER_PWRCLMP    BIT(11)
687
+#define PCGCTL_ENBL_EXTND_HIBER        BIT(10)
688
+#define PCGCTL_RESTOREMODE        BIT(9)
689
+#define PCGCTL_RESETAFTSUSP        BIT(8)
690
+#define PCGCTL_DEEP_SLEEP        BIT(7)
691
+#define PCGCTL_PHY_IN_SLEEP        BIT(6)
692
+#define PCGCTL_ENBL_SLEEP_GATING    BIT(5)
693
+#define PCGCTL_RSTPDWNMODULE        BIT(3)
694
+#define PCGCTL_PWRCLMP            BIT(2)
695
+#define PCGCTL_GATEHCLK            BIT(1)
696
+#define PCGCTL_STOPPCLK            BIT(0)
697
+
698
+#define PCGCCTL1 HSOTG_REG(0xe04)
699
+#define PCGCCTL1_TIMER (0x3 << 1)
700
+#define PCGCCTL1_GATEEN BIT(0)
701
+
702
+#define EPFIFO(_a)            HSOTG_REG(0x1000 + ((_a) * 0x1000))
703
+
704
+/* Host Mode Registers */
705
+
706
+#define HCFG                HSOTG_REG(0x0400)
707
+#define HCFG_MODECHTIMEN        BIT(31)
708
+#define HCFG_PERSCHEDENA        BIT(26)
709
+#define HCFG_FRLISTEN_MASK        (0x3 << 24)
710
+#define HCFG_FRLISTEN_SHIFT        24
711
+#define HCFG_FRLISTEN_8                (0 << 24)
712
+#define FRLISTEN_8_SIZE                8
713
+#define HCFG_FRLISTEN_16            BIT(24)
714
+#define FRLISTEN_16_SIZE            16
715
+#define HCFG_FRLISTEN_32            (2 << 24)
716
+#define FRLISTEN_32_SIZE            32
717
+#define HCFG_FRLISTEN_64            (3 << 24)
718
+#define FRLISTEN_64_SIZE            64
719
+#define HCFG_DESCDMA            BIT(23)
720
+#define HCFG_RESVALID_MASK        (0xff << 8)
721
+#define HCFG_RESVALID_SHIFT        8
722
+#define HCFG_ENA32KHZ            BIT(7)
723
+#define HCFG_FSLSSUPP            BIT(2)
724
+#define HCFG_FSLSPCLKSEL_MASK        (0x3 << 0)
725
+#define HCFG_FSLSPCLKSEL_SHIFT        0
726
+#define HCFG_FSLSPCLKSEL_30_60_MHZ    0
727
+#define HCFG_FSLSPCLKSEL_48_MHZ        1
728
+#define HCFG_FSLSPCLKSEL_6_MHZ        2
729
+
730
+#define HFIR                HSOTG_REG(0x0404)
731
+#define HFIR_FRINT_MASK            (0xffff << 0)
732
+#define HFIR_FRINT_SHIFT        0
733
+#define HFIR_RLDCTRL            BIT(16)
734
+
735
+#define HFNUM                HSOTG_REG(0x0408)
736
+#define HFNUM_FRREM_MASK        (0xffff << 16)
737
+#define HFNUM_FRREM_SHIFT        16
738
+#define HFNUM_FRNUM_MASK        (0xffff << 0)
739
+#define HFNUM_FRNUM_SHIFT        0
740
+#define HFNUM_MAX_FRNUM            0x3fff
741
+
742
+#define HPTXSTS                HSOTG_REG(0x0410)
743
+#define TXSTS_QTOP_ODD            BIT(31)
744
+#define TXSTS_QTOP_CHNEP_MASK        (0xf << 27)
745
+#define TXSTS_QTOP_CHNEP_SHIFT        27
746
+#define TXSTS_QTOP_TOKEN_MASK        (0x3 << 25)
747
+#define TXSTS_QTOP_TOKEN_SHIFT        25
748
+#define TXSTS_QTOP_TERMINATE        BIT(24)
749
+#define TXSTS_QSPCAVAIL_MASK        (0xff << 16)
750
+#define TXSTS_QSPCAVAIL_SHIFT        16
751
+#define TXSTS_FSPCAVAIL_MASK        (0xffff << 0)
752
+#define TXSTS_FSPCAVAIL_SHIFT        0
753
+
754
+#define HAINT                HSOTG_REG(0x0414)
755
+#define HAINTMSK            HSOTG_REG(0x0418)
756
+#define HFLBADDR            HSOTG_REG(0x041c)
757
+
758
+#define HPRT0                HSOTG_REG(0x0440)
759
+#define HPRT0_SPD_MASK            (0x3 << 17)
760
+#define HPRT0_SPD_SHIFT            17
761
+#define HPRT0_SPD_HIGH_SPEED        0
762
+#define HPRT0_SPD_FULL_SPEED        1
763
+#define HPRT0_SPD_LOW_SPEED        2
764
+#define HPRT0_TSTCTL_MASK        (0xf << 13)
765
+#define HPRT0_TSTCTL_SHIFT        13
766
+#define HPRT0_PWR            BIT(12)
767
+#define HPRT0_LNSTS_MASK        (0x3 << 10)
768
+#define HPRT0_LNSTS_SHIFT        10
769
+#define HPRT0_RST            BIT(8)
770
+#define HPRT0_SUSP            BIT(7)
771
+#define HPRT0_RES            BIT(6)
772
+#define HPRT0_OVRCURRCHG        BIT(5)
773
+#define HPRT0_OVRCURRACT        BIT(4)
774
+#define HPRT0_ENACHG            BIT(3)
775
+#define HPRT0_ENA            BIT(2)
776
+#define HPRT0_CONNDET            BIT(1)
777
+#define HPRT0_CONNSTS            BIT(0)
778
+
779
+#define HCCHAR(_ch)            HSOTG_REG(0x0500 + 0x20 * (_ch))
780
+#define HCCHAR_CHENA            BIT(31)
781
+#define HCCHAR_CHDIS            BIT(30)
782
+#define HCCHAR_ODDFRM            BIT(29)
783
+#define HCCHAR_DEVADDR_MASK        (0x7f << 22)
784
+#define HCCHAR_DEVADDR_SHIFT        22
785
+#define HCCHAR_MULTICNT_MASK        (0x3 << 20)
786
+#define HCCHAR_MULTICNT_SHIFT        20
787
+#define HCCHAR_EPTYPE_MASK        (0x3 << 18)
788
+#define HCCHAR_EPTYPE_SHIFT        18
789
+#define HCCHAR_LSPDDEV            BIT(17)
790
+#define HCCHAR_EPDIR            BIT(15)
791
+#define HCCHAR_EPNUM_MASK        (0xf << 11)
792
+#define HCCHAR_EPNUM_SHIFT        11
793
+#define HCCHAR_MPS_MASK            (0x7ff << 0)
794
+#define HCCHAR_MPS_SHIFT        0
795
+
796
+#define HCSPLT(_ch)            HSOTG_REG(0x0504 + 0x20 * (_ch))
797
+#define HCSPLT_SPLTENA            BIT(31)
798
+#define HCSPLT_COMPSPLT            BIT(16)
799
+#define HCSPLT_XACTPOS_MASK        (0x3 << 14)
800
+#define HCSPLT_XACTPOS_SHIFT        14
801
+#define HCSPLT_XACTPOS_MID        0
802
+#define HCSPLT_XACTPOS_END        1
803
+#define HCSPLT_XACTPOS_BEGIN        2
804
+#define HCSPLT_XACTPOS_ALL        3
805
+#define HCSPLT_HUBADDR_MASK        (0x7f << 7)
806
+#define HCSPLT_HUBADDR_SHIFT        7
807
+#define HCSPLT_PRTADDR_MASK        (0x7f << 0)
808
+#define HCSPLT_PRTADDR_SHIFT        0
809
+
810
+#define HCINT(_ch)            HSOTG_REG(0x0508 + 0x20 * (_ch))
811
+#define HCINTMSK(_ch)            HSOTG_REG(0x050c + 0x20 * (_ch))
812
+#define HCINTMSK_RESERVED14_31        (0x3ffff << 14)
813
+#define HCINTMSK_FRM_LIST_ROLL        BIT(13)
814
+#define HCINTMSK_XCS_XACT        BIT(12)
815
+#define HCINTMSK_BNA            BIT(11)
816
+#define HCINTMSK_DATATGLERR        BIT(10)
817
+#define HCINTMSK_FRMOVRUN        BIT(9)
818
+#define HCINTMSK_BBLERR            BIT(8)
819
+#define HCINTMSK_XACTERR        BIT(7)
820
+#define HCINTMSK_NYET            BIT(6)
821
+#define HCINTMSK_ACK            BIT(5)
822
+#define HCINTMSK_NAK            BIT(4)
823
+#define HCINTMSK_STALL            BIT(3)
824
+#define HCINTMSK_AHBERR            BIT(2)
825
+#define HCINTMSK_CHHLTD            BIT(1)
826
+#define HCINTMSK_XFERCOMPL        BIT(0)
827
+
828
+#define HCTSIZ(_ch)            HSOTG_REG(0x0510 + 0x20 * (_ch))
829
+#define TSIZ_DOPNG            BIT(31)
830
+#define TSIZ_SC_MC_PID_MASK        (0x3 << 29)
831
+#define TSIZ_SC_MC_PID_SHIFT        29
832
+#define TSIZ_SC_MC_PID_DATA0        0
833
+#define TSIZ_SC_MC_PID_DATA2        1
834
+#define TSIZ_SC_MC_PID_DATA1        2
835
+#define TSIZ_SC_MC_PID_MDATA        3
836
+#define TSIZ_SC_MC_PID_SETUP        3
837
+#define TSIZ_PKTCNT_MASK        (0x3ff << 19)
838
+#define TSIZ_PKTCNT_SHIFT        19
839
+#define TSIZ_NTD_MASK            (0xff << 8)
840
+#define TSIZ_NTD_SHIFT            8
841
+#define TSIZ_SCHINFO_MASK        (0xff << 0)
842
+#define TSIZ_SCHINFO_SHIFT        0
843
+#define TSIZ_XFERSIZE_MASK        (0x7ffff << 0)
844
+#define TSIZ_XFERSIZE_SHIFT        0
845
+
846
+#define HCDMA(_ch)            HSOTG_REG(0x0514 + 0x20 * (_ch))
847
+
848
+#define HCDMAB(_ch)            HSOTG_REG(0x051c + 0x20 * (_ch))
849
+
850
+#define HCFIFO(_ch)            HSOTG_REG(0x1000 + 0x1000 * (_ch))
851
+
852
+/**
853
+ * struct dwc2_dma_desc - DMA descriptor structure,
854
+ * used for both host and gadget modes
855
+ *
856
+ * @status: DMA descriptor status quadlet
857
+ * @buf: DMA descriptor data buffer pointer
858
+ *
859
+ * DMA Descriptor structure contains two quadlets:
860
+ * Status quadlet and Data buffer pointer.
861
+ */
862
+struct dwc2_dma_desc {
863
+    uint32_t status;
864
+    uint32_t buf;
865
+} __packed;
866
+
867
+/* Host Mode DMA descriptor status quadlet */
868
+
869
+#define HOST_DMA_A            BIT(31)
870
+#define HOST_DMA_STS_MASK        (0x3 << 28)
871
+#define HOST_DMA_STS_SHIFT        28
872
+#define HOST_DMA_STS_PKTERR        BIT(28)
873
+#define HOST_DMA_EOL            BIT(26)
874
+#define HOST_DMA_IOC            BIT(25)
875
+#define HOST_DMA_SUP            BIT(24)
876
+#define HOST_DMA_ALT_QTD        BIT(23)
877
+#define HOST_DMA_QTD_OFFSET_MASK    (0x3f << 17)
878
+#define HOST_DMA_QTD_OFFSET_SHIFT    17
879
+#define HOST_DMA_ISOC_NBYTES_MASK    (0xfff << 0)
880
+#define HOST_DMA_ISOC_NBYTES_SHIFT    0
881
+#define HOST_DMA_NBYTES_MASK        (0x1ffff << 0)
882
+#define HOST_DMA_NBYTES_SHIFT        0
883
+#define HOST_DMA_NBYTES_LIMIT        131071
884
+
885
+/* Device Mode DMA descriptor status quadlet */
886
+
887
+#define DEV_DMA_BUFF_STS_MASK        (0x3 << 30)
888
+#define DEV_DMA_BUFF_STS_SHIFT        30
889
+#define DEV_DMA_BUFF_STS_HREADY        0
890
+#define DEV_DMA_BUFF_STS_DMABUSY    1
891
+#define DEV_DMA_BUFF_STS_DMADONE    2
892
+#define DEV_DMA_BUFF_STS_HBUSY        3
893
+#define DEV_DMA_STS_MASK        (0x3 << 28)
894
+#define DEV_DMA_STS_SHIFT        28
895
+#define DEV_DMA_STS_SUCC        0
896
+#define DEV_DMA_STS_BUFF_FLUSH        1
897
+#define DEV_DMA_STS_BUFF_ERR        3
898
+#define DEV_DMA_L            BIT(27)
899
+#define DEV_DMA_SHORT            BIT(26)
900
+#define DEV_DMA_IOC            BIT(25)
901
+#define DEV_DMA_SR            BIT(24)
902
+#define DEV_DMA_MTRF            BIT(23)
903
+#define DEV_DMA_ISOC_PID_MASK        (0x3 << 23)
904
+#define DEV_DMA_ISOC_PID_SHIFT        23
905
+#define DEV_DMA_ISOC_PID_DATA0        0
906
+#define DEV_DMA_ISOC_PID_DATA2        1
907
+#define DEV_DMA_ISOC_PID_DATA1        2
908
+#define DEV_DMA_ISOC_PID_MDATA        3
909
+#define DEV_DMA_ISOC_FRNUM_MASK        (0x7ff << 12)
910
+#define DEV_DMA_ISOC_FRNUM_SHIFT    12
911
+#define DEV_DMA_ISOC_TX_NBYTES_MASK    (0xfff << 0)
912
+#define DEV_DMA_ISOC_TX_NBYTES_LIMIT    0xfff
913
+#define DEV_DMA_ISOC_RX_NBYTES_MASK    (0x7ff << 0)
914
+#define DEV_DMA_ISOC_RX_NBYTES_LIMIT    0x7ff
915
+#define DEV_DMA_ISOC_NBYTES_SHIFT    0
916
+#define DEV_DMA_NBYTES_MASK        (0xffff << 0)
917
+#define DEV_DMA_NBYTES_SHIFT        0
918
+#define DEV_DMA_NBYTES_LIMIT        0xffff
919
+
920
+#define MAX_DMA_DESC_NUM_GENERIC    64
921
+#define MAX_DMA_DESC_NUM_HS_ISOC    256
922
+
923
+#endif /* __DWC2_HW_H__ */
924
--
43
--
925
2.20.1
44
2.20.1
926
45
927
46
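
The dwc2-regs.h header imported above sticks to the Linux convention of pairing every multi-bit field with FIELD_MASK and FIELD_SHIFT macros. As a minimal sketch of how a device model decodes such fields, the snippet below extracts the packet-status and byte-count fields from a GRXSTSP value; the macro values are copied from the header, while the sample value and variable names are made up for illustration:

    /* Decode two GRXSTSP fields using the mask/shift pairs from dwc2-regs.h.
     * The macro values are copied verbatim from the header above; everything
     * else (sample value, variable names) is illustrative only. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GRXSTS_PKTSTS_MASK    (0xf << 17)
    #define GRXSTS_PKTSTS_SHIFT   17
    #define GRXSTS_PKTSTS_OUTRX   2
    #define GRXSTS_BYTECNT_MASK   (0x7ff << 4)
    #define GRXSTS_BYTECNT_SHIFT  4

    int main(void)
    {
        /* Pretend the core reported an OUT data packet of 64 bytes. */
        uint32_t grxstsp = (GRXSTS_PKTSTS_OUTRX << GRXSTS_PKTSTS_SHIFT)
                           | (64 << GRXSTS_BYTECNT_SHIFT);
        uint32_t pktsts  = (grxstsp & GRXSTS_PKTSTS_MASK) >> GRXSTS_PKTSTS_SHIFT;
        uint32_t bytecnt = (grxstsp & GRXSTS_BYTECNT_MASK) >> GRXSTS_BYTECNT_SHIFT;

        printf("pktsts=%" PRIu32 " bytecnt=%" PRIu32 "\n", pktsts, bytecnt);
        return 0;
    }

Keeping the kernel's names and layout untouched, tab indentation and checkpatch complaints included, is what makes it practical to diff the file against future kernel releases.
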
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Wire the dwc-hsotg (dwc2) emulation into QEMU
3
Until now, Hypervisor.framework has only been available on x86_64 systems.
4
With Apple Silicon shipping now, it extends its reach to aarch64. To
5
prepare for support for multiple architectures, let's start moving common
6
code out into its own accel directory.
4
7
5
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
8
This patch splits the vcpu init and destroy functions into a generic and
6
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
9
an architecture specific portion. This also allows us to move the generic
7
Message-id: 20200520235349.21215-7-pauldzim@gmail.com
10
functions into the generic hvf code, removing exported functions.
11
12
Signed-off-by: Alexander Graf <agraf@csgraf.de>
13
Reviewed-by: Sergio Lopez <slp@redhat.com>
14
Message-id: 20210519202253.76782-8-agraf@csgraf.de
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
17
---
10
include/hw/arm/bcm2835_peripherals.h | 3 ++-
18
accel/hvf/hvf-accel-ops.h | 2 --
11
hw/arm/bcm2835_peripherals.c | 21 ++++++++++++++++++++-
19
include/sysemu/hvf_int.h | 2 ++
12
2 files changed, 22 insertions(+), 2 deletions(-)
20
accel/hvf/hvf-accel-ops.c | 30 ++++++++++++++++++++++++++++++
21
target/i386/hvf/hvf.c | 23 ++---------------------
22
4 files changed, 34 insertions(+), 23 deletions(-)
13
23
14
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
24
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
15
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/arm/bcm2835_peripherals.h
26
--- a/accel/hvf/hvf-accel-ops.h
17
+++ b/include/hw/arm/bcm2835_peripherals.h
27
+++ b/accel/hvf/hvf-accel-ops.h
18
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
19
#include "hw/sd/bcm2835_sdhost.h"
29
20
#include "hw/gpio/bcm2835_gpio.h"
30
#include "sysemu/cpus.h"
21
#include "hw/timer/bcm2835_systmr.h"
31
22
+#include "hw/usb/hcd-dwc2.h"
32
-int hvf_init_vcpu(CPUState *);
23
#include "hw/misc/unimp.h"
33
int hvf_vcpu_exec(CPUState *);
24
34
void hvf_cpu_synchronize_state(CPUState *);
25
#define TYPE_BCM2835_PERIPHERALS "bcm2835-peripherals"
35
void hvf_cpu_synchronize_post_reset(CPUState *);
26
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
36
void hvf_cpu_synchronize_post_init(CPUState *);
27
UnimplementedDeviceState ave0;
37
void hvf_cpu_synchronize_pre_loadvm(CPUState *);
28
UnimplementedDeviceState bscsl;
38
-void hvf_vcpu_destroy(CPUState *);
29
UnimplementedDeviceState smi;
39
30
- UnimplementedDeviceState dwc2;
40
#endif /* HVF_CPUS_H */
31
+ DWC2State dwc2;
41
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
32
UnimplementedDeviceState sdramc;
33
} BCM2835PeripheralState;
34
35
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
36
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/arm/bcm2835_peripherals.c
43
--- a/include/sysemu/hvf_int.h
38
+++ b/hw/arm/bcm2835_peripherals.c
44
+++ b/include/sysemu/hvf_int.h
39
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
45
@@ -XXX,XX +XXX,XX @@ struct HVFState {
40
/* Mphi */
46
extern HVFState *hvf_state;
41
sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
47
42
TYPE_BCM2835_MPHI);
48
void assert_hvf_ok(hv_return_t ret);
49
+int hvf_arch_init_vcpu(CPUState *cpu);
50
+void hvf_arch_vcpu_destroy(CPUState *cpu);
51
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
52
int hvf_put_registers(CPUState *);
53
int hvf_get_registers(CPUState *);
54
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/accel/hvf/hvf-accel-ops.c
57
+++ b/accel/hvf/hvf-accel-ops.c
58
@@ -XXX,XX +XXX,XX @@ static void hvf_type_init(void)
59
60
type_init(hvf_type_init);
61
62
+static void hvf_vcpu_destroy(CPUState *cpu)
63
+{
64
+ hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
65
+ assert_hvf_ok(ret);
43
+
66
+
44
+ /* DWC2 */
67
+ hvf_arch_vcpu_destroy(cpu);
45
+ sysbus_init_child_obj(obj, "dwc2", &s->dwc2, sizeof(s->dwc2),
68
+}
46
+ TYPE_DWC2_USB);
47
+
69
+
48
+ object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
70
+static int hvf_init_vcpu(CPUState *cpu)
49
+ OBJECT(&s->gpu_bus_mr));
71
+{
72
+ int r;
73
+
74
+ /* init cpu signals */
75
+ sigset_t set;
76
+ struct sigaction sigact;
77
+
78
+ memset(&sigact, 0, sizeof(sigact));
79
+ sigact.sa_handler = dummy_signal;
80
+ sigaction(SIG_IPI, &sigact, NULL);
81
+
82
+ pthread_sigmask(SIG_BLOCK, NULL, &set);
83
+ sigdelset(&set, SIG_IPI);
84
+
85
+ r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
86
+ cpu->vcpu_dirty = 1;
87
+ assert_hvf_ok(r);
88
+
89
+ return hvf_arch_init_vcpu(cpu);
90
+}
91
+
92
/*
93
* The HVF-specific vCPU thread function. This one should only run when the host
94
* CPU supports the VMX "unrestricted guest" feature.
95
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/target/i386/hvf/hvf.c
98
+++ b/target/i386/hvf/hvf.c
99
@@ -XXX,XX +XXX,XX @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
100
return false;
50
}
101
}
51
102
52
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
103
-void hvf_vcpu_destroy(CPUState *cpu)
53
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
104
+void hvf_arch_vcpu_destroy(CPUState *cpu)
54
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
105
{
55
INTERRUPT_HOSTPORT));
106
X86CPU *x86_cpu = X86_CPU(cpu);
56
107
CPUX86State *env = &x86_cpu->env;
57
+ /* DWC2 */
108
58
+ object_property_set_bool(OBJECT(&s->dwc2), true, "realized", &err);
109
- hv_return_t ret = hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd);
59
+ if (err) {
110
g_free(env->hvf_mmio_buf);
60
+ error_propagate(errp, err);
111
- assert_hvf_ok(ret);
61
+ return;
62
+ }
63
+
64
+ memory_region_add_subregion(&s->peri_mr, USB_OTG_OFFSET,
65
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dwc2), 0));
66
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->dwc2), 0,
67
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
68
+ INTERRUPT_USB));
69
+
70
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
71
create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
72
create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
73
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
74
create_unimp(s, &s->otp, "bcm2835-otp", OTP_OFFSET, 0x80);
75
create_unimp(s, &s->dbus, "bcm2835-dbus", DBUS_OFFSET, 0x8000);
76
create_unimp(s, &s->ave0, "bcm2835-ave0", AVE0_OFFSET, 0x8000);
77
- create_unimp(s, &s->dwc2, "dwc-usb2", USB_OTG_OFFSET, 0x1000);
78
create_unimp(s, &s->sdramc, "bcm2835-sdramc", SDRAMC_OFFSET, 0x100);
79
}
112
}
80
113
114
static void init_tsc_freq(CPUX86State *env)
115
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
116
return env->apic_bus_freq != 0;
117
}
118
119
-int hvf_init_vcpu(CPUState *cpu)
120
+int hvf_arch_init_vcpu(CPUState *cpu)
121
{
122
-
123
X86CPU *x86cpu = X86_CPU(cpu);
124
CPUX86State *env = &x86cpu->env;
125
- int r;
126
-
127
- /* init cpu signals */
128
- sigset_t set;
129
- struct sigaction sigact;
130
-
131
- memset(&sigact, 0, sizeof(sigact));
132
- sigact.sa_handler = dummy_signal;
133
- sigaction(SIG_IPI, &sigact, NULL);
134
-
135
- pthread_sigmask(SIG_BLOCK, NULL, &set);
136
- sigdelset(&set, SIG_IPI);
137
138
init_emu();
139
init_decoder();
140
@@ -XXX,XX +XXX,XX @@ int hvf_init_vcpu(CPUState *cpu)
141
}
142
}
143
144
- r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
145
- cpu->vcpu_dirty = 1;
146
- assert_hvf_ok(r);
147
-
148
if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
149
&hvf_state->hvf_caps->vmx_cap_pinbased)) {
150
abort();
81
--
151
--
82
2.20.1
152
2.20.1
83
153
84
154
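
The split above keeps the flow of vcpu creation and teardown in the generic hvf code and pushes only the per-target work behind hvf_arch_init_vcpu() and hvf_arch_vcpu_destroy(). A stripped-down sketch of that delegation follows; CPUState, the function bodies, and the stand-in fd handling are simplified illustrations, not the real QEMU definitions (signal setup is omitted):

    /* Sketch of the generic/arch split: the generic layer owns creation and
     * teardown, the arch layer only fills in what differs per target. */
    #include <stdio.h>

    typedef struct CPUState {
        int hvf_fd;          /* handle returned by the hypervisor */
        int vcpu_dirty;
    } CPUState;

    static int hvf_arch_init_vcpu(CPUState *cpu)       /* per-architecture part */
    {
        printf("arch-specific setup for vcpu %d\n", cpu->hvf_fd);
        return 0;
    }

    static void hvf_arch_vcpu_destroy(CPUState *cpu)   /* per-architecture part */
    {
        (void)cpu;
        printf("arch-specific teardown\n");
    }

    static int hvf_init_vcpu(CPUState *cpu)            /* generic part */
    {
        cpu->hvf_fd = 1;                /* stands in for hv_vcpu_create() */
        cpu->vcpu_dirty = 1;
        return hvf_arch_init_vcpu(cpu);
    }

    static void hvf_vcpu_destroy(CPUState *cpu)        /* generic part */
    {
        cpu->hvf_fd = -1;               /* stands in for hv_vcpu_destroy() */
        hvf_arch_vcpu_destroy(cpu);     /* arch-specific cleanup afterwards */
    }

    int main(void)
    {
        CPUState cpu = { 0 };
        if (hvf_init_vcpu(&cpu) == 0) {
            hvf_vcpu_destroy(&cpu);
        }
        return 0;
    }

The benefit shows up in the diffstat: target/i386/hvf/hvf.c loses the common signal and vcpu-creation code, and a future aarch64 backend only has to provide its own hvf_arch_* hooks.
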
1
From: Cédric Le Goater <clg@kaod.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Signed-off-by: Cédric Le Goater <clg@kaod.org>
3
There is no reason to call the hvf specific hvf_cpu_synchronize_state()
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
when we can just use the generic cpu_synchronize_state() instead. This
5
Message-id: 20200602135050.593692-1-clg@kaod.org
5
reduces our dependency on internal function definitions and
6
lets us make hvf_cpu_synchronize_state() static.
7
8
Signed-off-by: Alexander Graf <agraf@csgraf.de>
9
Reviewed-by: Sergio Lopez <slp@redhat.com>
10
Message-id: 20210519202253.76782-9-agraf@csgraf.de
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
13
---
8
docs/system/arm/aspeed.rst | 85 ++++++++++++++++++++++++++++++++++++++
14
accel/hvf/hvf-accel-ops.h | 1 -
9
docs/system/target-arm.rst | 1 +
15
accel/hvf/hvf-accel-ops.c | 2 +-
10
2 files changed, 86 insertions(+)
16
target/i386/hvf/x86hvf.c | 9 ++++-----
11
create mode 100644 docs/system/arm/aspeed.rst
17
3 files changed, 5 insertions(+), 7 deletions(-)
12
18
13
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
19
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
14
new file mode 100644
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX
21
--- a/accel/hvf/hvf-accel-ops.h
16
--- /dev/null
22
+++ b/accel/hvf/hvf-accel-ops.h
17
+++ b/docs/system/arm/aspeed.rst
18
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@
19
+Aspeed family boards (``*-bmc``, ``ast2500-evb``, ``ast2600-evb``)
24
#include "sysemu/cpus.h"
20
+==================================================================
25
21
+
26
int hvf_vcpu_exec(CPUState *);
22
+The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
27
-void hvf_cpu_synchronize_state(CPUState *);
23
+Aspeed evaluation boards. They are based on different releases of the
28
void hvf_cpu_synchronize_post_reset(CPUState *);
24
+Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
29
void hvf_cpu_synchronize_post_init(CPUState *);
25
+AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
30
void hvf_cpu_synchronize_pre_loadvm(CPUState *);
26
+with two ARM Cortex-A7 cores (1.2GHz).
31
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
27
+
28
+The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, SPI, I2C,
29
+etc.
30
+
31
+AST2400 SoC based machines :
32
+
33
+- ``palmetto-bmc`` OpenPOWER Palmetto POWER8 BMC
34
+
35
+AST2500 SoC based machines :
36
+
37
+- ``ast2500-evb`` Aspeed AST2500 Evaluation board
38
+- ``romulus-bmc`` OpenPOWER Romulus POWER9 BMC
39
+- ``witherspoon-bmc`` OpenPOWER Witherspoon POWER9 BMC
40
+- ``sonorapass-bmc`` OCP SonoraPass BMC
41
+- ``swift-bmc`` OpenPOWER Swift BMC POWER9
42
+
43
+AST2600 SoC based machines :
44
+
45
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
46
+- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC
47
+
48
+Supported devices
49
+-----------------
50
+
51
+ * SMP (for the AST2600 Cortex-A7)
52
+ * Interrupt Controller (VIC)
53
+ * Timer Controller
54
+ * RTC Controller
55
+ * I2C Controller
56
+ * System Control Unit (SCU)
57
+ * SRAM mapping
58
+ * X-DMA Controller (basic interface)
59
+ * Static Memory Controller (SMC or FMC) - Only SPI Flash support
60
+ * SPI Memory Controller
61
+ * USB 2.0 Controller
62
+ * SD/MMC storage controllers
63
+ * SDRAM controller (dummy interface for basic settings and training)
64
+ * Watchdog Controller
65
+ * GPIO Controller (Master only)
66
+ * UART
67
+ * Ethernet controllers
68
+
69
+
70
+Missing devices
71
+---------------
72
+
73
+ * Coprocessor support
74
+ * ADC (out of tree implementation)
75
+ * PWM and Fan Controller
76
+ * LPC Bus Controller
77
+ * Slave GPIO Controller
78
+ * Super I/O Controller
79
+ * Hash/Crypto Engine
80
+ * PCI-Express 1 Controller
81
+ * Graphic Display Controller
82
+ * PECI Controller
83
+ * MCTP Controller
84
+ * Mailbox Controller
85
+ * Virtual UART
86
+ * eSPI Controller
87
+ * I3C Controller
88
+
89
+Boot options
90
+------------
91
+
92
+The Aspeed machines can be started using the -kernel option to load a
93
+Linux kernel or from a firmware image which can be downloaded from the
94
+OpenPOWER jenkins :
95
+
96
+ https://openpower.xyz/
97
+
98
+The image should be attached as an MTD drive. Run :
99
+
100
+.. code-block:: bash
101
+
102
+ $ qemu-system-arm -M romulus-bmc -nic user \
103
+    -drive file=flash-romulus,format=raw,if=mtd -nographic
104
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
105
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
106
--- a/docs/system/target-arm.rst
33
--- a/accel/hvf/hvf-accel-ops.c
107
+++ b/docs/system/target-arm.rst
34
+++ b/accel/hvf/hvf-accel-ops.c
108
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
35
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
109
arm/realview
36
}
110
arm/versatile
37
}
111
arm/vexpress
38
112
+ arm/aspeed
39
-void hvf_cpu_synchronize_state(CPUState *cpu)
113
arm/musicpal
40
+static void hvf_cpu_synchronize_state(CPUState *cpu)
114
arm/nseries
41
{
115
arm/orangepi
42
if (!cpu->vcpu_dirty) {
43
run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL);
44
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/i386/hvf/x86hvf.c
47
+++ b/target/i386/hvf/x86hvf.c
48
@@ -XXX,XX +XXX,XX @@
49
#include "cpu.h"
50
#include "x86_descr.h"
51
#include "x86_decode.h"
52
+#include "sysemu/hw_accel.h"
53
54
#include "hw/i386/apic_internal.h"
55
56
#include <Hypervisor/hv.h>
57
#include <Hypervisor/hv_vmx.h>
58
59
-#include "accel/hvf/hvf-accel-ops.h"
60
-
61
void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
62
SegmentCache *qseg, bool is_tr)
63
{
64
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
65
env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
66
67
if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
68
- hvf_cpu_synchronize_state(cpu_state);
69
+ cpu_synchronize_state(cpu_state);
70
do_cpu_init(cpu);
71
}
72
73
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
74
cpu_state->halted = 0;
75
}
76
if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) {
77
- hvf_cpu_synchronize_state(cpu_state);
78
+ cpu_synchronize_state(cpu_state);
79
do_cpu_sipi(cpu);
80
}
81
if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) {
82
cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR;
83
- hvf_cpu_synchronize_state(cpu_state);
84
+ cpu_synchronize_state(cpu_state);
85
apic_handle_tpr_access_report(cpu->apic_state, env->eip,
86
env->tpr_access_type);
87
}
116
--
88
--
117
2.20.1
89
2.20.1
118
90
119
91
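
The replacement above relies on cpu_synchronize_state() being a generic entry point that forwards to whichever synchronize hook the active accelerator registered, so callers no longer need the hvf-specific symbol. A rough, self-contained sketch of that indirection follows; the function-pointer variable is a simplified stand-in for QEMU's accelerator-ops machinery, and only the shape of the dispatch matches the real code:

    /* Simplified model of generic-vs-accelerator synchronize dispatch. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct CPUState { bool vcpu_dirty; } CPUState;

    /* Accelerator-private implementation (can now be 'static'). */
    static void hvf_cpu_synchronize_state(CPUState *cpu)
    {
        if (!cpu->vcpu_dirty) {
            printf("fetching register state from the hypervisor\n");
            cpu->vcpu_dirty = true;
        }
    }

    /* Stand-in for the hook the accelerator registers at startup. */
    static void (*synchronize_state_hook)(CPUState *) = hvf_cpu_synchronize_state;

    /* Generic entry point; callers need not know which accelerator is active. */
    static void cpu_synchronize_state(CPUState *cpu)
    {
        if (synchronize_state_hook) {
            synchronize_state_hook(cpu);
        }
    }

    int main(void)
    {
        CPUState cpu = { .vcpu_dirty = false };
        cpu_synchronize_state(&cpu);
        return 0;
    }

Because x86hvf.c now goes through the generic call, it includes sysemu/hw_accel.h instead of hvf-accel-ops.h, which is exactly the include swap visible in the diff above.
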
1
From: Thomas Huth <thuth@redhat.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
As described by Edgar here:
3
The hvf accel synchronize functions are only used as input for local
4
callback functions, so we can make them static.
4
5
5
https://www.mail-archive.com/qemu-devel@nongnu.org/msg605124.html
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
6
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
7
we can use the Ubuntu kernel for testing the xlnx-versal-virt machine.
8
Message-id: 20210519202253.76782-10-agraf@csgraf.de
8
So let's add a boot test for this now.
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
12
Signed-off-by: Thomas Huth <thuth@redhat.com>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
15
Message-id: 20200525141237.15243-1-thuth@redhat.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
11
---
18
tests/acceptance/boot_linux_console.py | 26 ++++++++++++++++++++++++++
12
accel/hvf/hvf-accel-ops.h | 3 ---
19
1 file changed, 26 insertions(+)
13
accel/hvf/hvf-accel-ops.c | 6 +++---
14
2 files changed, 3 insertions(+), 6 deletions(-)
20
15
21
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
16
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
22
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
23
--- a/tests/acceptance/boot_linux_console.py
18
--- a/accel/hvf/hvf-accel-ops.h
24
+++ b/tests/acceptance/boot_linux_console.py
19
+++ b/accel/hvf/hvf-accel-ops.h
25
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):
20
@@ -XXX,XX +XXX,XX @@
26
console_pattern = 'Kernel command line: %s' % kernel_command_line
21
#include "sysemu/cpus.h"
27
self.wait_for_console_pattern(console_pattern)
22
28
23
int hvf_vcpu_exec(CPUState *);
29
+ def test_aarch64_xlnx_versal_virt(self):
24
-void hvf_cpu_synchronize_post_reset(CPUState *);
30
+ """
25
-void hvf_cpu_synchronize_post_init(CPUState *);
31
+ :avocado: tags=arch:aarch64
26
-void hvf_cpu_synchronize_pre_loadvm(CPUState *);
32
+ :avocado: tags=machine:xlnx-versal-virt
27
33
+ :avocado: tags=device:pl011
28
#endif /* HVF_CPUS_H */
34
+ :avocado: tags=device:arm_gicv3
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
35
+ """
30
index XXXXXXX..XXXXXXX 100644
36
+ kernel_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
31
--- a/accel/hvf/hvf-accel-ops.c
37
+ 'bionic-updates/main/installer-arm64/current/images/'
32
+++ b/accel/hvf/hvf-accel-ops.c
38
+ 'netboot/ubuntu-installer/arm64/linux')
33
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
39
+ kernel_hash = '5bfc54cf7ed8157d93f6e5b0241e727b6dc22c50'
34
cpu->vcpu_dirty = false;
40
+ kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
35
}
41
+
36
42
+ initrd_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
37
-void hvf_cpu_synchronize_post_reset(CPUState *cpu)
43
+ 'bionic-updates/main/installer-arm64/current/images/'
38
+static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
44
+ 'netboot/ubuntu-installer/arm64/initrd.gz')
39
{
45
+ initrd_hash = 'd385d3e88d53e2004c5d43cbe668b458a094f772'
40
run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
46
+ initrd_path = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
41
}
47
+
42
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
48
+ self.vm.set_console()
43
cpu->vcpu_dirty = false;
49
+ self.vm.add_args('-m', '2G',
44
}
50
+ '-kernel', kernel_path,
45
51
+ '-initrd', initrd_path)
46
-void hvf_cpu_synchronize_post_init(CPUState *cpu)
52
+ self.vm.launch()
47
+static void hvf_cpu_synchronize_post_init(CPUState *cpu)
53
+ self.wait_for_console_pattern('Checked W+X mappings: passed')
48
{
54
+
49
run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
55
def test_arm_virt(self):
50
}
56
"""
51
@@ -XXX,XX +XXX,XX @@ static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
57
:avocado: tags=arch:arm
52
cpu->vcpu_dirty = true;
53
}
54
55
-void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
56
+static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
57
{
58
run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
59
}
58
--
60
--
59
2.20.1
61
2.20.1
60
62
61
63
1
From: Eden Mikitas <e.mikitas@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
When inserting the value retrieved (rx) from the spi slave, rx is pushed to
3
We can move the definition of hvf_vcpu_exec() into our internal
4
rx_fifo after being cast to uint8_t. rx_fifo is a fifo32, and the rx
4
hvf header, obsoleting the need for hvf-accel-ops.h.
5
register the driver uses is also 32-bit. This zeroes the 24 most
6
significant bits of rx. This proved problematic with devices that expect to
7
use the whole 32 bits of the rx register.
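
For the iMX SPI fix described above, a tiny standalone C program (not QEMU code) showing the truncation the cast caused; the value 0xCAFE1234 is just an example.

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t rx = 0xCAFE1234u;        /* hypothetical word read back from the SPI slave */

    uint32_t old_push = (uint8_t)rx;  /* old code: the cast keeps only the low byte */
    uint32_t new_push = rx;           /* fixed code: all 32 bits reach the fifo32 */

    printf("old: 0x%08" PRIX32 "\n", old_push);  /* prints 0x00000034 */
    printf("new: 0x%08" PRIX32 "\n", new_push);  /* prints 0xCAFE1234 */
    return 0;
}
```

Since fifo32_push() takes a uint32_t anyway, dropping the cast (as the one-line hunk below does) is all that is needed.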
8
5
9
Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
6
Signed-off-by: Alexander Graf <agraf@csgraf.de>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Reviewed-by: Sergio Lopez <slp@redhat.com>
8
Message-id: 20210519202253.76782-11-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/ssi/imx_spi.c | 2 +-
12
accel/hvf/hvf-accel-ops.h | 17 -----------------
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
include/sysemu/hvf_int.h | 1 +
14
accel/hvf/hvf-accel-ops.c | 2 --
15
target/i386/hvf/hvf.c | 2 --
16
4 files changed, 1 insertion(+), 21 deletions(-)
17
delete mode 100644 accel/hvf/hvf-accel-ops.h
15
18
16
diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
19
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
20
deleted file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- a/accel/hvf/hvf-accel-ops.h
23
+++ /dev/null
24
@@ -XXX,XX +XXX,XX @@
25
-/*
26
- * Accelerator CPUS Interface
27
- *
28
- * Copyright 2020 SUSE LLC
29
- *
30
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
31
- * See the COPYING file in the top-level directory.
32
- */
33
-
34
-#ifndef HVF_CPUS_H
35
-#define HVF_CPUS_H
36
-
37
-#include "sysemu/cpus.h"
38
-
39
-int hvf_vcpu_exec(CPUState *);
40
-
41
-#endif /* HVF_CPUS_H */
42
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
17
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/ssi/imx_spi.c
44
--- a/include/sysemu/hvf_int.h
19
+++ b/hw/ssi/imx_spi.c
45
+++ b/include/sysemu/hvf_int.h
20
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
46
@@ -XXX,XX +XXX,XX @@ extern HVFState *hvf_state;
21
if (fifo32_is_full(&s->rx_fifo)) {
47
void assert_hvf_ok(hv_return_t ret);
22
s->regs[ECSPI_STATREG] |= ECSPI_STATREG_RO;
48
int hvf_arch_init_vcpu(CPUState *cpu);
23
} else {
49
void hvf_arch_vcpu_destroy(CPUState *cpu);
24
- fifo32_push(&s->rx_fifo, (uint8_t)rx);
50
+int hvf_vcpu_exec(CPUState *);
25
+ fifo32_push(&s->rx_fifo, rx);
51
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
26
}
52
int hvf_put_registers(CPUState *);
27
53
int hvf_get_registers(CPUState *);
28
if (s->burst_length <= 0) {
54
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/accel/hvf/hvf-accel-ops.c
57
+++ b/accel/hvf/hvf-accel-ops.c
58
@@ -XXX,XX +XXX,XX @@
59
#include "sysemu/runstate.h"
60
#include "qemu/guest-random.h"
61
62
-#include "hvf-accel-ops.h"
63
-
64
HVFState *hvf_state;
65
66
/* Memory slots */
67
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/target/i386/hvf/hvf.c
70
+++ b/target/i386/hvf/hvf.c
71
@@ -XXX,XX +XXX,XX @@
72
#include "qemu/accel.h"
73
#include "target/i386/cpu.h"
74
75
-#include "hvf-accel-ops.h"
76
-
77
void vmx_update_tpr(CPUState *cpu)
78
{
79
/* TODO: need integrate APIC handling */
29
--
80
--
30
2.20.1
81
2.20.1
31
82
32
83
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Replace printf() calls by qemu_log_mask(), which is disabled
3
We will need more than a single field for hvf going forward. To keep
4
by default. This avoids flooding the terminal when fuzzing the
4
the global vcpu struct uncluttered, let's allocate a special hvf vcpu
5
device.
5
struct, similar to how hax does it.
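
For the pxa2xx logging conversion above, a hedged sketch of the pattern the patch adopts; demo_mmio_read and the 0x0 "ID register" are made up for illustration, and only qemu_log_mask(), LOG_GUEST_ERROR and HWADDR_PRIx come from the hunks below.

```c
#include "qemu/osdep.h"
#include "qemu/log.h"
#include "exec/hwaddr.h"

/*
 * Hypothetical MMIO read handler (not one of the real pxa2xx callbacks):
 * unknown offsets are reported through the guest_errors log category,
 * which is off by default, instead of an unconditional printf().
 */
static uint64_t demo_mmio_read(void *opaque, hwaddr addr, unsigned size)
{
    switch (addr) {
    case 0x0:                       /* made-up ID register */
        return 0xdeadbeef;
    default:
        qemu_log_mask(LOG_GUEST_ERROR,
                      "%s: Bad read offset 0x%" HWADDR_PRIx "\n",
                      __func__, addr);
        return 0;
    }
}
```

The guest_errors category can be enabled at runtime with `-d guest_errors` when the messages are actually wanted.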
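And for the hvf vcpu struct patch above, a toy, self-contained C program illustrating the refactoring pattern (accelerator-private state behind a pointer rather than a flat fd field on the CPU struct); only struct hvf_vcpu_state and its fd member come from the real patch, everything else here is a stand-in, not QEMU code.

```c
#include <stdio.h>
#include <stdlib.h>

/* Matches the container the patch adds to include/sysemu/hvf_int.h. */
struct hvf_vcpu_state {
    int fd;                          /* the Hypervisor.framework vcpu handle */
    /* future hvf-only fields go here, not in the generic CPU struct */
};

/* Toy stand-in for CPUState: the bare "int hvf_fd" becomes a pointer. */
struct toy_cpu_state {
    struct hvf_vcpu_state *hvf;      /* was: int hvf_fd; */
};

int main(void)
{
    struct toy_cpu_state cpu = { 0 };

    cpu.hvf = calloc(1, sizeof(*cpu.hvf));  /* g_malloc0() in the real patch */
    cpu.hvf->fd = 42;                       /* hv_vcpu_create() fills this in */

    /* every former cpu->hvf_fd access becomes cpu->hvf->fd */
    printf("vcpu fd = %d\n", cpu.hvf->fd);

    free(cpu.hvf);
    return 0;
}
```

That is why the bulk of the diff below is a mechanical s/cpu->hvf_fd/cpu->hvf->fd/ across the i386 hvf code.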
6
6
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Message-id: 20200525114123.21317-3-f4bug@amsat.org
8
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
9
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210519202253.76782-12-agraf@csgraf.de
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
15
---
12
hw/arm/pxa2xx.c | 66 ++++++++++++++++++++++++++++++++++++-------------
16
include/hw/core/cpu.h | 3 +-
13
1 file changed, 49 insertions(+), 17 deletions(-)
17
include/sysemu/hvf_int.h | 4 +
18
target/i386/hvf/vmx.h | 24 +++--
19
accel/hvf/hvf-accel-ops.c | 8 +-
20
target/i386/hvf/hvf.c | 104 +++++++++---------
21
target/i386/hvf/x86.c | 28 ++---
22
target/i386/hvf/x86_descr.c | 26 ++---
23
target/i386/hvf/x86_emu.c | 62 +++++------
24
target/i386/hvf/x86_mmu.c | 4 +-
25
target/i386/hvf/x86_task.c | 12 +--
26
target/i386/hvf/x86hvf.c | 210 ++++++++++++++++++------------------
27
11 files changed, 248 insertions(+), 237 deletions(-)
14
28
15
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
29
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
16
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/pxa2xx.c
31
--- a/include/hw/core/cpu.h
18
+++ b/hw/arm/pxa2xx.c
32
+++ b/include/hw/core/cpu.h
33
@@ -XXX,XX +XXX,XX @@ struct KVMState;
34
struct kvm_run;
35
36
struct hax_vcpu_state;
37
+struct hvf_vcpu_state;
38
39
#define TB_JMP_CACHE_BITS 12
40
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
41
@@ -XXX,XX +XXX,XX @@ struct CPUState {
42
43
struct hax_vcpu_state *hax_vcpu;
44
45
- int hvf_fd;
46
+ struct hvf_vcpu_state *hvf;
47
48
/* track IOMMUs whose translations we've cached in the TCG TLB */
49
GArray *iommu_notifiers;
50
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
51
index XXXXXXX..XXXXXXX 100644
52
--- a/include/sysemu/hvf_int.h
53
+++ b/include/sysemu/hvf_int.h
54
@@ -XXX,XX +XXX,XX @@ struct HVFState {
55
};
56
extern HVFState *hvf_state;
57
58
+struct hvf_vcpu_state {
59
+ int fd;
60
+};
61
+
62
void assert_hvf_ok(hv_return_t ret);
63
int hvf_arch_init_vcpu(CPUState *cpu);
64
void hvf_arch_vcpu_destroy(CPUState *cpu);
65
diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/i386/hvf/vmx.h
68
+++ b/target/i386/hvf/vmx.h
19
@@ -XXX,XX +XXX,XX @@
69
@@ -XXX,XX +XXX,XX @@
20
#include "sysemu/blockdev.h"
70
#include "vmcs.h"
21
#include "sysemu/qtest.h"
71
#include "cpu.h"
22
#include "qemu/cutils.h"
72
#include "x86.h"
23
+#include "qemu/log.h"
73
+#include "sysemu/hvf.h"
24
74
+#include "sysemu/hvf_int.h"
25
static struct {
75
26
hwaddr io_base;
76
#include "exec/address-spaces.h"
27
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_pm_read(void *opaque, hwaddr addr,
77
28
return s->pm_regs[addr >> 2];
78
@@ -XXX,XX +XXX,XX @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip)
29
default:
79
uint64_t val;
30
fail:
80
31
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
81
/* BUG, should take considering overlap.. */
32
+ qemu_log_mask(LOG_GUEST_ERROR,
82
- wreg(cpu->hvf_fd, HV_X86_RIP, rip);
33
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
83
+ wreg(cpu->hvf->fd, HV_X86_RIP, rip);
34
+ __func__, addr);
84
env->eip = rip;
35
break;
85
36
}
86
/* after moving forward in rip, we need to clean INTERRUPTABILITY */
87
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
88
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
89
if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING |
90
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
91
env->hflags &= ~HF_INHIBIT_IRQ_MASK;
92
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY,
93
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY,
94
val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
95
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING));
96
}
97
@@ -XXX,XX +XXX,XX @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu)
98
CPUX86State *env = &x86_cpu->env;
99
100
env->hflags2 &= ~HF2_NMI_MASK;
101
- uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
102
+ uint32_t gi = (uint32_t) rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
103
gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
104
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
105
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
106
}
107
108
static inline void vmx_set_nmi_blocking(CPUState *cpu)
109
@@ -XXX,XX +XXX,XX @@ static inline void vmx_set_nmi_blocking(CPUState *cpu)
110
CPUX86State *env = &x86_cpu->env;
111
112
env->hflags2 |= HF2_NMI_MASK;
113
- uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY);
114
+ uint32_t gi = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY);
115
gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING;
116
- wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
117
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi);
118
}
119
120
static inline void vmx_set_nmi_window_exiting(CPUState *cpu)
121
{
122
uint64_t val;
123
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
124
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
125
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
126
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
127
VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
128
129
}
130
@@ -XXX,XX +XXX,XX @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu)
131
{
132
133
uint64_t val;
134
- val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
135
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
136
+ val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
137
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
138
~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING);
139
}
140
141
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
142
index XXXXXXX..XXXXXXX 100644
143
--- a/accel/hvf/hvf-accel-ops.c
144
+++ b/accel/hvf/hvf-accel-ops.c
145
@@ -XXX,XX +XXX,XX @@ type_init(hvf_type_init);
146
147
static void hvf_vcpu_destroy(CPUState *cpu)
148
{
149
- hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
150
+ hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
151
assert_hvf_ok(ret);
152
153
hvf_arch_vcpu_destroy(cpu);
154
+ g_free(cpu->hvf);
155
+ cpu->hvf = NULL;
156
}
157
158
static int hvf_init_vcpu(CPUState *cpu)
159
{
160
int r;
161
162
+ cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
163
+
164
/* init cpu signals */
165
sigset_t set;
166
struct sigaction sigact;
167
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
168
pthread_sigmask(SIG_BLOCK, NULL, &set);
169
sigdelset(&set, SIG_IPI);
170
171
- r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
172
+ r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
173
cpu->vcpu_dirty = 1;
174
assert_hvf_ok(r);
175
176
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
177
index XXXXXXX..XXXXXXX 100644
178
--- a/target/i386/hvf/hvf.c
179
+++ b/target/i386/hvf/hvf.c
180
@@ -XXX,XX +XXX,XX @@ void vmx_update_tpr(CPUState *cpu)
181
int tpr = cpu_get_apic_tpr(x86_cpu->apic_state) << 4;
182
int irr = apic_get_highest_priority_irr(x86_cpu->apic_state);
183
184
- wreg(cpu->hvf_fd, HV_X86_TPR, tpr);
185
+ wreg(cpu->hvf->fd, HV_X86_TPR, tpr);
186
if (irr == -1) {
187
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
188
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
189
} else {
190
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
191
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 :
192
irr >> 4);
193
}
194
}
195
@@ -XXX,XX +XXX,XX @@ void vmx_update_tpr(CPUState *cpu)
196
static void update_apic_tpr(CPUState *cpu)
197
{
198
X86CPU *x86_cpu = X86_CPU(cpu);
199
- int tpr = rreg(cpu->hvf_fd, HV_X86_TPR) >> 4;
200
+ int tpr = rreg(cpu->hvf->fd, HV_X86_TPR) >> 4;
201
cpu_set_apic_tpr(x86_cpu->apic_state, tpr);
202
}
203
204
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
205
}
206
207
/* set VMCS control fields */
208
- wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS,
209
+ wvmcs(cpu->hvf->fd, VMCS_PIN_BASED_CTLS,
210
cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased,
211
VMCS_PIN_BASED_CTLS_EXTINT |
212
VMCS_PIN_BASED_CTLS_NMI |
213
VMCS_PIN_BASED_CTLS_VNMI));
214
- wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS,
215
+ wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS,
216
cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased,
217
VMCS_PRI_PROC_BASED_CTLS_HLT |
218
VMCS_PRI_PROC_BASED_CTLS_MWAIT |
219
VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET |
220
VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) |
221
VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL);
222
- wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS,
223
+ wvmcs(cpu->hvf->fd, VMCS_SEC_PROC_BASED_CTLS,
224
cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2,
225
VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES));
226
227
- wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
228
+ wvmcs(cpu->hvf->fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry,
229
0));
230
- wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
231
+ wvmcs(cpu->hvf->fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */
232
233
- wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0);
234
+ wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
235
236
x86cpu = X86_CPU(cpu);
237
x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
238
239
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1);
240
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1);
241
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1);
242
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1);
243
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1);
244
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1);
245
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1);
246
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1);
247
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1);
248
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1);
249
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1);
250
- hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1);
251
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1);
252
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1);
253
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_CSTAR, 1);
254
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FMASK, 1);
255
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FSBASE, 1);
256
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_GSBASE, 1);
257
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_KERNELGSBASE, 1);
258
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_TSC_AUX, 1);
259
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_TSC, 1);
260
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_CS, 1);
261
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_EIP, 1);
262
+ hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_ESP, 1);
263
37
return 0;
264
return 0;
38
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pm_write(void *opaque, hwaddr addr,
265
}
39
s->pm_regs[addr >> 2] = value;
266
@@ -XXX,XX +XXX,XX @@ static void hvf_store_events(CPUState *cpu, uint32_t ins_len, uint64_t idtvec_in
267
}
268
if (idtvec_info & VMCS_IDT_VEC_ERRCODE_VALID) {
269
env->has_error_code = true;
270
- env->error_code = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERROR);
271
+ env->error_code = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_ERROR);
272
}
273
}
274
- if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
275
+ if ((rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
276
VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) {
277
env->hflags2 |= HF2_NMI_MASK;
278
} else {
279
env->hflags2 &= ~HF2_NMI_MASK;
280
}
281
- if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) &
282
+ if (rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) &
283
(VMCS_INTERRUPTIBILITY_STI_BLOCKING |
284
VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) {
285
env->hflags |= HF_INHIBIT_IRQ_MASK;
286
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
287
return EXCP_HLT;
288
}
289
290
- hv_return_t r = hv_vcpu_run(cpu->hvf_fd);
291
+ hv_return_t r = hv_vcpu_run(cpu->hvf->fd);
292
assert_hvf_ok(r);
293
294
/* handle VMEXIT */
295
- uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON);
296
- uint64_t exit_qual = rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION);
297
- uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd,
298
+ uint64_t exit_reason = rvmcs(cpu->hvf->fd, VMCS_EXIT_REASON);
299
+ uint64_t exit_qual = rvmcs(cpu->hvf->fd, VMCS_EXIT_QUALIFICATION);
300
+ uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf->fd,
301
VMCS_EXIT_INSTRUCTION_LENGTH);
302
303
- uint64_t idtvec_info = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
304
+ uint64_t idtvec_info = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
305
306
hvf_store_events(cpu, ins_len, idtvec_info);
307
- rip = rreg(cpu->hvf_fd, HV_X86_RIP);
308
- env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
309
+ rip = rreg(cpu->hvf->fd, HV_X86_RIP);
310
+ env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
311
312
qemu_mutex_lock_iothread();
313
314
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
315
case EXIT_REASON_EPT_FAULT:
316
{
317
hvf_slot *slot;
318
- uint64_t gpa = rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS);
319
+ uint64_t gpa = rvmcs(cpu->hvf->fd, VMCS_GUEST_PHYSICAL_ADDRESS);
320
321
if (((idtvec_info & VMCS_IDT_VEC_VALID) == 0) &&
322
((exit_qual & EXIT_QUAL_NMIUDTI) != 0)) {
323
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
324
store_regs(cpu);
325
break;
326
} else if (!string && !in) {
327
- RAX(env) = rreg(cpu->hvf_fd, HV_X86_RAX);
328
+ RAX(env) = rreg(cpu->hvf->fd, HV_X86_RAX);
329
hvf_handle_io(env, port, &RAX(env), 1, size, 1);
330
macvm_set_rip(cpu, rip + ins_len);
331
break;
332
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
40
break;
333
break;
41
}
334
}
42
-
335
case EXIT_REASON_CPUID: {
43
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
336
- uint32_t rax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
44
+ qemu_log_mask(LOG_GUEST_ERROR,
337
- uint32_t rbx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX);
45
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
338
- uint32_t rcx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
46
+ __func__, addr);
339
- uint32_t rdx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
340
+ uint32_t rax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
341
+ uint32_t rbx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RBX);
342
+ uint32_t rcx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
343
+ uint32_t rdx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
344
345
if (rax == 1) {
346
/* CPUID1.ecx.OSXSAVE needs to know CR4 */
347
- env->cr[4] = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
348
+ env->cr[4] = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
349
}
350
hvf_cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx);
351
352
- wreg(cpu->hvf_fd, HV_X86_RAX, rax);
353
- wreg(cpu->hvf_fd, HV_X86_RBX, rbx);
354
- wreg(cpu->hvf_fd, HV_X86_RCX, rcx);
355
- wreg(cpu->hvf_fd, HV_X86_RDX, rdx);
356
+ wreg(cpu->hvf->fd, HV_X86_RAX, rax);
357
+ wreg(cpu->hvf->fd, HV_X86_RBX, rbx);
358
+ wreg(cpu->hvf->fd, HV_X86_RCX, rcx);
359
+ wreg(cpu->hvf->fd, HV_X86_RDX, rdx);
360
361
macvm_set_rip(cpu, rip + ins_len);
362
break;
363
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
364
case EXIT_REASON_XSETBV: {
365
X86CPU *x86_cpu = X86_CPU(cpu);
366
CPUX86State *env = &x86_cpu->env;
367
- uint32_t eax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX);
368
- uint32_t ecx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX);
369
- uint32_t edx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX);
370
+ uint32_t eax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX);
371
+ uint32_t ecx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX);
372
+ uint32_t edx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX);
373
374
if (ecx) {
375
macvm_set_rip(cpu, rip + ins_len);
376
break;
377
}
378
env->xcr0 = ((uint64_t)edx << 32) | eax;
379
- wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1);
380
+ wreg(cpu->hvf->fd, HV_X86_XCR0, env->xcr0 | 1);
381
macvm_set_rip(cpu, rip + ins_len);
382
break;
383
}
384
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
385
386
switch (cr) {
387
case 0x0: {
388
- macvm_set_cr0(cpu->hvf_fd, RRX(env, reg));
389
+ macvm_set_cr0(cpu->hvf->fd, RRX(env, reg));
390
break;
391
}
392
case 4: {
393
- macvm_set_cr4(cpu->hvf_fd, RRX(env, reg));
394
+ macvm_set_cr4(cpu->hvf->fd, RRX(env, reg));
395
break;
396
}
397
case 8: {
398
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
399
break;
400
}
401
case EXIT_REASON_TASK_SWITCH: {
402
- uint64_t vinfo = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO);
403
+ uint64_t vinfo = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO);
404
x68_segment_selector sel = {.sel = exit_qual & 0xffff};
405
vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3,
406
vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo
407
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
408
break;
409
}
410
case EXIT_REASON_RDPMC:
411
- wreg(cpu->hvf_fd, HV_X86_RAX, 0);
412
- wreg(cpu->hvf_fd, HV_X86_RDX, 0);
413
+ wreg(cpu->hvf->fd, HV_X86_RAX, 0);
414
+ wreg(cpu->hvf->fd, HV_X86_RDX, 0);
415
macvm_set_rip(cpu, rip + ins_len);
416
break;
417
case VMX_REASON_VMCALL:
418
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
419
index XXXXXXX..XXXXXXX 100644
420
--- a/target/i386/hvf/x86.c
421
+++ b/target/i386/hvf/x86.c
422
@@ -XXX,XX +XXX,XX @@ bool x86_read_segment_descriptor(struct CPUState *cpu,
423
}
424
425
if (GDT_SEL == sel.ti) {
426
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
427
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
428
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
429
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
430
} else {
431
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
432
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
433
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
434
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
435
}
436
437
if (sel.index * 8 >= limit) {
438
@@ -XXX,XX +XXX,XX @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
439
uint32_t limit;
440
441
if (GDT_SEL == sel.ti) {
442
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
443
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
444
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE);
445
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
446
} else {
447
- base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
448
- limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
449
+ base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE);
450
+ limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT);
451
}
452
453
if (sel.index * 8 >= limit) {
454
@@ -XXX,XX +XXX,XX @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
455
bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
456
int gate)
457
{
458
- target_ulong base = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE);
459
- uint32_t limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
460
+ target_ulong base = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_BASE);
461
+ uint32_t limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
462
463
memset(idt_desc, 0, sizeof(*idt_desc));
464
if (gate * 8 >= limit) {
465
@@ -XXX,XX +XXX,XX @@ bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc,
466
467
bool x86_is_protected(struct CPUState *cpu)
468
{
469
- uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
470
+ uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
471
return cr0 & CR0_PE;
472
}
473
474
@@ -XXX,XX +XXX,XX @@ bool x86_is_v8086(struct CPUState *cpu)
475
476
bool x86_is_long_mode(struct CPUState *cpu)
477
{
478
- return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
479
+ return rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA;
480
}
481
482
bool x86_is_long64_mode(struct CPUState *cpu)
483
@@ -XXX,XX +XXX,XX @@ bool x86_is_long64_mode(struct CPUState *cpu)
484
485
bool x86_is_paging_mode(struct CPUState *cpu)
486
{
487
- uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
488
+ uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
489
return cr0 & CR0_PG;
490
}
491
492
bool x86_is_pae_enabled(struct CPUState *cpu)
493
{
494
- uint64_t cr4 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4);
495
+ uint64_t cr4 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4);
496
return cr4 & CR4_PAE;
497
}
498
499
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
500
index XXXXXXX..XXXXXXX 100644
501
--- a/target/i386/hvf/x86_descr.c
502
+++ b/target/i386/hvf/x86_descr.c
503
@@ -XXX,XX +XXX,XX @@ static const struct vmx_segment_field {
504
505
uint32_t vmx_read_segment_limit(CPUState *cpu, X86Seg seg)
506
{
507
- return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
508
+ return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
509
}
510
511
uint32_t vmx_read_segment_ar(CPUState *cpu, X86Seg seg)
512
{
513
- return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
514
+ return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
515
}
516
517
uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg)
518
{
519
- return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
520
+ return rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
521
}
522
523
x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg)
524
{
525
x68_segment_selector sel;
526
- sel.sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
527
+ sel.sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
528
return sel;
529
}
530
531
void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selector, X86Seg seg)
532
{
533
- wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel);
534
+ wvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector, selector.sel);
535
}
536
537
void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
538
{
539
- desc->sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector);
540
- desc->base = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base);
541
- desc->limit = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit);
542
- desc->ar = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes);
543
+ desc->sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector);
544
+ desc->base = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base);
545
+ desc->limit = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit);
546
+ desc->ar = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes);
547
}
548
549
void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg)
550
{
551
const struct vmx_segment_field *sf = &vmx_segment_fields[seg];
552
553
- wvmcs(cpu->hvf_fd, sf->base, desc->base);
554
- wvmcs(cpu->hvf_fd, sf->limit, desc->limit);
555
- wvmcs(cpu->hvf_fd, sf->selector, desc->sel);
556
- wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar);
557
+ wvmcs(cpu->hvf->fd, sf->base, desc->base);
558
+ wvmcs(cpu->hvf->fd, sf->limit, desc->limit);
559
+ wvmcs(cpu->hvf->fd, sf->selector, desc->sel);
560
+ wvmcs(cpu->hvf->fd, sf->ar_bytes, desc->ar);
561
}
562
563
void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc)
564
diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
565
index XXXXXXX..XXXXXXX 100644
566
--- a/target/i386/hvf/x86_emu.c
567
+++ b/target/i386/hvf/x86_emu.c
568
@@ -XXX,XX +XXX,XX @@ void simulate_rdmsr(struct CPUState *cpu)
569
570
switch (msr) {
571
case MSR_IA32_TSC:
572
- val = rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET);
573
+ val = rdtscp() + rvmcs(cpu->hvf->fd, VMCS_TSC_OFFSET);
47
break;
574
break;
48
}
575
case MSR_IA32_APICBASE:
49
}
576
val = cpu_get_apic_base(X86_CPU(cpu)->apic_state);
50
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_cm_read(void *opaque, hwaddr addr,
577
@@ -XXX,XX +XXX,XX @@ void simulate_rdmsr(struct CPUState *cpu)
51
return s->cm_regs[CCCR >> 2] | (3 << 28);
578
val = x86_cpu->ucode_rev;
52
53
default:
54
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
55
+ qemu_log_mask(LOG_GUEST_ERROR,
56
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
57
+ __func__, addr);
58
break;
579
break;
59
}
580
case MSR_EFER:
60
return 0;
581
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER);
61
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_cm_write(void *opaque, hwaddr addr,
582
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER);
62
break;
583
break;
63
584
case MSR_FSBASE:
64
default:
585
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE);
65
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
586
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE);
66
+ qemu_log_mask(LOG_GUEST_ERROR,
67
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
68
+ __func__, addr);
69
break;
587
break;
70
}
588
case MSR_GSBASE:
71
}
589
- val = rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE);
72
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_mm_read(void *opaque, hwaddr addr,
590
+ val = rvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE);
73
return s->mm_regs[addr >> 2];
74
/* fall through */
75
default:
76
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
77
+ qemu_log_mask(LOG_GUEST_ERROR,
78
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
79
+ __func__, addr);
80
break;
591
break;
81
}
592
case MSR_KERNELGSBASE:
82
return 0;
593
- val = rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE);
83
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_mm_write(void *opaque, hwaddr addr,
594
+ val = rvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE);
84
}
85
86
default:
87
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
88
+ qemu_log_mask(LOG_GUEST_ERROR,
89
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
90
+ __func__, addr);
91
break;
595
break;
92
}
596
case MSR_STAR:
93
}
597
abort();
94
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_ssp_read(void *opaque, hwaddr addr,
598
@@ -XXX,XX +XXX,XX @@ void simulate_wrmsr(struct CPUState *cpu)
95
case SSACD:
599
cpu_set_apic_base(X86_CPU(cpu)->apic_state, data);
96
return s->ssacd;
97
default:
98
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
99
+ qemu_log_mask(LOG_GUEST_ERROR,
100
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
101
+ __func__, addr);
102
break;
600
break;
103
}
601
case MSR_FSBASE:
104
return 0;
602
- wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data);
105
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_ssp_write(void *opaque, hwaddr addr,
603
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE, data);
106
break;
604
break;
107
605
case MSR_GSBASE:
108
default:
606
- wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data);
109
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
607
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE, data);
110
+ qemu_log_mask(LOG_GUEST_ERROR,
111
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
112
+ __func__, addr);
113
break;
608
break;
114
}
609
case MSR_KERNELGSBASE:
115
}
610
- wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data);
116
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_rtc_read(void *opaque, hwaddr addr,
611
+ wvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE, data);
117
else
118
return s->last_swcr;
119
default:
120
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
121
+ qemu_log_mask(LOG_GUEST_ERROR,
122
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
123
+ __func__, addr);
124
break;
612
break;
125
}
613
case MSR_STAR:
126
return 0;
614
abort();
127
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_rtc_write(void *opaque, hwaddr addr,
615
@@ -XXX,XX +XXX,XX @@ void simulate_wrmsr(struct CPUState *cpu)
128
break;
616
break;
129
617
case MSR_EFER:
130
default:
618
/*printf("new efer %llx\n", EFER(cpu));*/
131
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
619
- wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data);
132
+ qemu_log_mask(LOG_GUEST_ERROR,
620
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER, data);
133
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
621
if (data & MSR_EFER_NXE) {
134
+ __func__, addr);
622
- hv_vcpu_invalidate_tlb(cpu->hvf_fd);
135
}
623
+ hv_vcpu_invalidate_tlb(cpu->hvf->fd);
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2c_read(void *opaque, hwaddr addr,
139
s->ibmr = 0;
140
return s->ibmr;
141
default:
142
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
143
+ qemu_log_mask(LOG_GUEST_ERROR,
144
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
145
+ __func__, addr);
146
break;
147
}
148
return 0;
149
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2c_write(void *opaque, hwaddr addr,
150
break;
151
152
default:
153
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
154
+ qemu_log_mask(LOG_GUEST_ERROR,
155
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
156
+ __func__, addr);
157
}
158
}
159
160
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2s_read(void *opaque, hwaddr addr,
161
}
162
return 0;
163
default:
164
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
165
+ qemu_log_mask(LOG_GUEST_ERROR,
166
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
167
+ __func__, addr);
168
break;
169
}
170
return 0;
171
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2s_write(void *opaque, hwaddr addr,
172
}
624
}
173
break;
625
break;
174
default:
626
case MSR_MTRRphysBase(0):
175
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
627
@@ -XXX,XX +XXX,XX @@ void load_regs(struct CPUState *cpu)
176
+ qemu_log_mask(LOG_GUEST_ERROR,
628
CPUX86State *env = &x86_cpu->env;
177
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
629
178
+ __func__, addr);
630
int i = 0;
179
}
631
- RRX(env, R_EAX) = rreg(cpu->hvf_fd, HV_X86_RAX);
180
}
632
- RRX(env, R_EBX) = rreg(cpu->hvf_fd, HV_X86_RBX);
181
633
- RRX(env, R_ECX) = rreg(cpu->hvf_fd, HV_X86_RCX);
182
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_fir_read(void *opaque, hwaddr addr,
634
- RRX(env, R_EDX) = rreg(cpu->hvf_fd, HV_X86_RDX);
183
case ICFOR:
635
- RRX(env, R_ESI) = rreg(cpu->hvf_fd, HV_X86_RSI);
184
return s->rx_len;
636
- RRX(env, R_EDI) = rreg(cpu->hvf_fd, HV_X86_RDI);
185
default:
637
- RRX(env, R_ESP) = rreg(cpu->hvf_fd, HV_X86_RSP);
186
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
638
- RRX(env, R_EBP) = rreg(cpu->hvf_fd, HV_X86_RBP);
187
+ qemu_log_mask(LOG_GUEST_ERROR,
639
+ RRX(env, R_EAX) = rreg(cpu->hvf->fd, HV_X86_RAX);
188
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
640
+ RRX(env, R_EBX) = rreg(cpu->hvf->fd, HV_X86_RBX);
189
+ __func__, addr);
641
+ RRX(env, R_ECX) = rreg(cpu->hvf->fd, HV_X86_RCX);
190
break;
642
+ RRX(env, R_EDX) = rreg(cpu->hvf->fd, HV_X86_RDX);
191
}
643
+ RRX(env, R_ESI) = rreg(cpu->hvf->fd, HV_X86_RSI);
644
+ RRX(env, R_EDI) = rreg(cpu->hvf->fd, HV_X86_RDI);
645
+ RRX(env, R_ESP) = rreg(cpu->hvf->fd, HV_X86_RSP);
646
+ RRX(env, R_EBP) = rreg(cpu->hvf->fd, HV_X86_RBP);
647
for (i = 8; i < 16; i++) {
648
- RRX(env, i) = rreg(cpu->hvf_fd, HV_X86_RAX + i);
649
+ RRX(env, i) = rreg(cpu->hvf->fd, HV_X86_RAX + i);
650
}
651
652
- env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS);
653
+ env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS);
654
rflags_to_lflags(env);
655
- env->eip = rreg(cpu->hvf_fd, HV_X86_RIP);
656
+ env->eip = rreg(cpu->hvf->fd, HV_X86_RIP);
657
}
658
659
void store_regs(struct CPUState *cpu)
660
@@ -XXX,XX +XXX,XX @@ void store_regs(struct CPUState *cpu)
661
CPUX86State *env = &x86_cpu->env;
662
663
int i = 0;
664
- wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env));
665
- wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env));
666
- wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env));
667
- wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env));
668
- wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env));
669
- wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env));
670
- wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env));
671
- wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env));
672
+ wreg(cpu->hvf->fd, HV_X86_RAX, RAX(env));
673
+ wreg(cpu->hvf->fd, HV_X86_RBX, RBX(env));
674
+ wreg(cpu->hvf->fd, HV_X86_RCX, RCX(env));
675
+ wreg(cpu->hvf->fd, HV_X86_RDX, RDX(env));
676
+ wreg(cpu->hvf->fd, HV_X86_RSI, RSI(env));
677
+ wreg(cpu->hvf->fd, HV_X86_RDI, RDI(env));
678
+ wreg(cpu->hvf->fd, HV_X86_RBP, RBP(env));
679
+ wreg(cpu->hvf->fd, HV_X86_RSP, RSP(env));
680
for (i = 8; i < 16; i++) {
681
- wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i));
682
+ wreg(cpu->hvf->fd, HV_X86_RAX + i, RRX(env, i));
683
}
684
685
lflags_to_rflags(env);
686
- wreg(cpu->hvf_fd, HV_X86_RFLAGS, env->eflags);
687
+ wreg(cpu->hvf->fd, HV_X86_RFLAGS, env->eflags);
688
macvm_set_rip(cpu, env->eip);
689
}
690
691
diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
692
index XXXXXXX..XXXXXXX 100644
693
--- a/target/i386/hvf/x86_mmu.c
694
+++ b/target/i386/hvf/x86_mmu.c
695
@@ -XXX,XX +XXX,XX @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
696
pt->err_code |= MMU_PAGE_PT;
697
}
698
699
- uint32_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0);
700
+ uint32_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0);
701
/* check protection */
702
if (cr0 & CR0_WP) {
703
if (pt->write_access && !pte_write_access(pte)) {
704
@@ -XXX,XX +XXX,XX @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
705
{
706
int top_level, level;
707
bool is_large = false;
708
- target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
709
+ target_ulong cr3 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR3);
710
uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
711
712
memset(pt, 0, sizeof(*pt));
713
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
714
index XXXXXXX..XXXXXXX 100644
715
--- a/target/i386/hvf/x86_task.c
716
+++ b/target/i386/hvf/x86_task.c
717
@@ -XXX,XX +XXX,XX @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss)
718
X86CPU *x86_cpu = X86_CPU(cpu);
719
CPUX86State *env = &x86_cpu->env;
720
721
- wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3);
722
+ wvmcs(cpu->hvf->fd, VMCS_GUEST_CR3, tss->cr3);
723
724
env->eip = tss->eip;
725
env->eflags = tss->eflags | 2;
726
@@ -XXX,XX +XXX,XX @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme
727
728
void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type)
729
{
730
- uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP);
731
+ uint64_t rip = rreg(cpu->hvf->fd, HV_X86_RIP);
732
if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION &&
733
gate_type != VMCS_INTR_T_HWINTR &&
734
gate_type != VMCS_INTR_T_NMI)) {
735
- int ins_len = rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH);
736
+ int ins_len = rvmcs(cpu->hvf->fd, VMCS_EXIT_INSTRUCTION_LENGTH);
737
macvm_set_rip(cpu, rip + ins_len);
738
return;
739
}
740
@@ -XXX,XX +XXX,XX @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea
741
//ret = task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, &next_tss_desc);
742
VM_PANIC("task_switch_16");
743
744
- macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS);
745
+ macvm_set_cr0(cpu->hvf->fd, rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0) | CR0_TS);
746
x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg);
747
vmx_write_segment_descriptor(cpu, &vmx_seg, R_TR);
748
749
store_regs(cpu);
750
751
- hv_vcpu_invalidate_tlb(cpu->hvf_fd);
752
- hv_vcpu_flush(cpu->hvf_fd);
753
+ hv_vcpu_invalidate_tlb(cpu->hvf->fd);
754
+ hv_vcpu_flush(cpu->hvf->fd);
755
}
756
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
757
index XXXXXXX..XXXXXXX 100644
758
--- a/target/i386/hvf/x86hvf.c
759
+++ b/target/i386/hvf/x86hvf.c
760
@@ -XXX,XX +XXX,XX @@ void hvf_put_xsave(CPUState *cpu_state)
761
762
x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);
763
764
- if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
765
+ if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
766
abort();
767
}
768
}
769
@@ -XXX,XX +XXX,XX @@ void hvf_put_segments(CPUState *cpu_state)
770
CPUX86State *env = &X86_CPU(cpu_state)->env;
771
struct vmx_segment seg;
772
773
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
774
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
775
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
776
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
777
778
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
779
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
780
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit);
781
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base);
782
783
- /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */
784
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]);
785
+ /* wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */
786
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3, env->cr[3]);
787
vmx_update_tpr(cpu_state);
788
- wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer);
789
+ wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer);
790
791
- macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]);
792
- macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]);
793
+ macvm_set_cr4(cpu_state->hvf->fd, env->cr[4]);
794
+ macvm_set_cr0(cpu_state->hvf->fd, env->cr[0]);
795
796
hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
797
vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
798
@@ -XXX,XX +XXX,XX @@ void hvf_put_segments(CPUState *cpu_state)
799
hvf_set_segment(cpu_state, &seg, &env->ldt, false);
800
vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
801
802
- hv_vcpu_flush(cpu_state->hvf_fd);
803
+ hv_vcpu_flush(cpu_state->hvf->fd);
804
}
805
806
void hvf_put_msrs(CPUState *cpu_state)
807
{
808
CPUX86State *env = &X86_CPU(cpu_state)->env;
809
810
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS,
811
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS,
812
env->sysenter_cs);
813
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP,
814
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP,
815
env->sysenter_esp);
816
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP,
817
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP,
818
env->sysenter_eip);
819
820
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star);
821
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_STAR, env->star);
822
823
#ifdef TARGET_X86_64
824
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar);
825
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase);
826
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask);
827
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar);
828
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_CSTAR, env->cstar);
829
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, env->kernelgsbase);
830
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FMASK, env->fmask);
831
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_LSTAR, env->lstar);
832
#endif
833
834
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base);
835
- hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base);
836
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_GSBASE, env->segs[R_GS].base);
837
+ hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FSBASE, env->segs[R_FS].base);
838
}
839
840
841
@@ -XXX,XX +XXX,XX @@ void hvf_get_xsave(CPUState *cpu_state)
842
843
xsave = X86_CPU(cpu_state)->env.xsave_buf;
844
845
- if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) {
846
+ if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
847
abort();
848
}
849
850
@@ -XXX,XX +XXX,XX @@ void hvf_get_segments(CPUState *cpu_state)
851
vmx_read_segment_descriptor(cpu_state, &seg, R_LDTR);
852
hvf_get_segment(&env->ldt, &seg);
853
854
- env->idt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT);
855
- env->idt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE);
856
- env->gdt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
857
- env->gdt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE);
858
+ env->idt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT);
859
+ env->idt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE);
860
+ env->gdt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT);
861
+ env->gdt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE);
862
863
- env->cr[0] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0);
864
+ env->cr[0] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR0);
865
env->cr[2] = 0;
866
- env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
867
- env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
868
+ env->cr[3] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3);
869
+ env->cr[4] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR4);
870
871
- env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
872
+ env->efer = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER);
873
}
874
875
void hvf_get_msrs(CPUState *cpu_state)
876
@@ -XXX,XX +XXX,XX @@ void hvf_get_msrs(CPUState *cpu_state)
877
CPUX86State *env = &X86_CPU(cpu_state)->env;
878
uint64_t tmp;
879
880
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
881
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS, &tmp);
882
env->sysenter_cs = tmp;
883
884
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
885
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP, &tmp);
886
env->sysenter_esp = tmp;
887
888
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp);
889
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP, &tmp);
890
env->sysenter_eip = tmp;
891
892
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star);
893
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_STAR, &env->star);
894
895
#ifdef TARGET_X86_64
896
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar);
897
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase);
898
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask);
899
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar);
900
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_CSTAR, &env->cstar);
901
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, &env->kernelgsbase);
902
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_FMASK, &env->fmask);
903
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_LSTAR, &env->lstar);
904
#endif
905
906
- hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
907
+ hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_APICBASE, &tmp);
908
909
- env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
910
+ env->tsc = rdtscp() + rvmcs(cpu_state->hvf->fd, VMCS_TSC_OFFSET);
911
}
912
913
int hvf_put_registers(CPUState *cpu_state)
914
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu_state)
915
X86CPU *x86cpu = X86_CPU(cpu_state);
916
CPUX86State *env = &x86cpu->env;
917
918
- wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]);
919
- wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]);
920
- wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]);
921
- wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]);
922
- wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]);
923
- wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]);
924
- wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]);
925
- wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]);
926
- wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]);
927
- wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]);
928
- wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]);
929
- wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]);
930
- wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]);
931
- wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]);
932
- wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]);
933
- wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
934
- wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
935
- wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
936
+ wreg(cpu_state->hvf->fd, HV_X86_RAX, env->regs[R_EAX]);
937
+ wreg(cpu_state->hvf->fd, HV_X86_RBX, env->regs[R_EBX]);
938
+ wreg(cpu_state->hvf->fd, HV_X86_RCX, env->regs[R_ECX]);
939
+ wreg(cpu_state->hvf->fd, HV_X86_RDX, env->regs[R_EDX]);
940
+ wreg(cpu_state->hvf->fd, HV_X86_RBP, env->regs[R_EBP]);
941
+ wreg(cpu_state->hvf->fd, HV_X86_RSP, env->regs[R_ESP]);
942
+ wreg(cpu_state->hvf->fd, HV_X86_RSI, env->regs[R_ESI]);
943
+ wreg(cpu_state->hvf->fd, HV_X86_RDI, env->regs[R_EDI]);
944
+ wreg(cpu_state->hvf->fd, HV_X86_R8, env->regs[8]);
945
+ wreg(cpu_state->hvf->fd, HV_X86_R9, env->regs[9]);
946
+ wreg(cpu_state->hvf->fd, HV_X86_R10, env->regs[10]);
947
+ wreg(cpu_state->hvf->fd, HV_X86_R11, env->regs[11]);
948
+ wreg(cpu_state->hvf->fd, HV_X86_R12, env->regs[12]);
949
+ wreg(cpu_state->hvf->fd, HV_X86_R13, env->regs[13]);
950
+ wreg(cpu_state->hvf->fd, HV_X86_R14, env->regs[14]);
951
+ wreg(cpu_state->hvf->fd, HV_X86_R15, env->regs[15]);
952
+ wreg(cpu_state->hvf->fd, HV_X86_RFLAGS, env->eflags);
953
+ wreg(cpu_state->hvf->fd, HV_X86_RIP, env->eip);
954
955
- wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
956
+ wreg(cpu_state->hvf->fd, HV_X86_XCR0, env->xcr0);
957
958
hvf_put_xsave(cpu_state);
959
960
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu_state)
961
962
hvf_put_msrs(cpu_state);
963
964
- wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
965
- wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
966
- wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
967
- wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]);
968
- wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]);
969
- wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
970
- wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
971
- wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
972
+ wreg(cpu_state->hvf->fd, HV_X86_DR0, env->dr[0]);
973
+ wreg(cpu_state->hvf->fd, HV_X86_DR1, env->dr[1]);
974
+ wreg(cpu_state->hvf->fd, HV_X86_DR2, env->dr[2]);
975
+ wreg(cpu_state->hvf->fd, HV_X86_DR3, env->dr[3]);
976
+ wreg(cpu_state->hvf->fd, HV_X86_DR4, env->dr[4]);
977
+ wreg(cpu_state->hvf->fd, HV_X86_DR5, env->dr[5]);
978
+ wreg(cpu_state->hvf->fd, HV_X86_DR6, env->dr[6]);
979
+ wreg(cpu_state->hvf->fd, HV_X86_DR7, env->dr[7]);
980
192
return 0;
981
return 0;
193
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_fir_write(void *opaque, hwaddr addr,
982
}
194
case ICFOR:
983
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu_state)
195
break;
984
X86CPU *x86cpu = X86_CPU(cpu_state);
196
default:
985
CPUX86State *env = &x86cpu->env;
197
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
986
198
-    env->regs[R_EAX] = rreg(cpu_state->hvf_fd, HV_X86_RAX);
-    env->regs[R_EBX] = rreg(cpu_state->hvf_fd, HV_X86_RBX);
-    env->regs[R_ECX] = rreg(cpu_state->hvf_fd, HV_X86_RCX);
-    env->regs[R_EDX] = rreg(cpu_state->hvf_fd, HV_X86_RDX);
-    env->regs[R_EBP] = rreg(cpu_state->hvf_fd, HV_X86_RBP);
-    env->regs[R_ESP] = rreg(cpu_state->hvf_fd, HV_X86_RSP);
-    env->regs[R_ESI] = rreg(cpu_state->hvf_fd, HV_X86_RSI);
-    env->regs[R_EDI] = rreg(cpu_state->hvf_fd, HV_X86_RDI);
-    env->regs[8] = rreg(cpu_state->hvf_fd, HV_X86_R8);
-    env->regs[9] = rreg(cpu_state->hvf_fd, HV_X86_R9);
-    env->regs[10] = rreg(cpu_state->hvf_fd, HV_X86_R10);
-    env->regs[11] = rreg(cpu_state->hvf_fd, HV_X86_R11);
-    env->regs[12] = rreg(cpu_state->hvf_fd, HV_X86_R12);
-    env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
-    env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
-    env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
+    env->regs[R_EAX] = rreg(cpu_state->hvf->fd, HV_X86_RAX);
+    env->regs[R_EBX] = rreg(cpu_state->hvf->fd, HV_X86_RBX);
+    env->regs[R_ECX] = rreg(cpu_state->hvf->fd, HV_X86_RCX);
+    env->regs[R_EDX] = rreg(cpu_state->hvf->fd, HV_X86_RDX);
+    env->regs[R_EBP] = rreg(cpu_state->hvf->fd, HV_X86_RBP);
+    env->regs[R_ESP] = rreg(cpu_state->hvf->fd, HV_X86_RSP);
+    env->regs[R_ESI] = rreg(cpu_state->hvf->fd, HV_X86_RSI);
+    env->regs[R_EDI] = rreg(cpu_state->hvf->fd, HV_X86_RDI);
+    env->regs[8] = rreg(cpu_state->hvf->fd, HV_X86_R8);
+    env->regs[9] = rreg(cpu_state->hvf->fd, HV_X86_R9);
+    env->regs[10] = rreg(cpu_state->hvf->fd, HV_X86_R10);
+    env->regs[11] = rreg(cpu_state->hvf->fd, HV_X86_R11);
+    env->regs[12] = rreg(cpu_state->hvf->fd, HV_X86_R12);
+    env->regs[13] = rreg(cpu_state->hvf->fd, HV_X86_R13);
+    env->regs[14] = rreg(cpu_state->hvf->fd, HV_X86_R14);
+    env->regs[15] = rreg(cpu_state->hvf->fd, HV_X86_R15);
 
-    env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
-    env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
+    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+    env->eip = rreg(cpu_state->hvf->fd, HV_X86_RIP);
 
     hvf_get_xsave(cpu_state);
-    env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
+    env->xcr0 = rreg(cpu_state->hvf->fd, HV_X86_XCR0);
 
     hvf_get_segments(cpu_state);
     hvf_get_msrs(cpu_state);
 
-    env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
-    env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
-    env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
-    env->dr[3] = rreg(cpu_state->hvf_fd, HV_X86_DR3);
-    env->dr[4] = rreg(cpu_state->hvf_fd, HV_X86_DR4);
-    env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
-    env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
-    env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
+    env->dr[0] = rreg(cpu_state->hvf->fd, HV_X86_DR0);
+    env->dr[1] = rreg(cpu_state->hvf->fd, HV_X86_DR1);
+    env->dr[2] = rreg(cpu_state->hvf->fd, HV_X86_DR2);
+    env->dr[3] = rreg(cpu_state->hvf->fd, HV_X86_DR3);
+    env->dr[4] = rreg(cpu_state->hvf->fd, HV_X86_DR4);
+    env->dr[5] = rreg(cpu_state->hvf->fd, HV_X86_DR5);
+    env->dr[6] = rreg(cpu_state->hvf->fd, HV_X86_DR6);
+    env->dr[7] = rreg(cpu_state->hvf->fd, HV_X86_DR7);
 
     x86_update_hflags(env);
     return 0;
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu_state)
 static void vmx_set_int_window_exiting(CPUState *cpu)
 {
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val |
+    val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val |
           VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
 }
 
 void vmx_clear_int_window_exiting(CPUState *cpu)
 {
     uint64_t val;
-    val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS);
-    wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val &
+    val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS);
+    wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val &
           ~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING);
 }
 
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     uint64_t info = 0;
     if (have_event) {
         info = vector | intr_type | VMCS_INTR_VALID;
-        uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON);
+        uint64_t reason = rvmcs(cpu_state->hvf->fd, VMCS_EXIT_REASON);
         if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) {
             vmx_clear_nmi_blocking(cpu_state);
         }
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         info &= ~(1 << 12); /* clear undefined bit */
         if (intr_type == VMCS_INTR_T_SWINTR ||
             intr_type == VMCS_INTR_T_SWEXCEPTION) {
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
+            wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
         }
 
         if (env->has_error_code) {
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
+            wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_EXCEPTION_ERROR,
                   env->error_code);
             /* Indicate that VMCS_ENTRY_EXCEPTION_ERROR is valid */
             info |= VMCS_INTR_DEL_ERRCODE;
         }
         /*printf("reinject %lx err %d\n", info, err);*/
-        wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+        wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
     };
 }
 
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
     if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
         cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI;
         info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI;
-        wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info);
+        wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info);
     } else {
         vmx_set_nmi_window_exiting(cpu_state);
     }
@@ -XXX,XX +XXX,XX @@ bool hvf_inject_interrupts(CPUState *cpu_state)
         int line = cpu_get_pic_interrupt(&x86cpu->env);
         cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD;
         if (line >= 0) {
-            wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line |
+            wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, line |
                   VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
         }
     }
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
 
-    env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
+    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
         cpu_synchronize_state(cpu_state);
-- 
2.20.1

From: Alexander Graf <agraf@csgraf.de>

The hooks we have that call us after reset, init and loadvm really all
just want to say "The reference of all register state is in the QEMU
vcpu struct, please push it".

We already have a working pushing mechanism though called cpu->vcpu_dirty,
so we can just reuse that for all of the above, syncing state properly the
next time we actually execute a vCPU.

This fixes PSCI resets on ARM, as they modify CPU state even after the
post init call has completed, but before we execute the vCPU again.

To also make the scheme work for x86, we have to make sure we don't
move stale eflags into our env when the vcpu state is dirty.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-13-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 accel/hvf/hvf-accel-ops.c | 27 +++++++--------------------
 target/i386/hvf/x86hvf.c  |  5 ++++-
 2 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static void hvf_cpu_synchronize_state(CPUState *cpu)
     }
 }
 
-static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
-                                              run_on_cpu_data arg)
+static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
+                                             run_on_cpu_data arg)
 {
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
+    /* QEMU state is the reference, push it to HVF now and on next entry */
+    cpu->vcpu_dirty = true;
 }
 
 static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
 {
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
-                                             run_on_cpu_data arg)
-{
-    hvf_put_registers(cpu);
-    cpu->vcpu_dirty = false;
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
 }
 
 static void hvf_cpu_synchronize_post_init(CPUState *cpu)
 {
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
-}
-
-static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
-                                              run_on_cpu_data arg)
-{
-    cpu->vcpu_dirty = true;
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
 }
 
 static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
 {
-    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
+    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
 }
 
 static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_process_events(CPUState *cpu_state)
     X86CPU *cpu = X86_CPU(cpu_state);
     CPUX86State *env = &cpu->env;
 
-    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+    if (!cpu_state->vcpu_dirty) {
+        /* light weight sync for CPU_INTERRUPT_HARD and IF_MASK */
+        env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
+    }
 
     if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
         cpu_synchronize_state(cpu_state);
-- 
2.20.1

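To make the mechanism described in the commit message above concrete, here is a
minimal standalone C sketch of the lazy "vcpu_dirty" pattern. It is not QEMU
code: FakeVCPU, push_registers() and run_vcpu() are invented stand-ins for the
real vcpu struct, hvf_put_registers() and the vcpu run loop.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int regs[4];        /* QEMU-side reference copy of the guest state */
    bool vcpu_dirty;    /* true: QEMU state must be pushed before running */
} FakeVCPU;

static void push_registers(FakeVCPU *cpu)
{
    /* stands in for hvf_put_registers(): copy QEMU state to the hypervisor */
    printf("pushing registers to the hypervisor\n");
    cpu->vcpu_dirty = false;
}

/* the post-reset, post-init and pre-loadvm hooks all collapse into this */
static void synchronize_set_dirty(FakeVCPU *cpu)
{
    /* QEMU state is the reference; push it on the next vcpu entry */
    cpu->vcpu_dirty = true;
}

static void run_vcpu(FakeVCPU *cpu)
{
    if (cpu->vcpu_dirty) {
        push_registers(cpu);   /* lazy push happens here, exactly once */
    }
    /* ...enter the guest... */
}

int main(void)
{
    FakeVCPU cpu = { .vcpu_dirty = false };

    synchronize_set_dirty(&cpu);  /* hook runs after reset */
    cpu.regs[0] = 42;             /* e.g. a PSCI reset tweaks state afterwards */
    run_vcpu(&cpu);               /* the late change is still pushed */
    return 0;
}

The benefit the commit message points at is visible in main(): state modified
after the hook has returned is not lost, because nothing is pushed until the
vCPU is actually about to run.
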
Coverity notes that we don't check for dup2() failing. Add some
assertions so that if it does ever happen we get some indication.
(This is similar to how we handle other "don't expect this syscall to
fail" checks in this test code.)

Fixes: Coverity CID 1432346
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-2-peter.maydell@linaro.org
---
 tests/qtest/bios-tables-test.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/bios-tables-test.c b/tests/qtest/bios-tables-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/bios-tables-test.c
+++ b/tests/qtest/bios-tables-test.c
@@ -XXX,XX +XXX,XX @@ static void test_acpi_asl(test_data *data)
                                    exp_sdt->asl_file, sdt->asl_file);
         int out = dup(STDOUT_FILENO);
         int ret G_GNUC_UNUSED;
+        int dupret;
 
-        dup2(STDERR_FILENO, STDOUT_FILENO);
+        g_assert(out >= 0);
+        dupret = dup2(STDERR_FILENO, STDOUT_FILENO);
+        g_assert(dupret >= 0);
         ret = system(diff) ;
-        dup2(out, STDOUT_FILENO);
+        dupret = dup2(out, STDOUT_FILENO);
+        g_assert(dupret >= 0);
         close(out);
         g_free(diff);
     }
-- 
2.20.1

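For reference, the stdout-redirection idiom that the test uses (and that the
patch now error-checks) looks roughly like this as a standalone program; plain
assert() stands in for g_assert(), and the command is made up:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int saved = dup(STDOUT_FILENO);              /* keep a handle on real stdout */
    assert(saved >= 0);

    int r = dup2(STDERR_FILENO, STDOUT_FILENO);  /* point stdout at stderr */
    assert(r >= 0);

    int ret = system("echo redirected");         /* child inherits redirected stdout */
    (void)ret;

    r = dup2(saved, STDOUT_FILENO);              /* put stdout back */
    assert(r >= 0);
    close(saved);

    printf("stdout restored\n");
    return 0;
}
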
The e1000e_send_verify() test calls qemu_recv() but doesn't
check that the call succeeded, which annoys Coverity. Add
an explicit test check for the length of the data.

(This is a test check, not a "we assume this syscall always
succeeds", so we use g_assert_cmpint() rather than g_assert().)

Fixes: Coverity CID 1432324
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-3-peter.maydell@linaro.org
---
 tests/qtest/e1000e-test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/e1000e-test.c b/tests/qtest/e1000e-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/e1000e-test.c
+++ b/tests/qtest/e1000e-test.c
@@ -XXX,XX +XXX,XX @@ static void e1000e_send_verify(QE1000E *d, int *test_sockets, QGuestAllocator *a
     /* Check data sent to the backend */
     ret = qemu_recv(test_sockets[0], &recv_len, sizeof(recv_len), 0);
     g_assert_cmpint(ret, == , sizeof(recv_len));
-    qemu_recv(test_sockets[0], buffer, 64, 0);
+    ret = qemu_recv(test_sockets[0], buffer, 64, 0);
+    g_assert_cmpint(ret, >=, 5);
     g_assert_cmpstr(buffer, == , "TEST");
 
     /* Free test data buffer */
-- 
2.20.1

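A standalone sketch of the same kind of check outside the qtest framework,
using a plain socketpair and POSIX recv() instead of qemu_recv(), and assert()
instead of g_assert_cmpint():

#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == 0);

    const char msg[] = "TEST";                  /* 4 chars + NUL = 5 bytes */
    assert(send(sv[1], msg, sizeof(msg), 0) == sizeof(msg));

    char buffer[64] = "";
    ssize_t ret = recv(sv[0], buffer, sizeof(buffer), 0);
    assert(ret >= 5);                           /* the data actually arrived */
    assert(strcmp(buffer, "TEST") == 0);        /* and it is what was sent */

    close(sv[0]);
    close(sv[1]);
    return 0;
}
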
Coverity notices that the checks against mkstemp() failing in
create_qcow2_with_mbr() are wrong: mkstemp returns -1 on failure but
the check is just "g_assert(fd)". Fix to use "g_assert(fd >= 0)",
matching the correct check in create_test_img().

Fixes: Coverity CID 1432274
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-4-peter.maydell@linaro.org
---
 tests/qtest/hd-geo-test.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/hd-geo-test.c b/tests/qtest/hd-geo-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/hd-geo-test.c
+++ b/tests/qtest/hd-geo-test.c
@@ -XXX,XX +XXX,XX @@ static char *create_qcow2_with_mbr(MBRpartitions mbr, uint64_t sectors)
     }
 
     fd = mkstemp(raw_path);
-    g_assert(fd);
+    g_assert(fd >= 0);
     close(fd);
 
     fd = open(raw_path, O_WRONLY);
@@ -XXX,XX +XXX,XX @@ static char *create_qcow2_with_mbr(MBRpartitions mbr, uint64_t sectors)
     close(fd);
 
     fd = mkstemp(qcow2_path);
-    g_assert(fd);
+    g_assert(fd >= 0);
     close(fd);
 
     qemu_img_path = getenv("QTEST_QEMU_IMG");
-- 
2.20.1

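The reasoning behind the fix, as a tiny standalone example: mkstemp() reports
failure with -1, which is truthy, so a bare g_assert(fd) can never catch it
(and would wrongly trip on a legitimate fd 0). The template path below is
invented for the sketch.

#include <assert.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/hd-geo-sketch-XXXXXX";  /* mkstemp rewrites the Xs */
    int fd = mkstemp(path);
    assert(fd >= 0);        /* not assert(fd): fd 0 would wrongly fail,
                               and -1 would wrongly pass as "true" */
    close(fd);
    unlink(path);
    return 0;
}
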
Coverity points out that we calculate a 64-bit value using 32-bit
arithmetic; add the cast to force the multiply to be done as 64-bits.
(The overflow will never happen with the current test data.)

Fixes: Coverity CID 1432320
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-5-peter.maydell@linaro.org
---
 tests/qtest/pflash-cfi02-test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qtest/pflash-cfi02-test.c b/tests/qtest/pflash-cfi02-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/pflash-cfi02-test.c
+++ b/tests/qtest/pflash-cfi02-test.c
@@ -XXX,XX +XXX,XX @@ static void test_geometry(const void *opaque)
 
     for (int region = 0; region < nb_erase_regions; ++region) {
         for (uint32_t i = 0; i < c->nb_blocs[region]; ++i) {
-            uint64_t byte_addr = i * c->sector_len[region];
+            uint64_t byte_addr = (uint64_t)i * c->sector_len[region];
             g_assert_cmphex(flash_read(c, byte_addr), ==, bank_mask(c));
         }
     }
-- 
2.20.1

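A small standalone program showing why the cast matters: with 32-bit operands
the multiply is evaluated in 32 bits and wraps before it is widened to
uint64_t, while casting one operand first forces a 64-bit multiply. The values
here are invented to make the wrap visible; the real test data never gets this
large.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t i = 70000;
    uint32_t sector_len = 65536;

    uint64_t wrong = i * sector_len;             /* 32-bit multiply, wraps */
    uint64_t right = (uint64_t)i * sector_len;   /* 64-bit multiply */

    printf("wrong = %" PRIu64 "\n", wrong);      /* 292552704 after the wrap */
    printf("right = %" PRIu64 "\n", right);      /* 4587520000 */
    return 0;
}
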
Coverity points out that in tpm_test_swtpm_migration_test() we
assume that src_tpm_addr and dst_tpm_addr are non-NULL (we
pass them to tpm_util_migration_start_qemu() which will
unconditionally dereference them) but then later explicitly
check them for NULL. Remove the pointless checks.

Fixes: Coverity CID 1432367, 1432359

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-6-peter.maydell@linaro.org
---
 tests/qtest/tpm-tests.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tests/qtest/tpm-tests.c b/tests/qtest/tpm-tests.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/tpm-tests.c
+++ b/tests/qtest/tpm-tests.c
@@ -XXX,XX +XXX,XX @@ void tpm_test_swtpm_migration_test(const char *src_tpm_path,
     qtest_quit(src_qemu);
 
     tpm_util_swtpm_kill(dst_tpm_pid);
-    if (dst_tpm_addr) {
-        g_unlink(dst_tpm_addr->u.q_unix.path);
-        qapi_free_SocketAddress(dst_tpm_addr);
-    }
+    g_unlink(dst_tpm_addr->u.q_unix.path);
+    qapi_free_SocketAddress(dst_tpm_addr);
 
     tpm_util_swtpm_kill(src_tpm_pid);
-    if (src_tpm_addr) {
-        g_unlink(src_tpm_addr->u.q_unix.path);
-        qapi_free_SocketAddress(src_tpm_addr);
-    }
+    g_unlink(src_tpm_addr->u.q_unix.path);
+    qapi_free_SocketAddress(src_tpm_addr);
 }
-- 
2.20.1

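The general shape of what Coverity is flagging, reduced to a hypothetical
standalone example (kill_server() and cleanup() are invented names, not the
test's helpers): once a pointer has already been used unconditionally, a later
NULL check on the same pointer is dead code.

#include <stdio.h>

static void kill_server(const char *path)
{
    /* unconditionally dereferences path, like the helpers earlier in the test */
    printf("stopping server whose socket name starts with '%c'\n", path[0]);
}

static void cleanup(const char *path)
{
    kill_server(path);   /* would already have crashed if path were NULL */
    remove(path);        /* so wrapping this in "if (path) { ... }" adds nothing */
}

int main(void)
{
    cleanup("/tmp/example.sock");
    return 0;
}
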
Coverity complains that we don't check for failures from dup()
and mkstemp(); add asserts that these syscalls succeeded.

Fixes: Coverity CID 1432516, 1432574
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210525134458.6675-7-peter.maydell@linaro.org
---
 tests/unit/test-vmstate.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tests/unit/test-vmstate.c b/tests/unit/test-vmstate.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/unit/test-vmstate.c
+++ b/tests/unit/test-vmstate.c
@@ -XXX,XX +XXX,XX @@ static int temp_fd;
 /* Duplicate temp_fd and seek to the beginning of the file */
 static QEMUFile *open_test_file(bool write)
 {
-    int fd = dup(temp_fd);
+    int fd;
     QIOChannel *ioc;
     QEMUFile *f;
 
+    fd = dup(temp_fd);
+    g_assert(fd >= 0);
     lseek(fd, 0, SEEK_SET);
     if (write) {
         g_assert_cmpint(ftruncate(fd, 0), ==, 0);
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
     g_autofree char *temp_file = g_strdup_printf("%s/vmst.test.XXXXXX",
                                                  g_get_tmp_dir());
     temp_fd = mkstemp(temp_file);
+    g_assert(temp_fd >= 0);
 
     module_call_init(MODULE_INIT_QOM);
 
-- 
2.20.1
