Arm queue; some of the simpler stuff, things others have reviewed (thanks!), etc.

-- PMM

The following changes since commit 5d2f557b47dfbf8f23277a5bdd8473d4607c681a:

  Merge remote-tracking branch 'remotes/kraxel/tags/vga-20200605-pull-request' into staging (2020-06-05 13:53:05 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200605

for you to fetch changes up to 2c35a39eda0b16c2ed85c94cec204bf5efb97812:

  target/arm: Convert Neon one-register-and-immediate insns to decodetree (2020-06-05 17:23:10 +0100)

----------------------------------------------------------------
target-arm queue:
 hw/ssi/imx_spi: Handle tx burst lengths other than 8 correctly
 hw/input/pxa2xx_keypad: Replace hw_error() by qemu_log_mask()
 hw/arm/pxa2xx: Replace printf() call by qemu_log_mask()
 target/arm: Convert crypto insns to gvec
 hw/adc/stm32f2xx_adc: Correct memory region size and access size
 tests/acceptance: Add a boot test for the xlnx-versal-virt machine
 docs/system: Document Aspeed boards
 raspi: Add model of the USB controller
 target/arm: Convert 2-reg-and-shift and 1-reg-imm Neon insns to decodetree

----------------------------------------------------------------
Cédric Le Goater (1):
      docs/system: Document Aspeed boards

Eden Mikitas (2):
      hw/ssi/imx_spi: changed while statement to prevent underflow
      hw/ssi/imx_spi: Removed unnecessary cast of rx data received from slave

Paul Zimmerman (7):
      raspi: add BCM2835 SOC MPHI emulation
      dwc-hsotg (dwc2) USB host controller register definitions
      dwc-hsotg (dwc2) USB host controller state definitions
      dwc-hsotg (dwc2) USB host controller emulation
      usb: add short-packet handling to usb-storage driver
      wire in the dwc-hsotg (dwc2) USB host controller emulation
      raspi2 acceptance test: add test for dwc-hsotg (dwc2) USB host

Peter Maydell (9):
      target/arm: Convert Neon VSHL and VSLI 2-reg-shift insn to decodetree
      target/arm: Convert Neon VSHR 2-reg-shift insns to decodetree
      target/arm: Convert Neon VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree
      target/arm: Convert VQSHLU, VQSHL 2-reg-shift insns to decodetree
      target/arm: Convert Neon narrowing shifts with op==8 to decodetree
      target/arm: Convert Neon narrowing shifts with op==9 to decodetree
      target/arm: Convert Neon VSHLL, VMOVL to decodetree
      target/arm: Convert VCVT fixed-point ops to decodetree
      target/arm: Convert Neon one-register-and-immediate insns to decodetree

Philippe Mathieu-Daudé (3):
      hw/input/pxa2xx_keypad: Replace hw_error() by qemu_log_mask()
      hw/arm/pxa2xx: Replace printf() call by qemu_log_mask()
      hw/adc/stm32f2xx_adc: Correct memory region size and access size

Richard Henderson (6):
      target/arm: Convert aes and sm4 to gvec helpers
      target/arm: Convert rax1 to gvec helpers
      target/arm: Convert sha512 and sm3 to gvec helpers
      target/arm: Convert sha1 and sha256 to gvec helpers
      target/arm: Split helper_crypto_sha1_3reg
      target/arm: Split helper_crypto_sm3tt

Thomas Huth (1):
      tests/acceptance: Add a boot test for the xlnx-versal-virt machine

 docs/system/arm/aspeed.rst | 85 ++
 docs/system/target-arm.rst | 1 +
 hw/usb/hcd-dwc2.h | 190 +++++
 include/hw/arm/bcm2835_peripherals.h | 5 +-
 include/hw/misc/bcm2835_mphi.h | 44 +
 include/hw/usb/dwc2-regs.h | 899 ++++++++++++++++++++
 target/arm/helper.h | 45 +-
 target/arm/translate-a64.h | 3 +
 target/arm/vec_internal.h | 33 +
 target/arm/neon-dp.decode | 214 ++++-
 hw/adc/stm32f2xx_adc.c | 4 +-
 hw/arm/bcm2835_peripherals.c | 38 +-
 hw/arm/pxa2xx.c | 66 +-
 hw/input/pxa2xx_keypad.c | 10 +-
 hw/misc/bcm2835_mphi.c | 191 +++++
 hw/ssi/imx_spi.c | 4 +-
 hw/usb/dev-storage.c | 15 +-
 hw/usb/hcd-dwc2.c | 1417 ++++++++++++++++++++++++++++++++
 target/arm/crypto_helper.c | 267 ++++--
 target/arm/translate-a64.c | 198 ++---
 target/arm/translate-neon.inc.c | 796 ++++++++++++++----
 target/arm/translate.c | 539 +-----------
 target/arm/vec_helper.c | 12 +-
 hw/misc/Makefile.objs | 1 +
 hw/usb/Kconfig | 5 +
 hw/usb/Makefile.objs | 1 +
 hw/usb/trace-events | 50 ++
 tests/acceptance/boot_linux_console.py | 35 +-
 28 files changed, 4258 insertions(+), 910 deletions(-)
 create mode 100644 docs/system/arm/aspeed.rst
 create mode 100644 hw/usb/hcd-dwc2.h
 create mode 100644 include/hw/misc/bcm2835_mphi.h
 create mode 100644 include/hw/usb/dwc2-regs.h
 create mode 100644 target/arm/vec_internal.h
 create mode 100644 hw/misc/bcm2835_mphi.c
 create mode 100644 hw/usb/hcd-dwc2.c

arm queue: big stuff here is my MVE codegen optimisation,
and Alex's Apple Silicon hvf support.

-- PMM

The following changes since commit 7adb961995a3744f51396502b33ad04a56a317c3:

  Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20210916' into staging (2021-09-19 18:53:29 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210920

for you to fetch changes up to 1dc5a60bfe406bc1122d68cbdefda38d23134b27:

  target/arm: Optimize MVE 1op-immediate insns (2021-09-20 14:18:01 +0100)

----------------------------------------------------------------
target-arm queue:
 * Optimize codegen for MVE when predication not active
 * hvf: Add Apple Silicon support
 * hw/intc: Set GIC maintenance interrupt level to only 0 or 1
 * Fix mishandling of MVE FPSCR.LTPSIZE reset for usermode emulator
 * elf2dmp: Fix coverity nits

----------------------------------------------------------------
Alexander Graf (7):
      arm: Move PMC register definitions to internals.h
      hvf: Add execute to dirty log permission bitmap
      hvf: Introduce hvf_arch_init() callback
      hvf: Add Apple Silicon support
      hvf: arm: Implement PSCI handling
      arm: Add Hypervisor.framework build target
      hvf: arm: Add rudimentary PMC support

Peter Collingbourne (1):
      arm/hvf: Add a WFI handler

Peter Maydell (18):
      elf2dmp: Check curl_easy_setopt() return value
      elf2dmp: Fail cleanly if PDB file specifies zero block_size
      target/arm: Don't skip M-profile reset entirely in user mode
      target/arm: Always clear exclusive monitor on reset
      target/arm: Consolidate ifdef blocks in reset
      hvf: arm: Implement -cpu host
      target/arm: Avoid goto_tb if we're trying to exit to the main loop
      target/arm: Enforce that FPDSCR.LTPSIZE is 4 on inbound migration
      target/arm: Add TB flag for "MVE insns not predicated"
      target/arm: Optimize MVE logic ops
      target/arm: Optimize MVE arithmetic ops
      target/arm: Optimize MVE VNEG, VABS
      target/arm: Optimize MVE VDUP
      target/arm: Optimize MVE VMVN
      target/arm: Optimize MVE VSHL, VSHR immediate forms
      target/arm: Optimize MVE VSHLL and VMOVL
      target/arm: Optimize MVE VSLI and VSRI
      target/arm: Optimize MVE 1op-immediate insns

Shashi Mallela (1):
      hw/intc: Set GIC maintenance interrupt level to only 0 or 1

 meson.build | 8 +
 include/sysemu/hvf_int.h | 12 +-
 target/arm/cpu.h | 6 +-
 target/arm/hvf_arm.h | 18 +
 target/arm/internals.h | 44 ++
 target/arm/kvm_arm.h | 2 -
 target/arm/translate.h | 2 +
 accel/hvf/hvf-accel-ops.c | 21 +-
 contrib/elf2dmp/download.c | 22 +-
 contrib/elf2dmp/pdb.c | 4 +
 hw/intc/arm_gicv3_cpuif.c | 5 +-
 target/arm/cpu.c | 56 +-
 target/arm/helper.c | 77 ++-
 target/arm/hvf/hvf.c | 1278 +++++++++++++++++++++++++++++++++++++++++
 target/arm/machine.c | 13 +
 target/arm/translate-m-nocp.c | 8 +-
 target/arm/translate-mve.c | 310 +++++++++---
 target/arm/translate-vfp.c | 33 +-
 target/arm/translate.c | 42 +-
 target/i386/hvf/hvf.c | 10 +
 MAINTAINERS | 5 +
 target/arm/hvf/meson.build | 3 +
 target/arm/hvf/trace-events | 11 +
 target/arm/meson.build | 2 +
 24 files changed, 1824 insertions(+), 168 deletions(-)
 create mode 100644 target/arm/hvf_arm.h
 create mode 100644 target/arm/hvf/hvf.c
 create mode 100644 target/arm/hvf/meson.build
 create mode 100644 target/arm/hvf/trace-events
diff view generated by jsdifflib
Coverity points out that we aren't checking the return value
from curl_easy_setopt().

Fixes: Coverity CID 1458895
Inspired-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-2-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 contrib/elf2dmp/download.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Replace printf() calls by qemu_log_mask(), which is disabled
by default. This avoids flooding the terminal when fuzzing the
device.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200525114123.21317-3-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/pxa2xx.c | 66 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 49 insertions(+), 17 deletions(-)

diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
15
diff --git a/contrib/elf2dmp/download.c b/contrib/elf2dmp/download.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/pxa2xx.c
17
--- a/contrib/elf2dmp/download.c
18
+++ b/hw/arm/pxa2xx.c
18
+++ b/contrib/elf2dmp/download.c
19
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ int download_url(const char *name, const char *url)
20
#include "sysemu/blockdev.h"
20
goto out_curl;
21
#include "sysemu/qtest.h"
22
#include "qemu/cutils.h"
23
+#include "qemu/log.h"
24
25
static struct {
26
hwaddr io_base;
27
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_pm_read(void *opaque, hwaddr addr,
28
return s->pm_regs[addr >> 2];
29
default:
30
fail:
31
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
32
+ qemu_log_mask(LOG_GUEST_ERROR,
33
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
34
+ __func__, addr);
35
break;
36
}
21
}
37
return 0;
22
38
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pm_write(void *opaque, hwaddr addr,
23
- curl_easy_setopt(curl, CURLOPT_URL, url);
39
s->pm_regs[addr >> 2] = value;
24
- curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL);
40
break;
25
- curl_easy_setopt(curl, CURLOPT_WRITEDATA, file);
41
}
26
- curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
27
- curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
42
-
28
-
43
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
29
- if (curl_easy_perform(curl) != CURLE_OK) {
44
+ qemu_log_mask(LOG_GUEST_ERROR,
30
- err = 1;
45
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
31
- fclose(file);
46
+ __func__, addr);
32
+ if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
47
break;
33
+ || curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, NULL) != CURLE_OK
34
+ || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
35
+ || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1) != CURLE_OK
36
+ || curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0) != CURLE_OK
37
+ || curl_easy_perform(curl) != CURLE_OK) {
38
unlink(name);
39
- goto out_curl;
40
+ fclose(file);
41
+ err = 1;
42
+ } else {
43
+ err = fclose(file);
48
}
44
}
49
}
45
50
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_cm_read(void *opaque, hwaddr addr,
46
- err = fclose(file);
51
return s->cm_regs[CCCR >> 2] | (3 << 28);
47
-
52
48
out_curl:
53
default:
49
curl_easy_cleanup(curl);
54
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
55
+ qemu_log_mask(LOG_GUEST_ERROR,
56
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
57
+ __func__, addr);
58
break;
59
}
60
return 0;
61
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_cm_write(void *opaque, hwaddr addr,
62
break;
63
64
default:
65
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
66
+ qemu_log_mask(LOG_GUEST_ERROR,
67
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
68
+ __func__, addr);
69
break;
70
}
71
}
72
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_mm_read(void *opaque, hwaddr addr,
73
return s->mm_regs[addr >> 2];
74
/* fall through */
75
default:
76
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
77
+ qemu_log_mask(LOG_GUEST_ERROR,
78
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
79
+ __func__, addr);
80
break;
81
}
82
return 0;
83
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_mm_write(void *opaque, hwaddr addr,
84
}
85
86
default:
87
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
88
+ qemu_log_mask(LOG_GUEST_ERROR,
89
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
90
+ __func__, addr);
91
break;
92
}
93
}
94
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_ssp_read(void *opaque, hwaddr addr,
95
case SSACD:
96
return s->ssacd;
97
default:
98
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
99
+ qemu_log_mask(LOG_GUEST_ERROR,
100
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
101
+ __func__, addr);
102
break;
103
}
104
return 0;
105
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_ssp_write(void *opaque, hwaddr addr,
106
break;
107
108
default:
109
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
110
+ qemu_log_mask(LOG_GUEST_ERROR,
111
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
112
+ __func__, addr);
113
break;
114
}
115
}
116
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_rtc_read(void *opaque, hwaddr addr,
117
else
118
return s->last_swcr;
119
default:
120
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
121
+ qemu_log_mask(LOG_GUEST_ERROR,
122
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
123
+ __func__, addr);
124
break;
125
}
126
return 0;
127
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_rtc_write(void *opaque, hwaddr addr,
128
break;
129
130
default:
131
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
132
+ qemu_log_mask(LOG_GUEST_ERROR,
133
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
134
+ __func__, addr);
135
}
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2c_read(void *opaque, hwaddr addr,
139
s->ibmr = 0;
140
return s->ibmr;
141
default:
142
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
143
+ qemu_log_mask(LOG_GUEST_ERROR,
144
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
145
+ __func__, addr);
146
break;
147
}
148
return 0;
149
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2c_write(void *opaque, hwaddr addr,
150
break;
151
152
default:
153
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
154
+ qemu_log_mask(LOG_GUEST_ERROR,
155
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
156
+ __func__, addr);
157
}
158
}
159
160
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2s_read(void *opaque, hwaddr addr,
161
}
162
return 0;
163
default:
164
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
165
+ qemu_log_mask(LOG_GUEST_ERROR,
166
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
167
+ __func__, addr);
168
break;
169
}
170
return 0;
171
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2s_write(void *opaque, hwaddr addr,
172
}
173
break;
174
default:
175
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
176
+ qemu_log_mask(LOG_GUEST_ERROR,
177
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
178
+ __func__, addr);
179
}
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_fir_read(void *opaque, hwaddr addr,
183
case ICFOR:
184
return s->rx_len;
185
default:
186
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
187
+ qemu_log_mask(LOG_GUEST_ERROR,
188
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
189
+ __func__, addr);
190
break;
191
}
192
return 0;
193
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_fir_write(void *opaque, hwaddr addr,
194
case ICFOR:
195
break;
196
default:
197
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
198
+ qemu_log_mask(LOG_GUEST_ERROR,
199
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
200
+ __func__, addr);
201
}
202
}
203
--
2.20.1
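The combined effect of the download.c hunks above is easier to read without the diff markup. Roughly, and only as a condensed sketch of the pattern the commit message describes (hypothetical helper name, standard libcurl calls; not the verbatim QEMU code):

    /*
     * Sketch: check every curl_easy_setopt() and the transfer itself in one
     * condition, and discard the partial download on any failure.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <curl/curl.h>

    static int fetch_to_file(CURL *curl, const char *url,
                             const char *name, FILE *file)
    {
        int err;

        if (curl_easy_setopt(curl, CURLOPT_URL, url) != CURLE_OK
            || curl_easy_setopt(curl, CURLOPT_WRITEDATA, file) != CURLE_OK
            || curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1) != CURLE_OK
            || curl_easy_perform(curl) != CURLE_OK) {
            unlink(name);          /* drop the partial download */
            fclose(file);
            err = 1;
        } else {
            err = fclose(file);    /* a failed flush also counts as an error */
        }
        return err;
    }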
diff view generated by jsdifflib
From: Paul Zimmerman <pauldzim@gmail.com>

Add a check for functional dwc-hsotg (dwc2) USB host emulation to
the Raspi 2 acceptance test

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
Message-id: 20200520235349.21215-8-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/acceptance/boot_linux_console.py | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/acceptance/boot_linux_console.py
+++ b/tests/acceptance/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):

         self.vm.set_console()
         kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
-                               serial_kernel_cmdline[uart_id])
+                               serial_kernel_cmdline[uart_id] +
+                               ' root=/dev/mmcblk0p2 rootwait ' +
+                               'dwc_otg.fiq_fsm_enable=0')
         self.vm.add_args('-kernel', kernel_path,
                          '-dtb', dtb_path,
-                         '-append', kernel_command_line)
+                         '-append', kernel_command_line,
+                         '-device', 'usb-kbd')
         self.vm.launch()
         console_pattern = 'Kernel command line: %s' % kernel_command_line
         self.wait_for_console_pattern(console_pattern)
+        console_pattern = 'Product: QEMU USB Keyboard'
+        self.wait_for_console_pattern(console_pattern)

     def test_arm_raspi2_uart0(self):
         """
--
2.20.1

Coverity points out that if the PDB file we're trying to read
has a header specifying a block_size of zero then we will
end up trying to divide by zero in pdb_ds_read_file().
Check for this and fail cleanly instead.

Fixes: Coverity CID 1458869
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
Message-id: 20210910170656.366592-3-philmd@redhat.com
Message-Id: <20210901143910.17112-3-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 contrib/elf2dmp/pdb.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/contrib/elf2dmp/pdb.c b/contrib/elf2dmp/pdb.c
index XXXXXXX..XXXXXXX 100644
--- a/contrib/elf2dmp/pdb.c
+++ b/contrib/elf2dmp/pdb.c
@@ -XXX,XX +XXX,XX @@ out_symbols:

 static int pdb_reader_ds_init(struct pdb_reader *r, PDB_DS_HEADER *hdr)
 {
+    if (hdr->block_size == 0) {
+        return 1;
+    }
+
     memset(r->file_used, 0, sizeof(r->file_used));
     r->ds.header = hdr;
     r->ds.toc = pdb_ds_read(hdr, (uint32_t *)((uint8_t *)hdr +
--
2.20.1
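For context on why the new guard matters: as the commit message notes, pdb_ds_read_file() divides by the header-supplied block_size, so a corrupt PDB advertising block_size == 0 would fault there. A minimal illustration of that failure mode (hypothetical helper, not the elf2dmp code itself):

    #include <stdint.h>

    /* How many blocks does a file of file_size bytes occupy?  With
     * block_size == 0 this divides by zero, which is what the check
     * added in pdb_reader_ds_init() now prevents. */
    static uint32_t blocks_needed(uint32_t file_size, uint32_t block_size)
    {
        return (file_size + block_size - 1) / block_size;
    }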
diff view generated by jsdifflib
Currently all of the M-profile specific code in arm_cpu_reset() is
inside a !defined(CONFIG_USER_ONLY) ifdef block. This is
unintentional: it happened because originally the only
M-profile-specific handling was the setup of the initial SP and PC
from the vector table, which is system-emulation only. But then we
added a lot of other M-profile setup to the same "if (ARM_FEATURE_M)"
code block without noticing that it was all inside a not-user-mode
ifdef. This has generally been harmless, but with the addition of
v8.1M low-overhead-loop support we ran into a problem: the reset of
FPSCR.LTPSIZE to 4 was only being done for system emulation mode, so
if a user-mode guest tried to execute the LE instruction it would
incorrectly take a UsageFault.

Adjust the ifdefs so only the really system-emulation specific parts
are covered. Because this means we now run some reset code that sets
up initial values in the FPCCR and similar FPU related registers,
explicitly set up the registers controlling FPU context handling in
user-emulation mode so that the FPU works by design and not by
chance.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/613
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-2-peter.maydell@linaro.org
---
 target/arm/cpu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c

From: Paul Zimmerman <pauldzim@gmail.com>

The dwc-hsotg (dwc2) USB host depends on a short packet to
indicate the end of an IN transfer. The usb-storage driver
currently doesn't provide this, so fix it.

I have tested this change rather extensively using a PC
emulation with xhci, ehci, and uhci controllers, and have
not observed any regressions.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-6-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/usb/dev-storage.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
19
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/usb/dev-storage.c
32
--- a/target/arm/cpu.c
21
+++ b/hw/usb/dev-storage.c
33
+++ b/target/arm/cpu.c
22
@@ -XXX,XX +XXX,XX @@ static void usb_msd_copy_data(MSDState *s, USBPacket *p)
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
23
usb_packet_copy(p, scsi_req_get_buf(s->req) + s->scsi_off, len);
35
env->uncached_cpsr = ARM_CPU_MODE_SVC;
24
s->scsi_len -= len;
36
}
25
s->scsi_off += len;
37
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
26
+ if (len > s->data_len) {
38
+#endif
27
+ len = s->data_len;
39
28
+ }
40
if (arm_feature(env, ARM_FEATURE_M)) {
29
s->data_len -= len;
41
+#ifndef CONFIG_USER_ONLY
30
if (s->scsi_len == 0 || s->data_len == 0) {
42
uint32_t initial_msp; /* Loaded from 0x0 */
31
scsi_req_continue(s->req);
43
uint32_t initial_pc; /* Loaded from 0x4 */
32
@@ -XXX,XX +XXX,XX @@ static void usb_msd_command_complete(SCSIRequest *req, uint32_t status, size_t r
44
uint8_t *rom;
33
if (s->data_len) {
45
uint32_t vecbase;
34
int len = (p->iov.size - p->actual_length);
46
+#endif
35
usb_packet_skip(p, len);
47
36
+ if (len > s->data_len) {
48
if (cpu_isar_feature(aa32_lob, cpu)) {
37
+ len = s->data_len;
49
/*
38
+ }
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
39
s->data_len -= len;
51
env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
40
}
52
R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
41
if (s->data_len == 0) {
53
}
42
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
54
+
43
int len = p->iov.size - p->actual_length;
55
+#ifndef CONFIG_USER_ONLY
44
if (len) {
56
/* Unlike A/R profile, M profile defines the reset LR value */
45
usb_packet_skip(p, len);
57
env->regs[14] = 0xffffffff;
46
+ if (len > s->data_len) {
58
47
+ len = s->data_len;
59
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
48
+ }
60
env->regs[13] = initial_msp & 0xFFFFFFFC;
49
s->data_len -= len;
61
env->regs[15] = initial_pc & ~1;
50
if (s->data_len == 0) {
62
env->thumb = initial_pc & 1;
51
s->mode = USB_MSDM_CSW;
63
+#else
52
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
64
+ /*
53
int len = p->iov.size - p->actual_length;
65
+ * For user mode we run non-secure and with access to the FPU.
54
if (len) {
66
+ * The FPU context is active (ie does not need further setup)
55
usb_packet_skip(p, len);
67
+ * and is owned by non-secure.
56
+ if (len > s->data_len) {
68
+ */
57
+ len = s->data_len;
69
+ env->v7m.secure = false;
58
+ }
70
+ env->v7m.nsacr = 0xcff;
59
s->data_len -= len;
71
+ env->v7m.cpacr[M_REG_NS] = 0xf0ffff;
60
if (s->data_len == 0) {
72
+ env->v7m.fpccr[M_REG_S] &=
61
s->mode = USB_MSDM_CSW;
73
+ ~(R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK);
62
}
74
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
63
}
75
+#endif
64
}
76
}
65
- if (p->actual_length < p->iov.size) {
77
66
+ if (p->actual_length < p->iov.size && (p->short_not_ok ||
78
+#ifndef CONFIG_USER_ONLY
67
+ s->scsi_len >= p->ep->max_packet_size)) {
79
/* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
68
DPRINTF("Deferring packet %p [wait data-in]\n", p);
80
* executing as AArch32 then check if highvecs are enabled and
69
s->packet = p;
81
* adjust the PC accordingly.
70
p->status = USB_RET_ASYNC;
71
--
82
--
72
2.20.1
83
2.20.1
73
84
74
85
diff view generated by jsdifflib
There's no particular reason why the exclusive monitor should
be only cleared on reset in system emulation mode. It doesn't
hurt if it isn't cleared in user mode, but we might as well
reduce the amount of code we have that's inside an ifdef.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-3-peter.maydell@linaro.org
---
 target/arm/cpu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
             env->regs[15] = 0xFFFF0000;
         }

+    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
+#endif
+
     /* M profile requires that reset clears the exclusive monitor;
      * A profile does not, but clearing it makes more sense than having it
      * set with an exclusive access on address zero.
      */
     arm_clear_exclusive(env);

-    env->vfp.xregs[ARM_VFP_FPEXC] = 0;
-#endif
-
     if (arm_feature(env, ARM_FEATURE_PMSA)) {
         if (cpu->pmsav7_dregion > 0) {
             if (arm_feature(env, ARM_FEATURE_V8)) {

From: Paul Zimmerman <pauldzim@gmail.com>

Add the dwc-hsotg (dwc2) USB host controller state definitions.
Mostly based on hw/usb/hcd-ehci.h.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-4-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/usb/hcd-dwc2.h | 190 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)
 create mode 100644 hw/usb/hcd-dwc2.h

diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/usb/hcd-dwc2.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * dwc-hsotg (dwc2) USB host controller state definitions
+ *
+ * Based on hw/usb/hcd-ehci.h
+ *
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef HW_USB_DWC2_H
+#define HW_USB_DWC2_H
+
+#include "qemu/timer.h"
+#include "hw/irq.h"
+#include "hw/sysbus.h"
+#include "hw/usb.h"
+#include "sysemu/dma.h"
+
+#define DWC2_MMIO_SIZE 0x11000
+
+#define DWC2_NB_CHAN 8 /* Number of host channels */
+#define DWC2_MAX_XFER_SIZE 65536 /* Max transfer size expected in HCTSIZ */
52
+
53
+typedef struct DWC2Packet DWC2Packet;
54
+typedef struct DWC2State DWC2State;
55
+typedef struct DWC2Class DWC2Class;
56
+
57
+enum async_state {
58
+ DWC2_ASYNC_NONE = 0,
59
+ DWC2_ASYNC_INITIALIZED,
60
+ DWC2_ASYNC_INFLIGHT,
61
+ DWC2_ASYNC_FINISHED,
62
+};
63
+
64
+struct DWC2Packet {
65
+ USBPacket packet;
66
+ uint32_t devadr;
67
+ uint32_t epnum;
68
+ uint32_t epdir;
69
+ uint32_t mps;
70
+ uint32_t pid;
71
+ uint32_t index;
72
+ uint32_t pcnt;
73
+ uint32_t len;
74
+ int32_t async;
75
+ bool small;
76
+ bool needs_service;
77
+};
78
+
79
+struct DWC2State {
80
+ /*< private >*/
81
+ SysBusDevice parent_obj;
82
+
83
+ /*< public >*/
84
+ USBBus bus;
85
+ qemu_irq irq;
86
+ MemoryRegion *dma_mr;
87
+ AddressSpace dma_as;
88
+ MemoryRegion container;
89
+ MemoryRegion hsotg;
90
+ MemoryRegion fifos;
91
+
92
+ union {
93
+#define DWC2_GLBREG_SIZE 0x70
94
+ uint32_t glbreg[DWC2_GLBREG_SIZE / sizeof(uint32_t)];
95
+ struct {
96
+ uint32_t gotgctl; /* 00 */
97
+ uint32_t gotgint; /* 04 */
98
+ uint32_t gahbcfg; /* 08 */
99
+ uint32_t gusbcfg; /* 0c */
100
+ uint32_t grstctl; /* 10 */
101
+ uint32_t gintsts; /* 14 */
102
+ uint32_t gintmsk; /* 18 */
103
+ uint32_t grxstsr; /* 1c */
104
+ uint32_t grxstsp; /* 20 */
105
+ uint32_t grxfsiz; /* 24 */
106
+ uint32_t gnptxfsiz; /* 28 */
107
+ uint32_t gnptxsts; /* 2c */
108
+ uint32_t gi2cctl; /* 30 */
109
+ uint32_t gpvndctl; /* 34 */
110
+ uint32_t ggpio; /* 38 */
111
+ uint32_t guid; /* 3c */
112
+ uint32_t gsnpsid; /* 40 */
113
+ uint32_t ghwcfg1; /* 44 */
114
+ uint32_t ghwcfg2; /* 48 */
115
+ uint32_t ghwcfg3; /* 4c */
116
+ uint32_t ghwcfg4; /* 50 */
117
+ uint32_t glpmcfg; /* 54 */
118
+ uint32_t gpwrdn; /* 58 */
119
+ uint32_t gdfifocfg; /* 5c */
120
+ uint32_t gadpctl; /* 60 */
121
+ uint32_t grefclk; /* 64 */
122
+ uint32_t gintmsk2; /* 68 */
123
+ uint32_t gintsts2; /* 6c */
124
+ };
125
+ };
126
+
127
+ union {
128
+#define DWC2_FSZREG_SIZE 0x04
129
+ uint32_t fszreg[DWC2_FSZREG_SIZE / sizeof(uint32_t)];
130
+ struct {
131
+ uint32_t hptxfsiz; /* 100 */
132
+ };
133
+ };
134
+
135
+ union {
136
+#define DWC2_HREG0_SIZE 0x44
137
+ uint32_t hreg0[DWC2_HREG0_SIZE / sizeof(uint32_t)];
138
+ struct {
139
+ uint32_t hcfg; /* 400 */
140
+ uint32_t hfir; /* 404 */
141
+ uint32_t hfnum; /* 408 */
142
+ uint32_t rsvd0; /* 40c */
143
+ uint32_t hptxsts; /* 410 */
144
+ uint32_t haint; /* 414 */
145
+ uint32_t haintmsk; /* 418 */
146
+ uint32_t hflbaddr; /* 41c */
147
+ uint32_t rsvd1[8]; /* 420-43c */
148
+ uint32_t hprt0; /* 440 */
149
+ };
150
+ };
151
+
152
+#define DWC2_HREG1_SIZE (0x20 * DWC2_NB_CHAN)
153
+ uint32_t hreg1[DWC2_HREG1_SIZE / sizeof(uint32_t)];
154
+
155
+#define hcchar(_ch) hreg1[((_ch) << 3) + 0] /* 500, 520, ... */
156
+#define hcsplt(_ch) hreg1[((_ch) << 3) + 1] /* 504, 524, ... */
157
+#define hcint(_ch) hreg1[((_ch) << 3) + 2] /* 508, 528, ... */
158
+#define hcintmsk(_ch) hreg1[((_ch) << 3) + 3] /* 50c, 52c, ... */
159
+#define hctsiz(_ch) hreg1[((_ch) << 3) + 4] /* 510, 530, ... */
160
+#define hcdma(_ch) hreg1[((_ch) << 3) + 5] /* 514, 534, ... */
161
+#define hcdmab(_ch) hreg1[((_ch) << 3) + 7] /* 51c, 53c, ... */
162
+
163
+ union {
164
+#define DWC2_PCGREG_SIZE 0x08
165
+ uint32_t pcgreg[DWC2_PCGREG_SIZE / sizeof(uint32_t)];
166
+ struct {
167
+ uint32_t pcgctl; /* e00 */
168
+ uint32_t pcgcctl1; /* e04 */
169
+ };
170
+ };
171
+
172
+ /* TODO - implement FIFO registers for slave mode */
173
+#define DWC2_HFIFO_SIZE (0x1000 * DWC2_NB_CHAN)
174
+
175
+ /*
176
+ * Internal state
177
+ */
178
+ QEMUTimer *eof_timer;
179
+ QEMUTimer *frame_timer;
180
+ QEMUBH *async_bh;
181
+ int64_t sof_time;
182
+ int64_t usb_frame_time;
183
+ int64_t usb_bit_time;
184
+ uint32_t usb_version;
185
+ uint16_t frame_number;
186
+ uint16_t fi;
187
+ uint16_t next_chan;
188
+ bool working;
189
+ USBPort uport;
190
+ DWC2Packet packet[DWC2_NB_CHAN]; /* one packet per chan */
191
+ uint8_t usb_buf[DWC2_NB_CHAN][DWC2_MAX_XFER_SIZE]; /* one buffer per chan */
192
+};
193
+
194
+struct DWC2Class {
195
+ /*< private >*/
196
+ SysBusDeviceClass parent_class;
197
+ ResettablePhases parent_phases;
198
+
199
+ /*< public >*/
200
+};
201
+
202
+#define TYPE_DWC2_USB "dwc2-usb"
203
+#define DWC2_USB(obj) \
204
+ OBJECT_CHECK(DWC2State, (obj), TYPE_DWC2_USB)
205
+#define DWC2_CLASS(klass) \
206
+ OBJECT_CLASS_CHECK(DWC2Class, (klass), TYPE_DWC2_USB)
207
+#define DWC2_GET_CLASS(obj) \
208
+ OBJECT_GET_CLASS(DWC2Class, (obj), TYPE_DWC2_USB)
209
+
210
+#endif
211
--
2.20.1
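One detail of the DWC2State layout above worth calling out: each register bank is declared as a union of a flat uint32_t array and an anonymous struct of named registers, so MMIO dispatch can index by offset while the rest of the code uses names. In isolation the pattern looks like this (generic sketch with made-up register names, not the dwc2 definitions):

    #include <stdint.h>

    typedef union {
        uint32_t reg[4];           /* flat view: index = offset / 4 */
        struct {
            uint32_t ctrl;         /* offset 0x0 */
            uint32_t status;       /* offset 0x4 */
            uint32_t int_enable;   /* offset 0x8 */
            uint32_t int_status;   /* offset 0xc */
        };
    } ExampleRegBank;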
diff view generated by jsdifflib
Move an ifndef CONFIG_USER_ONLY code block up in arm_cpu_reset() so
it can be merged with another earlier one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210914120725.24992-4-peter.maydell@linaro.org
---
 target/arm/cpu.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

Convert the VCVT fixed-point conversion operations in the
Neon 2-regs-and-shift group to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-9-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 11 +++++
 target/arm/translate-neon.inc.c | 49 +++++++++++++++++++++
 target/arm/translate.c | 75 +--------------------------------
 3 files changed, 62 insertions(+), 73 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
11
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
13
--- a/target/arm/cpu.c
16
+++ b/target/arm/neon-dp.decode
14
+++ b/target/arm/cpu.c
17
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
15
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
18
@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
16
env->uncached_cpsr = ARM_CPU_MODE_SVC;
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
17
}
20
18
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
21
+# We use size=0 for fp32 and size=1 for fp16 to match the 3-same encodings.
22
+@2reg_vcvt .... ... . . . 1 ..... .... .... . q:1 . . .... \
23
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i5
24
+
19
+
25
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
20
+ /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
26
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
21
+ * executing as AArch32 then check if highvecs are enabled and
27
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
22
+ * adjust the PC accordingly.
28
@@ -XXX,XX +XXX,XX @@ VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
23
+ */
29
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
24
+ if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
30
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
25
+ env->regs[15] = 0xFFFF0000;
31
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
32
+
33
+# VCVT fixed<->float conversions
34
+# TODO: FP16 fixed<->float conversions are opc==0b1100 and 0b1101
35
+VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
36
+VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
37
+VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
38
+VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
39
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/translate-neon.inc.c
42
+++ b/target/arm/translate-neon.inc.c
43
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
44
};
45
return do_vshll_2sh(s, a, widenfn[a->size], true);
46
}
47
+
48
+static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
49
+ NeonGenTwoSingleOPFn *fn)
50
+{
51
+ /* FP operations in 2-reg-and-shift group */
52
+ TCGv_i32 tmp, shiftv;
53
+ TCGv_ptr fpstatus;
54
+ int pass;
55
+
56
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
57
+ return false;
58
+ }
26
+ }
59
+
27
+
60
+ /* UNDEF accesses to D16-D31 if they don't exist. */
28
+ env->vfp.xregs[ARM_VFP_FPEXC] = 0;
61
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
29
#endif
62
+ ((a->vd | a->vm) & 0x10)) {
30
63
+ return false;
31
if (arm_feature(env, ARM_FEATURE_M)) {
64
+ }
32
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
65
+
33
#endif
66
+ if ((a->vm | a->vd) & a->q) {
34
}
67
+ return false;
35
68
+ }
36
-#ifndef CONFIG_USER_ONLY
69
+
37
- /* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
70
+ if (!vfp_access_check(s)) {
38
- * executing as AArch32 then check if highvecs are enabled and
71
+ return true;
39
- * adjust the PC accordingly.
72
+ }
40
- */
73
+
41
- if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
74
+ fpstatus = get_fpstatus_ptr(1);
42
- env->regs[15] = 0xFFFF0000;
75
+ shiftv = tcg_const_i32(a->shift);
43
- }
76
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
77
+ tmp = neon_load_reg(a->vm, pass);
78
+ fn(tmp, tmp, shiftv, fpstatus);
79
+ neon_store_reg(a->vd, pass, tmp);
80
+ }
81
+ tcg_temp_free_ptr(fpstatus);
82
+ tcg_temp_free_i32(shiftv);
83
+ return true;
84
+}
85
+
86
+#define DO_FP_2SH(INSN, FUNC) \
87
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
88
+ { \
89
+ return do_fp_2sh(s, a, FUNC); \
90
+ }
91
+
92
+DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
93
+DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
94
+DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
95
+DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
101
int q;
102
int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
103
int size;
104
- int shift;
105
int pass;
106
int u;
107
int vec_size;
108
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
109
return 1;
110
} else if (insn & (1 << 4)) {
111
if ((insn & 0x00380080) != 0) {
112
- /* Two registers and shift. */
113
- op = (insn >> 8) & 0xf;
114
-
44
-
115
- switch (op) {
45
- env->vfp.xregs[ARM_VFP_FPEXC] = 0;
116
- case 0: /* VSHR */
46
-#endif
117
- case 1: /* VSRA */
118
- case 2: /* VRSHR */
119
- case 3: /* VRSRA */
120
- case 4: /* VSRI */
121
- case 5: /* VSHL, VSLI */
122
- case 6: /* VQSHLU */
123
- case 7: /* VQSHL */
124
- case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
125
- case 9: /* VQSHRN, VQRSHRN */
126
- case 10: /* VSHLL, including VMOVL */
127
- return 1; /* handled by decodetree */
128
- default:
129
- break;
130
- }
131
-
47
-
132
- if (insn & (1 << 7)) {
48
/* M profile requires that reset clears the exclusive monitor;
133
- /* 64-bit shift. */
49
* A profile does not, but clearing it makes more sense than having it
134
- if (op > 7) {
50
* set with an exclusive access on address zero.
135
- return 1;
136
- }
137
- size = 3;
138
- } else {
139
- size = 2;
140
- while ((insn & (1 << (size + 19))) == 0)
141
- size--;
142
- }
143
- shift = (insn >> 16) & ((1 << (3 + size)) - 1);
144
- if (op >= 14) {
145
- /* VCVT fixed-point. */
146
- TCGv_ptr fpst;
147
- TCGv_i32 shiftv;
148
- VFPGenFixPointFn *fn;
149
-
150
- if (!(insn & (1 << 21)) || (q && ((rd | rm) & 1))) {
151
- return 1;
152
- }
153
-
154
- if (!(op & 1)) {
155
- if (u) {
156
- fn = gen_helper_vfp_ultos;
157
- } else {
158
- fn = gen_helper_vfp_sltos;
159
- }
160
- } else {
161
- if (u) {
162
- fn = gen_helper_vfp_touls_round_to_zero;
163
- } else {
164
- fn = gen_helper_vfp_tosls_round_to_zero;
165
- }
166
- }
167
-
168
- /* We have already masked out the must-be-1 top bit of imm6,
169
- * hence this 32-shift where the ARM ARM has 64-imm6.
170
- */
171
- shift = 32 - shift;
172
- fpst = get_fpstatus_ptr(1);
173
- shiftv = tcg_const_i32(shift);
174
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
175
- TCGv_i32 tmpf = neon_load_reg(rm, pass);
176
- fn(tmpf, tmpf, shiftv, fpst);
177
- neon_store_reg(rd, pass, tmpf);
178
- }
179
- tcg_temp_free_ptr(fpst);
180
- tcg_temp_free_i32(shiftv);
181
- } else {
182
- return 1;
183
- }
184
+ /* Two registers and shift: handled by decodetree */
185
+ return 1;
186
} else { /* (insn & 0x00380080) == 0 */
187
int invert, reg_ofs, vec_size;
188
189
--
2.20.1
diff view generated by jsdifflib
From: Shashi Mallela <shashi.mallela@linaro.org>

During sbsa acs level 3 testing, it is seen that the GIC maintenance
interrupts are not triggered and the related test cases fail. This
is because we were incorrectly passing the value of the MISR register
(from maintenance_interrupt_state()) to qemu_set_irq() as the level
argument, whereas the device on the other end of this irq line
expects a 0/1 value.

Fix the logic to pass a 0/1 level indication, rather than a
0/not-0 value.

Fixes: c5fc89b36c0 ("hw/intc/arm_gicv3: Implement gicv3_cpuif_virt_update()")
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210915205809.59068-1-shashi.mallela@linaro.org
[PMM: tweaked commit message; collapsed nested if()s into one]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_cpuif.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
         }
     }

-    if (cs->ich_hcr_el2 & ICH_HCR_EL2_EN) {
-        maintlevel = maintenance_interrupt_state(cs);
+    if ((cs->ich_hcr_el2 & ICH_HCR_EL2_EN) &&
+        maintenance_interrupt_state(cs) != 0) {
+        maintlevel = 1;
     }

     trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel,

From: Paul Zimmerman <pauldzim@gmail.com>

Import the dwc-hsotg (dwc2) register definitions file from the
Linux kernel. This is a copy of drivers/usb/dwc2/hw.h from the
mainline Linux kernel, the only changes being to the header, and
two instances of 'u32' changed to 'uint32_t' to allow it to
compile. Checkpatch throws a boatload of errors due to the tab
indentation, but I would rather import it as-is than reformat it.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-3-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/usb/dwc2-regs.h | 899 +++++++++++++++++++++++++++++++++++++
 1 file changed, 899 insertions(+)
 create mode 100644 include/hw/usb/dwc2-regs.h

diff --git a/include/hw/usb/dwc2-regs.h b/include/hw/usb/dwc2-regs.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/usb/dwc2-regs.h
@@ -XXX,XX +XXX,XX @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/*
+ * Imported from the Linux kernel file drivers/usb/dwc2/hw.h, commit
+ * a89bae709b3492b478480a2c9734e7e9393b279c ("usb: dwc2: Move
+ * UTMI_PHY_DATA defines closer")
+ *
+ * hw.h - DesignWare HS OTG Controller hardware definitions
+ *
+ * Copyright 2004-2013 Synopsys, Inc.
+ *
36
+ * modification, are permitted provided that the following conditions
37
+ * are met:
38
+ * 1. Redistributions of source code must retain the above copyright
39
+ * notice, this list of conditions, and the following disclaimer,
40
+ * without modification.
41
+ * 2. Redistributions in binary form must reproduce the above copyright
42
+ * notice, this list of conditions and the following disclaimer in the
43
+ * documentation and/or other materials provided with the distribution.
44
+ * 3. The names of the above-listed copyright holders may not be used
45
+ * to endorse or promote products derived from this software without
46
+ * specific prior written permission.
47
+ *
48
+ * ALTERNATIVELY, this software may be distributed under the terms of the
49
+ * GNU General Public License ("GPL") as published by the Free Software
50
+ * Foundation; either version 2 of the License, or (at your option) any
51
+ * later version.
52
+ *
53
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
54
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
55
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
56
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
57
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
58
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
59
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
60
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
61
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
62
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
63
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
64
+ */
65
+
66
+#ifndef __DWC2_HW_H__
67
+#define __DWC2_HW_H__
68
+
69
+#define HSOTG_REG(x)    (x)
70
+
71
+#define GOTGCTL                HSOTG_REG(0x000)
72
+#define GOTGCTL_CHIRPEN            BIT(27)
73
+#define GOTGCTL_MULT_VALID_BC_MASK    (0x1f << 22)
74
+#define GOTGCTL_MULT_VALID_BC_SHIFT    22
75
+#define GOTGCTL_OTGVER            BIT(20)
76
+#define GOTGCTL_BSESVLD            BIT(19)
77
+#define GOTGCTL_ASESVLD            BIT(18)
78
+#define GOTGCTL_DBNC_SHORT        BIT(17)
79
+#define GOTGCTL_CONID_B            BIT(16)
80
+#define GOTGCTL_DBNCE_FLTR_BYPASS    BIT(15)
81
+#define GOTGCTL_DEVHNPEN        BIT(11)
82
+#define GOTGCTL_HSTSETHNPEN        BIT(10)
83
+#define GOTGCTL_HNPREQ            BIT(9)
84
+#define GOTGCTL_HSTNEGSCS        BIT(8)
85
+#define GOTGCTL_SESREQ            BIT(1)
86
+#define GOTGCTL_SESREQSCS        BIT(0)
87
+
88
+#define GOTGINT                HSOTG_REG(0x004)
89
+#define GOTGINT_DBNCE_DONE        BIT(19)
90
+#define GOTGINT_A_DEV_TOUT_CHG        BIT(18)
91
+#define GOTGINT_HST_NEG_DET        BIT(17)
92
+#define GOTGINT_HST_NEG_SUC_STS_CHNG    BIT(9)
93
+#define GOTGINT_SES_REQ_SUC_STS_CHNG    BIT(8)
94
+#define GOTGINT_SES_END_DET        BIT(2)
95
+
96
+#define GAHBCFG                HSOTG_REG(0x008)
97
+#define GAHBCFG_AHB_SINGLE        BIT(23)
98
+#define GAHBCFG_NOTI_ALL_DMA_WRIT    BIT(22)
99
+#define GAHBCFG_REM_MEM_SUPP        BIT(21)
100
+#define GAHBCFG_P_TXF_EMP_LVL        BIT(8)
101
+#define GAHBCFG_NP_TXF_EMP_LVL        BIT(7)
102
+#define GAHBCFG_DMA_EN            BIT(5)
103
+#define GAHBCFG_HBSTLEN_MASK        (0xf << 1)
104
+#define GAHBCFG_HBSTLEN_SHIFT        1
105
+#define GAHBCFG_HBSTLEN_SINGLE        0
106
+#define GAHBCFG_HBSTLEN_INCR        1
107
+#define GAHBCFG_HBSTLEN_INCR4        3
108
+#define GAHBCFG_HBSTLEN_INCR8        5
109
+#define GAHBCFG_HBSTLEN_INCR16        7
110
+#define GAHBCFG_GLBL_INTR_EN        BIT(0)
111
+#define GAHBCFG_CTRL_MASK        (GAHBCFG_P_TXF_EMP_LVL | \
112
+                     GAHBCFG_NP_TXF_EMP_LVL | \
113
+                     GAHBCFG_DMA_EN | \
114
+                     GAHBCFG_GLBL_INTR_EN)
115
+
116
+#define GUSBCFG                HSOTG_REG(0x00C)
117
+#define GUSBCFG_FORCEDEVMODE        BIT(30)
118
+#define GUSBCFG_FORCEHOSTMODE        BIT(29)
119
+#define GUSBCFG_TXENDDELAY        BIT(28)
120
+#define GUSBCFG_ICTRAFFICPULLREMOVE    BIT(27)
121
+#define GUSBCFG_ICUSBCAP        BIT(26)
122
+#define GUSBCFG_ULPI_INT_PROT_DIS    BIT(25)
123
+#define GUSBCFG_INDICATORPASSTHROUGH    BIT(24)
124
+#define GUSBCFG_INDICATORCOMPLEMENT    BIT(23)
125
+#define GUSBCFG_TERMSELDLPULSE        BIT(22)
126
+#define GUSBCFG_ULPI_INT_VBUS_IND    BIT(21)
127
+#define GUSBCFG_ULPI_EXT_VBUS_DRV    BIT(20)
128
+#define GUSBCFG_ULPI_CLK_SUSP_M        BIT(19)
129
+#define GUSBCFG_ULPI_AUTO_RES        BIT(18)
130
+#define GUSBCFG_ULPI_FS_LS        BIT(17)
131
+#define GUSBCFG_OTG_UTMI_FS_SEL        BIT(16)
132
+#define GUSBCFG_PHY_LP_CLK_SEL        BIT(15)
133
+#define GUSBCFG_USBTRDTIM_MASK        (0xf << 10)
134
+#define GUSBCFG_USBTRDTIM_SHIFT        10
135
+#define GUSBCFG_HNPCAP            BIT(9)
136
+#define GUSBCFG_SRPCAP            BIT(8)
137
+#define GUSBCFG_DDRSEL            BIT(7)
138
+#define GUSBCFG_PHYSEL            BIT(6)
139
+#define GUSBCFG_FSINTF            BIT(5)
140
+#define GUSBCFG_ULPI_UTMI_SEL        BIT(4)
141
+#define GUSBCFG_PHYIF16            BIT(3)
142
+#define GUSBCFG_PHYIF8            (0 << 3)
143
+#define GUSBCFG_TOUTCAL_MASK        (0x7 << 0)
144
+#define GUSBCFG_TOUTCAL_SHIFT        0
145
+#define GUSBCFG_TOUTCAL_LIMIT        0x7
146
+#define GUSBCFG_TOUTCAL(_x)        ((_x) << 0)
147
+
148
+#define GRSTCTL                HSOTG_REG(0x010)
149
+#define GRSTCTL_AHBIDLE            BIT(31)
150
+#define GRSTCTL_DMAREQ            BIT(30)
151
+#define GRSTCTL_TXFNUM_MASK        (0x1f << 6)
152
+#define GRSTCTL_TXFNUM_SHIFT        6
153
+#define GRSTCTL_TXFNUM_LIMIT        0x1f
154
+#define GRSTCTL_TXFNUM(_x)        ((_x) << 6)
155
+#define GRSTCTL_TXFFLSH            BIT(5)
156
+#define GRSTCTL_RXFFLSH            BIT(4)
157
+#define GRSTCTL_IN_TKNQ_FLSH        BIT(3)
158
+#define GRSTCTL_FRMCNTRRST        BIT(2)
159
+#define GRSTCTL_HSFTRST            BIT(1)
160
+#define GRSTCTL_CSFTRST            BIT(0)
161
+
162
+#define GINTSTS                HSOTG_REG(0x014)
163
+#define GINTMSK                HSOTG_REG(0x018)
164
+#define GINTSTS_WKUPINT            BIT(31)
165
+#define GINTSTS_SESSREQINT        BIT(30)
166
+#define GINTSTS_DISCONNINT        BIT(29)
167
+#define GINTSTS_CONIDSTSCHNG        BIT(28)
168
+#define GINTSTS_LPMTRANRCVD        BIT(27)
169
+#define GINTSTS_PTXFEMP            BIT(26)
170
+#define GINTSTS_HCHINT            BIT(25)
171
+#define GINTSTS_PRTINT            BIT(24)
172
+#define GINTSTS_RESETDET        BIT(23)
173
+#define GINTSTS_FET_SUSP        BIT(22)
174
+#define GINTSTS_INCOMPL_IP        BIT(21)
175
+#define GINTSTS_INCOMPL_SOOUT        BIT(21)
176
+#define GINTSTS_INCOMPL_SOIN        BIT(20)
177
+#define GINTSTS_OEPINT            BIT(19)
178
+#define GINTSTS_IEPINT            BIT(18)
179
+#define GINTSTS_EPMIS            BIT(17)
180
+#define GINTSTS_RESTOREDONE        BIT(16)
181
+#define GINTSTS_EOPF            BIT(15)
182
+#define GINTSTS_ISOUTDROP        BIT(14)
183
+#define GINTSTS_ENUMDONE        BIT(13)
184
+#define GINTSTS_USBRST            BIT(12)
185
+#define GINTSTS_USBSUSP            BIT(11)
186
+#define GINTSTS_ERLYSUSP        BIT(10)
187
+#define GINTSTS_I2CINT            BIT(9)
188
+#define GINTSTS_ULPI_CK_INT        BIT(8)
189
+#define GINTSTS_GOUTNAKEFF        BIT(7)
190
+#define GINTSTS_GINNAKEFF        BIT(6)
191
+#define GINTSTS_NPTXFEMP        BIT(5)
192
+#define GINTSTS_RXFLVL            BIT(4)
193
+#define GINTSTS_SOF            BIT(3)
194
+#define GINTSTS_OTGINT            BIT(2)
195
+#define GINTSTS_MODEMIS            BIT(1)
196
+#define GINTSTS_CURMODE_HOST        BIT(0)
197
+
198
+#define GRXSTSR                HSOTG_REG(0x01C)
199
+#define GRXSTSP                HSOTG_REG(0x020)
200
+#define GRXSTS_FN_MASK            (0x7f << 25)
201
+#define GRXSTS_FN_SHIFT            25
202
+#define GRXSTS_PKTSTS_MASK        (0xf << 17)
203
+#define GRXSTS_PKTSTS_SHIFT        17
204
+#define GRXSTS_PKTSTS_GLOBALOUTNAK    1
205
+#define GRXSTS_PKTSTS_OUTRX        2
206
+#define GRXSTS_PKTSTS_HCHIN        2
207
+#define GRXSTS_PKTSTS_OUTDONE        3
208
+#define GRXSTS_PKTSTS_HCHIN_XFER_COMP    3
209
+#define GRXSTS_PKTSTS_SETUPDONE        4
210
+#define GRXSTS_PKTSTS_DATATOGGLEERR    5
211
+#define GRXSTS_PKTSTS_SETUPRX        6
212
+#define GRXSTS_PKTSTS_HCHHALTED        7
213
+#define GRXSTS_HCHNUM_MASK        (0xf << 0)
214
+#define GRXSTS_HCHNUM_SHIFT        0
215
+#define GRXSTS_DPID_MASK        (0x3 << 15)
216
+#define GRXSTS_DPID_SHIFT        15
217
+#define GRXSTS_BYTECNT_MASK        (0x7ff << 4)
218
+#define GRXSTS_BYTECNT_SHIFT        4
219
+#define GRXSTS_EPNUM_MASK        (0xf << 0)
220
+#define GRXSTS_EPNUM_SHIFT        0
221
+
222
+#define GRXFSIZ                HSOTG_REG(0x024)
223
+#define GRXFSIZ_DEPTH_MASK        (0xffff << 0)
224
+#define GRXFSIZ_DEPTH_SHIFT        0
225
+
226
+#define GNPTXFSIZ            HSOTG_REG(0x028)
227
+/* Use FIFOSIZE_* constants to access this register */
228
+
229
+#define GNPTXSTS            HSOTG_REG(0x02C)
230
+#define GNPTXSTS_NP_TXQ_TOP_MASK        (0x7f << 24)
231
+#define GNPTXSTS_NP_TXQ_TOP_SHIFT        24
232
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_MASK        (0xff << 16)
233
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_SHIFT        16
234
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_GET(_v)    (((_v) >> 16) & 0xff)
235
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_MASK        (0xffff << 0)
236
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_SHIFT        0
237
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_GET(_v)    (((_v) >> 0) & 0xffff)
238
+
239
+#define GI2CCTL                HSOTG_REG(0x0030)
240
+#define GI2CCTL_BSYDNE            BIT(31)
241
+#define GI2CCTL_RW            BIT(30)
242
+#define GI2CCTL_I2CDATSE0        BIT(28)
243
+#define GI2CCTL_I2CDEVADDR_MASK        (0x3 << 26)
244
+#define GI2CCTL_I2CDEVADDR_SHIFT    26
245
+#define GI2CCTL_I2CSUSPCTL        BIT(25)
246
+#define GI2CCTL_ACK            BIT(24)
247
+#define GI2CCTL_I2CEN            BIT(23)
248
+#define GI2CCTL_ADDR_MASK        (0x7f << 16)
249
+#define GI2CCTL_ADDR_SHIFT        16
250
+#define GI2CCTL_REGADDR_MASK        (0xff << 8)
251
+#define GI2CCTL_REGADDR_SHIFT        8
252
+#define GI2CCTL_RWDATA_MASK        (0xff << 0)
253
+#define GI2CCTL_RWDATA_SHIFT        0
254
+
255
+#define GPVNDCTL            HSOTG_REG(0x0034)
256
+#define GGPIO                HSOTG_REG(0x0038)
257
+#define GGPIO_STM32_OTG_GCCFG_PWRDWN    BIT(16)
258
+
259
+#define GUID                HSOTG_REG(0x003c)
260
+#define GSNPSID                HSOTG_REG(0x0040)
261
+#define GHWCFG1                HSOTG_REG(0x0044)
262
+#define GSNPSID_ID_MASK            GENMASK(31, 16)
263
+
264
+#define GHWCFG2                HSOTG_REG(0x0048)
265
+#define GHWCFG2_OTG_ENABLE_IC_USB        BIT(31)
266
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_MASK        (0x1f << 26)
267
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT        26
268
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_MASK    (0x3 << 24)
269
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT    24
270
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_MASK    (0x3 << 22)
271
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT    22
272
+#define GHWCFG2_MULTI_PROC_INT            BIT(20)
273
+#define GHWCFG2_DYNAMIC_FIFO            BIT(19)
274
+#define GHWCFG2_PERIO_EP_SUPPORTED        BIT(18)
275
+#define GHWCFG2_NUM_HOST_CHAN_MASK        (0xf << 14)
276
+#define GHWCFG2_NUM_HOST_CHAN_SHIFT        14
277
+#define GHWCFG2_NUM_DEV_EP_MASK            (0xf << 10)
278
+#define GHWCFG2_NUM_DEV_EP_SHIFT        10
279
+#define GHWCFG2_FS_PHY_TYPE_MASK        (0x3 << 8)
280
+#define GHWCFG2_FS_PHY_TYPE_SHIFT        8
281
+#define GHWCFG2_FS_PHY_TYPE_NOT_SUPPORTED    0
282
+#define GHWCFG2_FS_PHY_TYPE_DEDICATED        1
283
+#define GHWCFG2_FS_PHY_TYPE_SHARED_UTMI        2
284
+#define GHWCFG2_FS_PHY_TYPE_SHARED_ULPI        3
285
+#define GHWCFG2_HS_PHY_TYPE_MASK        (0x3 << 6)
286
+#define GHWCFG2_HS_PHY_TYPE_SHIFT        6
287
+#define GHWCFG2_HS_PHY_TYPE_NOT_SUPPORTED    0
288
+#define GHWCFG2_HS_PHY_TYPE_UTMI        1
289
+#define GHWCFG2_HS_PHY_TYPE_ULPI        2
290
+#define GHWCFG2_HS_PHY_TYPE_UTMI_ULPI        3
291
+#define GHWCFG2_POINT2POINT            BIT(5)
292
+#define GHWCFG2_ARCHITECTURE_MASK        (0x3 << 3)
293
+#define GHWCFG2_ARCHITECTURE_SHIFT        3
294
+#define GHWCFG2_SLAVE_ONLY_ARCH            0
295
+#define GHWCFG2_EXT_DMA_ARCH            1
296
+#define GHWCFG2_INT_DMA_ARCH            2
297
+#define GHWCFG2_OP_MODE_MASK            (0x7 << 0)
298
+#define GHWCFG2_OP_MODE_SHIFT            0
299
+#define GHWCFG2_OP_MODE_HNP_SRP_CAPABLE        0
300
+#define GHWCFG2_OP_MODE_SRP_ONLY_CAPABLE    1
301
+#define GHWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE    2
302
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_DEVICE    3
303
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_DEVICE    4
304
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_HOST    5
305
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST    6
306
+#define GHWCFG2_OP_MODE_UNDEFINED        7
307
+
308
+#define GHWCFG3                HSOTG_REG(0x004c)
309
+#define GHWCFG3_DFIFO_DEPTH_MASK        (0xffff << 16)
310
+#define GHWCFG3_DFIFO_DEPTH_SHIFT        16
311
+#define GHWCFG3_OTG_LPM_EN            BIT(15)
312
+#define GHWCFG3_BC_SUPPORT            BIT(14)
313
+#define GHWCFG3_OTG_ENABLE_HSIC            BIT(13)
314
+#define GHWCFG3_ADP_SUPP            BIT(12)
315
+#define GHWCFG3_SYNCH_RESET_TYPE        BIT(11)
316
+#define GHWCFG3_OPTIONAL_FEATURES        BIT(10)
317
+#define GHWCFG3_VENDOR_CTRL_IF            BIT(9)
318
+#define GHWCFG3_I2C                BIT(8)
319
+#define GHWCFG3_OTG_FUNC            BIT(7)
320
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_MASK    (0x7 << 4)
321
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT    4
322
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_MASK    (0xf << 0)
323
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT    0
324
+
325
+#define GHWCFG4                HSOTG_REG(0x0050)
326
+#define GHWCFG4_DESC_DMA_DYN            BIT(31)
327
+#define GHWCFG4_DESC_DMA            BIT(30)
328
+#define GHWCFG4_NUM_IN_EPS_MASK            (0xf << 26)
329
+#define GHWCFG4_NUM_IN_EPS_SHIFT        26
330
+#define GHWCFG4_DED_FIFO_EN            BIT(25)
331
+#define GHWCFG4_DED_FIFO_SHIFT        25
332
+#define GHWCFG4_SESSION_END_FILT_EN        BIT(24)
333
+#define GHWCFG4_B_VALID_FILT_EN            BIT(23)
334
+#define GHWCFG4_A_VALID_FILT_EN            BIT(22)
335
+#define GHWCFG4_VBUS_VALID_FILT_EN        BIT(21)
336
+#define GHWCFG4_IDDIG_FILT_EN            BIT(20)
337
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_MASK    (0xf << 16)
338
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_SHIFT    16
339
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK    (0x3 << 14)
340
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_SHIFT    14
341
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8        0
342
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_16        1
343
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8_OR_16    2
344
+#define GHWCFG4_ACG_SUPPORTED            BIT(12)
345
+#define GHWCFG4_IPG_ISOC_SUPPORTED        BIT(11)
346
+#define GHWCFG4_SERVICE_INTERVAL_SUPPORTED BIT(10)
347
+#define GHWCFG4_XHIBER                BIT(7)
348
+#define GHWCFG4_HIBER                BIT(6)
349
+#define GHWCFG4_MIN_AHB_FREQ            BIT(5)
350
+#define GHWCFG4_POWER_OPTIMIZ            BIT(4)
351
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_MASK    (0xf << 0)
352
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_SHIFT    0
353
+
354
+#define GLPMCFG                HSOTG_REG(0x0054)
355
+#define GLPMCFG_INVSELHSIC        BIT(31)
356
+#define GLPMCFG_HSICCON            BIT(30)
357
+#define GLPMCFG_RSTRSLPSTS        BIT(29)
358
+#define GLPMCFG_ENBESL            BIT(28)
359
+#define GLPMCFG_LPM_RETRYCNT_STS_MASK    (0x7 << 25)
360
+#define GLPMCFG_LPM_RETRYCNT_STS_SHIFT    25
361
+#define GLPMCFG_SNDLPM            BIT(24)
362
+#define GLPMCFG_RETRY_CNT_MASK        (0x7 << 21)
363
+#define GLPMCFG_RETRY_CNT_SHIFT        21
364
+#define GLPMCFG_LPM_REJECT_CTRL_CONTROL    BIT(21)
365
+#define GLPMCFG_LPM_ACCEPT_CTRL_ISOC    BIT(22)
366
+#define GLPMCFG_LPM_CHNL_INDX_MASK    (0xf << 17)
367
+#define GLPMCFG_LPM_CHNL_INDX_SHIFT    17
368
+#define GLPMCFG_L1RESUMEOK        BIT(16)
369
+#define GLPMCFG_SLPSTS            BIT(15)
370
+#define GLPMCFG_COREL1RES_MASK        (0x3 << 13)
371
+#define GLPMCFG_COREL1RES_SHIFT        13
372
+#define GLPMCFG_HIRD_THRES_MASK        (0x1f << 8)
373
+#define GLPMCFG_HIRD_THRES_SHIFT    8
374
+#define GLPMCFG_HIRD_THRES_EN        (0x10 << 8)
375
+#define GLPMCFG_ENBLSLPM        BIT(7)
376
+#define GLPMCFG_BREMOTEWAKE        BIT(6)
377
+#define GLPMCFG_HIRD_MASK        (0xf << 2)
378
+#define GLPMCFG_HIRD_SHIFT        2
379
+#define GLPMCFG_APPL1RES        BIT(1)
380
+#define GLPMCFG_LPMCAP            BIT(0)
381
+
382
+#define GPWRDN                HSOTG_REG(0x0058)
383
+#define GPWRDN_MULT_VAL_ID_BC_MASK    (0x1f << 24)
384
+#define GPWRDN_MULT_VAL_ID_BC_SHIFT    24
385
+#define GPWRDN_ADP_INT            BIT(23)
386
+#define GPWRDN_BSESSVLD            BIT(22)
387
+#define GPWRDN_IDSTS            BIT(21)
388
+#define GPWRDN_LINESTATE_MASK        (0x3 << 19)
389
+#define GPWRDN_LINESTATE_SHIFT        19
390
+#define GPWRDN_STS_CHGINT_MSK        BIT(18)
391
+#define GPWRDN_STS_CHGINT        BIT(17)
392
+#define GPWRDN_SRP_DET_MSK        BIT(16)
393
+#define GPWRDN_SRP_DET            BIT(15)
394
+#define GPWRDN_CONNECT_DET_MSK        BIT(14)
395
+#define GPWRDN_CONNECT_DET        BIT(13)
396
+#define GPWRDN_DISCONN_DET_MSK        BIT(12)
397
+#define GPWRDN_DISCONN_DET        BIT(11)
398
+#define GPWRDN_RST_DET_MSK        BIT(10)
399
+#define GPWRDN_RST_DET            BIT(9)
400
+#define GPWRDN_LNSTSCHG_MSK        BIT(8)
401
+#define GPWRDN_LNSTSCHG            BIT(7)
402
+#define GPWRDN_DIS_VBUS            BIT(6)
403
+#define GPWRDN_PWRDNSWTCH        BIT(5)
404
+#define GPWRDN_PWRDNRSTN        BIT(4)
405
+#define GPWRDN_PWRDNCLMP        BIT(3)
406
+#define GPWRDN_RESTORE            BIT(2)
407
+#define GPWRDN_PMUACTV            BIT(1)
408
+#define GPWRDN_PMUINTSEL        BIT(0)
409
+
410
+#define GDFIFOCFG            HSOTG_REG(0x005c)
411
+#define GDFIFOCFG_EPINFOBASE_MASK    (0xffff << 16)
412
+#define GDFIFOCFG_EPINFOBASE_SHIFT    16
413
+#define GDFIFOCFG_GDFIFOCFG_MASK    (0xffff << 0)
414
+#define GDFIFOCFG_GDFIFOCFG_SHIFT    0
415
+
416
+#define ADPCTL                HSOTG_REG(0x0060)
417
+#define ADPCTL_AR_MASK            (0x3 << 27)
418
+#define ADPCTL_AR_SHIFT            27
419
+#define ADPCTL_ADP_TMOUT_INT_MSK    BIT(26)
420
+#define ADPCTL_ADP_SNS_INT_MSK        BIT(25)
421
+#define ADPCTL_ADP_PRB_INT_MSK        BIT(24)
422
+#define ADPCTL_ADP_TMOUT_INT        BIT(23)
423
+#define ADPCTL_ADP_SNS_INT        BIT(22)
424
+#define ADPCTL_ADP_PRB_INT        BIT(21)
425
+#define ADPCTL_ADPENA            BIT(20)
426
+#define ADPCTL_ADPRES            BIT(19)
427
+#define ADPCTL_ENASNS            BIT(18)
428
+#define ADPCTL_ENAPRB            BIT(17)
429
+#define ADPCTL_RTIM_MASK        (0x7ff << 6)
430
+#define ADPCTL_RTIM_SHIFT        6
431
+#define ADPCTL_PRB_PER_MASK        (0x3 << 4)
432
+#define ADPCTL_PRB_PER_SHIFT        4
433
+#define ADPCTL_PRB_DELTA_MASK        (0x3 << 2)
434
+#define ADPCTL_PRB_DELTA_SHIFT        2
435
+#define ADPCTL_PRB_DSCHRG_MASK        (0x3 << 0)
436
+#define ADPCTL_PRB_DSCHRG_SHIFT        0
437
+
438
+#define GREFCLK                 HSOTG_REG(0x0064)
439
+#define GREFCLK_REFCLKPER_MASK         (0x1ffff << 15)
440
+#define GREFCLK_REFCLKPER_SHIFT         15
441
+#define GREFCLK_REF_CLK_MODE         BIT(14)
442
+#define GREFCLK_SOF_CNT_WKUP_ALERT_MASK     (0x3ff)
443
+#define GREFCLK_SOF_CNT_WKUP_ALERT_SHIFT 0
444
+
445
+#define GINTMSK2            HSOTG_REG(0x0068)
446
+#define GINTMSK2_WKUP_ALERT_INT_MSK    BIT(0)
447
+
448
+#define GINTSTS2            HSOTG_REG(0x006c)
449
+#define GINTSTS2_WKUP_ALERT_INT        BIT(0)
450
+
451
+#define HPTXFSIZ            HSOTG_REG(0x100)
452
+/* Use FIFOSIZE_* constants to access this register */
453
+
454
+#define DPTXFSIZN(_a)            HSOTG_REG(0x104 + (((_a) - 1) * 4))
455
+/* Use FIFOSIZE_* constants to access this register */
456
+
457
+/* These apply to the GNPTXFSIZ, HPTXFSIZ and DPTXFSIZN registers */
458
+#define FIFOSIZE_DEPTH_MASK        (0xffff << 16)
459
+#define FIFOSIZE_DEPTH_SHIFT        16
460
+#define FIFOSIZE_STARTADDR_MASK        (0xffff << 0)
461
+#define FIFOSIZE_STARTADDR_SHIFT    0
462
+#define FIFOSIZE_DEPTH_GET(_x)        (((_x) >> 16) & 0xffff)
463
+
464
+/* Device mode registers */
465
+
466
+#define DCFG                HSOTG_REG(0x800)
467
+#define DCFG_DESCDMA_EN            BIT(23)
468
+#define DCFG_EPMISCNT_MASK        (0x1f << 18)
469
+#define DCFG_EPMISCNT_SHIFT        18
470
+#define DCFG_EPMISCNT_LIMIT        0x1f
471
+#define DCFG_EPMISCNT(_x)        ((_x) << 18)
472
+#define DCFG_IPG_ISOC_SUPPORDED        BIT(17)
473
+#define DCFG_PERFRINT_MASK        (0x3 << 11)
474
+#define DCFG_PERFRINT_SHIFT        11
475
+#define DCFG_PERFRINT_LIMIT        0x3
476
+#define DCFG_PERFRINT(_x)        ((_x) << 11)
477
+#define DCFG_DEVADDR_MASK        (0x7f << 4)
478
+#define DCFG_DEVADDR_SHIFT        4
479
+#define DCFG_DEVADDR_LIMIT        0x7f
480
+#define DCFG_DEVADDR(_x)        ((_x) << 4)
481
+#define DCFG_NZ_STS_OUT_HSHK        BIT(2)
482
+#define DCFG_DEVSPD_MASK        (0x3 << 0)
483
+#define DCFG_DEVSPD_SHIFT        0
484
+#define DCFG_DEVSPD_HS            0
485
+#define DCFG_DEVSPD_FS            1
486
+#define DCFG_DEVSPD_LS            2
487
+#define DCFG_DEVSPD_FS48        3
488
+
489
+#define DCTL                HSOTG_REG(0x804)
490
+#define DCTL_SERVICE_INTERVAL_SUPPORTED BIT(19)
491
+#define DCTL_PWRONPRGDONE        BIT(11)
492
+#define DCTL_CGOUTNAK            BIT(10)
493
+#define DCTL_SGOUTNAK            BIT(9)
494
+#define DCTL_CGNPINNAK            BIT(8)
495
+#define DCTL_SGNPINNAK            BIT(7)
496
+#define DCTL_TSTCTL_MASK        (0x7 << 4)
497
+#define DCTL_TSTCTL_SHIFT        4
498
+#define DCTL_GOUTNAKSTS            BIT(3)
499
+#define DCTL_GNPINNAKSTS        BIT(2)
500
+#define DCTL_SFTDISCON            BIT(1)
501
+#define DCTL_RMTWKUPSIG            BIT(0)
502
+
503
+#define DSTS                HSOTG_REG(0x808)
504
+#define DSTS_SOFFN_MASK            (0x3fff << 8)
505
+#define DSTS_SOFFN_SHIFT        8
506
+#define DSTS_SOFFN_LIMIT        0x3fff
507
+#define DSTS_SOFFN(_x)            ((_x) << 8)
508
+#define DSTS_ERRATICERR            BIT(3)
509
+#define DSTS_ENUMSPD_MASK        (0x3 << 1)
510
+#define DSTS_ENUMSPD_SHIFT        1
511
+#define DSTS_ENUMSPD_HS            0
512
+#define DSTS_ENUMSPD_FS            1
513
+#define DSTS_ENUMSPD_LS            2
514
+#define DSTS_ENUMSPD_FS48        3
515
+#define DSTS_SUSPSTS            BIT(0)
516
+
517
+#define DIEPMSK                HSOTG_REG(0x810)
518
+#define DIEPMSK_NAKMSK            BIT(13)
519
+#define DIEPMSK_BNAININTRMSK        BIT(9)
520
+#define DIEPMSK_TXFIFOUNDRNMSK        BIT(8)
521
+#define DIEPMSK_TXFIFOEMPTY        BIT(7)
522
+#define DIEPMSK_INEPNAKEFFMSK        BIT(6)
523
+#define DIEPMSK_INTKNEPMISMSK        BIT(5)
524
+#define DIEPMSK_INTKNTXFEMPMSK        BIT(4)
525
+#define DIEPMSK_TIMEOUTMSK        BIT(3)
526
+#define DIEPMSK_AHBERRMSK        BIT(2)
527
+#define DIEPMSK_EPDISBLDMSK        BIT(1)
528
+#define DIEPMSK_XFERCOMPLMSK        BIT(0)
529
+
530
+#define DOEPMSK                HSOTG_REG(0x814)
531
+#define DOEPMSK_BNAMSK            BIT(9)
532
+#define DOEPMSK_BACK2BACKSETUP        BIT(6)
533
+#define DOEPMSK_STSPHSERCVDMSK        BIT(5)
534
+#define DOEPMSK_OUTTKNEPDISMSK        BIT(4)
535
+#define DOEPMSK_SETUPMSK        BIT(3)
536
+#define DOEPMSK_AHBERRMSK        BIT(2)
537
+#define DOEPMSK_EPDISBLDMSK        BIT(1)
538
+#define DOEPMSK_XFERCOMPLMSK        BIT(0)
539
+
540
+#define DAINT                HSOTG_REG(0x818)
541
+#define DAINTMSK            HSOTG_REG(0x81C)
542
+#define DAINT_OUTEP_SHIFT        16
543
+#define DAINT_OUTEP(_x)            (1 << ((_x) + 16))
544
+#define DAINT_INEP(_x)            (1 << (_x))
545
+
546
+#define DTKNQR1                HSOTG_REG(0x820)
547
+#define DTKNQR2                HSOTG_REG(0x824)
548
+#define DTKNQR3                HSOTG_REG(0x830)
549
+#define DTKNQR4                HSOTG_REG(0x834)
550
+#define DIEPEMPMSK            HSOTG_REG(0x834)
551
+
552
+#define DVBUSDIS            HSOTG_REG(0x828)
553
+#define DVBUSPULSE            HSOTG_REG(0x82C)
554
+
555
+#define DIEPCTL0            HSOTG_REG(0x900)
556
+#define DIEPCTL(_a)            HSOTG_REG(0x900 + ((_a) * 0x20))
557
+
558
+#define DOEPCTL0            HSOTG_REG(0xB00)
559
+#define DOEPCTL(_a)            HSOTG_REG(0xB00 + ((_a) * 0x20))
560
+
561
+/* EP0 specialness:
562
+ * bits[29..28] - reserved (no SetD0PID, SetD1PID)
563
+ * bits[25..22] - should always be zero, this isn't a periodic endpoint
564
+ * bits[10..0] - MPS setting different for EP0
565
+ */
566
+#define D0EPCTL_MPS_MASK        (0x3 << 0)
567
+#define D0EPCTL_MPS_SHIFT        0
568
+#define D0EPCTL_MPS_64            0
569
+#define D0EPCTL_MPS_32            1
570
+#define D0EPCTL_MPS_16            2
571
+#define D0EPCTL_MPS_8            3
572
+
573
+#define DXEPCTL_EPENA            BIT(31)
574
+#define DXEPCTL_EPDIS            BIT(30)
575
+#define DXEPCTL_SETD1PID        BIT(29)
576
+#define DXEPCTL_SETODDFR        BIT(29)
577
+#define DXEPCTL_SETD0PID        BIT(28)
578
+#define DXEPCTL_SETEVENFR        BIT(28)
579
+#define DXEPCTL_SNAK            BIT(27)
580
+#define DXEPCTL_CNAK            BIT(26)
581
+#define DXEPCTL_TXFNUM_MASK        (0xf << 22)
582
+#define DXEPCTL_TXFNUM_SHIFT        22
583
+#define DXEPCTL_TXFNUM_LIMIT        0xf
584
+#define DXEPCTL_TXFNUM(_x)        ((_x) << 22)
585
+#define DXEPCTL_STALL            BIT(21)
586
+#define DXEPCTL_SNP            BIT(20)
587
+#define DXEPCTL_EPTYPE_MASK        (0x3 << 18)
588
+#define DXEPCTL_EPTYPE_CONTROL        (0x0 << 18)
589
+#define DXEPCTL_EPTYPE_ISO        (0x1 << 18)
590
+#define DXEPCTL_EPTYPE_BULK        (0x2 << 18)
591
+#define DXEPCTL_EPTYPE_INTERRUPT    (0x3 << 18)
592
+
593
+#define DXEPCTL_NAKSTS            BIT(17)
594
+#define DXEPCTL_DPID            BIT(16)
595
+#define DXEPCTL_EOFRNUM            BIT(16)
596
+#define DXEPCTL_USBACTEP        BIT(15)
597
+#define DXEPCTL_NEXTEP_MASK        (0xf << 11)
598
+#define DXEPCTL_NEXTEP_SHIFT        11
599
+#define DXEPCTL_NEXTEP_LIMIT        0xf
600
+#define DXEPCTL_NEXTEP(_x)        ((_x) << 11)
601
+#define DXEPCTL_MPS_MASK        (0x7ff << 0)
602
+#define DXEPCTL_MPS_SHIFT        0
603
+#define DXEPCTL_MPS_LIMIT        0x7ff
604
+#define DXEPCTL_MPS(_x)            ((_x) << 0)
605
+
606
+#define DIEPINT(_a)            HSOTG_REG(0x908 + ((_a) * 0x20))
607
+#define DOEPINT(_a)            HSOTG_REG(0xB08 + ((_a) * 0x20))
608
+#define DXEPINT_SETUP_RCVD        BIT(15)
609
+#define DXEPINT_NYETINTRPT        BIT(14)
610
+#define DXEPINT_NAKINTRPT        BIT(13)
611
+#define DXEPINT_BBLEERRINTRPT        BIT(12)
612
+#define DXEPINT_PKTDRPSTS        BIT(11)
613
+#define DXEPINT_BNAINTR            BIT(9)
614
+#define DXEPINT_TXFIFOUNDRN        BIT(8)
615
+#define DXEPINT_OUTPKTERR        BIT(8)
616
+#define DXEPINT_TXFEMP            BIT(7)
617
+#define DXEPINT_INEPNAKEFF        BIT(6)
618
+#define DXEPINT_BACK2BACKSETUP        BIT(6)
619
+#define DXEPINT_INTKNEPMIS        BIT(5)
620
+#define DXEPINT_STSPHSERCVD        BIT(5)
621
+#define DXEPINT_INTKNTXFEMP        BIT(4)
622
+#define DXEPINT_OUTTKNEPDIS        BIT(4)
623
+#define DXEPINT_TIMEOUT            BIT(3)
624
+#define DXEPINT_SETUP            BIT(3)
625
+#define DXEPINT_AHBERR            BIT(2)
626
+#define DXEPINT_EPDISBLD        BIT(1)
627
+#define DXEPINT_XFERCOMPL        BIT(0)
628
+
629
+#define DIEPTSIZ0            HSOTG_REG(0x910)
630
+#define DIEPTSIZ0_PKTCNT_MASK        (0x3 << 19)
631
+#define DIEPTSIZ0_PKTCNT_SHIFT        19
632
+#define DIEPTSIZ0_PKTCNT_LIMIT        0x3
633
+#define DIEPTSIZ0_PKTCNT(_x)        ((_x) << 19)
634
+#define DIEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
635
+#define DIEPTSIZ0_XFERSIZE_SHIFT    0
636
+#define DIEPTSIZ0_XFERSIZE_LIMIT    0x7f
637
+#define DIEPTSIZ0_XFERSIZE(_x)        ((_x) << 0)
638
+
639
+#define DOEPTSIZ0            HSOTG_REG(0xB10)
640
+#define DOEPTSIZ0_SUPCNT_MASK        (0x3 << 29)
641
+#define DOEPTSIZ0_SUPCNT_SHIFT        29
642
+#define DOEPTSIZ0_SUPCNT_LIMIT        0x3
643
+#define DOEPTSIZ0_SUPCNT(_x)        ((_x) << 29)
644
+#define DOEPTSIZ0_PKTCNT        BIT(19)
645
+#define DOEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
646
+#define DOEPTSIZ0_XFERSIZE_SHIFT    0
647
+
648
+#define DIEPTSIZ(_a)            HSOTG_REG(0x910 + ((_a) * 0x20))
649
+#define DOEPTSIZ(_a)            HSOTG_REG(0xB10 + ((_a) * 0x20))
650
+#define DXEPTSIZ_MC_MASK        (0x3 << 29)
651
+#define DXEPTSIZ_MC_SHIFT        29
652
+#define DXEPTSIZ_MC_LIMIT        0x3
653
+#define DXEPTSIZ_MC(_x)            ((_x) << 29)
654
+#define DXEPTSIZ_PKTCNT_MASK        (0x3ff << 19)
655
+#define DXEPTSIZ_PKTCNT_SHIFT        19
656
+#define DXEPTSIZ_PKTCNT_LIMIT        0x3ff
657
+#define DXEPTSIZ_PKTCNT_GET(_v)        (((_v) >> 19) & 0x3ff)
658
+#define DXEPTSIZ_PKTCNT(_x)        ((_x) << 19)
659
+#define DXEPTSIZ_XFERSIZE_MASK        (0x7ffff << 0)
660
+#define DXEPTSIZ_XFERSIZE_SHIFT        0
661
+#define DXEPTSIZ_XFERSIZE_LIMIT        0x7ffff
662
+#define DXEPTSIZ_XFERSIZE_GET(_v)    (((_v) >> 0) & 0x7ffff)
663
+#define DXEPTSIZ_XFERSIZE(_x)        ((_x) << 0)
664
+
665
+#define DIEPDMA(_a)            HSOTG_REG(0x914 + ((_a) * 0x20))
666
+#define DOEPDMA(_a)            HSOTG_REG(0xB14 + ((_a) * 0x20))
667
+
668
+#define DTXFSTS(_a)            HSOTG_REG(0x918 + ((_a) * 0x20))
669
+
670
+#define PCGCTL                HSOTG_REG(0x0e00)
671
+#define PCGCTL_IF_DEV_MODE        BIT(31)
672
+#define PCGCTL_P2HD_PRT_SPD_MASK    (0x3 << 29)
673
+#define PCGCTL_P2HD_PRT_SPD_SHIFT    29
674
+#define PCGCTL_P2HD_DEV_ENUM_SPD_MASK    (0x3 << 27)
675
+#define PCGCTL_P2HD_DEV_ENUM_SPD_SHIFT    27
676
+#define PCGCTL_MAC_DEV_ADDR_MASK    (0x7f << 20)
677
+#define PCGCTL_MAC_DEV_ADDR_SHIFT    20
678
+#define PCGCTL_MAX_TERMSEL        BIT(19)
679
+#define PCGCTL_MAX_XCVRSELECT_MASK    (0x3 << 17)
680
+#define PCGCTL_MAX_XCVRSELECT_SHIFT    17
681
+#define PCGCTL_PORT_POWER        BIT(16)
682
+#define PCGCTL_PRT_CLK_SEL_MASK        (0x3 << 14)
683
+#define PCGCTL_PRT_CLK_SEL_SHIFT    14
684
+#define PCGCTL_ESS_REG_RESTORED        BIT(13)
685
+#define PCGCTL_EXTND_HIBER_SWITCH    BIT(12)
686
+#define PCGCTL_EXTND_HIBER_PWRCLMP    BIT(11)
687
+#define PCGCTL_ENBL_EXTND_HIBER        BIT(10)
688
+#define PCGCTL_RESTOREMODE        BIT(9)
689
+#define PCGCTL_RESETAFTSUSP        BIT(8)
690
+#define PCGCTL_DEEP_SLEEP        BIT(7)
691
+#define PCGCTL_PHY_IN_SLEEP        BIT(6)
692
+#define PCGCTL_ENBL_SLEEP_GATING    BIT(5)
693
+#define PCGCTL_RSTPDWNMODULE        BIT(3)
694
+#define PCGCTL_PWRCLMP            BIT(2)
695
+#define PCGCTL_GATEHCLK            BIT(1)
696
+#define PCGCTL_STOPPCLK            BIT(0)
697
+
698
+#define PCGCCTL1 HSOTG_REG(0xe04)
699
+#define PCGCCTL1_TIMER (0x3 << 1)
700
+#define PCGCCTL1_GATEEN BIT(0)
701
+
702
+#define EPFIFO(_a)            HSOTG_REG(0x1000 + ((_a) * 0x1000))
703
+
704
+/* Host Mode Registers */
705
+
706
+#define HCFG                HSOTG_REG(0x0400)
707
+#define HCFG_MODECHTIMEN        BIT(31)
708
+#define HCFG_PERSCHEDENA        BIT(26)
709
+#define HCFG_FRLISTEN_MASK        (0x3 << 24)
710
+#define HCFG_FRLISTEN_SHIFT        24
711
+#define HCFG_FRLISTEN_8                (0 << 24)
712
+#define FRLISTEN_8_SIZE                8
713
+#define HCFG_FRLISTEN_16            BIT(24)
714
+#define FRLISTEN_16_SIZE            16
715
+#define HCFG_FRLISTEN_32            (2 << 24)
716
+#define FRLISTEN_32_SIZE            32
717
+#define HCFG_FRLISTEN_64            (3 << 24)
718
+#define FRLISTEN_64_SIZE            64
719
+#define HCFG_DESCDMA            BIT(23)
720
+#define HCFG_RESVALID_MASK        (0xff << 8)
721
+#define HCFG_RESVALID_SHIFT        8
722
+#define HCFG_ENA32KHZ            BIT(7)
723
+#define HCFG_FSLSSUPP            BIT(2)
724
+#define HCFG_FSLSPCLKSEL_MASK        (0x3 << 0)
725
+#define HCFG_FSLSPCLKSEL_SHIFT        0
726
+#define HCFG_FSLSPCLKSEL_30_60_MHZ    0
727
+#define HCFG_FSLSPCLKSEL_48_MHZ        1
728
+#define HCFG_FSLSPCLKSEL_6_MHZ        2
729
+
730
+#define HFIR                HSOTG_REG(0x0404)
731
+#define HFIR_FRINT_MASK            (0xffff << 0)
732
+#define HFIR_FRINT_SHIFT        0
733
+#define HFIR_RLDCTRL            BIT(16)
734
+
735
+#define HFNUM                HSOTG_REG(0x0408)
736
+#define HFNUM_FRREM_MASK        (0xffff << 16)
737
+#define HFNUM_FRREM_SHIFT        16
738
+#define HFNUM_FRNUM_MASK        (0xffff << 0)
739
+#define HFNUM_FRNUM_SHIFT        0
740
+#define HFNUM_MAX_FRNUM            0x3fff
741
+
742
+#define HPTXSTS                HSOTG_REG(0x0410)
743
+#define TXSTS_QTOP_ODD            BIT(31)
744
+#define TXSTS_QTOP_CHNEP_MASK        (0xf << 27)
745
+#define TXSTS_QTOP_CHNEP_SHIFT        27
746
+#define TXSTS_QTOP_TOKEN_MASK        (0x3 << 25)
747
+#define TXSTS_QTOP_TOKEN_SHIFT        25
748
+#define TXSTS_QTOP_TERMINATE        BIT(24)
749
+#define TXSTS_QSPCAVAIL_MASK        (0xff << 16)
750
+#define TXSTS_QSPCAVAIL_SHIFT        16
751
+#define TXSTS_FSPCAVAIL_MASK        (0xffff << 0)
752
+#define TXSTS_FSPCAVAIL_SHIFT        0
753
+
754
+#define HAINT                HSOTG_REG(0x0414)
755
+#define HAINTMSK            HSOTG_REG(0x0418)
756
+#define HFLBADDR            HSOTG_REG(0x041c)
757
+
758
+#define HPRT0                HSOTG_REG(0x0440)
759
+#define HPRT0_SPD_MASK            (0x3 << 17)
760
+#define HPRT0_SPD_SHIFT            17
761
+#define HPRT0_SPD_HIGH_SPEED        0
762
+#define HPRT0_SPD_FULL_SPEED        1
763
+#define HPRT0_SPD_LOW_SPEED        2
764
+#define HPRT0_TSTCTL_MASK        (0xf << 13)
765
+#define HPRT0_TSTCTL_SHIFT        13
766
+#define HPRT0_PWR            BIT(12)
767
+#define HPRT0_LNSTS_MASK        (0x3 << 10)
768
+#define HPRT0_LNSTS_SHIFT        10
769
+#define HPRT0_RST            BIT(8)
770
+#define HPRT0_SUSP            BIT(7)
771
+#define HPRT0_RES            BIT(6)
772
+#define HPRT0_OVRCURRCHG        BIT(5)
773
+#define HPRT0_OVRCURRACT        BIT(4)
774
+#define HPRT0_ENACHG            BIT(3)
775
+#define HPRT0_ENA            BIT(2)
776
+#define HPRT0_CONNDET            BIT(1)
777
+#define HPRT0_CONNSTS            BIT(0)
778
+
779
+#define HCCHAR(_ch)            HSOTG_REG(0x0500 + 0x20 * (_ch))
780
+#define HCCHAR_CHENA            BIT(31)
781
+#define HCCHAR_CHDIS            BIT(30)
782
+#define HCCHAR_ODDFRM            BIT(29)
783
+#define HCCHAR_DEVADDR_MASK        (0x7f << 22)
784
+#define HCCHAR_DEVADDR_SHIFT        22
785
+#define HCCHAR_MULTICNT_MASK        (0x3 << 20)
786
+#define HCCHAR_MULTICNT_SHIFT        20
787
+#define HCCHAR_EPTYPE_MASK        (0x3 << 18)
788
+#define HCCHAR_EPTYPE_SHIFT        18
789
+#define HCCHAR_LSPDDEV            BIT(17)
790
+#define HCCHAR_EPDIR            BIT(15)
791
+#define HCCHAR_EPNUM_MASK        (0xf << 11)
792
+#define HCCHAR_EPNUM_SHIFT        11
793
+#define HCCHAR_MPS_MASK            (0x7ff << 0)
794
+#define HCCHAR_MPS_SHIFT        0
795
+
796
+#define HCSPLT(_ch)            HSOTG_REG(0x0504 + 0x20 * (_ch))
797
+#define HCSPLT_SPLTENA            BIT(31)
798
+#define HCSPLT_COMPSPLT            BIT(16)
799
+#define HCSPLT_XACTPOS_MASK        (0x3 << 14)
800
+#define HCSPLT_XACTPOS_SHIFT        14
801
+#define HCSPLT_XACTPOS_MID        0
802
+#define HCSPLT_XACTPOS_END        1
803
+#define HCSPLT_XACTPOS_BEGIN        2
804
+#define HCSPLT_XACTPOS_ALL        3
805
+#define HCSPLT_HUBADDR_MASK        (0x7f << 7)
806
+#define HCSPLT_HUBADDR_SHIFT        7
807
+#define HCSPLT_PRTADDR_MASK        (0x7f << 0)
808
+#define HCSPLT_PRTADDR_SHIFT        0
809
+
810
+#define HCINT(_ch)            HSOTG_REG(0x0508 + 0x20 * (_ch))
811
+#define HCINTMSK(_ch)            HSOTG_REG(0x050c + 0x20 * (_ch))
812
+#define HCINTMSK_RESERVED14_31        (0x3ffff << 14)
813
+#define HCINTMSK_FRM_LIST_ROLL        BIT(13)
814
+#define HCINTMSK_XCS_XACT        BIT(12)
815
+#define HCINTMSK_BNA            BIT(11)
816
+#define HCINTMSK_DATATGLERR        BIT(10)
817
+#define HCINTMSK_FRMOVRUN        BIT(9)
818
+#define HCINTMSK_BBLERR            BIT(8)
819
+#define HCINTMSK_XACTERR        BIT(7)
820
+#define HCINTMSK_NYET            BIT(6)
821
+#define HCINTMSK_ACK            BIT(5)
822
+#define HCINTMSK_NAK            BIT(4)
823
+#define HCINTMSK_STALL            BIT(3)
824
+#define HCINTMSK_AHBERR            BIT(2)
825
+#define HCINTMSK_CHHLTD            BIT(1)
826
+#define HCINTMSK_XFERCOMPL        BIT(0)
827
+
828
+#define HCTSIZ(_ch)            HSOTG_REG(0x0510 + 0x20 * (_ch))
829
+#define TSIZ_DOPNG            BIT(31)
830
+#define TSIZ_SC_MC_PID_MASK        (0x3 << 29)
831
+#define TSIZ_SC_MC_PID_SHIFT        29
832
+#define TSIZ_SC_MC_PID_DATA0        0
833
+#define TSIZ_SC_MC_PID_DATA2        1
834
+#define TSIZ_SC_MC_PID_DATA1        2
835
+#define TSIZ_SC_MC_PID_MDATA        3
836
+#define TSIZ_SC_MC_PID_SETUP        3
837
+#define TSIZ_PKTCNT_MASK        (0x3ff << 19)
838
+#define TSIZ_PKTCNT_SHIFT        19
839
+#define TSIZ_NTD_MASK            (0xff << 8)
840
+#define TSIZ_NTD_SHIFT            8
841
+#define TSIZ_SCHINFO_MASK        (0xff << 0)
842
+#define TSIZ_SCHINFO_SHIFT        0
843
+#define TSIZ_XFERSIZE_MASK        (0x7ffff << 0)
844
+#define TSIZ_XFERSIZE_SHIFT        0
845
+
846
+#define HCDMA(_ch)            HSOTG_REG(0x0514 + 0x20 * (_ch))
847
+
848
+#define HCDMAB(_ch)            HSOTG_REG(0x051c + 0x20 * (_ch))
849
+
850
+#define HCFIFO(_ch)            HSOTG_REG(0x1000 + 0x1000 * (_ch))
851
+
852
+/**
853
+ * struct dwc2_dma_desc - DMA descriptor structure,
854
+ * used for both host and gadget modes
855
+ *
856
+ * @status: DMA descriptor status quadlet
857
+ * @buf: DMA descriptor data buffer pointer
858
+ *
859
+ * DMA Descriptor structure contains two quadlets:
860
+ * Status quadlet and Data buffer pointer.
861
+ */
862
+struct dwc2_dma_desc {
863
+    uint32_t status;
864
+    uint32_t buf;
865
+} __packed;
866
+
867
+/* Host Mode DMA descriptor status quadlet */
868
+
869
+#define HOST_DMA_A            BIT(31)
870
+#define HOST_DMA_STS_MASK        (0x3 << 28)
871
+#define HOST_DMA_STS_SHIFT        28
872
+#define HOST_DMA_STS_PKTERR        BIT(28)
873
+#define HOST_DMA_EOL            BIT(26)
874
+#define HOST_DMA_IOC            BIT(25)
875
+#define HOST_DMA_SUP            BIT(24)
876
+#define HOST_DMA_ALT_QTD        BIT(23)
877
+#define HOST_DMA_QTD_OFFSET_MASK    (0x3f << 17)
878
+#define HOST_DMA_QTD_OFFSET_SHIFT    17
879
+#define HOST_DMA_ISOC_NBYTES_MASK    (0xfff << 0)
880
+#define HOST_DMA_ISOC_NBYTES_SHIFT    0
881
+#define HOST_DMA_NBYTES_MASK        (0x1ffff << 0)
882
+#define HOST_DMA_NBYTES_SHIFT        0
883
+#define HOST_DMA_NBYTES_LIMIT        131071
884
+
885
+/* Device Mode DMA descriptor status quadlet */
886
+
887
+#define DEV_DMA_BUFF_STS_MASK        (0x3 << 30)
888
+#define DEV_DMA_BUFF_STS_SHIFT        30
889
+#define DEV_DMA_BUFF_STS_HREADY        0
890
+#define DEV_DMA_BUFF_STS_DMABUSY    1
891
+#define DEV_DMA_BUFF_STS_DMADONE    2
892
+#define DEV_DMA_BUFF_STS_HBUSY        3
893
+#define DEV_DMA_STS_MASK        (0x3 << 28)
894
+#define DEV_DMA_STS_SHIFT        28
895
+#define DEV_DMA_STS_SUCC        0
896
+#define DEV_DMA_STS_BUFF_FLUSH        1
897
+#define DEV_DMA_STS_BUFF_ERR        3
898
+#define DEV_DMA_L            BIT(27)
899
+#define DEV_DMA_SHORT            BIT(26)
900
+#define DEV_DMA_IOC            BIT(25)
901
+#define DEV_DMA_SR            BIT(24)
902
+#define DEV_DMA_MTRF            BIT(23)
903
+#define DEV_DMA_ISOC_PID_MASK        (0x3 << 23)
904
+#define DEV_DMA_ISOC_PID_SHIFT        23
905
+#define DEV_DMA_ISOC_PID_DATA0        0
906
+#define DEV_DMA_ISOC_PID_DATA2        1
907
+#define DEV_DMA_ISOC_PID_DATA1        2
908
+#define DEV_DMA_ISOC_PID_MDATA        3
909
+#define DEV_DMA_ISOC_FRNUM_MASK        (0x7ff << 12)
910
+#define DEV_DMA_ISOC_FRNUM_SHIFT    12
911
+#define DEV_DMA_ISOC_TX_NBYTES_MASK    (0xfff << 0)
912
+#define DEV_DMA_ISOC_TX_NBYTES_LIMIT    0xfff
913
+#define DEV_DMA_ISOC_RX_NBYTES_MASK    (0x7ff << 0)
914
+#define DEV_DMA_ISOC_RX_NBYTES_LIMIT    0x7ff
915
+#define DEV_DMA_ISOC_NBYTES_SHIFT    0
916
+#define DEV_DMA_NBYTES_MASK        (0xffff << 0)
917
+#define DEV_DMA_NBYTES_SHIFT        0
918
+#define DEV_DMA_NBYTES_LIMIT        0xffff
919
+
920
+#define MAX_DMA_DESC_NUM_GENERIC    64
921
+#define MAX_DMA_DESC_NUM_HS_ISOC    256
922
+
923
+#endif /* __DWC2_HW_H__ */
--
2.20.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
With this conversion, we will be able to use the same helpers
3
We will need PMC register definitions in accel specific code later.
4
with SVE. This also fixes a bug in which we failed to clear
4
Move all constant definitions to common arm headers so we can reuse
5
the high bits of the SVE register after an AdvSIMD operation.
5
them.
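For context, the point of exposing these constants outside helper.c is that accelerator code can decode PMCR the same way the core code already does. A minimal sketch, assuming only the definitions added in this patch (pmcr_num_counters is a made-up name used here for illustration; the real helper in the diff is pmu_num_counters and takes a CPUARMState):

    /* PMCR.N, bits [15:11], is the number of implemented event counters. */
    static inline uint32_t pmcr_num_counters(uint32_t pmcr)
    {
        return (pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
    }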
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Message-id: 20200514212831.31248-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20210916155404.86958-2-agraf@csgraf.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
target/arm/helper.h | 2 ++
12
target/arm/internals.h | 44 ++++++++++++++++++++++++++++++++++++++++++
13
target/arm/translate-a64.h | 3 ++
13
target/arm/helper.c | 44 ------------------------------------------
14
target/arm/crypto_helper.c | 11 +++++++
14
2 files changed, 44 insertions(+), 44 deletions(-)
15
target/arm/translate-a64.c | 59 ++++++++++++++++++++------------------
16
4 files changed, 47 insertions(+), 28 deletions(-)
17
15
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
18
--- a/target/arm/internals.h
21
+++ b/target/arm/helper.h
19
+++ b/target/arm/internals.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
20
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
23
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
/* All other values reserved */
24
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
};
25
23
26
+DEF_HELPER_FLAGS_4(crypto_rax1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+/* Definitions for the PMU registers */
25
+#define PMCRN_MASK 0xf800
26
+#define PMCRN_SHIFT 11
27
+#define PMCRLC 0x40
28
+#define PMCRDP 0x20
29
+#define PMCRX 0x10
30
+#define PMCRD 0x8
31
+#define PMCRC 0x4
32
+#define PMCRP 0x2
33
+#define PMCRE 0x1
34
+/*
35
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
36
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
37
+ */
38
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
27
+
39
+
28
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
40
+#define PMXEVTYPER_P 0x80000000
29
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
41
+#define PMXEVTYPER_U 0x40000000
30
42
+#define PMXEVTYPER_NSK 0x20000000
31
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
43
+#define PMXEVTYPER_NSU 0x10000000
32
index XXXXXXX..XXXXXXX 100644
44
+#define PMXEVTYPER_NSH 0x08000000
33
--- a/target/arm/translate-a64.h
45
+#define PMXEVTYPER_M 0x04000000
34
+++ b/target/arm/translate-a64.h
46
+#define PMXEVTYPER_MT 0x02000000
35
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
47
+#define PMXEVTYPER_EVTCOUNT 0x0000ffff
36
48
+#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
37
bool disas_sve(DisasContext *, uint32_t);
49
+ PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
38
50
+ PMXEVTYPER_M | PMXEVTYPER_MT | \
39
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
51
+ PMXEVTYPER_EVTCOUNT)
40
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
41
+
52
+
42
#endif /* TARGET_ARM_TRANSLATE_A64_H */
53
+#define PMCCFILTR 0xf8000000
43
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
54
+#define PMCCFILTR_M PMXEVTYPER_M
44
index XXXXXXX..XXXXXXX 100644
55
+#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
45
--- a/target/arm/crypto_helper.c
46
+++ b/target/arm/crypto_helper.c
47
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
48
}
49
clear_tail(vd, opr_sz, simd_maxsz(desc));
50
}
51
+
56
+
52
+void HELPER(crypto_rax1)(void *vd, void *vn, void *vm, uint32_t desc)
57
+static inline uint32_t pmu_num_counters(CPUARMState *env)
53
+{
58
+{
54
+ intptr_t i, opr_sz = simd_oprsz(desc);
59
+ return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
55
+ uint64_t *d = vd, *n = vn, *m = vm;
56
+
57
+ for (i = 0; i < opr_sz / 8; ++i) {
58
+ d[i] = n[i] ^ rol64(m[i], 1);
59
+ }
60
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
61
+}
62
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/translate-a64.c
65
+++ b/target/arm/translate-a64.c
66
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
67
tcg_temp_free_ptr(tcg_rn_ptr);
68
}
69
70
+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
71
+{
72
+ tcg_gen_rotli_i64(d, m, 1);
73
+ tcg_gen_xor_i64(d, d, n);
74
+}
60
+}
75
+
61
+
76
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
62
+/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
63
+static inline uint64_t pmu_counter_mask(CPUARMState *env)
77
+{
64
+{
78
+ tcg_gen_rotli_vec(vece, d, m, 1);
65
+ return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
79
+ tcg_gen_xor_vec(vece, d, d, n);
80
+}
66
+}
81
+
67
+
82
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
68
#endif
83
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
84
+{
70
index XXXXXXX..XXXXXXX 100644
85
+ static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
71
--- a/target/arm/helper.c
86
+ static const GVecGen3 op = {
72
+++ b/target/arm/helper.c
87
+ .fni8 = gen_rax1_i64,
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
88
+ .fniv = gen_rax1_vec,
74
REGINFO_SENTINEL
89
+ .opt_opc = vecop_list,
75
};
90
+ .fno = gen_helper_crypto_rax1,
76
91
+ .vece = MO_64,
77
-/* Definitions for the PMU registers */
92
+ };
78
-#define PMCRN_MASK 0xf800
93
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
79
-#define PMCRN_SHIFT 11
94
+}
80
-#define PMCRLC 0x40
95
+
81
-#define PMCRDP 0x20
96
/* Crypto three-reg SHA512
82
-#define PMCRX 0x10
97
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
83
-#define PMCRD 0x8
98
* +-----------------------+------+---+---+-----+--------+------+------+
84
-#define PMCRC 0x4
99
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
85
-#define PMCRP 0x2
100
bool feature;
86
-#define PMCRE 0x1
101
CryptoThreeOpFn *genfn = NULL;
87
-/*
102
gen_helper_gvec_3 *oolfn = NULL;
88
- * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
103
+ GVecGen3Fn *gvecfn = NULL;
89
- * which can be written as 1 to trigger behaviour but which stay RAZ).
104
90
- */
105
if (o == 0) {
91
-#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
106
switch (opcode) {
107
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
108
break;
109
case 3: /* RAX1 */
110
feature = dc_isar_feature(aa64_sha3, s);
111
- genfn = NULL;
112
+ gvecfn = gen_gvec_rax1;
113
break;
114
default:
115
g_assert_not_reached();
116
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
117
118
if (oolfn) {
119
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
120
- return;
121
- }
122
-
92
-
123
- if (genfn) {
93
-#define PMXEVTYPER_P 0x80000000
124
+ } else if (gvecfn) {
94
-#define PMXEVTYPER_U 0x40000000
125
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
95
-#define PMXEVTYPER_NSK 0x20000000
126
+ } else {
96
-#define PMXEVTYPER_NSU 0x10000000
127
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
97
-#define PMXEVTYPER_NSH 0x08000000
128
98
-#define PMXEVTYPER_M 0x04000000
129
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
99
-#define PMXEVTYPER_MT 0x02000000
130
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
100
-#define PMXEVTYPER_EVTCOUNT 0x0000ffff
131
tcg_temp_free_ptr(tcg_rd_ptr);
101
-#define PMXEVTYPER_MASK (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
132
tcg_temp_free_ptr(tcg_rn_ptr);
102
- PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
133
tcg_temp_free_ptr(tcg_rm_ptr);
103
- PMXEVTYPER_M | PMXEVTYPER_MT | \
134
- } else {
104
- PMXEVTYPER_EVTCOUNT)
135
- TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
136
- int pass;
137
-
105
-
138
- tcg_op1 = tcg_temp_new_i64();
106
-#define PMCCFILTR 0xf8000000
139
- tcg_op2 = tcg_temp_new_i64();
107
-#define PMCCFILTR_M PMXEVTYPER_M
140
- tcg_res[0] = tcg_temp_new_i64();
108
-#define PMCCFILTR_EL0 (PMCCFILTR | PMCCFILTR_M)
141
- tcg_res[1] = tcg_temp_new_i64();
142
-
109
-
143
- for (pass = 0; pass < 2; pass++) {
110
-static inline uint32_t pmu_num_counters(CPUARMState *env)
144
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
111
-{
145
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
112
- return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
113
-}
146
-
114
-
147
- tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1);
115
-/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
148
- tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
116
-static inline uint64_t pmu_counter_mask(CPUARMState *env)
149
- }
117
-{
150
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
118
- return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
151
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
119
-}
152
-
120
-
153
- tcg_temp_free_i64(tcg_op1);
121
typedef struct pm_event {
154
- tcg_temp_free_i64(tcg_op2);
122
uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
155
- tcg_temp_free_i64(tcg_res[0]);
123
/* If the event is supported on this CPU (used to generate PMCEID[01]) */
156
- tcg_temp_free_i64(tcg_res[1]);
157
}
158
}
159
160
--
124
--
161
2.20.1
125
2.20.1
162
126
163
127
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
hw_error() calls exit(). This is a bit overkill when we can log
3
Hvf's permission bitmap during and after dirty logging does not include
4
the accesses as unimplemented or guest error.
4
the HV_MEMORY_EXEC permission. At least on Apple Silicon, this leads to
5
instruction faults once dirty logging is enabled.
5
6
6
When fuzzing the devices, we don't want the whole process to
7
Add the bit to make it work properly.
7
exit. Replace some hw_error() calls by qemu_log_mask()
8
(missed in commit 5a0001ec7e).
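For reference, the replacement pattern is the usual QEMU way to report a bad guest access without taking the whole process down; a condensed sketch (mydev_read is a hypothetical MMIO read callback standing in for the handlers touched in the diff below):

    static uint64_t mydev_read(void *opaque, hwaddr offset, unsigned size)
    {
        /* ... known registers handled above ... */
        qemu_log_mask(LOG_GUEST_ERROR,
                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
                      __func__, offset);
        return 0; /* unknown registers read as zero instead of exiting */
    }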
9
8
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Signed-off-by: Alexander Graf <agraf@csgraf.de>
11
Message-id: 20200525114123.21317-2-f4bug@amsat.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210916155404.86958-3-agraf@csgraf.de
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
13
---
15
hw/input/pxa2xx_keypad.c | 10 +++++++---
14
accel/hvf/hvf-accel-ops.c | 4 ++--
16
1 file changed, 7 insertions(+), 3 deletions(-)
15
1 file changed, 2 insertions(+), 2 deletions(-)
17
16
18
diff --git a/hw/input/pxa2xx_keypad.c b/hw/input/pxa2xx_keypad.c
17
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
19
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/input/pxa2xx_keypad.c
19
--- a/accel/hvf/hvf-accel-ops.c
21
+++ b/hw/input/pxa2xx_keypad.c
20
+++ b/accel/hvf/hvf-accel-ops.c
22
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
23
*/
22
if (on) {
24
23
slot->flags |= HVF_SLOT_LOG;
25
#include "qemu/osdep.h"
24
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
26
-#include "hw/hw.h"
25
- HV_MEMORY_READ);
27
+#include "qemu/log.h"
26
+ HV_MEMORY_READ | HV_MEMORY_EXEC);
28
#include "hw/irq.h"
27
/* stop tracking region*/
29
#include "migration/vmstate.h"
28
} else {
30
#include "hw/arm/pxa.h"
29
slot->flags &= ~HVF_SLOT_LOG;
31
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_keypad_read(void *opaque, hwaddr offset,
30
hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
32
return s->kpkdi;
31
- HV_MEMORY_READ | HV_MEMORY_WRITE);
33
break;
32
+ HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
34
default:
35
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
36
+ qemu_log_mask(LOG_GUEST_ERROR,
37
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
38
+ __func__, offset);
39
}
40
41
return 0;
42
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_keypad_write(void *opaque, hwaddr offset,
43
break;
44
45
default:
46
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
47
+ qemu_log_mask(LOG_GUEST_ERROR,
48
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
49
+ __func__, offset);
50
}
33
}
51
}
34
}
52
35
53
--
36
--
54
2.20.1
37
2.20.1
55
38
56
39
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
The ADC region size is 256B, split as:
3
We will need to install a migration helper for the ARM hvf backend.
4
- [0x00 - 0x4f] defined
4
Let's introduce an arch callback for the overall hvf init chain to
5
- [0x50 - 0xff] reserved
5
do so.
6
6
7
All registers are 32-bit (thus when the datasheet mentions the
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
last defined register is 0x4c, it means its address range is
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
0x4c .. 0x4f).
9
Message-id: 20210916155404.86958-4-agraf@csgraf.de
10
11
This model implementation is also 32-bit. Set MemoryRegionOps
12
'impl' fields.
13
14
See:
15
'RM0033 Reference manual Rev 8', Table 10.13.18 "ADC register map".
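To make the effect concrete, here is the resulting MemoryRegionOps as a sketch assembled from the change in the diff below (not extra code in the patch): with the impl access size pinned to 4, the memory core converts narrower guest accesses into 32-bit ones before they reach the handlers.

    static const MemoryRegionOps stm32f2xx_adc_ops = {
        .read = stm32f2xx_adc_read,
        .write = stm32f2xx_adc_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
        .impl.min_access_size = 4, /* handlers implement 32-bit accesses only */
        .impl.max_access_size = 4,
    };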
16
17
Reported-by: Seth Kintigh <skintigh@gmail.com>
18
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
19
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
20
Message-id: 20200603055915.17678-1-f4bug@amsat.org
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
11
---
23
hw/adc/stm32f2xx_adc.c | 4 +++-
12
include/sysemu/hvf_int.h | 1 +
24
1 file changed, 3 insertions(+), 1 deletion(-)
13
accel/hvf/hvf-accel-ops.c | 3 ++-
14
target/i386/hvf/hvf.c | 5 +++++
15
3 files changed, 8 insertions(+), 1 deletion(-)
25
16
26
diff --git a/hw/adc/stm32f2xx_adc.c b/hw/adc/stm32f2xx_adc.c
17
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
27
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/adc/stm32f2xx_adc.c
19
--- a/include/sysemu/hvf_int.h
29
+++ b/hw/adc/stm32f2xx_adc.c
20
+++ b/include/sysemu/hvf_int.h
30
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps stm32f2xx_adc_ops = {
21
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
31
.read = stm32f2xx_adc_read,
32
.write = stm32f2xx_adc_write,
33
.endianness = DEVICE_NATIVE_ENDIAN,
34
+ .impl.min_access_size = 4,
35
+ .impl.max_access_size = 4,
36
};
22
};
37
23
38
static const VMStateDescription vmstate_stm32f2xx_adc = {
24
void assert_hvf_ok(hv_return_t ret);
39
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_adc_init(Object *obj)
25
+int hvf_arch_init(void);
40
sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
26
int hvf_arch_init_vcpu(CPUState *cpu);
41
27
void hvf_arch_vcpu_destroy(CPUState *cpu);
42
memory_region_init_io(&s->mmio, obj, &stm32f2xx_adc_ops, s,
28
int hvf_vcpu_exec(CPUState *);
43
- TYPE_STM32F2XX_ADC, 0xFF);
29
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
44
+ TYPE_STM32F2XX_ADC, 0x100);
30
index XXXXXXX..XXXXXXX 100644
45
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
31
--- a/accel/hvf/hvf-accel-ops.c
32
+++ b/accel/hvf/hvf-accel-ops.c
33
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
34
35
hvf_state = s;
36
memory_listener_register(&hvf_memory_listener, &address_space_memory);
37
- return 0;
38
+
39
+ return hvf_arch_init();
46
}
40
}
47
41
42
static void hvf_accel_class_init(ObjectClass *oc, void *data)
43
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/i386/hvf/hvf.c
46
+++ b/target/i386/hvf/hvf.c
47
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
48
return env->apic_bus_freq != 0;
49
}
50
51
+int hvf_arch_init(void)
52
+{
53
+ return 0;
54
+}
55
+
56
int hvf_arch_init_vcpu(CPUState *cpu)
57
{
58
X86CPU *x86cpu = X86_CPU(cpu);
48
--
59
--
49
2.20.1
60
2.20.1
50
61
51
62
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Add the dwc-hsotg (dwc2) USB host controller emulation code.
3
With Apple Silicon available to the masses, it's a good time to add support
4
Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c.
4
for driving its virtualization extensions from QEMU.
5
5
6
Note that to use this with the dwc-otg driver in the Raspbian
6
This patch adds all necessary architecture specific code to get basic VMs
7
kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0" on
7
working, including save/restore.
8
the kernel command line.
9
8
10
Emulation of slave mode and of descriptor-DMA mode has not been
9
Known limitations:
11
implemented yet. These modes are seldom used.
12
10
13
I have used some on-line sources of information while developing
11
- WFI handling is missing (follows in later patch)
14
this emulation, including:
12
- No watchpoint/breakpoint support
15
13
16
http://www.capital-micro.com/PDF/CME-M7_Family_User_Guide_EN.pdf
14
Signed-off-by: Alexander Graf <agraf@csgraf.de>
17
which has a pretty complete description of the controller starting
15
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
18
on page 370.
16
Reviewed-by: Sergio Lopez <slp@redhat.com>
19
20
https://sourceforge.net/p/wive-ng/wive-ng-mt/ci/master/tree/docs/DataSheets/RT3050_5x_V2.0_081408_0902.pdf
21
which has a description of the controller registers starting on
22
page 130.
23
24
Thanks to Felippe Mathieu-Daude for providing a cleaner method
25
of implementing the memory regions for the controller registers.
26
27
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
28
Message-id: 20200520235349.21215-5-pauldzim@gmail.com
29
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Message-id: 20210916155404.86958-5-agraf@csgraf.de
30
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
31
---
20
---
32
hw/usb/hcd-dwc2.c | 1417 ++++++++++++++++++++++++++++++++++++++++++
21
meson.build | 1 +
33
hw/usb/Kconfig | 5 +
22
include/sysemu/hvf_int.h | 10 +-
34
hw/usb/Makefile.objs | 1 +
23
accel/hvf/hvf-accel-ops.c | 9 +
35
hw/usb/trace-events | 50 ++
24
target/arm/hvf/hvf.c | 794 ++++++++++++++++++++++++++++++++++++
36
4 files changed, 1473 insertions(+)
25
target/i386/hvf/hvf.c | 5 +
37
create mode 100644 hw/usb/hcd-dwc2.c
26
MAINTAINERS | 5 +
27
target/arm/hvf/trace-events | 10 +
28
7 files changed, 833 insertions(+), 1 deletion(-)
29
create mode 100644 target/arm/hvf/hvf.c
30
create mode 100644 target/arm/hvf/trace-events
38
31
39
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
32
diff --git a/meson.build b/meson.build
33
index XXXXXXX..XXXXXXX 100644
34
--- a/meson.build
35
+++ b/meson.build
36
@@ -XXX,XX +XXX,XX @@ if have_system or have_user
37
'accel/tcg',
38
'hw/core',
39
'target/arm',
40
+ 'target/arm/hvf',
41
'target/hppa',
42
'target/i386',
43
'target/i386/kvm',
44
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/include/sysemu/hvf_int.h
47
+++ b/include/sysemu/hvf_int.h
48
@@ -XXX,XX +XXX,XX @@
49
#ifndef HVF_INT_H
50
#define HVF_INT_H
51
52
+#ifdef __aarch64__
53
+#include <Hypervisor/Hypervisor.h>
54
+#else
55
#include <Hypervisor/hv.h>
56
+#endif
57
58
/* hvf_slot flags */
59
#define HVF_SLOT_LOG (1 << 0)
60
@@ -XXX,XX +XXX,XX @@ struct HVFState {
61
int num_slots;
62
63
hvf_vcpu_caps *hvf_caps;
64
+ uint64_t vtimer_offset;
65
};
66
extern HVFState *hvf_state;
67
68
struct hvf_vcpu_state {
69
- int fd;
70
+ uint64_t fd;
71
+ void *exit;
72
+ bool vtimer_masked;
73
};
74
75
void assert_hvf_ok(hv_return_t ret);
76
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *);
77
hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
78
int hvf_put_registers(CPUState *);
79
int hvf_get_registers(CPUState *);
80
+void hvf_kick_vcpu_thread(CPUState *cpu);
81
82
#endif
83
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/accel/hvf/hvf-accel-ops.c
86
+++ b/accel/hvf/hvf-accel-ops.c
87
@@ -XXX,XX +XXX,XX @@
88
89
HVFState *hvf_state;
90
91
+#ifdef __aarch64__
92
+#define HV_VM_DEFAULT NULL
93
+#endif
94
+
95
/* Memory slots */
96
97
hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
98
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
99
pthread_sigmask(SIG_BLOCK, NULL, &set);
100
sigdelset(&set, SIG_IPI);
101
102
+#ifdef __aarch64__
103
+ r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
104
+#else
105
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
106
+#endif
107
cpu->vcpu_dirty = 1;
108
assert_hvf_ok(r);
109
110
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
111
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
112
113
ops->create_vcpu_thread = hvf_start_vcpu_thread;
114
+ ops->kick_vcpu_thread = hvf_kick_vcpu_thread;
115
116
ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
117
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
118
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
40
new file mode 100644
119
new file mode 100644
41
index XXXXXXX..XXXXXXX
120
index XXXXXXX..XXXXXXX
42
--- /dev/null
121
--- /dev/null
43
+++ b/hw/usb/hcd-dwc2.c
122
+++ b/target/arm/hvf/hvf.c
44
@@ -XXX,XX +XXX,XX @@
123
@@ -XXX,XX +XXX,XX @@
45
+/*
124
+/*
46
+ * dwc-hsotg (dwc2) USB host controller emulation
125
+ * QEMU Hypervisor.framework support for Apple Silicon
126
+
127
+ * Copyright 2020 Alexander Graf <agraf@csgraf.de>
47
+ *
128
+ *
48
+ * Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
130
+ * See the COPYING file in the top-level directory.
49
+ *
131
+ *
50
+ * Note that to use this emulation with the dwc-otg driver in the
51
+ * Raspbian kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0"
52
+ * on the kernel command line.
53
+ *
54
+ * Some useful documentation used to develop this emulation can be
55
+ * found online (as of April 2020) at:
56
+ *
57
+ * http://www.capital-micro.com/PDF/CME-M7_Family_User_Guide_EN.pdf
58
+ * which has a pretty complete description of the controller starting
59
+ * on page 370.
60
+ *
61
+ * https://sourceforge.net/p/wive-ng/wive-ng-mt/ci/master/tree/docs/DataSheets/RT3050_5x_V2.0_081408_0902.pdf
62
+ * which has a description of the controller registers starting on
63
+ * page 130.
64
+ *
65
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
66
+ *
67
+ * This program is free software; you can redistribute it and/or modify
68
+ * it under the terms of the GNU General Public License as published by
69
+ * the Free Software Foundation; either version 2 of the License, or
70
+ * (at your option) any later version.
71
+ *
72
+ * This program is distributed in the hope that it will be useful,
73
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
74
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
75
+ * GNU General Public License for more details.
76
+ */
132
+ */
77
+
133
+
78
+#include "qemu/osdep.h"
134
+#include "qemu/osdep.h"
79
+#include "qemu/units.h"
135
+#include "qemu-common.h"
80
+#include "qapi/error.h"
136
+#include "qemu/error-report.h"
81
+#include "hw/usb/dwc2-regs.h"
137
+
82
+#include "hw/usb/hcd-dwc2.h"
138
+#include "sysemu/runstate.h"
139
+#include "sysemu/hvf.h"
140
+#include "sysemu/hvf_int.h"
141
+#include "sysemu/hw_accel.h"
142
+
143
+#include <mach/mach_time.h>
144
+
145
+#include "exec/address-spaces.h"
146
+#include "hw/irq.h"
147
+#include "qemu/main-loop.h"
148
+#include "sysemu/cpus.h"
149
+#include "target/arm/cpu.h"
150
+#include "target/arm/internals.h"
151
+#include "trace/trace-target_arm_hvf.h"
83
+#include "migration/vmstate.h"
152
+#include "migration/vmstate.h"
84
+#include "trace.h"
153
+
85
+#include "qemu/log.h"
154
+#define HVF_SYSREG(crn, crm, op0, op1, op2) \
86
+#include "qemu/error-report.h"
155
+ ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
87
+#include "qemu/main-loop.h"
156
+#define PL1_WRITE_MASK 0x4
88
+#include "hw/qdev-properties.h"
157
+
89
+
158
+#define SYSREG(op0, op1, crn, crm, op2) \
90
+#define USB_HZ_FS 12000000
159
+ ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1))
91
+#define USB_HZ_HS 96000000
160
+#define SYSREG_MASK SYSREG(0x3, 0x7, 0xf, 0xf, 0x7)
92
+#define USB_FRMINTVL 12000
161
+#define SYSREG_OSLAR_EL1 SYSREG(2, 0, 1, 0, 4)
93
+
162
+#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
94
+/* nifty macros from Arnon's EHCI version */
163
+#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
95
+#define get_field(data, field) \
164
+#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
96
+ (((data) & field##_MASK) >> field##_SHIFT)
165
+
97
+
166
+#define WFX_IS_WFE (1 << 0)
98
+#define set_field(data, newval, field) do { \
167
+
99
+ uint32_t val = *(data); \
168
+#define TMR_CTL_ENABLE (1 << 0)
100
+ val &= ~field##_MASK; \
169
+#define TMR_CTL_IMASK (1 << 1)
101
+ val |= ((newval) << field##_SHIFT) & field##_MASK; \
170
+#define TMR_CTL_ISTATUS (1 << 2)
102
+ *(data) = val; \
171
+
103
+} while (0)
172
+typedef struct HVFVTimer {
104
+
173
+ /* Vtimer value during migration and paused state */
105
+#define get_bit(data, bitmask) \
174
+ uint64_t vtimer_val;
106
+ (!!((data) & (bitmask)))
175
+} HVFVTimer;
107
+
176
+
108
+/* update irq line */
177
+static HVFVTimer vtimer;
109
+static inline void dwc2_update_irq(DWC2State *s)
178
+
110
+{
179
+struct hvf_reg_match {
111
+ static int oldlevel;
180
+ int reg;
112
+ int level = 0;
181
+ uint64_t offset;
113
+
182
+};
114
+ if ((s->gintsts & s->gintmsk) && (s->gahbcfg & GAHBCFG_GLBL_INTR_EN)) {
183
+
115
+ level = 1;
184
+static const struct hvf_reg_match hvf_reg_match[] = {
116
+ }
185
+ { HV_REG_X0, offsetof(CPUARMState, xregs[0]) },
117
+ if (level != oldlevel) {
186
+ { HV_REG_X1, offsetof(CPUARMState, xregs[1]) },
118
+ oldlevel = level;
187
+ { HV_REG_X2, offsetof(CPUARMState, xregs[2]) },
119
+ trace_usb_dwc2_update_irq(level);
188
+ { HV_REG_X3, offsetof(CPUARMState, xregs[3]) },
120
+ qemu_set_irq(s->irq, level);
189
+ { HV_REG_X4, offsetof(CPUARMState, xregs[4]) },
121
+ }
190
+ { HV_REG_X5, offsetof(CPUARMState, xregs[5]) },
122
+}
191
+ { HV_REG_X6, offsetof(CPUARMState, xregs[6]) },
123
+
192
+ { HV_REG_X7, offsetof(CPUARMState, xregs[7]) },
124
+/* flag interrupt condition */
193
+ { HV_REG_X8, offsetof(CPUARMState, xregs[8]) },
125
+static inline void dwc2_raise_global_irq(DWC2State *s, uint32_t intr)
194
+ { HV_REG_X9, offsetof(CPUARMState, xregs[9]) },
126
+{
195
+ { HV_REG_X10, offsetof(CPUARMState, xregs[10]) },
127
+ if (!(s->gintsts & intr)) {
196
+ { HV_REG_X11, offsetof(CPUARMState, xregs[11]) },
128
+ s->gintsts |= intr;
197
+ { HV_REG_X12, offsetof(CPUARMState, xregs[12]) },
129
+ trace_usb_dwc2_raise_global_irq(intr);
198
+ { HV_REG_X13, offsetof(CPUARMState, xregs[13]) },
130
+ dwc2_update_irq(s);
199
+ { HV_REG_X14, offsetof(CPUARMState, xregs[14]) },
131
+ }
200
+ { HV_REG_X15, offsetof(CPUARMState, xregs[15]) },
132
+}
201
+ { HV_REG_X16, offsetof(CPUARMState, xregs[16]) },
133
+
202
+ { HV_REG_X17, offsetof(CPUARMState, xregs[17]) },
134
+static inline void dwc2_lower_global_irq(DWC2State *s, uint32_t intr)
203
+ { HV_REG_X18, offsetof(CPUARMState, xregs[18]) },
135
+{
204
+ { HV_REG_X19, offsetof(CPUARMState, xregs[19]) },
136
+ if (s->gintsts & intr) {
205
+ { HV_REG_X20, offsetof(CPUARMState, xregs[20]) },
137
+ s->gintsts &= ~intr;
206
+ { HV_REG_X21, offsetof(CPUARMState, xregs[21]) },
138
+ trace_usb_dwc2_lower_global_irq(intr);
207
+ { HV_REG_X22, offsetof(CPUARMState, xregs[22]) },
139
+ dwc2_update_irq(s);
208
+ { HV_REG_X23, offsetof(CPUARMState, xregs[23]) },
140
+ }
209
+ { HV_REG_X24, offsetof(CPUARMState, xregs[24]) },
141
+}
210
+ { HV_REG_X25, offsetof(CPUARMState, xregs[25]) },
142
+
211
+ { HV_REG_X26, offsetof(CPUARMState, xregs[26]) },
143
+static inline void dwc2_raise_host_irq(DWC2State *s, uint32_t host_intr)
212
+ { HV_REG_X27, offsetof(CPUARMState, xregs[27]) },
144
+{
213
+ { HV_REG_X28, offsetof(CPUARMState, xregs[28]) },
145
+ if (!(s->haint & host_intr)) {
214
+ { HV_REG_X29, offsetof(CPUARMState, xregs[29]) },
146
+ s->haint |= host_intr;
215
+ { HV_REG_X30, offsetof(CPUARMState, xregs[30]) },
147
+ s->haint &= 0xffff;
216
+ { HV_REG_PC, offsetof(CPUARMState, pc) },
148
+ trace_usb_dwc2_raise_host_irq(host_intr);
217
+};
149
+ if (s->haint & s->haintmsk) {
218
+
150
+ dwc2_raise_global_irq(s, GINTSTS_HCHINT);
219
+static const struct hvf_reg_match hvf_fpreg_match[] = {
220
+ { HV_SIMD_FP_REG_Q0, offsetof(CPUARMState, vfp.zregs[0]) },
221
+ { HV_SIMD_FP_REG_Q1, offsetof(CPUARMState, vfp.zregs[1]) },
222
+ { HV_SIMD_FP_REG_Q2, offsetof(CPUARMState, vfp.zregs[2]) },
223
+ { HV_SIMD_FP_REG_Q3, offsetof(CPUARMState, vfp.zregs[3]) },
224
+ { HV_SIMD_FP_REG_Q4, offsetof(CPUARMState, vfp.zregs[4]) },
225
+ { HV_SIMD_FP_REG_Q5, offsetof(CPUARMState, vfp.zregs[5]) },
226
+ { HV_SIMD_FP_REG_Q6, offsetof(CPUARMState, vfp.zregs[6]) },
227
+ { HV_SIMD_FP_REG_Q7, offsetof(CPUARMState, vfp.zregs[7]) },
228
+ { HV_SIMD_FP_REG_Q8, offsetof(CPUARMState, vfp.zregs[8]) },
229
+ { HV_SIMD_FP_REG_Q9, offsetof(CPUARMState, vfp.zregs[9]) },
230
+ { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) },
231
+ { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) },
232
+ { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) },
233
+ { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) },
234
+ { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) },
235
+ { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) },
236
+ { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) },
237
+ { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) },
238
+ { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) },
239
+ { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) },
240
+ { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) },
241
+ { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) },
242
+ { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) },
243
+ { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) },
244
+ { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) },
245
+ { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) },
246
+ { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) },
247
+ { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) },
248
+ { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) },
249
+ { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) },
250
+ { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) },
251
+ { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) },
252
+};
253
+
254
+struct hvf_sreg_match {
255
+ int reg;
256
+ uint32_t key;
257
+ uint32_t cp_idx;
258
+};
259
+
260
+static struct hvf_sreg_match hvf_sreg_match[] = {
261
+ { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) },
262
+ { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) },
263
+ { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) },
264
+ { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) },
265
+
266
+ { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) },
267
+ { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) },
268
+ { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) },
269
+ { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) },
270
+
271
+ { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) },
272
+ { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) },
273
+ { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) },
274
+ { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) },
275
+
276
+ { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) },
277
+ { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) },
278
+ { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) },
279
+ { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) },
280
+
281
+ { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) },
282
+ { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 5) },
283
+ { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) },
284
+ { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) },
285
+
286
+ { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) },
287
+ { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) },
288
+ { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) },
289
+ { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) },
290
+
291
+ { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) },
292
+ { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) },
293
+ { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) },
294
+ { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) },
295
+
296
+ { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) },
297
+ { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) },
298
+ { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) },
299
+ { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) },
300
+
301
+ { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) },
302
+ { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) },
303
+ { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) },
304
+ { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) },
305
+
306
+ { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) },
307
+ { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) },
308
+ { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) },
309
+ { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) },
310
+
311
+ { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) },
312
+ { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) },
313
+ { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) },
314
+ { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) },
315
+
316
+ { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) },
317
+ { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) },
318
+ { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) },
319
+ { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) },
320
+
321
+ { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) },
322
+ { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) },
323
+ { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) },
324
+ { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) },
325
+
326
+ { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) },
327
+ { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) },
328
+ { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) },
329
+ { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) },
330
+
331
+ { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) },
332
+ { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) },
333
+ { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) },
334
+ { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) },
335
+
336
+ { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) },
337
+ { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) },
338
+ { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) },
339
+ { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) },
340
+
341
+#ifdef SYNC_NO_RAW_REGS
342
+ /*
343
+ * The registers below are manually synced on init because they are
344
+ * marked as NO_RAW. We still list them to make number space sync easier.
345
+ */
346
+ { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) },
347
+ { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) },
348
+ { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) },
349
+ { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) },
350
+#endif
351
+ { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) },
352
+ { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) },
353
+ { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) },
354
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) },
355
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) },
356
+#ifdef SYNC_NO_MMFR0
357
+ /* We keep the hardware MMFR0 around. HW limits are there anyway */
358
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) },
359
+#endif
360
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) },
361
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) },
362
+
363
+ { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) },
364
+ { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) },
365
+ { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) },
366
+ { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) },
367
+ { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) },
368
+ { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) },
369
+
370
+ { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) },
371
+ { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) },
372
+ { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) },
373
+ { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) },
374
+ { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) },
375
+ { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) },
376
+ { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) },
377
+ { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) },
378
+ { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) },
379
+ { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) },
380
+
381
+ { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 0, 0) },
382
+ { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) },
383
+ { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) },
384
+ { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) },
385
+ { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) },
386
+ { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) },
387
+ { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) },
388
+ { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) },
389
+ { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) },
390
+ { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) },
391
+ { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) },
392
+ { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) },
393
+ { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) },
394
+ { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) },
395
+ { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) },
396
+ { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) },
397
+ { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) },
398
+ { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) },
399
+ { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) },
400
+ { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) },
401
+};
402
+
403
+int hvf_get_registers(CPUState *cpu)
404
+{
405
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
406
+ CPUARMState *env = &arm_cpu->env;
407
+ hv_return_t ret;
408
+ uint64_t val;
409
+ hv_simd_fp_uchar16_t fpval;
410
+ int i;
411
+
412
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
413
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val);
414
+ *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val;
415
+ assert_hvf_ok(ret);
416
+ }
417
+
418
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
419
+ ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
420
+ &fpval);
421
+ memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval));
422
+ assert_hvf_ok(ret);
423
+ }
424
+
425
+ val = 0;
426
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val);
427
+ assert_hvf_ok(ret);
428
+ vfp_set_fpcr(env, val);
429
+
430
+ val = 0;
431
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val);
432
+ assert_hvf_ok(ret);
433
+ vfp_set_fpsr(env, val);
434
+
435
+ ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val);
436
+ assert_hvf_ok(ret);
437
+ pstate_write(env, val);
438
+
439
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
440
+ if (hvf_sreg_match[i].cp_idx == -1) {
441
+ continue;
+ }
+ }
+}
+
+static inline void dwc2_lower_host_irq(DWC2State *s, uint32_t host_intr)
+{
+ if (s->haint & host_intr) {
+ s->haint &= ~host_intr;
+ trace_usb_dwc2_lower_host_irq(host_intr);
+ if (!(s->haint & s->haintmsk)) {
+ dwc2_lower_global_irq(s, GINTSTS_HCHINT);
+ }
+ }
+}
+
+static inline void dwc2_update_hc_irq(DWC2State *s, int index)
+{
+ uint32_t host_intr = 1 << (index >> 3);
+
+ if (s->hreg1[index + 2] & s->hreg1[index + 3]) {
+ dwc2_raise_host_irq(s, host_intr);
+ } else {
+ dwc2_lower_host_irq(s, host_intr);
+ }
+}
+ }
+
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
+ assert_hvf_ok(ret);
+
+ arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
+ }
+ assert(write_list_to_cpustate(arm_cpu));
+
+ aarch64_restore_sp(env, arm_current_el(env));
+
+ return 0;
+}
+
+int hvf_put_registers(CPUState *cpu)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+ hv_return_t ret;
+ uint64_t val;
+ hv_simd_fp_uchar16_t fpval;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) {
+ val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset);
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val);
+ assert_hvf_ok(ret);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) {
+ memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval));
+ ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg,
+ fpval);
+ assert_hvf_ok(ret);
+ }
+
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env));
+ assert_hvf_ok(ret);
+
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env));
+ assert_hvf_ok(ret);
+
+ ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env));
+ assert_hvf_ok(ret);
+
+ aarch64_save_sp(env, arm_current_el(env));
+
+ assert(write_cpustate_to_list(arm_cpu, false));
+ for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) {
+ if (hvf_sreg_match[i].cp_idx == -1) {
+ continue;
+ }
+
+ val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
+ assert_hvf_ok(ret);
+ }
+
+ ret = hv_vcpu_set_vtimer_offset(cpu->hvf->fd, hvf_state->vtimer_offset);
+ assert_hvf_ok(ret);
+
+ return 0;
+}
+
+static void flush_cpu_state(CPUState *cpu)
176
+
507
+{
177
+/* set a timer for EOF */
508
+ if (cpu->vcpu_dirty) {
178
+static void dwc2_eof_timer(DWC2State *s)
509
+ hvf_put_registers(cpu);
179
+{
510
+ cpu->vcpu_dirty = false;
180
+ timer_mod(s->eof_timer, s->sof_time + s->usb_frame_time);
511
+ }
181
+}
512
+}
182
+
513
+
183
+/* Set a timer for EOF and generate SOF event */
514
+static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val)
184
+static void dwc2_sof(DWC2State *s)
515
+{
185
+{
516
+ hv_return_t r;
186
+ s->sof_time += s->usb_frame_time;
517
+
187
+ trace_usb_dwc2_sof(s->sof_time);
518
+ flush_cpu_state(cpu);
188
+ dwc2_eof_timer(s);
519
+
189
+ dwc2_raise_global_irq(s, GINTSTS_SOF);
520
+ if (rt < 31) {
190
+}
521
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val);
191
+
522
+ assert_hvf_ok(r);
192
+/* Do frame processing on frame boundary */
523
+ }
193
+static void dwc2_frame_boundary(void *opaque)
524
+}
194
+{
525
+
195
+ DWC2State *s = opaque;
526
+static uint64_t hvf_get_reg(CPUState *cpu, int rt)
196
+ int64_t now;
527
+{
197
+ uint16_t frcnt;
528
+ uint64_t val = 0;
198
+
529
+ hv_return_t r;
199
+ now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
530
+
200
+
531
+ flush_cpu_state(cpu);
201
+ /* Frame boundary, so do EOF stuff here */
532
+
202
+
533
+ if (rt < 31) {
203
+ /* Increment frame number */
534
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val);
204
+ frcnt = (uint16_t)((now - s->sof_time) / s->fi);
535
+ assert_hvf_ok(r);
205
+ s->frame_number = (s->frame_number + frcnt) & 0xffff;
536
+ }
206
+ s->hfnum = s->frame_number & HFNUM_MAX_FRNUM;
537
+
207
+
538
+ return val;
208
+ /* Do SOF stuff here */
539
+}
209
+ dwc2_sof(s);
540
+
210
+}
541
+void hvf_arch_vcpu_destroy(CPUState *cpu)
211
+
542
+{
212
+/* Start sending SOF tokens on the USB bus */
543
+}
213
+static void dwc2_bus_start(DWC2State *s)
544
+
214
+{
545
+int hvf_arch_init_vcpu(CPUState *cpu)
215
+ trace_usb_dwc2_bus_start();
546
+{
216
+ s->sof_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
547
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
217
+ dwc2_eof_timer(s);
548
+ CPUARMState *env = &arm_cpu->env;
218
+}
549
+ uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match);
219
+
550
+ uint32_t sregs_cnt = 0;
220
+/* Stop sending SOF tokens on the USB bus */
551
+ uint64_t pfr;
221
+static void dwc2_bus_stop(DWC2State *s)
552
+ hv_return_t ret;
222
+{
553
+ int i;
223
+ trace_usb_dwc2_bus_stop();
554
+
224
+ timer_del(s->eof_timer);
555
+ env->aarch64 = 1;
225
+}
556
+ asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz));
226
+
557
+
227
+static USBDevice *dwc2_find_device(DWC2State *s, uint8_t addr)
558
+ /* Allocate enough space for our sysreg sync */
228
+{
559
+ arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes,
229
+ USBDevice *dev;
560
+ sregs_match_len);
230
+
561
+ arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values,
231
+ trace_usb_dwc2_find_device(addr);
562
+ sregs_match_len);
232
+
563
+ arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t,
233
+ if (!(s->hprt0 & HPRT0_ENA)) {
564
+ arm_cpu->cpreg_vmstate_indexes,
234
+ trace_usb_dwc2_port_disabled(0);
565
+ sregs_match_len);
235
+ } else {
566
+ arm_cpu->cpreg_vmstate_values = g_renew(uint64_t,
236
+ dev = usb_find_device(&s->uport, addr);
567
+ arm_cpu->cpreg_vmstate_values,
237
+ if (dev != NULL) {
568
+ sregs_match_len);
238
+ trace_usb_dwc2_device_found(0);
569
+
239
+ return dev;
570
+ memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t));
571
+
572
+ /* Populate cp list for all known sysregs */
573
+ for (i = 0; i < sregs_match_len; i++) {
574
+ const ARMCPRegInfo *ri;
575
+ uint32_t key = hvf_sreg_match[i].key;
576
+
577
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, key);
578
+ if (ri) {
579
+ assert(!(ri->type & ARM_CP_NO_RAW));
580
+ hvf_sreg_match[i].cp_idx = sregs_cnt;
581
+ arm_cpu->cpreg_indexes[sregs_cnt++] = cpreg_to_kvm_id(key);
582
+ } else {
583
+ hvf_sreg_match[i].cp_idx = -1;
240
+ }
584
+ }
241
+ }
585
+ }
242
+
586
+ arm_cpu->cpreg_array_len = sregs_cnt;
243
+ trace_usb_dwc2_device_not_found();
587
+ arm_cpu->cpreg_vmstate_array_len = sregs_cnt;
244
+ return NULL;
588
+
245
+}
589
+ assert(write_cpustate_to_list(arm_cpu, false));
246
+
590
+
247
+static const char *pstatus[] = {
591
+ /* Set CP_NO_RAW system registers on init */
248
+ "USB_RET_SUCCESS", "USB_RET_NODEV", "USB_RET_NAK", "USB_RET_STALL",
592
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1,
249
+ "USB_RET_BABBLE", "USB_RET_IOERROR", "USB_RET_ASYNC",
593
+ arm_cpu->midr);
250
+ "USB_RET_ADD_TO_QUEUE", "USB_RET_REMOVE_FROM_QUEUE"
594
+ assert_hvf_ok(ret);
251
+};
595
+
252
+
596
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1,
253
+static uint32_t pintr[] = {
597
+ arm_cpu->mp_affinity);
254
+ HCINTMSK_XFERCOMPL, HCINTMSK_XACTERR, HCINTMSK_NAK, HCINTMSK_STALL,
598
+ assert_hvf_ok(ret);
255
+ HCINTMSK_BBLERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR,
599
+
256
+ HCINTMSK_XACTERR
600
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr);
257
+};
601
+ assert_hvf_ok(ret);
258
+
602
+ pfr |= env->gicv3state ? (1 << 24) : 0;
259
+static const char *types[] = {
603
+ ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr);
260
+ "Ctrl", "Isoc", "Bulk", "Intr"
604
+ assert_hvf_ok(ret);
261
+};
605
+
262
+
606
+ /* We're limited to underlying hardware caps, override internal versions */
263
+static const char *dirs[] = {
607
+ ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1,
264
+ "Out", "In"
608
+ &arm_cpu->isar.id_aa64mmfr0);
265
+};
609
+ assert_hvf_ok(ret);
266
+
610
+
267
+static void dwc2_handle_packet(DWC2State *s, uint32_t devadr, USBDevice *dev,
611
+ return 0;
268
+ USBEndpoint *ep, uint32_t index, bool send)
612
+}
269
+{
613
+
270
+ DWC2Packet *p;
614
+void hvf_kick_vcpu_thread(CPUState *cpu)
271
+ uint32_t hcchar = s->hreg1[index];
615
+{
272
+ uint32_t hctsiz = s->hreg1[index + 4];
616
+ hv_vcpus_exit(&cpu->hvf->fd, 1);
273
+ uint32_t hcdma = s->hreg1[index + 5];
617
+}
274
+ uint32_t chan, epnum, epdir, eptype, mps, pid, pcnt, len, tlen, intr = 0;
618
+
275
+ uint32_t tpcnt, stsidx, actual = 0;
619
+static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
276
+ bool do_intr = false, done = false;
620
+ uint32_t syndrome)
277
+
621
+{
278
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
622
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
279
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
623
+ CPUARMState *env = &arm_cpu->env;
280
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
624
+
281
+ mps = get_field(hcchar, HCCHAR_MPS);
625
+ cpu->exception_index = excp;
282
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
626
+ env->exception.target_el = 1;
283
+ pcnt = get_field(hctsiz, TSIZ_PKTCNT);
627
+ env->exception.syndrome = syndrome;
284
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
628
+
285
+ assert(len <= DWC2_MAX_XFER_SIZE);
629
+ arm_cpu_do_interrupt(cpu);
286
+ chan = index >> 3;
630
+}
287
+ p = &s->packet[chan];
631
+
288
+
632
+static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
289
+ trace_usb_dwc2_handle_packet(chan, dev, &p->packet, epnum, types[eptype],
633
+{
290
+ dirs[epdir], mps, len, pcnt);
634
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
291
+
635
+ CPUARMState *env = &arm_cpu->env;
292
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
636
+ uint64_t val = 0;
293
+ pid = USB_TOKEN_SETUP;
637
+
294
+ } else {
638
+ switch (reg) {
295
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
639
+ case SYSREG_CNTPCT_EL0:
296
+ }
640
+ val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
297
+
641
+ gt_cntfrq_period_ns(arm_cpu);
298
+ if (send) {
642
+ break;
299
+ tlen = len;
643
+ case SYSREG_OSLSR_EL1:
300
+ if (p->small) {
644
+ val = env->cp15.oslsr_el1;
301
+ if (tlen > mps) {
645
+ break;
302
+ tlen = mps;
646
+ case SYSREG_OSDLR_EL1:
303
+ }
647
+ /* Dummy register */
648
+ break;
649
+ default:
650
+ cpu_synchronize_state(cpu);
651
+ trace_hvf_unhandled_sysreg_read(env->pc, reg,
652
+ (reg >> 20) & 0x3,
653
+ (reg >> 14) & 0x7,
654
+ (reg >> 10) & 0xf,
655
+ (reg >> 1) & 0xf,
656
+ (reg >> 17) & 0x7);
657
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
658
+ return 1;
659
+ }
660
+
661
+ trace_hvf_sysreg_read(reg,
662
+ (reg >> 20) & 0x3,
663
+ (reg >> 14) & 0x7,
664
+ (reg >> 10) & 0xf,
665
+ (reg >> 1) & 0xf,
666
+ (reg >> 17) & 0x7,
667
+ val);
668
+ hvf_set_reg(cpu, rt, val);
669
+
670
+ return 0;
671
+}
672
+
673
+static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
674
+{
675
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
676
+ CPUARMState *env = &arm_cpu->env;
677
+
678
+ trace_hvf_sysreg_write(reg,
679
+ (reg >> 20) & 0x3,
680
+ (reg >> 14) & 0x7,
681
+ (reg >> 10) & 0xf,
682
+ (reg >> 1) & 0xf,
683
+ (reg >> 17) & 0x7,
684
+ val);
685
+
686
+ switch (reg) {
687
+ case SYSREG_OSLAR_EL1:
688
+ env->cp15.oslsr_el1 = val & 1;
689
+ break;
690
+ case SYSREG_OSDLR_EL1:
691
+ /* Dummy register */
692
+ break;
693
+ default:
694
+ cpu_synchronize_state(cpu);
695
+ trace_hvf_unhandled_sysreg_write(env->pc, reg,
696
+ (reg >> 20) & 0x3,
697
+ (reg >> 14) & 0x7,
698
+ (reg >> 10) & 0xf,
699
+ (reg >> 1) & 0xf,
700
+ (reg >> 17) & 0x7);
701
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
702
+ return 1;
703
+ }
704
+
705
+ return 0;
706
+}
707
+
708
+static int hvf_inject_interrupts(CPUState *cpu)
709
+{
710
+ if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) {
711
+ trace_hvf_inject_fiq();
712
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ,
713
+ true);
714
+ }
715
+
716
+ if (cpu->interrupt_request & CPU_INTERRUPT_HARD) {
717
+ trace_hvf_inject_irq();
718
+ hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ,
719
+ true);
720
+ }
721
+
722
+ return 0;
723
+}
724
+
725
+static uint64_t hvf_vtimer_val_raw(void)
726
+{
727
+ /*
728
+ * mach_absolute_time() returns the vtimer value without the VM
729
+ * offset that we define. Add our own offset on top.
730
+ */
731
+ return mach_absolute_time() - hvf_state->vtimer_offset;
732
+}
733
+
734
+static void hvf_sync_vtimer(CPUState *cpu)
735
+{
736
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
737
+ hv_return_t r;
738
+ uint64_t ctl;
739
+ bool irq_state;
740
+
741
+ if (!cpu->hvf->vtimer_masked) {
742
+ /* We will get notified on vtimer changes by hvf, nothing to do */
743
+ return;
744
+ }
745
+
746
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
747
+ assert_hvf_ok(r);
748
+
749
+ irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) ==
750
+ (TMR_CTL_ENABLE | TMR_CTL_ISTATUS);
751
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state);
752
+
753
+ if (!irq_state) {
754
+ /* Timer no longer asserting, we can unmask it */
755
+ hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
756
+ cpu->hvf->vtimer_masked = false;
757
+ }
758
+}
759
+
760
+int hvf_vcpu_exec(CPUState *cpu)
761
+{
762
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
763
+ CPUARMState *env = &arm_cpu->env;
764
+ hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
765
+ hv_return_t r;
766
+ bool advance_pc = false;
767
+
768
+ if (hvf_inject_interrupts(cpu)) {
769
+ return EXCP_INTERRUPT;
770
+ }
771
+
772
+ if (cpu->halted) {
773
+ return EXCP_HLT;
774
+ }
775
+
776
+ flush_cpu_state(cpu);
777
+
778
+ qemu_mutex_unlock_iothread();
779
+ assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd));
780
+
781
+ /* handle VMEXIT */
782
+ uint64_t exit_reason = hvf_exit->reason;
783
+ uint64_t syndrome = hvf_exit->exception.syndrome;
784
+ uint32_t ec = syn_get_ec(syndrome);
785
+
786
+ qemu_mutex_lock_iothread();
787
+ switch (exit_reason) {
788
+ case HV_EXIT_REASON_EXCEPTION:
789
+ /* This is the main one, handle below. */
790
+ break;
791
+ case HV_EXIT_REASON_VTIMER_ACTIVATED:
792
+ qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
793
+ cpu->hvf->vtimer_masked = true;
794
+ return 0;
795
+ case HV_EXIT_REASON_CANCELED:
796
+ /* we got kicked, no exit to process */
797
+ return 0;
798
+ default:
799
+ assert(0);
800
+ }
801
+
802
+ hvf_sync_vtimer(cpu);
803
+
804
+ switch (ec) {
805
+ case EC_DATAABORT: {
806
+ bool isv = syndrome & ARM_EL_ISV;
807
+ bool iswrite = (syndrome >> 6) & 1;
808
+ bool s1ptw = (syndrome >> 7) & 1;
809
+ uint32_t sas = (syndrome >> 22) & 3;
810
+ uint32_t len = 1 << sas;
811
+ uint32_t srt = (syndrome >> 16) & 0x1f;
812
+ uint64_t val = 0;
813
+
814
+ trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address,
815
+ hvf_exit->exception.physical_address, isv,
816
+ iswrite, s1ptw, len, srt);
817
+
818
+ assert(isv);
819
+
820
+ if (iswrite) {
821
+ val = hvf_get_reg(cpu, srt);
822
+ address_space_write(&address_space_memory,
823
+ hvf_exit->exception.physical_address,
824
+ MEMTXATTRS_UNSPECIFIED, &val, len);
825
+ } else {
826
+ address_space_read(&address_space_memory,
827
+ hvf_exit->exception.physical_address,
828
+ MEMTXATTRS_UNSPECIFIED, &val, len);
829
+ hvf_set_reg(cpu, srt, val);
+ }
+
+ advance_pc = true;
+ break;
+ }
+ case EC_SYSTEMREGISTERTRAP: {
+ bool isread = (syndrome >> 0) & 1;
+ uint32_t rt = (syndrome >> 5) & 0x1f;
+ uint32_t reg = syndrome & SYSREG_MASK;
+ uint64_t val;
+ int ret = 0;
+
+ if (isread) {
+ ret = hvf_sysreg_read(cpu, reg, rt);
+ } else {
+ val = hvf_get_reg(cpu, rt);
+ ret = hvf_sysreg_write(cpu, reg, val);
+ }
+
+ advance_pc = !ret;
+ break;
+ }
+ case EC_WFX_TRAP:
+ advance_pc = true;
+ break;
+ case EC_AA64_HVC:
+ cpu_synchronize_state(cpu);
+ trace_hvf_unknown_hvc(env->xregs[0]);
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
+ env->xregs[0] = -1;
+ break;
+ case EC_AA64_SMC:
+ cpu_synchronize_state(cpu);
+ trace_hvf_unknown_smc(env->xregs[0]);
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+ }
+
+ if (pid != USB_TOKEN_IN) {
+ trace_usb_dwc2_memory_read(hcdma, tlen);
+ if (dma_memory_read(&s->dma_as, hcdma,
+ s->usb_buf[chan], tlen) != MEMTX_OK) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_read failed\n",
+ __func__);
+ }
+ }
+
+ usb_packet_init(&p->packet);
+ usb_packet_setup(&p->packet, pid, ep, 0, hcdma,
+ pid != USB_TOKEN_IN, true);
+ usb_packet_addbuf(&p->packet, s->usb_buf[chan], tlen);
+ p->async = DWC2_ASYNC_NONE;
+ usb_handle_packet(dev, &p->packet);
+ } else {
+ tlen = p->len;
+ }
+
+ stsidx = -p->packet.status;
+ assert(stsidx < sizeof(pstatus) / sizeof(*pstatus));
+ actual = p->packet.actual_length;
+ trace_usb_dwc2_packet_status(pstatus[stsidx], actual);
+
+babble:
+ if (p->packet.status != USB_RET_SUCCESS &&
332
+ p->packet.status != USB_RET_NAK &&
333
+ p->packet.status != USB_RET_STALL &&
334
+ p->packet.status != USB_RET_ASYNC) {
335
+ trace_usb_dwc2_packet_error(pstatus[stsidx]);
336
+ }
337
+
338
+ if (p->packet.status == USB_RET_ASYNC) {
339
+ trace_usb_dwc2_async_packet(&p->packet, chan, dev, epnum,
340
+ dirs[epdir], tlen);
341
+ usb_device_flush_ep_queue(dev, ep);
342
+ assert(p->async != DWC2_ASYNC_INFLIGHT);
343
+ p->devadr = devadr;
344
+ p->epnum = epnum;
345
+ p->epdir = epdir;
346
+ p->mps = mps;
347
+ p->pid = pid;
348
+ p->index = index;
349
+ p->pcnt = pcnt;
350
+ p->len = tlen;
351
+ p->async = DWC2_ASYNC_INFLIGHT;
352
+ p->needs_service = false;
353
+ return;
354
+ }
355
+
356
+ if (p->packet.status == USB_RET_SUCCESS) {
357
+ if (actual > tlen) {
358
+ p->packet.status = USB_RET_BABBLE;
359
+ goto babble;
360
+ }
361
+
362
+ if (pid == USB_TOKEN_IN) {
363
+ trace_usb_dwc2_memory_write(hcdma, actual);
364
+ if (dma_memory_write(&s->dma_as, hcdma, s->usb_buf[chan],
365
+ actual) != MEMTX_OK) {
366
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_write failed\n",
367
+ __func__);
368
+ }
369
+ }
370
+
371
+ tpcnt = actual / mps;
372
+ if (actual % mps) {
373
+ tpcnt++;
374
+ if (pid == USB_TOKEN_IN) {
375
+ done = true;
376
+ }
377
+ }
378
+
379
+ pcnt -= tpcnt < pcnt ? tpcnt : pcnt;
380
+ set_field(&hctsiz, pcnt, TSIZ_PKTCNT);
381
+ len -= actual < len ? actual : len;
382
+ set_field(&hctsiz, len, TSIZ_XFERSIZE);
383
+ s->hreg1[index + 4] = hctsiz;
384
+ hcdma += actual;
385
+ s->hreg1[index + 5] = hcdma;
386
+
387
+ if (!pcnt || len == 0 || actual == 0) {
388
+ done = true;
389
+ }
390
+ } else {
391
+ intr |= pintr[stsidx];
392
+ if (p->packet.status == USB_RET_NAK &&
393
+ (eptype == USB_ENDPOINT_XFER_CONTROL ||
394
+ eptype == USB_ENDPOINT_XFER_BULK)) {
395
+ /*
396
+ * for ctrl/bulk, automatically retry on NAK,
397
+ * but send the interrupt anyway
398
+ */
399
+ intr &= ~HCINTMSK_RESERVED14_31;
400
+ s->hreg1[index + 2] |= intr;
401
+ do_intr = true;
402
+ } else {
403
+ intr |= HCINTMSK_CHHLTD;
404
+ done = true;
405
+ }
406
+ }
407
+
408
+ usb_packet_cleanup(&p->packet);
409
+
410
+ if (done) {
411
+ hcchar &= ~HCCHAR_CHENA;
412
+ s->hreg1[index] = hcchar;
413
+ if (!(intr & HCINTMSK_CHHLTD)) {
414
+ intr |= HCINTMSK_CHHLTD | HCINTMSK_XFERCOMPL;
415
+ }
416
+ intr &= ~HCINTMSK_RESERVED14_31;
417
+ s->hreg1[index + 2] |= intr;
418
+ p->needs_service = false;
419
+ trace_usb_dwc2_packet_done(pstatus[stsidx], actual, len, pcnt);
420
+ dwc2_update_hc_irq(s, index);
421
+ return;
422
+ }
423
+
424
+ p->devadr = devadr;
425
+ p->epnum = epnum;
426
+ p->epdir = epdir;
427
+ p->mps = mps;
428
+ p->pid = pid;
429
+ p->index = index;
430
+ p->pcnt = pcnt;
431
+ p->len = len;
432
+ p->needs_service = true;
433
+ trace_usb_dwc2_packet_next(pstatus[stsidx], len, pcnt);
434
+ if (do_intr) {
435
+ dwc2_update_hc_irq(s, index);
436
+ }
437
+}
438
+
439
+/* Attach or detach a device on root hub */
440
+
441
+static const char *speeds[] = {
442
+ "low", "full", "high"
443
+};
444
+
445
+static void dwc2_attach(USBPort *port)
446
+{
447
+ DWC2State *s = port->opaque;
448
+ int hispd = 0;
449
+
450
+ trace_usb_dwc2_attach(port);
451
+ assert(port->index == 0);
452
+
453
+ if (!port->dev || !port->dev->attached) {
454
+ return;
455
+ }
456
+
457
+ assert(port->dev->speed <= USB_SPEED_HIGH);
458
+ trace_usb_dwc2_attach_speed(speeds[port->dev->speed]);
459
+ s->hprt0 &= ~HPRT0_SPD_MASK;
460
+
461
+ switch (port->dev->speed) {
462
+ case USB_SPEED_LOW:
463
+ s->hprt0 |= HPRT0_SPD_LOW_SPEED << HPRT0_SPD_SHIFT;
464
+ break;
465
+ case USB_SPEED_FULL:
466
+ s->hprt0 |= HPRT0_SPD_FULL_SPEED << HPRT0_SPD_SHIFT;
467
+ break;
468
+ case USB_SPEED_HIGH:
469
+ s->hprt0 |= HPRT0_SPD_HIGH_SPEED << HPRT0_SPD_SHIFT;
470
+ hispd = 1;
471
+ break;
472
+ }
473
+
474
+ if (hispd) {
475
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 8000; /* 125000 */
476
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_HS) {
477
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_HS; /* 10.4 */
478
+ } else {
479
+ s->usb_bit_time = 1;
480
+ }
481
+ } else {
482
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
483
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
484
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
485
+ } else {
486
+ s->usb_bit_time = 1;
487
+ }
488
+ }
489
+
490
+ s->fi = USB_FRMINTVL - 1;
491
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_CONNSTS;
492
+
493
+ dwc2_bus_start(s);
494
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
495
+}
496
+
497
+static void dwc2_detach(USBPort *port)
498
+{
499
+ DWC2State *s = port->opaque;
500
+
501
+ trace_usb_dwc2_detach(port);
502
+ assert(port->index == 0);
503
+
504
+ dwc2_bus_stop(s);
505
+
506
+ s->hprt0 &= ~(HPRT0_SPD_MASK | HPRT0_SUSP | HPRT0_ENA | HPRT0_CONNSTS);
507
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_ENACHG;
508
+
509
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
510
+}
511
+
512
+static void dwc2_child_detach(USBPort *port, USBDevice *child)
513
+{
514
+ trace_usb_dwc2_child_detach(port, child);
515
+ assert(port->index == 0);
516
+}
517
+
518
+static void dwc2_wakeup(USBPort *port)
519
+{
520
+ DWC2State *s = port->opaque;
521
+
522
+ trace_usb_dwc2_wakeup(port);
523
+ assert(port->index == 0);
524
+
525
+ if (s->hprt0 & HPRT0_SUSP) {
526
+ s->hprt0 |= HPRT0_RES;
527
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
528
+ }
529
+
530
+ qemu_bh_schedule(s->async_bh);
531
+}
532
+
533
+static void dwc2_async_packet_complete(USBPort *port, USBPacket *packet)
534
+{
535
+ DWC2State *s = port->opaque;
536
+ DWC2Packet *p;
537
+ USBDevice *dev;
538
+ USBEndpoint *ep;
539
+
540
+ assert(port->index == 0);
541
+ p = container_of(packet, DWC2Packet, packet);
542
+ dev = dwc2_find_device(s, p->devadr);
543
+ ep = usb_ep_get(dev, p->pid, p->epnum);
544
+ trace_usb_dwc2_async_packet_complete(port, packet, p->index >> 3, dev,
545
+ p->epnum, dirs[p->epdir], p->len);
546
+ assert(p->async == DWC2_ASYNC_INFLIGHT);
547
+
548
+ if (packet->status == USB_RET_REMOVE_FROM_QUEUE) {
549
+ usb_cancel_packet(packet);
550
+ usb_packet_cleanup(packet);
551
+ return;
552
+ }
553
+
554
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, false);
555
+
556
+ p->async = DWC2_ASYNC_FINISHED;
557
+ qemu_bh_schedule(s->async_bh);
558
+}
559
+
560
+static USBPortOps dwc2_port_ops = {
561
+ .attach = dwc2_attach,
562
+ .detach = dwc2_detach,
563
+ .child_detach = dwc2_child_detach,
564
+ .wakeup = dwc2_wakeup,
565
+ .complete = dwc2_async_packet_complete,
566
+};
567
+
568
+static uint32_t dwc2_get_frame_remaining(DWC2State *s)
569
+{
570
+ uint32_t fr = 0;
571
+ int64_t tks;
572
+
573
+ tks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - s->sof_time;
574
+ if (tks < 0) {
575
+ tks = 0;
576
+ }
577
+
578
+ /* avoid muldiv if possible */
579
+ if (tks >= s->usb_frame_time) {
580
+ goto out;
581
+ }
582
+ if (tks < s->usb_bit_time) {
583
+ fr = s->fi;
584
+ goto out;
585
+ }
586
+
587
+ /* tks = number of ns since SOF, divided by 83 (fs) or 10 (hs) */
588
+ tks = tks / s->usb_bit_time;
589
+ if (tks >= (int64_t)s->fi) {
590
+ goto out;
591
+ }
592
+
593
+ /* remaining = frame interval minus tks */
594
+ fr = (uint32_t)((int64_t)s->fi - tks);
595
+
596
+out:
597
+ return fr;
598
+}
599
+
600
+static void dwc2_work_bh(void *opaque)
601
+{
602
+ DWC2State *s = opaque;
603
+ DWC2Packet *p;
604
+ USBDevice *dev;
605
+ USBEndpoint *ep;
606
+ int64_t t_now, expire_time;
607
+ int chan;
608
+ bool found = false;
609
+
610
+ trace_usb_dwc2_work_bh();
611
+ if (s->working) {
612
+ return;
613
+ }
614
+ s->working = true;
615
+
616
+ t_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
617
+ chan = s->next_chan;
618
+
619
+ do {
620
+ p = &s->packet[chan];
621
+ if (p->needs_service) {
622
+ dev = dwc2_find_device(s, p->devadr);
623
+ ep = usb_ep_get(dev, p->pid, p->epnum);
624
+ trace_usb_dwc2_work_bh_service(s->next_chan, chan, dev, p->epnum);
625
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, true);
626
+ found = true;
627
+ }
628
+ if (++chan == DWC2_NB_CHAN) {
629
+ chan = 0;
630
+ }
631
+ if (found) {
632
+ s->next_chan = chan;
633
+ trace_usb_dwc2_work_bh_next(chan);
634
+ }
635
+ } while (chan != s->next_chan);
636
+
637
+ if (found) {
638
+ expire_time = t_now + NANOSECONDS_PER_SECOND / 4000;
639
+ timer_mod(s->frame_timer, expire_time);
640
+ }
641
+ s->working = false;
642
+}
643
+
644
+static void dwc2_enable_chan(DWC2State *s, uint32_t index)
645
+{
646
+ USBDevice *dev;
647
+ USBEndpoint *ep;
648
+ uint32_t hcchar;
649
+ uint32_t hctsiz;
650
+ uint32_t devadr, epnum, epdir, eptype, pid, len;
651
+ DWC2Packet *p;
652
+
653
+ assert((index >> 3) < DWC2_NB_CHAN);
654
+ p = &s->packet[index >> 3];
655
+ hcchar = s->hreg1[index];
656
+ hctsiz = s->hreg1[index + 4];
657
+ devadr = get_field(hcchar, HCCHAR_DEVADDR);
658
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
659
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
660
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
661
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
662
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
663
+
664
+ dev = dwc2_find_device(s, devadr);
665
+
666
+ trace_usb_dwc2_enable_chan(index >> 3, dev, &p->packet, epnum);
667
+ if (dev == NULL) {
668
+ return;
669
+ }
670
+
671
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
672
+ pid = USB_TOKEN_SETUP;
673
+ } else {
674
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
675
+ }
676
+
677
+ ep = usb_ep_get(dev, pid, epnum);
678
+
679
+ /*
680
+ * Hack: Networking doesn't like us delivering large transfers, it kind
681
+ * of works but the latency is horrible. So if the transfer is <= the mtu
682
+ * size, we take that as a hint that this might be a network transfer,
683
+ * and do the transfer packet-by-packet.
684
+ */
685
+ if (len > 1536) {
686
+ p->small = false;
687
+ } else {
688
+ p->small = true;
689
+ }
690
+
691
+ dwc2_handle_packet(s, devadr, dev, ep, index, true);
692
+ qemu_bh_schedule(s->async_bh);
693
+}
694
+
695
+static const char *glbregnm[] = {
696
+ "GOTGCTL ", "GOTGINT ", "GAHBCFG ", "GUSBCFG ", "GRSTCTL ",
697
+ "GINTSTS ", "GINTMSK ", "GRXSTSR ", "GRXSTSP ", "GRXFSIZ ",
698
+ "GNPTXFSIZ", "GNPTXSTS ", "GI2CCTL ", "GPVNDCTL ", "GGPIO ",
699
+ "GUID ", "GSNPSID ", "GHWCFG1 ", "GHWCFG2 ", "GHWCFG3 ",
700
+ "GHWCFG4 ", "GLPMCFG ", "GPWRDN ", "GDFIFOCFG", "GADPCTL ",
701
+ "GREFCLK ", "GINTMSK2 ", "GINTSTS2 "
702
+};
703
+
704
+static uint64_t dwc2_glbreg_read(void *ptr, hwaddr addr, int index,
705
+ unsigned size)
706
+{
707
+ DWC2State *s = ptr;
708
+ uint32_t val;
709
+
710
+ assert(addr <= GINTSTS2);
711
+ val = s->glbreg[index];
712
+
713
+ switch (addr) {
714
+ case GRSTCTL:
715
+ /* clear any self-clearing bits that were set */
716
+ val &= ~(GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH | GRSTCTL_IN_TKNQ_FLSH |
717
+ GRSTCTL_FRMCNTRRST | GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
718
+ s->glbreg[index] = val;
719
+ break;
+ default:
+ break;
+ }
+
+ trace_usb_dwc2_glbreg_read(addr, glbregnm[index], val);
+ return val;
+}
+
+static void dwc2_glbreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
+ unsigned size)
+{
+ DWC2State *s = ptr;
+ uint64_t orig = val;
+ uint32_t *mmio;
+ uint32_t old;
+ int iflg = 0;
+
+ assert(addr <= GINTSTS2);
+ break;
+ default:
+ cpu_synchronize_state(cpu);
+ trace_hvf_exit(syndrome, ec, env->pc);
+ error_report("0x%llx: unhandled exception ec=0x%x", env->pc, ec);
+ }
+
+ if (advance_pc) {
+ uint64_t pc;
+
+ flush_cpu_state(cpu);
+
+ r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc);
+ assert_hvf_ok(r);
+ pc += 4;
+ r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
+ assert_hvf_ok(r);
+ }
+
738
+ mmio = &s->glbreg[index];
739
+ old = *mmio;
740
+
741
+ switch (addr) {
742
+ case GOTGCTL:
743
+ /* don't allow setting of read-only bits */
744
+ val &= ~(GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
745
+ GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
746
+ GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
747
+ /* don't allow clearing of read-only bits */
748
+ val |= old & (GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
749
+ GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
750
+ GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
751
+ break;
752
+ case GAHBCFG:
753
+ if ((val & GAHBCFG_GLBL_INTR_EN) && !(old & GAHBCFG_GLBL_INTR_EN)) {
754
+ iflg = 1;
755
+ }
756
+ break;
757
+ case GRSTCTL:
758
+ val |= GRSTCTL_AHBIDLE;
759
+ val &= ~GRSTCTL_DMAREQ;
760
+ if (!(old & GRSTCTL_TXFFLSH) && (val & GRSTCTL_TXFFLSH)) {
761
+ /* TODO - TX fifo flush */
762
+ qemu_log_mask(LOG_UNIMP, "Tx FIFO flush not implemented\n");
763
+ }
764
+ if (!(old & GRSTCTL_RXFFLSH) && (val & GRSTCTL_RXFFLSH)) {
765
+ /* TODO - RX fifo flush */
766
+ qemu_log_mask(LOG_UNIMP, "Rx FIFO flush not implemented\n");
767
+ }
768
+ if (!(old & GRSTCTL_IN_TKNQ_FLSH) && (val & GRSTCTL_IN_TKNQ_FLSH)) {
769
+ /* TODO - device IN token queue flush */
770
+ qemu_log_mask(LOG_UNIMP, "Token queue flush not implemented\n");
771
+ }
772
+ if (!(old & GRSTCTL_FRMCNTRRST) && (val & GRSTCTL_FRMCNTRRST)) {
773
+ /* TODO - host frame counter reset */
774
+ qemu_log_mask(LOG_UNIMP, "Frame counter reset not implemented\n");
775
+ }
776
+ if (!(old & GRSTCTL_HSFTRST) && (val & GRSTCTL_HSFTRST)) {
777
+ /* TODO - host soft reset */
778
+ qemu_log_mask(LOG_UNIMP, "Host soft reset not implemented\n");
779
+ }
780
+ if (!(old & GRSTCTL_CSFTRST) && (val & GRSTCTL_CSFTRST)) {
781
+ /* TODO - core soft reset */
782
+ qemu_log_mask(LOG_UNIMP, "Core soft reset not implemented\n");
783
+ }
784
+ /* don't allow clearing of self-clearing bits */
785
+ val |= old & (GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH |
786
+ GRSTCTL_IN_TKNQ_FLSH | GRSTCTL_FRMCNTRRST |
787
+ GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
788
+ break;
789
+ case GINTSTS:
790
+ /* clear the write-1-to-clear bits */
791
+ val |= ~old;
792
+ val = ~val;
793
+ /* don't allow clearing of read-only bits */
794
+ val |= old & (GINTSTS_PTXFEMP | GINTSTS_HCHINT | GINTSTS_PRTINT |
795
+ GINTSTS_OEPINT | GINTSTS_IEPINT | GINTSTS_GOUTNAKEFF |
796
+ GINTSTS_GINNAKEFF | GINTSTS_NPTXFEMP | GINTSTS_RXFLVL |
797
+ GINTSTS_OTGINT | GINTSTS_CURMODE_HOST);
798
+ iflg = 1;
799
+ break;
800
+ case GINTMSK:
801
+ iflg = 1;
802
+ break;
803
+ default:
804
+ break;
805
+ }
806
+
807
+ trace_usb_dwc2_glbreg_write(addr, glbregnm[index], orig, old, val);
808
+ *mmio = val;
809
+
810
+ if (iflg) {
811
+ dwc2_update_irq(s);
812
+ }
813
+}
814
+
815
+static uint64_t dwc2_fszreg_read(void *ptr, hwaddr addr, int index,
816
+ unsigned size)
817
+{
818
+ DWC2State *s = ptr;
819
+ uint32_t val;
820
+
821
+ assert(addr == HPTXFSIZ);
822
+ val = s->fszreg[index];
823
+
824
+ trace_usb_dwc2_fszreg_read(addr, val);
825
+ return val;
826
+}
827
+
828
+static void dwc2_fszreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
829
+ unsigned size)
830
+{
831
+ DWC2State *s = ptr;
832
+ uint64_t orig = val;
833
+ uint32_t *mmio;
834
+ uint32_t old;
835
+
836
+ assert(addr == HPTXFSIZ);
837
+ mmio = &s->fszreg[index];
838
+ old = *mmio;
839
+
840
+ trace_usb_dwc2_fszreg_write(addr, orig, old, val);
841
+ *mmio = val;
842
+}
843
+
844
+static const char *hreg0nm[] = {
845
+ "HCFG ", "HFIR ", "HFNUM ", "<rsvd> ", "HPTXSTS ",
846
+ "HAINT ", "HAINTMSK ", "HFLBADDR ", "<rsvd> ", "<rsvd> ",
847
+ "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ",
848
+ "<rsvd> ", "HPRT0 "
849
+};
850
+
851
+static uint64_t dwc2_hreg0_read(void *ptr, hwaddr addr, int index,
852
+ unsigned size)
853
+{
854
+ DWC2State *s = ptr;
855
+ uint32_t val;
856
+
857
+ assert(addr >= HCFG && addr <= HPRT0);
858
+ val = s->hreg0[index];
859
+
860
+ switch (addr) {
861
+ case HFNUM:
862
+ val = (dwc2_get_frame_remaining(s) << HFNUM_FRREM_SHIFT) |
863
+ (s->hfnum << HFNUM_FRNUM_SHIFT);
864
+ break;
865
+ default:
866
+ break;
867
+ }
868
+
869
+ trace_usb_dwc2_hreg0_read(addr, hreg0nm[index], val);
870
+ return val;
871
+}
872
+
873
+static void dwc2_hreg0_write(void *ptr, hwaddr addr, int index, uint64_t val,
874
+ unsigned size)
875
+{
876
+ DWC2State *s = ptr;
877
+ USBDevice *dev = s->uport.dev;
878
+ uint64_t orig = val;
879
+ uint32_t *mmio;
880
+ uint32_t tval, told, old;
881
+ int prst = 0;
882
+ int iflg = 0;
883
+
884
+ assert(addr >= HCFG && addr <= HPRT0);
885
+ mmio = &s->hreg0[index];
886
+ old = *mmio;
887
+
888
+ switch (addr) {
889
+ case HFIR:
890
+ break;
891
+ case HFNUM:
892
+ case HPTXSTS:
893
+ case HAINT:
894
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
895
+ __func__);
896
+ return;
897
+ case HAINTMSK:
898
+ val &= 0xffff;
899
+ break;
900
+ case HPRT0:
901
+ /* don't allow clearing of read-only bits */
902
+ val |= old & (HPRT0_SPD_MASK | HPRT0_LNSTS_MASK | HPRT0_OVRCURRACT |
903
+ HPRT0_CONNSTS);
904
+ /* don't allow clearing of self-clearing bits */
905
+ val |= old & (HPRT0_SUSP | HPRT0_RES);
906
+ /* don't allow setting of self-setting bits */
907
+ if (!(old & HPRT0_ENA) && (val & HPRT0_ENA)) {
908
+ val &= ~HPRT0_ENA;
909
+ }
910
+ /* clear the write-1-to-clear bits */
911
+ tval = val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
912
+ HPRT0_CONNDET);
913
+ told = old & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
914
+ HPRT0_CONNDET);
915
+ tval |= ~told;
916
+ tval = ~tval;
917
+ tval &= (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
918
+ HPRT0_CONNDET);
919
+ val &= ~(HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
920
+ HPRT0_CONNDET);
921
+ val |= tval;
922
+ if (!(val & HPRT0_RST) && (old & HPRT0_RST)) {
923
+ if (dev && dev->attached) {
924
+ val |= HPRT0_ENA | HPRT0_ENACHG;
925
+ prst = 1;
926
+ }
927
+ }
928
+ if (val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_CONNDET)) {
929
+ iflg = 1;
930
+ } else {
931
+ iflg = -1;
932
+ }
933
+ break;
934
+ default:
935
+ break;
936
+ }
937
+
938
+ if (prst) {
939
+ trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old,
940
+ val & ~HPRT0_CONNDET);
941
+ trace_usb_dwc2_hreg0_action("call usb_port_reset");
942
+ usb_port_reset(&s->uport);
943
+ val &= ~HPRT0_CONNDET;
944
+ } else {
945
+ trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old, val);
946
+ }
947
+
948
+ *mmio = val;
949
+
950
+ if (iflg > 0) {
951
+ trace_usb_dwc2_hreg0_action("enable PRTINT");
952
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
953
+ } else if (iflg < 0) {
954
+ trace_usb_dwc2_hreg0_action("disable PRTINT");
955
+ dwc2_lower_global_irq(s, GINTSTS_PRTINT);
956
+ }
957
+}
958
+
959
+static const char *hreg1nm[] = {
960
+ "HCCHAR ", "HCSPLT ", "HCINT ", "HCINTMSK", "HCTSIZ ", "HCDMA ",
961
+ "<rsvd> ", "HCDMAB "
962
+};
963
+
964
+static uint64_t dwc2_hreg1_read(void *ptr, hwaddr addr, int index,
965
+ unsigned size)
966
+{
967
+ DWC2State *s = ptr;
968
+ uint32_t val;
969
+
970
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
971
+ val = s->hreg1[index];
972
+
973
+ trace_usb_dwc2_hreg1_read(addr, hreg1nm[index & 7], addr >> 5, val);
974
+ return val;
975
+}
976
+
977
+static void dwc2_hreg1_write(void *ptr, hwaddr addr, int index, uint64_t val,
978
+ unsigned size)
979
+{
980
+ DWC2State *s = ptr;
981
+ uint64_t orig = val;
982
+ uint32_t *mmio;
983
+ uint32_t old;
984
+ int iflg = 0;
985
+ int enflg = 0;
986
+ int disflg = 0;
987
+
988
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
989
+ mmio = &s->hreg1[index];
990
+ old = *mmio;
991
+
992
+ switch (HSOTG_REG(0x500) + (addr & 0x1c)) {
993
+ case HCCHAR(0):
994
+ if ((val & HCCHAR_CHDIS) && !(old & HCCHAR_CHDIS)) {
995
+ val &= ~(HCCHAR_CHENA | HCCHAR_CHDIS);
996
+ disflg = 1;
997
+ } else {
998
+ val |= old & HCCHAR_CHDIS;
999
+ if ((val & HCCHAR_CHENA) && !(old & HCCHAR_CHENA)) {
1000
+ val &= ~HCCHAR_CHDIS;
1001
+ enflg = 1;
1002
+ } else {
1003
+ val |= old & HCCHAR_CHENA;
1004
+ }
1005
+ }
1006
+ break;
1007
+ case HCINT(0):
1008
+ /* clear the write-1-to-clear bits */
1009
+ val |= ~old;
1010
+ val = ~val;
1011
+ val &= ~HCINTMSK_RESERVED14_31;
1012
+ iflg = 1;
1013
+ break;
1014
+ case HCINTMSK(0):
1015
+ val &= ~HCINTMSK_RESERVED14_31;
1016
+ iflg = 1;
1017
+ break;
1018
+ case HCDMAB(0):
1019
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
1020
+ __func__);
1021
+ return;
1022
+ default:
1023
+ break;
1024
+ }
1025
+
1026
+ trace_usb_dwc2_hreg1_write(addr, hreg1nm[index & 7], index >> 3, orig,
1027
+ old, val);
1028
+ *mmio = val;
1029
+
1030
+ if (disflg) {
1031
+ /* set ChHltd in HCINT */
1032
+ s->hreg1[(index & ~7) + 2] |= HCINTMSK_CHHLTD;
1033
+ iflg = 1;
1034
+ }
1035
+
1036
+ if (enflg) {
1037
+ dwc2_enable_chan(s, index & ~7);
1038
+ }
1039
+
1040
+ if (iflg) {
1041
+ dwc2_update_hc_irq(s, index & ~7);
1042
+ }
1043
+}
1044
+
1045
+static const char *pcgregnm[] = {
1046
+ "PCGCTL ", "PCGCCTL1 "
1047
+};
1048
+
1049
+static uint64_t dwc2_pcgreg_read(void *ptr, hwaddr addr, int index,
1050
+ unsigned size)
1051
+{
1052
+ DWC2State *s = ptr;
1053
+ uint32_t val;
1054
+
1055
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1056
+ val = s->pcgreg[index];
1057
+
1058
+ trace_usb_dwc2_pcgreg_read(addr, pcgregnm[index], val);
1059
+ return val;
1060
+}
1061
+
1062
+static void dwc2_pcgreg_write(void *ptr, hwaddr addr, int index,
1063
+ uint64_t val, unsigned size)
1064
+{
1065
+ DWC2State *s = ptr;
1066
+ uint64_t orig = val;
1067
+ uint32_t *mmio;
1068
+ uint32_t old;
1069
+
1070
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1071
+ mmio = &s->pcgreg[index];
1072
+ old = *mmio;
1073
+
1074
+ trace_usb_dwc2_pcgreg_write(addr, pcgregnm[index], orig, old, val);
1075
+ *mmio = val;
1076
+}
1077
+
1078
+static uint64_t dwc2_hsotg_read(void *ptr, hwaddr addr, unsigned size)
1079
+{
1080
+ uint64_t val;
1081
+
1082
+ switch (addr) {
1083
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1084
+ val = dwc2_glbreg_read(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, size);
1085
+ break;
1086
+ case HSOTG_REG(0x100):
1087
+ val = dwc2_fszreg_read(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, size);
1088
+ break;
1089
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1090
+ /* Gadget-mode registers, just return 0 for now */
1091
+ val = 0;
1092
+ break;
1093
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1094
+ val = dwc2_hreg0_read(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, size);
1095
+ break;
1096
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1097
+ val = dwc2_hreg1_read(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, size);
1098
+ break;
1099
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1100
+ /* Gadget-mode registers, just return 0 for now */
1101
+ val = 0;
1102
+ break;
1103
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1104
+ val = dwc2_pcgreg_read(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, size);
1105
+ break;
1106
+ default:
1107
+ g_assert_not_reached();
1108
+ }
1109
+
1110
+ return val;
1111
+}
1112
+
1113
+static void dwc2_hsotg_write(void *ptr, hwaddr addr, uint64_t val,
1114
+ unsigned size)
1115
+{
1116
+ switch (addr) {
1117
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1118
+ dwc2_glbreg_write(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, val, size);
1119
+ break;
1120
+ case HSOTG_REG(0x100):
1121
+ dwc2_fszreg_write(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, val, size);
1122
+ break;
1123
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1124
+ /* Gadget-mode registers, do nothing for now */
1125
+ break;
1126
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1127
+ dwc2_hreg0_write(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, val, size);
1128
+ break;
1129
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1130
+ dwc2_hreg1_write(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, val, size);
1131
+ break;
1132
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1133
+ /* Gadget-mode registers, do nothing for now */
1134
+ break;
1135
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1136
+ dwc2_pcgreg_write(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, val, size);
1137
+ break;
1138
+ default:
1139
+ g_assert_not_reached();
1140
+ }
1141
+}
1142
+
1143
+static const MemoryRegionOps dwc2_mmio_hsotg_ops = {
1144
+ .read = dwc2_hsotg_read,
1145
+ .write = dwc2_hsotg_write,
1146
+ .impl.min_access_size = 4,
1147
+ .impl.max_access_size = 4,
1148
+ .endianness = DEVICE_LITTLE_ENDIAN,
1149
+};
1150
+
1151
+static uint64_t dwc2_hreg2_read(void *ptr, hwaddr addr, unsigned size)
1152
+{
1153
+ /* TODO - implement FIFOs to support slave mode */
1154
+ trace_usb_dwc2_hreg2_read(addr, addr >> 12, 0);
1155
+ qemu_log_mask(LOG_UNIMP, "FIFO read not implemented\n");
1156
+ return 0;
+}
+
+static void dwc2_hreg2_write(void *ptr, hwaddr addr, uint64_t val,
+ unsigned size)
+{
+ return 0;
+}
+
+static const VMStateDescription vmstate_hvf_vtimer = {
+ .name = "hvf-vtimer",
1162
+ uint64_t orig = val;
1163
+
1164
+ /* TODO - implement FIFOs to support slave mode */
1165
+ trace_usb_dwc2_hreg2_write(addr, addr >> 12, orig, 0, val);
1166
+ qemu_log_mask(LOG_UNIMP, "FIFO write not implemented\n");
1167
+}
1168
+
1169
+static const MemoryRegionOps dwc2_mmio_hreg2_ops = {
1170
+ .read = dwc2_hreg2_read,
1171
+ .write = dwc2_hreg2_write,
1172
+ .impl.min_access_size = 4,
1173
+ .impl.max_access_size = 4,
1174
+ .endianness = DEVICE_LITTLE_ENDIAN,
1175
+};
1176
+
1177
+static void dwc2_wakeup_endpoint(USBBus *bus, USBEndpoint *ep,
1178
+ unsigned int stream)
1179
+{
1180
+ DWC2State *s = container_of(bus, DWC2State, bus);
1181
+
1182
+ trace_usb_dwc2_wakeup_endpoint(ep, stream);
1183
+
1184
+ /* TODO - do something here? */
1185
+ qemu_bh_schedule(s->async_bh);
1186
+}
1187
+
1188
+static USBBusOps dwc2_bus_ops = {
1189
+ .wakeup_endpoint = dwc2_wakeup_endpoint,
1190
+};
1191
+
1192
+static void dwc2_work_timer(void *opaque)
1193
+{
1194
+ DWC2State *s = opaque;
1195
+
1196
+ trace_usb_dwc2_work_timer();
1197
+ qemu_bh_schedule(s->async_bh);
1198
+}
1199
+
1200
+static void dwc2_reset_enter(Object *obj, ResetType type)
1201
+{
1202
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1203
+ DWC2State *s = DWC2_USB(obj);
1204
+ int i;
1205
+
1206
+ trace_usb_dwc2_reset_enter();
1207
+
1208
+ if (c->parent_phases.enter) {
1209
+ c->parent_phases.enter(obj, type);
1210
+ }
1211
+
1212
+ timer_del(s->frame_timer);
1213
+ qemu_bh_cancel(s->async_bh);
1214
+
1215
+ if (s->uport.dev && s->uport.dev->attached) {
1216
+ usb_detach(&s->uport);
1217
+ }
1218
+
1219
+ dwc2_bus_stop(s);
1220
+
1221
+ s->gotgctl = GOTGCTL_BSESVLD | GOTGCTL_ASESVLD | GOTGCTL_CONID_B;
1222
+ s->gotgint = 0;
1223
+ s->gahbcfg = 0;
1224
+ s->gusbcfg = 5 << GUSBCFG_USBTRDTIM_SHIFT;
1225
+ s->grstctl = GRSTCTL_AHBIDLE;
1226
+ s->gintsts = GINTSTS_CONIDSTSCHNG | GINTSTS_PTXFEMP | GINTSTS_NPTXFEMP |
1227
+ GINTSTS_CURMODE_HOST;
1228
+ s->gintmsk = 0;
1229
+ s->grxstsr = 0;
1230
+ s->grxstsp = 0;
1231
+ s->grxfsiz = 1024;
1232
+ s->gnptxfsiz = 1024 << FIFOSIZE_DEPTH_SHIFT;
1233
+ s->gnptxsts = (4 << FIFOSIZE_DEPTH_SHIFT) | 1024;
1234
+ s->gi2cctl = GI2CCTL_I2CDATSE0 | GI2CCTL_ACK;
1235
+ s->gpvndctl = 0;
1236
+ s->ggpio = 0;
1237
+ s->guid = 0;
1238
+ s->gsnpsid = 0x4f54294a;
1239
+ s->ghwcfg1 = 0;
1240
+ s->ghwcfg2 = (8 << GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT) |
1241
+ (4 << GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT) |
1242
+ (4 << GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT) |
1243
+ GHWCFG2_DYNAMIC_FIFO |
1244
+ GHWCFG2_PERIO_EP_SUPPORTED |
1245
+ ((DWC2_NB_CHAN - 1) << GHWCFG2_NUM_HOST_CHAN_SHIFT) |
1246
+ (GHWCFG2_INT_DMA_ARCH << GHWCFG2_ARCHITECTURE_SHIFT) |
1247
+ (GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST << GHWCFG2_OP_MODE_SHIFT);
1248
+ s->ghwcfg3 = (4096 << GHWCFG3_DFIFO_DEPTH_SHIFT) |
1249
+ (4 << GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT) |
1250
+ (4 << GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT);
1251
+ s->ghwcfg4 = 0;
1252
+ s->glpmcfg = 0;
1253
+ s->gpwrdn = GPWRDN_PWRDNRSTN;
1254
+ s->gdfifocfg = 0;
1255
+ s->gadpctl = 0;
1256
+ s->grefclk = 0;
1257
+ s->gintmsk2 = 0;
1258
+ s->gintsts2 = 0;
1259
+
1260
+ s->hptxfsiz = 500 << FIFOSIZE_DEPTH_SHIFT;
1261
+
1262
+ s->hcfg = 2 << HCFG_RESVALID_SHIFT;
1263
+ s->hfir = 60000;
1264
+ s->hfnum = 0x3fff;
1265
+ s->hptxsts = (16 << TXSTS_QSPCAVAIL_SHIFT) | 32768;
1266
+ s->haint = 0;
1267
+ s->haintmsk = 0;
1268
+ s->hprt0 = 0;
1269
+
1270
+ memset(s->hreg1, 0, sizeof(s->hreg1));
1271
+ memset(s->pcgreg, 0, sizeof(s->pcgreg));
1272
+
1273
+ s->sof_time = 0;
1274
+ s->frame_number = 0;
1275
+ s->fi = USB_FRMINTVL - 1;
1276
+ s->next_chan = 0;
1277
+ s->working = false;
1278
+
1279
+ for (i = 0; i < DWC2_NB_CHAN; i++) {
1280
+ s->packet[i].needs_service = false;
1281
+ }
1282
+}
1283
+
1284
+static void dwc2_reset_hold(Object *obj)
1285
+{
1286
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1287
+ DWC2State *s = DWC2_USB(obj);
1288
+
1289
+ trace_usb_dwc2_reset_hold();
1290
+
1291
+ if (c->parent_phases.hold) {
1292
+ c->parent_phases.hold(obj);
1293
+ }
1294
+
1295
+ dwc2_update_irq(s);
1296
+}
1297
+
1298
+static void dwc2_reset_exit(Object *obj)
1299
+{
1300
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1301
+ DWC2State *s = DWC2_USB(obj);
1302
+
1303
+ trace_usb_dwc2_reset_exit();
1304
+
1305
+ if (c->parent_phases.exit) {
1306
+ c->parent_phases.exit(obj);
1307
+ }
1308
+
1309
+ s->hprt0 = HPRT0_PWR;
1310
+ if (s->uport.dev && s->uport.dev->attached) {
1311
+ usb_attach(&s->uport);
1312
+ usb_device_reset(s->uport.dev);
1313
+ }
1314
+}
1315
+
1316
+static void dwc2_realize(DeviceState *dev, Error **errp)
1317
+{
1318
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
1319
+ DWC2State *s = DWC2_USB(dev);
1320
+ Object *obj;
1321
+ Error *err = NULL;
1322
+
1323
+ obj = object_property_get_link(OBJECT(dev), "dma-mr", &err);
1324
+ if (err) {
1325
+ error_setg(errp, "dwc2: required dma-mr link not found: %s",
1326
+ error_get_pretty(err));
1327
+ return;
1328
+ }
1329
+ assert(obj != NULL);
1330
+
1331
+ s->dma_mr = MEMORY_REGION(obj);
1332
+ address_space_init(&s->dma_as, s->dma_mr, "dwc2");
1333
+
1334
+ usb_bus_new(&s->bus, sizeof(s->bus), &dwc2_bus_ops, dev);
1335
+ usb_register_port(&s->bus, &s->uport, s, 0, &dwc2_port_ops,
1336
+ USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL |
1337
+ (s->usb_version == 2 ? USB_SPEED_MASK_HIGH : 0));
1338
+ s->uport.dev = 0;
1339
+
1340
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
1341
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
1342
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
1343
+ } else {
1344
+ s->usb_bit_time = 1;
1345
+ }
1346
+
1347
+ s->fi = USB_FRMINTVL - 1;
1348
+ s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
1349
+ s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
1350
+ s->async_bh = qemu_bh_new(dwc2_work_bh, s);
1351
+
1352
+ sysbus_init_irq(sbd, &s->irq);
1353
+}
1354
+
1355
+static void dwc2_init(Object *obj)
1356
+{
1357
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1358
+ DWC2State *s = DWC2_USB(obj);
1359
+
1360
+ memory_region_init(&s->container, obj, "dwc2", DWC2_MMIO_SIZE);
1361
+ sysbus_init_mmio(sbd, &s->container);
1362
+
1363
+ memory_region_init_io(&s->hsotg, obj, &dwc2_mmio_hsotg_ops, s,
1364
+ "dwc2-io", 4 * KiB);
1365
+ memory_region_add_subregion(&s->container, 0x0000, &s->hsotg);
1366
+
1367
+ memory_region_init_io(&s->fifos, obj, &dwc2_mmio_hreg2_ops, s,
1368
+ "dwc2-fifo", 64 * KiB);
1369
+ memory_region_add_subregion(&s->container, 0x1000, &s->fifos);
1370
+}
1371
+
1372
+static const VMStateDescription vmstate_dwc2_state_packet = {
1373
+ .name = "dwc2/packet",
1374
+ .version_id = 1,
889
+ .version_id = 1,
1375
+ .minimum_version_id = 1,
890
+ .minimum_version_id = 1,
1376
+ .fields = (VMStateField[]) {
891
+ .fields = (VMStateField[]) {
1377
+ VMSTATE_UINT32(devadr, DWC2Packet),
892
+ VMSTATE_UINT64(vtimer_val, HVFVTimer),
1378
+ VMSTATE_UINT32(epnum, DWC2Packet),
1379
+ VMSTATE_UINT32(epdir, DWC2Packet),
1380
+ VMSTATE_UINT32(mps, DWC2Packet),
1381
+ VMSTATE_UINT32(pid, DWC2Packet),
1382
+ VMSTATE_UINT32(index, DWC2Packet),
1383
+ VMSTATE_UINT32(pcnt, DWC2Packet),
1384
+ VMSTATE_UINT32(len, DWC2Packet),
1385
+ VMSTATE_INT32(async, DWC2Packet),
1386
+ VMSTATE_BOOL(small, DWC2Packet),
1387
+ VMSTATE_BOOL(needs_service, DWC2Packet),
1388
+ VMSTATE_END_OF_LIST()
893
+ VMSTATE_END_OF_LIST()
1389
+ },
894
+ },
1390
+};
895
+};
1391
+
896
+
1392
+const VMStateDescription vmstate_dwc2_state = {
897
+static void hvf_vm_state_change(void *opaque, bool running, RunState state)
1393
+ .name = "dwc2",
898
+{
1394
+ .version_id = 1,
899
+ HVFVTimer *s = opaque;
1395
+ .minimum_version_id = 1,
900
+
1396
+ .fields = (VMStateField[]) {
901
+ if (running) {
1397
+ VMSTATE_UINT32_ARRAY(glbreg, DWC2State,
902
+ /* Update vtimer offset on all CPUs */
1398
+ DWC2_GLBREG_SIZE / sizeof(uint32_t)),
903
+ hvf_state->vtimer_offset = mach_absolute_time() - s->vtimer_val;
1399
+ VMSTATE_UINT32_ARRAY(fszreg, DWC2State,
904
+ cpu_synchronize_all_states();
1400
+ DWC2_FSZREG_SIZE / sizeof(uint32_t)),
905
+ } else {
1401
+ VMSTATE_UINT32_ARRAY(hreg0, DWC2State,
906
+ /* Remember vtimer value on every pause */
1402
+ DWC2_HREG0_SIZE / sizeof(uint32_t)),
907
+ s->vtimer_val = hvf_vtimer_val_raw();
1403
+ VMSTATE_UINT32_ARRAY(hreg1, DWC2State,
908
+ }
1404
+ DWC2_HREG1_SIZE / sizeof(uint32_t)),
909
+}
1405
+ VMSTATE_UINT32_ARRAY(pcgreg, DWC2State,
910
+
1406
+ DWC2_PCGREG_SIZE / sizeof(uint32_t)),
911
+int hvf_arch_init(void)
1407
+
912
+{
1408
+ VMSTATE_TIMER_PTR(eof_timer, DWC2State),
913
+ hvf_state->vtimer_offset = mach_absolute_time();
1409
+ VMSTATE_TIMER_PTR(frame_timer, DWC2State),
914
+ vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
1410
+ VMSTATE_INT64(sof_time, DWC2State),
915
+ qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
1411
+ VMSTATE_INT64(usb_frame_time, DWC2State),
916
+ return 0;
1412
+ VMSTATE_INT64(usb_bit_time, DWC2State),
917
+}
1413
+ VMSTATE_UINT32(usb_version, DWC2State),
918
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
1414
+ VMSTATE_UINT16(frame_number, DWC2State),
1415
+ VMSTATE_UINT16(fi, DWC2State),
1416
+ VMSTATE_UINT16(next_chan, DWC2State),
1417
+ VMSTATE_BOOL(working, DWC2State),
1418
+
1419
+ VMSTATE_STRUCT_ARRAY(packet, DWC2State, DWC2_NB_CHAN, 1,
1420
+ vmstate_dwc2_state_packet, DWC2Packet),
1421
+ VMSTATE_UINT8_2DARRAY(usb_buf, DWC2State, DWC2_NB_CHAN,
1422
+ DWC2_MAX_XFER_SIZE),
1423
+
1424
+ VMSTATE_END_OF_LIST()
1425
+ }
1426
+};
1427
+
1428
+static Property dwc2_usb_properties[] = {
1429
+ DEFINE_PROP_UINT32("usb_version", DWC2State, usb_version, 2),
1430
+ DEFINE_PROP_END_OF_LIST(),
1431
+};
1432
+
1433
+static void dwc2_class_init(ObjectClass *klass, void *data)
1434
+{
1435
+ DeviceClass *dc = DEVICE_CLASS(klass);
1436
+ DWC2Class *c = DWC2_CLASS(klass);
1437
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1438
+
1439
+ dc->realize = dwc2_realize;
1440
+ dc->vmsd = &vmstate_dwc2_state;
1441
+ set_bit(DEVICE_CATEGORY_USB, dc->categories);
1442
+ device_class_set_props(dc, dwc2_usb_properties);
1443
+ resettable_class_set_parent_phases(rc, dwc2_reset_enter, dwc2_reset_hold,
1444
+ dwc2_reset_exit, &c->parent_phases);
1445
+}
1446
+
1447
+static const TypeInfo dwc2_usb_type_info = {
1448
+ .name = TYPE_DWC2_USB,
1449
+ .parent = TYPE_SYS_BUS_DEVICE,
1450
+ .instance_size = sizeof(DWC2State),
1451
+ .instance_init = dwc2_init,
1452
+ .class_size = sizeof(DWC2Class),
1453
+ .class_init = dwc2_class_init,
1454
+};
1455
+
1456
+static void dwc2_usb_register_types(void)
1457
+{
1458
+ type_register_static(&dwc2_usb_type_info);
1459
+}
1460
+
1461
+type_init(dwc2_usb_register_types)
1462
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
1463
index XXXXXXX..XXXXXXX 100644
919
index XXXXXXX..XXXXXXX 100644
1464
--- a/hw/usb/Kconfig
920
--- a/target/i386/hvf/hvf.c
1465
+++ b/hw/usb/Kconfig
921
+++ b/target/i386/hvf/hvf.c
1466
@@ -XXX,XX +XXX,XX @@ config USB_MUSB
922
@@ -XXX,XX +XXX,XX @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
1467
bool
923
return env->apic_bus_freq != 0;
1468
select USB
924
}
1469
925
1470
+config USB_DWC2
926
+void hvf_kick_vcpu_thread(CPUState *cpu)
1471
+ bool
927
+{
1472
+ default y
928
+ cpus_kick_thread(cpu);
1473
+ select USB
929
+}
1474
+
930
+
1475
config TUSB6010
931
int hvf_arch_init(void)
1476
bool
932
{
1477
select USB_MUSB
933
return 0;
1478
diff --git a/hw/usb/Makefile.objs b/hw/usb/Makefile.objs
934
diff --git a/MAINTAINERS b/MAINTAINERS
1479
index XXXXXXX..XXXXXXX 100644
935
index XXXXXXX..XXXXXXX 100644
1480
--- a/hw/usb/Makefile.objs
936
--- a/MAINTAINERS
1481
+++ b/hw/usb/Makefile.objs
937
+++ b/MAINTAINERS
1482
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_USB_EHCI_SYSBUS) += hcd-ehci-sysbus.o
938
@@ -XXX,XX +XXX,XX @@ F: accel/accel-*.c
1483
common-obj-$(CONFIG_USB_XHCI) += hcd-xhci.o
939
F: accel/Makefile.objs
1484
common-obj-$(CONFIG_USB_XHCI_NEC) += hcd-xhci-nec.o
940
F: accel/stubs/Makefile.objs
1485
common-obj-$(CONFIG_USB_MUSB) += hcd-musb.o
941
1486
+common-obj-$(CONFIG_USB_DWC2) += hcd-dwc2.o
942
+Apple Silicon HVF CPUs
1487
943
+M: Alexander Graf <agraf@csgraf.de>
1488
common-obj-$(CONFIG_TUSB6010) += tusb6010.o
944
+S: Maintained
1489
common-obj-$(CONFIG_IMX) += chipidea.o
945
+F: target/arm/hvf/
1490
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
946
+
1491
index XXXXXXX..XXXXXXX 100644
947
X86 HVF CPUs
1492
--- a/hw/usb/trace-events
948
M: Cameron Esfahani <dirty@apple.com>
1493
+++ b/hw/usb/trace-events
949
M: Roman Bolshakov <r.bolshakov@yadro.com>
1494
@@ -XXX,XX +XXX,XX @@ usb_xhci_xfer_error(void *xfer, uint32_t ret) "%p: ret %d"
950
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
1495
usb_xhci_unimplemented(const char *item, int nr) "%s (0x%x)"
951
new file mode 100644
1496
usb_xhci_enforced_limit(const char *item) "%s"
952
index XXXXXXX..XXXXXXX
1497
953
--- /dev/null
1498
+# hcd-dwc2.c
954
+++ b/target/arm/hvf/trace-events
1499
+usb_dwc2_update_irq(uint32_t level) "level=%d"
955
@@ -XXX,XX +XXX,XX @@
1500
+usb_dwc2_raise_global_irq(uint32_t intr) "0x%08x"
956
+hvf_unhandled_sysreg_read(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
1501
+usb_dwc2_lower_global_irq(uint32_t intr) "0x%08x"
957
+hvf_unhandled_sysreg_write(uint64_t pc, uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write at pc=0x%"PRIx64": 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)"
1502
+usb_dwc2_raise_host_irq(uint32_t intr) "0x%04x"
958
+hvf_inject_fiq(void) "injecting FIQ"
1503
+usb_dwc2_lower_host_irq(uint32_t intr) "0x%04x"
959
+hvf_inject_irq(void) "injecting IRQ"
1504
+usb_dwc2_sof(int64_t next) "next SOF %" PRId64
960
+hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]"
1505
+usb_dwc2_bus_start(void) "start SOFs"
961
+hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64
1506
+usb_dwc2_bus_stop(void) "stop SOFs"
962
+hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")"
1507
+usb_dwc2_find_device(uint8_t addr) "%d"
963
+hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
1508
+usb_dwc2_port_disabled(uint32_t pnum) "port %d disabled"
964
+hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
1509
+usb_dwc2_device_found(uint32_t pnum) "device found on port %d"
965
+hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
1510
+usb_dwc2_device_not_found(void) "device not found"
1511
+usb_dwc2_handle_packet(uint32_t chan, void *dev, void *pkt, uint32_t ep, const char *type, const char *dir, uint32_t mps, uint32_t len, uint32_t pcnt) "ch %d dev %p pkt %p ep %d type %s dir %s mps %d len %d pcnt %d"
1512
+usb_dwc2_memory_read(uint32_t addr, uint32_t len) "addr %d len %d"
1513
+usb_dwc2_packet_status(const char *status, uint32_t len) "status %s len %d"
1514
+usb_dwc2_packet_error(const char *status) "ERROR %s"
1515
+usb_dwc2_async_packet(void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "pkt %p ch %d dev %p ep %d %s len %d"
1516
+usb_dwc2_memory_write(uint32_t addr, uint32_t len) "addr %d len %d"
1517
+usb_dwc2_packet_done(const char *status, uint32_t actual, uint32_t len, uint32_t pcnt) "status %s actual %d len %d pcnt %d"
1518
+usb_dwc2_packet_next(const char *status, uint32_t len, uint32_t pcnt) "status %s len %d pcnt %d"
1519
+usb_dwc2_attach(void *port) "port %p"
1520
+usb_dwc2_attach_speed(const char *speed) "%s-speed device attached"
1521
+usb_dwc2_detach(void *port) "port %p"
1522
+usb_dwc2_child_detach(void *port, void *child) "port %p child %p"
1523
+usb_dwc2_wakeup(void *port) "port %p"
1524
+usb_dwc2_async_packet_complete(void *port, void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "port %p packet %p ch %d dev %p ep %d %s len %d"
1525
+usb_dwc2_work_bh(void) ""
1526
+usb_dwc2_work_bh_service(uint32_t first, uint32_t current, void *dev, uint32_t ep) "first %d servicing %d dev %p ep %d"
1527
+usb_dwc2_work_bh_next(uint32_t chan) "next %d"
1528
+usb_dwc2_enable_chan(uint32_t chan, void *dev, void *pkt, uint32_t ep) "ch %d dev %p pkt %p ep %d"
1529
+usb_dwc2_glbreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1530
+usb_dwc2_glbreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1531
+usb_dwc2_fszreg_read(uint64_t addr, uint32_t val) " 0x%04" PRIx64 " HPTXFSIZ val 0x%08x"
1532
+usb_dwc2_fszreg_write(uint64_t addr, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " HPTXFSIZ val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1533
+usb_dwc2_hreg0_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1534
+usb_dwc2_hreg0_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1535
+usb_dwc2_hreg1_read(uint64_t addr, const char *reg, uint64_t chan, uint32_t val) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08x"
1536
+usb_dwc2_hreg1_write(uint64_t addr, const char *reg, uint64_t chan, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1537
+usb_dwc2_pcgreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1538
+usb_dwc2_pcgreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1539
+usb_dwc2_hreg2_read(uint64_t addr, uint64_t fifo, uint32_t val) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08x"
1540
+usb_dwc2_hreg2_write(uint64_t addr, uint64_t fifo, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1541
+usb_dwc2_hreg0_action(const char *s) "%s"
1542
+usb_dwc2_wakeup_endpoint(void *ep, uint32_t stream) "endp %p stream %d"
1543
+usb_dwc2_work_timer(void) ""
1544
+usb_dwc2_reset_enter(void) "=== RESET enter ==="
1545
+usb_dwc2_reset_hold(void) "=== RESET hold ==="
1546
+usb_dwc2_reset_exit(void) "=== RESET exit ==="
1547
+
1548
# desc.c
1549
usb_desc_device(int addr, int len, int ret) "dev %d query device, len %d, ret %d"
1550
usb_desc_device_qualifier(int addr, int len, int ret) "dev %d query device qualifier, len %d, ret %d"
1551
--
2.20.1
1
From: Richard Henderson <richard.henderson@linaro.org>

With this conversion, we will be able to use the same helpers
with sve. In particular, pass 3 vector parameters for the
3-operand operations; for advsimd the destination register
is also an input.

This also fixes a bug in which we failed to clear the high bits
of the SVE register after an AdvSIMD operation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 6 ++--
target/arm/vec_internal.h | 33 +++++++++++++++++
target/arm/crypto_helper.c | 72 +++++++++++++++++++++++++++-----------
target/arm/translate-a64.c | 55 ++++++++++++++++++-----------
target/arm/translate.c | 27 +++++++-------
target/arm/vec_helper.c | 12 +------
6 files changed, 138 insertions(+), 67 deletions(-)
create mode 100644 target/arm/vec_internal.h

From: Peter Collingbourne <pcc@google.com>

Sleep on WFI until the VTIMER is due but allow ourselves to be woken
up on IPI.

In this implementation IPI is blocked on the CPU thread at startup and
pselect() is used to atomically unblock the signal and begin sleeping.
The signal is sent unconditionally so there's no need to worry about
races between actually sleeping and the "we think we're sleeping"
state. It may lead to an extra wakeup but that's better than missing
it entirely.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210916155404.86958-6-agraf@csgraf.de
[agraf: Remove unused 'set' variable, always advance PC on WFX trap,
 support vm stop / continue operations and cntv offsets]
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/sysemu/hvf_int.h | 1 +
accel/hvf/hvf-accel-ops.c | 5 +--
target/arm/hvf/hvf.c | 79 +++++++++++++++++++++++++++++++++++++++
3 files changed, 82 insertions(+), 3 deletions(-)
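
The pselect()-based handshake described above is easier to see in a
standalone program. The sketch below is illustrative only: it is not
part of this patch, and it uses SIGUSR1 where QEMU uses SIG_IPI (QEMU
itself also uses pthread_sigmask(), since this runs on a vCPU thread).
The signal stays blocked during normal execution, and pselect() installs
the unblocked mask and goes to sleep in one atomic step, so a kick sent
just before the sleep is delivered as EINTR rather than lost.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/select.h>
    #include <time.h>

    static void ipi_handler(int sig)
    {
        (void)sig;          /* handler only exists to interrupt pselect() */
    }

    int main(void)
    {
        sigset_t blocked, unblock_ipi_mask;
        struct sigaction sa;
        struct timespec ts = { .tv_sec = 2, .tv_nsec = 0 };

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = ipi_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        /* Keep the "IPI" signal blocked while doing normal work... */
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGUSR1);
        sigprocmask(SIG_BLOCK, &blocked, NULL);

        /* ...and precompute a mask that has it unblocked, for sleeping. */
        sigprocmask(SIG_BLOCK, NULL, &unblock_ipi_mask);
        sigdelset(&unblock_ipi_mask, SIGUSR1);

        /*
         * Atomically swap in unblock_ipi_mask and sleep: a pending signal
         * is delivered here (pselect() fails with EINTR), otherwise we
         * time out after two seconds.
         */
        if (pselect(0, NULL, NULL, NULL, &ts, &unblock_ipi_mask) < 0) {
            perror("pselect");
        }
        printf("woke up\n");
        return 0;
    }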
24
29
25
diff --git a/target/arm/helper.h b/target/arm/helper.h
30
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
26
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.h
32
--- a/include/sysemu/hvf_int.h
28
+++ b/target/arm/helper.h
33
+++ b/include/sysemu/hvf_int.h
29
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip8, TCG_CALL_NO_RWG, void, ptr, ptr)
34
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
30
DEF_HELPER_FLAGS_2(neon_qzip16, TCG_CALL_NO_RWG, void, ptr, ptr)
35
uint64_t fd;
31
DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
36
void *exit;
32
37
bool vtimer_masked;
33
-DEF_HELPER_FLAGS_3(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
38
+ sigset_t unblock_ipi_mask;
34
+DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
};
35
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
36
41
void assert_hvf_ok(hv_return_t ret);
37
DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
38
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
43
index XXXXXXX..XXXXXXX 100644
39
DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
44
--- a/accel/hvf/hvf-accel-ops.c
40
DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
45
+++ b/accel/hvf/hvf-accel-ops.c
41
46
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
42
-DEF_HELPER_FLAGS_2(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr)
47
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
43
-DEF_HELPER_FLAGS_3(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
48
44
+DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
49
/* init cpu signals */
45
+DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
50
- sigset_t set;
46
51
struct sigaction sigact;
47
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
52
48
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
53
memset(&sigact, 0, sizeof(sigact));
49
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
54
sigact.sa_handler = dummy_signal;
50
new file mode 100644
55
sigaction(SIG_IPI, &sigact, NULL);
51
index XXXXXXX..XXXXXXX
56
52
--- /dev/null
57
- pthread_sigmask(SIG_BLOCK, NULL, &set);
53
+++ b/target/arm/vec_internal.h
58
- sigdelset(&set, SIG_IPI);
59
+ pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
60
+ sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
61
62
#ifdef __aarch64__
63
r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
64
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/hvf/hvf.c
67
+++ b/target/arm/hvf/hvf.c
54
@@ -XXX,XX +XXX,XX @@
68
@@ -XXX,XX +XXX,XX @@
55
+/*
69
* QEMU Hypervisor.framework support for Apple Silicon
56
+ * ARM AdvSIMD / SVE Vector Helpers
70
57
+ *
71
* Copyright 2020 Alexander Graf <agraf@csgraf.de>
58
+ * Copyright (c) 2020 Linaro
72
+ * Copyright 2020 Google LLC
59
+ *
73
*
60
+ * This library is free software; you can redistribute it and/or
74
* This work is licensed under the terms of the GNU GPL, version 2 or later.
61
+ * modify it under the terms of the GNU Lesser General Public
75
* See the COPYING file in the top-level directory.
62
+ * License as published by the Free Software Foundation; either
76
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init_vcpu(CPUState *cpu)
63
+ * version 2 of the License, or (at your option) any later version.
77
64
+ *
78
void hvf_kick_vcpu_thread(CPUState *cpu)
65
+ * This library is distributed in the hope that it will be useful,
79
{
66
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
80
+ cpus_kick_thread(cpu);
67
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
81
hv_vcpus_exit(&cpu->hvf->fd, 1);
68
+ * Lesser General Public License for more details.
82
}
69
+ *
83
70
+ * You should have received a copy of the GNU Lesser General Public
84
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_vtimer_val_raw(void)
71
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
85
return mach_absolute_time() - hvf_state->vtimer_offset;
72
+ */
86
}
87
88
+static uint64_t hvf_vtimer_val(void)
89
+{
90
+ if (!runstate_is_running()) {
91
+ /* VM is paused, the vtimer value is in vtimer.vtimer_val */
92
+ return vtimer.vtimer_val;
93
+ }
73
+
94
+
74
+#ifndef TARGET_ARM_VEC_INTERNALS_H
95
+ return hvf_vtimer_val_raw();
75
+#define TARGET_ARM_VEC_INTERNALS_H
76
+
77
+static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
78
+{
79
+ uint64_t *d = vd + opr_sz;
80
+ uintptr_t i;
81
+
82
+ for (i = opr_sz; i < max_sz; i += 8) {
83
+ *d++ = 0;
84
+ }
85
+}
96
+}
86
+
97
+
87
+#endif /* TARGET_ARM_VEC_INTERNALS_H */
98
+static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts)
88
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/crypto_helper.c
91
+++ b/target/arm/crypto_helper.c
92
@@ -XXX,XX +XXX,XX @@
93
94
#include "cpu.h"
95
#include "exec/helper-proto.h"
96
+#include "tcg/tcg-gvec-desc.h"
97
#include "crypto/aes.h"
98
+#include "vec_internal.h"
99
100
union CRYPTO_STATE {
101
uint8_t bytes[16];
102
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
103
#define CR_ST_WORD(state, i) (state.words[i])
104
#endif
105
106
-void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
107
+static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
108
+ uint64_t *rm, bool decrypt)
109
{
110
static uint8_t const * const sbox[2] = { AES_sbox, AES_isbox };
111
static uint8_t const * const shift[2] = { AES_shifts, AES_ishifts };
112
- uint64_t *rd = vd;
113
- uint64_t *rm = vm;
114
union CRYPTO_STATE rk = { .l = { rm[0], rm[1] } };
115
- union CRYPTO_STATE st = { .l = { rd[0], rd[1] } };
116
+ union CRYPTO_STATE st = { .l = { rn[0], rn[1] } };
117
int i;
118
119
- assert(decrypt < 2);
120
-
121
/* xor state vector with round key */
122
rk.l[0] ^= st.l[0];
123
rk.l[1] ^= st.l[1];
124
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
125
rd[1] = st.l[1];
126
}
127
128
-void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
129
+void HELPER(crypto_aese)(void *vd, void *vn, void *vm, uint32_t desc)
130
+{
99
+{
131
+ intptr_t i, opr_sz = simd_oprsz(desc);
100
+ /*
132
+ bool decrypt = simd_data(desc);
101
+ * Use pselect to sleep so that other threads can IPI us while we're
133
+
102
+ * sleeping.
134
+ for (i = 0; i < opr_sz; i += 16) {
103
+ */
135
+ do_crypto_aese(vd + i, vn + i, vm + i, decrypt);
104
+ qatomic_mb_set(&cpu->thread_kicked, false);
136
+ }
105
+ qemu_mutex_unlock_iothread();
137
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
106
+ pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask);
107
+ qemu_mutex_lock_iothread();
138
+}
108
+}
139
+
109
+
140
+static void do_crypto_aesmc(uint64_t *rd, uint64_t *rm, bool decrypt)
110
+static void hvf_wfi(CPUState *cpu)
141
{
142
static uint32_t const mc[][256] = { {
143
/* MixColumns lookup table */
144
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
145
0xbe805d9f, 0xb58d5491, 0xa89a4f83, 0xa397468d,
146
} };
147
148
- uint64_t *rd = vd;
149
- uint64_t *rm = vm;
150
union CRYPTO_STATE st = { .l = { rm[0], rm[1] } };
151
int i;
152
153
- assert(decrypt < 2);
154
-
155
for (i = 0; i < 16; i += 4) {
156
CR_ST_WORD(st, i >> 2) =
157
mc[decrypt][CR_ST_BYTE(st, i)] ^
158
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
159
rd[1] = st.l[1];
160
}
161
162
+void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t desc)
163
+{
111
+{
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
112
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
165
+ bool decrypt = simd_data(desc);
113
+ struct timespec ts;
114
+ hv_return_t r;
115
+ uint64_t ctl;
116
+ uint64_t cval;
117
+ int64_t ticks_to_sleep;
118
+ uint64_t seconds;
119
+ uint64_t nanos;
120
+ uint32_t cntfrq;
166
+
121
+
167
+ for (i = 0; i < opr_sz; i += 16) {
122
+ if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
168
+ do_crypto_aesmc(vd + i, vm + i, decrypt);
123
+ /* Interrupt pending, no need to wait */
169
+ }
170
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
171
+}
172
+
173
/*
174
* SHA-1 logical functions
175
*/
176
@@ -XXX,XX +XXX,XX @@ static uint8_t const sm4_sbox[] = {
177
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
178
};
179
180
-void HELPER(crypto_sm4e)(void *vd, void *vn)
181
+static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
182
{
183
- uint64_t *rd = vd;
184
- uint64_t *rn = vn;
185
- union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
186
- union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
187
+ union CRYPTO_STATE d = { .l = { rn[0], rn[1] } };
188
+ union CRYPTO_STATE n = { .l = { rm[0], rm[1] } };
189
uint32_t t, i;
190
191
for (i = 0; i < 4; i++) {
192
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4e)(void *vd, void *vn)
193
rd[1] = d.l[1];
194
}
195
196
-void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
197
+void HELPER(crypto_sm4e)(void *vd, void *vn, void *vm, uint32_t desc)
198
+{
199
+ intptr_t i, opr_sz = simd_oprsz(desc);
200
+
201
+ for (i = 0; i < opr_sz; i += 16) {
202
+ do_crypto_sm4e(vd + i, vn + i, vm + i);
203
+ }
204
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
205
+}
206
+
207
+static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
208
{
209
- uint64_t *rd = vd;
210
- uint64_t *rn = vn;
211
- uint64_t *rm = vm;
212
union CRYPTO_STATE d;
213
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
214
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
215
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
216
rd[0] = d.l[0];
217
rd[1] = d.l[1];
218
}
219
+
220
+void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
221
+{
222
+ intptr_t i, opr_sz = simd_oprsz(desc);
223
+
224
+ for (i = 0; i < opr_sz; i += 16) {
225
+ do_crypto_sm4ekey(vd + i, vn + i, vm + i);
226
+ }
227
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
228
+}
229
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
230
index XXXXXXX..XXXXXXX 100644
231
--- a/target/arm/translate-a64.c
232
+++ b/target/arm/translate-a64.c
233
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn4(DisasContext *s, bool is_q, int rd, int rn, int rm,
234
is_q ? 16 : 8, vec_full_reg_size(s));
235
}
236
237
+/* Expand a 2-operand operation using an out-of-line helper. */
238
+static void gen_gvec_op2_ool(DisasContext *s, bool is_q, int rd,
239
+ int rn, int data, gen_helper_gvec_2 *fn)
240
+{
241
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
242
+ vec_full_reg_offset(s, rn),
243
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
244
+}
245
+
246
/* Expand a 3-operand operation using an out-of-line helper. */
247
static void gen_gvec_op3_ool(DisasContext *s, bool is_q, int rd,
248
int rn, int rm, int data, gen_helper_gvec_3 *fn)
249
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
250
int rn = extract32(insn, 5, 5);
251
int rd = extract32(insn, 0, 5);
252
int decrypt;
253
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
254
- TCGv_i32 tcg_decrypt;
255
- CryptoThreeOpIntFn *genfn;
256
+ gen_helper_gvec_2 *genfn2 = NULL;
257
+ gen_helper_gvec_3 *genfn3 = NULL;
258
259
if (!dc_isar_feature(aa64_aes, s) || size != 0) {
260
unallocated_encoding(s);
261
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
262
switch (opcode) {
263
case 0x4: /* AESE */
264
decrypt = 0;
265
- genfn = gen_helper_crypto_aese;
266
+ genfn3 = gen_helper_crypto_aese;
267
break;
268
case 0x6: /* AESMC */
269
decrypt = 0;
270
- genfn = gen_helper_crypto_aesmc;
271
+ genfn2 = gen_helper_crypto_aesmc;
272
break;
273
case 0x5: /* AESD */
274
decrypt = 1;
275
- genfn = gen_helper_crypto_aese;
276
+ genfn3 = gen_helper_crypto_aese;
277
break;
278
case 0x7: /* AESIMC */
279
decrypt = 1;
280
- genfn = gen_helper_crypto_aesmc;
281
+ genfn2 = gen_helper_crypto_aesmc;
282
break;
283
default:
284
unallocated_encoding(s);
285
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
286
if (!fp_access_check(s)) {
287
return;
288
}
289
-
290
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
291
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
292
- tcg_decrypt = tcg_const_i32(decrypt);
293
-
294
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_decrypt);
295
-
296
- tcg_temp_free_ptr(tcg_rd_ptr);
297
- tcg_temp_free_ptr(tcg_rn_ptr);
298
- tcg_temp_free_i32(tcg_decrypt);
299
+ if (genfn2) {
300
+ gen_gvec_op2_ool(s, true, rd, rn, decrypt, genfn2);
301
+ } else {
302
+ gen_gvec_op3_ool(s, true, rd, rd, rn, decrypt, genfn3);
303
+ }
304
}
305
306
/* Crypto three-reg SHA
307
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
308
int rn = extract32(insn, 5, 5);
309
int rd = extract32(insn, 0, 5);
310
bool feature;
311
- CryptoThreeOpFn *genfn;
312
+ CryptoThreeOpFn *genfn = NULL;
313
+ gen_helper_gvec_3 *oolfn = NULL;
314
315
if (o == 0) {
316
switch (opcode) {
317
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
318
break;
319
case 2: /* SM4EKEY */
320
feature = dc_isar_feature(aa64_sm4, s);
321
- genfn = gen_helper_crypto_sm4ekey;
322
+ oolfn = gen_helper_crypto_sm4ekey;
323
break;
324
default:
325
unallocated_encoding(s);
326
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
327
return;
328
}
329
330
+ if (oolfn) {
331
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
332
+ return;
124
+ return;
333
+ }
125
+ }
334
+
126
+
335
if (genfn) {
127
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl);
336
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
128
+ assert_hvf_ok(r);
337
129
+
338
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
130
+ if (!(ctl & 1) || (ctl & 2)) {
339
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
131
+ /* Timer disabled or masked, just wait for an IPI. */
340
bool feature;
132
+ hvf_wait_for_ipi(cpu, NULL);
341
CryptoTwoOpFn *genfn;
342
+ gen_helper_gvec_3 *oolfn = NULL;
343
344
switch (opcode) {
345
case 0: /* SHA512SU0 */
346
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
347
break;
348
case 1: /* SM4E */
349
feature = dc_isar_feature(aa64_sm4, s);
350
- genfn = gen_helper_crypto_sm4e;
351
+ oolfn = gen_helper_crypto_sm4e;
352
break;
353
default:
354
unallocated_encoding(s);
355
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
356
return;
357
}
358
359
+ if (oolfn) {
360
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
361
+ return;
133
+ return;
362
+ }
134
+ }
363
+
135
+
364
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
136
+ r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, &cval);
365
tcg_rn_ptr = vec_full_reg_ptr(s, rn);
137
+ assert_hvf_ok(r);
366
138
+
367
diff --git a/target/arm/translate.c b/target/arm/translate.c
139
+ ticks_to_sleep = cval - hvf_vtimer_val();
368
index XXXXXXX..XXXXXXX 100644
140
+ if (ticks_to_sleep < 0) {
369
--- a/target/arm/translate.c
141
+ return;
370
+++ b/target/arm/translate.c
142
+ }
371
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
143
+
372
if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
144
+ cntfrq = gt_cntfrq_period_ns(arm_cpu);
373
return 1;
145
+ seconds = muldiv64(ticks_to_sleep, cntfrq, NANOSECONDS_PER_SECOND);
374
}
146
+ ticks_to_sleep -= muldiv64(seconds, NANOSECONDS_PER_SECOND, cntfrq);
375
- ptr1 = vfp_reg_ptr(true, rd);
147
+ nanos = ticks_to_sleep * cntfrq;
376
- ptr2 = vfp_reg_ptr(true, rm);
148
+
377
-
149
+ /*
378
- /* Bit 6 is the lowest opcode bit; it distinguishes between
150
+ * Don't sleep for less than the time a context switch would take,
379
- * encryption (AESE/AESMC) and decryption (AESD/AESIMC)
151
+ * so that we can satisfy fast timer requests on the same CPU.
380
- */
152
+ * Measurements on M1 show the sweet spot to be ~2ms.
381
- tmp3 = tcg_const_i32(extract32(insn, 6, 1));
153
+ */
382
-
154
+ if (!seconds && nanos < (2 * SCALE_MS)) {
383
+ /*
155
+ return;
384
+ * Bit 6 is the lowest opcode bit; it distinguishes
156
+ }
385
+ * between encryption (AESE/AESMC) and decryption
157
+
386
+ * (AESD/AESIMC).
158
+ ts = (struct timespec) { seconds, nanos };
387
+ */
159
+ hvf_wait_for_ipi(cpu, &ts);
388
if (op == NEON_2RM_AESE) {
160
+}
389
- gen_helper_crypto_aese(ptr1, ptr2, tmp3);
161
+
390
+ tcg_gen_gvec_3_ool(vfp_reg_offset(true, rd),
162
static void hvf_sync_vtimer(CPUState *cpu)
391
+ vfp_reg_offset(true, rd),
163
{
392
+ vfp_reg_offset(true, rm),
164
ARMCPU *arm_cpu = ARM_CPU(cpu);
393
+ 16, 16, extract32(insn, 6, 1),
165
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
394
+ gen_helper_crypto_aese);
166
}
395
} else {
167
case EC_WFX_TRAP:
396
- gen_helper_crypto_aesmc(ptr1, ptr2, tmp3);
168
advance_pc = true;
397
+ tcg_gen_gvec_2_ool(vfp_reg_offset(true, rd),
169
+ if (!(syndrome & WFX_IS_WFE)) {
398
+ vfp_reg_offset(true, rm),
170
+ hvf_wfi(cpu);
399
+ 16, 16, extract32(insn, 6, 1),
171
+ }
400
+ gen_helper_crypto_aesmc);
172
break;
401
}
173
case EC_AA64_HVC:
402
- tcg_temp_free_ptr(ptr1);
174
cpu_synchronize_state(cpu);
403
- tcg_temp_free_ptr(ptr2);
404
- tcg_temp_free_i32(tmp3);
405
break;
406
case NEON_2RM_SHA1H:
407
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
408
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
409
index XXXXXXX..XXXXXXX 100644
410
--- a/target/arm/vec_helper.c
411
+++ b/target/arm/vec_helper.c
412
@@ -XXX,XX +XXX,XX @@
413
#include "exec/helper-proto.h"
414
#include "tcg/tcg-gvec-desc.h"
415
#include "fpu/softfloat.h"
416
-
417
+#include "vec_internal.h"
418
419
/* Note that vector data is stored in host-endian 64-bit chunks,
420
so addressing units smaller than that needs a host-endian fixup. */
421
@@ -XXX,XX +XXX,XX @@
422
#define H4(x) (x)
423
#endif
424
425
-static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
426
-{
427
- uint64_t *d = vd + opr_sz;
428
- uintptr_t i;
429
-
430
- for (i = opr_sz; i < max_sz; i += 8) {
431
- *d++ = 0;
432
- }
433
-}
434
-
435
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
436
static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
437
int16_t src3, uint32_t *sat)
438
--
2.20.1
1
From: Paul Zimmerman <pauldzim@gmail.com>

Add BCM2835 SOC MPHI (Message-based Parallel Host Interface)
emulation. It is very basic, only providing the FIQ interrupt
needed to allow the dwc-otg USB host controller driver in the
Raspbian kernel to function.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Acked-by: Philippe Mathieu-Daude <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200520235349.21215-2-pauldzim@gmail.com
[PMM: drop unnecessary #include line from .h file]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/bcm2835_peripherals.h | 2 +
include/hw/misc/bcm2835_mphi.h | 44 ++++++
hw/arm/bcm2835_peripherals.c | 17 +++
hw/misc/bcm2835_mphi.c | 191 +++++++++++++++++++++++++++
hw/misc/Makefile.objs | 1 +
5 files changed, 255 insertions(+)
create mode 100644 include/hw/misc/bcm2835_mphi.h
create mode 100644 hw/misc/bcm2835_mphi.c

Now that we have working system register sync, we push more target CPU
properties into the virtual machine. That might be useful in some
situations, but is not the typical case that users want.

So let's add a -cpu host option that allows them to explicitly pass all
CPU capabilities of their host CPU into the guest.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-7-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 +
target/arm/hvf_arm.h | 18 +++++++++
target/arm/kvm_arm.h | 2 -
target/arm/cpu.c | 13 ++++--
target/arm/hvf/hvf.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 124 insertions(+), 6 deletions(-)
create mode 100644 target/arm/hvf_arm.h
22
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
25
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpu.h
25
--- a/include/hw/arm/bcm2835_peripherals.h
27
+++ b/target/arm/cpu.h
26
+++ b/include/hw/arm/bcm2835_peripherals.h
28
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
27
@@ -XXX,XX +XXX,XX @@
29
#define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
28
#include "hw/misc/bcm2835_property.h"
30
#define CPU_RESOLVING_TYPE TYPE_ARM_CPU
29
#include "hw/misc/bcm2835_rng.h"
31
30
#include "hw/misc/bcm2835_mbox.h"
32
+#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
31
+#include "hw/misc/bcm2835_mphi.h"
33
+
32
#include "hw/misc/bcm2835_thermal.h"
34
#define cpu_signal_handler cpu_arm_signal_handler
33
#include "hw/sd/sdhci.h"
35
#define cpu_list arm_cpu_list
34
#include "hw/sd/bcm2835_sdhost.h"
36
35
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
37
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
36
qemu_irq irq, fiq;
37
38
BCM2835SystemTimerState systmr;
39
+ BCM2835MphiState mphi;
40
UnimplementedDeviceState armtmr;
41
UnimplementedDeviceState cprman;
42
UnimplementedDeviceState a2w;
43
diff --git a/include/hw/misc/bcm2835_mphi.h b/include/hw/misc/bcm2835_mphi.h
44
new file mode 100644
38
new file mode 100644
45
index XXXXXXX..XXXXXXX
39
index XXXXXXX..XXXXXXX
46
--- /dev/null
40
--- /dev/null
47
+++ b/include/hw/misc/bcm2835_mphi.h
41
+++ b/target/arm/hvf_arm.h
48
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@
49
+/*
43
+/*
50
+ * BCM2835 SOC MPHI state definitions
44
+ * QEMU Hypervisor.framework (HVF) support -- ARM specifics
51
+ *
45
+ *
52
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
46
+ * Copyright (c) 2021 Alexander Graf
53
+ *
47
+ *
54
+ * This program is free software; you can redistribute it and/or modify
48
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
55
+ * it under the terms of the GNU General Public License as published by
49
+ * See the COPYING file in the top-level directory.
56
+ * the Free Software Foundation; either version 2 of the License, or
57
+ * (at your option) any later version.
58
+ *
50
+ *
59
+ * This program is distributed in the hope that it will be useful,
60
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
61
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
62
+ * GNU General Public License for more details.
63
+ */
51
+ */
64
+
52
+
65
+#ifndef HW_MISC_BCM2835_MPHI_H
53
+#ifndef QEMU_HVF_ARM_H
66
+#define HW_MISC_BCM2835_MPHI_H
54
+#define QEMU_HVF_ARM_H
67
+
55
+
68
+#include "hw/irq.h"
56
+#include "cpu.h"
69
+#include "hw/sysbus.h"
57
+
70
+
58
+void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu);
71
+#define MPHI_MMIO_SIZE 0x1000
72
+
73
+typedef struct BCM2835MphiState BCM2835MphiState;
74
+
75
+struct BCM2835MphiState {
76
+ SysBusDevice parent_obj;
77
+ qemu_irq irq;
78
+ MemoryRegion iomem;
79
+
80
+ uint32_t outdda;
81
+ uint32_t outddb;
82
+ uint32_t ctrl;
83
+ uint32_t intstat;
84
+ uint32_t swirq;
85
+};
86
+
87
+#define TYPE_BCM2835_MPHI "bcm2835-mphi"
88
+
89
+#define BCM2835_MPHI(obj) \
90
+ OBJECT_CHECK(BCM2835MphiState, (obj), TYPE_BCM2835_MPHI)
91
+
59
+
92
+#endif
60
+#endif
93
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
61
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
94
index XXXXXXX..XXXXXXX 100644
62
index XXXXXXX..XXXXXXX 100644
95
--- a/hw/arm/bcm2835_peripherals.c
63
--- a/target/arm/kvm_arm.h
96
+++ b/hw/arm/bcm2835_peripherals.c
64
+++ b/target/arm/kvm_arm.h
97
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
65
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
98
OBJECT(&s->sdhci.sdbus));
66
*/
99
object_property_add_const_link(OBJECT(&s->gpio), "sdbus-sdhost",
67
void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);
100
OBJECT(&s->sdhost.sdbus));
68
101
+
69
-#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
102
+ /* Mphi */
70
-
103
+ sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
71
/**
104
+ TYPE_BCM2835_MPHI);
72
* ARMHostCPUFeatures: information about the host CPU (identified
105
}
73
* by asking the host kernel)
106
74
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
107
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
75
index XXXXXXX..XXXXXXX 100644
108
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
76
--- a/target/arm/cpu.c
109
77
+++ b/target/arm/cpu.c
110
object_property_add_alias(OBJECT(s), "sd-bus", OBJECT(&s->gpio), "sd-bus");
111
112
+ /* Mphi */
113
+ object_property_set_bool(OBJECT(&s->mphi), true, "realized", &err);
114
+ if (err) {
115
+ error_propagate(errp, err);
116
+ return;
117
+ }
118
+
119
+ memory_region_add_subregion(&s->peri_mr, MPHI_OFFSET,
120
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->mphi), 0));
121
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->mphi), 0,
122
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
123
+ INTERRUPT_HOSTPORT));
124
+
125
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
126
create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
127
create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
128
diff --git a/hw/misc/bcm2835_mphi.c b/hw/misc/bcm2835_mphi.c
129
new file mode 100644
130
index XXXXXXX..XXXXXXX
131
--- /dev/null
132
+++ b/hw/misc/bcm2835_mphi.c
133
@@ -XXX,XX +XXX,XX @@
78
@@ -XXX,XX +XXX,XX @@
134
+/*
79
#include "sysemu/tcg.h"
135
+ * BCM2835 SOC MPHI emulation
80
#include "sysemu/hw_accel.h"
136
+ *
81
#include "kvm_arm.h"
137
+ * Very basic emulation, only providing the FIQ interrupt needed to
82
+#include "hvf_arm.h"
138
+ * allow the dwc-otg USB host controller driver in the Raspbian kernel
83
#include "disas/capstone.h"
139
+ * to function.
84
#include "fpu/softfloat.h"
140
+ *
85
141
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
86
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
142
+ *
87
* this is the first point where we can report it.
143
+ * This program is free software; you can redistribute it and/or modify
88
*/
144
+ * it under the terms of the GNU General Public License as published by
89
if (cpu->host_cpu_probe_failed) {
145
+ * the Free Software Foundation; either version 2 of the License, or
90
- if (!kvm_enabled()) {
146
+ * (at your option) any later version.
91
- error_setg(errp, "The 'host' CPU type can only be used with KVM");
147
+ *
92
+ if (!kvm_enabled() && !hvf_enabled()) {
148
+ * This program is distributed in the hope that it will be useful,
93
+ error_setg(errp, "The 'host' CPU type can only be used with KVM or HVF");
149
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
94
} else {
150
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
95
error_setg(errp, "Failed to retrieve host CPU features");
151
+ * GNU General Public License for more details.
96
}
152
+ */
97
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
153
+
98
#endif /* CONFIG_TCG */
154
+#include "qemu/osdep.h"
99
}
155
+#include "qapi/error.h"
100
156
+#include "hw/misc/bcm2835_mphi.h"
101
-#ifdef CONFIG_KVM
157
+#include "migration/vmstate.h"
102
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
158
+#include "qemu/error-report.h"
103
static void arm_host_initfn(Object *obj)
159
+#include "qemu/log.h"
104
{
160
+#include "qemu/main-loop.h"
105
ARMCPU *cpu = ARM_CPU(obj);
161
+
106
162
+static inline void mphi_raise_irq(BCM2835MphiState *s)
107
+#ifdef CONFIG_KVM
108
kvm_arm_set_cpu_features_from_host(cpu);
109
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
110
aarch64_add_sve_properties(obj);
111
}
112
+#else
113
+ hvf_arm_set_cpu_features_from_host(cpu);
114
+#endif
115
arm_cpu_post_init(obj);
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_register_types(void)
119
{
120
type_register_static(&arm_cpu_type_info);
121
122
-#ifdef CONFIG_KVM
123
+#if defined(CONFIG_KVM) || defined(CONFIG_HVF)
124
type_register_static(&host_arm_cpu_type_info);
125
#endif
126
}
127
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/hvf/hvf.c
130
+++ b/target/arm/hvf/hvf.c
131
@@ -XXX,XX +XXX,XX @@
132
#include "sysemu/hvf.h"
133
#include "sysemu/hvf_int.h"
134
#include "sysemu/hw_accel.h"
135
+#include "hvf_arm.h"
136
137
#include <mach/mach_time.h>
138
139
@@ -XXX,XX +XXX,XX @@ typedef struct HVFVTimer {
140
141
static HVFVTimer vtimer;
142
143
+typedef struct ARMHostCPUFeatures {
144
+ ARMISARegisters isar;
145
+ uint64_t features;
146
+ uint64_t midr;
147
+ uint32_t reset_sctlr;
148
+ const char *dtb_compatible;
149
+} ARMHostCPUFeatures;
150
+
151
+static ARMHostCPUFeatures arm_host_cpu_features;
152
+
153
struct hvf_reg_match {
154
int reg;
155
uint64_t offset;
156
@@ -XXX,XX +XXX,XX @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt)
157
return val;
158
}
159
160
+static bool hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
163
+{
161
+{
164
+ qemu_set_irq(s->irq, 1);
162
+ ARMISARegisters host_isar = {};
163
+ const struct isar_regs {
164
+ int reg;
165
+ uint64_t *val;
166
+ } regs[] = {
167
+ { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 },
168
+ { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 },
169
+ { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 },
170
+ { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 },
171
+ { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 },
172
+ { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 },
173
+ { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 },
174
+ { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 },
175
+ { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 },
176
+ };
177
+ hv_vcpu_t fd;
178
+ hv_return_t r = HV_SUCCESS;
179
+ hv_vcpu_exit_t *exit;
180
+ int i;
181
+
182
+ ahcf->dtb_compatible = "arm,arm-v8";
183
+ ahcf->features = (1ULL << ARM_FEATURE_V8) |
184
+ (1ULL << ARM_FEATURE_NEON) |
185
+ (1ULL << ARM_FEATURE_AARCH64) |
186
+ (1ULL << ARM_FEATURE_PMU) |
187
+ (1ULL << ARM_FEATURE_GENERIC_TIMER);
188
+
189
+ /* We set up a small vcpu to extract host registers */
190
+
191
+ if (hv_vcpu_create(&fd, &exit, NULL) != HV_SUCCESS) {
192
+ return false;
193
+ }
194
+
195
+ for (i = 0; i < ARRAY_SIZE(regs); i++) {
196
+ r |= hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val);
197
+ }
198
+ r |= hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr);
199
+ r |= hv_vcpu_destroy(fd);
200
+
201
+ ahcf->isar = host_isar;
202
+
203
+ /*
204
+ * A scratch vCPU returns SCTLR 0, so let's fill our default with the M1
205
+ * boot SCTLR from https://github.com/AsahiLinux/m1n1/issues/97
206
+ */
207
+ ahcf->reset_sctlr = 0x30100180;
208
+ /*
209
+ * SPAN is disabled by default when SCTLR.SPAN=1. To improve compatibility,
210
+ * let's disable it on boot and then allow guest software to turn it on by
211
+ * setting it to 0.
212
+ */
213
+ ahcf->reset_sctlr |= 0x00800000;
214
+
215
+ /* Make sure we don't advertise AArch32 support for EL0/EL1 */
216
+ if ((host_isar.id_aa64pfr0 & 0xff) != 0x11) {
217
+ return false;
218
+ }
219
+
220
+ return r == HV_SUCCESS;
165
+}
221
+}
166
+
222
+
167
+static inline void mphi_lower_irq(BCM2835MphiState *s)
223
+void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu)
168
+{
224
+{
169
+ qemu_set_irq(s->irq, 0);
225
+ if (!arm_host_cpu_features.dtb_compatible) {
226
+ if (!hvf_enabled() ||
227
+ !hvf_arm_get_host_cpu_features(&arm_host_cpu_features)) {
228
+ /*
229
+ * We can't report this error yet, so flag that we need to
230
+ * in arm_cpu_realizefn().
231
+ */
232
+ cpu->host_cpu_probe_failed = true;
233
+ return;
234
+ }
235
+ }
236
+
237
+ cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
238
+ cpu->isar = arm_host_cpu_features.isar;
239
+ cpu->env.features = arm_host_cpu_features.features;
240
+ cpu->midr = arm_host_cpu_features.midr;
241
+ cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr;
170
+}
242
+}
171
+
243
+
172
+static uint64_t mphi_reg_read(void *ptr, hwaddr addr, unsigned size)
244
void hvf_arch_vcpu_destroy(CPUState *cpu)
173
+{
245
{
174
+ BCM2835MphiState *s = ptr;
246
}
175
+ uint32_t val = 0;
176
+
177
+ switch (addr) {
178
+ case 0x28: /* outdda */
179
+ val = s->outdda;
180
+ break;
181
+ case 0x2c: /* outddb */
182
+ val = s->outddb;
183
+ break;
184
+ case 0x4c: /* ctrl */
185
+ val = s->ctrl;
186
+ val |= 1 << 17;
187
+ break;
188
+ case 0x50: /* intstat */
189
+ val = s->intstat;
190
+ break;
191
+ case 0x1f0: /* swirq_set */
192
+ val = s->swirq;
193
+ break;
194
+ case 0x1f4: /* swirq_clr */
195
+ val = s->swirq;
196
+ break;
197
+ default:
198
+ qemu_log_mask(LOG_UNIMP, "read from unknown register");
199
+ break;
200
+ }
201
+
202
+ return val;
203
+}
204
+
205
+static void mphi_reg_write(void *ptr, hwaddr addr, uint64_t val, unsigned size)
206
+{
207
+ BCM2835MphiState *s = ptr;
208
+ int do_irq = 0;
209
+
210
+ switch (addr) {
211
+ case 0x28: /* outdda */
212
+ s->outdda = val;
213
+ break;
214
+ case 0x2c: /* outddb */
215
+ s->outddb = val;
216
+ if (val & (1 << 29)) {
217
+ do_irq = 1;
218
+ }
219
+ break;
220
+ case 0x4c: /* ctrl */
221
+ s->ctrl = val;
222
+ if (val & (1 << 16)) {
223
+ do_irq = -1;
224
+ }
225
+ break;
226
+ case 0x50: /* intstat */
227
+ s->intstat = val;
228
+ if (val & ((1 << 16) | (1 << 29))) {
229
+ do_irq = -1;
230
+ }
231
+ break;
232
+ case 0x1f0: /* swirq_set */
233
+ s->swirq |= val;
234
+ do_irq = 1;
235
+ break;
236
+ case 0x1f4: /* swirq_clr */
237
+ s->swirq &= ~val;
238
+ do_irq = -1;
239
+ break;
240
+ default:
241
+ qemu_log_mask(LOG_UNIMP, "write to unknown register");
242
+ return;
243
+ }
244
+
245
+ if (do_irq > 0) {
246
+ mphi_raise_irq(s);
247
+ } else if (do_irq < 0) {
248
+ mphi_lower_irq(s);
249
+ }
250
+}
251
+
252
+static const MemoryRegionOps mphi_mmio_ops = {
253
+ .read = mphi_reg_read,
254
+ .write = mphi_reg_write,
255
+ .impl.min_access_size = 4,
256
+ .impl.max_access_size = 4,
257
+ .endianness = DEVICE_LITTLE_ENDIAN,
258
+};
259
+
260
+static void mphi_reset(DeviceState *dev)
261
+{
262
+ BCM2835MphiState *s = BCM2835_MPHI(dev);
263
+
264
+ s->outdda = 0;
265
+ s->outddb = 0;
266
+ s->ctrl = 0;
267
+ s->intstat = 0;
268
+ s->swirq = 0;
269
+}
270
+
271
+static void mphi_realize(DeviceState *dev, Error **errp)
272
+{
273
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
274
+ BCM2835MphiState *s = BCM2835_MPHI(dev);
275
+
276
+ sysbus_init_irq(sbd, &s->irq);
277
+}
278
+
279
+static void mphi_init(Object *obj)
280
+{
281
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
282
+ BCM2835MphiState *s = BCM2835_MPHI(obj);
283
+
284
+ memory_region_init_io(&s->iomem, obj, &mphi_mmio_ops, s, "mphi", MPHI_MMIO_SIZE);
285
+ sysbus_init_mmio(sbd, &s->iomem);
286
+}
287
+
288
+const VMStateDescription vmstate_mphi_state = {
289
+ .name = "mphi",
290
+ .version_id = 1,
291
+ .minimum_version_id = 1,
292
+ .fields = (VMStateField[]) {
293
+ VMSTATE_UINT32(outdda, BCM2835MphiState),
294
+ VMSTATE_UINT32(outddb, BCM2835MphiState),
295
+ VMSTATE_UINT32(ctrl, BCM2835MphiState),
296
+ VMSTATE_UINT32(intstat, BCM2835MphiState),
297
+ VMSTATE_UINT32(swirq, BCM2835MphiState),
298
+ VMSTATE_END_OF_LIST()
299
+ }
300
+};
301
+
302
+static void mphi_class_init(ObjectClass *klass, void *data)
303
+{
304
+ DeviceClass *dc = DEVICE_CLASS(klass);
305
+
306
+ dc->realize = mphi_realize;
307
+ dc->reset = mphi_reset;
308
+ dc->vmsd = &vmstate_mphi_state;
309
+}
310
+
311
+static const TypeInfo bcm2835_mphi_type_info = {
312
+ .name = TYPE_BCM2835_MPHI,
313
+ .parent = TYPE_SYS_BUS_DEVICE,
314
+ .instance_size = sizeof(BCM2835MphiState),
315
+ .instance_init = mphi_init,
316
+ .class_init = mphi_class_init,
317
+};
318
+
319
+static void bcm2835_mphi_register_types(void)
320
+{
321
+ type_register_static(&bcm2835_mphi_type_info);
322
+}
323
+
324
+type_init(bcm2835_mphi_register_types)
325
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
326
index XXXXXXX..XXXXXXX 100644
327
--- a/hw/misc/Makefile.objs
328
+++ b/hw/misc/Makefile.objs
329
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_OMAP) += omap_l4.o
330
common-obj-$(CONFIG_OMAP) += omap_sdrc.o
331
common-obj-$(CONFIG_OMAP) += omap_tap.o
332
common-obj-$(CONFIG_RASPI) += bcm2835_mbox.o
333
+common-obj-$(CONFIG_RASPI) += bcm2835_mphi.o
334
common-obj-$(CONFIG_RASPI) += bcm2835_property.o
335
common-obj-$(CONFIG_RASPI) += bcm2835_rng.o
336
common-obj-$(CONFIG_RASPI) += bcm2835_thermal.o
337
--
2.20.1
1
From: Richard Henderson <richard.henderson@linaro.org>

Do not yet convert the helpers to loop over opr_sz, but the
descriptor allows the vector tail to be cleared. Which fixes
an existing bug vs SVE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 15 +++++++-----
target/arm/crypto_helper.c | 37 +++++++++++++++++++++++-----
target/arm/translate-a64.c | 50 ++++++++++++--------------------------
3 files changed, 55 insertions(+), 47 deletions(-)

From: Alexander Graf <agraf@csgraf.de>

We need to handle PSCI calls. Most of the TCG code works for us,
but we can simplify it to only handle aa64 mode and we need to
handle SUSPEND differently.

This patch takes the TCG code as template and duplicates it in HVF.

To tell the guest that we support PSCI 0.2 now, update the check in
arm_cpu_initfn() as well.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210916155404.86958-8-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.c | 4 +-
target/arm/hvf/hvf.c | 141 ++++++++++++++++++++++++++++--
target/arm/hvf/trace-events | 1 +
3 files changed, 139 insertions(+), 7 deletions(-)
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
23
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
25
--- a/target/arm/cpu.c
20
+++ b/target/arm/helper.h
26
+++ b/target/arm/cpu.c
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
22
DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
28
cpu->psci_version = 1; /* By default assume PSCI v0.1 */
23
DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
29
cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
24
30
25
-DEF_HELPER_FLAGS_3(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
31
- if (tcg_enabled()) {
26
-DEF_HELPER_FLAGS_3(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
32
- cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
27
-DEF_HELPER_FLAGS_2(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr)
33
+ if (tcg_enabled() || hvf_enabled()) {
28
-DEF_HELPER_FLAGS_3(crypto_sha512su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
34
+ cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
29
+DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
36
-DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
37
-DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
38
+DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, i32)
42
43
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/crypto_helper.c
48
+++ b/target/arm/crypto_helper.c
49
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
50
#define CR_ST_WORD(state, i) (state.words[i])
51
#endif
52
53
+/*
54
+ * The caller has not been converted to full gvec, and so only
55
+ * modifies the low 16 bytes of the vector register.
56
+ */
57
+static void clear_tail_16(void *vd, uint32_t desc)
58
+{
59
+ int opr_sz = simd_oprsz(desc);
60
+ int max_sz = simd_maxsz(desc);
61
+
62
+ assert(opr_sz == 16);
63
+ clear_tail(vd, opr_sz, max_sz);
64
+}
65
+
66
static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
67
uint64_t *rm, bool decrypt)
68
{
69
@@ -XXX,XX +XXX,XX @@ static uint64_t s1_512(uint64_t x)
70
return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
71
}
72
73
-void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
74
+void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm, uint32_t desc)
75
{
76
uint64_t *rd = vd;
77
uint64_t *rn = vn;
78
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
79
80
rd[0] = d0;
81
rd[1] = d1;
82
+
83
+ clear_tail_16(vd, desc);
84
}
85
86
-void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
87
+void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm, uint32_t desc)
88
{
89
uint64_t *rd = vd;
90
uint64_t *rn = vn;
91
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
92
93
rd[0] = d0;
94
rd[1] = d1;
95
+
96
+ clear_tail_16(vd, desc);
97
}
98
99
-void HELPER(crypto_sha512su0)(void *vd, void *vn)
100
+void HELPER(crypto_sha512su0)(void *vd, void *vn, uint32_t desc)
101
{
102
uint64_t *rd = vd;
103
uint64_t *rn = vn;
104
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su0)(void *vd, void *vn)
105
106
rd[0] = d0;
107
rd[1] = d1;
108
+
109
+ clear_tail_16(vd, desc);
110
}
111
112
-void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
113
+void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm, uint32_t desc)
114
{
115
uint64_t *rd = vd;
116
uint64_t *rn = vn;
117
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
118
119
rd[0] += s1_512(rn[0]) + rm[0];
120
rd[1] += s1_512(rn[1]) + rm[1];
121
+
122
+ clear_tail_16(vd, desc);
123
}
124
125
-void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
126
+void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm, uint32_t desc)
127
{
128
uint64_t *rd = vd;
129
uint64_t *rn = vn;
130
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
131
132
rd[0] = d.l[0];
133
rd[1] = d.l[1];
134
+
135
+ clear_tail_16(vd, desc);
136
}
137
138
-void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
139
+void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
140
{
141
uint64_t *rd = vd;
142
uint64_t *rn = vn;
143
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
144
145
rd[0] = d.l[0];
146
rd[1] = d.l[1];
147
+
148
+ clear_tail_16(vd, desc);
149
}
150
151
void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
152
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate-a64.c
155
+++ b/target/arm/translate-a64.c
156
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
157
int rn = extract32(insn, 5, 5);
158
int rd = extract32(insn, 0, 5);
159
bool feature;
160
- CryptoThreeOpFn *genfn = NULL;
161
gen_helper_gvec_3 *oolfn = NULL;
162
GVecGen3Fn *gvecfn = NULL;
163
164
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
165
switch (opcode) {
166
case 0: /* SHA512H */
167
feature = dc_isar_feature(aa64_sha512, s);
168
- genfn = gen_helper_crypto_sha512h;
169
+ oolfn = gen_helper_crypto_sha512h;
170
break;
171
case 1: /* SHA512H2 */
172
feature = dc_isar_feature(aa64_sha512, s);
173
- genfn = gen_helper_crypto_sha512h2;
174
+ oolfn = gen_helper_crypto_sha512h2;
175
break;
176
case 2: /* SHA512SU1 */
177
feature = dc_isar_feature(aa64_sha512, s);
178
- genfn = gen_helper_crypto_sha512su1;
179
+ oolfn = gen_helper_crypto_sha512su1;
180
break;
181
case 3: /* RAX1 */
182
feature = dc_isar_feature(aa64_sha3, s);
183
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
184
switch (opcode) {
185
case 0: /* SM3PARTW1 */
186
feature = dc_isar_feature(aa64_sm3, s);
187
- genfn = gen_helper_crypto_sm3partw1;
188
+ oolfn = gen_helper_crypto_sm3partw1;
189
break;
190
case 1: /* SM3PARTW2 */
191
feature = dc_isar_feature(aa64_sm3, s);
192
- genfn = gen_helper_crypto_sm3partw2;
193
+ oolfn = gen_helper_crypto_sm3partw2;
194
break;
195
case 2: /* SM4EKEY */
196
feature = dc_isar_feature(aa64_sm4, s);
197
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
198
199
if (oolfn) {
200
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
201
- } else if (gvecfn) {
202
- gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
203
} else {
204
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
205
-
206
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
207
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
208
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
209
-
210
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
211
-
212
- tcg_temp_free_ptr(tcg_rd_ptr);
213
- tcg_temp_free_ptr(tcg_rn_ptr);
214
- tcg_temp_free_ptr(tcg_rm_ptr);
215
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
216
}
35
}
217
}
36
}
218
37
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
38
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
220
int opcode = extract32(insn, 10, 2);
39
index XXXXXXX..XXXXXXX 100644
221
int rn = extract32(insn, 5, 5);
40
--- a/target/arm/hvf/hvf.c
222
int rd = extract32(insn, 0, 5);
41
+++ b/target/arm/hvf/hvf.c
223
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
42
@@ -XXX,XX +XXX,XX @@
224
bool feature;
43
#include "hw/irq.h"
225
- CryptoTwoOpFn *genfn;
44
#include "qemu/main-loop.h"
226
- gen_helper_gvec_3 *oolfn = NULL;
45
#include "sysemu/cpus.h"
227
46
+#include "arm-powerctl.h"
228
switch (opcode) {
47
#include "target/arm/cpu.h"
229
case 0: /* SHA512SU0 */
48
#include "target/arm/internals.h"
230
feature = dc_isar_feature(aa64_sha512, s);
49
#include "trace/trace-target_arm_hvf.h"
231
- genfn = gen_helper_crypto_sha512su0;
50
@@ -XXX,XX +XXX,XX @@
51
#define TMR_CTL_IMASK (1 << 1)
52
#define TMR_CTL_ISTATUS (1 << 2)
53
54
+static void hvf_wfi(CPUState *cpu);
55
+
56
typedef struct HVFVTimer {
57
/* Vtimer value during migration and paused state */
58
uint64_t vtimer_val;
59
@@ -XXX,XX +XXX,XX @@ static void hvf_raise_exception(CPUState *cpu, uint32_t excp,
60
arm_cpu_do_interrupt(cpu);
61
}
62
63
+static void hvf_psci_cpu_off(ARMCPU *arm_cpu)
64
+{
65
+ int32_t ret = arm_set_cpu_off(arm_cpu->mp_affinity);
66
+ assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
67
+}
68
+
69
+/*
70
+ * Handle a PSCI call.
71
+ *
72
+ * Returns 0 on success
73
+ * -1 when the PSCI call is unknown,
74
+ */
75
+static bool hvf_handle_psci_call(CPUState *cpu)
76
+{
77
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
78
+ CPUARMState *env = &arm_cpu->env;
79
+ uint64_t param[4] = {
80
+ env->xregs[0],
81
+ env->xregs[1],
82
+ env->xregs[2],
83
+ env->xregs[3]
84
+ };
85
+ uint64_t context_id, mpidr;
86
+ bool target_aarch64 = true;
87
+ CPUState *target_cpu_state;
88
+ ARMCPU *target_cpu;
89
+ target_ulong entry;
90
+ int target_el = 1;
91
+ int32_t ret = 0;
92
+
93
+ trace_hvf_psci_call(param[0], param[1], param[2], param[3],
94
+ arm_cpu->mp_affinity);
95
+
96
+ switch (param[0]) {
97
+ case QEMU_PSCI_0_2_FN_PSCI_VERSION:
98
+ ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
99
+ break;
100
+ case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
101
+ ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
102
+ break;
103
+ case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
104
+ case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
105
+ mpidr = param[1];
106
+
107
+ switch (param[2]) {
108
+ case 0:
109
+ target_cpu_state = arm_get_cpu_by_id(mpidr);
110
+ if (!target_cpu_state) {
111
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
112
+ break;
113
+ }
114
+ target_cpu = ARM_CPU(target_cpu_state);
115
+
116
+ ret = target_cpu->power_state;
117
+ break;
118
+ default:
119
+ /* Everything above affinity level 0 is always on. */
120
+ ret = 0;
121
+ }
122
+ break;
123
+ case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
124
+ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
125
+ /*
126
+ * QEMU reset and shutdown are async requests, but PSCI
127
+ * mandates that we never return from the reset/shutdown
128
+ * call, so power the CPU off now so it doesn't execute
129
+ * anything further.
130
+ */
131
+ hvf_psci_cpu_off(arm_cpu);
132
+ break;
133
+ case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
134
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
135
+ hvf_psci_cpu_off(arm_cpu);
136
+ break;
137
+ case QEMU_PSCI_0_1_FN_CPU_ON:
138
+ case QEMU_PSCI_0_2_FN_CPU_ON:
139
+ case QEMU_PSCI_0_2_FN64_CPU_ON:
140
+ mpidr = param[1];
141
+ entry = param[2];
142
+ context_id = param[3];
143
+ ret = arm_set_cpu_on(mpidr, entry, context_id,
144
+ target_el, target_aarch64);
145
+ break;
146
+ case QEMU_PSCI_0_1_FN_CPU_OFF:
147
+ case QEMU_PSCI_0_2_FN_CPU_OFF:
148
+ hvf_psci_cpu_off(arm_cpu);
149
+ break;
150
+ case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
151
+ case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
152
+ case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
153
+ /* Affinity levels are not supported in QEMU */
154
+ if (param[1] & 0xfffe0000) {
155
+ ret = QEMU_PSCI_RET_INVALID_PARAMS;
156
+ break;
157
+ }
158
+ /* Powerdown is not supported, we always go into WFI */
159
+ env->xregs[0] = 0;
160
+ hvf_wfi(cpu);
161
+ break;
162
+ case QEMU_PSCI_0_1_FN_MIGRATE:
163
+ case QEMU_PSCI_0_2_FN_MIGRATE:
164
+ ret = QEMU_PSCI_RET_NOT_SUPPORTED;
165
+ break;
166
+ default:
167
+ return false;
168
+ }
169
+
170
+ env->xregs[0] = ret;
171
+ return true;
172
+}
173
+
174
static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
175
{
176
ARMCPU *arm_cpu = ARM_CPU(cpu);
177
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
232
break;
178
break;
233
case 1: /* SM4E */
179
case EC_AA64_HVC:
234
feature = dc_isar_feature(aa64_sm4, s);
180
cpu_synchronize_state(cpu);
235
- oolfn = gen_helper_crypto_sm4e;
181
- trace_hvf_unknown_hvc(env->xregs[0]);
182
- /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
183
- env->xregs[0] = -1;
184
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_HVC) {
185
+ if (!hvf_handle_psci_call(cpu)) {
186
+ trace_hvf_unknown_hvc(env->xregs[0]);
187
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
188
+ env->xregs[0] = -1;
189
+ }
190
+ } else {
191
+ trace_hvf_unknown_hvc(env->xregs[0]);
192
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
193
+ }
194
break;
195
case EC_AA64_SMC:
196
cpu_synchronize_state(cpu);
197
- trace_hvf_unknown_smc(env->xregs[0]);
198
- hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
199
+ if (arm_cpu->psci_conduit == QEMU_PSCI_CONDUIT_SMC) {
200
+ advance_pc = true;
201
+
202
+ if (!hvf_handle_psci_call(cpu)) {
203
+ trace_hvf_unknown_smc(env->xregs[0]);
204
+ /* SMCCC 1.3 section 5.2 says every unknown SMCCC call returns -1 */
205
+ env->xregs[0] = -1;
206
+ }
207
+ } else {
208
+ trace_hvf_unknown_smc(env->xregs[0]);
209
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
210
+ }
236
break;
211
break;
237
default:
212
default:
238
unallocated_encoding(s);
213
cpu_synchronize_state(cpu);
239
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
214
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
240
return;
215
index XXXXXXX..XXXXXXX 100644
241
}
216
--- a/target/arm/hvf/trace-events
242
217
+++ b/target/arm/hvf/trace-events
243
- if (oolfn) {
218
@@ -XXX,XX +XXX,XX @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
244
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
219
hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
245
- return;
220
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
246
+ switch (opcode) {
221
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
247
+ case 0: /* SHA512SU0 */
222
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
248
+ gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
249
+ break;
250
+ case 1: /* SM4E */
251
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
252
+ break;
253
+ default:
254
+ g_assert_not_reached();
255
}
256
-
257
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
258
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
259
-
260
- genfn(tcg_rd_ptr, tcg_rn_ptr);
261
-
262
- tcg_temp_free_ptr(tcg_rd_ptr);
263
- tcg_temp_free_ptr(tcg_rn_ptr);
264
}
265
266
/* Crypto four-register
267
--
223
--
268
2.20.1
224
2.20.1
269
225
270
226
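For reference, the guest side of the exchange that hvf_handle_psci_call()
services follows the SMCCC convention: the PSCI function ID goes in x0, up
to three arguments in x1-x3, and the result comes back in x0. A minimal
sketch of a guest issuing CPU_ON over the HVC conduit (the function ID
0xC4000003 is PSCI_0_2_FN64_CPU_ON; a real guest would pick HVC or SMC
according to the conduit advertised in its DT/ACPI tables, and per SMCCC
should also treat x4-x17 as clobbered):

    static inline long psci_cpu_on(unsigned long target_mpidr,
                                   unsigned long entry_point,
                                   unsigned long context_id)
    {
        /* SMCCC: function ID in x0, arguments in x1..x3, result in x0 */
        register unsigned long x0 asm("x0") = 0xC4000003UL; /* CPU_ON, SMC64 */
        register unsigned long x1 asm("x1") = target_mpidr;
        register unsigned long x2 asm("x2") = entry_point;
        register unsigned long x3 asm("x3") = context_id;

        asm volatile("hvc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x3)
                     : "memory");
        return (long)x0; /* 0 on success, negative PSCI error otherwise */
    }

On the host side this lands in the EC_AA64_HVC path above, which passes
x0-x3 to hvf_handle_psci_call() and writes the return value back to x0.
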
1
From: Cédric Le Goater <clg@kaod.org>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Signed-off-by: Cédric Le Goater <clg@kaod.org>
3
Now that we have all logic in place that we need to handle Hypervisor.framework
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
on Apple Silicon systems, let's add CONFIG_HVF for aarch64 as well so that we
5
Message-id: 20200602135050.593692-1-clg@kaod.org
5
can build it.
6
7
Signed-off-by: Alexander Graf <agraf@csgraf.de>
8
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
9
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> (x86 only)
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Sergio Lopez <slp@redhat.com>
12
Message-id: 20210916155404.86958-9-agraf@csgraf.de
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
14
---
8
docs/system/arm/aspeed.rst | 85 ++++++++++++++++++++++++++++++++++++++
15
meson.build | 7 +++++++
9
docs/system/target-arm.rst | 1 +
16
target/arm/hvf/meson.build | 3 +++
10
2 files changed, 86 insertions(+)
17
target/arm/meson.build | 2 ++
11
create mode 100644 docs/system/arm/aspeed.rst
18
3 files changed, 12 insertions(+)
19
create mode 100644 target/arm/hvf/meson.build
12
20
13
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
21
diff --git a/meson.build b/meson.build
22
index XXXXXXX..XXXXXXX 100644
23
--- a/meson.build
24
+++ b/meson.build
25
@@ -XXX,XX +XXX,XX @@ else
26
endif
27
28
accelerator_targets = { 'CONFIG_KVM': kvm_targets }
29
+
30
+if cpu in ['aarch64']
31
+ accelerator_targets += {
32
+ 'CONFIG_HVF': ['aarch64-softmmu']
33
+ }
34
+endif
35
+
36
if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
37
# i386 emulator provides xenpv machine type for multiple architectures
38
accelerator_targets += {
39
diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
14
new file mode 100644
40
new file mode 100644
15
index XXXXXXX..XXXXXXX
41
index XXXXXXX..XXXXXXX
16
--- /dev/null
42
--- /dev/null
17
+++ b/docs/system/arm/aspeed.rst
43
+++ b/target/arm/hvf/meson.build
18
@@ -XXX,XX +XXX,XX @@
44
@@ -XXX,XX +XXX,XX @@
19
+Aspeed family boards (``*-bmc``, ``ast2500-evb``, ``ast2600-evb``)
45
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
20
+==================================================================
46
+ 'hvf.c',
47
+))
48
diff --git a/target/arm/meson.build b/target/arm/meson.build
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/meson.build
51
+++ b/target/arm/meson.build
52
@@ -XXX,XX +XXX,XX @@ arm_softmmu_ss.add(files(
53
'psci.c',
54
))
55
56
+subdir('hvf')
21
+
57
+
22
+The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
58
target_arch += {'arm': arm_ss}
23
+Aspeed evaluation boards. They are based on different releases of the
59
target_softmmu_arch += {'arm': arm_softmmu_ss}
24
+Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
25
+AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
26
+with dual cores ARM Cortex A7 CPUs (1.2GHz).
27
+
28
+The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, USB, SPI, I2C,
29
+etc.
30
+
31
+AST2400 SoC based machines :
32
+
33
+- ``palmetto-bmc`` OpenPOWER Palmetto POWER8 BMC
34
+
35
+AST2500 SoC based machines :
36
+
37
+- ``ast2500-evb`` Aspeed AST2500 Evaluation board
38
+- ``romulus-bmc`` OpenPOWER Romulus POWER9 BMC
39
+- ``witherspoon-bmc`` OpenPOWER Witherspoon POWER9 BMC
40
+- ``sonorapass-bmc`` OCP SonoraPass BMC
41
+- ``swift-bmc`` OpenPOWER Swift BMC POWER9
42
+
43
+AST2600 SoC based machines :
44
+
45
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
46
+- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC
47
+
48
+Supported devices
49
+-----------------
50
+
51
+ * SMP (for the AST2600 Cortex-A7)
52
+ * Interrupt Controller (VIC)
53
+ * Timer Controller
54
+ * RTC Controller
55
+ * I2C Controller
56
+ * System Control Unit (SCU)
57
+ * SRAM mapping
58
+ * X-DMA Controller (basic interface)
59
+ * Static Memory Controller (SMC or FMC) - Only SPI Flash support
60
+ * SPI Memory Controller
61
+ * USB 2.0 Controller
62
+ * SD/MMC storage controllers
63
+ * SDRAM controller (dummy interface for basic settings and training)
64
+ * Watchdog Controller
65
+ * GPIO Controller (Master only)
66
+ * UART
67
+ * Ethernet controllers
68
+
69
+
70
+Missing devices
71
+---------------
72
+
73
+ * Coprocessor support
74
+ * ADC (out of tree implementation)
75
+ * PWM and Fan Controller
76
+ * LPC Bus Controller
77
+ * Slave GPIO Controller
78
+ * Super I/O Controller
79
+ * Hash/Crypto Engine
80
+ * PCI-Express 1 Controller
81
+ * Graphic Display Controller
82
+ * PECI Controller
83
+ * MCTP Controller
84
+ * Mailbox Controller
85
+ * Virtual UART
86
+ * eSPI Controller
87
+ * I3C Controller
88
+
89
+Boot options
90
+------------
91
+
92
+The Aspeed machines can be started using the -kernel option to load a
93
+Linux kernel or from a firmare image which can be downloaded from the
94
+OpenPOWER jenkins :
95
+
96
+ https://openpower.xyz/
97
+
98
+The image should be attached as an MTD drive. Run :
99
+
100
+.. code-block:: bash
101
+
102
+ $ qemu-system-arm -M romulus-bmc -nic user \
103
+    -drive file=flash-romulus,format=raw,if=mtd -nographic
104
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
105
index XXXXXXX..XXXXXXX 100644
106
--- a/docs/system/target-arm.rst
107
+++ b/docs/system/target-arm.rst
108
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
109
arm/realview
110
arm/versatile
111
arm/vexpress
112
+ arm/aspeed
113
arm/musicpal
114
arm/nseries
115
arm/orangepi
116
--
60
--
117
2.20.1
61
2.20.1
118
62
119
63
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
Wire the dwc-hsotg (dwc2) emulation into Qemu
3
We can expose cycle counters on the PMU easily. To be as compatible as
4
4
possible, let's do so, but make sure we don't expose any other architectural
5
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
5
counters that we cannot model yet.
6
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
6
7
Message-id: 20200520235349.21215-7-pauldzim@gmail.com
7
This allows OSes that require PMU support to work.
8
9
Signed-off-by: Alexander Graf <agraf@csgraf.de>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210916155404.86958-10-agraf@csgraf.de
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
13
---
10
include/hw/arm/bcm2835_peripherals.h | 3 ++-
14
target/arm/hvf/hvf.c | 179 +++++++++++++++++++++++++++++++++++++++++++
11
hw/arm/bcm2835_peripherals.c | 21 ++++++++++++++++++++-
15
1 file changed, 179 insertions(+)
12
2 files changed, 22 insertions(+), 2 deletions(-)
16
13
17
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
14
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/arm/bcm2835_peripherals.h
19
--- a/target/arm/hvf/hvf.c
17
+++ b/include/hw/arm/bcm2835_peripherals.h
20
+++ b/target/arm/hvf/hvf.c
18
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
19
#include "hw/sd/bcm2835_sdhost.h"
22
#define SYSREG_OSLSR_EL1 SYSREG(2, 0, 1, 1, 4)
20
#include "hw/gpio/bcm2835_gpio.h"
23
#define SYSREG_OSDLR_EL1 SYSREG(2, 0, 1, 3, 4)
21
#include "hw/timer/bcm2835_systmr.h"
24
#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1)
22
+#include "hw/usb/hcd-dwc2.h"
25
+#define SYSREG_PMCR_EL0 SYSREG(3, 3, 9, 12, 0)
23
#include "hw/misc/unimp.h"
26
+#define SYSREG_PMUSERENR_EL0 SYSREG(3, 3, 9, 14, 0)
24
27
+#define SYSREG_PMCNTENSET_EL0 SYSREG(3, 3, 9, 12, 1)
25
#define TYPE_BCM2835_PERIPHERALS "bcm2835-peripherals"
28
+#define SYSREG_PMCNTENCLR_EL0 SYSREG(3, 3, 9, 12, 2)
26
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
29
+#define SYSREG_PMINTENCLR_EL1 SYSREG(3, 0, 9, 14, 2)
27
UnimplementedDeviceState ave0;
30
+#define SYSREG_PMOVSCLR_EL0 SYSREG(3, 3, 9, 12, 3)
28
UnimplementedDeviceState bscsl;
31
+#define SYSREG_PMSWINC_EL0 SYSREG(3, 3, 9, 12, 4)
29
UnimplementedDeviceState smi;
32
+#define SYSREG_PMSELR_EL0 SYSREG(3, 3, 9, 12, 5)
30
- UnimplementedDeviceState dwc2;
33
+#define SYSREG_PMCEID0_EL0 SYSREG(3, 3, 9, 12, 6)
31
+ DWC2State dwc2;
34
+#define SYSREG_PMCEID1_EL0 SYSREG(3, 3, 9, 12, 7)
32
UnimplementedDeviceState sdramc;
35
+#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
33
} BCM2835PeripheralState;
36
+#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)
34
37
35
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
38
#define WFX_IS_WFE (1 << 0)
36
index XXXXXXX..XXXXXXX 100644
39
37
--- a/hw/arm/bcm2835_peripherals.c
40
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
38
+++ b/hw/arm/bcm2835_peripherals.c
41
val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) /
39
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
42
gt_cntfrq_period_ns(arm_cpu);
40
/* Mphi */
43
break;
41
sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
44
+ case SYSREG_PMCR_EL0:
42
TYPE_BCM2835_MPHI);
45
+ val = env->cp15.c9_pmcr;
43
+
46
+ break;
44
+ /* DWC2 */
47
+ case SYSREG_PMCCNTR_EL0:
45
+ sysbus_init_child_obj(obj, "dwc2", &s->dwc2, sizeof(s->dwc2),
48
+ pmu_op_start(env);
46
+ TYPE_DWC2_USB);
49
+ val = env->cp15.c15_ccnt;
47
+
50
+ pmu_op_finish(env);
48
+ object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
51
+ break;
49
+ OBJECT(&s->gpu_bus_mr));
52
+ case SYSREG_PMCNTENCLR_EL0:
53
+ val = env->cp15.c9_pmcnten;
54
+ break;
55
+ case SYSREG_PMOVSCLR_EL0:
56
+ val = env->cp15.c9_pmovsr;
57
+ break;
58
+ case SYSREG_PMSELR_EL0:
59
+ val = env->cp15.c9_pmselr;
60
+ break;
61
+ case SYSREG_PMINTENCLR_EL1:
62
+ val = env->cp15.c9_pminten;
63
+ break;
64
+ case SYSREG_PMCCFILTR_EL0:
65
+ val = env->cp15.pmccfiltr_el0;
66
+ break;
67
+ case SYSREG_PMCNTENSET_EL0:
68
+ val = env->cp15.c9_pmcnten;
69
+ break;
70
+ case SYSREG_PMUSERENR_EL0:
71
+ val = env->cp15.c9_pmuserenr;
72
+ break;
73
+ case SYSREG_PMCEID0_EL0:
74
+ case SYSREG_PMCEID1_EL0:
75
+ /* We can't really count anything yet, declare all events invalid */
76
+ val = 0;
77
+ break;
78
case SYSREG_OSLSR_EL1:
79
val = env->cp15.oslsr_el1;
80
break;
81
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
82
return 0;
50
}
83
}
51
84
52
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
85
+static void pmu_update_irq(CPUARMState *env)
53
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
86
+{
54
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
87
+ ARMCPU *cpu = env_archcpu(env);
55
INTERRUPT_HOSTPORT));
88
+ qemu_set_irq(cpu->pmu_interrupt, (env->cp15.c9_pmcr & PMCRE) &&
56
89
+ (env->cp15.c9_pminten & env->cp15.c9_pmovsr));
57
+ /* DWC2 */
90
+}
58
+ object_property_set_bool(OBJECT(&s->dwc2), true, "realized", &err);
91
+
59
+ if (err) {
92
+static bool pmu_event_supported(uint16_t number)
60
+ error_propagate(errp, err);
93
+{
61
+ return;
94
+ return false;
62
+ }
95
+}
63
+
96
+
64
+ memory_region_add_subregion(&s->peri_mr, USB_OTG_OFFSET,
97
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
65
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dwc2), 0));
98
+ * the current EL, security state, and register configuration.
66
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->dwc2), 0,
99
+ */
67
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
100
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
68
+ INTERRUPT_USB));
101
+{
69
+
102
+ uint64_t filter;
70
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
103
+ bool enabled, filtered = true;
71
create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
104
+ int el = arm_current_el(env);
72
create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
105
+
73
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
106
+ enabled = (env->cp15.c9_pmcr & PMCRE) &&
74
create_unimp(s, &s->otp, "bcm2835-otp", OTP_OFFSET, 0x80);
107
+ (env->cp15.c9_pmcnten & (1 << counter));
75
create_unimp(s, &s->dbus, "bcm2835-dbus", DBUS_OFFSET, 0x8000);
108
+
76
create_unimp(s, &s->ave0, "bcm2835-ave0", AVE0_OFFSET, 0x8000);
109
+ if (counter == 31) {
77
- create_unimp(s, &s->dwc2, "dwc-usb2", USB_OTG_OFFSET, 0x1000);
110
+ filter = env->cp15.pmccfiltr_el0;
78
create_unimp(s, &s->sdramc, "bcm2835-sdramc", SDRAMC_OFFSET, 0x100);
111
+ } else {
79
}
112
+ filter = env->cp15.c14_pmevtyper[counter];
80
113
+ }
114
+
115
+ if (el == 0) {
116
+ filtered = filter & PMXEVTYPER_U;
117
+ } else if (el == 1) {
118
+ filtered = filter & PMXEVTYPER_P;
119
+ }
120
+
121
+ if (counter != 31) {
122
+ /*
123
+ * If not checking PMCCNTR, ensure the counter is setup to an event we
124
+ * support
125
+ */
126
+ uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
127
+ if (!pmu_event_supported(event)) {
128
+ return false;
129
+ }
130
+ }
131
+
132
+ return enabled && !filtered;
133
+}
134
+
135
+static void pmswinc_write(CPUARMState *env, uint64_t value)
136
+{
137
+ unsigned int i;
138
+ for (i = 0; i < pmu_num_counters(env); i++) {
139
+ /* Increment a counter's count iff: */
140
+ if ((value & (1 << i)) && /* counter's bit is set */
141
+ /* counter is enabled and not filtered */
142
+ pmu_counter_enabled(env, i) &&
143
+ /* counter is SW_INCR */
144
+ (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
145
+ /*
146
+ * Detect if this write causes an overflow since we can't predict
147
+ * PMSWINC overflows like we can for other events
148
+ */
149
+ uint32_t new_pmswinc = env->cp15.c14_pmevcntr[i] + 1;
150
+
151
+ if (env->cp15.c14_pmevcntr[i] & ~new_pmswinc & INT32_MIN) {
152
+ env->cp15.c9_pmovsr |= (1 << i);
153
+ pmu_update_irq(env);
154
+ }
155
+
156
+ env->cp15.c14_pmevcntr[i] = new_pmswinc;
157
+ }
158
+ }
159
+}
160
+
161
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
162
{
163
ARMCPU *arm_cpu = ARM_CPU(cpu);
164
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
165
val);
166
167
switch (reg) {
168
+ case SYSREG_PMCCNTR_EL0:
169
+ pmu_op_start(env);
170
+ env->cp15.c15_ccnt = val;
171
+ pmu_op_finish(env);
172
+ break;
173
+ case SYSREG_PMCR_EL0:
174
+ pmu_op_start(env);
175
+
176
+ if (val & PMCRC) {
177
+ /* The counter has been reset */
178
+ env->cp15.c15_ccnt = 0;
179
+ }
180
+
181
+ if (val & PMCRP) {
182
+ unsigned int i;
183
+ for (i = 0; i < pmu_num_counters(env); i++) {
184
+ env->cp15.c14_pmevcntr[i] = 0;
185
+ }
186
+ }
187
+
188
+ env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
189
+ env->cp15.c9_pmcr |= (val & PMCR_WRITEABLE_MASK);
190
+
191
+ pmu_op_finish(env);
192
+ break;
193
+ case SYSREG_PMUSERENR_EL0:
194
+ env->cp15.c9_pmuserenr = val & 0xf;
195
+ break;
196
+ case SYSREG_PMCNTENSET_EL0:
197
+ env->cp15.c9_pmcnten |= (val & pmu_counter_mask(env));
198
+ break;
199
+ case SYSREG_PMCNTENCLR_EL0:
200
+ env->cp15.c9_pmcnten &= ~(val & pmu_counter_mask(env));
201
+ break;
202
+ case SYSREG_PMINTENCLR_EL1:
203
+ pmu_op_start(env);
204
+ env->cp15.c9_pminten |= val;
205
+ pmu_op_finish(env);
206
+ break;
207
+ case SYSREG_PMOVSCLR_EL0:
208
+ pmu_op_start(env);
209
+ env->cp15.c9_pmovsr &= ~val;
210
+ pmu_op_finish(env);
211
+ break;
212
+ case SYSREG_PMSWINC_EL0:
213
+ pmu_op_start(env);
214
+ pmswinc_write(env, val);
215
+ pmu_op_finish(env);
216
+ break;
217
+ case SYSREG_PMSELR_EL0:
218
+ env->cp15.c9_pmselr = val & 0x1f;
219
+ break;
220
+ case SYSREG_PMCCFILTR_EL0:
221
+ pmu_op_start(env);
222
+ env->cp15.pmccfiltr_el0 = val & PMCCFILTR_EL0;
223
+ pmu_op_finish(env);
224
+ break;
225
case SYSREG_OSLAR_EL1:
226
env->cp15.oslsr_el1 = val & 1;
227
break;
81
--
228
--
82
2.20.1
229
2.20.1
83
230
84
231
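For context, the registers handled above are what a guest typically touches
to use the cycle counter; a rough sketch of that guest-side sequence (run at
EL1, not taken from the patch) is:

    /* Enable the PMU cycle counter and read it; EL1, ARMv8-A. */
    static inline unsigned long read_cycle_counter(void)
    {
        unsigned long pmcr, cycles;

        asm volatile("mrs %0, pmcr_el0" : "=r"(pmcr));
        pmcr |= (1UL << 0) | (1UL << 2);                /* PMCR_EL0.E, .C */
        asm volatile("msr pmcr_el0, %0" : : "r"(pmcr));
        /* Bit 31 of PMCNTENSET_EL0 enables PMCCNTR_EL0. */
        asm volatile("msr pmcntenset_el0, %0" : : "r"(1UL << 31));
        asm volatile("isb" : : : "memory");
        asm volatile("mrs %0, pmccntr_el0" : "=r"(cycles));
        return cycles;
    }

Each of those MSR/MRS accesses traps to hvf_sysreg_write()/hvf_sysreg_read()
and is satisfied from the emulated PMU state added here; event counters
other than the cycle counter still report no supported events.
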
1
Convert the remaining Neon narrowing shifts to decodetree:
1
Currently gen_jmp_tb() assumes that if it is called then the jump it
2
* VQSHRN
2
is handling is the only reason that we might be trying to end the TB,
3
* VQRSHRN
3
so it will use goto_tb if it can. This is usually the case: mostly
4
"we did something that means we must end the TB" happens on a
5
non-branch instruction. However, there are cases where we decide
6
early in handling an instruction that we need to end the TB and
7
return to the main loop, and then the insn is a complex one that
8
involves gen_jmp_tb(). For instance, for M-profile FP instructions,
9
in gen_preserve_fp_state() which is called from vfp_access_check() we
10
want to force an exit to the main loop if lazy state preservation is
11
active and we are in icount mode.
12
13
Make gen_jmp_tb() look at the current value of is_jmp, and only use
14
goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.
4
15
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-7-peter.maydell@linaro.org
18
Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
8
---
19
---
9
target/arm/neon-dp.decode | 20 ++++++
20
target/arm/translate.c | 34 +++++++++++++++++++++++++++++++++-
10
target/arm/translate-neon.inc.c | 15 +++++
21
1 file changed, 33 insertions(+), 1 deletion(-)
11
target/arm/translate.c | 110 +-------------------------------
12
3 files changed, 37 insertions(+), 108 deletions(-)
13
22
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ VQSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
19
VQRSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
20
VQRSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
21
VQRSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
22
+
23
+# VQSHRN with signed input
24
+VQSHRN_S64_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_d
25
+VQSHRN_S32_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_s
26
+VQSHRN_S16_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
27
+
28
+# VQRSHRN with signed input
29
+VQRSHRN_S64_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
30
+VQRSHRN_S32_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
31
+VQRSHRN_S16_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
32
+
33
+# VQSHRN with unsigned input
34
+VQSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_d
35
+VQSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_s
36
+VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
37
+
38
+# VQRSHRN with unsigned input
39
+VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
40
+VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
41
+VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
42
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/translate-neon.inc.c
45
+++ b/target/arm/translate-neon.inc.c
46
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRUN_16, gen_helper_neon_shl_s16, gen_helper_neon_unarrow_sat8)
47
DO_2SN_64(VQRSHRUN_64, gen_helper_neon_rshl_s64, gen_helper_neon_unarrow_sat32)
48
DO_2SN_32(VQRSHRUN_32, gen_helper_neon_rshl_s32, gen_helper_neon_unarrow_sat16)
49
DO_2SN_32(VQRSHRUN_16, gen_helper_neon_rshl_s16, gen_helper_neon_unarrow_sat8)
50
+DO_2SN_64(VQSHRN_S64, gen_sshl_i64, gen_helper_neon_narrow_sat_s32)
51
+DO_2SN_32(VQSHRN_S32, gen_sshl_i32, gen_helper_neon_narrow_sat_s16)
52
+DO_2SN_32(VQSHRN_S16, gen_helper_neon_shl_s16, gen_helper_neon_narrow_sat_s8)
53
+
54
+DO_2SN_64(VQRSHRN_S64, gen_helper_neon_rshl_s64, gen_helper_neon_narrow_sat_s32)
55
+DO_2SN_32(VQRSHRN_S32, gen_helper_neon_rshl_s32, gen_helper_neon_narrow_sat_s16)
56
+DO_2SN_32(VQRSHRN_S16, gen_helper_neon_rshl_s16, gen_helper_neon_narrow_sat_s8)
57
+
58
+DO_2SN_64(VQSHRN_U64, gen_ushl_i64, gen_helper_neon_narrow_sat_u32)
59
+DO_2SN_32(VQSHRN_U32, gen_ushl_i32, gen_helper_neon_narrow_sat_u16)
60
+DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
61
+
62
+DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
63
+DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
64
+DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
65
diff --git a/target/arm/translate.c b/target/arm/translate.c
23
diff --git a/target/arm/translate.c b/target/arm/translate.c
66
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate.c
25
--- a/target/arm/translate.c
68
+++ b/target/arm/translate.c
26
+++ b/target/arm/translate.c
69
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_unarrow_sats(int size, TCGv_i32 dest, TCGv_i64 src)
27
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
28
/* An indirect jump so that we still trigger the debug exception. */
29
gen_set_pc_im(s, dest);
30
s->base.is_jmp = DISAS_JUMP;
31
- } else {
32
+ return;
33
+ }
34
+ switch (s->base.is_jmp) {
35
+ case DISAS_NEXT:
36
+ case DISAS_TOO_MANY:
37
+ case DISAS_NORETURN:
38
+ /*
39
+ * The normal case: just go to the destination TB.
40
+ * NB: NORETURN happens if we generate code like
41
+ * gen_brcondi(l);
42
+ * gen_jmp();
43
+ * gen_set_label(l);
44
+ * gen_jmp();
45
+ * on the second call to gen_jmp().
46
+ */
47
gen_goto_tb(s, tbno, dest);
48
+ break;
49
+ case DISAS_UPDATE_NOCHAIN:
50
+ case DISAS_UPDATE_EXIT:
51
+ /*
52
+ * We already decided we're leaving the TB for some other reason.
53
+ * Avoid using goto_tb so we really do exit back to the main loop
54
+ * and don't chain to another TB.
55
+ */
56
+ gen_set_pc_im(s, dest);
57
+ gen_goto_ptr();
58
+ s->base.is_jmp = DISAS_NORETURN;
59
+ break;
60
+ default:
61
+ /*
62
+ * We shouldn't be emitting code for a jump and also have
63
+ * is_jmp set to one of the special cases like DISAS_SWI.
64
+ */
65
+ g_assert_not_reached();
70
}
66
}
71
}
67
}
72
68
73
-static inline void gen_neon_shift_narrow(int size, TCGv_i32 var, TCGv_i32 shift,
74
- int q, int u)
75
-{
76
- if (q) {
77
- if (u) {
78
- switch (size) {
79
- case 1: gen_helper_neon_rshl_u16(var, var, shift); break;
80
- case 2: gen_helper_neon_rshl_u32(var, var, shift); break;
81
- default: abort();
82
- }
83
- } else {
84
- switch (size) {
85
- case 1: gen_helper_neon_rshl_s16(var, var, shift); break;
86
- case 2: gen_helper_neon_rshl_s32(var, var, shift); break;
87
- default: abort();
88
- }
89
- }
90
- } else {
91
- if (u) {
92
- switch (size) {
93
- case 1: gen_helper_neon_shl_u16(var, var, shift); break;
94
- case 2: gen_ushl_i32(var, var, shift); break;
95
- default: abort();
96
- }
97
- } else {
98
- switch (size) {
99
- case 1: gen_helper_neon_shl_s16(var, var, shift); break;
100
- case 2: gen_sshl_i32(var, var, shift); break;
101
- default: abort();
102
- }
103
- }
104
- }
105
-}
106
-
107
static inline void gen_neon_widen(TCGv_i64 dest, TCGv_i32 src, int size, int u)
108
{
109
if (u) {
110
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
111
case 6: /* VQSHLU */
112
case 7: /* VQSHL */
113
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
114
+ case 9: /* VQSHRN, VQRSHRN */
115
return 1; /* handled by decodetree */
116
default:
117
break;
118
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
119
size--;
120
}
121
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
122
- if (op < 10) {
123
- /* Shift by immediate and narrow:
124
- VSHRN, VRSHRN, VQSHRN, VQRSHRN. */
125
- int input_unsigned = (op == 8) ? !u : u;
126
- if (rm & 1) {
127
- return 1;
128
- }
129
- shift = shift - (1 << (size + 3));
130
- size++;
131
- if (size == 3) {
132
- tmp64 = tcg_const_i64(shift);
133
- neon_load_reg64(cpu_V0, rm);
134
- neon_load_reg64(cpu_V1, rm + 1);
135
- for (pass = 0; pass < 2; pass++) {
136
- TCGv_i64 in;
137
- if (pass == 0) {
138
- in = cpu_V0;
139
- } else {
140
- in = cpu_V1;
141
- }
142
- if (q) {
143
- if (input_unsigned) {
144
- gen_helper_neon_rshl_u64(cpu_V0, in, tmp64);
145
- } else {
146
- gen_helper_neon_rshl_s64(cpu_V0, in, tmp64);
147
- }
148
- } else {
149
- if (input_unsigned) {
150
- gen_ushl_i64(cpu_V0, in, tmp64);
151
- } else {
152
- gen_sshl_i64(cpu_V0, in, tmp64);
153
- }
154
- }
155
- tmp = tcg_temp_new_i32();
156
- gen_neon_narrow_op(op == 8, u, size - 1, tmp, cpu_V0);
157
- neon_store_reg(rd, pass, tmp);
158
- } /* for pass */
159
- tcg_temp_free_i64(tmp64);
160
- } else {
161
- if (size == 1) {
162
- imm = (uint16_t)shift;
163
- imm |= imm << 16;
164
- } else {
165
- /* size == 2 */
166
- imm = (uint32_t)shift;
167
- }
168
- tmp2 = tcg_const_i32(imm);
169
- tmp4 = neon_load_reg(rm + 1, 0);
170
- tmp5 = neon_load_reg(rm + 1, 1);
171
- for (pass = 0; pass < 2; pass++) {
172
- if (pass == 0) {
173
- tmp = neon_load_reg(rm, 0);
174
- } else {
175
- tmp = tmp4;
176
- }
177
- gen_neon_shift_narrow(size, tmp, tmp2, q,
178
- input_unsigned);
179
- if (pass == 0) {
180
- tmp3 = neon_load_reg(rm, 1);
181
- } else {
182
- tmp3 = tmp5;
183
- }
184
- gen_neon_shift_narrow(size, tmp3, tmp2, q,
185
- input_unsigned);
186
- tcg_gen_concat_i32_i64(cpu_V0, tmp, tmp3);
187
- tcg_temp_free_i32(tmp);
188
- tcg_temp_free_i32(tmp3);
189
- tmp = tcg_temp_new_i32();
190
- gen_neon_narrow_op(op == 8, u, size - 1, tmp, cpu_V0);
191
- neon_store_reg(rd, pass, tmp);
192
- } /* for pass */
193
- tcg_temp_free_i32(tmp2);
194
- }
195
- } else if (op == 10) {
196
+ if (op == 10) {
197
/* VSHLL, VMOVL */
198
if (q || (rd & 1)) {
199
return 1;
200
--
69
--
201
2.20.1
70
2.20.1
202
71
203
72
1
Convert the VSHLL and VMOVL insns from the 2-reg-shift group
1
Architecturally, for an M-profile CPU with the LOB feature the
2
to decodetree. Since the loop always has two passes, we unroll
2
LTPSIZE field in FPDSCR is always constant 4. QEMU's implementation
3
it to avoid the awkward reassignment of one TCGv to another.
3
enforces this everywhere, except that we don't check that it is true
4
in incoming migration data.
5
6
We're going to add code in gen_update_fp_context() which relies on
7
the "always 4" property. Since this is TCG-only, we don't actually
8
need to be robust to bogus incoming migration data, and the effect of
9
it being wrong would be wrong code generation rather than a QEMU
10
crash; but if it did ever happen somehow it would be very difficult
11
to track down the cause. Add a check so that we fail the inbound
12
migration if the FPDSCR.LTPSIZE value is incorrect.
4
13
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-8-peter.maydell@linaro.org
16
Message-id: 20210913095440.13462-3-peter.maydell@linaro.org
8
---
17
---
9
target/arm/neon-dp.decode | 16 +++++++
18
target/arm/machine.c | 13 +++++++++++++
10
target/arm/translate-neon.inc.c | 81 +++++++++++++++++++++++++++++++++
19
1 file changed, 13 insertions(+)
11
target/arm/translate.c | 46 +------------------
12
3 files changed, 99 insertions(+), 44 deletions(-)
13
20
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
21
diff --git a/target/arm/machine.c b/target/arm/machine.c
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
23
--- a/target/arm/machine.c
17
+++ b/target/arm/neon-dp.decode
24
+++ b/target/arm/machine.c
18
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
25
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
26
hw_breakpoint_update_all(cpu);
20
shift=%neon_rshift_i3
27
hw_watchpoint_update_all(cpu);
21
28
22
+# Long left shifts: again Q is part of opcode decode
29
+ /*
23
+@2reg_shll_s .... ... . . . 1 shift:5 .... .... 0 . . . .... \
30
+ * TCG gen_update_fp_context() relies on the invariant that
24
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0
31
+ * FPDSCR.LTPSIZE is constant 4 for M-profile with the LOB extension;
25
+@2reg_shll_h .... ... . . . 01 shift:4 .... .... 0 . . . .... \
32
+ * forbid bogus incoming data with some other value.
26
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0
33
+ */
27
+@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
34
+ if (arm_feature(env, ARM_FEATURE_M) && cpu_isar_feature(aa32_lob, cpu)) {
28
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
35
+ if (extract32(env->v7m.fpdscr[M_REG_NS],
29
+
36
+ FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4 ||
30
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
37
+ extract32(env->v7m.fpdscr[M_REG_S],
31
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
38
+ FPCR_LTPSIZE_SHIFT, FPCR_LTPSIZE_LENGTH) != 4) {
32
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
39
+ return -1;
33
@@ -XXX,XX +XXX,XX @@ VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
40
+ }
34
VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
35
VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
36
VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
37
+
38
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
39
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
40
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
41
+
42
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
43
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
44
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
45
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate-neon.inc.c
48
+++ b/target/arm/translate-neon.inc.c
49
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
50
DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
51
DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
52
DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
53
+
54
+static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
55
+ NeonGenWidenFn *widenfn, bool u)
56
+{
57
+ TCGv_i64 tmp;
58
+ TCGv_i32 rm0, rm1;
59
+ uint64_t widen_mask = 0;
60
+
61
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
62
+ return false;
63
+ }
41
+ }
64
+
42
if (!kvm_enabled()) {
65
+ /* UNDEF accesses to D16-D31 if they don't exist. */
43
pmu_op_finish(&cpu->env);
66
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
44
}
67
+ ((a->vd | a->vm) & 0x10)) {
68
+ return false;
69
+ }
70
+
71
+ if (a->vd & 1) {
72
+ return false;
73
+ }
74
+
75
+ if (!vfp_access_check(s)) {
76
+ return true;
77
+ }
78
+
79
+ /*
80
+ * This is a widen-and-shift operation. The shift is always less
81
+ * than the width of the source type, so after widening the input
82
+ * vector we can simply shift the whole 64-bit widened register,
83
+ * and then clear the potential overflow bits resulting from left
84
+ * bits of the narrow input appearing as right bits of the left
85
+ * neighbour narrow input. Calculate a mask of bits to clear.
86
+ */
87
+ if ((a->shift != 0) && (a->size < 2 || u)) {
88
+ int esize = 8 << a->size;
89
+ widen_mask = MAKE_64BIT_MASK(0, esize);
90
+ widen_mask >>= esize - a->shift;
91
+ widen_mask = dup_const(a->size + 1, widen_mask);
92
+ }
93
+
94
+ rm0 = neon_load_reg(a->vm, 0);
95
+ rm1 = neon_load_reg(a->vm, 1);
96
+ tmp = tcg_temp_new_i64();
97
+
98
+ widenfn(tmp, rm0);
99
+ if (a->shift != 0) {
100
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
101
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
102
+ }
103
+ neon_store_reg64(tmp, a->vd);
104
+
105
+ widenfn(tmp, rm1);
106
+ if (a->shift != 0) {
107
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
108
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
109
+ }
110
+ neon_store_reg64(tmp, a->vd + 1);
111
+ tcg_temp_free_i64(tmp);
112
+ return true;
113
+}
114
+
115
+static bool trans_VSHLL_S_2sh(DisasContext *s, arg_2reg_shift *a)
116
+{
117
+ NeonGenWidenFn *widenfn[] = {
118
+ gen_helper_neon_widen_s8,
119
+ gen_helper_neon_widen_s16,
120
+ tcg_gen_ext_i32_i64,
121
+ };
122
+ return do_vshll_2sh(s, a, widenfn[a->size], false);
123
+}
124
+
125
+static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
126
+{
127
+ NeonGenWidenFn *widenfn[] = {
128
+ gen_helper_neon_widen_u8,
129
+ gen_helper_neon_widen_u16,
130
+ tcg_gen_extu_i32_i64,
131
+ };
132
+ return do_vshll_2sh(s, a, widenfn[a->size], true);
133
+}
134
diff --git a/target/arm/translate.c b/target/arm/translate.c
135
index XXXXXXX..XXXXXXX 100644
136
--- a/target/arm/translate.c
137
+++ b/target/arm/translate.c
138
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
139
case 7: /* VQSHL */
140
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
141
case 9: /* VQSHRN, VQRSHRN */
142
+ case 10: /* VSHLL, including VMOVL */
143
return 1; /* handled by decodetree */
144
default:
145
break;
146
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
147
size--;
148
}
149
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
150
- if (op == 10) {
151
- /* VSHLL, VMOVL */
152
- if (q || (rd & 1)) {
153
- return 1;
154
- }
155
- tmp = neon_load_reg(rm, 0);
156
- tmp2 = neon_load_reg(rm, 1);
157
- for (pass = 0; pass < 2; pass++) {
158
- if (pass == 1)
159
- tmp = tmp2;
160
-
161
- gen_neon_widen(cpu_V0, tmp, size, u);
162
-
163
- if (shift != 0) {
164
- /* The shift is less than the width of the source
165
- type, so we can just shift the whole register. */
166
- tcg_gen_shli_i64(cpu_V0, cpu_V0, shift);
167
- /* Widen the result of shift: we need to clear
168
- * the potential overflow bits resulting from
169
- * left bits of the narrow input appearing as
170
- * right bits of left the neighbour narrow
171
- * input. */
172
- if (size < 2 || !u) {
173
- uint64_t imm64;
174
- if (size == 0) {
175
- imm = (0xffu >> (8 - shift));
176
- imm |= imm << 16;
177
- } else if (size == 1) {
178
- imm = 0xffff >> (16 - shift);
179
- } else {
180
- /* size == 2 */
181
- imm = 0xffffffff >> (32 - shift);
182
- }
183
- if (size < 2) {
184
- imm64 = imm | (((uint64_t)imm) << 32);
185
- } else {
186
- imm64 = imm;
187
- }
188
- tcg_gen_andi_i64(cpu_V0, cpu_V0, ~imm64);
189
- }
190
- }
191
- neon_store_reg64(cpu_V0, rd + pass);
192
- }
193
- } else if (op >= 14) {
194
+ if (op >= 14) {
195
/* VCVT fixed-point. */
196
TCGv_ptr fpst;
197
TCGv_i32 shiftv;
198
--
45
--
199
2.20.1
46
2.20.1
200
47
201
48
1
Convert the insns in the one-register-and-immediate group to decodetree.
1
Our current codegen for MVE always calls out to helper functions,
2
2
because some byte lanes might be predicated. The common case is that
3
In the new decode, our asimd_imm_const() function returns a 64-bit value
3
in fact there is no predication active and all lanes should be
4
rather than a 32-bit one, which means we don't need to treat cmode=14 op=1
4
updated together, so we can produce better code by detecting that and
5
as a special case in the decoder (it is the only encoding where the two
5
using the TCG generic vector infrastructure.
6
halves of the 64-bit value are different).
6
7
Add a TB flag that is set when we can guarantee that there is no
8
active MVE predication, and a bool in the DisasContext. Subsequent
9
patches will use this flag to generate improved code for some
10
instructions.
11
12
In most cases when the predication state changes we simply end the TB
13
after that instruction. For the code called from vfp_access_check()
14
that handles lazy state preservation and creating a new FP context,
15
we can usually avoid having to try to end the TB because luckily the
16
new value of the flag following the register changes in those
17
sequences doesn't depend on any runtime decisions. We do have to end
18
the TB if the guest has enabled lazy FP state preservation but not
19
automatic state preservation, but this is an odd corner case that is
20
not going to be common in real-world code.
7
21
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20200522145520.6778-10-peter.maydell@linaro.org
24
Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
11
---
25
---
12
target/arm/neon-dp.decode | 22 ++++++
26
target/arm/cpu.h | 4 +++-
13
target/arm/translate-neon.inc.c | 118 ++++++++++++++++++++++++++++++++
27
target/arm/translate.h | 2 ++
14
target/arm/translate.c | 101 +--------------------------
28
target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
15
3 files changed, 142 insertions(+), 99 deletions(-)
29
target/arm/translate-m-nocp.c | 8 +++++++-
16
30
target/arm/translate-mve.c | 13 ++++++++++++-
17
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
31
target/arm/translate-vfp.c | 33 +++++++++++++++++++++++++++------
18
index XXXXXXX..XXXXXXX 100644
32
target/arm/translate.c | 8 ++++++++
19
--- a/target/arm/neon-dp.decode
33
7 files changed, 92 insertions(+), 9 deletions(-)
20
+++ b/target/arm/neon-dp.decode
34
21
@@ -XXX,XX +XXX,XX @@ VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
35
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
36
index XXXXXXX..XXXXXXX 100644
23
VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
37
--- a/target/arm/cpu.h
24
VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
38
+++ b/target/arm/cpu.h
25
+
39
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
26
+######################################################################
40
* | TBFLAG_AM32 | +-----+----------+
27
+# 1-reg-and-modified-immediate grouping:
41
* | | |TBFLAG_M32|
28
+# 1111 001 i 1 D 000 imm:3 Vd:4 cmode:4 0 Q op 1 Vm:4
42
* +-------------+----------------+----------+
29
+######################################################################
43
- * 31 23 5 4 0
30
+
44
+ * 31 23 6 5 0
31
+&1reg_imm vd q imm cmode op
45
*
32
+
46
* Unless otherwise noted, these bits are cached in env->hflags.
33
+%asimd_imm_value 24:1 16:3 0:4
47
*/
34
+
48
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, LSPACT, 2, 1) /* Not cached. */
35
+@1reg_imm .... ... . . . ... ... .... .... . q:1 . . .... \
49
FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
36
+ &1reg_imm imm=%asimd_imm_value vd=%vd_dp
50
/* Set if FPCCR.S does not match current security state */
37
+
51
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
38
+# The cmode/op bits here decode VORR/VBIC/VMOV/VMNV, but
52
+/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
39
+# not in a way we can conveniently represent in decodetree without
53
+FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
40
+# a lot of repetition:
54
41
+# VORR: op=0, (cmode & 1) && cmode < 12
55
/*
42
+# VBIC: op=1, (cmode & 1) && cmode < 12
56
* Bit usage when in AArch64 state
43
+# VMOV: everything else
57
diff --git a/target/arm/translate.h b/target/arm/translate.h
44
+# So we have a single decode line and check the cmode/op in the
58
index XXXXXXX..XXXXXXX 100644
45
+# trans function.
59
--- a/target/arm/translate.h
46
+Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
60
+++ b/target/arm/translate.h
47
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
61
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
48
index XXXXXXX..XXXXXXX 100644
62
bool align_mem;
49
--- a/target/arm/translate-neon.inc.c
63
/* True if PSTATE.IL is set */
50
+++ b/target/arm/translate-neon.inc.c
64
bool pstate_il;
51
@@ -XXX,XX +XXX,XX @@ DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
65
+ /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
52
DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
66
+ bool mve_no_pred;
53
DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
67
/*
54
DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
68
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
55
+
69
* < 0, set by the current instruction.
56
+static uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
70
diff --git a/target/arm/helper.c b/target/arm/helper.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/helper.c
73
+++ b/target/arm/helper.c
74
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
75
#endif
76
}
77
78
+static bool mve_no_pred(CPUARMState *env)
57
+{
79
+{
58
+ /*
80
+ /*
59
+ * Expand the encoded constant.
81
+ * Return true if there is definitely no predication of MVE
60
+ * Note that cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
82
+ * instructions by VPR or LTPSIZE. (Returning false even if there
61
+ * We choose to not special-case this and will behave as if a
83
+ * isn't any predication is OK; generated code will just be
62
+ * valid constant encoding of 0 had been given.
84
+ * a little worse.)
63
+ * cmode = 15 op = 1 must UNDEF; we assume decode has handled that.
85
+ * If the CPU does not implement MVE then this TB flag is always 0.
86
+ *
87
+ * NOTE: if you change this logic, the "recalculate s->mve_no_pred"
88
+ * logic in gen_update_fp_context() needs to be updated to match.
89
+ *
90
+ * We do not include the effect of the ECI bits here -- they are
91
+ * tracked in other TB flags. This simplifies the logic for
92
+ * "when did we emit code that changes the MVE_NO_PRED TB flag
93
+ * and thus need to end the TB?".
64
+ */
94
+ */
65
+ switch (cmode) {
95
+ if (cpu_isar_feature(aa32_mve, env_archcpu(env))) {
66
+ case 0: case 1:
67
+ /* no-op */
68
+ break;
69
+ case 2: case 3:
70
+ imm <<= 8;
71
+ break;
72
+ case 4: case 5:
73
+ imm <<= 16;
74
+ break;
75
+ case 6: case 7:
76
+ imm <<= 24;
77
+ break;
78
+ case 8: case 9:
79
+ imm |= imm << 16;
80
+ break;
81
+ case 10: case 11:
82
+ imm = (imm << 8) | (imm << 24);
83
+ break;
84
+ case 12:
85
+ imm = (imm << 8) | 0xff;
86
+ break;
87
+ case 13:
88
+ imm = (imm << 16) | 0xffff;
89
+ break;
90
+ case 14:
91
+ if (op) {
92
+ /*
93
+ * This is the only case where the top and bottom 32 bits
94
+ * of the encoded constant differ.
95
+ */
96
+ uint64_t imm64 = 0;
97
+ int n;
98
+
99
+ for (n = 0; n < 8; n++) {
100
+ if (imm & (1 << n)) {
101
+ imm64 |= (0xffULL << (n * 8));
102
+ }
103
+ }
104
+ return imm64;
105
+ }
106
+ imm |= (imm << 8) | (imm << 16) | (imm << 24);
107
+ break;
108
+ case 15:
109
+ imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
110
+ | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
111
+ break;
112
+ }
113
+ if (op) {
114
+ imm = ~imm;
115
+ }
116
+ return dup_const(MO_32, imm);
117
+}
118
+
119
+static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
120
+ GVecGen2iFn *fn)
121
+{
122
+ uint64_t imm;
123
+ int reg_ofs, vec_size;
124
+
125
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
126
+ return false;
96
+ return false;
127
+ }
97
+ }
128
+
98
+ if (env->v7m.vpr) {
129
+ /* UNDEF accesses to D16-D31 if they don't exist. */
130
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
131
+ return false;
99
+ return false;
132
+ }
100
+ }
133
+
101
+ if (env->v7m.ltpsize < 4) {
134
+ if (a->vd & a->q) {
135
+ return false;
102
+ return false;
136
+ }
103
+ }
137
+
138
+ if (!vfp_access_check(s)) {
139
+ return true;
140
+ }
141
+
142
+ reg_ofs = neon_reg_offset(a->vd, 0);
143
+ vec_size = a->q ? 16 : 8;
144
+ imm = asimd_imm_const(a->imm, a->cmode, a->op);
145
+
146
+ fn(MO_64, reg_ofs, reg_ofs, imm, vec_size, vec_size);
147
+ return true;
104
+ return true;
148
+}
105
+}
149
+
106
+
150
+static void gen_VMOV_1r(unsigned vece, uint32_t dofs, uint32_t aofs,
107
void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
151
+ int64_t c, uint32_t oprsz, uint32_t maxsz)
108
target_ulong *cs_base, uint32_t *pflags)
109
{
110
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
111
if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
112
DP_TBFLAG_M32(flags, LSPACT, 1);
113
}
114
+
115
+ if (mve_no_pred(env)) {
116
+ DP_TBFLAG_M32(flags, MVE_NO_PRED, 1);
117
+ }
118
} else {
119
/*
120
* Note that XSCALE_CPAR shares bits with VECSTRIDE.
121
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/target/arm/translate-m-nocp.c
124
+++ b/target/arm/translate-m-nocp.c
125
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
126
127
clear_eci_state(s);
128
129
- /* End the TB, because we have updated FP control bits */
130
+ /*
131
+ * End the TB, because we have updated FP control bits,
132
+ * and possibly VPR or LTPSIZE.
133
+ */
134
s->base.is_jmp = DISAS_UPDATE_EXIT;
135
return true;
136
}
137
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
138
store_cpu_field(control, v7m.control[M_REG_S]);
139
tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
140
gen_helper_vfp_set_fpscr(cpu_env, tmp);
141
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
142
tcg_temp_free_i32(tmp);
143
tcg_temp_free_i32(sfpa);
144
break;
145
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
146
}
147
tmp = loadfn(s, opaque, true);
148
store_cpu_field(tmp, v7m.vpr);
149
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
150
break;
151
case ARM_VFP_P0:
152
{
153
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
154
tcg_gen_deposit_i32(vpr, vpr, tmp,
155
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
156
store_cpu_field(vpr, v7m.vpr);
157
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
158
tcg_temp_free_i32(tmp);
159
break;
160
}
161
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-mve.c
164
+++ b/target/arm/translate-mve.c
165
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
166
DO_LOGIC(VORN, gen_helper_mve_vorn)
167
DO_LOGIC(VEOR, gen_helper_mve_veor)
168
169
-DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
170
+static bool trans_VPSEL(DisasContext *s, arg_2op *a)
152
+{
171
+{
153
+ tcg_gen_gvec_dup_imm(MO_64, dofs, oprsz, maxsz, c);
172
+ /* This insn updates predication bits */
173
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
174
+ return do_2op(s, a, gen_helper_mve_vpsel);
154
+}
175
+}
155
+
176
156
+static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
177
#define DO_2OP(INSN, FN) \
157
+{
178
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
158
+ /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
179
@@ -XXX,XX +XXX,XX @@ static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
159
+ GVecGen2iFn *fn;
180
}
160
+
181
161
+ if ((a->cmode & 1) && a->cmode < 12) {
182
gen_helper_mve_vpnot(cpu_env);
162
+ /* for op=1, the imm will be inverted, so BIC becomes AND. */
183
+ /* This insn updates predication bits */
163
+ fn = a->op ? tcg_gen_gvec_andi : tcg_gen_gvec_ori;
184
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
164
+ } else {
185
mve_update_eci(s);
165
+ /* There is one unallocated cmode/op combination in this space */
186
return true;
166
+ if (a->cmode == 15 && a->op == 1) {
187
}
167
+ return false;
188
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
189
/* VPT */
190
gen_vpst(s, a->mask);
191
}
192
+ /* This insn updates predication bits */
193
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
194
mve_update_eci(s);
195
return true;
196
}
197
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
198
/* VPT */
199
gen_vpst(s, a->mask);
200
}
201
+ /* This insn updates predication bits */
202
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
203
mve_update_eci(s);
204
return true;
205
}
206
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/arm/translate-vfp.c
209
+++ b/target/arm/translate-vfp.c
210
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
211
* Generate code for M-profile lazy FP state preservation if needed;
212
* this corresponds to the pseudocode PreserveFPState() function.
213
*/
214
-static void gen_preserve_fp_state(DisasContext *s)
215
+static void gen_preserve_fp_state(DisasContext *s, bool skip_context_update)
216
{
217
if (s->v7m_lspact) {
218
/*
219
@@ -XXX,XX +XXX,XX @@ static void gen_preserve_fp_state(DisasContext *s)
220
* any further FP insns in this TB.
221
*/
222
s->v7m_lspact = false;
223
+ /*
224
+ * The helper might have zeroed VPR, so we do not know the
225
+ * correct value for the MVE_NO_PRED TB flag any more.
226
+ * If we're about to create a new fp context then that
227
+ * will precisely determine the MVE_NO_PRED value (see
228
+ * gen_update_fp_context()). Otherwise, we must:
229
+ * - set s->mve_no_pred to false, so this instruction
230
+ * is generated to use helper functions
231
+ * - end the TB now, without chaining to the next TB
232
+ */
233
+ if (skip_context_update || !s->v7m_new_fp_ctxt_needed) {
234
+ s->mve_no_pred = false;
235
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
168
+ }
236
+ }
169
+ fn = gen_VMOV_1r;
237
}
170
+ }
238
}
171
+ return do_1reg_imm(s, a, fn);
239
172
+}
240
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
241
TCGv_i32 z32 = tcg_const_i32(0);
242
store_cpu_field(z32, v7m.vpr);
243
}
244
-
245
/*
246
- * We don't need to arrange to end the TB, because the only
247
- * parts of FPSCR which we cache in the TB flags are the VECLEN
248
- * and VECSTRIDE, and those don't exist for M-profile.
249
+ * We just updated the FPSCR and VPR. Some of this state is cached
250
+ * in the MVE_NO_PRED TB flag. We want to avoid having to end the
251
+ * TB here, which means we need the new value of the MVE_NO_PRED
252
+ * flag to be exactly known here and the same for all executions.
253
+ * Luckily FPDSCR.LTPSIZE is always constant 4 and the VPR is
254
+ * always set to 0, so the new MVE_NO_PRED flag is always 1
255
+ * if and only if we have MVE.
256
+ *
257
+ * (The other FPSCR state cached in TB flags is VECLEN and VECSTRIDE,
258
+ * but those do not exist for M-profile, so are not relevant here.)
259
*/
260
+ s->mve_no_pred = dc_isar_feature(aa32_mve, s);
261
262
if (s->v8m_secure) {
263
bits |= R_V7M_CONTROL_SFPA_MASK;
264
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
265
/* Handle M-profile lazy FP state mechanics */
266
267
/* Trigger lazy-state preservation if necessary */
268
- gen_preserve_fp_state(s);
269
+ gen_preserve_fp_state(s, skip_context_update);
270
271
if (!skip_context_update) {
272
/* Update ownership of FP context and create new FP context if needed */
173
diff --git a/target/arm/translate.c b/target/arm/translate.c
273
diff --git a/target/arm/translate.c b/target/arm/translate.c
174
index XXXXXXX..XXXXXXX 100644
274
index XXXXXXX..XXXXXXX 100644
175
--- a/target/arm/translate.c
275
--- a/target/arm/translate.c
176
+++ b/target/arm/translate.c
276
+++ b/target/arm/translate.c
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
277
@@ -XXX,XX +XXX,XX @@ static bool trans_DLS(DisasContext *s, arg_DLS *a)
178
/* Three register same length: handled by decodetree */
278
/* DLSTP: set FPSCR.LTPSIZE */
179
return 1;
279
tmp = tcg_const_i32(a->size);
180
} else if (insn & (1 << 4)) {
280
store_cpu_field(tmp, v7m.ltpsize);
181
- if ((insn & 0x00380080) != 0) {
281
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
182
- /* Two registers and shift: handled by decodetree */
282
}
183
- return 1;
283
return true;
184
- } else { /* (insn & 0x00380080) == 0 */
284
}
185
- int invert, reg_ofs, vec_size;
285
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
186
-
286
assert(ok);
187
- if (q && (rd & 1)) {
287
tmp = tcg_const_i32(a->size);
188
- return 1;
288
store_cpu_field(tmp, v7m.ltpsize);
189
- }
289
+ /*
190
-
290
+ * LTPSIZE updated, but MVE_NO_PRED will always be the same thing (0)
191
- op = (insn >> 8) & 0xf;
291
+ * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
192
- /* One register and immediate. */
292
+ */
193
- imm = (u << 7) | ((insn >> 12) & 0x70) | (insn & 0xf);
293
}
194
- invert = (insn & (1 << 5)) != 0;
294
gen_jmp_tb(s, s->base.pc_next, 1);
195
- /* Note that op = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
295
196
- * We choose to not special-case this and will behave as if a
296
@@ -XXX,XX +XXX,XX @@ static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
197
- * valid constant encoding of 0 had been given.
297
gen_helper_mve_vctp(cpu_env, masklen);
198
- */
298
tcg_temp_free_i32(masklen);
199
- switch (op) {
299
tcg_temp_free_i32(rn_shifted);
200
- case 0: case 1:
300
+ /* This insn updates predication bits */
201
- /* no-op */
301
+ s->base.is_jmp = DISAS_UPDATE_NOCHAIN;
202
- break;
302
mve_update_eci(s);
203
- case 2: case 3:
303
return true;
204
- imm <<= 8;
304
}
205
- break;
305
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
206
- case 4: case 5:
306
dc->v7m_new_fp_ctxt_needed =
207
- imm <<= 16;
307
EX_TBFLAG_M32(tb_flags, NEW_FP_CTXT_NEEDED);
208
- break;
308
dc->v7m_lspact = EX_TBFLAG_M32(tb_flags, LSPACT);
209
- case 6: case 7:
309
+ dc->mve_no_pred = EX_TBFLAG_M32(tb_flags, MVE_NO_PRED);
210
- imm <<= 24;
310
} else {
211
- break;
311
dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
212
- case 8: case 9:
312
dc->sctlr_b = EX_TBFLAG_A32(tb_flags, SCTLR__B);
213
- imm |= imm << 16;
214
- break;
215
- case 10: case 11:
216
- imm = (imm << 8) | (imm << 24);
217
- break;
218
- case 12:
219
- imm = (imm << 8) | 0xff;
220
- break;
221
- case 13:
222
- imm = (imm << 16) | 0xffff;
223
- break;
224
- case 14:
225
- imm |= (imm << 8) | (imm << 16) | (imm << 24);
226
- if (invert) {
227
- imm = ~imm;
228
- }
229
- break;
230
- case 15:
231
- if (invert) {
232
- return 1;
233
- }
234
- imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
235
- | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
236
- break;
237
- }
238
- if (invert) {
239
- imm = ~imm;
240
- }
241
-
242
- reg_ofs = neon_reg_offset(rd, 0);
243
- vec_size = q ? 16 : 8;
244
-
245
- if (op & 1 && op < 12) {
246
- if (invert) {
247
- /* The immediate value has already been inverted,
248
- * so BIC becomes AND.
249
- */
250
- tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
251
- vec_size, vec_size);
252
- } else {
253
- tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
254
- vec_size, vec_size);
255
- }
256
- } else {
257
- /* VMOV, VMVN. */
258
- if (op == 14 && invert) {
259
- TCGv_i64 t64 = tcg_temp_new_i64();
260
-
261
- for (pass = 0; pass <= q; ++pass) {
262
- uint64_t val = 0;
263
- int n;
264
-
265
- for (n = 0; n < 8; n++) {
266
- if (imm & (1 << (n + pass * 8))) {
267
- val |= 0xffull << (n * 8);
268
- }
269
- }
270
- tcg_gen_movi_i64(t64, val);
271
- neon_store_reg64(t64, rd + pass);
272
- }
273
- tcg_temp_free_i64(t64);
274
- } else {
275
- tcg_gen_gvec_dup_imm(MO_32, reg_ofs, vec_size,
276
- vec_size, imm);
277
- }
278
- }
279
- }
280
+ /* Two registers and shift or reg and imm: handled by decodetree */
281
+ return 1;
282
} else { /* (insn & 0x00800010 == 0x00800000) */
283
if (size != 3) {
284
op = (insn >> 8) & 0xf;
285
--
313
--
286
2.20.1
314
2.20.1
287
315
288
316
1
From: Richard Henderson <richard.henderson@linaro.org>
1
When not predicating, implement the MVE bitwise logical insns
2
directly using TCG vector operations.
2
3
3
Do not yet convert the helpers to loop over opr_sz, but the
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
descriptor allows the vector tail to be cleared, which fixes
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
an existing bug vs SVE.
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210913095440.13462-5-peter.maydell@linaro.org
8
---
9
target/arm/translate-mve.c | 51 +++++++++++++++++++++++++++-----------
10
1 file changed, 36 insertions(+), 15 deletions(-)
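(Reviewer note: the shape of this optimization, which the rest of the
MVE patches in this series reuse, is roughly the sketch below. It is
simplified: the Q-register bank and VFP access checks are omitted, so
it is not the literal patch text.)

static bool do_2op_vec(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn,
                       GVecGen3Fn *vecfn)
{
    if (vecfn && mve_no_predication(s)) {
        /* No predication and no ECI state: emit one inline gvec op.
         * MVE Q registers are 16 bytes, hence oprsz = maxsz = 16.
         */
        vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qn),
              mve_qreg_offset(a->qm), 16, 16);
    } else {
        /* Otherwise call the out-of-line helper, which honours the
         * VPR predicate mask and partial-execution state.
         */
        TCGv_ptr qd = mve_qreg_ptr(a->qd);
        TCGv_ptr qn = mve_qreg_ptr(a->qn);
        TCGv_ptr qm = mve_qreg_ptr(a->qm);
        fn(cpu_env, qd, qn, qm);
        tcg_temp_free_ptr(qd);
        tcg_temp_free_ptr(qn);
        tcg_temp_free_ptr(qm);
    }
    mve_update_eci(s);
    return true;
}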
6
11
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
8
Message-id: 20200514212831.31248-5-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.h | 12 ++--
13
target/arm/neon-dp.decode | 12 ++--
14
target/arm/crypto_helper.c | 24 +++++--
15
target/arm/translate-a64.c | 34 ++++-----
16
target/arm/translate-neon.inc.c | 124 +++++---------------------------
17
target/arm/translate.c | 24 ++-----
18
6 files changed, 67 insertions(+), 163 deletions(-)
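(Reviewer note: in outline, each converted helper changes shape as
sketched below; the SHA computation itself is elided. The added
descriptor argument is what lets clear_tail_16() know how much of the
destination register beyond the low 16 bytes needs zeroing.)

/* before: fixed 128-bit operation, no length information */
void HELPER(crypto_sha1su1)(void *vd, void *vm)
{
    /* ... SHA1SU1 on the low 16 bytes of vd/vm ... */
}

/* after: gvec-style helper; desc encodes oprsz/maxsz */
void HELPER(crypto_sha1su1)(void *vd, void *vm, uint32_t desc)
{
    /* ... same SHA1SU1 computation ... */
    clear_tail_16(vd, desc);  /* zero the tail of a larger (SVE) reg */
}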
19
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
14
--- a/target/arm/translate-mve.c
23
+++ b/target/arm/helper.h
15
+++ b/target/arm/translate-mve.c
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
16
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr mve_qreg_ptr(unsigned reg)
25
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
17
return ret;
26
18
}
27
DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
28
-DEF_HELPER_FLAGS_2(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr)
20
+static bool mve_no_predication(DisasContext *s)
29
-DEF_HELPER_FLAGS_2(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr)
21
+{
30
+DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
+ /*
31
+DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+ * Return true if we are executing the entire MVE instruction
32
24
+ * with no predication or partial-execution, and so we can safely
33
-DEF_HELPER_FLAGS_3(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
25
+ * use an inline TCG vector implementation.
34
-DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
26
+ */
35
-DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
27
+ return s->eci == 0 && s->mve_no_pred;
36
-DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
28
+}
37
+DEF_HELPER_FLAGS_4(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/neon-dp.decode
47
+++ b/target/arm/neon-dp.decode
48
@@ -XXX,XX +XXX,XX @@ VPADD_3s 1111 001 0 0 . .. .... .... 1011 . . . 1 .... @3same_q0
49
50
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
51
52
+@3same_crypto .... .... .... .... .... .... .... .... \
53
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
54
+
29
+
55
SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
30
static bool mve_check_qreg_bank(DisasContext *s, int qmask)
56
vm=%vm_dp vn=%vn_dp vd=%vd_dp
31
{
57
-SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... \
32
/*
58
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
33
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
59
-SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... \
34
return do_1op(s, a, fns[a->size]);
60
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
61
-SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
62
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
63
+SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
64
+SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
65
+SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
66
67
VFMA_fp_3s 1111 001 0 0 . 0 . .... .... 1100 ... 1 .... @3same_fp
68
VFMS_fp_3s 1111 001 0 0 . 1 . .... .... 1100 ... 1 .... @3same_fp
69
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/crypto_helper.c
72
+++ b/target/arm/crypto_helper.c
73
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
74
rd[1] = d.l[1];
75
}
35
}
76
36
77
-void HELPER(crypto_sha1h)(void *vd, void *vm)
37
-static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
78
+void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
38
+static bool do_2op_vec(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn,
39
+ GVecGen3Fn *vecfn)
79
{
40
{
80
uint64_t *rd = vd;
41
TCGv_ptr qd, qn, qm;
81
uint64_t *rm = vm;
42
82
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1h)(void *vd, void *vm)
43
@@ -XXX,XX +XXX,XX @@ static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
83
44
return true;
84
rd[0] = m.l[0];
85
rd[1] = m.l[1];
86
+
87
+ clear_tail_16(vd, desc);
88
}
89
90
-void HELPER(crypto_sha1su1)(void *vd, void *vm)
91
+void HELPER(crypto_sha1su1)(void *vd, void *vm, uint32_t desc)
92
{
93
uint64_t *rd = vd;
94
uint64_t *rm = vm;
95
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1su1)(void *vd, void *vm)
96
97
rd[0] = d.l[0];
98
rd[1] = d.l[1];
99
+
100
+ clear_tail_16(vd, desc);
101
}
102
103
/*
104
@@ -XXX,XX +XXX,XX @@ static uint32_t s1(uint32_t x)
105
return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
106
}
107
108
-void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
109
+void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm, uint32_t desc)
110
{
111
uint64_t *rd = vd;
112
uint64_t *rn = vn;
113
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
114
115
rd[0] = d.l[0];
116
rd[1] = d.l[1];
117
+
118
+ clear_tail_16(vd, desc);
119
}
120
121
-void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
122
+void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm, uint32_t desc)
123
{
124
uint64_t *rd = vd;
125
uint64_t *rn = vn;
126
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
127
128
rd[0] = d.l[0];
129
rd[1] = d.l[1];
130
+
131
+ clear_tail_16(vd, desc);
132
}
133
134
-void HELPER(crypto_sha256su0)(void *vd, void *vm)
135
+void HELPER(crypto_sha256su0)(void *vd, void *vm, uint32_t desc)
136
{
137
uint64_t *rd = vd;
138
uint64_t *rm = vm;
139
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su0)(void *vd, void *vm)
140
141
rd[0] = d.l[0];
142
rd[1] = d.l[1];
143
+
144
+ clear_tail_16(vd, desc);
145
}
146
147
-void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
148
+void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm, uint32_t desc)
149
{
150
uint64_t *rd = vd;
151
uint64_t *rn = vn;
152
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
153
154
rd[0] = d.l[0];
155
rd[1] = d.l[1];
156
+
157
+ clear_tail_16(vd, desc);
158
}
159
160
/*
161
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-a64.c
164
+++ b/target/arm/translate-a64.c
165
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
166
int rm = extract32(insn, 16, 5);
167
int rn = extract32(insn, 5, 5);
168
int rd = extract32(insn, 0, 5);
169
- CryptoThreeOpFn *genfn;
170
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
171
+ gen_helper_gvec_3 *genfn;
172
bool feature;
173
174
if (size != 0) {
175
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
176
return;
177
}
45
}
178
46
179
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
47
- qd = mve_qreg_ptr(a->qd);
180
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
48
- qn = mve_qreg_ptr(a->qn);
181
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
49
- qm = mve_qreg_ptr(a->qm);
182
-
50
- fn(cpu_env, qd, qn, qm);
183
if (genfn) {
51
- tcg_temp_free_ptr(qd);
184
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
52
- tcg_temp_free_ptr(qn);
185
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
53
- tcg_temp_free_ptr(qm);
186
} else {
54
+ if (vecfn && mve_no_predication(s)) {
187
TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
55
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qn),
188
+ TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
56
+ mve_qreg_offset(a->qm), 16, 16);
189
+ TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
57
+ } else {
190
+ TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
58
+ qd = mve_qreg_ptr(a->qd);
191
59
+ qn = mve_qreg_ptr(a->qn);
192
gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
60
+ qm = mve_qreg_ptr(a->qm);
193
tcg_rm_ptr, tcg_opcode);
61
+ fn(cpu_env, qd, qn, qm);
194
- tcg_temp_free_i32(tcg_opcode);
62
+ tcg_temp_free_ptr(qd);
195
- }
63
+ tcg_temp_free_ptr(qn);
196
64
+ tcg_temp_free_ptr(qm);
197
- tcg_temp_free_ptr(tcg_rd_ptr);
198
- tcg_temp_free_ptr(tcg_rn_ptr);
199
- tcg_temp_free_ptr(tcg_rm_ptr);
200
+ tcg_temp_free_i32(tcg_opcode);
201
+ tcg_temp_free_ptr(tcg_rd_ptr);
202
+ tcg_temp_free_ptr(tcg_rn_ptr);
203
+ tcg_temp_free_ptr(tcg_rm_ptr);
204
+ }
65
+ }
205
}
66
mve_update_eci(s);
206
207
/* Crypto two-reg SHA
208
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
209
int opcode = extract32(insn, 12, 5);
210
int rn = extract32(insn, 5, 5);
211
int rd = extract32(insn, 0, 5);
212
- CryptoTwoOpFn *genfn;
213
+ gen_helper_gvec_2 *genfn;
214
bool feature;
215
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
216
217
if (size != 0) {
218
unallocated_encoding(s);
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
220
if (!fp_access_check(s)) {
221
return;
222
}
223
-
224
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
225
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
226
-
227
- genfn(tcg_rd_ptr, tcg_rn_ptr);
228
-
229
- tcg_temp_free_ptr(tcg_rd_ptr);
230
- tcg_temp_free_ptr(tcg_rn_ptr);
231
+ gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
232
}
233
234
static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
235
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
236
index XXXXXXX..XXXXXXX 100644
237
--- a/target/arm/translate-neon.inc.c
238
+++ b/target/arm/translate-neon.inc.c
239
@@ -XXX,XX +XXX,XX @@ DO_3SAME_CMP(VCGE_S, TCG_COND_GE)
240
DO_3SAME_CMP(VCGE_U, TCG_COND_GEU)
241
DO_3SAME_CMP(VCEQ, TCG_COND_EQ)
242
243
-static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
244
- uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
245
-{
246
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz,
247
- 0, gen_helper_gvec_pmul_b);
248
-}
249
+#define WRAP_OOL_FN(WRAPNAME, FUNC) \
250
+ static void WRAPNAME(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs, \
251
+ uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz) \
252
+ { \
253
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, 0, FUNC); \
254
+ }
255
+
256
+WRAP_OOL_FN(gen_VMUL_p_3s, gen_helper_gvec_pmul_b)
257
258
static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
259
{
260
@@ -XXX,XX +XXX,XX @@ static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
261
return true;
67
return true;
262
}
68
}
263
69
264
-static bool trans_SHA256H_3s(DisasContext *s, arg_SHA256H_3s *a)
70
-#define DO_LOGIC(INSN, HELPER) \
265
-{
71
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn *fn)
266
- TCGv_ptr ptr1, ptr2, ptr3;
72
+{
267
-
73
+ return do_2op_vec(s, a, fn, NULL);
268
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
74
+}
269
- !dc_isar_feature(aa32_sha2, s)) {
75
+
270
- return false;
76
+#define DO_LOGIC(INSN, HELPER, VECFN) \
271
+#define DO_SHA2(NAME, FUNC) \
77
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
272
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
78
{ \
273
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
79
- return do_2op(s, a, HELPER); \
274
+ { \
80
+ return do_2op_vec(s, a, HELPER, VECFN); \
275
+ if (!dc_isar_feature(aa32_sha2, s)) { \
276
+ return false; \
277
+ } \
278
+ return do_3same(s, a, gen_##NAME##_3s); \
279
}
81
}
280
82
281
- /* UNDEF accesses to D16-D31 if they don't exist. */
83
-DO_LOGIC(VAND, gen_helper_mve_vand)
282
- if (!dc_isar_feature(aa32_simd_r32, s) &&
84
-DO_LOGIC(VBIC, gen_helper_mve_vbic)
283
- ((a->vd | a->vn | a->vm) & 0x10)) {
85
-DO_LOGIC(VORR, gen_helper_mve_vorr)
284
- return false;
86
-DO_LOGIC(VORN, gen_helper_mve_vorn)
285
- }
87
-DO_LOGIC(VEOR, gen_helper_mve_veor)
286
-
88
+DO_LOGIC(VAND, gen_helper_mve_vand, tcg_gen_gvec_and)
287
- if ((a->vn | a->vm | a->vd) & 1) {
89
+DO_LOGIC(VBIC, gen_helper_mve_vbic, tcg_gen_gvec_andc)
288
- return false;
90
+DO_LOGIC(VORR, gen_helper_mve_vorr, tcg_gen_gvec_or)
289
- }
91
+DO_LOGIC(VORN, gen_helper_mve_vorn, tcg_gen_gvec_orc)
290
-
92
+DO_LOGIC(VEOR, gen_helper_mve_veor, tcg_gen_gvec_xor)
291
- if (!vfp_access_check(s)) {
93
292
- return true;
94
static bool trans_VPSEL(DisasContext *s, arg_2op *a)
293
- }
95
{
294
-
295
- ptr1 = vfp_reg_ptr(true, a->vd);
296
- ptr2 = vfp_reg_ptr(true, a->vn);
297
- ptr3 = vfp_reg_ptr(true, a->vm);
298
- gen_helper_crypto_sha256h(ptr1, ptr2, ptr3);
299
- tcg_temp_free_ptr(ptr1);
300
- tcg_temp_free_ptr(ptr2);
301
- tcg_temp_free_ptr(ptr3);
302
-
303
- return true;
304
-}
305
-
306
-static bool trans_SHA256H2_3s(DisasContext *s, arg_SHA256H2_3s *a)
307
-{
308
- TCGv_ptr ptr1, ptr2, ptr3;
309
-
310
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
311
- !dc_isar_feature(aa32_sha2, s)) {
312
- return false;
313
- }
314
-
315
- /* UNDEF accesses to D16-D31 if they don't exist. */
316
- if (!dc_isar_feature(aa32_simd_r32, s) &&
317
- ((a->vd | a->vn | a->vm) & 0x10)) {
318
- return false;
319
- }
320
-
321
- if ((a->vn | a->vm | a->vd) & 1) {
322
- return false;
323
- }
324
-
325
- if (!vfp_access_check(s)) {
326
- return true;
327
- }
328
-
329
- ptr1 = vfp_reg_ptr(true, a->vd);
330
- ptr2 = vfp_reg_ptr(true, a->vn);
331
- ptr3 = vfp_reg_ptr(true, a->vm);
332
- gen_helper_crypto_sha256h2(ptr1, ptr2, ptr3);
333
- tcg_temp_free_ptr(ptr1);
334
- tcg_temp_free_ptr(ptr2);
335
- tcg_temp_free_ptr(ptr3);
336
-
337
- return true;
338
-}
339
-
340
-static bool trans_SHA256SU1_3s(DisasContext *s, arg_SHA256SU1_3s *a)
341
-{
342
- TCGv_ptr ptr1, ptr2, ptr3;
343
-
344
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
345
- !dc_isar_feature(aa32_sha2, s)) {
346
- return false;
347
- }
348
-
349
- /* UNDEF accesses to D16-D31 if they don't exist. */
350
- if (!dc_isar_feature(aa32_simd_r32, s) &&
351
- ((a->vd | a->vn | a->vm) & 0x10)) {
352
- return false;
353
- }
354
-
355
- if ((a->vn | a->vm | a->vd) & 1) {
356
- return false;
357
- }
358
-
359
- if (!vfp_access_check(s)) {
360
- return true;
361
- }
362
-
363
- ptr1 = vfp_reg_ptr(true, a->vd);
364
- ptr2 = vfp_reg_ptr(true, a->vn);
365
- ptr3 = vfp_reg_ptr(true, a->vm);
366
- gen_helper_crypto_sha256su1(ptr1, ptr2, ptr3);
367
- tcg_temp_free_ptr(ptr1);
368
- tcg_temp_free_ptr(ptr2);
369
- tcg_temp_free_ptr(ptr3);
370
-
371
- return true;
372
-}
373
+DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
374
+DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
375
+DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
376
377
#define DO_3SAME_64(INSN, FUNC) \
378
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
379
diff --git a/target/arm/translate.c b/target/arm/translate.c
380
index XXXXXXX..XXXXXXX 100644
381
--- a/target/arm/translate.c
382
+++ b/target/arm/translate.c
383
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
384
int vec_size;
385
uint32_t imm;
386
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
387
- TCGv_ptr ptr1, ptr2;
388
+ TCGv_ptr ptr1;
389
TCGv_i64 tmp64;
390
391
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
392
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
393
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
394
return 1;
395
}
396
- ptr1 = vfp_reg_ptr(true, rd);
397
- ptr2 = vfp_reg_ptr(true, rm);
398
-
399
- gen_helper_crypto_sha1h(ptr1, ptr2);
400
-
401
- tcg_temp_free_ptr(ptr1);
402
- tcg_temp_free_ptr(ptr2);
403
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
404
+ gen_helper_crypto_sha1h);
405
break;
406
case NEON_2RM_SHA1SU1:
407
if ((rm | rd) & 1) {
408
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
409
} else if (!dc_isar_feature(aa32_sha1, s)) {
410
return 1;
411
}
412
- ptr1 = vfp_reg_ptr(true, rd);
413
- ptr2 = vfp_reg_ptr(true, rm);
414
- if (q) {
415
- gen_helper_crypto_sha256su0(ptr1, ptr2);
416
- } else {
417
- gen_helper_crypto_sha1su1(ptr1, ptr2);
418
- }
419
- tcg_temp_free_ptr(ptr1);
420
- tcg_temp_free_ptr(ptr2);
421
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
422
+ q ? gen_helper_crypto_sha256su0
423
+ : gen_helper_crypto_sha1su1);
424
break;
425
-
426
case NEON_2RM_VMVN:
427
tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
428
break;
429
--
96
--
430
2.20.1
97
2.20.1
431
98
432
99
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Optimize MVE arithmetic ops when we have a TCG
2
vector operation we can use.
2
3
3
Rather than passing an opcode to a helper, fully decode the
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
operation at translate time. Use clear_tail_16 to zap the
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
balance of the SVE register with the AdvSIMD write.
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210913095440.13462-6-peter.maydell@linaro.org
8
---
9
target/arm/translate-mve.c | 20 +++++++++++---------
10
1 file changed, 11 insertions(+), 9 deletions(-)
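(Reviewer note: condensed from the diff below, the end result in
translate-mve.c is a mapping like this; insns with no suitable gvec
expander keep the helper-only DO_2OP form.)

DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add)
DO_2OP_VEC(VSUB, vsub, tcg_gen_gvec_sub)
DO_2OP_VEC(VMUL, vmul, tcg_gen_gvec_mul)
DO_2OP_VEC(VMAX_S, vmaxs, tcg_gen_gvec_smax)
DO_2OP_VEC(VMIN_U, vminu, tcg_gen_gvec_umin)
DO_2OP(VMULH_S, vmulhs)      /* no inline equivalent: helper only */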
6
11
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
8
Message-id: 20200514212831.31248-7-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.h | 5 ++++-
13
target/arm/crypto_helper.c | 24 ++++++++++++++++++------
14
target/arm/translate-a64.c | 21 +++++----------------
15
3 files changed, 27 insertions(+), 23 deletions(-)
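(Reviewer note: a condensed view of the new decode path. The helper
table and the gen_gvec_op3_ool() call are as in the patch; the wrapper
function name here is illustrative only.)

static void gen_sm3tt_example(DisasContext *s, int rd, int rn, int rm,
                              int imm2, int opcode)
{
    static gen_helper_gvec_3 * const fns[4] = {
        gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
        gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
    };
    /* The opcode is fully decoded at translate time: pick the helper
     * here rather than passing an opcode value to a single helper at
     * run time; imm2 travels in the descriptor's simd_data() field.
     */
    gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
}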
16
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
14
--- a/target/arm/translate-mve.c
20
+++ b/target/arm/helper.h
15
+++ b/target/arm/translate-mve.c
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
22
DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
17
return do_2op(s, a, gen_helper_mve_vpsel);
23
void, ptr, ptr, ptr, i32)
24
25
-DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
26
+DEF_HELPER_FLAGS_4(crypto_sm3tt1a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(crypto_sm3tt1b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(crypto_sm3tt2a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(crypto_sm3tt2b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
31
void, ptr, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
33
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/crypto_helper.c
36
+++ b/target/arm/crypto_helper.c
37
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
38
clear_tail_16(vd, desc);
39
}
18
}
40
19
41
-void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
20
-#define DO_2OP(INSN, FN) \
42
- uint32_t opcode)
21
+#define DO_2OP_VEC(INSN, FN, VECFN) \
43
+static inline void QEMU_ALWAYS_INLINE
22
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
44
+crypto_sm3tt(uint64_t *rd, uint64_t *rn, uint64_t *rm,
23
{ \
45
+ uint32_t desc, uint32_t opcode)
24
static MVEGenTwoOpFn * const fns[] = { \
46
{
25
@@ -XXX,XX +XXX,XX @@ static bool trans_VPSEL(DisasContext *s, arg_2op *a)
47
- uint64_t *rd = vd;
26
gen_helper_mve_##FN##w, \
48
- uint64_t *rn = vn;
27
NULL, \
49
- uint64_t *rm = vm;
28
}; \
50
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
29
- return do_2op(s, a, fns[a->size]); \
51
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
30
+ return do_2op_vec(s, a, fns[a->size], VECFN); \
52
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
53
+ uint32_t imm2 = simd_data(desc);
54
uint32_t t;
55
56
assert(imm2 < 4);
57
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
58
/* SM3TT2B */
59
t = cho(CR_ST_WORD(d, 3), CR_ST_WORD(d, 2), CR_ST_WORD(d, 1));
60
} else {
61
- g_assert_not_reached();
62
+ qemu_build_not_reached();
63
}
31
}
64
32
65
t += CR_ST_WORD(d, 0) + CR_ST_WORD(m, imm2);
33
-DO_2OP(VADD, vadd)
66
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
34
-DO_2OP(VSUB, vsub)
67
35
-DO_2OP(VMUL, vmul)
68
rd[0] = d.l[0];
36
+#define DO_2OP(INSN, FN) DO_2OP_VEC(INSN, FN, NULL)
69
rd[1] = d.l[1];
70
+
37
+
71
+ clear_tail_16(rd, desc);
38
+DO_2OP_VEC(VADD, vadd, tcg_gen_gvec_add)
72
}
39
+DO_2OP_VEC(VSUB, vsub, tcg_gen_gvec_sub)
73
40
+DO_2OP_VEC(VMUL, vmul, tcg_gen_gvec_mul)
74
+#define DO_SM3TT(NAME, OPCODE) \
41
DO_2OP(VMULH_S, vmulhs)
75
+ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
42
DO_2OP(VMULH_U, vmulhu)
76
+ { crypto_sm3tt(vd, vn, vm, desc, OPCODE); }
43
DO_2OP(VRMULH_S, vrmulhs)
77
+
44
DO_2OP(VRMULH_U, vrmulhu)
78
+DO_SM3TT(crypto_sm3tt1a, 0)
45
-DO_2OP(VMAX_S, vmaxs)
79
+DO_SM3TT(crypto_sm3tt1b, 1)
46
-DO_2OP(VMAX_U, vmaxu)
80
+DO_SM3TT(crypto_sm3tt2a, 2)
47
-DO_2OP(VMIN_S, vmins)
81
+DO_SM3TT(crypto_sm3tt2b, 3)
48
-DO_2OP(VMIN_U, vminu)
82
+
49
+DO_2OP_VEC(VMAX_S, vmaxs, tcg_gen_gvec_smax)
83
+#undef DO_SM3TT
50
+DO_2OP_VEC(VMAX_U, vmaxu, tcg_gen_gvec_umax)
84
+
51
+DO_2OP_VEC(VMIN_S, vmins, tcg_gen_gvec_smin)
85
static uint8_t const sm4_sbox[] = {
52
+DO_2OP_VEC(VMIN_U, vminu, tcg_gen_gvec_umin)
86
0xd6, 0x90, 0xe9, 0xfe, 0xcc, 0xe1, 0x3d, 0xb7,
53
DO_2OP(VABD_S, vabds)
87
0x16, 0xb6, 0x14, 0xc2, 0x28, 0xfb, 0x2c, 0x05,
54
DO_2OP(VABD_U, vabdu)
88
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
55
DO_2OP(VHADD_S, vhadds)
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate-a64.c
91
+++ b/target/arm/translate-a64.c
92
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
93
*/
94
static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
95
{
96
+ static gen_helper_gvec_3 * const fns[4] = {
97
+ gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
98
+ gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
99
+ };
100
int opcode = extract32(insn, 10, 2);
101
int imm2 = extract32(insn, 12, 2);
102
int rm = extract32(insn, 16, 5);
103
int rn = extract32(insn, 5, 5);
104
int rd = extract32(insn, 0, 5);
105
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
106
- TCGv_i32 tcg_imm2, tcg_opcode;
107
108
if (!dc_isar_feature(aa64_sm3, s)) {
109
unallocated_encoding(s);
110
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
111
return;
112
}
113
114
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
115
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
116
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
117
- tcg_imm2 = tcg_const_i32(imm2);
118
- tcg_opcode = tcg_const_i32(opcode);
119
-
120
- gen_helper_crypto_sm3tt(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr, tcg_imm2,
121
- tcg_opcode);
122
-
123
- tcg_temp_free_ptr(tcg_rd_ptr);
124
- tcg_temp_free_ptr(tcg_rn_ptr);
125
- tcg_temp_free_ptr(tcg_rm_ptr);
126
- tcg_temp_free_i32(tcg_imm2);
127
- tcg_temp_free_i32(tcg_opcode);
128
+ gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
129
}
130
131
/* C3.6 Data processing - SIMD, inc Crypto
132
--
56
--
133
2.20.1
57
2.20.1
134
58
135
59
1
Convert the Neon narrowing shifts where op==8 to decodetree:
1
Optimize the MVE VNEG and VABS insns by using TCG
2
* VSHRN
2
vector ops when possible.
3
* VRSHRN
4
* VQSHRUN
5
* VQRSHRUN
6
3
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200522145520.6778-6-peter.maydell@linaro.org
7
Message-id: 20210913095440.13462-7-peter.maydell@linaro.org
10
---
8
---
11
target/arm/neon-dp.decode | 27 ++++++
9
target/arm/translate-mve.c | 32 ++++++++++++++++++++++----------
12
target/arm/translate-neon.inc.c | 167 ++++++++++++++++++++++++++++++++
10
1 file changed, 22 insertions(+), 10 deletions(-)
13
target/arm/translate.c | 1 +
14
3 files changed, 195 insertions(+)
15
11
16
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/neon-dp.decode
14
--- a/target/arm/translate-mve.c
19
+++ b/target/arm/neon-dp.decode
15
+++ b/target/arm/translate-mve.c
20
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
16
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
21
@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
17
return true;
22
&2reg_shift vm=%vm_dp vd=%vd_dp size=0
18
}
23
19
24
+# Narrowing right shifts: here the Q bit is part of the opcode decode
20
-static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
25
+@2reg_shrn_d .... ... . . . 1 ..... .... .... 0 . . . .... \
21
+static bool do_1op_vec(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn,
26
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3 q=0 \
22
+ GVecGen2Fn vecfn)
27
+ shift=%neon_rshift_i5
23
{
28
+@2reg_shrn_s .... ... . . . 01 .... .... .... 0 . . . .... \
24
TCGv_ptr qd, qm;
29
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0 \
25
30
+ shift=%neon_rshift_i4
26
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
31
+@2reg_shrn_h .... ... . . . 001 ... .... .... 0 . . . .... \
27
return true;
32
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
28
}
33
+ shift=%neon_rshift_i3
29
34
+
30
- qd = mve_qreg_ptr(a->qd);
35
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
31
- qm = mve_qreg_ptr(a->qm);
36
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
32
- fn(cpu_env, qd, qm);
37
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
33
- tcg_temp_free_ptr(qd);
38
@@ -XXX,XX +XXX,XX @@ VQSHL_U_64_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
34
- tcg_temp_free_ptr(qm);
39
VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
35
+ if (vecfn && mve_no_predication(s)) {
40
VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm), 16, 16);
41
VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
37
+ } else {
42
+
38
+ qd = mve_qreg_ptr(a->qd);
43
+VSHRN_64_2sh 1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_d
39
+ qm = mve_qreg_ptr(a->qm);
44
+VSHRN_32_2sh 1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_s
40
+ fn(cpu_env, qd, qm);
45
+VSHRN_16_2sh 1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
41
+ tcg_temp_free_ptr(qd);
46
+
42
+ tcg_temp_free_ptr(qm);
47
+VRSHRN_64_2sh 1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
43
+ }
48
+VRSHRN_32_2sh 1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
44
mve_update_eci(s);
49
+VRSHRN_16_2sh 1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
45
return true;
50
+
46
}
51
+VQSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_d
47
52
+VQSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_s
48
-#define DO_1OP(INSN, FN) \
53
+VQSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
49
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
54
+
55
+VQRSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
56
+VQRSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
57
+VQRSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
58
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.inc.c
61
+++ b/target/arm/translate-neon.inc.c
62
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
63
DO_2SHIFT_ENV(VQSHLU, qshlu_s)
64
DO_2SHIFT_ENV(VQSHL_U, qshl_u)
65
DO_2SHIFT_ENV(VQSHL_S, qshl_s)
66
+
67
+static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
68
+ NeonGenTwo64OpFn *shiftfn,
69
+ NeonGenNarrowEnvFn *narrowfn)
70
+{
50
+{
71
+ /* 2-reg-and-shift narrowing-shift operations, size == 3 case */
51
+ return do_1op_vec(s, a, fn, NULL);
72
+ TCGv_i64 constimm, rm1, rm2;
73
+ TCGv_i32 rd;
74
+
75
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
76
+ return false;
77
+ }
78
+
79
+ /* UNDEF accesses to D16-D31 if they don't exist. */
80
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
81
+ ((a->vd | a->vm) & 0x10)) {
82
+ return false;
83
+ }
84
+
85
+ if (a->vm & 1) {
86
+ return false;
87
+ }
88
+
89
+ if (!vfp_access_check(s)) {
90
+ return true;
91
+ }
92
+
93
+ /*
94
+ * This is always a right shift, and the shiftfn is always a
95
+ * left-shift helper, which thus needs the negated shift count.
96
+ */
97
+ constimm = tcg_const_i64(-a->shift);
98
+ rm1 = tcg_temp_new_i64();
99
+ rm2 = tcg_temp_new_i64();
100
+
101
+ /* Load both inputs first to avoid potential overwrite if rm == rd */
102
+ neon_load_reg64(rm1, a->vm);
103
+ neon_load_reg64(rm2, a->vm + 1);
104
+
105
+ shiftfn(rm1, rm1, constimm);
106
+ rd = tcg_temp_new_i32();
107
+ narrowfn(rd, cpu_env, rm1);
108
+ neon_store_reg(a->vd, 0, rd);
109
+
110
+ shiftfn(rm2, rm2, constimm);
111
+ rd = tcg_temp_new_i32();
112
+ narrowfn(rd, cpu_env, rm2);
113
+ neon_store_reg(a->vd, 1, rd);
114
+
115
+ tcg_temp_free_i64(rm1);
116
+ tcg_temp_free_i64(rm2);
117
+ tcg_temp_free_i64(constimm);
118
+
119
+ return true;
120
+}
52
+}
121
+
53
+
122
+static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
54
+#define DO_1OP_VEC(INSN, FN, VECFN) \
123
+ NeonGenTwoOpFn *shiftfn,
55
static bool trans_##INSN(DisasContext *s, arg_1op *a) \
124
+ NeonGenNarrowEnvFn *narrowfn)
56
{ \
125
+{
57
static MVEGenOneOpFn * const fns[] = { \
126
+ /* 2-reg-and-shift narrowing-shift operations, size < 3 case */
58
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
127
+ TCGv_i32 constimm, rm1, rm2, rm3, rm4;
59
gen_helper_mve_##FN##w, \
128
+ TCGv_i64 rtmp;
60
NULL, \
129
+ uint32_t imm;
61
}; \
62
- return do_1op(s, a, fns[a->size]); \
63
+ return do_1op_vec(s, a, fns[a->size], VECFN); \
64
}
65
66
+#define DO_1OP(INSN, FN) DO_1OP_VEC(INSN, FN, NULL)
130
+
67
+
131
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
68
DO_1OP(VCLZ, vclz)
132
+ return false;
69
DO_1OP(VCLS, vcls)
133
+ }
70
-DO_1OP(VABS, vabs)
134
+
71
-DO_1OP(VNEG, vneg)
135
+ /* UNDEF accesses to D16-D31 if they don't exist. */
72
+DO_1OP_VEC(VABS, vabs, tcg_gen_gvec_abs)
136
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
73
+DO_1OP_VEC(VNEG, vneg, tcg_gen_gvec_neg)
137
+ ((a->vd | a->vm) & 0x10)) {
74
DO_1OP(VQABS, vqabs)
138
+ return false;
75
DO_1OP(VQNEG, vqneg)
139
+ }
76
DO_1OP(VMAXA, vmaxa)
140
+
141
+ if (a->vm & 1) {
142
+ return false;
143
+ }
144
+
145
+ if (!vfp_access_check(s)) {
146
+ return true;
147
+ }
148
+
149
+ /*
150
+ * This is always a right shift, and the shiftfn is always a
151
+ * left-shift helper, which thus needs the negated shift count
152
+ * duplicated into each lane of the immediate value.
153
+ */
154
+ if (a->size == 1) {
155
+ imm = (uint16_t)(-a->shift);
156
+ imm |= imm << 16;
157
+ } else {
158
+ /* size == 2 */
159
+ imm = -a->shift;
160
+ }
161
+ constimm = tcg_const_i32(imm);
162
+
163
+ /* Load all inputs first to avoid potential overwrite */
164
+ rm1 = neon_load_reg(a->vm, 0);
165
+ rm2 = neon_load_reg(a->vm, 1);
166
+ rm3 = neon_load_reg(a->vm + 1, 0);
167
+ rm4 = neon_load_reg(a->vm + 1, 1);
168
+ rtmp = tcg_temp_new_i64();
169
+
170
+ shiftfn(rm1, rm1, constimm);
171
+ shiftfn(rm2, rm2, constimm);
172
+
173
+ tcg_gen_concat_i32_i64(rtmp, rm1, rm2);
174
+ tcg_temp_free_i32(rm2);
175
+
176
+ narrowfn(rm1, cpu_env, rtmp);
177
+ neon_store_reg(a->vd, 0, rm1);
178
+
179
+ shiftfn(rm3, rm3, constimm);
180
+ shiftfn(rm4, rm4, constimm);
181
+ tcg_temp_free_i32(constimm);
182
+
183
+ tcg_gen_concat_i32_i64(rtmp, rm3, rm4);
184
+ tcg_temp_free_i32(rm4);
185
+
186
+ narrowfn(rm3, cpu_env, rtmp);
187
+ tcg_temp_free_i64(rtmp);
188
+ neon_store_reg(a->vd, 1, rm3);
189
+ return true;
190
+}
191
+
192
+#define DO_2SN_64(INSN, FUNC, NARROWFUNC) \
193
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
194
+ { \
195
+ return do_2shift_narrow_64(s, a, FUNC, NARROWFUNC); \
196
+ }
197
+#define DO_2SN_32(INSN, FUNC, NARROWFUNC) \
198
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
199
+ { \
200
+ return do_2shift_narrow_32(s, a, FUNC, NARROWFUNC); \
201
+ }
202
+
203
+static void gen_neon_narrow_u32(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
204
+{
205
+ tcg_gen_extrl_i64_i32(dest, src);
206
+}
207
+
208
+static void gen_neon_narrow_u16(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
209
+{
210
+ gen_helper_neon_narrow_u16(dest, src);
211
+}
212
+
213
+static void gen_neon_narrow_u8(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
214
+{
215
+ gen_helper_neon_narrow_u8(dest, src);
216
+}
217
+
218
+DO_2SN_64(VSHRN_64, gen_ushl_i64, gen_neon_narrow_u32)
219
+DO_2SN_32(VSHRN_32, gen_ushl_i32, gen_neon_narrow_u16)
220
+DO_2SN_32(VSHRN_16, gen_helper_neon_shl_u16, gen_neon_narrow_u8)
221
+
222
+DO_2SN_64(VRSHRN_64, gen_helper_neon_rshl_u64, gen_neon_narrow_u32)
223
+DO_2SN_32(VRSHRN_32, gen_helper_neon_rshl_u32, gen_neon_narrow_u16)
224
+DO_2SN_32(VRSHRN_16, gen_helper_neon_rshl_u16, gen_neon_narrow_u8)
225
+
226
+DO_2SN_64(VQSHRUN_64, gen_sshl_i64, gen_helper_neon_unarrow_sat32)
227
+DO_2SN_32(VQSHRUN_32, gen_sshl_i32, gen_helper_neon_unarrow_sat16)
228
+DO_2SN_32(VQSHRUN_16, gen_helper_neon_shl_s16, gen_helper_neon_unarrow_sat8)
229
+
230
+DO_2SN_64(VQRSHRUN_64, gen_helper_neon_rshl_s64, gen_helper_neon_unarrow_sat32)
231
+DO_2SN_32(VQRSHRUN_32, gen_helper_neon_rshl_s32, gen_helper_neon_unarrow_sat16)
232
+DO_2SN_32(VQRSHRUN_16, gen_helper_neon_rshl_s16, gen_helper_neon_unarrow_sat8)
233
diff --git a/target/arm/translate.c b/target/arm/translate.c
234
index XXXXXXX..XXXXXXX 100644
235
--- a/target/arm/translate.c
236
+++ b/target/arm/translate.c
237
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
238
case 5: /* VSHL, VSLI */
239
case 6: /* VQSHLU */
240
case 7: /* VQSHL */
241
+ case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
242
return 1; /* handled by decodetree */
243
default:
244
break;
245
--
77
--
246
2.20.1
78
2.20.1
247
79
248
80
1
Convert the VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree.
1
Optimize the MVE VDUP insns by using TCG vector ops when possible.
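(Reviewer note: sketched from the diff below, the VDUP case follows
the same pattern: a single inline gvec broadcast when no predication
applies, otherwise the existing helper.)

if (mve_no_predication(s)) {
    tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
} else {
    TCGv_ptr qd = mve_qreg_ptr(a->qd);
    tcg_gen_dup_i32(a->size, rt, rt);
    gen_helper_mve_vdup(cpu_env, qd, rt);
    tcg_temp_free_ptr(qd);
}
tcg_temp_free_i32(rt);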
2
(These are the last instructions in the group that are vectorized;
3
the rest all require looping over each element.)
4
2
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-4-peter.maydell@linaro.org
5
Message-id: 20210913095440.13462-8-peter.maydell@linaro.org
8
---
6
---
9
target/arm/neon-dp.decode | 35 ++++++++++++++++++++++
7
target/arm/translate-mve.c | 12 ++++++++----
10
target/arm/translate-neon.inc.c | 7 +++++
8
1 file changed, 8 insertions(+), 4 deletions(-)
11
target/arm/translate.c | 52 +++------------------------------
12
3 files changed, 46 insertions(+), 48 deletions(-)
13
9
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
15
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
12
--- a/target/arm/translate-mve.c
17
+++ b/target/arm/neon-dp.decode
13
+++ b/target/arm/translate-mve.c
18
@@ -XXX,XX +XXX,XX @@ VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
19
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
15
return true;
20
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
16
}
21
17
22
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
18
- qd = mve_qreg_ptr(a->qd);
23
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
19
rt = load_reg(s, a->rt);
24
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
20
- tcg_gen_dup_i32(a->size, rt, rt);
25
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
21
- gen_helper_mve_vdup(cpu_env, qd, rt);
26
+
22
- tcg_temp_free_ptr(qd);
27
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
23
+ if (mve_no_predication(s)) {
28
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
24
+ tcg_gen_gvec_dup_i32(a->size, mve_qreg_offset(a->qd), 16, 16, rt);
29
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
25
+ } else {
30
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
26
+ qd = mve_qreg_ptr(a->qd);
31
+
27
+ tcg_gen_dup_i32(a->size, rt, rt);
32
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
28
+ gen_helper_mve_vdup(cpu_env, qd, rt);
33
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
29
+ tcg_temp_free_ptr(qd);
34
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
30
+ }
35
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
31
tcg_temp_free_i32(rt);
36
+
32
mve_update_eci(s);
37
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
33
return true;
38
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
39
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
40
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
41
+
42
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
43
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
44
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
45
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
46
+
47
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
48
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
49
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
50
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
51
+
52
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_d
53
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_s
54
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_h
55
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_b
56
+
57
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
58
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
59
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
60
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/translate-neon.inc.c
63
+++ b/target/arm/translate-neon.inc.c
64
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
65
66
DO_2SH(VSHL, tcg_gen_gvec_shli)
67
DO_2SH(VSLI, gen_gvec_sli)
68
+DO_2SH(VSRI, gen_gvec_sri)
69
+DO_2SH(VSRA_S, gen_gvec_ssra)
70
+DO_2SH(VSRA_U, gen_gvec_usra)
71
+DO_2SH(VRSHR_S, gen_gvec_srshr)
72
+DO_2SH(VRSHR_U, gen_gvec_urshr)
73
+DO_2SH(VRSRA_S, gen_gvec_srsra)
74
+DO_2SH(VRSRA_U, gen_gvec_ursra)
75
76
static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
77
{
78
diff --git a/target/arm/translate.c b/target/arm/translate.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate.c
81
+++ b/target/arm/translate.c
82
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
83
84
switch (op) {
85
case 0: /* VSHR */
86
+ case 1: /* VSRA */
87
+ case 2: /* VRSHR */
88
+ case 3: /* VRSRA */
89
+ case 4: /* VSRI */
90
case 5: /* VSHL, VSLI */
91
return 1; /* handled by decodetree */
92
default:
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
94
shift = shift - (1 << (size + 3));
95
}
96
97
- switch (op) {
98
- case 1: /* VSRA */
99
- /* Right shift comes here negative. */
100
- shift = -shift;
101
- if (u) {
102
- gen_gvec_usra(size, rd_ofs, rm_ofs, shift,
103
- vec_size, vec_size);
104
- } else {
105
- gen_gvec_ssra(size, rd_ofs, rm_ofs, shift,
106
- vec_size, vec_size);
107
- }
108
- return 0;
109
-
110
- case 2: /* VRSHR */
111
- /* Right shift comes here negative. */
112
- shift = -shift;
113
- if (u) {
114
- gen_gvec_urshr(size, rd_ofs, rm_ofs, shift,
115
- vec_size, vec_size);
116
- } else {
117
- gen_gvec_srshr(size, rd_ofs, rm_ofs, shift,
118
- vec_size, vec_size);
119
- }
120
- return 0;
121
-
122
- case 3: /* VRSRA */
123
- /* Right shift comes here negative. */
124
- shift = -shift;
125
- if (u) {
126
- gen_gvec_ursra(size, rd_ofs, rm_ofs, shift,
127
- vec_size, vec_size);
128
- } else {
129
- gen_gvec_srsra(size, rd_ofs, rm_ofs, shift,
130
- vec_size, vec_size);
131
- }
132
- return 0;
133
-
134
- case 4: /* VSRI */
135
- if (!u) {
136
- return 1;
137
- }
138
- /* Right shift comes here negative. */
139
- shift = -shift;
140
- gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
141
- vec_size, vec_size);
142
- return 0;
143
- }
144
-
145
if (size == 3) {
146
count = q + 1;
147
} else {
148
--
34
--
149
2.20.1
35
2.20.1
150
36
151
37
1
From: Eden Mikitas <e.mikitas@gmail.com>
1
Optimize the MVE VMVN insn by using TCG vector ops when possible.
2
2
3
The while statement in question only checked whether tx_burst was non-zero.
4
tx_burst is a signed int, which is assigned the value put by the
5
guest driver in ECSPI_CONREG. The burst length can be anywhere
6
between 1 and 4096, and since tx_burst is always decremented by 8
7
it can go negative without ever reaching zero, causing an infinite loop.
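(Reviewer note: a concrete illustration of the failure mode; the burst
length here is made up rather than taken from a real trace.)

int tx_burst = 12;           /* any value that is not a multiple of 8 */
while (tx_burst) {           /* old condition */
    /* ... shift out up to 8 bits ... */
    tx_burst -= 8;           /* 12 -> 4 -> -4 -> -12 -> ... never 0 */
}
/* With the corrected condition "while (tx_burst > 0)" the loop stops
 * once fewer than 8 bits remain.
 */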
8
9
Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210913095440.13462-9-peter.maydell@linaro.org
12
---
6
---
13
hw/ssi/imx_spi.c | 2 +-
7
target/arm/translate-mve.c | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
8
1 file changed, 1 insertion(+), 1 deletion(-)
15
9
16
diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
17
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/ssi/imx_spi.c
12
--- a/target/arm/translate-mve.c
19
+++ b/hw/ssi/imx_spi.c
13
+++ b/target/arm/translate-mve.c
20
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
21
15
22
rx = 0;
16
static bool trans_VMVN(DisasContext *s, arg_1op *a)
23
17
{
24
- while (tx_burst) {
18
- return do_1op(s, a, gen_helper_mve_vmvn);
25
+ while (tx_burst > 0) {
19
+ return do_1op_vec(s, a, gen_helper_mve_vmvn, tcg_gen_gvec_not);
26
uint8_t byte = tx & 0xff;
20
}
27
21
28
DPRINTF("writing 0x%02x\n", (uint32_t)byte);
22
static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
29
--
23
--
30
2.20.1
24
2.20.1
31
25
32
26
Deleted patch
1
From: Eden Mikitas <e.mikitas@gmail.com>
2
1
3
When inserting the value retrieved (rx) from the SPI slave, rx is pushed to
4
rx_fifo after being cast to uint8_t. rx_fifo is a fifo32, and the rx
5
register the driver uses is also 32 bits wide. This zeroes the 24 most
6
significant bits of rx. This proved problematic with devices that expect to
7
use the whole 32 bits of the rx register.
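A one-line standalone illustration of the truncation the old cast caused
(hypothetical value, not the device code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t rx = 0x12345678;                   /* full 32-bit word from the slave */
        uint32_t pushed = (uint8_t)rx;              /* old code: only 0x78 survives */
        printf("0x%08x -> 0x%08x\n", rx, pushed);   /* 0x12345678 -> 0x00000078 */
        return 0;
    }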
8
9
Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/ssi/imx_spi.c | 2 +-
14
1 file changed, 1 insertion(+), 1 deletion(-)
15
16
diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/ssi/imx_spi.c
19
+++ b/hw/ssi/imx_spi.c
20
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
21
if (fifo32_is_full(&s->rx_fifo)) {
22
s->regs[ECSPI_STATREG] |= ECSPI_STATREG_RO;
23
} else {
24
- fifo32_push(&s->rx_fifo, (uint8_t)rx);
25
+ fifo32_push(&s->rx_fifo, rx);
26
}
27
28
if (s->burst_length <= 0) {
29
--
30
2.20.1
31
32
1
Convert the VQSHLU and VQSHL 2-reg-shift insns to decodetree.
1
Optimize the MVE VSHL and VSHR immediate forms by using TCG vector
2
These are the last of the simple shift-by-immediate insns.
2
ops when possible.
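As a scalar illustration (standalone C model, not QEMU code) of the corner
case that the new do_gvec_shri_s()/do_gvec_shri_u() helpers in the diff below
handle: an immediate right shift by the full element size is a valid encoding,
and must produce all sign bits for the signed form and zero for the unsigned
form, which tcg_gen_gvec_sari()/tcg_gen_gvec_shri() alone do not give you.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t  s = -100;
        uint8_t u = 200;
        int shift = 8;                            /* equal to the element size */

        /* signed: clamp to 7, the effect of the "shift--" in do_gvec_shri_s() */
        int8_t  sres = (int8_t)(s >> (shift >= 8 ? 7 : shift));
        /* unsigned: the result is simply zero, as in do_gvec_shri_u() */
        uint8_t ures = (uint8_t)(shift >= 8 ? 0 : u >> shift);

        printf("signed -> %d, unsigned -> %u\n", sres, (unsigned)ures);  /* -1, 0 */
        return 0;
    }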
3
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200522145520.6778-5-peter.maydell@linaro.org
6
Message-id: 20210913095440.13462-10-peter.maydell@linaro.org
7
---
7
---
8
target/arm/neon-dp.decode | 15 +++++
8
target/arm/translate-mve.c | 83 +++++++++++++++++++++++++++++---------
9
target/arm/translate-neon.inc.c | 108 +++++++++++++++++++++++++++++++
9
1 file changed, 63 insertions(+), 20 deletions(-)
10
target/arm/translate.c | 110 +-------------------------------
11
3 files changed, 126 insertions(+), 107 deletions(-)
12
10
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
13
--- a/target/arm/translate-mve.c
16
+++ b/target/arm/neon-dp.decode
14
+++ b/target/arm/translate-mve.c
17
@@ -XXX,XX +XXX,XX @@ VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
15
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
18
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
16
return do_1imm(s, a, fn);
19
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
17
}
20
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
18
19
-static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
20
- bool negateshift)
21
+static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
22
+ bool negateshift, GVecGen2iFn vecfn)
23
{
24
TCGv_ptr qd, qm;
25
int shift = a->shift;
26
@@ -XXX,XX +XXX,XX @@ static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
27
shift = -shift;
28
}
29
30
- qd = mve_qreg_ptr(a->qd);
31
- qm = mve_qreg_ptr(a->qm);
32
- fn(cpu_env, qd, qm, tcg_constant_i32(shift));
33
- tcg_temp_free_ptr(qd);
34
- tcg_temp_free_ptr(qm);
35
+ if (vecfn && mve_no_predication(s)) {
36
+ vecfn(a->size, mve_qreg_offset(a->qd), mve_qreg_offset(a->qm),
37
+ shift, 16, 16);
38
+ } else {
39
+ qd = mve_qreg_ptr(a->qd);
40
+ qm = mve_qreg_ptr(a->qm);
41
+ fn(cpu_env, qd, qm, tcg_constant_i32(shift));
42
+ tcg_temp_free_ptr(qd);
43
+ tcg_temp_free_ptr(qm);
44
+ }
45
mve_update_eci(s);
46
return true;
47
}
48
49
-#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
50
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
51
- { \
52
- static MVEGenTwoOpShiftFn * const fns[] = { \
53
- gen_helper_mve_##FN##b, \
54
- gen_helper_mve_##FN##h, \
55
- gen_helper_mve_##FN##w, \
56
- NULL, \
57
- }; \
58
- return do_2shift(s, a, fns[a->size], NEGATESHIFT); \
59
+static bool do_2shift(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
60
+ bool negateshift)
61
+{
62
+ return do_2shift_vec(s, a, fn, negateshift, NULL);
63
+}
21
+
64
+
22
+VQSHLU_64_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_d
65
+#define DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, VECFN) \
23
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_s
66
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
24
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_h
67
+ { \
25
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_b
68
+ static MVEGenTwoOpShiftFn * const fns[] = { \
69
+ gen_helper_mve_##FN##b, \
70
+ gen_helper_mve_##FN##h, \
71
+ gen_helper_mve_##FN##w, \
72
+ NULL, \
73
+ }; \
74
+ return do_2shift_vec(s, a, fns[a->size], NEGATESHIFT, VECFN); \
75
}
76
77
-DO_2SHIFT(VSHLI, vshli_u, false)
78
+#define DO_2SHIFT(INSN, FN, NEGATESHIFT) \
79
+ DO_2SHIFT_VEC(INSN, FN, NEGATESHIFT, NULL)
26
+
80
+
27
+VQSHL_S_64_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
81
+static void do_gvec_shri_s(unsigned vece, uint32_t dofs, uint32_t aofs,
28
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
82
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
29
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
30
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
31
+
32
+VQSHL_U_64_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
33
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
34
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
35
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
36
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-neon.inc.c
39
+++ b/target/arm/translate-neon.inc.c
40
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
41
return do_vector_2sh(s, a, tcg_gen_gvec_shri);
42
}
43
}
44
+
45
+static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
46
+ NeonGenTwo64OpEnvFn *fn)
47
+{
83
+{
48
+ /*
84
+ /*
49
+ * 2-reg-and-shift operations, size == 3 case, where the
85
+ * We get here with a negated shift count, and we must handle
50
+ * function needs to be passed cpu_env.
86
+ * shifts by the element size, which tcg_gen_gvec_sari() does not do.
51
+ */
87
+ */
52
+ TCGv_i64 constimm;
88
+ shift = -shift;
53
+ int pass;
89
+ if (shift == (8 << vece)) {
54
+
90
+ shift--;
55
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
56
+ return false;
57
+ }
91
+ }
58
+
92
+ tcg_gen_gvec_sari(vece, dofs, aofs, shift, oprsz, maxsz);
59
+ /* UNDEF accesses to D16-D31 if they don't exist. */
60
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
61
+ ((a->vd | a->vm) & 0x10)) {
62
+ return false;
63
+ }
64
+
65
+ if ((a->vm | a->vd) & a->q) {
66
+ return false;
67
+ }
68
+
69
+ if (!vfp_access_check(s)) {
70
+ return true;
71
+ }
72
+
73
+ /*
74
+ * To avoid excessive duplication of ops we implement shift
75
+ * by immediate using the variable shift operations.
76
+ */
77
+ constimm = tcg_const_i64(dup_const(a->size, a->shift));
78
+
79
+ for (pass = 0; pass < a->q + 1; pass++) {
80
+ TCGv_i64 tmp = tcg_temp_new_i64();
81
+
82
+ neon_load_reg64(tmp, a->vm + pass);
83
+ fn(tmp, cpu_env, tmp, constimm);
84
+ neon_store_reg64(tmp, a->vd + pass);
85
+ }
86
+ tcg_temp_free_i64(constimm);
87
+ return true;
88
+}
93
+}
89
+
94
+
90
+static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
95
+static void do_gvec_shri_u(unsigned vece, uint32_t dofs, uint32_t aofs,
91
+ NeonGenTwoOpEnvFn *fn)
96
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
92
+{
97
+{
93
+ /*
98
+ /*
94
+ * 2-reg-and-shift operations, size < 3 case, where the
99
+ * We get here with a negated shift count, and we must handle
95
+ * helper needs to be passed cpu_env.
100
+ * shifts by the element size, which tcg_gen_gvec_shri() does not do.
96
+ */
101
+ */
97
+ TCGv_i32 constimm;
102
+ shift = -shift;
98
+ int pass;
103
+ if (shift == (8 << vece)) {
99
+
104
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, 0);
100
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
105
+ } else {
101
+ return false;
106
+ tcg_gen_gvec_shri(vece, dofs, aofs, shift, oprsz, maxsz);
102
+ }
107
+ }
103
+
104
+ /* UNDEF accesses to D16-D31 if they don't exist. */
105
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
106
+ ((a->vd | a->vm) & 0x10)) {
107
+ return false;
108
+ }
109
+
110
+ if ((a->vm | a->vd) & a->q) {
111
+ return false;
112
+ }
113
+
114
+ if (!vfp_access_check(s)) {
115
+ return true;
116
+ }
117
+
118
+ /*
119
+ * To avoid excessive duplication of ops we implement shift
120
+ * by immediate using the variable shift operations.
121
+ */
122
+ constimm = tcg_const_i32(dup_const(a->size, a->shift));
123
+
124
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
125
+ TCGv_i32 tmp = neon_load_reg(a->vm, pass);
126
+ fn(tmp, cpu_env, tmp, constimm);
127
+ neon_store_reg(a->vd, pass, tmp);
128
+ }
129
+ tcg_temp_free_i32(constimm);
130
+ return true;
131
+}
108
+}
132
+
109
+
133
+#define DO_2SHIFT_ENV(INSN, FUNC) \
110
+DO_2SHIFT_VEC(VSHLI, vshli_u, false, tcg_gen_gvec_shli)
134
+ static bool trans_##INSN##_64_2sh(DisasContext *s, arg_2reg_shift *a) \
111
DO_2SHIFT(VQSHLI_S, vqshli_s, false)
135
+ { \
112
DO_2SHIFT(VQSHLI_U, vqshli_u, false)
136
+ return do_2shift_env_64(s, a, gen_helper_neon_##FUNC##64); \
113
DO_2SHIFT(VQSHLUI, vqshlui_s, false)
137
+ } \
114
/* These right shifts use a left-shift helper with negated shift count */
138
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
115
-DO_2SHIFT(VSHRI_S, vshli_s, true)
139
+ { \
116
-DO_2SHIFT(VSHRI_U, vshli_u, true)
140
+ static NeonGenTwoOpEnvFn * const fns[] = { \
117
+DO_2SHIFT_VEC(VSHRI_S, vshli_s, true, do_gvec_shri_s)
141
+ gen_helper_neon_##FUNC##8, \
118
+DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
142
+ gen_helper_neon_##FUNC##16, \
119
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
143
+ gen_helper_neon_##FUNC##32, \
120
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
144
+ }; \
121
145
+ assert(a->size < ARRAY_SIZE(fns)); \
146
+ return do_2shift_env_32(s, a, fns[a->size]); \
147
+ }
148
+
149
+DO_2SHIFT_ENV(VQSHLU, qshlu_s)
150
+DO_2SHIFT_ENV(VQSHL_U, qshl_u)
151
+DO_2SHIFT_ENV(VQSHL_S, qshl_s)
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
157
}
158
}
159
160
-#define GEN_NEON_INTEGER_OP_ENV(name) do { \
161
- switch ((size << 1) | u) { \
162
- case 0: \
163
- gen_helper_neon_##name##_s8(tmp, cpu_env, tmp, tmp2); \
164
- break; \
165
- case 1: \
166
- gen_helper_neon_##name##_u8(tmp, cpu_env, tmp, tmp2); \
167
- break; \
168
- case 2: \
169
- gen_helper_neon_##name##_s16(tmp, cpu_env, tmp, tmp2); \
170
- break; \
171
- case 3: \
172
- gen_helper_neon_##name##_u16(tmp, cpu_env, tmp, tmp2); \
173
- break; \
174
- case 4: \
175
- gen_helper_neon_##name##_s32(tmp, cpu_env, tmp, tmp2); \
176
- break; \
177
- case 5: \
178
- gen_helper_neon_##name##_u32(tmp, cpu_env, tmp, tmp2); \
179
- break; \
180
- default: return 1; \
181
- }} while (0)
182
-
183
static TCGv_i32 neon_load_scratch(int scratch)
184
{
185
TCGv_i32 tmp = tcg_temp_new_i32();
186
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
187
int size;
188
int shift;
189
int pass;
190
- int count;
191
int u;
192
int vec_size;
193
uint32_t imm;
194
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
195
case 3: /* VRSRA */
196
case 4: /* VSRI */
197
case 5: /* VSHL, VSLI */
198
+ case 6: /* VQSHLU */
199
+ case 7: /* VQSHL */
200
return 1; /* handled by decodetree */
201
default:
202
break;
203
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
204
size--;
205
}
206
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
207
- if (op < 8) {
208
- /* Shift by immediate:
209
- VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
210
- if (q && ((rd | rm) & 1)) {
211
- return 1;
212
- }
213
- if (!u && (op == 4 || op == 6)) {
214
- return 1;
215
- }
216
- /* Right shifts are encoded as N - shift, where N is the
217
- element size in bits. */
218
- if (op <= 4) {
219
- shift = shift - (1 << (size + 3));
220
- }
221
-
222
- if (size == 3) {
223
- count = q + 1;
224
- } else {
225
- count = q ? 4: 2;
226
- }
227
-
228
- /* To avoid excessive duplication of ops we implement shift
229
- * by immediate using the variable shift operations.
230
- */
231
- imm = dup_const(size, shift);
232
-
233
- for (pass = 0; pass < count; pass++) {
234
- if (size == 3) {
235
- neon_load_reg64(cpu_V0, rm + pass);
236
- tcg_gen_movi_i64(cpu_V1, imm);
237
- switch (op) {
238
- case 6: /* VQSHLU */
239
- gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
240
- cpu_V0, cpu_V1);
241
- break;
242
- case 7: /* VQSHL */
243
- if (u) {
244
- gen_helper_neon_qshl_u64(cpu_V0, cpu_env,
245
- cpu_V0, cpu_V1);
246
- } else {
247
- gen_helper_neon_qshl_s64(cpu_V0, cpu_env,
248
- cpu_V0, cpu_V1);
249
- }
250
- break;
251
- default:
252
- g_assert_not_reached();
253
- }
254
- neon_store_reg64(cpu_V0, rd + pass);
255
- } else { /* size < 3 */
256
- /* Operands in T0 and T1. */
257
- tmp = neon_load_reg(rm, pass);
258
- tmp2 = tcg_temp_new_i32();
259
- tcg_gen_movi_i32(tmp2, imm);
260
- switch (op) {
261
- case 6: /* VQSHLU */
262
- switch (size) {
263
- case 0:
264
- gen_helper_neon_qshlu_s8(tmp, cpu_env,
265
- tmp, tmp2);
266
- break;
267
- case 1:
268
- gen_helper_neon_qshlu_s16(tmp, cpu_env,
269
- tmp, tmp2);
270
- break;
271
- case 2:
272
- gen_helper_neon_qshlu_s32(tmp, cpu_env,
273
- tmp, tmp2);
274
- break;
275
- default:
276
- abort();
277
- }
278
- break;
279
- case 7: /* VQSHL */
280
- GEN_NEON_INTEGER_OP_ENV(qshl);
281
- break;
282
- default:
283
- g_assert_not_reached();
284
- }
285
- tcg_temp_free_i32(tmp2);
286
- neon_store_reg(rd, pass, tmp);
287
- }
288
- } /* for pass */
289
- } else if (op < 10) {
290
+ if (op < 10) {
291
/* Shift by immediate and narrow:
292
VSHRN, VRSHRN, VQSHRN, VQRSHRN. */
293
int input_unsigned = (op == 8) ? !u : u;
294
--
122
--
295
2.20.1
123
2.20.1
296
124
297
125
1
Convert the VSHR 2-reg-shift insns to decodetree.
1
Optimize the MVE VSHLL insns by using TCG vector ops when possible.
2
2
This includes the VMOVL insn, which we handle in mve.decode as "VSHLL
3
Note that unlike the legacy decoder, we present the right shift
3
with zero shift count".
4
amount to the trans_ function as a positive integer.
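A worked example of the new convention, using the rsub_32() helper added
below: a VSHR by 5 on 32-bit elements is encoded with the 5-bit immediate
field holding 32 - 5 = 27, and the decoder applies shift = 32 - 27 = 5, so
trans_VSHR_S_2sh()/trans_VSHR_U_2sh() see the positive value 5 where the
legacy path would have computed the negated value -5.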
5
4
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200522145520.6778-3-peter.maydell@linaro.org
7
Message-id: 20210913095440.13462-11-peter.maydell@linaro.org
9
---
8
---
10
target/arm/neon-dp.decode | 25 ++++++++++++++++++++
9
target/arm/translate-mve.c | 67 +++++++++++++++++++++++++++++++++-----
11
target/arm/translate-neon.inc.c | 41 +++++++++++++++++++++++++++++++++
10
1 file changed, 59 insertions(+), 8 deletions(-)
12
target/arm/translate.c | 21 +----------------
13
3 files changed, 67 insertions(+), 20 deletions(-)
14
11
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
12
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
14
--- a/target/arm/translate-mve.c
18
+++ b/target/arm/neon-dp.decode
15
+++ b/target/arm/translate-mve.c
19
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
16
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
20
######################################################################
17
DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
21
&2reg_shift vm vd q shift size
18
DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
22
19
23
+# Right shifts are encoded as N - shift, where N is the element size in bits.
20
-#define DO_VSHLL(INSN, FN) \
24
+%neon_rshift_i6 16:6 !function=rsub_64
21
- static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
25
+%neon_rshift_i5 16:5 !function=rsub_32
22
- { \
26
+%neon_rshift_i4 16:4 !function=rsub_16
23
- static MVEGenTwoOpShiftFn * const fns[] = { \
27
+%neon_rshift_i3 16:3 !function=rsub_8
24
- gen_helper_mve_##FN##b, \
28
+
25
- gen_helper_mve_##FN##h, \
29
+@2reg_shr_d .... ... . . . ...... .... .... 1 q:1 . . .... \
26
- }; \
30
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3 shift=%neon_rshift_i6
27
- return do_2shift(s, a, fns[a->size], false); \
31
+@2reg_shr_s .... ... . . . 1 ..... .... .... 0 q:1 . . .... \
28
+#define DO_VSHLL(INSN, FN) \
32
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 shift=%neon_rshift_i5
29
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
33
+@2reg_shr_h .... ... . . . 01 .... .... .... 0 q:1 . . .... \
30
+ { \
34
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 shift=%neon_rshift_i4
31
+ static MVEGenTwoOpShiftFn * const fns[] = { \
35
+@2reg_shr_b .... ... . . . 001 ... .... .... 0 q:1 . . .... \
32
+ gen_helper_mve_##FN##b, \
36
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i3
33
+ gen_helper_mve_##FN##h, \
37
+
34
+ }; \
38
@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
35
+ return do_2shift_vec(s, a, fns[a->size], false, do_gvec_##FN); \
39
&2reg_shift vm=%vm_dp vd=%vd_dp size=3
36
}
40
@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
37
41
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
38
+/*
42
@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
39
+ * For the VSHLL vector helpers, the vece is the size of the input
43
&2reg_shift vm=%vm_dp vd=%vd_dp size=0
40
+ * (ie MO_8 or MO_16); the helpers want to work in the output size.
44
41
+ * The shift count can be 0..<input size>, inclusive. (0 is VMOVL.)
45
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
42
+ */
46
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
43
+static void do_gvec_vshllbs(unsigned vece, uint32_t dofs, uint32_t aofs,
47
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
44
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
48
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
49
+
50
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
51
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
52
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
53
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
54
+
55
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
56
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
57
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
58
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.inc.c
61
+++ b/target/arm/translate-neon.inc.c
62
@@ -XXX,XX +XXX,XX @@ static inline int plus1(DisasContext *s, int x)
63
return x + 1;
64
}
65
66
+static inline int rsub_64(DisasContext *s, int x)
67
+{
45
+{
68
+ return 64 - x;
46
+ unsigned ovece = vece + 1;
47
+ unsigned ibits = vece == MO_8 ? 8 : 16;
48
+ tcg_gen_gvec_shli(ovece, dofs, aofs, ibits, oprsz, maxsz);
49
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
69
+}
50
+}
70
+
51
+
71
+static inline int rsub_32(DisasContext *s, int x)
52
+static void do_gvec_vshllbu(unsigned vece, uint32_t dofs, uint32_t aofs,
53
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
72
+{
54
+{
73
+ return 32 - x;
55
+ unsigned ovece = vece + 1;
74
+}
56
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
75
+static inline int rsub_16(DisasContext *s, int x)
57
+ ovece == MO_16 ? 0xff : 0xffff, oprsz, maxsz);
76
+{
58
+ tcg_gen_gvec_shli(ovece, dofs, dofs, shift, oprsz, maxsz);
77
+ return 16 - x;
78
+}
79
+static inline int rsub_8(DisasContext *s, int x)
80
+{
81
+ return 8 - x;
82
+}
59
+}
83
+
60
+
84
/* Include the generated Neon decoder */
61
+static void do_gvec_vshllts(unsigned vece, uint32_t dofs, uint32_t aofs,
85
#include "decode-neon-dp.inc.c"
62
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
86
#include "decode-neon-ls.inc.c"
87
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
88
89
DO_2SH(VSHL, tcg_gen_gvec_shli)
90
DO_2SH(VSLI, gen_gvec_sli)
91
+
92
+static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
93
+{
63
+{
94
+ /* Signed shift out of range results in all-sign-bits */
64
+ unsigned ovece = vece + 1;
95
+ a->shift = MIN(a->shift, (8 << a->size) - 1);
65
+ unsigned ibits = vece == MO_8 ? 8 : 16;
96
+ return do_vector_2sh(s, a, tcg_gen_gvec_sari);
66
+ if (shift == 0) {
67
+ tcg_gen_gvec_sari(ovece, dofs, aofs, ibits, oprsz, maxsz);
68
+ } else {
69
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
70
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
71
+ tcg_gen_gvec_sari(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
72
+ }
97
+}
73
+}
98
+
74
+
99
+static void gen_zero_rd_2sh(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
75
+static void do_gvec_vshlltu(unsigned vece, uint32_t dofs, uint32_t aofs,
100
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
76
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
101
+{
77
+{
102
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, oprsz, maxsz, 0);
78
+ unsigned ovece = vece + 1;
79
+ unsigned ibits = vece == MO_8 ? 8 : 16;
80
+ if (shift == 0) {
81
+ tcg_gen_gvec_shri(ovece, dofs, aofs, ibits, oprsz, maxsz);
82
+ } else {
83
+ tcg_gen_gvec_andi(ovece, dofs, aofs,
84
+ ovece == MO_16 ? 0xff00 : 0xffff0000, oprsz, maxsz);
85
+ tcg_gen_gvec_shri(ovece, dofs, dofs, ibits - shift, oprsz, maxsz);
86
+ }
103
+}
87
+}
104
+
88
+
105
+static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
89
DO_VSHLL(VSHLL_BS, vshllbs)
106
+{
90
DO_VSHLL(VSHLL_BU, vshllbu)
107
+ /* Shift out of range is architecturally valid and results in zero. */
91
DO_VSHLL(VSHLL_TS, vshllts)
108
+ if (a->shift >= (8 << a->size)) {
109
+ return do_vector_2sh(s, a, gen_zero_rd_2sh);
110
+ } else {
111
+ return do_vector_2sh(s, a, tcg_gen_gvec_shri);
112
+ }
113
+}
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate.c
117
+++ b/target/arm/translate.c
118
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
119
op = (insn >> 8) & 0xf;
120
121
switch (op) {
122
+ case 0: /* VSHR */
123
case 5: /* VSHL, VSLI */
124
return 1; /* handled by decodetree */
125
default:
126
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
127
}
128
129
switch (op) {
130
- case 0: /* VSHR */
131
- /* Right shift comes here negative. */
132
- shift = -shift;
133
- /* Shifts larger than the element size are architecturally
134
- * valid. Unsigned results in all zeros; signed results
135
- * in all sign bits.
136
- */
137
- if (!u) {
138
- tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
139
- MIN(shift, (8 << size) - 1),
140
- vec_size, vec_size);
141
- } else if (shift >= 8 << size) {
142
- tcg_gen_gvec_dup_imm(MO_8, rd_ofs, vec_size,
143
- vec_size, 0);
144
- } else {
145
- tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
146
- vec_size, vec_size);
147
- }
148
- return 0;
149
-
150
case 1: /* VSRA */
151
/* Right shift comes here negative. */
152
shift = -shift;
153
--
92
--
154
2.20.1
93
2.20.1
155
94
156
95
1
Convert the VSHL and VSLI insns from the Neon 2-registers-and-a-shift
1
Optimize the MVE shift-and-insert insns by using TCG
2
group to decodetree.
2
vector ops when possible.
3
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200522145520.6778-2-peter.maydell@linaro.org
6
Message-id: 20210913095440.13462-12-peter.maydell@linaro.org
7
---
7
---
8
target/arm/neon-dp.decode | 25 ++++++++++++++++++++++
8
target/arm/translate-mve.c | 4 ++--
9
target/arm/translate-neon.inc.c | 38 +++++++++++++++++++++++++++++++++
9
1 file changed, 2 insertions(+), 2 deletions(-)
10
target/arm/translate.c | 18 +++++++---------
11
3 files changed, 71 insertions(+), 10 deletions(-)
12
10
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
13
--- a/target/arm/translate-mve.c
16
+++ b/target/arm/neon-dp.decode
14
+++ b/target/arm/translate-mve.c
17
@@ -XXX,XX +XXX,XX @@ VRECPS_fp_3s 1111 001 0 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
15
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_VEC(VSHRI_U, vshli_u, true, do_gvec_shri_u)
18
VRSQRTS_fp_3s 1111 001 0 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
16
DO_2SHIFT(VRSHRI_S, vrshli_s, true)
19
VMAXNM_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
17
DO_2SHIFT(VRSHRI_U, vrshli_u, true)
20
VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
18
21
+
19
-DO_2SHIFT(VSRI, vsri, false)
22
+######################################################################
20
-DO_2SHIFT(VSLI, vsli, false)
23
+# 2-reg-and-shift grouping:
21
+DO_2SHIFT_VEC(VSRI, vsri, false, gen_gvec_sri)
24
+# 1111 001 U 1 D immH:3 immL:3 Vd:4 opc:4 L Q M 1 Vm:4
22
+DO_2SHIFT_VEC(VSLI, vsli, false, gen_gvec_sli)
25
+######################################################################
23
26
+&2reg_shift vm vd q shift size
24
#define DO_2SHIFT_FP(INSN, FN) \
27
+
25
static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
28
+@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
29
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3
30
+@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
31
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2
32
+@2reg_shl_h .... ... . . . 01 shift:4 .... .... 0 q:1 . . .... \
33
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1
34
+@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
35
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0
36
+
37
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
38
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
39
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
40
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
41
+
42
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
43
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
44
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
45
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-neon.inc.c
49
+++ b/target/arm/translate-neon.inc.c
50
@@ -XXX,XX +XXX,XX @@ static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
51
DO_3S_FP_PAIR(VPADD, gen_helper_vfp_adds)
52
DO_3S_FP_PAIR(VPMAX, gen_helper_vfp_maxs)
53
DO_3S_FP_PAIR(VPMIN, gen_helper_vfp_mins)
54
+
55
+static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
56
+{
57
+ /* Handle a 2-reg-shift insn which can be vectorized. */
58
+ int vec_size = a->q ? 16 : 8;
59
+ int rd_ofs = neon_reg_offset(a->vd, 0);
60
+ int rm_ofs = neon_reg_offset(a->vm, 0);
61
+
62
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
63
+ return false;
64
+ }
65
+
66
+ /* UNDEF accesses to D16-D31 if they don't exist. */
67
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
68
+ ((a->vd | a->vm) & 0x10)) {
69
+ return false;
70
+ }
71
+
72
+ if ((a->vm | a->vd) & a->q) {
73
+ return false;
74
+ }
75
+
76
+ if (!vfp_access_check(s)) {
77
+ return true;
78
+ }
79
+
80
+ fn(a->size, rd_ofs, rm_ofs, a->shift, vec_size, vec_size);
81
+ return true;
82
+}
83
+
84
+#define DO_2SH(INSN, FUNC) \
85
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
86
+ { \
87
+ return do_vector_2sh(s, a, FUNC); \
88
+ } \
89
+
90
+DO_2SH(VSHL, tcg_gen_gvec_shli)
91
+DO_2SH(VSLI, gen_gvec_sli)
92
diff --git a/target/arm/translate.c b/target/arm/translate.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate.c
95
+++ b/target/arm/translate.c
96
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
97
if ((insn & 0x00380080) != 0) {
98
/* Two registers and shift. */
99
op = (insn >> 8) & 0xf;
100
+
101
+ switch (op) {
102
+ case 5: /* VSHL, VSLI */
103
+ return 1; /* handled by decodetree */
104
+ default:
105
+ break;
106
+ }
107
+
108
if (insn & (1 << 7)) {
109
/* 64-bit shift. */
110
if (op > 7) {
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
112
gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
113
vec_size, vec_size);
114
return 0;
115
-
116
- case 5: /* VSHL, VSLI */
117
- if (u) { /* VSLI */
118
- gen_gvec_sli(size, rd_ofs, rm_ofs, shift,
119
- vec_size, vec_size);
120
- } else { /* VSHL */
121
- tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
122
- vec_size, vec_size);
123
- }
124
- return 0;
125
}
126
127
if (size == 3) {
128
--
26
--
129
2.20.1
27
2.20.1
130
28
131
29
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Optimize the MVE 1op-immediate insns (VORR, VBIC, VMOV) to
2
use TCG vector ops when possible.
2
3
3
Rather than passing an opcode to a helper, fully decode the
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
operation at translate time. Use clear_tail_16 to zap the
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
balance of the SVE register with the AdvSIMD write.
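For reference, a standalone scalar model of the three SHA-1 round functions
that the old crypto_sha1_3reg helper selected via its opcode argument and
that the new sha1c/sha1p/sha1m helpers now pick at translate time (cho and
par follow the standard SHA-1 Ch and Parity definitions; maj matches the
definition visible in the patch context below):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t cho(uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (~x & z); }
    static uint32_t par(uint32_t x, uint32_t y, uint32_t z) { return x ^ y ^ z; }
    static uint32_t maj(uint32_t x, uint32_t y, uint32_t z) { return (x & y) | ((x | y) & z); }

    int main(void)
    {
        uint32_t x = 0xffff0000, y = 0x0f0f0f0f, z = 0x00ff00ff;
        printf("cho=%08x par=%08x maj=%08x\n", cho(x, y, z), par(x, y, z), maj(x, y, z));
        return 0;
    }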
6
Message-id: 20210913095440.13462-13-peter.maydell@linaro.org
7
---
8
target/arm/translate-mve.c | 26 +++++++++++++++++++++-----
9
1 file changed, 21 insertions(+), 5 deletions(-)
6
10
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
8
Message-id: 20200514212831.31248-6-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.h | 5 +-
13
target/arm/neon-dp.decode | 6 +-
14
target/arm/crypto_helper.c | 99 +++++++++++++++++++++------------
15
target/arm/translate-a64.c | 29 ++++------
16
target/arm/translate-neon.inc.c | 46 ++++-----------
17
5 files changed, 93 insertions(+), 92 deletions(-)
18
19
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.h
13
--- a/target/arm/translate-mve.c
22
+++ b/target/arm/helper.h
14
+++ b/target/arm/translate-mve.c
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
15
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
24
DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
16
return true;
25
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
27
-DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(crypto_sha1su0, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(crypto_sha1c, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(crypto_sha1p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(crypto_sha1m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
35
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/neon-dp.decode
38
+++ b/target/arm/neon-dp.decode
39
@@ -XXX,XX +XXX,XX @@ VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
40
@3same_crypto .... .... .... .... .... .... .... .... \
41
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
42
43
-SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
44
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
45
+SHA1C_3s 1111 001 0 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
46
+SHA1P_3s 1111 001 0 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
47
+SHA1M_3s 1111 001 0 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
48
+SHA1SU0_3s 1111 001 0 0 . 11 .... .... 1100 . 1 . 0 .... @3same_crypto
49
SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
50
SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
51
SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
52
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/crypto_helper.c
55
+++ b/target/arm/crypto_helper.c
56
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
57
};
58
59
#ifdef HOST_WORDS_BIGENDIAN
60
-#define CR_ST_BYTE(state, i) (state.bytes[(15 - (i)) ^ 8])
61
-#define CR_ST_WORD(state, i) (state.words[(3 - (i)) ^ 2])
62
+#define CR_ST_BYTE(state, i) ((state).bytes[(15 - (i)) ^ 8])
63
+#define CR_ST_WORD(state, i) ((state).words[(3 - (i)) ^ 2])
64
#else
65
-#define CR_ST_BYTE(state, i) (state.bytes[i])
66
-#define CR_ST_WORD(state, i) (state.words[i])
67
+#define CR_ST_BYTE(state, i) ((state).bytes[i])
68
+#define CR_ST_WORD(state, i) ((state).words[i])
69
#endif
70
71
/*
72
@@ -XXX,XX +XXX,XX @@ static uint32_t maj(uint32_t x, uint32_t y, uint32_t z)
73
return (x & y) | ((x | y) & z);
74
}
17
}
75
18
76
-void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
19
-static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
77
+void HELPER(crypto_sha1su0)(void *vd, void *vn, void *vm, uint32_t desc)
20
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn,
21
+ GVecGen2iFn *vecfn)
22
{
23
TCGv_ptr qd;
24
uint64_t imm;
25
@@ -XXX,XX +XXX,XX @@ static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
26
27
imm = asimd_imm_const(a->imm, a->cmode, a->op);
28
29
- qd = mve_qreg_ptr(a->qd);
30
- fn(cpu_env, qd, tcg_constant_i64(imm));
31
- tcg_temp_free_ptr(qd);
32
+ if (vecfn && mve_no_predication(s)) {
33
+ vecfn(MO_64, mve_qreg_offset(a->qd), mve_qreg_offset(a->qd),
34
+ imm, 16, 16);
35
+ } else {
36
+ qd = mve_qreg_ptr(a->qd);
37
+ fn(cpu_env, qd, tcg_constant_i64(imm));
38
+ tcg_temp_free_ptr(qd);
39
+ }
40
mve_update_eci(s);
41
return true;
42
}
43
44
+static void gen_gvec_vmovi(unsigned vece, uint32_t dofs, uint32_t aofs,
45
+ int64_t c, uint32_t oprsz, uint32_t maxsz)
78
+{
46
+{
79
+ uint64_t *d = vd, *n = vn, *m = vm;
47
+ tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, c);
80
+ uint64_t d0, d1;
81
+
82
+ d0 = d[1] ^ d[0] ^ m[0];
83
+ d1 = n[0] ^ d[1] ^ m[1];
84
+ d[0] = d0;
85
+ d[1] = d1;
86
+
87
+ clear_tail_16(vd, desc);
88
+}
48
+}
89
+
49
+
90
+static inline void crypto_sha1_3reg(uint64_t *rd, uint64_t *rn,
50
static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
91
+ uint64_t *rm, uint32_t desc,
92
+ uint32_t (*fn)(union CRYPTO_STATE *d))
93
{
51
{
94
- uint64_t *rd = vd;
52
/* Handle decode of cmode/op here between VORR/VBIC/VMOV */
95
- uint64_t *rn = vn;
53
MVEGenOneOpImmFn *fn;
96
- uint64_t *rm = vm;
54
+ GVecGen2iFn *vecfn;
97
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
55
98
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
56
if ((a->cmode & 1) && a->cmode < 12) {
99
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
57
if (a->op) {
100
+ int i;
58
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
101
59
* so the VBIC becomes a logical AND operation.
102
- if (op == 3) { /* sha1su0 */
60
*/
103
- d.l[0] ^= d.l[1] ^ m.l[0];
61
fn = gen_helper_mve_vandi;
104
- d.l[1] ^= n.l[0] ^ m.l[1];
62
+ vecfn = tcg_gen_gvec_andi;
105
- } else {
63
} else {
106
- int i;
64
fn = gen_helper_mve_vorri;
107
+ for (i = 0; i < 4; i++) {
65
+ vecfn = tcg_gen_gvec_ori;
108
+ uint32_t t = fn(&d);
66
}
109
67
} else {
110
- for (i = 0; i < 4; i++) {
68
/* There is one unallocated cmode/op combination in this space */
111
- uint32_t t;
69
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
112
+ t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
70
}
113
+ + CR_ST_WORD(m, i);
71
/* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
114
72
fn = gen_helper_mve_vmovi;
115
- switch (op) {
73
+ vecfn = gen_gvec_vmovi;
116
- case 0: /* sha1c */
117
- t = cho(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
118
- break;
119
- case 1: /* sha1p */
120
- t = par(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
121
- break;
122
- case 2: /* sha1m */
123
- t = maj(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
124
- break;
125
- default:
126
- g_assert_not_reached();
127
- }
128
- t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
129
- + CR_ST_WORD(m, i);
130
-
131
- CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
132
- CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
133
- CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
134
- CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
135
- CR_ST_WORD(d, 0) = t;
136
- }
137
+ CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
138
+ CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
139
+ CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
140
+ CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
141
+ CR_ST_WORD(d, 0) = t;
142
}
74
}
143
rd[0] = d.l[0];
75
- return do_1imm(s, a, fn);
144
rd[1] = d.l[1];
76
+ return do_1imm(s, a, fn, vecfn);
145
+
146
+ clear_tail_16(rd, desc);
147
+}
148
+
149
+static uint32_t do_sha1c(union CRYPTO_STATE *d)
150
+{
151
+ return cho(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
152
+}
153
+
154
+void HELPER(crypto_sha1c)(void *vd, void *vn, void *vm, uint32_t desc)
155
+{
156
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1c);
157
+}
158
+
159
+static uint32_t do_sha1p(union CRYPTO_STATE *d)
160
+{
161
+ return par(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
162
+}
163
+
164
+void HELPER(crypto_sha1p)(void *vd, void *vn, void *vm, uint32_t desc)
165
+{
166
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1p);
167
+}
168
+
169
+static uint32_t do_sha1m(union CRYPTO_STATE *d)
170
+{
171
+ return maj(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
172
+}
173
+
174
+void HELPER(crypto_sha1m)(void *vd, void *vn, void *vm, uint32_t desc)
175
+{
176
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1m);
177
}
77
}
178
78
179
void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
79
static bool do_2shift_vec(DisasContext *s, arg_2shift *a, MVEGenTwoOpShiftFn fn,
180
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
181
index XXXXXXX..XXXXXXX 100644
182
--- a/target/arm/translate-a64.c
183
+++ b/target/arm/translate-a64.c
184
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
185
186
switch (opcode) {
187
case 0: /* SHA1C */
188
+ genfn = gen_helper_crypto_sha1c;
189
+ feature = dc_isar_feature(aa64_sha1, s);
190
+ break;
191
case 1: /* SHA1P */
192
+ genfn = gen_helper_crypto_sha1p;
193
+ feature = dc_isar_feature(aa64_sha1, s);
194
+ break;
195
case 2: /* SHA1M */
196
+ genfn = gen_helper_crypto_sha1m;
197
+ feature = dc_isar_feature(aa64_sha1, s);
198
+ break;
199
case 3: /* SHA1SU0 */
200
- genfn = NULL;
201
+ genfn = gen_helper_crypto_sha1su0;
202
feature = dc_isar_feature(aa64_sha1, s);
203
break;
204
case 4: /* SHA256H */
205
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
206
if (!fp_access_check(s)) {
207
return;
208
}
209
-
210
- if (genfn) {
211
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
212
- } else {
213
- TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
214
- TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
215
- TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
216
- TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
217
-
218
- gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
219
- tcg_rm_ptr, tcg_opcode);
220
-
221
- tcg_temp_free_i32(tcg_opcode);
222
- tcg_temp_free_ptr(tcg_rd_ptr);
223
- tcg_temp_free_ptr(tcg_rn_ptr);
224
- tcg_temp_free_ptr(tcg_rm_ptr);
225
- }
226
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
227
}
228
229
/* Crypto two-reg SHA
230
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
231
index XXXXXXX..XXXXXXX 100644
232
--- a/target/arm/translate-neon.inc.c
233
+++ b/target/arm/translate-neon.inc.c
234
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
235
DO_VQRDMLAH(VQRDMLAH, gen_gvec_sqrdmlah_qc)
236
DO_VQRDMLAH(VQRDMLSH, gen_gvec_sqrdmlsh_qc)
237
238
-static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
239
-{
240
- TCGv_ptr ptr1, ptr2, ptr3;
241
- TCGv_i32 tmp;
242
-
243
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
244
- !dc_isar_feature(aa32_sha1, s)) {
245
- return false;
246
+#define DO_SHA1(NAME, FUNC) \
247
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
248
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
249
+ { \
250
+ if (!dc_isar_feature(aa32_sha1, s)) { \
251
+ return false; \
252
+ } \
253
+ return do_3same(s, a, gen_##NAME##_3s); \
254
}
255
256
- /* UNDEF accesses to D16-D31 if they don't exist. */
257
- if (!dc_isar_feature(aa32_simd_r32, s) &&
258
- ((a->vd | a->vn | a->vm) & 0x10)) {
259
- return false;
260
- }
261
-
262
- if ((a->vn | a->vm | a->vd) & 1) {
263
- return false;
264
- }
265
-
266
- if (!vfp_access_check(s)) {
267
- return true;
268
- }
269
-
270
- ptr1 = vfp_reg_ptr(true, a->vd);
271
- ptr2 = vfp_reg_ptr(true, a->vn);
272
- ptr3 = vfp_reg_ptr(true, a->vm);
273
- tmp = tcg_const_i32(a->optype);
274
- gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp);
275
- tcg_temp_free_i32(tmp);
276
- tcg_temp_free_ptr(ptr1);
277
- tcg_temp_free_ptr(ptr2);
278
- tcg_temp_free_ptr(ptr3);
279
-
280
- return true;
281
-}
282
+DO_SHA1(SHA1C, gen_helper_crypto_sha1c)
283
+DO_SHA1(SHA1P, gen_helper_crypto_sha1p)
284
+DO_SHA1(SHA1M, gen_helper_crypto_sha1m)
285
+DO_SHA1(SHA1SU0, gen_helper_crypto_sha1su0)
286
287
#define DO_SHA2(NAME, FUNC) \
288
WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
289
--
80
--
290
2.20.1
81
2.20.1
291
82
292
83
Deleted patch
1
From: Thomas Huth <thuth@redhat.com>
2
1
3
As described by Edgar here:
4
5
https://www.mail-archive.com/qemu-devel@nongnu.org/msg605124.html
6
7
we can use the Ubuntu kernel for testing the xlnx-versal-virt machine.
8
So let's add a boot test for this now.
9
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
12
Signed-off-by: Thomas Huth <thuth@redhat.com>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
14
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
15
Message-id: 20200525141237.15243-1-thuth@redhat.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
tests/acceptance/boot_linux_console.py | 26 ++++++++++++++++++++++++++
19
1 file changed, 26 insertions(+)
20
21
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
22
index XXXXXXX..XXXXXXX 100644
23
--- a/tests/acceptance/boot_linux_console.py
24
+++ b/tests/acceptance/boot_linux_console.py
25
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):
26
console_pattern = 'Kernel command line: %s' % kernel_command_line
27
self.wait_for_console_pattern(console_pattern)
28
29
+ def test_aarch64_xlnx_versal_virt(self):
30
+ """
31
+ :avocado: tags=arch:aarch64
32
+ :avocado: tags=machine:xlnx-versal-virt
33
+ :avocado: tags=device:pl011
34
+ :avocado: tags=device:arm_gicv3
35
+ """
36
+ kernel_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
37
+ 'bionic-updates/main/installer-arm64/current/images/'
38
+ 'netboot/ubuntu-installer/arm64/linux')
39
+ kernel_hash = '5bfc54cf7ed8157d93f6e5b0241e727b6dc22c50'
40
+ kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
41
+
42
+ initrd_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
43
+ 'bionic-updates/main/installer-arm64/current/images/'
44
+ 'netboot/ubuntu-installer/arm64/initrd.gz')
45
+ initrd_hash = 'd385d3e88d53e2004c5d43cbe668b458a094f772'
46
+ initrd_path = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
47
+
48
+ self.vm.set_console()
49
+ self.vm.add_args('-m', '2G',
50
+ '-kernel', kernel_path,
51
+ '-initrd', initrd_path)
52
+ self.vm.launch()
53
+ self.wait_for_console_pattern('Checked W+X mappings: passed')
54
+
55
def test_arm_virt(self):
56
"""
57
:avocado: tags=arch:arm
58
--
59
2.20.1
60
61