[Old series: pull-target-arm-20200605]

Arm queue; some of the simpler stuff, things others have reviewed (thanks!), etc.

-- PMM

The following changes since commit 5d2f557b47dfbf8f23277a5bdd8473d4607c681a:

  Merge remote-tracking branch 'remotes/kraxel/tags/vga-20200605-pull-request' into staging (2020-06-05 13:53:05 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200605

for you to fetch changes up to 2c35a39eda0b16c2ed85c94cec204bf5efb97812:

  target/arm: Convert Neon one-register-and-immediate insns to decodetree (2020-06-05 17:23:10 +0100)

----------------------------------------------------------------
target-arm queue:
  hw/ssi/imx_spi: Handle tx burst lengths other than 8 correctly
  hw/input/pxa2xx_keypad: Replace hw_error() by qemu_log_mask()
  hw/arm/pxa2xx: Replace printf() call by qemu_log_mask()
  target/arm: Convert crypto insns to gvec
  hw/adc/stm32f2xx_adc: Correct memory region size and access size
  tests/acceptance: Add a boot test for the xlnx-versal-virt machine
  docs/system: Document Aspeed boards
  raspi: Add model of the USB controller
  target/arm: Convert 2-reg-and-shift and 1-reg-imm Neon insns to decodetree

----------------------------------------------------------------
Cédric Le Goater (1):
      docs/system: Document Aspeed boards

Eden Mikitas (2):
      hw/ssi/imx_spi: changed while statement to prevent underflow
      hw/ssi/imx_spi: Removed unnecessary cast of rx data received from slave

Paul Zimmerman (7):
      raspi: add BCM2835 SOC MPHI emulation
      dwc-hsotg (dwc2) USB host controller register definitions
      dwc-hsotg (dwc2) USB host controller state definitions
      dwc-hsotg (dwc2) USB host controller emulation
      usb: add short-packet handling to usb-storage driver
      wire in the dwc-hsotg (dwc2) USB host controller emulation
      raspi2 acceptance test: add test for dwc-hsotg (dwc2) USB host

Peter Maydell (9):
      target/arm: Convert Neon VSHL and VSLI 2-reg-shift insn to decodetree
      target/arm: Convert Neon VSHR 2-reg-shift insns to decodetree
      target/arm: Convert Neon VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree
      target/arm: Convert VQSHLU, VQSHL 2-reg-shift insns to decodetree
      target/arm: Convert Neon narrowing shifts with op==8 to decodetree
      target/arm: Convert Neon narrowing shifts with op==9 to decodetree
      target/arm: Convert Neon VSHLL, VMOVL to decodetree
      target/arm: Convert VCVT fixed-point ops to decodetree
      target/arm: Convert Neon one-register-and-immediate insns to decodetree

Philippe Mathieu-Daudé (3):
      hw/input/pxa2xx_keypad: Replace hw_error() by qemu_log_mask()
      hw/arm/pxa2xx: Replace printf() call by qemu_log_mask()
      hw/adc/stm32f2xx_adc: Correct memory region size and access size

Richard Henderson (6):
      target/arm: Convert aes and sm4 to gvec helpers
      target/arm: Convert rax1 to gvec helpers
      target/arm: Convert sha512 and sm3 to gvec helpers
      target/arm: Convert sha1 and sha256 to gvec helpers
      target/arm: Split helper_crypto_sha1_3reg
      target/arm: Split helper_crypto_sm3tt

Thomas Huth (1):
      tests/acceptance: Add a boot test for the xlnx-versal-virt machine

 docs/system/arm/aspeed.rst | 85 ++
 docs/system/target-arm.rst | 1 +
 hw/usb/hcd-dwc2.h | 190 +++++
 include/hw/arm/bcm2835_peripherals.h | 5 +-
 include/hw/misc/bcm2835_mphi.h | 44 +
 include/hw/usb/dwc2-regs.h | 899 ++++++++++++++++++++
 target/arm/helper.h | 45 +-
 target/arm/translate-a64.h | 3 +
 target/arm/vec_internal.h | 33 +
 target/arm/neon-dp.decode | 214 ++++-
 hw/adc/stm32f2xx_adc.c | 4 +-
 hw/arm/bcm2835_peripherals.c | 38 +-
 hw/arm/pxa2xx.c | 66 +-
 hw/input/pxa2xx_keypad.c | 10 +-
 hw/misc/bcm2835_mphi.c | 191 +++++
 hw/ssi/imx_spi.c | 4 +-
 hw/usb/dev-storage.c | 15 +-
 hw/usb/hcd-dwc2.c | 1417 ++++++++++++++++++++++++++++++++
 target/arm/crypto_helper.c | 267 ++++--
 target/arm/translate-a64.c | 198 ++---
 target/arm/translate-neon.inc.c | 796 ++++++++++++++----
 target/arm/translate.c | 539 +-----------
 target/arm/vec_helper.c | 12 +-
 hw/misc/Makefile.objs | 1 +
 hw/usb/Kconfig | 5 +
 hw/usb/Makefile.objs | 1 +
 hw/usb/trace-events | 50 ++
 tests/acceptance/boot_linux_console.py | 35 +-
 28 files changed, 4258 insertions(+), 910 deletions(-)
 create mode 100644 docs/system/arm/aspeed.rst
 create mode 100644 hw/usb/hcd-dwc2.h
 create mode 100644 include/hw/misc/bcm2835_mphi.h
 create mode 100644 include/hw/usb/dwc2-regs.h
 create mode 100644 target/arm/vec_internal.h
 create mode 100644 hw/misc/bcm2835_mphi.c
 create mode 100644 hw/usb/hcd-dwc2.c

[New series: pull-target-arm-20221020]

Hi; here's the latest arm pullreq. This is mostly patches from
RTH, plus a couple of other more minor things. Switching to
PCREL is the big one, hopefully should improve performance.

thanks
-- PMM

The following changes since commit 214a8da23651f2472b296b3293e619fd58d9e212:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2022-10-18 11:14:31 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221020

for you to fetch changes up to 5db899303799e49209016a93289b8694afa1449e:

  hw/ide/microdrive: Use device_cold_reset() for self-resets (2022-10-20 12:11:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * Switch to TARGET_TB_PCREL
 * More pagetable-walk refactoring preparatory to HAFDBS
 * update the cortex-a15 MIDR to latest rev
 * hw/char/pl011: fix baud rate calculation
 * hw/ide/microdrive: Use device_cold_reset() for self-resets

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: update the cortex-a15 MIDR to latest rev

Baruch Siach (1):
      hw/char/pl011: fix baud rate calculation

Peter Maydell (1):
      hw/ide/microdrive: Use device_cold_reset() for self-resets

Richard Henderson (21):
      target/arm: Enable TARGET_PAGE_ENTRY_EXTRA
      target/arm: Use probe_access_full for MTE
      target/arm: Use probe_access_full for BTI
      target/arm: Add ARMMMUIdx_Phys_{S,NS}
      target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx
      target/arm: Restrict tlb flush from vttbr_write to vmid change
      target/arm: Split out S1Translate type
      target/arm: Plumb debug into S1Translate
      target/arm: Move be test for regime into S1TranslateResult
      target/arm: Use softmmu tlbs for page table walking
      target/arm: Split out get_phys_addr_twostage
      target/arm: Use bool consistently for get_phys_addr subroutines
      target/arm: Introduce curr_insn_len
      target/arm: Change gen_goto_tb to work on displacements
      target/arm: Change gen_*set_pc_im to gen_*update_pc
      target/arm: Change gen_exception_insn* to work on displacements
      target/arm: Remove gen_exception_internal_insn pc argument
      target/arm: Change gen_jmp* to work on displacements
      target/arm: Introduce gen_pc_plus_diff for aarch64
      target/arm: Introduce gen_pc_plus_diff for aarch32
      target/arm: Enable TARGET_TB_PCREL

 target/arm/cpu-param.h | 17 +-
 target/arm/cpu.h | 47 ++--
 target/arm/internals.h | 1 +
 target/arm/sve_ldst_internal.h | 1 +
 target/arm/translate-a32.h | 2 +-
 target/arm/translate.h | 66 ++++-
 hw/char/pl011.c | 2 +-
 hw/ide/microdrive.c | 8 +-
 target/arm/cpu.c | 23 +-
 target/arm/cpu_tcg.c | 4 +-
 target/arm/helper.c | 155 +++++++++---
 target/arm/mte_helper.c | 62 ++---
 target/arm/ptw.c | 535 +++++++++++++++++++++++++----------------
 target/arm/sve_helper.c | 54 ++---
 target/arm/tlb_helper.c | 24 +-
 target/arm/translate-a64.c | 220 ++++++++++-------
 target/arm/translate-m-nocp.c | 8 +-
 target/arm/translate-mve.c | 2 +-
 target/arm/translate-vfp.c | 10 +-
 target/arm/translate.c | 284 +++++++++++++---------
 20 files changed, 918 insertions(+), 607 deletions(-)
diff view generated by jsdifflib
Deleted patch
From: Eden Mikitas <e.mikitas@gmail.com>

The while statement in question only checked if tx_burst is not 0.
tx_burst is a signed int, which is assigned the value put by the
guest driver in ECSPI_CONREG. The burst length can be anywhere
between 1 and 4096, and since tx_burst is always decremented by 8
it could possibly underflow, causing an infinite loop.

Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/imx_spi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/imx_spi.c
+++ b/hw/ssi/imx_spi.c
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
 
         rx = 0;
 
-        while (tx_burst) {
+        while (tx_burst > 0) {
             uint8_t byte = tx & 0xff;
 
             DPRINTF("writing 0x%02x\n", (uint32_t)byte);
--
2.20.1
diff view generated by jsdifflib
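To make the failure mode described in the patch above concrete: with a burst length that is not a multiple of 8, a loop that only tests tx_burst != 0 while subtracting 8 each pass steps over zero and never terminates. The standalone C sketch below is not QEMU code; the iteration cap and the iterations() helper are invented purely for illustration of the old versus new loop condition.

/*
 * Standalone sketch (not QEMU code) of the bug described above: a signed
 * counter decremented by 8 skips zero when the starting value is not a
 * multiple of 8, so "while (tx_burst)" never exits, while the patched
 * "while (tx_burst > 0)" does.
 */
#include <stdio.h>

static int iterations(int burst_len, int stop_at_zero_only)
{
    int tx_burst = burst_len;   /* signed, as in the device state */
    int count = 0;

    while (stop_at_zero_only ? (tx_burst != 0) : (tx_burst > 0)) {
        tx_burst -= 8;          /* the device code subtracts 8 per byte */
        if (++count > 1000) {   /* cut the would-be infinite loop short */
            return -1;
        }
    }
    return count;
}

int main(void)
{
    /* 12 is not a multiple of 8: the unguarded loop never reaches zero. */
    printf("while (tx_burst):     %d\n", iterations(12, 1));  /* -1: runaway */
    printf("while (tx_burst > 0): %d\n", iterations(12, 0));  /* 2 passes */
    return 0;
}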
Deleted patch
From: Eden Mikitas <e.mikitas@gmail.com>

When inserting the value retrieved (rx) from the spi slave, rx is pushed to
rx_fifo after being cast to uint8_t. rx_fifo is a fifo32, and the rx
register the driver uses is also 32 bit. This zeroes the 24 most
significant bits of rx. This proved problematic with devices that expect to
use the whole 32 bits of the rx register.

Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/imx_spi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/imx_spi.c
+++ b/hw/ssi/imx_spi.c
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
         if (fifo32_is_full(&s->rx_fifo)) {
             s->regs[ECSPI_STATREG] |= ECSPI_STATREG_RO;
         } else {
-            fifo32_push(&s->rx_fifo, (uint8_t)rx);
+            fifo32_push(&s->rx_fifo, rx);
         }
 
         if (s->burst_length <= 0) {
--
2.20.1
diff view generated by jsdifflib
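The cast removed above matters because rx_fifo is 32 bits wide, so the old code silently threw away the top three bytes of every word received from the slave. A small standalone C illustration (not QEMU code; the sample value is made up) of what the old and new code would push into the FIFO:

/* Standalone illustration of the truncation removed above: casting the
 * 32-bit rx value to uint8_t before pushing it into a 32-bit FIFO keeps
 * only the low byte. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t rx = 0xa1b2c3d4;           /* word shifted in from the slave */
    uint32_t truncated = (uint8_t)rx;   /* what the old code pushed */
    uint32_t full = rx;                 /* what the fixed code pushes */

    printf("old: 0x%08x  new: 0x%08x\n", truncated, full); /* 0x000000d4 vs 0xa1b2c3d4 */
    return 0;
}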
[Old series patch]

From: Paul Zimmerman <pauldzim@gmail.com>

Add the dwc-hsotg (dwc2) USB host controller state definitions.
Mostly based on hw/usb/hcd-ehci.h.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-4-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/usb/hcd-dwc2.h | 190 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 190 insertions(+)
 create mode 100644 hw/usb/hcd-dwc2.h

[New series patch]

From: Baruch Siach <baruch@tkos.co.il>

The PL011 TRM says that "UARTIBRD = 0 is invalid and UARTFBRD is ignored
when this is the case". But the code looks at FBRD for the invalid case.
Fix this.

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Message-id: 1408f62a2e45665816527d4845ffde650957d5ab.1665051588.git.baruchs-c@neureality.ai
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
15
diff --git a/hw/char/pl011.c b/hw/char/pl011.c
16
new file mode 100644
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX
17
--- a/hw/char/pl011.c
18
--- /dev/null
18
+++ b/hw/char/pl011.c
19
+++ b/hw/usb/hcd-dwc2.h
19
@@ -XXX,XX +XXX,XX @@ static unsigned int pl011_get_baudrate(const PL011State *s)
20
@@ -XXX,XX +XXX,XX @@
20
{
21
+/*
21
uint64_t clk;
22
+ * dwc-hsotg (dwc2) USB host controller state definitions
22
23
+ *
23
- if (s->fbrd == 0) {
24
+ * Based on hw/usb/hcd-ehci.h
24
+ if (s->ibrd == 0) {
25
+ *
25
return 0;
26
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
26
}
27
+ *
27
28
+ * This program is free software; you can redistribute it and/or modify
29
+ * it under the terms of the GNU General Public License as published by
30
+ * the Free Software Foundation; either version 2 of the License, or
31
+ * (at your option) any later version.
32
+ *
33
+ * This program is distributed in the hope that it will be useful,
34
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
35
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
36
+ * GNU General Public License for more details.
37
+ */
38
+
39
+#ifndef HW_USB_DWC2_H
40
+#define HW_USB_DWC2_H
41
+
42
+#include "qemu/timer.h"
43
+#include "hw/irq.h"
44
+#include "hw/sysbus.h"
45
+#include "hw/usb.h"
46
+#include "sysemu/dma.h"
47
+
48
+#define DWC2_MMIO_SIZE 0x11000
49
+
50
+#define DWC2_NB_CHAN 8 /* Number of host channels */
51
+#define DWC2_MAX_XFER_SIZE 65536 /* Max transfer size expected in HCTSIZ */
52
+
53
+typedef struct DWC2Packet DWC2Packet;
54
+typedef struct DWC2State DWC2State;
55
+typedef struct DWC2Class DWC2Class;
56
+
57
+enum async_state {
58
+ DWC2_ASYNC_NONE = 0,
59
+ DWC2_ASYNC_INITIALIZED,
60
+ DWC2_ASYNC_INFLIGHT,
61
+ DWC2_ASYNC_FINISHED,
62
+};
63
+
64
+struct DWC2Packet {
65
+ USBPacket packet;
66
+ uint32_t devadr;
67
+ uint32_t epnum;
68
+ uint32_t epdir;
69
+ uint32_t mps;
70
+ uint32_t pid;
71
+ uint32_t index;
72
+ uint32_t pcnt;
73
+ uint32_t len;
74
+ int32_t async;
75
+ bool small;
76
+ bool needs_service;
77
+};
78
+
79
+struct DWC2State {
80
+ /*< private >*/
81
+ SysBusDevice parent_obj;
82
+
83
+ /*< public >*/
84
+ USBBus bus;
85
+ qemu_irq irq;
86
+ MemoryRegion *dma_mr;
87
+ AddressSpace dma_as;
88
+ MemoryRegion container;
89
+ MemoryRegion hsotg;
90
+ MemoryRegion fifos;
91
+
92
+ union {
93
+#define DWC2_GLBREG_SIZE 0x70
94
+ uint32_t glbreg[DWC2_GLBREG_SIZE / sizeof(uint32_t)];
95
+ struct {
96
+ uint32_t gotgctl; /* 00 */
97
+ uint32_t gotgint; /* 04 */
98
+ uint32_t gahbcfg; /* 08 */
99
+ uint32_t gusbcfg; /* 0c */
100
+ uint32_t grstctl; /* 10 */
101
+ uint32_t gintsts; /* 14 */
102
+ uint32_t gintmsk; /* 18 */
103
+ uint32_t grxstsr; /* 1c */
104
+ uint32_t grxstsp; /* 20 */
105
+ uint32_t grxfsiz; /* 24 */
106
+ uint32_t gnptxfsiz; /* 28 */
107
+ uint32_t gnptxsts; /* 2c */
108
+ uint32_t gi2cctl; /* 30 */
109
+ uint32_t gpvndctl; /* 34 */
110
+ uint32_t ggpio; /* 38 */
111
+ uint32_t guid; /* 3c */
112
+ uint32_t gsnpsid; /* 40 */
113
+ uint32_t ghwcfg1; /* 44 */
114
+ uint32_t ghwcfg2; /* 48 */
115
+ uint32_t ghwcfg3; /* 4c */
116
+ uint32_t ghwcfg4; /* 50 */
117
+ uint32_t glpmcfg; /* 54 */
118
+ uint32_t gpwrdn; /* 58 */
119
+ uint32_t gdfifocfg; /* 5c */
120
+ uint32_t gadpctl; /* 60 */
121
+ uint32_t grefclk; /* 64 */
122
+ uint32_t gintmsk2; /* 68 */
123
+ uint32_t gintsts2; /* 6c */
124
+ };
125
+ };
126
+
127
+ union {
128
+#define DWC2_FSZREG_SIZE 0x04
129
+ uint32_t fszreg[DWC2_FSZREG_SIZE / sizeof(uint32_t)];
130
+ struct {
131
+ uint32_t hptxfsiz; /* 100 */
132
+ };
133
+ };
134
+
135
+ union {
136
+#define DWC2_HREG0_SIZE 0x44
137
+ uint32_t hreg0[DWC2_HREG0_SIZE / sizeof(uint32_t)];
138
+ struct {
139
+ uint32_t hcfg; /* 400 */
140
+ uint32_t hfir; /* 404 */
141
+ uint32_t hfnum; /* 408 */
142
+ uint32_t rsvd0; /* 40c */
143
+ uint32_t hptxsts; /* 410 */
144
+ uint32_t haint; /* 414 */
145
+ uint32_t haintmsk; /* 418 */
146
+ uint32_t hflbaddr; /* 41c */
147
+ uint32_t rsvd1[8]; /* 420-43c */
148
+ uint32_t hprt0; /* 440 */
149
+ };
150
+ };
151
+
152
+#define DWC2_HREG1_SIZE (0x20 * DWC2_NB_CHAN)
153
+ uint32_t hreg1[DWC2_HREG1_SIZE / sizeof(uint32_t)];
154
+
155
+#define hcchar(_ch) hreg1[((_ch) << 3) + 0] /* 500, 520, ... */
156
+#define hcsplt(_ch) hreg1[((_ch) << 3) + 1] /* 504, 524, ... */
157
+#define hcint(_ch) hreg1[((_ch) << 3) + 2] /* 508, 528, ... */
158
+#define hcintmsk(_ch) hreg1[((_ch) << 3) + 3] /* 50c, 52c, ... */
159
+#define hctsiz(_ch) hreg1[((_ch) << 3) + 4] /* 510, 530, ... */
160
+#define hcdma(_ch) hreg1[((_ch) << 3) + 5] /* 514, 534, ... */
161
+#define hcdmab(_ch) hreg1[((_ch) << 3) + 7] /* 51c, 53c, ... */
162
+
163
+ union {
164
+#define DWC2_PCGREG_SIZE 0x08
165
+ uint32_t pcgreg[DWC2_PCGREG_SIZE / sizeof(uint32_t)];
166
+ struct {
167
+ uint32_t pcgctl; /* e00 */
168
+ uint32_t pcgcctl1; /* e04 */
169
+ };
170
+ };
171
+
172
+ /* TODO - implement FIFO registers for slave mode */
173
+#define DWC2_HFIFO_SIZE (0x1000 * DWC2_NB_CHAN)
174
+
175
+ /*
176
+ * Internal state
177
+ */
178
+ QEMUTimer *eof_timer;
179
+ QEMUTimer *frame_timer;
180
+ QEMUBH *async_bh;
181
+ int64_t sof_time;
182
+ int64_t usb_frame_time;
183
+ int64_t usb_bit_time;
184
+ uint32_t usb_version;
185
+ uint16_t frame_number;
186
+ uint16_t fi;
187
+ uint16_t next_chan;
188
+ bool working;
189
+ USBPort uport;
190
+ DWC2Packet packet[DWC2_NB_CHAN]; /* one packet per chan */
191
+ uint8_t usb_buf[DWC2_NB_CHAN][DWC2_MAX_XFER_SIZE]; /* one buffer per chan */
192
+};
193
+
194
+struct DWC2Class {
195
+ /*< private >*/
196
+ SysBusDeviceClass parent_class;
197
+ ResettablePhases parent_phases;
198
+
199
+ /*< public >*/
200
+};
201
+
202
+#define TYPE_DWC2_USB "dwc2-usb"
203
+#define DWC2_USB(obj) \
204
+ OBJECT_CHECK(DWC2State, (obj), TYPE_DWC2_USB)
205
+#define DWC2_CLASS(klass) \
206
+ OBJECT_CLASS_CHECK(DWC2Class, (klass), TYPE_DWC2_USB)
207
+#define DWC2_GET_CLASS(obj) \
208
+ OBJECT_GET_CLASS(DWC2Class, (obj), TYPE_DWC2_USB)
209
+
210
+#endif
211
--
28
--
212
2.20.1
29
2.25.1
213
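The DWC2State structure introduced by the old-series patch above uses a common MMIO modelling idiom: each register block is a union of a flat word array (convenient for offset-based dispatch and state save/load) and a struct of named registers (convenient for readable device code). A standalone sketch of the idiom follows; the block name, size and register names are trimmed-down illustrations, not the full layout from the patch.

/* Standalone sketch of the register-bank union idiom used by DWC2State:
 * the same storage is reachable either by word offset or by name. */
#include <stdint.h>
#include <assert.h>

#define GLBREG_SIZE 0x70

union glbregs {
    uint32_t raw[GLBREG_SIZE / sizeof(uint32_t)];
    struct {
        uint32_t gotgctl;   /* offset 0x00 */
        uint32_t gotgint;   /* offset 0x04 */
        uint32_t gahbcfg;   /* offset 0x08 */
        /* further registers elided */
    };
};

int main(void)
{
    union glbregs r = {0};
    r.raw[0x08 / 4] = 0x5a;      /* MMIO-style write by offset */
    assert(r.gahbcfg == 0x5a);   /* read back through the named field */
    return 0;
}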
214
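For reference, a minimal sketch of the PL011 baud-rate arithmetic that the new-series fix above is guarding. It follows the TRM formula (divisor = IBRD + FBRD/64, baud = UARTCLK / (16 * divisor)) and is an illustration only, not the actual pl011_get_baudrate() implementation in hw/char/pl011.c.

/* Minimal sketch of the PL011 baud calculation with the corrected guard:
 * IBRD == 0 is the invalid case; FBRD alone cannot make the divisor valid. */
#include <stdint.h>
#include <stdio.h>

static unsigned int baudrate(uint64_t clk, uint32_t ibrd, uint32_t fbrd)
{
    if (ibrd == 0) {            /* was "fbrd == 0" before the fix */
        return 0;
    }
    /* baud = clk / (16 * (ibrd + fbrd/64)) == 4 * clk / (64 * ibrd + fbrd) */
    return 4 * clk / (64 * ibrd + fbrd);
}

int main(void)
{
    /* 7.3728 MHz reference clock, IBRD=4, FBRD=0 -> 115200 baud */
    printf("%u\n", baudrate(7372800, 4, 0));
    /* IBRD=0 is rejected even though FBRD is non-zero */
    printf("%u\n", baudrate(7372800, 0, 5));
    return 0;
}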
diff view generated by jsdifflib
[Old series patch]

From: Paul Zimmerman <pauldzim@gmail.com>

The dwc-hsotg (dwc2) USB host depends on a short packet to
indicate the end of an IN transfer. The usb-storage driver
currently doesn't provide this, so fix it.

I have tested this change rather extensively using a PC
emulation with xhci, ehci, and uhci controllers, and have
not observed any regressions.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-6-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/usb/dev-storage.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

[New series patch]

From: Richard Henderson <richard.henderson@linaro.org>

The CPUTLBEntryFull structure now stores the original pte attributes, as
well as the physical address. Therefore, we no longer need a separate
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 1 -
 target/arm/sve_ldst_internal.h | 1 +
 target/arm/mte_helper.c | 62 ++++++++++------------------------
 target/arm/sve_helper.c | 54 ++++++++++-------------
 target/arm/tlb_helper.c | 4 ---
 5 files changed, 36 insertions(+), 86 deletions(-)
diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/usb/dev-storage.c
21
--- a/target/arm/cpu.h
21
+++ b/hw/usb/dev-storage.c
22
+++ b/target/arm/cpu.h
22
@@ -XXX,XX +XXX,XX @@ static void usb_msd_copy_data(MSDState *s, USBPacket *p)
23
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
23
usb_packet_copy(p, scsi_req_get_buf(s->req) + s->scsi_off, len);
24
* generic target bits directly.
24
s->scsi_len -= len;
25
*/
25
s->scsi_off += len;
26
#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
26
+ if (len > s->data_len) {
27
-#define arm_tlb_mte_tagged(x) (typecheck_memtxattrs(x)->target_tlb_bit1)
27
+ len = s->data_len;
28
28
+ }
29
/*
29
s->data_len -= len;
30
* AArch64 usage of the PAGE_TARGET_* bits for linux-user.
30
if (s->scsi_len == 0 || s->data_len == 0) {
31
diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
31
scsi_req_continue(s->req);
32
index XXXXXXX..XXXXXXX 100644
32
@@ -XXX,XX +XXX,XX @@ static void usb_msd_command_complete(SCSIRequest *req, uint32_t status, size_t r
33
--- a/target/arm/sve_ldst_internal.h
33
if (s->data_len) {
34
+++ b/target/arm/sve_ldst_internal.h
34
int len = (p->iov.size - p->actual_length);
35
@@ -XXX,XX +XXX,XX @@ typedef struct {
35
usb_packet_skip(p, len);
36
void *host;
36
+ if (len > s->data_len) {
37
int flags;
37
+ len = s->data_len;
38
MemTxAttrs attrs;
38
+ }
39
+ bool tagged;
39
s->data_len -= len;
40
} SVEHostPage;
41
42
bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
43
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/mte_helper.c
46
+++ b/target/arm/mte_helper.c
47
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
48
TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
49
return tags + index;
50
#else
51
- uintptr_t index;
52
CPUTLBEntryFull *full;
53
+ MemTxAttrs attrs;
54
int in_page, flags;
55
- ram_addr_t ptr_ra;
56
hwaddr ptr_paddr, tag_paddr, xlat;
57
MemoryRegion *mr;
58
ARMASIdx tag_asi;
59
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
60
* valid. Indicate to probe_access_flags no-fault, then assert that
61
* we received a valid page.
62
*/
63
- flags = probe_access_flags(env, ptr, ptr_access, ptr_mmu_idx,
64
- ra == 0, &host, ra);
65
+ flags = probe_access_full(env, ptr, ptr_access, ptr_mmu_idx,
66
+ ra == 0, &host, &full, ra);
67
assert(!(flags & TLB_INVALID_MASK));
68
69
- /*
70
- * Find the CPUTLBEntryFull for ptr. This *must* be present in the TLB
71
- * because we just found the mapping.
72
- * TODO: Perhaps there should be a cputlb helper that returns a
73
- * matching tlb entry + iotlb entry.
74
- */
75
- index = tlb_index(env, ptr_mmu_idx, ptr);
76
-# ifdef CONFIG_DEBUG_TCG
77
- {
78
- CPUTLBEntry *entry = tlb_entry(env, ptr_mmu_idx, ptr);
79
- target_ulong comparator = (ptr_access == MMU_DATA_LOAD
80
- ? entry->addr_read
81
- : tlb_addr_write(entry));
82
- g_assert(tlb_hit(comparator, ptr));
83
- }
84
-# endif
85
- full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index];
86
-
87
/* If the virtual page MemAttr != Tagged, access unchecked. */
88
- if (!arm_tlb_mte_tagged(&full->attrs)) {
89
+ if (full->pte_attrs != 0xf0) {
90
return NULL;
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
94
return NULL;
95
}
96
97
+ /*
98
+ * Remember these values across the second lookup below,
99
+ * which may invalidate this pointer via tlb resize.
100
+ */
101
+ ptr_paddr = full->phys_addr;
102
+ attrs = full->attrs;
103
+ full = NULL;
104
+
105
/*
106
* The Normal memory access can extend to the next page. E.g. a single
107
* 8-byte access to the last byte of a page will check only the last
108
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
109
*/
110
in_page = -(ptr | TARGET_PAGE_MASK);
111
if (unlikely(ptr_size > in_page)) {
112
- void *ignore;
113
- flags |= probe_access_flags(env, ptr + in_page, ptr_access,
114
- ptr_mmu_idx, ra == 0, &ignore, ra);
115
+ flags |= probe_access_full(env, ptr + in_page, ptr_access,
116
+ ptr_mmu_idx, ra == 0, &host, &full, ra);
117
assert(!(flags & TLB_INVALID_MASK));
118
}
119
120
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
121
if (unlikely(flags & TLB_WATCHPOINT)) {
122
int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
123
assert(ra != 0);
124
- cpu_check_watchpoint(env_cpu(env), ptr, ptr_size,
125
- full->attrs, wp, ra);
126
+ cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
127
}
128
129
- /*
130
- * Find the physical address within the normal mem space.
131
- * The memory region lookup must succeed because TLB_MMIO was
132
- * not set in the cputlb lookup above.
133
- */
134
- mr = memory_region_from_host(host, &ptr_ra);
135
- tcg_debug_assert(mr != NULL);
136
- tcg_debug_assert(memory_region_is_ram(mr));
137
- ptr_paddr = ptr_ra;
138
- do {
139
- ptr_paddr += mr->addr;
140
- mr = mr->container;
141
- } while (mr);
142
-
143
/* Convert to the physical address in tag space. */
144
tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1);
145
146
/* Look up the address in tag space. */
147
- tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
148
+ tag_asi = attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
149
tag_as = cpu_get_address_space(env_cpu(env), tag_asi);
150
mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL,
151
- tag_access == MMU_DATA_STORE,
152
- full->attrs);
153
+ tag_access == MMU_DATA_STORE, attrs);
154
155
/*
156
* Note that @mr will never be NULL. If there is nothing in the address
157
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/target/arm/sve_helper.c
160
+++ b/target/arm/sve_helper.c
161
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
162
*/
163
addr = useronly_clean_ptr(addr);
164
165
+#ifdef CONFIG_USER_ONLY
166
flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
167
&info->host, retaddr);
168
+ memset(&info->attrs, 0, sizeof(info->attrs));
169
+ /* Require both ANON and MTE; see allocation_tag_mem(). */
170
+ info->tagged = (flags & PAGE_ANON) && (flags & PAGE_MTE);
171
+#else
172
+ CPUTLBEntryFull *full;
173
+ flags = probe_access_full(env, addr, access_type, mmu_idx, nofault,
174
+ &info->host, &full, retaddr);
175
+ info->attrs = full->attrs;
176
+ info->tagged = full->pte_attrs == 0xf0;
177
+#endif
178
info->flags = flags;
179
180
if (flags & TLB_INVALID_MASK) {
181
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
182
183
/* Ensure that info->host[] is relative to addr, not addr + mem_off. */
184
info->host -= mem_off;
185
-
186
-#ifdef CONFIG_USER_ONLY
187
- memset(&info->attrs, 0, sizeof(info->attrs));
188
- /* Require both MAP_ANON and PROT_MTE -- see allocation_tag_mem. */
189
- arm_tlb_mte_tagged(&info->attrs) =
190
- (flags & PAGE_ANON) && (flags & PAGE_MTE);
191
-#else
192
- /*
193
- * Find the iotlbentry for addr and return the transaction attributes.
194
- * This *must* be present in the TLB because we just found the mapping.
195
- */
196
- {
197
- uintptr_t index = tlb_index(env, mmu_idx, addr);
198
-
199
-# ifdef CONFIG_DEBUG_TCG
200
- CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
201
- target_ulong comparator = (access_type == MMU_DATA_LOAD
202
- ? entry->addr_read
203
- : tlb_addr_write(entry));
204
- g_assert(tlb_hit(comparator, addr));
205
-# endif
206
-
207
- CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
208
- info->attrs = full->attrs;
209
- }
210
-#endif
211
-
212
return true;
213
}
214
215
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
216
intptr_t mem_off, reg_off, reg_last;
217
218
/* Process the page only if MemAttr == Tagged. */
219
- if (arm_tlb_mte_tagged(&info->page[0].attrs)) {
220
+ if (info->page[0].tagged) {
221
mem_off = info->mem_off_first[0];
222
reg_off = info->reg_off_first[0];
223
reg_last = info->reg_off_split;
224
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
225
}
226
227
mem_off = info->mem_off_first[1];
228
- if (mem_off >= 0 && arm_tlb_mte_tagged(&info->page[1].attrs)) {
229
+ if (mem_off >= 0 && info->page[1].tagged) {
230
reg_off = info->reg_off_first[1];
231
reg_last = info->reg_off_last[1];
232
233
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
234
* Disable MTE checking if the Tagged bit is not set. Since TBI must
235
* be set within MTEDESC for MTE, !mtedesc => !mte_active.
236
*/
237
- if (!arm_tlb_mte_tagged(&info.page[0].attrs)) {
238
+ if (!info.page[0].tagged) {
239
mtedesc = 0;
240
}
241
242
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
243
cpu_check_watchpoint(env_cpu(env), addr, msize,
244
info.attrs, BP_MEM_READ, retaddr);
245
}
246
- if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
247
+ if (mtedesc && info.tagged) {
248
mte_check(env, mtedesc, addr, retaddr);
249
}
250
if (unlikely(info.flags & TLB_MMIO)) {
251
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
252
msize, info.attrs,
253
BP_MEM_READ, retaddr);
254
}
255
- if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
256
+ if (mtedesc && info.tagged) {
257
mte_check(env, mtedesc, addr, retaddr);
258
}
259
tlb_fn(env, &scratch, reg_off, addr, retaddr);
260
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
261
(env_cpu(env), addr, msize) & BP_MEM_READ)) {
262
goto fault;
263
}
264
- if (mtedesc &&
265
- arm_tlb_mte_tagged(&info.attrs) &&
266
- !mte_probe(env, mtedesc, addr)) {
267
+ if (mtedesc && info.tagged && !mte_probe(env, mtedesc, addr)) {
268
goto fault;
269
}
270
271
@@ -XXX,XX +XXX,XX @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
272
info.attrs, BP_MEM_WRITE, retaddr);
273
}
274
275
- if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
276
+ if (mtedesc && info.tagged) {
277
mte_check(env, mtedesc, addr, retaddr);
278
}
40
}
279
}
41
if (s->data_len == 0) {
280
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
42
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
281
index XXXXXXX..XXXXXXX 100644
43
int len = p->iov.size - p->actual_length;
282
--- a/target/arm/tlb_helper.c
44
if (len) {
283
+++ b/target/arm/tlb_helper.c
45
usb_packet_skip(p, len);
284
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
46
+ if (len > s->data_len) {
285
res.f.phys_addr &= TARGET_PAGE_MASK;
47
+ len = s->data_len;
286
address &= TARGET_PAGE_MASK;
48
+ }
287
}
49
s->data_len -= len;
288
- /* Notice and record tagged memory. */
50
if (s->data_len == 0) {
289
- if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
51
s->mode = USB_MSDM_CSW;
290
- arm_tlb_mte_tagged(&res.f.attrs) = true;
52
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
291
- }
53
int len = p->iov.size - p->actual_length;
292
54
if (len) {
293
res.f.pte_attrs = res.cacheattrs.attrs;
55
usb_packet_skip(p, len);
294
res.f.shareability = res.cacheattrs.shareability;
56
+ if (len > s->data_len) {
57
+ len = s->data_len;
58
+ }
59
s->data_len -= len;
60
if (s->data_len == 0) {
61
s->mode = USB_MSDM_CSW;
62
}
63
}
64
}
65
- if (p->actual_length < p->iov.size) {
66
+ if (p->actual_length < p->iov.size && (p->short_not_ok ||
67
+ s->scsi_len >= p->ep->max_packet_size)) {
68
DPRINTF("Deferring packet %p [wait data-in]\n", p);
69
s->packet = p;
70
p->status = USB_RET_ASYNC;
71
--
295
--
72
2.20.1
296
2.25.1
73
74
diff view generated by jsdifflib
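Condensing the pattern that the MTE and SVE changes above adopt: probe_access_full() hands back the CPUTLBEntryFull for the page, so the Tagged-memory check and the physical address come straight from the TLB entry rather than from a MemTxAttrs bit plus a walk of the memory-region tree. The wrapper below is only a sketch: the page_is_tagged() name is invented here, and this is QEMU-internal code that does not build on its own; the call shape and the 0xf0 MAIR check are taken from the patch above.

/* Sketch (QEMU-internal, illustration only) of the new lookup pattern. */
static bool page_is_tagged(CPUARMState *env, uint64_t ptr,
                           MMUAccessType access, int mmu_idx, uintptr_t ra)
{
    CPUTLBEntryFull *full;
    void *host;
    int flags = probe_access_full(env, ptr, access, mmu_idx,
                                  ra == 0, &host, &full, ra);

    /* The caller has just accessed the page, so the entry must be valid. */
    assert(!(flags & TLB_INVALID_MASK));

    /* 0xf0 is the MAIR attribute encoding for Tagged Normal memory;
     * full->phys_addr and full->attrs are then used to locate the tags. */
    return full->pte_attrs == 0xf0;
}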
[Old series patch]

From: Richard Henderson <richard.henderson@linaro.org>

Rather than passing an opcode to a helper, fully decode the
operation at translate time. Use clear_tail_16 to zap the
balance of the SVE register with the AdvSIMD write.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 5 ++++-
 target/arm/crypto_helper.c | 24 ++++++++++++++++++------
 target/arm/translate-a64.c | 21 +++++----------------
 3 files changed, 27 insertions(+), 23 deletions(-)

[New series patch]

From: Richard Henderson <richard.henderson@linaro.org>

Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present. Also handles the FIXME about
executing from device memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 9 +++++----
 target/arm/cpu.h | 13 -------------
 target/arm/internals.h | 1 +
 target/arm/ptw.c | 7 ++++---
 target/arm/translate-a64.c | 21 ++++++++++-----------
 5 files changed, 20 insertions(+), 31 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
20
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
22
--- a/target/arm/cpu-param.h
20
+++ b/target/arm/helper.h
23
+++ b/target/arm/cpu-param.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
@@ -XXX,XX +XXX,XX @@
22
DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
25
*
23
void, ptr, ptr, ptr, i32)
26
* For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
24
27
* Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
25
-DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
28
- * For shareability, as in the SH field of the VMSAv8-64 PTEs.
26
+DEF_HELPER_FLAGS_4(crypto_sm3tt1a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+ * For shareability and guarded, as in the SH and GP fields respectively
27
+DEF_HELPER_FLAGS_4(crypto_sm3tt1b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+ * of the VMSAv8-64 PTEs.
28
+DEF_HELPER_FLAGS_4(crypto_sm3tt2a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
*/
29
+DEF_HELPER_FLAGS_4(crypto_sm3tt2b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
# define TARGET_PAGE_ENTRY_EXTRA \
30
DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
33
- uint8_t pte_attrs; \
31
void, ptr, ptr, ptr, i32)
34
- uint8_t shareability;
32
DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
35
-
33
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
36
+ uint8_t pte_attrs; \
37
+ uint8_t shareability; \
38
+ bool guarded;
39
#endif
40
41
#define NB_MMU_MODES 8
42
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/crypto_helper.c
44
--- a/target/arm/cpu.h
36
+++ b/target/arm/crypto_helper.c
45
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
46
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
38
clear_tail_16(vd, desc);
47
/* Shared between translate-sve.c and sve_helper.c. */
39
}
48
extern const uint64_t pred_esz_masks[5];
40
49
41
-void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
50
-/* Helper for the macros below, validating the argument type. */
42
- uint32_t opcode)
51
-static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
43
+static inline void QEMU_ALWAYS_INLINE
52
-{
44
+crypto_sm3tt(uint64_t *rd, uint64_t *rn, uint64_t *rm,
53
- return x;
45
+ uint32_t desc, uint32_t opcode)
54
-}
46
{
55
-
47
- uint64_t *rd = vd;
56
-/*
48
- uint64_t *rn = vn;
57
- * Lvalue macros for ARM TLB bits that we must cache in the TCG TLB.
49
- uint64_t *rm = vm;
58
- * Using these should be a bit more self-documenting than using the
50
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
59
- * generic target bits directly.
51
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
60
- */
52
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
61
-#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
53
+ uint32_t imm2 = simd_data(desc);
62
-
54
uint32_t t;
63
/*
55
64
* AArch64 usage of the PAGE_TARGET_* bits for linux-user.
56
assert(imm2 < 4);
65
* Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
57
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
66
diff --git a/target/arm/internals.h b/target/arm/internals.h
58
/* SM3TT2B */
67
index XXXXXXX..XXXXXXX 100644
59
t = cho(CR_ST_WORD(d, 3), CR_ST_WORD(d, 2), CR_ST_WORD(d, 1));
68
--- a/target/arm/internals.h
60
} else {
69
+++ b/target/arm/internals.h
61
- g_assert_not_reached();
70
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
62
+ qemu_build_not_reached();
71
unsigned int attrs:8;
72
unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
73
bool is_s2_format:1;
74
+ bool guarded:1; /* guarded bit of the v8-64 PTE */
75
} ARMCacheAttrs;
76
77
/* Fields that are valid upon success. */
78
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/ptw.c
81
+++ b/target/arm/ptw.c
82
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
83
*/
84
result->f.attrs.secure = false;
63
}
85
}
64
86
- /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
65
t += CR_ST_WORD(d, 0) + CR_ST_WORD(m, imm2);
87
- if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
66
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
88
- arm_tlb_bti_gp(&result->f.attrs) = true;
67
68
rd[0] = d.l[0];
69
rd[1] = d.l[1];
70
+
89
+
71
+ clear_tail_16(rd, desc);
90
+ /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
72
}
91
+ if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
73
92
+ result->f.guarded = guarded;
74
+#define DO_SM3TT(NAME, OPCODE) \
93
}
75
+ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
94
76
+ { crypto_sm3tt(vd, vn, vm, desc, OPCODE); }
95
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
77
+
78
+DO_SM3TT(crypto_sm3tt1a, 0)
79
+DO_SM3TT(crypto_sm3tt1b, 1)
80
+DO_SM3TT(crypto_sm3tt2a, 2)
81
+DO_SM3TT(crypto_sm3tt2b, 3)
82
+
83
+#undef DO_SM3TT
84
+
85
static uint8_t const sm4_sbox[] = {
86
0xd6, 0x90, 0xe9, 0xfe, 0xcc, 0xe1, 0x3d, 0xb7,
87
0x16, 0xb6, 0x14, 0xc2, 0x28, 0xfb, 0x2c, 0x05,
88
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
96
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
89
index XXXXXXX..XXXXXXX 100644
97
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate-a64.c
98
--- a/target/arm/translate-a64.c
91
+++ b/target/arm/translate-a64.c
99
+++ b/target/arm/translate-a64.c
92
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
100
@@ -XXX,XX +XXX,XX @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
93
*/
101
#ifdef CONFIG_USER_ONLY
94
static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
102
return page_get_flags(addr) & PAGE_BTI;
95
{
103
#else
96
+ static gen_helper_gvec_3 * const fns[4] = {
104
+ CPUTLBEntryFull *full;
97
+ gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
105
+ void *host;
98
+ gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
106
int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
99
+ };
107
- unsigned int index = tlb_index(env, mmu_idx, addr);
100
int opcode = extract32(insn, 10, 2);
108
- CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
101
int imm2 = extract32(insn, 12, 2);
109
+ int flags;
102
int rm = extract32(insn, 16, 5);
110
103
int rn = extract32(insn, 5, 5);
111
/*
104
int rd = extract32(insn, 0, 5);
112
* We test this immediately after reading an insn, which means
105
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
113
- * that any normal page must be in the TLB. The only exception
106
- TCGv_i32 tcg_imm2, tcg_opcode;
114
- * would be for executing from flash or device memory, which
107
115
- * does not retain the TLB entry.
108
if (!dc_isar_feature(aa64_sm3, s)) {
116
- *
109
unallocated_encoding(s);
117
- * FIXME: Assume false for those, for now. We could use
110
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
118
- * arm_cpu_get_phys_page_attrs_debug to re-read the page
111
return;
119
- * table entry even for that case.
112
}
120
+ * that the TLB entry must be present and valid, and thus this
113
121
+ * access will never raise an exception.
114
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
122
*/
115
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
123
- return (tlb_hit(entry->addr_code, addr) &&
116
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
124
- arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
117
- tcg_imm2 = tcg_const_i32(imm2);
125
+ flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
118
- tcg_opcode = tcg_const_i32(opcode);
126
+ false, &host, &full, 0);
119
-
127
+ assert(!(flags & TLB_INVALID_MASK));
120
- gen_helper_crypto_sm3tt(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr, tcg_imm2,
128
+
121
- tcg_opcode);
129
+ return full->guarded;
122
-
130
#endif
123
- tcg_temp_free_ptr(tcg_rd_ptr);
124
- tcg_temp_free_ptr(tcg_rn_ptr);
125
- tcg_temp_free_ptr(tcg_rm_ptr);
126
- tcg_temp_free_i32(tcg_imm2);
127
- tcg_temp_free_i32(tcg_opcode);
128
+ gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
129
}
131
}
130
132
131
/* C3.6 Data processing - SIMD, inc Crypto
132
--
133
--
133
2.20.1
134
2.25.1
134
135
diff view generated by jsdifflib
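For orientation, the TARGET_PAGE_ENTRY_EXTRA macro that the patch above extends is expanded by the generic TLB code into CPUTLBEntryFull, so the Arm-specific page-table bits travel with each TLB entry. Below is a simplified, standalone sketch of the resulting shape; the field list is abridged and the types simplified, so this is not the real definition.

/* Rough shape (illustration only) of what CPUTLBEntryFull carries for Arm
 * once TARGET_PAGE_ENTRY_EXTRA from target/arm/cpu-param.h is expanded. */
#include <stdint.h>
#include <stdbool.h>

typedef struct SketchCPUTLBEntryFull {
    uint64_t phys_addr;     /* physical address of the translated page */
    /* MemTxAttrs attrs;       transaction attributes (generic field) */
    /* ...other generic fields elided... */

    /* From TARGET_PAGE_ENTRY_EXTRA: */
    uint8_t pte_attrs;      /* MAIR-format attrs, or S2 descriptor bits [5:2] */
    uint8_t shareability;   /* SH field of the VMSAv8-64 PTE */
    bool    guarded;        /* GP bit of the PTE, consumed by BTI */
} SketchCPUTLBEntryFull;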
[Old series patch]

From: Paul Zimmerman <pauldzim@gmail.com>

Add a check for functional dwc-hsotg (dwc2) USB host emulation to
the Raspi 2 acceptance test

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
Message-id: 20200520235349.21215-8-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/acceptance/boot_linux_console.py | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

[New series patch]

From: Richard Henderson <richard.henderson@linaro.org>

Not yet used, but add mmu indexes for 1-1 mapping
to physical addresses.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 7 ++++++-
 target/arm/ptw.c | 19 +++++++++++++++++--
 3 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
16
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/tests/acceptance/boot_linux_console.py
18
--- a/target/arm/cpu-param.h
17
+++ b/tests/acceptance/boot_linux_console.py
19
+++ b/target/arm/cpu-param.h
18
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):
20
@@ -XXX,XX +XXX,XX @@
19
21
bool guarded;
20
self.vm.set_console()
22
#endif
21
kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
23
22
- serial_kernel_cmdline[uart_id])
24
-#define NB_MMU_MODES 8
23
+ serial_kernel_cmdline[uart_id] +
25
+#define NB_MMU_MODES 10
24
+ ' root=/dev/mmcblk0p2 rootwait ' +
26
25
+ 'dwc_otg.fiq_fsm_enable=0')
27
#endif
26
self.vm.add_args('-kernel', kernel_path,
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
27
'-dtb', dtb_path,
29
index XXXXXXX..XXXXXXX 100644
28
- '-append', kernel_command_line)
30
--- a/target/arm/cpu.h
29
+ '-append', kernel_command_line,
31
+++ b/target/arm/cpu.h
30
+ '-device', 'usb-kbd')
32
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
31
self.vm.launch()
33
* EL2 EL2&0 +PAN
32
console_pattern = 'Kernel command line: %s' % kernel_command_line
34
* EL2 (aka NS PL2)
33
self.wait_for_console_pattern(console_pattern)
35
* EL3 (aka S PL1)
34
+ console_pattern = 'Product: QEMU USB Keyboard'
36
+ * Physical (NS & S)
35
+ self.wait_for_console_pattern(console_pattern)
37
*
36
38
- * for a total of 8 different mmu_idx.
37
def test_arm_raspi2_uart0(self):
39
+ * for a total of 10 different mmu_idx.
38
"""
40
*
41
* R profile CPUs have an MPU, but can use the same set of MMU indexes
42
* as A profile. They only need to distinguish EL0 and EL1 (and
43
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
44
ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
45
ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
46
47
+ /* TLBs with 1-1 mapping to the physical address spaces. */
48
+ ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
49
+ ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
50
+
51
/*
52
* These are not allocated TLBs and are used only for AT system
53
* instructions or for the first stage of an S12 page table walk.
54
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/ptw.c
57
+++ b/target/arm/ptw.c
58
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
59
case ARMMMUIdx_E3:
60
break;
61
62
+ case ARMMMUIdx_Phys_NS:
63
+ case ARMMMUIdx_Phys_S:
64
+ /* No translation for physical address spaces. */
65
+ return true;
66
+
67
default:
68
g_assert_not_reached();
69
}
70
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
71
{
72
uint8_t memattr = 0x00; /* Device nGnRnE */
73
uint8_t shareability = 0; /* non-sharable */
74
+ int r_el;
75
76
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
77
- int r_el = regime_el(env, mmu_idx);
78
+ switch (mmu_idx) {
79
+ case ARMMMUIdx_Stage2:
80
+ case ARMMMUIdx_Stage2_S:
81
+ case ARMMMUIdx_Phys_NS:
82
+ case ARMMMUIdx_Phys_S:
83
+ break;
84
85
+ default:
86
+ r_el = regime_el(env, mmu_idx);
87
if (arm_el_is_aa64(env, r_el)) {
88
int pamax = arm_pamax(env_archcpu(env));
89
uint64_t tcr = env->cp15.tcr_el[r_el];
90
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
91
shareability = 2; /* outer sharable */
92
}
93
result->cacheattrs.is_s2_format = false;
94
+ break;
95
}
96
97
result->f.phys_addr = address;
98
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
99
is_secure = arm_is_secure_below_el3(env);
100
break;
101
case ARMMMUIdx_Stage2:
102
+ case ARMMMUIdx_Phys_NS:
103
case ARMMMUIdx_MPrivNegPri:
104
case ARMMMUIdx_MUserNegPri:
105
case ARMMMUIdx_MPriv:
106
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
107
break;
108
case ARMMMUIdx_E3:
109
case ARMMMUIdx_Stage2_S:
110
+ case ARMMMUIdx_Phys_S:
111
case ARMMMUIdx_MSPrivNegPri:
112
case ARMMMUIdx_MSUserNegPri:
113
case ARMMMUIdx_MSPriv:
39
--
114
--
40
2.20.1
115
2.25.1
41
42
diff view generated by jsdifflib
[Old series patch]

From: Richard Henderson <richard.henderson@linaro.org>

Rather than passing an opcode to a helper, fully decode the
operation at translate time. Use clear_tail_16 to zap the
balance of the SVE register with the AdvSIMD write.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 5 +-
 target/arm/neon-dp.decode | 6 +-
 target/arm/crypto_helper.c | 99 +++++++++++++++++++++------------
 target/arm/translate-a64.c | 29 ++++------
 target/arm/translate-neon.inc.c | 46 ++++-----------
 5 files changed, 93 insertions(+), 92 deletions(-)

[New series patch]

From: Richard Henderson <richard.henderson@linaro.org>

We had been marking this ARM_MMU_IDX_NOTLB, move it to a real tlb.
Flush the tlb when invalidating stage 1+2 translations. Re-use
alle1_tlbmask() for other instances of EL1&0 + Stage2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 23 ++---
 target/arm/helper.c | 151 ++++++++++++++++++++++++++++++-----------
 3 files changed, 127 insertions(+), 49 deletions(-)
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.h
19
--- a/target/arm/cpu-param.h
22
+++ b/target/arm/helper.h
20
+++ b/target/arm/cpu-param.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
21
@@ -XXX,XX +XXX,XX @@
24
DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
bool guarded;
25
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
#endif
26
24
27
-DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
-#define NB_MMU_MODES 10
28
+DEF_HELPER_FLAGS_4(crypto_sha1su0, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+#define NB_MMU_MODES 12
29
+DEF_HELPER_FLAGS_4(crypto_sha1c, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
30
+DEF_HELPER_FLAGS_4(crypto_sha1p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
#endif
31
+DEF_HELPER_FLAGS_4(crypto_sha1m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
32
DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
35
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
36
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/neon-dp.decode
31
--- a/target/arm/cpu.h
38
+++ b/target/arm/neon-dp.decode
32
+++ b/target/arm/cpu.h
39
@@ -XXX,XX +XXX,XX @@ VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
33
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
40
@3same_crypto .... .... .... .... .... .... .... .... \
34
* EL2 (aka NS PL2)
41
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
35
* EL3 (aka S PL1)
42
36
* Physical (NS & S)
43
-SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
37
+ * Stage2 (NS & S)
44
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
38
*
45
+SHA1C_3s 1111 001 0 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
39
- * for a total of 10 different mmu_idx.
46
+SHA1P_3s 1111 001 0 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
40
+ * for a total of 12 different mmu_idx.
47
+SHA1M_3s 1111 001 0 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
41
*
48
+SHA1SU0_3s 1111 001 0 0 . 11 .... .... 1100 . 1 . 0 .... @3same_crypto
42
* R profile CPUs have an MPU, but can use the same set of MMU indexes
49
SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
43
* as A profile. They only need to distinguish EL0 and EL1 (and
50
SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
44
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
51
SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
45
ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
52
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
46
ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
47
48
+ /*
49
+ * Used for second stage of an S12 page table walk, or for descriptor
50
+ * loads during first stage of an S1 page table walk. Note that both
51
+ * are in use simultaneously for SecureEL2: the security state for
52
+ * the S2 ptw is selected by the NS bit from the S1 ptw.
53
+ */
54
+ ARMMMUIdx_Stage2 = 10 | ARM_MMU_IDX_A,
55
+ ARMMMUIdx_Stage2_S = 11 | ARM_MMU_IDX_A,
56
+
57
/*
58
* These are not allocated TLBs and are used only for AT system
59
* instructions or for the first stage of an S12 page table walk.
60
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
61
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
62
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
63
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
64
- /*
65
- * Not allocated a TLB: used only for second stage of an S12 page
66
- * table walk, or for descriptor loads during first stage of an S1
67
- * page table walk. Note that if we ever want to have a TLB for this
68
- * then various TLB flush insns which currently are no-ops or flush
69
- * only stage 1 MMU indexes will need to change to flush stage 2.
70
- */
71
- ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
72
- ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
73
74
/*
75
* M-profile.
76
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
77
TO_CORE_BIT(E20_2),
78
TO_CORE_BIT(E20_2_PAN),
79
TO_CORE_BIT(E3),
80
+ TO_CORE_BIT(Stage2),
81
+ TO_CORE_BIT(Stage2_S),
82
83
TO_CORE_BIT(MUser),
84
TO_CORE_BIT(MPriv),
85
diff --git a/target/arm/helper.c b/target/arm/helper.c
53
index XXXXXXX..XXXXXXX 100644
86
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/crypto_helper.c
87
--- a/target/arm/helper.c
55
+++ b/target/arm/crypto_helper.c
88
+++ b/target/arm/helper.c
56
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
89
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
57
};
90
raw_write(env, ri, value);
58
91
}
59
#ifdef HOST_WORDS_BIGENDIAN
92
60
-#define CR_ST_BYTE(state, i) (state.bytes[(15 - (i)) ^ 8])
93
+static int alle1_tlbmask(CPUARMState *env)
61
-#define CR_ST_WORD(state, i) (state.words[(3 - (i)) ^ 2])
94
+{
62
+#define CR_ST_BYTE(state, i) ((state).bytes[(15 - (i)) ^ 8])
95
+ /*
63
+#define CR_ST_WORD(state, i) ((state).words[(3 - (i)) ^ 2])
96
+ * Note that the 'ALL' scope must invalidate both stage 1 and
64
#else
97
+ * stage 2 translations, whereas most other scopes only invalidate
65
-#define CR_ST_BYTE(state, i) (state.bytes[i])
98
+ * stage 1 translations.
66
-#define CR_ST_WORD(state, i) (state.words[i])
99
+ */
67
+#define CR_ST_BYTE(state, i) ((state).bytes[i])
100
+ return (ARMMMUIdxBit_E10_1 |
68
+#define CR_ST_WORD(state, i) ((state).words[i])
101
+ ARMMMUIdxBit_E10_1_PAN |
102
+ ARMMMUIdxBit_E10_0 |
103
+ ARMMMUIdxBit_Stage2 |
104
+ ARMMMUIdxBit_Stage2_S);
105
+}
106
+
107
+
108
/* IS variants of TLB operations must affect all cores */
109
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
uint64_t value)
111
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
{
113
CPUState *cs = env_cpu(env);
114
115
- tlb_flush_by_mmuidx(cs,
116
- ARMMMUIdxBit_E10_1 |
117
- ARMMMUIdxBit_E10_1_PAN |
118
- ARMMMUIdxBit_E10_0);
119
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
120
}
121
122
static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
123
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
{
125
CPUState *cs = env_cpu(env);
126
127
- tlb_flush_by_mmuidx_all_cpus_synced(cs,
128
- ARMMMUIdxBit_E10_1 |
129
- ARMMMUIdxBit_E10_1_PAN |
130
- ARMMMUIdxBit_E10_0);
131
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
132
}
133
134
135
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
136
ARMMMUIdxBit_E2);
137
}
138
139
+static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
140
+ uint64_t value)
141
+{
142
+ CPUState *cs = env_cpu(env);
143
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
144
+
145
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
146
+}
147
+
148
+static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
149
+ uint64_t value)
150
+{
151
+ CPUState *cs = env_cpu(env);
152
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
153
+
154
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
155
+}
156
+
157
static const ARMCPRegInfo cp_reginfo[] = {
158
/* Define the secure and non-secure FCSE identifier CP registers
159
* separately because there is no secure bank in V8 (no _EL3). This allows
160
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
162
/*
163
* A change in VMID to the stage2 page table (Stage2) invalidates
164
- * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
165
+ * the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
166
*/
167
if (raw_read(env, ri) != value) {
168
- uint16_t mask = ARMMMUIdxBit_E10_1 |
169
- ARMMMUIdxBit_E10_1_PAN |
170
- ARMMMUIdxBit_E10_0;
171
- tlb_flush_by_mmuidx(cs, mask);
172
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
173
raw_write(env, ri, value);
174
}
175
}
176
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
177
}
178
}
179
180
-static int alle1_tlbmask(CPUARMState *env)
181
-{
182
- /*
183
- * Note that the 'ALL' scope must invalidate both stage 1 and
184
- * stage 2 translations, whereas most other scopes only invalidate
185
- * stage 1 translations.
186
- */
187
- return (ARMMMUIdxBit_E10_1 |
188
- ARMMMUIdxBit_E10_1_PAN |
189
- ARMMMUIdxBit_E10_0);
190
-}
191
-
192
static int e2_tlbmask(CPUARMState *env)
193
{
194
return (ARMMMUIdxBit_E20_0 |
195
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
ARMMMUIdxBit_E3, bits);
197
}
198
199
+static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
200
+{
201
+ /*
202
+ * The MSB of value is the NS field, which only applies if SEL2
203
+ * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
204
+ */
205
+ return (value >= 0
206
+ && cpu_isar_feature(aa64_sel2, env_archcpu(env))
207
+ && arm_is_secure_below_el3(env)
208
+ ? ARMMMUIdxBit_Stage2_S
209
+ : ARMMMUIdxBit_Stage2);
210
+}
211
+
212
+static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
+ uint64_t value)
214
+{
215
+ CPUState *cs = env_cpu(env);
216
+ int mask = ipas2e1_tlbmask(env, value);
217
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
218
+
219
+ if (tlb_force_broadcast(env)) {
220
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
221
+ } else {
222
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
223
+ }
224
+}
225
+
226
+static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
227
+ uint64_t value)
228
+{
229
+ CPUState *cs = env_cpu(env);
230
+ int mask = ipas2e1_tlbmask(env, value);
231
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
232
+
233
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
234
+}
235
+
236
#ifdef TARGET_AARCH64
237
typedef struct {
238
uint64_t base;
239
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
240
241
do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
242
}
243
+
244
+static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
245
+ uint64_t value)
246
+{
247
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
248
+ tlb_force_broadcast(env));
249
+}
250
+
251
+static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
252
+ const ARMCPRegInfo *ri,
253
+ uint64_t value)
254
+{
255
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
256
+}
69
#endif
257
#endif
70
258
71
/*
259
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
72
@@ -XXX,XX +XXX,XX @@ static uint32_t maj(uint32_t x, uint32_t y, uint32_t z)
260
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
73
return (x & y) | ((x | y) & z);
261
.writefn = tlbi_aa64_vae1_write },
74
}
262
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
75
263
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
76
-void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
264
- .access = PL2_W, .type = ARM_CP_NOP },
77
+void HELPER(crypto_sha1su0)(void *vd, void *vn, void *vm, uint32_t desc)
265
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
78
+{
266
+ .writefn = tlbi_aa64_ipas2e1is_write },
79
+ uint64_t *d = vd, *n = vn, *m = vm;
267
{ .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
80
+ uint64_t d0, d1;
268
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
81
+
269
- .access = PL2_W, .type = ARM_CP_NOP },
82
+ d0 = d[1] ^ d[0] ^ m[0];
270
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
83
+ d1 = n[0] ^ d[1] ^ m[1];
271
+ .writefn = tlbi_aa64_ipas2e1is_write },
84
+ d[0] = d0;
272
{ .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
85
+ d[1] = d1;
273
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
86
+
274
.access = PL2_W, .type = ARM_CP_NO_RAW,
87
+ clear_tail_16(vd, desc);
275
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
88
+}
276
.writefn = tlbi_aa64_alle1is_write },
89
+
277
{ .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
90
+static inline void crypto_sha1_3reg(uint64_t *rd, uint64_t *rn,
278
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
91
+ uint64_t *rm, uint32_t desc,
279
- .access = PL2_W, .type = ARM_CP_NOP },
92
+ uint32_t (*fn)(union CRYPTO_STATE *d))
280
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
93
{
281
+ .writefn = tlbi_aa64_ipas2e1_write },
94
- uint64_t *rd = vd;
282
{ .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
95
- uint64_t *rn = vn;
283
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
96
- uint64_t *rm = vm;
284
- .access = PL2_W, .type = ARM_CP_NOP },
97
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
285
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
98
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
286
+ .writefn = tlbi_aa64_ipas2e1_write },
99
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
287
{ .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
100
+ int i;
288
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
101
289
.access = PL2_W, .type = ARM_CP_NO_RAW,
102
- if (op == 3) { /* sha1su0 */
290
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
103
- d.l[0] ^= d.l[1] ^ m.l[0];
291
.writefn = tlbimva_hyp_is_write },
104
- d.l[1] ^= n.l[0] ^ m.l[1];
292
{ .name = "TLBIIPAS2",
105
- } else {
293
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
106
- int i;
294
- .type = ARM_CP_NOP, .access = PL2_W },
107
+ for (i = 0; i < 4; i++) {
295
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
108
+ uint32_t t = fn(&d);
296
+ .writefn = tlbiipas2_hyp_write },
109
297
{ .name = "TLBIIPAS2IS",
110
- for (i = 0; i < 4; i++) {
298
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
111
- uint32_t t;
299
- .type = ARM_CP_NOP, .access = PL2_W },
112
+ t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
300
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
113
+ + CR_ST_WORD(m, i);
301
+ .writefn = tlbiipas2is_hyp_write },
114
302
{ .name = "TLBIIPAS2L",
115
- switch (op) {
303
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
116
- case 0: /* sha1c */
304
- .type = ARM_CP_NOP, .access = PL2_W },
117
- t = cho(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
305
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
118
- break;
306
+ .writefn = tlbiipas2_hyp_write },
119
- case 1: /* sha1p */
307
{ .name = "TLBIIPAS2LIS",
120
- t = par(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
308
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
121
- break;
309
- .type = ARM_CP_NOP, .access = PL2_W },
122
- case 2: /* sha1m */
310
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
123
- t = maj(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
311
+ .writefn = tlbiipas2is_hyp_write },
124
- break;
312
/* 32 bit cache operations */
125
- default:
313
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
126
- g_assert_not_reached();
314
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
127
- }
315
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
128
- t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
316
.writefn = tlbi_aa64_rvae1_write },
129
- + CR_ST_WORD(m, i);
317
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
130
-
318
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
131
- CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
319
- .access = PL2_W, .type = ARM_CP_NOP },
132
- CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
320
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
133
- CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
321
+ .writefn = tlbi_aa64_ripas2e1is_write },
134
- CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
322
{ .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
135
- CR_ST_WORD(d, 0) = t;
323
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
136
- }
324
- .access = PL2_W, .type = ARM_CP_NOP },
137
+ CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
325
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
138
+ CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
326
+ .writefn = tlbi_aa64_ripas2e1is_write },
139
+ CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
327
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
140
+ CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
328
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
141
+ CR_ST_WORD(d, 0) = t;
329
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
142
}
330
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
143
rd[0] = d.l[0];
331
.writefn = tlbi_aa64_rvae2is_write },
144
rd[1] = d.l[1];
332
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
145
+
333
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
146
+ clear_tail_16(rd, desc);
334
- .access = PL2_W, .type = ARM_CP_NOP },
147
+}
335
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
148
+
336
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
149
+static uint32_t do_sha1c(union CRYPTO_STATE *d)
337
+ .writefn = tlbi_aa64_ripas2e1_write },
150
+{
338
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
151
+ return cho(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
339
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
152
+}
340
- .access = PL2_W, .type = ARM_CP_NOP },
153
+
341
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
154
+void HELPER(crypto_sha1c)(void *vd, void *vn, void *vm, uint32_t desc)
342
+ .writefn = tlbi_aa64_ripas2e1_write },
155
+{
343
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
156
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1c);
344
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
157
+}
345
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
158
+
159
+static uint32_t do_sha1p(union CRYPTO_STATE *d)
160
+{
161
+ return par(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
162
+}
163
+
164
+void HELPER(crypto_sha1p)(void *vd, void *vn, void *vm, uint32_t desc)
165
+{
166
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1p);
167
+}
168
+
169
+static uint32_t do_sha1m(union CRYPTO_STATE *d)
170
+{
171
+ return maj(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
172
+}
173
+
174
+void HELPER(crypto_sha1m)(void *vd, void *vn, void *vm, uint32_t desc)
175
+{
176
+ crypto_sha1_3reg(vd, vn, vm, desc, do_sha1m);
177
}
178
179
void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
180
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
181
index XXXXXXX..XXXXXXX 100644
182
--- a/target/arm/translate-a64.c
183
+++ b/target/arm/translate-a64.c
184
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
185
186
switch (opcode) {
187
case 0: /* SHA1C */
188
+ genfn = gen_helper_crypto_sha1c;
189
+ feature = dc_isar_feature(aa64_sha1, s);
190
+ break;
191
case 1: /* SHA1P */
192
+ genfn = gen_helper_crypto_sha1p;
193
+ feature = dc_isar_feature(aa64_sha1, s);
194
+ break;
195
case 2: /* SHA1M */
196
+ genfn = gen_helper_crypto_sha1m;
197
+ feature = dc_isar_feature(aa64_sha1, s);
198
+ break;
199
case 3: /* SHA1SU0 */
200
- genfn = NULL;
201
+ genfn = gen_helper_crypto_sha1su0;
202
feature = dc_isar_feature(aa64_sha1, s);
203
break;
204
case 4: /* SHA256H */
205
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
206
if (!fp_access_check(s)) {
207
return;
208
}
209
-
210
- if (genfn) {
211
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
212
- } else {
213
- TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
214
- TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
215
- TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
216
- TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
217
-
218
- gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
219
- tcg_rm_ptr, tcg_opcode);
220
-
221
- tcg_temp_free_i32(tcg_opcode);
222
- tcg_temp_free_ptr(tcg_rd_ptr);
223
- tcg_temp_free_ptr(tcg_rn_ptr);
224
- tcg_temp_free_ptr(tcg_rm_ptr);
225
- }
226
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
227
}
228
229
/* Crypto two-reg SHA
230
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
231
index XXXXXXX..XXXXXXX 100644
232
--- a/target/arm/translate-neon.inc.c
233
+++ b/target/arm/translate-neon.inc.c
234
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
235
DO_VQRDMLAH(VQRDMLAH, gen_gvec_sqrdmlah_qc)
236
DO_VQRDMLAH(VQRDMLSH, gen_gvec_sqrdmlsh_qc)
237
238
-static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
239
-{
240
- TCGv_ptr ptr1, ptr2, ptr3;
241
- TCGv_i32 tmp;
242
-
243
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
244
- !dc_isar_feature(aa32_sha1, s)) {
245
- return false;
246
+#define DO_SHA1(NAME, FUNC) \
247
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
248
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
249
+ { \
250
+ if (!dc_isar_feature(aa32_sha1, s)) { \
251
+ return false; \
252
+ } \
253
+ return do_3same(s, a, gen_##NAME##_3s); \
254
}
255
256
- /* UNDEF accesses to D16-D31 if they don't exist. */
257
- if (!dc_isar_feature(aa32_simd_r32, s) &&
258
- ((a->vd | a->vn | a->vm) & 0x10)) {
259
- return false;
260
- }
261
-
262
- if ((a->vn | a->vm | a->vd) & 1) {
263
- return false;
264
- }
265
-
266
- if (!vfp_access_check(s)) {
267
- return true;
268
- }
269
-
270
- ptr1 = vfp_reg_ptr(true, a->vd);
271
- ptr2 = vfp_reg_ptr(true, a->vn);
272
- ptr3 = vfp_reg_ptr(true, a->vm);
273
- tmp = tcg_const_i32(a->optype);
274
- gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp);
275
- tcg_temp_free_i32(tmp);
276
- tcg_temp_free_ptr(ptr1);
277
- tcg_temp_free_ptr(ptr2);
278
- tcg_temp_free_ptr(ptr3);
279
-
280
- return true;
281
-}
282
+DO_SHA1(SHA1C, gen_helper_crypto_sha1c)
283
+DO_SHA1(SHA1P, gen_helper_crypto_sha1p)
284
+DO_SHA1(SHA1M, gen_helper_crypto_sha1m)
285
+DO_SHA1(SHA1SU0, gen_helper_crypto_sha1su0)
286
287
#define DO_SHA2(NAME, FUNC) \
288
WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
289
--
346
--
290
2.20.1
347
2.25.1
291
292
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The ADC region size is 256B, split as:
3
Compare only the VMID field when considering whether we need to flush.
4
- [0x00 - 0x4f] defined
5
- [0x50 - 0xff] reserved
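(Regarding the "Compare only the VMID field" change above: as a rough illustration of the idea, not the exact hunk, the VMID lives in VTTBR_EL2 bits [63:48], so a VTTBR write only needs to trigger the expensive stage-1+2 TLB flush when those bits actually change; the helper name below is made up for the sketch.)

    /* Sketch: flush only when the VMID (VTTBR_EL2[63:48]) differs. */
    static bool vttbr_vmid_changed(uint64_t old_vttbr, uint64_t new_vttbr)
    {
        return extract64(old_vttbr ^ new_vttbr, 48, 16) != 0;
    }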
6
4
7
All registers are 32-bit (thus when the datasheet mentions the
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
last defined register is 0x4c, it means its address range is
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
0x4c .. 0x4f).
7
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
10
11
This model implementation is also 32-bit. Set MemoryRegionOps
12
'impl' fields.
13
14
See:
15
'RM0033 Reference manual Rev 8', Table 10.13.18 "ADC register map".
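(A minimal sketch of what the 'impl' fields express, reusing the callback names from this patch: with both sizes set to 4, the memory core converts any narrower or wider guest access into aligned 32-bit calls into the device model.)

    static const MemoryRegionOps stm32f2xx_adc_ops = {
        .read = stm32f2xx_adc_read,
        .write = stm32f2xx_adc_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
        .impl.min_access_size = 4,   /* callbacks only ever see... */
        .impl.max_access_size = 4,   /* ...32-bit accesses */
    };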
16
17
Reported-by: Seth Kintigh <skintigh@gmail.com>
18
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
19
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
20
Message-id: 20200603055915.17678-1-f4bug@amsat.org
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
9
---
23
hw/adc/stm32f2xx_adc.c | 4 +++-
10
target/arm/helper.c | 4 ++--
24
1 file changed, 3 insertions(+), 1 deletion(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
25
12
26
diff --git a/hw/adc/stm32f2xx_adc.c b/hw/adc/stm32f2xx_adc.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
28
--- a/hw/adc/stm32f2xx_adc.c
15
--- a/target/arm/helper.c
29
+++ b/hw/adc/stm32f2xx_adc.c
16
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps stm32f2xx_adc_ops = {
17
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
31
.read = stm32f2xx_adc_read,
18
* A change in VMID to the stage2 page table (Stage2) invalidates
32
.write = stm32f2xx_adc_write,
19
* the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
33
.endianness = DEVICE_NATIVE_ENDIAN,
20
*/
34
+ .impl.min_access_size = 4,
21
- if (raw_read(env, ri) != value) {
35
+ .impl.max_access_size = 4,
22
+ if (extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
36
};
23
tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
37
24
- raw_write(env, ri, value);
38
static const VMStateDescription vmstate_stm32f2xx_adc = {
25
}
39
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_adc_init(Object *obj)
26
+ raw_write(env, ri, value);
40
sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
41
42
memory_region_init_io(&s->mmio, obj, &stm32f2xx_adc_ops, s,
43
- TYPE_STM32F2XX_ADC, 0xFF);
44
+ TYPE_STM32F2XX_ADC, 0x100);
45
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
46
}
27
}
47
28
29
static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
48
--
30
--
49
2.20.1
31
2.25.1
50
51
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Replace printf() calls by qemu_log_mask(), which is disabled
3
Consolidate most of the inputs and outputs of S1_ptw_translate
4
by default. This avoids flooding the terminal when fuzzing the
4
into a single structure. Plumb this through arm_ld*_ptw from
5
device.
5
the controlling get_phys_addr_* routine.
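(For orientation, the structure this patch introduces looks roughly like the sketch below at this point in the series; callers fill the in_* fields and S1_ptw_translate reports back through the out_* fields.)

    typedef struct S1Translate {
        ARMMMUIdx in_mmu_idx;   /* regime the walk runs in */
        bool in_secure;         /* security state on entry */
        bool out_secure;        /* security state of the descriptor fetch */
        hwaddr out_phys;        /* physical address of the descriptor */
    } S1Translate;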
6
6
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200525114123.21317-3-f4bug@amsat.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
hw/arm/pxa2xx.c | 66 ++++++++++++++++++++++++++++++++++++-------------
12
target/arm/ptw.c | 140 ++++++++++++++++++++++++++---------------------
13
1 file changed, 49 insertions(+), 17 deletions(-)
13
1 file changed, 79 insertions(+), 61 deletions(-)
14
14
15
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/pxa2xx.c
17
--- a/target/arm/ptw.c
18
+++ b/hw/arm/pxa2xx.c
18
+++ b/target/arm/ptw.c
19
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
20
#include "sysemu/blockdev.h"
20
#include "idau.h"
21
#include "sysemu/qtest.h"
21
22
#include "qemu/cutils.h"
22
23
+#include "qemu/log.h"
23
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
24
24
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
25
static struct {
25
- bool is_secure, bool s1_is_el0,
26
hwaddr io_base;
26
+typedef struct S1Translate {
27
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_pm_read(void *opaque, hwaddr addr,
27
+ ARMMMUIdx in_mmu_idx;
28
return s->pm_regs[addr >> 2];
28
+ bool in_secure;
29
default:
29
+ bool out_secure;
30
fail:
30
+ hwaddr out_phys;
31
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
31
+} S1Translate;
32
+ qemu_log_mask(LOG_GUEST_ERROR,
32
+
33
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
33
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
34
+ __func__, addr);
34
+ uint64_t address,
35
break;
35
+ MMUAccessType access_type, bool s1_is_el0,
36
}
36
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
37
__attribute__((nonnull));
38
39
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
40
}
41
42
/* Translate a S1 pagetable walk through S2 if needed. */
43
-static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
44
- hwaddr addr, bool *is_secure_ptr,
45
- ARMMMUFaultInfo *fi)
46
+static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
47
+ hwaddr addr, ARMMMUFaultInfo *fi)
48
{
49
- bool is_secure = *is_secure_ptr;
50
+ bool is_secure = ptw->in_secure;
51
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
52
53
- if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
54
+ if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
55
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
56
GetPhysAddrResult s2 = {};
57
+ S1Translate s2ptw = {
58
+ .in_mmu_idx = s2_mmu_idx,
59
+ .in_secure = is_secure,
60
+ };
61
uint64_t hcr;
62
int ret;
63
64
- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
65
- is_secure, false, &s2, fi);
66
+ ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
67
+ false, &s2, fi);
68
if (ret) {
69
assert(fi->type != ARMFault_None);
70
fi->s2addr = addr;
71
fi->stage2 = true;
72
fi->s1ptw = true;
73
fi->s1ns = !is_secure;
74
- return ~0;
75
+ return false;
76
}
77
78
hcr = arm_hcr_el2_eff_secstate(env, is_secure);
79
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
80
fi->stage2 = true;
81
fi->s1ptw = true;
82
fi->s1ns = !is_secure;
83
- return ~0;
84
+ return false;
85
}
86
87
if (arm_is_secure_below_el3(env)) {
88
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
89
} else {
90
is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
91
}
92
- *is_secure_ptr = is_secure;
93
} else {
94
assert(!is_secure);
95
}
96
97
addr = s2.f.phys_addr;
98
}
99
- return addr;
100
+
101
+ ptw->out_secure = is_secure;
102
+ ptw->out_phys = addr;
103
+ return true;
104
}
105
106
/* All loads done in the course of a page table walk go through here. */
107
-static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
108
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
109
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
110
+ ARMMMUFaultInfo *fi)
111
{
112
CPUState *cs = env_cpu(env);
113
MemTxAttrs attrs = {};
114
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
115
AddressSpace *as;
116
uint32_t data;
117
118
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
119
- attrs.secure = is_secure;
120
- as = arm_addressspace(cs, attrs);
121
- if (fi->s1ptw) {
122
+ if (!S1_ptw_translate(env, ptw, addr, fi)) {
123
return 0;
124
}
125
- if (regime_translation_big_endian(env, mmu_idx)) {
126
+ addr = ptw->out_phys;
127
+ attrs.secure = ptw->out_secure;
128
+ as = arm_addressspace(cs, attrs);
129
+ if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
130
data = address_space_ldl_be(as, addr, attrs, &result);
131
} else {
132
data = address_space_ldl_le(as, addr, attrs, &result);
133
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
37
return 0;
134
return 0;
38
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pm_write(void *opaque, hwaddr addr,
135
}
39
s->pm_regs[addr >> 2] = value;
136
40
break;
137
-static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
41
}
138
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
42
-
139
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
43
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
140
+ ARMMMUFaultInfo *fi)
44
+ qemu_log_mask(LOG_GUEST_ERROR,
141
{
45
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
142
CPUState *cs = env_cpu(env);
46
+ __func__, addr);
143
MemTxAttrs attrs = {};
47
break;
144
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
48
}
145
AddressSpace *as;
49
}
146
uint64_t data;
50
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_cm_read(void *opaque, hwaddr addr,
147
51
return s->cm_regs[CCCR >> 2] | (3 << 28);
148
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
52
149
- attrs.secure = is_secure;
53
default:
150
- as = arm_addressspace(cs, attrs);
54
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
151
- if (fi->s1ptw) {
55
+ qemu_log_mask(LOG_GUEST_ERROR,
152
+ if (!S1_ptw_translate(env, ptw, addr, fi)) {
56
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
57
+ __func__, addr);
58
break;
59
}
60
return 0;
61
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_cm_write(void *opaque, hwaddr addr,
62
break;
63
64
default:
65
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
66
+ qemu_log_mask(LOG_GUEST_ERROR,
67
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
68
+ __func__, addr);
69
break;
70
}
71
}
72
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_mm_read(void *opaque, hwaddr addr,
73
return s->mm_regs[addr >> 2];
74
/* fall through */
75
default:
76
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
77
+ qemu_log_mask(LOG_GUEST_ERROR,
78
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
79
+ __func__, addr);
80
break;
81
}
82
return 0;
83
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_mm_write(void *opaque, hwaddr addr,
84
}
85
86
default:
87
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
88
+ qemu_log_mask(LOG_GUEST_ERROR,
89
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
90
+ __func__, addr);
91
break;
92
}
93
}
94
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_ssp_read(void *opaque, hwaddr addr,
95
case SSACD:
96
return s->ssacd;
97
default:
98
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
99
+ qemu_log_mask(LOG_GUEST_ERROR,
100
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
101
+ __func__, addr);
102
break;
103
}
104
return 0;
105
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_ssp_write(void *opaque, hwaddr addr,
106
break;
107
108
default:
109
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
110
+ qemu_log_mask(LOG_GUEST_ERROR,
111
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
112
+ __func__, addr);
113
break;
114
}
115
}
116
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_rtc_read(void *opaque, hwaddr addr,
117
else
118
return s->last_swcr;
119
default:
120
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
121
+ qemu_log_mask(LOG_GUEST_ERROR,
122
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
123
+ __func__, addr);
124
break;
125
}
126
return 0;
127
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_rtc_write(void *opaque, hwaddr addr,
128
break;
129
130
default:
131
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
132
+ qemu_log_mask(LOG_GUEST_ERROR,
133
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
134
+ __func__, addr);
135
}
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2c_read(void *opaque, hwaddr addr,
139
s->ibmr = 0;
140
return s->ibmr;
141
default:
142
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
143
+ qemu_log_mask(LOG_GUEST_ERROR,
144
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
145
+ __func__, addr);
146
break;
147
}
148
return 0;
149
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2c_write(void *opaque, hwaddr addr,
150
break;
151
152
default:
153
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
154
+ qemu_log_mask(LOG_GUEST_ERROR,
155
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
156
+ __func__, addr);
157
}
158
}
159
160
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2s_read(void *opaque, hwaddr addr,
161
}
162
return 0;
153
return 0;
163
default:
154
}
164
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
155
- if (regime_translation_big_endian(env, mmu_idx)) {
165
+ qemu_log_mask(LOG_GUEST_ERROR,
156
+ addr = ptw->out_phys;
166
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
157
+ attrs.secure = ptw->out_secure;
167
+ __func__, addr);
158
+ as = arm_addressspace(cs, attrs);
168
break;
159
+ if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
169
}
160
data = address_space_ldq_be(as, addr, attrs, &result);
170
return 0;
161
} else {
171
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2s_write(void *opaque, hwaddr addr,
162
data = address_space_ldq_le(as, addr, attrs, &result);
172
}
163
@@ -XXX,XX +XXX,XX @@ static int simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
173
break;
164
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
174
default:
165
}
175
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
166
176
+ qemu_log_mask(LOG_GUEST_ERROR,
167
-static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
177
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
168
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
178
+ __func__, addr);
169
- bool is_secure, GetPhysAddrResult *result,
179
}
170
- ARMMMUFaultInfo *fi)
180
}
171
+static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
181
172
+ uint32_t address, MMUAccessType access_type,
182
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_fir_read(void *opaque, hwaddr addr,
173
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
183
case ICFOR:
174
{
184
return s->rx_len;
175
int level = 1;
185
default:
176
uint32_t table;
186
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
177
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
187
+ qemu_log_mask(LOG_GUEST_ERROR,
178
188
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
179
/* Pagetable walk. */
189
+ __func__, addr);
180
/* Lookup l1 descriptor. */
190
break;
181
- if (!get_level1_table_address(env, mmu_idx, &table, address)) {
191
}
182
+ if (!get_level1_table_address(env, ptw->in_mmu_idx, &table, address)) {
192
return 0;
183
/* Section translation fault if page walk is disabled by PD0 or PD1 */
193
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_fir_write(void *opaque, hwaddr addr,
184
fi->type = ARMFault_Translation;
194
case ICFOR:
185
goto do_fault;
195
break;
186
}
196
default:
187
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
197
- printf("%s: Bad register " REG_FMT "\n", __func__, addr);
188
+ desc = arm_ldl_ptw(env, ptw, table, fi);
198
+ qemu_log_mask(LOG_GUEST_ERROR,
189
if (fi->type != ARMFault_None) {
199
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
190
goto do_fault;
200
+ __func__, addr);
191
}
201
}
192
type = (desc & 3);
202
}
193
domain = (desc >> 5) & 0x0f;
194
- if (regime_el(env, mmu_idx) == 1) {
195
+ if (regime_el(env, ptw->in_mmu_idx) == 1) {
196
dacr = env->cp15.dacr_ns;
197
} else {
198
dacr = env->cp15.dacr_s;
199
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
200
/* Fine pagetable. */
201
table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
202
}
203
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
204
+ desc = arm_ldl_ptw(env, ptw, table, fi);
205
if (fi->type != ARMFault_None) {
206
goto do_fault;
207
}
208
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
209
g_assert_not_reached();
210
}
211
}
212
- result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
213
+ result->f.prot = ap_to_rw_prot(env, ptw->in_mmu_idx, ap, domain_prot);
214
result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
215
if (!(result->f.prot & (1 << access_type))) {
216
/* Access permission fault. */
217
@@ -XXX,XX +XXX,XX @@ do_fault:
218
return true;
219
}
220
221
-static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
222
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
223
- bool is_secure, GetPhysAddrResult *result,
224
- ARMMMUFaultInfo *fi)
225
+static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
226
+ uint32_t address, MMUAccessType access_type,
227
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
228
{
229
ARMCPU *cpu = env_archcpu(env);
230
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
231
int level = 1;
232
uint32_t table;
233
uint32_t desc;
234
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
235
fi->type = ARMFault_Translation;
236
goto do_fault;
237
}
238
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
239
+ desc = arm_ldl_ptw(env, ptw, table, fi);
240
if (fi->type != ARMFault_None) {
241
goto do_fault;
242
}
243
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
244
ns = extract32(desc, 3, 1);
245
/* Lookup l2 entry. */
246
table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
247
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
248
+ desc = arm_ldl_ptw(env, ptw, table, fi);
249
if (fi->type != ARMFault_None) {
250
goto do_fault;
251
}
252
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
253
* the WnR bit is never set (the caller must do this).
254
*
255
* @env: CPUARMState
256
+ * @ptw: Current and next stage parameters for the walk.
257
* @address: virtual address to get physical address for
258
* @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
259
- * @mmu_idx: MMU index indicating required translation regime
260
- * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page
261
- * table walk), must be true if this is stage 2 of a stage 1+2
262
+ * @s1_is_el0: if @ptw->in_mmu_idx is ARMMMUIdx_Stage2
263
+ * (so this is a stage 2 page table walk),
264
+ * must be true if this is stage 2 of a stage 1+2
265
* walk for an EL0 access. If @mmu_idx is anything else,
266
* @s1_is_el0 is ignored.
267
* @result: set on translation success,
268
* @fi: set to fault info if the translation fails
269
*/
270
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
271
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
272
- bool is_secure, bool s1_is_el0,
273
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
274
+ uint64_t address,
275
+ MMUAccessType access_type, bool s1_is_el0,
276
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
277
{
278
ARMCPU *cpu = env_archcpu(env);
279
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
280
+ bool is_secure = ptw->in_secure;
281
/* Read an LPAE long-descriptor translation table. */
282
ARMFaultType fault_type = ARMFault_Translation;
283
uint32_t level;
284
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
285
descaddr |= (address >> (stride * (4 - level))) & indexmask;
286
descaddr &= ~7ULL;
287
nstable = extract32(tableattrs, 4, 1);
288
- descriptor = arm_ldq_ptw(env, descaddr, !nstable, mmu_idx, fi);
289
+ ptw->in_secure = !nstable;
290
+ descriptor = arm_ldq_ptw(env, ptw, descaddr, fi);
291
if (fi->type != ARMFault_None) {
292
goto do_fault;
293
}
294
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
295
ARMMMUFaultInfo *fi)
296
{
297
ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
298
+ S1Translate ptw;
299
300
if (mmu_idx != s1_mmu_idx) {
301
/*
302
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
303
int ret;
304
bool ipa_secure, s2walk_secure;
305
ARMCacheAttrs cacheattrs1;
306
- ARMMMUIdx s2_mmu_idx;
307
bool is_el0;
308
uint64_t hcr;
309
310
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
311
s2walk_secure = false;
312
}
313
314
- s2_mmu_idx = (s2walk_secure
315
- ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
316
+ ptw.in_mmu_idx =
317
+ s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
318
+ ptw.in_secure = s2walk_secure;
319
is_el0 = mmu_idx == ARMMMUIdx_E10_0;
320
321
/*
322
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
323
cacheattrs1 = result->cacheattrs;
324
memset(result, 0, sizeof(*result));
325
326
- ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
327
- s2walk_secure, is_el0, result, fi);
328
+ ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
329
+ is_el0, result, fi);
330
fi->s2addr = ipa;
331
332
/* Combine the S1 and S2 perms. */
333
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
334
return get_phys_addr_disabled(env, address, access_type, mmu_idx,
335
is_secure, result, fi);
336
}
337
+
338
+ ptw.in_mmu_idx = mmu_idx;
339
+ ptw.in_secure = is_secure;
340
+
341
if (regime_using_lpae_format(env, mmu_idx)) {
342
- return get_phys_addr_lpae(env, address, access_type, mmu_idx,
343
- is_secure, false, result, fi);
344
+ return get_phys_addr_lpae(env, &ptw, address, access_type, false,
345
+ result, fi);
346
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
347
- return get_phys_addr_v6(env, address, access_type, mmu_idx,
348
- is_secure, result, fi);
349
+ return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
350
} else {
351
- return get_phys_addr_v5(env, address, access_type, mmu_idx,
352
- is_secure, result, fi);
353
+ return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
354
}
355
}
203
356
204
--
357
--
205
2.20.1
358
2.25.1
206
207
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
hw_error() calls exit(). This is a bit overkill when we can log
3
Before using softmmu page tables for the ptw, plumb down
4
the accesses as unimplemented or guest error.
4
a debug parameter so that we can query page table entries
5
from gdbstub without modifying cpu state.
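(Sketch of the intended gdbstub-side usage, condensed from the hunk below; the point of in_debug is that a debug query must not disturb TLBs or other cpu state.)

    S1Translate ptw = {
        .in_mmu_idx = arm_mmu_idx(env),
        .in_secure = arm_is_secure(env),
        .in_debug = true,               /* query only, no side effects */
    };
    GetPhysAddrResult res = {};
    ARMMMUFaultInfo fi = {};
    bool fail = get_phys_addr_with_struct(env, &ptw, addr, MMU_DATA_LOAD,
                                          &res, &fi);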
5
6
6
When fuzzing the devices, we don't want the whole process to
7
exit. Replace some hw_error() calls by qemu_log_mask()
8
(missed in commit 5a0001ec7e).
9
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20200525114123.21317-2-f4bug@amsat.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
hw/input/pxa2xx_keypad.c | 10 +++++++---
12
target/arm/ptw.c | 55 ++++++++++++++++++++++++++++++++----------------
16
1 file changed, 7 insertions(+), 3 deletions(-)
13
1 file changed, 37 insertions(+), 18 deletions(-)
17
14
18
diff --git a/hw/input/pxa2xx_keypad.c b/hw/input/pxa2xx_keypad.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/input/pxa2xx_keypad.c
17
--- a/target/arm/ptw.c
21
+++ b/hw/input/pxa2xx_keypad.c
18
+++ b/target/arm/ptw.c
22
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
23
*/
20
typedef struct S1Translate {
24
21
ARMMMUIdx in_mmu_idx;
25
#include "qemu/osdep.h"
22
bool in_secure;
26
-#include "hw/hw.h"
23
+ bool in_debug;
27
+#include "qemu/log.h"
24
bool out_secure;
28
#include "hw/irq.h"
25
hwaddr out_phys;
29
#include "migration/vmstate.h"
26
} S1Translate;
30
#include "hw/arm/pxa.h"
27
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
31
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_keypad_read(void *opaque, hwaddr offset,
28
S1Translate s2ptw = {
32
return s->kpkdi;
29
.in_mmu_idx = s2_mmu_idx,
33
break;
30
.in_secure = is_secure,
34
default:
31
+ .in_debug = ptw->in_debug,
35
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
32
};
36
+ qemu_log_mask(LOG_GUEST_ERROR,
33
uint64_t hcr;
37
+ "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
34
int ret;
38
+ __func__, offset);
35
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
36
return 0;
37
}
38
39
-bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
40
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
41
- bool is_secure, GetPhysAddrResult *result,
42
- ARMMMUFaultInfo *fi)
43
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
44
+ target_ulong address,
45
+ MMUAccessType access_type,
46
+ GetPhysAddrResult *result,
47
+ ARMMMUFaultInfo *fi)
48
{
49
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
50
ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
51
- S1Translate ptw;
52
+ bool is_secure = ptw->in_secure;
53
54
if (mmu_idx != s1_mmu_idx) {
55
/*
56
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
57
bool is_el0;
58
uint64_t hcr;
59
60
- ret = get_phys_addr_with_secure(env, address, access_type,
61
- s1_mmu_idx, is_secure, result, fi);
62
+ ptw->in_mmu_idx = s1_mmu_idx;
63
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type,
64
+ result, fi);
65
66
/* If S1 fails or S2 is disabled, return early. */
67
if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
68
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
69
s2walk_secure = false;
70
}
71
72
- ptw.in_mmu_idx =
73
+ ptw->in_mmu_idx =
74
s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
75
- ptw.in_secure = s2walk_secure;
76
+ ptw->in_secure = s2walk_secure;
77
is_el0 = mmu_idx == ARMMMUIdx_E10_0;
78
79
/*
80
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
81
cacheattrs1 = result->cacheattrs;
82
memset(result, 0, sizeof(*result));
83
84
- ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
85
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
86
is_el0, result, fi);
87
fi->s2addr = ipa;
88
89
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
90
is_secure, result, fi);
39
}
91
}
40
92
41
return 0;
93
- ptw.in_mmu_idx = mmu_idx;
42
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_keypad_write(void *opaque, hwaddr offset,
94
- ptw.in_secure = is_secure;
43
break;
95
-
44
96
if (regime_using_lpae_format(env, mmu_idx)) {
45
default:
97
- return get_phys_addr_lpae(env, &ptw, address, access_type, false,
46
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
98
+ return get_phys_addr_lpae(env, ptw, address, access_type, false,
47
+ qemu_log_mask(LOG_GUEST_ERROR,
99
result, fi);
48
+ "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
100
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
49
+ __func__, offset);
101
- return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
102
+ return get_phys_addr_v6(env, ptw, address, access_type, result, fi);
103
} else {
104
- return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
105
+ return get_phys_addr_v5(env, ptw, address, access_type, result, fi);
50
}
106
}
51
}
107
}
52
108
109
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
110
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
111
+ bool is_secure, GetPhysAddrResult *result,
112
+ ARMMMUFaultInfo *fi)
113
+{
114
+ S1Translate ptw = {
115
+ .in_mmu_idx = mmu_idx,
116
+ .in_secure = is_secure,
117
+ };
118
+ return get_phys_addr_with_struct(env, &ptw, address, access_type,
119
+ result, fi);
120
+}
121
+
122
bool get_phys_addr(CPUARMState *env, target_ulong address,
123
MMUAccessType access_type, ARMMMUIdx mmu_idx,
124
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
125
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
126
{
127
ARMCPU *cpu = ARM_CPU(cs);
128
CPUARMState *env = &cpu->env;
129
+ S1Translate ptw = {
130
+ .in_mmu_idx = arm_mmu_idx(env),
131
+ .in_secure = arm_is_secure(env),
132
+ .in_debug = true,
133
+ };
134
GetPhysAddrResult res = {};
135
ARMMMUFaultInfo fi = {};
136
- ARMMMUIdx mmu_idx = arm_mmu_idx(env);
137
bool ret;
138
139
- ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
140
+ ret = get_phys_addr_with_struct(env, &ptw, addr, MMU_DATA_LOAD, &res, &fi);
141
*attrs = res.f.attrs;
142
143
if (ret) {
53
--
144
--
54
2.20.1
145
2.25.1
55
56
diff view generated by jsdifflib
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add BCM2835 SOC MPHI (Message-based Parallel Host Interface)
3
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.
4
emulation. It is very basic, only providing the FIQ interrupt
5
needed to allow the dwc-otg USB host controller driver in the
6
Raspbian kernel to function.
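(Re the "Hoist this test" patch above, the shape of the change is roughly: decide the descriptor endianness once in S1_ptw_translate, cache it in the walk state, and have the loaders consult the cached flag. A sketch, not the literal hunks:)

    /* In S1_ptw_translate(): */
    ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);

    /* In arm_ldl_ptw(): */
    data = ptw->out_be ? address_space_ldl_be(as, addr, attrs, &result)
                       : address_space_ldl_le(as, addr, attrs, &result);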
7
4
8
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
9
Acked-by: Philippe Mathieu-Daude <f4bug@amsat.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20200520235349.21215-2-pauldzim@gmail.com
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
9
---
14
include/hw/arm/bcm2835_peripherals.h | 2 +
10
target/arm/ptw.c | 6 ++++--
15
include/hw/misc/bcm2835_mphi.h | 44 ++++++
11
1 file changed, 4 insertions(+), 2 deletions(-)
16
hw/arm/bcm2835_peripherals.c | 17 +++
17
hw/misc/bcm2835_mphi.c | 191 +++++++++++++++++++++++++++
18
hw/misc/Makefile.objs | 1 +
19
5 files changed, 255 insertions(+)
20
create mode 100644 include/hw/misc/bcm2835_mphi.h
21
create mode 100644 hw/misc/bcm2835_mphi.c
22
12
23
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
24
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/bcm2835_peripherals.h
15
--- a/target/arm/ptw.c
26
+++ b/include/hw/arm/bcm2835_peripherals.h
16
+++ b/target/arm/ptw.c
27
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
28
#include "hw/misc/bcm2835_property.h"
18
bool in_secure;
29
#include "hw/misc/bcm2835_rng.h"
19
bool in_debug;
30
#include "hw/misc/bcm2835_mbox.h"
20
bool out_secure;
31
+#include "hw/misc/bcm2835_mphi.h"
21
+ bool out_be;
32
#include "hw/misc/bcm2835_thermal.h"
22
hwaddr out_phys;
33
#include "hw/sd/sdhci.h"
23
} S1Translate;
34
#include "hw/sd/bcm2835_sdhost.h"
24
35
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
25
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
36
qemu_irq irq, fiq;
26
37
27
ptw->out_secure = is_secure;
38
BCM2835SystemTimerState systmr;
28
ptw->out_phys = addr;
39
+ BCM2835MphiState mphi;
29
+ ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
40
UnimplementedDeviceState armtmr;
30
return true;
41
UnimplementedDeviceState cprman;
42
UnimplementedDeviceState a2w;
43
diff --git a/include/hw/misc/bcm2835_mphi.h b/include/hw/misc/bcm2835_mphi.h
44
new file mode 100644
45
index XXXXXXX..XXXXXXX
46
--- /dev/null
47
+++ b/include/hw/misc/bcm2835_mphi.h
48
@@ -XXX,XX +XXX,XX @@
49
+/*
50
+ * BCM2835 SOC MPHI state definitions
51
+ *
52
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
53
+ *
54
+ * This program is free software; you can redistribute it and/or modify
55
+ * it under the terms of the GNU General Public License as published by
56
+ * the Free Software Foundation; either version 2 of the License, or
57
+ * (at your option) any later version.
58
+ *
59
+ * This program is distributed in the hope that it will be useful,
60
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
61
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
62
+ * GNU General Public License for more details.
63
+ */
64
+
65
+#ifndef HW_MISC_BCM2835_MPHI_H
66
+#define HW_MISC_BCM2835_MPHI_H
67
+
68
+#include "hw/irq.h"
69
+#include "hw/sysbus.h"
70
+
71
+#define MPHI_MMIO_SIZE 0x1000
72
+
73
+typedef struct BCM2835MphiState BCM2835MphiState;
74
+
75
+struct BCM2835MphiState {
76
+ SysBusDevice parent_obj;
77
+ qemu_irq irq;
78
+ MemoryRegion iomem;
79
+
80
+ uint32_t outdda;
81
+ uint32_t outddb;
82
+ uint32_t ctrl;
83
+ uint32_t intstat;
84
+ uint32_t swirq;
85
+};
86
+
87
+#define TYPE_BCM2835_MPHI "bcm2835-mphi"
88
+
89
+#define BCM2835_MPHI(obj) \
90
+ OBJECT_CHECK(BCM2835MphiState, (obj), TYPE_BCM2835_MPHI)
91
+
92
+#endif
93
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/hw/arm/bcm2835_peripherals.c
96
+++ b/hw/arm/bcm2835_peripherals.c
97
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
98
OBJECT(&s->sdhci.sdbus));
99
object_property_add_const_link(OBJECT(&s->gpio), "sdbus-sdhost",
100
OBJECT(&s->sdhost.sdbus));
101
+
102
+ /* Mphi */
103
+ sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
104
+ TYPE_BCM2835_MPHI);
105
}
31
}
106
32
107
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
33
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
108
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
34
addr = ptw->out_phys;
109
35
attrs.secure = ptw->out_secure;
110
object_property_add_alias(OBJECT(s), "sd-bus", OBJECT(&s->gpio), "sd-bus");
36
as = arm_addressspace(cs, attrs);
111
37
- if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
112
+ /* Mphi */
38
+ if (ptw->out_be) {
113
+ object_property_set_bool(OBJECT(&s->mphi), true, "realized", &err);
39
data = address_space_ldl_be(as, addr, attrs, &result);
114
+ if (err) {
40
} else {
115
+ error_propagate(errp, err);
41
data = address_space_ldl_le(as, addr, attrs, &result);
116
+ return;
42
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
117
+ }
43
addr = ptw->out_phys;
118
+
44
attrs.secure = ptw->out_secure;
119
+ memory_region_add_subregion(&s->peri_mr, MPHI_OFFSET,
45
as = arm_addressspace(cs, attrs);
120
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->mphi), 0));
46
- if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
121
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->mphi), 0,
47
+ if (ptw->out_be) {
122
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
48
data = address_space_ldq_be(as, addr, attrs, &result);
123
+ INTERRUPT_HOSTPORT));
49
} else {
124
+
50
data = address_space_ldq_le(as, addr, attrs, &result);
125
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
126
create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
127
create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
128
diff --git a/hw/misc/bcm2835_mphi.c b/hw/misc/bcm2835_mphi.c
129
new file mode 100644
130
index XXXXXXX..XXXXXXX
131
--- /dev/null
132
+++ b/hw/misc/bcm2835_mphi.c
133
@@ -XXX,XX +XXX,XX @@
134
+/*
135
+ * BCM2835 SOC MPHI emulation
136
+ *
137
+ * Very basic emulation, only providing the FIQ interrupt needed to
138
+ * allow the dwc-otg USB host controller driver in the Raspbian kernel
139
+ * to function.
140
+ *
141
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
142
+ *
143
+ * This program is free software; you can redistribute it and/or modify
144
+ * it under the terms of the GNU General Public License as published by
145
+ * the Free Software Foundation; either version 2 of the License, or
146
+ * (at your option) any later version.
147
+ *
148
+ * This program is distributed in the hope that it will be useful,
149
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
150
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
151
+ * GNU General Public License for more details.
152
+ */
153
+
154
+#include "qemu/osdep.h"
155
+#include "qapi/error.h"
156
+#include "hw/misc/bcm2835_mphi.h"
157
+#include "migration/vmstate.h"
158
+#include "qemu/error-report.h"
159
+#include "qemu/log.h"
160
+#include "qemu/main-loop.h"
161
+
162
+static inline void mphi_raise_irq(BCM2835MphiState *s)
163
+{
164
+ qemu_set_irq(s->irq, 1);
165
+}
166
+
167
+static inline void mphi_lower_irq(BCM2835MphiState *s)
168
+{
169
+ qemu_set_irq(s->irq, 0);
170
+}
171
+
172
+static uint64_t mphi_reg_read(void *ptr, hwaddr addr, unsigned size)
173
+{
174
+ BCM2835MphiState *s = ptr;
175
+ uint32_t val = 0;
176
+
177
+ switch (addr) {
178
+ case 0x28: /* outdda */
179
+ val = s->outdda;
180
+ break;
181
+ case 0x2c: /* outddb */
182
+ val = s->outddb;
183
+ break;
184
+ case 0x4c: /* ctrl */
185
+ val = s->ctrl;
186
+ val |= 1 << 17;
187
+ break;
188
+ case 0x50: /* intstat */
189
+ val = s->intstat;
190
+ break;
191
+ case 0x1f0: /* swirq_set */
192
+ val = s->swirq;
193
+ break;
194
+ case 0x1f4: /* swirq_clr */
195
+ val = s->swirq;
196
+ break;
197
+ default:
198
+ qemu_log_mask(LOG_UNIMP, "read from unknown register");
199
+ break;
200
+ }
201
+
202
+ return val;
203
+}
204
+
205
+static void mphi_reg_write(void *ptr, hwaddr addr, uint64_t val, unsigned size)
206
+{
207
+ BCM2835MphiState *s = ptr;
208
+ int do_irq = 0;
209
+
210
+ switch (addr) {
211
+ case 0x28: /* outdda */
212
+ s->outdda = val;
213
+ break;
214
+ case 0x2c: /* outddb */
215
+ s->outddb = val;
216
+ if (val & (1 << 29)) {
217
+ do_irq = 1;
218
+ }
219
+ break;
220
+ case 0x4c: /* ctrl */
221
+ s->ctrl = val;
222
+ if (val & (1 << 16)) {
223
+ do_irq = -1;
224
+ }
225
+ break;
226
+ case 0x50: /* intstat */
227
+ s->intstat = val;
228
+ if (val & ((1 << 16) | (1 << 29))) {
229
+ do_irq = -1;
230
+ }
231
+ break;
232
+ case 0x1f0: /* swirq_set */
233
+ s->swirq |= val;
234
+ do_irq = 1;
235
+ break;
236
+ case 0x1f4: /* swirq_clr */
237
+ s->swirq &= ~val;
238
+ do_irq = -1;
239
+ break;
240
+ default:
241
+ qemu_log_mask(LOG_UNIMP, "write to unknown register");
242
+ return;
243
+ }
244
+
245
+ if (do_irq > 0) {
246
+ mphi_raise_irq(s);
247
+ } else if (do_irq < 0) {
248
+ mphi_lower_irq(s);
249
+ }
250
+}
251
+
252
+static const MemoryRegionOps mphi_mmio_ops = {
253
+ .read = mphi_reg_read,
254
+ .write = mphi_reg_write,
255
+ .impl.min_access_size = 4,
256
+ .impl.max_access_size = 4,
257
+ .endianness = DEVICE_LITTLE_ENDIAN,
258
+};
259
+
260
+static void mphi_reset(DeviceState *dev)
261
+{
262
+ BCM2835MphiState *s = BCM2835_MPHI(dev);
263
+
264
+ s->outdda = 0;
265
+ s->outddb = 0;
266
+ s->ctrl = 0;
267
+ s->intstat = 0;
268
+ s->swirq = 0;
269
+}
270
+
271
+static void mphi_realize(DeviceState *dev, Error **errp)
272
+{
273
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
274
+ BCM2835MphiState *s = BCM2835_MPHI(dev);
275
+
276
+ sysbus_init_irq(sbd, &s->irq);
277
+}
278
+
279
+static void mphi_init(Object *obj)
280
+{
281
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
282
+ BCM2835MphiState *s = BCM2835_MPHI(obj);
283
+
284
+ memory_region_init_io(&s->iomem, obj, &mphi_mmio_ops, s, "mphi", MPHI_MMIO_SIZE);
285
+ sysbus_init_mmio(sbd, &s->iomem);
286
+}
287
+
288
+const VMStateDescription vmstate_mphi_state = {
289
+ .name = "mphi",
290
+ .version_id = 1,
291
+ .minimum_version_id = 1,
292
+ .fields = (VMStateField[]) {
293
+ VMSTATE_UINT32(outdda, BCM2835MphiState),
294
+ VMSTATE_UINT32(outddb, BCM2835MphiState),
295
+ VMSTATE_UINT32(ctrl, BCM2835MphiState),
296
+ VMSTATE_UINT32(intstat, BCM2835MphiState),
297
+ VMSTATE_UINT32(swirq, BCM2835MphiState),
298
+ VMSTATE_END_OF_LIST()
299
+ }
300
+};
301
+
302
+static void mphi_class_init(ObjectClass *klass, void *data)
303
+{
304
+ DeviceClass *dc = DEVICE_CLASS(klass);
305
+
306
+ dc->realize = mphi_realize;
307
+ dc->reset = mphi_reset;
308
+ dc->vmsd = &vmstate_mphi_state;
309
+}
310
+
311
+static const TypeInfo bcm2835_mphi_type_info = {
312
+ .name = TYPE_BCM2835_MPHI,
313
+ .parent = TYPE_SYS_BUS_DEVICE,
314
+ .instance_size = sizeof(BCM2835MphiState),
315
+ .instance_init = mphi_init,
316
+ .class_init = mphi_class_init,
317
+};
318
+
319
+static void bcm2835_mphi_register_types(void)
320
+{
321
+ type_register_static(&bcm2835_mphi_type_info);
322
+}
323
+
324
+type_init(bcm2835_mphi_register_types)
325
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
326
index XXXXXXX..XXXXXXX 100644
327
--- a/hw/misc/Makefile.objs
328
+++ b/hw/misc/Makefile.objs
329
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_OMAP) += omap_l4.o
330
common-obj-$(CONFIG_OMAP) += omap_sdrc.o
331
common-obj-$(CONFIG_OMAP) += omap_tap.o
332
common-obj-$(CONFIG_RASPI) += bcm2835_mbox.o
333
+common-obj-$(CONFIG_RASPI) += bcm2835_mphi.o
334
common-obj-$(CONFIG_RASPI) += bcm2835_property.o
335
common-obj-$(CONFIG_RASPI) += bcm2835_rng.o
336
common-obj-$(CONFIG_RASPI) += bcm2835_thermal.o
337
--
51
--
338
2.20.1
52
2.25.1
339
340
1
Convert the VSHLL and VMOVL insns from the 2-reg-shift group
1
From: Richard Henderson <richard.henderson@linaro.org>
2
to decodetree. Since the loop always has two passes, we unroll
3
it to avoid the awkward reassignment of one TCGv to another.
4
2
3
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
4
arm_ldq_ptw. Use probe_access_full to find the host address,
5
and if so use a host load. If the probe fails, we've got our
6
fault info already. On the off chance that page tables are not
7
in RAM, continue to use the address_space_ld* functions.
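(The fast path has roughly the following shape. This is a condensed sketch, assuming the eight-argument probe_access_full() of this series and the ldl_le_p()/ldl_be_p() host loaders; variable names are taken from the surrounding walker code and the slow path keeps address_space_ldl_*() for descriptors that are not backed by RAM.)

    void *host;
    CPUTLBEntryFull *full;
    int flags = probe_access_full(env, addr, MMU_DATA_LOAD,
                                  arm_to_core_mmu_idx(s2_mmu_idx),
                                  true, &host, &full, 0);
    if (flags & TLB_INVALID_MASK) {
        return false;               /* probe failed: take the fault path */
    }
    if (host) {
        data = ptw->out_be ? ldl_be_p(host) : ldl_le_p(host);
    } else {
        /* Not in RAM: fall back to address_space_ldl_{be,le}(). */
    }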
8
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-8-peter.maydell@linaro.org
8
---
13
---
9
target/arm/neon-dp.decode | 16 +++++++
14
target/arm/cpu.h | 5 +
10
target/arm/translate-neon.inc.c | 81 +++++++++++++++++++++++++++++++++
15
target/arm/ptw.c | 196 +++++++++++++++++++++++++---------------
11
target/arm/translate.c | 46 +------------------
16
target/arm/tlb_helper.c | 17 +++-
12
3 files changed, 99 insertions(+), 44 deletions(-)
17
3 files changed, 144 insertions(+), 74 deletions(-)
13
18
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
21
--- a/target/arm/cpu.h
17
+++ b/target/arm/neon-dp.decode
22
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
23
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
24
target_ulong flags2;
20
shift=%neon_rshift_i3
25
} CPUARMTBFlags;
21
26
22
+# Long left shifts: again Q is part of opcode decode
27
+typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
23
+@2reg_shll_s .... ... . . . 1 shift:5 .... .... 0 . . . .... \
28
+
24
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0
29
typedef struct CPUArchState {
25
+@2reg_shll_h .... ... . . . 01 shift:4 .... .... 0 . . . .... \
30
/* Regs for current mode. */
26
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0
31
uint32_t regs[16];
27
+@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
28
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
33
struct CPUBreakpoint *cpu_breakpoint[16];
29
+
34
struct CPUWatchpoint *cpu_watchpoint[16];
30
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
35
31
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
36
+ /* Optional fault info across tlb lookup. */
32
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
37
+ ARMMMUFaultInfo *tlb_fi;
33
@@ -XXX,XX +XXX,XX @@ VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
38
+
34
VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
39
/* Fields up to this point are cleared by a CPU reset */
35
VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
40
struct {} end_reset_fields;
36
VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
41
37
+
42
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
38
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
39
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
40
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
41
+
42
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
43
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
44
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
45
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
46
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate-neon.inc.c
44
--- a/target/arm/ptw.c
48
+++ b/target/arm/translate-neon.inc.c
45
+++ b/target/arm/ptw.c
49
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
46
@@ -XXX,XX +XXX,XX @@
50
DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
47
#include "qemu/osdep.h"
51
DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
48
#include "qemu/log.h"
52
DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
49
#include "qemu/range.h"
53
+
50
+#include "exec/exec-all.h"
54
+static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
51
#include "cpu.h"
55
+ NeonGenWidenFn *widenfn, bool u)
52
#include "internals.h"
56
+{
53
#include "idau.h"
57
+ TCGv_i64 tmp;
54
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
58
+ TCGv_i32 rm0, rm1;
55
bool out_secure;
59
+ uint64_t widen_mask = 0;
56
bool out_be;
60
+
57
hwaddr out_phys;
61
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
58
+ void *out_host;
62
+ return false;
59
} S1Translate;
60
61
static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
62
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
63
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
64
}
65
66
-static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
67
+static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
68
{
69
/*
70
* For an S1 page table walk, the stage 1 attributes are always
71
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
72
* With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
73
* when cacheattrs.attrs bit [2] is 0.
74
*/
75
- assert(cacheattrs.is_s2_format);
76
if (hcr & HCR_FWB) {
77
- return (cacheattrs.attrs & 0x4) == 0;
78
+ return (attrs & 0x4) == 0;
79
} else {
80
- return (cacheattrs.attrs & 0xc) == 0;
81
+ return (attrs & 0xc) == 0;
82
}
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
86
hwaddr addr, ARMMMUFaultInfo *fi)
87
{
88
bool is_secure = ptw->in_secure;
89
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
90
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
91
+ bool s2_phys = false;
92
+ uint8_t pte_attrs;
93
+ bool pte_secure;
94
95
- if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
96
- !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
97
- GetPhysAddrResult s2 = {};
98
- S1Translate s2ptw = {
99
- .in_mmu_idx = s2_mmu_idx,
100
- .in_secure = is_secure,
101
- .in_debug = ptw->in_debug,
102
- };
103
- uint64_t hcr;
104
- int ret;
105
+ if (!arm_mmu_idx_is_stage1_of_2(mmu_idx)
106
+ || regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
107
+ s2_mmu_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
108
+ s2_phys = true;
63
+ }
109
+ }
64
+
110
65
+ /* UNDEF accesses to D16-D31 if they don't exist. */
111
- ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
66
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
112
- false, &s2, fi);
67
+ ((a->vd | a->vm) & 0x10)) {
113
- if (ret) {
68
+ return false;
114
- assert(fi->type != ARMFault_None);
115
- fi->s2addr = addr;
116
- fi->stage2 = true;
117
- fi->s1ptw = true;
118
- fi->s1ns = !is_secure;
119
- return false;
120
+ if (unlikely(ptw->in_debug)) {
121
+ /*
122
+ * From gdbstub, do not use softmmu so that we don't modify the
123
+ * state of the cpu at all, including softmmu tlb contents.
124
+ */
125
+ if (s2_phys) {
126
+ ptw->out_phys = addr;
127
+ pte_attrs = 0;
128
+ pte_secure = is_secure;
129
+ } else {
130
+ S1Translate s2ptw = {
131
+ .in_mmu_idx = s2_mmu_idx,
132
+ .in_secure = is_secure,
133
+ .in_debug = true,
134
+ };
135
+ GetPhysAddrResult s2 = { };
136
+ if (!get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
137
+ false, &s2, fi)) {
138
+ goto fail;
139
+ }
140
+ ptw->out_phys = s2.f.phys_addr;
141
+ pte_attrs = s2.cacheattrs.attrs;
142
+ pte_secure = s2.f.attrs.secure;
143
}
144
+ ptw->out_host = NULL;
145
+ } else {
146
+ CPUTLBEntryFull *full;
147
+ int flags;
148
149
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
150
- if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
151
+ env->tlb_fi = fi;
152
+ flags = probe_access_full(env, addr, MMU_DATA_LOAD,
153
+ arm_to_core_mmu_idx(s2_mmu_idx),
154
+ true, &ptw->out_host, &full, 0);
155
+ env->tlb_fi = NULL;
156
+
157
+ if (unlikely(flags & TLB_INVALID_MASK)) {
158
+ goto fail;
159
+ }
160
+ ptw->out_phys = full->phys_addr;
161
+ pte_attrs = full->pte_attrs;
162
+ pte_secure = full->attrs.secure;
69
+ }
163
+ }
70
+
164
+
71
+ if (a->vd & 1) {
165
+ if (!s2_phys) {
72
+ return false;
166
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
167
+
168
+ if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
169
/*
170
* PTW set and S1 walk touched S2 Device memory:
171
* generate Permission fault.
172
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
173
fi->s1ns = !is_secure;
174
return false;
175
}
176
-
177
- if (arm_is_secure_below_el3(env)) {
178
- /* Check if page table walk is to secure or non-secure PA space. */
179
- if (is_secure) {
180
- is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
181
- } else {
182
- is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
183
- }
184
- } else {
185
- assert(!is_secure);
186
- }
187
-
188
- addr = s2.f.phys_addr;
189
}
190
191
- ptw->out_secure = is_secure;
192
- ptw->out_phys = addr;
193
- ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
194
+ /* Check if page table walk is to secure or non-secure PA space. */
195
+ ptw->out_secure = (is_secure
196
+ && !(pte_secure
197
+ ? env->cp15.vstcr_el2 & VSTCR_SW
198
+ : env->cp15.vtcr_el2 & VTCR_NSW));
199
+ ptw->out_be = regime_translation_big_endian(env, mmu_idx);
200
return true;
201
+
202
+ fail:
203
+ assert(fi->type != ARMFault_None);
204
+ fi->s2addr = addr;
205
+ fi->stage2 = true;
206
+ fi->s1ptw = true;
207
+ fi->s1ns = !is_secure;
208
+ return false;
209
}
210
211
/* All loads done in the course of a page table walk go through here. */
212
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
213
ARMMMUFaultInfo *fi)
214
{
215
CPUState *cs = env_cpu(env);
216
- MemTxAttrs attrs = {};
217
- MemTxResult result = MEMTX_OK;
218
- AddressSpace *as;
219
uint32_t data;
220
221
if (!S1_ptw_translate(env, ptw, addr, fi)) {
222
+ /* Failure. */
223
+ assert(fi->s1ptw);
224
return 0;
225
}
226
- addr = ptw->out_phys;
227
- attrs.secure = ptw->out_secure;
228
- as = arm_addressspace(cs, attrs);
229
- if (ptw->out_be) {
230
- data = address_space_ldl_be(as, addr, attrs, &result);
231
+
232
+ if (likely(ptw->out_host)) {
233
+ /* Page tables are in RAM, and we have the host address. */
234
+ if (ptw->out_be) {
235
+ data = ldl_be_p(ptw->out_host);
236
+ } else {
237
+ data = ldl_le_p(ptw->out_host);
238
+ }
239
} else {
240
- data = address_space_ldl_le(as, addr, attrs, &result);
241
+ /* Page tables are in MMIO. */
242
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
243
+ AddressSpace *as = arm_addressspace(cs, attrs);
244
+ MemTxResult result = MEMTX_OK;
245
+
246
+ if (ptw->out_be) {
247
+ data = address_space_ldl_be(as, ptw->out_phys, attrs, &result);
248
+ } else {
249
+ data = address_space_ldl_le(as, ptw->out_phys, attrs, &result);
250
+ }
251
+ if (unlikely(result != MEMTX_OK)) {
252
+ fi->type = ARMFault_SyncExternalOnWalk;
253
+ fi->ea = arm_extabort_type(result);
254
+ return 0;
255
+ }
256
}
257
- if (result == MEMTX_OK) {
258
- return data;
259
- }
260
- fi->type = ARMFault_SyncExternalOnWalk;
261
- fi->ea = arm_extabort_type(result);
262
- return 0;
263
+ return data;
264
}
265
266
static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
267
ARMMMUFaultInfo *fi)
268
{
269
CPUState *cs = env_cpu(env);
270
- MemTxAttrs attrs = {};
271
- MemTxResult result = MEMTX_OK;
272
- AddressSpace *as;
273
uint64_t data;
274
275
if (!S1_ptw_translate(env, ptw, addr, fi)) {
276
+ /* Failure. */
277
+ assert(fi->s1ptw);
278
return 0;
279
}
280
- addr = ptw->out_phys;
281
- attrs.secure = ptw->out_secure;
282
- as = arm_addressspace(cs, attrs);
283
- if (ptw->out_be) {
284
- data = address_space_ldq_be(as, addr, attrs, &result);
285
+
286
+ if (likely(ptw->out_host)) {
287
+ /* Page tables are in RAM, and we have the host address. */
288
+ if (ptw->out_be) {
289
+ data = ldq_be_p(ptw->out_host);
290
+ } else {
291
+ data = ldq_le_p(ptw->out_host);
292
+ }
293
} else {
294
- data = address_space_ldq_le(as, addr, attrs, &result);
295
+ /* Page tables are in MMIO. */
296
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
297
+ AddressSpace *as = arm_addressspace(cs, attrs);
298
+ MemTxResult result = MEMTX_OK;
299
+
300
+ if (ptw->out_be) {
301
+ data = address_space_ldq_be(as, ptw->out_phys, attrs, &result);
302
+ } else {
303
+ data = address_space_ldq_le(as, ptw->out_phys, attrs, &result);
304
+ }
305
+ if (unlikely(result != MEMTX_OK)) {
306
+ fi->type = ARMFault_SyncExternalOnWalk;
307
+ fi->ea = arm_extabort_type(result);
308
+ return 0;
309
+ }
310
}
311
- if (result == MEMTX_OK) {
312
- return data;
313
- }
314
- fi->type = ARMFault_SyncExternalOnWalk;
315
- fi->ea = arm_extabort_type(result);
316
- return 0;
317
+ return data;
318
}
319
320
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
321
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/tlb_helper.c
324
+++ b/target/arm/tlb_helper.c
325
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
326
bool probe, uintptr_t retaddr)
327
{
328
ARMCPU *cpu = ARM_CPU(cs);
329
- ARMMMUFaultInfo fi = {};
330
GetPhysAddrResult res = {};
331
+ ARMMMUFaultInfo local_fi, *fi;
332
int ret;
333
334
+ /*
335
+ * Allow S1_ptw_translate to see any fault generated here.
336
+ * Since this may recurse, read and clear.
337
+ */
338
+ fi = cpu->env.tlb_fi;
339
+ if (fi) {
340
+ cpu->env.tlb_fi = NULL;
341
+ } else {
342
+ fi = memset(&local_fi, 0, sizeof(local_fi));
73
+ }
343
+ }
74
+
344
+
75
+ if (!vfp_access_check(s)) {
345
/*
76
+ return true;
346
* Walk the page table and (if the mapping exists) add the page
77
+ }
347
* to the TLB. On success, return true. Otherwise, if probing,
78
+
348
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
79
+ /*
349
*/
80
+ * This is a widen-and-shift operation. The shift is always less
350
ret = get_phys_addr(&cpu->env, address, access_type,
81
+ * than the width of the source type, so after widening the input
351
core_to_arm_mmu_idx(&cpu->env, mmu_idx),
82
+ * vector we can simply shift the whole 64-bit widened register,
352
- &res, &fi);
83
+ * and then clear the potential overflow bits resulting from left
353
+ &res, fi);
84
+ * bits of the narrow input appearing as right bits of the left
354
if (likely(!ret)) {
85
+ * neighbour narrow input. Calculate a mask of bits to clear.
355
/*
86
+ */
356
* Map a single [sub]page. Regions smaller than our declared
87
+ if ((a->shift != 0) && (a->size < 2 || u)) {
357
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
88
+ int esize = 8 << a->size;
358
} else {
89
+ widen_mask = MAKE_64BIT_MASK(0, esize);
359
/* now we have a real cpu fault */
90
+ widen_mask >>= esize - a->shift;
360
cpu_restore_state(cs, retaddr, true);
91
+ widen_mask = dup_const(a->size + 1, widen_mask);
361
- arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
92
+ }
362
+ arm_deliver_fault(cpu, address, access_type, mmu_idx, fi);
93
+
363
}
94
+ rm0 = neon_load_reg(a->vm, 0);
364
}
95
+ rm1 = neon_load_reg(a->vm, 1);
365
#else
96
+ tmp = tcg_temp_new_i64();
97
+
98
+ widenfn(tmp, rm0);
99
+ if (a->shift != 0) {
100
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
101
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
102
+ }
103
+ neon_store_reg64(tmp, a->vd);
104
+
105
+ widenfn(tmp, rm1);
106
+ if (a->shift != 0) {
107
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
108
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
109
+ }
110
+ neon_store_reg64(tmp, a->vd + 1);
111
+ tcg_temp_free_i64(tmp);
112
+ return true;
113
+}
114
+
115
+static bool trans_VSHLL_S_2sh(DisasContext *s, arg_2reg_shift *a)
116
+{
117
+ NeonGenWidenFn *widenfn[] = {
118
+ gen_helper_neon_widen_s8,
119
+ gen_helper_neon_widen_s16,
120
+ tcg_gen_ext_i32_i64,
121
+ };
122
+ return do_vshll_2sh(s, a, widenfn[a->size], false);
123
+}
124
+
125
+static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
126
+{
127
+ NeonGenWidenFn *widenfn[] = {
128
+ gen_helper_neon_widen_u8,
129
+ gen_helper_neon_widen_u16,
130
+ tcg_gen_extu_i32_i64,
131
+ };
132
+ return do_vshll_2sh(s, a, widenfn[a->size], true);
133
+}
134
diff --git a/target/arm/translate.c b/target/arm/translate.c
135
index XXXXXXX..XXXXXXX 100644
136
--- a/target/arm/translate.c
137
+++ b/target/arm/translate.c
138
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
139
case 7: /* VQSHL */
140
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
141
case 9: /* VQSHRN, VQRSHRN */
142
+ case 10: /* VSHLL, including VMOVL */
143
return 1; /* handled by decodetree */
144
default:
145
break;
146
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
147
size--;
148
}
149
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
150
- if (op == 10) {
151
- /* VSHLL, VMOVL */
152
- if (q || (rd & 1)) {
153
- return 1;
154
- }
155
- tmp = neon_load_reg(rm, 0);
156
- tmp2 = neon_load_reg(rm, 1);
157
- for (pass = 0; pass < 2; pass++) {
158
- if (pass == 1)
159
- tmp = tmp2;
160
-
161
- gen_neon_widen(cpu_V0, tmp, size, u);
162
-
163
- if (shift != 0) {
164
- /* The shift is less than the width of the source
165
- type, so we can just shift the whole register. */
166
- tcg_gen_shli_i64(cpu_V0, cpu_V0, shift);
167
- /* Widen the result of shift: we need to clear
168
- * the potential overflow bits resulting from
169
- * left bits of the narrow input appearing as
170
- * right bits of left the neighbour narrow
171
- * input. */
172
- if (size < 2 || !u) {
173
- uint64_t imm64;
174
- if (size == 0) {
175
- imm = (0xffu >> (8 - shift));
176
- imm |= imm << 16;
177
- } else if (size == 1) {
178
- imm = 0xffff >> (16 - shift);
179
- } else {
180
- /* size == 2 */
181
- imm = 0xffffffff >> (32 - shift);
182
- }
183
- if (size < 2) {
184
- imm64 = imm | (((uint64_t)imm) << 32);
185
- } else {
186
- imm64 = imm;
187
- }
188
- tcg_gen_andi_i64(cpu_V0, cpu_V0, ~imm64);
189
- }
190
- }
191
- neon_store_reg64(cpu_V0, rd + pass);
192
- }
193
- } else if (op >= 14) {
194
+ if (op >= 14) {
195
/* VCVT fixed-point. */
196
TCGv_ptr fpst;
197
TCGv_i32 shiftv;
198
--
2.20.1

--
2.25.1
From: Paul Zimmerman <pauldzim@gmail.com>

Add the dwc-hsotg (dwc2) USB host controller emulation code.
Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c.

Note that to use this with the dwc-otg driver in the Raspbian
kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0" on
the kernel command line.

Emulation of slave mode and of descriptor-DMA mode has not been
implemented yet. These modes are seldom used.

I have used some on-line sources of information while developing
this emulation, including:

http://www.capital-micro.com/PDF/CME-M7_Family_User_Guide_EN.pdf
which has a pretty complete description of the controller starting
on page 370.

https://sourceforge.net/p/wive-ng/wive-ng-mt/ci/master/tree/docs/DataSheets/RT3050_5x_V2.0_081408_0902.pdf
which has a description of the controller registers starting on
page 130.

Thanks to Philippe Mathieu-Daudé for providing a cleaner method
of implementing the memory regions for the controller registers.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-5-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/usb/hcd-dwc2.c | 1417 ++++++++++++++++++++++++++++++++++++++++++
hw/usb/Kconfig | 5 +
hw/usb/Makefile.objs | 1 +
hw/usb/trace-events | 50 ++
4 files changed, 1473 insertions(+)
create mode 100644 hw/usb/hcd-dwc2.c

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 191 +++++++++++++++++++++++++----------------
1 file changed, 100 insertions(+), 91 deletions(-)
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
40
new file mode 100644
12
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX
13
--- a/target/arm/ptw.c
42
--- /dev/null
14
+++ b/target/arm/ptw.c
43
+++ b/hw/usb/hcd-dwc2.c
15
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
44
@@ -XXX,XX +XXX,XX @@
16
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
45
+/*
17
__attribute__((nonnull));
46
+ * dwc-hsotg (dwc2) USB host controller emulation
18
47
+ *
19
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
48
+ * Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c
20
+ target_ulong address,
49
+ *
21
+ MMUAccessType access_type,
50
+ * Note that to use this emulation with the dwc-otg driver in the
22
+ GetPhysAddrResult *result,
51
+ * Raspbian kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0"
23
+ ARMMMUFaultInfo *fi)
52
+ * on the kernel command line.
24
+ __attribute__((nonnull));
53
+ *
25
+
54
+ * Some useful documentation used to develop this emulation can be
26
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
55
+ * found online (as of April 2020) at:
27
static const uint8_t pamax_map[] = {
56
+ *
28
[0] = 32,
57
+ * http://www.capital-micro.com/PDF/CME-M7_Family_User_Guide_EN.pdf
29
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
58
+ * which has a pretty complete description of the controller starting
30
return 0;
59
+ * on page 370.
31
}
60
+ *
32
61
+ * https://sourceforge.net/p/wive-ng/wive-ng-mt/ci/master/tree/docs/DataSheets/RT3050_5x_V2.0_081408_0902.pdf
33
+static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
62
+ * which has a description of the controller registers starting on
34
+ target_ulong address,
63
+ * page 130.
35
+ MMUAccessType access_type,
64
+ *
36
+ GetPhysAddrResult *result,
65
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
37
+ ARMMMUFaultInfo *fi)
66
+ *
67
+ * This program is free software; you can redistribute it and/or modify
68
+ * it under the terms of the GNU General Public License as published by
69
+ * the Free Software Foundation; either version 2 of the License, or
70
+ * (at your option) any later version.
71
+ *
72
+ * This program is distributed in the hope that it will be useful,
73
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
74
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
75
+ * GNU General Public License for more details.
76
+ */
77
+
78
+#include "qemu/osdep.h"
79
+#include "qemu/units.h"
80
+#include "qapi/error.h"
81
+#include "hw/usb/dwc2-regs.h"
82
+#include "hw/usb/hcd-dwc2.h"
83
+#include "migration/vmstate.h"
84
+#include "trace.h"
85
+#include "qemu/log.h"
86
+#include "qemu/error-report.h"
87
+#include "qemu/main-loop.h"
88
+#include "hw/qdev-properties.h"
89
+
90
+#define USB_HZ_FS 12000000
91
+#define USB_HZ_HS 96000000
92
+#define USB_FRMINTVL 12000
93
+
94
+/* nifty macros from Arnon's EHCI version */
95
+#define get_field(data, field) \
96
+ (((data) & field##_MASK) >> field##_SHIFT)
97
+
98
+#define set_field(data, newval, field) do { \
99
+ uint32_t val = *(data); \
100
+ val &= ~field##_MASK; \
101
+ val |= ((newval) << field##_SHIFT) & field##_MASK; \
102
+ *(data) = val; \
103
+} while (0)
104
+
105
+#define get_bit(data, bitmask) \
106
+ (!!((data) & (bitmask)))
107
+
108
+/* update irq line */
109
+static inline void dwc2_update_irq(DWC2State *s)
110
+{
38
+{
111
+ static int oldlevel;
39
+ hwaddr ipa;
112
+ int level = 0;
40
+ int s1_prot;
113
+
41
+ int ret;
114
+ if ((s->gintsts & s->gintmsk) && (s->gahbcfg & GAHBCFG_GLBL_INTR_EN)) {
42
+ bool is_secure = ptw->in_secure;
115
+ level = 1;
43
+ bool ipa_secure, s2walk_secure;
116
+ }
44
+ ARMCacheAttrs cacheattrs1;
117
+ if (level != oldlevel) {
45
+ bool is_el0;
118
+ oldlevel = level;
46
+ uint64_t hcr;
119
+ trace_usb_dwc2_update_irq(level);
47
+
120
+ qemu_set_irq(s->irq, level);
48
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type, result, fi);
121
+ }
49
+
122
+}
50
+ /* If S1 fails or S2 is disabled, return early. */
123
+
51
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
124
+/* flag interrupt condition */
52
+ return ret;
125
+static inline void dwc2_raise_global_irq(DWC2State *s, uint32_t intr)
53
+ }
126
+{
54
+
127
+ if (!(s->gintsts & intr)) {
55
+ ipa = result->f.phys_addr;
128
+ s->gintsts |= intr;
56
+ ipa_secure = result->f.attrs.secure;
129
+ trace_usb_dwc2_raise_global_irq(intr);
57
+ if (is_secure) {
130
+ dwc2_update_irq(s);
58
+ /* Select TCR based on the NS bit from the S1 walk. */
131
+ }
59
+ s2walk_secure = !(ipa_secure
132
+}
60
+ ? env->cp15.vstcr_el2 & VSTCR_SW
133
+
61
+ : env->cp15.vtcr_el2 & VTCR_NSW);
134
+static inline void dwc2_lower_global_irq(DWC2State *s, uint32_t intr)
62
+ } else {
135
+{
63
+ assert(!ipa_secure);
136
+ if (s->gintsts & intr) {
64
+ s2walk_secure = false;
137
+ s->gintsts &= ~intr;
65
+ }
138
+ trace_usb_dwc2_lower_global_irq(intr);
66
+
139
+ dwc2_update_irq(s);
67
+ is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
140
+ }
68
+ ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
141
+}
69
+ ptw->in_secure = s2walk_secure;
142
+
70
+
143
+static inline void dwc2_raise_host_irq(DWC2State *s, uint32_t host_intr)
71
+ /*
144
+{
72
+ * S1 is done, now do S2 translation.
145
+ if (!(s->haint & host_intr)) {
73
+ * Save the stage1 results so that we may merge prot and cacheattrs later.
146
+ s->haint |= host_intr;
74
+ */
147
+ s->haint &= 0xffff;
75
+ s1_prot = result->f.prot;
148
+ trace_usb_dwc2_raise_host_irq(host_intr);
76
+ cacheattrs1 = result->cacheattrs;
149
+ if (s->haint & s->haintmsk) {
77
+ memset(result, 0, sizeof(*result));
150
+ dwc2_raise_global_irq(s, GINTSTS_HCHINT);
78
+
79
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
80
+ fi->s2addr = ipa;
81
+
82
+ /* Combine the S1 and S2 perms. */
83
+ result->f.prot &= s1_prot;
84
+
85
+ /* If S2 fails, return early. */
86
+ if (ret) {
87
+ return ret;
88
+ }
89
+
90
+ /* Combine the S1 and S2 cache attributes. */
91
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
92
+ if (hcr & HCR_DC) {
93
+ /*
94
+ * HCR.DC forces the first stage attributes to
95
+ * Normal Non-Shareable,
96
+ * Inner Write-Back Read-Allocate Write-Allocate,
97
+ * Outer Write-Back Read-Allocate Write-Allocate.
98
+ * Do not overwrite Tagged within attrs.
99
+ */
100
+ if (cacheattrs1.attrs != 0xf0) {
101
+ cacheattrs1.attrs = 0xff;
151
+ }
102
+ }
152
+ }
103
+ cacheattrs1.shareability = 0;
153
+}
104
+ }
154
+
105
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
155
+static inline void dwc2_lower_host_irq(DWC2State *s, uint32_t host_intr)
106
+ result->cacheattrs);
156
+{
157
+ if (s->haint & host_intr) {
158
+ s->haint &= ~host_intr;
159
+ trace_usb_dwc2_lower_host_irq(host_intr);
160
+ if (!(s->haint & s->haintmsk)) {
161
+ dwc2_lower_global_irq(s, GINTSTS_HCHINT);
162
+ }
163
+ }
164
+}
165
+
166
+static inline void dwc2_update_hc_irq(DWC2State *s, int index)
167
+{
168
+ uint32_t host_intr = 1 << (index >> 3);
169
+
170
+ if (s->hreg1[index + 2] & s->hreg1[index + 3]) {
171
+ dwc2_raise_host_irq(s, host_intr);
172
+ } else {
173
+ dwc2_lower_host_irq(s, host_intr);
174
+ }
175
+}
176
+
177
+/* set a timer for EOF */
178
+static void dwc2_eof_timer(DWC2State *s)
179
+{
180
+ timer_mod(s->eof_timer, s->sof_time + s->usb_frame_time);
181
+}
182
+
183
+/* Set a timer for EOF and generate SOF event */
184
+static void dwc2_sof(DWC2State *s)
185
+{
186
+ s->sof_time += s->usb_frame_time;
187
+ trace_usb_dwc2_sof(s->sof_time);
188
+ dwc2_eof_timer(s);
189
+ dwc2_raise_global_irq(s, GINTSTS_SOF);
190
+}
191
+
192
+/* Do frame processing on frame boundary */
193
+static void dwc2_frame_boundary(void *opaque)
194
+{
195
+ DWC2State *s = opaque;
196
+ int64_t now;
197
+ uint16_t frcnt;
198
+
199
+ now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
200
+
201
+ /* Frame boundary, so do EOF stuff here */
202
+
203
+ /* Increment frame number */
204
+ frcnt = (uint16_t)((now - s->sof_time) / s->fi);
205
+ s->frame_number = (s->frame_number + frcnt) & 0xffff;
206
+ s->hfnum = s->frame_number & HFNUM_MAX_FRNUM;
207
+
208
+ /* Do SOF stuff here */
209
+ dwc2_sof(s);
210
+}
211
+
212
+/* Start sending SOF tokens on the USB bus */
213
+static void dwc2_bus_start(DWC2State *s)
214
+{
215
+ trace_usb_dwc2_bus_start();
216
+ s->sof_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
217
+ dwc2_eof_timer(s);
218
+}
219
+
220
+/* Stop sending SOF tokens on the USB bus */
221
+static void dwc2_bus_stop(DWC2State *s)
222
+{
223
+ trace_usb_dwc2_bus_stop();
224
+ timer_del(s->eof_timer);
225
+}
226
+
227
+static USBDevice *dwc2_find_device(DWC2State *s, uint8_t addr)
228
+{
229
+ USBDevice *dev;
230
+
231
+ trace_usb_dwc2_find_device(addr);
232
+
233
+ if (!(s->hprt0 & HPRT0_ENA)) {
234
+ trace_usb_dwc2_port_disabled(0);
235
+ } else {
236
+ dev = usb_find_device(&s->uport, addr);
237
+ if (dev != NULL) {
238
+ trace_usb_dwc2_device_found(0);
239
+ return dev;
240
+ }
241
+ }
242
+
243
+ trace_usb_dwc2_device_not_found();
244
+ return NULL;
245
+}
246
+
247
+static const char *pstatus[] = {
248
+ "USB_RET_SUCCESS", "USB_RET_NODEV", "USB_RET_NAK", "USB_RET_STALL",
249
+ "USB_RET_BABBLE", "USB_RET_IOERROR", "USB_RET_ASYNC",
250
+ "USB_RET_ADD_TO_QUEUE", "USB_RET_REMOVE_FROM_QUEUE"
251
+};
252
+
253
+static uint32_t pintr[] = {
254
+ HCINTMSK_XFERCOMPL, HCINTMSK_XACTERR, HCINTMSK_NAK, HCINTMSK_STALL,
255
+ HCINTMSK_BBLERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR,
256
+ HCINTMSK_XACTERR
257
+};
258
+
259
+static const char *types[] = {
260
+ "Ctrl", "Isoc", "Bulk", "Intr"
261
+};
262
+
263
+static const char *dirs[] = {
264
+ "Out", "In"
265
+};
266
+
267
+static void dwc2_handle_packet(DWC2State *s, uint32_t devadr, USBDevice *dev,
268
+ USBEndpoint *ep, uint32_t index, bool send)
269
+{
270
+ DWC2Packet *p;
271
+ uint32_t hcchar = s->hreg1[index];
272
+ uint32_t hctsiz = s->hreg1[index + 4];
273
+ uint32_t hcdma = s->hreg1[index + 5];
274
+ uint32_t chan, epnum, epdir, eptype, mps, pid, pcnt, len, tlen, intr = 0;
275
+ uint32_t tpcnt, stsidx, actual = 0;
276
+ bool do_intr = false, done = false;
277
+
278
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
279
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
280
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
281
+ mps = get_field(hcchar, HCCHAR_MPS);
282
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
283
+ pcnt = get_field(hctsiz, TSIZ_PKTCNT);
284
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
285
+ assert(len <= DWC2_MAX_XFER_SIZE);
286
+ chan = index >> 3;
287
+ p = &s->packet[chan];
288
+
289
+ trace_usb_dwc2_handle_packet(chan, dev, &p->packet, epnum, types[eptype],
290
+ dirs[epdir], mps, len, pcnt);
291
+
292
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
293
+ pid = USB_TOKEN_SETUP;
294
+ } else {
295
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
296
+ }
297
+
298
+ if (send) {
299
+ tlen = len;
300
+ if (p->small) {
301
+ if (tlen > mps) {
302
+ tlen = mps;
303
+ }
304
+ }
305
+
306
+ if (pid != USB_TOKEN_IN) {
307
+ trace_usb_dwc2_memory_read(hcdma, tlen);
308
+ if (dma_memory_read(&s->dma_as, hcdma,
309
+ s->usb_buf[chan], tlen) != MEMTX_OK) {
310
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_read failed\n",
311
+ __func__);
312
+ }
313
+ }
314
+
315
+ usb_packet_init(&p->packet);
316
+ usb_packet_setup(&p->packet, pid, ep, 0, hcdma,
317
+ pid != USB_TOKEN_IN, true);
318
+ usb_packet_addbuf(&p->packet, s->usb_buf[chan], tlen);
319
+ p->async = DWC2_ASYNC_NONE;
320
+ usb_handle_packet(dev, &p->packet);
321
+ } else {
322
+ tlen = p->len;
323
+ }
324
+
325
+ stsidx = -p->packet.status;
326
+ assert(stsidx < sizeof(pstatus) / sizeof(*pstatus));
327
+ actual = p->packet.actual_length;
328
+ trace_usb_dwc2_packet_status(pstatus[stsidx], actual);
329
+
330
+babble:
331
+ if (p->packet.status != USB_RET_SUCCESS &&
332
+ p->packet.status != USB_RET_NAK &&
333
+ p->packet.status != USB_RET_STALL &&
334
+ p->packet.status != USB_RET_ASYNC) {
335
+ trace_usb_dwc2_packet_error(pstatus[stsidx]);
336
+ }
337
+
338
+ if (p->packet.status == USB_RET_ASYNC) {
339
+ trace_usb_dwc2_async_packet(&p->packet, chan, dev, epnum,
340
+ dirs[epdir], tlen);
341
+ usb_device_flush_ep_queue(dev, ep);
342
+ assert(p->async != DWC2_ASYNC_INFLIGHT);
343
+ p->devadr = devadr;
344
+ p->epnum = epnum;
345
+ p->epdir = epdir;
346
+ p->mps = mps;
347
+ p->pid = pid;
348
+ p->index = index;
349
+ p->pcnt = pcnt;
350
+ p->len = tlen;
351
+ p->async = DWC2_ASYNC_INFLIGHT;
352
+ p->needs_service = false;
353
+ return;
354
+ }
355
+
356
+ if (p->packet.status == USB_RET_SUCCESS) {
357
+ if (actual > tlen) {
358
+ p->packet.status = USB_RET_BABBLE;
359
+ goto babble;
360
+ }
361
+
362
+ if (pid == USB_TOKEN_IN) {
363
+ trace_usb_dwc2_memory_write(hcdma, actual);
364
+ if (dma_memory_write(&s->dma_as, hcdma, s->usb_buf[chan],
365
+ actual) != MEMTX_OK) {
366
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_write failed\n",
367
+ __func__);
368
+ }
369
+ }
370
+
371
+ tpcnt = actual / mps;
372
+ if (actual % mps) {
373
+ tpcnt++;
374
+ if (pid == USB_TOKEN_IN) {
375
+ done = true;
376
+ }
377
+ }
378
+
379
+ pcnt -= tpcnt < pcnt ? tpcnt : pcnt;
380
+ set_field(&hctsiz, pcnt, TSIZ_PKTCNT);
381
+ len -= actual < len ? actual : len;
382
+ set_field(&hctsiz, len, TSIZ_XFERSIZE);
383
+ s->hreg1[index + 4] = hctsiz;
384
+ hcdma += actual;
385
+ s->hreg1[index + 5] = hcdma;
386
+
387
+ if (!pcnt || len == 0 || actual == 0) {
388
+ done = true;
389
+ }
390
+ } else {
391
+ intr |= pintr[stsidx];
392
+ if (p->packet.status == USB_RET_NAK &&
393
+ (eptype == USB_ENDPOINT_XFER_CONTROL ||
394
+ eptype == USB_ENDPOINT_XFER_BULK)) {
395
+ /*
396
+ * for ctrl/bulk, automatically retry on NAK,
397
+ * but send the interrupt anyway
398
+ */
399
+ intr &= ~HCINTMSK_RESERVED14_31;
400
+ s->hreg1[index + 2] |= intr;
401
+ do_intr = true;
402
+ } else {
403
+ intr |= HCINTMSK_CHHLTD;
404
+ done = true;
405
+ }
406
+ }
407
+
408
+ usb_packet_cleanup(&p->packet);
409
+
410
+ if (done) {
411
+ hcchar &= ~HCCHAR_CHENA;
412
+ s->hreg1[index] = hcchar;
413
+ if (!(intr & HCINTMSK_CHHLTD)) {
414
+ intr |= HCINTMSK_CHHLTD | HCINTMSK_XFERCOMPL;
415
+ }
416
+ intr &= ~HCINTMSK_RESERVED14_31;
417
+ s->hreg1[index + 2] |= intr;
418
+ p->needs_service = false;
419
+ trace_usb_dwc2_packet_done(pstatus[stsidx], actual, len, pcnt);
420
+ dwc2_update_hc_irq(s, index);
421
+ return;
422
+ }
423
+
424
+ p->devadr = devadr;
425
+ p->epnum = epnum;
426
+ p->epdir = epdir;
427
+ p->mps = mps;
428
+ p->pid = pid;
429
+ p->index = index;
430
+ p->pcnt = pcnt;
431
+ p->len = len;
432
+ p->needs_service = true;
433
+ trace_usb_dwc2_packet_next(pstatus[stsidx], len, pcnt);
434
+ if (do_intr) {
435
+ dwc2_update_hc_irq(s, index);
436
+ }
437
+}
438
+
439
+/* Attach or detach a device on root hub */
440
+
441
+static const char *speeds[] = {
442
+ "low", "full", "high"
443
+};
444
+
445
+static void dwc2_attach(USBPort *port)
446
+{
447
+ DWC2State *s = port->opaque;
448
+ int hispd = 0;
449
+
450
+ trace_usb_dwc2_attach(port);
451
+ assert(port->index == 0);
452
+
453
+ if (!port->dev || !port->dev->attached) {
454
+ return;
455
+ }
456
+
457
+ assert(port->dev->speed <= USB_SPEED_HIGH);
458
+ trace_usb_dwc2_attach_speed(speeds[port->dev->speed]);
459
+ s->hprt0 &= ~HPRT0_SPD_MASK;
460
+
461
+ switch (port->dev->speed) {
462
+ case USB_SPEED_LOW:
463
+ s->hprt0 |= HPRT0_SPD_LOW_SPEED << HPRT0_SPD_SHIFT;
464
+ break;
465
+ case USB_SPEED_FULL:
466
+ s->hprt0 |= HPRT0_SPD_FULL_SPEED << HPRT0_SPD_SHIFT;
467
+ break;
468
+ case USB_SPEED_HIGH:
469
+ s->hprt0 |= HPRT0_SPD_HIGH_SPEED << HPRT0_SPD_SHIFT;
470
+ hispd = 1;
471
+ break;
472
+ }
473
+
474
+ if (hispd) {
475
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 8000; /* 125000 */
476
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_HS) {
477
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_HS; /* 10.4 */
478
+ } else {
479
+ s->usb_bit_time = 1;
480
+ }
481
+ } else {
482
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
483
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
484
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
485
+ } else {
486
+ s->usb_bit_time = 1;
487
+ }
488
+ }
489
+
490
+ s->fi = USB_FRMINTVL - 1;
491
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_CONNSTS;
492
+
493
+ dwc2_bus_start(s);
494
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
495
+}
496
+
497
+static void dwc2_detach(USBPort *port)
498
+{
499
+ DWC2State *s = port->opaque;
500
+
501
+ trace_usb_dwc2_detach(port);
502
+ assert(port->index == 0);
503
+
504
+ dwc2_bus_stop(s);
505
+
506
+ s->hprt0 &= ~(HPRT0_SPD_MASK | HPRT0_SUSP | HPRT0_ENA | HPRT0_CONNSTS);
507
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_ENACHG;
508
+
509
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
510
+}
511
+
512
+static void dwc2_child_detach(USBPort *port, USBDevice *child)
513
+{
514
+ trace_usb_dwc2_child_detach(port, child);
515
+ assert(port->index == 0);
516
+}
517
+
518
+static void dwc2_wakeup(USBPort *port)
519
+{
520
+ DWC2State *s = port->opaque;
521
+
522
+ trace_usb_dwc2_wakeup(port);
523
+ assert(port->index == 0);
524
+
525
+ if (s->hprt0 & HPRT0_SUSP) {
526
+ s->hprt0 |= HPRT0_RES;
527
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
528
+ }
529
+
530
+ qemu_bh_schedule(s->async_bh);
531
+}
532
+
533
+static void dwc2_async_packet_complete(USBPort *port, USBPacket *packet)
534
+{
535
+ DWC2State *s = port->opaque;
536
+ DWC2Packet *p;
537
+ USBDevice *dev;
538
+ USBEndpoint *ep;
539
+
540
+ assert(port->index == 0);
541
+ p = container_of(packet, DWC2Packet, packet);
542
+ dev = dwc2_find_device(s, p->devadr);
543
+ ep = usb_ep_get(dev, p->pid, p->epnum);
544
+ trace_usb_dwc2_async_packet_complete(port, packet, p->index >> 3, dev,
545
+ p->epnum, dirs[p->epdir], p->len);
546
+ assert(p->async == DWC2_ASYNC_INFLIGHT);
547
+
548
+ if (packet->status == USB_RET_REMOVE_FROM_QUEUE) {
549
+ usb_cancel_packet(packet);
550
+ usb_packet_cleanup(packet);
551
+ return;
552
+ }
553
+
554
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, false);
555
+
556
+ p->async = DWC2_ASYNC_FINISHED;
557
+ qemu_bh_schedule(s->async_bh);
558
+}
559
+
560
+static USBPortOps dwc2_port_ops = {
561
+ .attach = dwc2_attach,
562
+ .detach = dwc2_detach,
563
+ .child_detach = dwc2_child_detach,
564
+ .wakeup = dwc2_wakeup,
565
+ .complete = dwc2_async_packet_complete,
566
+};
567
+
568
+static uint32_t dwc2_get_frame_remaining(DWC2State *s)
569
+{
570
+ uint32_t fr = 0;
571
+ int64_t tks;
572
+
573
+ tks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - s->sof_time;
574
+ if (tks < 0) {
575
+ tks = 0;
576
+ }
577
+
578
+ /* avoid muldiv if possible */
579
+ if (tks >= s->usb_frame_time) {
580
+ goto out;
581
+ }
582
+ if (tks < s->usb_bit_time) {
583
+ fr = s->fi;
584
+ goto out;
585
+ }
586
+
587
+ /* tks = number of ns since SOF, divided by 83 (fs) or 10 (hs) */
588
+ tks = tks / s->usb_bit_time;
589
+ if (tks >= (int64_t)s->fi) {
590
+ goto out;
591
+ }
592
+
593
+ /* remaining = frame interval minus tks */
594
+ fr = (uint32_t)((int64_t)s->fi - tks);
595
+
596
+out:
597
+ return fr;
598
+}
599
+
600
+static void dwc2_work_bh(void *opaque)
601
+{
602
+ DWC2State *s = opaque;
603
+ DWC2Packet *p;
604
+ USBDevice *dev;
605
+ USBEndpoint *ep;
606
+ int64_t t_now, expire_time;
607
+ int chan;
608
+ bool found = false;
609
+
610
+ trace_usb_dwc2_work_bh();
611
+ if (s->working) {
612
+ return;
613
+ }
614
+ s->working = true;
615
+
616
+ t_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
617
+ chan = s->next_chan;
618
+
619
+ do {
620
+ p = &s->packet[chan];
621
+ if (p->needs_service) {
622
+ dev = dwc2_find_device(s, p->devadr);
623
+ ep = usb_ep_get(dev, p->pid, p->epnum);
624
+ trace_usb_dwc2_work_bh_service(s->next_chan, chan, dev, p->epnum);
625
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, true);
626
+ found = true;
627
+ }
628
+ if (++chan == DWC2_NB_CHAN) {
629
+ chan = 0;
630
+ }
631
+ if (found) {
632
+ s->next_chan = chan;
633
+ trace_usb_dwc2_work_bh_next(chan);
634
+ }
635
+ } while (chan != s->next_chan);
636
+
637
+ if (found) {
638
+ expire_time = t_now + NANOSECONDS_PER_SECOND / 4000;
639
+ timer_mod(s->frame_timer, expire_time);
640
+ }
641
+ s->working = false;
642
+}
643
+
644
+static void dwc2_enable_chan(DWC2State *s, uint32_t index)
645
+{
646
+ USBDevice *dev;
647
+ USBEndpoint *ep;
648
+ uint32_t hcchar;
649
+ uint32_t hctsiz;
650
+ uint32_t devadr, epnum, epdir, eptype, pid, len;
651
+ DWC2Packet *p;
652
+
653
+ assert((index >> 3) < DWC2_NB_CHAN);
654
+ p = &s->packet[index >> 3];
655
+ hcchar = s->hreg1[index];
656
+ hctsiz = s->hreg1[index + 4];
657
+ devadr = get_field(hcchar, HCCHAR_DEVADDR);
658
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
659
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
660
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
661
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
662
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
663
+
664
+ dev = dwc2_find_device(s, devadr);
665
+
666
+ trace_usb_dwc2_enable_chan(index >> 3, dev, &p->packet, epnum);
667
+ if (dev == NULL) {
668
+ return;
669
+ }
670
+
671
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
672
+ pid = USB_TOKEN_SETUP;
673
+ } else {
674
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
675
+ }
676
+
677
+ ep = usb_ep_get(dev, pid, epnum);
678
+
107
+
679
+ /*
108
+ /*
680
+ * Hack: Networking doesn't like us delivering large transfers, it kind
109
+ * Check if IPA translates to secure or non-secure PA space.
681
+ * of works but the latency is horrible. So if the transfer is <= the mtu
110
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
682
+ * size, we take that as a hint that this might be a network transfer,
683
+ * and do the transfer packet-by-packet.
684
+ */
111
+ */
685
+ if (len > 1536) {
112
+ result->f.attrs.secure =
686
+ p->small = false;
113
+ (is_secure
687
+ } else {
114
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
688
+ p->small = true;
115
+ && (ipa_secure
689
+ }
116
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
690
+
117
+
691
+ dwc2_handle_packet(s, devadr, dev, ep, index, true);
692
+ qemu_bh_schedule(s->async_bh);
693
+}
694
+
695
+static const char *glbregnm[] = {
696
+ "GOTGCTL ", "GOTGINT ", "GAHBCFG ", "GUSBCFG ", "GRSTCTL ",
697
+ "GINTSTS ", "GINTMSK ", "GRXSTSR ", "GRXSTSP ", "GRXFSIZ ",
698
+ "GNPTXFSIZ", "GNPTXSTS ", "GI2CCTL ", "GPVNDCTL ", "GGPIO ",
699
+ "GUID ", "GSNPSID ", "GHWCFG1 ", "GHWCFG2 ", "GHWCFG3 ",
700
+ "GHWCFG4 ", "GLPMCFG ", "GPWRDN ", "GDFIFOCFG", "GADPCTL ",
701
+ "GREFCLK ", "GINTMSK2 ", "GINTSTS2 "
702
+};
703
+
704
+static uint64_t dwc2_glbreg_read(void *ptr, hwaddr addr, int index,
705
+ unsigned size)
706
+{
707
+ DWC2State *s = ptr;
708
+ uint32_t val;
709
+
710
+ assert(addr <= GINTSTS2);
711
+ val = s->glbreg[index];
712
+
713
+ switch (addr) {
714
+ case GRSTCTL:
715
+ /* clear any self-clearing bits that were set */
716
+ val &= ~(GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH | GRSTCTL_IN_TKNQ_FLSH |
717
+ GRSTCTL_FRMCNTRRST | GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
718
+ s->glbreg[index] = val;
719
+ break;
720
+ default:
721
+ break;
722
+ }
723
+
724
+ trace_usb_dwc2_glbreg_read(addr, glbregnm[index], val);
725
+ return val;
726
+}
727
+
728
+static void dwc2_glbreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
729
+ unsigned size)
730
+{
731
+ DWC2State *s = ptr;
732
+ uint64_t orig = val;
733
+ uint32_t *mmio;
734
+ uint32_t old;
735
+ int iflg = 0;
736
+
737
+ assert(addr <= GINTSTS2);
738
+ mmio = &s->glbreg[index];
739
+ old = *mmio;
740
+
741
+ switch (addr) {
742
+ case GOTGCTL:
743
+ /* don't allow setting of read-only bits */
744
+ val &= ~(GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
745
+ GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
746
+ GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
747
+ /* don't allow clearing of read-only bits */
748
+ val |= old & (GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
749
+ GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
750
+ GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
751
+ break;
752
+ case GAHBCFG:
753
+ if ((val & GAHBCFG_GLBL_INTR_EN) && !(old & GAHBCFG_GLBL_INTR_EN)) {
754
+ iflg = 1;
755
+ }
756
+ break;
757
+ case GRSTCTL:
758
+ val |= GRSTCTL_AHBIDLE;
759
+ val &= ~GRSTCTL_DMAREQ;
760
+ if (!(old & GRSTCTL_TXFFLSH) && (val & GRSTCTL_TXFFLSH)) {
761
+ /* TODO - TX fifo flush */
762
+ qemu_log_mask(LOG_UNIMP, "Tx FIFO flush not implemented\n");
763
+ }
764
+ if (!(old & GRSTCTL_RXFFLSH) && (val & GRSTCTL_RXFFLSH)) {
765
+ /* TODO - RX fifo flush */
766
+ qemu_log_mask(LOG_UNIMP, "Rx FIFO flush not implemented\n");
767
+ }
768
+ if (!(old & GRSTCTL_IN_TKNQ_FLSH) && (val & GRSTCTL_IN_TKNQ_FLSH)) {
769
+ /* TODO - device IN token queue flush */
770
+ qemu_log_mask(LOG_UNIMP, "Token queue flush not implemented\n");
771
+ }
772
+ if (!(old & GRSTCTL_FRMCNTRRST) && (val & GRSTCTL_FRMCNTRRST)) {
773
+ /* TODO - host frame counter reset */
774
+ qemu_log_mask(LOG_UNIMP, "Frame counter reset not implemented\n");
775
+ }
776
+ if (!(old & GRSTCTL_HSFTRST) && (val & GRSTCTL_HSFTRST)) {
777
+ /* TODO - host soft reset */
778
+ qemu_log_mask(LOG_UNIMP, "Host soft reset not implemented\n");
779
+ }
780
+ if (!(old & GRSTCTL_CSFTRST) && (val & GRSTCTL_CSFTRST)) {
781
+ /* TODO - core soft reset */
782
+ qemu_log_mask(LOG_UNIMP, "Core soft reset not implemented\n");
783
+ }
784
+ /* don't allow clearing of self-clearing bits */
785
+ val |= old & (GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH |
786
+ GRSTCTL_IN_TKNQ_FLSH | GRSTCTL_FRMCNTRRST |
787
+ GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
788
+ break;
789
+ case GINTSTS:
790
+ /* clear the write-1-to-clear bits */
791
+ val |= ~old;
792
+ val = ~val;
793
+ /* don't allow clearing of read-only bits */
794
+ val |= old & (GINTSTS_PTXFEMP | GINTSTS_HCHINT | GINTSTS_PRTINT |
795
+ GINTSTS_OEPINT | GINTSTS_IEPINT | GINTSTS_GOUTNAKEFF |
796
+ GINTSTS_GINNAKEFF | GINTSTS_NPTXFEMP | GINTSTS_RXFLVL |
797
+ GINTSTS_OTGINT | GINTSTS_CURMODE_HOST);
798
+ iflg = 1;
799
+ break;
800
+ case GINTMSK:
801
+ iflg = 1;
802
+ break;
803
+ default:
804
+ break;
805
+ }
806
+
807
+ trace_usb_dwc2_glbreg_write(addr, glbregnm[index], orig, old, val);
808
+ *mmio = val;
809
+
810
+ if (iflg) {
811
+ dwc2_update_irq(s);
812
+ }
813
+}
814
+
815
+static uint64_t dwc2_fszreg_read(void *ptr, hwaddr addr, int index,
816
+ unsigned size)
817
+{
818
+ DWC2State *s = ptr;
819
+ uint32_t val;
820
+
821
+ assert(addr == HPTXFSIZ);
822
+ val = s->fszreg[index];
823
+
824
+ trace_usb_dwc2_fszreg_read(addr, val);
825
+ return val;
826
+}
827
+
828
+static void dwc2_fszreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
829
+ unsigned size)
830
+{
831
+ DWC2State *s = ptr;
832
+ uint64_t orig = val;
833
+ uint32_t *mmio;
834
+ uint32_t old;
835
+
836
+ assert(addr == HPTXFSIZ);
837
+ mmio = &s->fszreg[index];
838
+ old = *mmio;
839
+
840
+ trace_usb_dwc2_fszreg_write(addr, orig, old, val);
841
+ *mmio = val;
842
+}
843
+
844
+static const char *hreg0nm[] = {
845
+ "HCFG ", "HFIR ", "HFNUM ", "<rsvd> ", "HPTXSTS ",
846
+ "HAINT ", "HAINTMSK ", "HFLBADDR ", "<rsvd> ", "<rsvd> ",
847
+ "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ",
848
+ "<rsvd> ", "HPRT0 "
849
+};
850
+
851
+static uint64_t dwc2_hreg0_read(void *ptr, hwaddr addr, int index,
852
+ unsigned size)
853
+{
854
+ DWC2State *s = ptr;
855
+ uint32_t val;
856
+
857
+ assert(addr >= HCFG && addr <= HPRT0);
858
+ val = s->hreg0[index];
859
+
860
+ switch (addr) {
861
+ case HFNUM:
862
+ val = (dwc2_get_frame_remaining(s) << HFNUM_FRREM_SHIFT) |
863
+ (s->hfnum << HFNUM_FRNUM_SHIFT);
864
+ break;
865
+ default:
866
+ break;
867
+ }
868
+
869
+ trace_usb_dwc2_hreg0_read(addr, hreg0nm[index], val);
870
+ return val;
871
+}
872
+
873
+static void dwc2_hreg0_write(void *ptr, hwaddr addr, int index, uint64_t val,
874
+ unsigned size)
875
+{
876
+ DWC2State *s = ptr;
877
+ USBDevice *dev = s->uport.dev;
878
+ uint64_t orig = val;
879
+ uint32_t *mmio;
880
+ uint32_t tval, told, old;
881
+ int prst = 0;
882
+ int iflg = 0;
883
+
884
+ assert(addr >= HCFG && addr <= HPRT0);
885
+ mmio = &s->hreg0[index];
886
+ old = *mmio;
887
+
888
+ switch (addr) {
889
+ case HFIR:
890
+ break;
891
+ case HFNUM:
892
+ case HPTXSTS:
893
+ case HAINT:
894
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
895
+ __func__);
896
+ return;
897
+ case HAINTMSK:
898
+ val &= 0xffff;
899
+ break;
900
+ case HPRT0:
901
+ /* don't allow clearing of read-only bits */
902
+ val |= old & (HPRT0_SPD_MASK | HPRT0_LNSTS_MASK | HPRT0_OVRCURRACT |
903
+ HPRT0_CONNSTS);
904
+ /* don't allow clearing of self-clearing bits */
905
+ val |= old & (HPRT0_SUSP | HPRT0_RES);
906
+ /* don't allow setting of self-setting bits */
907
+ if (!(old & HPRT0_ENA) && (val & HPRT0_ENA)) {
908
+ val &= ~HPRT0_ENA;
909
+ }
910
+ /* clear the write-1-to-clear bits */
911
+ tval = val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
912
+ HPRT0_CONNDET);
913
+ told = old & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
914
+ HPRT0_CONNDET);
915
+ tval |= ~told;
916
+ tval = ~tval;
917
+ tval &= (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
918
+ HPRT0_CONNDET);
919
+ val &= ~(HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
920
+ HPRT0_CONNDET);
921
+ val |= tval;
922
+ if (!(val & HPRT0_RST) && (old & HPRT0_RST)) {
923
+ if (dev && dev->attached) {
924
+ val |= HPRT0_ENA | HPRT0_ENACHG;
925
+ prst = 1;
926
+ }
927
+ }
928
+ if (val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_CONNDET)) {
929
+ iflg = 1;
930
+ } else {
931
+ iflg = -1;
932
+ }
933
+ break;
934
+ default:
935
+ break;
936
+ }
937
+
938
+ if (prst) {
939
+ trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old,
940
+ val & ~HPRT0_CONNDET);
941
+ trace_usb_dwc2_hreg0_action("call usb_port_reset");
942
+ usb_port_reset(&s->uport);
943
+ val &= ~HPRT0_CONNDET;
944
+ } else {
945
+ trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old, val);
946
+ }
947
+
948
+ *mmio = val;
949
+
950
+ if (iflg > 0) {
951
+ trace_usb_dwc2_hreg0_action("enable PRTINT");
952
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
953
+ } else if (iflg < 0) {
954
+ trace_usb_dwc2_hreg0_action("disable PRTINT");
955
+ dwc2_lower_global_irq(s, GINTSTS_PRTINT);
956
+ }
957
+}
958
+
959
+static const char *hreg1nm[] = {
960
+ "HCCHAR ", "HCSPLT ", "HCINT ", "HCINTMSK", "HCTSIZ ", "HCDMA ",
961
+ "<rsvd> ", "HCDMAB "
962
+};
963
+
964
+static uint64_t dwc2_hreg1_read(void *ptr, hwaddr addr, int index,
965
+ unsigned size)
966
+{
967
+ DWC2State *s = ptr;
968
+ uint32_t val;
969
+
970
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
971
+ val = s->hreg1[index];
972
+
973
+ trace_usb_dwc2_hreg1_read(addr, hreg1nm[index & 7], addr >> 5, val);
974
+ return val;
975
+}
976
+
977
+static void dwc2_hreg1_write(void *ptr, hwaddr addr, int index, uint64_t val,
978
+ unsigned size)
979
+{
980
+ DWC2State *s = ptr;
981
+ uint64_t orig = val;
982
+ uint32_t *mmio;
983
+ uint32_t old;
984
+ int iflg = 0;
985
+ int enflg = 0;
986
+ int disflg = 0;
987
+
988
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
989
+ mmio = &s->hreg1[index];
990
+ old = *mmio;
991
+
992
+ switch (HSOTG_REG(0x500) + (addr & 0x1c)) {
993
+ case HCCHAR(0):
994
+ if ((val & HCCHAR_CHDIS) && !(old & HCCHAR_CHDIS)) {
995
+ val &= ~(HCCHAR_CHENA | HCCHAR_CHDIS);
996
+ disflg = 1;
997
+ } else {
998
+ val |= old & HCCHAR_CHDIS;
999
+ if ((val & HCCHAR_CHENA) && !(old & HCCHAR_CHENA)) {
1000
+ val &= ~HCCHAR_CHDIS;
1001
+ enflg = 1;
1002
+ } else {
1003
+ val |= old & HCCHAR_CHENA;
1004
+ }
1005
+ }
1006
+ break;
1007
+ case HCINT(0):
1008
+ /* clear the write-1-to-clear bits */
1009
+ val |= ~old;
1010
+ val = ~val;
1011
+ val &= ~HCINTMSK_RESERVED14_31;
1012
+ iflg = 1;
1013
+ break;
1014
+ case HCINTMSK(0):
1015
+ val &= ~HCINTMSK_RESERVED14_31;
1016
+ iflg = 1;
1017
+ break;
1018
+ case HCDMAB(0):
1019
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
1020
+ __func__);
1021
+ return;
1022
+ default:
1023
+ break;
1024
+ }
1025
+
1026
+ trace_usb_dwc2_hreg1_write(addr, hreg1nm[index & 7], index >> 3, orig,
1027
+ old, val);
1028
+ *mmio = val;
1029
+
1030
+ if (disflg) {
1031
+ /* set ChHltd in HCINT */
1032
+ s->hreg1[(index & ~7) + 2] |= HCINTMSK_CHHLTD;
1033
+ iflg = 1;
1034
+ }
1035
+
1036
+ if (enflg) {
1037
+ dwc2_enable_chan(s, index & ~7);
1038
+ }
1039
+
1040
+ if (iflg) {
1041
+ dwc2_update_hc_irq(s, index & ~7);
1042
+ }
1043
+}
1044
+
1045
+static const char *pcgregnm[] = {
1046
+ "PCGCTL ", "PCGCCTL1 "
1047
+};
1048
+
1049
+static uint64_t dwc2_pcgreg_read(void *ptr, hwaddr addr, int index,
1050
+ unsigned size)
1051
+{
1052
+ DWC2State *s = ptr;
1053
+ uint32_t val;
1054
+
1055
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1056
+ val = s->pcgreg[index];
1057
+
1058
+ trace_usb_dwc2_pcgreg_read(addr, pcgregnm[index], val);
1059
+ return val;
1060
+}
1061
+
1062
+static void dwc2_pcgreg_write(void *ptr, hwaddr addr, int index,
1063
+ uint64_t val, unsigned size)
1064
+{
1065
+ DWC2State *s = ptr;
1066
+ uint64_t orig = val;
1067
+ uint32_t *mmio;
1068
+ uint32_t old;
1069
+
1070
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1071
+ mmio = &s->pcgreg[index];
1072
+ old = *mmio;
1073
+
1074
+ trace_usb_dwc2_pcgreg_write(addr, pcgregnm[index], orig, old, val);
1075
+ *mmio = val;
1076
+}
1077
+
1078
+static uint64_t dwc2_hsotg_read(void *ptr, hwaddr addr, unsigned size)
1079
+{
1080
+ uint64_t val;
1081
+
1082
+ switch (addr) {
1083
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1084
+ val = dwc2_glbreg_read(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, size);
1085
+ break;
1086
+ case HSOTG_REG(0x100):
1087
+ val = dwc2_fszreg_read(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, size);
1088
+ break;
1089
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1090
+ /* Gadget-mode registers, just return 0 for now */
1091
+ val = 0;
1092
+ break;
1093
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1094
+ val = dwc2_hreg0_read(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, size);
1095
+ break;
1096
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1097
+ val = dwc2_hreg1_read(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, size);
1098
+ break;
1099
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1100
+ /* Gadget-mode registers, just return 0 for now */
1101
+ val = 0;
1102
+ break;
1103
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1104
+ val = dwc2_pcgreg_read(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, size);
1105
+ break;
1106
+ default:
1107
+ g_assert_not_reached();
1108
+ }
1109
+
1110
+ return val;
1111
+}
1112
+
1113
+static void dwc2_hsotg_write(void *ptr, hwaddr addr, uint64_t val,
1114
+ unsigned size)
1115
+{
1116
+ switch (addr) {
1117
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1118
+ dwc2_glbreg_write(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, val, size);
1119
+ break;
1120
+ case HSOTG_REG(0x100):
1121
+ dwc2_fszreg_write(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, val, size);
1122
+ break;
1123
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1124
+ /* Gadget-mode registers, do nothing for now */
1125
+ break;
1126
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1127
+ dwc2_hreg0_write(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, val, size);
1128
+ break;
1129
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1130
+ dwc2_hreg1_write(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, val, size);
1131
+ break;
1132
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1133
+ /* Gadget-mode registers, do nothing for now */
1134
+ break;
1135
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1136
+ dwc2_pcgreg_write(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, val, size);
1137
+ break;
1138
+ default:
1139
+ g_assert_not_reached();
1140
+ }
1141
+}
1142
+
1143
+static const MemoryRegionOps dwc2_mmio_hsotg_ops = {
1144
+ .read = dwc2_hsotg_read,
1145
+ .write = dwc2_hsotg_write,
1146
+ .impl.min_access_size = 4,
1147
+ .impl.max_access_size = 4,
1148
+ .endianness = DEVICE_LITTLE_ENDIAN,
1149
+};
1150
+
1151
+static uint64_t dwc2_hreg2_read(void *ptr, hwaddr addr, unsigned size)
1152
+{
1153
+ /* TODO - implement FIFOs to support slave mode */
1154
+ trace_usb_dwc2_hreg2_read(addr, addr >> 12, 0);
1155
+ qemu_log_mask(LOG_UNIMP, "FIFO read not implemented\n");
1156
+ return 0;
118
+ return 0;
1157
+}
119
+}
1158
+
120
+
1159
+static void dwc2_hreg2_write(void *ptr, hwaddr addr, uint64_t val,
121
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
1160
+ unsigned size)
122
target_ulong address,
1161
+{
123
MMUAccessType access_type,
1162
+ uint64_t orig = val;
124
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
1163
+
125
if (mmu_idx != s1_mmu_idx) {
1164
+ /* TODO - implement FIFOs to support slave mode */
126
/*
1165
+ trace_usb_dwc2_hreg2_write(addr, addr >> 12, orig, 0, val);
127
* Call ourselves recursively to do the stage 1 and then stage 2
1166
+ qemu_log_mask(LOG_UNIMP, "FIFO write not implemented\n");
128
- * translations if mmu_idx is a two-stage regime.
1167
+}
129
+ * translations if mmu_idx is a two-stage regime, and EL2 present.
1168
+
130
+ * Otherwise, a stage1+stage2 translation is just stage 1.
1169
+static const MemoryRegionOps dwc2_mmio_hreg2_ops = {
131
*/
1170
+ .read = dwc2_hreg2_read,
132
+ ptw->in_mmu_idx = mmu_idx = s1_mmu_idx;
1171
+ .write = dwc2_hreg2_write,
133
if (arm_feature(env, ARM_FEATURE_EL2)) {
1172
+ .impl.min_access_size = 4,
134
- hwaddr ipa;
1173
+ .impl.max_access_size = 4,
135
- int s1_prot;
1174
+ .endianness = DEVICE_LITTLE_ENDIAN,
136
- int ret;
1175
+};
137
- bool ipa_secure, s2walk_secure;
1176
+
138
- ARMCacheAttrs cacheattrs1;
1177
+static void dwc2_wakeup_endpoint(USBBus *bus, USBEndpoint *ep,
139
- bool is_el0;
1178
+ unsigned int stream)
140
- uint64_t hcr;
1179
+{
141
-
1180
+ DWC2State *s = container_of(bus, DWC2State, bus);
142
- ptw->in_mmu_idx = s1_mmu_idx;
1181
+
143
- ret = get_phys_addr_with_struct(env, ptw, address, access_type,
1182
+ trace_usb_dwc2_wakeup_endpoint(ep, stream);
144
- result, fi);
1183
+
145
-
1184
+ /* TODO - do something here? */
146
- /* If S1 fails or S2 is disabled, return early. */
1185
+ qemu_bh_schedule(s->async_bh);
147
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
1186
+}
148
- is_secure)) {
1187
+
149
- return ret;
1188
+static USBBusOps dwc2_bus_ops = {
150
- }
1189
+ .wakeup_endpoint = dwc2_wakeup_endpoint,
151
-
1190
+};
152
- ipa = result->f.phys_addr;
1191
+
153
- ipa_secure = result->f.attrs.secure;
1192
+static void dwc2_work_timer(void *opaque)
154
- if (is_secure) {
1193
+{
155
- /* Select TCR based on the NS bit from the S1 walk. */
1194
+ DWC2State *s = opaque;
156
- s2walk_secure = !(ipa_secure
1195
+
157
- ? env->cp15.vstcr_el2 & VSTCR_SW
1196
+ trace_usb_dwc2_work_timer();
158
- : env->cp15.vtcr_el2 & VTCR_NSW);
1197
+ qemu_bh_schedule(s->async_bh);
159
- } else {
1198
+}
160
- assert(!ipa_secure);
1199
+
161
- s2walk_secure = false;
1200
+static void dwc2_reset_enter(Object *obj, ResetType type)
162
- }
1201
+{
163
-
1202
+ DWC2Class *c = DWC2_GET_CLASS(obj);
164
- ptw->in_mmu_idx =
1203
+ DWC2State *s = DWC2_USB(obj);
165
- s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
1204
+ int i;
166
- ptw->in_secure = s2walk_secure;
1205
+
167
- is_el0 = mmu_idx == ARMMMUIdx_E10_0;
1206
+ trace_usb_dwc2_reset_enter();
168
-
1207
+
169
- /*
1208
+ if (c->parent_phases.enter) {
170
- * S1 is done, now do S2 translation.
1209
+ c->parent_phases.enter(obj, type);
171
- * Save the stage1 results so that we may merge
1210
+ }
172
- * prot and cacheattrs later.
1211
+
173
- */
1212
+ timer_del(s->frame_timer);
174
- s1_prot = result->f.prot;
1213
+ qemu_bh_cancel(s->async_bh);
175
- cacheattrs1 = result->cacheattrs;
1214
+
176
- memset(result, 0, sizeof(*result));
1215
+ if (s->uport.dev && s->uport.dev->attached) {
177
-
1216
+ usb_detach(&s->uport);
178
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
1217
+ }
179
- is_el0, result, fi);
1218
+
180
- fi->s2addr = ipa;
1219
+ dwc2_bus_stop(s);
181
-
1220
+
182
- /* Combine the S1 and S2 perms. */
1221
+ s->gotgctl = GOTGCTL_BSESVLD | GOTGCTL_ASESVLD | GOTGCTL_CONID_B;
183
- result->f.prot &= s1_prot;
1222
+ s->gotgint = 0;
184
-
1223
+ s->gahbcfg = 0;
185
- /* If S2 fails, return early. */
1224
+ s->gusbcfg = 5 << GUSBCFG_USBTRDTIM_SHIFT;
186
- if (ret) {
1225
+ s->grstctl = GRSTCTL_AHBIDLE;
187
- return ret;
1226
+ s->gintsts = GINTSTS_CONIDSTSCHNG | GINTSTS_PTXFEMP | GINTSTS_NPTXFEMP |
188
- }
1227
+ GINTSTS_CURMODE_HOST;
189
-
1228
+ s->gintmsk = 0;
190
- /* Combine the S1 and S2 cache attributes. */
1229
+ s->grxstsr = 0;
191
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
1230
+ s->grxstsp = 0;
192
- if (hcr & HCR_DC) {
1231
+ s->grxfsiz = 1024;
193
- /*
1232
+ s->gnptxfsiz = 1024 << FIFOSIZE_DEPTH_SHIFT;
194
- * HCR.DC forces the first stage attributes to
1233
+ s->gnptxsts = (4 << FIFOSIZE_DEPTH_SHIFT) | 1024;
195
- * Normal Non-Shareable,
1234
+ s->gi2cctl = GI2CCTL_I2CDATSE0 | GI2CCTL_ACK;
196
- * Inner Write-Back Read-Allocate Write-Allocate,
1235
+ s->gpvndctl = 0;
197
- * Outer Write-Back Read-Allocate Write-Allocate.
1236
+ s->ggpio = 0;
198
- * Do not overwrite Tagged within attrs.
1237
+ s->guid = 0;
199
- */
1238
+ s->gsnpsid = 0x4f54294a;
200
- if (cacheattrs1.attrs != 0xf0) {
1239
+ s->ghwcfg1 = 0;
201
- cacheattrs1.attrs = 0xff;
1240
+ s->ghwcfg2 = (8 << GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT) |
202
- }
1241
+ (4 << GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT) |
203
- cacheattrs1.shareability = 0;
1242
+ (4 << GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT) |
204
- }
1243
+ GHWCFG2_DYNAMIC_FIFO |
205
- result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
1244
+ GHWCFG2_PERIO_EP_SUPPORTED |
206
- result->cacheattrs);
1245
+ ((DWC2_NB_CHAN - 1) << GHWCFG2_NUM_HOST_CHAN_SHIFT) |
207
-
1246
+ (GHWCFG2_INT_DMA_ARCH << GHWCFG2_ARCHITECTURE_SHIFT) |
208
- /*
1247
+ (GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST << GHWCFG2_OP_MODE_SHIFT);
209
- * Check if IPA translates to secure or non-secure PA space.
1248
+ s->ghwcfg3 = (4096 << GHWCFG3_DFIFO_DEPTH_SHIFT) |
210
- * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
1249
+ (4 << GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT) |
211
- */
1250
+ (4 << GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT);
212
- result->f.attrs.secure =
1251
+ s->ghwcfg4 = 0;
213
- (is_secure
1252
+ s->glpmcfg = 0;
214
- && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
1253
+ s->gpwrdn = GPWRDN_PWRDNRSTN;
215
- && (ipa_secure
1254
+ s->gdfifocfg = 0;
216
- || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
1255
+ s->gadpctl = 0;
217
-
1256
+ s->grefclk = 0;
218
- return 0;
1257
+ s->gintmsk2 = 0;
219
- } else {
1258
+ s->gintsts2 = 0;
220
- /*
1259
+
221
- * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
1260
+ s->hptxfsiz = 500 << FIFOSIZE_DEPTH_SHIFT;
222
- */
1261
+
223
- mmu_idx = stage_1_mmu_idx(mmu_idx);
1262
+ s->hcfg = 2 << HCFG_RESVALID_SHIFT;
224
+ return get_phys_addr_twostage(env, ptw, address, access_type,
1263
+ s->hfir = 60000;
225
+ result, fi);
1264
+ s->hfnum = 0x3fff;
226
}
1265
+ s->hptxsts = (16 << TXSTS_QSPCAVAIL_SHIFT) | 32768;
227
}
1266
+ s->haint = 0;
228
1267
+ s->haintmsk = 0;
1268
+ s->hprt0 = 0;
1269
+
1270
+ memset(s->hreg1, 0, sizeof(s->hreg1));
1271
+ memset(s->pcgreg, 0, sizeof(s->pcgreg));
1272
+
1273
+ s->sof_time = 0;
1274
+ s->frame_number = 0;
1275
+ s->fi = USB_FRMINTVL - 1;
1276
+ s->next_chan = 0;
1277
+ s->working = false;
1278
+
1279
+ for (i = 0; i < DWC2_NB_CHAN; i++) {
1280
+ s->packet[i].needs_service = false;
1281
+ }
1282
+}
1283
+
1284
+static void dwc2_reset_hold(Object *obj)
1285
+{
1286
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1287
+ DWC2State *s = DWC2_USB(obj);
1288
+
1289
+ trace_usb_dwc2_reset_hold();
1290
+
1291
+ if (c->parent_phases.hold) {
1292
+ c->parent_phases.hold(obj);
1293
+ }
1294
+
1295
+ dwc2_update_irq(s);
1296
+}
1297
+
1298
+static void dwc2_reset_exit(Object *obj)
1299
+{
1300
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1301
+ DWC2State *s = DWC2_USB(obj);
1302
+
1303
+ trace_usb_dwc2_reset_exit();
1304
+
1305
+ if (c->parent_phases.exit) {
1306
+ c->parent_phases.exit(obj);
1307
+ }
1308
+
1309
+ s->hprt0 = HPRT0_PWR;
1310
+ if (s->uport.dev && s->uport.dev->attached) {
1311
+ usb_attach(&s->uport);
1312
+ usb_device_reset(s->uport.dev);
1313
+ }
1314
+}
1315
+
1316
+static void dwc2_realize(DeviceState *dev, Error **errp)
1317
+{
1318
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
1319
+ DWC2State *s = DWC2_USB(dev);
1320
+ Object *obj;
1321
+ Error *err = NULL;
1322
+
1323
+ obj = object_property_get_link(OBJECT(dev), "dma-mr", &err);
1324
+ if (err) {
1325
+ error_setg(errp, "dwc2: required dma-mr link not found: %s",
1326
+ error_get_pretty(err));
1327
+ return;
1328
+ }
1329
+ assert(obj != NULL);
1330
+
1331
+ s->dma_mr = MEMORY_REGION(obj);
1332
+ address_space_init(&s->dma_as, s->dma_mr, "dwc2");
1333
+
1334
+ usb_bus_new(&s->bus, sizeof(s->bus), &dwc2_bus_ops, dev);
1335
+ usb_register_port(&s->bus, &s->uport, s, 0, &dwc2_port_ops,
1336
+ USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL |
1337
+ (s->usb_version == 2 ? USB_SPEED_MASK_HIGH : 0));
1338
+ s->uport.dev = 0;
1339
+
1340
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
1341
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
1342
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
1343
+ } else {
1344
+ s->usb_bit_time = 1;
1345
+ }
1346
+
1347
+ s->fi = USB_FRMINTVL - 1;
1348
+ s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
1349
+ s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
1350
+ s->async_bh = qemu_bh_new(dwc2_work_bh, s);
1351
+
1352
+ sysbus_init_irq(sbd, &s->irq);
1353
+}
1354
+
1355
+static void dwc2_init(Object *obj)
1356
+{
1357
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1358
+ DWC2State *s = DWC2_USB(obj);
1359
+
1360
+ memory_region_init(&s->container, obj, "dwc2", DWC2_MMIO_SIZE);
1361
+ sysbus_init_mmio(sbd, &s->container);
1362
+
1363
+ memory_region_init_io(&s->hsotg, obj, &dwc2_mmio_hsotg_ops, s,
1364
+ "dwc2-io", 4 * KiB);
1365
+ memory_region_add_subregion(&s->container, 0x0000, &s->hsotg);
1366
+
1367
+ memory_region_init_io(&s->fifos, obj, &dwc2_mmio_hreg2_ops, s,
1368
+ "dwc2-fifo", 64 * KiB);
1369
+ memory_region_add_subregion(&s->container, 0x1000, &s->fifos);
1370
+}
1371
+
1372
+static const VMStateDescription vmstate_dwc2_state_packet = {
1373
+ .name = "dwc2/packet",
1374
+ .version_id = 1,
1375
+ .minimum_version_id = 1,
1376
+ .fields = (VMStateField[]) {
1377
+ VMSTATE_UINT32(devadr, DWC2Packet),
1378
+ VMSTATE_UINT32(epnum, DWC2Packet),
1379
+ VMSTATE_UINT32(epdir, DWC2Packet),
1380
+ VMSTATE_UINT32(mps, DWC2Packet),
1381
+ VMSTATE_UINT32(pid, DWC2Packet),
1382
+ VMSTATE_UINT32(index, DWC2Packet),
1383
+ VMSTATE_UINT32(pcnt, DWC2Packet),
1384
+ VMSTATE_UINT32(len, DWC2Packet),
1385
+ VMSTATE_INT32(async, DWC2Packet),
1386
+ VMSTATE_BOOL(small, DWC2Packet),
1387
+ VMSTATE_BOOL(needs_service, DWC2Packet),
1388
+ VMSTATE_END_OF_LIST()
1389
+ },
1390
+};
1391
+
1392
+const VMStateDescription vmstate_dwc2_state = {
1393
+ .name = "dwc2",
1394
+ .version_id = 1,
1395
+ .minimum_version_id = 1,
1396
+ .fields = (VMStateField[]) {
1397
+ VMSTATE_UINT32_ARRAY(glbreg, DWC2State,
1398
+ DWC2_GLBREG_SIZE / sizeof(uint32_t)),
1399
+ VMSTATE_UINT32_ARRAY(fszreg, DWC2State,
1400
+ DWC2_FSZREG_SIZE / sizeof(uint32_t)),
1401
+ VMSTATE_UINT32_ARRAY(hreg0, DWC2State,
1402
+ DWC2_HREG0_SIZE / sizeof(uint32_t)),
1403
+ VMSTATE_UINT32_ARRAY(hreg1, DWC2State,
1404
+ DWC2_HREG1_SIZE / sizeof(uint32_t)),
1405
+ VMSTATE_UINT32_ARRAY(pcgreg, DWC2State,
1406
+ DWC2_PCGREG_SIZE / sizeof(uint32_t)),
1407
+
1408
+ VMSTATE_TIMER_PTR(eof_timer, DWC2State),
1409
+ VMSTATE_TIMER_PTR(frame_timer, DWC2State),
1410
+ VMSTATE_INT64(sof_time, DWC2State),
1411
+ VMSTATE_INT64(usb_frame_time, DWC2State),
1412
+ VMSTATE_INT64(usb_bit_time, DWC2State),
1413
+ VMSTATE_UINT32(usb_version, DWC2State),
1414
+ VMSTATE_UINT16(frame_number, DWC2State),
1415
+ VMSTATE_UINT16(fi, DWC2State),
1416
+ VMSTATE_UINT16(next_chan, DWC2State),
1417
+ VMSTATE_BOOL(working, DWC2State),
1418
+
1419
+ VMSTATE_STRUCT_ARRAY(packet, DWC2State, DWC2_NB_CHAN, 1,
1420
+ vmstate_dwc2_state_packet, DWC2Packet),
1421
+ VMSTATE_UINT8_2DARRAY(usb_buf, DWC2State, DWC2_NB_CHAN,
1422
+ DWC2_MAX_XFER_SIZE),
1423
+
1424
+ VMSTATE_END_OF_LIST()
1425
+ }
1426
+};
1427
+
1428
+static Property dwc2_usb_properties[] = {
1429
+ DEFINE_PROP_UINT32("usb_version", DWC2State, usb_version, 2),
1430
+ DEFINE_PROP_END_OF_LIST(),
1431
+};
1432
+
1433
+static void dwc2_class_init(ObjectClass *klass, void *data)
1434
+{
1435
+ DeviceClass *dc = DEVICE_CLASS(klass);
1436
+ DWC2Class *c = DWC2_CLASS(klass);
1437
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1438
+
1439
+ dc->realize = dwc2_realize;
1440
+ dc->vmsd = &vmstate_dwc2_state;
1441
+ set_bit(DEVICE_CATEGORY_USB, dc->categories);
1442
+ device_class_set_props(dc, dwc2_usb_properties);
1443
+ resettable_class_set_parent_phases(rc, dwc2_reset_enter, dwc2_reset_hold,
1444
+ dwc2_reset_exit, &c->parent_phases);
1445
+}
1446
+
1447
+static const TypeInfo dwc2_usb_type_info = {
1448
+ .name = TYPE_DWC2_USB,
1449
+ .parent = TYPE_SYS_BUS_DEVICE,
1450
+ .instance_size = sizeof(DWC2State),
1451
+ .instance_init = dwc2_init,
1452
+ .class_size = sizeof(DWC2Class),
1453
+ .class_init = dwc2_class_init,
1454
+};
1455
+
1456
+static void dwc2_usb_register_types(void)
1457
+{
1458
+ type_register_static(&dwc2_usb_type_info);
1459
+}
1460
+
1461
+type_init(dwc2_usb_register_types)
1462
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
1463
index XXXXXXX..XXXXXXX 100644
1464
--- a/hw/usb/Kconfig
1465
+++ b/hw/usb/Kconfig
1466
@@ -XXX,XX +XXX,XX @@ config USB_MUSB
1467
bool
1468
select USB
1469
1470
+config USB_DWC2
1471
+ bool
1472
+ default y
1473
+ select USB
1474
+
1475
config TUSB6010
1476
bool
1477
select USB_MUSB
1478
diff --git a/hw/usb/Makefile.objs b/hw/usb/Makefile.objs
1479
index XXXXXXX..XXXXXXX 100644
1480
--- a/hw/usb/Makefile.objs
1481
+++ b/hw/usb/Makefile.objs
1482
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_USB_EHCI_SYSBUS) += hcd-ehci-sysbus.o
1483
common-obj-$(CONFIG_USB_XHCI) += hcd-xhci.o
1484
common-obj-$(CONFIG_USB_XHCI_NEC) += hcd-xhci-nec.o
1485
common-obj-$(CONFIG_USB_MUSB) += hcd-musb.o
1486
+common-obj-$(CONFIG_USB_DWC2) += hcd-dwc2.o
1487
1488
common-obj-$(CONFIG_TUSB6010) += tusb6010.o
1489
common-obj-$(CONFIG_IMX) += chipidea.o
1490
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
1491
index XXXXXXX..XXXXXXX 100644
1492
--- a/hw/usb/trace-events
1493
+++ b/hw/usb/trace-events
1494
@@ -XXX,XX +XXX,XX @@ usb_xhci_xfer_error(void *xfer, uint32_t ret) "%p: ret %d"
1495
usb_xhci_unimplemented(const char *item, int nr) "%s (0x%x)"
1496
usb_xhci_enforced_limit(const char *item) "%s"
1497
1498
+# hcd-dwc2.c
1499
+usb_dwc2_update_irq(uint32_t level) "level=%d"
1500
+usb_dwc2_raise_global_irq(uint32_t intr) "0x%08x"
1501
+usb_dwc2_lower_global_irq(uint32_t intr) "0x%08x"
1502
+usb_dwc2_raise_host_irq(uint32_t intr) "0x%04x"
1503
+usb_dwc2_lower_host_irq(uint32_t intr) "0x%04x"
1504
+usb_dwc2_sof(int64_t next) "next SOF %" PRId64
1505
+usb_dwc2_bus_start(void) "start SOFs"
1506
+usb_dwc2_bus_stop(void) "stop SOFs"
1507
+usb_dwc2_find_device(uint8_t addr) "%d"
1508
+usb_dwc2_port_disabled(uint32_t pnum) "port %d disabled"
1509
+usb_dwc2_device_found(uint32_t pnum) "device found on port %d"
1510
+usb_dwc2_device_not_found(void) "device not found"
1511
+usb_dwc2_handle_packet(uint32_t chan, void *dev, void *pkt, uint32_t ep, const char *type, const char *dir, uint32_t mps, uint32_t len, uint32_t pcnt) "ch %d dev %p pkt %p ep %d type %s dir %s mps %d len %d pcnt %d"
1512
+usb_dwc2_memory_read(uint32_t addr, uint32_t len) "addr %d len %d"
1513
+usb_dwc2_packet_status(const char *status, uint32_t len) "status %s len %d"
1514
+usb_dwc2_packet_error(const char *status) "ERROR %s"
1515
+usb_dwc2_async_packet(void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "pkt %p ch %d dev %p ep %d %s len %d"
1516
+usb_dwc2_memory_write(uint32_t addr, uint32_t len) "addr %d len %d"
1517
+usb_dwc2_packet_done(const char *status, uint32_t actual, uint32_t len, uint32_t pcnt) "status %s actual %d len %d pcnt %d"
1518
+usb_dwc2_packet_next(const char *status, uint32_t len, uint32_t pcnt) "status %s len %d pcnt %d"
1519
+usb_dwc2_attach(void *port) "port %p"
1520
+usb_dwc2_attach_speed(const char *speed) "%s-speed device attached"
1521
+usb_dwc2_detach(void *port) "port %p"
1522
+usb_dwc2_child_detach(void *port, void *child) "port %p child %p"
1523
+usb_dwc2_wakeup(void *port) "port %p"
1524
+usb_dwc2_async_packet_complete(void *port, void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "port %p packet %p ch %d dev %p ep %d %s len %d"
1525
+usb_dwc2_work_bh(void) ""
1526
+usb_dwc2_work_bh_service(uint32_t first, uint32_t current, void *dev, uint32_t ep) "first %d servicing %d dev %p ep %d"
1527
+usb_dwc2_work_bh_next(uint32_t chan) "next %d"
1528
+usb_dwc2_enable_chan(uint32_t chan, void *dev, void *pkt, uint32_t ep) "ch %d dev %p pkt %p ep %d"
1529
+usb_dwc2_glbreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1530
+usb_dwc2_glbreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1531
+usb_dwc2_fszreg_read(uint64_t addr, uint32_t val) " 0x%04" PRIx64 " HPTXFSIZ val 0x%08x"
1532
+usb_dwc2_fszreg_write(uint64_t addr, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " HPTXFSIZ val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1533
+usb_dwc2_hreg0_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1534
+usb_dwc2_hreg0_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1535
+usb_dwc2_hreg1_read(uint64_t addr, const char *reg, uint64_t chan, uint32_t val) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08x"
1536
+usb_dwc2_hreg1_write(uint64_t addr, const char *reg, uint64_t chan, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1537
+usb_dwc2_pcgreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
1538
+usb_dwc2_pcgreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1539
+usb_dwc2_hreg2_read(uint64_t addr, uint64_t fifo, uint32_t val) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08x"
1540
+usb_dwc2_hreg2_write(uint64_t addr, uint64_t fifo, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
1541
+usb_dwc2_hreg0_action(const char *s) "%s"
1542
+usb_dwc2_wakeup_endpoint(void *ep, uint32_t stream) "endp %p stream %d"
1543
+usb_dwc2_work_timer(void) ""
1544
+usb_dwc2_reset_enter(void) "=== RESET enter ==="
1545
+usb_dwc2_reset_hold(void) "=== RESET hold ==="
1546
+usb_dwc2_reset_exit(void) "=== RESET exit ==="
1547
+
1548
# desc.c
1549
usb_desc_device(int addr, int len, int ret) "dev %d query device, len %d, ret %d"
1550
usb_desc_device_qualifier(int addr, int len, int ret) "dev %d query device qualifier, len %d, ret %d"
1551
--
2.20.1

--
2.25.1
1
From: Paul Zimmerman <pauldzim@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Wire the dwc-hsotg (dwc2) emulation into QEMU
3
The return type of the functions is already bool, but in a few
4
instances we used an integer type with the return statement.
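For orientation, the board-level wiring in the bcm2835_peripherals.c hunk below boils down to the following pattern (condensed sketch only, using the same names as the patch):

    /* Link the DMA address space, realize, map MMIO, wire the IRQ. */
    object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
                                   OBJECT(&s->gpu_bus_mr));
    object_property_set_bool(OBJECT(&s->dwc2), true, "realized", &err);
    memory_region_add_subregion(&s->peri_mr, USB_OTG_OFFSET,
                                sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dwc2), 0));
    sysbus_connect_irq(SYS_BUS_DEVICE(&s->dwc2), 0,
                       qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
                                              INTERRUPT_USB));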
4
5
5
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200520235349.21215-7-pauldzim@gmail.com
8
Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
include/hw/arm/bcm2835_peripherals.h | 3 ++-
11
target/arm/ptw.c | 7 +++----
11
hw/arm/bcm2835_peripherals.c | 21 ++++++++++++++++++++-
12
1 file changed, 3 insertions(+), 4 deletions(-)
12
2 files changed, 22 insertions(+), 2 deletions(-)
13
13
14
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/arm/bcm2835_peripherals.h
16
--- a/target/arm/ptw.c
17
+++ b/include/hw/arm/bcm2835_peripherals.h
17
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
19
#include "hw/sd/bcm2835_sdhost.h"
19
result->f.lg_page_size = TARGET_PAGE_BITS;
20
#include "hw/gpio/bcm2835_gpio.h"
20
result->cacheattrs.shareability = shareability;
21
#include "hw/timer/bcm2835_systmr.h"
21
result->cacheattrs.attrs = memattr;
22
+#include "hw/usb/hcd-dwc2.h"
22
- return 0;
23
#include "hw/misc/unimp.h"
23
+ return false;
24
25
#define TYPE_BCM2835_PERIPHERALS "bcm2835-peripherals"
26
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
27
UnimplementedDeviceState ave0;
28
UnimplementedDeviceState bscsl;
29
UnimplementedDeviceState smi;
30
- UnimplementedDeviceState dwc2;
31
+ DWC2State dwc2;
32
UnimplementedDeviceState sdramc;
33
} BCM2835PeripheralState;
34
35
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/arm/bcm2835_peripherals.c
38
+++ b/hw/arm/bcm2835_peripherals.c
39
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
40
/* Mphi */
41
sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
42
TYPE_BCM2835_MPHI);
43
+
44
+ /* DWC2 */
45
+ sysbus_init_child_obj(obj, "dwc2", &s->dwc2, sizeof(s->dwc2),
46
+ TYPE_DWC2_USB);
47
+
48
+ object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
49
+ OBJECT(&s->gpu_bus_mr));
50
}
24
}
51
25
52
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
26
static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
53
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
27
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
54
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
28
{
55
INTERRUPT_HOSTPORT));
29
hwaddr ipa;
56
30
int s1_prot;
57
+ /* DWC2 */
31
- int ret;
58
+ object_property_set_bool(OBJECT(&s->dwc2), true, "realized", &err);
32
bool is_secure = ptw->in_secure;
59
+ if (err) {
33
- bool ipa_secure, s2walk_secure;
60
+ error_propagate(errp, err);
34
+ bool ret, ipa_secure, s2walk_secure;
61
+ return;
35
ARMCacheAttrs cacheattrs1;
62
+ }
36
bool is_el0;
63
+
37
uint64_t hcr;
64
+ memory_region_add_subregion(&s->peri_mr, USB_OTG_OFFSET,
38
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
65
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dwc2), 0));
39
&& (ipa_secure
66
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->dwc2), 0,
40
|| !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
67
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
41
68
+ INTERRUPT_USB));
42
- return 0;
69
+
43
+ return false;
70
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
71
create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
72
create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
73
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
74
create_unimp(s, &s->otp, "bcm2835-otp", OTP_OFFSET, 0x80);
75
create_unimp(s, &s->dbus, "bcm2835-dbus", DBUS_OFFSET, 0x8000);
76
create_unimp(s, &s->ave0, "bcm2835-ave0", AVE0_OFFSET, 0x8000);
77
- create_unimp(s, &s->dwc2, "dwc-usb2", USB_OTG_OFFSET, 0x1000);
78
create_unimp(s, &s->sdramc, "bcm2835-sdramc", SDRAMC_OFFSET, 0x100);
79
}
44
}
80
45
46
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
81
--
47
--
82
2.20.1
48
2.25.1
83
84
1
Convert the VSHR 2-reg-shift insns to decodetree.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Note that unlike the legacy decoder, we present the right shift
3
A simple helper to retrieve the length of the current insn.
4
amount to the trans_ function as a positive integer.
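The shift-amount handling is easiest to see with a worked example (illustration only; rsub_64() is the decodetree extract helper this patch adds, with rsub_32/16/8 following the same pattern):

    /* A 64-bit VSHR encodes "shift right by N" as imm6 = 64 - N,
     * so the %neon_rshift_i6 !function hook just reverses that:
     */
    static inline int rsub_64(DisasContext *s, int x)
    {
        return 64 - x;    /* e.g. imm6 = 62 -> shift = 2 */
    }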
5
4
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200522145520.6778-3-peter.maydell@linaro.org
9
---
9
---
10
target/arm/neon-dp.decode | 25 ++++++++++++++++++++
10
target/arm/translate.h | 5 +++++
11
target/arm/translate-neon.inc.c | 41 +++++++++++++++++++++++++++++++++
11
target/arm/translate-vfp.c | 2 +-
12
target/arm/translate.c | 21 +----------------
12
target/arm/translate.c | 5 ++---
13
3 files changed, 67 insertions(+), 20 deletions(-)
13
3 files changed, 8 insertions(+), 4 deletions(-)
14
14
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
17
--- a/target/arm/translate.h
18
+++ b/target/arm/neon-dp.decode
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
19
@@ -XXX,XX +XXX,XX @@ static inline void disas_set_insn_syndrome(DisasContext *s, uint32_t syn)
20
######################################################################
20
s->insn_start = NULL;
21
&2reg_shift vm vd q shift size
22
23
+# Right shifts are encoded as N - shift, where N is the element size in bits.
24
+%neon_rshift_i6 16:6 !function=rsub_64
25
+%neon_rshift_i5 16:5 !function=rsub_32
26
+%neon_rshift_i4 16:4 !function=rsub_16
27
+%neon_rshift_i3 16:3 !function=rsub_8
28
+
29
+@2reg_shr_d .... ... . . . ...... .... .... 1 q:1 . . .... \
30
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3 shift=%neon_rshift_i6
31
+@2reg_shr_s .... ... . . . 1 ..... .... .... 0 q:1 . . .... \
32
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 shift=%neon_rshift_i5
33
+@2reg_shr_h .... ... . . . 01 .... .... .... 0 q:1 . . .... \
34
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 shift=%neon_rshift_i4
35
+@2reg_shr_b .... ... . . . 001 ... .... .... 0 q:1 . . .... \
36
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i3
37
+
38
@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
39
&2reg_shift vm=%vm_dp vd=%vd_dp size=3
40
@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
41
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
42
@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
43
&2reg_shift vm=%vm_dp vd=%vd_dp size=0
44
45
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
46
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
47
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
48
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
49
+
50
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
51
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
52
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
53
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
54
+
55
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
56
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
57
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
58
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.inc.c
61
+++ b/target/arm/translate-neon.inc.c
62
@@ -XXX,XX +XXX,XX @@ static inline int plus1(DisasContext *s, int x)
63
return x + 1;
64
}
21
}
65
22
66
+static inline int rsub_64(DisasContext *s, int x)
23
+static inline int curr_insn_len(DisasContext *s)
67
+{
24
+{
68
+ return 64 - x;
25
+ return s->base.pc_next - s->pc_curr;
69
+}
26
+}
70
+
27
+
71
+static inline int rsub_32(DisasContext *s, int x)
28
/* is_jmp field values */
72
+{
29
#define DISAS_JUMP DISAS_TARGET_0 /* only pc was modified dynamically */
73
+ return 32 - x;
30
/* CPU state was modified dynamically; exit to main loop for interrupts. */
74
+}
31
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
75
+static inline int rsub_16(DisasContext *s, int x)
32
index XXXXXXX..XXXXXXX 100644
76
+{
33
--- a/target/arm/translate-vfp.c
77
+ return 16 - x;
34
+++ b/target/arm/translate-vfp.c
78
+}
35
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
79
+static inline int rsub_8(DisasContext *s, int x)
36
if (s->sme_trap_nonstreaming) {
80
+{
37
gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
81
+ return 8 - x;
38
syn_smetrap(SME_ET_Streaming,
82
+}
39
- s->base.pc_next - s->pc_curr == 2));
83
+
40
+ curr_insn_len(s) == 2));
84
/* Include the generated Neon decoder */
41
return false;
85
#include "decode-neon-dp.inc.c"
42
}
86
#include "decode-neon-ls.inc.c"
43
87
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
88
89
DO_2SH(VSHL, tcg_gen_gvec_shli)
90
DO_2SH(VSLI, gen_gvec_sli)
91
+
92
+static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
93
+{
94
+ /* Signed shift out of range results in all-sign-bits */
95
+ a->shift = MIN(a->shift, (8 << a->size) - 1);
96
+ return do_vector_2sh(s, a, tcg_gen_gvec_sari);
97
+}
98
+
99
+static void gen_zero_rd_2sh(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
100
+ int64_t shift, uint32_t oprsz, uint32_t maxsz)
101
+{
102
+ tcg_gen_gvec_dup_imm(vece, rd_ofs, oprsz, maxsz, 0);
103
+}
104
+
105
+static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
106
+{
107
+ /* Shift out of range is architecturally valid and results in zero. */
108
+ if (a->shift >= (8 << a->size)) {
109
+ return do_vector_2sh(s, a, gen_zero_rd_2sh);
110
+ } else {
111
+ return do_vector_2sh(s, a, tcg_gen_gvec_shri);
112
+ }
113
+}
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
44
diff --git a/target/arm/translate.c b/target/arm/translate.c
115
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate.c
46
--- a/target/arm/translate.c
117
+++ b/target/arm/translate.c
47
+++ b/target/arm/translate.c
118
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
48
@@ -XXX,XX +XXX,XX @@ static ISSInfo make_issinfo(DisasContext *s, int rd, bool p, bool w)
119
op = (insn >> 8) & 0xf;
49
/* ISS not valid if writeback */
120
50
if (p && !w) {
121
switch (op) {
51
ret = rd;
122
+ case 0: /* VSHR */
52
- if (s->base.pc_next - s->pc_curr == 2) {
123
case 5: /* VSHL, VSLI */
53
+ if (curr_insn_len(s) == 2) {
124
return 1; /* handled by decodetree */
54
ret |= ISSIs16Bit;
125
default:
55
}
126
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
56
} else {
127
}
57
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
128
58
/* nothing more to generate */
129
switch (op) {
59
break;
130
- case 0: /* VSHR */
60
case DISAS_WFI:
131
- /* Right shift comes here negative. */
61
- gen_helper_wfi(cpu_env,
132
- shift = -shift;
62
- tcg_constant_i32(dc->base.pc_next - dc->pc_curr));
133
- /* Shifts larger than the element size are architecturally
63
+ gen_helper_wfi(cpu_env, tcg_constant_i32(curr_insn_len(dc)));
134
- * valid. Unsigned results in all zeros; signed results
64
/*
135
- * in all sign bits.
65
* The helper doesn't necessarily throw an exception, but we
136
- */
66
* must go back to the main loop to check for interrupts anyway.
137
- if (!u) {
138
- tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
139
- MIN(shift, (8 << size) - 1),
140
- vec_size, vec_size);
141
- } else if (shift >= 8 << size) {
142
- tcg_gen_gvec_dup_imm(MO_8, rd_ofs, vec_size,
143
- vec_size, 0);
144
- } else {
145
- tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
146
- vec_size, vec_size);
147
- }
148
- return 0;
149
-
150
case 1: /* VSRA */
151
/* Right shift comes here negative. */
152
shift = -shift;
153
--
67
--
154
2.20.1
68
2.25.1
155
69
156
70
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
With this conversion, we will be able to use the same helpers
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
with SVE. This also fixes a bug in which we failed to clear
5
the high bits of the SVE register after an AdvSIMD operation.
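Per 64-bit lane the RAX1 operation being moved to gvec is simply n ^ rol64(m, 1); the new crypto_rax1 helper is essentially (minimal sketch):

    for (i = 0; i < opr_sz / 8; ++i) {
        d[i] = n[i] ^ rol64(m[i], 1);
    }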
6
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-3-richard.henderson@linaro.org
7
Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
target/arm/helper.h | 2 ++
10
target/arm/translate-a64.c | 40 ++++++++++++++++++++------------------
13
target/arm/translate-a64.h | 3 ++
11
target/arm/translate.c | 10 ++++++----
14
target/arm/crypto_helper.c | 11 +++++++
12
2 files changed, 27 insertions(+), 23 deletions(-)
15
target/arm/translate-a64.c | 59 ++++++++++++++++++++------------------
16
4 files changed, 47 insertions(+), 28 deletions(-)
17
13
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.h
21
+++ b/target/arm/helper.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
23
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
26
+DEF_HELPER_FLAGS_4(crypto_rax1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+
28
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
29
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
30
31
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-a64.h
34
+++ b/target/arm/translate-a64.h
35
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
36
37
bool disas_sve(DisasContext *, uint32_t);
38
39
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
40
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
41
+
42
#endif /* TARGET_ARM_TRANSLATE_A64_H */
43
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/crypto_helper.c
46
+++ b/target/arm/crypto_helper.c
47
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
48
}
49
clear_tail(vd, opr_sz, simd_maxsz(desc));
50
}
51
+
52
+void HELPER(crypto_rax1)(void *vd, void *vn, void *vm, uint32_t desc)
53
+{
54
+ intptr_t i, opr_sz = simd_oprsz(desc);
55
+ uint64_t *d = vd, *n = vn, *m = vm;
56
+
57
+ for (i = 0; i < opr_sz / 8; ++i) {
58
+ d[i] = n[i] ^ rol64(m[i], 1);
59
+ }
60
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
61
+}
62
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
63
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/translate-a64.c
16
--- a/target/arm/translate-a64.c
65
+++ b/target/arm/translate-a64.c
17
+++ b/target/arm/translate-a64.c
66
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
67
tcg_temp_free_ptr(tcg_rn_ptr);
19
return translator_use_goto_tb(&s->base, dest);
68
}
20
}
69
21
70
+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
22
-static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
71
+{
23
+static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
72
+ tcg_gen_rotli_i64(d, m, 1);
24
{
73
+ tcg_gen_xor_i64(d, d, n);
25
+ uint64_t dest = s->pc_curr + diff;
74
+}
75
+
26
+
76
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
27
if (use_goto_tb(s, dest)) {
77
+{
28
tcg_gen_goto_tb(n);
78
+ tcg_gen_rotli_vec(vece, d, m, 1);
29
gen_a64_set_pc_im(dest);
79
+ tcg_gen_xor_vec(vece, d, d, n);
30
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
80
+}
31
*/
81
+
32
static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
82
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
33
{
83
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
34
- uint64_t addr = s->pc_curr + sextract32(insn, 0, 26) * 4;
84
+{
35
+ int64_t diff = sextract32(insn, 0, 26) * 4;
85
+ static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
36
86
+ static const GVecGen3 op = {
37
if (insn & (1U << 31)) {
87
+ .fni8 = gen_rax1_i64,
38
/* BL Branch with link */
88
+ .fniv = gen_rax1_vec,
39
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
89
+ .opt_opc = vecop_list,
40
90
+ .fno = gen_helper_crypto_rax1,
41
/* B Branch / BL Branch with link */
91
+ .vece = MO_64,
42
reset_btype(s);
92
+ };
43
- gen_goto_tb(s, 0, addr);
93
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
44
+ gen_goto_tb(s, 0, diff);
94
+}
45
}
95
+
46
96
/* Crypto three-reg SHA512
47
/* Compare and branch (immediate)
97
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
48
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
98
* +-----------------------+------+---+---+-----+--------+------+------+
49
static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
99
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
50
{
100
bool feature;
51
unsigned int sf, op, rt;
101
CryptoThreeOpFn *genfn = NULL;
52
- uint64_t addr;
102
gen_helper_gvec_3 *oolfn = NULL;
53
+ int64_t diff;
103
+ GVecGen3Fn *gvecfn = NULL;
54
TCGLabel *label_match;
104
55
TCGv_i64 tcg_cmp;
105
if (o == 0) {
56
106
switch (opcode) {
57
sf = extract32(insn, 31, 1);
107
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
58
op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
108
break;
59
rt = extract32(insn, 0, 5);
109
case 3: /* RAX1 */
60
- addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
110
feature = dc_isar_feature(aa64_sha3, s);
61
+ diff = sextract32(insn, 5, 19) * 4;
111
- genfn = NULL;
62
112
+ gvecfn = gen_gvec_rax1;
63
tcg_cmp = read_cpu_reg(s, rt, sf);
64
label_match = gen_new_label();
65
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
66
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
67
tcg_cmp, 0, label_match);
68
69
- gen_goto_tb(s, 0, s->base.pc_next);
70
+ gen_goto_tb(s, 0, 4);
71
gen_set_label(label_match);
72
- gen_goto_tb(s, 1, addr);
73
+ gen_goto_tb(s, 1, diff);
74
}
75
76
/* Test and branch (immediate)
77
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
78
static void disas_test_b_imm(DisasContext *s, uint32_t insn)
79
{
80
unsigned int bit_pos, op, rt;
81
- uint64_t addr;
82
+ int64_t diff;
83
TCGLabel *label_match;
84
TCGv_i64 tcg_cmp;
85
86
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
87
op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
88
- addr = s->pc_curr + sextract32(insn, 5, 14) * 4;
89
+ diff = sextract32(insn, 5, 14) * 4;
90
rt = extract32(insn, 0, 5);
91
92
tcg_cmp = tcg_temp_new_i64();
93
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
94
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
95
tcg_cmp, 0, label_match);
96
tcg_temp_free_i64(tcg_cmp);
97
- gen_goto_tb(s, 0, s->base.pc_next);
98
+ gen_goto_tb(s, 0, 4);
99
gen_set_label(label_match);
100
- gen_goto_tb(s, 1, addr);
101
+ gen_goto_tb(s, 1, diff);
102
}
103
104
/* Conditional branch (immediate)
105
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
106
static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
107
{
108
unsigned int cond;
109
- uint64_t addr;
110
+ int64_t diff;
111
112
if ((insn & (1 << 4)) || (insn & (1 << 24))) {
113
unallocated_encoding(s);
114
return;
115
}
116
- addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
117
+ diff = sextract32(insn, 5, 19) * 4;
118
cond = extract32(insn, 0, 4);
119
120
reset_btype(s);
121
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
122
/* genuinely conditional branches */
123
TCGLabel *label_match = gen_new_label();
124
arm_gen_test_cc(cond, label_match);
125
- gen_goto_tb(s, 0, s->base.pc_next);
126
+ gen_goto_tb(s, 0, 4);
127
gen_set_label(label_match);
128
- gen_goto_tb(s, 1, addr);
129
+ gen_goto_tb(s, 1, diff);
130
} else {
131
/* 0xe and 0xf are both "always" conditions */
132
- gen_goto_tb(s, 0, addr);
133
+ gen_goto_tb(s, 0, diff);
134
}
135
}
136
137
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
138
* any pending interrupts immediately.
139
*/
140
reset_btype(s);
141
- gen_goto_tb(s, 0, s->base.pc_next);
142
+ gen_goto_tb(s, 0, 4);
143
return;
144
145
case 7: /* SB */
146
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
147
* MB and end the TB instead.
148
*/
149
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
150
- gen_goto_tb(s, 0, s->base.pc_next);
151
+ gen_goto_tb(s, 0, 4);
152
return;
153
154
default:
155
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
156
switch (dc->base.is_jmp) {
157
case DISAS_NEXT:
158
case DISAS_TOO_MANY:
159
- gen_goto_tb(dc, 1, dc->base.pc_next);
160
+ gen_goto_tb(dc, 1, 4);
113
break;
161
break;
114
default:
162
default:
115
g_assert_not_reached();
163
case DISAS_UPDATE_EXIT:
116
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
164
diff --git a/target/arm/translate.c b/target/arm/translate.c
117
165
index XXXXXXX..XXXXXXX 100644
118
if (oolfn) {
166
--- a/target/arm/translate.c
119
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
167
+++ b/target/arm/translate.c
120
- return;
168
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
121
- }
169
* cpu_loop_exec. Any live exit_requests will be processed as we
122
-
170
* enter the next TB.
123
- if (genfn) {
171
*/
124
+ } else if (gvecfn) {
172
-static void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
125
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
173
+static void gen_goto_tb(DisasContext *s, int n, int diff)
126
+ } else {
174
{
127
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
175
+ target_ulong dest = s->pc_curr + diff;
128
176
+
129
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
177
if (translator_use_goto_tb(&s->base, dest)) {
130
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
178
tcg_gen_goto_tb(n);
131
tcg_temp_free_ptr(tcg_rd_ptr);
179
gen_set_pc_im(s, dest);
132
tcg_temp_free_ptr(tcg_rn_ptr);
180
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
133
tcg_temp_free_ptr(tcg_rm_ptr);
181
* gen_jmp();
134
- } else {
182
* on the second call to gen_jmp().
135
- TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
183
*/
136
- int pass;
184
- gen_goto_tb(s, tbno, dest);
137
-
185
+ gen_goto_tb(s, tbno, dest - s->pc_curr);
138
- tcg_op1 = tcg_temp_new_i64();
186
break;
139
- tcg_op2 = tcg_temp_new_i64();
187
case DISAS_UPDATE_NOCHAIN:
140
- tcg_res[0] = tcg_temp_new_i64();
188
case DISAS_UPDATE_EXIT:
141
- tcg_res[1] = tcg_temp_new_i64();
189
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
142
-
190
switch (dc->base.is_jmp) {
143
- for (pass = 0; pass < 2; pass++) {
191
case DISAS_NEXT:
144
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
192
case DISAS_TOO_MANY:
145
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
193
- gen_goto_tb(dc, 1, dc->base.pc_next);
146
-
194
+ gen_goto_tb(dc, 1, curr_insn_len(dc));
147
- tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1);
195
break;
148
- tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
196
case DISAS_UPDATE_NOCHAIN:
149
- }
197
gen_set_pc_im(dc, dc->base.pc_next);
150
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
198
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
151
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
199
gen_set_pc_im(dc, dc->base.pc_next);
152
-
200
gen_singlestep_exception(dc);
153
- tcg_temp_free_i64(tcg_op1);
201
} else {
154
- tcg_temp_free_i64(tcg_op2);
202
- gen_goto_tb(dc, 1, dc->base.pc_next);
155
- tcg_temp_free_i64(tcg_res[0]);
203
+ gen_goto_tb(dc, 1, curr_insn_len(dc));
156
- tcg_temp_free_i64(tcg_res[1]);
204
}
157
}
205
}
158
}
206
}
159
160
--
207
--
161
2.20.1
208
2.25.1
162
163
1
Convert the insns in the one-register-and-immediate group to decodetree.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In the new decode, our asimd_imm_const() function returns a 64-bit value
3
In preparation for TARGET_TB_PCREL, reduce reliance on
4
rather than a 32-bit one, which means we don't need to treat cmode=14 op=1
4
absolute values by passing in pc difference.
5
as a special case in the decoder (it is the only encoding where the two
6
halves of the 64-bit value are different).
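Concretely, the cmode=14 op=1 case replicates each bit of the 8-bit immediate into a whole byte of the 64-bit result, e.g. imm=0x81 expands to 0xff000000000000ff (sketch of the corresponding branch in the new asimd_imm_const()):

    uint64_t imm64 = 0;
    for (n = 0; n < 8; n++) {
        if (imm & (1 << n)) {
            imm64 |= 0xffULL << (n * 8);
        }
    }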
7
5
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20200522145520.6778-10-peter.maydell@linaro.org
11
---
10
---
12
target/arm/neon-dp.decode | 22 ++++++
11
target/arm/translate-a32.h | 2 +-
13
target/arm/translate-neon.inc.c | 118 ++++++++++++++++++++++++++++++++
12
target/arm/translate.h | 6 ++--
14
target/arm/translate.c | 101 +--------------------------
13
target/arm/translate-a64.c | 32 +++++++++---------
15
3 files changed, 142 insertions(+), 99 deletions(-)
14
target/arm/translate-vfp.c | 2 +-
15
target/arm/translate.c | 68 ++++++++++++++++++++------------------
16
5 files changed, 56 insertions(+), 54 deletions(-)
16
17
17
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
18
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
18
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/neon-dp.decode
20
--- a/target/arm/translate-a32.h
20
+++ b/target/arm/neon-dp.decode
21
+++ b/target/arm/translate-a32.h
21
@@ -XXX,XX +XXX,XX @@ VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
22
@@ -XXX,XX +XXX,XX @@ void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop);
22
VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
23
TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs);
23
VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
24
void gen_set_cpsr(TCGv_i32 var, uint32_t mask);
24
VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
25
void gen_set_condexec(DisasContext *s);
25
+
26
-void gen_set_pc_im(DisasContext *s, target_ulong val);
26
+######################################################################
27
+void gen_update_pc(DisasContext *s, target_long diff);
27
+# 1-reg-and-modified-immediate grouping:
28
void gen_lookup_tb(DisasContext *s);
28
+# 1111 001 i 1 D 000 imm:3 Vd:4 cmode:4 0 Q op 1 Vm:4
29
long vfp_reg_offset(bool dp, unsigned reg);
29
+######################################################################
30
long neon_full_reg_offset(unsigned reg);
30
+
31
diff --git a/target/arm/translate.h b/target/arm/translate.h
31
+&1reg_imm vd q imm cmode op
32
+
33
+%asimd_imm_value 24:1 16:3 0:4
34
+
35
+@1reg_imm .... ... . . . ... ... .... .... . q:1 . . .... \
36
+ &1reg_imm imm=%asimd_imm_value vd=%vd_dp
37
+
38
+# The cmode/op bits here decode VORR/VBIC/VMOV/VMNV, but
39
+# not in a way we can conveniently represent in decodetree without
40
+# a lot of repetition:
41
+# VORR: op=0, (cmode & 1) && cmode < 12
42
+# VBIC: op=1, (cmode & 1) && cmode < 12
43
+# VMOV: everything else
44
+# So we have a single decode line and check the cmode/op in the
45
+# trans function.
46
+Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
47
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
48
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/translate-neon.inc.c
33
--- a/target/arm/translate.h
50
+++ b/target/arm/translate-neon.inc.c
34
+++ b/target/arm/translate.h
51
@@ -XXX,XX +XXX,XX @@ DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
35
@@ -XXX,XX +XXX,XX @@ static inline int curr_insn_len(DisasContext *s)
52
DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
36
* For instructions which want an immediate exit to the main loop, as opposed
53
DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
37
* to attempting to use lookup_and_goto_ptr. Unlike DISAS_UPDATE_EXIT, this
54
DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
38
* doesn't write the PC on exiting the translation loop so you need to ensure
55
+
39
- * something (gen_a64_set_pc_im or runtime helper) has done so before we reach
56
+static uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
40
+ * something (gen_a64_update_pc or runtime helper) has done so before we reach
57
+{
41
* return from cpu_tb_exec.
58
+ /*
42
*/
59
+ * Expand the encoded constant.
43
#define DISAS_EXIT DISAS_TARGET_9
60
+ * Note that cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
44
@@ -XXX,XX +XXX,XX @@ static inline int curr_insn_len(DisasContext *s)
61
+ * We choose to not special-case this and will behave as if a
45
62
+ * valid constant encoding of 0 had been given.
46
#ifdef TARGET_AARCH64
63
+ * cmode = 15 op = 1 must UNDEF; we assume decode has handled that.
47
void a64_translate_init(void);
64
+ */
48
-void gen_a64_set_pc_im(uint64_t val);
65
+ switch (cmode) {
49
+void gen_a64_update_pc(DisasContext *s, target_long diff);
66
+ case 0: case 1:
50
extern const TranslatorOps aarch64_translator_ops;
67
+ /* no-op */
51
#else
68
+ break;
52
static inline void a64_translate_init(void)
69
+ case 2: case 3:
53
{
70
+ imm <<= 8;
54
}
71
+ break;
55
72
+ case 4: case 5:
56
-static inline void gen_a64_set_pc_im(uint64_t val)
73
+ imm <<= 16;
57
+static inline void gen_a64_update_pc(DisasContext *s, target_long diff)
74
+ break;
58
{
75
+ case 6: case 7:
59
}
76
+ imm <<= 24;
60
#endif
77
+ break;
61
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
78
+ case 8: case 9:
62
index XXXXXXX..XXXXXXX 100644
79
+ imm |= imm << 16;
63
--- a/target/arm/translate-a64.c
80
+ break;
64
+++ b/target/arm/translate-a64.c
81
+ case 10: case 11:
65
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
82
+ imm = (imm << 8) | (imm << 24);
66
}
83
+ break;
67
}
84
+ case 12:
68
85
+ imm = (imm << 8) | 0xff;
69
-void gen_a64_set_pc_im(uint64_t val)
86
+ break;
70
+void gen_a64_update_pc(DisasContext *s, target_long diff)
87
+ case 13:
71
{
88
+ imm = (imm << 16) | 0xffff;
72
- tcg_gen_movi_i64(cpu_pc, val);
89
+ break;
73
+ tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
90
+ case 14:
74
}
91
+ if (op) {
75
92
+ /*
76
/*
93
+ * This is the only case where the top and bottom 32 bits
77
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
94
+ * of the encoded constant differ.
78
95
+ */
79
static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
96
+ uint64_t imm64 = 0;
80
{
97
+ int n;
81
- gen_a64_set_pc_im(pc);
98
+
82
+ gen_a64_update_pc(s, pc - s->pc_curr);
99
+ for (n = 0; n < 8; n++) {
83
gen_exception_internal(excp);
100
+ if (imm & (1 << n)) {
84
s->base.is_jmp = DISAS_NORETURN;
101
+ imm64 |= (0xffULL << (n * 8));
85
}
102
+ }
86
103
+ }
87
static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syndrome)
104
+ return imm64;
88
{
105
+ }
89
- gen_a64_set_pc_im(s->pc_curr);
106
+ imm |= (imm << 8) | (imm << 16) | (imm << 24);
90
+ gen_a64_update_pc(s, 0);
107
+ break;
91
gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syndrome));
108
+ case 15:
92
s->base.is_jmp = DISAS_NORETURN;
109
+ imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
93
}
110
+ | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
94
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
111
+ break;
95
112
+ }
96
if (use_goto_tb(s, dest)) {
113
+ if (op) {
97
tcg_gen_goto_tb(n);
114
+ imm = ~imm;
98
- gen_a64_set_pc_im(dest);
115
+ }
99
+ gen_a64_update_pc(s, diff);
116
+ return dup_const(MO_32, imm);
100
tcg_gen_exit_tb(s->base.tb, n);
117
+}
101
s->base.is_jmp = DISAS_NORETURN;
118
+
102
} else {
119
+static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
103
- gen_a64_set_pc_im(dest);
120
+ GVecGen2iFn *fn)
104
+ gen_a64_update_pc(s, diff);
121
+{
105
if (s->ss_active) {
122
+ uint64_t imm;
106
gen_step_complete_exception(s);
123
+ int reg_ofs, vec_size;
107
} else {
124
+
108
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
125
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
109
uint32_t syndrome;
126
+ return false;
110
127
+ }
111
syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
128
+
112
- gen_a64_set_pc_im(s->pc_curr);
129
+ /* UNDEF accesses to D16-D31 if they don't exist. */
113
+ gen_a64_update_pc(s, 0);
130
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
114
gen_helper_access_check_cp_reg(cpu_env,
131
+ return false;
115
tcg_constant_ptr(ri),
132
+ }
116
tcg_constant_i32(syndrome),
133
+
117
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
134
+ if (a->vd & a->q) {
118
* The readfn or writefn might raise an exception;
135
+ return false;
119
* synchronize the CPU state in case it does.
136
+ }
120
*/
137
+
121
- gen_a64_set_pc_im(s->pc_curr);
138
+ if (!vfp_access_check(s)) {
122
+ gen_a64_update_pc(s, 0);
139
+ return true;
123
}
140
+ }
124
141
+
125
/* Handle special cases first */
142
+ reg_ofs = neon_reg_offset(a->vd, 0);
126
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
143
+ vec_size = a->q ? 16 : 8;
127
/* The pre HVC helper handles cases when HVC gets trapped
144
+ imm = asimd_imm_const(a->imm, a->cmode, a->op);
128
* as an undefined insn by runtime configuration.
145
+
129
*/
146
+ fn(MO_64, reg_ofs, reg_ofs, imm, vec_size, vec_size);
130
- gen_a64_set_pc_im(s->pc_curr);
147
+ return true;
131
+ gen_a64_update_pc(s, 0);
148
+}
132
gen_helper_pre_hvc(cpu_env);
149
+
133
gen_ss_advance(s);
150
+static void gen_VMOV_1r(unsigned vece, uint32_t dofs, uint32_t aofs,
134
gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
151
+ int64_t c, uint32_t oprsz, uint32_t maxsz)
135
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
152
+{
136
unallocated_encoding(s);
153
+ tcg_gen_gvec_dup_imm(MO_64, dofs, oprsz, maxsz, c);
137
break;
154
+}
138
}
155
+
139
- gen_a64_set_pc_im(s->pc_curr);
156
+static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
140
+ gen_a64_update_pc(s, 0);
157
+{
141
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
158
+ /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
142
gen_ss_advance(s);
159
+ GVecGen2iFn *fn;
143
gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
160
+
144
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
161
+ if ((a->cmode & 1) && a->cmode < 12) {
145
*/
162
+ /* for op=1, the imm will be inverted, so BIC becomes AND. */
146
switch (dc->base.is_jmp) {
163
+ fn = a->op ? tcg_gen_gvec_andi : tcg_gen_gvec_ori;
147
default:
164
+ } else {
148
- gen_a64_set_pc_im(dc->base.pc_next);
165
+ /* There is one unallocated cmode/op combination in this space */
149
+ gen_a64_update_pc(dc, 4);
166
+ if (a->cmode == 15 && a->op == 1) {
150
/* fall through */
167
+ return false;
151
case DISAS_EXIT:
168
+ }
152
case DISAS_JUMP:
169
+ fn = gen_VMOV_1r;
153
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
170
+ }
154
break;
171
+ return do_1reg_imm(s, a, fn);
155
default:
172
+}
156
case DISAS_UPDATE_EXIT:
157
- gen_a64_set_pc_im(dc->base.pc_next);
158
+ gen_a64_update_pc(dc, 4);
159
/* fall through */
160
case DISAS_EXIT:
161
tcg_gen_exit_tb(NULL, 0);
162
break;
163
case DISAS_UPDATE_NOCHAIN:
164
- gen_a64_set_pc_im(dc->base.pc_next);
165
+ gen_a64_update_pc(dc, 4);
166
/* fall through */
167
case DISAS_JUMP:
168
tcg_gen_lookup_and_goto_ptr();
169
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
170
case DISAS_SWI:
171
break;
172
case DISAS_WFE:
173
- gen_a64_set_pc_im(dc->base.pc_next);
174
+ gen_a64_update_pc(dc, 4);
175
gen_helper_wfe(cpu_env);
176
break;
177
case DISAS_YIELD:
178
- gen_a64_set_pc_im(dc->base.pc_next);
179
+ gen_a64_update_pc(dc, 4);
180
gen_helper_yield(cpu_env);
181
break;
182
case DISAS_WFI:
183
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
184
* This is a special case because we don't want to just halt
185
* the CPU if trying to debug across a WFI.
186
*/
187
- gen_a64_set_pc_im(dc->base.pc_next);
188
+ gen_a64_update_pc(dc, 4);
189
gen_helper_wfi(cpu_env, tcg_constant_i32(4));
190
/*
191
* The helper doesn't necessarily throw an exception, but we
192
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
193
index XXXXXXX..XXXXXXX 100644
194
--- a/target/arm/translate-vfp.c
195
+++ b/target/arm/translate-vfp.c
196
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
197
case ARM_VFP_FPSID:
198
if (s->current_el == 1) {
199
gen_set_condexec(s);
200
- gen_set_pc_im(s, s->pc_curr);
201
+ gen_update_pc(s, 0);
202
gen_helper_check_hcr_el2_trap(cpu_env,
203
tcg_constant_i32(a->rt),
204
tcg_constant_i32(a->reg));
173
diff --git a/target/arm/translate.c b/target/arm/translate.c
205
diff --git a/target/arm/translate.c b/target/arm/translate.c
174
index XXXXXXX..XXXXXXX 100644
206
index XXXXXXX..XXXXXXX 100644
175
--- a/target/arm/translate.c
207
--- a/target/arm/translate.c
176
+++ b/target/arm/translate.c
208
+++ b/target/arm/translate.c
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
209
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)
178
/* Three register same length: handled by decodetree */
210
}
179
return 1;
211
}
180
} else if (insn & (1 << 4)) {
212
181
- if ((insn & 0x00380080) != 0) {
213
-void gen_set_pc_im(DisasContext *s, target_ulong val)
182
- /* Two registers and shift: handled by decodetree */
214
+void gen_update_pc(DisasContext *s, target_long diff)
183
- return 1;
215
{
184
- } else { /* (insn & 0x00380080) == 0 */
216
- tcg_gen_movi_i32(cpu_R[15], val);
185
- int invert, reg_ofs, vec_size;
217
+ tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
186
-
218
}
187
- if (q && (rd & 1)) {
219
188
- return 1;
220
/* Set PC and Thumb state from var. var is marked as dead. */
189
- }
221
@@ -XXX,XX +XXX,XX @@ static inline void gen_bxns(DisasContext *s, int rm)
190
-
222
191
- op = (insn >> 8) & 0xf;
223
/* The bxns helper may raise an EXCEPTION_EXIT exception, so in theory
192
- /* One register and immediate. */
224
* we need to sync state before calling it, but:
193
- imm = (u << 7) | ((insn >> 12) & 0x70) | (insn & 0xf);
225
- * - we don't need to do gen_set_pc_im() because the bxns helper will
194
- invert = (insn & (1 << 5)) != 0;
226
+ * - we don't need to do gen_update_pc() because the bxns helper will
195
- /* Note that op = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
227
* always set the PC itself
196
- * We choose to not special-case this and will behave as if a
228
* - we don't need to do gen_set_condexec() because BXNS is UNPREDICTABLE
197
- * valid constant encoding of 0 had been given.
229
* unless it's outside an IT block or the last insn in an IT block,
198
- */
230
@@ -XXX,XX +XXX,XX @@ static inline void gen_blxns(DisasContext *s, int rm)
199
- switch (op) {
231
* We do however need to set the PC, because the blxns helper reads it.
200
- case 0: case 1:
232
* The blxns helper may throw an exception.
201
- /* no-op */
233
*/
202
- break;
234
- gen_set_pc_im(s, s->base.pc_next);
203
- case 2: case 3:
235
+ gen_update_pc(s, curr_insn_len(s));
204
- imm <<= 8;
236
gen_helper_v7m_blxns(cpu_env, var);
205
- break;
237
tcg_temp_free_i32(var);
206
- case 4: case 5:
238
s->base.is_jmp = DISAS_EXIT;
207
- imm <<= 16;
239
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
208
- break;
240
* as an undefined insn by runtime configuration (ie before
209
- case 6: case 7:
241
* the insn really executes).
210
- imm <<= 24;
242
*/
211
- break;
243
- gen_set_pc_im(s, s->pc_curr);
212
- case 8: case 9:
244
+ gen_update_pc(s, 0);
213
- imm |= imm << 16;
245
gen_helper_pre_hvc(cpu_env);
214
- break;
246
/* Otherwise we will treat this as a real exception which
215
- case 10: case 11:
247
* happens after execution of the insn. (The distinction matters
216
- imm = (imm << 8) | (imm << 24);
248
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
217
- break;
249
* for single stepping.)
218
- case 12:
250
*/
219
- imm = (imm << 8) | 0xff;
251
s->svc_imm = imm16;
220
- break;
252
- gen_set_pc_im(s, s->base.pc_next);
221
- case 13:
253
+ gen_update_pc(s, curr_insn_len(s));
222
- imm = (imm << 16) | 0xffff;
254
s->base.is_jmp = DISAS_HVC;
223
- break;
255
}
224
- case 14:
256
225
- imm |= (imm << 8) | (imm << 16) | (imm << 24);
257
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
226
- if (invert) {
258
/* As with HVC, we may take an exception either before or after
227
- imm = ~imm;
259
* the insn executes.
228
- }
260
*/
229
- break;
261
- gen_set_pc_im(s, s->pc_curr);
230
- case 15:
262
+ gen_update_pc(s, 0);
231
- if (invert) {
263
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa32_smc()));
232
- return 1;
264
- gen_set_pc_im(s, s->base.pc_next);
233
- }
265
+ gen_update_pc(s, curr_insn_len(s));
234
- imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
266
s->base.is_jmp = DISAS_SMC;
235
- | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
267
}
236
- break;
268
237
- }
269
static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
238
- if (invert) {
270
{
239
- imm = ~imm;
271
gen_set_condexec(s);
240
- }
272
- gen_set_pc_im(s, pc);
241
-
273
+ gen_update_pc(s, pc - s->pc_curr);
242
- reg_ofs = neon_reg_offset(rd, 0);
274
gen_exception_internal(excp);
243
- vec_size = q ? 16 : 8;
275
s->base.is_jmp = DISAS_NORETURN;
244
-
276
}
245
- if (op & 1 && op < 12) {
277
@@ -XXX,XX +XXX,XX @@ static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
246
- if (invert) {
278
uint32_t syn, TCGv_i32 tcg_el)
247
- /* The immediate value has already been inverted,
279
{
248
- * so BIC becomes AND.
280
if (s->aarch64) {
249
- */
281
- gen_a64_set_pc_im(pc);
250
- tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
282
+ gen_a64_update_pc(s, pc - s->pc_curr);
251
- vec_size, vec_size);
283
} else {
252
- } else {
284
gen_set_condexec(s);
253
- tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
285
- gen_set_pc_im(s, pc);
254
- vec_size, vec_size);
286
+ gen_update_pc(s, pc - s->pc_curr);
255
- }
287
}
256
- } else {
288
gen_exception_el_v(excp, syn, tcg_el);
257
- /* VMOV, VMVN. */
289
s->base.is_jmp = DISAS_NORETURN;
258
- if (op == 14 && invert) {
290
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
259
- TCGv_i64 t64 = tcg_temp_new_i64();
291
void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
260
-
292
{
261
- for (pass = 0; pass <= q; ++pass) {
293
if (s->aarch64) {
262
- uint64_t val = 0;
294
- gen_a64_set_pc_im(pc);
263
- int n;
295
+ gen_a64_update_pc(s, pc - s->pc_curr);
264
-
296
} else {
265
- for (n = 0; n < 8; n++) {
297
gen_set_condexec(s);
266
- if (imm & (1 << (n + pass * 8))) {
298
- gen_set_pc_im(s, pc);
267
- val |= 0xffull << (n * 8);
299
+ gen_update_pc(s, pc - s->pc_curr);
268
- }
300
}
269
- }
301
gen_exception(excp, syn);
270
- tcg_gen_movi_i64(t64, val);
302
s->base.is_jmp = DISAS_NORETURN;
271
- neon_store_reg64(t64, rd + pass);
303
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
272
- }
304
static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
273
- tcg_temp_free_i64(t64);
305
{
274
- } else {
306
gen_set_condexec(s);
275
- tcg_gen_gvec_dup_imm(MO_32, reg_ofs, vec_size,
307
- gen_set_pc_im(s, s->pc_curr);
276
- vec_size, imm);
308
+ gen_update_pc(s, 0);
277
- }
309
gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syn));
278
- }
310
s->base.is_jmp = DISAS_NORETURN;
279
- }
311
}
280
+ /* Two registers and shift or reg and imm: handled by decodetree */
312
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
281
+ return 1;
313
282
} else { /* (insn & 0x00800010 == 0x00800000) */
314
if (translator_use_goto_tb(&s->base, dest)) {
283
if (size != 3) {
315
tcg_gen_goto_tb(n);
284
op = (insn >> 8) & 0xf;
316
- gen_set_pc_im(s, dest);
317
+ gen_update_pc(s, diff);
318
tcg_gen_exit_tb(s->base.tb, n);
319
} else {
320
- gen_set_pc_im(s, dest);
321
+ gen_update_pc(s, diff);
322
gen_goto_ptr();
323
}
324
s->base.is_jmp = DISAS_NORETURN;
325
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
326
/* Jump, specifying which TB number to use if we gen_goto_tb() */
327
static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
328
{
329
+ int diff = dest - s->pc_curr;
330
+
331
if (unlikely(s->ss_active)) {
332
/* An indirect jump so that we still trigger the debug exception. */
333
- gen_set_pc_im(s, dest);
334
+ gen_update_pc(s, diff);
335
s->base.is_jmp = DISAS_JUMP;
336
return;
337
}
338
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
339
* gen_jmp();
340
* on the second call to gen_jmp().
341
*/
342
- gen_goto_tb(s, tbno, dest - s->pc_curr);
343
+ gen_goto_tb(s, tbno, diff);
344
break;
345
case DISAS_UPDATE_NOCHAIN:
346
case DISAS_UPDATE_EXIT:
347
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
348
* Avoid using goto_tb so we really do exit back to the main loop
349
* and don't chain to another TB.
350
*/
351
- gen_set_pc_im(s, dest);
352
+ gen_update_pc(s, diff);
353
gen_goto_ptr();
354
s->base.is_jmp = DISAS_NORETURN;
355
break;
356
@@ -XXX,XX +XXX,XX @@ static void gen_msr_banked(DisasContext *s, int r, int sysm, int rn)
357
358
/* Sync state because msr_banked() can raise exceptions */
359
gen_set_condexec(s);
360
- gen_set_pc_im(s, s->pc_curr);
361
+ gen_update_pc(s, 0);
362
tcg_reg = load_reg(s, rn);
363
gen_helper_msr_banked(cpu_env, tcg_reg,
364
tcg_constant_i32(tgtmode),
365
@@ -XXX,XX +XXX,XX @@ static void gen_mrs_banked(DisasContext *s, int r, int sysm, int rn)
366
367
/* Sync state because mrs_banked() can raise exceptions */
368
gen_set_condexec(s);
369
- gen_set_pc_im(s, s->pc_curr);
370
+ gen_update_pc(s, 0);
371
tcg_reg = tcg_temp_new_i32();
372
gen_helper_mrs_banked(tcg_reg, cpu_env,
373
tcg_constant_i32(tgtmode),
374
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
375
}
376
377
gen_set_condexec(s);
378
- gen_set_pc_im(s, s->pc_curr);
379
+ gen_update_pc(s, 0);
380
gen_helper_access_check_cp_reg(cpu_env,
381
tcg_constant_ptr(ri),
382
tcg_constant_i32(syndrome),
383
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
384
* synchronize the CPU state in case it does.
385
*/
386
gen_set_condexec(s);
387
- gen_set_pc_im(s, s->pc_curr);
388
+ gen_update_pc(s, 0);
389
}
390
391
/* Handle special cases first */
392
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
393
unallocated_encoding(s);
394
return;
395
}
396
- gen_set_pc_im(s, s->base.pc_next);
397
+ gen_update_pc(s, curr_insn_len(s));
398
s->base.is_jmp = DISAS_WFI;
399
return;
400
default:
401
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
402
addr = tcg_temp_new_i32();
403
/* get_r13_banked() will raise an exception if called from System mode */
404
gen_set_condexec(s);
405
- gen_set_pc_im(s, s->pc_curr);
406
+ gen_update_pc(s, 0);
407
gen_helper_get_r13_banked(addr, cpu_env, tcg_constant_i32(mode));
408
switch (amode) {
409
case 0: /* DA */
410
@@ -XXX,XX +XXX,XX @@ static bool trans_YIELD(DisasContext *s, arg_YIELD *a)
411
* scheduling of other vCPUs.
412
*/
413
if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
414
- gen_set_pc_im(s, s->base.pc_next);
415
+ gen_update_pc(s, curr_insn_len(s));
416
s->base.is_jmp = DISAS_YIELD;
417
}
418
return true;
419
@@ -XXX,XX +XXX,XX @@ static bool trans_WFE(DisasContext *s, arg_WFE *a)
420
* implemented so we can't sleep like WFI does.
421
*/
422
if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
423
- gen_set_pc_im(s, s->base.pc_next);
424
+ gen_update_pc(s, curr_insn_len(s));
425
s->base.is_jmp = DISAS_WFE;
426
}
427
return true;
428
@@ -XXX,XX +XXX,XX @@ static bool trans_WFE(DisasContext *s, arg_WFE *a)
429
static bool trans_WFI(DisasContext *s, arg_WFI *a)
430
{
431
/* For WFI, halt the vCPU until an IRQ. */
432
- gen_set_pc_im(s, s->base.pc_next);
433
+ gen_update_pc(s, curr_insn_len(s));
434
s->base.is_jmp = DISAS_WFI;
435
return true;
436
}
437
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
438
(a->imm == semihost_imm)) {
439
gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
440
} else {
441
- gen_set_pc_im(s, s->base.pc_next);
442
+ gen_update_pc(s, curr_insn_len(s));
443
s->svc_imm = a->imm;
444
s->base.is_jmp = DISAS_SWI;
445
}
446
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
447
case DISAS_TOO_MANY:
448
case DISAS_UPDATE_EXIT:
449
case DISAS_UPDATE_NOCHAIN:
450
- gen_set_pc_im(dc, dc->base.pc_next);
451
+ gen_update_pc(dc, curr_insn_len(dc));
452
/* fall through */
453
default:
454
/* FIXME: Single stepping a WFI insn will not halt the CPU. */
455
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
456
gen_goto_tb(dc, 1, curr_insn_len(dc));
457
break;
458
case DISAS_UPDATE_NOCHAIN:
459
- gen_set_pc_im(dc, dc->base.pc_next);
460
+ gen_update_pc(dc, curr_insn_len(dc));
461
/* fall through */
462
case DISAS_JUMP:
463
gen_goto_ptr();
464
break;
465
case DISAS_UPDATE_EXIT:
466
- gen_set_pc_im(dc, dc->base.pc_next);
467
+ gen_update_pc(dc, curr_insn_len(dc));
468
/* fall through */
469
default:
470
/* indicate that the hash table must be used to find the next TB */
471
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
472
gen_set_label(dc->condlabel);
473
gen_set_condexec(dc);
474
if (unlikely(dc->ss_active)) {
475
- gen_set_pc_im(dc, dc->base.pc_next);
476
+ gen_update_pc(dc, curr_insn_len(dc));
477
gen_singlestep_exception(dc);
478
} else {
479
gen_goto_tb(dc, 1, curr_insn_len(dc));
--
2.20.1

--
2.25.1
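For readers following the PC-handling conversion above, here is a tiny standalone C sketch of the displacement convention that gen_update_pc() introduces. Everything in it is a toy (the struct, field names and addresses are invented for illustration); only the arithmetic mirrors the hunks above, where call sites now pass 0 to point at the insn itself and the insn length to point at the next insn.

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

/* Toy stand-in for DisasContext: just the fields the sketch needs. */
typedef struct {
    uint32_t pc_curr;    /* address of the insn being translated */
    uint32_t insn_len;   /* 2 or 4 on Arm; fixed here for the demo */
    uint32_t r15;        /* stand-in for cpu_R[15] */
} ToyCtx;

/* Mirrors the new-style helper: callers pass a displacement from
 * pc_curr instead of an absolute target address. */
static void toy_update_pc(ToyCtx *s, int32_t diff)
{
    s->r15 = s->pc_curr + diff;
}

int main(void)
{
    ToyCtx s = { .pc_curr = 0x8000, .insn_len = 4 };

    toy_update_pc(&s, 0);           /* was gen_set_pc_im(s, s->pc_curr) */
    printf("current insn: 0x%" PRIx32 "\n", s.r15);

    toy_update_pc(&s, s.insn_len);  /* was gen_set_pc_im(s, s->base.pc_next) */
    printf("next insn:    0x%" PRIx32 "\n", s.r15);
    return 0;
}

The net effect of the patch is that the absolute PC is computed in one place from pc_curr plus a displacement, which is the direction the TARGET_TB_PCREL work needs.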
1
Convert the Neon narrowing shifts where op==8 to decodetree:
1
From: Richard Henderson <richard.henderson@linaro.org>
2
* VSHRN
2
3
* VRSHRN
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
* VQSHRUN
4
5
* VQRSHRUN
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20200522145520.6778-6-peter.maydell@linaro.org
10
---
9
---
11
target/arm/neon-dp.decode | 27 ++++++
10
target/arm/translate.h | 5 +++--
12
target/arm/translate-neon.inc.c | 167 ++++++++++++++++++++++++++++++++
11
target/arm/translate-a64.c | 28 ++++++++++-------------
13
target/arm/translate.c | 1 +
12
target/arm/translate-m-nocp.c | 6 ++---
14
3 files changed, 195 insertions(+)
13
target/arm/translate-mve.c | 2 +-
15
14
target/arm/translate-vfp.c | 6 ++---
16
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
target/arm/translate.c | 42 +++++++++++++++++------------------
17
index XXXXXXX..XXXXXXX 100644
16
6 files changed, 43 insertions(+), 46 deletions(-)
18
--- a/target/arm/neon-dp.decode
17
19
+++ b/target/arm/neon-dp.decode
18
diff --git a/target/arm/translate.h b/target/arm/translate.h
20
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
19
index XXXXXXX..XXXXXXX 100644
21
@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
20
--- a/target/arm/translate.h
22
&2reg_shift vm=%vm_dp vd=%vd_dp size=0
21
+++ b/target/arm/translate.h
23
22
@@ -XXX,XX +XXX,XX @@ void arm_jump_cc(DisasCompare *cmp, TCGLabel *label);
24
+# Narrowing right shifts: here the Q bit is part of the opcode decode
23
void arm_gen_test_cc(int cc, TCGLabel *label);
25
+@2reg_shrn_d .... ... . . . 1 ..... .... .... 0 . . . .... \
24
MemOp pow2_align(unsigned i);
26
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3 q=0 \
25
void unallocated_encoding(DisasContext *s);
27
+ shift=%neon_rshift_i5
26
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
28
+@2reg_shrn_s .... ... . . . 01 .... .... .... 0 . . . .... \
27
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
29
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0 \
28
uint32_t syn, uint32_t target_el);
30
+ shift=%neon_rshift_i4
29
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn);
31
+@2reg_shrn_h .... ... . . . 001 ... .... .... 0 . . . .... \
30
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
32
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
31
+ int excp, uint32_t syn);
33
+ shift=%neon_rshift_i3
32
34
+
33
/* Return state of Alternate Half-precision flag, caller frees result */
35
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
34
static inline TCGv_i32 get_ahp_flag(void)
36
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
35
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
37
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
36
index XXXXXXX..XXXXXXX 100644
38
@@ -XXX,XX +XXX,XX @@ VQSHL_U_64_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
37
--- a/target/arm/translate-a64.c
39
VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
38
+++ b/target/arm/translate-a64.c
40
VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
39
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
41
VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
40
assert(!s->fp_access_checked);
42
+
41
s->fp_access_checked = true;
43
+VSHRN_64_2sh 1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_d
42
44
+VSHRN_32_2sh 1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_s
43
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
45
+VSHRN_16_2sh 1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
44
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
46
+
45
syn_fp_access_trap(1, 0xe, false, 0),
47
+VRSHRN_64_2sh 1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
46
s->fp_excp_el);
48
+VRSHRN_32_2sh 1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
47
return false;
49
+VRSHRN_16_2sh 1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
48
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
50
+
49
return false;
51
+VQSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_d
50
}
52
+VQSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_s
51
if (s->sme_trap_nonstreaming && s->is_nonstreaming) {
53
+VQSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
52
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
54
+
53
+ gen_exception_insn(s, 0, EXCP_UDEF,
55
+VQRSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
54
syn_smetrap(SME_ET_Streaming, false));
56
+VQRSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
55
return false;
57
+VQRSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
56
}
58
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
57
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
59
index XXXXXXX..XXXXXXX 100644
58
goto fail_exit;
60
--- a/target/arm/translate-neon.inc.c
59
}
61
+++ b/target/arm/translate-neon.inc.c
60
} else if (s->sve_excp_el) {
62
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
61
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
63
DO_2SHIFT_ENV(VQSHLU, qshlu_s)
62
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
64
DO_2SHIFT_ENV(VQSHL_U, qshl_u)
63
syn_sve_access_trap(), s->sve_excp_el);
65
DO_2SHIFT_ENV(VQSHL_S, qshl_s)
64
goto fail_exit;
66
+
65
}
67
+static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
66
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
68
+ NeonGenTwo64OpFn *shiftfn,
67
static bool sme_access_check(DisasContext *s)
69
+ NeonGenNarrowEnvFn *narrowfn)
68
{
70
+{
69
if (s->sme_excp_el) {
71
+ /* 2-reg-and-shift narrowing-shift operations, size == 3 case */
70
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
72
+ TCGv_i64 constimm, rm1, rm2;
71
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
73
+ TCGv_i32 rd;
72
syn_smetrap(SME_ET_AccessTrap, false),
74
+
73
s->sme_excp_el);
75
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
74
return false;
76
+ return false;
75
@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
77
+ }
76
return false;
78
+
77
}
79
+ /* UNDEF accesses to D16-D31 if they don't exist. */
78
if (FIELD_EX64(req, SVCR, SM) && !s->pstate_sm) {
80
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
79
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
81
+ ((a->vd | a->vm) & 0x10)) {
80
+ gen_exception_insn(s, 0, EXCP_UDEF,
82
+ return false;
81
syn_smetrap(SME_ET_NotStreaming, false));
83
+ }
82
return false;
84
+
83
}
85
+ if (a->vm & 1) {
84
if (FIELD_EX64(req, SVCR, ZA) && !s->pstate_za) {
86
+ return false;
85
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
87
+ }
86
+ gen_exception_insn(s, 0, EXCP_UDEF,
88
+
87
syn_smetrap(SME_ET_InactiveZA, false));
89
+ if (!vfp_access_check(s)) {
88
return false;
90
+ return true;
89
}
91
+ }
90
@@ -XXX,XX +XXX,XX @@ static void gen_sysreg_undef(DisasContext *s, bool isread,
92
+
91
} else {
93
+ /*
92
syndrome = syn_uncategorized();
94
+ * This is always a right shift, and the shiftfn is always a
93
}
95
+ * left-shift helper, which thus needs the negated shift count.
94
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syndrome);
96
+ */
95
+ gen_exception_insn(s, 0, EXCP_UDEF, syndrome);
97
+ constimm = tcg_const_i64(-a->shift);
96
}
98
+ rm1 = tcg_temp_new_i64();
97
99
+ rm2 = tcg_temp_new_i64();
98
/* MRS - move from system register
100
+
99
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
101
+ /* Load both inputs first to avoid potential overwrite if rm == rd */
100
switch (op2_ll) {
102
+ neon_load_reg64(rm1, a->vm);
101
case 1: /* SVC */
103
+ neon_load_reg64(rm2, a->vm + 1);
102
gen_ss_advance(s);
104
+
103
- gen_exception_insn(s, s->base.pc_next, EXCP_SWI,
105
+ shiftfn(rm1, rm1, constimm);
104
- syn_aa64_svc(imm16));
106
+ rd = tcg_temp_new_i32();
105
+ gen_exception_insn(s, 4, EXCP_SWI, syn_aa64_svc(imm16));
107
+ narrowfn(rd, cpu_env, rm1);
106
break;
108
+ neon_store_reg(a->vd, 0, rd);
107
case 2: /* HVC */
109
+
108
if (s->current_el == 0) {
110
+ shiftfn(rm2, rm2, constimm);
109
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
111
+ rd = tcg_temp_new_i32();
110
gen_a64_update_pc(s, 0);
112
+ narrowfn(rd, cpu_env, rm2);
111
gen_helper_pre_hvc(cpu_env);
113
+ neon_store_reg(a->vd, 1, rd);
112
gen_ss_advance(s);
114
+
113
- gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
115
+ tcg_temp_free_i64(rm1);
114
- syn_aa64_hvc(imm16), 2);
116
+ tcg_temp_free_i64(rm2);
115
+ gen_exception_insn_el(s, 4, EXCP_HVC, syn_aa64_hvc(imm16), 2);
117
+ tcg_temp_free_i64(constimm);
116
break;
118
+
117
case 3: /* SMC */
119
+ return true;
118
if (s->current_el == 0) {
120
+}
119
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
121
+
120
gen_a64_update_pc(s, 0);
122
+static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
121
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
123
+ NeonGenTwoOpFn *shiftfn,
122
gen_ss_advance(s);
124
+ NeonGenNarrowEnvFn *narrowfn)
123
- gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
125
+{
124
- syn_aa64_smc(imm16), 3);
126
+ /* 2-reg-and-shift narrowing-shift operations, size < 3 case */
125
+ gen_exception_insn_el(s, 4, EXCP_SMC, syn_aa64_smc(imm16), 3);
127
+ TCGv_i32 constimm, rm1, rm2, rm3, rm4;
126
break;
128
+ TCGv_i64 rtmp;
127
default:
129
+ uint32_t imm;
128
unallocated_encoding(s);
130
+
129
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
131
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
130
* Illegal execution state. This has priority over BTI
132
+ return false;
131
* exceptions, but comes after instruction abort exceptions.
133
+ }
132
*/
134
+
133
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
135
+ /* UNDEF accesses to D16-D31 if they don't exist. */
134
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
136
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
135
return;
137
+ ((a->vd | a->vm) & 0x10)) {
136
}
138
+ return false;
137
139
+ }
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
140
+
139
if (s->btype != 0
141
+ if (a->vm & 1) {
140
&& s->guarded_page
142
+ return false;
141
&& !btype_destination_ok(insn, s->bt, s->btype)) {
143
+ }
142
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
144
+
143
- syn_btitrap(s->btype));
145
+ if (!vfp_access_check(s)) {
144
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_btitrap(s->btype));
146
+ return true;
145
return;
147
+ }
146
}
148
+
147
} else {
149
+ /*
148
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
150
+ * This is always a right shift, and the shiftfn is always a
149
index XXXXXXX..XXXXXXX 100644
151
+ * left-shift helper, which thus needs the negated shift count
150
--- a/target/arm/translate-m-nocp.c
152
+ * duplicated into each lane of the immediate value.
151
+++ b/target/arm/translate-m-nocp.c
153
+ */
152
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
154
+ if (a->size == 1) {
153
tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
155
+ imm = (uint16_t)(-a->shift);
154
156
+ imm |= imm << 16;
155
if (s->fp_excp_el != 0) {
157
+ } else {
156
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
158
+ /* size == 2 */
157
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
159
+ imm = -a->shift;
158
syn_uncategorized(), s->fp_excp_el);
160
+ }
159
return true;
161
+ constimm = tcg_const_i32(imm);
160
}
162
+
161
@@ -XXX,XX +XXX,XX @@ static bool trans_NOCP(DisasContext *s, arg_nocp *a)
163
+ /* Load all inputs first to avoid potential overwrite */
162
}
164
+ rm1 = neon_load_reg(a->vm, 0);
163
165
+ rm2 = neon_load_reg(a->vm, 1);
164
if (a->cp != 10) {
166
+ rm3 = neon_load_reg(a->vm + 1, 0);
165
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized());
167
+ rm4 = neon_load_reg(a->vm + 1, 1);
166
+ gen_exception_insn(s, 0, EXCP_NOCP, syn_uncategorized());
168
+ rtmp = tcg_temp_new_i64();
167
return true;
169
+
168
}
170
+ shiftfn(rm1, rm1, constimm);
169
171
+ shiftfn(rm2, rm2, constimm);
170
if (s->fp_excp_el != 0) {
172
+
171
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
173
+ tcg_gen_concat_i32_i64(rtmp, rm1, rm2);
172
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
174
+ tcg_temp_free_i32(rm2);
173
syn_uncategorized(), s->fp_excp_el);
175
+
174
return true;
176
+ narrowfn(rm1, cpu_env, rtmp);
175
}
177
+ neon_store_reg(a->vd, 0, rm1);
176
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
178
+
177
index XXXXXXX..XXXXXXX 100644
179
+ shiftfn(rm3, rm3, constimm);
178
--- a/target/arm/translate-mve.c
180
+ shiftfn(rm4, rm4, constimm);
179
+++ b/target/arm/translate-mve.c
181
+ tcg_temp_free_i32(constimm);
180
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
182
+
181
return true;
183
+ tcg_gen_concat_i32_i64(rtmp, rm3, rm4);
182
default:
184
+ tcg_temp_free_i32(rm4);
183
/* Reserved value: INVSTATE UsageFault */
185
+
184
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
186
+ narrowfn(rm3, cpu_env, rtmp);
185
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
187
+ tcg_temp_free_i64(rtmp);
186
return false;
188
+ neon_store_reg(a->vd, 1, rm3);
187
}
189
+ return true;
188
}
190
+}
189
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
191
+
190
index XXXXXXX..XXXXXXX 100644
192
+#define DO_2SN_64(INSN, FUNC, NARROWFUNC) \
191
--- a/target/arm/translate-vfp.c
193
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
192
+++ b/target/arm/translate-vfp.c
194
+ { \
193
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
195
+ return do_2shift_narrow_64(s, a, FUNC, NARROWFUNC); \
194
int coproc = arm_dc_feature(s, ARM_FEATURE_V8) ? 0 : 0xa;
196
+ }
195
uint32_t syn = syn_fp_access_trap(1, 0xe, false, coproc);
197
+#define DO_2SN_32(INSN, FUNC, NARROWFUNC) \
196
198
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
197
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF, syn, s->fp_excp_el);
199
+ { \
198
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn, s->fp_excp_el);
200
+ return do_2shift_narrow_32(s, a, FUNC, NARROWFUNC); \
199
return false;
201
+ }
200
}
202
+
201
203
+static void gen_neon_narrow_u32(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
202
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
204
+{
203
* appear to be any insns which touch VFP which are allowed.
205
+ tcg_gen_extrl_i64_i32(dest, src);
204
*/
206
+}
205
if (s->sme_trap_nonstreaming) {
207
+
206
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
208
+static void gen_neon_narrow_u16(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
207
+ gen_exception_insn(s, 0, EXCP_UDEF,
209
+{
208
syn_smetrap(SME_ET_Streaming,
210
+ gen_helper_neon_narrow_u16(dest, src);
209
curr_insn_len(s) == 2));
211
+}
210
return false;
212
+
211
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
213
+static void gen_neon_narrow_u8(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
212
* the encoding space handled by the patterns in m-nocp.decode,
214
+{
213
* and for them we may need to raise NOCP here.
215
+ gen_helper_neon_narrow_u8(dest, src);
214
*/
216
+}
215
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
217
+
216
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
218
+DO_2SN_64(VSHRN_64, gen_ushl_i64, gen_neon_narrow_u32)
217
syn_uncategorized(), s->fp_excp_el);
219
+DO_2SN_32(VSHRN_32, gen_ushl_i32, gen_neon_narrow_u16)
218
return false;
220
+DO_2SN_32(VSHRN_16, gen_helper_neon_shl_u16, gen_neon_narrow_u8)
219
}
221
+
222
+DO_2SN_64(VRSHRN_64, gen_helper_neon_rshl_u64, gen_neon_narrow_u32)
223
+DO_2SN_32(VRSHRN_32, gen_helper_neon_rshl_u32, gen_neon_narrow_u16)
224
+DO_2SN_32(VRSHRN_16, gen_helper_neon_rshl_u16, gen_neon_narrow_u8)
225
+
226
+DO_2SN_64(VQSHRUN_64, gen_sshl_i64, gen_helper_neon_unarrow_sat32)
227
+DO_2SN_32(VQSHRUN_32, gen_sshl_i32, gen_helper_neon_unarrow_sat16)
228
+DO_2SN_32(VQSHRUN_16, gen_helper_neon_shl_s16, gen_helper_neon_unarrow_sat8)
229
+
230
+DO_2SN_64(VQRSHRUN_64, gen_helper_neon_rshl_s64, gen_helper_neon_unarrow_sat32)
231
+DO_2SN_32(VQRSHRUN_32, gen_helper_neon_rshl_s32, gen_helper_neon_unarrow_sat16)
232
+DO_2SN_32(VQRSHRUN_16, gen_helper_neon_rshl_s16, gen_helper_neon_unarrow_sat8)
233
diff --git a/target/arm/translate.c b/target/arm/translate.c
220
diff --git a/target/arm/translate.c b/target/arm/translate.c
234
index XXXXXXX..XXXXXXX 100644
221
index XXXXXXX..XXXXXXX 100644
235
--- a/target/arm/translate.c
222
--- a/target/arm/translate.c
236
+++ b/target/arm/translate.c
223
+++ b/target/arm/translate.c
237
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
224
@@ -XXX,XX +XXX,XX @@ static void gen_exception(int excp, uint32_t syndrome)
238
case 5: /* VSHL, VSLI */
225
tcg_constant_i32(syndrome));
239
case 6: /* VQSHLU */
226
}
240
case 7: /* VQSHL */
227
241
+ case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
228
-static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
242
return 1; /* handled by decodetree */
229
- uint32_t syn, TCGv_i32 tcg_el)
243
default:
230
+static void gen_exception_insn_el_v(DisasContext *s, target_long pc_diff,
244
break;
231
+ int excp, uint32_t syn, TCGv_i32 tcg_el)
232
{
233
if (s->aarch64) {
234
- gen_a64_update_pc(s, pc - s->pc_curr);
235
+ gen_a64_update_pc(s, pc_diff);
236
} else {
237
gen_set_condexec(s);
238
- gen_update_pc(s, pc - s->pc_curr);
239
+ gen_update_pc(s, pc_diff);
240
}
241
gen_exception_el_v(excp, syn, tcg_el);
242
s->base.is_jmp = DISAS_NORETURN;
243
}
244
245
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
246
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
247
uint32_t syn, uint32_t target_el)
248
{
249
- gen_exception_insn_el_v(s, pc, excp, syn, tcg_constant_i32(target_el));
250
+ gen_exception_insn_el_v(s, pc_diff, excp, syn,
251
+ tcg_constant_i32(target_el));
252
}
253
254
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
255
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
256
+ int excp, uint32_t syn)
257
{
258
if (s->aarch64) {
259
- gen_a64_update_pc(s, pc - s->pc_curr);
260
+ gen_a64_update_pc(s, pc_diff);
261
} else {
262
gen_set_condexec(s);
263
- gen_update_pc(s, pc - s->pc_curr);
264
+ gen_update_pc(s, pc_diff);
265
}
266
gen_exception(excp, syn);
267
s->base.is_jmp = DISAS_NORETURN;
268
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
269
void unallocated_encoding(DisasContext *s)
270
{
271
/* Unallocated and reserved encodings are uncategorized */
272
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
273
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
274
}
275
276
/* Force a TB lookup after an instruction that changes the CPU state. */
277
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
278
tcg_el = tcg_constant_i32(3);
279
}
280
281
- gen_exception_insn_el_v(s, s->pc_curr, EXCP_UDEF,
282
+ gen_exception_insn_el_v(s, 0, EXCP_UDEF,
283
syn_uncategorized(), tcg_el);
284
tcg_temp_free_i32(tcg_el);
285
return false;
286
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
287
288
undef:
289
/* If we get here then some access check did not pass */
290
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
291
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
292
return false;
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
296
* For the UNPREDICTABLE cases we choose to UNDEF.
297
*/
298
if (s->current_el == 1 && !s->ns && mode == ARM_CPU_MODE_MON) {
299
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
300
- syn_uncategorized(), 3);
301
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_uncategorized(), 3);
302
return;
303
}
304
305
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
306
* Do the check-and-raise-exception by hand.
307
*/
308
if (s->fp_excp_el) {
309
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
310
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
311
syn_uncategorized(), s->fp_excp_el);
312
return true;
313
}
314
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
315
tmp = load_cpu_field(v7m.ltpsize);
316
tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
317
tcg_temp_free_i32(tmp);
318
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
319
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
320
gen_set_label(skipexc);
321
}
322
323
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
324
* UsageFault exception.
325
*/
326
if (arm_dc_feature(s, ARM_FEATURE_M)) {
327
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
328
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
329
return;
330
}
331
332
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
333
* Illegal execution state. This has priority over BTI
334
* exceptions, but comes after instruction abort exceptions.
335
*/
336
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
337
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
338
return;
339
}
340
341
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
342
* Illegal execution state. This has priority over BTI
343
* exceptions, but comes after instruction abort exceptions.
344
*/
345
- gen_exception_insn(dc, dc->pc_curr, EXCP_UDEF, syn_illegalstate());
346
+ gen_exception_insn(dc, 0, EXCP_UDEF, syn_illegalstate());
347
return;
348
}
349
350
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
351
*/
352
tcg_remove_ops_after(dc->insn_eci_rewind);
353
dc->condjmp = 0;
354
- gen_exception_insn(dc, dc->pc_curr, EXCP_INVSTATE,
355
- syn_uncategorized());
356
+ gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
357
}
358
359
arm_post_translate_insn(dc);
--
2.20.1

--
2.25.1
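A side note on the narrowing-shift conversion above: the comment about needing "the negated shift count duplicated into each lane of the immediate value" is easy to check in isolation. The sketch below is a standalone toy (function name and test values invented); the bit manipulation mirrors what do_2shift_narrow_32() does before calling the variable left-shift helpers.

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

/* Build the packed immediate a variable left-shift helper expects when
 * what we really want is a right shift by 'shift' on 16-bit or 32-bit
 * lanes. */
static uint32_t packed_neg_shift(int size, int shift)
{
    uint32_t imm;

    if (size == 1) {            /* 16-bit lanes */
        imm = (uint16_t)(-shift);
        imm |= imm << 16;       /* same count in both halfword lanes */
    } else {                    /* size == 2: a single 32-bit lane */
        imm = -shift;
    }
    return imm;
}

int main(void)
{
    printf("size=1, shift=3 -> 0x%08" PRIx32 "\n", packed_neg_shift(1, 3));
    printf("size=2, shift=3 -> 0x%08" PRIx32 "\n", packed_neg_shift(2, 3));
    return 0;
}

With shift=3 and size=1 this prints 0xfffdfffd, i.e. -3 replicated into both halfword lanes, which is the value loaded into constimm in the patch.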
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Do not yet convert the helpers to loop over opr_sz, but the
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
descriptor allows the vector tail to be cleared, which fixes
4
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.
5
an existing bug vs SVE.
6
5
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-5-richard.henderson@linaro.org
8
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/helper.h | 12 ++--
11
target/arm/translate-a64.c | 6 +++---
13
target/arm/neon-dp.decode | 12 ++--
12
target/arm/translate.c | 10 +++++-----
14
target/arm/crypto_helper.c | 24 +++++--
13
2 files changed, 8 insertions(+), 8 deletions(-)
15
target/arm/translate-a64.c | 34 ++++-----
16
target/arm/translate-neon.inc.c | 124 +++++---------------------------
17
target/arm/translate.c | 24 ++-----
18
6 files changed, 67 insertions(+), 163 deletions(-)
19
14
20
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.h
23
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
27
DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
-DEF_HELPER_FLAGS_2(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr)
29
-DEF_HELPER_FLAGS_2(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr)
30
+DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
33
-DEF_HELPER_FLAGS_3(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
34
-DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
35
-DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
36
-DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
37
+DEF_HELPER_FLAGS_4(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_3(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/neon-dp.decode
47
+++ b/target/arm/neon-dp.decode
48
@@ -XXX,XX +XXX,XX @@ VPADD_3s 1111 001 0 0 . .. .... .... 1011 . . . 1 .... @3same_q0
49
50
VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
51
52
+@3same_crypto .... .... .... .... .... .... .... .... \
53
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
54
+
55
SHA1_3s 1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
56
vm=%vm_dp vn=%vn_dp vd=%vd_dp
57
-SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... \
58
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
59
-SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... \
60
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
61
-SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
62
- vm=%vm_dp vn=%vn_dp vd=%vd_dp
63
+SHA256H_3s 1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
64
+SHA256H2_3s 1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
65
+SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
66
67
VFMA_fp_3s 1111 001 0 0 . 0 . .... .... 1100 ... 1 .... @3same_fp
68
VFMS_fp_3s 1111 001 0 0 . 1 . .... .... 1100 ... 1 .... @3same_fp
69
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/crypto_helper.c
72
+++ b/target/arm/crypto_helper.c
73
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
74
rd[1] = d.l[1];
75
}
76
77
-void HELPER(crypto_sha1h)(void *vd, void *vm)
78
+void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
79
{
80
uint64_t *rd = vd;
81
uint64_t *rm = vm;
82
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1h)(void *vd, void *vm)
83
84
rd[0] = m.l[0];
85
rd[1] = m.l[1];
86
+
87
+ clear_tail_16(vd, desc);
88
}
89
90
-void HELPER(crypto_sha1su1)(void *vd, void *vm)
91
+void HELPER(crypto_sha1su1)(void *vd, void *vm, uint32_t desc)
92
{
93
uint64_t *rd = vd;
94
uint64_t *rm = vm;
95
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1su1)(void *vd, void *vm)
96
97
rd[0] = d.l[0];
98
rd[1] = d.l[1];
99
+
100
+ clear_tail_16(vd, desc);
101
}
102
103
/*
104
@@ -XXX,XX +XXX,XX @@ static uint32_t s1(uint32_t x)
105
return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
106
}
107
108
-void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
109
+void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm, uint32_t desc)
110
{
111
uint64_t *rd = vd;
112
uint64_t *rn = vn;
113
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
114
115
rd[0] = d.l[0];
116
rd[1] = d.l[1];
117
+
118
+ clear_tail_16(vd, desc);
119
}
120
121
-void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
122
+void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm, uint32_t desc)
123
{
124
uint64_t *rd = vd;
125
uint64_t *rn = vn;
126
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
127
128
rd[0] = d.l[0];
129
rd[1] = d.l[1];
130
+
131
+ clear_tail_16(vd, desc);
132
}
133
134
-void HELPER(crypto_sha256su0)(void *vd, void *vm)
135
+void HELPER(crypto_sha256su0)(void *vd, void *vm, uint32_t desc)
136
{
137
uint64_t *rd = vd;
138
uint64_t *rm = vm;
139
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su0)(void *vd, void *vm)
140
141
rd[0] = d.l[0];
142
rd[1] = d.l[1];
143
+
144
+ clear_tail_16(vd, desc);
145
}
146
147
-void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
148
+void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm, uint32_t desc)
149
{
150
uint64_t *rd = vd;
151
uint64_t *rn = vn;
152
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
153
154
rd[0] = d.l[0];
155
rd[1] = d.l[1];
156
+
157
+ clear_tail_16(vd, desc);
158
}
159
160
/*
161
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
162
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-a64.c
17
--- a/target/arm/translate-a64.c
164
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
165
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
19
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
166
int rm = extract32(insn, 16, 5);
20
gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
167
int rn = extract32(insn, 5, 5);
168
int rd = extract32(insn, 0, 5);
169
- CryptoThreeOpFn *genfn;
170
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
171
+ gen_helper_gvec_3 *genfn;
172
bool feature;
173
174
if (size != 0) {
175
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
176
return;
177
}
178
179
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
180
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
181
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
182
-
183
if (genfn) {
184
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
185
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
186
} else {
187
TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
188
+ TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
189
+ TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
190
+ TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
191
192
gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
193
tcg_rm_ptr, tcg_opcode);
194
- tcg_temp_free_i32(tcg_opcode);
195
- }
196
197
- tcg_temp_free_ptr(tcg_rd_ptr);
198
- tcg_temp_free_ptr(tcg_rn_ptr);
199
- tcg_temp_free_ptr(tcg_rm_ptr);
200
+ tcg_temp_free_i32(tcg_opcode);
201
+ tcg_temp_free_ptr(tcg_rd_ptr);
202
+ tcg_temp_free_ptr(tcg_rn_ptr);
203
+ tcg_temp_free_ptr(tcg_rm_ptr);
204
+ }
205
}
21
}
206
22
207
/* Crypto two-reg SHA
23
-static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
208
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
24
+static void gen_exception_internal_insn(DisasContext *s, int excp)
209
int opcode = extract32(insn, 12, 5);
25
{
210
int rn = extract32(insn, 5, 5);
26
- gen_a64_update_pc(s, pc - s->pc_curr);
211
int rd = extract32(insn, 0, 5);
27
+ gen_a64_update_pc(s, 0);
212
- CryptoTwoOpFn *genfn;
28
gen_exception_internal(excp);
213
+ gen_helper_gvec_2 *genfn;
29
s->base.is_jmp = DISAS_NORETURN;
214
bool feature;
215
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
216
217
if (size != 0) {
218
unallocated_encoding(s);
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
220
if (!fp_access_check(s)) {
221
return;
222
}
223
-
224
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
225
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
226
-
227
- genfn(tcg_rd_ptr, tcg_rn_ptr);
228
-
229
- tcg_temp_free_ptr(tcg_rd_ptr);
230
- tcg_temp_free_ptr(tcg_rn_ptr);
231
+ gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
232
}
30
}
233
31
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
234
static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
32
* Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
235
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
33
*/
236
index XXXXXXX..XXXXXXX 100644
34
if (semihosting_enabled(s->current_el == 0) && imm16 == 0xf000) {
237
--- a/target/arm/translate-neon.inc.c
35
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
238
+++ b/target/arm/translate-neon.inc.c
36
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
239
@@ -XXX,XX +XXX,XX @@ DO_3SAME_CMP(VCGE_S, TCG_COND_GE)
37
} else {
240
DO_3SAME_CMP(VCGE_U, TCG_COND_GEU)
38
unallocated_encoding(s);
241
DO_3SAME_CMP(VCEQ, TCG_COND_EQ)
39
}
242
243
-static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
244
- uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
245
-{
246
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz,
247
- 0, gen_helper_gvec_pmul_b);
248
-}
249
+#define WRAP_OOL_FN(WRAPNAME, FUNC) \
250
+ static void WRAPNAME(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs, \
251
+ uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz) \
252
+ { \
253
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, 0, FUNC); \
254
+ }
255
+
256
+WRAP_OOL_FN(gen_VMUL_p_3s, gen_helper_gvec_pmul_b)
257
258
static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
259
{
260
@@ -XXX,XX +XXX,XX @@ static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
261
return true;
262
}
263
264
-static bool trans_SHA256H_3s(DisasContext *s, arg_SHA256H_3s *a)
265
-{
266
- TCGv_ptr ptr1, ptr2, ptr3;
267
-
268
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
269
- !dc_isar_feature(aa32_sha2, s)) {
270
- return false;
271
+#define DO_SHA2(NAME, FUNC) \
272
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
273
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
274
+ { \
275
+ if (!dc_isar_feature(aa32_sha2, s)) { \
276
+ return false; \
277
+ } \
278
+ return do_3same(s, a, gen_##NAME##_3s); \
279
}
280
281
- /* UNDEF accesses to D16-D31 if they don't exist. */
282
- if (!dc_isar_feature(aa32_simd_r32, s) &&
283
- ((a->vd | a->vn | a->vm) & 0x10)) {
284
- return false;
285
- }
286
-
287
- if ((a->vn | a->vm | a->vd) & 1) {
288
- return false;
289
- }
290
-
291
- if (!vfp_access_check(s)) {
292
- return true;
293
- }
294
-
295
- ptr1 = vfp_reg_ptr(true, a->vd);
296
- ptr2 = vfp_reg_ptr(true, a->vn);
297
- ptr3 = vfp_reg_ptr(true, a->vm);
298
- gen_helper_crypto_sha256h(ptr1, ptr2, ptr3);
299
- tcg_temp_free_ptr(ptr1);
300
- tcg_temp_free_ptr(ptr2);
301
- tcg_temp_free_ptr(ptr3);
302
-
303
- return true;
304
-}
305
-
306
-static bool trans_SHA256H2_3s(DisasContext *s, arg_SHA256H2_3s *a)
307
-{
308
- TCGv_ptr ptr1, ptr2, ptr3;
309
-
310
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
311
- !dc_isar_feature(aa32_sha2, s)) {
312
- return false;
313
- }
314
-
315
- /* UNDEF accesses to D16-D31 if they don't exist. */
316
- if (!dc_isar_feature(aa32_simd_r32, s) &&
317
- ((a->vd | a->vn | a->vm) & 0x10)) {
318
- return false;
319
- }
320
-
321
- if ((a->vn | a->vm | a->vd) & 1) {
322
- return false;
323
- }
324
-
325
- if (!vfp_access_check(s)) {
326
- return true;
327
- }
328
-
329
- ptr1 = vfp_reg_ptr(true, a->vd);
330
- ptr2 = vfp_reg_ptr(true, a->vn);
331
- ptr3 = vfp_reg_ptr(true, a->vm);
332
- gen_helper_crypto_sha256h2(ptr1, ptr2, ptr3);
333
- tcg_temp_free_ptr(ptr1);
334
- tcg_temp_free_ptr(ptr2);
335
- tcg_temp_free_ptr(ptr3);
336
-
337
- return true;
338
-}
339
-
340
-static bool trans_SHA256SU1_3s(DisasContext *s, arg_SHA256SU1_3s *a)
341
-{
342
- TCGv_ptr ptr1, ptr2, ptr3;
343
-
344
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
345
- !dc_isar_feature(aa32_sha2, s)) {
346
- return false;
347
- }
348
-
349
- /* UNDEF accesses to D16-D31 if they don't exist. */
350
- if (!dc_isar_feature(aa32_simd_r32, s) &&
351
- ((a->vd | a->vn | a->vm) & 0x10)) {
352
- return false;
353
- }
354
-
355
- if ((a->vn | a->vm | a->vd) & 1) {
356
- return false;
357
- }
358
-
359
- if (!vfp_access_check(s)) {
360
- return true;
361
- }
362
-
363
- ptr1 = vfp_reg_ptr(true, a->vd);
364
- ptr2 = vfp_reg_ptr(true, a->vn);
365
- ptr3 = vfp_reg_ptr(true, a->vm);
366
- gen_helper_crypto_sha256su1(ptr1, ptr2, ptr3);
367
- tcg_temp_free_ptr(ptr1);
368
- tcg_temp_free_ptr(ptr2);
369
- tcg_temp_free_ptr(ptr3);
370
-
371
- return true;
372
-}
373
+DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
374
+DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
375
+DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
376
377
#define DO_3SAME_64(INSN, FUNC) \
378
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
379
diff --git a/target/arm/translate.c b/target/arm/translate.c
40
diff --git a/target/arm/translate.c b/target/arm/translate.c
380
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
381
--- a/target/arm/translate.c
42
--- a/target/arm/translate.c
382
+++ b/target/arm/translate.c
43
+++ b/target/arm/translate.c
383
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
44
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
384
int vec_size;
45
s->base.is_jmp = DISAS_SMC;
385
uint32_t imm;
46
}
386
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
47
387
- TCGv_ptr ptr1, ptr2;
48
-static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
388
+ TCGv_ptr ptr1;
49
+static void gen_exception_internal_insn(DisasContext *s, int excp)
389
TCGv_i64 tmp64;
50
{
390
51
gen_set_condexec(s);
391
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
52
- gen_update_pc(s, pc - s->pc_curr);
392
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
53
+ gen_update_pc(s, 0);
393
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
54
gen_exception_internal(excp);
394
return 1;
55
s->base.is_jmp = DISAS_NORETURN;
395
}
56
}
396
- ptr1 = vfp_reg_ptr(true, rd);
57
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
397
- ptr2 = vfp_reg_ptr(true, rm);
58
*/
398
-
59
if (semihosting_enabled(s->current_el != 0) &&
399
- gen_helper_crypto_sha1h(ptr1, ptr2);
60
(imm == (s->thumb ? 0x3c : 0xf000))) {
400
-
61
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
401
- tcg_temp_free_ptr(ptr1);
62
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
402
- tcg_temp_free_ptr(ptr2);
63
return;
403
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
64
}
404
+ gen_helper_crypto_sha1h);
65
405
break;
66
@@ -XXX,XX +XXX,XX @@ static bool trans_BKPT(DisasContext *s, arg_BKPT *a)
406
case NEON_2RM_SHA1SU1:
67
if (arm_dc_feature(s, ARM_FEATURE_M) &&
407
if ((rm | rd) & 1) {
68
semihosting_enabled(s->current_el == 0) &&
408
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
69
(a->imm == 0xab)) {
409
} else if (!dc_isar_feature(aa32_sha1, s)) {
70
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
410
return 1;
71
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
411
}
72
} else {
412
- ptr1 = vfp_reg_ptr(true, rd);
73
gen_exception_bkpt_insn(s, syn_aa32_bkpt(a->imm, false));
413
- ptr2 = vfp_reg_ptr(true, rm);
74
}
414
- if (q) {
75
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
415
- gen_helper_crypto_sha256su0(ptr1, ptr2);
76
if (!arm_dc_feature(s, ARM_FEATURE_M) &&
416
- } else {
77
semihosting_enabled(s->current_el == 0) &&
417
- gen_helper_crypto_sha1su1(ptr1, ptr2);
78
(a->imm == semihost_imm)) {
418
- }
79
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
419
- tcg_temp_free_ptr(ptr1);
80
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
420
- tcg_temp_free_ptr(ptr2);
81
} else {
421
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
82
gen_update_pc(s, curr_insn_len(s));
422
+ q ? gen_helper_crypto_sha256su0
83
s->svc_imm = a->imm;
423
+ : gen_helper_crypto_sha1su1);
424
break;
425
-
426
case NEON_2RM_VMVN:
427
tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
428
break;
429
--
84
--
430
2.20.1
85
2.25.1
431
86
432
87
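On the crypto-helper change above: the new desc argument is what lets each helper clear the part of the vector register beyond the 16 bytes it actually writes, which is the "existing bug vs SVE" the commit message mentions, since SVE registers can be wider than 16 bytes. A minimal standalone sketch of the idea (buffer sizes and names invented; in the patch itself clear_tail_16() derives the sizes from the simd descriptor):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Zero everything between the bytes the helper wrote (oprsz) and the
 * register's maximum size (maxsz), so stale high bytes don't survive. */
static void toy_clear_tail(uint8_t *vd, size_t oprsz, size_t maxsz)
{
    memset(vd + oprsz, 0, maxsz - oprsz);
}

int main(void)
{
    uint8_t zd[32];

    memset(zd, 0xff, sizeof(zd));   /* pretend the wide register held data */
    memset(zd, 0xaa, 16);           /* the helper writes its 128-bit result */
    toy_clear_tail(zd, 16, sizeof(zd));

    for (size_t i = 0; i < sizeof(zd); i++) {
        printf("%02x%s", (unsigned)zd[i], (i % 16 == 15) ? "\n" : " ");
    }
    return 0;
}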
1
Convert the VQSHLU and VQSHL 2-reg-shift insns to decodetree.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
These are the last of the simple shift-by-immediate insns.
3
2
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200522145520.6778-5-peter.maydell@linaro.org
7
---
9
---
8
target/arm/neon-dp.decode | 15 +++++
10
target/arm/translate.c | 37 +++++++++++++++++++++----------------
9
target/arm/translate-neon.inc.c | 108 +++++++++++++++++++++++++++++++
11
1 file changed, 21 insertions(+), 16 deletions(-)
10
target/arm/translate.c | 110 +-------------------------------
11
3 files changed, 126 insertions(+), 107 deletions(-)
12
12
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
16
+++ b/target/arm/neon-dp.decode
17
@@ -XXX,XX +XXX,XX @@ VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
18
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
19
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
20
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
21
+
22
+VQSHLU_64_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_d
23
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_s
24
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_h
25
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_b
26
+
27
+VQSHL_S_64_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
28
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
29
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
30
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
31
+
32
+VQSHL_U_64_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
33
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
34
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
35
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
36
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-neon.inc.c
39
+++ b/target/arm/translate-neon.inc.c
40
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
41
return do_vector_2sh(s, a, tcg_gen_gvec_shri);
42
}
43
}
44
+
45
+static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
46
+ NeonGenTwo64OpEnvFn *fn)
47
+{
48
+ /*
49
+ * 2-reg-and-shift operations, size == 3 case, where the
50
+ * function needs to be passed cpu_env.
51
+ */
52
+ TCGv_i64 constimm;
53
+ int pass;
54
+
55
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
56
+ return false;
57
+ }
58
+
59
+ /* UNDEF accesses to D16-D31 if they don't exist. */
60
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
61
+ ((a->vd | a->vm) & 0x10)) {
62
+ return false;
63
+ }
64
+
65
+ if ((a->vm | a->vd) & a->q) {
66
+ return false;
67
+ }
68
+
69
+ if (!vfp_access_check(s)) {
70
+ return true;
71
+ }
72
+
73
+ /*
74
+ * To avoid excessive duplication of ops we implement shift
75
+ * by immediate using the variable shift operations.
76
+ */
77
+ constimm = tcg_const_i64(dup_const(a->size, a->shift));
78
+
79
+ for (pass = 0; pass < a->q + 1; pass++) {
80
+ TCGv_i64 tmp = tcg_temp_new_i64();
81
+
82
+ neon_load_reg64(tmp, a->vm + pass);
83
+ fn(tmp, cpu_env, tmp, constimm);
84
+ neon_store_reg64(tmp, a->vd + pass);
85
+ }
86
+ tcg_temp_free_i64(constimm);
87
+ return true;
88
+}
89
+
90
+static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
91
+ NeonGenTwoOpEnvFn *fn)
92
+{
93
+ /*
94
+ * 2-reg-and-shift operations, size < 3 case, where the
95
+ * helper needs to be passed cpu_env.
96
+ */
97
+ TCGv_i32 constimm;
98
+ int pass;
99
+
100
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
101
+ return false;
102
+ }
103
+
104
+ /* UNDEF accesses to D16-D31 if they don't exist. */
105
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
106
+ ((a->vd | a->vm) & 0x10)) {
107
+ return false;
108
+ }
109
+
110
+ if ((a->vm | a->vd) & a->q) {
111
+ return false;
112
+ }
113
+
114
+ if (!vfp_access_check(s)) {
115
+ return true;
116
+ }
117
+
118
+ /*
119
+ * To avoid excessive duplication of ops we implement shift
120
+ * by immediate using the variable shift operations.
121
+ */
122
+ constimm = tcg_const_i32(dup_const(a->size, a->shift));
123
+
124
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
125
+ TCGv_i32 tmp = neon_load_reg(a->vm, pass);
126
+ fn(tmp, cpu_env, tmp, constimm);
127
+ neon_store_reg(a->vd, pass, tmp);
128
+ }
129
+ tcg_temp_free_i32(constimm);
130
+ return true;
131
+}
132
+
133
+#define DO_2SHIFT_ENV(INSN, FUNC) \
134
+ static bool trans_##INSN##_64_2sh(DisasContext *s, arg_2reg_shift *a) \
135
+ { \
136
+ return do_2shift_env_64(s, a, gen_helper_neon_##FUNC##64); \
137
+ } \
138
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
139
+ { \
140
+ static NeonGenTwoOpEnvFn * const fns[] = { \
141
+ gen_helper_neon_##FUNC##8, \
142
+ gen_helper_neon_##FUNC##16, \
143
+ gen_helper_neon_##FUNC##32, \
144
+ }; \
145
+ assert(a->size < ARRAY_SIZE(fns)); \
146
+ return do_2shift_env_32(s, a, fns[a->size]); \
147
+ }
148
+
149
+DO_2SHIFT_ENV(VQSHLU, qshlu_s)
150
+DO_2SHIFT_ENV(VQSHL_U, qshl_u)
151
+DO_2SHIFT_ENV(VQSHL_S, qshl_s)
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
15
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
16
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
17
@@ -XXX,XX +XXX,XX @@ static uint32_t read_pc(DisasContext *s)
18
return s->pc_curr + (s->thumb ? 4 : 8);
19
}
20
21
+/* The pc_curr difference for an architectural jump. */
22
+static target_long jmp_diff(DisasContext *s, target_long diff)
23
+{
24
+ return diff + (s->thumb ? 4 : 8);
25
+}
26
+
27
/* Set a variable to the value of a CPU register. */
28
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
29
{
30
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
31
* cpu_loop_exec. Any live exit_requests will be processed as we
32
* enter the next TB.
33
*/
34
-static void gen_goto_tb(DisasContext *s, int n, int diff)
35
+static void gen_goto_tb(DisasContext *s, int n, target_long diff)
36
{
37
target_ulong dest = s->pc_curr + diff;
38
39
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
40
}
41
42
/* Jump, specifying which TB number to use if we gen_goto_tb() */
43
-static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
44
+static void gen_jmp_tb(DisasContext *s, target_long diff, int tbno)
45
{
46
- int diff = dest - s->pc_curr;
47
-
48
if (unlikely(s->ss_active)) {
49
/* An indirect jump so that we still trigger the debug exception. */
50
gen_update_pc(s, diff);
51
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
157
}
52
}
158
}
53
}
159
54
160
-#define GEN_NEON_INTEGER_OP_ENV(name) do { \
55
-static inline void gen_jmp(DisasContext *s, uint32_t dest)
161
- switch ((size << 1) | u) { \
56
+static inline void gen_jmp(DisasContext *s, target_long diff)
162
- case 0: \
163
- gen_helper_neon_##name##_s8(tmp, cpu_env, tmp, tmp2); \
164
- break; \
165
- case 1: \
166
- gen_helper_neon_##name##_u8(tmp, cpu_env, tmp, tmp2); \
167
- break; \
168
- case 2: \
169
- gen_helper_neon_##name##_s16(tmp, cpu_env, tmp, tmp2); \
170
- break; \
171
- case 3: \
172
- gen_helper_neon_##name##_u16(tmp, cpu_env, tmp, tmp2); \
173
- break; \
174
- case 4: \
175
- gen_helper_neon_##name##_s32(tmp, cpu_env, tmp, tmp2); \
176
- break; \
177
- case 5: \
178
- gen_helper_neon_##name##_u32(tmp, cpu_env, tmp, tmp2); \
179
- break; \
180
- default: return 1; \
181
- }} while (0)
182
-
183
static TCGv_i32 neon_load_scratch(int scratch)
184
{
57
{
185
TCGv_i32 tmp = tcg_temp_new_i32();
58
- gen_jmp_tb(s, dest, 0);
186
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
59
+ gen_jmp_tb(s, diff, 0);
187
int size;
60
}
188
int shift;
61
189
int pass;
62
static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
190
- int count;
63
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
191
int u;
64
192
int vec_size;
65
static bool trans_B(DisasContext *s, arg_i *a)
193
uint32_t imm;
66
{
194
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
67
- gen_jmp(s, read_pc(s) + a->imm);
195
case 3: /* VRSRA */
68
+ gen_jmp(s, jmp_diff(s, a->imm));
196
case 4: /* VSRI */
69
return true;
197
case 5: /* VSHL, VSLI */
70
}
198
+ case 6: /* VQSHLU */
71
199
+ case 7: /* VQSHL */
72
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
200
return 1; /* handled by decodetree */
73
return true;
201
default:
74
}
202
break;
75
arm_skip_unless(s, a->cond);
203
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
76
- gen_jmp(s, read_pc(s) + a->imm);
204
size--;
77
+ gen_jmp(s, jmp_diff(s, a->imm));
205
}
78
return true;
206
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
79
}
207
- if (op < 8) {
80
208
- /* Shift by immediate:
81
static bool trans_BL(DisasContext *s, arg_i *a)
209
- VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
82
{
210
- if (q && ((rd | rm) & 1)) {
83
tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
211
- return 1;
84
- gen_jmp(s, read_pc(s) + a->imm);
212
- }
85
+ gen_jmp(s, jmp_diff(s, a->imm));
213
- if (!u && (op == 4 || op == 6)) {
86
return true;
214
- return 1;
87
}
215
- }
88
216
- /* Right shifts are encoded as N - shift, where N is the
89
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
217
- element size in bits. */
90
}
218
- if (op <= 4) {
91
tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
219
- shift = shift - (1 << (size + 3));
92
store_cpu_field_constant(!s->thumb, thumb);
220
- }
93
- gen_jmp(s, (read_pc(s) & ~3) + a->imm);
221
-
94
+ /* This jump is computed from an aligned PC: subtract off the low bits. */
222
- if (size == 3) {
95
+ gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
223
- count = q + 1;
96
return true;
224
- } else {
97
}
225
- count = q ? 4: 2;
98
226
- }
99
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
227
-
100
* when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
228
- /* To avoid excessive duplication of ops we implement shift
101
*/
229
- * by immediate using the variable shift operations.
102
}
230
- */
103
- gen_jmp_tb(s, s->base.pc_next, 1);
231
- imm = dup_const(size, shift);
104
+ gen_jmp_tb(s, curr_insn_len(s), 1);
232
-
105
233
- for (pass = 0; pass < count; pass++) {
106
gen_set_label(nextlabel);
234
- if (size == 3) {
107
- gen_jmp(s, read_pc(s) + a->imm);
235
- neon_load_reg64(cpu_V0, rm + pass);
108
+ gen_jmp(s, jmp_diff(s, a->imm));
236
- tcg_gen_movi_i64(cpu_V1, imm);
109
return true;
237
- switch (op) {
110
}
238
- case 6: /* VQSHLU */
111
239
- gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
112
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
240
- cpu_V0, cpu_V1);
113
241
- break;
114
if (a->f) {
242
- case 7: /* VQSHL */
115
/* Loop-forever: just jump back to the loop start */
243
- if (u) {
116
- gen_jmp(s, read_pc(s) - a->imm);
244
- gen_helper_neon_qshl_u64(cpu_V0, cpu_env,
117
+ gen_jmp(s, jmp_diff(s, -a->imm));
245
- cpu_V0, cpu_V1);
118
return true;
246
- } else {
119
}
247
- gen_helper_neon_qshl_s64(cpu_V0, cpu_env,
120
248
- cpu_V0, cpu_V1);
121
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
249
- }
122
tcg_temp_free_i32(decr);
250
- break;
123
}
251
- default:
124
/* Jump back to the loop start */
252
- g_assert_not_reached();
125
- gen_jmp(s, read_pc(s) - a->imm);
253
- }
126
+ gen_jmp(s, jmp_diff(s, -a->imm));
254
- neon_store_reg64(cpu_V0, rd + pass);
127
255
- } else { /* size < 3 */
128
gen_set_label(loopend);
256
- /* Operands in T0 and T1. */
129
if (a->tp) {
257
- tmp = neon_load_reg(rm, pass);
130
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
258
- tmp2 = tcg_temp_new_i32();
131
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
259
- tcg_gen_movi_i32(tmp2, imm);
132
}
260
- switch (op) {
133
/* End TB, continuing to following insn */
261
- case 6: /* VQSHLU */
134
- gen_jmp_tb(s, s->base.pc_next, 1);
262
- switch (size) {
135
+ gen_jmp_tb(s, curr_insn_len(s), 1);
263
- case 0:
136
return true;
264
- gen_helper_neon_qshlu_s8(tmp, cpu_env,
137
}
265
- tmp, tmp2);
138
266
- break;
139
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
267
- case 1:
140
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
268
- gen_helper_neon_qshlu_s16(tmp, cpu_env,
141
tmp, 0, s->condlabel);
269
- tmp, tmp2);
142
tcg_temp_free_i32(tmp);
270
- break;
143
- gen_jmp(s, read_pc(s) + a->imm);
271
- case 2:
144
+ gen_jmp(s, jmp_diff(s, a->imm));
272
- gen_helper_neon_qshlu_s32(tmp, cpu_env,
145
return true;
273
- tmp, tmp2);
146
}
274
- break;
147
275
- default:
276
- abort();
277
- }
278
- break;
279
- case 7: /* VQSHL */
280
- GEN_NEON_INTEGER_OP_ENV(qshl);
281
- break;
282
- default:
283
- g_assert_not_reached();
284
- }
285
- tcg_temp_free_i32(tmp2);
286
- neon_store_reg(rd, pass, tmp);
287
- }
288
- } /* for pass */
289
- } else if (op < 10) {
290
+ if (op < 10) {
291
/* Shift by immediate and narrow:
292
VSHRN, VRSHRN, VQSHRN, VQRSHRN. */
293
int input_unsigned = (op == 8) ? !u : u;
294
--
148
--
295
2.20.1
149
2.25.1
296
297
diff view generated by jsdifflib
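
Aside: the jmp_diff() helper added above folds the AArch32 pipeline offset into a pc_curr-relative displacement instead of computing an absolute target. A minimal standalone sketch of that calculation (arch_branch_offset is an invented name; the real code is jmp_diff() feeding gen_jmp()):

#include <stdbool.h>

/* Invented name; mirrors jmp_diff() from the patch above. */
static long arch_branch_offset(bool thumb, long imm)
{
    /* The AArch32 PC reads as the insn address + 8 in ARM state, + 4 in Thumb. */
    return imm + (thumb ? 4 : 8);
}

Keeping branch targets as displacements from pc_curr, rather than absolute addresses, is what later allows the PC to be emitted relative to cpu_pc once TARGET_TB_PCREL is enabled.
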
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Do not yet convert the helpers to loop over opr_sz, but the
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
descriptor allows the vector tail to be cleared. Which fixes
5
an existing bug vs SVE.
6
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200514212831.31248-4-richard.henderson@linaro.org
7
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
target/arm/helper.h | 15 +++++++-----
10
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++-----------
13
target/arm/crypto_helper.c | 37 +++++++++++++++++++++++-----
11
1 file changed, 29 insertions(+), 12 deletions(-)
14
target/arm/translate-a64.c | 50 ++++++++++++--------------------------
15
3 files changed, 55 insertions(+), 47 deletions(-)
16
12
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.h
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
22
DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
23
DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
24
25
-DEF_HELPER_FLAGS_3(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
26
-DEF_HELPER_FLAGS_3(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
27
-DEF_HELPER_FLAGS_2(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr)
28
-DEF_HELPER_FLAGS_3(crypto_sha512su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
29
+DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, i32)
34
35
DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
36
-DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
37
-DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
38
+DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
39
+ void, ptr, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, i32)
42
43
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
44
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/crypto_helper.c
48
+++ b/target/arm/crypto_helper.c
49
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
50
#define CR_ST_WORD(state, i) (state.words[i])
51
#endif
52
53
+/*
54
+ * The caller has not been converted to full gvec, and so only
55
+ * modifies the low 16 bytes of the vector register.
56
+ */
57
+static void clear_tail_16(void *vd, uint32_t desc)
58
+{
59
+ int opr_sz = simd_oprsz(desc);
60
+ int max_sz = simd_maxsz(desc);
61
+
62
+ assert(opr_sz == 16);
63
+ clear_tail(vd, opr_sz, max_sz);
64
+}
65
+
66
static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
67
uint64_t *rm, bool decrypt)
68
{
69
@@ -XXX,XX +XXX,XX @@ static uint64_t s1_512(uint64_t x)
70
return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
71
}
72
73
-void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
74
+void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm, uint32_t desc)
75
{
76
uint64_t *rd = vd;
77
uint64_t *rn = vn;
78
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
79
80
rd[0] = d0;
81
rd[1] = d1;
82
+
83
+ clear_tail_16(vd, desc);
84
}
85
86
-void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
87
+void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm, uint32_t desc)
88
{
89
uint64_t *rd = vd;
90
uint64_t *rn = vn;
91
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
92
93
rd[0] = d0;
94
rd[1] = d1;
95
+
96
+ clear_tail_16(vd, desc);
97
}
98
99
-void HELPER(crypto_sha512su0)(void *vd, void *vn)
100
+void HELPER(crypto_sha512su0)(void *vd, void *vn, uint32_t desc)
101
{
102
uint64_t *rd = vd;
103
uint64_t *rn = vn;
104
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su0)(void *vd, void *vn)
105
106
rd[0] = d0;
107
rd[1] = d1;
108
+
109
+ clear_tail_16(vd, desc);
110
}
111
112
-void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
113
+void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm, uint32_t desc)
114
{
115
uint64_t *rd = vd;
116
uint64_t *rn = vn;
117
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
118
119
rd[0] += s1_512(rn[0]) + rm[0];
120
rd[1] += s1_512(rn[1]) + rm[1];
121
+
122
+ clear_tail_16(vd, desc);
123
}
124
125
-void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
126
+void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm, uint32_t desc)
127
{
128
uint64_t *rd = vd;
129
uint64_t *rn = vn;
130
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
131
132
rd[0] = d.l[0];
133
rd[1] = d.l[1];
134
+
135
+ clear_tail_16(vd, desc);
136
}
137
138
-void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
139
+void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
140
{
141
uint64_t *rd = vd;
142
uint64_t *rn = vn;
143
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
144
145
rd[0] = d.l[0];
146
rd[1] = d.l[1];
147
+
148
+ clear_tail_16(vd, desc);
149
}
150
151
void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
152
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
153
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate-a64.c
15
--- a/target/arm/translate-a64.c
155
+++ b/target/arm/translate-a64.c
16
+++ b/target/arm/translate-a64.c
156
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
17
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
157
int rn = extract32(insn, 5, 5);
158
int rd = extract32(insn, 0, 5);
159
bool feature;
160
- CryptoThreeOpFn *genfn = NULL;
161
gen_helper_gvec_3 *oolfn = NULL;
162
GVecGen3Fn *gvecfn = NULL;
163
164
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
165
switch (opcode) {
166
case 0: /* SHA512H */
167
feature = dc_isar_feature(aa64_sha512, s);
168
- genfn = gen_helper_crypto_sha512h;
169
+ oolfn = gen_helper_crypto_sha512h;
170
break;
171
case 1: /* SHA512H2 */
172
feature = dc_isar_feature(aa64_sha512, s);
173
- genfn = gen_helper_crypto_sha512h2;
174
+ oolfn = gen_helper_crypto_sha512h2;
175
break;
176
case 2: /* SHA512SU1 */
177
feature = dc_isar_feature(aa64_sha512, s);
178
- genfn = gen_helper_crypto_sha512su1;
179
+ oolfn = gen_helper_crypto_sha512su1;
180
break;
181
case 3: /* RAX1 */
182
feature = dc_isar_feature(aa64_sha3, s);
183
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
184
switch (opcode) {
185
case 0: /* SM3PARTW1 */
186
feature = dc_isar_feature(aa64_sm3, s);
187
- genfn = gen_helper_crypto_sm3partw1;
188
+ oolfn = gen_helper_crypto_sm3partw1;
189
break;
190
case 1: /* SM3PARTW2 */
191
feature = dc_isar_feature(aa64_sm3, s);
192
- genfn = gen_helper_crypto_sm3partw2;
193
+ oolfn = gen_helper_crypto_sm3partw2;
194
break;
195
case 2: /* SM4EKEY */
196
feature = dc_isar_feature(aa64_sm4, s);
197
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
198
199
if (oolfn) {
200
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
201
- } else if (gvecfn) {
202
- gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
203
} else {
204
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
205
-
206
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
207
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
208
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
209
-
210
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
211
-
212
- tcg_temp_free_ptr(tcg_rd_ptr);
213
- tcg_temp_free_ptr(tcg_rn_ptr);
214
- tcg_temp_free_ptr(tcg_rm_ptr);
215
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
216
}
18
}
217
}
19
}
218
20
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
21
+static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
220
int opcode = extract32(insn, 10, 2);
22
+{
221
int rn = extract32(insn, 5, 5);
23
+ tcg_gen_movi_i64(dest, s->pc_curr + diff);
222
int rd = extract32(insn, 0, 5);
24
+}
223
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
25
+
224
bool feature;
26
void gen_a64_update_pc(DisasContext *s, target_long diff)
225
- CryptoTwoOpFn *genfn;
27
{
226
- gen_helper_gvec_3 *oolfn = NULL;
28
- tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
227
29
+ gen_pc_plus_diff(s, cpu_pc, diff);
228
switch (opcode) {
30
}
229
case 0: /* SHA512SU0 */
31
230
feature = dc_isar_feature(aa64_sha512, s);
32
/*
231
- genfn = gen_helper_crypto_sha512su0;
33
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
34
35
if (insn & (1U << 31)) {
36
/* BL Branch with link */
37
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
38
+ gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
39
}
40
41
/* B Branch / BL Branch with link */
42
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
43
default:
44
goto do_unallocated;
45
}
46
- gen_a64_set_pc(s, dst);
47
/* BLR also needs to load return address */
48
if (opc == 1) {
49
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
50
+ TCGv_i64 lr = cpu_reg(s, 30);
51
+ if (dst == lr) {
52
+ TCGv_i64 tmp = new_tmp_a64(s);
53
+ tcg_gen_mov_i64(tmp, dst);
54
+ dst = tmp;
55
+ }
56
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
57
}
58
+ gen_a64_set_pc(s, dst);
232
break;
59
break;
233
case 1: /* SM4E */
60
234
feature = dc_isar_feature(aa64_sm4, s);
61
case 8: /* BRAA */
235
- oolfn = gen_helper_crypto_sm4e;
62
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
63
} else {
64
dst = cpu_reg(s, rn);
65
}
66
- gen_a64_set_pc(s, dst);
67
/* BLRAA also needs to load return address */
68
if (opc == 9) {
69
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
70
+ TCGv_i64 lr = cpu_reg(s, 30);
71
+ if (dst == lr) {
72
+ TCGv_i64 tmp = new_tmp_a64(s);
73
+ tcg_gen_mov_i64(tmp, dst);
74
+ dst = tmp;
75
+ }
76
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
77
}
78
+ gen_a64_set_pc(s, dst);
236
break;
79
break;
237
default:
80
238
unallocated_encoding(s);
81
case 4: /* ERET */
239
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
82
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
240
return;
83
84
tcg_rt = cpu_reg(s, rt);
85
86
- clean_addr = tcg_constant_i64(s->pc_curr + imm);
87
+ clean_addr = new_tmp_a64(s);
88
+ gen_pc_plus_diff(s, clean_addr, imm);
89
if (is_vector) {
90
do_fp_ld(s, rt, clean_addr, size);
91
} else {
92
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
93
static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
94
{
95
unsigned int page, rd;
96
- uint64_t base;
97
- uint64_t offset;
98
+ int64_t offset;
99
100
page = extract32(insn, 31, 1);
101
/* SignExtend(immhi:immlo) -> offset */
102
offset = sextract64(insn, 5, 19);
103
offset = offset << 2 | extract32(insn, 29, 2);
104
rd = extract32(insn, 0, 5);
105
- base = s->pc_curr;
106
107
if (page) {
108
/* ADRP (page based) */
109
- base &= ~0xfff;
110
offset <<= 12;
111
+ /* The page offset is ok for TARGET_TB_PCREL. */
112
+ offset -= s->pc_curr & 0xfff;
241
}
113
}
242
114
243
- if (oolfn) {
115
- tcg_gen_movi_i64(cpu_reg(s, rd), base + offset);
244
- gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
116
+ gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
245
- return;
246
+ switch (opcode) {
247
+ case 0: /* SHA512SU0 */
248
+ gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
249
+ break;
250
+ case 1: /* SM4E */
251
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
252
+ break;
253
+ default:
254
+ g_assert_not_reached();
255
}
256
-
257
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
258
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
259
-
260
- genfn(tcg_rd_ptr, tcg_rn_ptr);
261
-
262
- tcg_temp_free_ptr(tcg_rd_ptr);
263
- tcg_temp_free_ptr(tcg_rn_ptr);
264
}
117
}
265
118
266
/* Crypto four-register
119
/*
267
--
120
--
268
2.20.1
121
2.25.1
269
270
diff view generated by jsdifflib
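
Aside: the clear_tail_16() call added above is what fixes the SVE high-bits bug; after an AdvSIMD-sized (16-byte) write, the rest of the vector register up to its full size must be zeroed. A rough standalone model, under the assumption that both sizes are multiples of 8 as for real vector registers (clear_vreg_tail is an invented name; the series uses clear_tail() from vec_internal.h):

#include <stdint.h>

/* Invented name, modelled on clear_tail()/clear_tail_16() in the series. */
static void clear_vreg_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
{
    uint64_t *d = (uint64_t *)((char *)vd + opr_sz);

    /* Zero the untouched part of the register, 8 bytes at a time. */
    for (uintptr_t i = opr_sz; i < max_sz; i += 8) {
        *d++ = 0;
    }
}
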
1
Convert the remaining Neon narrowing shifts to decodetree:
1
From: Richard Henderson <richard.henderson@linaro.org>
2
* VQSHRN
3
* VQRSHRN
4
2
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-7-peter.maydell@linaro.org
8
---
9
---
9
target/arm/neon-dp.decode | 20 ++++++
10
target/arm/translate.c | 38 +++++++++++++++++++++-----------------
10
target/arm/translate-neon.inc.c | 15 +++++
11
1 file changed, 21 insertions(+), 17 deletions(-)
11
target/arm/translate.c | 110 +-------------------------------
12
3 files changed, 37 insertions(+), 108 deletions(-)
13
12
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ VQSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
19
VQRSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
20
VQRSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
21
VQRSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
22
+
23
+# VQSHRN with signed input
24
+VQSHRN_S64_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_d
25
+VQSHRN_S32_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_s
26
+VQSHRN_S16_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
27
+
28
+# VQRSHRN with signed input
29
+VQRSHRN_S64_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
30
+VQRSHRN_S32_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
31
+VQRSHRN_S16_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
32
+
33
+# VQSHRN with unsigned input
34
+VQSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_d
35
+VQSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_s
36
+VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
37
+
38
+# VQRSHRN with unsigned input
39
+VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
40
+VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
41
+VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
42
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/translate-neon.inc.c
45
+++ b/target/arm/translate-neon.inc.c
46
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRUN_16, gen_helper_neon_shl_s16, gen_helper_neon_unarrow_sat8)
47
DO_2SN_64(VQRSHRUN_64, gen_helper_neon_rshl_s64, gen_helper_neon_unarrow_sat32)
48
DO_2SN_32(VQRSHRUN_32, gen_helper_neon_rshl_s32, gen_helper_neon_unarrow_sat16)
49
DO_2SN_32(VQRSHRUN_16, gen_helper_neon_rshl_s16, gen_helper_neon_unarrow_sat8)
50
+DO_2SN_64(VQSHRN_S64, gen_sshl_i64, gen_helper_neon_narrow_sat_s32)
51
+DO_2SN_32(VQSHRN_S32, gen_sshl_i32, gen_helper_neon_narrow_sat_s16)
52
+DO_2SN_32(VQSHRN_S16, gen_helper_neon_shl_s16, gen_helper_neon_narrow_sat_s8)
53
+
54
+DO_2SN_64(VQRSHRN_S64, gen_helper_neon_rshl_s64, gen_helper_neon_narrow_sat_s32)
55
+DO_2SN_32(VQRSHRN_S32, gen_helper_neon_rshl_s32, gen_helper_neon_narrow_sat_s16)
56
+DO_2SN_32(VQRSHRN_S16, gen_helper_neon_rshl_s16, gen_helper_neon_narrow_sat_s8)
57
+
58
+DO_2SN_64(VQSHRN_U64, gen_ushl_i64, gen_helper_neon_narrow_sat_u32)
59
+DO_2SN_32(VQSHRN_U32, gen_ushl_i32, gen_helper_neon_narrow_sat_u16)
60
+DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
61
+
62
+DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
63
+DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
64
+DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
65
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
66
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate.c
15
--- a/target/arm/translate.c
68
+++ b/target/arm/translate.c
16
+++ b/target/arm/translate.c
69
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_unarrow_sats(int size, TCGv_i32 dest, TCGv_i64 src)
17
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
70
}
18
}
71
}
19
}
72
20
73
-static inline void gen_neon_shift_narrow(int size, TCGv_i32 var, TCGv_i32 shift,
21
-/* The architectural value of PC. */
74
- int q, int u)
22
-static uint32_t read_pc(DisasContext *s)
75
-{
23
-{
76
- if (q) {
24
- return s->pc_curr + (s->thumb ? 4 : 8);
77
- if (u) {
78
- switch (size) {
79
- case 1: gen_helper_neon_rshl_u16(var, var, shift); break;
80
- case 2: gen_helper_neon_rshl_u32(var, var, shift); break;
81
- default: abort();
82
- }
83
- } else {
84
- switch (size) {
85
- case 1: gen_helper_neon_rshl_s16(var, var, shift); break;
86
- case 2: gen_helper_neon_rshl_s32(var, var, shift); break;
87
- default: abort();
88
- }
89
- }
90
- } else {
91
- if (u) {
92
- switch (size) {
93
- case 1: gen_helper_neon_shl_u16(var, var, shift); break;
94
- case 2: gen_ushl_i32(var, var, shift); break;
95
- default: abort();
96
- }
97
- } else {
98
- switch (size) {
99
- case 1: gen_helper_neon_shl_s16(var, var, shift); break;
100
- case 2: gen_sshl_i32(var, var, shift); break;
101
- default: abort();
102
- }
103
- }
104
- }
105
-}
25
-}
106
-
26
-
107
static inline void gen_neon_widen(TCGv_i64 dest, TCGv_i32 src, int size, int u)
27
/* The pc_curr difference for an architectural jump. */
28
static target_long jmp_diff(DisasContext *s, target_long diff)
108
{
29
{
109
if (u) {
30
return diff + (s->thumb ? 4 : 8);
110
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
31
}
111
case 6: /* VQSHLU */
32
112
case 7: /* VQSHL */
33
+static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
113
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
34
+{
114
+ case 9: /* VQSHRN, VQRSHRN */
35
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
115
return 1; /* handled by decodetree */
36
+}
116
default:
37
+
117
break;
38
/* Set a variable to the value of a CPU register. */
118
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
39
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
119
size--;
40
{
120
}
41
if (reg == 15) {
121
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
42
- tcg_gen_movi_i32(var, read_pc(s));
122
- if (op < 10) {
43
+ gen_pc_plus_diff(s, var, jmp_diff(s, 0));
123
- /* Shift by immediate and narrow:
44
} else {
124
- VSHRN, VRSHRN, VQSHRN, VQRSHRN. */
45
tcg_gen_mov_i32(var, cpu_R[reg]);
125
- int input_unsigned = (op == 8) ? !u : u;
46
}
126
- if (rm & 1) {
47
@@ -XXX,XX +XXX,XX @@ TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
127
- return 1;
48
TCGv_i32 tmp = tcg_temp_new_i32();
128
- }
49
129
- shift = shift - (1 << (size + 3));
50
if (reg == 15) {
130
- size++;
51
- tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
131
- if (size == 3) {
52
+ /*
132
- tmp64 = tcg_const_i64(shift);
53
+ * This address is computed from an aligned PC:
133
- neon_load_reg64(cpu_V0, rm);
54
+ * subtract off the low bits.
134
- neon_load_reg64(cpu_V1, rm + 1);
55
+ */
135
- for (pass = 0; pass < 2; pass++) {
56
+ gen_pc_plus_diff(s, tmp, jmp_diff(s, ofs - (s->pc_curr & 3)));
136
- TCGv_i64 in;
57
} else {
137
- if (pass == 0) {
58
tcg_gen_addi_i32(tmp, cpu_R[reg], ofs);
138
- in = cpu_V0;
59
}
139
- } else {
60
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s)
140
- in = cpu_V1;
61
/* Force a TB lookup after an instruction that changes the CPU state. */
141
- }
62
void gen_lookup_tb(DisasContext *s)
142
- if (q) {
63
{
143
- if (input_unsigned) {
64
- tcg_gen_movi_i32(cpu_R[15], s->base.pc_next);
144
- gen_helper_neon_rshl_u64(cpu_V0, in, tmp64);
65
+ gen_pc_plus_diff(s, cpu_R[15], curr_insn_len(s));
145
- } else {
66
s->base.is_jmp = DISAS_EXIT;
146
- gen_helper_neon_rshl_s64(cpu_V0, in, tmp64);
67
}
147
- }
68
148
- } else {
69
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_r(DisasContext *s, arg_BLX_r *a)
149
- if (input_unsigned) {
70
return false;
150
- gen_ushl_i64(cpu_V0, in, tmp64);
71
}
151
- } else {
72
tmp = load_reg(s, a->rm);
152
- gen_sshl_i64(cpu_V0, in, tmp64);
73
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
153
- }
74
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
154
- }
75
gen_bx(s, tmp);
155
- tmp = tcg_temp_new_i32();
76
return true;
156
- gen_neon_narrow_op(op == 8, u, size - 1, tmp, cpu_V0);
77
}
157
- neon_store_reg(rd, pass, tmp);
78
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
158
- } /* for pass */
79
159
- tcg_temp_free_i64(tmp64);
80
static bool trans_BL(DisasContext *s, arg_i *a)
160
- } else {
81
{
161
- if (size == 1) {
82
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
162
- imm = (uint16_t)shift;
83
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
163
- imm |= imm << 16;
84
gen_jmp(s, jmp_diff(s, a->imm));
164
- } else {
85
return true;
165
- /* size == 2 */
86
}
166
- imm = (uint32_t)shift;
87
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
167
- }
88
if (s->thumb && (a->imm & 2)) {
168
- tmp2 = tcg_const_i32(imm);
89
return false;
169
- tmp4 = neon_load_reg(rm + 1, 0);
90
}
170
- tmp5 = neon_load_reg(rm + 1, 1);
91
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
171
- for (pass = 0; pass < 2; pass++) {
92
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
172
- if (pass == 0) {
93
store_cpu_field_constant(!s->thumb, thumb);
173
- tmp = neon_load_reg(rm, 0);
94
/* This jump is computed from an aligned PC: subtract off the low bits. */
174
- } else {
95
gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
175
- tmp = tmp4;
96
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
176
- }
97
static bool trans_BL_BLX_prefix(DisasContext *s, arg_BL_BLX_prefix *a)
177
- gen_neon_shift_narrow(size, tmp, tmp2, q,
98
{
178
- input_unsigned);
99
assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
179
- if (pass == 0) {
100
- tcg_gen_movi_i32(cpu_R[14], read_pc(s) + (a->imm << 12));
180
- tmp3 = neon_load_reg(rm, 1);
101
+ gen_pc_plus_diff(s, cpu_R[14], jmp_diff(s, a->imm << 12));
181
- } else {
102
return true;
182
- tmp3 = tmp5;
103
}
183
- }
104
184
- gen_neon_shift_narrow(size, tmp3, tmp2, q,
105
@@ -XXX,XX +XXX,XX @@ static bool trans_BL_suffix(DisasContext *s, arg_BL_suffix *a)
185
- input_unsigned);
106
186
- tcg_gen_concat_i32_i64(cpu_V0, tmp, tmp3);
107
assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
187
- tcg_temp_free_i32(tmp);
108
tcg_gen_addi_i32(tmp, cpu_R[14], (a->imm << 1) | 1);
188
- tcg_temp_free_i32(tmp3);
109
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
189
- tmp = tcg_temp_new_i32();
110
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
190
- gen_neon_narrow_op(op == 8, u, size - 1, tmp, cpu_V0);
111
gen_bx(s, tmp);
191
- neon_store_reg(rd, pass, tmp);
112
return true;
192
- } /* for pass */
113
}
193
- tcg_temp_free_i32(tmp2);
114
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_suffix(DisasContext *s, arg_BLX_suffix *a)
194
- }
115
tmp = tcg_temp_new_i32();
195
- } else if (op == 10) {
116
tcg_gen_addi_i32(tmp, cpu_R[14], a->imm << 1);
196
+ if (op == 10) {
117
tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
197
/* VSHLL, VMOVL */
118
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
198
if (q || (rd & 1)) {
119
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
199
return 1;
120
gen_bx(s, tmp);
121
return true;
122
}
123
@@ -XXX,XX +XXX,XX @@ static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
124
tcg_gen_add_i32(addr, addr, tmp);
125
126
gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), half ? MO_UW : MO_UB);
127
- tcg_temp_free_i32(addr);
128
129
tcg_gen_add_i32(tmp, tmp, tmp);
130
- tcg_gen_addi_i32(tmp, tmp, read_pc(s));
131
+ gen_pc_plus_diff(s, addr, jmp_diff(s, 0));
132
+ tcg_gen_add_i32(tmp, tmp, addr);
133
+ tcg_temp_free_i32(addr);
134
store_reg(s, 15, tmp);
135
return true;
136
}
200
--
137
--
201
2.20.1
138
2.25.1
202
139
203
140
diff view generated by jsdifflib
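
Aside: gen_pc_plus_diff() centralises every place that materialises a PC-derived value. Once TARGET_TB_PCREL is enabled (next patch), those call sites can emit an addition relative to cpu_pc instead of an absolute movi; the addend is simply the distance from the last point at which cpu_pc was brought up to date. A standalone model of that bookkeeping (DisasCtxModel and pc_rel_addend are illustrative names, not the QEMU ones):

#include <stdint.h>

typedef struct {
    uint64_t pc_curr;  /* address of the instruction being translated */
    uint64_t pc_save;  /* value of pc_curr at the last update of cpu_pc */
} DisasCtxModel;

/* Addend that takes cpu_pc (current as of pc_save) to pc_curr + diff. */
static int64_t pc_rel_addend(const DisasCtxModel *s, int64_t diff)
{
    return (int64_t)(s->pc_curr - s->pc_save) + diff;
}

In the real code this addend is applied with tcg_gen_addi_i64() on cpu_pc when TARGET_TB_PCREL is set, and the absolute tcg_gen_movi_i64() form is kept otherwise.
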
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
With this conversion, we will be able to use the same helpers
4
with sve. In particular, pass 3 vector parameters for the
5
3-operand operations; for advsimd the destination register
6
is also an input.
7
8
This also fixes a bug in which we failed to clear the high bits
9
of the SVE register after an AdvSIMD operation.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20200514212831.31248-2-richard.henderson@linaro.org
4
Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
7
---
16
target/arm/helper.h | 6 ++--
8
target/arm/cpu-param.h | 2 +
17
target/arm/vec_internal.h | 33 +++++++++++++++++
9
target/arm/translate.h | 50 +++++++++++++++-
18
target/arm/crypto_helper.c | 72 +++++++++++++++++++++++++++-----------
10
target/arm/cpu.c | 23 ++++----
19
target/arm/translate-a64.c | 55 ++++++++++++++++++-----------
11
target/arm/translate-a64.c | 64 +++++++++++++-------
20
target/arm/translate.c | 27 +++++++-------
12
target/arm/translate-m-nocp.c | 2 +-
21
target/arm/vec_helper.c | 12 +------
13
target/arm/translate.c | 108 +++++++++++++++++++++++-----------
22
6 files changed, 138 insertions(+), 67 deletions(-)
14
6 files changed, 178 insertions(+), 71 deletions(-)
23
create mode 100644 target/arm/vec_internal.h
24
15
25
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
26
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.h
18
--- a/target/arm/cpu-param.h
28
+++ b/target/arm/helper.h
19
+++ b/target/arm/cpu-param.h
29
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip8, TCG_CALL_NO_RWG, void, ptr, ptr)
30
DEF_HELPER_FLAGS_2(neon_qzip16, TCG_CALL_NO_RWG, void, ptr, ptr)
31
DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
32
33
-DEF_HELPER_FLAGS_3(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
37
DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
39
DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
40
DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
41
42
-DEF_HELPER_FLAGS_2(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr)
43
-DEF_HELPER_FLAGS_3(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
44
+DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
46
47
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
48
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
49
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
50
new file mode 100644
51
index XXXXXXX..XXXXXXX
52
--- /dev/null
53
+++ b/target/arm/vec_internal.h
54
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
21
# define TARGET_PAGE_BITS_VARY
22
# define TARGET_PAGE_BITS_MIN 10
23
24
+# define TARGET_TB_PCREL 1
25
+
26
/*
27
* Cache the attrs and shareability fields from the page table entry.
28
*
29
diff --git a/target/arm/translate.h b/target/arm/translate.h
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate.h
32
+++ b/target/arm/translate.h
33
@@ -XXX,XX +XXX,XX @@
34
35
36
/* internal defines */
37
+
55
+/*
38
+/*
56
+ * ARM AdvSIMD / SVE Vector Helpers
39
+ * Save pc_save across a branch, so that we may restore the value from
57
+ *
40
+ * before the branch at the point the label is emitted.
58
+ * Copyright (c) 2020 Linaro
59
+ *
60
+ * This library is free software; you can redistribute it and/or
61
+ * modify it under the terms of the GNU Lesser General Public
62
+ * License as published by the Free Software Foundation; either
63
+ * version 2 of the License, or (at your option) any later version.
64
+ *
65
+ * This library is distributed in the hope that it will be useful,
66
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
67
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
68
+ * Lesser General Public License for more details.
69
+ *
70
+ * You should have received a copy of the GNU Lesser General Public
71
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
72
+ */
41
+ */
42
+typedef struct DisasLabel {
43
+ TCGLabel *label;
44
+ target_ulong pc_save;
45
+} DisasLabel;
73
+
46
+
74
+#ifndef TARGET_ARM_VEC_INTERNALS_H
47
typedef struct DisasContext {
75
+#define TARGET_ARM_VEC_INTERNALS_H
48
DisasContextBase base;
76
+
49
const ARMISARegisters *isar;
77
+static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
50
51
/* The address of the current instruction being translated. */
52
target_ulong pc_curr;
53
+ /*
54
+ * For TARGET_TB_PCREL, the full value of cpu_pc is not known
55
+ * (although the page offset is known). For convenience, the
56
+ * translation loop uses the full virtual address that triggered
57
+ * the translation, from base.pc_start through pc_curr.
58
+ * For efficiency, we do not update cpu_pc for every instruction.
59
+ * Instead, pc_save has the value of pc_curr at the time of the
60
+ * last update to cpu_pc, which allows us to compute the addend
61
+ * needed to bring cpu_pc current: pc_curr - pc_save.
62
+ * If cpu_pc now contains the destination of an indirect branch,
63
+ * pc_save contains -1 to indicate that relative updates are no
64
+ * longer possible.
65
+ */
66
+ target_ulong pc_save;
67
target_ulong page_start;
68
uint32_t insn;
69
/* Nonzero if this instruction has been conditionally skipped. */
70
int condjmp;
71
/* The label that will be jumped to when the instruction is skipped. */
72
- TCGLabel *condlabel;
73
+ DisasLabel condlabel;
74
/* Thumb-2 conditional execution bits. */
75
int condexec_mask;
76
int condexec_cond;
77
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
78
* after decode (ie after any UNDEF checks)
79
*/
80
bool eci_handled;
81
- /* TCG op to rewind to if this turns out to be an invalid ECI state */
82
- TCGOp *insn_eci_rewind;
83
int sctlr_b;
84
MemOp be_data;
85
#if !defined(CONFIG_USER_ONLY)
86
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
87
*/
88
uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);
89
90
+/*
91
+ * gen_disas_label:
92
+ * Create a label and cache a copy of pc_save.
93
+ */
94
+static inline DisasLabel gen_disas_label(DisasContext *s)
78
+{
95
+{
79
+ uint64_t *d = vd + opr_sz;
96
+ return (DisasLabel){
80
+ uintptr_t i;
97
+ .label = gen_new_label(),
81
+
98
+ .pc_save = s->pc_save,
82
+ for (i = opr_sz; i < max_sz; i += 8) {
99
+ };
83
+ *d++ = 0;
84
+ }
85
+}
100
+}
86
+
101
+
87
+#endif /* TARGET_ARM_VEC_INTERNALS_H */
102
+/*
88
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
103
+ * set_disas_label:
89
index XXXXXXX..XXXXXXX 100644
104
+ * Emit a label and restore the cached copy of pc_save.
90
--- a/target/arm/crypto_helper.c
105
+ */
91
+++ b/target/arm/crypto_helper.c
106
+static inline void set_disas_label(DisasContext *s, DisasLabel l)
92
@@ -XXX,XX +XXX,XX @@
93
94
#include "cpu.h"
95
#include "exec/helper-proto.h"
96
+#include "tcg/tcg-gvec-desc.h"
97
#include "crypto/aes.h"
98
+#include "vec_internal.h"
99
100
union CRYPTO_STATE {
101
uint8_t bytes[16];
102
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
103
#define CR_ST_WORD(state, i) (state.words[i])
104
#endif
105
106
-void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
107
+static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
108
+ uint64_t *rm, bool decrypt)
109
{
110
static uint8_t const * const sbox[2] = { AES_sbox, AES_isbox };
111
static uint8_t const * const shift[2] = { AES_shifts, AES_ishifts };
112
- uint64_t *rd = vd;
113
- uint64_t *rm = vm;
114
union CRYPTO_STATE rk = { .l = { rm[0], rm[1] } };
115
- union CRYPTO_STATE st = { .l = { rd[0], rd[1] } };
116
+ union CRYPTO_STATE st = { .l = { rn[0], rn[1] } };
117
int i;
118
119
- assert(decrypt < 2);
120
-
121
/* xor state vector with round key */
122
rk.l[0] ^= st.l[0];
123
rk.l[1] ^= st.l[1];
124
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
125
rd[1] = st.l[1];
126
}
127
128
-void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
129
+void HELPER(crypto_aese)(void *vd, void *vn, void *vm, uint32_t desc)
130
+{
107
+{
131
+ intptr_t i, opr_sz = simd_oprsz(desc);
108
+ gen_set_label(l.label);
132
+ bool decrypt = simd_data(desc);
109
+ s->pc_save = l.pc_save;
133
+
134
+ for (i = 0; i < opr_sz; i += 16) {
135
+ do_crypto_aese(vd + i, vn + i, vm + i, decrypt);
136
+ }
137
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
138
+}
139
+
140
+static void do_crypto_aesmc(uint64_t *rd, uint64_t *rm, bool decrypt)
141
{
142
static uint32_t const mc[][256] = { {
143
/* MixColumns lookup table */
144
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
145
0xbe805d9f, 0xb58d5491, 0xa89a4f83, 0xa397468d,
146
} };
147
148
- uint64_t *rd = vd;
149
- uint64_t *rm = vm;
150
union CRYPTO_STATE st = { .l = { rm[0], rm[1] } };
151
int i;
152
153
- assert(decrypt < 2);
154
-
155
for (i = 0; i < 16; i += 4) {
156
CR_ST_WORD(st, i >> 2) =
157
mc[decrypt][CR_ST_BYTE(st, i)] ^
158
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
159
rd[1] = st.l[1];
160
}
161
162
+void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t desc)
163
+{
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
165
+ bool decrypt = simd_data(desc);
166
+
167
+ for (i = 0; i < opr_sz; i += 16) {
168
+ do_crypto_aesmc(vd + i, vm + i, decrypt);
169
+ }
170
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
171
+}
110
+}
172
+
111
+
173
/*
112
/*
174
* SHA-1 logical functions
113
* Helpers for implementing sets of trans_* functions.
175
*/
114
* Defer the implementation of NAME to FUNC, with optional extra arguments.
176
@@ -XXX,XX +XXX,XX @@ static uint8_t const sm4_sbox[] = {
115
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
177
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
116
index XXXXXXX..XXXXXXX 100644
178
};
117
--- a/target/arm/cpu.c
179
118
+++ b/target/arm/cpu.c
180
-void HELPER(crypto_sm4e)(void *vd, void *vn)
119
@@ -XXX,XX +XXX,XX @@ static vaddr arm_cpu_get_pc(CPUState *cs)
181
+static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
120
void arm_cpu_synchronize_from_tb(CPUState *cs,
182
{
121
const TranslationBlock *tb)
183
- uint64_t *rd = vd;
122
{
184
- uint64_t *rn = vn;
123
- ARMCPU *cpu = ARM_CPU(cs);
185
- union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
124
- CPUARMState *env = &cpu->env;
186
- union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
125
-
187
+ union CRYPTO_STATE d = { .l = { rn[0], rn[1] } };
126
- /*
188
+ union CRYPTO_STATE n = { .l = { rm[0], rm[1] } };
127
- * It's OK to look at env for the current mode here, because it's
189
uint32_t t, i;
128
- * never possible for an AArch64 TB to chain to an AArch32 TB.
190
129
- */
191
for (i = 0; i < 4; i++) {
130
- if (is_a64(env)) {
192
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4e)(void *vd, void *vn)
131
- env->pc = tb_pc(tb);
193
rd[1] = d.l[1];
132
- } else {
194
}
133
- env->regs[15] = tb_pc(tb);
195
134
+ /* The program counter is always up to date with TARGET_TB_PCREL. */
196
-void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
135
+ if (!TARGET_TB_PCREL) {
197
+void HELPER(crypto_sm4e)(void *vd, void *vn, void *vm, uint32_t desc)
136
+ CPUARMState *env = cs->env_ptr;
198
+{
137
+ /*
199
+ intptr_t i, opr_sz = simd_oprsz(desc);
138
+ * It's OK to look at env for the current mode here, because it's
200
+
139
+ * never possible for an AArch64 TB to chain to an AArch32 TB.
201
+ for (i = 0; i < opr_sz; i += 16) {
140
+ */
202
+ do_crypto_sm4e(vd + i, vn + i, vm + i);
141
+ if (is_a64(env)) {
203
+ }
142
+ env->pc = tb_pc(tb);
204
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
143
+ } else {
205
+}
144
+ env->regs[15] = tb_pc(tb);
206
+
145
+ }
207
+static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
146
}
208
{
147
}
209
- uint64_t *rd = vd;
148
#endif /* CONFIG_TCG */
210
- uint64_t *rn = vn;
211
- uint64_t *rm = vm;
212
union CRYPTO_STATE d;
213
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
214
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
215
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
216
rd[0] = d.l[0];
217
rd[1] = d.l[1];
218
}
219
+
220
+void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
221
+{
222
+ intptr_t i, opr_sz = simd_oprsz(desc);
223
+
224
+ for (i = 0; i < opr_sz; i += 16) {
225
+ do_crypto_sm4ekey(vd + i, vn + i, vm + i);
226
+ }
227
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
228
+}
229
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
149
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
230
index XXXXXXX..XXXXXXX 100644
150
index XXXXXXX..XXXXXXX 100644
231
--- a/target/arm/translate-a64.c
151
--- a/target/arm/translate-a64.c
232
+++ b/target/arm/translate-a64.c
152
+++ b/target/arm/translate-a64.c
233
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn4(DisasContext *s, bool is_q, int rd, int rn, int rm,
153
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
234
is_q ? 16 : 8, vec_full_reg_size(s));
154
235
}
155
static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
236
156
{
237
+/* Expand a 2-operand operation using an out-of-line helper. */
157
- tcg_gen_movi_i64(dest, s->pc_curr + diff);
238
+static void gen_gvec_op2_ool(DisasContext *s, bool is_q, int rd,
158
+ assert(s->pc_save != -1);
239
+ int rn, int data, gen_helper_gvec_2 *fn)
159
+ if (TARGET_TB_PCREL) {
240
+{
160
+ tcg_gen_addi_i64(dest, cpu_pc, (s->pc_curr - s->pc_save) + diff);
241
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
161
+ } else {
242
+ vec_full_reg_offset(s, rn),
162
+ tcg_gen_movi_i64(dest, s->pc_curr + diff);
243
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
163
+ }
244
+}
164
}
165
166
void gen_a64_update_pc(DisasContext *s, target_long diff)
167
{
168
gen_pc_plus_diff(s, cpu_pc, diff);
169
+ s->pc_save = s->pc_curr + diff;
170
}
171
172
/*
173
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
174
* then loading an address into the PC will clear out any tag.
175
*/
176
gen_top_byte_ignore(s, cpu_pc, src, s->tbii);
177
+ s->pc_save = -1;
178
}
179
180
/*
181
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
182
183
static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
184
{
185
- uint64_t dest = s->pc_curr + diff;
186
-
187
- if (use_goto_tb(s, dest)) {
188
- tcg_gen_goto_tb(n);
189
- gen_a64_update_pc(s, diff);
190
+ if (use_goto_tb(s, s->pc_curr + diff)) {
191
+ /*
192
+ * For pcrel, the pc must always be up-to-date on entry to
193
+ * the linked TB, so that it can use simple additions for all
194
+ * further adjustments. For !pcrel, the linked TB is compiled
195
+ * to know its full virtual address, so we can delay the
196
+ * update to pc to the unlinked path. A long chain of links
197
+ * can thus avoid many updates to the PC.
198
+ */
199
+ if (TARGET_TB_PCREL) {
200
+ gen_a64_update_pc(s, diff);
201
+ tcg_gen_goto_tb(n);
202
+ } else {
203
+ tcg_gen_goto_tb(n);
204
+ gen_a64_update_pc(s, diff);
205
+ }
206
tcg_gen_exit_tb(s->base.tb, n);
207
s->base.is_jmp = DISAS_NORETURN;
208
} else {
209
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
210
{
211
unsigned int sf, op, rt;
212
int64_t diff;
213
- TCGLabel *label_match;
214
+ DisasLabel match;
215
TCGv_i64 tcg_cmp;
216
217
sf = extract32(insn, 31, 1);
218
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
219
diff = sextract32(insn, 5, 19) * 4;
220
221
tcg_cmp = read_cpu_reg(s, rt, sf);
222
- label_match = gen_new_label();
223
-
224
reset_btype(s);
225
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
226
- tcg_cmp, 0, label_match);
227
228
+ match = gen_disas_label(s);
229
+ tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
230
+ tcg_cmp, 0, match.label);
231
gen_goto_tb(s, 0, 4);
232
- gen_set_label(label_match);
233
+ set_disas_label(s, match);
234
gen_goto_tb(s, 1, diff);
235
}
236
237
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
238
{
239
unsigned int bit_pos, op, rt;
240
int64_t diff;
241
- TCGLabel *label_match;
242
+ DisasLabel match;
243
TCGv_i64 tcg_cmp;
244
245
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
246
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
247
248
tcg_cmp = tcg_temp_new_i64();
249
tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, rt), (1ULL << bit_pos));
250
- label_match = gen_new_label();

reset_btype(s);
+
+ match = gen_disas_label(s);
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
- tcg_cmp, 0, label_match);
+ tcg_cmp, 0, match.label);
tcg_temp_free_i64(tcg_cmp);
gen_goto_tb(s, 0, 4);
- gen_set_label(label_match);
+ set_disas_label(s, match);
gen_goto_tb(s, 1, diff);
}

@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
reset_btype(s);
if (cond < 0x0e) {
/* genuinely conditional branches */
- TCGLabel *label_match = gen_new_label();
- arm_gen_test_cc(cond, label_match);
+ DisasLabel match = gen_disas_label(s);
+ arm_gen_test_cc(cond, match.label);
gen_goto_tb(s, 0, 4);
- gen_set_label(label_match);
+ set_disas_label(s, match);
gen_goto_tb(s, 1, diff);
} else {
/* 0xe and 0xf are both "always" conditions */
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,

dc->isar = &arm_cpu->isar;
dc->condjmp = 0;
-
+ dc->pc_save = dc->base.pc_first;
dc->aarch64 = true;
dc->thumb = false;
dc->sctlr_b = 0;
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
{
DisasContext *dc = container_of(dcbase, DisasContext, base);
+ target_ulong pc_arg = dc->base.pc_next;

- tcg_gen_insn_start(dc->base.pc_next, 0, 0);
+ if (TARGET_TB_PCREL) {
+ pc_arg &= ~TARGET_PAGE_MASK;
+ }
+ tcg_gen_insn_start(pc_arg, 0, 0);
dc->insn_start = tcg_last_op();
}

diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
tcg_gen_or_i32(sfpa, sfpa, aspen);
arm_gen_condlabel(s);
- tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel.label);

if (s->fp_excp_el != 0) {
gen_exception_insn_el(s, 0, EXCP_NOCP,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
void arm_gen_condlabel(DisasContext *s)
{
if (!s->condjmp) {
- s->condlabel = gen_new_label();
+ s->condlabel = gen_disas_label(s);
s->condjmp = 1;
}
}
@@ -XXX,XX +XXX,XX @@ static target_long jmp_diff(DisasContext *s, target_long diff)

static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
{
- tcg_gen_movi_i32(var, s->pc_curr + diff);
+ assert(s->pc_save != -1);
+ if (TARGET_TB_PCREL) {
+ tcg_gen_addi_i32(var, cpu_R[15], (s->pc_curr - s->pc_save) + diff);
+ } else {
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
+ }
}

/* Set a variable to the value of a CPU register. */
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
*/
tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
s->base.is_jmp = DISAS_JUMP;
+ s->pc_save = -1;
} else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
/* For M-profile SP bits [1:0] are always zero */
tcg_gen_andi_i32(var, var, ~3);
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)

void gen_update_pc(DisasContext *s, target_long diff)
{
- tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
+ gen_pc_plus_diff(s, cpu_R[15], diff);
+ s->pc_save = s->pc_curr + diff;
}

/* Set PC and Thumb state from var. var is marked as dead. */
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
tcg_gen_andi_i32(cpu_R[15], var, ~1);
tcg_gen_andi_i32(var, var, 1);
store_cpu_field(var, thumb);
+ s->pc_save = -1;
}

/*
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
static inline void gen_bx_excret_final_code(DisasContext *s)
{
/* Generate the code to finish possible exception return and end the TB */
- TCGLabel *excret_label = gen_new_label();
+ DisasLabel excret_label = gen_disas_label(s);
uint32_t min_magic;

if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret_final_code(DisasContext *s)
}

/* Is the new PC value in the magic range indicating exception return? */
- tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label);
+ tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label.label);
/* No: end the TB as we would for a DISAS_JMP */
if (s->ss_active) {
gen_singlestep_exception(s);
} else {
tcg_gen_exit_tb(NULL, 0);
}
- gen_set_label(excret_label);
+ set_disas_label(s, excret_label);
/* Yes: this is an exception return.
* At this point in runtime env->regs[15] and env->thumb will hold
* the exception-return magic number, which do_v7m_exception_exit()
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
*/
static void gen_goto_tb(DisasContext *s, int n, target_long diff)
{
- target_ulong dest = s->pc_curr + diff;
-
- if (translator_use_goto_tb(&s->base, dest)) {
- tcg_gen_goto_tb(n);
- gen_update_pc(s, diff);
+ if (translator_use_goto_tb(&s->base, s->pc_curr + diff)) {
+ /*
+ * For pcrel, the pc must always be up-to-date on entry to
+ * the linked TB, so that it can use simple additions for all
+ * further adjustments. For !pcrel, the linked TB is compiled
+ * to know its full virtual address, so we can delay the
+ * update to pc to the unlinked path. A long chain of links
+ * can thus avoid many updates to the PC.
+ */
+ if (TARGET_TB_PCREL) {
+ gen_update_pc(s, diff);
+ tcg_gen_goto_tb(n);
+ } else {
+ tcg_gen_goto_tb(n);
+ gen_update_pc(s, diff);
+ }
tcg_gen_exit_tb(s->base.tb, n);
} else {
gen_update_pc(s, diff);
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
static void arm_skip_unless(DisasContext *s, uint32_t cond)
{
arm_gen_condlabel(s);
- arm_gen_test_cc(cond ^ 1, s->condlabel);
+ arm_gen_test_cc(cond ^ 1, s->condlabel.label);
}

@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
{
/* M-profile low-overhead while-loop start */
TCGv_i32 tmp;
- TCGLabel *nextlabel;
+ DisasLabel nextlabel;

if (!dc_isar_feature(aa32_lob, s)) {
return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
}
}

- nextlabel = gen_new_label();
- tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel);
+ nextlabel = gen_disas_label(s);
+ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel.label);
tmp = load_reg(s, a->rn);
store_reg(s, 14, tmp);
if (a->size != 4) {
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
}
gen_jmp_tb(s, curr_insn_len(s), 1);

- gen_set_label(nextlabel);
+ set_disas_label(s, nextlabel);
gen_jmp(s, jmp_diff(s, a->imm));
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
* any faster.
*/
TCGv_i32 tmp;
- TCGLabel *loopend;
+ DisasLabel loopend;
bool fpu_active;

if (!dc_isar_feature(aa32_lob, s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)

if (!a->tp && dc_isar_feature(aa32_mve, s) && fpu_active) {
/* Need to do a runtime check for LTPSIZE != 4 */
- TCGLabel *skipexc = gen_new_label();
+ DisasLabel skipexc = gen_disas_label(s);
tmp = load_cpu_field(v7m.ltpsize);
- tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
+ tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc.label);
tcg_temp_free_i32(tmp);
gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
- gen_set_label(skipexc);
+ set_disas_label(s, skipexc);
}

if (a->f) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
* loop decrement value is 1. For LETP we need to calculate the decrement
* value from LTPSIZE.
*/
- loopend = gen_new_label();
+ loopend = gen_disas_label(s);
if (!a->tp) {
- tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend);
+ tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend.label);
tcg_gen_addi_i32(cpu_R[14], cpu_R[14], -1);
} else {
/*
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
tcg_gen_shl_i32(decr, tcg_constant_i32(1), decr);
tcg_temp_free_i32(ltpsize);

- tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend);
+ tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend.label);

tcg_gen_sub_i32(cpu_R[14], cpu_R[14], decr);
tcg_temp_free_i32(decr);
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
/* Jump back to the loop start */
gen_jmp(s, jmp_diff(s, -a->imm));

- gen_set_label(loopend);
+ set_disas_label(s, loopend);
if (a->tp) {
/* Exits from tail-pred loops must reset LTPSIZE to 4 */
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)

arm_gen_condlabel(s);
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
- tmp, 0, s->condlabel);
+ tmp, 0, s->condlabel.label);
tcg_temp_free_i32(tmp);
gen_jmp(s, jmp_diff(s, a->imm));
return true;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)

dc->isar = &cpu->isar;
dc->condjmp = 0;
-
+ dc->pc_save = dc->base.pc_first;
dc->aarch64 = false;
dc->thumb = EX_TBFLAG_AM32(tb_flags, THUMB);
dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
*/
dc->eci = dc->condexec_mask = dc->condexec_cond = 0;
dc->eci_handled = false;
- dc->insn_eci_rewind = NULL;
if (condexec & 0xf) {
dc->condexec_mask = (condexec & 0xf) << 1;
dc->condexec_cond = condexec >> 4;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
* fields here.
*/
uint32_t condexec_bits;
+ target_ulong pc_arg = dc->base.pc_next;

+ if (TARGET_TB_PCREL) {
+ pc_arg &= ~TARGET_PAGE_MASK;
+ }
if (dc->eci) {
condexec_bits = dc->eci << 4;
} else {
condexec_bits = (dc->condexec_cond << 4) | (dc->condexec_mask >> 1);
}
- tcg_gen_insn_start(dc->base.pc_next, condexec_bits, 0);
+ tcg_gen_insn_start(pc_arg, condexec_bits, 0);
dc->insn_start = tcg_last_op();
}

@@ -XXX,XX +XXX,XX @@ static bool arm_check_ss_active(DisasContext *dc)

static void arm_post_translate_insn(DisasContext *dc)
{
- if (dc->condjmp && !dc->base.is_jmp) {
- gen_set_label(dc->condlabel);
+ if (dc->condjmp && dc->base.is_jmp == DISAS_NEXT) {
+ if (dc->pc_save != dc->condlabel.pc_save) {
+ gen_update_pc(dc, dc->condlabel.pc_save - dc->pc_save);
+ }
+ gen_set_label(dc->condlabel.label);
dc->condjmp = 0;
}
translator_loop_temp_check(&dc->base);
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
uint32_t pc = dc->base.pc_next;
uint32_t insn;
bool is_16bit;
+ /* TCG op to rewind to if this turns out to be an invalid ECI state */
+ TCGOp *insn_eci_rewind = NULL;
+ target_ulong insn_eci_pc_save = -1;

/* Misaligned thumb PC is architecturally impossible. */
assert((dc->base.pc_next & 1) == 0);
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
* insn" case. We will rewind to the marker (ie throwing away
* all the generated code) and instead emit "take exception".
*/
- dc->insn_eci_rewind = tcg_last_op();
+ insn_eci_rewind = tcg_last_op();
+ insn_eci_pc_save = dc->pc_save;
}

if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
* Insn wasn't valid for ECI/ICI at all: undo what we
* just generated and instead emit an exception
*/
- tcg_remove_ops_after(dc->insn_eci_rewind);
+ tcg_remove_ops_after(insn_eci_rewind);
+ dc->pc_save = insn_eci_pc_save;
dc->condjmp = 0;
gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
}
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)

if (dc->condjmp) {
/* "Condition failed" instruction codepath for the branch/trap insn */
- gen_set_label(dc->condlabel);
+ set_disas_label(dc, dc->condlabel);
gen_set_condexec(dc);
if (unlikely(dc->ss_active)) {
gen_update_pc(dc, curr_insn_len(dc));
@@ -XXX,XX +XXX,XX @@ void restore_state_to_opc(CPUARMState *env, TranslationBlock *tb,
target_ulong *data)
{
if (is_a64(env)) {
- env->pc = data[0];
+ if (TARGET_TB_PCREL) {
+ env->pc = (env->pc & TARGET_PAGE_MASK) | data[0];
+ } else {
+ env->pc = data[0];
+ }
env->condexec_bits = 0;
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
} else {
- env->regs[15] = data[0];
+ if (TARGET_TB_PCREL) {
+ env->regs[15] = (env->regs[15] & TARGET_PAGE_MASK) | data[0];
+ } else {
+ env->regs[15] = data[0];
+ }
env->condexec_bits = data[1];
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
}
--
2.25.1

+
/* Expand a 3-operand operation using an out-of-line helper. */
static void gen_gvec_op3_ool(DisasContext *s, bool is_q, int rd,
int rn, int rm, int data, gen_helper_gvec_3 *fn)
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
int decrypt;
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
- TCGv_i32 tcg_decrypt;
- CryptoThreeOpIntFn *genfn;
+ gen_helper_gvec_2 *genfn2 = NULL;
+ gen_helper_gvec_3 *genfn3 = NULL;

if (!dc_isar_feature(aa64_aes, s) || size != 0) {
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
switch (opcode) {
case 0x4: /* AESE */
decrypt = 0;
- genfn = gen_helper_crypto_aese;
+ genfn3 = gen_helper_crypto_aese;
break;
case 0x6: /* AESMC */
decrypt = 0;
- genfn = gen_helper_crypto_aesmc;
+ genfn2 = gen_helper_crypto_aesmc;
break;
case 0x5: /* AESD */
decrypt = 1;
- genfn = gen_helper_crypto_aese;
+ genfn3 = gen_helper_crypto_aese;
break;
case 0x7: /* AESIMC */
decrypt = 1;
- genfn = gen_helper_crypto_aesmc;
+ genfn2 = gen_helper_crypto_aesmc;
break;
default:
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
if (!fp_access_check(s)) {
return;
}

-
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
- tcg_decrypt = tcg_const_i32(decrypt);
-
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_decrypt);
-
- tcg_temp_free_ptr(tcg_rd_ptr);
- tcg_temp_free_ptr(tcg_rn_ptr);
- tcg_temp_free_i32(tcg_decrypt);
+ if (genfn2) {
+ gen_gvec_op2_ool(s, true, rd, rn, decrypt, genfn2);
+ } else {
+ gen_gvec_op3_ool(s, true, rd, rd, rn, decrypt, genfn3);
+ }
}

/* Crypto three-reg SHA
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
bool feature;
- CryptoThreeOpFn *genfn;
+ CryptoThreeOpFn *genfn = NULL;
+ gen_helper_gvec_3 *oolfn = NULL;

if (o == 0) {
switch (opcode) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
break;
case 2: /* SM4EKEY */
feature = dc_isar_feature(aa64_sm4, s);
- genfn = gen_helper_crypto_sm4ekey;
+ oolfn = gen_helper_crypto_sm4ekey;
break;
default:
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
return;
}

+ if (oolfn) {
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
+ return;
+ }
+
if (genfn) {
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;

@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
bool feature;
CryptoTwoOpFn *genfn;
+ gen_helper_gvec_3 *oolfn = NULL;

switch (opcode) {
case 0: /* SHA512SU0 */
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
break;
case 1: /* SM4E */
feature = dc_isar_feature(aa64_sm4, s);
- genfn = gen_helper_crypto_sm4e;
+ oolfn = gen_helper_crypto_sm4e;
break;
default:
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
return;
}

+ if (oolfn) {
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
+ return;
+ }
+
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
tcg_rn_ptr = vec_full_reg_ptr(s, rn);

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
return 1;
}
- ptr1 = vfp_reg_ptr(true, rd);
- ptr2 = vfp_reg_ptr(true, rm);
-
- /* Bit 6 is the lowest opcode bit; it distinguishes between
- * encryption (AESE/AESMC) and decryption (AESD/AESIMC)
- */
- tmp3 = tcg_const_i32(extract32(insn, 6, 1));
-
+ /*
+ * Bit 6 is the lowest opcode bit; it distinguishes
+ * between encryption (AESE/AESMC) and decryption
+ * (AESD/AESIMC).
+ */
if (op == NEON_2RM_AESE) {
- gen_helper_crypto_aese(ptr1, ptr2, tmp3);
+ tcg_gen_gvec_3_ool(vfp_reg_offset(true, rd),
+ vfp_reg_offset(true, rd),
+ vfp_reg_offset(true, rm),
+ 16, 16, extract32(insn, 6, 1),
+ gen_helper_crypto_aese);
} else {
- gen_helper_crypto_aesmc(ptr1, ptr2, tmp3);
+ tcg_gen_gvec_2_ool(vfp_reg_offset(true, rd),
+ vfp_reg_offset(true, rm),
+ 16, 16, extract32(insn, 6, 1),
+ gen_helper_crypto_aesmc);
}
- tcg_temp_free_ptr(ptr1);
- tcg_temp_free_ptr(ptr2);
- tcg_temp_free_i32(tmp3);
break;
case NEON_2RM_SHA1H:
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@
#include "exec/helper-proto.h"
#include "tcg/tcg-gvec-desc.h"
#include "fpu/softfloat.h"
-
+#include "vec_internal.h"

/* Note that vector data is stored in host-endian 64-bit chunks,
so addressing units smaller than that needs a host-endian fixup. */
@@ -XXX,XX +XXX,XX @@
#define H4(x) (x)
#endif

-static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
-{
- uint64_t *d = vd + opr_sz;
- uintptr_t i;
-
- for (i = opr_sz; i < max_sz; i += 8) {
- *d++ = 0;
- }
-}
-
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
int16_t src3, uint32_t *sat)

--
2.20.1
Deleted patch
From: Thomas Huth <thuth@redhat.com>

As described by Edgar here:

 https://www.mail-archive.com/qemu-devel@nongnu.org/msg605124.html

we can use the Ubuntu kernel for testing the xlnx-versal-virt machine.
So let's add a boot test for this now.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20200525141237.15243-1-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/acceptance/boot_linux_console.py | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/acceptance/boot_linux_console.py
+++ b/tests/acceptance/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):
console_pattern = 'Kernel command line: %s' % kernel_command_line
self.wait_for_console_pattern(console_pattern)

+ def test_aarch64_xlnx_versal_virt(self):
+ """
+ :avocado: tags=arch:aarch64
+ :avocado: tags=machine:xlnx-versal-virt
+ :avocado: tags=device:pl011
+ :avocado: tags=device:arm_gicv3
+ """
+ kernel_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
+ 'bionic-updates/main/installer-arm64/current/images/'
+ 'netboot/ubuntu-installer/arm64/linux')
+ kernel_hash = '5bfc54cf7ed8157d93f6e5b0241e727b6dc22c50'
+ kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
+
+ initrd_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
+ 'bionic-updates/main/installer-arm64/current/images/'
+ 'netboot/ubuntu-installer/arm64/initrd.gz')
+ initrd_hash = 'd385d3e88d53e2004c5d43cbe668b458a094f772'
+ initrd_path = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
+
+ self.vm.set_console()
+ self.vm.add_args('-m', '2G',
+ '-kernel', kernel_path,
+ '-initrd', initrd_path)
+ self.vm.launch()
+ self.wait_for_console_pattern('Checked W+X mappings: passed')
+
def test_arm_virt(self):
"""
:avocado: tags=arch:arm
--
2.20.1
Deleted patch
From: Cédric Le Goater <clg@kaod.org>

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200602135050.593692-1-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/aspeed.rst | 85 ++++++++++++++++++++++++++++++++++++++
docs/system/target-arm.rst | 1 +
2 files changed, 86 insertions(+)
create mode 100644 docs/system/arm/aspeed.rst

diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/aspeed.rst
@@ -XXX,XX +XXX,XX @@
+Aspeed family boards (``*-bmc``, ``ast2500-evb``, ``ast2600-evb``)
+==================================================================
+
+The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
+Aspeed evaluation boards. They are based on different releases of the
+Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
+AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
+with dual cores ARM Cortex A7 CPUs (1.2GHz).
+
+The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, USB, SPI, I2C,
+etc.
+
+AST2400 SoC based machines :
+
+- ``palmetto-bmc`` OpenPOWER Palmetto POWER8 BMC
+
+AST2500 SoC based machines :
+
+- ``ast2500-evb`` Aspeed AST2500 Evaluation board
+- ``romulus-bmc`` OpenPOWER Romulus POWER9 BMC
+- ``witherspoon-bmc`` OpenPOWER Witherspoon POWER9 BMC
+- ``sonorapass-bmc`` OCP SonoraPass BMC
+- ``swift-bmc`` OpenPOWER Swift BMC POWER9
+
+AST2600 SoC based machines :
+
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
+- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC
+
+Supported devices
+-----------------
+
+ * SMP (for the AST2600 Cortex-A7)
+ * Interrupt Controller (VIC)
+ * Timer Controller
+ * RTC Controller
+ * I2C Controller
+ * System Control Unit (SCU)
+ * SRAM mapping
+ * X-DMA Controller (basic interface)
+ * Static Memory Controller (SMC or FMC) - Only SPI Flash support
+ * SPI Memory Controller
+ * USB 2.0 Controller
+ * SD/MMC storage controllers
+ * SDRAM controller (dummy interface for basic settings and training)
+ * Watchdog Controller
+ * GPIO Controller (Master only)
+ * UART
+ * Ethernet controllers
+
+
+Missing devices
+---------------
+
+ * Coprocessor support
+ * ADC (out of tree implementation)
+ * PWM and Fan Controller
+ * LPC Bus Controller
+ * Slave GPIO Controller
+ * Super I/O Controller
+ * Hash/Crypto Engine
+ * PCI-Express 1 Controller
+ * Graphic Display Controller
+ * PECI Controller
+ * MCTP Controller
+ * Mailbox Controller
+ * Virtual UART
+ * eSPI Controller
+ * I3C Controller
+
+Boot options
+------------
+
+The Aspeed machines can be started using the -kernel option to load a
+Linux kernel or from a firmare image which can be downloaded from the
+OpenPOWER jenkins :
+
+ https://openpower.xyz/
+
+The image should be attached as an MTD drive. Run :
+
+.. code-block:: bash
+
+ $ qemu-system-arm -M romulus-bmc -nic user \
+    -drive file=flash-romulus,format=raw,if=mtd -nographic
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
arm/realview
arm/versatile
arm/vexpress
+ arm/aspeed
arm/musicpal
arm/nseries
arm/orangepi
--
2.20.1
Deleted patch
From: Paul Zimmerman <pauldzim@gmail.com>

Import the dwc-hsotg (dwc2) register definitions file from the
Linux kernel. This is a copy of drivers/usb/dwc2/hw.h from the
mainline Linux kernel, the only changes being to the header, and
two instances of 'u32' changed to 'uint32_t' to allow it to
compile. Checkpatch throws a boatload of errors due to the tab
indentation, but I would rather import it as-is than reformat it.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-3-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/usb/dwc2-regs.h | 899 +++++++++++++++++++++++++++++++++++++
1 file changed, 899 insertions(+)
create mode 100644 include/hw/usb/dwc2-regs.h

diff --git a/include/hw/usb/dwc2-regs.h b/include/hw/usb/dwc2-regs.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/usb/dwc2-regs.h
@@ -XXX,XX +XXX,XX @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
26
+/*
27
+ * Imported from the Linux kernel file drivers/usb/dwc2/hw.h, commit
28
+ * a89bae709b3492b478480a2c9734e7e9393b279c ("usb: dwc2: Move
29
+ * UTMI_PHY_DATA defines closer")
30
+ *
31
+ * hw.h - DesignWare HS OTG Controller hardware definitions
32
+ *
33
+ * Copyright 2004-2013 Synopsys, Inc.
34
+ *
35
+ * Redistribution and use in source and binary forms, with or without
36
+ * modification, are permitted provided that the following conditions
37
+ * are met:
38
+ * 1. Redistributions of source code must retain the above copyright
39
+ * notice, this list of conditions, and the following disclaimer,
40
+ * without modification.
41
+ * 2. Redistributions in binary form must reproduce the above copyright
42
+ * notice, this list of conditions and the following disclaimer in the
43
+ * documentation and/or other materials provided with the distribution.
44
+ * 3. The names of the above-listed copyright holders may not be used
45
+ * to endorse or promote products derived from this software without
46
+ * specific prior written permission.
47
+ *
48
+ * ALTERNATIVELY, this software may be distributed under the terms of the
49
+ * GNU General Public License ("GPL") as published by the Free Software
50
+ * Foundation; either version 2 of the License, or (at your option) any
51
+ * later version.
52
+ *
53
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
54
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
55
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
56
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
57
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
58
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
59
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
60
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
61
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
62
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
63
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
64
+ */
65
+
66
+#ifndef __DWC2_HW_H__
67
+#define __DWC2_HW_H__
68
+
69
+#define HSOTG_REG(x)    (x)
70
+
71
+#define GOTGCTL                HSOTG_REG(0x000)
72
+#define GOTGCTL_CHIRPEN            BIT(27)
73
+#define GOTGCTL_MULT_VALID_BC_MASK    (0x1f << 22)
74
+#define GOTGCTL_MULT_VALID_BC_SHIFT    22
75
+#define GOTGCTL_OTGVER            BIT(20)
76
+#define GOTGCTL_BSESVLD            BIT(19)
77
+#define GOTGCTL_ASESVLD            BIT(18)
78
+#define GOTGCTL_DBNC_SHORT        BIT(17)
79
+#define GOTGCTL_CONID_B            BIT(16)
80
+#define GOTGCTL_DBNCE_FLTR_BYPASS    BIT(15)
81
+#define GOTGCTL_DEVHNPEN        BIT(11)
82
+#define GOTGCTL_HSTSETHNPEN        BIT(10)
83
+#define GOTGCTL_HNPREQ            BIT(9)
84
+#define GOTGCTL_HSTNEGSCS        BIT(8)
85
+#define GOTGCTL_SESREQ            BIT(1)
86
+#define GOTGCTL_SESREQSCS        BIT(0)
87
+
88
+#define GOTGINT                HSOTG_REG(0x004)
89
+#define GOTGINT_DBNCE_DONE        BIT(19)
90
+#define GOTGINT_A_DEV_TOUT_CHG        BIT(18)
91
+#define GOTGINT_HST_NEG_DET        BIT(17)
92
+#define GOTGINT_HST_NEG_SUC_STS_CHNG    BIT(9)
93
+#define GOTGINT_SES_REQ_SUC_STS_CHNG    BIT(8)
94
+#define GOTGINT_SES_END_DET        BIT(2)
95
+
96
+#define GAHBCFG                HSOTG_REG(0x008)
97
+#define GAHBCFG_AHB_SINGLE        BIT(23)
98
+#define GAHBCFG_NOTI_ALL_DMA_WRIT    BIT(22)
99
+#define GAHBCFG_REM_MEM_SUPP        BIT(21)
100
+#define GAHBCFG_P_TXF_EMP_LVL        BIT(8)
101
+#define GAHBCFG_NP_TXF_EMP_LVL        BIT(7)
102
+#define GAHBCFG_DMA_EN            BIT(5)
103
+#define GAHBCFG_HBSTLEN_MASK        (0xf << 1)
104
+#define GAHBCFG_HBSTLEN_SHIFT        1
105
+#define GAHBCFG_HBSTLEN_SINGLE        0
106
+#define GAHBCFG_HBSTLEN_INCR        1
107
+#define GAHBCFG_HBSTLEN_INCR4        3
108
+#define GAHBCFG_HBSTLEN_INCR8        5
109
+#define GAHBCFG_HBSTLEN_INCR16        7
110
+#define GAHBCFG_GLBL_INTR_EN        BIT(0)
111
+#define GAHBCFG_CTRL_MASK        (GAHBCFG_P_TXF_EMP_LVL | \
112
+                     GAHBCFG_NP_TXF_EMP_LVL | \
113
+                     GAHBCFG_DMA_EN | \
114
+                     GAHBCFG_GLBL_INTR_EN)
115
+
116
+#define GUSBCFG                HSOTG_REG(0x00C)
117
+#define GUSBCFG_FORCEDEVMODE        BIT(30)
118
+#define GUSBCFG_FORCEHOSTMODE        BIT(29)
119
+#define GUSBCFG_TXENDDELAY        BIT(28)
120
+#define GUSBCFG_ICTRAFFICPULLREMOVE    BIT(27)
121
+#define GUSBCFG_ICUSBCAP        BIT(26)
122
+#define GUSBCFG_ULPI_INT_PROT_DIS    BIT(25)
123
+#define GUSBCFG_INDICATORPASSTHROUGH    BIT(24)
124
+#define GUSBCFG_INDICATORCOMPLEMENT    BIT(23)
125
+#define GUSBCFG_TERMSELDLPULSE        BIT(22)
126
+#define GUSBCFG_ULPI_INT_VBUS_IND    BIT(21)
127
+#define GUSBCFG_ULPI_EXT_VBUS_DRV    BIT(20)
128
+#define GUSBCFG_ULPI_CLK_SUSP_M        BIT(19)
129
+#define GUSBCFG_ULPI_AUTO_RES        BIT(18)
130
+#define GUSBCFG_ULPI_FS_LS        BIT(17)
131
+#define GUSBCFG_OTG_UTMI_FS_SEL        BIT(16)
132
+#define GUSBCFG_PHY_LP_CLK_SEL        BIT(15)
133
+#define GUSBCFG_USBTRDTIM_MASK        (0xf << 10)
134
+#define GUSBCFG_USBTRDTIM_SHIFT        10
135
+#define GUSBCFG_HNPCAP            BIT(9)
136
+#define GUSBCFG_SRPCAP            BIT(8)
137
+#define GUSBCFG_DDRSEL            BIT(7)
138
+#define GUSBCFG_PHYSEL            BIT(6)
139
+#define GUSBCFG_FSINTF            BIT(5)
140
+#define GUSBCFG_ULPI_UTMI_SEL        BIT(4)
141
+#define GUSBCFG_PHYIF16            BIT(3)
142
+#define GUSBCFG_PHYIF8            (0 << 3)
143
+#define GUSBCFG_TOUTCAL_MASK        (0x7 << 0)
144
+#define GUSBCFG_TOUTCAL_SHIFT        0
145
+#define GUSBCFG_TOUTCAL_LIMIT        0x7
146
+#define GUSBCFG_TOUTCAL(_x)        ((_x) << 0)
147
+
148
+#define GRSTCTL                HSOTG_REG(0x010)
149
+#define GRSTCTL_AHBIDLE            BIT(31)
150
+#define GRSTCTL_DMAREQ            BIT(30)
151
+#define GRSTCTL_TXFNUM_MASK        (0x1f << 6)
152
+#define GRSTCTL_TXFNUM_SHIFT        6
153
+#define GRSTCTL_TXFNUM_LIMIT        0x1f
154
+#define GRSTCTL_TXFNUM(_x)        ((_x) << 6)
155
+#define GRSTCTL_TXFFLSH            BIT(5)
156
+#define GRSTCTL_RXFFLSH            BIT(4)
157
+#define GRSTCTL_IN_TKNQ_FLSH        BIT(3)
158
+#define GRSTCTL_FRMCNTRRST        BIT(2)
159
+#define GRSTCTL_HSFTRST            BIT(1)
160
+#define GRSTCTL_CSFTRST            BIT(0)
161
+
162
+#define GINTSTS                HSOTG_REG(0x014)
163
+#define GINTMSK                HSOTG_REG(0x018)
164
+#define GINTSTS_WKUPINT            BIT(31)
165
+#define GINTSTS_SESSREQINT        BIT(30)
166
+#define GINTSTS_DISCONNINT        BIT(29)
167
+#define GINTSTS_CONIDSTSCHNG        BIT(28)
168
+#define GINTSTS_LPMTRANRCVD        BIT(27)
169
+#define GINTSTS_PTXFEMP            BIT(26)
170
+#define GINTSTS_HCHINT            BIT(25)
171
+#define GINTSTS_PRTINT            BIT(24)
172
+#define GINTSTS_RESETDET        BIT(23)
173
+#define GINTSTS_FET_SUSP        BIT(22)
174
+#define GINTSTS_INCOMPL_IP        BIT(21)
175
+#define GINTSTS_INCOMPL_SOOUT        BIT(21)
176
+#define GINTSTS_INCOMPL_SOIN        BIT(20)
177
+#define GINTSTS_OEPINT            BIT(19)
178
+#define GINTSTS_IEPINT            BIT(18)
179
+#define GINTSTS_EPMIS            BIT(17)
180
+#define GINTSTS_RESTOREDONE        BIT(16)
181
+#define GINTSTS_EOPF            BIT(15)
182
+#define GINTSTS_ISOUTDROP        BIT(14)
183
+#define GINTSTS_ENUMDONE        BIT(13)
184
+#define GINTSTS_USBRST            BIT(12)
185
+#define GINTSTS_USBSUSP            BIT(11)
186
+#define GINTSTS_ERLYSUSP        BIT(10)
187
+#define GINTSTS_I2CINT            BIT(9)
188
+#define GINTSTS_ULPI_CK_INT        BIT(8)
189
+#define GINTSTS_GOUTNAKEFF        BIT(7)
190
+#define GINTSTS_GINNAKEFF        BIT(6)
191
+#define GINTSTS_NPTXFEMP        BIT(5)
192
+#define GINTSTS_RXFLVL            BIT(4)
193
+#define GINTSTS_SOF            BIT(3)
194
+#define GINTSTS_OTGINT            BIT(2)
195
+#define GINTSTS_MODEMIS            BIT(1)
196
+#define GINTSTS_CURMODE_HOST        BIT(0)
197
+
198
+#define GRXSTSR                HSOTG_REG(0x01C)
199
+#define GRXSTSP                HSOTG_REG(0x020)
200
+#define GRXSTS_FN_MASK            (0x7f << 25)
201
+#define GRXSTS_FN_SHIFT            25
202
+#define GRXSTS_PKTSTS_MASK        (0xf << 17)
203
+#define GRXSTS_PKTSTS_SHIFT        17
204
+#define GRXSTS_PKTSTS_GLOBALOUTNAK    1
205
+#define GRXSTS_PKTSTS_OUTRX        2
206
+#define GRXSTS_PKTSTS_HCHIN        2
207
+#define GRXSTS_PKTSTS_OUTDONE        3
208
+#define GRXSTS_PKTSTS_HCHIN_XFER_COMP    3
209
+#define GRXSTS_PKTSTS_SETUPDONE        4
210
+#define GRXSTS_PKTSTS_DATATOGGLEERR    5
211
+#define GRXSTS_PKTSTS_SETUPRX        6
212
+#define GRXSTS_PKTSTS_HCHHALTED        7
213
+#define GRXSTS_HCHNUM_MASK        (0xf << 0)
214
+#define GRXSTS_HCHNUM_SHIFT        0
215
+#define GRXSTS_DPID_MASK        (0x3 << 15)
216
+#define GRXSTS_DPID_SHIFT        15
217
+#define GRXSTS_BYTECNT_MASK        (0x7ff << 4)
218
+#define GRXSTS_BYTECNT_SHIFT        4
219
+#define GRXSTS_EPNUM_MASK        (0xf << 0)
220
+#define GRXSTS_EPNUM_SHIFT        0
221
+
222
+#define GRXFSIZ                HSOTG_REG(0x024)
223
+#define GRXFSIZ_DEPTH_MASK        (0xffff << 0)
224
+#define GRXFSIZ_DEPTH_SHIFT        0
225
+
226
+#define GNPTXFSIZ            HSOTG_REG(0x028)
227
+/* Use FIFOSIZE_* constants to access this register */
228
+
229
+#define GNPTXSTS            HSOTG_REG(0x02C)
230
+#define GNPTXSTS_NP_TXQ_TOP_MASK        (0x7f << 24)
231
+#define GNPTXSTS_NP_TXQ_TOP_SHIFT        24
232
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_MASK        (0xff << 16)
233
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_SHIFT        16
234
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_GET(_v)    (((_v) >> 16) & 0xff)
235
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_MASK        (0xffff << 0)
236
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_SHIFT        0
237
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_GET(_v)    (((_v) >> 0) & 0xffff)
238
+
239
+#define GI2CCTL                HSOTG_REG(0x0030)
240
+#define GI2CCTL_BSYDNE            BIT(31)
241
+#define GI2CCTL_RW            BIT(30)
242
+#define GI2CCTL_I2CDATSE0        BIT(28)
243
+#define GI2CCTL_I2CDEVADDR_MASK        (0x3 << 26)
244
+#define GI2CCTL_I2CDEVADDR_SHIFT    26
245
+#define GI2CCTL_I2CSUSPCTL        BIT(25)
246
+#define GI2CCTL_ACK            BIT(24)
247
+#define GI2CCTL_I2CEN            BIT(23)
248
+#define GI2CCTL_ADDR_MASK        (0x7f << 16)
249
+#define GI2CCTL_ADDR_SHIFT        16
250
+#define GI2CCTL_REGADDR_MASK        (0xff << 8)
251
+#define GI2CCTL_REGADDR_SHIFT        8
252
+#define GI2CCTL_RWDATA_MASK        (0xff << 0)
253
+#define GI2CCTL_RWDATA_SHIFT        0
254
+
255
+#define GPVNDCTL            HSOTG_REG(0x0034)
256
+#define GGPIO                HSOTG_REG(0x0038)
257
+#define GGPIO_STM32_OTG_GCCFG_PWRDWN    BIT(16)
258
+
259
+#define GUID                HSOTG_REG(0x003c)
260
+#define GSNPSID                HSOTG_REG(0x0040)
261
+#define GHWCFG1                HSOTG_REG(0x0044)
262
+#define GSNPSID_ID_MASK            GENMASK(31, 16)
263
+
264
+#define GHWCFG2                HSOTG_REG(0x0048)
265
+#define GHWCFG2_OTG_ENABLE_IC_USB        BIT(31)
266
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_MASK        (0x1f << 26)
267
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT        26
268
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_MASK    (0x3 << 24)
269
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT    24
270
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_MASK    (0x3 << 22)
271
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT    22
272
+#define GHWCFG2_MULTI_PROC_INT            BIT(20)
273
+#define GHWCFG2_DYNAMIC_FIFO            BIT(19)
274
+#define GHWCFG2_PERIO_EP_SUPPORTED        BIT(18)
275
+#define GHWCFG2_NUM_HOST_CHAN_MASK        (0xf << 14)
276
+#define GHWCFG2_NUM_HOST_CHAN_SHIFT        14
277
+#define GHWCFG2_NUM_DEV_EP_MASK            (0xf << 10)
278
+#define GHWCFG2_NUM_DEV_EP_SHIFT        10
279
+#define GHWCFG2_FS_PHY_TYPE_MASK        (0x3 << 8)
280
+#define GHWCFG2_FS_PHY_TYPE_SHIFT        8
281
+#define GHWCFG2_FS_PHY_TYPE_NOT_SUPPORTED    0
282
+#define GHWCFG2_FS_PHY_TYPE_DEDICATED        1
283
+#define GHWCFG2_FS_PHY_TYPE_SHARED_UTMI        2
284
+#define GHWCFG2_FS_PHY_TYPE_SHARED_ULPI        3
285
+#define GHWCFG2_HS_PHY_TYPE_MASK        (0x3 << 6)
286
+#define GHWCFG2_HS_PHY_TYPE_SHIFT        6
287
+#define GHWCFG2_HS_PHY_TYPE_NOT_SUPPORTED    0
288
+#define GHWCFG2_HS_PHY_TYPE_UTMI        1
289
+#define GHWCFG2_HS_PHY_TYPE_ULPI        2
290
+#define GHWCFG2_HS_PHY_TYPE_UTMI_ULPI        3
291
+#define GHWCFG2_POINT2POINT            BIT(5)
292
+#define GHWCFG2_ARCHITECTURE_MASK        (0x3 << 3)
293
+#define GHWCFG2_ARCHITECTURE_SHIFT        3
294
+#define GHWCFG2_SLAVE_ONLY_ARCH            0
295
+#define GHWCFG2_EXT_DMA_ARCH            1
296
+#define GHWCFG2_INT_DMA_ARCH            2
297
+#define GHWCFG2_OP_MODE_MASK            (0x7 << 0)
298
+#define GHWCFG2_OP_MODE_SHIFT            0
299
+#define GHWCFG2_OP_MODE_HNP_SRP_CAPABLE        0
300
+#define GHWCFG2_OP_MODE_SRP_ONLY_CAPABLE    1
301
+#define GHWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE    2
302
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_DEVICE    3
303
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_DEVICE    4
304
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_HOST    5
305
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST    6
306
+#define GHWCFG2_OP_MODE_UNDEFINED        7
307
+
308
+#define GHWCFG3                HSOTG_REG(0x004c)
309
+#define GHWCFG3_DFIFO_DEPTH_MASK        (0xffff << 16)
310
+#define GHWCFG3_DFIFO_DEPTH_SHIFT        16
311
+#define GHWCFG3_OTG_LPM_EN            BIT(15)
312
+#define GHWCFG3_BC_SUPPORT            BIT(14)
313
+#define GHWCFG3_OTG_ENABLE_HSIC            BIT(13)
314
+#define GHWCFG3_ADP_SUPP            BIT(12)
315
+#define GHWCFG3_SYNCH_RESET_TYPE        BIT(11)
316
+#define GHWCFG3_OPTIONAL_FEATURES        BIT(10)
317
+#define GHWCFG3_VENDOR_CTRL_IF            BIT(9)
318
+#define GHWCFG3_I2C                BIT(8)
319
+#define GHWCFG3_OTG_FUNC            BIT(7)
320
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_MASK    (0x7 << 4)
321
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT    4
322
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_MASK    (0xf << 0)
323
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT    0
324
+
325
+#define GHWCFG4                HSOTG_REG(0x0050)
326
+#define GHWCFG4_DESC_DMA_DYN            BIT(31)
327
+#define GHWCFG4_DESC_DMA            BIT(30)
328
+#define GHWCFG4_NUM_IN_EPS_MASK            (0xf << 26)
329
+#define GHWCFG4_NUM_IN_EPS_SHIFT        26
330
+#define GHWCFG4_DED_FIFO_EN            BIT(25)
331
+#define GHWCFG4_DED_FIFO_SHIFT        25
332
+#define GHWCFG4_SESSION_END_FILT_EN        BIT(24)
333
+#define GHWCFG4_B_VALID_FILT_EN            BIT(23)
334
+#define GHWCFG4_A_VALID_FILT_EN            BIT(22)
335
+#define GHWCFG4_VBUS_VALID_FILT_EN        BIT(21)
336
+#define GHWCFG4_IDDIG_FILT_EN            BIT(20)
337
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_MASK    (0xf << 16)
338
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_SHIFT    16
339
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK    (0x3 << 14)
340
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_SHIFT    14
341
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8        0
342
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_16        1
343
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8_OR_16    2
344
+#define GHWCFG4_ACG_SUPPORTED            BIT(12)
345
+#define GHWCFG4_IPG_ISOC_SUPPORTED        BIT(11)
346
+#define GHWCFG4_SERVICE_INTERVAL_SUPPORTED BIT(10)
347
+#define GHWCFG4_XHIBER                BIT(7)
348
+#define GHWCFG4_HIBER                BIT(6)
349
+#define GHWCFG4_MIN_AHB_FREQ            BIT(5)
350
+#define GHWCFG4_POWER_OPTIMIZ            BIT(4)
351
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_MASK    (0xf << 0)
352
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_SHIFT    0
353
+
354
+#define GLPMCFG                HSOTG_REG(0x0054)
355
+#define GLPMCFG_INVSELHSIC        BIT(31)
356
+#define GLPMCFG_HSICCON            BIT(30)
357
+#define GLPMCFG_RSTRSLPSTS        BIT(29)
358
+#define GLPMCFG_ENBESL            BIT(28)
359
+#define GLPMCFG_LPM_RETRYCNT_STS_MASK    (0x7 << 25)
360
+#define GLPMCFG_LPM_RETRYCNT_STS_SHIFT    25
361
+#define GLPMCFG_SNDLPM            BIT(24)
362
+#define GLPMCFG_RETRY_CNT_MASK        (0x7 << 21)
363
+#define GLPMCFG_RETRY_CNT_SHIFT        21
364
+#define GLPMCFG_LPM_REJECT_CTRL_CONTROL    BIT(21)
365
+#define GLPMCFG_LPM_ACCEPT_CTRL_ISOC    BIT(22)
366
+#define GLPMCFG_LPM_CHNL_INDX_MASK    (0xf << 17)
367
+#define GLPMCFG_LPM_CHNL_INDX_SHIFT    17
368
+#define GLPMCFG_L1RESUMEOK        BIT(16)
369
+#define GLPMCFG_SLPSTS            BIT(15)
370
+#define GLPMCFG_COREL1RES_MASK        (0x3 << 13)
371
+#define GLPMCFG_COREL1RES_SHIFT        13
372
+#define GLPMCFG_HIRD_THRES_MASK        (0x1f << 8)
373
+#define GLPMCFG_HIRD_THRES_SHIFT    8
374
+#define GLPMCFG_HIRD_THRES_EN        (0x10 << 8)
375
+#define GLPMCFG_ENBLSLPM        BIT(7)
376
+#define GLPMCFG_BREMOTEWAKE        BIT(6)
377
+#define GLPMCFG_HIRD_MASK        (0xf << 2)
378
+#define GLPMCFG_HIRD_SHIFT        2
379
+#define GLPMCFG_APPL1RES        BIT(1)
380
+#define GLPMCFG_LPMCAP            BIT(0)
381
+
382
+#define GPWRDN                HSOTG_REG(0x0058)
383
+#define GPWRDN_MULT_VAL_ID_BC_MASK    (0x1f << 24)
384
+#define GPWRDN_MULT_VAL_ID_BC_SHIFT    24
385
+#define GPWRDN_ADP_INT            BIT(23)
386
+#define GPWRDN_BSESSVLD            BIT(22)
387
+#define GPWRDN_IDSTS            BIT(21)
388
+#define GPWRDN_LINESTATE_MASK        (0x3 << 19)
389
+#define GPWRDN_LINESTATE_SHIFT        19
390
+#define GPWRDN_STS_CHGINT_MSK        BIT(18)
391
+#define GPWRDN_STS_CHGINT        BIT(17)
392
+#define GPWRDN_SRP_DET_MSK        BIT(16)
393
+#define GPWRDN_SRP_DET            BIT(15)
394
+#define GPWRDN_CONNECT_DET_MSK        BIT(14)
395
+#define GPWRDN_CONNECT_DET        BIT(13)
396
+#define GPWRDN_DISCONN_DET_MSK        BIT(12)
397
+#define GPWRDN_DISCONN_DET        BIT(11)
398
+#define GPWRDN_RST_DET_MSK        BIT(10)
399
+#define GPWRDN_RST_DET            BIT(9)
400
+#define GPWRDN_LNSTSCHG_MSK        BIT(8)
401
+#define GPWRDN_LNSTSCHG            BIT(7)
402
+#define GPWRDN_DIS_VBUS            BIT(6)
403
+#define GPWRDN_PWRDNSWTCH        BIT(5)
404
+#define GPWRDN_PWRDNRSTN        BIT(4)
405
+#define GPWRDN_PWRDNCLMP        BIT(3)
406
+#define GPWRDN_RESTORE            BIT(2)
407
+#define GPWRDN_PMUACTV            BIT(1)
408
+#define GPWRDN_PMUINTSEL        BIT(0)
409
+
410
+#define GDFIFOCFG            HSOTG_REG(0x005c)
411
+#define GDFIFOCFG_EPINFOBASE_MASK    (0xffff << 16)
412
+#define GDFIFOCFG_EPINFOBASE_SHIFT    16
413
+#define GDFIFOCFG_GDFIFOCFG_MASK    (0xffff << 0)
414
+#define GDFIFOCFG_GDFIFOCFG_SHIFT    0
415
+
416
+#define ADPCTL                HSOTG_REG(0x0060)
417
+#define ADPCTL_AR_MASK            (0x3 << 27)
418
+#define ADPCTL_AR_SHIFT            27
419
+#define ADPCTL_ADP_TMOUT_INT_MSK    BIT(26)
420
+#define ADPCTL_ADP_SNS_INT_MSK        BIT(25)
421
+#define ADPCTL_ADP_PRB_INT_MSK        BIT(24)
422
+#define ADPCTL_ADP_TMOUT_INT        BIT(23)
423
+#define ADPCTL_ADP_SNS_INT        BIT(22)
424
+#define ADPCTL_ADP_PRB_INT        BIT(21)
425
+#define ADPCTL_ADPENA            BIT(20)
426
+#define ADPCTL_ADPRES            BIT(19)
427
+#define ADPCTL_ENASNS            BIT(18)
428
+#define ADPCTL_ENAPRB            BIT(17)
429
+#define ADPCTL_RTIM_MASK        (0x7ff << 6)
430
+#define ADPCTL_RTIM_SHIFT        6
431
+#define ADPCTL_PRB_PER_MASK        (0x3 << 4)
432
+#define ADPCTL_PRB_PER_SHIFT        4
433
+#define ADPCTL_PRB_DELTA_MASK        (0x3 << 2)
434
+#define ADPCTL_PRB_DELTA_SHIFT        2
435
+#define ADPCTL_PRB_DSCHRG_MASK        (0x3 << 0)
436
+#define ADPCTL_PRB_DSCHRG_SHIFT        0
437
+
438
+#define GREFCLK                 HSOTG_REG(0x0064)
439
+#define GREFCLK_REFCLKPER_MASK         (0x1ffff << 15)
440
+#define GREFCLK_REFCLKPER_SHIFT         15
441
+#define GREFCLK_REF_CLK_MODE         BIT(14)
442
+#define GREFCLK_SOF_CNT_WKUP_ALERT_MASK     (0x3ff)
443
+#define GREFCLK_SOF_CNT_WKUP_ALERT_SHIFT 0
444
+
445
+#define GINTMSK2            HSOTG_REG(0x0068)
446
+#define GINTMSK2_WKUP_ALERT_INT_MSK    BIT(0)
447
+
448
+#define GINTSTS2            HSOTG_REG(0x006c)
449
+#define GINTSTS2_WKUP_ALERT_INT        BIT(0)
450
+
451
+#define HPTXFSIZ            HSOTG_REG(0x100)
452
+/* Use FIFOSIZE_* constants to access this register */
453
+
454
+#define DPTXFSIZN(_a)            HSOTG_REG(0x104 + (((_a) - 1) * 4))
455
+/* Use FIFOSIZE_* constants to access this register */
456
+
457
+/* These apply to the GNPTXFSIZ, HPTXFSIZ and DPTXFSIZN registers */
458
+#define FIFOSIZE_DEPTH_MASK        (0xffff << 16)
459
+#define FIFOSIZE_DEPTH_SHIFT        16
460
+#define FIFOSIZE_STARTADDR_MASK        (0xffff << 0)
461
+#define FIFOSIZE_STARTADDR_SHIFT    0
462
+#define FIFOSIZE_DEPTH_GET(_x)        (((_x) >> 16) & 0xffff)
463
+
464
+/* Device mode registers */
465
+
466
+#define DCFG                HSOTG_REG(0x800)
467
+#define DCFG_DESCDMA_EN            BIT(23)
468
+#define DCFG_EPMISCNT_MASK        (0x1f << 18)
469
+#define DCFG_EPMISCNT_SHIFT        18
470
+#define DCFG_EPMISCNT_LIMIT        0x1f
471
+#define DCFG_EPMISCNT(_x)        ((_x) << 18)
472
+#define DCFG_IPG_ISOC_SUPPORDED        BIT(17)
473
+#define DCFG_PERFRINT_MASK        (0x3 << 11)
474
+#define DCFG_PERFRINT_SHIFT        11
475
+#define DCFG_PERFRINT_LIMIT        0x3
476
+#define DCFG_PERFRINT(_x)        ((_x) << 11)
477
+#define DCFG_DEVADDR_MASK        (0x7f << 4)
478
+#define DCFG_DEVADDR_SHIFT        4
479
+#define DCFG_DEVADDR_LIMIT        0x7f
480
+#define DCFG_DEVADDR(_x)        ((_x) << 4)
481
+#define DCFG_NZ_STS_OUT_HSHK        BIT(2)
482
+#define DCFG_DEVSPD_MASK        (0x3 << 0)
483
+#define DCFG_DEVSPD_SHIFT        0
484
+#define DCFG_DEVSPD_HS            0
485
+#define DCFG_DEVSPD_FS            1
486
+#define DCFG_DEVSPD_LS            2
487
+#define DCFG_DEVSPD_FS48        3
488
+
489
+#define DCTL                HSOTG_REG(0x804)
490
+#define DCTL_SERVICE_INTERVAL_SUPPORTED BIT(19)
491
+#define DCTL_PWRONPRGDONE        BIT(11)
492
+#define DCTL_CGOUTNAK            BIT(10)
493
+#define DCTL_SGOUTNAK            BIT(9)
494
+#define DCTL_CGNPINNAK            BIT(8)
495
+#define DCTL_SGNPINNAK            BIT(7)
496
+#define DCTL_TSTCTL_MASK        (0x7 << 4)
497
+#define DCTL_TSTCTL_SHIFT        4
498
+#define DCTL_GOUTNAKSTS            BIT(3)
499
+#define DCTL_GNPINNAKSTS        BIT(2)
500
+#define DCTL_SFTDISCON            BIT(1)
501
+#define DCTL_RMTWKUPSIG            BIT(0)
502
+
503
+#define DSTS                HSOTG_REG(0x808)
504
+#define DSTS_SOFFN_MASK            (0x3fff << 8)
505
+#define DSTS_SOFFN_SHIFT        8
506
+#define DSTS_SOFFN_LIMIT        0x3fff
507
+#define DSTS_SOFFN(_x)            ((_x) << 8)
508
+#define DSTS_ERRATICERR            BIT(3)
509
+#define DSTS_ENUMSPD_MASK        (0x3 << 1)
510
+#define DSTS_ENUMSPD_SHIFT        1
511
+#define DSTS_ENUMSPD_HS            0
512
+#define DSTS_ENUMSPD_FS            1
513
+#define DSTS_ENUMSPD_LS            2
514
+#define DSTS_ENUMSPD_FS48        3
515
+#define DSTS_SUSPSTS            BIT(0)
516
+
517
+#define DIEPMSK                HSOTG_REG(0x810)
518
+#define DIEPMSK_NAKMSK            BIT(13)
519
+#define DIEPMSK_BNAININTRMSK        BIT(9)
520
+#define DIEPMSK_TXFIFOUNDRNMSK        BIT(8)
521
+#define DIEPMSK_TXFIFOEMPTY        BIT(7)
522
+#define DIEPMSK_INEPNAKEFFMSK        BIT(6)
523
+#define DIEPMSK_INTKNEPMISMSK        BIT(5)
524
+#define DIEPMSK_INTKNTXFEMPMSK        BIT(4)
525
+#define DIEPMSK_TIMEOUTMSK        BIT(3)
526
+#define DIEPMSK_AHBERRMSK        BIT(2)
527
+#define DIEPMSK_EPDISBLDMSK        BIT(1)
528
+#define DIEPMSK_XFERCOMPLMSK        BIT(0)
529
+
530
+#define DOEPMSK                HSOTG_REG(0x814)
531
+#define DOEPMSK_BNAMSK            BIT(9)
532
+#define DOEPMSK_BACK2BACKSETUP        BIT(6)
533
+#define DOEPMSK_STSPHSERCVDMSK        BIT(5)
534
+#define DOEPMSK_OUTTKNEPDISMSK        BIT(4)
535
+#define DOEPMSK_SETUPMSK        BIT(3)
536
+#define DOEPMSK_AHBERRMSK        BIT(2)
537
+#define DOEPMSK_EPDISBLDMSK        BIT(1)
538
+#define DOEPMSK_XFERCOMPLMSK        BIT(0)
539
+
540
+#define DAINT                HSOTG_REG(0x818)
541
+#define DAINTMSK            HSOTG_REG(0x81C)
542
+#define DAINT_OUTEP_SHIFT        16
543
+#define DAINT_OUTEP(_x)            (1 << ((_x) + 16))
544
+#define DAINT_INEP(_x)            (1 << (_x))
545
+
546
+#define DTKNQR1                HSOTG_REG(0x820)
547
+#define DTKNQR2                HSOTG_REG(0x824)
548
+#define DTKNQR3                HSOTG_REG(0x830)
549
+#define DTKNQR4                HSOTG_REG(0x834)
550
+#define DIEPEMPMSK            HSOTG_REG(0x834)
551
+
552
+#define DVBUSDIS            HSOTG_REG(0x828)
553
+#define DVBUSPULSE            HSOTG_REG(0x82C)
554
+
555
+#define DIEPCTL0            HSOTG_REG(0x900)
556
+#define DIEPCTL(_a)            HSOTG_REG(0x900 + ((_a) * 0x20))
557
+
558
+#define DOEPCTL0            HSOTG_REG(0xB00)
559
+#define DOEPCTL(_a)            HSOTG_REG(0xB00 + ((_a) * 0x20))
560
+
561
+/* EP0 specialness:
562
+ * bits[29..28] - reserved (no SetD0PID, SetD1PID)
563
+ * bits[25..22] - should always be zero, this isn't a periodic endpoint
564
+ * bits[10..0] - MPS setting different for EP0
565
+ */
566
+#define D0EPCTL_MPS_MASK        (0x3 << 0)
567
+#define D0EPCTL_MPS_SHIFT        0
568
+#define D0EPCTL_MPS_64            0
569
+#define D0EPCTL_MPS_32            1
570
+#define D0EPCTL_MPS_16            2
571
+#define D0EPCTL_MPS_8            3
572
+
573
+#define DXEPCTL_EPENA            BIT(31)
574
+#define DXEPCTL_EPDIS            BIT(30)
575
+#define DXEPCTL_SETD1PID        BIT(29)
576
+#define DXEPCTL_SETODDFR        BIT(29)
577
+#define DXEPCTL_SETD0PID        BIT(28)
578
+#define DXEPCTL_SETEVENFR        BIT(28)
579
+#define DXEPCTL_SNAK            BIT(27)
580
+#define DXEPCTL_CNAK            BIT(26)
581
+#define DXEPCTL_TXFNUM_MASK        (0xf << 22)
582
+#define DXEPCTL_TXFNUM_SHIFT        22
583
+#define DXEPCTL_TXFNUM_LIMIT        0xf
584
+#define DXEPCTL_TXFNUM(_x)        ((_x) << 22)
585
+#define DXEPCTL_STALL            BIT(21)
586
+#define DXEPCTL_SNP            BIT(20)
587
+#define DXEPCTL_EPTYPE_MASK        (0x3 << 18)
588
+#define DXEPCTL_EPTYPE_CONTROL        (0x0 << 18)
589
+#define DXEPCTL_EPTYPE_ISO        (0x1 << 18)
590
+#define DXEPCTL_EPTYPE_BULK        (0x2 << 18)
591
+#define DXEPCTL_EPTYPE_INTERRUPT    (0x3 << 18)
592
+
593
+#define DXEPCTL_NAKSTS            BIT(17)
594
+#define DXEPCTL_DPID            BIT(16)
595
+#define DXEPCTL_EOFRNUM            BIT(16)
596
+#define DXEPCTL_USBACTEP        BIT(15)
597
+#define DXEPCTL_NEXTEP_MASK        (0xf << 11)
598
+#define DXEPCTL_NEXTEP_SHIFT        11
599
+#define DXEPCTL_NEXTEP_LIMIT        0xf
600
+#define DXEPCTL_NEXTEP(_x)        ((_x) << 11)
601
+#define DXEPCTL_MPS_MASK        (0x7ff << 0)
602
+#define DXEPCTL_MPS_SHIFT        0
603
+#define DXEPCTL_MPS_LIMIT        0x7ff
604
+#define DXEPCTL_MPS(_x)            ((_x) << 0)
605
+
606
+#define DIEPINT(_a)            HSOTG_REG(0x908 + ((_a) * 0x20))
607
+#define DOEPINT(_a)            HSOTG_REG(0xB08 + ((_a) * 0x20))
608
+#define DXEPINT_SETUP_RCVD        BIT(15)
609
+#define DXEPINT_NYETINTRPT        BIT(14)
610
+#define DXEPINT_NAKINTRPT        BIT(13)
611
+#define DXEPINT_BBLEERRINTRPT        BIT(12)
612
+#define DXEPINT_PKTDRPSTS        BIT(11)
613
+#define DXEPINT_BNAINTR            BIT(9)
614
+#define DXEPINT_TXFIFOUNDRN        BIT(8)
615
+#define DXEPINT_OUTPKTERR        BIT(8)
616
+#define DXEPINT_TXFEMP            BIT(7)
617
+#define DXEPINT_INEPNAKEFF        BIT(6)
618
+#define DXEPINT_BACK2BACKSETUP        BIT(6)
619
+#define DXEPINT_INTKNEPMIS        BIT(5)
620
+#define DXEPINT_STSPHSERCVD        BIT(5)
621
+#define DXEPINT_INTKNTXFEMP        BIT(4)
622
+#define DXEPINT_OUTTKNEPDIS        BIT(4)
623
+#define DXEPINT_TIMEOUT            BIT(3)
624
+#define DXEPINT_SETUP            BIT(3)
625
+#define DXEPINT_AHBERR            BIT(2)
626
+#define DXEPINT_EPDISBLD        BIT(1)
627
+#define DXEPINT_XFERCOMPL        BIT(0)
628
+
629
+#define DIEPTSIZ0            HSOTG_REG(0x910)
630
+#define DIEPTSIZ0_PKTCNT_MASK        (0x3 << 19)
631
+#define DIEPTSIZ0_PKTCNT_SHIFT        19
632
+#define DIEPTSIZ0_PKTCNT_LIMIT        0x3
633
+#define DIEPTSIZ0_PKTCNT(_x)        ((_x) << 19)
634
+#define DIEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
635
+#define DIEPTSIZ0_XFERSIZE_SHIFT    0
636
+#define DIEPTSIZ0_XFERSIZE_LIMIT    0x7f
637
+#define DIEPTSIZ0_XFERSIZE(_x)        ((_x) << 0)
638
+
639
+#define DOEPTSIZ0            HSOTG_REG(0xB10)
640
+#define DOEPTSIZ0_SUPCNT_MASK        (0x3 << 29)
641
+#define DOEPTSIZ0_SUPCNT_SHIFT        29
642
+#define DOEPTSIZ0_SUPCNT_LIMIT        0x3
643
+#define DOEPTSIZ0_SUPCNT(_x)        ((_x) << 29)
644
+#define DOEPTSIZ0_PKTCNT        BIT(19)
645
+#define DOEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
646
+#define DOEPTSIZ0_XFERSIZE_SHIFT    0
647
+
648
+#define DIEPTSIZ(_a)            HSOTG_REG(0x910 + ((_a) * 0x20))
649
+#define DOEPTSIZ(_a)            HSOTG_REG(0xB10 + ((_a) * 0x20))
650
+#define DXEPTSIZ_MC_MASK        (0x3 << 29)
651
+#define DXEPTSIZ_MC_SHIFT        29
652
+#define DXEPTSIZ_MC_LIMIT        0x3
653
+#define DXEPTSIZ_MC(_x)            ((_x) << 29)
654
+#define DXEPTSIZ_PKTCNT_MASK        (0x3ff << 19)
655
+#define DXEPTSIZ_PKTCNT_SHIFT        19
656
+#define DXEPTSIZ_PKTCNT_LIMIT        0x3ff
657
+#define DXEPTSIZ_PKTCNT_GET(_v)        (((_v) >> 19) & 0x3ff)
658
+#define DXEPTSIZ_PKTCNT(_x)        ((_x) << 19)
659
+#define DXEPTSIZ_XFERSIZE_MASK        (0x7ffff << 0)
660
+#define DXEPTSIZ_XFERSIZE_SHIFT        0
661
+#define DXEPTSIZ_XFERSIZE_LIMIT        0x7ffff
662
+#define DXEPTSIZ_XFERSIZE_GET(_v)    (((_v) >> 0) & 0x7ffff)
663
+#define DXEPTSIZ_XFERSIZE(_x)        ((_x) << 0)
664
+
665
+#define DIEPDMA(_a)            HSOTG_REG(0x914 + ((_a) * 0x20))
666
+#define DOEPDMA(_a)            HSOTG_REG(0xB14 + ((_a) * 0x20))
667
+
668
+#define DTXFSTS(_a)            HSOTG_REG(0x918 + ((_a) * 0x20))
669
+
670
+#define PCGCTL                HSOTG_REG(0x0e00)
671
+#define PCGCTL_IF_DEV_MODE        BIT(31)
672
+#define PCGCTL_P2HD_PRT_SPD_MASK    (0x3 << 29)
673
+#define PCGCTL_P2HD_PRT_SPD_SHIFT    29
674
+#define PCGCTL_P2HD_DEV_ENUM_SPD_MASK    (0x3 << 27)
675
+#define PCGCTL_P2HD_DEV_ENUM_SPD_SHIFT    27
676
+#define PCGCTL_MAC_DEV_ADDR_MASK    (0x7f << 20)
677
+#define PCGCTL_MAC_DEV_ADDR_SHIFT    20
678
+#define PCGCTL_MAX_TERMSEL        BIT(19)
679
+#define PCGCTL_MAX_XCVRSELECT_MASK    (0x3 << 17)
680
+#define PCGCTL_MAX_XCVRSELECT_SHIFT    17
681
+#define PCGCTL_PORT_POWER        BIT(16)
682
+#define PCGCTL_PRT_CLK_SEL_MASK        (0x3 << 14)
683
+#define PCGCTL_PRT_CLK_SEL_SHIFT    14
684
+#define PCGCTL_ESS_REG_RESTORED        BIT(13)
685
+#define PCGCTL_EXTND_HIBER_SWITCH    BIT(12)
686
+#define PCGCTL_EXTND_HIBER_PWRCLMP    BIT(11)
687
+#define PCGCTL_ENBL_EXTND_HIBER        BIT(10)
688
+#define PCGCTL_RESTOREMODE        BIT(9)
689
+#define PCGCTL_RESETAFTSUSP        BIT(8)
690
+#define PCGCTL_DEEP_SLEEP        BIT(7)
691
+#define PCGCTL_PHY_IN_SLEEP        BIT(6)
692
+#define PCGCTL_ENBL_SLEEP_GATING    BIT(5)
693
+#define PCGCTL_RSTPDWNMODULE        BIT(3)
694
+#define PCGCTL_PWRCLMP            BIT(2)
695
+#define PCGCTL_GATEHCLK            BIT(1)
696
+#define PCGCTL_STOPPCLK            BIT(0)
697
+
698
+#define PCGCCTL1 HSOTG_REG(0xe04)
699
+#define PCGCCTL1_TIMER (0x3 << 1)
700
+#define PCGCCTL1_GATEEN BIT(0)
701
+
702
+#define EPFIFO(_a)            HSOTG_REG(0x1000 + ((_a) * 0x1000))
703
+
704
+/* Host Mode Registers */
705
+
706
+#define HCFG                HSOTG_REG(0x0400)
707
+#define HCFG_MODECHTIMEN        BIT(31)
708
+#define HCFG_PERSCHEDENA        BIT(26)
709
+#define HCFG_FRLISTEN_MASK        (0x3 << 24)
710
+#define HCFG_FRLISTEN_SHIFT        24
711
+#define HCFG_FRLISTEN_8                (0 << 24)
712
+#define FRLISTEN_8_SIZE                8
713
+#define HCFG_FRLISTEN_16            BIT(24)
714
+#define FRLISTEN_16_SIZE            16
715
+#define HCFG_FRLISTEN_32            (2 << 24)
716
+#define FRLISTEN_32_SIZE            32
717
+#define HCFG_FRLISTEN_64            (3 << 24)
718
+#define FRLISTEN_64_SIZE            64
719
+#define HCFG_DESCDMA            BIT(23)
720
+#define HCFG_RESVALID_MASK        (0xff << 8)
721
+#define HCFG_RESVALID_SHIFT        8
722
+#define HCFG_ENA32KHZ            BIT(7)
723
+#define HCFG_FSLSSUPP            BIT(2)
724
+#define HCFG_FSLSPCLKSEL_MASK        (0x3 << 0)
725
+#define HCFG_FSLSPCLKSEL_SHIFT        0
726
+#define HCFG_FSLSPCLKSEL_30_60_MHZ    0
727
+#define HCFG_FSLSPCLKSEL_48_MHZ        1
728
+#define HCFG_FSLSPCLKSEL_6_MHZ        2
729
+
730
+#define HFIR                HSOTG_REG(0x0404)
731
+#define HFIR_FRINT_MASK            (0xffff << 0)
732
+#define HFIR_FRINT_SHIFT        0
733
+#define HFIR_RLDCTRL            BIT(16)
734
+
735
+#define HFNUM                HSOTG_REG(0x0408)
736
+#define HFNUM_FRREM_MASK        (0xffff << 16)
737
+#define HFNUM_FRREM_SHIFT        16
738
+#define HFNUM_FRNUM_MASK        (0xffff << 0)
739
+#define HFNUM_FRNUM_SHIFT        0
740
+#define HFNUM_MAX_FRNUM            0x3fff
741
+
742
+#define HPTXSTS                HSOTG_REG(0x0410)
743
+#define TXSTS_QTOP_ODD            BIT(31)
744
+#define TXSTS_QTOP_CHNEP_MASK        (0xf << 27)
745
+#define TXSTS_QTOP_CHNEP_SHIFT        27
746
+#define TXSTS_QTOP_TOKEN_MASK        (0x3 << 25)
747
+#define TXSTS_QTOP_TOKEN_SHIFT        25
748
+#define TXSTS_QTOP_TERMINATE        BIT(24)
749
+#define TXSTS_QSPCAVAIL_MASK        (0xff << 16)
750
+#define TXSTS_QSPCAVAIL_SHIFT        16
751
+#define TXSTS_FSPCAVAIL_MASK        (0xffff << 0)
752
+#define TXSTS_FSPCAVAIL_SHIFT        0
753
+
754
+#define HAINT                HSOTG_REG(0x0414)
755
+#define HAINTMSK            HSOTG_REG(0x0418)
756
+#define HFLBADDR            HSOTG_REG(0x041c)
757
+
758
+#define HPRT0                HSOTG_REG(0x0440)
759
+#define HPRT0_SPD_MASK            (0x3 << 17)
760
+#define HPRT0_SPD_SHIFT            17
761
+#define HPRT0_SPD_HIGH_SPEED        0
762
+#define HPRT0_SPD_FULL_SPEED        1
763
+#define HPRT0_SPD_LOW_SPEED        2
764
+#define HPRT0_TSTCTL_MASK        (0xf << 13)
765
+#define HPRT0_TSTCTL_SHIFT        13
766
+#define HPRT0_PWR            BIT(12)
767
+#define HPRT0_LNSTS_MASK        (0x3 << 10)
768
+#define HPRT0_LNSTS_SHIFT        10
769
+#define HPRT0_RST            BIT(8)
770
+#define HPRT0_SUSP            BIT(7)
771
+#define HPRT0_RES            BIT(6)
772
+#define HPRT0_OVRCURRCHG        BIT(5)
773
+#define HPRT0_OVRCURRACT        BIT(4)
774
+#define HPRT0_ENACHG            BIT(3)
775
+#define HPRT0_ENA            BIT(2)
776
+#define HPRT0_CONNDET            BIT(1)
777
+#define HPRT0_CONNSTS            BIT(0)
778
+
779
+#define HCCHAR(_ch)            HSOTG_REG(0x0500 + 0x20 * (_ch))
780
+#define HCCHAR_CHENA            BIT(31)
781
+#define HCCHAR_CHDIS            BIT(30)
782
+#define HCCHAR_ODDFRM            BIT(29)
783
+#define HCCHAR_DEVADDR_MASK        (0x7f << 22)
784
+#define HCCHAR_DEVADDR_SHIFT        22
785
+#define HCCHAR_MULTICNT_MASK        (0x3 << 20)
786
+#define HCCHAR_MULTICNT_SHIFT        20
787
+#define HCCHAR_EPTYPE_MASK        (0x3 << 18)
788
+#define HCCHAR_EPTYPE_SHIFT        18
789
+#define HCCHAR_LSPDDEV            BIT(17)
790
+#define HCCHAR_EPDIR            BIT(15)
791
+#define HCCHAR_EPNUM_MASK        (0xf << 11)
792
+#define HCCHAR_EPNUM_SHIFT        11
793
+#define HCCHAR_MPS_MASK            (0x7ff << 0)
794
+#define HCCHAR_MPS_SHIFT        0
795
+
796
+#define HCSPLT(_ch)            HSOTG_REG(0x0504 + 0x20 * (_ch))
797
+#define HCSPLT_SPLTENA            BIT(31)
798
+#define HCSPLT_COMPSPLT            BIT(16)
799
+#define HCSPLT_XACTPOS_MASK        (0x3 << 14)
800
+#define HCSPLT_XACTPOS_SHIFT        14
801
+#define HCSPLT_XACTPOS_MID        0
802
+#define HCSPLT_XACTPOS_END        1
803
+#define HCSPLT_XACTPOS_BEGIN        2
804
+#define HCSPLT_XACTPOS_ALL        3
805
+#define HCSPLT_HUBADDR_MASK        (0x7f << 7)
806
+#define HCSPLT_HUBADDR_SHIFT        7
807
+#define HCSPLT_PRTADDR_MASK        (0x7f << 0)
808
+#define HCSPLT_PRTADDR_SHIFT        0
809
+
810
+#define HCINT(_ch)            HSOTG_REG(0x0508 + 0x20 * (_ch))
811
+#define HCINTMSK(_ch)            HSOTG_REG(0x050c + 0x20 * (_ch))
812
+#define HCINTMSK_RESERVED14_31        (0x3ffff << 14)
813
+#define HCINTMSK_FRM_LIST_ROLL        BIT(13)
814
+#define HCINTMSK_XCS_XACT        BIT(12)
815
+#define HCINTMSK_BNA            BIT(11)
816
+#define HCINTMSK_DATATGLERR        BIT(10)
817
+#define HCINTMSK_FRMOVRUN        BIT(9)
818
+#define HCINTMSK_BBLERR            BIT(8)
819
+#define HCINTMSK_XACTERR        BIT(7)
820
+#define HCINTMSK_NYET            BIT(6)
821
+#define HCINTMSK_ACK            BIT(5)
822
+#define HCINTMSK_NAK            BIT(4)
823
+#define HCINTMSK_STALL            BIT(3)
824
+#define HCINTMSK_AHBERR            BIT(2)
825
+#define HCINTMSK_CHHLTD            BIT(1)
826
+#define HCINTMSK_XFERCOMPL        BIT(0)
827
+
828
+#define HCTSIZ(_ch)            HSOTG_REG(0x0510 + 0x20 * (_ch))
829
+#define TSIZ_DOPNG            BIT(31)
830
+#define TSIZ_SC_MC_PID_MASK        (0x3 << 29)
831
+#define TSIZ_SC_MC_PID_SHIFT        29
832
+#define TSIZ_SC_MC_PID_DATA0        0
833
+#define TSIZ_SC_MC_PID_DATA2        1
834
+#define TSIZ_SC_MC_PID_DATA1        2
835
+#define TSIZ_SC_MC_PID_MDATA        3
836
+#define TSIZ_SC_MC_PID_SETUP        3
837
+#define TSIZ_PKTCNT_MASK        (0x3ff << 19)
838
+#define TSIZ_PKTCNT_SHIFT        19
839
+#define TSIZ_NTD_MASK            (0xff << 8)
840
+#define TSIZ_NTD_SHIFT            8
841
+#define TSIZ_SCHINFO_MASK        (0xff << 0)
842
+#define TSIZ_SCHINFO_SHIFT        0
843
+#define TSIZ_XFERSIZE_MASK        (0x7ffff << 0)
844
+#define TSIZ_XFERSIZE_SHIFT        0
845
+
846
+#define HCDMA(_ch)            HSOTG_REG(0x0514 + 0x20 * (_ch))
847
+
848
+#define HCDMAB(_ch)            HSOTG_REG(0x051c + 0x20 * (_ch))
849
+
850
+#define HCFIFO(_ch)            HSOTG_REG(0x1000 + 0x1000 * (_ch))
851
+
852
+/**
853
+ * struct dwc2_dma_desc - DMA descriptor structure,
854
+ * used for both host and gadget modes
855
+ *
856
+ * @status: DMA descriptor status quadlet
857
+ * @buf: DMA descriptor data buffer pointer
858
+ *
859
+ * DMA Descriptor structure contains two quadlets:
860
+ * Status quadlet and Data buffer pointer.
861
+ */
862
+struct dwc2_dma_desc {
863
+    uint32_t status;
864
+    uint32_t buf;
865
+} __packed;
866
+
867
+/* Host Mode DMA descriptor status quadlet */
868
+
869
+#define HOST_DMA_A            BIT(31)
870
+#define HOST_DMA_STS_MASK        (0x3 << 28)
871
+#define HOST_DMA_STS_SHIFT        28
872
+#define HOST_DMA_STS_PKTERR        BIT(28)
873
+#define HOST_DMA_EOL            BIT(26)
874
+#define HOST_DMA_IOC            BIT(25)
875
+#define HOST_DMA_SUP            BIT(24)
876
+#define HOST_DMA_ALT_QTD        BIT(23)
877
+#define HOST_DMA_QTD_OFFSET_MASK    (0x3f << 17)
878
+#define HOST_DMA_QTD_OFFSET_SHIFT    17
879
+#define HOST_DMA_ISOC_NBYTES_MASK    (0xfff << 0)
880
+#define HOST_DMA_ISOC_NBYTES_SHIFT    0
881
+#define HOST_DMA_NBYTES_MASK        (0x1ffff << 0)
882
+#define HOST_DMA_NBYTES_SHIFT        0
883
+#define HOST_DMA_NBYTES_LIMIT        131071
884
+
885
+/* Device Mode DMA descriptor status quadlet */
886
+
887
+#define DEV_DMA_BUFF_STS_MASK        (0x3 << 30)
888
+#define DEV_DMA_BUFF_STS_SHIFT        30
889
+#define DEV_DMA_BUFF_STS_HREADY        0
890
+#define DEV_DMA_BUFF_STS_DMABUSY    1
891
+#define DEV_DMA_BUFF_STS_DMADONE    2
892
+#define DEV_DMA_BUFF_STS_HBUSY        3
893
+#define DEV_DMA_STS_MASK        (0x3 << 28)
894
+#define DEV_DMA_STS_SHIFT        28
895
+#define DEV_DMA_STS_SUCC        0
896
+#define DEV_DMA_STS_BUFF_FLUSH        1
897
+#define DEV_DMA_STS_BUFF_ERR        3
898
+#define DEV_DMA_L            BIT(27)
899
+#define DEV_DMA_SHORT            BIT(26)
900
+#define DEV_DMA_IOC            BIT(25)
901
+#define DEV_DMA_SR            BIT(24)
902
+#define DEV_DMA_MTRF            BIT(23)
903
+#define DEV_DMA_ISOC_PID_MASK        (0x3 << 23)
904
+#define DEV_DMA_ISOC_PID_SHIFT        23
905
+#define DEV_DMA_ISOC_PID_DATA0        0
906
+#define DEV_DMA_ISOC_PID_DATA2        1
907
+#define DEV_DMA_ISOC_PID_DATA1        2
908
+#define DEV_DMA_ISOC_PID_MDATA        3
909
+#define DEV_DMA_ISOC_FRNUM_MASK        (0x7ff << 12)
910
+#define DEV_DMA_ISOC_FRNUM_SHIFT    12
911
+#define DEV_DMA_ISOC_TX_NBYTES_MASK    (0xfff << 0)
912
+#define DEV_DMA_ISOC_TX_NBYTES_LIMIT    0xfff
913
+#define DEV_DMA_ISOC_RX_NBYTES_MASK    (0x7ff << 0)
914
+#define DEV_DMA_ISOC_RX_NBYTES_LIMIT    0x7ff
915
+#define DEV_DMA_ISOC_NBYTES_SHIFT    0
916
+#define DEV_DMA_NBYTES_MASK        (0xffff << 0)
917
+#define DEV_DMA_NBYTES_SHIFT        0
918
+#define DEV_DMA_NBYTES_LIMIT        0xffff
919
+
920
+#define MAX_DMA_DESC_NUM_GENERIC    64
921
+#define MAX_DMA_DESC_NUM_HS_ISOC    256
922
+
923
+#endif /* __DWC2_HW_H__ */
924
--
925
2.20.1
926
927
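As a reading aid (not part of the patch): the mask/shift accessor macros above are used by the controller emulation to build and decode register and descriptor fields. A minimal sketch, assuming only the HOST_DMA_* macros defined in this header plus <stdint.h>/<stdbool.h>; the helper names are hypothetical:

/*
 * Usage sketch only, not part of the patch: pack and unpack the status
 * quadlet of a host-mode dwc2_dma_desc with the HOST_DMA_* macros above.
 */
static inline uint32_t host_dma_status_make(uint32_t nbytes, bool ioc, bool eol)
{
    uint32_t status = nbytes & HOST_DMA_NBYTES_MASK;    /* byte count, bits [16:0] */

    if (ioc) {
        status |= HOST_DMA_IOC;    /* interrupt on completion */
    }
    if (eol) {
        status |= HOST_DMA_EOL;    /* last descriptor in the list */
    }
    return status | HOST_DMA_A;    /* mark the descriptor active */
}

static inline uint32_t host_dma_status_nbytes(uint32_t status)
{
    return (status & HOST_DMA_NBYTES_MASK) >> HOST_DMA_NBYTES_SHIFT;
}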
Deleted patch
1
Convert the VSHL and VSLI insns from the Neon 2-registers-and-a-shift
2
group to decodetree.
3
1
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200522145520.6778-2-peter.maydell@linaro.org
7
---
8
target/arm/neon-dp.decode | 25 ++++++++++++++++++++++
9
target/arm/translate-neon.inc.c | 38 +++++++++++++++++++++++++++++++++
10
target/arm/translate.c | 18 +++++++---------
11
3 files changed, 71 insertions(+), 10 deletions(-)
12
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
16
+++ b/target/arm/neon-dp.decode
17
@@ -XXX,XX +XXX,XX @@ VRECPS_fp_3s 1111 001 0 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
18
VRSQRTS_fp_3s 1111 001 0 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
19
VMAXNM_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
20
VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
21
+
22
+######################################################################
23
+# 2-reg-and-shift grouping:
24
+# 1111 001 U 1 D immH:3 immL:3 Vd:4 opc:4 L Q M 1 Vm:4
25
+######################################################################
26
+&2reg_shift vm vd q shift size
27
+
28
+@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
29
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3
30
+@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
31
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2
32
+@2reg_shl_h .... ... . . . 01 shift:4 .... .... 0 q:1 . . .... \
33
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1
34
+@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
35
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0
36
+
37
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
38
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
39
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
40
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
41
+
42
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
43
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
44
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
45
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-neon.inc.c
49
+++ b/target/arm/translate-neon.inc.c
50
@@ -XXX,XX +XXX,XX @@ static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
51
DO_3S_FP_PAIR(VPADD, gen_helper_vfp_adds)
52
DO_3S_FP_PAIR(VPMAX, gen_helper_vfp_maxs)
53
DO_3S_FP_PAIR(VPMIN, gen_helper_vfp_mins)
54
+
55
+static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
56
+{
57
+ /* Handle a 2-reg-shift insn which can be vectorized. */
58
+ int vec_size = a->q ? 16 : 8;
59
+ int rd_ofs = neon_reg_offset(a->vd, 0);
60
+ int rm_ofs = neon_reg_offset(a->vm, 0);
61
+
62
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
63
+ return false;
64
+ }
65
+
66
+ /* UNDEF accesses to D16-D31 if they don't exist. */
67
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
68
+ ((a->vd | a->vm) & 0x10)) {
69
+ return false;
70
+ }
71
+
72
+ if ((a->vm | a->vd) & a->q) {
73
+ return false;
74
+ }
75
+
76
+ if (!vfp_access_check(s)) {
77
+ return true;
78
+ }
79
+
80
+ fn(a->size, rd_ofs, rm_ofs, a->shift, vec_size, vec_size);
81
+ return true;
82
+}
83
+
84
+#define DO_2SH(INSN, FUNC) \
85
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
86
+ { \
87
+ return do_vector_2sh(s, a, FUNC); \
88
+ } \
89
+
90
+DO_2SH(VSHL, tcg_gen_gvec_shli)
91
+DO_2SH(VSLI, gen_gvec_sli)
92
diff --git a/target/arm/translate.c b/target/arm/translate.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate.c
95
+++ b/target/arm/translate.c
96
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
97
if ((insn & 0x00380080) != 0) {
98
/* Two registers and shift. */
99
op = (insn >> 8) & 0xf;
100
+
101
+ switch (op) {
102
+ case 5: /* VSHL, VSLI */
103
+ return 1; /* handled by decodetree */
104
+ default:
105
+ break;
106
+ }
107
+
108
if (insn & (1 << 7)) {
109
/* 64-bit shift. */
110
if (op > 7) {
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
112
gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
113
vec_size, vec_size);
114
return 0;
115
-
116
- case 5: /* VSHL, VSLI */
117
- if (u) { /* VSLI */
118
- gen_gvec_sli(size, rd_ofs, rm_ofs, shift,
119
- vec_size, vec_size);
120
- } else { /* VSHL */
121
- tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
122
- vec_size, vec_size);
123
- }
124
- return 0;
125
}
126
127
if (size == 3) {
128
--
129
2.20.1
130
131
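A note for readers less familiar with the decodetree conversion pattern: the DO_2SH macro in the patch above generates one trivial trans function per instruction, forwarding to the shared do_vector_2sh() helper with the appropriate gvec expander. For example, DO_2SH(VSHL, tcg_gen_gvec_shli) expands to roughly:

/* Expansion of DO_2SH(VSHL, tcg_gen_gvec_shli), shown for illustration only */
static bool trans_VSHL_2sh(DisasContext *s, arg_2reg_shift *a)
{
    return do_vector_2sh(s, a, tcg_gen_gvec_shli);
}

so the operand checks (D16-D31 availability, quad-register alignment, FP access) are written once in do_vector_2sh() rather than per instruction.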
Deleted patch
1
Convert the VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree.
2
(These are the last instructions in the group that are vectorized;
3
the rest all require looping over each element.)
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-4-peter.maydell@linaro.org
8
---
9
target/arm/neon-dp.decode | 35 ++++++++++++++++++++++
10
target/arm/translate-neon.inc.c | 7 +++++
11
target/arm/translate.c | 52 +++------------------------------
12
3 files changed, 46 insertions(+), 48 deletions(-)
13
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
17
+++ b/target/arm/neon-dp.decode
18
@@ -XXX,XX +XXX,XX @@ VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
19
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
20
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
21
22
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
23
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
24
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
25
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
26
+
27
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
28
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
29
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
30
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
31
+
32
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
33
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
34
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
35
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
36
+
37
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
38
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
39
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
40
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
41
+
42
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
43
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
44
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
45
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
46
+
47
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
48
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
49
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
50
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
51
+
52
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_d
53
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_s
54
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_h
55
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_b
56
+
57
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
58
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
59
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
60
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/translate-neon.inc.c
63
+++ b/target/arm/translate-neon.inc.c
64
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
65
66
DO_2SH(VSHL, tcg_gen_gvec_shli)
67
DO_2SH(VSLI, gen_gvec_sli)
68
+DO_2SH(VSRI, gen_gvec_sri)
69
+DO_2SH(VSRA_S, gen_gvec_ssra)
70
+DO_2SH(VSRA_U, gen_gvec_usra)
71
+DO_2SH(VRSHR_S, gen_gvec_srshr)
72
+DO_2SH(VRSHR_U, gen_gvec_urshr)
73
+DO_2SH(VRSRA_S, gen_gvec_srsra)
74
+DO_2SH(VRSRA_U, gen_gvec_ursra)
75
76
static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
77
{
78
diff --git a/target/arm/translate.c b/target/arm/translate.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate.c
81
+++ b/target/arm/translate.c
82
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
83
84
switch (op) {
85
case 0: /* VSHR */
86
+ case 1: /* VSRA */
87
+ case 2: /* VRSHR */
88
+ case 3: /* VRSRA */
89
+ case 4: /* VSRI */
90
case 5: /* VSHL, VSLI */
91
return 1; /* handled by decodetree */
92
default:
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
94
shift = shift - (1 << (size + 3));
95
}
96
97
- switch (op) {
98
- case 1: /* VSRA */
99
- /* Right shift comes here negative. */
100
- shift = -shift;
101
- if (u) {
102
- gen_gvec_usra(size, rd_ofs, rm_ofs, shift,
103
- vec_size, vec_size);
104
- } else {
105
- gen_gvec_ssra(size, rd_ofs, rm_ofs, shift,
106
- vec_size, vec_size);
107
- }
108
- return 0;
109
-
110
- case 2: /* VRSHR */
111
- /* Right shift comes here negative. */
112
- shift = -shift;
113
- if (u) {
114
- gen_gvec_urshr(size, rd_ofs, rm_ofs, shift,
115
- vec_size, vec_size);
116
- } else {
117
- gen_gvec_srshr(size, rd_ofs, rm_ofs, shift,
118
- vec_size, vec_size);
119
- }
120
- return 0;
121
-
122
- case 3: /* VRSRA */
123
- /* Right shift comes here negative. */
124
- shift = -shift;
125
- if (u) {
126
- gen_gvec_ursra(size, rd_ofs, rm_ofs, shift,
127
- vec_size, vec_size);
128
- } else {
129
- gen_gvec_srsra(size, rd_ofs, rm_ofs, shift,
130
- vec_size, vec_size);
131
- }
132
- return 0;
133
-
134
- case 4: /* VSRI */
135
- if (!u) {
136
- return 1;
137
- }
138
- /* Right shift comes here negative. */
139
- shift = -shift;
140
- gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
141
- vec_size, vec_size);
142
- return 0;
143
- }
144
-
145
if (size == 3) {
146
count = q + 1;
147
} else {
148
--
149
2.20.1
150
151
1
Convert the VCVT fixed-point conversion operations in the
1
Currently the microdrive code uses device_legacy_reset() to reset
2
Neon 2-regs-and-shift group to decodetree.
2
itself, and has its reset method call reset on the IDE bus as the
3
last thing it does. Switch to using device_cold_reset().
4
5
The only concrete microdrive device is the TYPE_DSCM1XXXX; it is not
6
command-line pluggable, so it is used only by the old pxa2xx Arm
7
boards 'akita', 'borzoi', 'spitz', 'terrier' and 'tosa'.
8
9
You might think that this would result in the IDE bus being
10
reset automatically, but it does not, because the IDEBus type
11
does not set the BusClass::reset method. Instead the controller
12
must explicitly call ide_bus_reset(). We therefore leave that
13
call in md_reset().
14
15
Note also that because the PCMCIA card device is a direct subclass of
16
TYPE_DEVICE and we don't model the PCMCIA controller-to-card
17
interface as a qbus, PCMCIA cards are not on any qbus and so they
18
don't get reset when the system is reset. The reset only happens via
19
the dscm1xxxx_attach() and dscm1xxxx_detach() functions during
20
machine creation.
21
22
Because our aim here is merely to try to get rid of calls to the
23
device_legacy_reset() function, we leave these other dubious
24
reset-related issues alone. (They all stem from this code being
25
absolutely ancient.)
3
26
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
28
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Message-id: 20200522145520.6778-9-peter.maydell@linaro.org
29
Message-id: 20221013174042.1602926-1-peter.maydell@linaro.org
7
---
30
---
8
target/arm/neon-dp.decode | 11 +++++
31
hw/ide/microdrive.c | 8 ++++----
9
target/arm/translate-neon.inc.c | 49 +++++++++++++++++++++
32
1 file changed, 4 insertions(+), 4 deletions(-)
10
target/arm/translate.c | 75 +--------------------------------
11
3 files changed, 62 insertions(+), 73 deletions(-)
12
33
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
34
diff --git a/hw/ide/microdrive.c b/hw/ide/microdrive.c
14
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
36
--- a/hw/ide/microdrive.c
16
+++ b/target/arm/neon-dp.decode
37
+++ b/hw/ide/microdrive.c
17
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
38
@@ -XXX,XX +XXX,XX @@ static void md_attr_write(PCMCIACardState *card, uint32_t at, uint8_t value)
18
@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
39
case 0x00:    /* Configuration Option Register */
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
40
s->opt = value & 0xcf;
20
41
if (value & OPT_SRESET) {
21
+# We use size=0 for fp32 and size=1 for fp16 to match the 3-same encodings.
42
- device_legacy_reset(DEVICE(s));
22
+@2reg_vcvt .... ... . . . 1 ..... .... .... . q:1 . . .... \
43
+ device_cold_reset(DEVICE(s));
23
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i5
44
}
24
+
45
md_interrupt_update(s);
25
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
46
break;
26
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
47
@@ -XXX,XX +XXX,XX @@ static void md_common_write(PCMCIACardState *card, uint32_t at, uint16_t value)
27
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
48
case 0xe:    /* Device Control */
28
@@ -XXX,XX +XXX,XX @@ VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
49
s->ctrl = value;
29
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
50
if (value & CTRL_SRST) {
30
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
51
- device_legacy_reset(DEVICE(s));
31
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
52
+ device_cold_reset(DEVICE(s));
32
+
53
}
33
+# VCVT fixed<->float conversions
54
md_interrupt_update(s);
34
+# TODO: FP16 fixed<->float conversions are opc==0b1100 and 0b1101
55
break;
35
+VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
56
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_attach(PCMCIACardState *card)
36
+VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
57
md->attr_base = pcc->cis[0x74] | (pcc->cis[0x76] << 8);
37
+VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
58
md->io_base = 0x0;
38
+VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
59
39
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
60
- device_legacy_reset(DEVICE(md));
40
index XXXXXXX..XXXXXXX 100644
61
+ device_cold_reset(DEVICE(md));
41
--- a/target/arm/translate-neon.inc.c
62
md_interrupt_update(md);
42
+++ b/target/arm/translate-neon.inc.c
63
43
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
64
return 0;
44
};
65
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_detach(PCMCIACardState *card)
45
return do_vshll_2sh(s, a, widenfn[a->size], true);
66
{
67
MicroDriveState *md = MICRODRIVE(card);
68
69
- device_legacy_reset(DEVICE(md));
70
+ device_cold_reset(DEVICE(md));
71
return 0;
46
}
72
}
47
+
48
+static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
49
+ NeonGenTwoSingleOPFn *fn)
50
+{
51
+ /* FP operations in 2-reg-and-shift group */
52
+ TCGv_i32 tmp, shiftv;
53
+ TCGv_ptr fpstatus;
54
+ int pass;
55
+
56
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
57
+ return false;
58
+ }
59
+
60
+ /* UNDEF accesses to D16-D31 if they don't exist. */
61
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
62
+ ((a->vd | a->vm) & 0x10)) {
63
+ return false;
64
+ }
65
+
66
+ if ((a->vm | a->vd) & a->q) {
67
+ return false;
68
+ }
69
+
70
+ if (!vfp_access_check(s)) {
71
+ return true;
72
+ }
73
+
74
+ fpstatus = get_fpstatus_ptr(1);
75
+ shiftv = tcg_const_i32(a->shift);
76
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
77
+ tmp = neon_load_reg(a->vm, pass);
78
+ fn(tmp, tmp, shiftv, fpstatus);
79
+ neon_store_reg(a->vd, pass, tmp);
80
+ }
81
+ tcg_temp_free_ptr(fpstatus);
82
+ tcg_temp_free_i32(shiftv);
83
+ return true;
84
+}
85
+
86
+#define DO_FP_2SH(INSN, FUNC) \
87
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
88
+ { \
89
+ return do_fp_2sh(s, a, FUNC); \
90
+ }
91
+
92
+DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
93
+DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
94
+DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
95
+DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
101
int q;
102
int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
103
int size;
104
- int shift;
105
int pass;
106
int u;
107
int vec_size;
108
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
109
return 1;
110
} else if (insn & (1 << 4)) {
111
if ((insn & 0x00380080) != 0) {
112
- /* Two registers and shift. */
113
- op = (insn >> 8) & 0xf;
114
-
115
- switch (op) {
116
- case 0: /* VSHR */
117
- case 1: /* VSRA */
118
- case 2: /* VRSHR */
119
- case 3: /* VRSRA */
120
- case 4: /* VSRI */
121
- case 5: /* VSHL, VSLI */
122
- case 6: /* VQSHLU */
123
- case 7: /* VQSHL */
124
- case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
125
- case 9: /* VQSHRN, VQRSHRN */
126
- case 10: /* VSHLL, including VMOVL */
127
- return 1; /* handled by decodetree */
128
- default:
129
- break;
130
- }
131
-
132
- if (insn & (1 << 7)) {
133
- /* 64-bit shift. */
134
- if (op > 7) {
135
- return 1;
136
- }
137
- size = 3;
138
- } else {
139
- size = 2;
140
- while ((insn & (1 << (size + 19))) == 0)
141
- size--;
142
- }
143
- shift = (insn >> 16) & ((1 << (3 + size)) - 1);
144
- if (op >= 14) {
145
- /* VCVT fixed-point. */
146
- TCGv_ptr fpst;
147
- TCGv_i32 shiftv;
148
- VFPGenFixPointFn *fn;
149
-
150
- if (!(insn & (1 << 21)) || (q && ((rd | rm) & 1))) {
151
- return 1;
152
- }
153
-
154
- if (!(op & 1)) {
155
- if (u) {
156
- fn = gen_helper_vfp_ultos;
157
- } else {
158
- fn = gen_helper_vfp_sltos;
159
- }
160
- } else {
161
- if (u) {
162
- fn = gen_helper_vfp_touls_round_to_zero;
163
- } else {
164
- fn = gen_helper_vfp_tosls_round_to_zero;
165
- }
166
- }
167
-
168
- /* We have already masked out the must-be-1 top bit of imm6,
169
- * hence this 32-shift where the ARM ARM has 64-imm6.
170
- */
171
- shift = 32 - shift;
172
- fpst = get_fpstatus_ptr(1);
173
- shiftv = tcg_const_i32(shift);
174
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
175
- TCGv_i32 tmpf = neon_load_reg(rm, pass);
176
- fn(tmpf, tmpf, shiftv, fpst);
177
- neon_store_reg(rd, pass, tmpf);
178
- }
179
- tcg_temp_free_ptr(fpst);
180
- tcg_temp_free_i32(shiftv);
181
- } else {
182
- return 1;
183
- }
184
+ /* Two registers and shift: handled by decodetree */
185
+ return 1;
186
} else { /* (insn & 0x00380080) == 0 */
187
int invert, reg_ofs, vec_size;
188
73
189
--
74
--
190
2.20.1
75
2.25.1
191
76
192
77
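For context on the remark above that the ide_bus_reset() call is left in md_reset(): the shape being described is roughly the following sketch (not the actual hw/ide/microdrive.c source; the IDE bus field name is an assumption):

/*
 * Sketch only, not the real QEMU code: the microdrive's own reset method
 * must still reset the IDE bus explicitly, because IDEBus does not set a
 * BusClass::reset method, so a bus-level reset is never propagated to it.
 * The 'bus' field name is assumed for illustration.
 */
static void md_reset(DeviceState *dev)
{
    MicroDriveState *md = MICRODRIVE(dev);

    /* ... reset the card's own register state here ... */

    ide_bus_reset(&md->bus);    /* explicit last step, as described above */
}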