Arm queue; some of the simpler stuff, things others have reviewed (thanks!), etc.

-- PMM

The following changes since commit 5d2f557b47dfbf8f23277a5bdd8473d4607c681a:

  Merge remote-tracking branch 'remotes/kraxel/tags/vga-20200605-pull-request' into staging (2020-06-05 13:53:05 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200605

for you to fetch changes up to 2c35a39eda0b16c2ed85c94cec204bf5efb97812:

  target/arm: Convert Neon one-register-and-immediate insns to decodetree (2020-06-05 17:23:10 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/ssi/imx_spi: Handle tx burst lengths other than 8 correctly
 * hw/input/pxa2xx_keypad: Replace hw_error() by qemu_log_mask()
 * hw/arm/pxa2xx: Replace printf() call by qemu_log_mask()
 * target/arm: Convert crypto insns to gvec
 * hw/adc/stm32f2xx_adc: Correct memory region size and access size
 * tests/acceptance: Add a boot test for the xlnx-versal-virt machine
 * docs/system: Document Aspeed boards
 * raspi: Add model of the USB controller
 * target/arm: Convert 2-reg-and-shift and 1-reg-imm Neon insns to decodetree

----------------------------------------------------------------
Cédric Le Goater (1):
      docs/system: Document Aspeed boards

Eden Mikitas (2):
      hw/ssi/imx_spi: changed while statement to prevent underflow
      hw/ssi/imx_spi: Removed unnecessary cast of rx data received from slave

Paul Zimmerman (7):
      raspi: add BCM2835 SOC MPHI emulation
      dwc-hsotg (dwc2) USB host controller register definitions
      dwc-hsotg (dwc2) USB host controller state definitions
      dwc-hsotg (dwc2) USB host controller emulation
      usb: add short-packet handling to usb-storage driver
      wire in the dwc-hsotg (dwc2) USB host controller emulation
      raspi2 acceptance test: add test for dwc-hsotg (dwc2) USB host

Peter Maydell (9):
      target/arm: Convert Neon VSHL and VSLI 2-reg-shift insn to decodetree
      target/arm: Convert Neon VSHR 2-reg-shift insns to decodetree
      target/arm: Convert Neon VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree
      target/arm: Convert VQSHLU, VQSHL 2-reg-shift insns to decodetree
      target/arm: Convert Neon narrowing shifts with op==8 to decodetree
      target/arm: Convert Neon narrowing shifts with op==9 to decodetree
      target/arm: Convert Neon VSHLL, VMOVL to decodetree
      target/arm: Convert VCVT fixed-point ops to decodetree
      target/arm: Convert Neon one-register-and-immediate insns to decodetree

Philippe Mathieu-Daudé (3):
      hw/input/pxa2xx_keypad: Replace hw_error() by qemu_log_mask()
      hw/arm/pxa2xx: Replace printf() call by qemu_log_mask()
      hw/adc/stm32f2xx_adc: Correct memory region size and access size

Richard Henderson (6):
      target/arm: Convert aes and sm4 to gvec helpers
      target/arm: Convert rax1 to gvec helpers
      target/arm: Convert sha512 and sm3 to gvec helpers
      target/arm: Convert sha1 and sha256 to gvec helpers
      target/arm: Split helper_crypto_sha1_3reg
      target/arm: Split helper_crypto_sm3tt

Thomas Huth (1):
      tests/acceptance: Add a boot test for the xlnx-versal-virt machine

 docs/system/arm/aspeed.rst             |   85 ++
 docs/system/target-arm.rst             |    1 +
 hw/usb/hcd-dwc2.h                      |  190 +++++
 include/hw/arm/bcm2835_peripherals.h   |    5 +-
 include/hw/misc/bcm2835_mphi.h         |   44 +
 include/hw/usb/dwc2-regs.h             |  899 ++++++++++++++++++++
 target/arm/helper.h                    |   45 +-
 target/arm/translate-a64.h             |    3 +
 target/arm/vec_internal.h              |   33 +
 target/arm/neon-dp.decode              |  214 ++-
 hw/adc/stm32f2xx_adc.c                 |    4 +-
 hw/arm/bcm2835_peripherals.c           |   38 +-
 hw/arm/pxa2xx.c                        |   66 +-
 hw/input/pxa2xx_keypad.c               |   10 +-
 hw/misc/bcm2835_mphi.c                 |  191 +++++
 hw/ssi/imx_spi.c                       |    4 +-
 hw/usb/dev-storage.c                   |   15 +-
 hw/usb/hcd-dwc2.c                      | 1417 ++++++++++++++++++++++++++++++++
 target/arm/crypto_helper.c             |  267 ++--
 target/arm/translate-a64.c             |  198 ++---
 target/arm/translate-neon.inc.c        |  796 ++++++++++----
 target/arm/translate.c                 |  539 +-----------
 target/arm/vec_helper.c                |   12 +-
 hw/misc/Makefile.objs                  |    1 +
 hw/usb/Kconfig                         |    5 +
 hw/usb/Makefile.objs                   |    1 +
 hw/usb/trace-events                    |   50 ++
 tests/acceptance/boot_linux_console.py |   35 +-
 28 files changed, 4258 insertions(+), 910 deletions(-)
 create mode 100644 docs/system/arm/aspeed.rst
 create mode 100644 hw/usb/hcd-dwc2.h
 create mode 100644 include/hw/misc/bcm2835_mphi.h
 create mode 100644 include/hw/usb/dwc2-regs.h
 create mode 100644 target/arm/vec_internal.h
 create mode 100644 hw/misc/bcm2835_mphi.c
 create mode 100644 hw/usb/hcd-dwc2.c
diff view generated by jsdifflib
From: Eden Mikitas <e.mikitas@gmail.com>

The while statement in question only checked if tx_burst is not 0.
tx_burst is a signed int, which is assigned the value put by the
guest driver in ECSPI_CONREG. The burst length can be anywhere
between 1 and 4096, and since tx_burst is always decremented by 8
it could possibly underflow, causing an infinite loop.

Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/imx_spi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/imx_spi.c
+++ b/hw/ssi/imx_spi.c
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
 
     rx = 0;
 
-    while (tx_burst) {
+    while (tx_burst > 0) {
         uint8_t byte = tx & 0xff;
 
         DPRINTF("writing 0x%02x\n", (uint32_t)byte);
-- 
2.20.1
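The underflow described in the commit message can be reproduced in isolation. Below is a hypothetical stand-alone sketch (shifts_for_burst() is not a QEMU function) of the loop structure in imx_spi_flush_txfifo(): with the old `while (tx_burst)` test, a burst length that is not a multiple of 8 (e.g. 12) steps 12 -> 4 -> -4 -> ... and never reaches exactly 0, so the loop never terminates.

```c
#include <assert.h>

/* Count how many 8-bit shifts a burst of `burst_len` bits takes,
 * mirroring the fixed loop shape: tx_burst is signed and is always
 * decremented by 8, so the termination test must be `> 0`, not `!= 0`. */
static int shifts_for_burst(int burst_len)
{
    int tx_burst = burst_len;
    int shifts = 0;

    while (tx_burst > 0) {   /* the fix: `> 0` instead of `tx_burst` */
        shifts++;
        tx_burst -= 8;       /* 12 - 8 = 4, 4 - 8 = -4: loop stops */
    }
    return shifts;
}
```

With the original `while (tx_burst)` condition, shifts_for_burst(12) would loop forever once tx_burst goes negative.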
From: Eden Mikitas <e.mikitas@gmail.com>

When inserting the value retrieved (rx) from the spi slave, rx is pushed to
rx_fifo after being cast to uint8_t. rx_fifo is a fifo32, and the rx
register the driver uses is also 32 bit. This zeroes the 24 most
significant bits of rx. This proved problematic with devices that expect to
use the whole 32 bits of the rx register.

Signed-off-by: Eden Mikitas <e.mikitas@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/imx_spi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ssi/imx_spi.c b/hw/ssi/imx_spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/imx_spi.c
+++ b/hw/ssi/imx_spi.c
@@ -XXX,XX +XXX,XX @@ static void imx_spi_flush_txfifo(IMXSPIState *s)
     if (fifo32_is_full(&s->rx_fifo)) {
         s->regs[ECSPI_STATREG] |= ECSPI_STATREG_RO;
     } else {
-        fifo32_push(&s->rx_fifo, (uint8_t)rx);
+        fifo32_push(&s->rx_fifo, rx);
     }
 
     if (s->burst_length <= 0) {
-- 
2.20.1
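The truncation fixed here is ordinary C integer-conversion behaviour: casting a 32-bit value to uint8_t keeps only the low byte. A minimal hypothetical sketch (push_truncated()/push_full() are illustration-only helpers, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* Old behaviour: the (uint8_t) cast zeroes the 24 most significant bits
 * before the value reaches the 32-bit FIFO. */
static uint32_t push_truncated(uint32_t rx)
{
    return (uint8_t)rx;   /* 0xaabbccdd becomes 0xdd */
}

/* New behaviour: the full 32-bit response is preserved. */
static uint32_t push_full(uint32_t rx)
{
    return rx;
}
```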
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

hw_error() calls exit(). This is a bit overkill when we can log
the accesses as unimplemented or guest error.

When fuzzing the devices, we don't want the whole process to
exit. Replace some hw_error() calls by qemu_log_mask()
(missed in commit 5a0001ec7e).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200525114123.21317-2-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/input/pxa2xx_keypad.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/input/pxa2xx_keypad.c b/hw/input/pxa2xx_keypad.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/pxa2xx_keypad.c
+++ b/hw/input/pxa2xx_keypad.c
@@ -XXX,XX +XXX,XX @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/hw.h"
+#include "qemu/log.h"
 #include "hw/irq.h"
 #include "migration/vmstate.h"
 #include "hw/arm/pxa.h"
 
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_keypad_read(void *opaque, hwaddr offset,
         return s->kpkdi;
         break;
     default:
-        hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, offset);
    }
 
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_keypad_write(void *opaque, hwaddr offset,
         break;
 
     default:
-        hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, offset);
    }
 }
-- 
2.20.1
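For readers unfamiliar with qemu_log_mask(): conceptually it is a mask-gated fprintf. The sketch below illustrates the pattern only, it is not QEMU's actual implementation (would_log() is a hypothetical helper, and the real mask bits live in include/qemu/log.h). The point of the patch is that a guest (or fuzzer) poking bad offsets produces messages that are dropped unless the category is enabled, instead of killing the whole process the way hw_error() does.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-in for QEMU's guest-error log category bit. */
#define LOG_GUEST_ERROR (1u << 1)

/* Decide whether a message in category `mask` would be emitted given the
 * categories enabled at runtime (in real QEMU, via `-d guest_errors`). */
static int would_log(unsigned enabled_mask, unsigned mask)
{
    return (enabled_mask & mask) != 0;
}

/* Mask-gated logging: the message is silently dropped when its category
 * is not enabled, and execution always continues. */
static void log_mask(unsigned enabled_mask, unsigned mask, const char *msg)
{
    if (would_log(enabled_mask, mask)) {
        fputs(msg, stderr);
    }
}
```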
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Replace printf() calls by qemu_log_mask(), which is disabled
by default. This avoids flooding the terminal when fuzzing the
device.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200525114123.21317-3-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/pxa2xx.c | 66 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 49 insertions(+), 17 deletions(-)

diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx.c
+++ b/hw/arm/pxa2xx.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/blockdev.h"
 #include "sysemu/qtest.h"
 #include "qemu/cutils.h"
+#include "qemu/log.h"
 
 static struct {
     hwaddr io_base;
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_pm_read(void *opaque, hwaddr addr,
         return s->pm_regs[addr >> 2];
     default:
     fail:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_pm_write(void *opaque, hwaddr addr,
             s->pm_regs[addr >> 2] = value;
             break;
         }
-
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_cm_read(void *opaque, hwaddr addr,
         return s->cm_regs[CCCR >> 2] | (3 << 28);
 
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_cm_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_mm_read(void *opaque, hwaddr addr,
             return s->mm_regs[addr >> 2];
         /* fall through */
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
        break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_mm_write(void *opaque, hwaddr addr,
         }
 
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
        break;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_ssp_read(void *opaque, hwaddr addr,
     case SSACD:
         return s->ssacd;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_ssp_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_rtc_read(void *opaque, hwaddr addr,
         else
             return s->last_swcr;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_rtc_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2c_read(void *opaque, hwaddr addr,
         s->ibmr = 0;
         return s->ibmr;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2c_write(void *opaque, hwaddr addr,
         break;
 
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_i2s_read(void *opaque, hwaddr addr,
         }
         return 0;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_i2s_write(void *opaque, hwaddr addr,
         }
         break;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_fir_read(void *opaque, hwaddr addr,
     case ICFOR:
         return s->rx_len;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad read offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
         break;
     }
     return 0;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_fir_write(void *opaque, hwaddr addr,
     case ICFOR:
         break;
     default:
-        printf("%s: Bad register " REG_FMT "\n", __func__, addr);
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Bad write offset 0x%"HWADDR_PRIx"\n",
+                      __func__, addr);
     }
 }
-- 
2.20.1
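The replacement format strings rely on the standard C idiom of splicing a width-specific conversion macro into a string literal: QEMU's HWADDR_PRIx plays the role that PRIx64 from <inttypes.h> plays below, so one format string works however wide hwaddr is, replacing the device-local REG_FMT macro. A stand-alone sketch (format_bad_offset() is a hypothetical helper, not QEMU code):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Build the same style of message the patch emits, using PRIx64 as the
 * analogue of HWADDR_PRIx. Adjacent string literals concatenate, so
 * "0x%" PRIx64 becomes a single format string like "0x%llx". */
static const char *format_bad_offset(uint64_t addr)
{
    static char buf[64];
    snprintf(buf, sizeof(buf), "Bad read offset 0x%" PRIx64, addr);
    return buf;
}
```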
From: Richard Henderson <richard.henderson@linaro.org>

With this conversion, we will be able to use the same helpers
with sve. In particular, pass 3 vector parameters for the
3-operand operations; for advsimd the destination register
is also an input.

This also fixes a bug in which we failed to clear the high bits
of the SVE register after an AdvSIMD operation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h        |  6 ++--
 target/arm/vec_internal.h  | 33 +++++
 target/arm/crypto_helper.c | 72 +++++++++++++++++++++++++++-----------
 target/arm/translate-a64.c | 55 ++++++++++++++++-----------
 target/arm/translate.c     | 27 +++++++-------
 target/arm/vec_helper.c    | 12 +------
 6 files changed, 138 insertions(+), 67 deletions(-)
 create mode 100644 target/arm/vec_internal.h

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip8, TCG_CALL_NO_RWG, void, ptr, ptr)
 DEF_HELPER_FLAGS_2(neon_qzip16, TCG_CALL_NO_RWG, void, ptr, ptr)
 DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
 
-DEF_HELPER_FLAGS_3(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
 DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
 
-DEF_HELPER_FLAGS_2(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr)
-DEF_HELPER_FLAGS_3(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
 DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/vec_internal.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * ARM AdvSIMD / SVE Vector Helpers
+ *
+ * Copyright (c) 2020 Linaro
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef TARGET_ARM_VEC_INTERNALS_H
+#define TARGET_ARM_VEC_INTERNALS_H
+
+static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
+{
+    uint64_t *d = vd + opr_sz;
+    uintptr_t i;
+
+    for (i = opr_sz; i < max_sz; i += 8) {
+        *d++ = 0;
+    }
+}
+
+#endif /* TARGET_ARM_VEC_INTERNALS_H */
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/crypto_helper.c
+++ b/target/arm/crypto_helper.c
@@ -XXX,XX +XXX,XX @@
 
 #include "cpu.h"
 #include "exec/helper-proto.h"
+#include "tcg/tcg-gvec-desc.h"
 #include "crypto/aes.h"
+#include "vec_internal.h"
 
 union CRYPTO_STATE {
     uint8_t bytes[16];
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
 #define CR_ST_WORD(state, i)   (state.words[i])
 #endif
 
-void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
+static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
+                           uint64_t *rm, bool decrypt)
 {
     static uint8_t const * const sbox[2] = { AES_sbox, AES_isbox };
     static uint8_t const * const shift[2] = { AES_shifts, AES_ishifts };
-    uint64_t *rd = vd;
-    uint64_t *rm = vm;
     union CRYPTO_STATE rk = { .l = { rm[0], rm[1] } };
-    union CRYPTO_STATE st = { .l = { rd[0], rd[1] } };
+    union CRYPTO_STATE st = { .l = { rn[0], rn[1] } };
     int i;
 
-    assert(decrypt < 2);
-
     /* xor state vector with round key */
     rk.l[0] ^= st.l[0];
     rk.l[1] ^= st.l[1];
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aese)(void *vd, void *vm, uint32_t decrypt)
     rd[1] = st.l[1];
 }
 
-void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
+void HELPER(crypto_aese)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc);
+    bool decrypt = simd_data(desc);
+
+    for (i = 0; i < opr_sz; i += 16) {
+        do_crypto_aese(vd + i, vn + i, vm + i, decrypt);
+    }
+    clear_tail(vd, opr_sz, simd_maxsz(desc));
+}
+
+static void do_crypto_aesmc(uint64_t *rd, uint64_t *rm, bool decrypt)
 {
     static uint32_t const mc[][256] = { {
     /* MixColumns lookup table */
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
     0xbe805d9f, 0xb58d5491, 0xa89a4f83, 0xa397468d,
     } };
 
-    uint64_t *rd = vd;
-    uint64_t *rm = vm;
     union CRYPTO_STATE st = { .l = { rm[0], rm[1] } };
     int i;
 
-    assert(decrypt < 2);
-
     for (i = 0; i < 16; i += 4) {
         CR_ST_WORD(st, i >> 2) =
             mc[decrypt][CR_ST_BYTE(st, i)] ^
158
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t decrypt)
159
rd[1] = st.l[1];
160
}
161
162
+void HELPER(crypto_aesmc)(void *vd, void *vm, uint32_t desc)
163
+{
164
+ intptr_t i, opr_sz = simd_oprsz(desc);
165
+ bool decrypt = simd_data(desc);
166
+
167
+ for (i = 0; i < opr_sz; i += 16) {
168
+ do_crypto_aesmc(vd + i, vm + i, decrypt);
169
+ }
170
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
171
+}
172
+
/*
 * SHA-1 logical functions
 */
@@ -XXX,XX +XXX,XX @@ static uint8_t const sm4_sbox[] = {
0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48,
};

-void HELPER(crypto_sm4e)(void *vd, void *vn)
+static void do_crypto_sm4e(uint64_t *rd, uint64_t *rn, uint64_t *rm)
{
- uint64_t *rd = vd;
- uint64_t *rn = vn;
- union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
- union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
+ union CRYPTO_STATE d = { .l = { rn[0], rn[1] } };
+ union CRYPTO_STATE n = { .l = { rm[0], rm[1] } };
uint32_t t, i;

for (i = 0; i < 4; i++) {
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4e)(void *vd, void *vn)
rd[1] = d.l[1];
}

-void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
+void HELPER(crypto_sm4e)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+
+ for (i = 0; i < opr_sz; i += 16) {
+ do_crypto_sm4e(vd + i, vn + i, vm + i);
+ }
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
+}
+
+static void do_crypto_sm4ekey(uint64_t *rd, uint64_t *rn, uint64_t *rm)
{
- uint64_t *rd = vd;
- uint64_t *rn = vn;
- uint64_t *rm = vm;
union CRYPTO_STATE d;
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm)
rd[0] = d.l[0];
rd[1] = d.l[1];
}
+
+void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+
+ for (i = 0; i < opr_sz; i += 16) {
+ do_crypto_sm4ekey(vd + i, vn + i, vm + i);
+ }
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_fn4(DisasContext *s, bool is_q, int rd, int rn, int rm,
is_q ? 16 : 8, vec_full_reg_size(s));
}

+/* Expand a 2-operand operation using an out-of-line helper. */
+static void gen_gvec_op2_ool(DisasContext *s, bool is_q, int rd,
+ int rn, int data, gen_helper_gvec_2 *fn)
+{
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
+ vec_full_reg_offset(s, rn),
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
+}
+
/* Expand a 3-operand operation using an out-of-line helper. */
static void gen_gvec_op3_ool(DisasContext *s, bool is_q, int rd,
int rn, int rm, int data, gen_helper_gvec_3 *fn)
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
int decrypt;
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
- TCGv_i32 tcg_decrypt;
- CryptoThreeOpIntFn *genfn;
+ gen_helper_gvec_2 *genfn2 = NULL;
+ gen_helper_gvec_3 *genfn3 = NULL;

if (!dc_isar_feature(aa64_aes, s) || size != 0) {
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
switch (opcode) {
case 0x4: /* AESE */
decrypt = 0;
- genfn = gen_helper_crypto_aese;
+ genfn3 = gen_helper_crypto_aese;
break;
case 0x6: /* AESMC */
decrypt = 0;
- genfn = gen_helper_crypto_aesmc;
+ genfn2 = gen_helper_crypto_aesmc;
break;
case 0x5: /* AESD */
decrypt = 1;
- genfn = gen_helper_crypto_aese;
+ genfn3 = gen_helper_crypto_aese;
break;
case 0x7: /* AESIMC */
decrypt = 1;
- genfn = gen_helper_crypto_aesmc;
+ genfn2 = gen_helper_crypto_aesmc;
break;
default:
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
if (!fp_access_check(s)) {
return;
}
-
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
- tcg_decrypt = tcg_const_i32(decrypt);
-
- genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_decrypt);
-
- tcg_temp_free_ptr(tcg_rd_ptr);
- tcg_temp_free_ptr(tcg_rn_ptr);
- tcg_temp_free_i32(tcg_decrypt);
+ if (genfn2) {
+ gen_gvec_op2_ool(s, true, rd, rn, decrypt, genfn2);
+ } else {
+ gen_gvec_op3_ool(s, true, rd, rd, rn, decrypt, genfn3);
+ }
}

/* Crypto three-reg SHA
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
int rn = extract32(insn, 5, 5);
int rd = extract32(insn, 0, 5);
bool feature;
- CryptoThreeOpFn *genfn;
+ CryptoThreeOpFn *genfn = NULL;
+ gen_helper_gvec_3 *oolfn = NULL;

if (o == 0) {
switch (opcode) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
break;
case 2: /* SM4EKEY */
feature = dc_isar_feature(aa64_sm4, s);
- genfn = gen_helper_crypto_sm4ekey;
+ oolfn = gen_helper_crypto_sm4ekey;
break;
default:
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
return;
}

+ if (oolfn) {
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
+ return;
+ }
+
if (genfn) {
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;

@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
bool feature;
CryptoTwoOpFn *genfn;
+ gen_helper_gvec_3 *oolfn = NULL;

switch (opcode) {
case 0: /* SHA512SU0 */
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
break;
case 1: /* SM4E */
feature = dc_isar_feature(aa64_sm4, s);
- genfn = gen_helper_crypto_sm4e;
+ oolfn = gen_helper_crypto_sm4e;
break;
default:
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
return;
}

+ if (oolfn) {
+ gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
+ return;
+ }
+
tcg_rd_ptr = vec_full_reg_ptr(s, rd);
tcg_rn_ptr = vec_full_reg_ptr(s, rn);

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
return 1;
}
- ptr1 = vfp_reg_ptr(true, rd);
- ptr2 = vfp_reg_ptr(true, rm);
-
- /* Bit 6 is the lowest opcode bit; it distinguishes between
- * encryption (AESE/AESMC) and decryption (AESD/AESIMC)
- */
- tmp3 = tcg_const_i32(extract32(insn, 6, 1));
-
+ /*
+ * Bit 6 is the lowest opcode bit; it distinguishes
+ * between encryption (AESE/AESMC) and decryption
+ * (AESD/AESIMC).
+ */
if (op == NEON_2RM_AESE) {
- gen_helper_crypto_aese(ptr1, ptr2, tmp3);
+ tcg_gen_gvec_3_ool(vfp_reg_offset(true, rd),
+ vfp_reg_offset(true, rd),
+ vfp_reg_offset(true, rm),
+ 16, 16, extract32(insn, 6, 1),
+ gen_helper_crypto_aese);
} else {
- gen_helper_crypto_aesmc(ptr1, ptr2, tmp3);
+ tcg_gen_gvec_2_ool(vfp_reg_offset(true, rd),
+ vfp_reg_offset(true, rm),
+ 16, 16, extract32(insn, 6, 1),
+ gen_helper_crypto_aesmc);
}
- tcg_temp_free_ptr(ptr1);
- tcg_temp_free_ptr(ptr2);
- tcg_temp_free_i32(tmp3);
break;
case NEON_2RM_SHA1H:
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@
#include "exec/helper-proto.h"
#include "tcg/tcg-gvec-desc.h"
#include "fpu/softfloat.h"
-
+#include "vec_internal.h"

/* Note that vector data is stored in host-endian 64-bit chunks,
so addressing units smaller than that needs a host-endian fixup. */
@@ -XXX,XX +XXX,XX @@
#define H4(x) (x)
#endif

-static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
-{
- uint64_t *d = vd + opr_sz;
- uintptr_t i;
-
- for (i = opr_sz; i < max_sz; i += 8) {
- *d++ = 0;
- }
-}
-
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
static int16_t inl_qrdmlah_s16(int16_t src1, int16_t src2,
int16_t src3, uint32_t *sat)
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

With this conversion, we will be able to use the same helpers
with sve. This also fixes a bug in which we failed to clear
the high bits of the SVE register after an AdvSIMD operation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 2 ++
target/arm/translate-a64.h | 3 ++
target/arm/crypto_helper.c | 11 +++++++
target/arm/translate-a64.c | 59 ++++++++++++++++++++------------------
4 files changed, 47 insertions(+), 28 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(crypto_rax1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)

bool disas_sve(DisasContext *, uint32_t);

+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+
#endif /* TARGET_ARM_TRANSLATE_A64_H */
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/crypto_helper.c
+++ b/target/arm/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm4ekey)(void *vd, void *vn, void* vm, uint32_t desc)
}
clear_tail(vd, opr_sz, simd_maxsz(desc));
}
+
+void HELPER(crypto_rax1)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ intptr_t i, opr_sz = simd_oprsz(desc);
+ uint64_t *d = vd, *n = vn, *m = vm;
+
+ for (i = 0; i < opr_sz / 8; ++i) {
+ d[i] = n[i] ^ rol64(m[i], 1);
+ }
+ clear_tail(vd, opr_sz, simd_maxsz(desc));
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
tcg_temp_free_ptr(tcg_rn_ptr);
}

+static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
+{
+ tcg_gen_rotli_i64(d, m, 1);
+ tcg_gen_xor_i64(d, d, n);
+}
+
+static void gen_rax1_vec(unsigned vece, TCGv_vec d, TCGv_vec n, TCGv_vec m)
+{
+ tcg_gen_rotli_vec(vece, d, m, 1);
+ tcg_gen_xor_vec(vece, d, d, n);
+}
+
+void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+ uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz)
+{
+ static const TCGOpcode vecop_list[] = { INDEX_op_rotli_vec, 0 };
+ static const GVecGen3 op = {
+ .fni8 = gen_rax1_i64,
+ .fniv = gen_rax1_vec,
+ .opt_opc = vecop_list,
+ .fno = gen_helper_crypto_rax1,
+ .vece = MO_64,
+ };
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, opr_sz, max_sz, &op);
+}
+
/* Crypto three-reg SHA512
* 31 21 20 16 15 14 13 12 11 10 9 5 4 0
* +-----------------------+------+---+---+-----+--------+------+------+
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
bool feature;
CryptoThreeOpFn *genfn = NULL;
gen_helper_gvec_3 *oolfn = NULL;
+ GVecGen3Fn *gvecfn = NULL;

if (o == 0) {
switch (opcode) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
break;
case 3: /* RAX1 */
feature = dc_isar_feature(aa64_sha3, s);
- genfn = NULL;
+ gvecfn = gen_gvec_rax1;
break;
default:
g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)

if (oolfn) {
gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
- return;
- }
-
- if (genfn) {
+ } else if (gvecfn) {
+ gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
+ } else {
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;

tcg_rd_ptr = vec_full_reg_ptr(s, rd);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
tcg_temp_free_ptr(tcg_rd_ptr);
tcg_temp_free_ptr(tcg_rn_ptr);
tcg_temp_free_ptr(tcg_rm_ptr);
- } else {
- TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
- int pass;
-
- tcg_op1 = tcg_temp_new_i64();
- tcg_op2 = tcg_temp_new_i64();
- tcg_res[0] = tcg_temp_new_i64();
- tcg_res[1] = tcg_temp_new_i64();
-
- for (pass = 0; pass < 2; pass++) {
- read_vec_element(s, tcg_op1, rn, pass, MO_64);
- read_vec_element(s, tcg_op2, rm, pass, MO_64);
-
- tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1);
- tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
- }
- write_vec_element(s, tcg_res[0], rd, 0, MO_64);
- write_vec_element(s, tcg_res[1], rd, 1, MO_64);
-
- tcg_temp_free_i64(tcg_op1);
- tcg_temp_free_i64(tcg_op2);
- tcg_temp_free_i64(tcg_res[0]);
- tcg_temp_free_i64(tcg_res[1]);
}
}
228
-DO_LD1_ZPZ_D(sds_le, zsu)
229
-DO_LD1_ZPZ_D(sds_le, zss)
230
-DO_LD1_ZPZ_D(sds_le, zd)
231
+DO_LD1_ZPZ_D(sds_be, zsu, MO_32)
232
+DO_LD1_ZPZ_D(sds_be, zss, MO_32)
233
+DO_LD1_ZPZ_D(sds_be, zd, MO_32)
234
235
-DO_LD1_ZPZ_D(sds_be, zsu)
236
-DO_LD1_ZPZ_D(sds_be, zss)
237
-DO_LD1_ZPZ_D(sds_be, zd)
238
+DO_LD1_ZPZ_D(dd_le, zsu, MO_64)
239
+DO_LD1_ZPZ_D(dd_le, zss, MO_64)
240
+DO_LD1_ZPZ_D(dd_le, zd, MO_64)
241
242
-DO_LD1_ZPZ_D(dd_le, zsu)
243
-DO_LD1_ZPZ_D(dd_le, zss)
244
-DO_LD1_ZPZ_D(dd_le, zd)
245
-
246
-DO_LD1_ZPZ_D(dd_be, zsu)
247
-DO_LD1_ZPZ_D(dd_be, zss)
248
-DO_LD1_ZPZ_D(dd_be, zd)
249
+DO_LD1_ZPZ_D(dd_be, zsu, MO_64)
250
+DO_LD1_ZPZ_D(dd_be, zss, MO_64)
251
+DO_LD1_ZPZ_D(dd_be, zd, MO_64)
252
253
#undef DO_LD1_ZPZ_S
254
#undef DO_LD1_ZPZ_D
255
--
160
--
256
2.20.1
161
2.20.1
257
162
258
163
diff view generated by jsdifflib
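The DO_LD1_ZPZ_* macros above stamp out one helper function per (memory type, offset type, memory size) triple via `##` token pasting. A minimal standalone sketch of that pattern, with toy names and a toy body that are stand-ins rather than QEMU's real helpers:

```c
#include <assert.h>

/*
 * Toy version of the DO_LD1_ZPZ_* pattern: each invocation pastes the
 * arguments into a distinct function name and bakes the size in.
 * The names and body here are illustrative stand-ins only.
 */
#define DO_LD1(MEM, OFS, MSZ) \
    static int load_##MEM##_##OFS(void) { return 1 << (MSZ); }

DO_LD1(bdu, zsu, 0)    /* defines load_bdu_zsu(): 1-byte elements */
DO_LD1(hdu_le, zd, 1)  /* defines load_hdu_le_zd(): 2-byte elements */
```

Each invocation is a complete function definition, which is why the real macros can be listed one per line with no trailing semicolon.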
From: Richard Henderson <richard.henderson@linaro.org>

This new interface will allow targets to probe for a page
and then handle watchpoints themselves. This will be most
useful for vector predicated memory operations, where one
page lookup can be used for many operations, and one test
can avoid many watchpoint checks.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/cpu-all.h  |  13 ++-
 include/exec/exec-all.h |  22 +++++
 accel/tcg/cputlb.c      | 177 ++++++++++++++++++++--------------------
 accel/tcg/user-exec.c   |  43 ++++++++--
 4 files changed, 158 insertions(+), 97 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Do not yet convert the helpers to loop over opr_sz, but the
descriptor allows the vector tail to be cleared. Which fixes
an existing bug vs SVE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h        | 15 +++++-----
 target/arm/crypto_helper.c | 37 +++++++++++++++++++++++-----
 target/arm/translate-a64.c | 50 ++++++++++++--------------------------
 3 files changed, 55 insertions(+), 47 deletions(-)
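The probe-once contract described in the first commit message — one lookup returns the page's flags and a host pointer, and the caller then decides how to handle watchpoints and faults itself — can be modeled in a few lines of standalone C. Everything below (FakePage, the flag values) is a toy stand-in for QEMU's real TLB machinery, not its actual API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-ins for QEMU's TLB flag bits (values are illustrative only). */
#define TLB_INVALID_MASK  (1 << 0)
#define TLB_MMIO          (1 << 1)
#define TLB_WATCHPOINT    (1 << 2)

/* A fake "TLB entry" for one guest page. */
typedef struct {
    bool valid;      /* page table walk would succeed */
    int flags;       /* TLB_MMIO / TLB_WATCHPOINT for this page */
    uint8_t *host;   /* host address backing the page, if RAM */
} FakePage;

/*
 * Sketch of the probe_access_flags() contract: return the flags for the
 * page and store the host pointer for RAM; on a nonfault probe miss,
 * return TLB_INVALID_MASK with *phost = NULL instead of faulting.
 */
static int fake_probe_access_flags(FakePage *pg, bool nonfault, void **phost)
{
    if (!pg->valid) {
        if (nonfault) {
            *phost = NULL;
            return TLB_INVALID_MASK;
        }
        assert(!"a real implementation raises the guest fault here");
    }
    if (pg->flags & TLB_MMIO) {
        /* Not RAM: no usable host address. */
        *phost = NULL;
        return TLB_MMIO;
    }
    *phost = pg->host;
    return pg->flags;   /* may include TLB_WATCHPOINT for the caller */
}
```

A vector load can then call this once per page and iterate over many elements without re-checking, which is exactly the saving the commit message points at.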
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ CPUArchState *cpu_copy(CPUArchState *env);
     | CPU_INTERRUPT_TGT_EXT_3 \
     | CPU_INTERRUPT_TGT_EXT_4)

-#if !defined(CONFIG_USER_ONLY)
+#ifdef CONFIG_USER_ONLY
+
+/*
+ * Allow some level of source compatibility with softmmu. We do not
+ * support any of the more exotic features, so only invalid pages may
+ * be signaled by probe_access_flags().
+ */
+#define TLB_INVALID_MASK (1 << (TARGET_PAGE_BITS_MIN - 1))
+#define TLB_MMIO 0
+#define TLB_WATCHPOINT 0
+
+#else

 /*
  * Flags stored in the low bits of the TLB virtual address.
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
 DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)

-DEF_HELPER_FLAGS_3(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
-DEF_HELPER_FLAGS_3(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
-DEF_HELPER_FLAGS_2(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr)
-DEF_HELPER_FLAGS_3(crypto_sha512su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
-DEF_HELPER_FLAGS_3(crypto_sm3partw1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
-DEF_HELPER_FLAGS_3(crypto_sm3partw2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
+                   void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_4(crypto_sm4e, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(crypto_sm4ekey, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/crypto_helper.c
+++ b/target/arm/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
 #define CR_ST_WORD(state, i)   (state.words[i])
 #endif

+/*
+ * The caller has not been converted to full gvec, and so only
+ * modifies the low 16 bytes of the vector register.
+ */
+static void clear_tail_16(void *vd, uint32_t desc)
+{
+    int opr_sz = simd_oprsz(desc);
+    int max_sz = simd_maxsz(desc);
+
+    assert(opr_sz == 16);
+    clear_tail(vd, opr_sz, max_sz);
+}
+
 static void do_crypto_aese(uint64_t *rd, uint64_t *rn,
                            uint64_t *rm, bool decrypt)
 {
@@ -XXX,XX +XXX,XX @@ static uint64_t s1_512(uint64_t x)
     return ror64(x, 19) ^ ror64(x, 61) ^ (x >> 6);
 }

-void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h)(void *vd, void *vn, void *vm)

     rd[0] = d0;
     rd[1] = d1;
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512h2)(void *vd, void *vn, void *vm)

     rd[0] = d0;
     rd[1] = d1;
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha512su0)(void *vd, void *vn)
+void HELPER(crypto_sha512su0)(void *vd, void *vn, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su0)(void *vd, void *vn)

     rd[0] = d0;
     rd[1] = d1;
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha512su1)(void *vd, void *vn, void *vm)

     rd[0] += s1_512(rn[0]) + rm[0];
     rd[1] += s1_512(rn[1]) + rm[1];
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw1)(void *vd, void *vn, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

 void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ static inline void *probe_read(CPUArchState *env, target_ulong addr, int size,
     return probe_access(env, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
 }

+/**
+ * probe_access_flags:
+ * @env: CPUArchState
+ * @addr: guest virtual address to look up
+ * @access_type: read, write or execute permission
+ * @mmu_idx: MMU index to use for lookup
+ * @nonfault: suppress the fault
+ * @phost: return value for host address
+ * @retaddr: return address for unwinding
+ *
+ * Similar to probe_access, loosely returning the TLB_FLAGS_MASK for
+ * the page, and storing the host address for RAM in @phost.
+ *
+ * If @nonfault is set, do not raise an exception but return TLB_INVALID_MASK.
+ * Do not handle watchpoints, but include TLB_WATCHPOINT in the returned flags.
+ * Do handle clean pages, so exclude TLB_NOTDIRY from the returned flags.
+ * For simplicity, all "mmio-like" flags are folded to TLB_MMIO.
+ */
+int probe_access_flags(CPUArchState *env, target_ulong addr,
+                       MMUAccessType access_type, int mmu_idx,
+                       bool nonfault, void **phost, uintptr_t retaddr);
+
 #define CODE_GEN_ALIGN           16 /* must be >= of the size of a icache line */

 /* Estimated block size for TB allocation. */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,

-/*
- * Probe for whether the specified guest access is permitted. If it is not
- * permitted then an exception will be taken in the same way as if this
- * were a real access (and we will not return).
- * If the size is 0 or the page requires I/O access, returns NULL; otherwise,
- * returns the address of the host page similar to tlb_vaddr_to_host().
- */
-void *probe_access(CPUArchState *env, target_ulong addr, int size,
-                   MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
+static int probe_access_internal(CPUArchState *env, target_ulong addr,
+                                 int fault_size, MMUAccessType access_type,
+                                 int mmu_idx, bool nonfault,
+                                 void **phost, uintptr_t retaddr)
 {
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-    target_ulong tlb_addr;
-    size_t elt_ofs;
-    int wp_access;
-
-    g_assert(-(addr | TARGET_PAGE_MASK) >= size);
-
-    switch (access_type) {
-    case MMU_DATA_LOAD:
-        elt_ofs = offsetof(CPUTLBEntry, addr_read);
-        wp_access = BP_MEM_READ;
-        break;
-    case MMU_DATA_STORE:
-        elt_ofs = offsetof(CPUTLBEntry, addr_write);
-        wp_access = BP_MEM_WRITE;
-        break;
-    case MMU_INST_FETCH:
-        elt_ofs = offsetof(CPUTLBEntry, addr_code);
-        wp_access = BP_MEM_READ;
-        break;
-    default:
-        g_assert_not_reached();
-    }
-    tlb_addr = tlb_read_ofs(entry, elt_ofs);
-
-    if (unlikely(!tlb_hit(tlb_addr, addr))) {
-        if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs,
-                            addr & TARGET_PAGE_MASK)) {
-            tlb_fill(env_cpu(env), addr, size, access_type, mmu_idx, retaddr);
-            /* TLB resize via tlb_fill may have moved the entry. */
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
-        }
-        tlb_addr = tlb_read_ofs(entry, elt_ofs);
-    }
-
-    if (!size) {
-        return NULL;
-    }
-
-    if (unlikely(tlb_addr & TLB_FLAGS_MASK)) {
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
-
-        /* Reject I/O access, or other required slow-path. */
-        if (tlb_addr & (TLB_MMIO | TLB_BSWAP | TLB_DISCARD_WRITE)) {
-            return NULL;
-        }
-
-        /* Handle watchpoints. */
-        if (tlb_addr & TLB_WATCHPOINT) {
-            cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, wp_access, retaddr);
-        }
-
-        /* Handle clean RAM pages. */
-        if (tlb_addr & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, size, iotlbentry, retaddr);
-        }
-    }
-
-    return (void *)((uintptr_t)addr + entry->addend);
-}
-
-void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
-                        MMUAccessType access_type, int mmu_idx)
-{
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-    target_ulong tlb_addr, page;
+    target_ulong tlb_addr, page_addr;
     size_t elt_ofs;
+    int flags;

     switch (access_type) {
     case MMU_DATA_LOAD:
@@ -XXX,XX +XXX,XX @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
     default:
         g_assert_not_reached();
     }
-
-    page = addr & TARGET_PAGE_MASK;
     tlb_addr = tlb_read_ofs(entry, elt_ofs);

-    if (!tlb_hit_page(tlb_addr, page)) {
-        uintptr_t index = tlb_index(env, mmu_idx, addr);
-
-        if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page)) {
+    page_addr = addr & TARGET_PAGE_MASK;
+    if (!tlb_hit_page(tlb_addr, page_addr)) {
+        if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page_addr)) {
             CPUState *cs = env_cpu(env);
             CPUClass *cc = CPU_GET_CLASS(cs);

-            if (!cc->tlb_fill(cs, addr, 0, access_type, mmu_idx, true, 0)) {
+            if (!cc->tlb_fill(cs, addr, fault_size, access_type,
+                              mmu_idx, nonfault, retaddr)) {
                 /* Non-faulting page table read failed. */
-                return NULL;
+                *phost = NULL;
+                return TLB_INVALID_MASK;
             }

             /* TLB resize via tlb_fill may have moved the entry. */
@@ -XXX,XX +XXX,XX @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
         }
         tlb_addr = tlb_read_ofs(entry, elt_ofs);
     }
+    flags = tlb_addr & TLB_FLAGS_MASK;

-    if (tlb_addr & ~TARGET_PAGE_MASK) {
-        /* IO access */
+    /* Fold all "mmio-like" bits into TLB_MMIO. This is not RAM. */
+    if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) {
+        *phost = NULL;
+        return TLB_MMIO;
+    }
+
+    /* Everything else is RAM. */
+    *phost = (void *)((uintptr_t)addr + entry->addend);
+    return flags;
+}
+
+int probe_access_flags(CPUArchState *env, target_ulong addr,
+                       MMUAccessType access_type, int mmu_idx,
+                       bool nonfault, void **phost, uintptr_t retaddr)
+{
+    int flags;
+
+    flags = probe_access_internal(env, addr, 0, access_type, mmu_idx,
+                                  nonfault, phost, retaddr);
+
+    /* Handle clean RAM pages. */
+    if (unlikely(flags & TLB_NOTDIRTY)) {
+        uintptr_t index = tlb_index(env, mmu_idx, addr);
+        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+
+        notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr);
+        flags &= ~TLB_NOTDIRTY;
+    }
+
+    return flags;
+}
+
+void *probe_access(CPUArchState *env, target_ulong addr, int size,
+                   MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
+{
+    void *host;
+    int flags;
+
+    g_assert(-(addr | TARGET_PAGE_MASK) >= size);
+
+    flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
+                                  false, &host, retaddr);
+
+    /* Per the interface, size == 0 merely faults the access. */
+    if (size == 0) {
         return NULL;
     }

-    return (void *)((uintptr_t)addr + entry->addend);
+    if (unlikely(flags & (TLB_NOTDIRTY | TLB_WATCHPOINT))) {
+        uintptr_t index = tlb_index(env, mmu_idx, addr);
+        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+
+        /* Handle watchpoints. */
+        if (flags & TLB_WATCHPOINT) {
+            int wp_access = (access_type == MMU_DATA_STORE
+                             ? BP_MEM_WRITE : BP_MEM_READ);
+            cpu_check_watchpoint(env_cpu(env), addr, size,
+                                 iotlbentry->attrs, wp_access, retaddr);
+        }
+
+        /* Handle clean RAM pages. */
+        if (flags & TLB_NOTDIRTY) {
+            notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr);
+        }
+    }
+
+    return host;
 }

+void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
+                        MMUAccessType access_type, int mmu_idx)
+{
+    void *host;
+    int flags;
+
+    flags = probe_access_internal(env, addr, 0, access_type,
+                                  mmu_idx, true, &host, 0);
+
+    /* No combination of flags are expected by the caller. */
+    return flags ? NULL : host;
+}

 #ifdef CONFIG_PLUGIN
 /*
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
     bool feature;
-    CryptoThreeOpFn *genfn = NULL;
     gen_helper_gvec_3 *oolfn = NULL;
     GVecGen3Fn *gvecfn = NULL;

@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
     switch (opcode) {
     case 0: /* SHA512H */
         feature = dc_isar_feature(aa64_sha512, s);
-        genfn = gen_helper_crypto_sha512h;
+        oolfn = gen_helper_crypto_sha512h;
         break;
     case 1: /* SHA512H2 */
         feature = dc_isar_feature(aa64_sha512, s);
-        genfn = gen_helper_crypto_sha512h2;
+        oolfn = gen_helper_crypto_sha512h2;
         break;
     case 2: /* SHA512SU1 */
         feature = dc_isar_feature(aa64_sha512, s);
-        genfn = gen_helper_crypto_sha512su1;
+        oolfn = gen_helper_crypto_sha512su1;
         break;
     case 3: /* RAX1 */
         feature = dc_isar_feature(aa64_sha3, s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
     switch (opcode) {
     case 0: /* SM3PARTW1 */
         feature = dc_isar_feature(aa64_sm3, s);
-        genfn = gen_helper_crypto_sm3partw1;
+        oolfn = gen_helper_crypto_sm3partw1;
         break;
     case 1: /* SM3PARTW2 */
         feature = dc_isar_feature(aa64_sm3, s);
-        genfn = gen_helper_crypto_sm3partw2;
+        oolfn = gen_helper_crypto_sm3partw2;
         break;
     case 2: /* SM4EKEY */
         feature = dc_isar_feature(aa64_sm4, s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)

     if (oolfn) {
         gen_gvec_op3_ool(s, true, rd, rn, rm, 0, oolfn);
-    } else if (gvecfn) {
-        gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
     } else {
-        TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
-
-        tcg_rd_ptr = vec_full_reg_ptr(s, rd);
-        tcg_rn_ptr = vec_full_reg_ptr(s, rn);
-        tcg_rm_ptr = vec_full_reg_ptr(s, rm);
-
-        genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
-
-        tcg_temp_free_ptr(tcg_rd_ptr);
-        tcg_temp_free_ptr(tcg_rn_ptr);
-        tcg_temp_free_ptr(tcg_rm_ptr);
+        gen_gvec_fn3(s, true, rd, rn, rm, gvecfn, MO_64);
     }
 }

@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
     int opcode = extract32(insn, 10, 2);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
     bool feature;
-    CryptoTwoOpFn *genfn;
-    gen_helper_gvec_3 *oolfn = NULL;

     switch (opcode) {
     case 0: /* SHA512SU0 */
         feature = dc_isar_feature(aa64_sha512, s);
-        genfn = gen_helper_crypto_sha512su0;
         break;
     case 1: /* SM4E */
         feature = dc_isar_feature(aa64_sm4, s);
-        oolfn = gen_helper_crypto_sm4e;
         break;
     default:
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
         return;
     }

-    if (oolfn) {
-        gen_gvec_op3_ool(s, true, rd, rd, rn, 0, oolfn);
-        return;
+    switch (opcode) {
+    case 0: /* SHA512SU0 */
+        gen_gvec_op2_ool(s, true, rd, rn, 0, gen_helper_crypto_sha512su0);
+        break;
+    case 1: /* SM4E */
+        gen_gvec_op3_ool(s, true, rd, rd, rn, 0, gen_helper_crypto_sm4e);
+        break;
+    default:
+        g_assert_not_reached();
     }
-
-    tcg_rd_ptr = vec_full_reg_ptr(s, rd);
-    tcg_rn_ptr = vec_full_reg_ptr(s, rn);
-
-    genfn(tcg_rd_ptr, tcg_rn_ptr);
-
-    tcg_temp_free_ptr(tcg_rd_ptr);
-    tcg_temp_free_ptr(tcg_rn_ptr);
 }

 /* Crypto four-register
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -XXX,XX +XXX,XX @@ static inline int handle_cpu_signal(uintptr_t pc, siginfo_t *info,
     g_assert_not_reached();
 }

-void *probe_access(CPUArchState *env, target_ulong addr, int size,
-                   MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
+static int probe_access_internal(CPUArchState *env, target_ulong addr,
+                                 int fault_size, MMUAccessType access_type,
+                                 bool nonfault, uintptr_t ra)
 {
     int flags;

-    g_assert(-(addr | TARGET_PAGE_MASK) >= size);
-
     switch (access_type) {
     case MMU_DATA_STORE:
         flags = PAGE_WRITE;
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
     }

     if (!guest_addr_valid(addr) || page_check_range(addr, 1, flags) < 0) {
-        CPUState *cpu = env_cpu(env);
-        CPUClass *cc = CPU_GET_CLASS(cpu);
-        cc->tlb_fill(cpu, addr, size, access_type, MMU_USER_IDX, false,
-                     retaddr);
-        g_assert_not_reached();
+        if (nonfault) {
+            return TLB_INVALID_MASK;
+        } else {
+            CPUState *cpu = env_cpu(env);
+            CPUClass *cc = CPU_GET_CLASS(cpu);
+            cc->tlb_fill(cpu, addr, fault_size, access_type,
+                         MMU_USER_IDX, false, ra);
+            g_assert_not_reached();
+        }
     }
+    return 0;
+}
+
+int probe_access_flags(CPUArchState *env, target_ulong addr,
+                       MMUAccessType access_type, int mmu_idx,
+                       bool nonfault, void **phost, uintptr_t ra)
+{
+    int flags;
+
+    flags = probe_access_internal(env, addr, 0, access_type, nonfault, ra);
+    *phost = flags ? NULL : g2h(addr);
+    return flags;
+}
+
+void *probe_access(CPUArchState *env, target_ulong addr, int size,
+                   MMUAccessType access_type, int mmu_idx, uintptr_t ra)
+{
+    int flags;
+
+    g_assert(-(addr | TARGET_PAGE_MASK) >= size);
+    flags = probe_access_internal(env, addr, size, access_type, false, ra);
+    g_assert(flags == 0);

     return size ? g2h(addr) : NULL;
 }
-- 
2.20.1
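The reason every converted helper in the patch above gains a trailing i32 argument is that gvec packs the operation size and the full register size into one 32-bit descriptor, which is what lets an out-of-line helper know how much tail to clear. A toy encoding illustrating the idea (QEMU's real simd_desc()/simd_oprsz()/simd_maxsz() use a different bit layout):

```c
#include <stdint.h>

/* Toy descriptor: low byte = opr_sz / 8, next byte = max_sz / 8.
 * This is only an illustration of packing two sizes into one i32,
 * not QEMU's actual simd_desc() encoding. */
static uint32_t toy_desc(uint32_t opr_sz, uint32_t max_sz)
{
    return (opr_sz / 8) | ((max_sz / 8) << 8);
}

static uint32_t toy_oprsz(uint32_t desc) { return (desc & 0xff) * 8; }
static uint32_t toy_maxsz(uint32_t desc) { return ((desc >> 8) & 0xff) * 8; }
```

A helper receiving such a descriptor can write toy_oprsz(desc) bytes and zero the rest up to toy_maxsz(desc), exactly the clear_tail_16() pattern.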
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 223 ++++++++++++++--------------------
 1 file changed, 79 insertions(+), 144 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Do not yet convert the helpers to loop over opr_sz, but the
descriptor allows the vector tail to be cleared. Which fixes
an existing bug vs SVE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h             |  12 ++--
 target/arm/neon-dp.decode       |  12 ++--
 target/arm/crypto_helper.c      |  24 +++++--
 target/arm/translate-a64.c      |  34 ++++-----
 target/arm/translate-neon.inc.c | 124 +++++---------------------
 target/arm/translate.c          |  24 ++-----
 6 files changed, 67 insertions(+), 163 deletions(-)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline bool test_host_page(void *host)
 }

 /*
- * Common helper for all contiguous one-register predicated loads.
+ * Common helper for all contiguous 1,2,3,4-register predicated stores.
  */
 static inline QEMU_ALWAYS_INLINE
-void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
+void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
-               const int esz, const int msz,
+               const int esz, const int msz, const int N,
                sve_ldst1_host_fn *host_fn,
                sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
-    void *vd = &env->vfp.zregs[rd];
     const intptr_t reg_max = simd_oprsz(desc);
     intptr_t reg_off, reg_last, mem_off;
     SVEContLdSt info;
     void *host;
-    int flags;
+    int flags, i;

     /* Find the active elements. */
-    if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, esz, 1 << msz)) {
+    if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, esz, N << msz)) {
         /* The entire predicate was false; no load occurs. */
-        memset(vd, 0, reg_max);
+        for (i = 0; i < N; ++i) {
+            memset(&env->vfp.zregs[(rd + i) & 31], 0, reg_max);
+        }
         return;
     }

@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
     sve_cont_ldst_pages(&info, FAULT_ALL, env, addr, MMU_DATA_LOAD, retaddr);

     /* Handle watchpoints for all active elements. */
-    sve_cont_ldst_watchpoints(&info, env, vg, addr, 1 << esz, 1 << msz,
+    sve_cont_ldst_watchpoints(&info, env, vg, addr, 1 << esz, N << msz,
                               BP_MEM_READ, retaddr);

     /* TODO: MTE check. */
@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
      * which for ARM will raise SyncExternal.  Perform the load
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)

 DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_2(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr)
-DEF_HELPER_FLAGS_2(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr)
+DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)

-DEF_HELPER_FLAGS_3(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
-DEF_HELPER_FLAGS_3(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
-DEF_HELPER_FLAGS_2(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr)
-DEF_HELPER_FLAGS_3(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(crypto_sha256h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha256h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(crypto_sha256su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha256su1, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

 DEF_HELPER_FLAGS_4(crypto_sha512h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(crypto_sha512h2, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VPADD_3s 1111 001 0 0 . .. .... .... 1011 . . . 1 .... @3same_q0

 VQRDMLAH_3s  1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same

+@3same_crypto .... .... .... .... .... .... .... .... \
+              &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
+
 SHA1_3s      1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
              vm=%vm_dp vn=%vn_dp vd=%vd_dp
-SHA256H_3s   1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... \
-             vm=%vm_dp vn=%vn_dp vd=%vd_dp
-SHA256H2_3s  1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... \
-             vm=%vm_dp vn=%vn_dp vd=%vd_dp
-SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... \
-             vm=%vm_dp vn=%vn_dp vd=%vd_dp
+SHA256H_3s   1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
+SHA256H2_3s  1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
+SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto

 VFMA_fp_3s   1111 001 0 0 . 0 . .... .... 1100 ... 1 .... @3same_fp
 VFMS_fp_3s   1111 001 0 0 . 1 . .... .... 1100 ... 1 .... @3same_fp
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/crypto_helper.c
+++ b/target/arm/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
     rd[1] = d.l[1];
 }

-void HELPER(crypto_sha1h)(void *vd, void *vm)
+void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rm = vm;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1h)(void *vd, void *vm)

     rd[0] = m.l[0];
     rd[1] = m.l[1];
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha1su1)(void *vd, void *vm)
+void HELPER(crypto_sha1su1)(void *vd, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rm = vm;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha1su1)(void *vd, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

 /*
@@ -XXX,XX +XXX,XX @@ static uint32_t s1(uint32_t x)
     return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);
 }

-void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h)(void *vd, void *vn, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256h2)(void *vd, void *vn, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha256su0)(void *vd, void *vm)
+void HELPER(crypto_sha256su0)(void *vd, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rm = vm;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su0)(void *vd, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

-void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)
+void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm, uint32_t desc)
 {
     uint64_t *rd = vd;
     uint64_t *rn = vn;
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sha256su1)(void *vd, void *vn, void *vm)

     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(vd, desc);
 }

 /*
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     int rm = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    CryptoThreeOpFn *genfn;
-    TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
+    gen_helper_gvec_3 *genfn;
     bool feature;

     if (size != 0) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
         return;
     }

-    tcg_rd_ptr = vec_full_reg_ptr(s, rd);
-    tcg_rn_ptr = vec_full_reg_ptr(s, rn);
-    tcg_rm_ptr = vec_full_reg_ptr(s, rm);
-
     if (genfn) {
-        genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
+        gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
     } else {
         TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
+        TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
+        TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
+ TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
62
* into scratch memory to preserve register state until the end.
190
+ TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
63
*/
191
64
- ARMVectorReg scratch;
192
gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
65
+ ARMVectorReg scratch[4] = { };
193
tcg_rm_ptr, tcg_opcode);
66
194
- tcg_temp_free_i32(tcg_opcode);
67
- memset(&scratch, 0, reg_max);
195
- }
68
mem_off = info.mem_off_first[0];
196
69
reg_off = info.reg_off_first[0];
197
- tcg_temp_free_ptr(tcg_rd_ptr);
70
reg_last = info.reg_off_last[1];
198
- tcg_temp_free_ptr(tcg_rn_ptr);
71
@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
199
- tcg_temp_free_ptr(tcg_rm_ptr);
72
uint64_t pg = vg[reg_off >> 6];
200
+ tcg_temp_free_i32(tcg_opcode);
73
do {
201
+ tcg_temp_free_ptr(tcg_rd_ptr);
74
if ((pg >> (reg_off & 63)) & 1) {
202
+ tcg_temp_free_ptr(tcg_rn_ptr);
75
- tlb_fn(env, &scratch, reg_off, addr + mem_off, retaddr);
203
+ tcg_temp_free_ptr(tcg_rm_ptr);
76
+ for (i = 0; i < N; ++i) {
204
+ }
77
+ tlb_fn(env, &scratch[i], reg_off,
205
}
78
+ addr + mem_off + (i << msz), retaddr);
206
79
+ }
207
/* Crypto two-reg SHA
80
}
208
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
81
reg_off += 1 << esz;
209
int opcode = extract32(insn, 12, 5);
82
- mem_off += 1 << msz;
210
int rn = extract32(insn, 5, 5);
83
+ mem_off += N << msz;
211
int rd = extract32(insn, 0, 5);
84
} while (reg_off & 63);
212
- CryptoTwoOpFn *genfn;
85
} while (reg_off <= reg_last);
213
+ gen_helper_gvec_2 *genfn;
86
214
bool feature;
87
- memcpy(vd, &scratch, reg_max);
215
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
88
+ for (i = 0; i < N; ++i) {
216
89
+ memcpy(&env->vfp.zregs[(rd + i) & 31], &scratch[i], reg_max);
217
if (size != 0) {
90
+ }
218
unallocated_encoding(s);
219
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
220
if (!fp_access_check(s)) {
91
return;
221
return;
92
#endif
93
}
222
}
94
223
-
95
/* The entire operation is in RAM, on valid pages. */
224
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
96
225
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
97
- memset(vd, 0, reg_max);
226
-
98
+ for (i = 0; i < N; ++i) {
227
- genfn(tcg_rd_ptr, tcg_rn_ptr);
99
+ memset(&env->vfp.zregs[(rd + i) & 31], 0, reg_max);
228
-
229
- tcg_temp_free_ptr(tcg_rd_ptr);
230
- tcg_temp_free_ptr(tcg_rn_ptr);
231
+ gen_gvec_op2_ool(s, true, rd, rn, 0, genfn);
232
}
233
234
static void gen_rax1_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m)
235
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
236
index XXXXXXX..XXXXXXX 100644
237
--- a/target/arm/translate-neon.inc.c
238
+++ b/target/arm/translate-neon.inc.c
239
@@ -XXX,XX +XXX,XX @@ DO_3SAME_CMP(VCGE_S, TCG_COND_GE)
240
DO_3SAME_CMP(VCGE_U, TCG_COND_GEU)
241
DO_3SAME_CMP(VCEQ, TCG_COND_EQ)
242
243
-static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
244
- uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
245
-{
246
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz,
247
- 0, gen_helper_gvec_pmul_b);
248
-}
249
+#define WRAP_OOL_FN(WRAPNAME, FUNC) \
250
+ static void WRAPNAME(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs, \
251
+ uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz) \
252
+ { \
253
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz, 0, FUNC); \
100
+ }
254
+ }
101
+
255
+
102
mem_off = info.mem_off_first[0];
256
+WRAP_OOL_FN(gen_VMUL_p_3s, gen_helper_gvec_pmul_b)
103
reg_off = info.reg_off_first[0];
257
104
reg_last = info.reg_off_last[0];
258
static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
105
@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
259
{
106
uint64_t pg = vg[reg_off >> 6];
260
@@ -XXX,XX +XXX,XX @@ static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
107
do {
261
return true;
108
if ((pg >> (reg_off & 63)) & 1) {
262
}
109
- host_fn(vd, reg_off, host + mem_off);
263
110
+ for (i = 0; i < N; ++i) {
264
-static bool trans_SHA256H_3s(DisasContext *s, arg_SHA256H_3s *a)
111
+ host_fn(&env->vfp.zregs[(rd + i) & 31], reg_off,
265
-{
112
+ host + mem_off + (i << msz));
266
- TCGv_ptr ptr1, ptr2, ptr3;
113
+ }
267
-
114
}
268
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
115
reg_off += 1 << esz;
269
- !dc_isar_feature(aa32_sha2, s)) {
116
- mem_off += 1 << msz;
270
- return false;
117
+ mem_off += N << msz;
271
+#define DO_SHA2(NAME, FUNC) \
118
} while (reg_off <= reg_last && (reg_off & 63));
272
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
273
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
274
+ { \
275
+ if (!dc_isar_feature(aa32_sha2, s)) { \
276
+ return false; \
277
+ } \
278
+ return do_3same(s, a, gen_##NAME##_3s); \
119
}
279
}
120
280
121
@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
281
- /* UNDEF accesses to D16-D31 if they don't exist. */
122
*/
282
- if (!dc_isar_feature(aa32_simd_r32, s) &&
123
mem_off = info.mem_off_split;
283
- ((a->vd | a->vn | a->vm) & 0x10)) {
124
if (unlikely(mem_off >= 0)) {
284
- return false;
125
- tlb_fn(env, vd, info.reg_off_split, addr + mem_off, retaddr);
285
- }
126
+ reg_off = info.reg_off_split;
286
-
127
+ for (i = 0; i < N; ++i) {
287
- if ((a->vn | a->vm | a->vd) & 1) {
128
+ tlb_fn(env, &env->vfp.zregs[(rd + i) & 31], reg_off,
288
- return false;
129
+ addr + mem_off + (i << msz), retaddr);
289
- }
130
+ }
290
-
131
}
291
- if (!vfp_access_check(s)) {
132
292
- return true;
133
mem_off = info.mem_off_first[1];
293
- }
134
@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
294
-
135
uint64_t pg = vg[reg_off >> 6];
295
- ptr1 = vfp_reg_ptr(true, a->vd);
136
do {
296
- ptr2 = vfp_reg_ptr(true, a->vn);
137
if ((pg >> (reg_off & 63)) & 1) {
297
- ptr3 = vfp_reg_ptr(true, a->vm);
138
- host_fn(vd, reg_off, host + mem_off);
298
- gen_helper_crypto_sha256h(ptr1, ptr2, ptr3);
139
+ for (i = 0; i < N; ++i) {
299
- tcg_temp_free_ptr(ptr1);
140
+ host_fn(&env->vfp.zregs[(rd + i) & 31], reg_off,
300
- tcg_temp_free_ptr(ptr2);
141
+ host + mem_off + (i << msz));
301
- tcg_temp_free_ptr(ptr3);
142
+ }
302
-
143
}
303
- return true;
144
reg_off += 1 << esz;
304
-}
145
- mem_off += 1 << msz;
305
-
146
+ mem_off += N << msz;
306
-static bool trans_SHA256H2_3s(DisasContext *s, arg_SHA256H2_3s *a)
147
} while (reg_off & 63);
148
} while (reg_off <= reg_last);
149
}
150
@@ -XXX,XX +XXX,XX @@ void sve_ld1_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
151
void HELPER(sve_##NAME##_r)(CPUARMState *env, void *vg, \
152
target_ulong addr, uint32_t desc) \
153
{ \
154
- sve_ld1_r(env, vg, addr, desc, GETPC(), ESZ, 0, \
155
+ sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, 1, \
156
sve_##NAME##_host, sve_##NAME##_tlb); \
157
}
158
159
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_##NAME##_r)(CPUARMState *env, void *vg, \
160
void HELPER(sve_##NAME##_le_r)(CPUARMState *env, void *vg, \
161
target_ulong addr, uint32_t desc) \
162
{ \
163
- sve_ld1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, \
164
+ sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, \
165
sve_##NAME##_le_host, sve_##NAME##_le_tlb); \
166
} \
167
void HELPER(sve_##NAME##_be_r)(CPUARMState *env, void *vg, \
168
target_ulong addr, uint32_t desc) \
169
{ \
170
- sve_ld1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, \
171
+ sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, \
172
sve_##NAME##_be_host, sve_##NAME##_be_tlb); \
173
}
174
175
-DO_LD1_1(ld1bb, 0)
176
-DO_LD1_1(ld1bhu, 1)
177
-DO_LD1_1(ld1bhs, 1)
178
-DO_LD1_1(ld1bsu, 2)
179
-DO_LD1_1(ld1bss, 2)
180
-DO_LD1_1(ld1bdu, 3)
181
-DO_LD1_1(ld1bds, 3)
182
+DO_LD1_1(ld1bb, MO_8)
183
+DO_LD1_1(ld1bhu, MO_16)
184
+DO_LD1_1(ld1bhs, MO_16)
185
+DO_LD1_1(ld1bsu, MO_32)
186
+DO_LD1_1(ld1bss, MO_32)
187
+DO_LD1_1(ld1bdu, MO_64)
188
+DO_LD1_1(ld1bds, MO_64)
189
190
-DO_LD1_2(ld1hh, 1, 1)
191
-DO_LD1_2(ld1hsu, 2, 1)
192
-DO_LD1_2(ld1hss, 2, 1)
193
-DO_LD1_2(ld1hdu, 3, 1)
194
-DO_LD1_2(ld1hds, 3, 1)
195
+DO_LD1_2(ld1hh, MO_16, MO_16)
196
+DO_LD1_2(ld1hsu, MO_32, MO_16)
197
+DO_LD1_2(ld1hss, MO_32, MO_16)
198
+DO_LD1_2(ld1hdu, MO_64, MO_16)
199
+DO_LD1_2(ld1hds, MO_64, MO_16)
200
201
-DO_LD1_2(ld1ss, 2, 2)
202
-DO_LD1_2(ld1sdu, 3, 2)
203
-DO_LD1_2(ld1sds, 3, 2)
204
+DO_LD1_2(ld1ss, MO_32, MO_32)
205
+DO_LD1_2(ld1sdu, MO_64, MO_32)
206
+DO_LD1_2(ld1sds, MO_64, MO_32)
207
208
-DO_LD1_2(ld1dd, 3, 3)
209
+DO_LD1_2(ld1dd, MO_64, MO_64)
210
211
#undef DO_LD1_1
212
#undef DO_LD1_2
213
214
-/*
215
- * Common helpers for all contiguous 2,3,4-register predicated loads.
216
- */
217
-static void sve_ld2_r(CPUARMState *env, void *vg, target_ulong addr,
218
- uint32_t desc, int size, uintptr_t ra,
219
- sve_ldst1_tlb_fn *tlb_fn)
220
-{
307
-{
221
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
308
- TCGv_ptr ptr1, ptr2, ptr3;
222
- intptr_t i, oprsz = simd_oprsz(desc);
309
-
223
- ARMVectorReg scratch[2] = { };
310
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
224
-
311
- !dc_isar_feature(aa32_sha2, s)) {
225
- for (i = 0; i < oprsz; ) {
312
- return false;
226
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
313
- }
227
- do {
314
-
228
- if (pg & 1) {
315
- /* UNDEF accesses to D16-D31 if they don't exist. */
229
- tlb_fn(env, &scratch[0], i, addr, ra);
316
- if (!dc_isar_feature(aa32_simd_r32, s) &&
230
- tlb_fn(env, &scratch[1], i, addr + size, ra);
317
- ((a->vd | a->vn | a->vm) & 0x10)) {
231
- }
318
- return false;
232
- i += size, pg >>= size;
319
- }
233
- addr += 2 * size;
320
-
234
- } while (i & 15);
321
- if ((a->vn | a->vm | a->vd) & 1) {
235
- }
322
- return false;
236
-
323
- }
237
- /* Wait until all exceptions have been raised to write back. */
324
-
238
- memcpy(&env->vfp.zregs[rd], &scratch[0], oprsz);
325
- if (!vfp_access_check(s)) {
239
- memcpy(&env->vfp.zregs[(rd + 1) & 31], &scratch[1], oprsz);
326
- return true;
327
- }
328
-
329
- ptr1 = vfp_reg_ptr(true, a->vd);
330
- ptr2 = vfp_reg_ptr(true, a->vn);
331
- ptr3 = vfp_reg_ptr(true, a->vm);
332
- gen_helper_crypto_sha256h2(ptr1, ptr2, ptr3);
333
- tcg_temp_free_ptr(ptr1);
334
- tcg_temp_free_ptr(ptr2);
335
- tcg_temp_free_ptr(ptr3);
336
-
337
- return true;
240
-}
338
-}
241
-
339
-
242
-static void sve_ld3_r(CPUARMState *env, void *vg, target_ulong addr,
340
-static bool trans_SHA256SU1_3s(DisasContext *s, arg_SHA256SU1_3s *a)
243
- uint32_t desc, int size, uintptr_t ra,
244
- sve_ldst1_tlb_fn *tlb_fn)
245
-{
341
-{
246
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
342
- TCGv_ptr ptr1, ptr2, ptr3;
247
- intptr_t i, oprsz = simd_oprsz(desc);
343
-
248
- ARMVectorReg scratch[3] = { };
344
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
249
-
345
- !dc_isar_feature(aa32_sha2, s)) {
250
- for (i = 0; i < oprsz; ) {
346
- return false;
251
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
347
- }
252
- do {
348
-
253
- if (pg & 1) {
349
- /* UNDEF accesses to D16-D31 if they don't exist. */
254
- tlb_fn(env, &scratch[0], i, addr, ra);
350
- if (!dc_isar_feature(aa32_simd_r32, s) &&
255
- tlb_fn(env, &scratch[1], i, addr + size, ra);
351
- ((a->vd | a->vn | a->vm) & 0x10)) {
256
- tlb_fn(env, &scratch[2], i, addr + 2 * size, ra);
352
- return false;
257
- }
353
- }
258
- i += size, pg >>= size;
354
-
259
- addr += 3 * size;
355
- if ((a->vn | a->vm | a->vd) & 1) {
260
- } while (i & 15);
356
- return false;
261
- }
357
- }
262
-
358
-
263
- /* Wait until all exceptions have been raised to write back. */
359
- if (!vfp_access_check(s)) {
264
- memcpy(&env->vfp.zregs[rd], &scratch[0], oprsz);
360
- return true;
265
- memcpy(&env->vfp.zregs[(rd + 1) & 31], &scratch[1], oprsz);
361
- }
266
- memcpy(&env->vfp.zregs[(rd + 2) & 31], &scratch[2], oprsz);
362
-
363
- ptr1 = vfp_reg_ptr(true, a->vd);
364
- ptr2 = vfp_reg_ptr(true, a->vn);
365
- ptr3 = vfp_reg_ptr(true, a->vm);
366
- gen_helper_crypto_sha256su1(ptr1, ptr2, ptr3);
367
- tcg_temp_free_ptr(ptr1);
368
- tcg_temp_free_ptr(ptr2);
369
- tcg_temp_free_ptr(ptr3);
370
-
371
- return true;
267
-}
372
-}
268
-
373
+DO_SHA2(SHA256H, gen_helper_crypto_sha256h)
269
-static void sve_ld4_r(CPUARMState *env, void *vg, target_ulong addr,
374
+DO_SHA2(SHA256H2, gen_helper_crypto_sha256h2)
270
- uint32_t desc, int size, uintptr_t ra,
375
+DO_SHA2(SHA256SU1, gen_helper_crypto_sha256su1)
271
- sve_ldst1_tlb_fn *tlb_fn)
376
272
-{
377
#define DO_3SAME_64(INSN, FUNC) \
273
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
378
static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
274
- intptr_t i, oprsz = simd_oprsz(desc);
379
diff --git a/target/arm/translate.c b/target/arm/translate.c
275
- ARMVectorReg scratch[4] = { };
380
index XXXXXXX..XXXXXXX 100644
276
-
381
--- a/target/arm/translate.c
277
- for (i = 0; i < oprsz; ) {
382
+++ b/target/arm/translate.c
278
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
383
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
279
- do {
384
int vec_size;
280
- if (pg & 1) {
385
uint32_t imm;
281
- tlb_fn(env, &scratch[0], i, addr, ra);
386
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
282
- tlb_fn(env, &scratch[1], i, addr + size, ra);
387
- TCGv_ptr ptr1, ptr2;
283
- tlb_fn(env, &scratch[2], i, addr + 2 * size, ra);
388
+ TCGv_ptr ptr1;
284
- tlb_fn(env, &scratch[3], i, addr + 3 * size, ra);
389
TCGv_i64 tmp64;
285
- }
390
286
- i += size, pg >>= size;
391
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
287
- addr += 4 * size;
392
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
288
- } while (i & 15);
393
if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
289
- }
394
return 1;
290
-
395
}
291
- /* Wait until all exceptions have been raised to write back. */
396
- ptr1 = vfp_reg_ptr(true, rd);
292
- memcpy(&env->vfp.zregs[rd], &scratch[0], oprsz);
397
- ptr2 = vfp_reg_ptr(true, rm);
293
- memcpy(&env->vfp.zregs[(rd + 1) & 31], &scratch[1], oprsz);
398
-
294
- memcpy(&env->vfp.zregs[(rd + 2) & 31], &scratch[2], oprsz);
399
- gen_helper_crypto_sha1h(ptr1, ptr2);
295
- memcpy(&env->vfp.zregs[(rd + 3) & 31], &scratch[3], oprsz);
400
-
296
-}
401
- tcg_temp_free_ptr(ptr1);
297
-
402
- tcg_temp_free_ptr(ptr2);
298
#define DO_LDN_1(N) \
403
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
299
-void QEMU_FLATTEN HELPER(sve_ld##N##bb_r) \
404
+ gen_helper_crypto_sha1h);
300
- (CPUARMState *env, void *vg, target_ulong addr, uint32_t desc) \
405
break;
301
-{ \
406
case NEON_2RM_SHA1SU1:
302
- sve_ld##N##_r(env, vg, addr, desc, 1, GETPC(), sve_ld1bb_tlb); \
407
if ((rm | rd) & 1) {
303
+void HELPER(sve_ld##N##bb_r)(CPUARMState *env, void *vg, \
408
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
304
+ target_ulong addr, uint32_t desc) \
409
} else if (!dc_isar_feature(aa32_sha1, s)) {
305
+{ \
410
return 1;
306
+ sve_ldN_r(env, vg, addr, desc, GETPC(), MO_8, MO_8, N, \
411
}
307
+ sve_ld1bb_host, sve_ld1bb_tlb); \
412
- ptr1 = vfp_reg_ptr(true, rd);
308
}
413
- ptr2 = vfp_reg_ptr(true, rm);
309
414
- if (q) {
310
-#define DO_LDN_2(N, SUFF, SIZE) \
415
- gen_helper_crypto_sha256su0(ptr1, ptr2);
311
-void QEMU_FLATTEN HELPER(sve_ld##N##SUFF##_le_r) \
416
- } else {
312
- (CPUARMState *env, void *vg, target_ulong addr, uint32_t desc) \
417
- gen_helper_crypto_sha1su1(ptr1, ptr2);
313
+#define DO_LDN_2(N, SUFF, ESZ) \
418
- }
314
+void HELPER(sve_ld##N##SUFF##_le_r)(CPUARMState *env, void *vg, \
419
- tcg_temp_free_ptr(ptr1);
315
+ target_ulong addr, uint32_t desc) \
420
- tcg_temp_free_ptr(ptr2);
316
{ \
421
+ tcg_gen_gvec_2_ool(rd_ofs, rm_ofs, 16, 16, 0,
317
- sve_ld##N##_r(env, vg, addr, desc, SIZE, GETPC(), \
422
+ q ? gen_helper_crypto_sha256su0
318
- sve_ld1##SUFF##_le_tlb); \
423
+ : gen_helper_crypto_sha1su1);
319
+ sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, \
424
break;
320
+ sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb); \
425
-
321
} \
426
case NEON_2RM_VMVN:
322
-void QEMU_FLATTEN HELPER(sve_ld##N##SUFF##_be_r) \
427
tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
323
- (CPUARMState *env, void *vg, target_ulong addr, uint32_t desc) \
428
break;
324
+void HELPER(sve_ld##N##SUFF##_be_r)(CPUARMState *env, void *vg, \
325
+ target_ulong addr, uint32_t desc) \
326
{ \
327
- sve_ld##N##_r(env, vg, addr, desc, SIZE, GETPC(), \
328
- sve_ld1##SUFF##_be_tlb); \
329
+ sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, \
330
+ sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb); \
331
}
332
333
DO_LDN_1(2)
334
DO_LDN_1(3)
335
DO_LDN_1(4)
336
337
-DO_LDN_2(2, hh, 2)
338
-DO_LDN_2(3, hh, 2)
339
-DO_LDN_2(4, hh, 2)
340
+DO_LDN_2(2, hh, MO_16)
341
+DO_LDN_2(3, hh, MO_16)
342
+DO_LDN_2(4, hh, MO_16)
343
344
-DO_LDN_2(2, ss, 4)
345
-DO_LDN_2(3, ss, 4)
346
-DO_LDN_2(4, ss, 4)
347
+DO_LDN_2(2, ss, MO_32)
348
+DO_LDN_2(3, ss, MO_32)
349
+DO_LDN_2(4, ss, MO_32)
350
351
-DO_LDN_2(2, dd, 8)
352
-DO_LDN_2(3, dd, 8)
353
-DO_LDN_2(4, dd, 8)
354
+DO_LDN_2(2, dd, MO_64)
355
+DO_LDN_2(3, dd, MO_64)
356
+DO_LDN_2(4, dd, MO_64)
357
358
#undef DO_LDN_1
359
#undef DO_LDN_2
360
--
429
--
361
2.20.1
430
2.20.1
362
431
363
432
From: Richard Henderson <richard.henderson@linaro.org>

Rather than passing an opcode to a helper, fully decode the
operation at translate time.  Use clear_tail_16 to zap the
balance of the SVE register with the AdvSIMD write.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200514212831.31248-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h             |  5 +-
 target/arm/neon-dp.decode       |  6 +-
 target/arm/crypto_helper.c      | 99 +++++++++++++++++++------------
 target/arm/translate-a64.c      | 29 ++++------
 target/arm/translate-neon.inc.c | 46 ++++-----------
 5 files changed, 93 insertions(+), 92 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(neon_qzip32, TCG_CALL_NO_RWG, void, ptr, ptr)
 DEF_HELPER_FLAGS_4(crypto_aese, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(crypto_aesmc, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
-DEF_HELPER_FLAGS_4(crypto_sha1_3reg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha1su0, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha1c, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha1p, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(crypto_sha1m, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(crypto_sha1h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(crypto_sha1su1, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VQRDMLAH_3s 1111 001 1 0 . .. .... .... 1011 ... 1 .... @3same
 @3same_crypto .... .... .... .... .... .... .... .... \
               &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0 q=1
 
-SHA1_3s      1111 001 0 0 . optype:2 .... .... 1100 . 1 . 0 .... \
-             vm=%vm_dp vn=%vn_dp vd=%vd_dp
+SHA1C_3s     1111 001 0 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
+SHA1P_3s     1111 001 0 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
+SHA1M_3s     1111 001 0 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
+SHA1SU0_3s   1111 001 0 0 . 11 .... .... 1100 . 1 . 0 .... @3same_crypto
 SHA256H_3s   1111 001 1 0 . 00 .... .... 1100 . 1 . 0 .... @3same_crypto
 SHA256H2_3s  1111 001 1 0 . 01 .... .... 1100 . 1 . 0 .... @3same_crypto
 SHA256SU1_3s 1111 001 1 0 . 10 .... .... 1100 . 1 . 0 .... @3same_crypto
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/crypto_helper.c
+++ b/target/arm/crypto_helper.c
@@ -XXX,XX +XXX,XX @@ union CRYPTO_STATE {
 };
 
 #ifdef HOST_WORDS_BIGENDIAN
-#define CR_ST_BYTE(state, i)   (state.bytes[(15 - (i)) ^ 8])
-#define CR_ST_WORD(state, i)   (state.words[(3 - (i)) ^ 2])
+#define CR_ST_BYTE(state, i)   ((state).bytes[(15 - (i)) ^ 8])
+#define CR_ST_WORD(state, i)   ((state).words[(3 - (i)) ^ 2])
 #else
-#define CR_ST_BYTE(state, i)   (state.bytes[i])
-#define CR_ST_WORD(state, i)   (state.words[i])
+#define CR_ST_BYTE(state, i)   ((state).bytes[i])
+#define CR_ST_WORD(state, i)   ((state).words[i])
 #endif
 
 /*
@@ -XXX,XX +XXX,XX @@ static uint32_t maj(uint32_t x, uint32_t y, uint32_t z)
     return (x & y) | ((x | y) & z);
 }
 
-void HELPER(crypto_sha1_3reg)(void *vd, void *vn, void *vm, uint32_t op)
+void HELPER(crypto_sha1su0)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint64_t d0, d1;
+
+    d0 = d[1] ^ d[0] ^ m[0];
+    d1 = n[0] ^ d[1] ^ m[1];
+    d[0] = d0;
+    d[1] = d1;
+
+    clear_tail_16(vd, desc);
+}
+
+static inline void crypto_sha1_3reg(uint64_t *rd, uint64_t *rn,
+                                    uint64_t *rm, uint32_t desc,
+                                    uint32_t (*fn)(union CRYPTO_STATE *d))
 {
-    uint64_t *rd = vd;
-    uint64_t *rn = vn;
-    uint64_t *rm = vm;
     union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
     union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
     union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
+    int i;
 
-    if (op == 3) { /* sha1su0 */
-        d.l[0] ^= d.l[1] ^ m.l[0];
-        d.l[1] ^= n.l[0] ^ m.l[1];
-    } else {
-        int i;
+    for (i = 0; i < 4; i++) {
+        uint32_t t = fn(&d);
 
-        for (i = 0; i < 4; i++) {
-            uint32_t t;
+        t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
+             + CR_ST_WORD(m, i);
 
-            switch (op) {
-            case 0: /* sha1c */
-                t = cho(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
-                break;
-            case 1: /* sha1p */
-                t = par(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
-                break;
-            case 2: /* sha1m */
-                t = maj(CR_ST_WORD(d, 1), CR_ST_WORD(d, 2), CR_ST_WORD(d, 3));
-                break;
-            default:
-                g_assert_not_reached();
-            }
-            t += rol32(CR_ST_WORD(d, 0), 5) + CR_ST_WORD(n, 0)
-                 + CR_ST_WORD(m, i);
-
-            CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
-            CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
-            CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
-            CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
-            CR_ST_WORD(d, 0) = t;
-        }
+        CR_ST_WORD(n, 0) = CR_ST_WORD(d, 3);
+        CR_ST_WORD(d, 3) = CR_ST_WORD(d, 2);
+        CR_ST_WORD(d, 2) = ror32(CR_ST_WORD(d, 1), 2);
+        CR_ST_WORD(d, 1) = CR_ST_WORD(d, 0);
+        CR_ST_WORD(d, 0) = t;
     }
 
     rd[0] = d.l[0];
     rd[1] = d.l[1];
+
+    clear_tail_16(rd, desc);
+}
+
+static uint32_t do_sha1c(union CRYPTO_STATE *d)
+{
+    return cho(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
+}
+
+void HELPER(crypto_sha1c)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+    crypto_sha1_3reg(vd, vn, vm, desc, do_sha1c);
+}
+
+static uint32_t do_sha1p(union CRYPTO_STATE *d)
+{
+    return par(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
+}
+
+void HELPER(crypto_sha1p)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+    crypto_sha1_3reg(vd, vn, vm, desc, do_sha1p);
+}
+
+static uint32_t do_sha1m(union CRYPTO_STATE *d)
+{
+    return maj(CR_ST_WORD(*d, 1), CR_ST_WORD(*d, 2), CR_ST_WORD(*d, 3));
+}
+
+void HELPER(crypto_sha1m)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+    crypto_sha1_3reg(vd, vn, vm, desc, do_sha1m);
 }
 
 void HELPER(crypto_sha1h)(void *vd, void *vm, uint32_t desc)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
181
index XXXXXXX..XXXXXXX 100644
202
- sve_ld1_host_fn *host_fn)
182
--- a/target/arm/translate-a64.c
203
+ sve_ldst1_host_fn *host_fn)
183
+++ b/target/arm/translate-a64.c
204
{
184
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
205
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
185
206
void *vd = &env->vfp.zregs[rd];
186
switch (opcode) {
207
@@ -XXX,XX +XXX,XX @@ static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
187
case 0: /* SHA1C */
208
host = tlb_vaddr_to_host(env, addr, MMU_DATA_LOAD, mmu_idx);
188
+ genfn = gen_helper_crypto_sha1c;
209
if (likely(page_check_range(addr, mem_max, PAGE_READ) == 0)) {
189
+ feature = dc_isar_feature(aa64_sha1, s);
210
/* The entire operation is valid and will not fault. */
190
+ break;
211
- host_fn(vd, vg, host, 0, mem_max);
191
case 1: /* SHA1P */
212
+ reg_off = 0;
192
+ genfn = gen_helper_crypto_sha1p;
213
+ do {
193
+ feature = dc_isar_feature(aa64_sha1, s);
214
+ mem_off = reg_off >> diffsz;
194
+ break;
215
+ host_fn(vd, reg_off, host + mem_off);
195
case 2: /* SHA1M */
216
+ reg_off += 1 << esz;
196
+ genfn = gen_helper_crypto_sha1m;
217
+ reg_off = find_next_active(vg, reg_off, reg_max, esz);
197
+ feature = dc_isar_feature(aa64_sha1, s);
218
+ } while (reg_off < reg_max);
198
+ break;
199
case 3: /* SHA1SU0 */
200
- genfn = NULL;
201
+ genfn = gen_helper_crypto_sha1su0;
202
feature = dc_isar_feature(aa64_sha1, s);
203
break;
204
case 4: /* SHA256H */
205
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
206
if (!fp_access_check(s)) {
219
return;
207
return;
220
}
208
}
221
#endif
209
-
222
@@ -XXX,XX +XXX,XX @@ static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
210
- if (genfn) {
223
if (page_check_range(addr + mem_off, 1 << msz, PAGE_READ) == 0) {
211
- gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
224
/* At least one load is valid; take the rest of the page. */
212
- } else {
225
split = max_for_page(addr, mem_off + (1 << msz) - 1, mem_max);
213
- TCGv_i32 tcg_opcode = tcg_const_i32(opcode);
226
- mem_off = host_fn(vd, vg, host, mem_off, split);
214
- TCGv_ptr tcg_rd_ptr = vec_full_reg_ptr(s, rd);
227
- reg_off = mem_off << diffsz;
215
- TCGv_ptr tcg_rn_ptr = vec_full_reg_ptr(s, rn);
228
+ do {
216
- TCGv_ptr tcg_rm_ptr = vec_full_reg_ptr(s, rm);
229
+ host_fn(vd, reg_off, host + mem_off);
217
-
230
+ reg_off += 1 << esz;
218
- gen_helper_crypto_sha1_3reg(tcg_rd_ptr, tcg_rn_ptr,
231
+ reg_off = find_next_active(vg, reg_off, reg_max, esz);
219
- tcg_rm_ptr, tcg_opcode);
232
+ mem_off = reg_off >> diffsz;
220
-
233
+ } while (split - mem_off >= (1 << msz));
221
- tcg_temp_free_i32(tcg_opcode);
222
- tcg_temp_free_ptr(tcg_rd_ptr);
223
- tcg_temp_free_ptr(tcg_rn_ptr);
224
- tcg_temp_free_ptr(tcg_rm_ptr);
225
- }
226
+ gen_gvec_op3_ool(s, true, rd, rn, rm, 0, genfn);
227
}
228
229
/* Crypto two-reg SHA
230
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
231
index XXXXXXX..XXXXXXX 100644
232
--- a/target/arm/translate-neon.inc.c
233
+++ b/target/arm/translate-neon.inc.c
234
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)
235
DO_VQRDMLAH(VQRDMLAH, gen_gvec_sqrdmlah_qc)
236
DO_VQRDMLAH(VQRDMLSH, gen_gvec_sqrdmlsh_qc)
237
238
-static bool trans_SHA1_3s(DisasContext *s, arg_SHA1_3s *a)
239
-{
240
- TCGv_ptr ptr1, ptr2, ptr3;
241
- TCGv_i32 tmp;
242
-
243
- if (!arm_dc_feature(s, ARM_FEATURE_NEON) ||
244
- !dc_isar_feature(aa32_sha1, s)) {
245
- return false;
246
+#define DO_SHA1(NAME, FUNC) \
247
+ WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
248
+ static bool trans_##NAME##_3s(DisasContext *s, arg_3same *a) \
249
+ { \
250
+ if (!dc_isar_feature(aa32_sha1, s)) { \
251
+ return false; \
252
+ } \
253
+ return do_3same(s, a, gen_##NAME##_3s); \
234
}
254
}
235
#else
255
236
/*
256
- /* UNDEF accesses to D16-D31 if they don't exist. */
237
@@ -XXX,XX +XXX,XX @@ static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
257
- if (!dc_isar_feature(aa32_simd_r32, s) &&
238
host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
258
- ((a->vd | a->vn | a->vm) & 0x10)) {
239
split = max_for_page(addr, mem_off, mem_max);
259
- return false;
240
if (host && split >= (1 << msz)) {
260
- }
241
- mem_off = host_fn(vd, vg, host - mem_off, mem_off, split);
261
-
242
- reg_off = mem_off << diffsz;
262
- if ((a->vn | a->vm | a->vd) & 1) {
243
+ host -= mem_off;
263
- return false;
244
+ do {
264
- }
245
+ host_fn(vd, reg_off, host + mem_off);
265
-
246
+ reg_off += 1 << esz;
266
- if (!vfp_access_check(s)) {
247
+ reg_off = find_next_active(vg, reg_off, reg_max, esz);
267
- return true;
248
+ mem_off = reg_off >> diffsz;
268
- }
249
+ } while (split - mem_off >= (1 << msz));
269
-
250
}
270
- ptr1 = vfp_reg_ptr(true, a->vd);
251
#endif
271
- ptr2 = vfp_reg_ptr(true, a->vn);
252
272
- ptr3 = vfp_reg_ptr(true, a->vm);
273
- tmp = tcg_const_i32(a->optype);
274
- gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp);
275
- tcg_temp_free_i32(tmp);
276
- tcg_temp_free_ptr(ptr1);
277
- tcg_temp_free_ptr(ptr2);
278
- tcg_temp_free_ptr(ptr3);
279
-
280
- return true;
281
-}
282
+DO_SHA1(SHA1C, gen_helper_crypto_sha1c)
283
+DO_SHA1(SHA1P, gen_helper_crypto_sha1p)
284
+DO_SHA1(SHA1M, gen_helper_crypto_sha1m)
285
+DO_SHA1(SHA1SU0, gen_helper_crypto_sha1su0)
286
287
#define DO_SHA2(NAME, FUNC) \
288
WRAP_OOL_FN(gen_##NAME##_3s, FUNC) \
253
--
289
--
254
2.20.1
290
2.20.1
255
291
256
292
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We have validated that addr+size does not cross a page boundary.
3
Rather than passing an opcode to a helper, fully decode the
4
Therefore we need to validate exactly one page. We can achieve
4
operation at translate time. Use clear_tail_16 to zap the
5
that by passing any value 1 <= x <= size to page_check_range.
5
balance of the SVE register with the AdvSIMD write.
6
7
Passing 1 will simplify the next patch.
8
6
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20200508154359.7494-5-richard.henderson@linaro.org
8
Message-id: 20200514212831.31248-7-richard.henderson@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
accel/tcg/user-exec.c | 2 +-
12
target/arm/helper.h | 5 ++++-
15
1 file changed, 1 insertion(+), 1 deletion(-)
13
target/arm/crypto_helper.c | 24 ++++++++++++++++++------
14
target/arm/translate-a64.c | 21 +++++----------------
15
3 files changed, 27 insertions(+), 23 deletions(-)
16
16
17
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/user-exec.c
19
--- a/target/arm/helper.h
20
+++ b/accel/tcg/user-exec.c
20
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crypto_sha512su0, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
22
g_assert_not_reached();
22
DEF_HELPER_FLAGS_4(crypto_sha512su1, TCG_CALL_NO_RWG,
23
void, ptr, ptr, ptr, i32)
24
25
-DEF_HELPER_FLAGS_5(crypto_sm3tt, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32, i32)
26
+DEF_HELPER_FLAGS_4(crypto_sm3tt1a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(crypto_sm3tt1b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(crypto_sm3tt2a, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(crypto_sm3tt2b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
DEF_HELPER_FLAGS_4(crypto_sm3partw1, TCG_CALL_NO_RWG,
31
void, ptr, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(crypto_sm3partw2, TCG_CALL_NO_RWG,
33
diff --git a/target/arm/crypto_helper.c b/target/arm/crypto_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/crypto_helper.c
36
+++ b/target/arm/crypto_helper.c
37
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3partw2)(void *vd, void *vn, void *vm, uint32_t desc)
38
clear_tail_16(vd, desc);
39
}
40
41
-void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
42
- uint32_t opcode)
43
+static inline void QEMU_ALWAYS_INLINE
44
+crypto_sm3tt(uint64_t *rd, uint64_t *rn, uint64_t *rm,
45
+ uint32_t desc, uint32_t opcode)
46
{
47
- uint64_t *rd = vd;
48
- uint64_t *rn = vn;
49
- uint64_t *rm = vm;
50
union CRYPTO_STATE d = { .l = { rd[0], rd[1] } };
51
union CRYPTO_STATE n = { .l = { rn[0], rn[1] } };
52
union CRYPTO_STATE m = { .l = { rm[0], rm[1] } };
53
+ uint32_t imm2 = simd_data(desc);
54
uint32_t t;
55
56
assert(imm2 < 4);
57
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
58
/* SM3TT2B */
59
t = cho(CR_ST_WORD(d, 3), CR_ST_WORD(d, 2), CR_ST_WORD(d, 1));
60
} else {
61
- g_assert_not_reached();
62
+ qemu_build_not_reached();
23
}
63
}
24
64
25
- if (!guest_addr_valid(addr) || page_check_range(addr, size, flags) < 0) {
65
t += CR_ST_WORD(d, 0) + CR_ST_WORD(m, imm2);
26
+ if (!guest_addr_valid(addr) || page_check_range(addr, 1, flags) < 0) {
66
@@ -XXX,XX +XXX,XX @@ void HELPER(crypto_sm3tt)(void *vd, void *vn, void *vm, uint32_t imm2,
27
CPUState *cpu = env_cpu(env);
67
28
CPUClass *cc = CPU_GET_CLASS(cpu);
68
rd[0] = d.l[0];
29
cc->tlb_fill(cpu, addr, size, access_type, MMU_USER_IDX, false,
69
rd[1] = d.l[1];
70
+
71
+ clear_tail_16(rd, desc);
72
}
73
74
+#define DO_SM3TT(NAME, OPCODE) \
75
+ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
76
+ { crypto_sm3tt(vd, vn, vm, desc, OPCODE); }
77
+
78
+DO_SM3TT(crypto_sm3tt1a, 0)
79
+DO_SM3TT(crypto_sm3tt1b, 1)
80
+DO_SM3TT(crypto_sm3tt2a, 2)
81
+DO_SM3TT(crypto_sm3tt2b, 3)
82
+
83
+#undef DO_SM3TT
84
+
85
static uint8_t const sm4_sbox[] = {
86
0xd6, 0x90, 0xe9, 0xfe, 0xcc, 0xe1, 0x3d, 0xb7,
87
0x16, 0xb6, 0x14, 0xc2, 0x28, 0xfb, 0x2c, 0x05,
88
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate-a64.c
91
+++ b/target/arm/translate-a64.c
92
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
93
*/
94
static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
95
{
96
+ static gen_helper_gvec_3 * const fns[4] = {
97
+ gen_helper_crypto_sm3tt1a, gen_helper_crypto_sm3tt1b,
98
+ gen_helper_crypto_sm3tt2a, gen_helper_crypto_sm3tt2b,
99
+ };
100
int opcode = extract32(insn, 10, 2);
101
int imm2 = extract32(insn, 12, 2);
102
int rm = extract32(insn, 16, 5);
103
int rn = extract32(insn, 5, 5);
104
int rd = extract32(insn, 0, 5);
105
- TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
106
- TCGv_i32 tcg_imm2, tcg_opcode;
107
108
if (!dc_isar_feature(aa64_sm3, s)) {
109
unallocated_encoding(s);
110
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
111
return;
112
}
113
114
- tcg_rd_ptr = vec_full_reg_ptr(s, rd);
115
- tcg_rn_ptr = vec_full_reg_ptr(s, rn);
116
- tcg_rm_ptr = vec_full_reg_ptr(s, rm);
117
- tcg_imm2 = tcg_const_i32(imm2);
118
- tcg_opcode = tcg_const_i32(opcode);
119
-
120
- gen_helper_crypto_sm3tt(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr, tcg_imm2,
121
- tcg_opcode);
122
-
123
- tcg_temp_free_ptr(tcg_rd_ptr);
124
- tcg_temp_free_ptr(tcg_rn_ptr);
125
- tcg_temp_free_ptr(tcg_rm_ptr);
126
- tcg_temp_free_i32(tcg_imm2);
127
- tcg_temp_free_i32(tcg_opcode);
128
+ gen_gvec_op3_ool(s, true, rd, rn, rm, imm2, fns[opcode]);
129
}
130
131
/* C3.6 Data processing - SIMD, inc Crypto
30
--
132
--
31
2.20.1
133
2.20.1
32
134
33
135
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
On the NRF51 series, all peripherals have a fixed I/O size
3
The ADC region size is 256B, split as:
4
of 4KiB. Define NRF51_PERIPHERAL_SIZE and use it.
4
- [0x00 - 0x4f] defined
5
- [0x50 - 0xff] reserved
5
6
7
All registers are 32-bit (thus when the datasheet mentions the
8
last defined register is 0x4c, it means its address range is
9
0x4c .. 0x4f).
10
11
This model implementation is also 32-bit. Set MemoryRegionOps
12
'impl' fields.
13
14
See:
15
'RM0033 Reference manual Rev 8', Table 10.13.18 "ADC register map".
16
17
Reported-by: Seth Kintigh <skintigh@gmail.com>
18
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
19
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20200603055915.17678-1-f4bug@amsat.org
8
Message-id: 20200504072822.18799-2-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
22
---
11
include/hw/arm/nrf51.h | 3 +--
23
hw/adc/stm32f2xx_adc.c | 4 +++-
12
include/hw/i2c/microbit_i2c.h | 2 +-
24
1 file changed, 3 insertions(+), 1 deletion(-)
13
hw/arm/nrf51_soc.c | 4 ++--
14
hw/i2c/microbit_i2c.c | 2 +-
15
hw/timer/nrf51_timer.c | 2 +-
16
5 files changed, 6 insertions(+), 7 deletions(-)
17
25
18
diff --git a/include/hw/arm/nrf51.h b/include/hw/arm/nrf51.h
26
diff --git a/hw/adc/stm32f2xx_adc.c b/hw/adc/stm32f2xx_adc.c
19
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/nrf51.h
28
--- a/hw/adc/stm32f2xx_adc.c
21
+++ b/include/hw/arm/nrf51.h
29
+++ b/hw/adc/stm32f2xx_adc.c
22
@@ -XXX,XX +XXX,XX @@
30
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps stm32f2xx_adc_ops = {
23
#define NRF51_IOMEM_BASE 0x40000000
31
.read = stm32f2xx_adc_read,
24
#define NRF51_IOMEM_SIZE 0x20000000
32
.write = stm32f2xx_adc_write,
25
33
.endianness = DEVICE_NATIVE_ENDIAN,
26
+#define NRF51_PERIPHERAL_SIZE 0x00001000
34
+ .impl.min_access_size = 4,
27
#define NRF51_UART_BASE 0x40002000
35
+ .impl.max_access_size = 4,
28
#define NRF51_TWI_BASE 0x40003000
36
};
29
-#define NRF51_TWI_SIZE 0x00001000
37
30
#define NRF51_TIMER_BASE 0x40008000
38
static const VMStateDescription vmstate_stm32f2xx_adc = {
31
-#define NRF51_TIMER_SIZE 0x00001000
39
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_adc_init(Object *obj)
32
#define NRF51_RNG_BASE 0x4000D000
40
sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
33
#define NRF51_NVMC_BASE 0x4001E000
41
34
#define NRF51_GPIO_BASE 0x50000000
42
memory_region_init_io(&s->mmio, obj, &stm32f2xx_adc_ops, s,
35
diff --git a/include/hw/i2c/microbit_i2c.h b/include/hw/i2c/microbit_i2c.h
43
- TYPE_STM32F2XX_ADC, 0xFF);
36
index XXXXXXX..XXXXXXX 100644
44
+ TYPE_STM32F2XX_ADC, 0x100);
37
--- a/include/hw/i2c/microbit_i2c.h
45
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
38
+++ b/include/hw/i2c/microbit_i2c.h
39
@@ -XXX,XX +XXX,XX @@
40
#define MICROBIT_I2C(obj) \
41
OBJECT_CHECK(MicrobitI2CState, (obj), TYPE_MICROBIT_I2C)
42
43
-#define MICROBIT_I2C_NREGS (NRF51_TWI_SIZE / sizeof(uint32_t))
44
+#define MICROBIT_I2C_NREGS (NRF51_PERIPHERAL_SIZE / sizeof(uint32_t))
45
46
typedef struct {
47
SysBusDevice parent_obj;
48
diff --git a/hw/arm/nrf51_soc.c b/hw/arm/nrf51_soc.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/hw/arm/nrf51_soc.c
51
+++ b/hw/arm/nrf51_soc.c
52
@@ -XXX,XX +XXX,XX @@ static void nrf51_soc_realize(DeviceState *dev_soc, Error **errp)
53
return;
54
}
55
56
- base_addr = NRF51_TIMER_BASE + i * NRF51_TIMER_SIZE;
57
+ base_addr = NRF51_TIMER_BASE + i * NRF51_PERIPHERAL_SIZE;
58
59
sysbus_mmio_map(SYS_BUS_DEVICE(&s->timer[i]), 0, base_addr);
60
sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer[i]), 0,
61
@@ -XXX,XX +XXX,XX @@ static void nrf51_soc_realize(DeviceState *dev_soc, Error **errp)
62
63
/* STUB Peripherals */
64
memory_region_init_io(&s->clock, OBJECT(dev_soc), &clock_ops, NULL,
65
- "nrf51_soc.clock", 0x1000);
66
+ "nrf51_soc.clock", NRF51_PERIPHERAL_SIZE);
67
memory_region_add_subregion_overlap(&s->container,
68
NRF51_IOMEM_BASE, &s->clock, -1);
69
70
diff --git a/hw/i2c/microbit_i2c.c b/hw/i2c/microbit_i2c.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/hw/i2c/microbit_i2c.c
73
+++ b/hw/i2c/microbit_i2c.c
74
@@ -XXX,XX +XXX,XX @@ static void microbit_i2c_realize(DeviceState *dev, Error **errp)
75
MicrobitI2CState *s = MICROBIT_I2C(dev);
76
77
memory_region_init_io(&s->iomem, OBJECT(s), &microbit_i2c_ops, s,
78
- "microbit.twi", NRF51_TWI_SIZE);
79
+ "microbit.twi", NRF51_PERIPHERAL_SIZE);
80
sysbus_init_mmio(sbd, &s->iomem);
81
}
46
}
82
83
diff --git a/hw/timer/nrf51_timer.c b/hw/timer/nrf51_timer.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/hw/timer/nrf51_timer.c
86
+++ b/hw/timer/nrf51_timer.c
87
@@ -XXX,XX +XXX,XX @@ static void nrf51_timer_init(Object *obj)
88
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
89
90
memory_region_init_io(&s->iomem, obj, &rng_ops, s,
91
- TYPE_NRF51_TIMER, NRF51_TIMER_SIZE);
92
+ TYPE_NRF51_TIMER, NRF51_PERIPHERAL_SIZE);
93
sysbus_init_mmio(sbd, &s->iomem);
94
sysbus_init_irq(sbd, &s->irq);
95
47
96
--
48
--
97
2.20.1
49
2.20.1
98
50
99
51
1
From: Thomas Huth <thuth@redhat.com>
1
From: Thomas Huth <thuth@redhat.com>
2
2
3
Move the common set_feature() and unset_feature() functions
3
As described by Edgar here:
4
from cpu.c and cpu64.c to cpu.h.
5
4
6
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
5
https://www.mail-archive.com/qemu-devel@nongnu.org/msg605124.html
6
7
we can use the Ubuntu kernel for testing the xlnx-versal-virt machine.
8
So let's add a boot test for this now.
9
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Signed-off-by: Thomas Huth <thuth@redhat.com>
12
Signed-off-by: Thomas Huth <thuth@redhat.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Message-id: 20200525141237.15243-1-thuth@redhat.com
11
Message-id: 20200504172448.9402-3-philmd@redhat.com
12
Message-ID: <20190921150420.30743-2-thuth@redhat.com>
13
[PMD: Split Thomas's patch in two: set_feature, cpu_register]
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
---
17
target/arm/cpu.h | 10 ++++++++++
18
tests/acceptance/boot_linux_console.py | 26 ++++++++++++++++++++++++++
18
target/arm/cpu.c | 10 ----------
19
1 file changed, 26 insertions(+)
19
target/arm/cpu64.c | 10 ----------
20
3 files changed, 10 insertions(+), 20 deletions(-)
21
20
22
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
23
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpu.h
23
--- a/tests/acceptance/boot_linux_console.py
25
+++ b/target/arm/cpu.h
24
+++ b/tests/acceptance/boot_linux_console.py
26
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
25
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):
27
void *gicv3state;
26
console_pattern = 'Kernel command line: %s' % kernel_command_line
28
} CPUARMState;
27
self.wait_for_console_pattern(console_pattern)
29
28
30
+static inline void set_feature(CPUARMState *env, int feature)
29
+ def test_aarch64_xlnx_versal_virt(self):
31
+{
30
+ """
32
+ env->features |= 1ULL << feature;
31
+ :avocado: tags=arch:aarch64
33
+}
32
+ :avocado: tags=machine:xlnx-versal-virt
33
+ :avocado: tags=device:pl011
34
+ :avocado: tags=device:arm_gicv3
35
+ """
36
+ kernel_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
37
+ 'bionic-updates/main/installer-arm64/current/images/'
38
+ 'netboot/ubuntu-installer/arm64/linux')
39
+ kernel_hash = '5bfc54cf7ed8157d93f6e5b0241e727b6dc22c50'
40
+ kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
34
+
41
+
35
+static inline void unset_feature(CPUARMState *env, int feature)
42
+ initrd_url = ('http://ports.ubuntu.com/ubuntu-ports/dists/'
36
+{
43
+ 'bionic-updates/main/installer-arm64/current/images/'
37
+ env->features &= ~(1ULL << feature);
44
+ 'netboot/ubuntu-installer/arm64/initrd.gz')
38
+}
45
+ initrd_hash = 'd385d3e88d53e2004c5d43cbe668b458a094f772'
46
+ initrd_path = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
39
+
47
+
40
/**
48
+ self.vm.set_console()
41
* ARMELChangeHookFn:
49
+ self.vm.add_args('-m', '2G',
42
* type of a function which can be registered via arm_register_el_change_hook()
50
+ '-kernel', kernel_path,
43
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
51
+ '-initrd', initrd_path)
44
index XXXXXXX..XXXXXXX 100644
52
+ self.vm.launch()
45
--- a/target/arm/cpu.c
53
+ self.wait_for_console_pattern('Checked W+X mappings: passed')
46
+++ b/target/arm/cpu.c
54
+
47
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_virtio_is_big_endian(CPUState *cs)
55
def test_arm_virt(self):
48
56
"""
49
#endif
57
:avocado: tags=arch:arm
50
51
-static inline void set_feature(CPUARMState *env, int feature)
52
-{
53
- env->features |= 1ULL << feature;
54
-}
55
-
56
-static inline void unset_feature(CPUARMState *env, int feature)
57
-{
58
- env->features &= ~(1ULL << feature);
59
-}
60
-
61
static int
62
print_insn_thumb1(bfd_vma pc, disassemble_info *info)
63
{
64
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/cpu64.c
67
+++ b/target/arm/cpu64.c
68
@@ -XXX,XX +XXX,XX @@
69
#include "kvm_arm.h"
70
#include "qapi/visitor.h"
71
72
-static inline void set_feature(CPUARMState *env, int feature)
73
-{
74
- env->features |= 1ULL << feature;
75
-}
76
-
77
-static inline void unset_feature(CPUARMState *env, int feature)
78
-{
79
- env->features &= ~(1ULL << feature);
80
-}
81
-
82
#ifndef CONFIG_USER_ONLY
83
static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
84
{
85
--
58
--
86
2.20.1
59
2.20.1
87
60
88
61
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Cédric Le Goater <clg@kaod.org>
2
2
3
DUP (indexed) can duplicate 128-bit elements, so using esz
3
Signed-off-by: Cédric Le Goater <clg@kaod.org>
4
unconditionally can assert in tcg_gen_gvec_dup_imm.
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
5
Message-id: 20200602135050.593692-1-clg@kaod.org
6
Fixes: 8711e71f9cbb
7
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
10
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
11
Message-id: 20200507172352.15418-5-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
7
---
14
target/arm/translate-sve.c | 6 +++++-
8
docs/system/arm/aspeed.rst | 85 ++++++++++++++++++++++++++++++++++++++
15
1 file changed, 5 insertions(+), 1 deletion(-)
9
docs/system/target-arm.rst | 1 +
10
2 files changed, 86 insertions(+)
11
create mode 100644 docs/system/arm/aspeed.rst
16
12
17
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
14
new file mode 100644
15
index XXXXXXX..XXXXXXX
16
--- /dev/null
17
+++ b/docs/system/arm/aspeed.rst
18
@@ -XXX,XX +XXX,XX @@
19
+Aspeed family boards (``*-bmc``, ``ast2500-evb``, ``ast2600-evb``)
20
+==================================================================
21
+
22
+The QEMU Aspeed machines model BMCs of various OpenPOWER systems and
23
+Aspeed evaluation boards. They are based on different releases of the
24
+Aspeed SoC : the AST2400 integrating an ARM926EJ-S CPU (400MHz), the
25
+AST2500 with an ARM1176JZS CPU (800MHz) and more recently the AST2600
26
+with dual cores ARM Cortex A7 CPUs (1.2GHz).
27
+
28
+The SoC comes with RAM, Gigabit ethernet, USB, SD/MMC, USB, SPI, I2C,
29
+etc.
30
+
31
+AST2400 SoC based machines :
32
+
33
+- ``palmetto-bmc`` OpenPOWER Palmetto POWER8 BMC
34
+
35
+AST2500 SoC based machines :
36
+
37
+- ``ast2500-evb`` Aspeed AST2500 Evaluation board
38
+- ``romulus-bmc`` OpenPOWER Romulus POWER9 BMC
39
+- ``witherspoon-bmc`` OpenPOWER Witherspoon POWER9 BMC
40
+- ``sonorapass-bmc`` OCP SonoraPass BMC
41
+- ``swift-bmc`` OpenPOWER Swift BMC POWER9
42
+
43
+AST2600 SoC based machines :
44
+
45
+- ``ast2600-evb`` Aspeed AST2600 Evaluation board (Cortex A7)
46
+- ``tacoma-bmc`` OpenPOWER Witherspoon POWER9 AST2600 BMC
47
+
48
+Supported devices
49
+-----------------
50
+
51
+ * SMP (for the AST2600 Cortex-A7)
52
+ * Interrupt Controller (VIC)
53
+ * Timer Controller
54
+ * RTC Controller
55
+ * I2C Controller
56
+ * System Control Unit (SCU)
57
+ * SRAM mapping
58
+ * X-DMA Controller (basic interface)
59
+ * Static Memory Controller (SMC or FMC) - Only SPI Flash support
60
+ * SPI Memory Controller
61
+ * USB 2.0 Controller
62
+ * SD/MMC storage controllers
63
+ * SDRAM controller (dummy interface for basic settings and training)
64
+ * Watchdog Controller
65
+ * GPIO Controller (Master only)
66
+ * UART
67
+ * Ethernet controllers
68
+
69
+
70
+Missing devices
71
+---------------
72
+
73
+ * Coprocessor support
74
+ * ADC (out of tree implementation)
75
+ * PWM and Fan Controller
76
+ * LPC Bus Controller
77
+ * Slave GPIO Controller
78
+ * Super I/O Controller
79
+ * Hash/Crypto Engine
80
+ * PCI-Express 1 Controller
81
+ * Graphic Display Controller
82
+ * PECI Controller
83
+ * MCTP Controller
84
+ * Mailbox Controller
85
+ * Virtual UART
86
+ * eSPI Controller
87
+ * I3C Controller
88
+
89
+Boot options
90
+------------
91
+
92
+The Aspeed machines can be started using the -kernel option to load a
93
+Linux kernel or from a firmware image which can be downloaded from the
94
+OpenPOWER Jenkins:
95
+
96
+ https://openpower.xyz/
97
+
98
+The image should be attached as an MTD drive. Run:
99
+
100
+.. code-block:: bash
101
+
102
+ $ qemu-system-arm -M romulus-bmc -nic user \
103
+   -drive file=flash-romulus,format=raw,if=mtd -nographic
104
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
18
index XXXXXXX..XXXXXXX 100644
105
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate-sve.c
106
--- a/docs/system/target-arm.rst
20
+++ b/target/arm/translate-sve.c
107
+++ b/docs/system/target-arm.rst
21
@@ -XXX,XX +XXX,XX @@ static bool trans_DUP_x(DisasContext *s, arg_DUP_x *a)
108
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
22
unsigned nofs = vec_reg_offset(s, a->rn, index, esz);
109
arm/realview
23
tcg_gen_gvec_dup_mem(esz, dofs, nofs, vsz, vsz);
110
arm/versatile
24
} else {
111
arm/vexpress
25
- tcg_gen_gvec_dup_imm(esz, dofs, vsz, vsz, 0);
112
+ arm/aspeed
26
+ /*
113
arm/musicpal
27
+ * While dup_mem handles 128-bit elements, dup_imm does not.
114
arm/nseries
28
+ * Thankfully element size doesn't matter for splatting zero.
115
arm/orangepi
29
+ */
30
+ tcg_gen_gvec_dup_imm(MO_64, dofs, vsz, vsz, 0);
31
}
32
}
33
return true;
34
--
116
--
35
2.20.1
117
2.20.1
36
118
37
119
1
From: Joel Stanley <joel@jms.id.au>
1
From: Paul Zimmerman <pauldzim@gmail.com>
2
2
3
The AST2600 handles this differently with the extra 'hardlock' state, so
3
Add BCM2835 SOC MPHI (Message-based Parallel Host Interface)
4
move the testing to the soc specific class' write callback.
4
emulation. It is very basic, only providing the FIQ interrupt
5
5
needed to allow the dwc-otg USB host controller driver in the
6
Signed-off-by: Joel Stanley <joel@jms.id.au>
6
Raspbian kernel to function.
7
Reviewed-by: Cédric Le Goater <clg@kaod.org>
7
8
Message-id: 20200505090136.341426-1-joel@jms.id.au
8
Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Acked-by: Philippe Mathieu-Daude <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200520235349.21215-2-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/bcm2835_peripherals.h |   2 +
 include/hw/misc/bcm2835_mphi.h       |  44 ++++++
 hw/arm/bcm2835_peripherals.c         |  17 +++
 hw/misc/bcm2835_mphi.c               | 191 +++++++++++++++++++++++++++
 hw/misc/Makefile.objs                |   1 +
 5 files changed, 255 insertions(+)
 create mode 100644 include/hw/misc/bcm2835_mphi.h
 create mode 100644 hw/misc/bcm2835_mphi.c

diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/bcm2835_peripherals.h
+++ b/include/hw/arm/bcm2835_peripherals.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/misc/bcm2835_property.h"
 #include "hw/misc/bcm2835_rng.h"
 #include "hw/misc/bcm2835_mbox.h"
+#include "hw/misc/bcm2835_mphi.h"
 #include "hw/misc/bcm2835_thermal.h"
 #include "hw/sd/sdhci.h"
 #include "hw/sd/bcm2835_sdhost.h"
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
     qemu_irq irq, fiq;

     BCM2835SystemTimerState systmr;
+    BCM2835MphiState mphi;
     UnimplementedDeviceState armtmr;
     UnimplementedDeviceState cprman;
     UnimplementedDeviceState a2w;
diff --git a/include/hw/misc/bcm2835_mphi.h b/include/hw/misc/bcm2835_mphi.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/bcm2835_mphi.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * BCM2835 SOC MPHI state definitions
+ *
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef HW_MISC_BCM2835_MPHI_H
+#define HW_MISC_BCM2835_MPHI_H
+
+#include "hw/irq.h"
+#include "hw/sysbus.h"
+
+#define MPHI_MMIO_SIZE 0x1000
+
+typedef struct BCM2835MphiState BCM2835MphiState;
+
+struct BCM2835MphiState {
+    SysBusDevice parent_obj;
+    qemu_irq irq;
+    MemoryRegion iomem;
+
+    uint32_t outdda;
+    uint32_t outddb;
+    uint32_t ctrl;
+    uint32_t intstat;
+    uint32_t swirq;
+};
+
+#define TYPE_BCM2835_MPHI "bcm2835-mphi"
+
+#define BCM2835_MPHI(obj) \
+    OBJECT_CHECK(BCM2835MphiState, (obj), TYPE_BCM2835_MPHI)
+
+#endif
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/bcm2835_peripherals.c
+++ b/hw/arm/bcm2835_peripherals.c
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
                                    OBJECT(&s->sdhci.sdbus));
     object_property_add_const_link(OBJECT(&s->gpio), "sdbus-sdhost",
                                    OBJECT(&s->sdhost.sdbus));
+
+    /* Mphi */
+    sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
+                          TYPE_BCM2835_MPHI);
 }

 static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)

     object_property_add_alias(OBJECT(s), "sd-bus", OBJECT(&s->gpio), "sd-bus");

+    /* Mphi */
+    object_property_set_bool(OBJECT(&s->mphi), true, "realized", &err);
+    if (err) {
+        error_propagate(errp, err);
+        return;
+    }
+
+    memory_region_add_subregion(&s->peri_mr, MPHI_OFFSET,
+                sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->mphi), 0));
+    sysbus_connect_irq(SYS_BUS_DEVICE(&s->mphi), 0,
+        qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
+                               INTERRUPT_HOSTPORT));
+
     create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
     create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
     create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
diff --git a/hw/misc/bcm2835_mphi.c b/hw/misc/bcm2835_mphi.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/misc/bcm2835_mphi.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * BCM2835 SOC MPHI emulation
+ *
+ * Very basic emulation, only providing the FIQ interrupt needed to
+ * allow the dwc-otg USB host controller driver in the Raspbian kernel
+ * to function.
+ *
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "hw/misc/bcm2835_mphi.h"
+#include "migration/vmstate.h"
+#include "qemu/error-report.h"
+#include "qemu/log.h"
+#include "qemu/main-loop.h"
+
+static inline void mphi_raise_irq(BCM2835MphiState *s)
+{
+    qemu_set_irq(s->irq, 1);
+}
+
+static inline void mphi_lower_irq(BCM2835MphiState *s)
+{
+    qemu_set_irq(s->irq, 0);
+}
+
+static uint64_t mphi_reg_read(void *ptr, hwaddr addr, unsigned size)
+{
+    BCM2835MphiState *s = ptr;
+    uint32_t val = 0;
+
+    switch (addr) {
+    case 0x28:      /* outdda */
+        val = s->outdda;
+        break;
+    case 0x2c:      /* outddb */
+        val = s->outddb;
+        break;
+    case 0x4c:      /* ctrl */
+        val = s->ctrl;
+        val |= 1 << 17;
+        break;
+    case 0x50:      /* intstat */
+        val = s->intstat;
+        break;
+    case 0x1f0:     /* swirq_set */
+        val = s->swirq;
+        break;
+    case 0x1f4:     /* swirq_clr */
+        val = s->swirq;
+        break;
+    default:
+        qemu_log_mask(LOG_UNIMP, "read from unknown register");
+        break;
+    }
+
+    return val;
+}
+
+static void mphi_reg_write(void *ptr, hwaddr addr, uint64_t val, unsigned size)
+{
+    BCM2835MphiState *s = ptr;
+    int do_irq = 0;
+
+    switch (addr) {
+    case 0x28:      /* outdda */
+        s->outdda = val;
+        break;
+    case 0x2c:      /* outddb */
+        s->outddb = val;
+        if (val & (1 << 29)) {
+            do_irq = 1;
+        }
+        break;
+    case 0x4c:      /* ctrl */
+        s->ctrl = val;
+        if (val & (1 << 16)) {
+            do_irq = -1;
+        }
+        break;
+    case 0x50:      /* intstat */
+        s->intstat = val;
+        if (val & ((1 << 16) | (1 << 29))) {
+            do_irq = -1;
+        }
+        break;
+    case 0x1f0:     /* swirq_set */
+        s->swirq |= val;
+        do_irq = 1;
+        break;
+    case 0x1f4:     /* swirq_clr */
+        s->swirq &= ~val;
+        do_irq = -1;
+        break;
+    default:
+        qemu_log_mask(LOG_UNIMP, "write to unknown register");
+        return;
+    }
+
+    if (do_irq > 0) {
+        mphi_raise_irq(s);
+    } else if (do_irq < 0) {
+        mphi_lower_irq(s);
+    }
+}
+
+static const MemoryRegionOps mphi_mmio_ops = {
+    .read = mphi_reg_read,
+    .write = mphi_reg_write,
+    .impl.min_access_size = 4,
+    .impl.max_access_size = 4,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+};
+
+static void mphi_reset(DeviceState *dev)
+{
+    BCM2835MphiState *s = BCM2835_MPHI(dev);
+
+    s->outdda = 0;
+    s->outddb = 0;
+    s->ctrl = 0;
+    s->intstat = 0;
+    s->swirq = 0;
+}
+
+static void mphi_realize(DeviceState *dev, Error **errp)
+{
+    SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+    BCM2835MphiState *s = BCM2835_MPHI(dev);
+
+    sysbus_init_irq(sbd, &s->irq);
+}
+
+static void mphi_init(Object *obj)
+{
+    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+    BCM2835MphiState *s = BCM2835_MPHI(obj);
+
+    memory_region_init_io(&s->iomem, obj, &mphi_mmio_ops, s, "mphi", MPHI_MMIO_SIZE);
+    sysbus_init_mmio(sbd, &s->iomem);
+}
+
+const VMStateDescription vmstate_mphi_state = {
+    .name = "mphi",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(outdda, BCM2835MphiState),
+        VMSTATE_UINT32(outddb, BCM2835MphiState),
+        VMSTATE_UINT32(ctrl, BCM2835MphiState),
+        VMSTATE_UINT32(intstat, BCM2835MphiState),
+        VMSTATE_UINT32(swirq, BCM2835MphiState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void mphi_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->realize = mphi_realize;
+    dc->reset = mphi_reset;
+    dc->vmsd = &vmstate_mphi_state;
+}
+
+static const TypeInfo bcm2835_mphi_type_info = {
+    .name = TYPE_BCM2835_MPHI,
+    .parent = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(BCM2835MphiState),
+    .instance_init = mphi_init,
+    .class_init = mphi_class_init,
+};
+
+static void bcm2835_mphi_register_types(void)
+{
+    type_register_static(&bcm2835_mphi_type_info);
+}
+
+type_init(bcm2835_mphi_register_types)
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/Makefile.objs
+++ b/hw/misc/Makefile.objs
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_OMAP) += omap_l4.o
 common-obj-$(CONFIG_OMAP) += omap_sdrc.o
 common-obj-$(CONFIG_OMAP) += omap_tap.o
 common-obj-$(CONFIG_RASPI) += bcm2835_mbox.o
+common-obj-$(CONFIG_RASPI) += bcm2835_mphi.o
 common-obj-$(CONFIG_RASPI) += bcm2835_property.o
 common-obj-$(CONFIG_RASPI) += bcm2835_rng.o
 common-obj-$(CONFIG_RASPI) += bcm2835_thermal.o
--
2.20.1

diff view generated by jsdifflib

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_sdmc.c | 55 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 45 insertions(+), 10 deletions(-)

diff --git a/hw/misc/aspeed_sdmc.c b/hw/misc/aspeed_sdmc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_sdmc.c
+++ b/hw/misc/aspeed_sdmc.c
@@ -XXX,XX +XXX,XX @@

 /* Protection Key Register */
 #define R_PROT (0x00 / 4)
+#define PROT_UNLOCKED 0x01
+#define PROT_HARDLOCKED 0x10 /* AST2600 */
+#define PROT_SOFTLOCKED 0x00
+
 #define PROT_KEY_UNLOCK 0xFC600309
+#define PROT_KEY_HARDLOCK 0xDEADDEAD /* AST2600 */

 /* Configuration Register */
 #define R_CONF (0x04 / 4)
@@ -XXX,XX +XXX,XX @@ static void aspeed_sdmc_write(void *opaque, hwaddr addr, uint64_t data,
         return;
     }

-    if (addr == R_PROT) {
-        s->regs[addr] = (data == PROT_KEY_UNLOCK) ? 1 : 0;
-        return;
-    }
-
-    if (!s->regs[R_PROT]) {
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: SDMC is locked!\n", __func__);
-        return;
-    }
-
     asc->write(s, addr, data);
 }

@@ -XXX,XX +XXX,XX @@ static uint32_t aspeed_2400_sdmc_compute_conf(AspeedSDMCState *s, uint32_t data)
 static void aspeed_2400_sdmc_write(AspeedSDMCState *s, uint32_t reg,
                                    uint32_t data)
 {
+    if (reg == R_PROT) {
+        s->regs[reg] = (data == PROT_KEY_UNLOCK) ? PROT_UNLOCKED : PROT_SOFTLOCKED;
+        return;
+    }
+
+    if (!s->regs[R_PROT]) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: SDMC is locked!\n", __func__);
+        return;
+    }
+
     switch (reg) {
     case R_CONF:
         data = aspeed_2400_sdmc_compute_conf(s, data);
@@ -XXX,XX +XXX,XX @@ static uint32_t aspeed_2500_sdmc_compute_conf(AspeedSDMCState *s, uint32_t data)
 static void aspeed_2500_sdmc_write(AspeedSDMCState *s, uint32_t reg,
                                    uint32_t data)
 {
+    if (reg == R_PROT) {
+        s->regs[reg] = (data == PROT_KEY_UNLOCK) ? PROT_UNLOCKED : PROT_SOFTLOCKED;
+        return;
+    }
+
+    if (!s->regs[R_PROT]) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: SDMC is locked!\n", __func__);
+        return;
+    }
+
     switch (reg) {
     case R_CONF:
         data = aspeed_2500_sdmc_compute_conf(s, data);
@@ -XXX,XX +XXX,XX @@ static uint32_t aspeed_2600_sdmc_compute_conf(AspeedSDMCState *s, uint32_t data)
 static void aspeed_2600_sdmc_write(AspeedSDMCState *s, uint32_t reg,
                                    uint32_t data)
 {
+    if (s->regs[R_PROT] == PROT_HARDLOCKED) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: SDMC is locked until system reset!\n",
+                      __func__);
+        return;
+    }
+
+    if (reg != R_PROT && s->regs[R_PROT] == PROT_SOFTLOCKED) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: SDMC is locked!\n", __func__);
+        return;
+    }
+
     switch (reg) {
+    case R_PROT:
+        if (data == PROT_KEY_UNLOCK) {
+            data = PROT_UNLOCKED;
+        } else if (data == PROT_KEY_HARDLOCK) {
+            data = PROT_HARDLOCKED;
+        } else {
+            data = PROT_SOFTLOCKED;
+        }
+        break;
     case R_CONF:
         data = aspeed_2600_sdmc_compute_conf(s, data);
         break;
--
2.20.1

diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/core/cpu.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -XXX,XX +XXX,XX @@ int cpu_watchpoint_remove(CPUState *cpu, vaddr addr,
                           vaddr len, int flags);
 void cpu_watchpoint_remove_by_ref(CPUState *cpu, CPUWatchpoint *watchpoint);
 void cpu_watchpoint_remove_all(CPUState *cpu, int mask);

diff view generated by jsdifflib

From: Paul Zimmerman <pauldzim@gmail.com>

Import the dwc-hsotg (dwc2) register definitions file from the
Linux kernel. This is a copy of drivers/usb/dwc2/hw.h from the
mainline Linux kernel, the only changes being to the header, and
two instances of 'u32' changed to 'uint32_t' to allow it to
compile. Checkpatch throws a boatload of errors due to the tab
indentation, but I would rather import it as-is than reformat it.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-3-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/usb/dwc2-regs.h | 899 +++++++++++++++++++++++++++++++++++++
 1 file changed, 899 insertions(+)
 create mode 100644 include/hw/usb/dwc2-regs.h

diff --git a/include/hw/usb/dwc2-regs.h b/include/hw/usb/dwc2-regs.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/usb/dwc2-regs.h
@@ -XXX,XX +XXX,XX @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/*
+ * Imported from the Linux kernel file drivers/usb/dwc2/hw.h, commit
+ * a89bae709b3492b478480a2c9734e7e9393b279c ("usb: dwc2: Move
+ * UTMI_PHY_DATA defines closer")
+ *
+ * hw.h - DesignWare HS OTG Controller hardware definitions
+ *
+ * Copyright 2004-2013 Synopsys, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions, and the following disclaimer,
+ * without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ * to endorse or promote products derived from this software without
+ * specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation; either version 2 of the License, or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DWC2_HW_H__
+#define __DWC2_HW_H__
+
+#define HSOTG_REG(x)    (x)
+
+#define GOTGCTL                HSOTG_REG(0x000)
+#define GOTGCTL_CHIRPEN            BIT(27)
+#define GOTGCTL_MULT_VALID_BC_MASK    (0x1f << 22)
+#define GOTGCTL_MULT_VALID_BC_SHIFT    22
+#define GOTGCTL_OTGVER            BIT(20)
+#define GOTGCTL_BSESVLD            BIT(19)
+#define GOTGCTL_ASESVLD            BIT(18)
+#define GOTGCTL_DBNC_SHORT        BIT(17)
+#define GOTGCTL_CONID_B            BIT(16)
+#define GOTGCTL_DBNCE_FLTR_BYPASS    BIT(15)
+#define GOTGCTL_DEVHNPEN        BIT(11)
+#define GOTGCTL_HSTSETHNPEN        BIT(10)
+#define GOTGCTL_HNPREQ            BIT(9)
+#define GOTGCTL_HSTNEGSCS        BIT(8)
+#define GOTGCTL_SESREQ            BIT(1)
+#define GOTGCTL_SESREQSCS        BIT(0)
+
+#define GOTGINT                HSOTG_REG(0x004)
+#define GOTGINT_DBNCE_DONE        BIT(19)
+#define GOTGINT_A_DEV_TOUT_CHG        BIT(18)
+#define GOTGINT_HST_NEG_DET        BIT(17)
+#define GOTGINT_HST_NEG_SUC_STS_CHNG    BIT(9)
+#define GOTGINT_SES_REQ_SUC_STS_CHNG    BIT(8)
+#define GOTGINT_SES_END_DET        BIT(2)
+
+#define GAHBCFG                HSOTG_REG(0x008)
+#define GAHBCFG_AHB_SINGLE        BIT(23)
+#define GAHBCFG_NOTI_ALL_DMA_WRIT    BIT(22)
+#define GAHBCFG_REM_MEM_SUPP        BIT(21)
+#define GAHBCFG_P_TXF_EMP_LVL        BIT(8)
+#define GAHBCFG_NP_TXF_EMP_LVL        BIT(7)
+#define GAHBCFG_DMA_EN            BIT(5)
+#define GAHBCFG_HBSTLEN_MASK        (0xf << 1)
+#define GAHBCFG_HBSTLEN_SHIFT        1
+#define GAHBCFG_HBSTLEN_SINGLE        0
+#define GAHBCFG_HBSTLEN_INCR        1
+#define GAHBCFG_HBSTLEN_INCR4        3
+#define GAHBCFG_HBSTLEN_INCR8        5
+#define GAHBCFG_HBSTLEN_INCR16        7
+#define GAHBCFG_GLBL_INTR_EN        BIT(0)
+#define GAHBCFG_CTRL_MASK        (GAHBCFG_P_TXF_EMP_LVL | \
+                     GAHBCFG_NP_TXF_EMP_LVL | \
+                     GAHBCFG_DMA_EN | \
+                     GAHBCFG_GLBL_INTR_EN)
+
+#define GUSBCFG                HSOTG_REG(0x00C)
+#define GUSBCFG_FORCEDEVMODE        BIT(30)
+#define GUSBCFG_FORCEHOSTMODE        BIT(29)
+#define GUSBCFG_TXENDDELAY        BIT(28)
+#define GUSBCFG_ICTRAFFICPULLREMOVE    BIT(27)
+#define GUSBCFG_ICUSBCAP        BIT(26)
+#define GUSBCFG_ULPI_INT_PROT_DIS    BIT(25)
+#define GUSBCFG_INDICATORPASSTHROUGH    BIT(24)
+#define GUSBCFG_INDICATORCOMPLEMENT    BIT(23)
+#define GUSBCFG_TERMSELDLPULSE        BIT(22)
+#define GUSBCFG_ULPI_INT_VBUS_IND    BIT(21)
+#define GUSBCFG_ULPI_EXT_VBUS_DRV    BIT(20)
+#define GUSBCFG_ULPI_CLK_SUSP_M        BIT(19)
+#define GUSBCFG_ULPI_AUTO_RES        BIT(18)
+#define GUSBCFG_ULPI_FS_LS        BIT(17)
+#define GUSBCFG_OTG_UTMI_FS_SEL        BIT(16)
+#define GUSBCFG_PHY_LP_CLK_SEL        BIT(15)
+#define GUSBCFG_USBTRDTIM_MASK        (0xf << 10)
+#define GUSBCFG_USBTRDTIM_SHIFT        10
+#define GUSBCFG_HNPCAP            BIT(9)
+#define GUSBCFG_SRPCAP            BIT(8)
+#define GUSBCFG_DDRSEL            BIT(7)
+#define GUSBCFG_PHYSEL            BIT(6)
+#define GUSBCFG_FSINTF            BIT(5)
+#define GUSBCFG_ULPI_UTMI_SEL        BIT(4)
+#define GUSBCFG_PHYIF16            BIT(3)
+#define GUSBCFG_PHYIF8            (0 << 3)
+#define GUSBCFG_TOUTCAL_MASK        (0x7 << 0)
+#define GUSBCFG_TOUTCAL_SHIFT        0
+#define GUSBCFG_TOUTCAL_LIMIT        0x7
+#define GUSBCFG_TOUTCAL(_x)        ((_x) << 0)
+
+#define GRSTCTL                HSOTG_REG(0x010)
+#define GRSTCTL_AHBIDLE            BIT(31)
+#define GRSTCTL_DMAREQ            BIT(30)
+#define GRSTCTL_TXFNUM_MASK        (0x1f << 6)
+#define GRSTCTL_TXFNUM_SHIFT        6
+#define GRSTCTL_TXFNUM_LIMIT        0x1f
+#define GRSTCTL_TXFNUM(_x)        ((_x) << 6)
+#define GRSTCTL_TXFFLSH            BIT(5)
+#define GRSTCTL_RXFFLSH            BIT(4)
+#define GRSTCTL_IN_TKNQ_FLSH        BIT(3)
+#define GRSTCTL_FRMCNTRRST        BIT(2)
+#define GRSTCTL_HSFTRST            BIT(1)
+#define GRSTCTL_CSFTRST            BIT(0)
+
+#define GINTSTS                HSOTG_REG(0x014)
+#define GINTMSK                HSOTG_REG(0x018)
+#define GINTSTS_WKUPINT            BIT(31)
+#define GINTSTS_SESSREQINT        BIT(30)
+#define GINTSTS_DISCONNINT        BIT(29)
+#define GINTSTS_CONIDSTSCHNG        BIT(28)
+#define GINTSTS_LPMTRANRCVD        BIT(27)
+#define GINTSTS_PTXFEMP            BIT(26)
+#define GINTSTS_HCHINT            BIT(25)
+#define GINTSTS_PRTINT            BIT(24)
+#define GINTSTS_RESETDET        BIT(23)
+#define GINTSTS_FET_SUSP        BIT(22)
+#define GINTSTS_INCOMPL_IP        BIT(21)
+#define GINTSTS_INCOMPL_SOOUT        BIT(21)
+#define GINTSTS_INCOMPL_SOIN        BIT(20)
+#define GINTSTS_OEPINT            BIT(19)
+#define GINTSTS_IEPINT            BIT(18)
+#define GINTSTS_EPMIS            BIT(17)
+#define GINTSTS_RESTOREDONE        BIT(16)
+#define GINTSTS_EOPF            BIT(15)
+#define GINTSTS_ISOUTDROP        BIT(14)
+#define GINTSTS_ENUMDONE        BIT(13)
+#define GINTSTS_USBRST            BIT(12)
+#define GINTSTS_USBSUSP            BIT(11)
+#define GINTSTS_ERLYSUSP        BIT(10)
+#define GINTSTS_I2CINT            BIT(9)
+#define GINTSTS_ULPI_CK_INT        BIT(8)
+#define GINTSTS_GOUTNAKEFF        BIT(7)
+#define GINTSTS_GINNAKEFF        BIT(6)
+#define GINTSTS_NPTXFEMP        BIT(5)
+#define GINTSTS_RXFLVL            BIT(4)
+#define GINTSTS_SOF            BIT(3)
+#define GINTSTS_OTGINT            BIT(2)
+#define GINTSTS_MODEMIS            BIT(1)
+#define GINTSTS_CURMODE_HOST        BIT(0)
+
+#define GRXSTSR                HSOTG_REG(0x01C)
+#define GRXSTSP                HSOTG_REG(0x020)
+#define GRXSTS_FN_MASK            (0x7f << 25)
+#define GRXSTS_FN_SHIFT            25
+#define GRXSTS_PKTSTS_MASK        (0xf << 17)
+#define GRXSTS_PKTSTS_SHIFT        17
+#define GRXSTS_PKTSTS_GLOBALOUTNAK    1
+#define GRXSTS_PKTSTS_OUTRX        2
+#define GRXSTS_PKTSTS_HCHIN        2
+#define GRXSTS_PKTSTS_OUTDONE        3
+#define GRXSTS_PKTSTS_HCHIN_XFER_COMP    3
+#define GRXSTS_PKTSTS_SETUPDONE        4
+#define GRXSTS_PKTSTS_DATATOGGLEERR    5
+#define GRXSTS_PKTSTS_SETUPRX        6
+#define GRXSTS_PKTSTS_HCHHALTED        7
+#define GRXSTS_HCHNUM_MASK        (0xf << 0)
+#define GRXSTS_HCHNUM_SHIFT        0
+#define GRXSTS_DPID_MASK        (0x3 << 15)
+#define GRXSTS_DPID_SHIFT        15
+#define GRXSTS_BYTECNT_MASK        (0x7ff << 4)
+#define GRXSTS_BYTECNT_SHIFT        4
+#define GRXSTS_EPNUM_MASK        (0xf << 0)
+#define GRXSTS_EPNUM_SHIFT        0
+
+#define GRXFSIZ                HSOTG_REG(0x024)
+#define GRXFSIZ_DEPTH_MASK        (0xffff << 0)
+#define GRXFSIZ_DEPTH_SHIFT        0
+
+#define GNPTXFSIZ            HSOTG_REG(0x028)
+/* Use FIFOSIZE_* constants to access this register */
+
+#define GNPTXSTS            HSOTG_REG(0x02C)
+#define GNPTXSTS_NP_TXQ_TOP_MASK        (0x7f << 24)
+#define GNPTXSTS_NP_TXQ_TOP_SHIFT        24
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_MASK        (0xff << 16)
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_SHIFT        16
+#define GNPTXSTS_NP_TXQ_SPC_AVAIL_GET(_v)    (((_v) >> 16) & 0xff)
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_MASK        (0xffff << 0)
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_SHIFT        0
+#define GNPTXSTS_NP_TXF_SPC_AVAIL_GET(_v)    (((_v) >> 0) & 0xffff)
+
+#define GI2CCTL                HSOTG_REG(0x0030)
+#define GI2CCTL_BSYDNE            BIT(31)
+#define GI2CCTL_RW            BIT(30)
+#define GI2CCTL_I2CDATSE0        BIT(28)
+#define GI2CCTL_I2CDEVADDR_MASK        (0x3 << 26)
+#define GI2CCTL_I2CDEVADDR_SHIFT    26
+#define GI2CCTL_I2CSUSPCTL        BIT(25)
+#define GI2CCTL_ACK            BIT(24)
+#define GI2CCTL_I2CEN            BIT(23)
+#define GI2CCTL_ADDR_MASK        (0x7f << 16)
+#define GI2CCTL_ADDR_SHIFT        16
+#define GI2CCTL_REGADDR_MASK        (0xff << 8)
+#define GI2CCTL_REGADDR_SHIFT        8
+#define GI2CCTL_RWDATA_MASK        (0xff << 0)
+#define GI2CCTL_RWDATA_SHIFT        0
+
+#define GPVNDCTL            HSOTG_REG(0x0034)
+#define GGPIO                HSOTG_REG(0x0038)
+#define GGPIO_STM32_OTG_GCCFG_PWRDWN    BIT(16)
+
+#define GUID                HSOTG_REG(0x003c)
+#define GSNPSID                HSOTG_REG(0x0040)
+#define GHWCFG1                HSOTG_REG(0x0044)
+#define GSNPSID_ID_MASK            GENMASK(31, 16)
+
+#define GHWCFG2                HSOTG_REG(0x0048)
+#define GHWCFG2_OTG_ENABLE_IC_USB        BIT(31)
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_MASK        (0x1f << 26)
+#define GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT        26
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_MASK    (0x3 << 24)
+#define GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT    24
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_MASK    (0x3 << 22)
+#define GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT    22
+#define GHWCFG2_MULTI_PROC_INT            BIT(20)
+#define GHWCFG2_DYNAMIC_FIFO            BIT(19)
+#define GHWCFG2_PERIO_EP_SUPPORTED        BIT(18)
+#define GHWCFG2_NUM_HOST_CHAN_MASK        (0xf << 14)
+#define GHWCFG2_NUM_HOST_CHAN_SHIFT        14
+#define GHWCFG2_NUM_DEV_EP_MASK            (0xf << 10)
+#define GHWCFG2_NUM_DEV_EP_SHIFT        10
+#define GHWCFG2_FS_PHY_TYPE_MASK        (0x3 << 8)
+#define GHWCFG2_FS_PHY_TYPE_SHIFT        8
+#define GHWCFG2_FS_PHY_TYPE_NOT_SUPPORTED    0
+#define GHWCFG2_FS_PHY_TYPE_DEDICATED        1
+#define GHWCFG2_FS_PHY_TYPE_SHARED_UTMI        2
+#define GHWCFG2_FS_PHY_TYPE_SHARED_ULPI        3
+#define GHWCFG2_HS_PHY_TYPE_MASK        (0x3 << 6)
+#define GHWCFG2_HS_PHY_TYPE_SHIFT        6
+#define GHWCFG2_HS_PHY_TYPE_NOT_SUPPORTED    0
+#define GHWCFG2_HS_PHY_TYPE_UTMI        1
+#define GHWCFG2_HS_PHY_TYPE_ULPI        2
+#define GHWCFG2_HS_PHY_TYPE_UTMI_ULPI        3
+#define GHWCFG2_POINT2POINT            BIT(5)
+#define GHWCFG2_ARCHITECTURE_MASK        (0x3 << 3)
+#define GHWCFG2_ARCHITECTURE_SHIFT        3
+#define GHWCFG2_SLAVE_ONLY_ARCH            0
+#define GHWCFG2_EXT_DMA_ARCH            1
+#define GHWCFG2_INT_DMA_ARCH            2
+#define GHWCFG2_OP_MODE_MASK            (0x7 << 0)
+#define GHWCFG2_OP_MODE_SHIFT            0
+#define GHWCFG2_OP_MODE_HNP_SRP_CAPABLE        0
+#define GHWCFG2_OP_MODE_SRP_ONLY_CAPABLE    1
+#define GHWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE    2
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_DEVICE    3
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_DEVICE    4
+#define GHWCFG2_OP_MODE_SRP_CAPABLE_HOST    5
+#define GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST    6
+#define GHWCFG2_OP_MODE_UNDEFINED        7
+
+#define GHWCFG3                HSOTG_REG(0x004c)
+#define GHWCFG3_DFIFO_DEPTH_MASK        (0xffff << 16)
+#define GHWCFG3_DFIFO_DEPTH_SHIFT        16
+#define GHWCFG3_OTG_LPM_EN            BIT(15)
+#define GHWCFG3_BC_SUPPORT            BIT(14)
+#define GHWCFG3_OTG_ENABLE_HSIC            BIT(13)
+#define GHWCFG3_ADP_SUPP            BIT(12)
+#define GHWCFG3_SYNCH_RESET_TYPE        BIT(11)
+#define GHWCFG3_OPTIONAL_FEATURES        BIT(10)
+#define GHWCFG3_VENDOR_CTRL_IF            BIT(9)
+#define GHWCFG3_I2C                BIT(8)
+#define GHWCFG3_OTG_FUNC            BIT(7)
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_MASK    (0x7 << 4)
+#define GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT    4
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_MASK    (0xf << 0)
+#define GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT    0
+
+#define GHWCFG4                HSOTG_REG(0x0050)
+#define GHWCFG4_DESC_DMA_DYN            BIT(31)
+#define GHWCFG4_DESC_DMA            BIT(30)
+#define GHWCFG4_NUM_IN_EPS_MASK            (0xf << 26)
+#define GHWCFG4_NUM_IN_EPS_SHIFT        26
+#define GHWCFG4_DED_FIFO_EN            BIT(25)
+#define GHWCFG4_DED_FIFO_SHIFT        25
+#define GHWCFG4_SESSION_END_FILT_EN        BIT(24)
+#define GHWCFG4_B_VALID_FILT_EN            BIT(23)
+#define GHWCFG4_A_VALID_FILT_EN            BIT(22)
+#define GHWCFG4_VBUS_VALID_FILT_EN        BIT(21)
+#define GHWCFG4_IDDIG_FILT_EN            BIT(20)
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_MASK    (0xf << 16)
+#define GHWCFG4_NUM_DEV_MODE_CTRL_EP_SHIFT    16
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK    (0x3 << 14)
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_SHIFT    14
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8        0
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_16        1
+#define GHWCFG4_UTMI_PHY_DATA_WIDTH_8_OR_16    2
+#define GHWCFG4_ACG_SUPPORTED            BIT(12)
+#define GHWCFG4_IPG_ISOC_SUPPORTED        BIT(11)
+#define GHWCFG4_SERVICE_INTERVAL_SUPPORTED BIT(10)
+#define GHWCFG4_XHIBER                BIT(7)
+#define GHWCFG4_HIBER                BIT(6)
+#define GHWCFG4_MIN_AHB_FREQ            BIT(5)
+#define GHWCFG4_POWER_OPTIMIZ            BIT(4)
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_MASK    (0xf << 0)
+#define GHWCFG4_NUM_DEV_PERIO_IN_EP_SHIFT    0
+
+#define GLPMCFG                HSOTG_REG(0x0054)
+#define GLPMCFG_INVSELHSIC        BIT(31)
+#define GLPMCFG_HSICCON            BIT(30)
+#define GLPMCFG_RSTRSLPSTS        BIT(29)
+#define GLPMCFG_ENBESL            BIT(28)
+#define GLPMCFG_LPM_RETRYCNT_STS_MASK    (0x7 << 25)
+#define GLPMCFG_LPM_RETRYCNT_STS_SHIFT    25
+#define GLPMCFG_SNDLPM            BIT(24)
+#define GLPMCFG_RETRY_CNT_MASK        (0x7 << 21)
+#define GLPMCFG_RETRY_CNT_SHIFT        21
+#define GLPMCFG_LPM_REJECT_CTRL_CONTROL    BIT(21)
+#define GLPMCFG_LPM_ACCEPT_CTRL_ISOC    BIT(22)
+#define GLPMCFG_LPM_CHNL_INDX_MASK    (0xf << 17)
+#define GLPMCFG_LPM_CHNL_INDX_SHIFT    17
+#define GLPMCFG_L1RESUMEOK        BIT(16)
+#define GLPMCFG_SLPSTS            BIT(15)
+#define GLPMCFG_COREL1RES_MASK        (0x3 << 13)
+#define GLPMCFG_COREL1RES_SHIFT        13
+#define GLPMCFG_HIRD_THRES_MASK        (0x1f << 8)
+#define GLPMCFG_HIRD_THRES_SHIFT    8
+#define GLPMCFG_HIRD_THRES_EN        (0x10 << 8)
+#define GLPMCFG_ENBLSLPM        BIT(7)
+#define GLPMCFG_BREMOTEWAKE        BIT(6)
+#define GLPMCFG_HIRD_MASK        (0xf << 2)
+#define GLPMCFG_HIRD_SHIFT        2
+#define GLPMCFG_APPL1RES        BIT(1)
+#define GLPMCFG_LPMCAP            BIT(0)
+
+#define GPWRDN                HSOTG_REG(0x0058)
+#define GPWRDN_MULT_VAL_ID_BC_MASK    (0x1f << 24)
+#define GPWRDN_MULT_VAL_ID_BC_SHIFT    24
+#define GPWRDN_ADP_INT            BIT(23)
+#define GPWRDN_BSESSVLD            BIT(22)
+#define GPWRDN_IDSTS            BIT(21)
+#define GPWRDN_LINESTATE_MASK        (0x3 << 19)
+#define GPWRDN_LINESTATE_SHIFT        19
+#define GPWRDN_STS_CHGINT_MSK        BIT(18)
+#define GPWRDN_STS_CHGINT        BIT(17)
+#define GPWRDN_SRP_DET_MSK        BIT(16)
+#define GPWRDN_SRP_DET            BIT(15)
+#define GPWRDN_CONNECT_DET_MSK        BIT(14)
+#define GPWRDN_CONNECT_DET        BIT(13)
+#define GPWRDN_DISCONN_DET_MSK        BIT(12)
+#define GPWRDN_DISCONN_DET        BIT(11)
+#define GPWRDN_RST_DET_MSK        BIT(10)
+#define GPWRDN_RST_DET            BIT(9)
+#define GPWRDN_LNSTSCHG_MSK        BIT(8)
+#define GPWRDN_LNSTSCHG            BIT(7)
+#define GPWRDN_DIS_VBUS            BIT(6)
+#define GPWRDN_PWRDNSWTCH        BIT(5)
+#define GPWRDN_PWRDNRSTN        BIT(4)
+#define GPWRDN_PWRDNCLMP        BIT(3)
+#define GPWRDN_RESTORE            BIT(2)
+#define GPWRDN_PMUACTV            BIT(1)
+#define GPWRDN_PMUINTSEL        BIT(0)
+
+#define GDFIFOCFG            HSOTG_REG(0x005c)
+#define GDFIFOCFG_EPINFOBASE_MASK    (0xffff << 16)
+#define GDFIFOCFG_EPINFOBASE_SHIFT    16
+#define GDFIFOCFG_GDFIFOCFG_MASK    (0xffff << 0)
+#define GDFIFOCFG_GDFIFOCFG_SHIFT    0
+
+#define ADPCTL                HSOTG_REG(0x0060)
+#define ADPCTL_AR_MASK            (0x3 << 27)
+#define ADPCTL_AR_SHIFT            27
+#define ADPCTL_ADP_TMOUT_INT_MSK    BIT(26)
+#define ADPCTL_ADP_SNS_INT_MSK        BIT(25)
+#define ADPCTL_ADP_PRB_INT_MSK        BIT(24)
+#define ADPCTL_ADP_TMOUT_INT        BIT(23)
+#define ADPCTL_ADP_SNS_INT        BIT(22)
+#define ADPCTL_ADP_PRB_INT        BIT(21)
+#define ADPCTL_ADPENA            BIT(20)
+#define ADPCTL_ADPRES            BIT(19)
+#define ADPCTL_ENASNS            BIT(18)
+#define ADPCTL_ENAPRB            BIT(17)
+#define ADPCTL_RTIM_MASK        (0x7ff << 6)
+#define ADPCTL_RTIM_SHIFT        6
+#define ADPCTL_PRB_PER_MASK        (0x3 << 4)
+#define ADPCTL_PRB_PER_SHIFT        4
+#define ADPCTL_PRB_DELTA_MASK        (0x3 << 2)
+#define ADPCTL_PRB_DELTA_SHIFT        2
+#define ADPCTL_PRB_DSCHRG_MASK        (0x3 << 0)
+#define ADPCTL_PRB_DSCHRG_SHIFT        0
+
+#define GREFCLK                 HSOTG_REG(0x0064)
+#define GREFCLK_REFCLKPER_MASK         (0x1ffff << 15)
+#define GREFCLK_REFCLKPER_SHIFT         15
+#define GREFCLK_REF_CLK_MODE         BIT(14)
+#define GREFCLK_SOF_CNT_WKUP_ALERT_MASK     (0x3ff)
+#define GREFCLK_SOF_CNT_WKUP_ALERT_SHIFT 0
+
+#define GINTMSK2            HSOTG_REG(0x0068)
+#define GINTMSK2_WKUP_ALERT_INT_MSK    BIT(0)
+
+#define GINTSTS2            HSOTG_REG(0x006c)
+#define GINTSTS2_WKUP_ALERT_INT        BIT(0)
+
+#define HPTXFSIZ            HSOTG_REG(0x100)
+/* Use FIFOSIZE_* constants to access this register */
+
+#define DPTXFSIZN(_a)            HSOTG_REG(0x104 + (((_a) - 1) * 4))
+/* Use FIFOSIZE_* constants to access this register */
+
+/* These apply to the GNPTXFSIZ, HPTXFSIZ and DPTXFSIZN registers */
+#define FIFOSIZE_DEPTH_MASK        (0xffff << 16)
+#define FIFOSIZE_DEPTH_SHIFT        16
+#define FIFOSIZE_STARTADDR_MASK        (0xffff << 0)
+#define FIFOSIZE_STARTADDR_SHIFT    0
+#define FIFOSIZE_DEPTH_GET(_x)        (((_x) >> 16) & 0xffff)
463
+
464
+/* Device mode registers */
465
+
466
+#define DCFG                HSOTG_REG(0x800)
467
+#define DCFG_DESCDMA_EN            BIT(23)
468
+#define DCFG_EPMISCNT_MASK        (0x1f << 18)
469
+#define DCFG_EPMISCNT_SHIFT        18
470
+#define DCFG_EPMISCNT_LIMIT        0x1f
471
+#define DCFG_EPMISCNT(_x)        ((_x) << 18)
472
+#define DCFG_IPG_ISOC_SUPPORDED        BIT(17)
473
+#define DCFG_PERFRINT_MASK        (0x3 << 11)
474
+#define DCFG_PERFRINT_SHIFT        11
475
+#define DCFG_PERFRINT_LIMIT        0x3
476
+#define DCFG_PERFRINT(_x)        ((_x) << 11)
477
+#define DCFG_DEVADDR_MASK        (0x7f << 4)
478
+#define DCFG_DEVADDR_SHIFT        4
479
+#define DCFG_DEVADDR_LIMIT        0x7f
480
+#define DCFG_DEVADDR(_x)        ((_x) << 4)
481
+#define DCFG_NZ_STS_OUT_HSHK        BIT(2)
482
+#define DCFG_DEVSPD_MASK        (0x3 << 0)
483
+#define DCFG_DEVSPD_SHIFT        0
484
+#define DCFG_DEVSPD_HS            0
485
+#define DCFG_DEVSPD_FS            1
486
+#define DCFG_DEVSPD_LS            2
487
+#define DCFG_DEVSPD_FS48        3
488
+
489
+#define DCTL                HSOTG_REG(0x804)
490
+#define DCTL_SERVICE_INTERVAL_SUPPORTED BIT(19)
491
+#define DCTL_PWRONPRGDONE        BIT(11)
492
+#define DCTL_CGOUTNAK            BIT(10)
493
+#define DCTL_SGOUTNAK            BIT(9)
494
+#define DCTL_CGNPINNAK            BIT(8)
495
+#define DCTL_SGNPINNAK            BIT(7)
496
+#define DCTL_TSTCTL_MASK        (0x7 << 4)
497
+#define DCTL_TSTCTL_SHIFT        4
498
+#define DCTL_GOUTNAKSTS            BIT(3)
499
+#define DCTL_GNPINNAKSTS        BIT(2)
500
+#define DCTL_SFTDISCON            BIT(1)
501
+#define DCTL_RMTWKUPSIG            BIT(0)
502
+
503
+#define DSTS                HSOTG_REG(0x808)
504
+#define DSTS_SOFFN_MASK            (0x3fff << 8)
505
+#define DSTS_SOFFN_SHIFT        8
506
+#define DSTS_SOFFN_LIMIT        0x3fff
507
+#define DSTS_SOFFN(_x)            ((_x) << 8)
508
+#define DSTS_ERRATICERR            BIT(3)
509
+#define DSTS_ENUMSPD_MASK        (0x3 << 1)
510
+#define DSTS_ENUMSPD_SHIFT        1
511
+#define DSTS_ENUMSPD_HS            0
512
+#define DSTS_ENUMSPD_FS            1
513
+#define DSTS_ENUMSPD_LS            2
514
+#define DSTS_ENUMSPD_FS48        3
515
+#define DSTS_SUSPSTS            BIT(0)
516
+
517
+#define DIEPMSK                HSOTG_REG(0x810)
518
+#define DIEPMSK_NAKMSK            BIT(13)
519
+#define DIEPMSK_BNAININTRMSK        BIT(9)
520
+#define DIEPMSK_TXFIFOUNDRNMSK        BIT(8)
521
+#define DIEPMSK_TXFIFOEMPTY        BIT(7)
522
+#define DIEPMSK_INEPNAKEFFMSK        BIT(6)
523
+#define DIEPMSK_INTKNEPMISMSK        BIT(5)
524
+#define DIEPMSK_INTKNTXFEMPMSK        BIT(4)
525
+#define DIEPMSK_TIMEOUTMSK        BIT(3)
526
+#define DIEPMSK_AHBERRMSK        BIT(2)
527
+#define DIEPMSK_EPDISBLDMSK        BIT(1)
528
+#define DIEPMSK_XFERCOMPLMSK        BIT(0)
529
+
530
+#define DOEPMSK                HSOTG_REG(0x814)
531
+#define DOEPMSK_BNAMSK            BIT(9)
532
+#define DOEPMSK_BACK2BACKSETUP        BIT(6)
533
+#define DOEPMSK_STSPHSERCVDMSK        BIT(5)
534
+#define DOEPMSK_OUTTKNEPDISMSK        BIT(4)
535
+#define DOEPMSK_SETUPMSK        BIT(3)
536
+#define DOEPMSK_AHBERRMSK        BIT(2)
537
+#define DOEPMSK_EPDISBLDMSK        BIT(1)
538
+#define DOEPMSK_XFERCOMPLMSK        BIT(0)
539
+
540
+#define DAINT                HSOTG_REG(0x818)
+#define DAINTMSK            HSOTG_REG(0x81C)
+#define DAINT_OUTEP_SHIFT        16
+#define DAINT_OUTEP(_x)            (1 << ((_x) + 16))
+#define DAINT_INEP(_x)            (1 << (_x))
+
+#define DTKNQR1                HSOTG_REG(0x820)
+#define DTKNQR2                HSOTG_REG(0x824)
+#define DTKNQR3                HSOTG_REG(0x830)
+#define DTKNQR4                HSOTG_REG(0x834)
+#define DIEPEMPMSK            HSOTG_REG(0x834)
+
+#define DVBUSDIS            HSOTG_REG(0x828)
+#define DVBUSPULSE            HSOTG_REG(0x82C)
+
+#define DIEPCTL0            HSOTG_REG(0x900)
+#define DIEPCTL(_a)            HSOTG_REG(0x900 + ((_a) * 0x20))
+
+#define DOEPCTL0            HSOTG_REG(0xB00)
+#define DOEPCTL(_a)            HSOTG_REG(0xB00 + ((_a) * 0x20))
+
+/* EP0 specialness:
+ * bits[29..28] - reserved (no SetD0PID, SetD1PID)
+ * bits[25..22] - should always be zero, this isn't a periodic endpoint
+ * bits[10..0] - MPS setting different for EP0
+ */
+#define D0EPCTL_MPS_MASK        (0x3 << 0)
+#define D0EPCTL_MPS_SHIFT        0
+#define D0EPCTL_MPS_64            0
+#define D0EPCTL_MPS_32            1
+#define D0EPCTL_MPS_16            2
+#define D0EPCTL_MPS_8            3
+
+#define DXEPCTL_EPENA            BIT(31)
+#define DXEPCTL_EPDIS            BIT(30)
+#define DXEPCTL_SETD1PID        BIT(29)
+#define DXEPCTL_SETODDFR        BIT(29)
+#define DXEPCTL_SETD0PID        BIT(28)
+#define DXEPCTL_SETEVENFR        BIT(28)
+#define DXEPCTL_SNAK            BIT(27)
+#define DXEPCTL_CNAK            BIT(26)
+#define DXEPCTL_TXFNUM_MASK        (0xf << 22)
+#define DXEPCTL_TXFNUM_SHIFT        22
+#define DXEPCTL_TXFNUM_LIMIT        0xf
+#define DXEPCTL_TXFNUM(_x)        ((_x) << 22)
+#define DXEPCTL_STALL            BIT(21)
+#define DXEPCTL_SNP            BIT(20)
+#define DXEPCTL_EPTYPE_MASK        (0x3 << 18)
+#define DXEPCTL_EPTYPE_CONTROL        (0x0 << 18)
+#define DXEPCTL_EPTYPE_ISO        (0x1 << 18)
+#define DXEPCTL_EPTYPE_BULK        (0x2 << 18)
+#define DXEPCTL_EPTYPE_INTERRUPT    (0x3 << 18)
+
+#define DXEPCTL_NAKSTS            BIT(17)
+#define DXEPCTL_DPID            BIT(16)
+#define DXEPCTL_EOFRNUM            BIT(16)
+#define DXEPCTL_USBACTEP        BIT(15)
+#define DXEPCTL_NEXTEP_MASK        (0xf << 11)
+#define DXEPCTL_NEXTEP_SHIFT        11
+#define DXEPCTL_NEXTEP_LIMIT        0xf
+#define DXEPCTL_NEXTEP(_x)        ((_x) << 11)
+#define DXEPCTL_MPS_MASK        (0x7ff << 0)
+#define DXEPCTL_MPS_SHIFT        0
+#define DXEPCTL_MPS_LIMIT        0x7ff
+#define DXEPCTL_MPS(_x)            ((_x) << 0)
+
+#define DIEPINT(_a)            HSOTG_REG(0x908 + ((_a) * 0x20))
+#define DOEPINT(_a)            HSOTG_REG(0xB08 + ((_a) * 0x20))
+#define DXEPINT_SETUP_RCVD        BIT(15)
+#define DXEPINT_NYETINTRPT        BIT(14)
+#define DXEPINT_NAKINTRPT        BIT(13)
+#define DXEPINT_BBLEERRINTRPT        BIT(12)
+#define DXEPINT_PKTDRPSTS        BIT(11)
+#define DXEPINT_BNAINTR            BIT(9)
+#define DXEPINT_TXFIFOUNDRN        BIT(8)
+#define DXEPINT_OUTPKTERR        BIT(8)
+#define DXEPINT_TXFEMP            BIT(7)
+#define DXEPINT_INEPNAKEFF        BIT(6)
+#define DXEPINT_BACK2BACKSETUP        BIT(6)
+#define DXEPINT_INTKNEPMIS        BIT(5)
+#define DXEPINT_STSPHSERCVD        BIT(5)
+#define DXEPINT_INTKNTXFEMP        BIT(4)
+#define DXEPINT_OUTTKNEPDIS        BIT(4)
+#define DXEPINT_TIMEOUT            BIT(3)
+#define DXEPINT_SETUP            BIT(3)
+#define DXEPINT_AHBERR            BIT(2)
+#define DXEPINT_EPDISBLD        BIT(1)
+#define DXEPINT_XFERCOMPL        BIT(0)
+
+#define DIEPTSIZ0            HSOTG_REG(0x910)
+#define DIEPTSIZ0_PKTCNT_MASK        (0x3 << 19)
+#define DIEPTSIZ0_PKTCNT_SHIFT        19
+#define DIEPTSIZ0_PKTCNT_LIMIT        0x3
+#define DIEPTSIZ0_PKTCNT(_x)        ((_x) << 19)
+#define DIEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
+#define DIEPTSIZ0_XFERSIZE_SHIFT    0
+#define DIEPTSIZ0_XFERSIZE_LIMIT    0x7f
+#define DIEPTSIZ0_XFERSIZE(_x)        ((_x) << 0)
+
+#define DOEPTSIZ0            HSOTG_REG(0xB10)
+#define DOEPTSIZ0_SUPCNT_MASK        (0x3 << 29)
+#define DOEPTSIZ0_SUPCNT_SHIFT        29
+#define DOEPTSIZ0_SUPCNT_LIMIT        0x3
+#define DOEPTSIZ0_SUPCNT(_x)        ((_x) << 29)
+#define DOEPTSIZ0_PKTCNT        BIT(19)
+#define DOEPTSIZ0_XFERSIZE_MASK        (0x7f << 0)
+#define DOEPTSIZ0_XFERSIZE_SHIFT    0
+
+#define DIEPTSIZ(_a)            HSOTG_REG(0x910 + ((_a) * 0x20))
+#define DOEPTSIZ(_a)            HSOTG_REG(0xB10 + ((_a) * 0x20))
+#define DXEPTSIZ_MC_MASK        (0x3 << 29)
+#define DXEPTSIZ_MC_SHIFT        29
+#define DXEPTSIZ_MC_LIMIT        0x3
+#define DXEPTSIZ_MC(_x)            ((_x) << 29)
+#define DXEPTSIZ_PKTCNT_MASK        (0x3ff << 19)
+#define DXEPTSIZ_PKTCNT_SHIFT        19
+#define DXEPTSIZ_PKTCNT_LIMIT        0x3ff
+#define DXEPTSIZ_PKTCNT_GET(_v)        (((_v) >> 19) & 0x3ff)
+#define DXEPTSIZ_PKTCNT(_x)        ((_x) << 19)
+#define DXEPTSIZ_XFERSIZE_MASK        (0x7ffff << 0)
+#define DXEPTSIZ_XFERSIZE_SHIFT        0
+#define DXEPTSIZ_XFERSIZE_LIMIT        0x7ffff
+#define DXEPTSIZ_XFERSIZE_GET(_v)    (((_v) >> 0) & 0x7ffff)
+#define DXEPTSIZ_XFERSIZE(_x)        ((_x) << 0)
+
+#define DIEPDMA(_a)            HSOTG_REG(0x914 + ((_a) * 0x20))
+#define DOEPDMA(_a)            HSOTG_REG(0xB14 + ((_a) * 0x20))
+
+#define DTXFSTS(_a)            HSOTG_REG(0x918 + ((_a) * 0x20))
+
+#define PCGCTL                HSOTG_REG(0x0e00)
+#define PCGCTL_IF_DEV_MODE        BIT(31)
+#define PCGCTL_P2HD_PRT_SPD_MASK    (0x3 << 29)
+#define PCGCTL_P2HD_PRT_SPD_SHIFT    29
+#define PCGCTL_P2HD_DEV_ENUM_SPD_MASK    (0x3 << 27)
+#define PCGCTL_P2HD_DEV_ENUM_SPD_SHIFT    27
+#define PCGCTL_MAC_DEV_ADDR_MASK    (0x7f << 20)
+#define PCGCTL_MAC_DEV_ADDR_SHIFT    20
+#define PCGCTL_MAX_TERMSEL        BIT(19)
+#define PCGCTL_MAX_XCVRSELECT_MASK    (0x3 << 17)
+#define PCGCTL_MAX_XCVRSELECT_SHIFT    17
+#define PCGCTL_PORT_POWER        BIT(16)
+#define PCGCTL_PRT_CLK_SEL_MASK        (0x3 << 14)
+#define PCGCTL_PRT_CLK_SEL_SHIFT    14
+#define PCGCTL_ESS_REG_RESTORED        BIT(13)
+#define PCGCTL_EXTND_HIBER_SWITCH    BIT(12)
+#define PCGCTL_EXTND_HIBER_PWRCLMP    BIT(11)
+#define PCGCTL_ENBL_EXTND_HIBER        BIT(10)
+#define PCGCTL_RESTOREMODE        BIT(9)
+#define PCGCTL_RESETAFTSUSP        BIT(8)
+#define PCGCTL_DEEP_SLEEP        BIT(7)
+#define PCGCTL_PHY_IN_SLEEP        BIT(6)
+#define PCGCTL_ENBL_SLEEP_GATING    BIT(5)
+#define PCGCTL_RSTPDWNMODULE        BIT(3)
+#define PCGCTL_PWRCLMP            BIT(2)
+#define PCGCTL_GATEHCLK            BIT(1)
+#define PCGCTL_STOPPCLK            BIT(0)
+
+#define PCGCCTL1 HSOTG_REG(0xe04)
+#define PCGCCTL1_TIMER (0x3 << 1)
+#define PCGCCTL1_GATEEN BIT(0)
+
+#define EPFIFO(_a)            HSOTG_REG(0x1000 + ((_a) * 0x1000))
+
+/* Host Mode Registers */
+
+#define HCFG                HSOTG_REG(0x0400)
+#define HCFG_MODECHTIMEN        BIT(31)
+#define HCFG_PERSCHEDENA        BIT(26)
+#define HCFG_FRLISTEN_MASK        (0x3 << 24)
+#define HCFG_FRLISTEN_SHIFT        24
+#define HCFG_FRLISTEN_8                (0 << 24)
+#define FRLISTEN_8_SIZE                8
+#define HCFG_FRLISTEN_16            BIT(24)
+#define FRLISTEN_16_SIZE            16
+#define HCFG_FRLISTEN_32            (2 << 24)
+#define FRLISTEN_32_SIZE            32
+#define HCFG_FRLISTEN_64            (3 << 24)
+#define FRLISTEN_64_SIZE            64
+#define HCFG_DESCDMA            BIT(23)
+#define HCFG_RESVALID_MASK        (0xff << 8)
+#define HCFG_RESVALID_SHIFT        8
+#define HCFG_ENA32KHZ            BIT(7)
+#define HCFG_FSLSSUPP            BIT(2)
+#define HCFG_FSLSPCLKSEL_MASK        (0x3 << 0)
+#define HCFG_FSLSPCLKSEL_SHIFT        0
+#define HCFG_FSLSPCLKSEL_30_60_MHZ    0
+#define HCFG_FSLSPCLKSEL_48_MHZ        1
+#define HCFG_FSLSPCLKSEL_6_MHZ        2
+
+#define HFIR                HSOTG_REG(0x0404)
+#define HFIR_FRINT_MASK            (0xffff << 0)
+#define HFIR_FRINT_SHIFT        0
+#define HFIR_RLDCTRL            BIT(16)
+
+#define HFNUM                HSOTG_REG(0x0408)
+#define HFNUM_FRREM_MASK        (0xffff << 16)
+#define HFNUM_FRREM_SHIFT        16
+#define HFNUM_FRNUM_MASK        (0xffff << 0)
+#define HFNUM_FRNUM_SHIFT        0
+#define HFNUM_MAX_FRNUM            0x3fff
+
+#define HPTXSTS                HSOTG_REG(0x0410)
+#define TXSTS_QTOP_ODD            BIT(31)
+#define TXSTS_QTOP_CHNEP_MASK        (0xf << 27)
+#define TXSTS_QTOP_CHNEP_SHIFT        27
+#define TXSTS_QTOP_TOKEN_MASK        (0x3 << 25)
+#define TXSTS_QTOP_TOKEN_SHIFT        25
+#define TXSTS_QTOP_TERMINATE        BIT(24)
+#define TXSTS_QSPCAVAIL_MASK        (0xff << 16)
+#define TXSTS_QSPCAVAIL_SHIFT        16
+#define TXSTS_FSPCAVAIL_MASK        (0xffff << 0)
+#define TXSTS_FSPCAVAIL_SHIFT        0
+
+#define HAINT                HSOTG_REG(0x0414)
+#define HAINTMSK            HSOTG_REG(0x0418)
+#define HFLBADDR            HSOTG_REG(0x041c)
+
+#define HPRT0                HSOTG_REG(0x0440)
+#define HPRT0_SPD_MASK            (0x3 << 17)
+#define HPRT0_SPD_SHIFT            17
+#define HPRT0_SPD_HIGH_SPEED        0
+#define HPRT0_SPD_FULL_SPEED        1
+#define HPRT0_SPD_LOW_SPEED        2
+#define HPRT0_TSTCTL_MASK        (0xf << 13)
+#define HPRT0_TSTCTL_SHIFT        13
+#define HPRT0_PWR            BIT(12)
+#define HPRT0_LNSTS_MASK        (0x3 << 10)
+#define HPRT0_LNSTS_SHIFT        10
+#define HPRT0_RST            BIT(8)
+#define HPRT0_SUSP            BIT(7)
+#define HPRT0_RES            BIT(6)
+#define HPRT0_OVRCURRCHG        BIT(5)
+#define HPRT0_OVRCURRACT        BIT(4)
+#define HPRT0_ENACHG            BIT(3)
+#define HPRT0_ENA            BIT(2)
+#define HPRT0_CONNDET            BIT(1)
+#define HPRT0_CONNSTS            BIT(0)
+
+#define HCCHAR(_ch)            HSOTG_REG(0x0500 + 0x20 * (_ch))
+#define HCCHAR_CHENA            BIT(31)
+#define HCCHAR_CHDIS            BIT(30)
+#define HCCHAR_ODDFRM            BIT(29)
+#define HCCHAR_DEVADDR_MASK        (0x7f << 22)
+#define HCCHAR_DEVADDR_SHIFT        22
+#define HCCHAR_MULTICNT_MASK        (0x3 << 20)
+#define HCCHAR_MULTICNT_SHIFT        20
+#define HCCHAR_EPTYPE_MASK        (0x3 << 18)
+#define HCCHAR_EPTYPE_SHIFT        18
+#define HCCHAR_LSPDDEV            BIT(17)
+#define HCCHAR_EPDIR            BIT(15)
+#define HCCHAR_EPNUM_MASK        (0xf << 11)
+#define HCCHAR_EPNUM_SHIFT        11
+#define HCCHAR_MPS_MASK            (0x7ff << 0)
+#define HCCHAR_MPS_SHIFT        0
+
+#define HCSPLT(_ch)            HSOTG_REG(0x0504 + 0x20 * (_ch))
+#define HCSPLT_SPLTENA            BIT(31)
+#define HCSPLT_COMPSPLT            BIT(16)
+#define HCSPLT_XACTPOS_MASK        (0x3 << 14)
+#define HCSPLT_XACTPOS_SHIFT        14
+#define HCSPLT_XACTPOS_MID        0
+#define HCSPLT_XACTPOS_END        1
+#define HCSPLT_XACTPOS_BEGIN        2
+#define HCSPLT_XACTPOS_ALL        3
+#define HCSPLT_HUBADDR_MASK        (0x7f << 7)
+#define HCSPLT_HUBADDR_SHIFT        7
+#define HCSPLT_PRTADDR_MASK        (0x7f << 0)
+#define HCSPLT_PRTADDR_SHIFT        0
+
+#define HCINT(_ch)            HSOTG_REG(0x0508 + 0x20 * (_ch))
+#define HCINTMSK(_ch)            HSOTG_REG(0x050c + 0x20 * (_ch))
+#define HCINTMSK_RESERVED14_31        (0x3ffff << 14)
+#define HCINTMSK_FRM_LIST_ROLL        BIT(13)
+#define HCINTMSK_XCS_XACT        BIT(12)
+#define HCINTMSK_BNA            BIT(11)
+#define HCINTMSK_DATATGLERR        BIT(10)
+#define HCINTMSK_FRMOVRUN        BIT(9)
+#define HCINTMSK_BBLERR            BIT(8)
+#define HCINTMSK_XACTERR        BIT(7)
+#define HCINTMSK_NYET            BIT(6)
+#define HCINTMSK_ACK            BIT(5)
+#define HCINTMSK_NAK            BIT(4)
+#define HCINTMSK_STALL            BIT(3)
+#define HCINTMSK_AHBERR            BIT(2)
+#define HCINTMSK_CHHLTD            BIT(1)
+#define HCINTMSK_XFERCOMPL        BIT(0)
+
+#define HCTSIZ(_ch)            HSOTG_REG(0x0510 + 0x20 * (_ch))
+#define TSIZ_DOPNG            BIT(31)
+#define TSIZ_SC_MC_PID_MASK        (0x3 << 29)
+#define TSIZ_SC_MC_PID_SHIFT        29
+#define TSIZ_SC_MC_PID_DATA0        0
+#define TSIZ_SC_MC_PID_DATA2        1
+#define TSIZ_SC_MC_PID_DATA1        2
+#define TSIZ_SC_MC_PID_MDATA        3
+#define TSIZ_SC_MC_PID_SETUP        3
+#define TSIZ_PKTCNT_MASK        (0x3ff << 19)
+#define TSIZ_PKTCNT_SHIFT        19
+#define TSIZ_NTD_MASK            (0xff << 8)
+#define TSIZ_NTD_SHIFT            8
+#define TSIZ_SCHINFO_MASK        (0xff << 0)
+#define TSIZ_SCHINFO_SHIFT        0
+#define TSIZ_XFERSIZE_MASK        (0x7ffff << 0)
+#define TSIZ_XFERSIZE_SHIFT        0
+
+#define HCDMA(_ch)            HSOTG_REG(0x0514 + 0x20 * (_ch))
+
+#define HCDMAB(_ch)            HSOTG_REG(0x051c + 0x20 * (_ch))
+
+#define HCFIFO(_ch)            HSOTG_REG(0x1000 + 0x1000 * (_ch))
+
+/**
+ * struct dwc2_dma_desc - DMA descriptor structure,
+ * used for both host and gadget modes
+ *
+ * @status: DMA descriptor status quadlet
+ * @buf: DMA descriptor data buffer pointer
+ *
+ * DMA Descriptor structure contains two quadlets:
+ * Status quadlet and Data buffer pointer.
+ */
+struct dwc2_dma_desc {
+    uint32_t status;
+    uint32_t buf;
+} __packed;
+
+/* Host Mode DMA descriptor status quadlet */
+
+#define HOST_DMA_A            BIT(31)
+#define HOST_DMA_STS_MASK        (0x3 << 28)
+#define HOST_DMA_STS_SHIFT        28
+#define HOST_DMA_STS_PKTERR        BIT(28)
+#define HOST_DMA_EOL            BIT(26)
+#define HOST_DMA_IOC            BIT(25)
+#define HOST_DMA_SUP            BIT(24)
+#define HOST_DMA_ALT_QTD        BIT(23)
+#define HOST_DMA_QTD_OFFSET_MASK    (0x3f << 17)
+#define HOST_DMA_QTD_OFFSET_SHIFT    17
+#define HOST_DMA_ISOC_NBYTES_MASK    (0xfff << 0)
+#define HOST_DMA_ISOC_NBYTES_SHIFT    0
+#define HOST_DMA_NBYTES_MASK        (0x1ffff << 0)
+#define HOST_DMA_NBYTES_SHIFT        0
+#define HOST_DMA_NBYTES_LIMIT        131071
+
+/* Device Mode DMA descriptor status quadlet */
+
+#define DEV_DMA_BUFF_STS_MASK        (0x3 << 30)
+#define DEV_DMA_BUFF_STS_SHIFT        30
+#define DEV_DMA_BUFF_STS_HREADY        0
+#define DEV_DMA_BUFF_STS_DMABUSY    1
+#define DEV_DMA_BUFF_STS_DMADONE    2
+#define DEV_DMA_BUFF_STS_HBUSY        3
+#define DEV_DMA_STS_MASK        (0x3 << 28)
+#define DEV_DMA_STS_SHIFT        28
+#define DEV_DMA_STS_SUCC        0
+#define DEV_DMA_STS_BUFF_FLUSH        1
+#define DEV_DMA_STS_BUFF_ERR        3
+#define DEV_DMA_L            BIT(27)
+#define DEV_DMA_SHORT            BIT(26)
+#define DEV_DMA_IOC            BIT(25)
+#define DEV_DMA_SR            BIT(24)
+#define DEV_DMA_MTRF            BIT(23)
+#define DEV_DMA_ISOC_PID_MASK        (0x3 << 23)
+#define DEV_DMA_ISOC_PID_SHIFT        23
+#define DEV_DMA_ISOC_PID_DATA0        0
+#define DEV_DMA_ISOC_PID_DATA2        1
+#define DEV_DMA_ISOC_PID_DATA1        2
+#define DEV_DMA_ISOC_PID_MDATA        3
+#define DEV_DMA_ISOC_FRNUM_MASK        (0x7ff << 12)
+#define DEV_DMA_ISOC_FRNUM_SHIFT    12
+#define DEV_DMA_ISOC_TX_NBYTES_MASK    (0xfff << 0)
+#define DEV_DMA_ISOC_TX_NBYTES_LIMIT    0xfff
+#define DEV_DMA_ISOC_RX_NBYTES_MASK    (0x7ff << 0)
+#define DEV_DMA_ISOC_RX_NBYTES_LIMIT    0x7ff
+#define DEV_DMA_ISOC_NBYTES_SHIFT    0
+#define DEV_DMA_NBYTES_MASK        (0xffff << 0)
+#define DEV_DMA_NBYTES_SHIFT        0
+#define DEV_DMA_NBYTES_LIMIT        0xffff
+
+#define MAX_DMA_DESC_NUM_GENERIC    64
+#define MAX_DMA_DESC_NUM_HS_ISOC    256
+
+#endif /* __DWC2_HW_H__ */
--
2.20.1

From: Paul Zimmerman <pauldzim@gmail.com>

Add the dwc-hsotg (dwc2) USB host controller state definitions.
Mostly based on hw/usb/hcd-ehci.h.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-4-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/usb/hcd-dwc2.h | 190 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 190 insertions(+)
create mode 100644 hw/usb/hcd-dwc2.h

diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/usb/hcd-dwc2.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * dwc-hsotg (dwc2) USB host controller state definitions
+ *
+ * Based on hw/usb/hcd-ehci.h
+ *
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef HW_USB_DWC2_H
+#define HW_USB_DWC2_H
+
+#include "qemu/timer.h"
+#include "hw/irq.h"
+#include "hw/sysbus.h"
+#include "hw/usb.h"
+#include "sysemu/dma.h"
+
+#define DWC2_MMIO_SIZE 0x11000
+
+#define DWC2_NB_CHAN 8 /* Number of host channels */
+#define DWC2_MAX_XFER_SIZE 65536 /* Max transfer size expected in HCTSIZ */
+
+typedef struct DWC2Packet DWC2Packet;
+typedef struct DWC2State DWC2State;
+typedef struct DWC2Class DWC2Class;
+
+enum async_state {
+ DWC2_ASYNC_NONE = 0,
+ DWC2_ASYNC_INITIALIZED,
+ DWC2_ASYNC_INFLIGHT,
+ DWC2_ASYNC_FINISHED,
+};
+
+struct DWC2Packet {
+ USBPacket packet;
+ uint32_t devadr;
+ uint32_t epnum;
+ uint32_t epdir;
+ uint32_t mps;
+ uint32_t pid;
+ uint32_t index;
+ uint32_t pcnt;
+ uint32_t len;
+ int32_t async;
+ bool small;
+ bool needs_service;
+};
+
+struct DWC2State {
+ /*< private >*/
+ SysBusDevice parent_obj;
+
+ /*< public >*/
+ USBBus bus;
+ qemu_irq irq;
+ MemoryRegion *dma_mr;
+ AddressSpace dma_as;
+ MemoryRegion container;
+ MemoryRegion hsotg;
+ MemoryRegion fifos;
+
+ union {
+#define DWC2_GLBREG_SIZE 0x70
+ uint32_t glbreg[DWC2_GLBREG_SIZE / sizeof(uint32_t)];
+ struct {
+ uint32_t gotgctl; /* 00 */
+ uint32_t gotgint; /* 04 */
+ uint32_t gahbcfg; /* 08 */
+ uint32_t gusbcfg; /* 0c */
+ uint32_t grstctl; /* 10 */
+ uint32_t gintsts; /* 14 */
+ uint32_t gintmsk; /* 18 */
+ uint32_t grxstsr; /* 1c */
+ uint32_t grxstsp; /* 20 */
+ uint32_t grxfsiz; /* 24 */
+ uint32_t gnptxfsiz; /* 28 */
+ uint32_t gnptxsts; /* 2c */
+ uint32_t gi2cctl; /* 30 */
+ uint32_t gpvndctl; /* 34 */
+ uint32_t ggpio; /* 38 */
+ uint32_t guid; /* 3c */
+ uint32_t gsnpsid; /* 40 */
+ uint32_t ghwcfg1; /* 44 */
+ uint32_t ghwcfg2; /* 48 */
+ uint32_t ghwcfg3; /* 4c */
+ uint32_t ghwcfg4; /* 50 */
+ uint32_t glpmcfg; /* 54 */
+ uint32_t gpwrdn; /* 58 */
+ uint32_t gdfifocfg; /* 5c */
+ uint32_t gadpctl; /* 60 */
+ uint32_t grefclk; /* 64 */
+ uint32_t gintmsk2; /* 68 */
+ uint32_t gintsts2; /* 6c */
+ };
+ };
+
+ union {
+#define DWC2_FSZREG_SIZE 0x04
+ uint32_t fszreg[DWC2_FSZREG_SIZE / sizeof(uint32_t)];
+ struct {
+ uint32_t hptxfsiz; /* 100 */
+ };
+ };
+
+ union {
+#define DWC2_HREG0_SIZE 0x44
+ uint32_t hreg0[DWC2_HREG0_SIZE / sizeof(uint32_t)];
+ struct {
+ uint32_t hcfg; /* 400 */
+ uint32_t hfir; /* 404 */
+ uint32_t hfnum; /* 408 */
+ uint32_t rsvd0; /* 40c */
+ uint32_t hptxsts; /* 410 */
+ uint32_t haint; /* 414 */
+ uint32_t haintmsk; /* 418 */
+ uint32_t hflbaddr; /* 41c */
+ uint32_t rsvd1[8]; /* 420-43c */
+ uint32_t hprt0; /* 440 */
+ };
+ };
+
+#define DWC2_HREG1_SIZE (0x20 * DWC2_NB_CHAN)
+ uint32_t hreg1[DWC2_HREG1_SIZE / sizeof(uint32_t)];
+
+#define hcchar(_ch) hreg1[((_ch) << 3) + 0] /* 500, 520, ... */
+#define hcsplt(_ch) hreg1[((_ch) << 3) + 1] /* 504, 524, ... */
+#define hcint(_ch) hreg1[((_ch) << 3) + 2] /* 508, 528, ... */
+#define hcintmsk(_ch) hreg1[((_ch) << 3) + 3] /* 50c, 52c, ... */
+#define hctsiz(_ch) hreg1[((_ch) << 3) + 4] /* 510, 530, ... */
+#define hcdma(_ch) hreg1[((_ch) << 3) + 5] /* 514, 534, ... */
+#define hcdmab(_ch) hreg1[((_ch) << 3) + 7] /* 51c, 53c, ... */
+
+ union {
+#define DWC2_PCGREG_SIZE 0x08
+ uint32_t pcgreg[DWC2_PCGREG_SIZE / sizeof(uint32_t)];
+ struct {
+ uint32_t pcgctl; /* e00 */
+ uint32_t pcgcctl1; /* e04 */
+ };
+ };
+
+ /* TODO - implement FIFO registers for slave mode */
+#define DWC2_HFIFO_SIZE (0x1000 * DWC2_NB_CHAN)
+
+ /*
+ * Internal state
+ */
+ QEMUTimer *eof_timer;
+ QEMUTimer *frame_timer;
+ QEMUBH *async_bh;
+ int64_t sof_time;
+ int64_t usb_frame_time;
+ int64_t usb_bit_time;
+ uint32_t usb_version;
+ uint16_t frame_number;
+ uint16_t fi;
+ uint16_t next_chan;
+ bool working;
+ USBPort uport;
+ DWC2Packet packet[DWC2_NB_CHAN]; /* one packet per chan */
+ uint8_t usb_buf[DWC2_NB_CHAN][DWC2_MAX_XFER_SIZE]; /* one buffer per chan */
+};
+
+struct DWC2Class {
+ /*< private >*/
+ SysBusDeviceClass parent_class;
+ ResettablePhases parent_phases;
+
+ /*< public >*/
+};
+
+#define TYPE_DWC2_USB "dwc2-usb"
+#define DWC2_USB(obj) \
+ OBJECT_CHECK(DWC2State, (obj), TYPE_DWC2_USB)
+#define DWC2_CLASS(klass) \
+ OBJECT_CLASS_CHECK(DWC2Class, (klass), TYPE_DWC2_USB)
+#define DWC2_GET_CLASS(obj) \
+ OBJECT_GET_CLASS(DWC2Class, (obj), TYPE_DWC2_USB)
+
+#endif
--
2.20.1

From: Paul Zimmerman <pauldzim@gmail.com>

Add the dwc-hsotg (dwc2) USB host controller emulation code.
Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c.

Note that to use this with the dwc-otg driver in the Raspbian
kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0" on
the kernel command line.

Emulation of slave mode and of descriptor-DMA mode has not been
implemented yet. These modes are seldom used.

I have used some on-line sources of information while developing
this emulation, including:

http://www.capital-micro.com/PDF/CME-M7_Family_User_Guide_EN.pdf
which has a pretty complete description of the controller starting
on page 370.

https://sourceforge.net/p/wive-ng/wive-ng-mt/ci/master/tree/docs/DataSheets/RT3050_5x_V2.0_081408_0902.pdf
which has a description of the controller registers starting on
page 130.

Thanks to Felippe Mathieu-Daude for providing a cleaner method
of implementing the memory regions for the controller registers.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-5-pauldzim@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/usb/hcd-dwc2.c | 1417 ++++++++++++++++++++++++++++++++++++++++++
 hw/usb/Kconfig | 5 +
 hw/usb/Makefile.objs | 1 +
 hw/usb/trace-events | 50 ++
 4 files changed, 1473 insertions(+)
 create mode 100644 hw/usb/hcd-dwc2.c

diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/usb/hcd-dwc2.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * dwc-hsotg (dwc2) USB host controller emulation
+ *
+ * Based on hw/usb/hcd-ehci.c and hw/usb/hcd-ohci.c
+ *
+ * Note that to use this emulation with the dwc-otg driver in the
+ * Raspbian kernel, you must pass the option "dwc_otg.fiq_fsm_enable=0"
+ * on the kernel command line.
+ *
+ * Some useful documentation used to develop this emulation can be
+ * found online (as of April 2020) at:
+ *
+ * http://www.capital-micro.com/PDF/CME-M7_Family_User_Guide_EN.pdf
+ * which has a pretty complete description of the controller starting
+ * on page 370.
+ *
+ * https://sourceforge.net/p/wive-ng/wive-ng-mt/ci/master/tree/docs/DataSheets/RT3050_5x_V2.0_081408_0902.pdf
+ * which has a description of the controller registers starting on
+ * page 130.
+ *
+ * Copyright (c) 2020 Paul Zimmerman <pauldzim@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "qapi/error.h"
+#include "hw/usb/dwc2-regs.h"
+#include "hw/usb/hcd-dwc2.h"
+#include "migration/vmstate.h"
+#include "trace.h"
+#include "qemu/log.h"
+#include "qemu/error-report.h"
+#include "qemu/main-loop.h"
+#include "hw/qdev-properties.h"
+
+#define USB_HZ_FS 12000000
+#define USB_HZ_HS 96000000
+#define USB_FRMINTVL 12000
+
+/* nifty macros from Arnon's EHCI version */
+#define get_field(data, field) \
+ (((data) & field##_MASK) >> field##_SHIFT)
+
+#define set_field(data, newval, field) do { \
+ uint32_t val = *(data); \
+ val &= ~field##_MASK; \
+ val |= ((newval) << field##_SHIFT) & field##_MASK; \
+ *(data) = val; \
+} while (0)
+
+#define get_bit(data, bitmask) \
+ (!!((data) & (bitmask)))
+
+/* update irq line */
+static inline void dwc2_update_irq(DWC2State *s)
+{
+ static int oldlevel;
+ int level = 0;
+
+ if ((s->gintsts & s->gintmsk) && (s->gahbcfg & GAHBCFG_GLBL_INTR_EN)) {
+ level = 1;
+ }
+ if (level != oldlevel) {
+ oldlevel = level;
+ trace_usb_dwc2_update_irq(level);
+ qemu_set_irq(s->irq, level);
+ }
+}
+
+/* flag interrupt condition */
+static inline void dwc2_raise_global_irq(DWC2State *s, uint32_t intr)
+{
+ if (!(s->gintsts & intr)) {
+ s->gintsts |= intr;
+ trace_usb_dwc2_raise_global_irq(intr);
+ dwc2_update_irq(s);
+ }
+}
+
+static inline void dwc2_lower_global_irq(DWC2State *s, uint32_t intr)
+{
+ if (s->gintsts & intr) {
+ s->gintsts &= ~intr;
+ trace_usb_dwc2_lower_global_irq(intr);
+ dwc2_update_irq(s);
+ }
+}
+
+static inline void dwc2_raise_host_irq(DWC2State *s, uint32_t host_intr)
+{
+ if (!(s->haint & host_intr)) {
+ s->haint |= host_intr;
+ s->haint &= 0xffff;
+ trace_usb_dwc2_raise_host_irq(host_intr);
+ if (s->haint & s->haintmsk) {
+ dwc2_raise_global_irq(s, GINTSTS_HCHINT);
+ }
+ }
+}
+
+static inline void dwc2_lower_host_irq(DWC2State *s, uint32_t host_intr)
+{
+ if (s->haint & host_intr) {
+ s->haint &= ~host_intr;
+ trace_usb_dwc2_lower_host_irq(host_intr);
+ if (!(s->haint & s->haintmsk)) {
+ dwc2_lower_global_irq(s, GINTSTS_HCHINT);
+ }
+ }
+}
+
+static inline void dwc2_update_hc_irq(DWC2State *s, int index)
+{
+ uint32_t host_intr = 1 << (index >> 3);
+
+ if (s->hreg1[index + 2] & s->hreg1[index + 3]) {
+ dwc2_raise_host_irq(s, host_intr);
+ } else {
+ dwc2_lower_host_irq(s, host_intr);
+ }
+}
+
+/* set a timer for EOF */
+static void dwc2_eof_timer(DWC2State *s)
+{
+ timer_mod(s->eof_timer, s->sof_time + s->usb_frame_time);
+}
+
+/* Set a timer for EOF and generate SOF event */
+static void dwc2_sof(DWC2State *s)
+{
+ s->sof_time += s->usb_frame_time;
+ trace_usb_dwc2_sof(s->sof_time);
+ dwc2_eof_timer(s);
+ dwc2_raise_global_irq(s, GINTSTS_SOF);
+}
+
+/* Do frame processing on frame boundary */
+static void dwc2_frame_boundary(void *opaque)
+{
+ DWC2State *s = opaque;
+ int64_t now;
+ uint16_t frcnt;
+
+ now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+
+ /* Frame boundary, so do EOF stuff here */
+
+ /* Increment frame number */
+ frcnt = (uint16_t)((now - s->sof_time) / s->fi);
+ s->frame_number = (s->frame_number + frcnt) & 0xffff;
+ s->hfnum = s->frame_number & HFNUM_MAX_FRNUM;
+
+ /* Do SOF stuff here */
+ dwc2_sof(s);
+}
+
+/* Start sending SOF tokens on the USB bus */
+static void dwc2_bus_start(DWC2State *s)
+{
+ trace_usb_dwc2_bus_start();
+ s->sof_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+ dwc2_eof_timer(s);
+}
+
+/* Stop sending SOF tokens on the USB bus */
+static void dwc2_bus_stop(DWC2State *s)
+{
+ trace_usb_dwc2_bus_stop();
+ timer_del(s->eof_timer);
+}
+
+static USBDevice *dwc2_find_device(DWC2State *s, uint8_t addr)
+{
+ USBDevice *dev;
+
+ trace_usb_dwc2_find_device(addr);
+
+ if (!(s->hprt0 & HPRT0_ENA)) {
+ trace_usb_dwc2_port_disabled(0);
+ } else {
+ dev = usb_find_device(&s->uport, addr);
+ if (dev != NULL) {
+ trace_usb_dwc2_device_found(0);
+ return dev;
+ }
+ }
+
+ trace_usb_dwc2_device_not_found();
+ return NULL;
+}
+
+static const char *pstatus[] = {
248
+ "USB_RET_SUCCESS", "USB_RET_NODEV", "USB_RET_NAK", "USB_RET_STALL",
249
+ "USB_RET_BABBLE", "USB_RET_IOERROR", "USB_RET_ASYNC",
250
+ "USB_RET_ADD_TO_QUEUE", "USB_RET_REMOVE_FROM_QUEUE"
251
+};
252
+
253
+static uint32_t pintr[] = {
254
+ HCINTMSK_XFERCOMPL, HCINTMSK_XACTERR, HCINTMSK_NAK, HCINTMSK_STALL,
255
+ HCINTMSK_BBLERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR, HCINTMSK_XACTERR,
256
+ HCINTMSK_XACTERR
257
+};
258
+
259
+static const char *types[] = {
260
+ "Ctrl", "Isoc", "Bulk", "Intr"
261
+};
262
+
263
+static const char *dirs[] = {
264
+ "Out", "In"
265
+};
266
+
267
+static void dwc2_handle_packet(DWC2State *s, uint32_t devadr, USBDevice *dev,
268
+ USBEndpoint *ep, uint32_t index, bool send)
269
+{
270
+ DWC2Packet *p;
271
+ uint32_t hcchar = s->hreg1[index];
272
+ uint32_t hctsiz = s->hreg1[index + 4];
273
+ uint32_t hcdma = s->hreg1[index + 5];
274
+ uint32_t chan, epnum, epdir, eptype, mps, pid, pcnt, len, tlen, intr = 0;
275
+ uint32_t tpcnt, stsidx, actual = 0;
276
+ bool do_intr = false, done = false;
277
+
278
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
279
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
280
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
281
+ mps = get_field(hcchar, HCCHAR_MPS);
282
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
283
+ pcnt = get_field(hctsiz, TSIZ_PKTCNT);
284
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
285
+ assert(len <= DWC2_MAX_XFER_SIZE);
286
+ chan = index >> 3;
287
+ p = &s->packet[chan];
288
+
289
+ trace_usb_dwc2_handle_packet(chan, dev, &p->packet, epnum, types[eptype],
290
+ dirs[epdir], mps, len, pcnt);
291
+
292
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
293
+ pid = USB_TOKEN_SETUP;
294
+ } else {
295
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
296
+ }
297
+
298
+ if (send) {
299
+ tlen = len;
300
+ if (p->small) {
301
+ if (tlen > mps) {
302
+ tlen = mps;
303
+ }
304
+ }
305
+
306
+ if (pid != USB_TOKEN_IN) {
307
+ trace_usb_dwc2_memory_read(hcdma, tlen);
308
+ if (dma_memory_read(&s->dma_as, hcdma,
309
+ s->usb_buf[chan], tlen) != MEMTX_OK) {
310
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_read failed\n",
311
+ __func__);
312
+ }
313
+ }
314
+
315
+ usb_packet_init(&p->packet);
316
+ usb_packet_setup(&p->packet, pid, ep, 0, hcdma,
317
+ pid != USB_TOKEN_IN, true);
318
+ usb_packet_addbuf(&p->packet, s->usb_buf[chan], tlen);
319
+ p->async = DWC2_ASYNC_NONE;
320
+ usb_handle_packet(dev, &p->packet);
321
+ } else {
322
+ tlen = p->len;
323
+ }
324
+
325
+ stsidx = -p->packet.status;
326
+ assert(stsidx < sizeof(pstatus) / sizeof(*pstatus));
327
+ actual = p->packet.actual_length;
328
+ trace_usb_dwc2_packet_status(pstatus[stsidx], actual);
329
+
330
+babble:
331
+ if (p->packet.status != USB_RET_SUCCESS &&
332
+ p->packet.status != USB_RET_NAK &&
333
+ p->packet.status != USB_RET_STALL &&
334
+ p->packet.status != USB_RET_ASYNC) {
335
+ trace_usb_dwc2_packet_error(pstatus[stsidx]);
336
+ }
337
+
338
+ if (p->packet.status == USB_RET_ASYNC) {
339
+ trace_usb_dwc2_async_packet(&p->packet, chan, dev, epnum,
340
+ dirs[epdir], tlen);
341
+ usb_device_flush_ep_queue(dev, ep);
342
+ assert(p->async != DWC2_ASYNC_INFLIGHT);
343
+ p->devadr = devadr;
344
+ p->epnum = epnum;
345
+ p->epdir = epdir;
346
+ p->mps = mps;
347
+ p->pid = pid;
348
+ p->index = index;
349
+ p->pcnt = pcnt;
350
+ p->len = tlen;
351
+ p->async = DWC2_ASYNC_INFLIGHT;
352
+ p->needs_service = false;
353
+ return;
354
+ }
355
+
356
+ if (p->packet.status == USB_RET_SUCCESS) {
357
+ if (actual > tlen) {
358
+ p->packet.status = USB_RET_BABBLE;
359
+ goto babble;
360
+ }
361
+
362
+ if (pid == USB_TOKEN_IN) {
363
+ trace_usb_dwc2_memory_write(hcdma, actual);
364
+ if (dma_memory_write(&s->dma_as, hcdma, s->usb_buf[chan],
365
+ actual) != MEMTX_OK) {
366
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: dma_memory_write failed\n",
367
+ __func__);
368
+ }
369
+ }
370
+
371
+ tpcnt = actual / mps;
372
+ if (actual % mps) {
373
+ tpcnt++;
374
+ if (pid == USB_TOKEN_IN) {
375
+ done = true;
376
+ }
377
+ }
378
+
379
+ pcnt -= tpcnt < pcnt ? tpcnt : pcnt;
380
+ set_field(&hctsiz, pcnt, TSIZ_PKTCNT);
381
+ len -= actual < len ? actual : len;
382
+ set_field(&hctsiz, len, TSIZ_XFERSIZE);
383
+ s->hreg1[index + 4] = hctsiz;
384
+ hcdma += actual;
385
+ s->hreg1[index + 5] = hcdma;
386
+
387
+ if (!pcnt || len == 0 || actual == 0) {
388
+ done = true;
389
+ }
390
+ } else {
391
+ intr |= pintr[stsidx];
392
+ if (p->packet.status == USB_RET_NAK &&
393
+ (eptype == USB_ENDPOINT_XFER_CONTROL ||
394
+ eptype == USB_ENDPOINT_XFER_BULK)) {
395
+ /*
396
+ * for ctrl/bulk, automatically retry on NAK,
397
+ * but send the interrupt anyway
398
+ */
399
+ intr &= ~HCINTMSK_RESERVED14_31;
400
+ s->hreg1[index + 2] |= intr;
401
+ do_intr = true;
402
+ } else {
403
+ intr |= HCINTMSK_CHHLTD;
404
+ done = true;
405
+ }
406
+ }
407
+
408
+ usb_packet_cleanup(&p->packet);
409
+
410
+ if (done) {
411
+ hcchar &= ~HCCHAR_CHENA;
412
+ s->hreg1[index] = hcchar;
413
+ if (!(intr & HCINTMSK_CHHLTD)) {
414
+ intr |= HCINTMSK_CHHLTD | HCINTMSK_XFERCOMPL;
415
+ }
416
+ intr &= ~HCINTMSK_RESERVED14_31;
417
+ s->hreg1[index + 2] |= intr;
418
+ p->needs_service = false;
419
+ trace_usb_dwc2_packet_done(pstatus[stsidx], actual, len, pcnt);
420
+ dwc2_update_hc_irq(s, index);
421
+ return;
422
+ }
423
+
424
+ p->devadr = devadr;
425
+ p->epnum = epnum;
426
+ p->epdir = epdir;
427
+ p->mps = mps;
428
+ p->pid = pid;
429
+ p->index = index;
430
+ p->pcnt = pcnt;
431
+ p->len = len;
432
+ p->needs_service = true;
433
+ trace_usb_dwc2_packet_next(pstatus[stsidx], len, pcnt);
434
+ if (do_intr) {
435
+ dwc2_update_hc_irq(s, index);
436
+ }
437
+}
438
+
439
+/* Attach or detach a device on root hub */
440
+
441
+static const char *speeds[] = {
442
+ "low", "full", "high"
443
+};
444
+
445
+static void dwc2_attach(USBPort *port)
446
+{
447
+ DWC2State *s = port->opaque;
448
+ int hispd = 0;
449
+
450
+ trace_usb_dwc2_attach(port);
451
+ assert(port->index == 0);
452
+
453
+ if (!port->dev || !port->dev->attached) {
454
+ return;
455
+ }
456
+
457
+ assert(port->dev->speed <= USB_SPEED_HIGH);
458
+ trace_usb_dwc2_attach_speed(speeds[port->dev->speed]);
459
+ s->hprt0 &= ~HPRT0_SPD_MASK;
460
+
461
+ switch (port->dev->speed) {
462
+ case USB_SPEED_LOW:
463
+ s->hprt0 |= HPRT0_SPD_LOW_SPEED << HPRT0_SPD_SHIFT;
464
+ break;
465
+ case USB_SPEED_FULL:
466
+ s->hprt0 |= HPRT0_SPD_FULL_SPEED << HPRT0_SPD_SHIFT;
467
+ break;
468
+ case USB_SPEED_HIGH:
469
+ s->hprt0 |= HPRT0_SPD_HIGH_SPEED << HPRT0_SPD_SHIFT;
470
+ hispd = 1;
471
+ break;
472
+ }
473
+
474
+ if (hispd) {
475
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 8000; /* 125000 */
476
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_HS) {
477
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_HS; /* 10.4 */
478
+ } else {
479
+ s->usb_bit_time = 1;
480
+ }
481
+ } else {
482
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
483
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
484
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
485
+ } else {
486
+ s->usb_bit_time = 1;
487
+ }
488
+ }
489
+
490
+ s->fi = USB_FRMINTVL - 1;
491
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_CONNSTS;
492
+
493
+ dwc2_bus_start(s);
494
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
495
+}
496
+
497
+static void dwc2_detach(USBPort *port)
498
+{
499
+ DWC2State *s = port->opaque;
500
+
501
+ trace_usb_dwc2_detach(port);
502
+ assert(port->index == 0);
503
+
504
+ dwc2_bus_stop(s);
505
+
506
+ s->hprt0 &= ~(HPRT0_SPD_MASK | HPRT0_SUSP | HPRT0_ENA | HPRT0_CONNSTS);
507
+ s->hprt0 |= HPRT0_CONNDET | HPRT0_ENACHG;
508
+
509
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
510
+}
511
+
512
+static void dwc2_child_detach(USBPort *port, USBDevice *child)
513
+{
514
+ trace_usb_dwc2_child_detach(port, child);
515
+ assert(port->index == 0);
516
+}
517
+
518
+static void dwc2_wakeup(USBPort *port)
519
+{
520
+ DWC2State *s = port->opaque;
521
+
522
+ trace_usb_dwc2_wakeup(port);
523
+ assert(port->index == 0);
524
+
525
+ if (s->hprt0 & HPRT0_SUSP) {
526
+ s->hprt0 |= HPRT0_RES;
527
+ dwc2_raise_global_irq(s, GINTSTS_PRTINT);
528
+ }
529
+
530
+ qemu_bh_schedule(s->async_bh);
531
+}
532
+
533
+static void dwc2_async_packet_complete(USBPort *port, USBPacket *packet)
534
+{
535
+ DWC2State *s = port->opaque;
536
+ DWC2Packet *p;
537
+ USBDevice *dev;
538
+ USBEndpoint *ep;
539
+
540
+ assert(port->index == 0);
541
+ p = container_of(packet, DWC2Packet, packet);
542
+ dev = dwc2_find_device(s, p->devadr);
543
+ ep = usb_ep_get(dev, p->pid, p->epnum);
544
+ trace_usb_dwc2_async_packet_complete(port, packet, p->index >> 3, dev,
545
+ p->epnum, dirs[p->epdir], p->len);
546
+ assert(p->async == DWC2_ASYNC_INFLIGHT);
547
+
548
+ if (packet->status == USB_RET_REMOVE_FROM_QUEUE) {
549
+ usb_cancel_packet(packet);
550
+ usb_packet_cleanup(packet);
551
+ return;
552
+ }
553
+
554
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, false);
555
+
556
+ p->async = DWC2_ASYNC_FINISHED;
557
+ qemu_bh_schedule(s->async_bh);
558
+}
559
+
560
+static USBPortOps dwc2_port_ops = {
561
+ .attach = dwc2_attach,
562
+ .detach = dwc2_detach,
563
+ .child_detach = dwc2_child_detach,
564
+ .wakeup = dwc2_wakeup,
565
+ .complete = dwc2_async_packet_complete,
566
+};
567
+
568
+static uint32_t dwc2_get_frame_remaining(DWC2State *s)
569
+{
570
+ uint32_t fr = 0;
571
+ int64_t tks;
572
+
573
+ tks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) - s->sof_time;
574
+ if (tks < 0) {
575
+ tks = 0;
576
+ }
577
+
578
+ /* avoid muldiv if possible */
579
+ if (tks >= s->usb_frame_time) {
580
+ goto out;
581
+ }
582
+ if (tks < s->usb_bit_time) {
583
+ fr = s->fi;
584
+ goto out;
585
+ }
586
+
587
+ /* tks = number of ns since SOF, divided by 83 (fs) or 10 (hs) */
588
+ tks = tks / s->usb_bit_time;
589
+ if (tks >= (int64_t)s->fi) {
590
+ goto out;
591
+ }
592
+
593
+ /* remaining = frame interval minus tks */
594
+ fr = (uint32_t)((int64_t)s->fi - tks);
595
+
596
+out:
597
+ return fr;
598
+}
599
+
600
+static void dwc2_work_bh(void *opaque)
601
+{
602
+ DWC2State *s = opaque;
603
+ DWC2Packet *p;
604
+ USBDevice *dev;
605
+ USBEndpoint *ep;
606
+ int64_t t_now, expire_time;
607
+ int chan;
608
+ bool found = false;
609
+
610
+ trace_usb_dwc2_work_bh();
611
+ if (s->working) {
612
+ return;
613
+ }
614
+ s->working = true;
615
+
616
+ t_now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
617
+ chan = s->next_chan;
618
+
619
+ do {
620
+ p = &s->packet[chan];
621
+ if (p->needs_service) {
622
+ dev = dwc2_find_device(s, p->devadr);
623
+ ep = usb_ep_get(dev, p->pid, p->epnum);
624
+ trace_usb_dwc2_work_bh_service(s->next_chan, chan, dev, p->epnum);
625
+ dwc2_handle_packet(s, p->devadr, dev, ep, p->index, true);
626
+ found = true;
627
+ }
628
+ if (++chan == DWC2_NB_CHAN) {
629
+ chan = 0;
630
+ }
631
+ if (found) {
632
+ s->next_chan = chan;
633
+ trace_usb_dwc2_work_bh_next(chan);
634
+ }
635
+ } while (chan != s->next_chan);
636
+
637
+ if (found) {
638
+ expire_time = t_now + NANOSECONDS_PER_SECOND / 4000;
639
+ timer_mod(s->frame_timer, expire_time);
640
+ }
641
+ s->working = false;
642
+}
643
+
644
+static void dwc2_enable_chan(DWC2State *s, uint32_t index)
645
+{
646
+ USBDevice *dev;
647
+ USBEndpoint *ep;
648
+ uint32_t hcchar;
649
+ uint32_t hctsiz;
650
+ uint32_t devadr, epnum, epdir, eptype, pid, len;
651
+ DWC2Packet *p;
652
+
653
+ assert((index >> 3) < DWC2_NB_CHAN);
654
+ p = &s->packet[index >> 3];
655
+ hcchar = s->hreg1[index];
656
+ hctsiz = s->hreg1[index + 4];
657
+ devadr = get_field(hcchar, HCCHAR_DEVADDR);
658
+ epnum = get_field(hcchar, HCCHAR_EPNUM);
659
+ epdir = get_bit(hcchar, HCCHAR_EPDIR);
660
+ eptype = get_field(hcchar, HCCHAR_EPTYPE);
661
+ pid = get_field(hctsiz, TSIZ_SC_MC_PID);
662
+ len = get_field(hctsiz, TSIZ_XFERSIZE);
663
+
664
+ dev = dwc2_find_device(s, devadr);
665
+
666
+ trace_usb_dwc2_enable_chan(index >> 3, dev, &p->packet, epnum);
667
+ if (dev == NULL) {
668
+ return;
669
+ }
670
+
671
+ if (eptype == USB_ENDPOINT_XFER_CONTROL && pid == TSIZ_SC_MC_PID_SETUP) {
672
+ pid = USB_TOKEN_SETUP;
673
+ } else {
674
+ pid = epdir ? USB_TOKEN_IN : USB_TOKEN_OUT;
675
+ }
676
+
677
+ ep = usb_ep_get(dev, pid, epnum);
+
+    /*
+     * Hack: Networking doesn't like us delivering large transfers, it kind
+     * of works but the latency is horrible. So if the transfer is <= the mtu
+     * size, we take that as a hint that this might be a network transfer,
+     * and do the transfer packet-by-packet.
+     */
+    if (len > 1536) {
+        p->small = false;
+    } else {
+        p->small = true;
+    }
+
+    dwc2_handle_packet(s, devadr, dev, ep, index, true);
+    qemu_bh_schedule(s->async_bh);
+}
+
+static const char *glbregnm[] = {
+    "GOTGCTL ", "GOTGINT ", "GAHBCFG ", "GUSBCFG ", "GRSTCTL ",
+    "GINTSTS ", "GINTMSK ", "GRXSTSR ", "GRXSTSP ", "GRXFSIZ ",
+    "GNPTXFSIZ", "GNPTXSTS ", "GI2CCTL ", "GPVNDCTL ", "GGPIO ",
+    "GUID ", "GSNPSID ", "GHWCFG1 ", "GHWCFG2 ", "GHWCFG3 ",
+    "GHWCFG4 ", "GLPMCFG ", "GPWRDN ", "GDFIFOCFG", "GADPCTL ",
+    "GREFCLK ", "GINTMSK2 ", "GINTSTS2 "
+};
+
+static uint64_t dwc2_glbreg_read(void *ptr, hwaddr addr, int index,
+                                 unsigned size)
+{
+    DWC2State *s = ptr;
+    uint32_t val;
+
+    assert(addr <= GINTSTS2);
+    val = s->glbreg[index];
+
+    switch (addr) {
+    case GRSTCTL:
+        /* clear any self-clearing bits that were set */
+        val &= ~(GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH | GRSTCTL_IN_TKNQ_FLSH |
+                 GRSTCTL_FRMCNTRRST | GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
+        s->glbreg[index] = val;
+        break;
+    default:
+        break;
+    }
+
+    trace_usb_dwc2_glbreg_read(addr, glbregnm[index], val);
+    return val;
+}
+
+static void dwc2_glbreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
+                              unsigned size)
+{
+    DWC2State *s = ptr;
+    uint64_t orig = val;
+    uint32_t *mmio;
+    uint32_t old;
+    int iflg = 0;
+
+    assert(addr <= GINTSTS2);
+    mmio = &s->glbreg[index];
+    old = *mmio;
+
+    switch (addr) {
+    case GOTGCTL:
+        /* don't allow setting of read-only bits */
+        val &= ~(GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
+                 GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
+                 GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
+        /* don't allow clearing of read-only bits */
+        val |= old & (GOTGCTL_MULT_VALID_BC_MASK | GOTGCTL_BSESVLD |
+                      GOTGCTL_ASESVLD | GOTGCTL_DBNC_SHORT | GOTGCTL_CONID_B |
+                      GOTGCTL_HSTNEGSCS | GOTGCTL_SESREQSCS);
+        break;
+    case GAHBCFG:
+        if ((val & GAHBCFG_GLBL_INTR_EN) && !(old & GAHBCFG_GLBL_INTR_EN)) {
+            iflg = 1;
+        }
+        break;
+    case GRSTCTL:
+        val |= GRSTCTL_AHBIDLE;
+        val &= ~GRSTCTL_DMAREQ;
+        if (!(old & GRSTCTL_TXFFLSH) && (val & GRSTCTL_TXFFLSH)) {
+            /* TODO - TX fifo flush */
+            qemu_log_mask(LOG_UNIMP, "Tx FIFO flush not implemented\n");
+        }
+        if (!(old & GRSTCTL_RXFFLSH) && (val & GRSTCTL_RXFFLSH)) {
+            /* TODO - RX fifo flush */
+            qemu_log_mask(LOG_UNIMP, "Rx FIFO flush not implemented\n");
+        }
+        if (!(old & GRSTCTL_IN_TKNQ_FLSH) && (val & GRSTCTL_IN_TKNQ_FLSH)) {
+            /* TODO - device IN token queue flush */
+            qemu_log_mask(LOG_UNIMP, "Token queue flush not implemented\n");
+        }
+        if (!(old & GRSTCTL_FRMCNTRRST) && (val & GRSTCTL_FRMCNTRRST)) {
+            /* TODO - host frame counter reset */
+            qemu_log_mask(LOG_UNIMP, "Frame counter reset not implemented\n");
+        }
+        if (!(old & GRSTCTL_HSFTRST) && (val & GRSTCTL_HSFTRST)) {
+            /* TODO - host soft reset */
+            qemu_log_mask(LOG_UNIMP, "Host soft reset not implemented\n");
+        }
+        if (!(old & GRSTCTL_CSFTRST) && (val & GRSTCTL_CSFTRST)) {
+            /* TODO - core soft reset */
+            qemu_log_mask(LOG_UNIMP, "Core soft reset not implemented\n");
+        }
+        /* don't allow clearing of self-clearing bits */
+        val |= old & (GRSTCTL_TXFFLSH | GRSTCTL_RXFFLSH |
+                      GRSTCTL_IN_TKNQ_FLSH | GRSTCTL_FRMCNTRRST |
+                      GRSTCTL_HSFTRST | GRSTCTL_CSFTRST);
+        break;
+    case GINTSTS:
+        /* clear the write-1-to-clear bits */
+        val |= ~old;
+        val = ~val;
+        /* don't allow clearing of read-only bits */
+        val |= old & (GINTSTS_PTXFEMP | GINTSTS_HCHINT | GINTSTS_PRTINT |
+                      GINTSTS_OEPINT | GINTSTS_IEPINT | GINTSTS_GOUTNAKEFF |
+                      GINTSTS_GINNAKEFF | GINTSTS_NPTXFEMP | GINTSTS_RXFLVL |
+                      GINTSTS_OTGINT | GINTSTS_CURMODE_HOST);
+        iflg = 1;
+        break;
+    case GINTMSK:
+        iflg = 1;
+        break;
+    default:
+        break;
+    }
+
+    trace_usb_dwc2_glbreg_write(addr, glbregnm[index], orig, old, val);
+    *mmio = val;
+
+    if (iflg) {
+        dwc2_update_irq(s);
+    }
+}
+
+static uint64_t dwc2_fszreg_read(void *ptr, hwaddr addr, int index,
+                                 unsigned size)
+{
+    DWC2State *s = ptr;
+    uint32_t val;
+
+    assert(addr == HPTXFSIZ);
+    val = s->fszreg[index];
+
+    trace_usb_dwc2_fszreg_read(addr, val);
+    return val;
+}
+
+static void dwc2_fszreg_write(void *ptr, hwaddr addr, int index, uint64_t val,
+                              unsigned size)
+{
+    DWC2State *s = ptr;
+    uint64_t orig = val;
+    uint32_t *mmio;
+    uint32_t old;
+
+    assert(addr == HPTXFSIZ);
+    mmio = &s->fszreg[index];
+    old = *mmio;
+
+    trace_usb_dwc2_fszreg_write(addr, orig, old, val);
+    *mmio = val;
+}
+
+static const char *hreg0nm[] = {
+    "HCFG ", "HFIR ", "HFNUM ", "<rsvd> ", "HPTXSTS ",
+    "HAINT ", "HAINTMSK ", "HFLBADDR ", "<rsvd> ", "<rsvd> ",
+    "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ", "<rsvd> ",
+    "<rsvd> ", "HPRT0 "
+};
+
+static uint64_t dwc2_hreg0_read(void *ptr, hwaddr addr, int index,
+                                unsigned size)
+{
+    DWC2State *s = ptr;
+    uint32_t val;
+
+    assert(addr >= HCFG && addr <= HPRT0);
+    val = s->hreg0[index];
+
+    switch (addr) {
+    case HFNUM:
+        val = (dwc2_get_frame_remaining(s) << HFNUM_FRREM_SHIFT) |
+              (s->hfnum << HFNUM_FRNUM_SHIFT);
+        break;
+    default:
+        break;
+    }
+
+    trace_usb_dwc2_hreg0_read(addr, hreg0nm[index], val);
+    return val;
+}
+
+static void dwc2_hreg0_write(void *ptr, hwaddr addr, int index, uint64_t val,
+                             unsigned size)
+{
+    DWC2State *s = ptr;
+    USBDevice *dev = s->uport.dev;
+    uint64_t orig = val;
+    uint32_t *mmio;
+    uint32_t tval, told, old;
+    int prst = 0;
+    int iflg = 0;
+
+    assert(addr >= HCFG && addr <= HPRT0);
+    mmio = &s->hreg0[index];
+    old = *mmio;
+
+    switch (addr) {
+    case HFIR:
+        break;
+    case HFNUM:
+    case HPTXSTS:
+    case HAINT:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
+                      __func__);
+        return;
+    case HAINTMSK:
+        val &= 0xffff;
+        break;
+    case HPRT0:
+        /* don't allow clearing of read-only bits */
+        val |= old & (HPRT0_SPD_MASK | HPRT0_LNSTS_MASK | HPRT0_OVRCURRACT |
+                      HPRT0_CONNSTS);
+        /* don't allow clearing of self-clearing bits */
+        val |= old & (HPRT0_SUSP | HPRT0_RES);
+        /* don't allow setting of self-setting bits */
+        if (!(old & HPRT0_ENA) && (val & HPRT0_ENA)) {
+            val &= ~HPRT0_ENA;
+        }
+        /* clear the write-1-to-clear bits */
+        tval = val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
+                      HPRT0_CONNDET);
+        told = old & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
+                      HPRT0_CONNDET);
+        tval |= ~told;
+        tval = ~tval;
+        tval &= (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
+                 HPRT0_CONNDET);
+        val &= ~(HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_ENA |
+                 HPRT0_CONNDET);
+        val |= tval;
+        if (!(val & HPRT0_RST) && (old & HPRT0_RST)) {
+            if (dev && dev->attached) {
+                val |= HPRT0_ENA | HPRT0_ENACHG;
+                prst = 1;
+            }
+        }
+        if (val & (HPRT0_OVRCURRCHG | HPRT0_ENACHG | HPRT0_CONNDET)) {
+            iflg = 1;
+        } else {
+            iflg = -1;
+        }
+        break;
+    default:
+        break;
+    }
+
+    if (prst) {
+        trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old,
+                                   val & ~HPRT0_CONNDET);
+        trace_usb_dwc2_hreg0_action("call usb_port_reset");
+        usb_port_reset(&s->uport);
+        val &= ~HPRT0_CONNDET;
+    } else {
+        trace_usb_dwc2_hreg0_write(addr, hreg0nm[index], orig, old, val);
+    }
+
+    *mmio = val;
+
+    if (iflg > 0) {
+        trace_usb_dwc2_hreg0_action("enable PRTINT");
+        dwc2_raise_global_irq(s, GINTSTS_PRTINT);
+    } else if (iflg < 0) {
+        trace_usb_dwc2_hreg0_action("disable PRTINT");
+        dwc2_lower_global_irq(s, GINTSTS_PRTINT);
+    }
+}
+
+static const char *hreg1nm[] = {
+    "HCCHAR ", "HCSPLT ", "HCINT ", "HCINTMSK", "HCTSIZ ", "HCDMA ",
+    "<rsvd> ", "HCDMAB "
+};
+
+static uint64_t dwc2_hreg1_read(void *ptr, hwaddr addr, int index,
+                                unsigned size)
+{
+    DWC2State *s = ptr;
+    uint32_t val;
+
+    assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
+    val = s->hreg1[index];
+
+    trace_usb_dwc2_hreg1_read(addr, hreg1nm[index & 7], addr >> 5, val);
+    return val;
+}
+
+static void dwc2_hreg1_write(void *ptr, hwaddr addr, int index, uint64_t val,
+
+    /*
+     * ARMv7-M interrupt masking works differently than -A or -R.
+     * There is no FIQ/IRQ distinction. Instead of I and F bits
+     * masking FIQ and IRQ interrupts, an exception is taken only
+     * if it is higher priority than the current execution priority
+     * (which depends on state like BASEPRI, FAULTMASK and the
+     * currently active exception).
+     */
+    if (interrupt_request & CPU_INTERRUPT_HARD
+        && (armv7m_nvic_can_take_pending_exception(env->nvic))) {
+        cs->exception_index = EXCP_IRQ;
+        cc->do_interrupt(cs);
+        ret = true;
+    }
+    return ret;
+}
+
+static void arm926_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,arm926";
+    set_feature(&cpu->env, ARM_FEATURE_V5);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
+    cpu->midr = 0x41069265;
+    cpu->reset_fpsid = 0x41011090;
+    cpu->ctr = 0x1dd20d2;
+    cpu->reset_sctlr = 0x00090078;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+    /*
+     * Similarly, we need to set MVFR0 fields to enable vfp and short vector
+     * support even though ARMv5 doesn't have this register.
+     */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSP, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
+}
+
+static void arm946_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,arm946";
+    set_feature(&cpu->env, ARM_FEATURE_V5);
+    set_feature(&cpu->env, ARM_FEATURE_PMSA);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    cpu->midr = 0x41059461;
+    cpu->ctr = 0x0f004006;
+    cpu->reset_sctlr = 0x00000078;
+}
+
+static void arm1026_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,arm1026";
+    set_feature(&cpu->env, ARM_FEATURE_V5);
+    set_feature(&cpu->env, ARM_FEATURE_AUXCR);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
+    cpu->midr = 0x4106a262;
+    cpu->reset_fpsid = 0x410110a0;
+    cpu->ctr = 0x1dd20d2;
+    cpu->reset_sctlr = 0x00090078;
+    cpu->reset_auxcr = 1;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+    /*
+     * Similarly, we need to set MVFR0 fields to enable vfp and short vector
+     * support even though ARMv5 doesn't have this register.
+     */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSP, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
+
+    {
+        /* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
+        ARMCPRegInfo ifar = {
+            .name = "IFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1,
+            .access = PL1_RW,
+            .fieldoffset = offsetof(CPUARMState, cp15.ifar_ns),
+            .resetvalue = 0
+        };
+        define_one_arm_cp_reg(cpu, &ifar);
+    }
+}
+
+static void arm1136_r2_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    /*
+     * What qemu calls "arm1136_r2" is actually the 1136 r0p2, ie an
+     * older core than plain "arm1136". In particular this does not
+     * have the v6K features.
+     * These ID register values are correct for 1136 but may be wrong
+     * for 1136_r2 (in particular r0p2 does not actually implement most
+     * of the ID registers).
+     */
+
+    cpu->dtb_compatible = "arm,arm1136";
+    set_feature(&cpu->env, ARM_FEATURE_V6);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_DIRTY_REG);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
+    cpu->midr = 0x4107b362;
+    cpu->reset_fpsid = 0x410120b4;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
+    cpu->ctr = 0x1dd20d2;
+    cpu->reset_sctlr = 0x00050078;
+    cpu->id_pfr0 = 0x111;
+    cpu->id_pfr1 = 0x1;
+    cpu->isar.id_dfr0 = 0x2;
+    cpu->id_afr0 = 0x3;
+    cpu->isar.id_mmfr0 = 0x01130003;
+    cpu->isar.id_mmfr1 = 0x10030302;
+    cpu->isar.id_mmfr2 = 0x01222110;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
+    cpu->reset_auxcr = 7;
+}
+
+static void arm1136_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,arm1136";
+    set_feature(&cpu->env, ARM_FEATURE_V6K);
+    set_feature(&cpu->env, ARM_FEATURE_V6);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_DIRTY_REG);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
+    cpu->midr = 0x4117b363;
+    cpu->reset_fpsid = 0x410120b4;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
+    cpu->ctr = 0x1dd20d2;
+    cpu->reset_sctlr = 0x00050078;
+    cpu->id_pfr0 = 0x111;
+    cpu->id_pfr1 = 0x1;
+    cpu->isar.id_dfr0 = 0x2;
+    cpu->id_afr0 = 0x3;
+    cpu->isar.id_mmfr0 = 0x01130003;
+    cpu->isar.id_mmfr1 = 0x10030302;
+    cpu->isar.id_mmfr2 = 0x01222110;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
+    cpu->reset_auxcr = 7;
+}
+
+static void arm1176_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,arm1176";
+    set_feature(&cpu->env, ARM_FEATURE_V6K);
+    set_feature(&cpu->env, ARM_FEATURE_VAPA);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_DIRTY_REG);
+    set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    cpu->midr = 0x410fb767;
+    cpu->reset_fpsid = 0x410120b5;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
+    cpu->ctr = 0x1dd20d2;
+    cpu->reset_sctlr = 0x00050078;
+    cpu->id_pfr0 = 0x111;
+    cpu->id_pfr1 = 0x11;
+    cpu->isar.id_dfr0 = 0x33;
+    cpu->id_afr0 = 0;
+    cpu->isar.id_mmfr0 = 0x01130003;
+    cpu->isar.id_mmfr1 = 0x10030302;
+    cpu->isar.id_mmfr2 = 0x01222100;
+    cpu->isar.id_isar0 = 0x0140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231121;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x01141;
+    cpu->reset_auxcr = 7;
+}
+
+static void arm11mpcore_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,arm11mpcore";
+    set_feature(&cpu->env, ARM_FEATURE_V6K);
+    set_feature(&cpu->env, ARM_FEATURE_VAPA);
+    set_feature(&cpu->env, ARM_FEATURE_MPIDR);
+    set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
+    cpu->midr = 0x410fb022;
+    cpu->reset_fpsid = 0x410120b4;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
+    cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
+    cpu->id_pfr0 = 0x111;
+    cpu->id_pfr1 = 0x1;
+    cpu->isar.id_dfr0 = 0;
+    cpu->id_afr0 = 0x2;
+    cpu->isar.id_mmfr0 = 0x01100103;
+    cpu->isar.id_mmfr1 = 0x10020302;
+    cpu->isar.id_mmfr2 = 0x01222000;
+    cpu->isar.id_isar0 = 0x00100011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11221011;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
+    cpu->reset_auxcr = 1;
+}
+
+static void cortex_m0_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    set_feature(&cpu->env, ARM_FEATURE_V6);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+
+    cpu->midr = 0x410cc200;
+}
+
+static void cortex_m3_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    set_feature(&cpu->env, ARM_FEATURE_V7);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    cpu->midr = 0x410fc231;
+    cpu->pmsav7_dregion = 8;
+    cpu->id_pfr0 = 0x00000030;
+    cpu->id_pfr1 = 0x00000200;
+    cpu->isar.id_dfr0 = 0x00100000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+}
+
+static void cortex_m4_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    set_feature(&cpu->env, ARM_FEATURE_V7);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+    cpu->midr = 0x410fc240; /* r0p0 */
+    cpu->pmsav7_dregion = 8;
+    cpu->isar.mvfr0 = 0x10110021;
+    cpu->isar.mvfr1 = 0x11000011;
+    cpu->isar.mvfr2 = 0x00000000;
+    cpu->id_pfr0 = 0x00000030;
+    cpu->id_pfr1 = 0x00000200;
+    cpu->isar.id_dfr0 = 0x00100000;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
+}
+
+static void cortex_m7_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    set_feature(&cpu->env, ARM_FEATURE_V7);
+    set_feature(&cpu->env, ARM_FEATURE_M);
+    set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
+    set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+    cpu->midr = 0x411fc272; /* r1p2 */
+    cpu->pmsav7_dregion = 8;
+    cpu->isar.mvfr0 = 0x10110221;
978
+ unsigned size)
1018
+ cpu->isar.mvfr1 = 0x12000011;
979
+{
1019
+ cpu->isar.mvfr2 = 0x00000040;
980
+ DWC2State *s = ptr;
1020
+ cpu->id_pfr0 = 0x00000030;
981
+ uint64_t orig = val;
1021
+ cpu->id_pfr1 = 0x00000200;
982
+ uint32_t *mmio;
1022
+ cpu->isar.id_dfr0 = 0x00100000;
983
+ uint32_t old;
1023
+ cpu->id_afr0 = 0x00000000;
984
+ int iflg = 0;
1024
+ cpu->isar.id_mmfr0 = 0x00100030;
985
+ int enflg = 0;
1025
+ cpu->isar.id_mmfr1 = 0x00000000;
986
+ int disflg = 0;
1026
+ cpu->isar.id_mmfr2 = 0x01000000;
987
+
1027
+ cpu->isar.id_mmfr3 = 0x00000000;
988
+ assert(addr >= HCCHAR(0) && addr <= HCDMAB(DWC2_NB_CHAN - 1));
1028
+ cpu->isar.id_isar0 = 0x01101110;
989
+ mmio = &s->hreg1[index];
1029
+ cpu->isar.id_isar1 = 0x02112000;
990
+ old = *mmio;
1030
+ cpu->isar.id_isar2 = 0x20232231;
991
+
1031
+ cpu->isar.id_isar3 = 0x01111131;
992
+ switch (HSOTG_REG(0x500) + (addr & 0x1c)) {
1032
+ cpu->isar.id_isar4 = 0x01310132;
993
+ case HCCHAR(0):
1033
+ cpu->isar.id_isar5 = 0x00000000;
994
+ if ((val & HCCHAR_CHDIS) && !(old & HCCHAR_CHDIS)) {
1034
+ cpu->isar.id_isar6 = 0x00000000;
995
+ val &= ~(HCCHAR_CHENA | HCCHAR_CHDIS);
1035
+}
996
+ disflg = 1;
1036
+
997
+ } else {
1037
+static void cortex_m33_initfn(Object *obj)
998
+ val |= old & HCCHAR_CHDIS;
1038
+{
999
+ if ((val & HCCHAR_CHENA) && !(old & HCCHAR_CHENA)) {
1039
+ ARMCPU *cpu = ARM_CPU(obj);
1000
+ val &= ~HCCHAR_CHDIS;
1040
+
1001
+ enflg = 1;
1041
+ set_feature(&cpu->env, ARM_FEATURE_V8);
1002
+ } else {
1042
+ set_feature(&cpu->env, ARM_FEATURE_M);
1003
+ val |= old & HCCHAR_CHENA;
1043
+ set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
1004
+ }
1044
+ set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
1005
+ }
1045
+ set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
1006
+ break;
1046
+ cpu->midr = 0x410fd213; /* r0p3 */
1007
+ case HCINT(0):
1047
+ cpu->pmsav7_dregion = 16;
1008
+ /* clear the write-1-to-clear bits */
1048
+ cpu->sau_sregion = 8;
1009
+ val |= ~old;
1049
+ cpu->isar.mvfr0 = 0x10110021;
1010
+ val = ~val;
1050
+ cpu->isar.mvfr1 = 0x11000011;
1011
+ val &= ~HCINTMSK_RESERVED14_31;
1051
+ cpu->isar.mvfr2 = 0x00000040;
1012
+ iflg = 1;
1052
+ cpu->id_pfr0 = 0x00000030;
1013
+ break;
1053
+ cpu->id_pfr1 = 0x00000210;
1014
+ case HCINTMSK(0):
1054
+ cpu->isar.id_dfr0 = 0x00200000;
1015
+ val &= ~HCINTMSK_RESERVED14_31;
1055
+ cpu->id_afr0 = 0x00000000;
1016
+ iflg = 1;
1056
+ cpu->isar.id_mmfr0 = 0x00101F40;
1017
+ break;
1057
+ cpu->isar.id_mmfr1 = 0x00000000;
1018
+ case HCDMAB(0):
1058
+ cpu->isar.id_mmfr2 = 0x01000000;
1019
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to read-only register\n",
1059
+ cpu->isar.id_mmfr3 = 0x00000000;
1020
+ __func__);
1060
+ cpu->isar.id_isar0 = 0x01101110;
1021
+ return;
1061
+ cpu->isar.id_isar1 = 0x02212000;
1022
+ default:
1062
+ cpu->isar.id_isar2 = 0x20232232;
1023
+ break;
1063
+ cpu->isar.id_isar3 = 0x01111131;
1024
+ }
1064
+ cpu->isar.id_isar4 = 0x01310132;
1025
+
1065
+ cpu->isar.id_isar5 = 0x00000000;
1026
+ trace_usb_dwc2_hreg1_write(addr, hreg1nm[index & 7], index >> 3, orig,
1066
+ cpu->isar.id_isar6 = 0x00000000;
1027
+ old, val);
1067
+ cpu->clidr = 0x00000000;
1028
+ *mmio = val;
1068
+ cpu->ctr = 0x8000c000;
1029
+
1069
+}
1030
+ if (disflg) {
1070
+
1031
+ /* set ChHltd in HCINT */
1071
+static const ARMCPRegInfo cortexr5_cp_reginfo[] = {
1032
+ s->hreg1[(index & ~7) + 2] |= HCINTMSK_CHHLTD;
1072
+ /* Dummy the TCM region regs for the moment */
1033
+ iflg = 1;
1073
+ { .name = "ATCM", .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 0,
1034
+ }
1074
+ .access = PL1_RW, .type = ARM_CP_CONST },
1035
+
1075
+ { .name = "BTCM", .cp = 15, .opc1 = 0, .crn = 9, .crm = 1, .opc2 = 1,
1036
+ if (enflg) {
1076
+ .access = PL1_RW, .type = ARM_CP_CONST },
1037
+ dwc2_enable_chan(s, index & ~7);
1077
+ { .name = "DCACHE_INVAL", .cp = 15, .opc1 = 0, .crn = 15, .crm = 5,
1038
+ }
1078
+ .opc2 = 0, .access = PL1_W, .type = ARM_CP_NOP },
1039
+
1079
+ REGINFO_SENTINEL
1040
+ if (iflg) {
1080
+};
1041
+ dwc2_update_hc_irq(s, index & ~7);
1081
+
1042
+ }
1082
+static void cortex_r5_initfn(Object *obj)
1043
+}
1083
+{
1044
+
1084
+ ARMCPU *cpu = ARM_CPU(obj);
1045
+static const char *pcgregnm[] = {
1085
+
1046
+ "PCGCTL ", "PCGCCTL1 "
1086
+ set_feature(&cpu->env, ARM_FEATURE_V7);
1047
+};
1087
+ set_feature(&cpu->env, ARM_FEATURE_V7MP);
1048
+
1088
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
1049
+static uint64_t dwc2_pcgreg_read(void *ptr, hwaddr addr, int index,
1089
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
1050
+ unsigned size)
1090
+ cpu->midr = 0x411fc153; /* r1p3 */
1051
+{
1091
+ cpu->id_pfr0 = 0x0131;
1052
+ DWC2State *s = ptr;
1092
+ cpu->id_pfr1 = 0x001;
1053
+ uint32_t val;
1093
+ cpu->isar.id_dfr0 = 0x010400;
1054
+
1094
+ cpu->id_afr0 = 0x0;
1055
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1095
+ cpu->isar.id_mmfr0 = 0x0210030;
1056
+ val = s->pcgreg[index];
1096
+ cpu->isar.id_mmfr1 = 0x00000000;
1057
+
1097
+ cpu->isar.id_mmfr2 = 0x01200000;
1058
+ trace_usb_dwc2_pcgreg_read(addr, pcgregnm[index], val);
1098
+ cpu->isar.id_mmfr3 = 0x0211;
1059
+ return val;
1099
+ cpu->isar.id_isar0 = 0x02101111;
1060
+}
1100
+ cpu->isar.id_isar1 = 0x13112111;
1061
+
1101
+ cpu->isar.id_isar2 = 0x21232141;
1062
+static void dwc2_pcgreg_write(void *ptr, hwaddr addr, int index,
1102
+ cpu->isar.id_isar3 = 0x01112131;
1063
+ uint64_t val, unsigned size)
1103
+ cpu->isar.id_isar4 = 0x0010142;
1064
+{
1104
+ cpu->isar.id_isar5 = 0x0;
1065
+ DWC2State *s = ptr;
1105
+ cpu->isar.id_isar6 = 0x0;
1066
+ uint64_t orig = val;
1106
+ cpu->mp_is_up = true;
1067
+ uint32_t *mmio;
1107
+ cpu->pmsav7_dregion = 16;
1068
+ uint32_t old;
1108
+ define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
1069
+
1109
+}
1070
+ assert(addr >= PCGCTL && addr <= PCGCCTL1);
1110
+
1071
+ mmio = &s->pcgreg[index];
1111
+static void cortex_r5f_initfn(Object *obj)
1072
+ old = *mmio;
1112
+{
1073
+
1113
+ ARMCPU *cpu = ARM_CPU(obj);
1074
+ trace_usb_dwc2_pcgreg_write(addr, pcgregnm[index], orig, old, val);
1114
+
1075
+ *mmio = val;
1115
+ cortex_r5_initfn(obj);
1076
+}
1116
+ cpu->isar.mvfr0 = 0x10110221;
1077
+
1117
+ cpu->isar.mvfr1 = 0x00000011;
1078
+static uint64_t dwc2_hsotg_read(void *ptr, hwaddr addr, unsigned size)
1118
+}
1079
+{
1119
+
1080
+ uint64_t val;
1120
+static void ti925t_initfn(Object *obj)
1081
+
1121
+{
1082
+ switch (addr) {
1122
+ ARMCPU *cpu = ARM_CPU(obj);
1083
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1123
+ set_feature(&cpu->env, ARM_FEATURE_V4T);
1084
+ val = dwc2_glbreg_read(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, size);
1124
+ set_feature(&cpu->env, ARM_FEATURE_OMAPCP);
1085
+ break;
1125
+ cpu->midr = ARM_CPUID_TI925T;
1086
+ case HSOTG_REG(0x100):
1126
+ cpu->ctr = 0x5109149;
1087
+ val = dwc2_fszreg_read(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, size);
1127
+ cpu->reset_sctlr = 0x00000070;
1088
+ break;
1128
+}
1089
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1129
+
1090
+ /* Gadget-mode registers, just return 0 for now */
1130
+static void sa1100_initfn(Object *obj)
1091
+ val = 0;
1131
+{
1092
+ break;
1132
+ ARMCPU *cpu = ARM_CPU(obj);
1093
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1133
+
1094
+ val = dwc2_hreg0_read(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, size);
1134
+ cpu->dtb_compatible = "intel,sa1100";
1095
+ break;
1135
+ set_feature(&cpu->env, ARM_FEATURE_STRONGARM);
1096
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1136
+ set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
1097
+ val = dwc2_hreg1_read(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, size);
1137
+ cpu->midr = 0x4401A11B;
1098
+ break;
1138
+ cpu->reset_sctlr = 0x00000070;
1099
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1139
+}
1100
+ /* Gadget-mode registers, just return 0 for now */
1140
+
1101
+ val = 0;
1141
+static void sa1110_initfn(Object *obj)
1102
+ break;
1142
+{
1103
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1143
+ ARMCPU *cpu = ARM_CPU(obj);
1104
+ val = dwc2_pcgreg_read(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, size);
1144
+ set_feature(&cpu->env, ARM_FEATURE_STRONGARM);
1105
+ break;
1145
+ set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
1106
+ default:
1146
+ cpu->midr = 0x6901B119;
1107
+ g_assert_not_reached();
1147
+ cpu->reset_sctlr = 0x00000070;
1108
+ }
1148
+}
1109
+
1149
+
1110
+ return val;
1150
+static void pxa250_initfn(Object *obj)
1111
+}
1151
+{
1112
+
1152
+ ARMCPU *cpu = ARM_CPU(obj);
1113
+static void dwc2_hsotg_write(void *ptr, hwaddr addr, uint64_t val,
1153
+
1114
+ unsigned size)
1154
+ cpu->dtb_compatible = "marvell,xscale";
1115
+{
1155
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1116
+ switch (addr) {
1156
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1117
+ case HSOTG_REG(0x000) ... HSOTG_REG(0x0fc):
1157
+ cpu->midr = 0x69052100;
1118
+ dwc2_glbreg_write(ptr, addr, (addr - HSOTG_REG(0x000)) >> 2, val, size);
1158
+ cpu->ctr = 0xd172172;
1119
+ break;
1159
+ cpu->reset_sctlr = 0x00000078;
1120
+ case HSOTG_REG(0x100):
1160
+}
1121
+ dwc2_fszreg_write(ptr, addr, (addr - HSOTG_REG(0x100)) >> 2, val, size);
1161
+
1122
+ break;
1162
+static void pxa255_initfn(Object *obj)
1123
+ case HSOTG_REG(0x104) ... HSOTG_REG(0x3fc):
1163
+{
1124
+ /* Gadget-mode registers, do nothing for now */
1164
+ ARMCPU *cpu = ARM_CPU(obj);
1125
+ break;
1165
+
1126
+ case HSOTG_REG(0x400) ... HSOTG_REG(0x4fc):
1166
+ cpu->dtb_compatible = "marvell,xscale";
1127
+ dwc2_hreg0_write(ptr, addr, (addr - HSOTG_REG(0x400)) >> 2, val, size);
1167
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1128
+ break;
1168
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1129
+ case HSOTG_REG(0x500) ... HSOTG_REG(0x7fc):
1169
+ cpu->midr = 0x69052d00;
1130
+ dwc2_hreg1_write(ptr, addr, (addr - HSOTG_REG(0x500)) >> 2, val, size);
1170
+ cpu->ctr = 0xd172172;
1131
+ break;
1171
+ cpu->reset_sctlr = 0x00000078;
1132
+ case HSOTG_REG(0x800) ... HSOTG_REG(0xdfc):
1172
+}
1133
+ /* Gadget-mode registers, do nothing for now */
1173
+
1134
+ break;
1174
+static void pxa260_initfn(Object *obj)
1135
+ case HSOTG_REG(0xe00) ... HSOTG_REG(0xffc):
1175
+{
1136
+ dwc2_pcgreg_write(ptr, addr, (addr - HSOTG_REG(0xe00)) >> 2, val, size);
1176
+ ARMCPU *cpu = ARM_CPU(obj);
1137
+ break;
1177
+
1138
+ default:
1178
+ cpu->dtb_compatible = "marvell,xscale";
1139
+ g_assert_not_reached();
1179
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1140
+ }
1180
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1141
+}
1181
+ cpu->midr = 0x69052903;
1142
+
1182
+ cpu->ctr = 0xd172172;
1143
+static const MemoryRegionOps dwc2_mmio_hsotg_ops = {
1183
+ cpu->reset_sctlr = 0x00000078;
1144
+ .read = dwc2_hsotg_read,
1184
+}
1145
+ .write = dwc2_hsotg_write,
1185
+
1146
+ .impl.min_access_size = 4,
1186
+static void pxa261_initfn(Object *obj)
1147
+ .impl.max_access_size = 4,
1187
+{
1148
+ .endianness = DEVICE_LITTLE_ENDIAN,
1188
+ ARMCPU *cpu = ARM_CPU(obj);
1149
+};
1189
+
1150
+
1190
+ cpu->dtb_compatible = "marvell,xscale";
1151
+static uint64_t dwc2_hreg2_read(void *ptr, hwaddr addr, unsigned size)
1191
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1152
+{
1192
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1153
+ /* TODO - implement FIFOs to support slave mode */
1193
+ cpu->midr = 0x69052d05;
1154
+ trace_usb_dwc2_hreg2_read(addr, addr >> 12, 0);
1194
+ cpu->ctr = 0xd172172;
1155
+ qemu_log_mask(LOG_UNIMP, "FIFO read not implemented\n");
1195
+ cpu->reset_sctlr = 0x00000078;
1156
+ return 0;
1196
+}
1157
+}
1197
+
1158
+
1198
+static void pxa262_initfn(Object *obj)
1159
+static void dwc2_hreg2_write(void *ptr, hwaddr addr, uint64_t val,
1199
+{
1160
+ unsigned size)
1200
+ ARMCPU *cpu = ARM_CPU(obj);
1161
+{
1201
+
1162
+ uint64_t orig = val;
1202
+ cpu->dtb_compatible = "marvell,xscale";
1163
+
1203
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1164
+ /* TODO - implement FIFOs to support slave mode */
1204
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1165
+ trace_usb_dwc2_hreg2_write(addr, addr >> 12, orig, 0, val);
1205
+ cpu->midr = 0x69052d06;
1166
+ qemu_log_mask(LOG_UNIMP, "FIFO write not implemented\n");
1206
+ cpu->ctr = 0xd172172;
1167
+}
1207
+ cpu->reset_sctlr = 0x00000078;
1168
+
1208
+}
1169
+static const MemoryRegionOps dwc2_mmio_hreg2_ops = {
1209
+
1170
+ .read = dwc2_hreg2_read,
1210
+static void pxa270a0_initfn(Object *obj)
1171
+ .write = dwc2_hreg2_write,
1211
+{
1172
+ .impl.min_access_size = 4,
1212
+ ARMCPU *cpu = ARM_CPU(obj);
1173
+ .impl.max_access_size = 4,
1213
+
1174
+ .endianness = DEVICE_LITTLE_ENDIAN,
1214
+ cpu->dtb_compatible = "marvell,xscale";
1175
+};
1215
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1176
+
1216
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1177
+static void dwc2_wakeup_endpoint(USBBus *bus, USBEndpoint *ep,
1217
+ set_feature(&cpu->env, ARM_FEATURE_IWMMXT);
1178
+ unsigned int stream)
1218
+ cpu->midr = 0x69054110;
1179
+{
1219
+ cpu->ctr = 0xd172172;
1180
+ DWC2State *s = container_of(bus, DWC2State, bus);
1220
+ cpu->reset_sctlr = 0x00000078;
1181
+
1221
+}
1182
+ trace_usb_dwc2_wakeup_endpoint(ep, stream);
1222
+
1183
+
1223
+static void pxa270a1_initfn(Object *obj)
1184
+ /* TODO - do something here? */
1224
+{
1185
+ qemu_bh_schedule(s->async_bh);
1225
+ ARMCPU *cpu = ARM_CPU(obj);
1186
+}
1226
+
1187
+
1227
+ cpu->dtb_compatible = "marvell,xscale";
1188
+static USBBusOps dwc2_bus_ops = {
1228
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1189
+ .wakeup_endpoint = dwc2_wakeup_endpoint,
1229
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1190
+};
1230
+ set_feature(&cpu->env, ARM_FEATURE_IWMMXT);
1191
+
1231
+ cpu->midr = 0x69054111;
1192
+static void dwc2_work_timer(void *opaque)
1232
+ cpu->ctr = 0xd172172;
1193
+{
1233
+ cpu->reset_sctlr = 0x00000078;
1194
+ DWC2State *s = opaque;
1234
+}
1195
+
1235
+
1196
+ trace_usb_dwc2_work_timer();
1236
+static void pxa270b0_initfn(Object *obj)
1197
+ qemu_bh_schedule(s->async_bh);
1237
+{
1198
+}
1238
+ ARMCPU *cpu = ARM_CPU(obj);
1199
+
1239
+
1200
+static void dwc2_reset_enter(Object *obj, ResetType type)
1240
+ cpu->dtb_compatible = "marvell,xscale";
1201
+{
1241
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1202
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1242
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1203
+ DWC2State *s = DWC2_USB(obj);
1243
+ set_feature(&cpu->env, ARM_FEATURE_IWMMXT);
1204
+ int i;
1244
+ cpu->midr = 0x69054112;
1205
+
1245
+ cpu->ctr = 0xd172172;
1206
+ trace_usb_dwc2_reset_enter();
1246
+ cpu->reset_sctlr = 0x00000078;
1207
+
1247
+}
1208
+ if (c->parent_phases.enter) {
1248
+
1209
+ c->parent_phases.enter(obj, type);
1249
+static void pxa270b1_initfn(Object *obj)
1210
+ }
1250
+{
1211
+
1251
+ ARMCPU *cpu = ARM_CPU(obj);
1212
+ timer_del(s->frame_timer);
1252
+
1213
+ qemu_bh_cancel(s->async_bh);
1253
+ cpu->dtb_compatible = "marvell,xscale";
1214
+
1254
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1215
+ if (s->uport.dev && s->uport.dev->attached) {
1255
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1216
+ usb_detach(&s->uport);
1256
+ set_feature(&cpu->env, ARM_FEATURE_IWMMXT);
1217
+ }
1257
+ cpu->midr = 0x69054113;
1218
+
1258
+ cpu->ctr = 0xd172172;
1219
+ dwc2_bus_stop(s);
1259
+ cpu->reset_sctlr = 0x00000078;
1220
+
1260
+}
1221
+ s->gotgctl = GOTGCTL_BSESVLD | GOTGCTL_ASESVLD | GOTGCTL_CONID_B;
1261
+
1222
+ s->gotgint = 0;
1262
+static void pxa270c0_initfn(Object *obj)
1223
+ s->gahbcfg = 0;
1263
+{
1224
+ s->gusbcfg = 5 << GUSBCFG_USBTRDTIM_SHIFT;
1264
+ ARMCPU *cpu = ARM_CPU(obj);
1225
+ s->grstctl = GRSTCTL_AHBIDLE;
1265
+
1226
+ s->gintsts = GINTSTS_CONIDSTSCHNG | GINTSTS_PTXFEMP | GINTSTS_NPTXFEMP |
1266
+ cpu->dtb_compatible = "marvell,xscale";
1227
+ GINTSTS_CURMODE_HOST;
1267
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1228
+ s->gintmsk = 0;
1268
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1229
+ s->grxstsr = 0;
1269
+ set_feature(&cpu->env, ARM_FEATURE_IWMMXT);
1230
+ s->grxstsp = 0;
1270
+ cpu->midr = 0x69054114;
1231
+ s->grxfsiz = 1024;
1271
+ cpu->ctr = 0xd172172;
1232
+ s->gnptxfsiz = 1024 << FIFOSIZE_DEPTH_SHIFT;
1272
+ cpu->reset_sctlr = 0x00000078;
1233
+ s->gnptxsts = (4 << FIFOSIZE_DEPTH_SHIFT) | 1024;
1273
+}
1234
+ s->gi2cctl = GI2CCTL_I2CDATSE0 | GI2CCTL_ACK;
1274
+
1235
+ s->gpvndctl = 0;
1275
+static void pxa270c5_initfn(Object *obj)
1236
+ s->ggpio = 0;
1276
+{
1237
+ s->guid = 0;
1277
+ ARMCPU *cpu = ARM_CPU(obj);
1238
+ s->gsnpsid = 0x4f54294a;
1278
+
1239
+ s->ghwcfg1 = 0;
1279
+ cpu->dtb_compatible = "marvell,xscale";
1240
+ s->ghwcfg2 = (8 << GHWCFG2_DEV_TOKEN_Q_DEPTH_SHIFT) |
1280
+ set_feature(&cpu->env, ARM_FEATURE_V5);
1241
+ (4 << GHWCFG2_HOST_PERIO_TX_Q_DEPTH_SHIFT) |
1281
+ set_feature(&cpu->env, ARM_FEATURE_XSCALE);
1242
+ (4 << GHWCFG2_NONPERIO_TX_Q_DEPTH_SHIFT) |
1282
+ set_feature(&cpu->env, ARM_FEATURE_IWMMXT);
1243
+ GHWCFG2_DYNAMIC_FIFO |
1283
+ cpu->midr = 0x69054117;
1244
+ GHWCFG2_PERIO_EP_SUPPORTED |
1284
+ cpu->ctr = 0xd172172;
1245
+ ((DWC2_NB_CHAN - 1) << GHWCFG2_NUM_HOST_CHAN_SHIFT) |
1285
+ cpu->reset_sctlr = 0x00000078;
1246
+ (GHWCFG2_INT_DMA_ARCH << GHWCFG2_ARCHITECTURE_SHIFT) |
1286
+}
1247
+ (GHWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST << GHWCFG2_OP_MODE_SHIFT);
1287
+
1248
+ s->ghwcfg3 = (4096 << GHWCFG3_DFIFO_DEPTH_SHIFT) |
1288
+static void arm_v7m_class_init(ObjectClass *oc, void *data)
1249
+ (4 << GHWCFG3_PACKET_SIZE_CNTR_WIDTH_SHIFT) |
1289
+{
1250
+ (4 << GHWCFG3_XFER_SIZE_CNTR_WIDTH_SHIFT);
1290
+ ARMCPUClass *acc = ARM_CPU_CLASS(oc);
1251
+ s->ghwcfg4 = 0;
1291
+ CPUClass *cc = CPU_CLASS(oc);
1252
+ s->glpmcfg = 0;
1292
+
1253
+ s->gpwrdn = GPWRDN_PWRDNRSTN;
1293
+ acc->info = data;
1254
+ s->gdfifocfg = 0;
1294
+#ifndef CONFIG_USER_ONLY
1255
+ s->gadpctl = 0;
1295
+ cc->do_interrupt = arm_v7m_cpu_do_interrupt;
1256
+ s->grefclk = 0;
1296
+#endif
1257
+ s->gintmsk2 = 0;
1297
+
1258
+ s->gintsts2 = 0;
1298
+ cc->cpu_exec_interrupt = arm_v7m_cpu_exec_interrupt;
1259
+
1299
+}
1260
+ s->hptxfsiz = 500 << FIFOSIZE_DEPTH_SHIFT;
1300
+
1261
+
1301
+static const ARMCPUInfo arm_tcg_cpus[] = {
1262
+ s->hcfg = 2 << HCFG_RESVALID_SHIFT;
1302
+ { .name = "arm926", .initfn = arm926_initfn },
1263
+ s->hfir = 60000;
1303
+ { .name = "arm946", .initfn = arm946_initfn },
1264
+ s->hfnum = 0x3fff;
1304
+ { .name = "arm1026", .initfn = arm1026_initfn },
1265
+ s->hptxsts = (16 << TXSTS_QSPCAVAIL_SHIFT) | 32768;
1305
+ /*
1266
+ s->haint = 0;
1306
+ * What QEMU calls "arm1136-r2" is actually the 1136 r0p2, i.e. an
1267
+ s->haintmsk = 0;
1307
+ * older core than plain "arm1136". In particular this does not
1268
+ s->hprt0 = 0;
1308
+ * have the v6K features.
1269
+
1309
+ */
1270
+ memset(s->hreg1, 0, sizeof(s->hreg1));
1310
+ { .name = "arm1136-r2", .initfn = arm1136_r2_initfn },
1271
+ memset(s->pcgreg, 0, sizeof(s->pcgreg));
1311
+ { .name = "arm1136", .initfn = arm1136_initfn },
1272
+
1312
+ { .name = "arm1176", .initfn = arm1176_initfn },
1273
+ s->sof_time = 0;
1313
+ { .name = "arm11mpcore", .initfn = arm11mpcore_initfn },
1274
+ s->frame_number = 0;
1314
+ { .name = "cortex-m0", .initfn = cortex_m0_initfn,
1275
+ s->fi = USB_FRMINTVL - 1;
1315
+ .class_init = arm_v7m_class_init },
1276
+ s->next_chan = 0;
1316
+ { .name = "cortex-m3", .initfn = cortex_m3_initfn,
1277
+ s->working = false;
1317
+ .class_init = arm_v7m_class_init },
1278
+
1318
+ { .name = "cortex-m4", .initfn = cortex_m4_initfn,
1279
+ for (i = 0; i < DWC2_NB_CHAN; i++) {
1319
+ .class_init = arm_v7m_class_init },
1280
+ s->packet[i].needs_service = false;
1320
+ { .name = "cortex-m7", .initfn = cortex_m7_initfn,
1281
+ }
1321
+ .class_init = arm_v7m_class_init },
1282
+}
1322
+ { .name = "cortex-m33", .initfn = cortex_m33_initfn,
1283
+
1323
+ .class_init = arm_v7m_class_init },
1284
+static void dwc2_reset_hold(Object *obj)
1324
+ { .name = "cortex-r5", .initfn = cortex_r5_initfn },
1285
+{
1325
+ { .name = "cortex-r5f", .initfn = cortex_r5f_initfn },
1286
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1326
+ { .name = "ti925t", .initfn = ti925t_initfn },
1287
+ DWC2State *s = DWC2_USB(obj);
1327
+ { .name = "sa1100", .initfn = sa1100_initfn },
1288
+
1328
+ { .name = "sa1110", .initfn = sa1110_initfn },
1289
+ trace_usb_dwc2_reset_hold();
1329
+ { .name = "pxa250", .initfn = pxa250_initfn },
1290
+
1330
+ { .name = "pxa255", .initfn = pxa255_initfn },
1291
+ if (c->parent_phases.hold) {
1331
+ { .name = "pxa260", .initfn = pxa260_initfn },
1292
+ c->parent_phases.hold(obj);
1332
+ { .name = "pxa261", .initfn = pxa261_initfn },
1293
+ }
1333
+ { .name = "pxa262", .initfn = pxa262_initfn },
1294
+
1334
+ /* "pxa270" is an alias for "pxa270-a0" */
1295
+ dwc2_update_irq(s);
1335
+ { .name = "pxa270", .initfn = pxa270a0_initfn },
1296
+}
1336
+ { .name = "pxa270-a0", .initfn = pxa270a0_initfn },
1297
+
1337
+ { .name = "pxa270-a1", .initfn = pxa270a1_initfn },
1298
+static void dwc2_reset_exit(Object *obj)
1338
+ { .name = "pxa270-b0", .initfn = pxa270b0_initfn },
1299
+{
1339
+ { .name = "pxa270-b1", .initfn = pxa270b1_initfn },
1300
+ DWC2Class *c = DWC2_GET_CLASS(obj);
1340
+ { .name = "pxa270-c0", .initfn = pxa270c0_initfn },
1301
+ DWC2State *s = DWC2_USB(obj);
1341
+ { .name = "pxa270-c5", .initfn = pxa270c5_initfn },
1302
+
1342
+};
1303
+ trace_usb_dwc2_reset_exit();
1343
+
1304
+
1344
+static void arm_tcg_cpu_register_types(void)
1305
+ if (c->parent_phases.exit) {
1345
+{
1306
+ c->parent_phases.exit(obj);
1346
+ size_t i;
1307
+ }
1347
+
1308
+
1348
+ for (i = 0; i < ARRAY_SIZE(arm_tcg_cpus); ++i) {
1309
+ s->hprt0 = HPRT0_PWR;
1349
+ arm_cpu_register(&arm_tcg_cpus[i]);
1310
+ if (s->uport.dev && s->uport.dev->attached) {
1350
+ }
1311
+ usb_attach(&s->uport);
1351
+}
1312
+ usb_device_reset(s->uport.dev);
1352
+
1313
+ }
1353
+type_init(arm_tcg_cpu_register_types)
1314
+}
1354
+
1315
+
1355
+#endif /* !CONFIG_USER_ONLY || !TARGET_AARCH64 */
1316
+static void dwc2_realize(DeviceState *dev, Error **errp)
1356
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
1317
+{
1318
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
1319
+ DWC2State *s = DWC2_USB(dev);
1320
+ Object *obj;
1321
+ Error *err = NULL;
1322
+
1323
+ obj = object_property_get_link(OBJECT(dev), "dma-mr", &err);
1324
+ if (err) {
1325
+ error_setg(errp, "dwc2: required dma-mr link not found: %s",
1326
+ error_get_pretty(err));
1327
+ return;
1328
+ }
1329
+ assert(obj != NULL);
1330
+
1331
+ s->dma_mr = MEMORY_REGION(obj);
1332
+ address_space_init(&s->dma_as, s->dma_mr, "dwc2");
1333
+
1334
+ usb_bus_new(&s->bus, sizeof(s->bus), &dwc2_bus_ops, dev);
1335
+ usb_register_port(&s->bus, &s->uport, s, 0, &dwc2_port_ops,
1336
+ USB_SPEED_MASK_LOW | USB_SPEED_MASK_FULL |
1337
+ (s->usb_version == 2 ? USB_SPEED_MASK_HIGH : 0));
1338
+ s->uport.dev = 0;
1339
+
1340
+ s->usb_frame_time = NANOSECONDS_PER_SECOND / 1000; /* 1000000 */
1341
+ if (NANOSECONDS_PER_SECOND >= USB_HZ_FS) {
1342
+ s->usb_bit_time = NANOSECONDS_PER_SECOND / USB_HZ_FS; /* 83.3 */
1343
+ } else {
1344
+ s->usb_bit_time = 1;
1345
+ }
1346
+
1347
+ s->fi = USB_FRMINTVL - 1;
1348
+ s->eof_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_frame_boundary, s);
1349
+ s->frame_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, dwc2_work_timer, s);
1350
+ s->async_bh = qemu_bh_new(dwc2_work_bh, s);
1351
+
1352
+ sysbus_init_irq(sbd, &s->irq);
1353
+}
1354
+
1355
+static void dwc2_init(Object *obj)
1356
+{
1357
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1358
+ DWC2State *s = DWC2_USB(obj);
1359
+
1360
+ memory_region_init(&s->container, obj, "dwc2", DWC2_MMIO_SIZE);
1361
+ sysbus_init_mmio(sbd, &s->container);
1362
+
1363
+ memory_region_init_io(&s->hsotg, obj, &dwc2_mmio_hsotg_ops, s,
1364
+ "dwc2-io", 4 * KiB);
1365
+ memory_region_add_subregion(&s->container, 0x0000, &s->hsotg);
1366
+
1367
+ memory_region_init_io(&s->fifos, obj, &dwc2_mmio_hreg2_ops, s,
1368
+ "dwc2-fifo", 64 * KiB);
1369
+ memory_region_add_subregion(&s->container, 0x1000, &s->fifos);
1370
+}
1371
+
1372
+static const VMStateDescription vmstate_dwc2_state_packet = {
1373
+ .name = "dwc2/packet",
1374
+ .version_id = 1,
1375
+ .minimum_version_id = 1,
1376
+ .fields = (VMStateField[]) {
1377
+ VMSTATE_UINT32(devadr, DWC2Packet),
1378
+ VMSTATE_UINT32(epnum, DWC2Packet),
1379
+ VMSTATE_UINT32(epdir, DWC2Packet),
1380
+ VMSTATE_UINT32(mps, DWC2Packet),
1381
+ VMSTATE_UINT32(pid, DWC2Packet),
1382
+ VMSTATE_UINT32(index, DWC2Packet),
1383
+ VMSTATE_UINT32(pcnt, DWC2Packet),
1384
+ VMSTATE_UINT32(len, DWC2Packet),
1385
+ VMSTATE_INT32(async, DWC2Packet),
1386
+ VMSTATE_BOOL(small, DWC2Packet),
1387
+ VMSTATE_BOOL(needs_service, DWC2Packet),
1388
+ VMSTATE_END_OF_LIST()
1389
+ },
1390
+};
1391
+
1392
+const VMStateDescription vmstate_dwc2_state = {
1393
+ .name = "dwc2",
1394
+ .version_id = 1,
1395
+ .minimum_version_id = 1,
1396
+ .fields = (VMStateField[]) {
1397
+ VMSTATE_UINT32_ARRAY(glbreg, DWC2State,
1398
+ DWC2_GLBREG_SIZE / sizeof(uint32_t)),
1399
+ VMSTATE_UINT32_ARRAY(fszreg, DWC2State,
1400
+ DWC2_FSZREG_SIZE / sizeof(uint32_t)),
1401
+ VMSTATE_UINT32_ARRAY(hreg0, DWC2State,
1402
+ DWC2_HREG0_SIZE / sizeof(uint32_t)),
1403
+ VMSTATE_UINT32_ARRAY(hreg1, DWC2State,
1404
+ DWC2_HREG1_SIZE / sizeof(uint32_t)),
1405
+ VMSTATE_UINT32_ARRAY(pcgreg, DWC2State,
1406
+ DWC2_PCGREG_SIZE / sizeof(uint32_t)),
1407
+
1408
+ VMSTATE_TIMER_PTR(eof_timer, DWC2State),
1409
+ VMSTATE_TIMER_PTR(frame_timer, DWC2State),
1410
+ VMSTATE_INT64(sof_time, DWC2State),
1411
+ VMSTATE_INT64(usb_frame_time, DWC2State),
1412
+ VMSTATE_INT64(usb_bit_time, DWC2State),
1413
+ VMSTATE_UINT32(usb_version, DWC2State),
1414
+ VMSTATE_UINT16(frame_number, DWC2State),
1415
+ VMSTATE_UINT16(fi, DWC2State),
1416
+ VMSTATE_UINT16(next_chan, DWC2State),
1417
+ VMSTATE_BOOL(working, DWC2State),
1418
+
1419
+ VMSTATE_STRUCT_ARRAY(packet, DWC2State, DWC2_NB_CHAN, 1,
1420
+ vmstate_dwc2_state_packet, DWC2Packet),
1421
+ VMSTATE_UINT8_2DARRAY(usb_buf, DWC2State, DWC2_NB_CHAN,
1422
+ DWC2_MAX_XFER_SIZE),
1423
+
1424
+ VMSTATE_END_OF_LIST()
1425
+ }
1426
+};
1427
+
1428
+static Property dwc2_usb_properties[] = {
1429
+ DEFINE_PROP_UINT32("usb_version", DWC2State, usb_version, 2),
1430
+ DEFINE_PROP_END_OF_LIST(),
1431
+};
1432
+
1433
+static void dwc2_class_init(ObjectClass *klass, void *data)
1434
+{
1435
+ DeviceClass *dc = DEVICE_CLASS(klass);
1436
+ DWC2Class *c = DWC2_CLASS(klass);
1437
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1438
+
1439
+ dc->realize = dwc2_realize;
1440
+ dc->vmsd = &vmstate_dwc2_state;
1441
+ set_bit(DEVICE_CATEGORY_USB, dc->categories);
1442
+ device_class_set_props(dc, dwc2_usb_properties);
1443
+ resettable_class_set_parent_phases(rc, dwc2_reset_enter, dwc2_reset_hold,
1444
+ dwc2_reset_exit, &c->parent_phases);
1445
+}
1446
+
1447
+static const TypeInfo dwc2_usb_type_info = {
1448
+ .name = TYPE_DWC2_USB,
1449
+ .parent = TYPE_SYS_BUS_DEVICE,
1450
+ .instance_size = sizeof(DWC2State),
1451
+ .instance_init = dwc2_init,
1452
+ .class_size = sizeof(DWC2Class),
1453
+ .class_init = dwc2_class_init,
1454
+};
1455
+
1456
+static void dwc2_usb_register_types(void)
1457
+{
1458
+ type_register_static(&dwc2_usb_type_info);
1459
+}
1460
+
1461
+type_init(dwc2_usb_register_types)
1462
diff --git a/hw/usb/Kconfig b/hw/usb/Kconfig
1357
index XXXXXXX..XXXXXXX 100644
1463
index XXXXXXX..XXXXXXX 100644
1358
--- a/target/arm/Makefile.objs
1464
--- a/hw/usb/Kconfig
1359
+++ b/target/arm/Makefile.objs
1465
+++ b/hw/usb/Kconfig
1360
@@ -XXX,XX +XXX,XX @@ obj-y += translate.o op_helper.o
1466
@@ -XXX,XX +XXX,XX @@ config USB_MUSB
1361
obj-y += crypto_helper.o
1467
bool
1362
obj-y += iwmmxt_helper.o vec_helper.o neon_helper.o
1468
select USB
1363
obj-y += m_helper.o
1469
1364
+obj-y += cpu_tcg.o
1470
+config USB_DWC2
1365
1471
+ bool
1366
obj-$(CONFIG_SOFTMMU) += psci.o
1472
+ default y
1367
1473
+ select USB
1474
+
1475
config TUSB6010
1476
bool
1477
select USB_MUSB
1478
diff --git a/hw/usb/Makefile.objs b/hw/usb/Makefile.objs
1479
index XXXXXXX..XXXXXXX 100644
1480
--- a/hw/usb/Makefile.objs
1481
+++ b/hw/usb/Makefile.objs
1482
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_USB_EHCI_SYSBUS) += hcd-ehci-sysbus.o
1483
common-obj-$(CONFIG_USB_XHCI) += hcd-xhci.o
1484
common-obj-$(CONFIG_USB_XHCI_NEC) += hcd-xhci-nec.o
1485
common-obj-$(CONFIG_USB_MUSB) += hcd-musb.o
+common-obj-$(CONFIG_USB_DWC2) += hcd-dwc2.o

common-obj-$(CONFIG_TUSB6010) += tusb6010.o
common-obj-$(CONFIG_IMX) += chipidea.o
diff --git a/hw/usb/trace-events b/hw/usb/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/usb/trace-events
+++ b/hw/usb/trace-events
@@ -XXX,XX +XXX,XX @@ usb_xhci_xfer_error(void *xfer, uint32_t ret) "%p: ret %d"
usb_xhci_unimplemented(const char *item, int nr) "%s (0x%x)"
usb_xhci_enforced_limit(const char *item) "%s"

+# hcd-dwc2.c
+usb_dwc2_update_irq(uint32_t level) "level=%d"
+usb_dwc2_raise_global_irq(uint32_t intr) "0x%08x"
+usb_dwc2_lower_global_irq(uint32_t intr) "0x%08x"
+usb_dwc2_raise_host_irq(uint32_t intr) "0x%04x"
+usb_dwc2_lower_host_irq(uint32_t intr) "0x%04x"
+usb_dwc2_sof(int64_t next) "next SOF %" PRId64
+usb_dwc2_bus_start(void) "start SOFs"
+usb_dwc2_bus_stop(void) "stop SOFs"
+usb_dwc2_find_device(uint8_t addr) "%d"
+usb_dwc2_port_disabled(uint32_t pnum) "port %d disabled"
+usb_dwc2_device_found(uint32_t pnum) "device found on port %d"
+usb_dwc2_device_not_found(void) "device not found"
+usb_dwc2_handle_packet(uint32_t chan, void *dev, void *pkt, uint32_t ep, const char *type, const char *dir, uint32_t mps, uint32_t len, uint32_t pcnt) "ch %d dev %p pkt %p ep %d type %s dir %s mps %d len %d pcnt %d"
+usb_dwc2_memory_read(uint32_t addr, uint32_t len) "addr %d len %d"
+usb_dwc2_packet_status(const char *status, uint32_t len) "status %s len %d"
+usb_dwc2_packet_error(const char *status) "ERROR %s"
+usb_dwc2_async_packet(void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "pkt %p ch %d dev %p ep %d %s len %d"
+usb_dwc2_memory_write(uint32_t addr, uint32_t len) "addr %d len %d"
+usb_dwc2_packet_done(const char *status, uint32_t actual, uint32_t len, uint32_t pcnt) "status %s actual %d len %d pcnt %d"
+usb_dwc2_packet_next(const char *status, uint32_t len, uint32_t pcnt) "status %s len %d pcnt %d"
+usb_dwc2_attach(void *port) "port %p"
+usb_dwc2_attach_speed(const char *speed) "%s-speed device attached"
+usb_dwc2_detach(void *port) "port %p"
+usb_dwc2_child_detach(void *port, void *child) "port %p child %p"
+usb_dwc2_wakeup(void *port) "port %p"
+usb_dwc2_async_packet_complete(void *port, void *pkt, uint32_t chan, void *dev, uint32_t ep, const char *dir, uint32_t len) "port %p packet %p ch %d dev %p ep %d %s len %d"
+usb_dwc2_work_bh(void) ""
+usb_dwc2_work_bh_service(uint32_t first, uint32_t current, void *dev, uint32_t ep) "first %d servicing %d dev %p ep %d"
+usb_dwc2_work_bh_next(uint32_t chan) "next %d"
+usb_dwc2_enable_chan(uint32_t chan, void *dev, void *pkt, uint32_t ep) "ch %d dev %p pkt %p ep %d"
+usb_dwc2_glbreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
+usb_dwc2_glbreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
+usb_dwc2_fszreg_read(uint64_t addr, uint32_t val) " 0x%04" PRIx64 " HPTXFSIZ val 0x%08x"
+usb_dwc2_fszreg_write(uint64_t addr, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " HPTXFSIZ val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
+usb_dwc2_hreg0_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
+usb_dwc2_hreg0_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
+usb_dwc2_hreg1_read(uint64_t addr, const char *reg, uint64_t chan, uint32_t val) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08x"
+usb_dwc2_hreg1_write(uint64_t addr, const char *reg, uint64_t chan, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " %s%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
+usb_dwc2_pcgreg_read(uint64_t addr, const char *reg, uint32_t val) " 0x%04" PRIx64 " %s val 0x%08x"
+usb_dwc2_pcgreg_write(uint64_t addr, const char *reg, uint64_t val, uint32_t old, uint64_t result) "0x%04" PRIx64 " %s val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
+usb_dwc2_hreg2_read(uint64_t addr, uint64_t fifo, uint32_t val) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08x"
+usb_dwc2_hreg2_write(uint64_t addr, uint64_t fifo, uint64_t val, uint32_t old, uint64_t result) " 0x%04" PRIx64 " FIFO%" PRId64 " val 0x%08" PRIx64 " old 0x%08x result 0x%08" PRIx64
+usb_dwc2_hreg0_action(const char *s) "%s"
+usb_dwc2_wakeup_endpoint(void *ep, uint32_t stream) "endp %p stream %d"
+usb_dwc2_work_timer(void) ""
+usb_dwc2_reset_enter(void) "=== RESET enter ==="
+usb_dwc2_reset_hold(void) "=== RESET hold ==="
+usb_dwc2_reset_exit(void) "=== RESET exit ==="
+
# desc.c
usb_desc_device(int addr, int len, int ret) "dev %d query device, len %d, ret %d"
usb_desc_device_qualifier(int addr, int len, int ret) "dev %d query device qualifier, len %d, ret %d"
--
2.20.1
diff view generated by jsdifflib
From: Paul Zimmerman <pauldzim@gmail.com>

The dwc-hsotg (dwc2) USB host depends on a short packet to
indicate the end of an IN transfer. The usb-storage driver
currently doesn't provide this, so fix it.

I have tested this change rather extensively using a PC
emulation with xhci, ehci, and uhci controllers, and have
not observed any regressions.

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Message-id: 20200520235349.21215-6-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/usb/dev-storage.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/usb/dev-storage.c
+++ b/hw/usb/dev-storage.c
@@ -XXX,XX +XXX,XX @@ static void usb_msd_copy_data(MSDState *s, USBPacket *p)
usb_packet_copy(p, scsi_req_get_buf(s->req) + s->scsi_off, len);
s->scsi_len -= len;
s->scsi_off += len;
+ if (len > s->data_len) {
+ len = s->data_len;
+ }
s->data_len -= len;
if (s->scsi_len == 0 || s->data_len == 0) {
scsi_req_continue(s->req);
@@ -XXX,XX +XXX,XX @@ static void usb_msd_command_complete(SCSIRequest *req, uint32_t status, size_t r
if (s->data_len) {
int len = (p->iov.size - p->actual_length);
usb_packet_skip(p, len);
+ if (len > s->data_len) {
+ len = s->data_len;
+ }
s->data_len -= len;
}
if (s->data_len == 0) {
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
int len = p->iov.size - p->actual_length;
if (len) {
usb_packet_skip(p, len);
+ if (len > s->data_len) {
+ len = s->data_len;
+ }
s->data_len -= len;
if (s->data_len == 0) {
s->mode = USB_MSDM_CSW;
@@ -XXX,XX +XXX,XX @@ static void usb_msd_handle_data(USBDevice *dev, USBPacket *p)
int len = p->iov.size - p->actual_length;
if (len) {
usb_packet_skip(p, len);
+ if (len > s->data_len) {
+ len = s->data_len;
+ }
s->data_len -= len;
if (s->data_len == 0) {
s->mode = USB_MSDM_CSW;
}
}
}
- if (p->actual_length < p->iov.size) {
+ if (p->actual_length < p->iov.size && (p->short_not_ok ||
+ s->scsi_len >= p->ep->max_packet_size)) {
DPRINTF("Deferring packet %p [wait data-in]\n", p);
s->packet = p;
p->status = USB_RET_ASYNC;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Now that we can pass 7 parameters, do not encode register
operands within simd_data.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200507172352.15418-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 45 +++++++----
target/arm/sve_helper.c | 157 ++++++++++++++-----------------------
target/arm/translate-sve.c | 70 ++++++-----------
3 files changed, 114 insertions(+), 158 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_fcadd_s, TCG_CALL_NO_RWG,
DEF_HELPER_FLAGS_6(sve_fcadd_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fmla_zpzzz_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fmla_zpzzz_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fmla_zpzzz_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fmls_zpzzz_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fmls_zpzzz_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fmls_zpzzz_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fnmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fnmla_zpzzz_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fnmla_zpzzz_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fnmla_zpzzz_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fnmls_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fnmls_zpzzz_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fnmls_zpzzz_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fnmls_zpzzz_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)

-DEF_HELPER_FLAGS_3(sve_fcmla_zpzzz_h, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fcmla_zpzzz_s, TCG_CALL_NO_RWG, void, env, ptr, i32)
-DEF_HELPER_FLAGS_3(sve_fcmla_zpzzz_d, TCG_CALL_NO_RWG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fcmla_zpzzz_h, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fcmla_zpzzz_s, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sve_fcmla_zpzzz_d, TCG_CALL_NO_RWG,
+ void, ptr, ptr, ptr, ptr, ptr, ptr, i32)

DEF_HELPER_FLAGS_5(sve_ftmad_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_5(sve_ftmad_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_ZPZ_FP(sve_ucvt_dd, uint64_t, , uint64_to_float64)

#undef DO_ZPZ_FP

-/* 4-operand predicated multiply-add. This requires 7 operands to pass
- * "properly", so we need to encode some of the registers into DESC.
- */
-QEMU_BUILD_BUG_ON(SIMD_DATA_SHIFT + 20 > 32);
-
-static void do_fmla_zpzzz_h(CPUARMState *env, void *vg, uint32_t desc,
+static void do_fmla_zpzzz_h(void *vd, void *vn, void *vm, void *va, void *vg,
+ float_status *status, uint32_t desc,
uint16_t neg1, uint16_t neg3)
{
intptr_t i = simd_oprsz(desc);
- unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
- unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
- unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
- unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
- void *vd = &env->vfp.zregs[rd];
- void *vn = &env->vfp.zregs[rn];
- void *vm = &env->vfp.zregs[rm];
- void *va = &env->vfp.zregs[ra];
uint64_t *g = vg;

do {
@@ -XXX,XX +XXX,XX @@ static void do_fmla_zpzzz_h(CPUARMState *env, void *vg, uint32_t desc,
e1 = *(uint16_t *)(vn + H1_2(i)) ^ neg1;
e2 = *(uint16_t *)(vm + H1_2(i));
e3 = *(uint16_t *)(va + H1_2(i)) ^ neg3;
- r = float16_muladd(e1, e2, e3, 0, &env->vfp.fp_status_f16);
+ r = float16_muladd(e1, e2, e3, 0, status);
*(uint16_t *)(vd + H1_2(i)) = r;
}
} while (i & 63);
} while (i != 0);
}

-void HELPER(sve_fmla_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_h(env, vg, desc, 0, 0);
+ do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0, 0);
}

-void HELPER(sve_fmls_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fmls_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_h(env, vg, desc, 0x8000, 0);
+ do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0x8000, 0);
}

-void HELPER(sve_fnmla_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fnmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_h(env, vg, desc, 0x8000, 0x8000);
+ do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0x8000, 0x8000);
}

-void HELPER(sve_fnmls_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fnmls_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_h(env, vg, desc, 0, 0x8000);
+ do_fmla_zpzzz_h(vd, vn, vm, va, vg, status, desc, 0, 0x8000);
}

-static void do_fmla_zpzzz_s(CPUARMState *env, void *vg, uint32_t desc,
+static void do_fmla_zpzzz_s(void *vd, void *vn, void *vm, void *va, void *vg,
+ float_status *status, uint32_t desc,
uint32_t neg1, uint32_t neg3)
{
intptr_t i = simd_oprsz(desc);
- unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
- unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
- unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
- unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
- void *vd = &env->vfp.zregs[rd];
- void *vn = &env->vfp.zregs[rn];
- void *vm = &env->vfp.zregs[rm];
- void *va = &env->vfp.zregs[ra];
uint64_t *g = vg;

do {
@@ -XXX,XX +XXX,XX @@ static void do_fmla_zpzzz_s(CPUARMState *env, void *vg, uint32_t desc,
e1 = *(uint32_t *)(vn + H1_4(i)) ^ neg1;
e2 = *(uint32_t *)(vm + H1_4(i));
e3 = *(uint32_t *)(va + H1_4(i)) ^ neg3;
- r = float32_muladd(e1, e2, e3, 0, &env->vfp.fp_status);
+ r = float32_muladd(e1, e2, e3, 0, status);
*(uint32_t *)(vd + H1_4(i)) = r;
}
} while (i & 63);
} while (i != 0);
}

-void HELPER(sve_fmla_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_s(env, vg, desc, 0, 0);
+ do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0, 0);
}

-void HELPER(sve_fmls_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fmls_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_s(env, vg, desc, 0x80000000, 0);
+ do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0x80000000, 0);
}

-void HELPER(sve_fnmla_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fnmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_s(env, vg, desc, 0x80000000, 0x80000000);
+ do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0x80000000, 0x80000000);
}

-void HELPER(sve_fnmls_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fnmls_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_s(env, vg, desc, 0, 0x80000000);
+ do_fmla_zpzzz_s(vd, vn, vm, va, vg, status, desc, 0, 0x80000000);
}

-static void do_fmla_zpzzz_d(CPUARMState *env, void *vg, uint32_t desc,
+static void do_fmla_zpzzz_d(void *vd, void *vn, void *vm, void *va, void *vg,
+ float_status *status, uint32_t desc,
uint64_t neg1, uint64_t neg3)
{
intptr_t i = simd_oprsz(desc);
- unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
- unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
- unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
- unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
- void *vd = &env->vfp.zregs[rd];
- void *vn = &env->vfp.zregs[rn];
- void *vm = &env->vfp.zregs[rm];
- void *va = &env->vfp.zregs[ra];
uint64_t *g = vg;

do {
@@ -XXX,XX +XXX,XX @@ static void do_fmla_zpzzz_d(CPUARMState *env, void *vg, uint32_t desc,
e1 = *(uint64_t *)(vn + i) ^ neg1;
e2 = *(uint64_t *)(vm + i);
e3 = *(uint64_t *)(va + i) ^ neg3;
- r = float64_muladd(e1, e2, e3, 0, &env->vfp.fp_status);
+ r = float64_muladd(e1, e2, e3, 0, status);
*(uint64_t *)(vd + i) = r;
}
} while (i & 63);
} while (i != 0);
}

-void HELPER(sve_fmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_d(env, vg, desc, 0, 0);
+ do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, 0, 0);
}

-void HELPER(sve_fmls_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fmls_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_d(env, vg, desc, INT64_MIN, 0);
+ do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, INT64_MIN, 0);
}

-void HELPER(sve_fnmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fnmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_d(env, vg, desc, INT64_MIN, INT64_MIN);
+ do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, INT64_MIN, INT64_MIN);
}

-void HELPER(sve_fnmls_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fnmls_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
- do_fmla_zpzzz_d(env, vg, desc, 0, INT64_MIN);
+ do_fmla_zpzzz_d(vd, vn, vm, va, vg, status, desc, 0, INT64_MIN);
}

/* Two operand floating-point comparison controlled by a predicate.
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcadd_d)(void *vd, void *vn, void *vm, void *vg,
* FP Complex Multiply
*/

-QEMU_BUILD_BUG_ON(SIMD_DATA_SHIFT + 22 > 32);
-
-void HELPER(sve_fcmla_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fcmla_zpzzz_h)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
intptr_t j, i = simd_oprsz(desc);
- unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
- unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
- unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
- unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
- unsigned rot = extract32(desc, SIMD_DATA_SHIFT + 20, 2);
+ unsigned rot = simd_data(desc);
bool flip = rot & 1;
float16 neg_imag, neg_real;
- void *vd = &env->vfp.zregs[rd];
- void *vn = &env->vfp.zregs[rn];
- void *vm = &env->vfp.zregs[rm];
- void *va = &env->vfp.zregs[ra];
uint64_t *g = vg;

neg_imag = float16_set_sign(0, (rot & 2) != 0);
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_h)(CPUARMState *env, void *vg, uint32_t desc)

if (likely((pg >> (i & 63)) & 1)) {
d = *(float16 *)(va + H1_2(i));
- d = float16_muladd(e2, e1, d, 0, &env->vfp.fp_status_f16);
+ d = float16_muladd(e2, e1, d, 0, status);
*(float16 *)(vd + H1_2(i)) = d;
}
if (likely((pg >> (j & 63)) & 1)) {
d = *(float16 *)(va + H1_2(j));
- d = float16_muladd(e4, e3, d, 0, &env->vfp.fp_status_f16);
+ d = float16_muladd(e4, e3, d, 0, status);
*(float16 *)(vd + H1_2(j)) = d;
}
} while (i & 63);
} while (i != 0);
}

-void HELPER(sve_fcmla_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fcmla_zpzzz_s)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
intptr_t j, i = simd_oprsz(desc);
- unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
- unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
- unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
- unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
- unsigned rot = extract32(desc, SIMD_DATA_SHIFT + 20, 2);
+ unsigned rot = simd_data(desc);
bool flip = rot & 1;
float32 neg_imag, neg_real;
- void *vd = &env->vfp.zregs[rd];
- void *vn = &env->vfp.zregs[rn];
- void *vm = &env->vfp.zregs[rm];
- void *va = &env->vfp.zregs[ra];
uint64_t *g = vg;

neg_imag = float32_set_sign(0, (rot & 2) != 0);
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_s)(CPUARMState *env, void *vg, uint32_t desc)

if (likely((pg >> (i & 63)) & 1)) {
d = *(float32 *)(va + H1_2(i));
- d = float32_muladd(e2, e1, d, 0, &env->vfp.fp_status);
+ d = float32_muladd(e2, e1, d, 0, status);
*(float32 *)(vd + H1_2(i)) = d;
}
if (likely((pg >> (j & 63)) & 1)) {
d = *(float32 *)(va + H1_2(j));
- d = float32_muladd(e4, e3, d, 0, &env->vfp.fp_status);
+ d = float32_muladd(e4, e3, d, 0, status);
*(float32 *)(vd + H1_2(j)) = d;
}
} while (i & 63);
} while (i != 0);
}

-void HELPER(sve_fcmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
+void HELPER(sve_fcmla_zpzzz_d)(void *vd, void *vn, void *vm, void *va,
+ void *vg, void *status, uint32_t desc)
{
intptr_t j, i = simd_oprsz(desc);
- unsigned rd = extract32(desc, SIMD_DATA_SHIFT, 5);
- unsigned rn = extract32(desc, SIMD_DATA_SHIFT + 5, 5);
- unsigned rm = extract32(desc, SIMD_DATA_SHIFT + 10, 5);
- unsigned ra = extract32(desc, SIMD_DATA_SHIFT + 15, 5);
- unsigned rot = extract32(desc, SIMD_DATA_SHIFT + 20, 2);
+ unsigned rot = simd_data(desc);
bool flip = rot & 1;
float64 neg_imag, neg_real;
- void *vd = &env->vfp.zregs[rd];
- void *vn = &env->vfp.zregs[rn];
- void *vm = &env->vfp.zregs[rm];
- void *va = &env->vfp.zregs[ra];
uint64_t *g = vg;

neg_imag = float64_set_sign(0, (rot & 2) != 0);
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)

if (likely((pg >> (i & 63)) & 1)) {
d = *(float64 *)(va + H1_2(i));
- d = float64_muladd(e2, e1, d, 0, &env->vfp.fp_status);
+ d = float64_muladd(e2, e1, d, 0, status);
*(float64 *)(vd + H1_2(i)) = d;
}
if (likely((pg >> (j & 63)) & 1)) {
d = *(float64 *)(va + H1_2(j));
- d = float64_muladd(e4, e3, d, 0, &env->vfp.fp_status);
+ d = float64_muladd(e4, e3, d, 0, status);
*(float64 *)(vd + H1_2(j)) = d;
}
} while (i & 63);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a)
return true;
}

-typedef void gen_helper_sve_fmla(TCGv_env, TCGv_ptr, TCGv_i32);
-
-static bool do_fmla(DisasContext *s, arg_rprrr_esz *a, gen_helper_sve_fmla *fn)
+static bool do_fmla(DisasContext *s, arg_rprrr_esz *a,
+ gen_helper_gvec_5_ptr *fn)
{
- if (fn == NULL) {
+ if (a->esz == 0) {
return false;
}
- if (!sve_access_check(s)) {
- return true;
+ if (sve_access_check(s)) {
+ unsigned vsz = vec_full_reg_size(s);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_5_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ vec_full_reg_offset(s, a->ra),
+ pred_full_reg_offset(s, a->pg),
+ status, vsz, vsz, 0, fn);
+ tcg_temp_free_ptr(status);
}
-
- unsigned vsz = vec_full_reg_size(s);
- unsigned desc;
- TCGv_i32 t_desc;
- TCGv_ptr pg = tcg_temp_new_ptr();
-
- /* We would need 7 operands to pass these arguments "properly".
- * So we encode all the register numbers into the descriptor.
- */
- desc = deposit32(a->rd, 5, 5, a->rn);
- desc = deposit32(desc, 10, 5, a->rm);
- desc = deposit32(desc, 15, 5, a->ra);
- desc = simd_desc(vsz, vsz, desc);
-
- t_desc = tcg_const_i32(desc);
- tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
- fn(cpu_env, pg, t_desc);
- tcg_temp_free_i32(t_desc);
- tcg_temp_free_ptr(pg);
return true;
}

#define DO_FMLA(NAME, name) \
static bool trans_##NAME(DisasContext *s, arg_rprrr_esz *a) \
{ \
- static gen_helper_sve_fmla * const fns[4] = { \
+ static gen_helper_gvec_5_ptr * const fns[4] = { \
NULL, gen_helper_sve_##name##_h, \
gen_helper_sve_##name##_s, gen_helper_sve_##name##_d \
}; \
@@ -XXX,XX +XXX,XX @@ DO_FMLA(FNMLS_zpzzz, fnmls_zpzzz)

static bool trans_FCMLA_zpzzz(DisasContext *s, arg_FCMLA_zpzzz *a)
{
- static gen_helper_sve_fmla * const fns[3] = {
+ static gen_helper_gvec_5_ptr * const fns[4] = {
+ NULL,
gen_helper_sve_fcmla_zpzzz_h,
gen_helper_sve_fcmla_zpzzz_s,
gen_helper_sve_fcmla_zpzzz_d,
@@ -XXX,XX +XXX,XX @@ static bool trans_FCMLA_zpzzz(DisasContext *s, arg_FCMLA_zpzzz *a)
}
if (sve_access_check(s)) {
unsigned vsz = vec_full_reg_size(s);
- unsigned desc;
- TCGv_i32 t_desc;
- TCGv_ptr pg = tcg_temp_new_ptr();
-
- /* We would need 7 operands to pass these arguments "properly".
- * So we encode all the register numbers into the descriptor.
- */
- desc = deposit32(a->rd, 5, 5, a->rn);
- desc = deposit32(desc, 10, 5, a->rm);
- desc = deposit32(desc, 15, 5, a->ra);
- desc = deposit32(desc, 20, 2, a->rot);
- desc = sextract32(desc, 0, 22);
- desc = simd_desc(vsz, vsz, desc);
-
- t_desc = tcg_const_i32(desc);
- tcg_gen_addi_ptr(pg, cpu_env, pred_full_reg_offset(s, a->pg));
- fns[a->esz - 1](cpu_env, pg, t_desc);
- tcg_temp_free_i32(t_desc);
- tcg_temp_free_ptr(pg);
+ TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+ tcg_gen_gvec_5_ptr(vec_full_reg_offset(s, a->rd),
+ vec_full_reg_offset(s, a->rn),
+ vec_full_reg_offset(s, a->rm),
+ vec_full_reg_offset(s, a->ra),
+ pred_full_reg_offset(s, a->pg),
+ status, vsz, vsz, a->rot, fns[a->esz]);
+ tcg_temp_free_ptr(status);
}
return true;
}
--
2.20.1
From: Paul Zimmerman <pauldzim@gmail.com>

Wire the dwc-hsotg (dwc2) emulation into Qemu

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
Message-id: 20200520235349.21215-7-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/bcm2835_peripherals.h | 3 ++-
hw/arm/bcm2835_peripherals.c | 21 ++++++++++++++++++++-
2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/bcm2835_peripherals.h
+++ b/include/hw/arm/bcm2835_peripherals.h
@@ -XXX,XX +XXX,XX @@
#include "hw/sd/bcm2835_sdhost.h"
#include "hw/gpio/bcm2835_gpio.h"
#include "hw/timer/bcm2835_systmr.h"
+#include "hw/usb/hcd-dwc2.h"
#include "hw/misc/unimp.h"

#define TYPE_BCM2835_PERIPHERALS "bcm2835-peripherals"
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
UnimplementedDeviceState ave0;
UnimplementedDeviceState bscsl;
UnimplementedDeviceState smi;
- UnimplementedDeviceState dwc2;
+ DWC2State dwc2;
UnimplementedDeviceState sdramc;
} BCM2835PeripheralState;

diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/bcm2835_peripherals.c
+++ b/hw/arm/bcm2835_peripherals.c
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
/* Mphi */
sysbus_init_child_obj(obj, "mphi", &s->mphi, sizeof(s->mphi),
TYPE_BCM2835_MPHI);
+
+ /* DWC2 */
+ sysbus_init_child_obj(obj, "dwc2", &s->dwc2, sizeof(s->dwc2),
+ TYPE_DWC2_USB);
+
+ object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
+ OBJECT(&s->gpu_bus_mr));
}

static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
INTERRUPT_HOSTPORT));

+ /* DWC2 */
+ object_property_set_bool(OBJECT(&s->dwc2), true, "realized", &err);
+ if (err) {
+ error_propagate(errp, err);

From: Richard Henderson <richard.henderson@linaro.org>

With sve_cont_ldst_pages, the differences between first-fault and no-fault
are minimal, so unify the routines. With cpu_probe_watchpoint, we are able
to make progress through pages with TLB_WATCHPOINT set when the watchpoint
does not actually fire.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve_helper.c | 346 +++++++++++++++++++---------------------
1 file changed, 162 insertions(+), 184 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static intptr_t find_next_active(uint64_t *vg, intptr_t reg_off,
return reg_off;
}

-/*
- * Return the maximum offset <= @mem_max which is still within the page
- * referenced by @base + @mem_off.
- */
-static intptr_t max_for_page(target_ulong base, intptr_t mem_off,
- intptr_t mem_max)
-{
- target_ulong addr = base + mem_off;
- intptr_t split = -(intptr_t)(addr | TARGET_PAGE_MASK);
- return MIN(split, mem_max - mem_off) + mem_off;
-}
-
/*
* Resolve the guest virtual address to info->host and info->flags.
* If @nofault, return false if the page is invalid, otherwise
@@ -XXX,XX +XXX,XX @@ static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
#endif
}

-/*
- * The result of tlb_vaddr_to_host for user-only is just g2h(x),
- * which is always non-null. Elide the useless test.
- */
-static inline bool test_host_page(void *host)
-{
-#ifdef CONFIG_USER_ONLY
- return true;
-#else
- return likely(host != NULL);
-#endif
-}
-
/*
* Common helper for all contiguous 1,2,3,4-register predicated stores.
*/
@@ -XXX,XX +XXX,XX @@ static void record_fault(CPUARMState *env, uintptr_t i, uintptr_t oprsz)
}

/*
- * Common helper for all contiguous first-fault loads.
+ * Common helper for all contiguous no-fault and first-fault loads.
*/
-static void sve_ldff1_r(CPUARMState *env, void *vg, const target_ulong addr,
- uint32_t desc, const uintptr_t retaddr,
- const int esz, const int msz,
- sve_ldst1_host_fn *host_fn,
- sve_ldst1_tlb_fn *tlb_fn)
+static inline QEMU_ALWAYS_INLINE
+void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
+ uint32_t desc, const uintptr_t retaddr,
+ const int esz, const int msz, const SVEContFault fault,
+ sve_ldst1_host_fn *host_fn,
+ sve_ldst1_tlb_fn *tlb_fn)
{
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
- const int mmu_idx = get_mmuidx(oi);
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
void *vd = &env->vfp.zregs[rd];
- const int diffsz = esz - msz;
const intptr_t reg_max = simd_oprsz(desc);
- const intptr_t mem_max = reg_max >> diffsz;
- intptr_t split, reg_off, mem_off, i;
+ intptr_t reg_off, mem_off, reg_last;
+ SVEContLdSt info;
+ int flags;
void *host;

- /* Skip to the first active element. */
- reg_off = find_next_active(vg, 0, reg_max, esz);
- if (unlikely(reg_off == reg_max)) {
+ /* Find the active elements. */
+ if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, esz, 1 << msz)) {
/* The entire predicate was false; no load occurs. */
memset(vd, 0, reg_max);
return;
}
- mem_off = reg_off >> diffsz;
+ reg_off = info.reg_off_first[0];

- /*
- * If the (remaining) load is entirely within a single page, then:
- * For softmmu, and the tlb hits, then no faults will occur;
- * For user-only, either the first load will fault or none will.
- * We can thus perform the load directly to the destination and
- * Vd will be unmodified on any exception path.
- */
- split = max_for_page(addr, mem_off, mem_max);
- if (likely(split == mem_max)) {
- host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
- if (test_host_page(host)) {
- i = reg_off;
- host -= mem_off;
- do {
- host_fn(vd, i, host + (i >> diffsz));
- i = find_next_active(vg, i + (1 << esz), reg_max, esz);
- } while (i < reg_max);
- /* After any fault, zero any leading inactive elements. */
+ /* Probe the page(s). */
+ if (!sve_cont_ldst_pages(&info, fault, env, addr, MMU_DATA_LOAD, retaddr)) {
+ /* Fault on first element. */
+ tcg_debug_assert(fault == FAULT_NO);
+ memset(vd, 0, reg_max);
+ goto do_fault;
+ }
+
+ mem_off = info.mem_off_first[0];
+ flags = info.page[0].flags;
+
+ if (fault == FAULT_FIRST) {
+ /*
+ * Special handling of the first active element,
+ * if it crosses a page boundary or is MMIO.
+ */
+ bool is_split = mem_off == info.mem_off_split;
+ /* TODO: MTE check. */
+ if (unlikely(flags != 0) || unlikely(is_split)) {
+ /*
+ * Use the slow path for cross-page handling.
+ * Might trap for MMIO or watchpoints.
+ */
+ tlb_fn(env, vd, reg_off, addr + mem_off, retaddr);
+
+ /* After any fault, zero the other elements. */
swap_memzero(vd, reg_off);
- return;
+ reg_off += 1 << esz;
+ mem_off += 1 << msz;
+ swap_memzero(vd + reg_off, reg_max - reg_off);
+
+ if (is_split) {
+ goto second_page;
+ }
+ } else {
+ memset(vd, 0, reg_max);
+ }
+ } else {
+ memset(vd, 0, reg_max);
+ if (unlikely(mem_off == info.mem_off_split)) {
+ /* The first active element crosses a page boundary. */
163
+ flags |= info.page[1].flags;
164
+ if (unlikely(flags & TLB_MMIO)) {
165
+ /* Some page is MMIO, see below. */
166
+ goto do_fault;
167
+ }
168
+ if (unlikely(flags & TLB_WATCHPOINT) &&
169
+ (cpu_watchpoint_address_matches
170
+ (env_cpu(env), addr + mem_off, 1 << msz)
171
+ & BP_MEM_READ)) {
172
+ /* Watchpoint hit, see below. */
173
+ goto do_fault;
174
+ }
175
+ /* TODO: MTE check. */
176
+ /*
177
+ * Use the slow path for cross-page handling.
178
+ * This is RAM, without a watchpoint, and will not trap.
179
+ */
180
+ tlb_fn(env, vd, reg_off, addr + mem_off, retaddr);
181
+ goto second_page;
182
}
183
}
184
185
/*
186
- * Perform one normal read, which will fault or not.
187
- * But it is likely to bring the page into the tlb.
188
+ * From this point on, all memory operations are MemSingleNF.
189
+ *
190
+ * Per the MemSingleNF pseudocode, a no-fault load from Device memory
191
+ * must not actually hit the bus -- it returns (UNKNOWN, FAULT) instead.
192
+ *
193
+ * Unfortuately we do not have access to the memory attributes from the
194
+ * PTE to tell Device memory from Normal memory. So we make a mostly
195
+ * correct check, and indicate (UNKNOWN, FAULT) for any MMIO.
196
+ * This gives the right answer for the common cases of "Normal memory,
197
+ * backed by host RAM" and "Device memory, backed by MMIO".
198
+ * The architecture allows us to suppress an NF load and return
199
+ * (UNKNOWN, FAULT) for any reason, so our behaviour for the corner
200
+ * case of "Normal memory, backed by MMIO" is permitted. The case we
201
+ * get wrong is "Device memory, backed by host RAM", for which we
202
+ * should return (UNKNOWN, FAULT) for but do not.
203
+ *
204
+ * Similarly, CPU_BP breakpoints would raise exceptions, and so
205
+ * return (UNKNOWN, FAULT). For simplicity, we consider gdb and
206
+ * architectural breakpoints the same.
207
*/
208
- tlb_fn(env, vd, reg_off, addr + mem_off, retaddr);
209
+ if (unlikely(flags & TLB_MMIO)) {
210
+ goto do_fault;
211
+ }
212
213
- /* After any fault, zero any leading predicated false elts. */
214
- swap_memzero(vd, reg_off);
215
- mem_off += 1 << msz;
216
- reg_off += 1 << esz;
217
+ reg_last = info.reg_off_last[0];
218
+ host = info.page[0].host;
219
220
- /* Try again to read the balance of the page. */
221
- split = max_for_page(addr, mem_off - 1, mem_max);
222
- if (split >= (1 << msz)) {
223
- host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
224
- if (host) {
225
- host -= mem_off;
226
- do {
227
+ do {
228
+ uint64_t pg = *(uint64_t *)(vg + (reg_off >> 3));
229
+ do {
230
+ if ((pg >> (reg_off & 63)) & 1) {
231
+ if (unlikely(flags & TLB_WATCHPOINT) &&
232
+ (cpu_watchpoint_address_matches
233
+ (env_cpu(env), addr + mem_off, 1 << msz)
234
+ & BP_MEM_READ)) {
235
+ goto do_fault;
236
+ }
237
+ /* TODO: MTE check. */
238
host_fn(vd, reg_off, host + mem_off);
239
- reg_off += 1 << esz;
240
- reg_off = find_next_active(vg, reg_off, reg_max, esz);
241
- mem_off = reg_off >> diffsz;
242
- } while (split - mem_off >= (1 << msz));
243
- }
244
- }
245
-
246
- record_fault(env, reg_off, reg_max);
247
-}
248
-
249
-/*
250
- * Common helper for all contiguous no-fault loads.
251
- */
252
-static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
253
- uint32_t desc, const int esz, const int msz,
254
- sve_ldst1_host_fn *host_fn)
255
-{
256
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
257
- void *vd = &env->vfp.zregs[rd];
258
- const int diffsz = esz - msz;
259
- const intptr_t reg_max = simd_oprsz(desc);
260
- const intptr_t mem_max = reg_max >> diffsz;
261
- const int mmu_idx = cpu_mmu_index(env, false);
262
- intptr_t split, reg_off, mem_off;
263
- void *host;
264
-
265
-#ifdef CONFIG_USER_ONLY
266
- host = tlb_vaddr_to_host(env, addr, MMU_DATA_LOAD, mmu_idx);
267
- if (likely(page_check_range(addr, mem_max, PAGE_READ) == 0)) {
268
- /* The entire operation is valid and will not fault. */
269
- reg_off = 0;
270
- do {
271
- mem_off = reg_off >> diffsz;
272
- host_fn(vd, reg_off, host + mem_off);
273
+ }
274
reg_off += 1 << esz;
275
- reg_off = find_next_active(vg, reg_off, reg_max, esz);
276
- } while (reg_off < reg_max);
277
- return;
278
- }
279
-#endif
280
+ mem_off += 1 << msz;
281
+ } while (reg_off <= reg_last && (reg_off & 63));
282
+ } while (reg_off <= reg_last);
283
284
- /* There will be no fault, so we may modify in advance. */
285
- memset(vd, 0, reg_max);
286
-
287
- /* Skip to the first active element. */
288
- reg_off = find_next_active(vg, 0, reg_max, esz);
289
- if (unlikely(reg_off == reg_max)) {
290
- /* The entire predicate was false; no load occurs. */
291
- return;
292
- }
293
- mem_off = reg_off >> diffsz;
294
-
295
-#ifdef CONFIG_USER_ONLY
296
- if (page_check_range(addr + mem_off, 1 << msz, PAGE_READ) == 0) {
297
- /* At least one load is valid; take the rest of the page. */
298
- split = max_for_page(addr, mem_off + (1 << msz) - 1, mem_max);
299
- do {
300
- host_fn(vd, reg_off, host + mem_off);
301
- reg_off += 1 << esz;
302
- reg_off = find_next_active(vg, reg_off, reg_max, esz);
303
- mem_off = reg_off >> diffsz;
304
- } while (split - mem_off >= (1 << msz));
305
- }
306
-#else
307
/*
308
- * If the address is not in the TLB, we have no way to bring the
309
- * entry into the TLB without also risking a fault. Note that
310
- * the corollary is that we never load from an address not in RAM.
311
- *
312
- * This last is out of spec, in a weird corner case.
313
- * Per the MemNF/MemSingleNF pseudocode, a NF load from Device memory
314
- * must not actually hit the bus -- it returns UNKNOWN data instead.
315
- * But if you map non-RAM with Normal memory attributes and do a NF
316
- * load then it should access the bus. (Nobody ought actually do this
317
- * in the real world, obviously.)
318
- *
319
- * Then there are the annoying special cases with watchpoints...
320
- * TODO: Add a form of non-faulting loads using cc->tlb_fill(probe=true).
321
+ * MemSingleNF is allowed to fail for any reason. We have special
322
+ * code above to handle the first element crossing a page boundary.
323
+ * As an implementation choice, decline to handle a cross-page element
324
+ * in any other position.
325
*/
326
- host = tlb_vaddr_to_host(env, addr + mem_off, MMU_DATA_LOAD, mmu_idx);
327
- split = max_for_page(addr, mem_off, mem_max);
328
- if (host && split >= (1 << msz)) {
329
- host -= mem_off;
330
- do {
331
- host_fn(vd, reg_off, host + mem_off);
332
- reg_off += 1 << esz;
333
- reg_off = find_next_active(vg, reg_off, reg_max, esz);
334
- mem_off = reg_off >> diffsz;
335
- } while (split - mem_off >= (1 << msz));
336
+ reg_off = info.reg_off_split;
337
+ if (reg_off >= 0) {
338
+ goto do_fault;
339
}
340
-#endif
341

+ second_page:
+ reg_off = info.reg_off_first[1];
+ if (likely(reg_off < 0)) {
+ /* No active elements on the second page. All done. */
+ return;
+ }
+
+ /*
+ * MemSingleNF is allowed to fail for any reason. As an implementation
+ * choice, decline to handle elements on the second page. This should
+ * be low frequency as the guest walks through memory -- the next
+ * iteration of the guest's loop should be aligned on the page boundary,
+ * and then all following iterations will stay aligned.
+ */
+
+ do_fault:
 record_fault(env, reg_off, reg_max);
 }

@@ -XXX,XX +XXX,XX @@ static void sve_ldnf1_r(CPUARMState *env, void *vg, const target_ulong addr,
 void HELPER(sve_ldff1##PART##_r)(CPUARMState *env, void *vg, \
 target_ulong addr, uint32_t desc) \
 { \
- sve_ldff1_r(env, vg, addr, desc, GETPC(), ESZ, 0, \
- sve_ld1##PART##_host, sve_ld1##PART##_tlb); \
+ sve_ldnfff1_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, FAULT_FIRST, \
+ sve_ld1##PART##_host, sve_ld1##PART##_tlb); \
 } \
 void HELPER(sve_ldnf1##PART##_r)(CPUARMState *env, void *vg, \
 target_ulong addr, uint32_t desc) \
 { \
- sve_ldnf1_r(env, vg, addr, desc, ESZ, 0, sve_ld1##PART##_host); \
+ sve_ldnfff1_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, FAULT_NO, \
+ sve_ld1##PART##_host, sve_ld1##PART##_tlb); \
 }

 #define DO_LDFF1_LDNF1_2(PART, ESZ, MSZ) \
 void HELPER(sve_ldff1##PART##_le_r)(CPUARMState *env, void *vg, \
 target_ulong addr, uint32_t desc) \
 { \
- sve_ldff1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, \
- sve_ld1##PART##_le_host, sve_ld1##PART##_le_tlb); \
+ sve_ldnfff1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, FAULT_FIRST, \
+ sve_ld1##PART##_le_host, sve_ld1##PART##_le_tlb); \
 } \
 void HELPER(sve_ldnf1##PART##_le_r)(CPUARMState *env, void *vg, \
 target_ulong addr, uint32_t desc) \
 { \
- sve_ldnf1_r(env, vg, addr, desc, ESZ, MSZ, sve_ld1##PART##_le_host); \
+ sve_ldnfff1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, FAULT_NO, \
+ sve_ld1##PART##_le_host, sve_ld1##PART##_le_tlb); \
 } \
 void HELPER(sve_ldff1##PART##_be_r)(CPUARMState *env, void *vg, \
 target_ulong addr, uint32_t desc) \
 { \
- sve_ldff1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, \
- sve_ld1##PART##_be_host, sve_ld1##PART##_be_tlb); \
+ sve_ldnfff1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, FAULT_FIRST, \
+ sve_ld1##PART##_be_host, sve_ld1##PART##_be_tlb); \
 } \
 void HELPER(sve_ldnf1##PART##_be_r)(CPUARMState *env, void *vg, \
 target_ulong addr, uint32_t desc) \
 { \
- sve_ldnf1_r(env, vg, addr, desc, ESZ, MSZ, sve_ld1##PART##_be_host); \
+ sve_ldnfff1_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, FAULT_NO, \
+ sve_ld1##PART##_be_host, sve_ld1##PART##_be_tlb); \
 }

-DO_LDFF1_LDNF1_1(bb, 0)
-DO_LDFF1_LDNF1_1(bhu, 1)
-DO_LDFF1_LDNF1_1(bhs, 1)
-DO_LDFF1_LDNF1_1(bsu, 2)
-DO_LDFF1_LDNF1_1(bss, 2)
-DO_LDFF1_LDNF1_1(bdu, 3)
-DO_LDFF1_LDNF1_1(bds, 3)
+DO_LDFF1_LDNF1_1(bb, MO_8)
+DO_LDFF1_LDNF1_1(bhu, MO_16)
+DO_LDFF1_LDNF1_1(bhs, MO_16)
+DO_LDFF1_LDNF1_1(bsu, MO_32)
+DO_LDFF1_LDNF1_1(bss, MO_32)
+DO_LDFF1_LDNF1_1(bdu, MO_64)
+DO_LDFF1_LDNF1_1(bds, MO_64)

-DO_LDFF1_LDNF1_2(hh, 1, 1)
-DO_LDFF1_LDNF1_2(hsu, 2, 1)
-DO_LDFF1_LDNF1_2(hss, 2, 1)
-DO_LDFF1_LDNF1_2(hdu, 3, 1)
-DO_LDFF1_LDNF1_2(hds, 3, 1)
+DO_LDFF1_LDNF1_2(hh, MO_16, MO_16)
+DO_LDFF1_LDNF1_2(hsu, MO_32, MO_16)
+DO_LDFF1_LDNF1_2(hss, MO_32, MO_16)
+DO_LDFF1_LDNF1_2(hdu, MO_64, MO_16)
+DO_LDFF1_LDNF1_2(hds, MO_64, MO_16)

-DO_LDFF1_LDNF1_2(ss, 2, 2)
-DO_LDFF1_LDNF1_2(sdu, 3, 2)
-DO_LDFF1_LDNF1_2(sds, 3, 2)
+DO_LDFF1_LDNF1_2(ss, MO_32, MO_32)
+DO_LDFF1_LDNF1_2(sdu, MO_64, MO_32)
+DO_LDFF1_LDNF1_2(sds, MO_64, MO_32)

-DO_LDFF1_LDNF1_2(dd, 3, 3)
+DO_LDFF1_LDNF1_2(dd, MO_64, MO_64)

 #undef DO_LDFF1_LDNF1_1
 #undef DO_LDFF1_LDNF1_2
--
2.20.1

+ return;
+ }
+
+ memory_region_add_subregion(&s->peri_mr, USB_OTG_OFFSET,
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dwc2), 0));
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->dwc2), 0,
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
+ INTERRUPT_USB));
+
 create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
 create_unimp(s, &s->cprman, "bcm2835-cprman", CPRMAN_OFFSET, 0x1000);
 create_unimp(s, &s->a2w, "bcm2835-a2w", A2W_OFFSET, 0x1000);
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
 create_unimp(s, &s->otp, "bcm2835-otp", OTP_OFFSET, 0x80);
 create_unimp(s, &s->dbus, "bcm2835-dbus", DBUS_OFFSET, 0x8000);
 create_unimp(s, &s->ave0, "bcm2835-ave0", AVE0_OFFSET, 0x8000);
- create_unimp(s, &s->dwc2, "dwc-usb2", USB_OTG_OFFSET, 0x1000);
 create_unimp(s, &s->sdramc, "bcm2835-sdramc", SDRAMC_OFFSET, 0x100);
 }
--
2.20.1
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Use ARRAY_SIZE() to iterate over ARMCPUInfo[].

Since on the aarch64-linux-user build, arm_cpus[] is empty, add
the cpu_count variable and only iterate when it is non-zero.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200504172448.9402-4-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 16 +++++++++-------
 target/arm/cpu64.c | 8 +++-----
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_cpus[] = {
 { .name = "any", .initfn = arm_max_initfn },
 #endif
 #endif
- { .name = NULL }
 };

 static Property arm_cpu_properties[] = {
@@ -XXX,XX +XXX,XX @@ static const TypeInfo idau_interface_type_info = {

 static void arm_cpu_register_types(void)
 {
- const ARMCPUInfo *info = arm_cpus;
+ const size_t cpu_count = ARRAY_SIZE(arm_cpus);

 type_register_static(&arm_cpu_type_info);
 type_register_static(&idau_interface_type_info);

- while (info->name) {
- arm_cpu_register(info);
- info++;
- }
-
 #ifdef CONFIG_KVM
 type_register_static(&host_arm_cpu_type_info);
 #endif
+
+ if (cpu_count) {
+ size_t i;
+
+ for (i = 0; i < cpu_count; ++i) {
+ arm_cpu_register(&arm_cpus[i]);
+ }
+ }
 }

 type_init(arm_cpu_register_types)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
 { .name = "cortex-a53", .initfn = aarch64_a53_initfn },
 { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
 { .name = "max", .initfn = aarch64_max_initfn },
- { .name = NULL }
 };

 static bool aarch64_cpu_get_aarch64(Object *obj, Error **errp)
@@ -XXX,XX +XXX,XX @@ static const TypeInfo aarch64_cpu_type_info = {

 static void aarch64_cpu_register_types(void)
 {
- const ARMCPUInfo *info = aarch64_cpus;
+ size_t i;

 type_register_static(&aarch64_cpu_type_info);

- while (info->name) {
- aarch64_cpu_register(info);
- info++;
+ for (i = 0; i < ARRAY_SIZE(aarch64_cpus); ++i) {
+ aarch64_cpu_register(&aarch64_cpus[i]);
 }
 }

--
2.20.1

From: Paul Zimmerman <pauldzim@gmail.com>

Add a check for functional dwc-hsotg (dwc2) USB host emulation to
the Raspi 2 acceptance test

Signed-off-by: Paul Zimmerman <pauldzim@gmail.com>
Reviewed-by: Philippe Mathieu-Daude <f4bug@amsat.org>
Message-id: 20200520235349.21215-8-pauldzim@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/acceptance/boot_linux_console.py | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/acceptance/boot_linux_console.py
+++ b/tests/acceptance/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ class BootLinuxConsole(LinuxKernelTest):

 self.vm.set_console()
 kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
- serial_kernel_cmdline[uart_id])
+ serial_kernel_cmdline[uart_id] +
+ ' root=/dev/mmcblk0p2 rootwait ' +
+ 'dwc_otg.fiq_fsm_enable=0')
 self.vm.add_args('-kernel', kernel_path,
 '-dtb', dtb_path,
- '-append', kernel_command_line)
+ '-append', kernel_command_line,
+ '-device', 'usb-kbd')
 self.vm.launch()
 console_pattern = 'Kernel command line: %s' % kernel_command_line
 self.wait_for_console_pattern(console_pattern)
+ console_pattern = 'Product: QEMU USB Keyboard'
+ self.wait_for_console_pattern(console_pattern)

 def test_arm_raspi2_uart0(self):
 """
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Follow the model set up for contiguous loads. This handles
watchpoints correctly for contiguous stores, recognizing the
exception before any changes to memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 285 ++++++++++++++++++++++------------------
 1 file changed, 159 insertions(+), 126 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
 *(TYPEE *)(vd + H(reg_off)) = val; \
 }

+#define DO_ST_HOST(NAME, H, TYPEE, TYPEM, HOST) \
+static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
+{ HOST(host, (TYPEM)*(TYPEE *)(vd + H(reg_off))); }
+
 #define DO_LD_TLB(NAME, H, TYPEE, TYPEM, TLB) \
 static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
 target_ulong addr, uintptr_t ra) \
@@ -XXX,XX +XXX,XX @@ DO_LD_PRIM_1(ld1bdu, , uint64_t, uint8_t)
 DO_LD_PRIM_1(ld1bds, , uint64_t, int8_t)

 #define DO_ST_PRIM_1(NAME, H, TE, TM) \
+ DO_ST_HOST(st1##NAME, H, TE, TM, stb_p) \
 DO_ST_TLB(st1##NAME, H, TE, TM, cpu_stb_data_ra)

 DO_ST_PRIM_1(bb, H1, uint8_t, uint8_t)
@@ -XXX,XX +XXX,XX @@ DO_ST_PRIM_1(bd, , uint64_t, uint8_t)
 DO_LD_TLB(ld1##NAME##_le, H, TE, TM, cpu_##LD##_le_data_ra)

 #define DO_ST_PRIM_2(NAME, H, TE, TM, ST) \
+ DO_ST_HOST(st1##NAME##_be, H, TE, TM, ST##_be_p) \
+ DO_ST_HOST(st1##NAME##_le, H, TE, TM, ST##_le_p) \
 DO_ST_TLB(st1##NAME##_be, H, TE, TM, cpu_##ST##_be_data_ra) \
 DO_ST_TLB(st1##NAME##_le, H, TE, TM, cpu_##ST##_le_data_ra)

@@ -XXX,XX +XXX,XX @@ DO_LDFF1_LDNF1_2(dd, MO_64, MO_64)
 #undef DO_LDFF1_LDNF1_2

 /*
- * Common helpers for all contiguous 1,2,3,4-register predicated stores.
+ * Common helper for all contiguous 1,2,3,4-register predicated stores.
 */
-static void sve_st1_r(CPUARMState *env, void *vg, target_ulong addr,
- uint32_t desc, const uintptr_t ra,
- const int esize, const int msize,
- sve_ldst1_tlb_fn *tlb_fn)
+
+static inline QEMU_ALWAYS_INLINE
+void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr, uint32_t desc,
+ const uintptr_t retaddr, const int esz,
+ const int msz, const int N,
+ sve_ldst1_host_fn *host_fn,
+ sve_ldst1_tlb_fn *tlb_fn)
 {
 const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
- intptr_t i, oprsz = simd_oprsz(desc);
- void *vd = &env->vfp.zregs[rd];
+ const intptr_t reg_max = simd_oprsz(desc);
+ intptr_t reg_off, reg_last, mem_off;
+ SVEContLdSt info;
+ void *host;
+ int i, flags;

- for (i = 0; i < oprsz; ) {
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
- do {
- if (pg & 1) {
- tlb_fn(env, vd, i, addr, ra);
+ /* Find the active elements. */
+ if (!sve_cont_ldst_elements(&info, addr, vg, reg_max, esz, N << msz)) {
+ /* The entire predicate was false; no store occurs. */
+ return;
+ }
+
+ /* Probe the page(s). Exit with exception for any invalid page. */
+ sve_cont_ldst_pages(&info, FAULT_ALL, env, addr, MMU_DATA_STORE, retaddr);
+
+ /* Handle watchpoints for all active elements. */
+ sve_cont_ldst_watchpoints(&info, env, vg, addr, 1 << esz, N << msz,
+ BP_MEM_WRITE, retaddr);
+
+ /* TODO: MTE check. */
+
+ flags = info.page[0].flags | info.page[1].flags;
+ if (unlikely(flags != 0)) {
+#ifdef CONFIG_USER_ONLY
+ g_assert_not_reached();
+#else
+ /*
+ * At least one page includes MMIO.
+ * Any bus operation can fail with cpu_transaction_failed,
+ * which for ARM will raise SyncExternal. We cannot avoid
+ * this fault and will leave with the store incomplete.
+ */
+ mem_off = info.mem_off_first[0];
+ reg_off = info.reg_off_first[0];
+ reg_last = info.reg_off_last[1];
+ if (reg_last < 0) {
+ reg_last = info.reg_off_split;
+ if (reg_last < 0) {
+ reg_last = info.reg_off_last[0];
 }
- i += esize, pg >>= esize;
- addr += msize;
- } while (i & 15);
+ }
+
+ do {
+ uint64_t pg = vg[reg_off >> 6];
+ do {
+ if ((pg >> (reg_off & 63)) & 1) {
+ for (i = 0; i < N; ++i) {
+ tlb_fn(env, &env->vfp.zregs[(rd + i) & 31], reg_off,
+ addr + mem_off + (i << msz), retaddr);
+ }
+ }
+ reg_off += 1 << esz;
+ mem_off += N << msz;
+ } while (reg_off & 63);
+ } while (reg_off <= reg_last);
+ return;
+#endif
+ }
+
+ mem_off = info.mem_off_first[0];
+ reg_off = info.reg_off_first[0];
+ reg_last = info.reg_off_last[0];
+ host = info.page[0].host;
+
+ while (reg_off <= reg_last) {
+ uint64_t pg = vg[reg_off >> 6];
+ do {
+ if ((pg >> (reg_off & 63)) & 1) {
+ for (i = 0; i < N; ++i) {
+ host_fn(&env->vfp.zregs[(rd + i) & 31], reg_off,
+ host + mem_off + (i << msz));
+ }
+ }
+ reg_off += 1 << esz;
+ mem_off += N << msz;
+ } while (reg_off <= reg_last && (reg_off & 63));
+ }
+
+ /*
+ * Use the slow path to manage the cross-page misalignment.
+ * But we know this is RAM and cannot trap.
+ */
+ mem_off = info.mem_off_split;
+ if (unlikely(mem_off >= 0)) {
+ reg_off = info.reg_off_split;
+ for (i = 0; i < N; ++i) {
+ tlb_fn(env, &env->vfp.zregs[(rd + i) & 31], reg_off,
+ addr + mem_off + (i << msz), retaddr);
+ }
+ }
+
+ mem_off = info.mem_off_first[1];
+ if (unlikely(mem_off >= 0)) {
+ reg_off = info.reg_off_first[1];
+ reg_last = info.reg_off_last[1];
+ host = info.page[1].host;
+
+ do {
+ uint64_t pg = vg[reg_off >> 6];
+ do {
+ if ((pg >> (reg_off & 63)) & 1) {
+ for (i = 0; i < N; ++i) {
+ host_fn(&env->vfp.zregs[(rd + i) & 31], reg_off,
+ host + mem_off + (i << msz));
+ }
+ }
+ reg_off += 1 << esz;
+ mem_off += N << msz;
+ } while (reg_off & 63);
+ } while (reg_off <= reg_last);
 }
 }

-static void sve_st2_r(CPUARMState *env, void *vg, target_ulong addr,
- uint32_t desc, const uintptr_t ra,
- const int esize, const int msize,
- sve_ldst1_tlb_fn *tlb_fn)
-{
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
- intptr_t i, oprsz = simd_oprsz(desc);
- void *d1 = &env->vfp.zregs[rd];
- void *d2 = &env->vfp.zregs[(rd + 1) & 31];
-
- for (i = 0; i < oprsz; ) {
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
- do {
- if (pg & 1) {
- tlb_fn(env, d1, i, addr, ra);
- tlb_fn(env, d2, i, addr + msize, ra);
- }
- i += esize, pg >>= esize;
- addr += 2 * msize;
- } while (i & 15);
- }
-}
-
-static void sve_st3_r(CPUARMState *env, void *vg, target_ulong addr,
- uint32_t desc, const uintptr_t ra,
- const int esize, const int msize,
- sve_ldst1_tlb_fn *tlb_fn)
-{
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
- intptr_t i, oprsz = simd_oprsz(desc);
- void *d1 = &env->vfp.zregs[rd];
- void *d2 = &env->vfp.zregs[(rd + 1) & 31];
- void *d3 = &env->vfp.zregs[(rd + 2) & 31];
-
- for (i = 0; i < oprsz; ) {
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
- do {
- if (pg & 1) {
- tlb_fn(env, d1, i, addr, ra);
- tlb_fn(env, d2, i, addr + msize, ra);
- tlb_fn(env, d3, i, addr + 2 * msize, ra);
- }
- i += esize, pg >>= esize;
- addr += 3 * msize;
- } while (i & 15);
- }
-}
-
-static void sve_st4_r(CPUARMState *env, void *vg, target_ulong addr,
- uint32_t desc, const uintptr_t ra,
- const int esize, const int msize,
- sve_ldst1_tlb_fn *tlb_fn)
-{
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
- intptr_t i, oprsz = simd_oprsz(desc);
- void *d1 = &env->vfp.zregs[rd];
- void *d2 = &env->vfp.zregs[(rd + 1) & 31];
- void *d3 = &env->vfp.zregs[(rd + 2) & 31];
- void *d4 = &env->vfp.zregs[(rd + 3) & 31];
-
- for (i = 0; i < oprsz; ) {
- uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
- do {
- if (pg & 1) {
- tlb_fn(env, d1, i, addr, ra);
- tlb_fn(env, d2, i, addr + msize, ra);
- tlb_fn(env, d3, i, addr + 2 * msize, ra);
- tlb_fn(env, d4, i, addr + 3 * msize, ra);
- }
- i += esize, pg >>= esize;
- addr += 4 * msize;
- } while (i & 15);
- }
-}
-
-#define DO_STN_1(N, NAME, ESIZE) \
-void QEMU_FLATTEN HELPER(sve_st##N##NAME##_r) \
- (CPUARMState *env, void *vg, target_ulong addr, uint32_t desc) \
+#define DO_STN_1(N, NAME, ESZ) \
+void HELPER(sve_st##N##NAME##_r)(CPUARMState *env, void *vg, \
+ target_ulong addr, uint32_t desc) \
 { \
- sve_st##N##_r(env, vg, addr, desc, GETPC(), ESIZE, 1, \
- sve_st1##NAME##_tlb); \
+ sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, N, \
+ sve_st1##NAME##_host, sve_st1##NAME##_tlb); \
 }

-#define DO_STN_2(N, NAME, ESIZE, MSIZE) \
-void QEMU_FLATTEN HELPER(sve_st##N##NAME##_le_r) \
- (CPUARMState *env, void *vg, target_ulong addr, uint32_t desc) \
+#define DO_STN_2(N, NAME, ESZ, MSZ) \
+void HELPER(sve_st##N##NAME##_le_r)(CPUARMState *env, void *vg, \
+ target_ulong addr, uint32_t desc) \
 { \
- sve_st##N##_r(env, vg, addr, desc, GETPC(), ESIZE, MSIZE, \
- sve_st1##NAME##_le_tlb); \
+ sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, \
+ sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb); \
 } \
-void QEMU_FLATTEN HELPER(sve_st##N##NAME##_be_r) \
- (CPUARMState *env, void *vg, target_ulong addr, uint32_t desc) \
+void HELPER(sve_st##N##NAME##_be_r)(CPUARMState *env, void *vg, \
+ target_ulong addr, uint32_t desc) \
 { \
- sve_st##N##_r(env, vg, addr, desc, GETPC(), ESIZE, MSIZE, \
- sve_st1##NAME##_be_tlb); \
+ sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, \
+ sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb); \
 }

-DO_STN_1(1, bb, 1)
-DO_STN_1(1, bh, 2)
-DO_STN_1(1, bs, 4)
-DO_STN_1(1, bd, 8)
-DO_STN_1(2, bb, 1)
-DO_STN_1(3, bb, 1)
-DO_STN_1(4, bb, 1)
+DO_STN_1(1, bb, MO_8)
+DO_STN_1(1, bh, MO_16)
+DO_STN_1(1, bs, MO_32)
+DO_STN_1(1, bd, MO_64)
+DO_STN_1(2, bb, MO_8)
+DO_STN_1(3, bb, MO_8)
+DO_STN_1(4, bb, MO_8)

-DO_STN_2(1, hh, 2, 2)
-DO_STN_2(1, hs, 4, 2)
-DO_STN_2(1, hd, 8, 2)
-DO_STN_2(2, hh, 2, 2)
-DO_STN_2(3, hh, 2, 2)
-DO_STN_2(4, hh, 2, 2)
+DO_STN_2(1, hh, MO_16, MO_16)
+DO_STN_2(1, hs, MO_32, MO_16)
+DO_STN_2(1, hd, MO_64, MO_16)
+DO_STN_2(2, hh, MO_16, MO_16)
+DO_STN_2(3, hh, MO_16, MO_16)
+DO_STN_2(4, hh, MO_16, MO_16)

-DO_STN_2(1, ss, 4, 4)
-DO_STN_2(1, sd, 8, 4)
-DO_STN_2(2, ss, 4, 4)

Convert the VSHL and VSLI insns from the Neon 2-registers-and-a-shift
group to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-2-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 25 ++++++++++++++++++++++
 target/arm/translate-neon.inc.c | 38 +++++++++++++++++++++++++++++++++
 target/arm/translate.c | 18 +++++++---------
 3 files changed, 71 insertions(+), 10 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VRECPS_fp_3s 1111 001 0 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
 VRSQRTS_fp_3s 1111 001 0 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
 VMAXNM_fp_3s 1111 001 1 0 . 0 . .... .... 1111 ... 1 .... @3same_fp
 VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
+
+######################################################################
+# 2-reg-and-shift grouping:
+# 1111 001 U 1 D immH:3 immL:3 Vd:4 opc:4 L Q M 1 Vm:4
+######################################################################
+&2reg_shift vm vd q shift size
+
+@2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=3
+@2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2
+@2reg_shl_h .... ... . . . 01 shift:4 .... .... 0 q:1 . . .... \
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1
+@2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0
+
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
+VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
+
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
+VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool do_3same_fp_pair(DisasContext *s, arg_3same *a, VFPGen3OpSPFn *fn)
 DO_3S_FP_PAIR(VPADD, gen_helper_vfp_adds)
 DO_3S_FP_PAIR(VPMAX, gen_helper_vfp_maxs)
 DO_3S_FP_PAIR(VPMIN, gen_helper_vfp_mins)
+
+static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
+{
+ /* Handle a 2-reg-shift insn which can be vectorized. */
+ int vec_size = a->q ? 16 : 8;
+ int rd_ofs = neon_reg_offset(a->vd, 0);
+ int rm_ofs = neon_reg_offset(a->vm, 0);
+
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+ return false;
+ }
+
+ /* UNDEF accesses to D16-D31 if they don't exist. */
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
+ ((a->vd | a->vm) & 0x10)) {
+ return false;
+ }
+
+ if ((a->vm | a->vd) & a->q) {
+ return false;
+ }
+
+ if (!vfp_access_check(s)) {
+ return true;
+ }
+
+ fn(a->size, rd_ofs, rm_ofs, a->shift, vec_size, vec_size);
+ return true;
+}
+
+#define DO_2SH(INSN, FUNC) \
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
+ { \
+ return do_vector_2sh(s, a, FUNC); \
+ } \
+
+DO_2SH(VSHL, tcg_gen_gvec_shli)
+DO_2SH(VSLI, gen_gvec_sli)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 if ((insn & 0x00380080) != 0) {
 /* Two registers and shift. */
 op = (insn >> 8) & 0xf;
+
+ switch (op) {
+ case 5: /* VSHL, VSLI */
+ return 1; /* handled by decodetree */
+ default:
+ break;
+ }
+
 if (insn & (1 << 7)) {
 /* 64-bit shift. */
 if (op > 7) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
 vec_size, vec_size);
 return 0;
-
- case 5: /* VSHL, VSLI */
- if (u) { /* VSLI */
- gen_gvec_sli(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- } else { /* VSHL */
- tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- }
- return 0;
 }

 if (size == 3) {
332
-DO_STN_2(3, ss, 4, 4)
333
-DO_STN_2(4, ss, 4, 4)
334
+DO_STN_2(1, ss, MO_32, MO_32)
335
+DO_STN_2(1, sd, MO_64, MO_32)
336
+DO_STN_2(2, ss, MO_32, MO_32)
337
+DO_STN_2(3, ss, MO_32, MO_32)
338
+DO_STN_2(4, ss, MO_32, MO_32)
339
340
-DO_STN_2(1, dd, 8, 8)
341
-DO_STN_2(2, dd, 8, 8)
342
-DO_STN_2(3, dd, 8, 8)
343
-DO_STN_2(4, dd, 8, 8)
344
+DO_STN_2(1, dd, MO_64, MO_64)
345
+DO_STN_2(2, dd, MO_64, MO_64)
346
+DO_STN_2(3, dd, MO_64, MO_64)
347
+DO_STN_2(4, dd, MO_64, MO_64)
348
349
#undef DO_STN_1
350
#undef DO_STN_2
351
--
128
--
352
2.20.1
129
2.20.1
353
130
354
131
From: Richard Henderson <richard.henderson@linaro.org>

We currently have target-endian versions of these operations,
but no easy way to force a specific endianness.  This can be
helpful if the target has endian-specific operations, or a mode
that swaps endianness.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/devel/loads-stores.rst |  39 +++--
 include/exec/cpu_ldst.h     | 283 +++++++++++++++++++++++++++---------
 accel/tcg/cputlb.c          | 236 ++++++++++++++++++++++--------
 accel/tcg/user-exec.c       | 211 ++++++++++++++++++++++-----
 4 files changed, 587 insertions(+), 182 deletions(-)

Convert the VSHR 2-reg-shift insns to decodetree.

Note that unlike the legacy decoder, we present the right shift
amount to the trans_ function as a positive integer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-3-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode       | 25 ++++++++++++++++++++
 target/arm/translate-neon.inc.c | 41 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 21 +----------------
 3 files changed, 67 insertions(+), 20 deletions(-)
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ function, which is a return address into the generated code.
 
 Function names follow the pattern:
 
-load: ``cpu_ld{sign}{size}_mmuidx_ra(env, ptr, mmuidx, retaddr)``
+load: ``cpu_ld{sign}{size}{end}_mmuidx_ra(env, ptr, mmuidx, retaddr)``
 
-store: ``cpu_st{size}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
+store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
 
 ``sign``
  - (empty) : for 32 or 64 bit sizes
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
  - ``l`` : 32 bits
  - ``q`` : 64 bits
 
+``end``
+ - (empty) : for target endian, or 8 bit sizes
+ - ``_be`` : big endian
+ - ``_le`` : little endian
+
 Regexes for git grep:
- - ``\<cpu_ld[us]\?[bwlq]_mmuidx_ra\>``
- - ``\<cpu_st[bwlq]_mmuidx_ra\>``
+ - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_mmuidx_ra\>``
+ - ``\<cpu_st[bwlq](_[bl]e)\?_mmuidx_ra\>``
 
 ``cpu_{ld,st}*_data_ra``
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ be performed with a context other than the default.
 
 Function names follow the pattern:
 
-load: ``cpu_ld{sign}{size}_data_ra(env, ptr, ra)``
+load: ``cpu_ld{sign}{size}{end}_data_ra(env, ptr, ra)``
 
-store: ``cpu_st{size}_data_ra(env, ptr, val, ra)``
+store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``
 
 ``sign``
  - (empty) : for 32 or 64 bit sizes
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}_data_ra(env, ptr, val, ra)``
  - ``l`` : 32 bits
  - ``q`` : 64 bits
 
+``end``
+ - (empty) : for target endian, or 8 bit sizes
+ - ``_be`` : big endian
+ - ``_le`` : little endian
+
 Regexes for git grep:
- - ``\<cpu_ld[us]\?[bwlq]_data_ra\>``
- - ``\<cpu_st[bwlq]_data_ra\>``
+ - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data_ra\>``
+ - ``\<cpu_st[bwlq](_[bl]e)\?_data_ra\>``
 
 ``cpu_{ld,st}*_data``
 ~~~~~~~~~~~~~~~~~~~~~
@@ -XXX,XX +XXX,XX @@ the CPU state anyway.
 
 Function names follow the pattern:
 
-load: ``cpu_ld{sign}{size}_data(env, ptr)``
+load: ``cpu_ld{sign}{size}{end}_data(env, ptr)``
 
-store: ``cpu_st{size}_data(env, ptr, val)``
+store: ``cpu_st{size}{end}_data(env, ptr, val)``
 
 ``sign``
  - (empty) : for 32 or 64 bit sizes
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}_data(env, ptr, val)``
  - ``l`` : 32 bits
  - ``q`` : 64 bits
 
+``end``
+ - (empty) : for target endian, or 8 bit sizes
+ - ``_be`` : big endian
+ - ``_le`` : little endian
+
 Regexes for git grep
- - ``\<cpu_ld[us]\?[bwlq]_data\>``
- - ``\<cpu_st[bwlq]_data\+\>``
+ - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data\>``
+ - ``\<cpu_st[bwlq](_[bl]e)\?_data\+\>``
 
 ``cpu_ld*_code``
 ~~~~~~~~~~~~~~~~
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
 ######################################################################
 &2reg_shift vm vd q shift size
 
+# Right shifts are encoded as N - shift, where N is the element size in bits.
+%neon_rshift_i6 16:6 !function=rsub_64
+%neon_rshift_i5 16:5 !function=rsub_32
+%neon_rshift_i4 16:4 !function=rsub_16
+%neon_rshift_i3 16:3 !function=rsub_8
+
+@2reg_shr_d .... ... . . . ...... .... .... 1 q:1 . . .... \
+            &2reg_shift vm=%vm_dp vd=%vd_dp size=3 shift=%neon_rshift_i6
+@2reg_shr_s .... ... . . . 1 ..... .... .... 0 q:1 . . .... \
+            &2reg_shift vm=%vm_dp vd=%vd_dp size=2 shift=%neon_rshift_i5
+@2reg_shr_h .... ... . . . 01 .... .... .... 0 q:1 . . .... \
+            &2reg_shift vm=%vm_dp vd=%vd_dp size=1 shift=%neon_rshift_i4
+@2reg_shr_b .... ... . . . 001 ... .... .... 0 q:1 . . .... \
+            &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i3
+
 @2reg_shl_d .... ... . . . shift:6 .... .... 1 q:1 . . .... \
             &2reg_shift vm=%vm_dp vd=%vd_dp size=3
 @2reg_shl_s .... ... . . . 1 shift:5 .... .... 0 q:1 . . .... \
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
 @2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
             &2reg_shift vm=%vm_dp vd=%vd_dp size=0
 
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
+VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
+
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
+VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b
+
 VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
 VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
 VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@
  *
  * The syntax for the accessors is:
  *
- * load:  cpu_ld{sign}{size}_{mmusuffix}(env, ptr)
- *        cpu_ld{sign}{size}_{mmusuffix}_ra(env, ptr, retaddr)
- *        cpu_ld{sign}{size}_mmuidx_ra(env, ptr, mmu_idx, retaddr)
+ * load:  cpu_ld{sign}{size}{end}_{mmusuffix}(env, ptr)
+ *        cpu_ld{sign}{size}{end}_{mmusuffix}_ra(env, ptr, retaddr)
+ *        cpu_ld{sign}{size}{end}_mmuidx_ra(env, ptr, mmu_idx, retaddr)
  *
- * store: cpu_st{size}_{mmusuffix}(env, ptr, val)
- *        cpu_st{size}_{mmusuffix}_ra(env, ptr, val, retaddr)
- *        cpu_st{size}_mmuidx_ra(env, ptr, val, mmu_idx, retaddr)
+ * store: cpu_st{size}{end}_{mmusuffix}(env, ptr, val)
+ *        cpu_st{size}{end}_{mmusuffix}_ra(env, ptr, val, retaddr)
+ *        cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmu_idx, retaddr)
  *
  * sign is:
  * (empty): for 32 and 64 bit sizes
@@ -XXX,XX +XXX,XX @@
  *   l: 32 bits
  *   q: 64 bits
  *
+ * end is:
+ * (empty): for target native endian, or for 8 bit access
+ *     _be: for forced big endian
+ *     _le: for forced little endian
+ *
  * mmusuffix is one of the generic suffixes "data" or "code", or "mmuidx".
  * The "mmuidx" suffix carries an extra mmu_idx argument that specifies
  * the index to use; the "data" and "code" suffixes take the index from
@@ -XXX,XX +XXX,XX @@ typedef target_ulong abi_ptr;
 #endif
 
 uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr);
-uint32_t cpu_lduw_data(CPUArchState *env, abi_ptr ptr);
-uint32_t cpu_ldl_data(CPUArchState *env, abi_ptr ptr);
-uint64_t cpu_ldq_data(CPUArchState *env, abi_ptr ptr);
 int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr);
-int cpu_ldsw_data(CPUArchState *env, abi_ptr ptr);
 
-uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
-uint32_t cpu_lduw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
-uint32_t cpu_ldl_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
-uint64_t cpu_ldq_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
-int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
-int cpu_ldsw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
+uint32_t cpu_lduw_be_data(CPUArchState *env, abi_ptr ptr);
+int cpu_ldsw_be_data(CPUArchState *env, abi_ptr ptr);
+uint32_t cpu_ldl_be_data(CPUArchState *env, abi_ptr ptr);
+uint64_t cpu_ldq_be_data(CPUArchState *env, abi_ptr ptr);
+
+uint32_t cpu_lduw_le_data(CPUArchState *env, abi_ptr ptr);
+int cpu_ldsw_le_data(CPUArchState *env, abi_ptr ptr);
+uint32_t cpu_ldl_le_data(CPUArchState *env, abi_ptr ptr);
+uint64_t cpu_ldq_le_data(CPUArchState *env, abi_ptr ptr);
+
+uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+
+uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+
+uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
+uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t ra);
 
 void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
-void cpu_stw_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
-void cpu_stl_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
-void cpu_stq_data(CPUArchState *env, abi_ptr ptr, uint64_t val);
+
+void cpu_stw_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stl_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stq_be_data(CPUArchState *env, abi_ptr ptr, uint64_t val);
+
+void cpu_stw_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stl_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stq_le_data(CPUArchState *env, abi_ptr ptr, uint64_t val);
 
 void cpu_stb_data_ra(CPUArchState *env, abi_ptr ptr,
-                     uint32_t val, uintptr_t retaddr);
-void cpu_stw_data_ra(CPUArchState *env, abi_ptr ptr,
-                     uint32_t val, uintptr_t retaddr);
-void cpu_stl_data_ra(CPUArchState *env, abi_ptr ptr,
-                     uint32_t val, uintptr_t retaddr);
-void cpu_stq_data_ra(CPUArchState *env, abi_ptr ptr,
-                     uint64_t val, uintptr_t retaddr);
+                     uint32_t val, uintptr_t ra);
+
+void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr ptr,
+                        uint32_t val, uintptr_t ra);
+void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr ptr,
+                        uint32_t val, uintptr_t ra);
+void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr ptr,
+                        uint64_t val, uintptr_t ra);
+
+void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr ptr,
+                        uint32_t val, uintptr_t ra);
+void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr ptr,
+                        uint32_t val, uintptr_t ra);
+void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr ptr,
+                        uint64_t val, uintptr_t ra);
 
 #if defined(CONFIG_USER_ONLY)
 
@@ -XXX,XX +XXX,XX @@ static inline uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
     return cpu_ldub_data_ra(env, addr, ra);
 }
 
-static inline uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                          int mmu_idx, uintptr_t ra)
-{
-    return cpu_lduw_data_ra(env, addr, ra);
-}
-
-static inline uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                         int mmu_idx, uintptr_t ra)
-{
-    return cpu_ldl_data_ra(env, addr, ra);
-}
-
-static inline uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                         int mmu_idx, uintptr_t ra)
-{
-    return cpu_ldq_data_ra(env, addr, ra);
-}
-
 static inline int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                                      int mmu_idx, uintptr_t ra)
 {
     return cpu_ldsb_data_ra(env, addr, ra);
 }
 
-static inline int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                     int mmu_idx, uintptr_t ra)
+static inline uint32_t cpu_lduw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                             int mmu_idx, uintptr_t ra)
 {
-    return cpu_ldsw_data_ra(env, addr, ra);
+    return cpu_lduw_be_data_ra(env, addr, ra);
+}
+
+static inline int cpu_ldsw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        int mmu_idx, uintptr_t ra)
+{
+    return cpu_ldsw_be_data_ra(env, addr, ra);
+}
+
+static inline uint32_t cpu_ldl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                            int mmu_idx, uintptr_t ra)
+{
+    return cpu_ldl_be_data_ra(env, addr, ra);
+}
+
+static inline uint64_t cpu_ldq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                            int mmu_idx, uintptr_t ra)
+{
+    return cpu_ldq_be_data_ra(env, addr, ra);
+}
+
+static inline uint32_t cpu_lduw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                             int mmu_idx, uintptr_t ra)
+{
+    return cpu_lduw_le_data_ra(env, addr, ra);
+}
+
+static inline int cpu_ldsw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        int mmu_idx, uintptr_t ra)
+{
+    return cpu_ldsw_le_data_ra(env, addr, ra);
+}
+
+static inline uint32_t cpu_ldl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                            int mmu_idx, uintptr_t ra)
+{
+    return cpu_ldl_le_data_ra(env, addr, ra);
+}
+
+static inline uint64_t cpu_ldq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                            int mmu_idx, uintptr_t ra)
+{
+    return cpu_ldq_le_data_ra(env, addr, ra);
 }
 
 static inline void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
@@ -XXX,XX +XXX,XX @@ static inline void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
     cpu_stb_data_ra(env, addr, val, ra);
 }
 
-static inline void cpu_stw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                     uint32_t val, int mmu_idx, uintptr_t ra)
+static inline void cpu_stw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        uint32_t val, int mmu_idx,
+                                        uintptr_t ra)
 {
-    cpu_stw_data_ra(env, addr, val, ra);
+    cpu_stw_be_data_ra(env, addr, val, ra);
 }
 
-static inline void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                     uint32_t val, int mmu_idx, uintptr_t ra)
+static inline void cpu_stl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        uint32_t val, int mmu_idx,
+                                        uintptr_t ra)
 {
-    cpu_stl_data_ra(env, addr, val, ra);
+    cpu_stl_be_data_ra(env, addr, val, ra);
 }
 
-static inline void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                                     uint64_t val, int mmu_idx, uintptr_t ra)
+static inline void cpu_stq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        uint64_t val, int mmu_idx,
+                                        uintptr_t ra)
 {
-    cpu_stq_data_ra(env, addr, val, ra);
+    cpu_stq_be_data_ra(env, addr, val, ra);
+}
+
+static inline void cpu_stw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        uint32_t val, int mmu_idx,
+                                        uintptr_t ra)
+{
+    cpu_stw_le_data_ra(env, addr, val, ra);
+}
+
+static inline void cpu_stl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        uint32_t val, int mmu_idx,
+                                        uintptr_t ra)
+{
+    cpu_stl_le_data_ra(env, addr, val, ra);
+}
+
+static inline void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                                        uint64_t val, int mmu_idx,
+                                        uintptr_t ra)
+{
+    cpu_stq_le_data_ra(env, addr, val, ra);
 }
 
 #else
@@ -XXX,XX +XXX,XX @@ static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
 
 uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                             int mmu_idx, uintptr_t ra);
-uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                            int mmu_idx, uintptr_t ra);
-uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                           int mmu_idx, uintptr_t ra);
-uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                           int mmu_idx, uintptr_t ra);
-
 int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                        int mmu_idx, uintptr_t ra);
-int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                       int mmu_idx, uintptr_t ra);
+
+uint32_t cpu_lduw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                               int mmu_idx, uintptr_t ra);
+int cpu_ldsw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                          int mmu_idx, uintptr_t ra);
+uint32_t cpu_ldl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra);
+uint64_t cpu_ldq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra);
+
+uint32_t cpu_lduw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                               int mmu_idx, uintptr_t ra);
+int cpu_ldsw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                          int mmu_idx, uintptr_t ra);
+uint32_t cpu_ldl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra);
+uint64_t cpu_ldq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra);
 
 void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
                        int mmu_idx, uintptr_t retaddr);
-void cpu_stw_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
-                       int mmu_idx, uintptr_t retaddr);
-void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
-                       int mmu_idx, uintptr_t retaddr);
-void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
-                       int mmu_idx, uintptr_t retaddr);
+
+void cpu_stw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                          int mmu_idx, uintptr_t retaddr);
+void cpu_stl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                          int mmu_idx, uintptr_t retaddr);
+void cpu_stq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
+                          int mmu_idx, uintptr_t retaddr);
+
+void cpu_stw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                          int mmu_idx, uintptr_t retaddr);
+void cpu_stl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                          int mmu_idx, uintptr_t retaddr);
+void cpu_stq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
+                          int mmu_idx, uintptr_t retaddr);
 
 #endif /* defined(CONFIG_USER_ONLY) */
 
+#ifdef TARGET_WORDS_BIGENDIAN
+# define cpu_lduw_data        cpu_lduw_be_data
+# define cpu_ldsw_data        cpu_ldsw_be_data
+# define cpu_ldl_data         cpu_ldl_be_data
+# define cpu_ldq_data         cpu_ldq_be_data
+# define cpu_lduw_data_ra     cpu_lduw_be_data_ra
+# define cpu_ldsw_data_ra     cpu_ldsw_be_data_ra
+# define cpu_ldl_data_ra      cpu_ldl_be_data_ra
+# define cpu_ldq_data_ra      cpu_ldq_be_data_ra
+# define cpu_lduw_mmuidx_ra   cpu_lduw_be_mmuidx_ra
+# define cpu_ldsw_mmuidx_ra   cpu_ldsw_be_mmuidx_ra
+# define cpu_ldl_mmuidx_ra    cpu_ldl_be_mmuidx_ra
+# define cpu_ldq_mmuidx_ra    cpu_ldq_be_mmuidx_ra
+# define cpu_stw_data         cpu_stw_be_data
+# define cpu_stl_data         cpu_stl_be_data
+# define cpu_stq_data         cpu_stq_be_data
+# define cpu_stw_data_ra      cpu_stw_be_data_ra
+# define cpu_stl_data_ra      cpu_stl_be_data_ra
+# define cpu_stq_data_ra      cpu_stq_be_data_ra
+# define cpu_stw_mmuidx_ra    cpu_stw_be_mmuidx_ra
+# define cpu_stl_mmuidx_ra    cpu_stl_be_mmuidx_ra
+# define cpu_stq_mmuidx_ra    cpu_stq_be_mmuidx_ra
+#else
+# define cpu_lduw_data        cpu_lduw_le_data
+# define cpu_ldsw_data        cpu_ldsw_le_data
+# define cpu_ldl_data         cpu_ldl_le_data
+# define cpu_ldq_data         cpu_ldq_le_data
+# define cpu_lduw_data_ra     cpu_lduw_le_data_ra
+# define cpu_ldsw_data_ra     cpu_ldsw_le_data_ra
+# define cpu_ldl_data_ra      cpu_ldl_le_data_ra
+# define cpu_ldq_data_ra      cpu_ldq_le_data_ra
+# define cpu_lduw_mmuidx_ra   cpu_lduw_le_mmuidx_ra
+# define cpu_ldsw_mmuidx_ra   cpu_ldsw_le_mmuidx_ra
+# define cpu_ldl_mmuidx_ra    cpu_ldl_le_mmuidx_ra
+# define cpu_ldq_mmuidx_ra    cpu_ldq_le_mmuidx_ra
+# define cpu_stw_data         cpu_stw_le_data
+# define cpu_stl_data         cpu_stl_le_data
+# define cpu_stq_data         cpu_stq_le_data
+# define cpu_stw_data_ra      cpu_stw_le_data_ra
+# define cpu_stl_data_ra      cpu_stl_le_data_ra
+# define cpu_stq_data_ra      cpu_stq_le_data_ra
+# define cpu_stw_mmuidx_ra    cpu_stw_le_mmuidx_ra
+# define cpu_stl_mmuidx_ra    cpu_stl_le_mmuidx_ra
+# define cpu_stq_mmuidx_ra    cpu_stq_le_mmuidx_ra
+#endif
+
 uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
 uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr);
 uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr);
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static inline int plus1(DisasContext *s, int x)
     return x + 1;
 }
 
+static inline int rsub_64(DisasContext *s, int x)
+{
+    return 64 - x;
+}
+
+static inline int rsub_32(DisasContext *s, int x)
+{
+    return 32 - x;
+}
+static inline int rsub_16(DisasContext *s, int x)
+{
+    return 16 - x;
+}
+static inline int rsub_8(DisasContext *s, int x)
+{
+    return 8 - x;
+}
+
 /* Include the generated Neon decoder */
 #include "decode-neon-dp.inc.c"
 #include "decode-neon-ls.inc.c"
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
 
 DO_2SH(VSHL, tcg_gen_gvec_shli)
 DO_2SH(VSLI, gen_gvec_sli)
+
+static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
+{
+    /* Signed shift out of range results in all-sign-bits */
+    a->shift = MIN(a->shift, (8 << a->size) - 1);
+    return do_vector_2sh(s, a, tcg_gen_gvec_sari);
+}
+
+static void gen_zero_rd_2sh(unsigned vece, uint32_t rd_ofs, uint32_t rm_ofs,
+                            int64_t shift, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_dup_imm(vece, rd_ofs, oprsz, maxsz, 0);
+}
+
+static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
+{
+    /* Shift out of range is architecturally valid and results in zero. */
+    if (a->shift >= (8 << a->size)) {
+        return do_vector_2sh(s, a, gen_zero_rd_2sh);
+    } else {
+        return do_vector_2sh(s, a, tcg_gen_gvec_shri);
+    }
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             op = (insn >> 8) & 0xf;
 
             switch (op) {
+            case 0: /* VSHR */
             case 5: /* VSHL, VSLI */
                 return 1; /* handled by decodetree */
             default:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
 
             switch (op) {
-            case 0: /* VSHR */
-                /* Right shift comes here negative.  */
-                shift = -shift;
-                /* Shifts larger than the element size are architecturally
-                 * valid.  Unsigned results in all zeros; signed results
-                 * in all sign bits.
-                 */
-                if (!u) {
-                    tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
-                                      MIN(shift, (8 << size) - 1),
-                                      vec_size, vec_size);
-                } else if (shift >= 8 << size) {
-                    tcg_gen_gvec_dup_imm(MO_8, rd_ofs, vec_size,
-                                         vec_size, 0);
-                } else {
-                    tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
-                                      vec_size, vec_size);
-                }
-                return 0;
-
             case 1: /* VSRA */
                 /* Right shift comes here negative.  */
                 shift = -shift;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                            full_ldub_mmu);
 }
 
-uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                            int mmu_idx, uintptr_t ra)
+uint32_t cpu_lduw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                               int mmu_idx, uintptr_t ra)
 {
-    return cpu_load_helper(env, addr, mmu_idx, ra, MO_TEUW,
-                           MO_TE == MO_LE
-                           ? full_le_lduw_mmu : full_be_lduw_mmu);
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_BEUW, full_be_lduw_mmu);
 }
 
-int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                       int mmu_idx, uintptr_t ra)
+int cpu_ldsw_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                          int mmu_idx, uintptr_t ra)
 {
-    return (int16_t)cpu_load_helper(env, addr, mmu_idx, ra, MO_TESW,
-                                    MO_TE == MO_LE
-                                    ? full_le_lduw_mmu : full_be_lduw_mmu);
+    return (int16_t)cpu_load_helper(env, addr, mmu_idx, ra, MO_BESW,
+                                    full_be_lduw_mmu);
 }
 
-uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                           int mmu_idx, uintptr_t ra)
+uint32_t cpu_ldl_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra)
 {
-    return cpu_load_helper(env, addr, mmu_idx, ra, MO_TEUL,
-                           MO_TE == MO_LE
-                           ? full_le_ldul_mmu : full_be_ldul_mmu);
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_BEUL, full_be_ldul_mmu);
 }
 
-uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
-                           int mmu_idx, uintptr_t ra)
+uint64_t cpu_ldq_be_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra)
 {
-    return cpu_load_helper(env, addr, mmu_idx, ra, MO_TEQ,
-                           MO_TE == MO_LE
-                           ? helper_le_ldq_mmu : helper_be_ldq_mmu);
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_BEQ, helper_be_ldq_mmu);
+}
+
+uint32_t cpu_lduw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                               int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_LEUW, full_le_lduw_mmu);
+}
+
+int cpu_ldsw_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                          int mmu_idx, uintptr_t ra)
+{
+    return (int16_t)cpu_load_helper(env, addr, mmu_idx, ra, MO_LESW,
+                                    full_le_lduw_mmu);
+}
+
+uint32_t cpu_ldl_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_LEUL, full_le_ldul_mmu);
+}
+
+uint64_t cpu_ldq_le_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                              int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_LEQ, helper_le_ldq_mmu);
 }
 
 uint32_t cpu_ldub_data_ra(CPUArchState *env, target_ulong ptr,
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
     return cpu_ldsb_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
 }
 
-uint32_t cpu_lduw_data_ra(CPUArchState *env, target_ulong ptr,
-                          uintptr_t retaddr)
+uint32_t cpu_lduw_be_data_ra(CPUArchState *env, target_ulong ptr,
+                             uintptr_t retaddr)
 {
-    return cpu_lduw_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+    return cpu_lduw_be_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
 }
 
-int cpu_ldsw_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+int cpu_ldsw_be_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
 {
-    return cpu_ldsw_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+    return cpu_ldsw_be_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
 }
 
-uint32_t cpu_ldl_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+uint32_t cpu_ldl_be_data_ra(CPUArchState *env, target_ulong ptr,
+                            uintptr_t retaddr)
 {
-    return cpu_ldl_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+    return cpu_ldl_be_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
 }
 
-uint64_t cpu_ldq_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+uint64_t cpu_ldq_be_data_ra(CPUArchState *env, target_ulong ptr,
+                            uintptr_t retaddr)
 {
-    return cpu_ldq_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+    return cpu_ldq_be_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint32_t cpu_lduw_le_data_ra(CPUArchState *env, target_ulong ptr,
+                             uintptr_t retaddr)
+{
+    return cpu_lduw_le_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+int cpu_ldsw_le_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+{
+    return cpu_ldsw_le_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint32_t cpu_ldl_le_data_ra(CPUArchState *env, target_ulong ptr,
+                            uintptr_t retaddr)
+{
+    return cpu_ldl_le_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint64_t cpu_ldq_le_data_ra(CPUArchState *env, target_ulong ptr,
+                            uintptr_t retaddr)
+{
+    return cpu_ldq_le_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
 }
 
 uint32_t cpu_ldub_data(CPUArchState *env, target_ulong ptr)
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_data(CPUArchState *env, target_ulong ptr)
     return cpu_ldsb_data_ra(env, ptr, 0);
 }
 
-uint32_t cpu_lduw_data(CPUArchState *env, target_ulong ptr)
+uint32_t cpu_lduw_be_data(CPUArchState *env, target_ulong ptr)
 {
-    return cpu_lduw_data_ra(env, ptr, 0);
+    return cpu_lduw_be_data_ra(env, ptr, 0);
 }
 
-int cpu_ldsw_data(CPUArchState *env, target_ulong ptr)
+int cpu_ldsw_be_data(CPUArchState *env, target_ulong ptr)
 {
-    return cpu_ldsw_data_ra(env, ptr, 0);
+    return cpu_ldsw_be_data_ra(env, ptr, 0);
 }
 
-uint32_t cpu_ldl_data(CPUArchState *env, target_ulong ptr)
+uint32_t cpu_ldl_be_data(CPUArchState *env, target_ulong ptr)
 {
-    return cpu_ldl_data_ra(env, ptr, 0);
+    return cpu_ldl_be_data_ra(env, ptr, 0);
 }
 
-uint64_t cpu_ldq_data(CPUArchState *env, target_ulong ptr)
+uint64_t cpu_ldq_be_data(CPUArchState *env, target_ulong ptr)
 {
-    return cpu_ldq_data_ra(env, ptr, 0);
+    return cpu_ldq_be_data_ra(env, ptr, 0);
+}
+
+uint32_t cpu_lduw_le_data(CPUArchState *env, target_ulong ptr)
+{
+    return cpu_lduw_le_data_ra(env, ptr, 0);
+}
+
+int cpu_ldsw_le_data(CPUArchState *env, target_ulong ptr)
+{
+    return cpu_ldsw_le_data_ra(env, ptr, 0);
+}
+
+uint32_t cpu_ldl_le_data(CPUArchState *env, target_ulong ptr)
+{
+    return cpu_ldl_le_data_ra(env, ptr, 0);
+}
+
+uint64_t cpu_ldq_le_data(CPUArchState *env, target_ulong ptr)
655
+{
656
+ return cpu_ldq_le_data_ra(env, ptr, 0);
657
}
658
659
/*
660
@@ -XXX,XX +XXX,XX @@ void cpu_stb_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
661
cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_UB);
662
}
663
664
-void cpu_stw_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
665
- int mmu_idx, uintptr_t retaddr)
666
+void cpu_stw_be_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
667
+ int mmu_idx, uintptr_t retaddr)
668
{
669
- cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEUW);
670
+ cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_BEUW);
671
}
672
673
-void cpu_stl_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
674
- int mmu_idx, uintptr_t retaddr)
675
+void cpu_stl_be_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
676
+ int mmu_idx, uintptr_t retaddr)
677
{
678
- cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEUL);
679
+ cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_BEUL);
680
}
681
682
-void cpu_stq_mmuidx_ra(CPUArchState *env, target_ulong addr, uint64_t val,
683
- int mmu_idx, uintptr_t retaddr)
684
+void cpu_stq_be_mmuidx_ra(CPUArchState *env, target_ulong addr, uint64_t val,
685
+ int mmu_idx, uintptr_t retaddr)
686
{
687
- cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEQ);
688
+ cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_BEQ);
689
+}
690
+
691
+void cpu_stw_le_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
692
+ int mmu_idx, uintptr_t retaddr)
693
+{
694
+ cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_LEUW);
695
+}
696
+
697
+void cpu_stl_le_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
698
+ int mmu_idx, uintptr_t retaddr)
699
+{
700
+ cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_LEUL);
701
+}
702
+
703
+void cpu_stq_le_mmuidx_ra(CPUArchState *env, target_ulong addr, uint64_t val,
704
+ int mmu_idx, uintptr_t retaddr)
705
+{
706
+ cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_LEQ);
707
}
708
709
void cpu_stb_data_ra(CPUArchState *env, target_ulong ptr,
710
@@ -XXX,XX +XXX,XX @@ void cpu_stb_data_ra(CPUArchState *env, target_ulong ptr,
711
cpu_stb_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
712
}
713
714
-void cpu_stw_data_ra(CPUArchState *env, target_ulong ptr,
715
- uint32_t val, uintptr_t retaddr)
716
+void cpu_stw_be_data_ra(CPUArchState *env, target_ulong ptr,
717
+ uint32_t val, uintptr_t retaddr)
718
{
719
- cpu_stw_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
720
+ cpu_stw_be_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
721
}
722
723
-void cpu_stl_data_ra(CPUArchState *env, target_ulong ptr,
724
- uint32_t val, uintptr_t retaddr)
725
+void cpu_stl_be_data_ra(CPUArchState *env, target_ulong ptr,
726
+ uint32_t val, uintptr_t retaddr)
727
{
728
- cpu_stl_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
729
+ cpu_stl_be_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
730
}
731
732
-void cpu_stq_data_ra(CPUArchState *env, target_ulong ptr,
733
- uint64_t val, uintptr_t retaddr)
734
+void cpu_stq_be_data_ra(CPUArchState *env, target_ulong ptr,
735
+ uint64_t val, uintptr_t retaddr)
736
{
737
- cpu_stq_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
738
+ cpu_stq_be_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
739
+}
740
+
741
+void cpu_stw_le_data_ra(CPUArchState *env, target_ulong ptr,
742
+ uint32_t val, uintptr_t retaddr)
743
+{
744
+ cpu_stw_le_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
745
+}
746
+
747
+void cpu_stl_le_data_ra(CPUArchState *env, target_ulong ptr,
748
+ uint32_t val, uintptr_t retaddr)
749
+{
750
+ cpu_stl_le_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
751
+}
752
+
753
+void cpu_stq_le_data_ra(CPUArchState *env, target_ulong ptr,
754
+ uint64_t val, uintptr_t retaddr)
755
+{
756
+ cpu_stq_le_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
757
}
758
759
void cpu_stb_data(CPUArchState *env, target_ulong ptr, uint32_t val)
760
@@ -XXX,XX +XXX,XX @@ void cpu_stb_data(CPUArchState *env, target_ulong ptr, uint32_t val)
761
cpu_stb_data_ra(env, ptr, val, 0);
762
}
763
764
-void cpu_stw_data(CPUArchState *env, target_ulong ptr, uint32_t val)
765
+void cpu_stw_be_data(CPUArchState *env, target_ulong ptr, uint32_t val)
766
{
767
- cpu_stw_data_ra(env, ptr, val, 0);
768
+ cpu_stw_be_data_ra(env, ptr, val, 0);
769
}
770
771
-void cpu_stl_data(CPUArchState *env, target_ulong ptr, uint32_t val)
772
+void cpu_stl_be_data(CPUArchState *env, target_ulong ptr, uint32_t val)
773
{
774
- cpu_stl_data_ra(env, ptr, val, 0);
775
+ cpu_stl_be_data_ra(env, ptr, val, 0);
776
}
777
778
-void cpu_stq_data(CPUArchState *env, target_ulong ptr, uint64_t val)
779
+void cpu_stq_be_data(CPUArchState *env, target_ulong ptr, uint64_t val)
780
{
781
- cpu_stq_data_ra(env, ptr, val, 0);
782
+ cpu_stq_be_data_ra(env, ptr, val, 0);
783
+}
784
+
785
+void cpu_stw_le_data(CPUArchState *env, target_ulong ptr, uint32_t val)
786
+{
787
+ cpu_stw_le_data_ra(env, ptr, val, 0);
788
+}
789
+
790
+void cpu_stl_le_data(CPUArchState *env, target_ulong ptr, uint32_t val)
791
+{
792
+ cpu_stl_le_data_ra(env, ptr, val, 0);
793
+}
794
+
795
+void cpu_stq_le_data(CPUArchState *env, target_ulong ptr, uint64_t val)
796
+{
797
+ cpu_stq_le_data_ra(env, ptr, val, 0);
798
}
799
800
/* First set of helpers allows passing in of OI and RETADDR. This makes
801
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
802
index XXXXXXX..XXXXXXX 100644
803
--- a/accel/tcg/user-exec.c
804
+++ b/accel/tcg/user-exec.c
805
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr)
806
return ret;
807
}
808
809
-uint32_t cpu_lduw_data(CPUArchState *env, abi_ptr ptr)
810
+uint32_t cpu_lduw_be_data(CPUArchState *env, abi_ptr ptr)
811
{
812
uint32_t ret;
813
- uint16_t meminfo = trace_mem_get_info(MO_TEUW, MMU_USER_IDX, false);
814
+ uint16_t meminfo = trace_mem_get_info(MO_BEUW, MMU_USER_IDX, false);
815
816
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
817
- ret = lduw_p(g2h(ptr));
818
+ ret = lduw_be_p(g2h(ptr));
819
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
820
return ret;
821
}
822
823
-int cpu_ldsw_data(CPUArchState *env, abi_ptr ptr)
824
+int cpu_ldsw_be_data(CPUArchState *env, abi_ptr ptr)
825
{
826
int ret;
827
- uint16_t meminfo = trace_mem_get_info(MO_TESW, MMU_USER_IDX, false);
828
+ uint16_t meminfo = trace_mem_get_info(MO_BESW, MMU_USER_IDX, false);
829
830
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
831
- ret = ldsw_p(g2h(ptr));
832
+ ret = ldsw_be_p(g2h(ptr));
833
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
834
return ret;
835
}
836
837
-uint32_t cpu_ldl_data(CPUArchState *env, abi_ptr ptr)
838
+uint32_t cpu_ldl_be_data(CPUArchState *env, abi_ptr ptr)
839
{
840
uint32_t ret;
841
- uint16_t meminfo = trace_mem_get_info(MO_TEUL, MMU_USER_IDX, false);
842
+ uint16_t meminfo = trace_mem_get_info(MO_BEUL, MMU_USER_IDX, false);
843
844
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
845
- ret = ldl_p(g2h(ptr));
846
+ ret = ldl_be_p(g2h(ptr));
847
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
848
return ret;
849
}
850
851
-uint64_t cpu_ldq_data(CPUArchState *env, abi_ptr ptr)
852
+uint64_t cpu_ldq_be_data(CPUArchState *env, abi_ptr ptr)
853
{
854
uint64_t ret;
855
- uint16_t meminfo = trace_mem_get_info(MO_TEQ, MMU_USER_IDX, false);
856
+ uint16_t meminfo = trace_mem_get_info(MO_BEQ, MMU_USER_IDX, false);
857
858
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
859
- ret = ldq_p(g2h(ptr));
860
+ ret = ldq_be_p(g2h(ptr));
861
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
862
+ return ret;
863
+}
864
+
865
+uint32_t cpu_lduw_le_data(CPUArchState *env, abi_ptr ptr)
866
+{
867
+ uint32_t ret;
868
+ uint16_t meminfo = trace_mem_get_info(MO_LEUW, MMU_USER_IDX, false);
869
+
870
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
871
+ ret = lduw_le_p(g2h(ptr));
872
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
873
+ return ret;
874
+}
875
+
876
+int cpu_ldsw_le_data(CPUArchState *env, abi_ptr ptr)
877
+{
878
+ int ret;
879
+ uint16_t meminfo = trace_mem_get_info(MO_LESW, MMU_USER_IDX, false);
880
+
881
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
882
+ ret = ldsw_le_p(g2h(ptr));
883
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
884
+ return ret;
885
+}
886
+
887
+uint32_t cpu_ldl_le_data(CPUArchState *env, abi_ptr ptr)
888
+{
889
+ uint32_t ret;
890
+ uint16_t meminfo = trace_mem_get_info(MO_LEUL, MMU_USER_IDX, false);
891
+
892
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
893
+ ret = ldl_le_p(g2h(ptr));
894
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
895
+ return ret;
896
+}
897
+
898
+uint64_t cpu_ldq_le_data(CPUArchState *env, abi_ptr ptr)
899
+{
900
+ uint64_t ret;
901
+ uint16_t meminfo = trace_mem_get_info(MO_LEQ, MMU_USER_IDX, false);
902
+
903
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
904
+ ret = ldq_le_p(g2h(ptr));
905
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
906
return ret;
907
}
908
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
909
return ret;
910
}
911
912
-uint32_t cpu_lduw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
913
+uint32_t cpu_lduw_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
914
{
915
uint32_t ret;
916
917
set_helper_retaddr(retaddr);
918
- ret = cpu_lduw_data(env, ptr);
919
+ ret = cpu_lduw_be_data(env, ptr);
920
clear_helper_retaddr();
921
return ret;
922
}
923
924
-int cpu_ldsw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
925
+int cpu_ldsw_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
926
{
927
int ret;
928
929
set_helper_retaddr(retaddr);
930
- ret = cpu_ldsw_data(env, ptr);
931
+ ret = cpu_ldsw_be_data(env, ptr);
932
clear_helper_retaddr();
933
return ret;
934
}
935
936
-uint32_t cpu_ldl_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
937
+uint32_t cpu_ldl_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
938
{
939
uint32_t ret;
940
941
set_helper_retaddr(retaddr);
942
- ret = cpu_ldl_data(env, ptr);
943
+ ret = cpu_ldl_be_data(env, ptr);
944
clear_helper_retaddr();
945
return ret;
946
}
947
948
-uint64_t cpu_ldq_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
949
+uint64_t cpu_ldq_be_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
950
{
951
uint64_t ret;
952
953
set_helper_retaddr(retaddr);
954
- ret = cpu_ldq_data(env, ptr);
955
+ ret = cpu_ldq_be_data(env, ptr);
956
+ clear_helper_retaddr();
957
+ return ret;
958
+}
959
+
960
+uint32_t cpu_lduw_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
961
+{
962
+ uint32_t ret;
963
+
964
+ set_helper_retaddr(retaddr);
965
+ ret = cpu_lduw_le_data(env, ptr);
966
+ clear_helper_retaddr();
967
+ return ret;
968
+}
969
+
970
+int cpu_ldsw_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
971
+{
972
+ int ret;
973
+
974
+ set_helper_retaddr(retaddr);
975
+ ret = cpu_ldsw_le_data(env, ptr);
976
+ clear_helper_retaddr();
977
+ return ret;
978
+}
979
+
980
+uint32_t cpu_ldl_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
981
+{
982
+ uint32_t ret;
983
+
984
+ set_helper_retaddr(retaddr);
985
+ ret = cpu_ldl_le_data(env, ptr);
986
+ clear_helper_retaddr();
987
+ return ret;
988
+}
989
+
990
+uint64_t cpu_ldq_le_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
991
+{
992
+ uint64_t ret;
993
+
994
+ set_helper_retaddr(retaddr);
995
+ ret = cpu_ldq_le_data(env, ptr);
996
clear_helper_retaddr();
997
return ret;
998
}
999
@@ -XXX,XX +XXX,XX @@ void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1000
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1001
}
1002
1003
-void cpu_stw_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1004
+void cpu_stw_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1005
{
1006
- uint16_t meminfo = trace_mem_get_info(MO_TEUW, MMU_USER_IDX, true);
1007
+ uint16_t meminfo = trace_mem_get_info(MO_BEUW, MMU_USER_IDX, true);
1008
1009
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
1010
- stw_p(g2h(ptr), val);
1011
+ stw_be_p(g2h(ptr), val);
1012
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1013
}
1014
1015
-void cpu_stl_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1016
+void cpu_stl_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1017
{
1018
- uint16_t meminfo = trace_mem_get_info(MO_TEUL, MMU_USER_IDX, true);
1019
+ uint16_t meminfo = trace_mem_get_info(MO_BEUL, MMU_USER_IDX, true);
1020
1021
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
1022
- stl_p(g2h(ptr), val);
1023
+ stl_be_p(g2h(ptr), val);
1024
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1025
}
1026
1027
-void cpu_stq_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
1028
+void cpu_stq_be_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
1029
{
1030
- uint16_t meminfo = trace_mem_get_info(MO_TEQ, MMU_USER_IDX, true);
1031
+ uint16_t meminfo = trace_mem_get_info(MO_BEQ, MMU_USER_IDX, true);
1032
1033
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
1034
- stq_p(g2h(ptr), val);
1035
+ stq_be_p(g2h(ptr), val);
1036
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1037
+}
1038
+
1039
+void cpu_stw_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1040
+{
1041
+ uint16_t meminfo = trace_mem_get_info(MO_LEUW, MMU_USER_IDX, true);
1042
+
1043
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
1044
+ stw_le_p(g2h(ptr), val);
1045
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1046
+}
1047
+
1048
+void cpu_stl_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
1049
+{
1050
+ uint16_t meminfo = trace_mem_get_info(MO_LEUL, MMU_USER_IDX, true);
1051
+
1052
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
1053
+ stl_le_p(g2h(ptr), val);
1054
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1055
+}
1056
+
1057
+void cpu_stq_le_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
1058
+{
1059
+ uint16_t meminfo = trace_mem_get_info(MO_LEQ, MMU_USER_IDX, true);
1060
+
1061
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
1062
+ stq_le_p(g2h(ptr), val);
1063
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
1064
}
1065
1066
@@ -XXX,XX +XXX,XX @@ void cpu_stb_data_ra(CPUArchState *env, abi_ptr ptr,
1067
clear_helper_retaddr();
1068
}
1069
1070
-void cpu_stw_data_ra(CPUArchState *env, abi_ptr ptr,
1071
- uint32_t val, uintptr_t retaddr)
1072
+void cpu_stw_be_data_ra(CPUArchState *env, abi_ptr ptr,
1073
+ uint32_t val, uintptr_t retaddr)
1074
{
1075
set_helper_retaddr(retaddr);
1076
- cpu_stw_data(env, ptr, val);
1077
+ cpu_stw_be_data(env, ptr, val);
1078
clear_helper_retaddr();
1079
}
1080
1081
-void cpu_stl_data_ra(CPUArchState *env, abi_ptr ptr,
1082
- uint32_t val, uintptr_t retaddr)
1083
+void cpu_stl_be_data_ra(CPUArchState *env, abi_ptr ptr,
1084
+ uint32_t val, uintptr_t retaddr)
1085
{
1086
set_helper_retaddr(retaddr);
1087
- cpu_stl_data(env, ptr, val);
1088
+ cpu_stl_be_data(env, ptr, val);
1089
clear_helper_retaddr();
1090
}
1091
1092
-void cpu_stq_data_ra(CPUArchState *env, abi_ptr ptr,
1093
- uint64_t val, uintptr_t retaddr)
1094
+void cpu_stq_be_data_ra(CPUArchState *env, abi_ptr ptr,
1095
+ uint64_t val, uintptr_t retaddr)
1096
{
1097
set_helper_retaddr(retaddr);
1098
- cpu_stq_data(env, ptr, val);
1099
+ cpu_stq_be_data(env, ptr, val);
1100
+ clear_helper_retaddr();
1101
+}
1102
+
1103
+void cpu_stw_le_data_ra(CPUArchState *env, abi_ptr ptr,
1104
+ uint32_t val, uintptr_t retaddr)
1105
+{
1106
+ set_helper_retaddr(retaddr);
1107
+ cpu_stw_le_data(env, ptr, val);
1108
+ clear_helper_retaddr();
1109
+}
1110
+
1111
+void cpu_stl_le_data_ra(CPUArchState *env, abi_ptr ptr,
1112
+ uint32_t val, uintptr_t retaddr)
1113
+{
1114
+ set_helper_retaddr(retaddr);
1115
+ cpu_stl_le_data(env, ptr, val);
1116
+ clear_helper_retaddr();
1117
+}
1118
+
1119
+void cpu_stq_le_data_ra(CPUArchState *env, abi_ptr ptr,
1120
+ uint64_t val, uintptr_t retaddr)
1121
+{
1122
+ set_helper_retaddr(retaddr);
1123
+ cpu_stq_le_data(env, ptr, val);
1124
clear_helper_retaddr();
1125
}
1126
1127
--
153
--
1128
2.20.1
154
2.20.1
1129
155
1130
156
From: Philippe Mathieu-Daudé <philmd@redhat.com>

We want to move the inlined declarations of set_feature()
from cpu*.c to cpu.h. To avoid clashing with the KVM
declarations, inline the few KVM calls.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200504172448.9402-2-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/kvm32.c | 13 ++++---------
target/arm/kvm64.c | 22 ++++++----------------
2 files changed, 10 insertions(+), 25 deletions(-)

diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@
#include "internals.h"
#include "qemu/log.h"

-static inline void set_feature(uint64_t *features, int feature)
-{
- *features |= 1ULL << feature;
-}
-
static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
{
struct kvm_one_reg idreg = { .id = id, .addr = (uintptr_t)pret };
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
* timers; this in turn implies most of the other feature
* bits, but a few must be tested.
*/
- set_feature(&features, ARM_FEATURE_V7VE);
- set_feature(&features, ARM_FEATURE_GENERIC_TIMER);
+ features |= 1ULL << ARM_FEATURE_V7VE;
+ features |= 1ULL << ARM_FEATURE_GENERIC_TIMER;

if (extract32(id_pfr0, 12, 4) == 1) {
- set_feature(&features, ARM_FEATURE_THUMB2EE);
+ features |= 1ULL << ARM_FEATURE_THUMB2EE;
}
if (extract32(ahcf->isar.mvfr1, 12, 4) == 1) {
- set_feature(&features, ARM_FEATURE_NEON);
+ features |= 1ULL << ARM_FEATURE_NEON;
}

ahcf->features = features;

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_set_irq(CPUState *cs, int irq)
}
}

-static inline void set_feature(uint64_t *features, int feature)
-{
- *features |= 1ULL << feature;
-}
-
-static inline void unset_feature(uint64_t *features, int feature)
-{
- *features &= ~(1ULL << feature);
-}
-
static int read_sys_reg32(int fd, uint32_t *pret, uint64_t id)
{
uint64_t ret;
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
* with VFPv4+Neon; this in turn implies most of the other
* feature bits.
*/
- set_feature(&features, ARM_FEATURE_V8);
- set_feature(&features, ARM_FEATURE_NEON);
- set_feature(&features, ARM_FEATURE_AARCH64);
- set_feature(&features, ARM_FEATURE_PMU);
- set_feature(&features, ARM_FEATURE_GENERIC_TIMER);
+ features |= 1ULL << ARM_FEATURE_V8;
+ features |= 1ULL << ARM_FEATURE_NEON;
+ features |= 1ULL << ARM_FEATURE_AARCH64;
+ features |= 1ULL << ARM_FEATURE_PMU;
+ features |= 1ULL << ARM_FEATURE_GENERIC_TIMER;

ahcf->features = features;

@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
if (cpu->has_pmu) {
cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
} else {
- unset_feature(&env->features, ARM_FEATURE_PMU);
+ env->features &= ~(1ULL << ARM_FEATURE_PMU);
}
if (cpu_isar_feature(aa64_sve, cpu)) {
assert(kvm_arm_sve_supported(cs));
--
2.20.1

Convert the VSRA, VSRI, VRSHR, VRSRA 2-reg-shift insns to decodetree.
(These are the last instructions in the group that are vectorized;
the rest all require looping over each element.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-4-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 35 ++++++++++++++++++++++
target/arm/translate-neon.inc.c | 7 +++++
target/arm/translate.c | 52 +++------------------------------
3 files changed, 46 insertions(+), 48 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
VSHR_U_2sh 1111 001 1 1 . ...... .... 0000 . . . 1 .... @2reg_shr_b

+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
+VSRA_S_2sh 1111 001 0 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
+
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_d
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_s
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_h
+VSRA_U_2sh 1111 001 1 1 . ...... .... 0001 . . . 1 .... @2reg_shr_b
+
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
+VRSHR_S_2sh 1111 001 0 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
+
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_d
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_s
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_h
+VRSHR_U_2sh 1111 001 1 1 . ...... .... 0010 . . . 1 .... @2reg_shr_b
+
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
+VRSRA_S_2sh 1111 001 0 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
+
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_d
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_s
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_h
+VRSRA_U_2sh 1111 001 1 1 . ...... .... 0011 . . . 1 .... @2reg_shr_b
+
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_d
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_s
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_h
+VSRI_2sh 1111 001 1 1 . ...... .... 0100 . . . 1 .... @2reg_shr_b
+
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
VSHL_2sh 1111 001 0 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)

DO_2SH(VSHL, tcg_gen_gvec_shli)
DO_2SH(VSLI, gen_gvec_sli)
+DO_2SH(VSRI, gen_gvec_sri)
+DO_2SH(VSRA_S, gen_gvec_ssra)
+DO_2SH(VSRA_U, gen_gvec_usra)
+DO_2SH(VRSHR_S, gen_gvec_srshr)
+DO_2SH(VRSHR_U, gen_gvec_urshr)
+DO_2SH(VRSRA_S, gen_gvec_srsra)
+DO_2SH(VRSRA_U, gen_gvec_ursra)

static bool trans_VSHR_S_2sh(DisasContext *s, arg_2reg_shift *a)
{
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)

switch (op) {
case 0: /* VSHR */
+ case 1: /* VSRA */
+ case 2: /* VRSHR */
+ case 3: /* VRSRA */
+ case 4: /* VSRI */
case 5: /* VSHL, VSLI */
return 1; /* handled by decodetree */
default:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
shift = shift - (1 << (size + 3));
}

- switch (op) {
- case 1: /* VSRA */
- /* Right shift comes here negative. */
- shift = -shift;
- if (u) {
- gen_gvec_usra(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- } else {
- gen_gvec_ssra(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- }
- return 0;
-
- case 2: /* VRSHR */
- /* Right shift comes here negative. */
- shift = -shift;
- if (u) {
- gen_gvec_urshr(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- } else {
- gen_gvec_srshr(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- }
- return 0;
-
- case 3: /* VRSRA */
- /* Right shift comes here negative. */
- shift = -shift;
- if (u) {
- gen_gvec_ursra(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- } else {
- gen_gvec_srsra(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- }
- return 0;
-
- case 4: /* VSRI */
- if (!u) {
- return 1;
- }
- /* Right shift comes here negative. */
- shift = -shift;
- gen_gvec_sri(size, rd_ofs, rm_ofs, shift,
- vec_size, vec_size);
- return 0;
- }
-
if (size == 3) {
count = q + 1;
} else {
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

For contiguous predicated memory operations, we want to
minimize the number of tlb lookups performed. We have
open-coded this for sve_ld1_r, but for correctness with
MTE we will need this for all of the memory operations.

Create a structure that holds the bounds of active elements,
and metadata for two pages. Add routines to find those
active elements, lookup the pages, and run watchpoints
for those pages.

Temporarily mark the functions unused to avoid Werror.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve_helper.c | 263 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 261 insertions(+), 2 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
}
}

-/* Big-endian hosts need to frob the byte indicies. If the copy
+/* Big-endian hosts need to frob the byte indices. If the copy
* happens to be 8-byte aligned, then no frobbing necessary.
*/
static void swap_memmove(void *vd, void *vs, size_t n)
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fcmla_zpzzz_d)(CPUARMState *env, void *vg, uint32_t desc)
/*
* Load elements into @vd, controlled by @vg, from @host + @mem_ofs.
* Memory is valid through @host + @mem_max. The register element
- * indicies are inferred from @mem_ofs, as modified by the types for
+ * indices are inferred from @mem_ofs, as modified by the types for
* which the helper is built. Return the @mem_ofs of the first element
* not loaded (which is @mem_max if they are all loaded).
*
@@ -XXX,XX +XXX,XX @@ static intptr_t max_for_page(target_ulong base, intptr_t mem_off,
return MIN(split, mem_max - mem_off) + mem_off;
}

+/*
+ * Resolve the guest virtual address to info->host and info->flags.
+ * If @nofault, return false if the page is invalid, otherwise
+ * exit via page fault exception.
+ */
+
+typedef struct {
+ void *host;
+ int flags;
+ MemTxAttrs attrs;
+} SVEHostPage;
+
+static bool sve_probe_page(SVEHostPage *info, bool nofault,
+ CPUARMState *env, target_ulong addr,
+ int mem_off, MMUAccessType access_type,
+ int mmu_idx, uintptr_t retaddr)
+{
+ int flags;
+
+ addr += mem_off;
+ flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
+ &info->host, retaddr);
+ info->flags = flags;
+
+ if (flags & TLB_INVALID_MASK) {
+ g_assert(nofault);
+ return false;
+ }
+
+ /* Ensure that info->host[] is relative to addr, not addr + mem_off. */
+ info->host -= mem_off;
+
+#ifdef CONFIG_USER_ONLY
+ memset(&info->attrs, 0, sizeof(info->attrs));
+#else
+ /*
+ * Find the iotlbentry for addr and return the transaction attributes.
+ * This *must* be present in the TLB because we just found the mapping.
+ */
+ {
+ uintptr_t index = tlb_index(env, mmu_idx, addr);
+
+# ifdef CONFIG_DEBUG_TCG
+ CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+ target_ulong comparator = (access_type == MMU_DATA_LOAD

Convert the VQSHLU and VQSHL 2-reg-shift insns to decodetree.
These are the last of the simple shift-by-immediate insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-5-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 15 +++++
target/arm/translate-neon.inc.c | 108 +++++++++++++++++++++++++++++++
target/arm/translate.c | 110 +-------------------------------
3 files changed, 126 insertions(+), 107 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_d
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_s
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_h
VSLI_2sh 1111 001 1 1 . ...... .... 0101 . . . 1 .... @2reg_shl_b
+
+VQSHLU_64_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_d
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_s
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_h
+VQSHLU_2sh 1111 001 1 1 . ...... .... 0110 . . . 1 .... @2reg_shl_b
+
+VQSHL_S_64_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
+VQSHL_S_2sh 1111 001 0 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
+
+VQSHL_U_64_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
+VQSHL_U_2sh 1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHR_U_2sh(DisasContext *s, arg_2reg_shift *a)
return do_vector_2sh(s, a, tcg_gen_gvec_shri);
}
}
+
+static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
+ NeonGenTwo64OpEnvFn *fn)
+{
+ /*
+ * 2-reg-and-shift operations, size == 3 case, where the
+ * function needs to be passed cpu_env.
+ */
+ TCGv_i64 constimm;
+ int pass;
+
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+ return false;
+ }
+
+ /* UNDEF accesses to D16-D31 if they don't exist. */
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
+ ((a->vd | a->vm) & 0x10)) {
+ return false;
+ }
+
+ if ((a->vm | a->vd) & a->q) {
+ return false;
+ }
+
+ if (!vfp_access_check(s)) {
+ return true;
+ }
+
+ /*
+ * To avoid excessive duplication of ops we implement shift
74
+ * To avoid excessive duplication of ops we implement shift
93
+ target_ulong comparator = (access_type == MMU_DATA_LOAD
75
+ * by immediate using the variable shift operations.
94
+ ? entry->addr_read
76
+ */
95
+ : tlb_addr_write(entry));
77
+ constimm = tcg_const_i64(dup_const(a->size, a->shift));
96
+ g_assert(tlb_hit(comparator, addr));
78
+
97
+# endif
79
+ for (pass = 0; pass < a->q + 1; pass++) {
98
+
80
+ TCGv_i64 tmp = tcg_temp_new_i64();
99
+ CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
81
+
100
+ info->attrs = iotlbentry->attrs;
82
+ neon_load_reg64(tmp, a->vm + pass);
101
+ }
83
+ fn(tmp, cpu_env, tmp, constimm);
102
+#endif
84
+ neon_store_reg64(tmp, a->vd + pass);
103
+
85
+ }
86
+ tcg_temp_free_i64(constimm);
104
+ return true;
87
+ return true;
105
+}
88
+}
106
+
89
+
107
+
90
+static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
108
+/*
91
+ NeonGenTwoOpEnvFn *fn)
109
+ * Analyse contiguous data, protected by a governing predicate.
110
+ */
111
+
112
+typedef enum {
113
+ FAULT_NO,
114
+ FAULT_FIRST,
115
+ FAULT_ALL,
116
+} SVEContFault;
117
+
118
+typedef struct {
119
+ /*
120
+ * First and last element wholly contained within the two pages.
121
+ * mem_off_first[0] and reg_off_first[0] are always set >= 0.
122
+ * reg_off_last[0] may be < 0 if the first element crosses pages.
123
+ * All of mem_off_first[1], reg_off_first[1] and reg_off_last[1]
124
+ * are set >= 0 only if there are complete elements on a second page.
125
+ *
126
+ * The reg_off_* offsets are relative to the internal vector register.
127
+ * The mem_off_first offset is relative to the memory address; the
128
+ * two offsets are different when a load operation extends, a store
129
+ * operation truncates, or for multi-register operations.
130
+ */
131
+ int16_t mem_off_first[2];
132
+ int16_t reg_off_first[2];
133
+ int16_t reg_off_last[2];
134
+
135
+ /*
136
+ * One element that is misaligned and spans both pages,
137
+ * or -1 if there is no such active element.
138
+ */
139
+ int16_t mem_off_split;
140
+ int16_t reg_off_split;
141
+
142
+ /*
143
+ * The byte offset at which the entire operation crosses a page boundary.
144
+ * Set >= 0 if and only if the entire operation spans two pages.
145
+ */
146
+ int16_t page_split;
147
+
148
+ /* TLB data for the two pages. */
149
+ SVEHostPage page[2];
150
+} SVEContLdSt;
151
+
152
+/*
153
+ * Find first active element on each page, and a loose bound for the
154
+ * final element on each page. Identify any single element that spans
155
+ * the page boundary. Return true if there are any active elements.
156
+ */
157
+static bool __attribute__((unused))
158
+sve_cont_ldst_elements(SVEContLdSt *info, target_ulong addr, uint64_t *vg,
159
+ intptr_t reg_max, int esz, int msize)
160
+{
92
+{
161
+ const int esize = 1 << esz;
93
+ /*
162
+ const uint64_t pg_mask = pred_esz_masks[esz];
94
+ * 2-reg-and-shift operations, size < 3 case, where the
163
+ intptr_t reg_off_first = -1, reg_off_last = -1, reg_off_split;
95
+ * helper needs to be passed cpu_env.
164
+ intptr_t mem_off_last, mem_off_split;
96
+ */
165
+ intptr_t page_split, elt_split;
97
+ TCGv_i32 constimm;
166
+ intptr_t i;
98
+ int pass;
167
+
99
+
168
+ /* Set all of the element indices to -1, and the TLB data to 0. */
100
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
169
+ memset(info, -1, offsetof(SVEContLdSt, page));
101
+ return false;
170
+ memset(info->page, 0, sizeof(info->page));
102
+ }
171
+
103
+
172
+ /* Gross scan over the entire predicate to find bounds. */
104
+ /* UNDEF accesses to D16-D31 if they don't exist. */
173
+ i = 0;
105
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
174
+ do {
106
+ ((a->vd | a->vm) & 0x10)) {
175
+ uint64_t pg = vg[i] & pg_mask;
107
+ return false;
176
+ if (pg) {
108
+ }
177
+ reg_off_last = i * 64 + 63 - clz64(pg);
109
+
178
+ if (reg_off_first < 0) {
110
+ if ((a->vm | a->vd) & a->q) {
179
+ reg_off_first = i * 64 + ctz64(pg);
111
+ return false;
180
+ }
112
+ }
181
+ }
113
+
182
+ } while (++i * 64 < reg_max);
114
+ if (!vfp_access_check(s)) {
183
+
184
+ if (unlikely(reg_off_first < 0)) {
185
+ /* No active elements, no pages touched. */
186
+ return false;
187
+ }
188
+ tcg_debug_assert(reg_off_last >= 0 && reg_off_last < reg_max);
189
+
190
+ info->reg_off_first[0] = reg_off_first;
191
+ info->mem_off_first[0] = (reg_off_first >> esz) * msize;
192
+ mem_off_last = (reg_off_last >> esz) * msize;
193
+
194
+ page_split = -(addr | TARGET_PAGE_MASK);
195
+ if (likely(mem_off_last + msize <= page_split)) {
196
+ /* The entire operation fits within a single page. */
197
+ info->reg_off_last[0] = reg_off_last;
198
+ return true;
115
+ return true;
199
+ }
116
+ }
200
+
117
+
201
+ info->page_split = page_split;
118
+ /*
202
+ elt_split = page_split / msize;
119
+ * To avoid excessive duplication of ops we implement shift
203
+ reg_off_split = elt_split << esz;
120
+ * by immediate using the variable shift operations.
204
+ mem_off_split = elt_split * msize;
121
+ */
205
+
122
+ constimm = tcg_const_i32(dup_const(a->size, a->shift));
206
+ /*
123
+
207
+ * This is the last full element on the first page, but it is not
124
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
208
+ * necessarily active. If there is no full element, i.e. the first
125
+ TCGv_i32 tmp = neon_load_reg(a->vm, pass);
209
+ * active element is the one that's split, this value remains -1.
126
+ fn(tmp, cpu_env, tmp, constimm);
210
+ * It is useful as iteration bounds.
127
+ neon_store_reg(a->vd, pass, tmp);
211
+ */
128
+ }
212
+ if (elt_split != 0) {
129
+ tcg_temp_free_i32(constimm);
213
+ info->reg_off_last[0] = reg_off_split - esize;
214
+ }
215
+
216
+ /* Determine if an unaligned element spans the pages. */
217
+ if (page_split % msize != 0) {
218
+ /* It is helpful to know if the split element is active. */
219
+ if ((vg[reg_off_split >> 6] >> (reg_off_split & 63)) & 1) {
220
+ info->reg_off_split = reg_off_split;
221
+ info->mem_off_split = mem_off_split;
222
+
223
+ if (reg_off_split == reg_off_last) {
224
+ /* The page crossing element is last. */
225
+ return true;
226
+ }
227
+ }
228
+ reg_off_split += esize;
229
+ mem_off_split += msize;
230
+ }
231
+
232
+ /*
233
+ * We do want the first active element on the second page, because
234
+ * this may affect the address reported in an exception.
235
+ */
236
+ reg_off_split = find_next_active(vg, reg_off_split, reg_max, esz);
237
+ tcg_debug_assert(reg_off_split <= reg_off_last);
238
+ info->reg_off_first[1] = reg_off_split;
239
+ info->mem_off_first[1] = (reg_off_split >> esz) * msize;
240
+ info->reg_off_last[1] = reg_off_last;
241
+ return true;
130
+ return true;
242
+}
131
+}
243
+
132
+
244
+/*
133
+#define DO_2SHIFT_ENV(INSN, FUNC) \
245
+ * Resolve the guest virtual addresses to info->page[].
134
+ static bool trans_##INSN##_64_2sh(DisasContext *s, arg_2reg_shift *a) \
246
+ * Control the generation of page faults with @fault. Return false if
135
+ { \
247
+ * there is no work to do, which can only happen with @fault == FAULT_NO.
136
+ return do_2shift_env_64(s, a, gen_helper_neon_##FUNC##64); \
248
+ */
137
+ } \
249
+static bool __attribute__((unused))
138
+ static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a) \
250
+sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault, CPUARMState *env,
139
+ { \
251
+ target_ulong addr, MMUAccessType access_type,
140
+ static NeonGenTwoOpEnvFn * const fns[] = { \
252
+ uintptr_t retaddr)
141
+ gen_helper_neon_##FUNC##8, \
253
+{
142
+ gen_helper_neon_##FUNC##16, \
254
+ int mmu_idx = cpu_mmu_index(env, false);
143
+ gen_helper_neon_##FUNC##32, \
255
+ int mem_off = info->mem_off_first[0];
144
+ }; \
256
+ bool nofault = fault == FAULT_NO;
145
+ assert(a->size < ARRAY_SIZE(fns)); \
257
+ bool have_work = true;
146
+ return do_2shift_env_32(s, a, fns[a->size]); \
258
+
147
+ }
259
+ if (!sve_probe_page(&info->page[0], nofault, env, addr, mem_off,
148
+
260
+ access_type, mmu_idx, retaddr)) {
149
+DO_2SHIFT_ENV(VQSHLU, qshlu_s)
261
+ /* No work to be done. */
150
+DO_2SHIFT_ENV(VQSHL_U, qshl_u)
262
+ return false;
151
+DO_2SHIFT_ENV(VQSHL_S, qshl_s)
263
+ }
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
264
+
153
index XXXXXXX..XXXXXXX 100644
265
+ if (likely(info->page_split < 0)) {
154
--- a/target/arm/translate.c
266
+ /* The entire operation was on the one page. */
155
+++ b/target/arm/translate.c
267
+ return true;
156
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
268
+ }
157
}
269
+
158
}
270
+ /*
159
271
+ * If the second page is invalid, then we want the fault address to be
160
-#define GEN_NEON_INTEGER_OP_ENV(name) do { \
272
+ * the first byte on that page which is accessed.
161
- switch ((size << 1) | u) { \
273
+ */
162
- case 0: \
274
+ if (info->mem_off_split >= 0) {
163
- gen_helper_neon_##name##_s8(tmp, cpu_env, tmp, tmp2); \
275
+ /*
164
- break; \
276
+ * There is an element split across the pages. The fault address
165
- case 1: \
277
+ * should be the first byte of the second page.
166
- gen_helper_neon_##name##_u8(tmp, cpu_env, tmp, tmp2); \
278
+ */
167
- break; \
279
+ mem_off = info->page_split;
168
- case 2: \
280
+ /*
169
- gen_helper_neon_##name##_s16(tmp, cpu_env, tmp, tmp2); \
281
+ * If the split element is also the first active element
170
- break; \
282
+ * of the vector, then: For first-fault we should continue
171
- case 3: \
283
+ * to generate faults for the second page. For no-fault,
172
- gen_helper_neon_##name##_u16(tmp, cpu_env, tmp, tmp2); \
284
+ * we have work only if the second page is valid.
173
- break; \
285
+ */
174
- case 4: \
286
+ if (info->mem_off_first[0] < info->mem_off_split) {
175
- gen_helper_neon_##name##_s32(tmp, cpu_env, tmp, tmp2); \
287
+ nofault = FAULT_FIRST;
176
- break; \
288
+ have_work = false;
177
- case 5: \
289
+ }
178
- gen_helper_neon_##name##_u32(tmp, cpu_env, tmp, tmp2); \
290
+ } else {
179
- break; \
291
+ /*
180
- default: return 1; \
292
+ * There is no element split across the pages. The fault address
181
- }} while (0)
293
+ * should be the first active element on the second page.
182
-
294
+ */
183
static TCGv_i32 neon_load_scratch(int scratch)
295
+ mem_off = info->mem_off_first[1];
184
{
296
+ /*
185
TCGv_i32 tmp = tcg_temp_new_i32();
297
+ * There must have been one active element on the first page,
186
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
298
+ * so we're out of first-fault territory.
187
int size;
299
+ */
188
int shift;
300
+ nofault = fault != FAULT_ALL;
189
int pass;
301
+ }
190
- int count;
302
+
191
int u;
303
+ have_work |= sve_probe_page(&info->page[1], nofault, env, addr, mem_off,
192
int vec_size;
304
+ access_type, mmu_idx, retaddr);
193
uint32_t imm;
305
+ return have_work;
194
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
306
+}
195
case 3: /* VRSRA */
307
+
196
case 4: /* VSRI */
308
/*
197
case 5: /* VSHL, VSLI */
309
* The result of tlb_vaddr_to_host for user-only is just g2h(x),
198
+ case 6: /* VQSHLU */
310
* which is always non-null. Elide the useless test.
199
+ case 7: /* VQSHL */
200
return 1; /* handled by decodetree */
201
default:
202
break;
203
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
204
size--;
205
}
206
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
207
- if (op < 8) {
208
- /* Shift by immediate:
209
- VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
210
- if (q && ((rd | rm) & 1)) {
211
- return 1;
212
- }
213
- if (!u && (op == 4 || op == 6)) {
214
- return 1;
215
- }
216
- /* Right shifts are encoded as N - shift, where N is the
217
- element size in bits. */
218
- if (op <= 4) {
219
- shift = shift - (1 << (size + 3));
220
- }
221
-
222
- if (size == 3) {
223
- count = q + 1;
224
- } else {
225
- count = q ? 4: 2;
226
- }
227
-
228
- /* To avoid excessive duplication of ops we implement shift
229
- * by immediate using the variable shift operations.
230
- */
231
- imm = dup_const(size, shift);
232
-
233
- for (pass = 0; pass < count; pass++) {
234
- if (size == 3) {
235
- neon_load_reg64(cpu_V0, rm + pass);
236
- tcg_gen_movi_i64(cpu_V1, imm);
237
- switch (op) {
238
- case 6: /* VQSHLU */
239
- gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
240
- cpu_V0, cpu_V1);
241
- break;
242
- case 7: /* VQSHL */
243
- if (u) {
244
- gen_helper_neon_qshl_u64(cpu_V0, cpu_env,
245
- cpu_V0, cpu_V1);
246
- } else {
247
- gen_helper_neon_qshl_s64(cpu_V0, cpu_env,
248
- cpu_V0, cpu_V1);
249
- }
250
- break;
251
- default:
252
- g_assert_not_reached();
253
- }
254
- neon_store_reg64(cpu_V0, rd + pass);
255
- } else { /* size < 3 */
256
- /* Operands in T0 and T1. */
257
- tmp = neon_load_reg(rm, pass);
258
- tmp2 = tcg_temp_new_i32();
259
- tcg_gen_movi_i32(tmp2, imm);
260
- switch (op) {
261
- case 6: /* VQSHLU */
262
- switch (size) {
263
- case 0:
264
- gen_helper_neon_qshlu_s8(tmp, cpu_env,
265
- tmp, tmp2);
266
- break;
267
- case 1:
268
- gen_helper_neon_qshlu_s16(tmp, cpu_env,
269
- tmp, tmp2);
270
- break;
271
- case 2:
272
- gen_helper_neon_qshlu_s32(tmp, cpu_env,
273
- tmp, tmp2);
274
- break;
275
- default:
276
- abort();
277
- }
278
- break;
279
- case 7: /* VQSHL */
280
- GEN_NEON_INTEGER_OP_ENV(qshl);
281
- break;
282
- default:
283
- g_assert_not_reached();
284
- }
285
- tcg_temp_free_i32(tmp2);
286
- neon_store_reg(rd, pass, tmp);
287
- }
288
- } /* for pass */
289
- } else if (op < 10) {
290
+ if (op < 10) {
291
/* Shift by immediate and narrow:
292
VSHRN, VRSHRN, VQSHRN, VQRSHRN. */
293
int input_unsigned = (op == 8) ? !u : u;
311
--
294
--
312
2.20.1
295
2.20.1
313
296
314
297
diff view generated by jsdifflib
Convert the Neon narrowing shifts where op==8 to decodetree:
 * VSHRN
 * VRSHRN
 * VQSHRUN
 * VQRSHRUN

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-6-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode       |  27 ++++++
 target/arm/translate-neon.inc.c | 167 ++++++++++++++++++++++++++++++++
 target/arm/translate.c          |   1 +
 3 files changed, 195 insertions(+)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
 @2reg_shl_b .... ... . . . 001 shift:3 .... .... 0 q:1 . . .... \
             &2reg_shift vm=%vm_dp vd=%vd_dp size=0
 
+# Narrowing right shifts: here the Q bit is part of the opcode decode
+@2reg_shrn_d .... ... . . . 1 ..... .... .... 0 . . . .... \
+             &2reg_shift vm=%vm_dp vd=%vd_dp size=3 q=0 \
+             shift=%neon_rshift_i5
+@2reg_shrn_s .... ... . . . 01 .... .... .... 0 . . . .... \
+             &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0 \
+             shift=%neon_rshift_i4
+@2reg_shrn_h .... ... . . . 001 ... .... .... 0 . . . .... \
+             &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
+             shift=%neon_rshift_i3
+
 VSHR_S_2sh       1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
 VSHR_S_2sh       1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
 VSHR_S_2sh       1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
@@ -XXX,XX +XXX,XX @@ VQSHL_U_64_2sh   1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_d
 VQSHL_U_2sh      1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_s
 VQSHL_U_2sh      1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_h
 VQSHL_U_2sh      1111 001 1 1 . ...... .... 0111 . . . 1 .... @2reg_shl_b
+
+VSHRN_64_2sh     1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_d
+VSHRN_32_2sh     1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_s
+VSHRN_16_2sh     1111 001 0 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
+
+VRSHRN_64_2sh    1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
+VRSHRN_32_2sh    1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
+VRSHRN_16_2sh    1111 001 0 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
+
+VQSHRUN_64_2sh   1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_d
+VQSHRUN_32_2sh   1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_s
+VQSHRUN_16_2sh   1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
+
+VQRSHRUN_64_2sh  1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
+VQRSHRUN_32_2sh  1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
+VQRSHRUN_16_2sh  1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
 DO_2SHIFT_ENV(VQSHLU, qshlu_s)
 DO_2SHIFT_ENV(VQSHL_U, qshl_u)
 DO_2SHIFT_ENV(VQSHL_S, qshl_s)
+
+static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
+                                NeonGenTwo64OpFn *shiftfn,
+                                NeonGenNarrowEnvFn *narrowfn)
+{
+    /* 2-reg-and-shift narrowing-shift operations, size == 3 case */
+    TCGv_i64 constimm, rm1, rm2;
+    TCGv_i32 rd;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if (a->vm & 1) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * This is always a right shift, and the shiftfn is always a
+     * left-shift helper, which thus needs the negated shift count.
+     */
+    constimm = tcg_const_i64(-a->shift);
+    rm1 = tcg_temp_new_i64();
+    rm2 = tcg_temp_new_i64();
+
+    /* Load both inputs first to avoid potential overwrite if rm == rd */
+    neon_load_reg64(rm1, a->vm);
+    neon_load_reg64(rm2, a->vm + 1);
+
+    shiftfn(rm1, rm1, constimm);
+    rd = tcg_temp_new_i32();
+    narrowfn(rd, cpu_env, rm1);
+    neon_store_reg(a->vd, 0, rd);
+
+    shiftfn(rm2, rm2, constimm);
+    rd = tcg_temp_new_i32();
+    narrowfn(rd, cpu_env, rm2);
+    neon_store_reg(a->vd, 1, rd);
+
+    tcg_temp_free_i64(rm1);
+    tcg_temp_free_i64(rm2);
+    tcg_temp_free_i64(constimm);
+
+    return true;
+}
+
+static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
+                                NeonGenTwoOpFn *shiftfn,
+                                NeonGenNarrowEnvFn *narrowfn)
+{
+    /* 2-reg-and-shift narrowing-shift operations, size < 3 case */
+    TCGv_i32 constimm, rm1, rm2, rm3, rm4;
+    TCGv_i64 rtmp;
+    uint32_t imm;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if (a->vm & 1) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * This is always a right shift, and the shiftfn is always a
+     * left-shift helper, which thus needs the negated shift count
+     * duplicated into each lane of the immediate value.
+     */
+    if (a->size == 1) {
+        imm = (uint16_t)(-a->shift);
+        imm |= imm << 16;
+    } else {
+        /* size == 2 */
+        imm = -a->shift;
+    }
+    constimm = tcg_const_i32(imm);
+
+    /* Load all inputs first to avoid potential overwrite */
+    rm1 = neon_load_reg(a->vm, 0);
+    rm2 = neon_load_reg(a->vm, 1);
+    rm3 = neon_load_reg(a->vm + 1, 0);
+    rm4 = neon_load_reg(a->vm + 1, 1);
+    rtmp = tcg_temp_new_i64();
+
+    shiftfn(rm1, rm1, constimm);
+    shiftfn(rm2, rm2, constimm);
+
+    tcg_gen_concat_i32_i64(rtmp, rm1, rm2);
+    tcg_temp_free_i32(rm2);
+
+    narrowfn(rm1, cpu_env, rtmp);
+    neon_store_reg(a->vd, 0, rm1);
+
+    shiftfn(rm3, rm3, constimm);
+    shiftfn(rm4, rm4, constimm);
+    tcg_temp_free_i32(constimm);
+
+    tcg_gen_concat_i32_i64(rtmp, rm3, rm4);
+    tcg_temp_free_i32(rm4);
+
+    narrowfn(rm3, cpu_env, rtmp);
+    tcg_temp_free_i64(rtmp);
+    neon_store_reg(a->vd, 1, rm3);
+    return true;
+}
+
+#define DO_2SN_64(INSN, FUNC, NARROWFUNC)                               \
+    static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a)  \
+    {                                                                   \
+        return do_2shift_narrow_64(s, a, FUNC, NARROWFUNC);             \
+    }
+#define DO_2SN_32(INSN, FUNC, NARROWFUNC)                               \
+    static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a)  \
+    {                                                                   \
+        return do_2shift_narrow_32(s, a, FUNC, NARROWFUNC);             \
+    }
+
+static void gen_neon_narrow_u32(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
+{
+    tcg_gen_extrl_i64_i32(dest, src);
+}
+
+static void gen_neon_narrow_u16(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
+{
+    gen_helper_neon_narrow_u16(dest, src);
+}
+
+static void gen_neon_narrow_u8(TCGv_i32 dest, TCGv_ptr env, TCGv_i64 src)
+{
+    gen_helper_neon_narrow_u8(dest, src);
+}
+
+DO_2SN_64(VSHRN_64, gen_ushl_i64, gen_neon_narrow_u32)
+DO_2SN_32(VSHRN_32, gen_ushl_i32, gen_neon_narrow_u16)
+DO_2SN_32(VSHRN_16, gen_helper_neon_shl_u16, gen_neon_narrow_u8)
+
+DO_2SN_64(VRSHRN_64, gen_helper_neon_rshl_u64, gen_neon_narrow_u32)
+DO_2SN_32(VRSHRN_32, gen_helper_neon_rshl_u32, gen_neon_narrow_u16)
+DO_2SN_32(VRSHRN_16, gen_helper_neon_rshl_u16, gen_neon_narrow_u8)
+
+DO_2SN_64(VQSHRUN_64, gen_sshl_i64, gen_helper_neon_unarrow_sat32)
+DO_2SN_32(VQSHRUN_32, gen_sshl_i32, gen_helper_neon_unarrow_sat16)
+DO_2SN_32(VQSHRUN_16, gen_helper_neon_shl_s16, gen_helper_neon_unarrow_sat8)
+
+DO_2SN_64(VQRSHRUN_64, gen_helper_neon_rshl_s64, gen_helper_neon_unarrow_sat32)
+DO_2SN_32(VQRSHRUN_32, gen_helper_neon_rshl_s32, gen_helper_neon_unarrow_sat16)
+DO_2SN_32(VQRSHRUN_16, gen_helper_neon_rshl_s16, gen_helper_neon_unarrow_sat8)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             case 5: /* VSHL, VSLI */
             case 6: /* VQSHLU */
             case 7: /* VQSHL */
+            case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
                 return 1; /* handled by decodetree */
             default:
                 break;
--
2.20.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
Convert the remaining Neon narrowing shifts to decodetree:
2
2
* VQSHRN
3
None of the sve helpers use TCGMemOpIdx any longer, so we can
3
* VQRSHRN
4
stop passing it.
4
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200508154359.7494-20-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-7-peter.maydell@linaro.org
10
---
8
---
11
target/arm/internals.h | 5 -----
9
target/arm/neon-dp.decode | 20 ++++++
12
target/arm/sve_helper.c | 14 +++++++-------
10
target/arm/translate-neon.inc.c | 15 +++++
13
target/arm/translate-sve.c | 17 +++--------------
11
target/arm/translate.c | 110 +-------------------------------
14
3 files changed, 10 insertions(+), 26 deletions(-)
12
3 files changed, 37 insertions(+), 108 deletions(-)
15
13
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/internals.h
16
--- a/target/arm/neon-dp.decode
19
+++ b/target/arm/internals.h
17
+++ b/target/arm/neon-dp.decode
20
@@ -XXX,XX +XXX,XX @@ static inline int arm_num_ctx_cmps(ARMCPU *cpu)
18
@@ -XXX,XX +XXX,XX @@ VQSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 0 . 1 .... @2reg_shrn_h
19
VQRSHRUN_64_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_d
20
VQRSHRUN_32_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_s
21
VQRSHRUN_16_2sh 1111 001 1 1 . ...... .... 1000 . 1 . 1 .... @2reg_shrn_h
22
+
23
+# VQSHRN with signed input
24
+VQSHRN_S64_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_d
25
+VQSHRN_S32_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_s
26
+VQSHRN_S16_2sh 1111 001 0 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
27
+
28
+# VQRSHRN with signed input
29
+VQRSHRN_S64_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
30
+VQRSHRN_S32_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
31
+VQRSHRN_S16_2sh 1111 001 0 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
32
+
33
+# VQSHRN with unsigned input
34
+VQSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_d
35
+VQSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_s
36
+VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
37
+
38
+# VQRSHRN with unsigned input
39
+VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
40
+VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
41
+VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
42
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/translate-neon.inc.c
45
+++ b/target/arm/translate-neon.inc.c
46
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRUN_16, gen_helper_neon_shl_s16, gen_helper_neon_unarrow_sat8)
47
DO_2SN_64(VQRSHRUN_64, gen_helper_neon_rshl_s64, gen_helper_neon_unarrow_sat32)
48
DO_2SN_32(VQRSHRUN_32, gen_helper_neon_rshl_s32, gen_helper_neon_unarrow_sat16)
49
DO_2SN_32(VQRSHRUN_16, gen_helper_neon_rshl_s16, gen_helper_neon_unarrow_sat8)
50
+DO_2SN_64(VQSHRN_S64, gen_sshl_i64, gen_helper_neon_narrow_sat_s32)
51
+DO_2SN_32(VQSHRN_S32, gen_sshl_i32, gen_helper_neon_narrow_sat_s16)
52
+DO_2SN_32(VQSHRN_S16, gen_helper_neon_shl_s16, gen_helper_neon_narrow_sat_s8)
53
+
54
+DO_2SN_64(VQRSHRN_S64, gen_helper_neon_rshl_s64, gen_helper_neon_narrow_sat_s32)
55
+DO_2SN_32(VQRSHRN_S32, gen_helper_neon_rshl_s32, gen_helper_neon_narrow_sat_s16)
56
+DO_2SN_32(VQRSHRN_S16, gen_helper_neon_rshl_s16, gen_helper_neon_narrow_sat_s8)
57
+
58
+DO_2SN_64(VQSHRN_U64, gen_ushl_i64, gen_helper_neon_narrow_sat_u32)
59
+DO_2SN_32(VQSHRN_U32, gen_ushl_i32, gen_helper_neon_narrow_sat_u16)
60
+DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
61
+
62
+DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
63
+DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
64
+DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
65
diff --git a/target/arm/translate.c b/target/arm/translate.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate.c
68
+++ b/target/arm/translate.c
69
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_unarrow_sats(int size, TCGv_i32 dest, TCGv_i64 src)
21
}
70
}
22
}
71
}
23
72
24
-/* Note make_memop_idx reserves 4 bits for mmu_idx, and MO_BSWAP is bit 3.
73
-static inline void gen_neon_shift_narrow(int size, TCGv_i32 var, TCGv_i32 shift,
25
- * Thus a TCGMemOpIdx, without any MO_ALIGN bits, fits in 8 bits.
74
- int q, int u)
26
- */
27
-#define MEMOPIDX_SHIFT 8
28
-
29
/**
30
* v7m_using_psp: Return true if using process stack pointer
31
* Return true if the CPU is currently using the process stack
32
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/sve_helper.c
35
+++ b/target/arm/sve_helper.c
36
@@ -XXX,XX +XXX,XX @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
37
sve_ldst1_host_fn *host_fn,
38
sve_ldst1_tlb_fn *tlb_fn)
39
{
40
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
41
+ const unsigned rd = simd_data(desc);
42
const intptr_t reg_max = simd_oprsz(desc);
43
intptr_t reg_off, reg_last, mem_off;
44
SVEContLdSt info;
45
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
46
sve_ldst1_host_fn *host_fn,
47
sve_ldst1_tlb_fn *tlb_fn)
48
{
49
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
50
+ const unsigned rd = simd_data(desc);
51
void *vd = &env->vfp.zregs[rd];
52
const intptr_t reg_max = simd_oprsz(desc);
53
intptr_t reg_off, mem_off, reg_last;
54
@@ -XXX,XX +XXX,XX @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr, uint32_t desc,
55
sve_ldst1_host_fn *host_fn,
56
sve_ldst1_tlb_fn *tlb_fn)
57
{
58
- const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
59
+ const unsigned rd = simd_data(desc);
60
const intptr_t reg_max = simd_oprsz(desc);
61
intptr_t reg_off, reg_last, mem_off;
62
SVEContLdSt info;
63
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
64
sve_ldst1_host_fn *host_fn,
65
sve_ldst1_tlb_fn *tlb_fn)
66
{
67
- const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
68
const int mmu_idx = cpu_mmu_index(env, false);
69
const intptr_t reg_max = simd_oprsz(desc);
70
+ const int scale = simd_data(desc);
71
ARMVectorReg scratch;
72
intptr_t reg_off;
73
SVEHostPage info, info2;
74
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
75
sve_ldst1_tlb_fn *tlb_fn)
76
{
77
const int mmu_idx = cpu_mmu_index(env, false);
78
- const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
79
+ const intptr_t reg_max = simd_oprsz(desc);
80
+ const int scale = simd_data(desc);
81
const int esize = 1 << esz;
82
const int msize = 1 << msz;
83
- const intptr_t reg_max = simd_oprsz(desc);
84
intptr_t reg_off;
85
SVEHostPage info;
86
target_ulong addr, in_page;
87
@@ -XXX,XX +XXX,XX @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
88
sve_ldst1_host_fn *host_fn,
89
sve_ldst1_tlb_fn *tlb_fn)
90
{
91
- const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
92
const int mmu_idx = cpu_mmu_index(env, false);
93
const intptr_t reg_max = simd_oprsz(desc);
94
+ const int scale = simd_data(desc);
95
void *host[ARM_MAX_VQ * 4];
96
intptr_t reg_off, i;
97
SVEHostPage info, info2;
98
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/translate-sve.c
101
+++ b/target/arm/translate-sve.c
102
@@ -XXX,XX +XXX,XX @@ static const uint8_t dtype_esz[16] = {
103
3, 2, 1, 3
104
};
105
106
-static TCGMemOpIdx sve_memopidx(DisasContext *s, int dtype)
107
-{
75
-{
108
- return make_memop_idx(s->be_data | dtype_mop[dtype], get_mem_index(s));
76
- if (q) {
77
- if (u) {
78
- switch (size) {
79
- case 1: gen_helper_neon_rshl_u16(var, var, shift); break;
80
- case 2: gen_helper_neon_rshl_u32(var, var, shift); break;
81
- default: abort();
82
- }
83
- } else {
84
- switch (size) {
85
- case 1: gen_helper_neon_rshl_s16(var, var, shift); break;
86
- case 2: gen_helper_neon_rshl_s32(var, var, shift); break;
87
- default: abort();
88
- }
89
- }
90
- } else {
91
- if (u) {
92
- switch (size) {
93
- case 1: gen_helper_neon_shl_u16(var, var, shift); break;
94
- case 2: gen_ushl_i32(var, var, shift); break;
95
- default: abort();
96
- }
97
- } else {
98
- switch (size) {
99
- case 1: gen_helper_neon_shl_s16(var, var, shift); break;
100
- case 2: gen_sshl_i32(var, var, shift); break;
101
- default: abort();
102
- }
103
- }
104
- }
109
-}
105
-}
110
-
106
-
111
static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
107
static inline void gen_neon_widen(TCGv_i64 dest, TCGv_i32 src, int size, int u)
112
int dtype, gen_helper_gvec_mem *fn)
113
{
108
{
114
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
109
if (u) {
115
* registers as pointers, so encode the regno into the data field.
110
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
116
* For consistency, do this even for LD1.
111
case 6: /* VQSHLU */
117
*/
112
case 7: /* VQSHL */
118
- desc = sve_memopidx(s, dtype);
113
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
119
- desc |= zt << MEMOPIDX_SHIFT;
114
+ case 9: /* VQSHRN, VQRSHRN */
120
- desc = simd_desc(vsz, vsz, desc);
115
return 1; /* handled by decodetree */
121
+ desc = simd_desc(vsz, vsz, zt);
116
default:
122
t_desc = tcg_const_i32(desc);
117
break;
123
t_pg = tcg_temp_new_ptr();
118
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
124
119
size--;
125
@@ -XXX,XX +XXX,XX @@ static void do_ldrq(DisasContext *s, int zt, int pg, TCGv_i64 addr, int msz)
120
}
126
int desc, poff;
121
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
127
122
- if (op < 10) {
128
/* Load the first quadword using the normal predicated load helpers. */
123
- /* Shift by immediate and narrow:
129
- desc = sve_memopidx(s, msz_dtype(s, msz));
124
- VSHRN, VRSHRN, VQSHRN, VQRSHRN. */
130
- desc |= zt << MEMOPIDX_SHIFT;
125
- int input_unsigned = (op == 8) ? !u : u;
131
- desc = simd_desc(16, 16, desc);
126
- if (rm & 1) {
132
+ desc = simd_desc(16, 16, zt);
127
- return 1;
133
t_desc = tcg_const_i32(desc);
128
- }
134
129
- shift = shift - (1 << (size + 3));
135
poff = pred_full_reg_offset(s, pg);
130
- size++;
136
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm,
131
- if (size == 3) {
137
TCGv_i32 t_desc;
132
- tmp64 = tcg_const_i64(shift);
138
int desc;
133
- neon_load_reg64(cpu_V0, rm);
139
134
- neon_load_reg64(cpu_V1, rm + 1);
140
- desc = sve_memopidx(s, msz_dtype(s, msz));
135
- for (pass = 0; pass < 2; pass++) {
141
- desc |= scale << MEMOPIDX_SHIFT;
136
- TCGv_i64 in;
142
- desc = simd_desc(vsz, vsz, desc);
137
- if (pass == 0) {
143
+ desc = simd_desc(vsz, vsz, scale);
138
- in = cpu_V0;
144
t_desc = tcg_const_i32(desc);
139
- } else {
145
140
- in = cpu_V1;
146
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
141
- }
142
- if (q) {
143
- if (input_unsigned) {
144
- gen_helper_neon_rshl_u64(cpu_V0, in, tmp64);
145
- } else {
146
- gen_helper_neon_rshl_s64(cpu_V0, in, tmp64);
147
- }
148
- } else {
149
- if (input_unsigned) {
150
- gen_ushl_i64(cpu_V0, in, tmp64);
151
- } else {
152
- gen_sshl_i64(cpu_V0, in, tmp64);
153
- }
154
- }
155
- tmp = tcg_temp_new_i32();
156
- gen_neon_narrow_op(op == 8, u, size - 1, tmp, cpu_V0);
157
- neon_store_reg(rd, pass, tmp);
158
- } /* for pass */
159
- tcg_temp_free_i64(tmp64);
160
- } else {
161
- if (size == 1) {
162
- imm = (uint16_t)shift;
163
- imm |= imm << 16;
164
- } else {
165
- /* size == 2 */
166
- imm = (uint32_t)shift;
167
- }
168
- tmp2 = tcg_const_i32(imm);
169
- tmp4 = neon_load_reg(rm + 1, 0);
170
- tmp5 = neon_load_reg(rm + 1, 1);
171
- for (pass = 0; pass < 2; pass++) {
172
- if (pass == 0) {
173
- tmp = neon_load_reg(rm, 0);
174
- } else {
175
- tmp = tmp4;
176
- }
177
- gen_neon_shift_narrow(size, tmp, tmp2, q,
178
- input_unsigned);
179
- if (pass == 0) {
180
- tmp3 = neon_load_reg(rm, 1);
181
- } else {
182
- tmp3 = tmp5;
183
- }
184
- gen_neon_shift_narrow(size, tmp3, tmp2, q,
185
- input_unsigned);
186
- tcg_gen_concat_i32_i64(cpu_V0, tmp, tmp3);
187
- tcg_temp_free_i32(tmp);
188
- tcg_temp_free_i32(tmp3);
189
- tmp = tcg_temp_new_i32();
190
- gen_neon_narrow_op(op == 8, u, size - 1, tmp, cpu_V0);
191
- neon_store_reg(rd, pass, tmp);
192
- } /* for pass */
193
- tcg_temp_free_i32(tmp2);
194
- }
195
- } else if (op == 10) {
196
+ if (op == 10) {
197
/* VSHLL, VMOVL */
198
if (q || (rd & 1)) {
199
return 1;
147
--
200
--
148
2.20.1
201
2.20.1
149
202
150
203
diff view generated by jsdifflib
1
From: Joel Stanley <joel@jms.id.au>
1
Convert the VSHLL and VMOVL insns from the 2-reg-shift group
2
2
to decodetree. Since the loop always has two passes, we unroll
3
This is a boot stub that is similar to the code u-boot runs, allowing
3
it to avoid the awkward reassignment of one TCGv to another.
4
the kernel to boot the secondary CPU.
4
5
6
u-boot works as follows:
7
8
1. Initialises the SMP mailbox area in the SCU at 0x1e6e2180 with default values
9
10
2. Copies a stub named 'mailbox_insn' from flash to the SCU, just above the
11
mailbox area
12
13
3. Sets AST_SMP_MBOX_FIELD_READY to a magic value to indicate the
14
secondary can begin execution from the stub
15
16
4. The stub waits until the AST_SMP_MBOX_FIELD_GOSIGN register is set to
17
a magic value
18
19
5. Jumps to the address in AST_SMP_MBOX_FIELD_ENTRY, starting Linux
20
21
Linux indicates it is ready by writing the address of its entrypoint
22
function to AST_SMP_MBOX_FIELD_ENTRY and the 'go' magic number to
23
AST_SMP_MBOX_FIELD_GOSIGN. The secondary CPU sees this at step 4 and
24
breaks out of it's loop.
25
26
To be compatible, a fixed qemu stub is loaded into the mailbox area. As
27
qemu can ensure the stub is loaded before execution starts, we do not
28
need to emulate the AST_SMP_MBOX_FIELD_READY behaviour of u-boot. The
29
secondary CPU's program counter points to the beginning of the stub,
30
allowing qemu to start secondaries at step four.
31
32
Reboot behaviour is preserved by resetting AST_SMP_MBOX_FIELD_GOSIGN
33
when the secondaries are reset.
34
35
This is only configured when the system is booted with -kernel and qemu
36
does not execute u-boot first.
37
38
Reviewed-by: Cédric Le Goater <clg@kaod.org>
39
Tested-by: Cédric Le Goater <clg@kaod.org>
40
Signed-off-by: Joel Stanley <joel@jms.id.au>
41
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200522145520.6778-8-peter.maydell@linaro.org
42
---
8
---
43
hw/arm/aspeed.c | 65 +++++++++++++++++++++++++++++++++++++++++++++++++
9
target/arm/neon-dp.decode | 16 +++++++
44
1 file changed, 65 insertions(+)
10
target/arm/translate-neon.inc.c | 81 +++++++++++++++++++++++++++++++++
45
11
target/arm/translate.c | 46 +------------------
46
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
12
3 files changed, 99 insertions(+), 44 deletions(-)
13
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
47
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/aspeed.c
16
--- a/target/arm/neon-dp.decode
49
+++ b/hw/arm/aspeed.c
17
+++ b/target/arm/neon-dp.decode
50
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps max_ram_ops = {
18
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
51
.endianness = DEVICE_NATIVE_ENDIAN,
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0 \
52
};
20
shift=%neon_rshift_i3
53
21
54
+#define AST_SMP_MAILBOX_BASE 0x1e6e2180
22
+# Long left shifts: again Q is part of opcode decode
55
+#define AST_SMP_MBOX_FIELD_ENTRY (AST_SMP_MAILBOX_BASE + 0x0)
23
+@2reg_shll_s .... ... . . . 1 shift:5 .... .... 0 . . . .... \
56
+#define AST_SMP_MBOX_FIELD_GOSIGN (AST_SMP_MAILBOX_BASE + 0x4)
24
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=2 q=0
57
+#define AST_SMP_MBOX_FIELD_READY (AST_SMP_MAILBOX_BASE + 0x8)
25
+@2reg_shll_h .... ... . . . 01 shift:4 .... .... 0 . . . .... \
58
+#define AST_SMP_MBOX_FIELD_POLLINSN (AST_SMP_MAILBOX_BASE + 0xc)
26
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=1 q=0
59
+#define AST_SMP_MBOX_CODE (AST_SMP_MAILBOX_BASE + 0x10)
27
+@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
60
+#define AST_SMP_MBOX_GOSIGN 0xabbaab00
28
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
61
+
29
+
62
+static void aspeed_write_smpboot(ARMCPU *cpu,
30
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
63
+ const struct arm_boot_info *info)
31
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
32
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
33
@@ -XXX,XX +XXX,XX @@ VQSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 0 . 1 .... @2reg_shrn_h
34
VQRSHRN_U64_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_d
35
VQRSHRN_U32_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_s
36
VQRSHRN_U16_2sh 1111 001 1 1 . ...... .... 1001 . 1 . 1 .... @2reg_shrn_h
37
+
38
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
39
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
40
+VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
41
+
42
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
43
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
44
+VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
45
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate-neon.inc.c
48
+++ b/target/arm/translate-neon.inc.c
49
@@ -XXX,XX +XXX,XX @@ DO_2SN_32(VQSHRN_U16, gen_helper_neon_shl_u16, gen_helper_neon_narrow_sat_u8)
50
DO_2SN_64(VQRSHRN_U64, gen_helper_neon_rshl_u64, gen_helper_neon_narrow_sat_u32)
51
DO_2SN_32(VQRSHRN_U32, gen_helper_neon_rshl_u32, gen_helper_neon_narrow_sat_u16)
52
DO_2SN_32(VQRSHRN_U16, gen_helper_neon_rshl_u16, gen_helper_neon_narrow_sat_u8)
53
+
54
+static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
55
+ NeonGenWidenFn *widenfn, bool u)
64
+{
56
+{
65
+ static const uint32_t poll_mailbox_ready[] = {
57
+ TCGv_i64 tmp;
66
+ /*
58
+ TCGv_i32 rm0, rm1;
67
+ * r2 = per-cpu go sign value
59
+ uint64_t widen_mask = 0;
68
+ * r1 = AST_SMP_MBOX_FIELD_ENTRY
60
+
69
+ * r0 = AST_SMP_MBOX_FIELD_GOSIGN
61
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
70
+ */
62
+ return false;
71
+ 0xee100fb0, /* mrc p15, 0, r0, c0, c0, 5 */
63
+ }
72
+ 0xe21000ff, /* ands r0, r0, #255 */
64
+
73
+ 0xe59f201c, /* ldr r2, [pc, #28] */
65
+ /* UNDEF accesses to D16-D31 if they don't exist. */
74
+ 0xe1822000, /* orr r2, r2, r0 */
66
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
75
+
67
+ ((a->vd | a->vm) & 0x10)) {
76
+ 0xe59f1018, /* ldr r1, [pc, #24] */
68
+ return false;
77
+ 0xe59f0018, /* ldr r0, [pc, #24] */
69
+ }
78
+
70
+
79
+ 0xe320f002, /* wfe */
71
+ if (a->vd & 1) {
80
+ 0xe5904000, /* ldr r4, [r0] */
72
+ return false;
81
+ 0xe1520004, /* cmp r2, r4 */
73
+ }
82
+ 0x1afffffb, /* bne <wfe> */
74
+
83
+ 0xe591f000, /* ldr pc, [r1] */
75
+ if (!vfp_access_check(s)) {
84
+ AST_SMP_MBOX_GOSIGN,
76
+ return true;
85
+ AST_SMP_MBOX_FIELD_ENTRY,
77
+ }
86
+ AST_SMP_MBOX_FIELD_GOSIGN,
78
+
79
+ /*
80
+ * This is a widen-and-shift operation. The shift is always less
81
+ * than the width of the source type, so after widening the input
82
+ * vector we can simply shift the whole 64-bit widened register,
83
+ * and then clear the potential overflow bits resulting from left
84
+ * bits of the narrow input appearing as right bits of the left
85
+ * neighbour narrow input. Calculate a mask of bits to clear.
86
+ */
87
+ if ((a->shift != 0) && (a->size < 2 || u)) {
88
+ int esize = 8 << a->size;
89
+ widen_mask = MAKE_64BIT_MASK(0, esize);
90
+ widen_mask >>= esize - a->shift;
91
+ widen_mask = dup_const(a->size + 1, widen_mask);
92
+ }
93
+
94
+ rm0 = neon_load_reg(a->vm, 0);
95
+ rm1 = neon_load_reg(a->vm, 1);
96
+ tmp = tcg_temp_new_i64();
97
+
98
+ widenfn(tmp, rm0);
99
+ if (a->shift != 0) {
100
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
101
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
102
+ }
103
+ neon_store_reg64(tmp, a->vd);
104
+
105
+ widenfn(tmp, rm1);
106
+ if (a->shift != 0) {
107
+ tcg_gen_shli_i64(tmp, tmp, a->shift);
108
+ tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
109
+ }
110
+ neon_store_reg64(tmp, a->vd + 1);
111
+ tcg_temp_free_i64(tmp);
112
+ return true;
113
+}
114
+
115
+static bool trans_VSHLL_S_2sh(DisasContext *s, arg_2reg_shift *a)
116
+{
117
+ NeonGenWidenFn *widenfn[] = {
118
+ gen_helper_neon_widen_s8,
119
+ gen_helper_neon_widen_s16,
120
+ tcg_gen_ext_i32_i64,
87
+ };
121
+ };
88
+
122
+ return do_vshll_2sh(s, a, widenfn[a->size], false);
89
+ rom_add_blob_fixed("aspeed.smpboot", poll_mailbox_ready,
90
+ sizeof(poll_mailbox_ready),
91
+ info->smp_loader_start);
92
+}
123
+}
93
+
124
+
94
+static void aspeed_reset_secondary(ARMCPU *cpu,
125
+static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
95
+ const struct arm_boot_info *info)
96
+{
126
+{
97
+ AddressSpace *as = arm_boot_address_space(cpu, info);
127
+ NeonGenWidenFn *widenfn[] = {
98
+ CPUState *cs = CPU(cpu);
128
+ gen_helper_neon_widen_u8,
99
+
129
+ gen_helper_neon_widen_u16,
100
+ /* info->smp_bootreg_addr */
130
+ tcg_gen_extu_i32_i64,
101
+ address_space_stl_notdirty(as, AST_SMP_MBOX_FIELD_GOSIGN, 0,
131
+ };
102
+ MEMTXATTRS_UNSPECIFIED, NULL);
132
+ return do_vshll_2sh(s, a, widenfn[a->size], true);
103
+ cpu_set_pc(cs, info->smp_loader_start);
104
+}
133
+}
105
+
134
diff --git a/target/arm/translate.c b/target/arm/translate.c
106
#define FIRMWARE_ADDR 0x0
135
index XXXXXXX..XXXXXXX 100644
107
136
--- a/target/arm/translate.c
108
static void write_boot_rom(DriveInfo *dinfo, hwaddr addr, size_t rom_size,
137
+++ b/target/arm/translate.c
109
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_init(MachineState *machine)
138
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
110
}
139
case 7: /* VQSHL */
111
}
140
case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
112
141
case 9: /* VQSHRN, VQRSHRN */
113
+ if (machine->kernel_filename && bmc->soc.num_cpus > 1) {
142
+ case 10: /* VSHLL, including VMOVL */
114
+ /* With no u-boot we must set up a boot stub for the secondary CPU */
143
return 1; /* handled by decodetree */
115
+ MemoryRegion *smpboot = g_new(MemoryRegion, 1);
144
default:
116
+ memory_region_init_ram(smpboot, OBJECT(bmc), "aspeed.smpboot",
145
break;
117
+ 0x80, &error_abort);
146
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
118
+ memory_region_add_subregion(get_system_memory(),
147
size--;
119
+ AST_SMP_MAILBOX_BASE, smpboot);
148
}
120
+
149
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
121
+ aspeed_board_binfo.write_secondary_boot = aspeed_write_smpboot;
150
- if (op == 10) {
122
+ aspeed_board_binfo.secondary_cpu_reset_hook = aspeed_reset_secondary;
151
- /* VSHLL, VMOVL */
123
+ aspeed_board_binfo.smp_loader_start = AST_SMP_MBOX_CODE;
152
- if (q || (rd & 1)) {
124
+ }
153
- return 1;
125
+
154
- }
126
aspeed_board_binfo.ram_size = ram_size;
155
- tmp = neon_load_reg(rm, 0);
127
aspeed_board_binfo.loader_start = sc->memmap[ASPEED_SDRAM];
156
- tmp2 = neon_load_reg(rm, 1);
128
aspeed_board_binfo.nb_cpus = bmc->soc.num_cpus;
157
- for (pass = 0; pass < 2; pass++) {
158
- if (pass == 1)
159
- tmp = tmp2;
160
-
161
- gen_neon_widen(cpu_V0, tmp, size, u);
162
-
163
- if (shift != 0) {
164
- /* The shift is less than the width of the source
165
- type, so we can just shift the whole register. */
166
- tcg_gen_shli_i64(cpu_V0, cpu_V0, shift);
167
- /* Widen the result of shift: we need to clear
168
- * the potential overflow bits resulting from
169
- * left bits of the narrow input appearing as
170
- * right bits of left the neighbour narrow
171
- * input. */
172
- if (size < 2 || !u) {
173
- uint64_t imm64;
174
- if (size == 0) {
175
- imm = (0xffu >> (8 - shift));
176
- imm |= imm << 16;
177
- } else if (size == 1) {
178
- imm = 0xffff >> (16 - shift);
179
- } else {
180
- /* size == 2 */
181
- imm = 0xffffffff >> (32 - shift);
182
- }
183
- if (size < 2) {
184
- imm64 = imm | (((uint64_t)imm) << 32);
185
- } else {
186
- imm64 = imm;
187
- }
188
- tcg_gen_andi_i64(cpu_V0, cpu_V0, ~imm64);
189
- }
190
- }
191
- neon_store_reg64(cpu_V0, rd + pass);
192
- }
193
- } else if (op >= 14) {
194
+ if (op >= 14) {
195
/* VCVT fixed-point. */
196
TCGv_ptr fpst;
197
TCGv_i32 shiftv;
129
--
198
--
130
2.20.1
199
2.20.1
131
200
132
201
diff view generated by jsdifflib
Deleted patch
1
From: Joel Stanley <joel@jms.id.au>
2
1
3
There are minimal differences from Qemu's point of view between the A0
4
and A1 silicon revisions.
5
6
As the A1 exercises different code paths in u-boot it is desirable to
7
emulate that instead.
8
9
Signed-off-by: Joel Stanley <joel@jms.id.au>
10
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
11
Reviewed-by: Cédric Le Goater <clg@kaod.org>
12
Message-id: 20200504093703.261135-1-joel@jms.id.au
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
include/hw/misc/aspeed_scu.h | 1 +
16
hw/arm/aspeed.c | 8 ++++----
17
hw/arm/aspeed_ast2600.c | 6 +++---
18
hw/misc/aspeed_scu.c | 11 +++++------
19
4 files changed, 13 insertions(+), 13 deletions(-)
20
21
diff --git a/include/hw/misc/aspeed_scu.h b/include/hw/misc/aspeed_scu.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/misc/aspeed_scu.h
24
+++ b/include/hw/misc/aspeed_scu.h
25
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSCUState {
26
#define AST2500_A0_SILICON_REV 0x04000303U
27
#define AST2500_A1_SILICON_REV 0x04010303U
28
#define AST2600_A0_SILICON_REV 0x05000303U
29
+#define AST2600_A1_SILICON_REV 0x05010303U
30
31
#define ASPEED_IS_AST2500(si_rev) ((((si_rev) >> 24) & 0xff) == 0x04)
32
33
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/arm/aspeed.c
36
+++ b/hw/arm/aspeed.c
37
@@ -XXX,XX +XXX,XX @@ struct AspeedBoardState {
38
39
/* Tacoma hardware value */
40
#define TACOMA_BMC_HW_STRAP1 0x00000000
41
-#define TACOMA_BMC_HW_STRAP2 0x00000000
42
+#define TACOMA_BMC_HW_STRAP2 0x00000040
43
44
/*
45
* The max ram region is for firmwares that scan the address space
46
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_ast2600_evb_class_init(ObjectClass *oc, void *data)
47
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
48
49
mc->desc = "Aspeed AST2600 EVB (Cortex A7)";
50
- amc->soc_name = "ast2600-a0";
51
+ amc->soc_name = "ast2600-a1";
52
amc->hw_strap1 = AST2600_EVB_HW_STRAP1;
53
amc->hw_strap2 = AST2600_EVB_HW_STRAP2;
54
amc->fmc_model = "w25q512jv";
55
@@ -XXX,XX +XXX,XX @@ static void aspeed_machine_tacoma_class_init(ObjectClass *oc, void *data)
56
MachineClass *mc = MACHINE_CLASS(oc);
57
AspeedMachineClass *amc = ASPEED_MACHINE_CLASS(oc);
58
59
- mc->desc = "Aspeed AST2600 EVB (Cortex A7)";
60
- amc->soc_name = "ast2600-a0";
61
+ mc->desc = "OpenPOWER Tacoma BMC (Cortex A7)";
62
+ amc->soc_name = "ast2600-a1";
63
amc->hw_strap1 = TACOMA_BMC_HW_STRAP1;
64
amc->hw_strap2 = TACOMA_BMC_HW_STRAP2;
65
amc->fmc_model = "mx66l1g45g";
66
diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/hw/arm/aspeed_ast2600.c
69
+++ b/hw/arm/aspeed_ast2600.c
70
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_class_init(ObjectClass *oc, void *data)
71
72
dc->realize = aspeed_soc_ast2600_realize;
73
74
- sc->name = "ast2600-a0";
75
+ sc->name = "ast2600-a1";
76
sc->cpu_type = ARM_CPU_TYPE_NAME("cortex-a7");
77
- sc->silicon_rev = AST2600_A0_SILICON_REV;
78
+ sc->silicon_rev = AST2600_A1_SILICON_REV;
79
sc->sram_size = 0x10000;
80
sc->spis_num = 2;
81
sc->ehcis_num = 2;
82
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_class_init(ObjectClass *oc, void *data)
83
}
84
85
static const TypeInfo aspeed_soc_ast2600_type_info = {
86
- .name = "ast2600-a0",
87
+ .name = "ast2600-a1",
88
.parent = TYPE_ASPEED_SOC,
89
.instance_size = sizeof(AspeedSoCState),
90
.instance_init = aspeed_soc_ast2600_init,
91
diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/hw/misc/aspeed_scu.c
94
+++ b/hw/misc/aspeed_scu.c
95
@@ -XXX,XX +XXX,XX @@ static uint32_t aspeed_silicon_revs[] = {
96
AST2500_A0_SILICON_REV,
97
AST2500_A1_SILICON_REV,
98
AST2600_A0_SILICON_REV,
99
+ AST2600_A1_SILICON_REV,
100
};
101
102
bool is_supported_silicon_rev(uint32_t silicon_rev)
103
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps aspeed_ast2600_scu_ops = {
104
.valid.unaligned = false,
105
};
106
107
-static const uint32_t ast2600_a0_resets[ASPEED_AST2600_SCU_NR_REGS] = {
108
- [AST2600_SILICON_REV] = AST2600_SILICON_REV,
109
- [AST2600_SILICON_REV2] = AST2600_SILICON_REV,
110
- [AST2600_SYS_RST_CTRL] = 0xF7CFFEDC | 0x100,
111
+static const uint32_t ast2600_a1_resets[ASPEED_AST2600_SCU_NR_REGS] = {
112
+ [AST2600_SYS_RST_CTRL] = 0xF7C3FED8,
113
[AST2600_SYS_RST_CTRL2] = 0xFFFFFFFC,
114
- [AST2600_CLK_STOP_CTRL] = 0xEFF43E8B,
115
+ [AST2600_CLK_STOP_CTRL] = 0xFFFF7F8A,
116
[AST2600_CLK_STOP_CTRL2] = 0xFFF0FFF0,
117
[AST2600_SDRAM_HANDSHAKE] = 0x00000040, /* SoC completed DRAM init */
118
[AST2600_HPLL_PARAM] = 0x1000405F,
119
@@ -XXX,XX +XXX,XX @@ static void aspeed_2600_scu_class_init(ObjectClass *klass, void *data)
120
121
dc->desc = "ASPEED 2600 System Control Unit";
122
dc->reset = aspeed_ast2600_scu_reset;
123
- asc->resets = ast2600_a0_resets;
124
+ asc->resets = ast2600_a1_resets;
125
asc->calc_hpll = aspeed_2500_scu_calc_hpll; /* No change since AST2500 */
126
asc->apb_divider = 4;
127
asc->nr_regs = ASPEED_AST2600_SCU_NR_REGS;
128
--
129
2.20.1
130
131
diff view generated by jsdifflib
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Add trace event to display timer's counter value updates.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200504072822.18799-5-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/timer/nrf51_timer.c | 1 +
hw/timer/trace-events | 1 +
2 files changed, 2 insertions(+)

diff --git a/hw/timer/nrf51_timer.c b/hw/timer/nrf51_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/nrf51_timer.c
+++ b/hw/timer/nrf51_timer.c
@@ -XXX,XX +XXX,XX @@ static void nrf51_timer_write(void *opaque, hwaddr offset,

 idx = (offset - NRF51_TIMER_TASK_CAPTURE_0) / 4;
 s->cc[idx] = s->counter;
+ trace_nrf51_timer_set_count(s->id, idx, s->counter);
 }
 break;
 case NRF51_TIMER_EVENT_COMPARE_0 ... NRF51_TIMER_EVENT_COMPARE_3:
diff --git a/hw/timer/trace-events b/hw/timer/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/trace-events
+++ b/hw/timer/trace-events
@@ -XXX,XX +XXX,XX @@ cmsdk_apb_dualtimer_reset(void) "CMSDK APB dualtimer: reset"
# nrf51_timer.c
nrf51_timer_read(uint8_t timer_id, uint64_t addr, uint32_t value, unsigned size) "timer %u read addr 0x%" PRIx64 " data 0x%" PRIx32 " size %u"
nrf51_timer_write(uint8_t timer_id, uint64_t addr, uint32_t value, unsigned size) "timer %u write addr 0x%" PRIx64 " data 0x%" PRIx32 " size %u"
+nrf51_timer_set_count(uint8_t timer_id, uint8_t counter_id, uint32_t value) "timer %u counter %u count 0x%" PRIx32

# bcm2835_systmr.c
bcm2835_systmr_irq(bool enable) "timer irq state %u"
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/exec/exec-all.h | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
 {
 }
 #endif
+/**
+ * probe_access:
+ * @env: CPUArchState
+ * @addr: guest virtual address to look up
+ * @size: size of the access
+ * @access_type: read, write or execute permission
+ * @mmu_idx: MMU index to use for lookup
+ * @retaddr: return address for unwinding
+ *
+ * Look up the guest virtual address @addr. Raise an exception if the
+ * page does not satisfy @access_type. Raise an exception if the
+ * access (@addr, @size) hits a watchpoint. For writes, mark a clean
+ * page as dirty.
+ *
+ * Finally, return the host address for a page that is backed by RAM,
+ * or NULL if the page requires I/O.
+ */
 void *probe_access(CPUArchState *env, target_ulong addr, int size,
 MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);

--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Use the "normal" memory access functions, rather than the
softmmu internal helper functions directly.

Since fb901c905dc3, cpu_mmu_index is now a simple extract
from env->hflags and not a large computation. Which means
that it's now more work to pass around this value than it
is to recompute it.

This only adjusts the primitives, and does not clean up
all of the uses within sve_helper.c.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve_helper.c | 221 ++++++++++++++++------------------------
1 file changed, 86 insertions(+), 135 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/sve_helper.c
25
+++ b/target/arm/sve_helper.c
26
@@ -XXX,XX +XXX,XX @@ typedef intptr_t sve_ld1_host_fn(void *vd, void *vg, void *host,
27
* Load one element into @vd + @reg_off from (@env, @vaddr, @ra).
28
* The controlling predicate is known to be true.
29
*/
30
-typedef void sve_ld1_tlb_fn(CPUARMState *env, void *vd, intptr_t reg_off,
31
- target_ulong vaddr, TCGMemOpIdx oi, uintptr_t ra);
32
-typedef sve_ld1_tlb_fn sve_st1_tlb_fn;
33
+typedef void sve_ldst1_tlb_fn(CPUARMState *env, void *vd, intptr_t reg_off,
34
+ target_ulong vaddr, uintptr_t retaddr);
35
36
/*
37
* Generate the above primitives.
38
@@ -XXX,XX +XXX,XX @@ static intptr_t sve_##NAME##_host(void *vd, void *vg, void *host, \
39
return mem_off; \
40
}
41
42
-#ifdef CONFIG_SOFTMMU
43
-#define DO_LD_TLB(NAME, H, TYPEE, TYPEM, HOST, MOEND, TLB) \
44
+#define DO_LD_TLB(NAME, H, TYPEE, TYPEM, TLB) \
45
static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
46
- target_ulong addr, TCGMemOpIdx oi, uintptr_t ra) \
47
+ target_ulong addr, uintptr_t ra) \
48
{ \
49
- TYPEM val = TLB(env, addr, oi, ra); \
50
- *(TYPEE *)(vd + H(reg_off)) = val; \
51
+ *(TYPEE *)(vd + H(reg_off)) = (TYPEM)TLB(env, addr, ra); \
52
}
53
-#else
54
-#define DO_LD_TLB(NAME, H, TYPEE, TYPEM, HOST, MOEND, TLB) \
55
+
56
+#define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB) \
57
static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
58
- target_ulong addr, TCGMemOpIdx oi, uintptr_t ra) \
59
+ target_ulong addr, uintptr_t ra) \
60
{ \
61
- TYPEM val = HOST(g2h(addr)); \
62
- *(TYPEE *)(vd + H(reg_off)) = val; \
63
+ TLB(env, addr, (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
64
}
65
-#endif
66
67
#define DO_LD_PRIM_1(NAME, H, TE, TM) \
68
DO_LD_HOST(NAME, H, TE, TM, ldub_p) \
69
- DO_LD_TLB(NAME, H, TE, TM, ldub_p, 0, helper_ret_ldub_mmu)
70
+ DO_LD_TLB(NAME, H, TE, TM, cpu_ldub_data_ra)
71
72
DO_LD_PRIM_1(ld1bb, H1, uint8_t, uint8_t)
73
DO_LD_PRIM_1(ld1bhu, H1_2, uint16_t, uint8_t)
74
@@ -XXX,XX +XXX,XX @@ DO_LD_PRIM_1(ld1bss, H1_4, uint32_t, int8_t)
75
DO_LD_PRIM_1(ld1bdu, , uint64_t, uint8_t)
76
DO_LD_PRIM_1(ld1bds, , uint64_t, int8_t)
77
78
-#define DO_LD_PRIM_2(NAME, end, MOEND, H, TE, TM, PH, PT) \
79
- DO_LD_HOST(NAME##_##end, H, TE, TM, PH##_##end##_p) \
80
- DO_LD_TLB(NAME##_##end, H, TE, TM, PH##_##end##_p, \
81
- MOEND, helper_##end##_##PT##_mmu)
82
+#define DO_ST_PRIM_1(NAME, H, TE, TM) \
83
+ DO_ST_TLB(st1##NAME, H, TE, TM, cpu_stb_data_ra)
84
85
-DO_LD_PRIM_2(ld1hh, le, MO_LE, H1_2, uint16_t, uint16_t, lduw, lduw)
86
-DO_LD_PRIM_2(ld1hsu, le, MO_LE, H1_4, uint32_t, uint16_t, lduw, lduw)
87
-DO_LD_PRIM_2(ld1hss, le, MO_LE, H1_4, uint32_t, int16_t, lduw, lduw)
88
-DO_LD_PRIM_2(ld1hdu, le, MO_LE, , uint64_t, uint16_t, lduw, lduw)
89
-DO_LD_PRIM_2(ld1hds, le, MO_LE, , uint64_t, int16_t, lduw, lduw)
90
+DO_ST_PRIM_1(bb, H1, uint8_t, uint8_t)
91
+DO_ST_PRIM_1(bh, H1_2, uint16_t, uint8_t)
92
+DO_ST_PRIM_1(bs, H1_4, uint32_t, uint8_t)
93
+DO_ST_PRIM_1(bd, , uint64_t, uint8_t)
94
95
-DO_LD_PRIM_2(ld1ss, le, MO_LE, H1_4, uint32_t, uint32_t, ldl, ldul)
96
-DO_LD_PRIM_2(ld1sdu, le, MO_LE, , uint64_t, uint32_t, ldl, ldul)
97
-DO_LD_PRIM_2(ld1sds, le, MO_LE, , uint64_t, int32_t, ldl, ldul)
98
+#define DO_LD_PRIM_2(NAME, H, TE, TM, LD) \
99
+ DO_LD_HOST(ld1##NAME##_be, H, TE, TM, LD##_be_p) \
100
+ DO_LD_HOST(ld1##NAME##_le, H, TE, TM, LD##_le_p) \
101
+ DO_LD_TLB(ld1##NAME##_be, H, TE, TM, cpu_##LD##_be_data_ra) \
102
+ DO_LD_TLB(ld1##NAME##_le, H, TE, TM, cpu_##LD##_le_data_ra)
103
104
-DO_LD_PRIM_2(ld1dd, le, MO_LE, , uint64_t, uint64_t, ldq, ldq)
105
+#define DO_ST_PRIM_2(NAME, H, TE, TM, ST) \
106
+ DO_ST_TLB(st1##NAME##_be, H, TE, TM, cpu_##ST##_be_data_ra) \
107
+ DO_ST_TLB(st1##NAME##_le, H, TE, TM, cpu_##ST##_le_data_ra)
108
109
-DO_LD_PRIM_2(ld1hh, be, MO_BE, H1_2, uint16_t, uint16_t, lduw, lduw)
110
-DO_LD_PRIM_2(ld1hsu, be, MO_BE, H1_4, uint32_t, uint16_t, lduw, lduw)
111
-DO_LD_PRIM_2(ld1hss, be, MO_BE, H1_4, uint32_t, int16_t, lduw, lduw)
112
-DO_LD_PRIM_2(ld1hdu, be, MO_BE, , uint64_t, uint16_t, lduw, lduw)
113
-DO_LD_PRIM_2(ld1hds, be, MO_BE, , uint64_t, int16_t, lduw, lduw)
114
+DO_LD_PRIM_2(hh, H1_2, uint16_t, uint16_t, lduw)
115
+DO_LD_PRIM_2(hsu, H1_4, uint32_t, uint16_t, lduw)
116
+DO_LD_PRIM_2(hss, H1_4, uint32_t, int16_t, lduw)
117
+DO_LD_PRIM_2(hdu, , uint64_t, uint16_t, lduw)
118
+DO_LD_PRIM_2(hds, , uint64_t, int16_t, lduw)
119
120
-DO_LD_PRIM_2(ld1ss, be, MO_BE, H1_4, uint32_t, uint32_t, ldl, ldul)
121
-DO_LD_PRIM_2(ld1sdu, be, MO_BE, , uint64_t, uint32_t, ldl, ldul)
122
-DO_LD_PRIM_2(ld1sds, be, MO_BE, , uint64_t, int32_t, ldl, ldul)
123
+DO_ST_PRIM_2(hh, H1_2, uint16_t, uint16_t, stw)
124
+DO_ST_PRIM_2(hs, H1_4, uint32_t, uint16_t, stw)
125
+DO_ST_PRIM_2(hd, , uint64_t, uint16_t, stw)
126
127
-DO_LD_PRIM_2(ld1dd, be, MO_BE, , uint64_t, uint64_t, ldq, ldq)
128
+DO_LD_PRIM_2(ss, H1_4, uint32_t, uint32_t, ldl)
129
+DO_LD_PRIM_2(sdu, , uint64_t, uint32_t, ldl)
130
+DO_LD_PRIM_2(sds, , uint64_t, int32_t, ldl)
131
+
132
+DO_ST_PRIM_2(ss, H1_4, uint32_t, uint32_t, stl)
133
+DO_ST_PRIM_2(sd, , uint64_t, uint32_t, stl)
134
+
135
+DO_LD_PRIM_2(dd, , uint64_t, uint64_t, ldq)
136
+DO_ST_PRIM_2(dd, , uint64_t, uint64_t, stq)
137
138
#undef DO_LD_TLB
139
+#undef DO_ST_TLB
140
#undef DO_LD_HOST
141
#undef DO_LD_PRIM_1
142
+#undef DO_ST_PRIM_1
143
#undef DO_LD_PRIM_2
144
+#undef DO_ST_PRIM_2
145
146
/*
147
* Skip through a sequence of inactive elements in the guarding predicate @vg,
148
@@ -XXX,XX +XXX,XX @@ static void sve_ld1_r(CPUARMState *env, void *vg, const target_ulong addr,
149
uint32_t desc, const uintptr_t retaddr,
150
const int esz, const int msz,
151
sve_ld1_host_fn *host_fn,
152
- sve_ld1_tlb_fn *tlb_fn)
153
+ sve_ldst1_tlb_fn *tlb_fn)
154
{
155
const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
156
const int mmu_idx = get_mmuidx(oi);
157
@@ -XXX,XX +XXX,XX @@ static void sve_ld1_r(CPUARMState *env, void *vg, const target_ulong addr,
158
* on I/O memory, it may succeed but not bring in the TLB entry.
159
* But even then we have still made forward progress.
160
*/
161
- tlb_fn(env, &scratch, reg_off, addr + mem_off, oi, retaddr);
162
+ tlb_fn(env, &scratch, reg_off, addr + mem_off, retaddr);
163
reg_off += 1 << esz;
164
}
165
#endif
166
@@ -XXX,XX +XXX,XX @@ DO_LD1_2(ld1dd, 3, 3)
167
*/
168
static void sve_ld2_r(CPUARMState *env, void *vg, target_ulong addr,
169
uint32_t desc, int size, uintptr_t ra,
170
- sve_ld1_tlb_fn *tlb_fn)
171
+ sve_ldst1_tlb_fn *tlb_fn)
172
{
173
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
174
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
175
intptr_t i, oprsz = simd_oprsz(desc);
176
ARMVectorReg scratch[2] = { };
177
@@ -XXX,XX +XXX,XX @@ static void sve_ld2_r(CPUARMState *env, void *vg, target_ulong addr,
178
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
179
do {
180
if (pg & 1) {
181
- tlb_fn(env, &scratch[0], i, addr, oi, ra);
182
- tlb_fn(env, &scratch[1], i, addr + size, oi, ra);
183
+ tlb_fn(env, &scratch[0], i, addr, ra);
184
+ tlb_fn(env, &scratch[1], i, addr + size, ra);
185
}
186
i += size, pg >>= size;
187
addr += 2 * size;
188
@@ -XXX,XX +XXX,XX @@ static void sve_ld2_r(CPUARMState *env, void *vg, target_ulong addr,
189
190
static void sve_ld3_r(CPUARMState *env, void *vg, target_ulong addr,
191
uint32_t desc, int size, uintptr_t ra,
192
- sve_ld1_tlb_fn *tlb_fn)
193
+ sve_ldst1_tlb_fn *tlb_fn)
194
{
195
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
196
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
197
intptr_t i, oprsz = simd_oprsz(desc);
198
ARMVectorReg scratch[3] = { };
199
@@ -XXX,XX +XXX,XX @@ static void sve_ld3_r(CPUARMState *env, void *vg, target_ulong addr,
200
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
201
do {
202
if (pg & 1) {
203
- tlb_fn(env, &scratch[0], i, addr, oi, ra);
204
- tlb_fn(env, &scratch[1], i, addr + size, oi, ra);
205
- tlb_fn(env, &scratch[2], i, addr + 2 * size, oi, ra);
206
+ tlb_fn(env, &scratch[0], i, addr, ra);
207
+ tlb_fn(env, &scratch[1], i, addr + size, ra);
208
+ tlb_fn(env, &scratch[2], i, addr + 2 * size, ra);
209
}
210
i += size, pg >>= size;
211
addr += 3 * size;
212
@@ -XXX,XX +XXX,XX @@ static void sve_ld3_r(CPUARMState *env, void *vg, target_ulong addr,
213
214
static void sve_ld4_r(CPUARMState *env, void *vg, target_ulong addr,
215
uint32_t desc, int size, uintptr_t ra,
216
- sve_ld1_tlb_fn *tlb_fn)
217
+ sve_ldst1_tlb_fn *tlb_fn)
218
{
219
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
220
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
221
intptr_t i, oprsz = simd_oprsz(desc);
222
ARMVectorReg scratch[4] = { };
223
@@ -XXX,XX +XXX,XX @@ static void sve_ld4_r(CPUARMState *env, void *vg, target_ulong addr,
224
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
225
do {
226
if (pg & 1) {
227
- tlb_fn(env, &scratch[0], i, addr, oi, ra);
228
- tlb_fn(env, &scratch[1], i, addr + size, oi, ra);
229
- tlb_fn(env, &scratch[2], i, addr + 2 * size, oi, ra);
230
- tlb_fn(env, &scratch[3], i, addr + 3 * size, oi, ra);
231
+ tlb_fn(env, &scratch[0], i, addr, ra);
232
+ tlb_fn(env, &scratch[1], i, addr + size, ra);
233
+ tlb_fn(env, &scratch[2], i, addr + 2 * size, ra);
234
+ tlb_fn(env, &scratch[3], i, addr + 3 * size, ra);
235
}
236
i += size, pg >>= size;
237
addr += 4 * size;
238
@@ -XXX,XX +XXX,XX @@ static void sve_ldff1_r(CPUARMState *env, void *vg, const target_ulong addr,
239
uint32_t desc, const uintptr_t retaddr,
240
const int esz, const int msz,
241
sve_ld1_host_fn *host_fn,
242
- sve_ld1_tlb_fn *tlb_fn)
243
+ sve_ldst1_tlb_fn *tlb_fn)
244
{
245
const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
246
const int mmu_idx = get_mmuidx(oi);
247
@@ -XXX,XX +XXX,XX @@ static void sve_ldff1_r(CPUARMState *env, void *vg, const target_ulong addr,
248
* Perform one normal read, which will fault or not.
249
* But it is likely to bring the page into the tlb.
250
*/
251
- tlb_fn(env, vd, reg_off, addr + mem_off, oi, retaddr);
252
+ tlb_fn(env, vd, reg_off, addr + mem_off, retaddr);
253
254
/* After any fault, zero any leading predicated false elts. */
255
swap_memzero(vd, reg_off);
256
@@ -XXX,XX +XXX,XX @@ DO_LDFF1_LDNF1_2(dd, 3, 3)
257
#undef DO_LDFF1_LDNF1_1
258
#undef DO_LDFF1_LDNF1_2
259
260
-/*
261
- * Store contiguous data, protected by a governing predicate.
262
- */
263
-
264
-#ifdef CONFIG_SOFTMMU
265
-#define DO_ST_TLB(NAME, H, TYPEM, HOST, MOEND, TLB) \
266
-static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
267
- target_ulong addr, TCGMemOpIdx oi, uintptr_t ra) \
268
-{ \
269
- TLB(env, addr, *(TYPEM *)(vd + H(reg_off)), oi, ra); \
270
-}
271
-#else
272
-#define DO_ST_TLB(NAME, H, TYPEM, HOST, MOEND, TLB) \
273
-static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
274
- target_ulong addr, TCGMemOpIdx oi, uintptr_t ra) \
275
-{ \
276
- HOST(g2h(addr), *(TYPEM *)(vd + H(reg_off))); \
277
-}
278
-#endif
279
-
280
-DO_ST_TLB(st1bb, H1, uint8_t, stb_p, 0, helper_ret_stb_mmu)
281
-DO_ST_TLB(st1bh, H1_2, uint16_t, stb_p, 0, helper_ret_stb_mmu)
282
-DO_ST_TLB(st1bs, H1_4, uint32_t, stb_p, 0, helper_ret_stb_mmu)
283
-DO_ST_TLB(st1bd, , uint64_t, stb_p, 0, helper_ret_stb_mmu)
284
-
285
-DO_ST_TLB(st1hh_le, H1_2, uint16_t, stw_le_p, MO_LE, helper_le_stw_mmu)
286
-DO_ST_TLB(st1hs_le, H1_4, uint32_t, stw_le_p, MO_LE, helper_le_stw_mmu)
287
-DO_ST_TLB(st1hd_le, , uint64_t, stw_le_p, MO_LE, helper_le_stw_mmu)
288
-
289
-DO_ST_TLB(st1ss_le, H1_4, uint32_t, stl_le_p, MO_LE, helper_le_stl_mmu)
290
-DO_ST_TLB(st1sd_le, , uint64_t, stl_le_p, MO_LE, helper_le_stl_mmu)
291
-
292
-DO_ST_TLB(st1dd_le, , uint64_t, stq_le_p, MO_LE, helper_le_stq_mmu)
293
-
294
-DO_ST_TLB(st1hh_be, H1_2, uint16_t, stw_be_p, MO_BE, helper_be_stw_mmu)
295
-DO_ST_TLB(st1hs_be, H1_4, uint32_t, stw_be_p, MO_BE, helper_be_stw_mmu)
296
-DO_ST_TLB(st1hd_be, , uint64_t, stw_be_p, MO_BE, helper_be_stw_mmu)
297
-
298
-DO_ST_TLB(st1ss_be, H1_4, uint32_t, stl_be_p, MO_BE, helper_be_stl_mmu)
299
-DO_ST_TLB(st1sd_be, , uint64_t, stl_be_p, MO_BE, helper_be_stl_mmu)
300
-
301
-DO_ST_TLB(st1dd_be, , uint64_t, stq_be_p, MO_BE, helper_be_stq_mmu)
302
-
303
-#undef DO_ST_TLB
304
-
305
/*
306
* Common helpers for all contiguous 1,2,3,4-register predicated stores.
307
*/
308
static void sve_st1_r(CPUARMState *env, void *vg, target_ulong addr,
309
uint32_t desc, const uintptr_t ra,
310
const int esize, const int msize,
311
- sve_st1_tlb_fn *tlb_fn)
312
+ sve_ldst1_tlb_fn *tlb_fn)
313
{
314
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
315
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
316
intptr_t i, oprsz = simd_oprsz(desc);
317
void *vd = &env->vfp.zregs[rd];
318
@@ -XXX,XX +XXX,XX @@ static void sve_st1_r(CPUARMState *env, void *vg, target_ulong addr,
319
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
320
do {
321
if (pg & 1) {
322
- tlb_fn(env, vd, i, addr, oi, ra);
323
+ tlb_fn(env, vd, i, addr, ra);
324
}
325
i += esize, pg >>= esize;
326
addr += msize;
327
@@ -XXX,XX +XXX,XX @@ static void sve_st1_r(CPUARMState *env, void *vg, target_ulong addr,
328
static void sve_st2_r(CPUARMState *env, void *vg, target_ulong addr,
329
uint32_t desc, const uintptr_t ra,
330
const int esize, const int msize,
331
- sve_st1_tlb_fn *tlb_fn)
332
+ sve_ldst1_tlb_fn *tlb_fn)
333
{
334
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
335
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
336
intptr_t i, oprsz = simd_oprsz(desc);
337
void *d1 = &env->vfp.zregs[rd];
338
@@ -XXX,XX +XXX,XX @@ static void sve_st2_r(CPUARMState *env, void *vg, target_ulong addr,
339
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
340
do {
341
if (pg & 1) {
342
- tlb_fn(env, d1, i, addr, oi, ra);
343
- tlb_fn(env, d2, i, addr + msize, oi, ra);
344
+ tlb_fn(env, d1, i, addr, ra);
345
+ tlb_fn(env, d2, i, addr + msize, ra);
346
}
347
i += esize, pg >>= esize;
348
addr += 2 * msize;
349
@@ -XXX,XX +XXX,XX @@ static void sve_st2_r(CPUARMState *env, void *vg, target_ulong addr,
350
static void sve_st3_r(CPUARMState *env, void *vg, target_ulong addr,
351
uint32_t desc, const uintptr_t ra,
352
const int esize, const int msize,
353
- sve_st1_tlb_fn *tlb_fn)
354
+ sve_ldst1_tlb_fn *tlb_fn)
355
{
356
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
357
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
358
intptr_t i, oprsz = simd_oprsz(desc);
359
void *d1 = &env->vfp.zregs[rd];
360
@@ -XXX,XX +XXX,XX @@ static void sve_st3_r(CPUARMState *env, void *vg, target_ulong addr,
361
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
362
do {
363
if (pg & 1) {
364
- tlb_fn(env, d1, i, addr, oi, ra);
365
- tlb_fn(env, d2, i, addr + msize, oi, ra);
366
- tlb_fn(env, d3, i, addr + 2 * msize, oi, ra);
367
+ tlb_fn(env, d1, i, addr, ra);
368
+ tlb_fn(env, d2, i, addr + msize, ra);
369
+ tlb_fn(env, d3, i, addr + 2 * msize, ra);
370
}
371
i += esize, pg >>= esize;
372
addr += 3 * msize;
373
@@ -XXX,XX +XXX,XX @@ static void sve_st3_r(CPUARMState *env, void *vg, target_ulong addr,
374
static void sve_st4_r(CPUARMState *env, void *vg, target_ulong addr,
375
uint32_t desc, const uintptr_t ra,
376
const int esize, const int msize,
377
- sve_st1_tlb_fn *tlb_fn)
378
+ sve_ldst1_tlb_fn *tlb_fn)
379
{
380
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
381
const unsigned rd = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 5);
382
intptr_t i, oprsz = simd_oprsz(desc);
383
void *d1 = &env->vfp.zregs[rd];
384
@@ -XXX,XX +XXX,XX @@ static void sve_st4_r(CPUARMState *env, void *vg, target_ulong addr,
385
uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3));
386
do {
387
if (pg & 1) {
388
- tlb_fn(env, d1, i, addr, oi, ra);
389
- tlb_fn(env, d2, i, addr + msize, oi, ra);
390
- tlb_fn(env, d3, i, addr + 2 * msize, oi, ra);
391
- tlb_fn(env, d4, i, addr + 3 * msize, oi, ra);
392
+ tlb_fn(env, d1, i, addr, ra);
393
+ tlb_fn(env, d2, i, addr + msize, ra);
394
+ tlb_fn(env, d3, i, addr + 2 * msize, ra);
395
+ tlb_fn(env, d4, i, addr + 3 * msize, ra);
396
}
397
i += esize, pg >>= esize;
398
addr += 4 * msize;
399
@@ -XXX,XX +XXX,XX @@ static target_ulong off_zd_d(void *reg, intptr_t reg_ofs)
400
401
static void sve_ld1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
402
target_ulong base, uint32_t desc, uintptr_t ra,
403
- zreg_off_fn *off_fn, sve_ld1_tlb_fn *tlb_fn)
404
+ zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn)
405
{
406
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
407
const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
408
intptr_t i, oprsz = simd_oprsz(desc);
409
ARMVectorReg scratch = { };
410
@@ -XXX,XX +XXX,XX @@ static void sve_ld1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
411
do {
412
if (likely(pg & 1)) {
413
target_ulong off = off_fn(vm, i);
414
- tlb_fn(env, &scratch, i, base + (off << scale), oi, ra);
415
+ tlb_fn(env, &scratch, i, base + (off << scale), ra);
416
}
417
i += 4, pg >>= 4;
418
} while (i & 15);
419
@@ -XXX,XX +XXX,XX @@ static void sve_ld1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
420
421
static void sve_ld1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
422
target_ulong base, uint32_t desc, uintptr_t ra,
423
- zreg_off_fn *off_fn, sve_ld1_tlb_fn *tlb_fn)
424
+ zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn)
425
{
426
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
427
const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
428
intptr_t i, oprsz = simd_oprsz(desc) / 8;
429
ARMVectorReg scratch = { };
430
@@ -XXX,XX +XXX,XX @@ static void sve_ld1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
431
uint8_t pg = *(uint8_t *)(vg + H1(i));
432
if (likely(pg & 1)) {
433
target_ulong off = off_fn(vm, i * 8);
434
- tlb_fn(env, &scratch, i * 8, base + (off << scale), oi, ra);
435
+ tlb_fn(env, &scratch, i * 8, base + (off << scale), ra);
436
}
437
}
438
clear_helper_retaddr();
439
@@ -XXX,XX +XXX,XX @@ DO_LD_NF(dd_be, , uint64_t, uint64_t, ldq_be_p)
440
*/
441
static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
442
target_ulong base, uint32_t desc, uintptr_t ra,
443
- zreg_off_fn *off_fn, sve_ld1_tlb_fn *tlb_fn,
444
+ zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn,
445
sve_ld1_nf_fn *nonfault_fn)
446
{
447
const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
448
@@ -XXX,XX +XXX,XX @@ static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
449
set_helper_retaddr(ra);
450
addr = off_fn(vm, reg_off);
451
addr = base + (addr << scale);
452
- tlb_fn(env, vd, reg_off, addr, oi, ra);
453
+ tlb_fn(env, vd, reg_off, addr, ra);
454
455
/* The rest of the reads will be non-faulting. */
456
clear_helper_retaddr();
457
@@ -XXX,XX +XXX,XX @@ static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
458
459
static inline void sve_ldff1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
460
target_ulong base, uint32_t desc, uintptr_t ra,
461
- zreg_off_fn *off_fn, sve_ld1_tlb_fn *tlb_fn,
462
+ zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn,
463
sve_ld1_nf_fn *nonfault_fn)
464
{
465
const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
466
@@ -XXX,XX +XXX,XX @@ static inline void sve_ldff1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
467
set_helper_retaddr(ra);
468
addr = off_fn(vm, reg_off);
469
addr = base + (addr << scale);
470
- tlb_fn(env, vd, reg_off, addr, oi, ra);
471
+ tlb_fn(env, vd, reg_off, addr, ra);
472
473
/* The rest of the reads will be non-faulting. */
474
clear_helper_retaddr();
475
@@ -XXX,XX +XXX,XX @@ DO_LDFF1_ZPZ_D(dd_be, zd)
476
477
static void sve_st1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
478
target_ulong base, uint32_t desc, uintptr_t ra,
479
- zreg_off_fn *off_fn, sve_ld1_tlb_fn *tlb_fn)
480
+ zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn)
481
{
482
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
483
const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
484
intptr_t i, oprsz = simd_oprsz(desc);
485
486
@@ -XXX,XX +XXX,XX @@ static void sve_st1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
487
do {
488
if (likely(pg & 1)) {
489
target_ulong off = off_fn(vm, i);
490
- tlb_fn(env, vd, i, base + (off << scale), oi, ra);
491
+ tlb_fn(env, vd, i, base + (off << scale), ra);
492
}
493
i += 4, pg >>= 4;
494
} while (i & 15);
495
@@ -XXX,XX +XXX,XX @@ static void sve_st1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
496
497
static void sve_st1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
498
target_ulong base, uint32_t desc, uintptr_t ra,
499
- zreg_off_fn *off_fn, sve_ld1_tlb_fn *tlb_fn)
500
+ zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn)
501
{
502
- const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
503
const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
504
intptr_t i, oprsz = simd_oprsz(desc) / 8;
505
506
@@ -XXX,XX +XXX,XX @@ static void sve_st1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
507
uint8_t pg = *(uint8_t *)(vg + H1(i));
508
if (likely(pg & 1)) {
509
target_ulong off = off_fn(vm, i * 8);
510
- tlb_fn(env, vd, i * 8, base + (off << scale), oi, ra);
511
+ tlb_fn(env, vd, i * 8, base + (off << scale), ra);
512
}
513
}
514
clear_helper_retaddr();
515
--
516
2.20.1
517
518
Convert the VCVT fixed-point conversion operations in the
Neon 2-regs-and-shift group to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-9-peter.maydell@linaro.org
---
target/arm/neon-dp.decode | 11 +++++
target/arm/translate-neon.inc.c | 49 +++++++++++++++++++++
target/arm/translate.c | 75 +--------------------------------
3 files changed, 62 insertions(+), 73 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode

From: Richard Henderson <richard.henderson@linaro.org>

Handle all of the watchpoints for active elements all at once,
before we've modified the vector register. This removes the
TLB_WATCHPOINT bit from page[].flags, which means that we can
use the normal fast path via RAM.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/sve_helper.c | 72 ++++++++++++++++++++++++++++++++++++++++-
1 file changed, 71 insertions(+), 1 deletion(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/sve_helper.c
15
--- a/target/arm/neon-dp.decode
19
+++ b/target/arm/sve_helper.c
16
+++ b/target/arm/neon-dp.decode
20
@@ -XXX,XX +XXX,XX @@ static bool sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault,
17
@@ -XXX,XX +XXX,XX @@ VMINNM_fp_3s 1111 001 1 0 . 1 . .... .... 1111 ... 1 .... @3same_fp
21
return have_work;
18
@2reg_shll_b .... ... . . . 001 shift:3 .... .... 0 . . . .... \
19
&2reg_shift vm=%vm_dp vd=%vd_dp size=0 q=0
20
21
+# We use size=0 for fp32 and size=1 for fp16 to match the 3-same encodings.
22
+@2reg_vcvt .... ... . . . 1 ..... .... .... . q:1 . . .... \
23
+ &2reg_shift vm=%vm_dp vd=%vd_dp size=0 shift=%neon_rshift_i5
24
+
25
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_d
26
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_s
27
VSHR_S_2sh 1111 001 0 1 . ...... .... 0000 . . . 1 .... @2reg_shr_h
28
@@ -XXX,XX +XXX,XX @@ VSHLL_S_2sh 1111 001 0 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
29
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_s
30
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_h
31
VSHLL_U_2sh 1111 001 1 1 . ...... .... 1010 . 0 . 1 .... @2reg_shll_b
32
+
33
+# VCVT fixed<->float conversions
34
+# TODO: FP16 fixed<->float conversions are opc==0b1100 and 0b1101
35
+VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
36
+VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
37
+VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
38
+VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
39
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/translate-neon.inc.c
42
+++ b/target/arm/translate-neon.inc.c
43
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
44
};
45
return do_vshll_2sh(s, a, widenfn[a->size], true);
22
}
46
}
23
47
+
24
+static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
48
+static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
25
+ uint64_t *vg, target_ulong addr,
49
+ NeonGenTwoSingleOPFn *fn)
26
+ int esize, int msize, int wp_access,
27
+ uintptr_t retaddr)
28
+{
50
+{
29
+#ifndef CONFIG_USER_ONLY
51
+ /* FP operations in 2-reg-and-shift group */
30
+ intptr_t mem_off, reg_off, reg_last;
52
+ TCGv_i32 tmp, shiftv;
31
+ int flags0 = info->page[0].flags;
53
+ TCGv_ptr fpstatus;
32
+ int flags1 = info->page[1].flags;
54
+ int pass;
33
+
55
+
34
+ if (likely(!((flags0 | flags1) & TLB_WATCHPOINT))) {
56
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
35
+ return;
57
+ return false;
36
+ }
58
+ }
37
+
59
+
38
+ /* Indicate that watchpoints are handled. */
60
+ /* UNDEF accesses to D16-D31 if they don't exist. */
39
+ info->page[0].flags = flags0 & ~TLB_WATCHPOINT;
61
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
40
+        ((a->vd | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vm | a->vd) & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    fpstatus = get_fpstatus_ptr(1);
+    shiftv = tcg_const_i32(a->shift);
+    for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
+        tmp = neon_load_reg(a->vm, pass);
+        fn(tmp, tmp, shiftv, fpstatus);
+        neon_store_reg(a->vd, pass, tmp);
+    }
+    tcg_temp_free_ptr(fpstatus);
+    tcg_temp_free_i32(shiftv);
+    return true;
+}
+
+#define DO_FP_2SH(INSN, FUNC)                                           \
+    static bool trans_##INSN##_2sh(DisasContext *s, arg_2reg_shift *a)  \
+    {                                                                   \
+        return do_fp_2sh(s, a, FUNC);                                   \
+    }
+
+DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
+DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
+DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
+DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     int q;
     int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
     int size;
-    int shift;
     int pass;
     int u;
     int vec_size;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             return 1;
         } else if (insn & (1 << 4)) {
             if ((insn & 0x00380080) != 0) {
-                /* Two registers and shift. */
-                op = (insn >> 8) & 0xf;
-
-                switch (op) {
-                case 0: /* VSHR */
-                case 1: /* VSRA */
-                case 2: /* VRSHR */
-                case 3: /* VRSRA */
-                case 4: /* VSRI */
-                case 5: /* VSHL, VSLI */
-                case 6: /* VQSHLU */
-                case 7: /* VQSHL */
-                case 8: /* VSHRN, VRSHRN, VQSHRUN, VQRSHRUN */
-                case 9: /* VQSHRN, VQRSHRN */
-                case 10: /* VSHLL, including VMOVL */
-                    return 1; /* handled by decodetree */
-                default:
-                    break;
-                }
-
-                if (insn & (1 << 7)) {
-                    /* 64-bit shift. */
-                    if (op > 7) {
-                        return 1;
-                    }
-                    size = 3;
-                } else {
-                    size = 2;
-                    while ((insn & (1 << (size + 19))) == 0)
-                        size--;
-                }
-                shift = (insn >> 16) & ((1 << (3 + size)) - 1);
-                if (op >= 14) {
-                    /* VCVT fixed-point. */
-                    TCGv_ptr fpst;
-                    TCGv_i32 shiftv;
-                    VFPGenFixPointFn *fn;
-
-                    if (!(insn & (1 << 21)) || (q && ((rd | rm) & 1))) {
-                        return 1;
-                    }
-
-                    if (!(op & 1)) {
-                        if (u) {
-                            fn = gen_helper_vfp_ultos;
-                        } else {
-                            fn = gen_helper_vfp_sltos;
-                        }
-                    } else {
-                        if (u) {
-                            fn = gen_helper_vfp_touls_round_to_zero;
-                        } else {
-                            fn = gen_helper_vfp_tosls_round_to_zero;
-                        }
-                    }
-
-                    /* We have already masked out the must-be-1 top bit of imm6,
-                     * hence this 32-shift where the ARM ARM has 64-imm6.
-                     */
-                    shift = 32 - shift;
-                    fpst = get_fpstatus_ptr(1);
-                    shiftv = tcg_const_i32(shift);
-                    for (pass = 0; pass < (q ? 4 : 2); pass++) {
-                        TCGv_i32 tmpf = neon_load_reg(rm, pass);
-                        fn(tmpf, tmpf, shiftv, fpst);
-                        neon_store_reg(rd, pass, tmpf);
-                    }
-                    tcg_temp_free_ptr(fpst);
-                    tcg_temp_free_i32(shiftv);
-                } else {
-                    return 1;
-                }
+                /* Two registers and shift: handled by decodetree */
+                return 1;
             } else { /* (insn & 0x00380080) == 0 */
                 int invert, reg_ofs, vec_size;
--
2.20.1
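The fixed-point VCVT helpers wired up above (gen_helper_vfp_sltos, gen_helper_vfp_ultos and the _round_to_zero variants) convert between 32-bit fixed-point and single-precision values by scaling with 2^shift. A rough Python model of the signed pair, to illustrate the semantics only (the function names echo the helper names but this is not QEMU code, and real float32 rounding is ignored):

```python
import math

def sltos(x, shift):
    # Signed fixed-point -> float: treat x as having 'shift'
    # fractional bits, i.e. divide by 2**shift.
    return x / (1 << shift)

def tosls_round_to_zero(f, shift):
    # Float -> signed 32-bit fixed-point with 'shift' fractional
    # bits, rounding toward zero and saturating to int32 range.
    v = math.trunc(f * (1 << shift))
    return max(-2**31, min(2**31 - 1, v))
```

The per-pass loop in do_fp_2sh() simply applies one such conversion to each 32-bit lane of the Dm/Qm register.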
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This avoids the need for a separate set of helpers to implement
no-fault semantics, and will enable MTE in the future.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 323 ++++++++++++++++------------------------
 1 file changed, 127 insertions(+), 196 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LD1_ZPZ_D(dd_be, zd)

 /* First fault loads with a vector index. */

-/* Load one element into VD+REG_OFF from (ENV,VADDR) without faulting.
- * The controlling predicate is known to be true.  Return true if the
- * load was successful.
- */
-typedef bool sve_ld1_nf_fn(CPUARMState *env, void *vd, intptr_t reg_off,
-                           target_ulong vaddr, int mmu_idx);
-
-#ifdef CONFIG_SOFTMMU
-#define DO_LD_NF(NAME, H, TYPEE, TYPEM, HOST)                           \
-static bool sve_ld##NAME##_nf(CPUARMState *env, void *vd, intptr_t reg_off, \
-                              target_ulong addr, int mmu_idx)           \
-{                                                                       \
-    target_ulong next_page = -(addr | TARGET_PAGE_MASK);                \
-    if (likely(next_page - addr >= sizeof(TYPEM))) {                    \
-        void *host = tlb_vaddr_to_host(env, addr, MMU_DATA_LOAD, mmu_idx); \
-        if (likely(host)) {                                             \
-            TYPEM val = HOST(host);                                     \
-            *(TYPEE *)(vd + H(reg_off)) = val;                          \
-            return true;                                                \
-        }                                                               \
-    }                                                                   \
-    return false;                                                       \
-}
-#else
-#define DO_LD_NF(NAME, H, TYPEE, TYPEM, HOST)                           \
-static bool sve_ld##NAME##_nf(CPUARMState *env, void *vd, intptr_t reg_off, \
-                              target_ulong addr, int mmu_idx)           \
-{                                                                       \
-    if (likely(page_check_range(addr, sizeof(TYPEM), PAGE_READ))) {     \
-        TYPEM val = HOST(g2h(addr));                                    \
-        *(TYPEE *)(vd + H(reg_off)) = val;                              \
-        return true;                                                    \
-    }                                                                   \
-    return false;                                                       \
-}
-#endif
-
-DO_LD_NF(bsu, H1_4, uint32_t, uint8_t, ldub_p)
-DO_LD_NF(bss, H1_4, uint32_t, int8_t, ldsb_p)
-DO_LD_NF(bdu,     , uint64_t, uint8_t, ldub_p)
-DO_LD_NF(bds,     , uint64_t, int8_t, ldsb_p)
-
-DO_LD_NF(hsu_le, H1_4, uint32_t, uint16_t, lduw_le_p)
-DO_LD_NF(hss_le, H1_4, uint32_t, int16_t, ldsw_le_p)
-DO_LD_NF(hsu_be, H1_4, uint32_t, uint16_t, lduw_be_p)
-DO_LD_NF(hss_be, H1_4, uint32_t, int16_t, ldsw_be_p)
-DO_LD_NF(hdu_le,     , uint64_t, uint16_t, lduw_le_p)
-DO_LD_NF(hds_le,     , uint64_t, int16_t, ldsw_le_p)
-DO_LD_NF(hdu_be,     , uint64_t, uint16_t, lduw_be_p)
-DO_LD_NF(hds_be,     , uint64_t, int16_t, ldsw_be_p)
-
-DO_LD_NF(ss_le,  H1_4, uint32_t, uint32_t, ldl_le_p)
-DO_LD_NF(ss_be,  H1_4, uint32_t, uint32_t, ldl_be_p)
-DO_LD_NF(sdu_le,     , uint64_t, uint32_t, ldl_le_p)
-DO_LD_NF(sds_le,     , uint64_t, int32_t, ldl_le_p)
-DO_LD_NF(sdu_be,     , uint64_t, uint32_t, ldl_be_p)
-DO_LD_NF(sds_be,     , uint64_t, int32_t, ldl_be_p)
-
-DO_LD_NF(dd_le,      , uint64_t, uint64_t, ldq_le_p)
-DO_LD_NF(dd_be,      , uint64_t, uint64_t, ldq_be_p)
-
 /*
- * Common helper for all gather first-faulting loads.
+ * Common helpers for all gather first-faulting loads.
  */
-static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
-                                target_ulong base, uint32_t desc, uintptr_t ra,
-                                zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn,
-                                sve_ld1_nf_fn *nonfault_fn)
+
+static inline QEMU_ALWAYS_INLINE
+void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
+                 target_ulong base, uint32_t desc, uintptr_t retaddr,
+                 const int esz, const int msz, zreg_off_fn *off_fn,
+                 sve_ldst1_host_fn *host_fn,
+                 sve_ldst1_tlb_fn *tlb_fn)
 {
-    const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
-    const int mmu_idx = get_mmuidx(oi);
+    const int mmu_idx = cpu_mmu_index(env, false);
     const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
-    intptr_t reg_off, reg_max = simd_oprsz(desc);
-    target_ulong addr;
+    const int esize = 1 << esz;
+    const int msize = 1 << msz;
+    const intptr_t reg_max = simd_oprsz(desc);
+    intptr_t reg_off;
+    SVEHostPage info;
+    target_ulong addr, in_page;

     /* Skip to the first true predicate. */
-    reg_off = find_next_active(vg, 0, reg_max, MO_32);
-    if (likely(reg_off < reg_max)) {
-        /* Perform one normal read, which will fault or not. */
-        addr = off_fn(vm, reg_off);
-        addr = base + (addr << scale);
-        tlb_fn(env, vd, reg_off, addr, ra);
-
-        /* The rest of the reads will be non-faulting. */
+    reg_off = find_next_active(vg, 0, reg_max, esz);
+    if (unlikely(reg_off >= reg_max)) {
+        /* The entire predicate was false; no load occurs. */
+        memset(vd, 0, reg_max);
+        return;
     }

-    /* After any fault, zero the leading predicated false elements. */
+    /*
+     * Probe the first element, allowing faults.
+     */
+    addr = base + (off_fn(vm, reg_off) << scale);
+    tlb_fn(env, vd, reg_off, addr, retaddr);
+
+    /* After any fault, zero the other elements. */
     swap_memzero(vd, reg_off);
+    reg_off += esize;
+    swap_memzero(vd + reg_off, reg_max - reg_off);

-    while (likely((reg_off += 4) < reg_max)) {
-        uint64_t pg = *(uint64_t *)(vg + (reg_off >> 6) * 8);
-        if (likely((pg >> (reg_off & 63)) & 1)) {
-            addr = off_fn(vm, reg_off);
-            addr = base + (addr << scale);
-            if (!nonfault_fn(env, vd, reg_off, addr, mmu_idx)) {
-                record_fault(env, reg_off, reg_max);
-                break;
+    /*
+     * Probe the remaining elements, not allowing faults.
+     */
+    while (reg_off < reg_max) {
+        uint64_t pg = vg[reg_off >> 6];
+        do {
+            if (likely((pg >> (reg_off & 63)) & 1)) {
+                addr = base + (off_fn(vm, reg_off) << scale);
+                in_page = -(addr | TARGET_PAGE_MASK);
+
+                if (unlikely(in_page < msize)) {
+                    /* Stop if the element crosses a page boundary. */
+                    goto fault;
+                }
+
+                sve_probe_page(&info, true, env, addr, 0, MMU_DATA_LOAD,
+                               mmu_idx, retaddr);
+                if (unlikely(info.flags & (TLB_INVALID_MASK | TLB_MMIO))) {
+                    goto fault;
+                }
+                if (unlikely(info.flags & TLB_WATCHPOINT) &&
+                    (cpu_watchpoint_address_matches
+                     (env_cpu(env), addr, msize) & BP_MEM_READ)) {
+                    goto fault;
+                }
+                /* TODO: MTE check. */
+
+                host_fn(vd, reg_off, info.host);
             }
-        } else {
-            *(uint32_t *)(vd + H1_4(reg_off)) = 0;
-        }
+            reg_off += esize;
+        } while (reg_off & 63);
     }
+    return;
+
+ fault:
+    record_fault(env, reg_off, reg_max);
 }

-static inline void sve_ldff1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
-                                target_ulong base, uint32_t desc, uintptr_t ra,
-                                zreg_off_fn *off_fn, sve_ldst1_tlb_fn *tlb_fn,
-                                sve_ld1_nf_fn *nonfault_fn)
-{
-    const TCGMemOpIdx oi = extract32(desc, SIMD_DATA_SHIFT, MEMOPIDX_SHIFT);
-    const int mmu_idx = get_mmuidx(oi);
-    const int scale = extract32(desc, SIMD_DATA_SHIFT + MEMOPIDX_SHIFT, 2);
-    intptr_t reg_off, reg_max = simd_oprsz(desc);
-    target_ulong addr;
-
-    /* Skip to the first true predicate. */
-    reg_off = find_next_active(vg, 0, reg_max, MO_64);
-    if (likely(reg_off < reg_max)) {
-        /* Perform one normal read, which will fault or not. */
-        addr = off_fn(vm, reg_off);
-        addr = base + (addr << scale);
-        tlb_fn(env, vd, reg_off, addr, ra);
-
-        /* The rest of the reads will be non-faulting. */
-    }
-
-    /* After any fault, zero the leading predicated false elements. */
-    swap_memzero(vd, reg_off);
-
-    while (likely((reg_off += 8) < reg_max)) {
-        uint8_t pg = *(uint8_t *)(vg + H1(reg_off >> 3));
-        if (likely(pg & 1)) {
-            addr = off_fn(vm, reg_off);
-            addr = base + (addr << scale);
-            if (!nonfault_fn(env, vd, reg_off, addr, mmu_idx)) {
-                record_fault(env, reg_off, reg_max);
-                break;
-            }
-        } else {
-            *(uint64_t *)(vd + reg_off) = 0;
-        }
-    }
+#define DO_LDFF1_ZPZ_S(MEM, OFS, MSZ)                                          \
+void HELPER(sve_ldff##MEM##_##OFS)(CPUARMState *env, void *vd, void *vg,       \
+                                   void *vm, target_ulong base, uint32_t desc) \
+{                                                                              \
+    sve_ldff1_z(env, vd, vg, vm, base, desc, GETPC(), MO_32, MSZ,              \
+                off_##OFS##_s, sve_ld1##MEM##_host, sve_ld1##MEM##_tlb);       \
 }

-#define DO_LDFF1_ZPZ_S(MEM, OFS) \
-void HELPER(sve_ldff##MEM##_##OFS) \
-    (CPUARMState *env, void *vd, void *vg, void *vm, \
-     target_ulong base, uint32_t desc) \
-{ \
-    sve_ldff1_zs(env, vd, vg, vm, base, desc, GETPC(), \
-                 off_##OFS##_s, sve_ld1##MEM##_tlb, sve_ld##MEM##_nf); \
+#define DO_LDFF1_ZPZ_D(MEM, OFS, MSZ)                                          \
+void HELPER(sve_ldff##MEM##_##OFS)(CPUARMState *env, void *vd, void *vg,       \
+                                   void *vm, target_ulong base, uint32_t desc) \
+{                                                                              \
+    sve_ldff1_z(env, vd, vg, vm, base, desc, GETPC(), MO_64, MSZ,              \
+                off_##OFS##_d, sve_ld1##MEM##_host, sve_ld1##MEM##_tlb);       \
 }

-#define DO_LDFF1_ZPZ_D(MEM, OFS) \
-void HELPER(sve_ldff##MEM##_##OFS) \
-    (CPUARMState *env, void *vd, void *vg, void *vm, \
-     target_ulong base, uint32_t desc) \
-{ \
-    sve_ldff1_zd(env, vd, vg, vm, base, desc, GETPC(), \
-                 off_##OFS##_d, sve_ld1##MEM##_tlb, sve_ld##MEM##_nf); \
-}
+DO_LDFF1_ZPZ_S(bsu, zsu, MO_8)
+DO_LDFF1_ZPZ_S(bsu, zss, MO_8)
+DO_LDFF1_ZPZ_D(bdu, zsu, MO_8)
+DO_LDFF1_ZPZ_D(bdu, zss, MO_8)
+DO_LDFF1_ZPZ_D(bdu, zd, MO_8)

-DO_LDFF1_ZPZ_S(bsu, zsu)
-DO_LDFF1_ZPZ_S(bsu, zss)
-DO_LDFF1_ZPZ_D(bdu, zsu)
-DO_LDFF1_ZPZ_D(bdu, zss)
-DO_LDFF1_ZPZ_D(bdu, zd)
+DO_LDFF1_ZPZ_S(bss, zsu, MO_8)
+DO_LDFF1_ZPZ_S(bss, zss, MO_8)
+DO_LDFF1_ZPZ_D(bds, zsu, MO_8)
+DO_LDFF1_ZPZ_D(bds, zss, MO_8)
+DO_LDFF1_ZPZ_D(bds, zd, MO_8)

-DO_LDFF1_ZPZ_S(bss, zsu)
-DO_LDFF1_ZPZ_S(bss, zss)
-DO_LDFF1_ZPZ_D(bds, zsu)
-DO_LDFF1_ZPZ_D(bds, zss)
-DO_LDFF1_ZPZ_D(bds, zd)
+DO_LDFF1_ZPZ_S(hsu_le, zsu, MO_16)
+DO_LDFF1_ZPZ_S(hsu_le, zss, MO_16)
+DO_LDFF1_ZPZ_D(hdu_le, zsu, MO_16)
+DO_LDFF1_ZPZ_D(hdu_le, zss, MO_16)
+DO_LDFF1_ZPZ_D(hdu_le, zd, MO_16)

-DO_LDFF1_ZPZ_S(hsu_le, zsu)
-DO_LDFF1_ZPZ_S(hsu_le, zss)
-DO_LDFF1_ZPZ_D(hdu_le, zsu)
-DO_LDFF1_ZPZ_D(hdu_le, zss)
-DO_LDFF1_ZPZ_D(hdu_le, zd)
+DO_LDFF1_ZPZ_S(hsu_be, zsu, MO_16)
+DO_LDFF1_ZPZ_S(hsu_be, zss, MO_16)
+DO_LDFF1_ZPZ_D(hdu_be, zsu, MO_16)
+DO_LDFF1_ZPZ_D(hdu_be, zss, MO_16)
+DO_LDFF1_ZPZ_D(hdu_be, zd, MO_16)

-DO_LDFF1_ZPZ_S(hsu_be, zsu)
-DO_LDFF1_ZPZ_S(hsu_be, zss)
-DO_LDFF1_ZPZ_D(hdu_be, zsu)
-DO_LDFF1_ZPZ_D(hdu_be, zss)
-DO_LDFF1_ZPZ_D(hdu_be, zd)
+DO_LDFF1_ZPZ_S(hss_le, zsu, MO_16)
+DO_LDFF1_ZPZ_S(hss_le, zss, MO_16)
+DO_LDFF1_ZPZ_D(hds_le, zsu, MO_16)
+DO_LDFF1_ZPZ_D(hds_le, zss, MO_16)
+DO_LDFF1_ZPZ_D(hds_le, zd, MO_16)

-DO_LDFF1_ZPZ_S(hss_le, zsu)
-DO_LDFF1_ZPZ_S(hss_le, zss)
-DO_LDFF1_ZPZ_D(hds_le, zsu)
-DO_LDFF1_ZPZ_D(hds_le, zss)
-DO_LDFF1_ZPZ_D(hds_le, zd)
+DO_LDFF1_ZPZ_S(hss_be, zsu, MO_16)
+DO_LDFF1_ZPZ_S(hss_be, zss, MO_16)
+DO_LDFF1_ZPZ_D(hds_be, zsu, MO_16)
+DO_LDFF1_ZPZ_D(hds_be, zss, MO_16)
+DO_LDFF1_ZPZ_D(hds_be, zd, MO_16)

-DO_LDFF1_ZPZ_S(hss_be, zsu)
-DO_LDFF1_ZPZ_S(hss_be, zss)
-DO_LDFF1_ZPZ_D(hds_be, zsu)
-DO_LDFF1_ZPZ_D(hds_be, zss)
-DO_LDFF1_ZPZ_D(hds_be, zd)
+DO_LDFF1_ZPZ_S(ss_le, zsu, MO_32)
+DO_LDFF1_ZPZ_S(ss_le, zss, MO_32)
+DO_LDFF1_ZPZ_D(sdu_le, zsu, MO_32)
+DO_LDFF1_ZPZ_D(sdu_le, zss, MO_32)
+DO_LDFF1_ZPZ_D(sdu_le, zd, MO_32)

-DO_LDFF1_ZPZ_S(ss_le, zsu)
-DO_LDFF1_ZPZ_S(ss_le, zss)
-DO_LDFF1_ZPZ_D(sdu_le, zsu)
-DO_LDFF1_ZPZ_D(sdu_le, zss)
-DO_LDFF1_ZPZ_D(sdu_le, zd)
+DO_LDFF1_ZPZ_S(ss_be, zsu, MO_32)
+DO_LDFF1_ZPZ_S(ss_be, zss, MO_32)
+DO_LDFF1_ZPZ_D(sdu_be, zsu, MO_32)
+DO_LDFF1_ZPZ_D(sdu_be, zss, MO_32)
+DO_LDFF1_ZPZ_D(sdu_be, zd, MO_32)

-DO_LDFF1_ZPZ_S(ss_be, zsu)
-DO_LDFF1_ZPZ_S(ss_be, zss)
-DO_LDFF1_ZPZ_D(sdu_be, zsu)
-DO_LDFF1_ZPZ_D(sdu_be, zss)
-DO_LDFF1_ZPZ_D(sdu_be, zd)
+DO_LDFF1_ZPZ_D(sds_le, zsu, MO_32)
+DO_LDFF1_ZPZ_D(sds_le, zss, MO_32)
+DO_LDFF1_ZPZ_D(sds_le, zd, MO_32)

-DO_LDFF1_ZPZ_D(sds_le, zsu)
-DO_LDFF1_ZPZ_D(sds_le, zss)
-DO_LDFF1_ZPZ_D(sds_le, zd)
+DO_LDFF1_ZPZ_D(sds_be, zsu, MO_32)
+DO_LDFF1_ZPZ_D(sds_be, zss, MO_32)
+DO_LDFF1_ZPZ_D(sds_be, zd, MO_32)

-DO_LDFF1_ZPZ_D(sds_be, zsu)
-DO_LDFF1_ZPZ_D(sds_be, zss)
-DO_LDFF1_ZPZ_D(sds_be, zd)
+DO_LDFF1_ZPZ_D(dd_le, zsu, MO_64)
+DO_LDFF1_ZPZ_D(dd_le, zss, MO_64)
+DO_LDFF1_ZPZ_D(dd_le, zd, MO_64)

-DO_LDFF1_ZPZ_D(dd_le, zsu)
-DO_LDFF1_ZPZ_D(dd_le, zss)
-DO_LDFF1_ZPZ_D(dd_le, zd)
-
-DO_LDFF1_ZPZ_D(dd_be, zsu)
-DO_LDFF1_ZPZ_D(dd_be, zss)
-DO_LDFF1_ZPZ_D(dd_be, zd)
+DO_LDFF1_ZPZ_D(dd_be, zsu, MO_64)
+DO_LDFF1_ZPZ_D(dd_be, zss, MO_64)
+DO_LDFF1_ZPZ_D(dd_be, zd, MO_64)

 /* Stores with a vector index. */

--
2.20.1
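The deleted helpers above and their replacement implement the same architectural behaviour: the first active element of a first-faulting gather is accessed normally and may fault, while any later element that would fault instead terminates the load and records the element index via record_fault(). A simplified Python model of that control flow, with memory modelled as a dict of valid addresses (illustrative only, not QEMU's implementation; the ffr handling is deliberately loose):

```python
def first_fault_gather(mem, base, offsets, pred):
    """Return (result, ffr). 'mem' maps valid addresses to values;
    an address missing from 'mem' stands in for an unmapped page."""
    n = len(offsets)
    result = [0] * n
    fault_at = n
    first = next((i for i, p in enumerate(pred) if p), None)
    if first is not None:
        # First active element: a real access, so this one may fault.
        result[first] = mem[base + offsets[first]]
        for i in range(first + 1, n):
            if pred[i]:
                addr = base + offsets[i]
                if addr not in mem:
                    fault_at = i      # suppress the fault, note the element
                    break
                result[i] = mem[addr]
    # Elements from the faulting one onward are reported as not loaded.
    ffr = [i < fault_at for i in range(n)]
    return result, ffr
```

The new-code refinement is purely about how the non-faulting probe is done (sve_probe_page on the full page, rejecting MMIO, page-crossing elements and read watchpoints) rather than about this outer shape.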
Convert the insns in the one-register-and-immediate group to decodetree.

In the new decode, our asimd_imm_const() function returns a 64-bit value
rather than a 32-bit one, which means we don't need to treat cmode=14 op=1
as a special case in the decoder (it is the only encoding where the two
halves of the 64-bit value are different).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200522145520.6778-10-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode       |  22 ++++++
 target/arm/translate-neon.inc.c | 118 ++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 101 +--------------------------
 3 files changed, 142 insertions(+), 99 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VCVT_SF_2sh 1111 001 0 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
 VCVT_UF_2sh 1111 001 1 1 . ...... .... 1110 0 . . 1 .... @2reg_vcvt
 VCVT_FS_2sh 1111 001 0 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
 VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
+
+######################################################################
+# 1-reg-and-modified-immediate grouping:
+# 1111 001 i 1 D 000 imm:3 Vd:4 cmode:4 0 Q op 1 Vm:4
+######################################################################
+
+&1reg_imm vd q imm cmode op
+
+%asimd_imm_value 24:1 16:3 0:4
+
+@1reg_imm .... ... . . . ... ... .... .... . q:1 . . .... \
+          &1reg_imm imm=%asimd_imm_value vd=%vd_dp
+
+# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
+# not in a way we can conveniently represent in decodetree without
+# a lot of repetition:
+# VORR: op=0, (cmode & 1) && cmode < 12
+# VBIC: op=1, (cmode & 1) && cmode < 12
+# VMOV: everything else
+# So we have a single decode line and check the cmode/op in the
+# trans function.
+Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
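The %asimd_imm_value line above assembles the 8-bit immediate from three non-contiguous instruction fields: bit 24 (i), bits 18:16 (imm:3) and bits 3:0 (imm:4). A small Python sketch of the extraction that decodetree generates for this field definition (the function name is illustrative, not generated code):

```python
def extract_asimd_imm_value(insn):
    # '%asimd_imm_value 24:1 16:3 0:4' concatenates, from MSB to LSB,
    # 1 bit at [24], 3 bits at [18:16] and 4 bits at [3:0]: abcdefgh.
    a = (insn >> 24) & 0x1
    bcd = (insn >> 16) & 0x7
    efgh = insn & 0xf
    return (a << 7) | (bcd << 4) | efgh
```

This is the same "abcdefgh" value the legacy decoder built by hand as `(u << 7) | ((insn >> 12) & 0x70) | (insn & 0xf)`.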
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ DO_FP_2SH(VCVT_SF, gen_helper_vfp_sltos)
 DO_FP_2SH(VCVT_UF, gen_helper_vfp_ultos)
 DO_FP_2SH(VCVT_FS, gen_helper_vfp_tosls_round_to_zero)
 DO_FP_2SH(VCVT_FU, gen_helper_vfp_touls_round_to_zero)
+
+static uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
+{
+    /*
+     * Expand the encoded constant.
+     * Note that cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
+     * We choose to not special-case this and will behave as if a
+     * valid constant encoding of 0 had been given.
+     * cmode = 15 op = 1 must UNDEF; we assume decode has handled that.
+     */
+    switch (cmode) {
+    case 0: case 1:
+        /* no-op */
+        break;
+    case 2: case 3:
+        imm <<= 8;
+        break;
+    case 4: case 5:
+        imm <<= 16;
+        break;
+    case 6: case 7:
+        imm <<= 24;
+        break;
+    case 8: case 9:
+        imm |= imm << 16;
+        break;
+    case 10: case 11:
+        imm = (imm << 8) | (imm << 24);
+        break;
+    case 12:
+        imm = (imm << 8) | 0xff;
+        break;
+    case 13:
+        imm = (imm << 16) | 0xffff;
+        break;
+    case 14:
+        if (op) {
+            /*
+             * This is the only case where the top and bottom 32 bits
+             * of the encoded constant differ.
+             */
+            uint64_t imm64 = 0;
+            int n;
+
+            for (n = 0; n < 8; n++) {
+                if (imm & (1 << n)) {
+                    imm64 |= (0xffULL << (n * 8));
+                }
+            }
+            return imm64;
+        }
+        imm |= (imm << 8) | (imm << 16) | (imm << 24);
+        break;
+    case 15:
+        imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
+            | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
+        break;
+    }
+    if (op) {
+        imm = ~imm;
+    }
+    return dup_const(MO_32, imm);
+}
+
+static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
+                        GVecGen2iFn *fn)
+{
+    uint64_t imm;
+    int reg_ofs, vec_size;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
+        return false;
+    }
+
+    if (a->vd & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    reg_ofs = neon_reg_offset(a->vd, 0);
+    vec_size = a->q ? 16 : 8;
+    imm = asimd_imm_const(a->imm, a->cmode, a->op);
+
+    fn(MO_64, reg_ofs, reg_ofs, imm, vec_size, vec_size);
+    return true;
+}
+
+static void gen_VMOV_1r(unsigned vece, uint32_t dofs, uint32_t aofs,
+                        int64_t c, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_dup_imm(MO_64, dofs, oprsz, maxsz, c);
+}
+
+static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
+{
+    /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
+    GVecGen2iFn *fn;
+
+    if ((a->cmode & 1) && a->cmode < 12) {
+        /* for op=1, the imm will be inverted, so BIC becomes AND. */
+        fn = a->op ? tcg_gen_gvec_andi : tcg_gen_gvec_ori;
+    } else {
+        /* There is one unallocated cmode/op combination in this space */
+        if (a->cmode == 15 && a->op == 1) {
+            return false;
+        }
+        fn = gen_VMOV_1r;
+    }
+    return do_1reg_imm(s, a, fn);
+}
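The cmode/op expansion in asimd_imm_const() can be cross-checked with a quick Python model. It mirrors the switch above, including the cmode=14 op=1 byte-replication case (which returns before the inversion step) and the final dup_const(MO_32, imm) duplication into 64 bits; this is a checking aid, not QEMU's implementation, and it needs explicit 32-bit masking where C's uint32_t wraps implicitly:

```python
def asimd_imm_const(imm, cmode, op):
    if cmode in (2, 3):
        imm <<= 8
    elif cmode in (4, 5):
        imm <<= 16
    elif cmode in (6, 7):
        imm <<= 24
    elif cmode in (8, 9):
        imm |= imm << 16
    elif cmode in (10, 11):
        imm = (imm << 8) | (imm << 24)
    elif cmode == 12:
        imm = (imm << 8) | 0xff
    elif cmode == 13:
        imm = (imm << 16) | 0xffff
    elif cmode == 14:
        if op:
            # Each immediate bit selects one whole byte of the result;
            # the only case where the two 32-bit halves differ.
            return sum(0xff << (n * 8) for n in range(8) if imm & (1 << n))
        imm |= (imm << 8) | (imm << 16) | (imm << 24)
    elif cmode == 15:
        # cmode 15 op=1 is UNDEF and assumed filtered out by decode.
        imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19) \
            | ((0x1f << 25) if imm & 0x40 else (1 << 30))
    # cmode 0/1: no-op
    if op:
        imm = ~imm                   # VBIC / VMVN invert the constant
    imm &= 0xffffffff
    return imm | (imm << 32)         # dup_const(MO_32, imm)
```

With the 64-bit return value, the decoder no longer needs the legacy per-pass VMOV op=14 invert loop: the expanded constant is simply handed to the gvec op.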
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             /* Three register same length: handled by decodetree */
             return 1;
         } else if (insn & (1 << 4)) {
-            if ((insn & 0x00380080) != 0) {
-                /* Two registers and shift: handled by decodetree */
-                return 1;
-            } else { /* (insn & 0x00380080) == 0 */
-                int invert, reg_ofs, vec_size;
-
-                if (q && (rd & 1)) {
-                    return 1;
-                }
-
-                op = (insn >> 8) & 0xf;
-                /* One register and immediate. */
-                imm = (u << 7) | ((insn >> 12) & 0x70) | (insn & 0xf);
-                invert = (insn & (1 << 5)) != 0;
-                /* Note that op = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
-                 * We choose to not special-case this and will behave as if a
-                 * valid constant encoding of 0 had been given.
-                 */
-                switch (op) {
-                case 0: case 1:
-                    /* no-op */
-                    break;
-                case 2: case 3:
-                    imm <<= 8;
-                    break;
-                case 4: case 5:
-                    imm <<= 16;
-                    break;
-                case 6: case 7:
-                    imm <<= 24;
-                    break;
-                case 8: case 9:
-                    imm |= imm << 16;
-                    break;
-                case 10: case 11:
-                    imm = (imm << 8) | (imm << 24);
-                    break;
-                case 12:
-                    imm = (imm << 8) | 0xff;
-                    break;
-                case 13:
-                    imm = (imm << 16) | 0xffff;
-                    break;
-                case 14:
-                    imm |= (imm << 8) | (imm << 16) | (imm << 24);
-                    if (invert) {
-                        imm = ~imm;
-                    }
-                    break;
-                case 15:
-                    if (invert) {
-                        return 1;
-                    }
-                    imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
-                        | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
-                    break;
-                }
-                if (invert) {
-                    imm = ~imm;
-                }
-
-                reg_ofs = neon_reg_offset(rd, 0);
-                vec_size = q ? 16 : 8;
-
-                if (op & 1 && op < 12) {
-                    if (invert) {
-                        /* The immediate value has already been inverted,
-                         * so BIC becomes AND.
-                         */
-                        tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
-                                          vec_size, vec_size);
-                    } else {
-                        tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
-                                         vec_size, vec_size);
-                    }
-                } else {
-                    /* VMOV, VMVN. */
-                    if (op == 14 && invert) {
-                        TCGv_i64 t64 = tcg_temp_new_i64();
-
-                        for (pass = 0; pass <= q; ++pass) {
-                            uint64_t val = 0;
-                            int n;
-
-                            for (n = 0; n < 8; n++) {
-                                if (imm & (1 << (n + pass * 8))) {
-                                    val |= 0xffull << (n * 8);
-                                }
-                            }
-                            tcg_gen_movi_i64(t64, val);
-                            neon_store_reg64(t64, rd + pass);
-                        }
-                        tcg_temp_free_i64(t64);
-                    } else {
-                        tcg_gen_gvec_dup_imm(MO_32, reg_ofs, vec_size,
-                                             vec_size, imm);
-                    }
-                }
-            }
+            /* Two registers and shift or reg and imm: handled by decodetree */
+            return 1;
         } else { /* (insn & 0x00800010 == 0x00800000) */
             if (size != 3) {
                 op = (insn >> 8) & 0xf;
--
2.20.1