The following changes since commit 53f306f316549d20c76886903181413d20842423:

  Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging (2021-06-21 11:26:04 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210621

for you to fetch changes up to a83f1d9263d281f938a3984cda7104d55affd43a:

  docs/system: arm: Add nRF boards description (2021-06-21 17:24:33 +0100)

----------------------------------------------------------------
target-arm queue:
 * Don't require 'virt' board to be compiled in for ACPI GHES code
 * docs: Document which architecture extensions we emulate
 * Fix bugs in M-profile FPCXT_NS accesses
 * First slice of MVE patches
 * Implement MTE3
 * docs/system: arm: Add nRF boards description

----------------------------------------------------------------
Alexandre Iooss (1):
      docs/system: arm: Add nRF boards description

Peter Collingbourne (1):
      target/arm: Implement MTE3

Peter Maydell (55):
      hw/acpi: Provide stub version of acpi_ghes_record_errors()
      hw/acpi: Provide function acpi_ghes_present()
      target/arm: Use acpi_ghes_present() to see if we report ACPI memory errors
      docs/system/arm: Document which architecture extensions we emulate
      target/arm/translate-vfp.c: Whitespace fixes
      target/arm: Handle FPU being disabled in FPCXT_NS accesses
      target/arm: Don't NOCP fault for FPCXT_NS accesses
      target/arm: Handle writeback in VLDR/VSTR sysreg with no memory access
      target/arm: Factor FP context update code out into helper function
      target/arm: Split vfp_access_check() into A and M versions
      target/arm: Handle FPU check for FPCXT_NS insns via vfp_access_check_m()
      target/arm: Implement MVE VLDR/VSTR (non-widening forms)
      target/arm: Implement widening/narrowing MVE VLDR/VSTR insns
      target/arm: Implement MVE VCLZ
      target/arm: Implement MVE VCLS
      target/arm: Implement MVE VREV16, VREV32, VREV64
      target/arm: Implement MVE VMVN (register)
      target/arm: Implement MVE VABS
      target/arm: Implement MVE VNEG
      tcg: Make gen_dup_i32/i64() public as tcg_gen_dup_i32/i64
      target/arm: Implement MVE VDUP
      target/arm: Implement MVE VAND, VBIC, VORR, VORN, VEOR
      target/arm: Implement MVE VADD, VSUB, VMUL
      target/arm: Implement MVE VMULH
      target/arm: Implement MVE VRMULH
      target/arm: Implement MVE VMAX, VMIN
      target/arm: Implement MVE VABD
      target/arm: Implement MVE VHADD, VHSUB
      target/arm: Implement MVE VMULL
      target/arm: Implement MVE VMLALDAV
      target/arm: Implement MVE VMLSLDAV
      target/arm: Implement MVE VRMLALDAVH, VRMLSLDAVH
      target/arm: Implement MVE VADD (scalar)
      target/arm: Implement MVE VSUB, VMUL (scalar)
      target/arm: Implement MVE VHADD, VHSUB (scalar)
      target/arm: Implement MVE VBRSR
      target/arm: Implement MVE VPST
      target/arm: Implement MVE VQADD and VQSUB
      target/arm: Implement MVE VQDMULH and VQRDMULH (scalar)
      target/arm: Implement MVE VQDMULL scalar
      target/arm: Implement MVE VQDMULH, VQRDMULH (vector)
      target/arm: Implement MVE VQADD, VQSUB (vector)
      target/arm: Implement MVE VQSHL (vector)
      target/arm: Implement MVE VQRSHL
      target/arm: Implement MVE VSHL insn
      target/arm: Implement MVE VRSHL
      target/arm: Implement MVE VQDMLADH and VQRDMLADH
      target/arm: Implement MVE VQDMLSDH and VQRDMLSDH
      target/arm: Implement MVE VQDMULL (vector)
      target/arm: Implement MVE VRHADD
      target/arm: Implement MVE VADC, VSBC
      target/arm: Implement MVE VCADD
      target/arm: Implement MVE VHCADD
      target/arm: Implement MVE VADDV
      target/arm: Make VMOV scalar <-> gpreg beatwise for MVE

 docs/system/arm/emulation.rst |  103 ++++
 docs/system/arm/nrf.rst       |   51 ++
 docs/system/target-arm.rst    |    7 +
 include/hw/acpi/ghes.h        |    9 +
 include/tcg/tcg-op.h          |    8 +
 include/tcg/tcg.h             |    1 -
 target/arm/helper-mve.h       |  357 +++++++++++++
 target/arm/helper.h           |    2 +
 target/arm/internals.h        |   11 +
 target/arm/translate-a32.h    |    3 +
 target/arm/translate.h        |   10 +
 target/arm/m-nocp.decode      |   24 +
 target/arm/mve.decode         |  240 +++++++++
 target/arm/vfp.decode         |   14 -
 hw/acpi/ghes-stub.c           |   22 +
 hw/acpi/ghes.c                |   17 +
 target/arm/cpu64.c            |    2 +-
 target/arm/kvm64.c            |    6 +-
 target/arm/mte_helper.c       |   82 +--
 target/arm/mve_helper.c       | 1160 +++++++++++++++++++++++++++++++++++++++++
 target/arm/translate-m-nocp.c |  550 +++++++++++++++++++
 target/arm/translate-mve.c    |  759 +++++++++++++++++++++++++++
 target/arm/translate-vfp.c    |  741 +++++++-------------------
 tcg/tcg-op-gvec.c             |   20 +-
 MAINTAINERS                   |    1 +
 hw/acpi/meson.build           |    6 +-
 target/arm/meson.build        |    1 +
 27 files changed, 3578 insertions(+), 629 deletions(-)
 create mode 100644 docs/system/arm/emulation.rst
 create mode 100644 docs/system/arm/nrf.rst
 create mode 100644 target/arm/helper-mve.h
 create mode 100644 hw/acpi/ghes-stub.c
 create mode 100644 target/arm/mve_helper.c
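
For anyone wanting to test the queue locally, a typical workflow is to
fetch the tag directly (a sketch; it assumes an existing QEMU checkout
and uses the repository URL and tag given above):

  $ git fetch https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210621
  $ git merge FETCH_HEAD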

Generic code in target/arm wants to call acpi_ghes_record_errors();
provide a stub version so that we don't fail to link when
CONFIG_ACPI_APEI is not set. This requires us to add a new
ghes-stub.c file to contain it and the meson.build mechanics
to use it when appropriate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
Message-id: 20210603171259.27962-2-peter.maydell@linaro.org
---
 hw/acpi/ghes-stub.c | 17 +++++++++++++++++
 hw/acpi/meson.build |  6 +++---
 2 files changed, 20 insertions(+), 3 deletions(-)
 create mode 100644 hw/acpi/ghes-stub.c

diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/acpi/ghes-stub.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Support for generating APEI tables and recording CPER for Guests:
+ * stub functions.
+ *
+ * Copyright (c) 2021 Linaro, Ltd
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/acpi/ghes.h"
+
+int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
+{
+    return -1;
+}
diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/meson.build
+++ b/hw/acpi/meson.build
@@ -XXX,XX +XXX,XX @@ acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
-acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'))
+acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'), if_false: files('ghes-stub.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_X86', if_true: files('core.c', 'piix4.c', 'pcihp.c'), if_false: files('acpi-stub.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_X86_ICH', if_true: files('ich9.c', 'tco.c'))
 acpi_ss.add(when: 'CONFIG_IPMI', if_true: files('ipmi.c'), if_false: files('ipmi-stub.c'))
 acpi_ss.add(when: 'CONFIG_PC', if_false: files('acpi-x86-stub.c'))
 acpi_ss.add(when: 'CONFIG_TPM', if_true: files('tpm.c'))
-softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c'))
+softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c', 'ghes-stub.c'))
 softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
 softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
-                                                  'acpi-x86-stub.c', 'ipmi-stub.c'))
+                                                  'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c'))
--
2.20.1
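
(For illustration only, not part of the patch: with the stub in place,
generic code can call the GHES API unconditionally and just observe the
failure return on builds without CONFIG_ACPI_APEI. A minimal sketch of
such a caller; the function name and source id 0 are made up here:)

  #include "qemu/osdep.h"
  #include "hw/acpi/ghes.h"

  /* Hypothetical caller: links against either ghes.c or ghes-stub.c. */
  static void try_record_memory_error(uint64_t paddr)
  {
      if (acpi_ghes_record_errors(0, paddr) < 0) {
          /* No GHES support in this build: the stub always returns -1. */
      }
  }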

Allow code elsewhere in the system to check whether the ACPI GHES
table is present, so it can determine whether it is OK to try to
record an error by calling acpi_ghes_record_errors().

(We don't need to migrate the new 'present' field in AcpiGhesState,
because it is set once at system initialization and doesn't change.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
Message-id: 20210603171259.27962-3-peter.maydell@linaro.org
---
 include/hw/acpi/ghes.h |  9 +++++++++
 hw/acpi/ghes-stub.c    |  5 +++++
 hw/acpi/ghes.c         | 17 +++++++++++++++++
 3 files changed, 31 insertions(+)

diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/acpi/ghes.h
+++ b/include/hw/acpi/ghes.h
@@ -XXX,XX +XXX,XX @@ enum {

 typedef struct AcpiGhesState {
     uint64_t ghes_addr_le;
+    bool present; /* True if GHES is present at all on this board */
 } AcpiGhesState;

 void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
@@ -XXX,XX +XXX,XX @@ void acpi_build_hest(GArray *table_data, BIOSLinker *linker,
 void acpi_ghes_add_fw_cfg(AcpiGhesState *vms, FWCfgState *s,
                           GArray *hardware_errors);
 int acpi_ghes_record_errors(uint8_t notify, uint64_t error_physical_addr);
+
+/**
+ * acpi_ghes_present: Report whether ACPI GHES table is present
+ *
+ * Returns: true if the system has an ACPI GHES table and it is
+ * safe to call acpi_ghes_record_errors() to record a memory error.
+ */
+bool acpi_ghes_present(void);
 #endif
diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/ghes-stub.c
+++ b/hw/acpi/ghes-stub.c
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
 {
     return -1;
 }
+
+bool acpi_ghes_present(void)
+{
+    return false;
+}
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/ghes.c
+++ b/hw/acpi/ghes.c
@@ -XXX,XX +XXX,XX @@ void acpi_ghes_add_fw_cfg(AcpiGhesState *ags, FWCfgState *s,
     /* Create a read-write fw_cfg file for Address */
     fw_cfg_add_file_callback(s, ACPI_GHES_DATA_ADDR_FW_CFG_FILE, NULL, NULL,
         NULL, &(ags->ghes_addr_le), sizeof(ags->ghes_addr_le), false);
+
+    ags->present = true;
 }

 int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)

     return ret;
 }
+
+bool acpi_ghes_present(void)
+{
+    AcpiGedState *acpi_ged_state;
+    AcpiGhesState *ags;
+
+    acpi_ged_state = ACPI_GED(object_resolve_path_type("", TYPE_ACPI_GED,
+                                                       NULL));
+
+    if (!acpi_ged_state) {
+        return false;
+    }
+    ags = &acpi_ged_state->ghes_state;
+    return ags->present;
+}
--
2.20.1
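
(Sketch of the intended usage pattern, which the next patch applies in
target/arm/kvm64.c: check for presence once, then record. The
ACPI_HEST_SRC_ID_SEA constant is assumed to be the existing SEA error
source id in include/hw/acpi/ghes.h:)

  /* Only try to record the error if a GHES table actually exists. */
  if (acpi_ghes_present() && addr) {
      acpi_ghes_record_errors(ACPI_HEST_SRC_ID_SEA, paddr);
  }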

The virt_is_acpi_enabled() function is specific to the virt board, as
is the check for its 'ras' property. Use the new acpi_ghes_present()
function to check whether we should report memory errors via
acpi_ghes_record_errors().

This avoids a link error if QEMU was built without support for the
virt board, and provides a mechanism that can be used by any future
board models that want to add ACPI memory error reporting support
(they only need to call acpi_ghes_add_fw_cfg()).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
Message-id: 20210603171259.27962-4-peter.maydell@linaro.org
---
 target/arm/kvm64.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
 {
     ram_addr_t ram_addr;
     hwaddr paddr;
-    Object *obj = qdev_get_machine();
-    VirtMachineState *vms = VIRT_MACHINE(obj);
-    bool acpi_enabled = virt_is_acpi_enabled(vms);

     assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);

-    if (acpi_enabled && addr &&
-        object_property_get_bool(obj, "ras", NULL)) {
+    if (acpi_ghes_present() && addr) {
         ram_addr = qemu_ram_addr_from_host(addr);
         if (ram_addr != RAM_ADDR_INVALID &&
             kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
--
2.20.1

These days the Arm architecture has a wide range of fine-grained
optional extra architectural features. We implement quite a lot
of these but by no means all of them. Document what we do implement,
so that users can find out without having to dig through back-issues
of our Changelog on the wiki.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210617140328.28622-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 docs/system/arm/emulation.rst | 102 ++++++++++++++++++++++++++++++++++
 docs/system/target-arm.rst    |   6 ++
 2 files changed, 108 insertions(+)
 create mode 100644 docs/system/arm/emulation.rst

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@
+A-profile CPU architecture support
+==================================
+
+QEMU's TCG emulation includes support for the Armv5, Armv6, Armv7 and
+Armv8 versions of the A-profile architecture. It also has support for
+the following architecture extensions:
+
+- FEAT_AA32BF16 (AArch32 BFloat16 instructions)
+- FEAT_AA32HPD (AArch32 hierarchical permission disables)
+- FEAT_AA32I8MM (AArch32 Int8 matrix multiplication instructions)
+- FEAT_AES (AESD and AESE instructions)
+- FEAT_BF16 (AArch64 BFloat16 instructions)
+- FEAT_BTI (Branch Target Identification)
+- FEAT_DIT (Data Independent Timing instructions)
+- FEAT_DPB (DC CVAP instruction)
+- FEAT_DotProd (Advanced SIMD dot product instructions)
+- FEAT_FCMA (Floating-point complex number instructions)
+- FEAT_FHM (Floating-point half-precision multiplication instructions)
+- FEAT_FP16 (Half-precision floating-point data processing)
+- FEAT_FRINTTS (Floating-point to integer instructions)
+- FEAT_FlagM (Flag manipulation instructions v2)
+- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
+- FEAT_HPDS (Hierarchical permission disables)
+- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_JSCVT (JavaScript conversion instructions)
+- FEAT_LOR (Limited ordering regions)
+- FEAT_LRCPC (Load-acquire RCpc instructions)
+- FEAT_LRCPC2 (Load-acquire RCpc instructions v2)
+- FEAT_LSE (Large System Extensions)
+- FEAT_MTE (Memory Tagging Extension)
+- FEAT_MTE2 (Memory Tagging Extension)
+- FEAT_PAN (Privileged access never)
+- FEAT_PAN2 (AT S1E1R and AT S1E1W instruction variants affected by PSTATE.PAN)
+- FEAT_PAuth (Pointer authentication)
+- FEAT_PMULL (PMULL, PMULL2 instructions)
+- FEAT_PMUv3p1 (PMU Extensions v3.1)
+- FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
+- FEAT_RNG (Random number generator)
+- FEAT_SB (Speculation Barrier)
+- FEAT_SEL2 (Secure EL2)
+- FEAT_SHA1 (SHA1 instructions)
+- FEAT_SHA256 (SHA256 instructions)
+- FEAT_SHA3 (Advanced SIMD SHA3 instructions)
+- FEAT_SHA512 (Advanced SIMD SHA512 instructions)
+- FEAT_SM3 (Advanced SIMD SM3 instructions)
+- FEAT_SM4 (Advanced SIMD SM4 instructions)
+- FEAT_SPECRES (Speculation restriction instructions)
+- FEAT_SSBS (Speculative Store Bypass Safe)
+- FEAT_TLBIOS (TLB invalidate instructions in Outer Shareable domain)
+- FEAT_TLBIRANGE (TLB invalidate range instructions)
+- FEAT_TTCNP (Translation table Common not private translations)
+- FEAT_TTST (Small translation tables)
+- FEAT_UAO (Unprivileged Access Override control)
+- FEAT_VHE (Virtualization Host Extensions)
+- FEAT_VMID16 (16-bit VMID)
+- FEAT_XNX (Translation table stage 2 Unprivileged Execute-never)
+- SVE (The Scalable Vector Extension)
+- SVE2 (The Scalable Vector Extension v2)
+
+For information on the specifics of these extensions, please refer
+to the `Armv8-A Arm Architecture Reference Manual
+<https://developer.arm.com/documentation/ddi0487/latest>`_.
+
+When a specific named CPU is being emulated, only those features which
+are present in hardware for that CPU are emulated. (If a feature is
+not in the list above then it is not supported, even if the real
+hardware should have it.) The ``max`` CPU enables all features.
+
+R-profile CPU architecture support
+==================================
+
+QEMU's TCG emulation support for R-profile CPUs is currently limited.
+We emulate only the Cortex-R5 and Cortex-R5F CPUs.
+
+M-profile CPU architecture support
+==================================
+
+QEMU's TCG emulation includes support for Armv6-M, Armv7-M, Armv8-M, and
+Armv8.1-M versions of the M-profile architecture. It also has support
+for the following architecture extensions:
+
+- FP (Floating-point Extension)
+- FPCXT (FPCXT access instructions)
+- HP (Half-precision floating-point instructions)
+- LOB (Low Overhead loops and Branch future)
+- M (Main Extension)
+- MPU (Memory Protection Unit Extension)
+- PXN (Privileged Execute Never)
+- RAS (Reliability, Serviceability and Availability): "minimum RAS Extension" only
+- S (Security Extension)
+- ST (System Timer Extension)
+
+For information on the specifics of these extensions, please refer
+to the `Armv8-M Arm Architecture Reference Manual
+<https://developer.arm.com/documentation/ddi0553/latest>`_.
+
+When a specific named CPU is being emulated, only those features which
+are present in hardware for that CPU are emulated. (If a feature is
+not in the list above then it is not supported, even if the real
+hardware should have it.) There is no equivalent of the ``max`` CPU for
+M-profile.
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
 arm/virt
 arm/xlnx-versal-virt

+Emulated CPU architecture support
+=================================
+
+.. toctree::
+   arm/emulation
+
 Arm CPU features
 ================

--
2.20.1

In the code for handling VFP system register accesses there is some
stray whitespace after a unary '-' operator, and also some incorrect
indent in a couple of function prototypes. We're about to move this
code to another file, so fix the code style issues first so
checkpatch doesn't complain about the code-movement patch.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-2-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
 }

 static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
-
                                   fp_sysreg_loadfn *loadfn,
-                                 void *opaque)
+                                  void *opaque)
 {
     /* Do a write to an M-profile floating point system register */
     TCGv_i32 tmp;
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
 }

 static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
-                                fp_sysreg_storefn *storefn,
-                                void *opaque)
+                                 fp_sysreg_storefn *storefn,
+                                 void *opaque)
 {
     /* Do a read from an M-profile floating point system register */
     TCGv_i32 tmp;
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
     TCGv_i32 addr;

     if (!a->a) {
-        offset = - offset;
+        offset = -offset;
     }

     addr = load_reg(s, a->rn);
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
     TCGv_i32 value = tcg_temp_new_i32();

     if (!a->a) {
-        offset = - offset;
+        offset = -offset;
     }

     addr = load_reg(s, a->rn);
--
2.20.1

If the guest makes an FPCXT_NS access when the FPU is disabled,
one of two things happens:
 * if there is no active FP context, then the insn behaves the
   same way as if the FPU was enabled: writes ignored, reads
   same value as FPDSCR_NS
 * if there is an active FP context, then we take a NOCP
   exception

Add code to the sysreg read/write functions which emits
code to take the NOCP exception in the latter case.

At the moment this will never be used, because the NOCP checks in
m-nocp.decode happen first, and so the trans functions are never
called when the FPU is disabled. The code will be needed when we
move the sysreg access insns to before the NOCP patterns in the
following commit.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         lab_end = gen_new_label();
         /* fpInactive case: write is a NOP, so branch to end */
         gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
-        /* !fpInactive: PreserveFPState(), and reads same as FPCXT_S */
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS writes
+         * behave the same as FPCXT_S writes.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
         gen_preserve_fp_state(s);
         /* fall through */
     case ARM_VFP_FPCXT_S:
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
         tcg_gen_br(lab_end);

         gen_set_label(lab_active);
-        /* !fpInactive: Reads the same as FPCXT_S, but side effects differ */
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS
+         * reads the same as FPCXT_S.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
         gen_preserve_fp_state(s);
         tmp = tcg_temp_new_i32();
         sfpa = tcg_temp_new_i32();
--
2.20.1

The M-profile architecture requires that accesses to FPCXT_NS when
there is no active FP state must not take a NOCP fault even if the
FPU is disabled. We were not implementing this correctly, because
in our decode we catch the NOCP faults early in m-nocp.decode.

Fix this bug by moving all the handling of M-profile FP system
register accesses from vfp.decode into m-nocp.decode and putting
it above the NOCP blocks. This provides the correct behaviour:
 * for accesses other than FPCXT_NS the trans functions call
   vfp_access_check(), which will check for FPU disabled and
   raise a NOCP exception if necessary
 * for FPCXT_NS we have the special case code that doesn't
   call vfp_access_check()
 * when these trans functions want to raise an UNDEF they return
   false, so the decoder will fall through into the NOCP blocks.
   This means that NOCP correctly takes precedence over UNDEF
   for these insns. (This is a difference from the other insns
   handled by m-nocp.decode, where UNDEF takes precedence and
   which we implement by having those trans functions call
   unallocated_encoding() in the appropriate places.)

[Note for backport to stable: this commit has a semantic dependency
on commit 9a486856e9173af, which was not marked as cc-stable because
we didn't know we'd need it for a for-stable bugfix.]

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-4-peter.maydell@linaro.org
---
 target/arm/translate-a32.h    |   1 +
 target/arm/m-nocp.decode      |  24 ++
 target/arm/vfp.decode         |  14 -
 target/arm/translate-m-nocp.c | 514 +++++++++++++++++++++++++++++++++
 target/arm/translate-vfp.c    | 517 +---------------------------------
 5 files changed, 542 insertions(+), 528 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ bool disas_neon_shared(DisasContext *s, uint32_t insn);
 void load_reg_var(DisasContext *s, TCGv_i32 var, int reg);
 void arm_gen_condlabel(DisasContext *s);
 bool vfp_access_check(DisasContext *s);
+void gen_preserve_fp_state(DisasContext *s);
 void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop);
 void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop);
 void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop);
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m-nocp.decode
+++ b/target/arm/m-nocp.decode
@@ -XXX,XX +XXX,XX @@

 &nocp cp

+# M-profile VLDR/VSTR to sysreg
+%vldr_sysreg 22:1 13:3
+%imm7_0x4 0:7 !function=times_4
+
+&vldr_sysreg rn reg imm a w p
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
+             reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
+
 {
   # Special cases which do not take an early NOCP: VLLDM and VLSTM
   VLLDM_VLSTM  1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
@@ -XXX,XX +XXX,XX @@
   VSCCLRM      1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
   VSCCLRM      1110 1100 1.01 1111 .... 1010 imm:8   vd=%vd_sp size=2

+  # FP system register accesses: these are a special case because accesses
+  # to FPCXT_NS succeed even if the FPU is disabled. We therefore need
+  # to handle them before the big NOCP blocks. Note that within these
+  # insns NOCP still has higher priority than UNDEFs; this is implemented
+  # by their returning 'false' for UNDEF so as to fall through into the
+  # NOCP check (in contrast to VLLDM etc, which call unallocated_encoding()
+  # for the UNDEFs there that must take precedence over NOCP.)
+
+  VMSR_VMRS    ---- 1110 111 l:1 reg:4 rt:4 1010 0001 0000
+
+  # P=0 W=0 is SEE "Related encodings", so split into two patterns
+  VLDR_sysreg  ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
+  VLDR_sysreg  ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
+  VSTR_sysreg  ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
+  VSTR_sysreg  ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
+
   NOCP         111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
   NOCP         111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
   # From v8.1M onwards this range will also NOCP:
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp.decode
+++ b/target/arm/vfp.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
 VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
 VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp

-# M-profile VLDR/VSTR to sysreg
-%vldr_sysreg 22:1 13:3
-%imm7_0x4 0:7 !function=times_4
-
-&vldr_sysreg rn reg imm a w p
-@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
-             reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
-
-# P=0 W=0 is SEE "Related encodings", so split into two patterns
-VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
-VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
-VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
-VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
-
 # We split the load/store multiple up into two patterns to avoid
 # overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
 # grouping:
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@

 #include "qemu/osdep.h"
 #include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "translate.h"
 #include "translate-a32.h"

@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
     return true;
 }

+/*
+ * M-profile provides two different sets of instructions that can
+ * access floating point system registers: VMSR/VMRS (which move
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
+ * move directly to/from memory). In some cases there are also side
+ * effects which must happen after any write to memory (which could
+ * cause an exception). So we implement the common logic for the
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
+ * which take pointers to callback functions which will perform the
+ * actual "read/write general purpose register" and "read/write
+ * memory" operations.
+ */
+
+/*
+ * Emit code to store the sysreg to its final destination; frees the
+ * TCG temp 'value' it is passed.
+ */
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
+/*
+ * Emit code to load the value to be copied to the sysreg; returns
+ * a new TCG temporary
+ */
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
+
+/* Common decode/access checks for fp sysreg read/write */
+typedef enum FPSysRegCheckResult {
+    FPSysRegCheckFailed, /* caller should return false */
+    FPSysRegCheckDone, /* caller should return true */
+    FPSysRegCheckContinue, /* caller should continue generating code */
+} FPSysRegCheckResult;
+
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
+{
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
+        return FPSysRegCheckFailed;
+    }
+
+    switch (regno) {
+    case ARM_VFP_FPSCR:
+    case QEMU_VFP_FPSCR_NZCV:
+        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return FPSysRegCheckFailed;
+        }
+        break;
+    case ARM_VFP_FPCXT_S:
+    case ARM_VFP_FPCXT_NS:
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return FPSysRegCheckFailed;
+        }
+        if (!s->v8m_secure) {
+            return FPSysRegCheckFailed;
+        }
+        break;
+    case ARM_VFP_VPR:
+    case ARM_VFP_P0:
+        if (!dc_isar_feature(aa32_mve, s)) {
+            return FPSysRegCheckFailed;
+        }
+        break;
+    default:
+        return FPSysRegCheckFailed;
+    }
+
+    /*
+     * FPCXT_NS is a special case: it has specific handling for
+     * "current FP state is inactive", and must do the PreserveFPState()
+     * but not the usual full set of actions done by ExecuteFPCheck().
+     * So we don't call vfp_access_check() and the callers must handle this.
+     */
+    if (regno != ARM_VFP_FPCXT_NS && !vfp_access_check(s)) {
+        return FPSysRegCheckDone;
+    }
+    return FPSysRegCheckContinue;
+}
+
+static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
+                                  TCGLabel *label)
+{
+    /*
+     * FPCXT_NS is a special case: it has specific handling for
+     * "current FP state is inactive", and must do the PreserveFPState()
+     * but not the usual full set of actions done by ExecuteFPCheck().
+     * We don't have a TB flag that matches the fpInactive check, so we
+     * do it at runtime as we don't expect FPCXT_NS accesses to be frequent.
+     *
+     * Emit code that checks fpInactive and does a conditional
+     * branch to label based on it:
+     *  if cond is TCG_COND_NE then branch if fpInactive != 0 (ie if inactive)
+     *  if cond is TCG_COND_EQ then branch if fpInactive == 0 (ie if active)
+     */
+    assert(cond == TCG_COND_EQ || cond == TCG_COND_NE);
+
+    /* fpInactive = FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0 */
+    TCGv_i32 aspen, fpca;
+    aspen = load_cpu_field(v7m.fpccr[M_REG_NS]);
+    fpca = load_cpu_field(v7m.control[M_REG_S]);
+    tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
+    tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
+    tcg_gen_andi_i32(fpca, fpca, R_V7M_CONTROL_FPCA_MASK);
+    tcg_gen_or_i32(fpca, fpca, aspen);
+    tcg_gen_brcondi_i32(tcg_invert_cond(cond), fpca, 0, label);
+    tcg_temp_free_i32(aspen);
+    tcg_temp_free_i32(fpca);
+}
+
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
+                                  fp_sysreg_loadfn *loadfn,
+                                  void *opaque)
+{
+    /* Do a write to an M-profile floating point system register */
+    TCGv_i32 tmp;
+    TCGLabel *lab_end = NULL;
+
+    switch (fp_sysreg_checks(s, regno)) {
+    case FPSysRegCheckFailed:
+        return false;
+    case FPSysRegCheckDone:
+        return true;
+    case FPSysRegCheckContinue:
+        break;
+    }
+
+    switch (regno) {
+    case ARM_VFP_FPSCR:
+        tmp = loadfn(s, opaque);
+        gen_helper_vfp_set_fpscr(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
+        gen_lookup_tb(s);
+        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+    {
+        TCGv_i32 fpscr;
+        tmp = loadfn(s, opaque);
+        if (dc_isar_feature(aa32_mve, s)) {
+            /* QC is only present for MVE; otherwise RES0 */
+            TCGv_i32 qc = tcg_temp_new_i32();
+            tcg_gen_andi_i32(qc, tmp, FPCR_QC);
+            /*
+             * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
+             * here writing the same value into all elements is simplest.
+             */
+            tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
+                                 16, 16, qc);
+        }
+        tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
+        fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
+        tcg_gen_or_i32(fpscr, fpscr, tmp);
+        store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
+    case ARM_VFP_FPCXT_NS:
+        lab_end = gen_new_label();
+        /* fpInactive case: write is a NOP, so branch to end */
+        gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS writes
+         * behave the same as FPCXT_S writes.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
+        gen_preserve_fp_state(s);
+        /* fall through */
+    case ARM_VFP_FPCXT_S:
+    {
+        TCGv_i32 sfpa, control;
+        /*
+         * Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
+         * bits [27:0] from value and zeroes bits [31:28].
+         */
+        tmp = loadfn(s, opaque);
+        sfpa = tcg_temp_new_i32();
+        tcg_gen_shri_i32(sfpa, tmp, 31);
+        control = load_cpu_field(v7m.control[M_REG_S]);
+        tcg_gen_deposit_i32(control, control, sfpa,
+                            R_V7M_CONTROL_SFPA_SHIFT, 1);
+        store_cpu_field(control, v7m.control[M_REG_S]);
+        tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
+        gen_helper_vfp_set_fpscr(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
+        tcg_temp_free_i32(sfpa);
+        break;
+    }
+    case ARM_VFP_VPR:
+        /* Behaves as NOP if not privileged */
+        if (IS_USER(s)) {
+            break;
+        }
+        tmp = loadfn(s, opaque);
+        store_cpu_field(tmp, v7m.vpr);
+        break;
+    case ARM_VFP_P0:
+    {
+        TCGv_i32 vpr;
+        tmp = loadfn(s, opaque);
+        vpr = load_cpu_field(v7m.vpr);
+        tcg_gen_deposit_i32(vpr, vpr, tmp,
+                            R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
+        store_cpu_field(vpr, v7m.vpr);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
+    default:
+        g_assert_not_reached();
+    }
+    if (lab_end) {
+        gen_set_label(lab_end);
+    }
+    return true;
+}
+
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
+                                 fp_sysreg_storefn *storefn,
+                                 void *opaque)
+{
+    /* Do a read from an M-profile floating point system register */
+    TCGv_i32 tmp;
+    TCGLabel *lab_end = NULL;
+    bool lookup_tb = false;
+
+    switch (fp_sysreg_checks(s, regno)) {
+    case FPSysRegCheckFailed:
+        return false;
+    case FPSysRegCheckDone:
+        return true;
+    case FPSysRegCheckContinue:
+        break;
+    }
+
+    if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
+        /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
+        regno = QEMU_VFP_FPSCR_NZCV;
+    }
+
+    switch (regno) {
+    case ARM_VFP_FPSCR:
+        tmp = tcg_temp_new_i32();
+        gen_helper_vfp_get_fpscr(tmp, cpu_env);
+        storefn(s, opaque, tmp);
+        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        tmp = tcg_temp_new_i32();
+        gen_helper_vfp_get_fpscr(tmp, cpu_env);
+        tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
+        storefn(s, opaque, tmp);
+        break;
+    case QEMU_VFP_FPSCR_NZCV:
+        /*
+         * Read just NZCV; this is a special case to avoid the
+         * helper call for the "VMRS to CPSR.NZCV" insn.
+         */
+        tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
+        storefn(s, opaque, tmp);
+        break;
+    case ARM_VFP_FPCXT_S:
+    {
+        TCGv_i32 control, sfpa, fpscr;
+        /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
+        tmp = tcg_temp_new_i32();
+        sfpa = tcg_temp_new_i32();
+        gen_helper_vfp_get_fpscr(tmp, cpu_env);
+        tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
+        control = load_cpu_field(v7m.control[M_REG_S]);
+        tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
+        tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
+        tcg_gen_or_i32(tmp, tmp, sfpa);
+        tcg_temp_free_i32(sfpa);
+        /*
+         * Store result before updating FPSCR etc, in case
+         * it is a memory write which causes an exception.
+         */
+        storefn(s, opaque, tmp);
+        /*
+         * Now we must reset FPSCR from FPDSCR_NS, and clear
+         * CONTROL.SFPA; so we'll end the TB here.
+         */
+        tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
+        store_cpu_field(control, v7m.control[M_REG_S]);
+        fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
+        gen_helper_vfp_set_fpscr(cpu_env, fpscr);
+        tcg_temp_free_i32(fpscr);
+        lookup_tb = true;
+        break;
+    }
+    case ARM_VFP_FPCXT_NS:
+    {
+        TCGv_i32 control, sfpa, fpscr, fpdscr, zero;
+        TCGLabel *lab_active = gen_new_label();
+
+        lookup_tb = true;
+
+        gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
+        /* fpInactive case: reads as FPDSCR_NS */
+        TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
+        storefn(s, opaque, tmp);
+        lab_end = gen_new_label();
+        tcg_gen_br(lab_end);
+
+        gen_set_label(lab_active);
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS
+         * reads the same as FPCXT_S.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
+        gen_preserve_fp_state(s);
+        tmp = tcg_temp_new_i32();
+        sfpa = tcg_temp_new_i32();
+        fpscr = tcg_temp_new_i32();
+        gen_helper_vfp_get_fpscr(fpscr, cpu_env);
+        tcg_gen_andi_i32(tmp, fpscr, ~FPCR_NZCV_MASK);
+        control = load_cpu_field(v7m.control[M_REG_S]);
+        tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
+        tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
+        tcg_gen_or_i32(tmp, tmp, sfpa);
+        tcg_temp_free_i32(control);
+        /* Store result before updating FPSCR, in case it faults */
+        storefn(s, opaque, tmp);
+        /* If SFPA is zero then set FPSCR from FPDSCR_NS */
+        fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
+        zero = tcg_const_i32(0);
+        tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr);
+        gen_helper_vfp_set_fpscr(cpu_env, fpscr);
+        tcg_temp_free_i32(zero);
+        tcg_temp_free_i32(sfpa);
+        tcg_temp_free_i32(fpdscr);
+        tcg_temp_free_i32(fpscr);
+        break;
+    }
+    case ARM_VFP_VPR:
+        /* Behaves as NOP if not privileged */
+        if (IS_USER(s)) {
+            break;
+        }
+        tmp = load_cpu_field(v7m.vpr);
+        storefn(s, opaque, tmp);
+        break;
+    case ARM_VFP_P0:
+        tmp = load_cpu_field(v7m.vpr);
+        tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
+        storefn(s, opaque, tmp);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    if (lab_end) {
+        gen_set_label(lab_end);
+    }
+    if (lookup_tb) {
+        gen_lookup_tb(s);
+    }
+    return true;
+}
+
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
+{
+    arg_VMSR_VMRS *a = opaque;
+
+    if (a->rt == 15) {
+        /* Set the 4 flag bits in the CPSR */
+        gen_set_nzcv(value);
+        tcg_temp_free_i32(value);
+    } else {
+        store_reg(s, a->rt, value);
+    }
+}
+
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
+{
+    arg_VMSR_VMRS *a = opaque;
+
+    return load_reg(s, a->rt);
+}
+
+static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
+{
+    /*
+     * Accesses to R15 are UNPREDICTABLE; we choose to undef.
+     * FPSCR -> r15 is a special case which writes to the PSR flags;
+     * set a->reg to a special value to tell gen_M_fp_sysreg_read()
+     * we only care about the top 4 bits of FPSCR there.
+     */
+    if (a->rt == 15) {
+        if (a->l && a->reg == ARM_VFP_FPSCR) {
+            a->reg = QEMU_VFP_FPSCR_NZCV;
+        } else {
+            return false;
+        }
+    }
+
+    if (a->l) {
+        /* VMRS, move FP system register to gp register */
+        return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
+    } else {
+        /* VMSR, move gp register to FP system register */
+        return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
+    }
+}
+
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
+{
+    arg_vldr_sysreg *a = opaque;
+    uint32_t offset = a->imm;
+    TCGv_i32 addr;
+
+    if (!a->a) {
+        offset = -offset;
+    }
+
+    addr = load_reg(s, a->rn);
+    if (a->p) {
+        tcg_gen_addi_i32(addr, addr, offset);
+    }
+
+    if (s->v8m_stackcheck && a->rn == 13 && a->w) {
+        gen_helper_v8m_stackcheck(cpu_env, addr);
+    }
+
+    gen_aa32_st_i32(s, value, addr, get_mem_index(s),
+                    MO_UL | MO_ALIGN | s->be_data);
+    tcg_temp_free_i32(value);
+
+    if (a->w) {
+        /* writeback */
+        if (!a->p) {
+            tcg_gen_addi_i32(addr, addr, offset);
+        }
+        store_reg(s, a->rn, addr);
+    } else {
+        tcg_temp_free_i32(addr);
+    }
+}
+
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
+{
+    arg_vldr_sysreg *a = opaque;
+    uint32_t offset = a->imm;
+    TCGv_i32 addr;
+    TCGv_i32 value = tcg_temp_new_i32();
+
+    if (!a->a) {
+        offset = -offset;
+    }
+
+    addr = load_reg(s, a->rn);
+    if (a->p) {
+        tcg_gen_addi_i32(addr, addr, offset);
+    }
+
+    if (s->v8m_stackcheck && a->rn == 13 && a->w) {
+        gen_helper_v8m_stackcheck(cpu_env, addr);
+    }
+
+    gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
+                    MO_UL | MO_ALIGN | s->be_data);
+
+    if (a->w) {
+        /* writeback */
+        if (!a->p) {
+            tcg_gen_addi_i32(addr, addr, offset);
+        }
+        store_reg(s, a->rn, addr);
+    } else {
+        tcg_temp_free_i32(addr);
+    }
+    return value;
+}
+
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
+{
+    if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+        return false;
+    }
+    if (a->rn == 15) {
+        return false;
+    }
+    return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
+}
+
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
+{
+    if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+        return false;
+    }
+    if (a->rn == 15) {
+        return false;
+    }
+    return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
+}
+
 static bool trans_NOCP(DisasContext *s, arg_nocp *a)
 {
     /*
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
  * Generate code for M-profile lazy FP state preservation if needed;
  * this corresponds to the pseudocode PreserveFPState() function.
  */
-static void gen_preserve_fp_state(DisasContext *s)
+void gen_preserve_fp_state(DisasContext *s)
 {
     if (s->v7m_lspact) {
         /*
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
     return true;
 }

-/*
- * M-profile provides two different sets of instructions that can
- * access floating point system registers: VMSR/VMRS (which move
- * to/from a general purpose register) and VLDR/VSTR sysreg (which
- * move directly to/from memory). In some cases there are also side
- * effects which must happen after any write to memory (which could
- * cause an exception). So we implement the common logic for the
- * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
- * which take pointers to callback functions which will perform the
- * actual "read/write general purpose register" and "read/write
- * memory" operations.
- */
-
-/*
- * Emit code to store the sysreg to its final destination; frees the
- * TCG temp 'value' it is passed.
- */
-typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
-/*
- * Emit code to load the value to be copied to the sysreg; returns
- * a new TCG temporary
- */
-typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
-
-/* Common decode/access checks for fp sysreg read/write */
-typedef enum FPSysRegCheckResult {
-    FPSysRegCheckFailed, /* caller should return false */
-    FPSysRegCheckDone, /* caller should return true */
-    FPSysRegCheckContinue, /* caller should continue generating code */
-} FPSysRegCheckResult;
-
-static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
-{
-    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
-        return FPSysRegCheckFailed;
-    }
-
-    switch (regno) {
-    case ARM_VFP_FPSCR:
-    case QEMU_VFP_FPSCR_NZCV:
-        break;
-    case ARM_VFP_FPSCR_NZCVQC:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return FPSysRegCheckFailed;
-        }
-        break;
-    case ARM_VFP_FPCXT_S:
-    case ARM_VFP_FPCXT_NS:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
-            return FPSysRegCheckFailed;
-        }
-        if (!s->v8m_secure) {
-            return FPSysRegCheckFailed;
-        }
-        break;
-    case ARM_VFP_VPR:
-    case ARM_VFP_P0:
-        if (!dc_isar_feature(aa32_mve, s)) {
-            return FPSysRegCheckFailed;
-        }
-        break;
-    default:
-        return FPSysRegCheckFailed;
-    }
-
-    /*
-     * FPCXT_NS is a special case: it has specific handling for
-     * "current FP state is inactive", and must do the PreserveFPState()
-     * but not the usual full set of actions done by ExecuteFPCheck().
-     * So we don't call vfp_access_check() and the callers must handle this.
-     */
-    if (regno != ARM_VFP_FPCXT_NS && !vfp_access_check(s)) {
-        return FPSysRegCheckDone;
-    }
-    return FPSysRegCheckContinue;
-}
-
-static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
-                                  TCGLabel *label)
-{
-    /*
-     * FPCXT_NS is a special case: it has specific handling for
-     * "current FP state is inactive", and must do the PreserveFPState()
-     * but not the usual full set of actions done by ExecuteFPCheck().
-     * We don't have a TB flag that matches the fpInactive check, so we
-     * do it at runtime as we don't expect FPCXT_NS accesses to be frequent.
-     *
-     * Emit code that checks fpInactive and does a conditional
-     * branch to label based on it:
-     *  if cond is TCG_COND_NE then branch if fpInactive != 0 (ie if inactive)
-     *  if cond is TCG_COND_EQ then branch if fpInactive == 0 (ie if active)
-     */
-    assert(cond == TCG_COND_EQ || cond == TCG_COND_NE);
-
-    /* fpInactive = FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0 */
-    TCGv_i32 aspen, fpca;
-    aspen = load_cpu_field(v7m.fpccr[M_REG_NS]);
-    fpca = load_cpu_field(v7m.control[M_REG_S]);
-    tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
-    tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
-    tcg_gen_andi_i32(fpca, fpca, R_V7M_CONTROL_FPCA_MASK);
-    tcg_gen_or_i32(fpca, fpca, aspen);
-    tcg_gen_brcondi_i32(tcg_invert_cond(cond), fpca, 0, label);
769
- tcg_temp_free_i32(aspen);
770
- tcg_temp_free_i32(fpca);
771
-}
772
-
773
-static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
774
- fp_sysreg_loadfn *loadfn,
775
- void *opaque)
776
-{
777
- /* Do a write to an M-profile floating point system register */
778
- TCGv_i32 tmp;
779
- TCGLabel *lab_end = NULL;
780
-
781
- switch (fp_sysreg_checks(s, regno)) {
782
- case FPSysRegCheckFailed:
783
- return false;
784
- case FPSysRegCheckDone:
785
- return true;
786
- case FPSysRegCheckContinue:
787
- break;
788
- }
789
-
790
- switch (regno) {
791
- case ARM_VFP_FPSCR:
792
- tmp = loadfn(s, opaque);
793
- gen_helper_vfp_set_fpscr(cpu_env, tmp);
794
- tcg_temp_free_i32(tmp);
795
- gen_lookup_tb(s);
796
- break;
797
- case ARM_VFP_FPSCR_NZCVQC:
798
- {
799
- TCGv_i32 fpscr;
800
- tmp = loadfn(s, opaque);
801
- if (dc_isar_feature(aa32_mve, s)) {
802
- /* QC is only present for MVE; otherwise RES0 */
803
- TCGv_i32 qc = tcg_temp_new_i32();
804
- tcg_gen_andi_i32(qc, tmp, FPCR_QC);
805
- /*
806
- * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
807
- * here writing the same value into all elements is simplest.
808
- */
809
- tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
810
- 16, 16, qc);
811
- }
812
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
813
- fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
814
- tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
815
- tcg_gen_or_i32(fpscr, fpscr, tmp);
816
- store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
817
- tcg_temp_free_i32(tmp);
818
- break;
819
- }
820
- case ARM_VFP_FPCXT_NS:
821
- lab_end = gen_new_label();
822
- /* fpInactive case: write is a NOP, so branch to end */
823
- gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
824
- /*
825
- * !fpInactive: if FPU disabled, take NOCP exception;
826
- * otherwise PreserveFPState(), and then FPCXT_NS writes
827
- * behave the same as FPCXT_S writes.
828
- */
829
- if (s->fp_excp_el) {
830
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
831
- syn_uncategorized(), s->fp_excp_el);
832
- /*
833
- * This was only a conditional exception, so override
834
- * gen_exception_insn()'s default to DISAS_NORETURN
835
- */
836
- s->base.is_jmp = DISAS_NEXT;
837
- break;
838
- }
839
- gen_preserve_fp_state(s);
840
- /* fall through */
841
- case ARM_VFP_FPCXT_S:
842
- {
843
- TCGv_i32 sfpa, control;
844
- /*
845
- * Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
846
- * bits [27:0] from value and zeroes bits [31:28].
847
- */
848
- tmp = loadfn(s, opaque);
849
- sfpa = tcg_temp_new_i32();
850
- tcg_gen_shri_i32(sfpa, tmp, 31);
851
- control = load_cpu_field(v7m.control[M_REG_S]);
852
- tcg_gen_deposit_i32(control, control, sfpa,
853
- R_V7M_CONTROL_SFPA_SHIFT, 1);
854
- store_cpu_field(control, v7m.control[M_REG_S]);
855
- tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
856
- gen_helper_vfp_set_fpscr(cpu_env, tmp);
857
- tcg_temp_free_i32(tmp);
858
- tcg_temp_free_i32(sfpa);
859
- break;
860
- }
861
- case ARM_VFP_VPR:
862
- /* Behaves as NOP if not privileged */
863
- if (IS_USER(s)) {
864
- break;
865
- }
866
- tmp = loadfn(s, opaque);
867
- store_cpu_field(tmp, v7m.vpr);
868
- break;
869
- case ARM_VFP_P0:
870
- {
871
- TCGv_i32 vpr;
872
- tmp = loadfn(s, opaque);
873
- vpr = load_cpu_field(v7m.vpr);
874
- tcg_gen_deposit_i32(vpr, vpr, tmp,
875
- R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
876
- store_cpu_field(vpr, v7m.vpr);
877
- tcg_temp_free_i32(tmp);
878
- break;
879
- }
880
- default:
881
- g_assert_not_reached();
882
- }
883
- if (lab_end) {
884
- gen_set_label(lab_end);
885
- }
886
- return true;
887
-}
888
-
889
-static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
890
- fp_sysreg_storefn *storefn,
891
- void *opaque)
892
-{
893
- /* Do a read from an M-profile floating point system register */
894
- TCGv_i32 tmp;
895
- TCGLabel *lab_end = NULL;
896
- bool lookup_tb = false;
897
-
898
- switch (fp_sysreg_checks(s, regno)) {
899
- case FPSysRegCheckFailed:
900
- return false;
901
- case FPSysRegCheckDone:
902
- return true;
903
- case FPSysRegCheckContinue:
904
- break;
905
- }
906
-
907
- if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
908
- /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
909
- regno = QEMU_VFP_FPSCR_NZCV;
910
- }
911
-
912
- switch (regno) {
913
- case ARM_VFP_FPSCR:
914
- tmp = tcg_temp_new_i32();
915
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
916
- storefn(s, opaque, tmp);
917
- break;
918
- case ARM_VFP_FPSCR_NZCVQC:
919
- tmp = tcg_temp_new_i32();
920
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
921
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
922
- storefn(s, opaque, tmp);
923
- break;
924
- case QEMU_VFP_FPSCR_NZCV:
925
- /*
926
- * Read just NZCV; this is a special case to avoid the
927
- * helper call for the "VMRS to CPSR.NZCV" insn.
928
- */
929
- tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
930
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
931
- storefn(s, opaque, tmp);
932
- break;
933
- case ARM_VFP_FPCXT_S:
934
- {
935
- TCGv_i32 control, sfpa, fpscr;
936
- /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
937
- tmp = tcg_temp_new_i32();
938
- sfpa = tcg_temp_new_i32();
939
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
940
- tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
941
- control = load_cpu_field(v7m.control[M_REG_S]);
942
- tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
943
- tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
944
- tcg_gen_or_i32(tmp, tmp, sfpa);
945
- tcg_temp_free_i32(sfpa);
946
- /*
947
- * Store result before updating FPSCR etc, in case
948
- * it is a memory write which causes an exception.
949
- */
950
- storefn(s, opaque, tmp);
951
- /*
952
- * Now we must reset FPSCR from FPDSCR_NS, and clear
953
- * CONTROL.SFPA; so we'll end the TB here.
954
- */
955
- tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
956
- store_cpu_field(control, v7m.control[M_REG_S]);
957
- fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
958
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
959
- tcg_temp_free_i32(fpscr);
960
- lookup_tb = true;
961
- break;
962
- }
963
- case ARM_VFP_FPCXT_NS:
964
- {
965
- TCGv_i32 control, sfpa, fpscr, fpdscr, zero;
966
- TCGLabel *lab_active = gen_new_label();
967
-
968
- lookup_tb = true;
969
-
970
- gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
971
- /* fpInactive case: reads as FPDSCR_NS */
972
- TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
973
- storefn(s, opaque, tmp);
974
- lab_end = gen_new_label();
975
- tcg_gen_br(lab_end);
976
-
977
- gen_set_label(lab_active);
978
- /*
979
- * !fpInactive: if FPU disabled, take NOCP exception;
980
- * otherwise PreserveFPState(), and then FPCXT_NS
981
- * reads the same as FPCXT_S.
982
- */
983
- if (s->fp_excp_el) {
984
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
985
- syn_uncategorized(), s->fp_excp_el);
986
- /*
987
- * This was only a conditional exception, so override
988
- * gen_exception_insn()'s default to DISAS_NORETURN
989
- */
990
- s->base.is_jmp = DISAS_NEXT;
991
- break;
992
- }
993
- gen_preserve_fp_state(s);
994
- tmp = tcg_temp_new_i32();
995
- sfpa = tcg_temp_new_i32();
996
- fpscr = tcg_temp_new_i32();
997
- gen_helper_vfp_get_fpscr(fpscr, cpu_env);
998
- tcg_gen_andi_i32(tmp, fpscr, ~FPCR_NZCV_MASK);
999
- control = load_cpu_field(v7m.control[M_REG_S]);
1000
- tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
1001
- tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
1002
- tcg_gen_or_i32(tmp, tmp, sfpa);
1003
- tcg_temp_free_i32(control);
1004
- /* Store result before updating FPSCR, in case it faults */
1005
- storefn(s, opaque, tmp);
1006
- /* If SFPA is zero then set FPSCR from FPDSCR_NS */
1007
- fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
1008
- zero = tcg_const_i32(0);
1009
- tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr);
1010
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
1011
- tcg_temp_free_i32(zero);
1012
- tcg_temp_free_i32(sfpa);
1013
- tcg_temp_free_i32(fpdscr);
1014
- tcg_temp_free_i32(fpscr);
1015
- break;
1016
- }
1017
- case ARM_VFP_VPR:
1018
- /* Behaves as NOP if not privileged */
1019
- if (IS_USER(s)) {
1020
- break;
1021
- }
1022
- tmp = load_cpu_field(v7m.vpr);
1023
- storefn(s, opaque, tmp);
1024
- break;
1025
- case ARM_VFP_P0:
1026
- tmp = load_cpu_field(v7m.vpr);
1027
- tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
1028
- storefn(s, opaque, tmp);
1029
- break;
1030
- default:
1031
- g_assert_not_reached();
1032
- }
1033
-
1034
- if (lab_end) {
1035
- gen_set_label(lab_end);
1036
- }
1037
- if (lookup_tb) {
1038
- gen_lookup_tb(s);
1039
- }
1040
- return true;
1041
-}
1042
-
1043
-static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
1044
-{
1045
- arg_VMSR_VMRS *a = opaque;
1046
-
1047
- if (a->rt == 15) {
1048
- /* Set the 4 flag bits in the CPSR */
1049
- gen_set_nzcv(value);
1050
- tcg_temp_free_i32(value);
1051
- } else {
1052
- store_reg(s, a->rt, value);
1053
- }
1054
-}
1055
-
1056
-static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
1057
-{
1058
- arg_VMSR_VMRS *a = opaque;
1059
-
1060
- return load_reg(s, a->rt);
1061
-}
1062
-
1063
-static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1064
-{
1065
- /*
1066
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
1067
- * FPSCR -> r15 is a special case which writes to the PSR flags;
1068
- * set a->reg to a special value to tell gen_M_fp_sysreg_read()
1069
- * we only care about the top 4 bits of FPSCR there.
1070
- */
1071
- if (a->rt == 15) {
1072
- if (a->l && a->reg == ARM_VFP_FPSCR) {
1073
- a->reg = QEMU_VFP_FPSCR_NZCV;
1074
- } else {
1075
- return false;
1076
- }
1077
- }
1078
-
1079
- if (a->l) {
1080
- /* VMRS, move FP system register to gp register */
1081
- return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
1082
- } else {
1083
- /* VMSR, move gp register to FP system register */
1084
- return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
1085
- }
1086
-}
1087
-
1088
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1089
{
1090
TCGv_i32 tmp;
1091
bool ignore_vfp_enabled = false;
1092
1093
if (arm_dc_feature(s, ARM_FEATURE_M)) {
1094
- return gen_M_VMSR_VMRS(s, a);
1095
+ /* M profile version was already handled in m-nocp.decode */
1096
+ return false;
1097
}
1098
1099
if (!dc_isar_feature(aa32_fpsp_v2, s)) {
1100
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1101
return true;
101
}
1102
}
102
1103
103
-/* Advanced SIMD two registers and a scalar extension.
1104
-static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
104
- * 31 24 23 22 20 16 12 11 10 9 8 3 0
105
- * +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
106
- * | 1 1 1 1 1 1 1 0 | o1 | D | o2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
107
- * +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
108
- *
109
- */
110
-
111
-static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
112
-{
1105
-{
113
- gen_helper_gvec_3 *fn_gvec = NULL;
1106
- arg_vldr_sysreg *a = opaque;
114
- gen_helper_gvec_3_ptr *fn_gvec_ptr = NULL;
1107
- uint32_t offset = a->imm;
115
- int rd, rn, rm, opr_sz, data;
1108
- TCGv_i32 addr;
116
- int off_rn, off_rm;
1109
-
117
- bool is_long = false, q = extract32(insn, 6, 1);
1110
- if (!a->a) {
118
- bool ptr_is_env = false;
1111
- offset = -offset;
119
-
1112
- }
120
- if ((insn & 0xffa00f10) == 0xfe000810) {
1113
-
121
- /* VFM[AS]L -- 1111 1110 0.0S .... .... 1000 .Q.1 .... */
1114
- addr = load_reg(s, a->rn);
122
- int is_s = extract32(insn, 20, 1);
1115
- if (a->p) {
123
- int vm20 = extract32(insn, 0, 3);
1116
- tcg_gen_addi_i32(addr, addr, offset);
124
- int vm3 = extract32(insn, 3, 1);
1117
- }
125
- int m = extract32(insn, 5, 1);
1118
-
126
- int index;
1119
- if (s->v8m_stackcheck && a->rn == 13 && a->w) {
127
-
1120
- gen_helper_v8m_stackcheck(cpu_env, addr);
128
- if (!dc_isar_feature(aa32_fhm, s)) {
1121
- }
129
- return 1;
1122
-
1123
- gen_aa32_st_i32(s, value, addr, get_mem_index(s),
1124
- MO_UL | MO_ALIGN | s->be_data);
1125
- tcg_temp_free_i32(value);
1126
-
1127
- if (a->w) {
1128
- /* writeback */
1129
- if (!a->p) {
1130
- tcg_gen_addi_i32(addr, addr, offset);
130
- }
1131
- }
131
- if (q) {
1132
- store_reg(s, a->rn, addr);
132
- rm = vm20;
1133
- } else {
133
- index = m * 2 + vm3;
1134
- tcg_temp_free_i32(addr);
134
- } else {
1135
- }
135
- rm = vm20 * 2 + m;
1136
-}
136
- index = vm3;
1137
-
1138
-static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
1139
-{
1140
- arg_vldr_sysreg *a = opaque;
1141
- uint32_t offset = a->imm;
1142
- TCGv_i32 addr;
1143
- TCGv_i32 value = tcg_temp_new_i32();
1144
-
1145
- if (!a->a) {
1146
- offset = -offset;
1147
- }
1148
-
1149
- addr = load_reg(s, a->rn);
1150
- if (a->p) {
1151
- tcg_gen_addi_i32(addr, addr, offset);
1152
- }
1153
-
1154
- if (s->v8m_stackcheck && a->rn == 13 && a->w) {
1155
- gen_helper_v8m_stackcheck(cpu_env, addr);
1156
- }
1157
-
1158
- gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
1159
- MO_UL | MO_ALIGN | s->be_data);
1160
-
1161
- if (a->w) {
1162
- /* writeback */
1163
- if (!a->p) {
1164
- tcg_gen_addi_i32(addr, addr, offset);
137
- }
1165
- }
138
- is_long = true;
1166
- store_reg(s, a->rn, addr);
139
- data = (index << 2) | is_s; /* is_2 == 0 */
140
- fn_gvec_ptr = gen_helper_gvec_fmlal_idx_a32;
141
- ptr_is_env = true;
142
- } else {
1167
- } else {
143
- return 1;
1168
- tcg_temp_free_i32(addr);
144
- }
1169
- }
145
-
1170
- return value;
146
- VFP_DREG_D(rd, insn);
147
- if (rd & q) {
148
- return 1;
149
- }
150
- if (q || !is_long) {
151
- VFP_DREG_N(rn, insn);
152
- if (rn & q & !is_long) {
153
- return 1;
154
- }
155
- off_rn = vfp_reg_offset(1, rn);
156
- off_rm = vfp_reg_offset(1, rm);
157
- } else {
158
- rn = VFP_SREG_N(insn);
159
- off_rn = vfp_reg_offset(0, rn);
160
- off_rm = vfp_reg_offset(0, rm);
161
- }
162
- if (s->fp_excp_el) {
163
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
164
- syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
165
- return 0;
166
- }
167
- if (!s->vfp_enabled) {
168
- return 1;
169
- }
170
-
171
- opr_sz = (1 + q) * 8;
172
- if (fn_gvec_ptr) {
173
- TCGv_ptr ptr;
174
- if (ptr_is_env) {
175
- ptr = cpu_env;
176
- } else {
177
- ptr = get_fpstatus_ptr(1);
178
- }
179
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd), off_rn, off_rm, ptr,
180
- opr_sz, opr_sz, data, fn_gvec_ptr);
181
- if (!ptr_is_env) {
182
- tcg_temp_free_ptr(ptr);
183
- }
184
- } else {
185
- tcg_gen_gvec_3_ool(vfp_reg_offset(1, rd), off_rn, off_rm,
186
- opr_sz, opr_sz, data, fn_gvec);
187
- }
188
- return 0;
189
-}
1171
-}
190
-
1172
-
191
static int disas_coproc_insn(DisasContext *s, uint32_t insn)
1173
-static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
1174
-{
1175
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
1176
- return false;
1177
- }
1178
- if (a->rn == 15) {
1179
- return false;
1180
- }
1181
- return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
1182
-}
1183
-
1184
-static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
1185
-{
1186
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
1187
- return false;
1188
- }
1189
- if (a->rn == 15) {
1190
- return false;
1191
- }
1192
- return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
1193
-}
1194
1195
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
192
{
1196
{
193
int cpnum, is64, crn, crm, opc1, opc2, isread, rt, rt2;
194
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
195
}
196
}
197
}
198
- } else if ((insn & 0x0f000a00) == 0x0e000800
199
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
200
- if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
201
- goto illegal_op;
202
- }
203
- return;
204
}
205
goto illegal_op;
206
}
207
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
208
}
209
break;
210
}
211
- if ((insn & 0xff000a00) == 0xfe000800
212
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
213
- /* The Thumb2 and ARM encodings are identical. */
214
- if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
215
- goto illegal_op;
216
- }
217
- } else if (((insn >> 24) & 3) == 3) {
218
+ if (((insn >> 24) & 3) == 3) {
219
/* Translate into the equivalent ARM encoding. */
220
insn = (insn & 0xe2ffffff) | ((insn & (1 << 28)) >> 4) | (1 << 28);
221
if (disas_neon_data_insn(s, insn)) {
222
--
1197
--
223
2.20.1
1198
2.20.1
224
1199
225
1200
diff view generated by jsdifflib
New patch
1
1
A few subcases of VLDR/VSTR sysreg succeed but do not perform a
2
memory access:
3
* VSTR of VPR when unprivileged
4
* VLDR to VPR when unprivileged
5
* VLDR to FPCXT_NS when fpInactive
6
7
In these cases, even though we don't do the memory access we should
8
still update the base register and perform the stack limit check if
9
the insn's addressing mode specifies writeback. Our implementation
10
failed to do this, because we handle these side-effects inside the
11
memory_to_fp_sysreg() and fp_sysreg_to_memory() callback functions,
12
which are only called if there's something to load or store.
13
14
Fix this by adding an extra argument to the callbacks which is set to
15
true to actually perform the access and false to only do side effects
16
like writeback, and calling the callback with do_access = false
17
for the three cases listed above.
18
19
This produces slightly suboptimal code for the case of a write
20
to FPCXT_NS when the FPU is inactive and the insn didn't have
21
side effects (ie no writeback, or via VMSR), in which case we'll
22
generate a conditional branch over an unconditional branch.
23
But this doesn't seem to be important enough to merit requiring
24
the callback to report back whether it generated any code or not.
25
26
Cc: qemu-stable@nongnu.org
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Message-id: 20210618141019.10671-5-peter.maydell@linaro.org
30
---
31
target/arm/translate-m-nocp.c | 102 ++++++++++++++++++++++++----------
32
1 file changed, 72 insertions(+), 30 deletions(-)
33
34
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-m-nocp.c
37
+++ b/target/arm/translate-m-nocp.c
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
39
40
/*
41
* Emit code to store the sysreg to its final destination; frees the
42
- * TCG temp 'value' it is passed.
43
+ * TCG temp 'value' it is passed. do_access is true to do the store,
44
+ * and false to skip it and only perform side-effects like base
45
+ * register writeback.
46
*/
47
-typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
48
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value,
49
+ bool do_access);
50
/*
51
* Emit code to load the value to be copied to the sysreg; returns
52
- * a new TCG temporary
53
+ * a new TCG temporary. do_access is true to do the store,
54
+ * and false to skip it and only perform side-effects like base
55
+ * register writeback.
56
*/
57
-typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
58
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque,
59
+ bool do_access);
60
61
/* Common decode/access checks for fp sysreg read/write */
62
typedef enum FPSysRegCheckResult {
63
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
64
65
switch (regno) {
66
case ARM_VFP_FPSCR:
67
- tmp = loadfn(s, opaque);
68
+ tmp = loadfn(s, opaque, true);
69
gen_helper_vfp_set_fpscr(cpu_env, tmp);
70
tcg_temp_free_i32(tmp);
71
gen_lookup_tb(s);
72
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
73
case ARM_VFP_FPSCR_NZCVQC:
74
{
75
TCGv_i32 fpscr;
76
- tmp = loadfn(s, opaque);
77
+ tmp = loadfn(s, opaque, true);
78
if (dc_isar_feature(aa32_mve, s)) {
79
/* QC is only present for MVE; otherwise RES0 */
80
TCGv_i32 qc = tcg_temp_new_i32();
81
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
82
break;
83
}
84
case ARM_VFP_FPCXT_NS:
85
+ {
86
+ TCGLabel *lab_active = gen_new_label();
87
+
88
lab_end = gen_new_label();
89
- /* fpInactive case: write is a NOP, so branch to end */
90
- gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
91
+ gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
92
+ /*
93
+ * fpInactive case: write is a NOP, so only do side effects
94
+ * like register writeback before we branch to end
95
+ */
96
+ loadfn(s, opaque, false);
97
+ tcg_gen_br(lab_end);
98
+
99
+ gen_set_label(lab_active);
100
/*
101
* !fpInactive: if FPU disabled, take NOCP exception;
102
* otherwise PreserveFPState(), and then FPCXT_NS writes
103
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
104
break;
105
}
106
gen_preserve_fp_state(s);
107
- /* fall through */
108
+ }
109
+ /* fall through */
110
case ARM_VFP_FPCXT_S:
111
{
112
TCGv_i32 sfpa, control;
113
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
114
* Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
115
* bits [27:0] from value and zeroes bits [31:28].
116
*/
117
- tmp = loadfn(s, opaque);
118
+ tmp = loadfn(s, opaque, true);
119
sfpa = tcg_temp_new_i32();
120
tcg_gen_shri_i32(sfpa, tmp, 31);
121
control = load_cpu_field(v7m.control[M_REG_S]);
122
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
123
case ARM_VFP_VPR:
124
/* Behaves as NOP if not privileged */
125
if (IS_USER(s)) {
126
+ loadfn(s, opaque, false);
127
break;
128
}
129
- tmp = loadfn(s, opaque);
130
+ tmp = loadfn(s, opaque, true);
131
store_cpu_field(tmp, v7m.vpr);
132
break;
133
case ARM_VFP_P0:
134
{
135
TCGv_i32 vpr;
136
- tmp = loadfn(s, opaque);
137
+ tmp = loadfn(s, opaque, true);
138
vpr = load_cpu_field(v7m.vpr);
139
tcg_gen_deposit_i32(vpr, vpr, tmp,
140
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
141
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
142
case ARM_VFP_FPSCR:
143
tmp = tcg_temp_new_i32();
144
gen_helper_vfp_get_fpscr(tmp, cpu_env);
145
- storefn(s, opaque, tmp);
146
+ storefn(s, opaque, tmp, true);
147
break;
148
case ARM_VFP_FPSCR_NZCVQC:
149
tmp = tcg_temp_new_i32();
150
gen_helper_vfp_get_fpscr(tmp, cpu_env);
151
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
152
- storefn(s, opaque, tmp);
153
+ storefn(s, opaque, tmp, true);
154
break;
155
case QEMU_VFP_FPSCR_NZCV:
156
/*
157
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
158
*/
159
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
160
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
161
- storefn(s, opaque, tmp);
162
+ storefn(s, opaque, tmp, true);
163
break;
164
case ARM_VFP_FPCXT_S:
165
{
166
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
167
* Store result before updating FPSCR etc, in case
168
* it is a memory write which causes an exception.
169
*/
170
- storefn(s, opaque, tmp);
171
+ storefn(s, opaque, tmp, true);
172
/*
173
* Now we must reset FPSCR from FPDSCR_NS, and clear
174
* CONTROL.SFPA; so we'll end the TB here.
175
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
176
gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
177
/* fpInactive case: reads as FPDSCR_NS */
178
TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
179
- storefn(s, opaque, tmp);
180
+ storefn(s, opaque, tmp, true);
181
lab_end = gen_new_label();
182
tcg_gen_br(lab_end);
183
184
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
185
tcg_gen_or_i32(tmp, tmp, sfpa);
186
tcg_temp_free_i32(control);
187
/* Store result before updating FPSCR, in case it faults */
188
- storefn(s, opaque, tmp);
189
+ storefn(s, opaque, tmp, true);
190
/* If SFPA is zero then set FPSCR from FPDSCR_NS */
191
fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
192
zero = tcg_const_i32(0);
193
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
194
case ARM_VFP_VPR:
195
/* Behaves as NOP if not privileged */
196
if (IS_USER(s)) {
197
+ storefn(s, opaque, NULL, false);
198
break;
199
}
200
tmp = load_cpu_field(v7m.vpr);
201
- storefn(s, opaque, tmp);
202
+ storefn(s, opaque, tmp, true);
203
break;
204
case ARM_VFP_P0:
205
tmp = load_cpu_field(v7m.vpr);
206
tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
207
- storefn(s, opaque, tmp);
208
+ storefn(s, opaque, tmp, true);
209
break;
210
default:
211
g_assert_not_reached();
212
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
213
return true;
214
}
215
216
-static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
217
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value,
218
+ bool do_access)
219
{
220
arg_VMSR_VMRS *a = opaque;
221
222
+ if (!do_access) {
223
+ return;
224
+ }
225
+
226
if (a->rt == 15) {
227
/* Set the 4 flag bits in the CPSR */
228
gen_set_nzcv(value);
229
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
230
}
231
}
232
233
-static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
234
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque, bool do_access)
235
{
236
arg_VMSR_VMRS *a = opaque;
237
238
+ if (!do_access) {
239
+ return NULL;
240
+ }
241
return load_reg(s, a->rt);
242
}
243
244
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
245
}
246
}
247
248
-static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
249
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value,
250
+ bool do_access)
251
{
252
arg_vldr_sysreg *a = opaque;
253
uint32_t offset = a->imm;
254
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
255
offset = -offset;
256
}
257
258
+ if (!do_access && !a->w) {
259
+ return;
260
+ }
261
+
262
addr = load_reg(s, a->rn);
263
if (a->p) {
264
tcg_gen_addi_i32(addr, addr, offset);
265
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
266
gen_helper_v8m_stackcheck(cpu_env, addr);
267
}
268
269
- gen_aa32_st_i32(s, value, addr, get_mem_index(s),
270
- MO_UL | MO_ALIGN | s->be_data);
271
- tcg_temp_free_i32(value);
272
+ if (do_access) {
273
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
274
+ MO_UL | MO_ALIGN | s->be_data);
275
+ tcg_temp_free_i32(value);
276
+ }
277
278
if (a->w) {
279
/* writeback */
280
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
281
}
282
}
283
284
-static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
285
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque,
286
+ bool do_access)
287
{
288
arg_vldr_sysreg *a = opaque;
289
uint32_t offset = a->imm;
290
TCGv_i32 addr;
291
- TCGv_i32 value = tcg_temp_new_i32();
292
+ TCGv_i32 value = NULL;
293
294
if (!a->a) {
295
offset = -offset;
296
}
297
298
+ if (!do_access && !a->w) {
299
+ return NULL;
300
+ }
301
+
302
addr = load_reg(s, a->rn);
303
if (a->p) {
304
tcg_gen_addi_i32(addr, addr, offset);
305
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
306
gen_helper_v8m_stackcheck(cpu_env, addr);
307
}
308
309
- gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
310
- MO_UL | MO_ALIGN | s->be_data);
311
+ if (do_access) {
312
+ value = tcg_temp_new_i32();
313
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
314
+ MO_UL | MO_ALIGN | s->be_data);
315
+ }
316
317
if (a->w) {
318
/* writeback */
319
--
320
2.20.1
321
322
diff view generated by jsdifflib
1
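The do_access pattern in the patch above is easier to see outside TCG, so
here is a minimal stand-alone C sketch of the idea; all names (Cpu,
store_cb) are invented for illustration and this is not QEMU code. The
addressing side effects (base register writeback) always happen, while the
memory access itself is conditional on do_access:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t reg[16];   /* general-purpose registers */
        uint8_t mem[256];   /* tiny flat memory */
    } Cpu;

    /* Store 'value' at reg[rn] + offset; update rn if writeback is set */
    static void store_cb(Cpu *cpu, int rn, int offset, bool writeback,
                         uint32_t value, bool do_access)
    {
        uint32_t addr = cpu->reg[rn] + offset;

        if (do_access) {
            cpu->mem[addr & 0xff] = value & 0xff;
        }
        if (writeback) {
            /* side effect happens even when the access is skipped */
            cpu->reg[rn] = addr;
        }
    }

    int main(void)
    {
        Cpu cpu = { .reg = { [13] = 0x40 } };

        /* "Write is a NOP" case: no memory access, but writeback runs */
        store_cb(&cpu, 13, 4, true, 0xab, false);
        printf("r13 = 0x%x, mem[0x44] = 0x%x\n", cpu.reg[13], cpu.mem[0x44]);
        return 0;
    }

Running this prints r13 = 0x44 and mem[0x44] = 0x0, which is exactly the
previously-missing behaviour the patch fixes: the base register advances
even though nothing was stored.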
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Add support for SD.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-9-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h | 12 ++++++++++++
 hw/arm/xlnx-versal.c | 31 +++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@

 #include "hw/sysbus.h"
 #include "hw/arm/boot.h"
+#include "hw/sd/sdhci.h"
 #include "hw/intc/arm_gicv3.h"
 #include "hw/char/pl011.h"
 #include "hw/dma/xlnx-zdma.h"
@@ -XXX,XX +XXX,XX @@
 #define XLNX_VERSAL_NR_UARTS 2
 #define XLNX_VERSAL_NR_GEMS 2
 #define XLNX_VERSAL_NR_ADMAS 8
+#define XLNX_VERSAL_NR_SDS 2
 #define XLNX_VERSAL_NR_IRQS 192

 typedef struct Versal {
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
         } iou;
     } lpd;

+    /* The Platform Management Controller subsystem. */
+    struct {
+        struct {
+            SDHCIState sd[XLNX_VERSAL_NR_SDS];
+        } iou;
+    } pmc;
+
     struct {
         MemoryRegion *mr_ddr;
         uint32_t psci_conduit;
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
 #define VERSAL_GEM1_IRQ_0 58
 #define VERSAL_GEM1_WAKE_IRQ_0 59
 #define VERSAL_ADMA_IRQ_0 60
+#define VERSAL_SD0_IRQ_0 126

 /* Architecturally reserved IRQs suitable for virtualization. */
 #define VERSAL_RSVD_IRQ_FIRST 111
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
 #define MM_FPD_CRF 0xfd1a0000U
 #define MM_FPD_CRF_SIZE 0x140000

+#define MM_PMC_SD0 0xf1040000U
+#define MM_PMC_SD0_SIZE 0x10000
 #define MM_PMC_CRP 0xf1260000U
 #define MM_PMC_CRP_SIZE 0x10000
 #endif
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_admas(Versal *s, qemu_irq *pic)
     }
 }

+#define SDHCI_CAPABILITIES 0x280737ec6481 /* Same as on ZynqMP. */
+static void versal_create_sds(Versal *s, qemu_irq *pic)
+{
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(s->pmc.iou.sd); i++) {
+        DeviceState *dev;
+        MemoryRegion *mr;
+
+        sysbus_init_child_obj(OBJECT(s), "sd[*]",
+                              &s->pmc.iou.sd[i], sizeof(s->pmc.iou.sd[i]),
+                              TYPE_SYSBUS_SDHCI);
+        dev = DEVICE(&s->pmc.iou.sd[i]);
+
+        object_property_set_uint(OBJECT(dev),
+                                 3, "sd-spec-version", &error_fatal);
+        object_property_set_uint(OBJECT(dev), SDHCI_CAPABILITIES, "capareg",
+                                 &error_fatal);
+        object_property_set_uint(OBJECT(dev), UHS_I, "uhs", &error_fatal);
+        qdev_init_nofail(dev);
+
+        mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
+        memory_region_add_subregion(&s->mr_ps,
+                                    MM_PMC_SD0 + i * MM_PMC_SD0_SIZE, mr);
+
+        sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0,
+                           pic[VERSAL_SD0_IRQ_0 + i * 2]);
+    }
+}
+
 /* This takes the board allocated linear DDR memory and creates aliases
  * for each split DDR range/aperture on the Versal address map.
  */
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
     versal_create_uarts(s, pic);
     versal_create_gems(s, pic);
     versal_create_admas(s, pic);
+    versal_create_sds(s, pic);
     versal_map_ddr(s);
     versal_unimp(s);
--
2.20.1

Factor the code in full_vfp_access_check() which updates the
ownership of the FP context and creates a new FP context
out into its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-6-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 104 +++++++++++++++++++++----------------
 1 file changed, 58 insertions(+), 46 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ void gen_preserve_fp_state(DisasContext *s)
 }

+/*
+ * Generate code for M-profile FP context handling: update the
+ * ownership of the FP context, and create a new context if
+ * necessary. This corresponds to the parts of the pseudocode
+ * ExecuteFPCheck() after the inital PreserveFPState() call.
+ */
+static void gen_update_fp_context(DisasContext *s)
+{
+    /* Update ownership of FP context: set FPCCR.S to match current state */
+    if (s->v8m_fpccr_s_wrong) {
+        TCGv_i32 tmp;
+
+        tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
+        if (s->v8m_secure) {
+            tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
+        } else {
+            tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
+        }
+        store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
+        /* Don't need to do this for any further FP insns in this TB */
+        s->v8m_fpccr_s_wrong = false;
+    }
+
+    if (s->v7m_new_fp_ctxt_needed) {
+        /*
+         * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
+         * the FPSCR, and VPR.
+         */
+        TCGv_i32 control, fpscr;
+        uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
+
+        fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
+        gen_helper_vfp_set_fpscr(cpu_env, fpscr);
+        tcg_temp_free_i32(fpscr);
+        if (dc_isar_feature(aa32_mve, s)) {
+            TCGv_i32 z32 = tcg_const_i32(0);
+            store_cpu_field(z32, v7m.vpr);
+        }
+
+        /*
+         * We don't need to arrange to end the TB, because the only
+         * parts of FPSCR which we cache in the TB flags are the VECLEN
+         * and VECSTRIDE, and those don't exist for M-profile.
+         */
+
+        if (s->v8m_secure) {
+            bits |= R_V7M_CONTROL_SFPA_MASK;
+        }
+        control = load_cpu_field(v7m.control[M_REG_S]);
+        tcg_gen_ori_i32(control, control, bits);
+        store_cpu_field(control, v7m.control[M_REG_S]);
+        /* Don't need to do this for any further FP insns in this TB */
+        s->v7m_new_fp_ctxt_needed = false;
+    }
+}
+
 /*
  * Check that VFP access is enabled. If it is, do the necessary
  * M-profile lazy-FP handling and then return true.
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
         /* Trigger lazy-state preservation if necessary */
         gen_preserve_fp_state(s);

-        /* Update ownership of FP context: set FPCCR.S to match current state */
-        if (s->v8m_fpccr_s_wrong) {
-            TCGv_i32 tmp;
-
-            tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
-            if (s->v8m_secure) {
-                tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
-            } else {
-                tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
-            }
-            store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
-            /* Don't need to do this for any further FP insns in this TB */
-            s->v8m_fpccr_s_wrong = false;
-        }
-
-        if (s->v7m_new_fp_ctxt_needed) {
-            /*
-             * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
-             * the FPSCR, and VPR.
-             */
-            TCGv_i32 control, fpscr;
-            uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
-
-            fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
-            gen_helper_vfp_set_fpscr(cpu_env, fpscr);
-            tcg_temp_free_i32(fpscr);
-            if (dc_isar_feature(aa32_mve, s)) {
-                TCGv_i32 z32 = tcg_const_i32(0);
-                store_cpu_field(z32, v7m.vpr);
-            }
-
-            /*
-             * We don't need to arrange to end the TB, because the only
-             * parts of FPSCR which we cache in the TB flags are the VECLEN
-             * and VECSTRIDE, and those don't exist for M-profile.
-             */
-
-            if (s->v8m_secure) {
-                bits |= R_V7M_CONTROL_SFPA_MASK;
-            }
-            control = load_cpu_field(v7m.control[M_REG_S]);
-            tcg_gen_ori_i32(control, control, bits);
-            store_cpu_field(control, v7m.control[M_REG_S]);
-            /* Don't need to do this for any further FP insns in this TB */
-            s->v7m_new_fp_ctxt_needed = false;
-        }
+        /* Update ownership of FP context and create new FP context if needed */
+        gen_update_fp_context(s);
     }

     return true;
--
2.20.1
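For readers less familiar with the M-profile lazy FP machinery, the FPCCR.S
ownership update that gen_update_fp_context() emits as TCG ops looks roughly
like the following when written as plain C. This is a simplified sketch with
invented field names, not QEMU's actual structures:

    #include <stdbool.h>
    #include <stdint.h>

    #define FPCCR_S_MASK (1u << 31)

    struct Ctx {
        uint32_t fpccr_s;     /* banked FPCCR, secure view */
        bool v8m_secure;      /* are we translating Secure code? */
        bool fpccr_s_wrong;   /* cached "FPCCR.S may be stale" flag */
    };

    static void update_fp_context(struct Ctx *s)
    {
        if (s->fpccr_s_wrong) {
            /* Set FPCCR.S to match the current security state */
            if (s->v8m_secure) {
                s->fpccr_s |= FPCCR_S_MASK;
            } else {
                s->fpccr_s &= ~FPCCR_S_MASK;
            }
            /* No need to repeat this for later FP insns in the same TB */
            s->fpccr_s_wrong = false;
        }
    }

The point of the cached flag is that the (possibly expensive) update is
emitted at most once per translation block; the real function additionally
creates a fresh FP context (CONTROL.FPCA/SFPA, FPSCR, VPR) when needed.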
Somewhere along the line we accidentally added a duplicate
"using D16-D31 when they don't exist" check to do_vfm_dp()
(probably an artifact of a patch series rebase). Remove it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200430181003.21682-2-peter.maydell@linaro.org
---
 target/arm/translate-vfp.inc.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
         return false;
     }

-    /* UNDEF accesses to D16-D31 if they don't exist. */
-    if (!dc_isar_feature(aa32_simd_r32, s) &&
-        ((a->vd | a->vn | a->vm) & 0x10)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
--
2.20.1

vfp_access_check and its helper routine full_vfp_access_check() has
gradually grown and is now an awkward mix of A-profile only and
M-profile only pieces.  Refactor it into an A-profile only and an
M-profile only version, taking advantage of the fact that now the
only direct call to full_vfp_access_check() is in A-profile-only
code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-7-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 79 +++++++++++++++++++++++---------------
 1 file changed, 48 insertions(+), 31 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
 }

 /*
- * Check that VFP access is enabled. If it is, do the necessary
- * M-profile lazy-FP handling and then return true.
- * If not, emit code to generate an appropriate exception and
- * return false.
+ * Check that VFP access is enabled, A-profile specific version.
+ *
+ * If VFP is enabled, return true. If not, emit code to generate an
+ * appropriate exception and return false.
  * The ignore_vfp_enabled argument specifies that we should ignore
- * whether VFP is enabled via FPEXC[EN]: this should be true for FMXR/FMRX
+ * whether VFP is enabled via FPEXC.EN: this should be true for FMXR/FMRX
  * accesses to FPSID, FPEXC, MVFR0, MVFR1, MVFR2, and false for all other insns.
  */
-static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
+static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
 {
     if (s->fp_excp_el) {
-        if (arm_dc_feature(s, ARM_FEATURE_M)) {
-            /*
-             * M-profile mostly catches the "FPU disabled" case early, in
-             * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
-             * which do coprocessor-checks are outside the large ranges of
-             * the encoding space handled by the patterns in m-nocp.decode,
-             * and for them we may need to raise NOCP here.
-             */
-            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
-                               syn_uncategorized(), s->fp_excp_el);
-        } else {
-            gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
-                               syn_fp_access_trap(1, 0xe, false),
-                               s->fp_excp_el);
-        }
+        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+                           syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
         unallocated_encoding(s);
         return false;
     }
+    return true;
+}

-    if (arm_dc_feature(s, ARM_FEATURE_M)) {
-        /* Handle M-profile lazy FP state mechanics */
-
-        /* Trigger lazy-state preservation if necessary */
-        gen_preserve_fp_state(s);
-
-        /* Update ownership of FP context and create new FP context if needed */
-        gen_update_fp_context(s);
+/*
+ * Check that VFP access is enabled, M-profile specific version.
+ *
+ * If VFP is enabled, do the necessary M-profile lazy-FP handling and then
+ * return true. If not, emit code to generate an appropriate exception and
+ * return false.
+ */
+static bool vfp_access_check_m(DisasContext *s)
+{
+    if (s->fp_excp_el) {
+        /*
+         * M-profile mostly catches the "FPU disabled" case early, in
+         * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
+         * which do coprocessor-checks are outside the large ranges of
+         * the encoding space handled by the patterns in m-nocp.decode,
+         * and for them we may need to raise NOCP here.
+         */
+        gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                           syn_uncategorized(), s->fp_excp_el);
+        return false;
     }

+    /* Handle M-profile lazy FP state mechanics */
+
+    /* Trigger lazy-state preservation if necessary */
+    gen_preserve_fp_state(s);
+
+    /* Update ownership of FP context and create new FP context if needed */
+    gen_update_fp_context(s);
+
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
  */
 bool vfp_access_check(DisasContext *s)
 {
-    return full_vfp_access_check(s, false);
+    if (arm_dc_feature(s, ARM_FEATURE_M)) {
+        return vfp_access_check_m(s);
+    } else {
+        return vfp_access_check_a(s, false);
+    }
 }

 static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
         return false;
     }

-    if (!full_vfp_access_check(s, ignore_vfp_enabled)) {
+    /*
+     * Call vfp_access_check_a() directly, because we need to tell
+     * it to ignore FPEXC.EN for some register accesses.
+     */
+    if (!vfp_access_check_a(s, ignore_vfp_enabled)) {
         return true;
     }
--
2.20.1

Instead of open-coding the "take NOCP exception if FPU disabled,
otherwise call gen_preserve_fp_state()" code in the accessors for
FPCXT_NS, add an argument to vfp_access_check_m() which tells it to
skip the gen_update_fp_context() call, so we can use it for the
FPCXT_NS case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-8-peter.maydell@linaro.org
---
 target/arm/translate-a32.h | 2 +-
 target/arm/translate-m-nocp.c | 10 ++--------
 target/arm/translate-vfp.c | 13 ++++++++-----
 3 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ bool disas_neon_shared(DisasContext *s, uint32_t insn);
 void load_reg_var(DisasContext *s, TCGv_i32 var, int reg);
 void arm_gen_condlabel(DisasContext *s);
 bool vfp_access_check(DisasContext *s);
-void gen_preserve_fp_state(DisasContext *s);
+bool vfp_access_check_m(DisasContext *s, bool skip_context_update);
 void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop);
 void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop);
 void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop);
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
          * otherwise PreserveFPState(), and then FPCXT_NS writes
          * behave the same as FPCXT_S writes.
          */
-        if (s->fp_excp_el) {
-            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
-                               syn_uncategorized(), s->fp_excp_el);
+        if (!vfp_access_check_m(s, true)) {
             /*
              * This was only a conditional exception, so override
              * gen_exception_insn()'s default to DISAS_NORETURN
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
             s->base.is_jmp = DISAS_NEXT;
             break;
         }
-        gen_preserve_fp_state(s);
     }
     /* fall through */
     case ARM_VFP_FPCXT_S:
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
          * otherwise PreserveFPState(), and then FPCXT_NS
          * reads the same as FPCXT_S.
          */
-        if (s->fp_excp_el) {
-            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
-                               syn_uncategorized(), s->fp_excp_el);
+        if (!vfp_access_check_m(s, true)) {
             /*
              * This was only a conditional exception, so override
              * gen_exception_insn()'s default to DISAS_NORETURN
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
             s->base.is_jmp = DISAS_NEXT;
             break;
         }
-        gen_preserve_fp_state(s);
         tmp = tcg_temp_new_i32();
         sfpa = tcg_temp_new_i32();
         fpscr = tcg_temp_new_i32();
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
  * Generate code for M-profile lazy FP state preservation if needed;
  * this corresponds to the pseudocode PreserveFPState() function.
  */
-void gen_preserve_fp_state(DisasContext *s)
+static void gen_preserve_fp_state(DisasContext *s)
 {
     if (s->v7m_lspact) {
         /*
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
  * If VFP is enabled, do the necessary M-profile lazy-FP handling and then
  * return true. If not, emit code to generate an appropriate exception and
  * return false.
+ * skip_context_update is true to skip the "update FP context" part of this.
  */
-static bool vfp_access_check_m(DisasContext *s)
+bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
 {
     if (s->fp_excp_el) {
         /*
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_m(DisasContext *s)
     /* Trigger lazy-state preservation if necessary */
     gen_preserve_fp_state(s);

-    /* Update ownership of FP context and create new FP context if needed */
-    gen_update_fp_context(s);
+    if (!skip_context_update) {
+        /* Update ownership of FP context and create new FP context if needed */
+        gen_update_fp_context(s);
+    }

     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_m(DisasContext *s)
 bool vfp_access_check(DisasContext *s)
 {
     if (arm_dc_feature(s, ARM_FEATURE_M)) {
-        return vfp_access_check_m(s);
+        return vfp_access_check_m(s, false);
     } else {
         return vfp_access_check_a(s, false);
     }
--
2.20.1
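Taken together, the two access-check patches above leave the code with the
shape sketched below. This is a skeleton only: names are shortened, bodies
are reduced to comments, and it is an outline of the design rather than the
real QEMU code.

    #include <stdbool.h>

    struct DisasCtx { bool is_m_profile; };

    static bool access_check_a(struct DisasCtx *s, bool ignore_fpexc_en)
    {
        /* A-profile: raise UNDEF if FP access is trapped, else succeed */
        return true;
    }

    static bool access_check_m(struct DisasCtx *s, bool skip_context_update)
    {
        /* M-profile: maybe raise NOCP, then do lazy-state preservation */
        if (!skip_context_update) {
            /* ...and normally also update/create the FP context */
        }
        return true;
    }

    /* Common dispatcher used by ordinary FP/Neon instructions */
    static bool access_check(struct DisasCtx *s)
    {
        return s->is_m_profile ? access_check_m(s, false)
                               : access_check_a(s, false);
    }

Ordinary instructions go through the dispatcher; the two special cases call
a profile-specific version directly: VMSR/VMRS needs access_check_a() with
the "ignore FPEXC.EN" flag, and the FPCXT_NS accessors need
access_check_m() with skip_context_update = true.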
Add the infrastructure for building and invoking a decodetree decoder
for the AArch32 Neon encodings. At the moment the new decoder covers
nothing, so we always fall back to the existing hand-written decode.

We follow the same pattern we did for the VFP decodetree conversion
(commit 78e138bc1f672c145ef6ace74617d and following): code that deals
with Neon will be moving gradually out to translate-neon.vfp.inc,
which we #include into translate.c.

In order to share the decode files between A32 and T32, we
split Neon into 3 parts:
 * data-processing
 * load-store
 * 'shared' encodings

The first two groups of instructions have similar but not identical
A32 and T32 encodings, so we need to manually transform the T32
encoding into the A32 one before calling the decoder; the third group
covers the Neon instructions which are identical in A32 and T32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-4-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 29 ++++++++++++++++++++++++++
 target/arm/neon-ls.decode | 29 ++++++++++++++++++++++++++
 target/arm/neon-shared.decode | 27 +++++++++++++++++++++++++
 target/arm/translate-neon.inc.c | 32 +++++++++++++++++++++++++++++
 target/arm/translate.c | 36 +++++++++++++++++++++++++++++++--
 target/arm/Makefile.objs | 18 +++++++++++++++++
 6 files changed, 169 insertions(+), 2 deletions(-)
 create mode 100644 target/arm/neon-dp.decode
 create mode 100644 target/arm/neon-ls.decode
 create mode 100644 target/arm/neon-shared.decode
 create mode 100644 target/arm/translate-neon.inc.c

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@
+# AArch32 Neon data-processing instruction descriptions
+#
+# Copyright (c) 2020 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
+
+# Encodings for Neon data processing instructions where the T32 encoding
+# is a simple transformation of the A32 encoding.
+# More specifically, this file covers instructions where the A32 encoding is
+#   0b1111_001p_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
+# and the T32 encoding is
+#   0b111p_1111_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
+# This file works on the A32 encoding only; calling code for T32 has to
+# transform the insn into the A32 version first.
diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/neon-ls.decode
@@ -XXX,XX +XXX,XX @@
+# AArch32 Neon load/store instruction descriptions
+#
+# Copyright (c) 2020 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
+
+# Encodings for Neon load/store instructions where the T32 encoding
+# is a simple transformation of the A32 encoding.
+# More specifically, this file covers instructions where the A32 encoding is
+#   0b1111_0100_xxx0_xxxx_xxxx_xxxx_xxxx_xxxx
+# and the T32 encoding is
+#   0b1111_1001_xxx0_xxxx_xxxx_xxxx_xxxx_xxxx
+# This file works on the A32 encoding only; calling code for T32 has to
+# transform the insn into the A32 version first.
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/neon-shared.decode

Implement the forms of the MVE VLDR and VSTR insns which perform
non-widening loads of bytes, halfwords or words from memory into
vector elements of the same width (encodings T5, T6, T7).

(At the moment we know for MVE and M-profile in general that
vfp_access_check() can never return false, but we include the
conventional return-true-on-failure check for consistency
with non-M-profile translation code.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-2-peter.maydell@linaro.org
---
 target/arm/{translate-mve.c => helper-mve.h} | 19 +-
 target/arm/helper.h | 2 +
 target/arm/internals.h | 11 ++
 target/arm/mve.decode | 22 +++
 target/arm/mve_helper.c | 172 +++++++++++++++++++
 target/arm/translate-mve.c | 119 +++++++++++++
 target/arm/meson.build | 1 +
 7 files changed, 334 insertions(+), 12 deletions(-)
 copy target/arm/{translate-mve.c => helper-mve.h} (61%)
 create mode 100644 target/arm/mve_helper.c

diff --git a/target/arm/translate-mve.c b/target/arm/helper-mve.h
similarity index 61%
copy from target/arm/translate-mve.c
copy to target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@
 /*
- * ARM translation: M-profile MVE instructions
+ * M-profile MVE specific helper definitions
  *
  * Copyright (c) 2021 Linaro, Ltd.
  *
@@ -XXX,XX +XXX,XX @@
  * You should have received a copy of the GNU Lesser General Public
  * License along with this library; if not, see <http://www.gnu.org/licenses/>.
  */
-
-#include "qemu/osdep.h"
-#include "tcg/tcg-op.h"
-#include "tcg/tcg-op-gvec.h"
-#include "exec/exec-all.h"
-#include "exec/gen-icount.h"
-#include "translate.h"
-#include "translate-a32.h"
-
-/* Include the generated decoder */
-#include "decode-mve.c.inc"
+DEF_HELPER_FLAGS_3(mve_vldrb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrw, TCG_CALL_NO_WG, void, env, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
 #include "helper-a64.h"
 #include "helper-sve.h"
 #endif
+
+#include "helper-mve.h"
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr)
     return ptr;
 }

+/* Values for M-profile PSR.ECI for MVE insns */
+enum MVEECIState {
+    ECI_NONE = 0, /* No completed beats */
+    ECI_A0 = 1, /* Completed: A0 */
+    ECI_A0A1 = 2, /* Completed: A0, A1 */
+    /* 3 is reserved */
+    ECI_A0A1A2 = 4, /* Completed: A0, A1, A2 */
+    ECI_A0A1A2B0 = 5, /* Completed: A0, A1, A2, B0 */
+    /* All other values reserved */
+};
+
 #endif
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 #
 # This file is processed by scripts/decodetree.py
 #
+
+%qd 22:1 13:3
+
+&vldr_vstr rn qd imm p a w size l
+
+@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd
+
+# Vector loads and stores
+
+# Non-widening loads/stores (P=0 W=0 is 'related encoding')
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111100 ....... @vldr_vstr \
+ size=0 p=0 w=1
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111101 ....... @vldr_vstr \
+ size=1 p=0 w=1
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111110 ....... @vldr_vstr \
+ size=2 p=0 w=1
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111100 ....... @vldr_vstr \
+ size=0 p=1
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
+ size=1 p=1
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
+ size=2 p=1
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/mve_helper.c
112
@@ -XXX,XX +XXX,XX @@
113
+# AArch32 Neon instruction descriptions
114
+#
115
+# Copyright (c) 2020 Linaro, Ltd
116
+#
117
+# This library is free software; you can redistribute it and/or
118
+# modify it under the terms of the GNU Lesser General Public
119
+# License as published by the Free Software Foundation; either
120
+# version 2 of the License, or (at your option) any later version.
121
+#
122
+# This library is distributed in the hope that it will be useful,
123
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
124
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
125
+# Lesser General Public License for more details.
126
+#
127
+# You should have received a copy of the GNU Lesser General Public
128
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
129
+
130
+#
131
+# This file is processed by scripts/decodetree.py
132
+#
133
+
134
+# Encodings for Neon instructions whose encoding is the same for
135
+# both A32 and T32.
136
+
137
+# More specifically, this covers:
138
+# 2reg scalar ext: 0b1111_1110_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
139
+# 3same ext: 0b1111_110x_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
140
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
141
new file mode 100644
142
index XXXXXXX..XXXXXXX
143
--- /dev/null
144
+++ b/target/arm/translate-neon.inc.c
145
@@ -XXX,XX +XXX,XX @@
125
@@ -XXX,XX +XXX,XX @@
146
+/*
126
+/*
147
+ * ARM translation: AArch32 Neon instructions
127
+ * M-profile MVE Operations
148
+ *
128
+ *
149
+ * Copyright (c) 2003 Fabrice Bellard
129
+ * Copyright (c) 2021 Linaro, Ltd.
150
+ * Copyright (c) 2005-2007 CodeSourcery
151
+ * Copyright (c) 2007 OpenedHand, Ltd.
152
+ * Copyright (c) 2020 Linaro, Ltd.
153
+ *
130
+ *
154
+ * This library is free software; you can redistribute it and/or
131
+ * This library is free software; you can redistribute it and/or
155
+ * modify it under the terms of the GNU Lesser General Public
132
+ * modify it under the terms of the GNU Lesser General Public
156
+ * License as published by the Free Software Foundation; either
133
+ * License as published by the Free Software Foundation; either
157
+ * version 2 of the License, or (at your option) any later version.
134
+ * version 2.1 of the License, or (at your option) any later version.
158
+ *
135
+ *
159
+ * This library is distributed in the hope that it will be useful,
136
+ * This library is distributed in the hope that it will be useful,
160
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
137
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
161
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
138
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
162
+ * Lesser General Public License for more details.
139
+ * Lesser General Public License for more details.
163
+ *
140
+ *
164
+ * You should have received a copy of the GNU Lesser General Public
141
+ * You should have received a copy of the GNU Lesser General Public
165
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
142
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
166
+ */
143
+ */
167
+
144
+
168
+/*
145
+#include "qemu/osdep.h"
169
+ * This file is intended to be included from translate.c; it uses
146
+#include "cpu.h"
170
+ * some macros and definitions provided by that file.
147
+#include "internals.h"
171
+ * It might be possible to convert it to a standalone .c file eventually.
148
+#include "vec_internal.h"
172
+ */
149
+#include "exec/helper-proto.h"
173
+
150
+#include "exec/cpu_ldst.h"
174
+/* Include the generated Neon decoder */
151
+#include "exec/exec-all.h"
175
+#include "decode-neon-dp.inc.c"
152
+
176
+#include "decode-neon-ls.inc.c"
153
+static uint16_t mve_element_mask(CPUARMState *env)
177
+#include "decode-neon-shared.inc.c"
154
+{
178
diff --git a/target/arm/translate.c b/target/arm/translate.c
155
+ /*
179
index XXXXXXX..XXXXXXX 100644
156
+ * Return the mask of which elements in the MVE vector should be
180
--- a/target/arm/translate.c
157
+ * updated. This is a combination of multiple things:
181
+++ b/target/arm/translate.c
158
+ * (1) by default, we update every lane in the vector
182
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
159
+ * (2) VPT predication stores its state in the VPR register;
183
160
+ * (3) low-overhead-branch tail predication will mask out part
184
#define ARM_CP_RW_BIT (1 << 20)
161
+ * the vector on the final iteration of the loop
185
162
+ * (4) if EPSR.ECI is set then we must execute only some beats
186
-/* Include the VFP decoder */
163
+ * of the insn
187
+/* Include the VFP and Neon decoders */
164
+ * We combine all these into a 16-bit result with the same semantics
188
#include "translate-vfp.inc.c"
165
+ * as VPR.P0: 0 to mask the lane, 1 if it is active.
189
+#include "translate-neon.inc.c"
166
+ * 8-bit vector ops will look at all bits of the result;
190
167
+ * 16-bit ops will look at bits 0, 2, 4, ...;
191
static inline void iwmmxt_load_reg(TCGv_i64 var, int reg)
168
+ * 32-bit ops will look at bits 0, 4, 8 and 12.
192
{
169
+ * Compare pseudocode GetCurInstrBeat(), though that only returns
193
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
170
+ * the 4-bit slice of the mask corresponding to a single beat.
194
/* Unconditional instructions. */
171
+ */
195
/* TODO: Perhaps merge these into one decodetree output file. */
172
+ uint16_t mask = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);
196
if (disas_a32_uncond(s, insn) ||
173
+
197
- disas_vfp_uncond(s, insn)) {
174
+ if (!(env->v7m.vpr & R_V7M_VPR_MASK01_MASK)) {
198
+ disas_vfp_uncond(s, insn) ||
175
+ mask |= 0xff;
199
+ disas_neon_dp(s, insn) ||
176
+ }
200
+ disas_neon_ls(s, insn) ||
177
+ if (!(env->v7m.vpr & R_V7M_VPR_MASK23_MASK)) {
201
+ disas_neon_shared(s, insn)) {
178
+ mask |= 0xff00;
202
return;
179
+ }
203
}
180
+
204
/* fall back to legacy decoder */
181
+ if (env->v7m.ltpsize < 4 &&
205
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
182
+ env->regs[14] <= (1 << (4 - env->v7m.ltpsize))) {
206
ARCH(6T2);
207
}
208
209
+ if ((insn & 0xef000000) == 0xef000000) {
210
+ /*
183
+ /*
211
+ * T32 encodings 0b111p_1111_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
184
+ * Tail predication active, and this is the last loop iteration.
212
+ * transform into
185
+ * The element size is (1 << ltpsize), and we only want to process
213
+ * A32 encodings 0b1111_001p_qqqq_qqqq_qqqq_qqqq_qqqq_qqqq
186
+ * loopcount elements, so we want to retain the least significant
187
+ * (loopcount * esize) predicate bits and zero out bits above that.
214
+ */
188
+ */
215
+ uint32_t a32_insn = (insn & 0xe2ffffff) |
189
+ int masklen = env->regs[14] << env->v7m.ltpsize;
216
+ ((insn & (1 << 28)) >> 4) | (1 << 28);
190
+ assert(masklen <= 16);
217
+
191
+ mask &= MAKE_64BIT_MASK(0, masklen);
218
+ if (disas_neon_dp(s, a32_insn)) {
192
+ }
219
+ return;
193
+
194
+ if ((env->condexec_bits & 0xf) == 0) {
195
+ /*
196
+ * ECI bits indicate which beats are already executed;
197
+ * we handle this by effectively predicating them out.
198
+ */
199
+ int eci = env->condexec_bits >> 4;
200
+ switch (eci) {
201
+ case ECI_NONE:
202
+ break;
203
+ case ECI_A0:
204
+ mask &= 0xfff0;
205
+ break;
206
+ case ECI_A0A1:
207
+ mask &= 0xff00;
208
+ break;
209
+ case ECI_A0A1A2:
210
+ case ECI_A0A1A2B0:
211
+ mask &= 0xf000;
212
+ break;
213
+ default:
214
+ g_assert_not_reached();
220
+ }
215
+ }
221
+ }
216
+ }
222
+
217
+
223
+ if ((insn & 0xff100000) == 0xf9000000) {
218
+ return mask;
224
+ /*
219
+}
225
+ * T32 encodings 0b1111_1001_ppp0_qqqq_qqqq_qqqq_qqqq_qqqq
220
+
226
+ * transform into
221
+static void mve_advance_vpt(CPUARMState *env)
227
+ * A32 encodings 0b1111_0100_ppp0_qqqq_qqqq_qqqq_qqqq_qqqq
222
+{
228
+ */
223
+ /* Advance the VPT and ECI state if necessary */
229
+ uint32_t a32_insn = (insn & 0x00ffffff) | 0xf4000000;
224
+ uint32_t vpr = env->v7m.vpr;
230
+
225
+ unsigned mask01, mask23;
231
+ if (disas_neon_ls(s, a32_insn)) {
226
+
232
+ return;
227
+ if ((env->condexec_bits & 0xf) == 0) {
228
+ env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ?
229
+ (ECI_A0 << 4) : (ECI_NONE << 4);
230
+ }
231
+
232
+ if (!(vpr & (R_V7M_VPR_MASK01_MASK | R_V7M_VPR_MASK23_MASK))) {
233
+ /* VPT not enabled, nothing to do */
234
+ return;
235
+ }
236
+
237
+ mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01);
238
+ mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23);
239
+ if (mask01 > 8) {
240
+ /* high bit set, but not 0b1000: invert the relevant half of P0 */
241
+ vpr ^= 0xff;
242
+ }
243
+ if (mask23 > 8) {
244
+ /* high bit set, but not 0b1000: invert the relevant half of P0 */
245
+ vpr ^= 0xff00;
246
+ }
247
+ vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
248
+ vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1);
249
+ env->v7m.vpr = vpr;
250
+}
251
+
252
+
253
+#define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE) \
254
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
255
+ { \
256
+ TYPE *d = vd; \
257
+ uint16_t mask = mve_element_mask(env); \
258
+ unsigned b, e; \
259
+ /* \
260
+ * R_SXTM allows the dest reg to become UNKNOWN for abandoned \
261
+ * beats so we don't care if we update part of the dest and \
262
+ * then take an exception. \
263
+ */ \
264
+ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
265
+ if (mask & (1 << b)) { \
266
+ d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \
267
+ } \
268
+ addr += MSIZE; \
269
+ } \
270
+ mve_advance_vpt(env); \
271
+ }
272
+
273
+#define DO_VSTR(OP, MSIZE, STTYPE, ESIZE, TYPE) \
274
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
275
+ { \
276
+ TYPE *d = vd; \
277
+ uint16_t mask = mve_element_mask(env); \
278
+ unsigned b, e; \
279
+ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
280
+ if (mask & (1 << b)) { \
281
+ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
282
+ } \
283
+ addr += MSIZE; \
284
+ } \
285
+ mve_advance_vpt(env); \
286
+ }
287
+
288
+DO_VLDR(vldrb, 1, ldub, 1, uint8_t)
289
+DO_VLDR(vldrh, 2, lduw, 2, uint16_t)
290
+DO_VLDR(vldrw, 4, ldl, 4, uint32_t)
291
+
292
+DO_VSTR(vstrb, 1, stb, 1, uint8_t)
293
+DO_VSTR(vstrh, 2, stw, 2, uint16_t)
294
+DO_VSTR(vstrw, 4, stl, 4, uint32_t)
295
+
296
+#undef DO_VLDR
297
+#undef DO_VSTR
298
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
299
index XXXXXXX..XXXXXXX 100644
300
--- a/target/arm/translate-mve.c
301
+++ b/target/arm/translate-mve.c
302
@@ -XXX,XX +XXX,XX @@
303
304
/* Include the generated decoder */
305
#include "decode-mve.c.inc"
306
+
307
+typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
308
+
309
+/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
310
+static inline long mve_qreg_offset(unsigned reg)
311
+{
312
+ return offsetof(CPUARMState, vfp.zregs[reg].d[0]);
313
+}
314
+
315
+static TCGv_ptr mve_qreg_ptr(unsigned reg)
316
+{
317
+ TCGv_ptr ret = tcg_temp_new_ptr();
318
+ tcg_gen_addi_ptr(ret, cpu_env, mve_qreg_offset(reg));
319
+ return ret;
320
+}
321
+
322
+static bool mve_check_qreg_bank(DisasContext *s, int qmask)
323
+{
324
+ /*
325
+ * Check whether Qregs are in range. For v8.1M only Q0..Q7
326
+ * are supported, see VFPSmallRegisterBank().
327
+ */
328
+ return qmask < 8;
329
+}
330
+
331
+static bool mve_eci_check(DisasContext *s)
332
+{
333
+ /*
334
+ * This is a beatwise insn: check that ECI is valid (not a
335
+ * reserved value) and note that we are handling it.
336
+ * Return true if OK, false if we generated an exception.
337
+ */
338
+ s->eci_handled = true;
339
+ switch (s->eci) {
340
+ case ECI_NONE:
341
+ case ECI_A0:
342
+ case ECI_A0A1:
343
+ case ECI_A0A1A2:
344
+ case ECI_A0A1A2B0:
345
+ return true;
346
+ default:
347
+ /* Reserved value: INVSTATE UsageFault */
348
+ gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized(),
349
+ default_exception_el(s));
350
+ return false;
351
+ }
352
+}
353
+
354
+static void mve_update_eci(DisasContext *s)
355
+{
356
+ /*
357
+ * The helper function will always update the CPUState field,
358
+ * so we only need to update the DisasContext field.
359
+ */
360
+ if (s->eci) {
361
+ s->eci = (s->eci == ECI_A0A1A2B0) ? ECI_A0 : ECI_NONE;
362
+ }
363
+}
364
+
365
+static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
366
+{
367
+ TCGv_i32 addr;
368
+ uint32_t offset;
369
+ TCGv_ptr qreg;
370
+
371
+ if (!dc_isar_feature(aa32_mve, s) ||
372
+ !mve_check_qreg_bank(s, a->qd) ||
373
+ !fn) {
374
+ return false;
375
+ }
376
+
377
+ /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
378
+ if (a->rn == 15 || (a->rn == 13 && a->w)) {
379
+ return false;
380
+ }
381
+
382
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
383
+ return true;
384
+ }
385
+
386
+ offset = a->imm << a->size;
387
+ if (!a->a) {
388
+ offset = -offset;
389
+ }
390
+ addr = load_reg(s, a->rn);
391
+ if (a->p) {
392
+ tcg_gen_addi_i32(addr, addr, offset);
393
+ }
394
+
395
+ qreg = mve_qreg_ptr(a->qd);
396
+ fn(cpu_env, qreg, addr);
397
+ tcg_temp_free_ptr(qreg);
398
+
399
+ /*
400
+ * Writeback always happens after the last beat of the insn,
401
+ * regardless of predication
402
+ */
403
+ if (a->w) {
404
+ if (!a->p) {
405
+ tcg_gen_addi_i32(addr, addr, offset);
233
+ }
406
+ }
234
+ }
407
+ store_reg(s, a->rn, addr);
235
+
408
+ } else {
236
/*
409
+ tcg_temp_free_i32(addr);
237
* TODO: Perhaps merge these into one decodetree output file.
410
+ }
238
* Note disas_vfp is written for a32 with cond field in the
411
+ mve_update_eci(s);
239
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
412
+ return true;
240
*/
413
+}
241
if (disas_t32(s, insn) ||
414
+
242
disas_vfp_uncond(s, insn) ||
415
+static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
243
+ disas_neon_shared(s, insn) ||
416
+{
244
((insn >> 28) == 0xe && disas_vfp(s, insn))) {
417
+ static MVEGenLdStFn * const ldstfns[4][2] = {
245
return;
418
+ { gen_helper_mve_vstrb, gen_helper_mve_vldrb },
246
}
419
+ { gen_helper_mve_vstrh, gen_helper_mve_vldrh },
247
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
420
+ { gen_helper_mve_vstrw, gen_helper_mve_vldrw },
248
index XXXXXXX..XXXXXXX 100644
421
+ { NULL, NULL }
249
--- a/target/arm/Makefile.objs
422
+ };
250
+++ b/target/arm/Makefile.objs
423
+ return do_ldst(s, a, ldstfns[a->size][a->l]);
251
@@ -XXX,XX +XXX,XX @@ target/arm/decode-sve.inc.c: $(SRC_PATH)/target/arm/sve.decode $(DECODETREE)
424
+}
252
     $(PYTHON) $(DECODETREE) --decode disas_sve -o $@ $<,\
425
diff --git a/target/arm/meson.build b/target/arm/meson.build
253
     "GEN", $(TARGET_DIR)$@)
426
index XXXXXXX..XXXXXXX 100644
254
427
--- a/target/arm/meson.build
255
+target/arm/decode-neon-shared.inc.c: $(SRC_PATH)/target/arm/neon-shared.decode $(DECODETREE)
428
+++ b/target/arm/meson.build
256
+    $(call quiet-command,\
429
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
257
+     $(PYTHON) $(DECODETREE) --static-decode disas_neon_shared -o $@ $<,\
430
'helper.c',
258
+     "GEN", $(TARGET_DIR)$@)
431
'iwmmxt_helper.c',
259
+
432
'm_helper.c',
260
+target/arm/decode-neon-dp.inc.c: $(SRC_PATH)/target/arm/neon-dp.decode $(DECODETREE)
433
+ 'mve_helper.c',
261
+    $(call quiet-command,\
434
'neon_helper.c',
262
+     $(PYTHON) $(DECODETREE) --static-decode disas_neon_dp -o $@ $<,\
435
'op_helper.c',
263
+     "GEN", $(TARGET_DIR)$@)
436
'tlb_helper.c',
264
+
265
+target/arm/decode-neon-ls.inc.c: $(SRC_PATH)/target/arm/neon-ls.decode $(DECODETREE)
266
+    $(call quiet-command,\
267
+     $(PYTHON) $(DECODETREE) --static-decode disas_neon_ls -o $@ $<,\
268
+     "GEN", $(TARGET_DIR)$@)
269
+
270
target/arm/decode-vfp.inc.c: $(SRC_PATH)/target/arm/vfp.decode $(DECODETREE)
271
    $(call quiet-command,\
272
     $(PYTHON) $(DECODETREE) --static-decode disas_vfp -o $@ $<,\
273
@@ -XXX,XX +XXX,XX @@ target/arm/decode-t16.inc.c: $(SRC_PATH)/target/arm/t16.decode $(DECODETREE)
274
     "GEN", $(TARGET_DIR)$@)
275
276
target/arm/translate-sve.o: target/arm/decode-sve.inc.c
277
+target/arm/translate.o: target/arm/decode-neon-shared.inc.c
278
+target/arm/translate.o: target/arm/decode-neon-dp.inc.c
279
+target/arm/translate.o: target/arm/decode-neon-ls.inc.c
280
target/arm/translate.o: target/arm/decode-vfp.inc.c
281
target/arm/translate.o: target/arm/decode-vfp-uncond.inc.c
282
target/arm/translate.o: target/arm/decode-a32.inc.c
283
--
437
--
284
2.20.1
438
2.20.1
285
439
286
440
diff view generated by jsdifflib
New patch
1
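As a side note on the predication scheme above: the 16-bit mask returned by mve_element_mask() carries one bit per byte of the 128-bit vector, and the DO_VLDR/DO_VSTR loops simply skip lanes whose bits are clear. A minimal standalone C sketch of that loop shape, for illustration only (masked_copy is a made-up name, not a QEMU API):

    #include <stdint.h>
    #include <stdio.h>

    /* One predicate bit per byte lane: bit b set => lane b is written */
    static void masked_copy(uint8_t *dst, const uint8_t *src, uint16_t mask)
    {
        for (unsigned b = 0; b < 16; b++) {
            if (mask & (1u << b)) {
                dst[b] = src[b];
            }
        }
    }

    int main(void)
    {
        uint8_t d[16] = { 0 }, s[16];
        for (unsigned i = 0; i < 16; i++) {
            s[i] = (uint8_t)(i + 1);
        }
        /* ECI_A0A1: beats A0/A1 (bytes 0..7) already done, so masked out */
        masked_copy(d, s, 0xff00);
        for (unsigned i = 0; i < 16; i++) {
            printf("%d ", d[i]);  /* first eight lanes stay 0 */
        }
        printf("\n");
        return 0;
    }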
Implement the variants of MVE VLDR (encodings T1, T2) which perform
"widening" loads where bytes or halfwords are loaded from memory and
zero or sign-extended into halfword or word length vector elements,
and the narrowing MVE VSTR (encodings T1, T2) where bytes or
halfwords are stored from halfword or word elements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-3-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 10 ++++++++++
 target/arm/mve.decode      | 25 +++++++++++++++++++++++--
 target/arm/mve_helper.c    | 11 +++++++++++
 target/arm/translate-mve.c | 14 ++++++++++++++
 4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vldrw, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrb, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrh, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vldrb_sh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrb_sw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrb_uh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrb_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrh_sw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 %qd 22:1 13:3

-&vldr_vstr rn qd imm p a w size l
+&vldr_vstr rn qd imm p a w size l u

-@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd
+@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
+# Note that both Rn and Qd are 3 bits only (no D bit)
+@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr

 # Vector loads and stores

+# Widening loads and narrowing stores:
+# for these P=0 W=0 is 'related encoding'; sz=11 is 'related encoding'
+# This means we need to expand out to multiple patterns for P, W, SZ.
+# For stores the U bit must be 0 but we catch that in the trans_ function.
+# The naming scheme here is "VLDSTB_H == in-memory byte load/store to/from
+# signed halfword element in register", etc.
+VLDSTB_H 111 . 110 0 a:1 0 1 . 0 ... ... 0 111 01 ....... @vldst_wn \
+         p=0 w=1 size=1
+VLDSTB_H 111 . 110 1 a:1 0 w:1 . 0 ... ... 0 111 01 ....... @vldst_wn \
+         p=1 size=1
+VLDSTB_W 111 . 110 0 a:1 0 1 . 0 ... ... 0 111 10 ....... @vldst_wn \
+         p=0 w=1 size=2
+VLDSTB_W 111 . 110 1 a:1 0 w:1 . 0 ... ... 0 111 10 ....... @vldst_wn \
+         p=1 size=2
+VLDSTH_W 111 . 110 0 a:1 0 1 . 1 ... ... 0 111 10 ....... @vldst_wn \
+         p=0 w=1 size=2
+VLDSTH_W 111 . 110 1 a:1 0 w:1 . 1 ... ... 0 111 10 ....... @vldst_wn \
+         p=1 size=2
+
 # Non-widening loads/stores (P=0 W=0 is 'related encoding')
 VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111100 ....... @vldr_vstr \
           size=0 p=0 w=1
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrb, 1, stb, 1, uint8_t)
 DO_VSTR(vstrh, 2, stw, 2, uint16_t)
 DO_VSTR(vstrw, 4, stl, 4, uint32_t)

+DO_VLDR(vldrb_sh, 1, ldsb, 2, int16_t)
+DO_VLDR(vldrb_sw, 1, ldsb, 4, int32_t)
+DO_VLDR(vldrb_uh, 1, ldub, 2, uint16_t)
+DO_VLDR(vldrb_uw, 1, ldub, 4, uint32_t)
+DO_VLDR(vldrh_sw, 2, ldsw, 4, int32_t)
+DO_VLDR(vldrh_uw, 2, lduw, 4, uint32_t)
+
+DO_VSTR(vstrb_h, 1, stb, 2, int16_t)
+DO_VSTR(vstrb_w, 1, stb, 4, int32_t)
+DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
+
 #undef DO_VLDR
 #undef DO_VSTR
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
     };
     return do_ldst(s, a, ldstfns[a->size][a->l]);
 }
+
+#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST)                  \
+    static bool trans_##OP(DisasContext *s, arg_VLDR_VSTR *a)   \
+    {                                                           \
+        static MVEGenLdStFn * const ldstfns[2][2] = {           \
+            { gen_helper_mve_##ST, gen_helper_mve_##SLD },      \
+            { NULL, gen_helper_mve_##ULD },                     \
+        };                                                      \
+        return do_ldst(s, a, ldstfns[a->u][a->l]);              \
+    }
+
+DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
+DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
+DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
--
2.20.1
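For readers unfamiliar with the widening forms: the memory element size and the register element size differ, so each load also sign- or zero-extends. A standalone sketch of what VLDRB.S16 computes for a fully-active predicate (the function name here is invented for illustration, not one of the series' helpers):

    #include <stdint.h>
    #include <stdio.h>

    /* Widening load: 8 bytes from memory become 8 sign-extended
     * 16-bit vector elements (what VLDRB.S16 does, ignoring predication). */
    static void vldrb_s16_sketch(int16_t d[8], const int8_t *mem)
    {
        for (unsigned e = 0; e < 8; e++) {
            d[e] = mem[e];  /* C integer promotion sign-extends */
        }
    }

    int main(void)
    {
        int8_t mem[8] = { -1, 2, -3, 4, -5, 6, -7, 8 };
        int16_t d[8];
        vldrb_s16_sketch(d, mem);
        for (unsigned e = 0; e < 8; e++) {
            printf("%d ", d[e]);  /* -1 2 -3 4 -5 6 -7 8 */
        }
        printf("\n");
        return 0;
    }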
Implement the MVE VCLZ insn (and the necessary machinery
for MVE 1-input vector ops).

Note that for non-load instructions predication is always performed
at a byte level granularity regardless of element size (R_ZLSJ),
and so the masking logic here differs from that used in the VLDR
and VSTR helpers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-4-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  4 ++
 target/arm/mve.decode      |  8 ++++
 target/arm/mve_helper.c    | 82 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 38 ++++++++++++++++++
 4 files changed, 132 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 #

 %qd 22:1 13:3
+%qm 5:1 1:3

 &vldr_vstr rn qd imm p a w size l u
+&1op qd qm size

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
 @vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr

+@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
+
 # Vector loads and stores

 # Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
           size=1 p=1
 VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
           size=2 p=1
+
+# Vector miscellaneous
+
+VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)

 #undef DO_VLDR
 #undef DO_VSTR
+
+/*
+ * The mergemask(D, R, M) macro performs the operation "*D = R" but
+ * storing only the bytes which correspond to 1 bits in M,
+ * leaving other bytes in *D unchanged. We use _Generic
+ * to select the correct implementation based on the type of D.
+ */
+
+static void mergemask_ub(uint8_t *d, uint8_t r, uint16_t mask)
+{
+    if (mask & 1) {
+        *d = r;
+    }
+}
+
+static void mergemask_sb(int8_t *d, int8_t r, uint16_t mask)
+{
+    mergemask_ub((uint8_t *)d, r, mask);
+}
+
+static void mergemask_uh(uint16_t *d, uint16_t r, uint16_t mask)
+{
+    uint16_t bmask = expand_pred_b_data[mask & 3];
+    *d = (*d & ~bmask) | (r & bmask);
+}
+
+static void mergemask_sh(int16_t *d, int16_t r, uint16_t mask)
+{
+    mergemask_uh((uint16_t *)d, r, mask);
+}
+
+static void mergemask_uw(uint32_t *d, uint32_t r, uint16_t mask)
+{
+    uint32_t bmask = expand_pred_b_data[mask & 0xf];
+    *d = (*d & ~bmask) | (r & bmask);
+}
+
+static void mergemask_sw(int32_t *d, int32_t r, uint16_t mask)
+{
+    mergemask_uw((uint32_t *)d, r, mask);
+}
+
+static void mergemask_uq(uint64_t *d, uint64_t r, uint16_t mask)
+{
+    uint64_t bmask = expand_pred_b_data[mask & 0xff];
+    *d = (*d & ~bmask) | (r & bmask);
+}
+
+static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
+{
+    mergemask_uq((uint64_t *)d, r, mask);
+}
+
+#define mergemask(D, R, M)                      \
+    _Generic(D,                                 \
+             uint8_t *:  mergemask_ub,          \
+             int8_t *:   mergemask_sb,          \
+             uint16_t *: mergemask_uh,          \
+             int16_t *:  mergemask_sh,          \
+             uint32_t *: mergemask_uw,          \
+             int32_t *:  mergemask_sw,          \
+             uint64_t *: mergemask_uq,          \
+             int64_t *:  mergemask_sq)(D, R, M)
+
+#define DO_1OP(OP, ESIZE, TYPE, FN)                                     \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm)         \
+    {                                                                   \
+        TYPE *d = vd, *m = vm;                                          \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)]), mask);       \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_CLZ_B(N)   (clz32(N) - 24)
+#define DO_CLZ_H(N)   (clz32(N) - 16)
+
+DO_1OP(vclzb, 1, uint8_t, DO_CLZ_B)
+DO_1OP(vclzh, 2, uint16_t, DO_CLZ_H)
+DO_1OP(vclzw, 4, uint32_t, clz32)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
 #include "decode-mve.c.inc"

 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
 DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
 DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
 DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
+
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
+{
+    TCGv_ptr qd, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qm) ||
+        !fn) {
+        return false;
+    }
+
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qd, qm);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_1OP(INSN, FN)                                        \
+    static bool trans_##INSN(DisasContext *s, arg_1op *a)       \
+    {                                                           \
+        static MVEGenOneOpFn * const fns[] = {                  \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_1op(s, a, fns[a->size]);                      \
+    }
+
+DO_1OP(VCLZ, vclz)
--
2.20.1
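The mergemask() machinery above leans on C11 _Generic selection to pick a per-type implementation at compile time. A tiny self-contained demo of the same dispatch pattern, separate from the series (set_elem, set_u8 and set_u16 are invented names):

    #include <stdint.h>
    #include <stdio.h>

    static void set_u8(uint8_t *d, uint8_t r)    { *d = r; }
    static void set_u16(uint16_t *d, uint16_t r) { *d = r; }

    /* Compile-time dispatch on the pointer type of D, as mergemask() does */
    #define set_elem(D, R)              \
        _Generic(D,                     \
                 uint8_t *:  set_u8,    \
                 uint16_t *: set_u16)(D, R)

    int main(void)
    {
        uint8_t b = 0;
        uint16_t h = 0;
        set_elem(&b, 0x12);    /* resolves to set_u8 */
        set_elem(&h, 0x3456);  /* resolves to set_u16 */
        printf("%x %x\n", b, h);
        return 0;
    }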
Implement the MVE VCLS insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-5-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 4 ++++
 target/arm/mve.decode      | 1 +
 target/arm/mve_helper.c    | 7 +++++++
 target/arm/translate-mve.c | 1 +
 4 files changed, 13 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
 DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \

 # Vector miscellaneous

+VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
 VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
         mve_advance_vpt(env);                                           \
     }

+#define DO_CLS_B(N)   (clrsb32(N) - 24)
+#define DO_CLS_H(N)   (clrsb32(N) - 16)
+
+DO_1OP(vclsb, 1, int8_t, DO_CLS_B)
+DO_1OP(vclsh, 2, int16_t, DO_CLS_H)
+DO_1OP(vclsw, 4, int32_t, clrsb32)
+
 #define DO_CLZ_B(N)   (clz32(N) - 24)
 #define DO_CLZ_H(N)   (clz32(N) - 16)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
 }

 DO_1OP(VCLZ, vclz)
+DO_1OP(VCLS, vcls)
--
2.20.1
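VCLS is "count leading sign bits", i.e. clrsb: the number of bits immediately below the sign bit that are equal to it. A portable sketch of clrsb32 plus the -24 adjustment the byte-lane helper applies (sketch only; the real clrsb32 in QEMU uses host builtins where available):

    #include <stdint.h>
    #include <stdio.h>

    /* Count leading redundant sign bits of a 32-bit value */
    static int clrsb32_sketch(int32_t x)
    {
        uint32_t u = (uint32_t)x;  /* shift as unsigned to avoid UB */
        int n = 0;
        while (n < 31 && ((u >> 30) & 1) == ((u >> 31) & 1)) {
            u <<= 1;
            n++;
        }
        return n;
    }

    int main(void)
    {
        int8_t v = -2;  /* 0xfe: six copies of the sign bit follow it */
        /* The byte-lane helper computes clrsb32(N) - 24 */
        printf("%d\n", clrsb32_sketch(v) - 24);  /* prints 6 */
        return 0;
    }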
Implement the MVE instructions VREV16, VREV32 and VREV64.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-6-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  4 ++++
 target/arm/mve_helper.c    |  7 +++++++
 target/arm/translate-mve.c | 33 +++++++++++++++++++++++++++++++++
 4 files changed, 51 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vrev16b, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev32b, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev32h, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev64b, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
 VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
+
+VREV16 1111 1111 1 . 11 .. 00 ... 0 0001 01 . 0 ... 0 @1op
+VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
+VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vclsw, 4, int32_t, clrsb32)
 DO_1OP(vclzb, 1, uint8_t, DO_CLZ_B)
 DO_1OP(vclzh, 2, uint16_t, DO_CLZ_H)
 DO_1OP(vclzw, 4, uint32_t, clz32)
+
+DO_1OP(vrev16b, 2, uint16_t, bswap16)
+DO_1OP(vrev32b, 4, uint32_t, bswap32)
+DO_1OP(vrev32h, 4, uint32_t, hswap32)
+DO_1OP(vrev64b, 8, uint64_t, bswap64)
+DO_1OP(vrev64h, 8, uint64_t, hswap64)
+DO_1OP(vrev64w, 8, uint64_t, wswap64)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)

 DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
+
+static bool trans_VREV16(DisasContext *s, arg_1op *a)
+{
+    static MVEGenOneOpFn * const fns[] = {
+        gen_helper_mve_vrev16b,
+        NULL,
+        NULL,
+        NULL,
+    };
+    return do_1op(s, a, fns[a->size]);
+}
+
+static bool trans_VREV32(DisasContext *s, arg_1op *a)
+{
+    static MVEGenOneOpFn * const fns[] = {
+        gen_helper_mve_vrev32b,
+        gen_helper_mve_vrev32h,
+        NULL,
+        NULL,
+    };
+    return do_1op(s, a, fns[a->size]);
+}
+
+static bool trans_VREV64(DisasContext *s, arg_1op *a)
+{
+    static MVEGenOneOpFn * const fns[] = {
+        gen_helper_mve_vrev64b,
+        gen_helper_mve_vrev64h,
+        gen_helper_mve_vrev64w,
+        NULL,
+    };
+    return do_1op(s, a, fns[a->size]);
+}
--
2.20.1
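The vrev32h case above relies on a halfword-swap primitive (hswap32 in QEMU's terms), which for a 32-bit value is simply a rotate by 16. A two-line sketch of its effect (the function name is invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Swap the two 16-bit halves of a 32-bit value (a rotate by 16) */
    static uint32_t hswap32_sketch(uint32_t x)
    {
        return (x << 16) | (x >> 16);
    }

    int main(void)
    {
        printf("%08x\n", (unsigned)hswap32_sketch(0x11223344u)); /* 33441122 */
        return 0;
    }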
Implement the MVE VMVN(register) operation.  Note that for
predication this operation is byte-by-byte.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-7-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 2 ++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 4 ++++
 target/arm/translate-mve.c | 5 +++++
 4 files changed, 14 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vrev32h, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vrev64b, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vmvn, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 @vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr

 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
+@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
 VREV16 1111 1111 1 . 11 .. 00 ... 0 0001 01 . 0 ... 0 @1op
 VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
 VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
+
+VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vrev32h, 4, uint32_t, hswap32)
 DO_1OP(vrev64b, 8, uint64_t, bswap64)
 DO_1OP(vrev64h, 8, uint64_t, hswap64)
 DO_1OP(vrev64w, 8, uint64_t, wswap64)
+
+#define DO_NOT(N) (~(N))
+
+DO_1OP(vmvn, 8, uint64_t, DO_NOT)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
     };
     return do_1op(s, a, fns[a->size]);
 }
+
+static bool trans_VMVN(DisasContext *s, arg_1op *a)
+{
+    return do_1op(s, a, gen_helper_mve_vmvn);
+}
--
2.20.1
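Why can a bytewise-predicated operation like VMVN still be processed 64 bits at a time? Because each predicate bit can be expanded into a byte-wide 0x00/0xff mask and merged in one AND/OR step, which is what mergemask_uq does via QEMU's expand_pred_b_data table. A standalone sketch of that expansion (expand_pred_b_sketch is an invented name; QEMU uses a precomputed lookup table rather than a loop):

    #include <stdint.h>
    #include <stdio.h>

    /* Expand each of the low 8 predicate bits into a 0x00/0xff byte */
    static uint64_t expand_pred_b_sketch(uint16_t mask)
    {
        uint64_t r = 0;
        for (unsigned i = 0; i < 8; i++) {
            if (mask & (1u << i)) {
                r |= 0xffull << (i * 8);
            }
        }
        return r;
    }

    int main(void)
    {
        uint64_t d = 0x1111111111111111ull, val = 0xaaaaaaaaaaaaaaaaull;
        uint64_t bmask = expand_pred_b_sketch(0x0f); /* low 4 lanes active */
        d = (d & ~bmask) | (val & bmask);
        printf("%016llx\n", (unsigned long long)d); /* 11111111aaaaaaaa */
        return 0;
    }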
Implement the MVE VABS functions (both integer and floating point).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-8-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  6 ++++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 13 +++++++++++++
 target/arm/translate-mve.c | 15 +++++++++++++++
 4 files changed, 37 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)

 DEF_HELPER_FLAGS_3(mve_vmvn, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vabsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vfabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vfabss, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
 VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op

 VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
+
+VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
+VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/helper-proto.h"
 #include "exec/cpu_ldst.h"
 #include "exec/exec-all.h"
+#include "tcg/tcg.h"

 static uint16_t mve_element_mask(CPUARMState *env)
 {
@@ -XXX,XX +XXX,XX @@ DO_1OP(vrev64w, 8, uint64_t, wswap64)
 #define DO_NOT(N) (~(N))

 DO_1OP(vmvn, 8, uint64_t, DO_NOT)
+
+#define DO_ABS(N) ((N) < 0 ? -(N) : (N))
+#define DO_FABSH(N)  ((N) & dup_const(MO_16, 0x7fff))
+#define DO_FABSS(N)  ((N) & dup_const(MO_32, 0x7fffffff))
+
+DO_1OP(vabsb, 1, int8_t, DO_ABS)
+DO_1OP(vabsh, 2, int16_t, DO_ABS)
+DO_1OP(vabsw, 4, int32_t, DO_ABS)
+
+/* We can do these 64 bits at a time */
+DO_1OP(vfabsh, 8, uint64_t, DO_FABSH)
+DO_1OP(vfabss, 8, uint64_t, DO_FABSS)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)

 DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
+DO_1OP(VABS, vabs)

 static bool trans_VREV16(DisasContext *s, arg_1op *a)
 {
@@ -XXX,XX +XXX,XX @@ static bool trans_VMVN(DisasContext *s, arg_1op *a)
 {
     return do_1op(s, a, gen_helper_mve_vmvn);
 }
+
+static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
+{
+    static MVEGenOneOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vfabsh,
+        gen_helper_mve_vfabss,
+        NULL,
+    };
+    if (!dc_isar_feature(aa32_mve_fp, s)) {
+        return false;
+    }
+    return do_1op(s, a, fns[a->size]);
+}
--
2.20.1
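The DO_FABSH/DO_FABSS masks above use dup_const() to replicate a per-lane constant across a 64-bit chunk, so that every fp16 or fp32 lane gets its sign bit cleared (or, for the VNEG patch that follows, flipped) in a single integer operation. A host-side sketch of the idea (dup16_sketch is an invented helper, not QEMU's dup_const):

    #include <stdint.h>
    #include <stdio.h>

    /* Replicate a 16-bit constant into all four halfwords of a u64 */
    static uint64_t dup16_sketch(uint16_t c)
    {
        uint64_t v = c;
        return v | (v << 16) | (v << 32) | (v << 48);
    }

    int main(void)
    {
        uint64_t lanes = 0x8001bc00c0003c00ull;       /* four fp16 bit patterns */
        uint64_t vabs = lanes & dup16_sketch(0x7fff); /* clear every sign bit */
        uint64_t vneg = lanes ^ dup16_sketch(0x8000); /* flip every sign bit */
        printf("%016llx\n%016llx\n",
               (unsigned long long)vabs, (unsigned long long)vneg);
        return 0;
    }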
Convert VCMLA (scalar) in the 2reg-scalar-ext group to decodetree.
1
Implement the MVE VNEG insn (both integer and floating point forms).
2
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20200430181003.21682-9-peter.maydell@linaro.org
5
Message-id: 20210617121628.20116-9-peter.maydell@linaro.org
6
---
6
---
7
target/arm/neon-shared.decode | 5 +++++
7
target/arm/helper-mve.h | 6 ++++++
8
target/arm/translate-neon.inc.c | 40 +++++++++++++++++++++++++++++++++
8
target/arm/mve.decode | 2 ++
9
target/arm/translate.c | 26 +--------------------
9
target/arm/mve_helper.c | 12 ++++++++++++
10
3 files changed, 46 insertions(+), 25 deletions(-)
10
target/arm/translate-mve.c | 15 +++++++++++++++
11
4 files changed, 35 insertions(+)
11
12
12
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/neon-shared.decode
15
--- a/target/arm/helper-mve.h
15
+++ b/target/arm/neon-shared.decode
16
+++ b/target/arm/helper-mve.h
16
@@ -XXX,XX +XXX,XX @@ VFML 1111 110 0 s:1 . 10 .... .... 1000 . 0 . 1 .... \
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
17
vm=%vm_sp vn=%vn_sp vd=%vd_dp q=0
18
DEF_HELPER_FLAGS_3(mve_vabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
18
VFML 1111 110 0 s:1 . 10 .... .... 1000 . 1 . 1 .... \
19
DEF_HELPER_FLAGS_3(mve_vfabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
19
vm=%vm_dp vn=%vn_dp vd=%vd_dp q=1
20
DEF_HELPER_FLAGS_3(mve_vfabss, TCG_CALL_NO_WG, void, env, ptr, ptr)
20
+
21
+
21
+VCMLA_scalar 1111 1110 0 . rot:2 .... .... 1000 . q:1 index:1 0 vm:4 \
22
+DEF_HELPER_FLAGS_3(mve_vnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
22
+ vn=%vn_dp vd=%vd_dp size=0
23
+DEF_HELPER_FLAGS_3(mve_vnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
+VCMLA_scalar 1111 1110 1 . rot:2 .... .... 1000 . q:1 . 0 .... \
24
+DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp size=1 index=0
25
+DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
26
+DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
26
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/translate-neon.inc.c
29
--- a/target/arm/mve.decode
28
+++ b/target/arm/translate-neon.inc.c
30
+++ b/target/arm/mve.decode
29
@@ -XXX,XX +XXX,XX @@ static bool trans_VFML(DisasContext *s, arg_VFML *a)
31
@@ -XXX,XX +XXX,XX @@ VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
30
gen_helper_gvec_fmlal_a32);
32
31
return true;
33
VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
34
VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
35
+VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
36
+VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
37
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/mve_helper.c
40
+++ b/target/arm/mve_helper.c
41
@@ -XXX,XX +XXX,XX @@ DO_1OP(vabsw, 4, int32_t, DO_ABS)
42
/* We can do these 64 bits at a time */
43
DO_1OP(vfabsh, 8, uint64_t, DO_FABSH)
44
DO_1OP(vfabss, 8, uint64_t, DO_FABSS)
45
+
46
+#define DO_NEG(N) (-(N))
47
+#define DO_FNEGH(N) ((N) ^ dup_const(MO_16, 0x8000))
48
+#define DO_FNEGS(N) ((N) ^ dup_const(MO_32, 0x80000000))
49
+
50
+DO_1OP(vnegb, 1, int8_t, DO_NEG)
51
+DO_1OP(vnegh, 2, int16_t, DO_NEG)
52
+DO_1OP(vnegw, 4, int32_t, DO_NEG)
53
+
54
+/* We can do these 64 bits at a time */
55
+DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
56
+DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
57
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/translate-mve.c
60
+++ b/target/arm/translate-mve.c
61
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
62
DO_1OP(VCLZ, vclz)
63
DO_1OP(VCLS, vcls)
64
DO_1OP(VABS, vabs)
65
+DO_1OP(VNEG, vneg)
66
67
static bool trans_VREV16(DisasContext *s, arg_1op *a)
68
{
69
@@ -XXX,XX +XXX,XX @@ static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
70
}
71
return do_1op(s, a, fns[a->size]);
32
}
72
}
33
+
73
+
34
+static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
74
+static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
35
+{
75
+{
36
+ gen_helper_gvec_3_ptr *fn_gvec_ptr;
76
+ static MVEGenOneOpFn * const fns[] = {
37
+ int opr_sz;
77
+ NULL,
38
+ TCGv_ptr fpst;
78
+ gen_helper_mve_vfnegh,
39
+
79
+ gen_helper_mve_vfnegs,
40
+ if (!dc_isar_feature(aa32_vcma, s)) {
80
+ NULL,
81
+ };
82
+ if (!dc_isar_feature(aa32_mve_fp, s)) {
41
+ return false;
83
+ return false;
42
+ }
84
+ }
43
+ if (a->size == 0 && !dc_isar_feature(aa32_fp16_arith, s)) {
85
+ return do_1op(s, a, fns[a->size]);
44
+ return false;
45
+ }
46
+
47
+ /* UNDEF accesses to D16-D31 if they don't exist. */
48
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
49
+ ((a->vd | a->vn | a->vm) & 0x10)) {
50
+ return false;
51
+ }
52
+
53
+ if ((a->vd | a->vn) & a->q) {
54
+ return false;
55
+ }
56
+
57
+ if (!vfp_access_check(s)) {
58
+ return true;
59
+ }
60
+
61
+ fn_gvec_ptr = (a->size ? gen_helper_gvec_fcmlas_idx
62
+ : gen_helper_gvec_fcmlah_idx);
63
+ opr_sz = (1 + a->q) * 8;
64
+ fpst = get_fpstatus_ptr(1);
65
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
66
+ vfp_reg_offset(1, a->vn),
67
+ vfp_reg_offset(1, a->vm),
68
+ fpst, opr_sz, opr_sz,
69
+ (a->index << 2) | a->rot, fn_gvec_ptr);
70
+ tcg_temp_free_ptr(fpst);
71
+ return true;
72
+}
86
+}
73
diff --git a/target/arm/translate.c b/target/arm/translate.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/target/arm/translate.c
76
+++ b/target/arm/translate.c
77
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
78
bool is_long = false, q = extract32(insn, 6, 1);
79
bool ptr_is_env = false;
80
81
- if ((insn & 0xff000f10) == 0xfe000800) {
82
- /* VCMLA (indexed) -- 1111 1110 S.RR .... .... 1000 ...0 .... */
83
- int rot = extract32(insn, 20, 2);
84
- int size = extract32(insn, 23, 1);
85
- int index;
86
-
87
- if (!dc_isar_feature(aa32_vcma, s)) {
88
- return 1;
89
- }
90
- if (size == 0) {
91
- if (!dc_isar_feature(aa32_fp16_arith, s)) {
92
- return 1;
93
- }
94
- /* For fp16, rm is just Vm, and index is M. */
95
- rm = extract32(insn, 0, 4);
96
- index = extract32(insn, 5, 1);
97
- } else {
98
- /* For fp32, rm is the usual M:Vm, and index is 0. */
99
- VFP_DREG_M(rm, insn);
100
- index = 0;
101
- }
102
- data = (index << 2) | rot;
103
- fn_gvec_ptr = (size ? gen_helper_gvec_fcmlas_idx
104
- : gen_helper_gvec_fcmlah_idx);
105
- } else if ((insn & 0xffb00f00) == 0xfe200d00) {
106
+ if ((insn & 0xffb00f00) == 0xfe200d00) {
107
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
108
int u = extract32(insn, 4, 1);
109
110
--
87
--
111
2.20.1
88
2.20.1
112
89
113
90
diff view generated by jsdifflib
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Embed the APUs into the SoC type.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-8-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h |  2 +-
 hw/arm/xlnx-versal-virt.c    |  4 ++--
 hw/arm/xlnx-versal.c         | 19 +++++--------------
 3 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
     struct {
         struct {
             MemoryRegion mr;
-            ARMCPU *cpu[XLNX_VERSAL_NR_ACPUS];
+            ARMCPU cpu[XLNX_VERSAL_NR_ACPUS];
             GICv3State gic;
         } apu;
     } fpd;
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
     s->binfo.get_dtb = versal_virt_get_dtb;
     s->binfo.modify_dtb = versal_virt_modify_dtb;
     if (machine->kernel_filename) {
-        arm_load_kernel(s->soc.fpd.apu.cpu[0], machine, &s->binfo);
+        arm_load_kernel(&s->soc.fpd.apu.cpu[0], machine, &s->binfo);
     } else {
-        AddressSpace *as = arm_boot_address_space(s->soc.fpd.apu.cpu[0],
+        AddressSpace *as = arm_boot_address_space(&s->soc.fpd.apu.cpu[0],
                                                   &s->binfo);
         /* Some boot-loaders (e.g u-boot) don't like blobs at address 0 (NULL).
          * Offset things by 4K.  */
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)

     for (i = 0; i < ARRAY_SIZE(s->fpd.apu.cpu); i++) {
         Object *obj;
-        char *name;
-
-        obj = object_new(XLNX_VERSAL_ACPU_TYPE);
-        if (!obj) {
-            error_report("Unable to create apu.cpu[%d] of type %s",
-                         i, XLNX_VERSAL_ACPU_TYPE);
-            exit(EXIT_FAILURE);
-        }
-
-        name = g_strdup_printf("apu-cpu[%d]", i);
-        object_property_add_child(OBJECT(s), name, obj, &error_fatal);
-        g_free(name);

+        object_initialize_child(OBJECT(s), "apu-cpu[*]",
+                                &s->fpd.apu.cpu[i], sizeof(s->fpd.apu.cpu[i]),
+                                XLNX_VERSAL_ACPU_TYPE, &error_abort, NULL);
+        obj = OBJECT(&s->fpd.apu.cpu[i]);
         object_property_set_int(obj, s->cfg.psci_conduit,
                                 "psci-conduit", &error_abort);
         if (i) {
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)
         object_property_set_link(obj, OBJECT(&s->fpd.apu.mr), "memory",
                                  &error_abort);
         object_property_set_bool(obj, true, "realized", &error_fatal);
-        s->fpd.apu.cpu[i] = ARM_CPU(obj);
     }
 }

@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_gic(Versal *s, qemu_irq *pic)
 }

     for (i = 0; i < nr_apu_cpus; i++) {
-        DeviceState *cpudev = DEVICE(s->fpd.apu.cpu[i]);
+        DeviceState *cpudev = DEVICE(&s->fpd.apu.cpu[i]);
         int ppibase = XLNX_VERSAL_NR_IRQS + i * GIC_INTERNAL + GIC_NR_SGIS;
         qemu_irq maint_irq;
         int ti;
--
2.20.1

The Arm MVE VDUP implementation would like to be able to emit code to
duplicate a byte or halfword value into an i32.  We have code to do
this already in tcg-op-gvec.c, so all we need to do is make the
functions global.

For consistency with other functions made available to the frontends:
 * we rename to tcg_gen_dup_*
 * we expose both the _i32 and _i64 forms
 * we provide the #define for a _tl form

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210617121628.20116-10-peter.maydell@linaro.org
---
 include/tcg/tcg-op.h |  8 ++++++++
 include/tcg/tcg.h    |  1 -
 tcg/tcg-op-gvec.c    | 20 ++++++++++----------
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg-op.h
+++ b/include/tcg/tcg-op.h
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umin_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
 void tcg_gen_umax_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
 void tcg_gen_abs_i32(TCGv_i32, TCGv_i32);

+/* Replicate a value of size @vece from @in to all the lanes in @out */
+void tcg_gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in);
+
 static inline void tcg_gen_discard_i32(TCGv_i32 arg)
 {
     tcg_gen_op1_i32(INDEX_op_discard, arg);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umin_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
 void tcg_gen_umax_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
 void tcg_gen_abs_i64(TCGv_i64, TCGv_i64);

+/* Replicate a value of size @vece from @in to all the lanes in @out */
+void tcg_gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in);
+
 #if TCG_TARGET_REG_BITS == 64
 static inline void tcg_gen_discard_i64(TCGv_i64 arg)
 {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
 #define tcg_gen_atomic_smax_fetch_tl tcg_gen_atomic_smax_fetch_i64
 #define tcg_gen_atomic_umax_fetch_tl tcg_gen_atomic_umax_fetch_i64
 #define tcg_gen_dup_tl_vec  tcg_gen_dup_i64_vec
+#define tcg_gen_dup_tl tcg_gen_dup_i64
 #else
 #define tcg_gen_movi_tl tcg_gen_movi_i32
 #define tcg_gen_mov_tl tcg_gen_mov_i32
@@ -XXX,XX +XXX,XX @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
 #define tcg_gen_atomic_smax_fetch_tl tcg_gen_atomic_smax_fetch_i32
 #define tcg_gen_atomic_umax_fetch_tl tcg_gen_atomic_umax_fetch_i32
 #define tcg_gen_dup_tl_vec  tcg_gen_dup_i32_vec
+#define tcg_gen_dup_tl tcg_gen_dup_i32
 #endif

 #if UINTPTR_MAX == UINT32_MAX
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ uint64_t dup_const(unsigned vece, uint64_t c);
     : (qemu_build_not_reached_always(), 0))  \
     : dup_const(VECE, C))

-
 /*
  * Memory helpers that will be used by TCG generated code.
  */
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -XXX,XX +XXX,XX @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
 }

 /* Duplicate IN into OUT as per VECE. */
-static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
+void tcg_gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
 {
     switch (vece) {
     case MO_8:
@@ -XXX,XX +XXX,XX @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
     }
 }

-static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
+void tcg_gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
 {
     switch (vece) {
     case MO_8:
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
             && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
             t_64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(t_64, in_32);
-            gen_dup_i64(vece, t_64, t_64);
+            tcg_gen_dup_i64(vece, t_64, t_64);
         } else {
             t_32 = tcg_temp_new_i32();
-            gen_dup_i32(vece, t_32, in_32);
+            tcg_gen_dup_i32(vece, t_32, in_32);
         }
     } else if (in_64) {
         /* We are given a 64-bit variable input. */
         t_64 = tcg_temp_new_i64();
-        gen_dup_i64(vece, t_64, in_64);
+        tcg_gen_dup_i64(vece, t_64, in_64);
     } else {
         /* We are given a constant input. */
         /* For 64-bit hosts, use 64-bit constants for "simple" constants
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_2s(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
     } else if (g->fni8 && check_size_impl(oprsz, 8)) {
         TCGv_i64 t64 = tcg_temp_new_i64();

-        gen_dup_i64(g->vece, t64, c);
+        tcg_gen_dup_i64(g->vece, t64, c);
         expand_2s_i64(dofs, aofs, oprsz, t64, g->scalar_first, g->fni8);
         tcg_temp_free_i64(t64);
     } else if (g->fni4 && check_size_impl(oprsz, 4)) {
         TCGv_i32 t32 = tcg_temp_new_i32();

         tcg_gen_extrl_i64_i32(t32, c);
-        gen_dup_i32(g->vece, t32, t32);
+        tcg_gen_dup_i32(g->vece, t32, t32);
         expand_2s_i32(dofs, aofs, oprsz, t32, g->scalar_first, g->fni4);
         tcg_temp_free_i32(t32);
     } else {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    gen_dup_i64(vece, tmp, c);
+    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ands);
     tcg_temp_free_i64(tmp);
 }
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    gen_dup_i64(vece, tmp, c);
+    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_xors);
     tcg_temp_free_i64(tmp);
 }
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
                       TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    gen_dup_i64(vece, tmp, c);
+    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ors);
     tcg_temp_free_i64(tmp);
 }
--
2.20.1

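As background for readers unfamiliar with this corner of the TCG API, the
following is a minimal, hypothetical C model of the value that code emitted
by tcg_gen_dup_i32(MO_8, out, in) computes at run time; it is illustrative
only and is not QEMU source:

    #include <stdint.h>

    /* Replicate a byte across a 32-bit word, as the MO_8 case of
     * tcg_gen_dup_i32 arranges: 0x000000ab -> 0xabababab.
     */
    static uint32_t dup8_model(uint8_t b)
    {
        uint32_t v = b;    /* 0x000000ab */
        v |= v << 8;       /* 0x0000abab */
        v |= v << 16;      /* 0xabababab */
        return v;
    }
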
Convert the Neon "load/store single structure to one lane" insns to
decodetree.

As this is the last set of insns in the neon load/store group,
we can remove the whole disas_neon_ls_insn() function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-14-peter.maydell@linaro.org
---
 target/arm/neon-ls.decode       |  11 +++
 target/arm/translate-neon.inc.c |  89 +++++++++++++++++++
 target/arm/translate.c          | 147 --------------------------------
 3 files changed, 100 insertions(+), 147 deletions(-)

diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-ls.decode
+++ b/target/arm/neon-ls.decode
@@ -XXX,XX +XXX,XX @@ VLDST_multiple 1111 0100 0 . l:1 0 rn:4 .... itype:4 size:2 align:2 rm:4 \

 VLD_all_lanes  1111 0100 1 . 1 0 rn:4 .... 11 n:2 size:2 t:1 a:1 rm:4 \
                vd=%vd_dp
+
+# Neon load/store single structure to one lane
+%imm1_5_p1 5:1 !function=plus1
+%imm1_6_p1 6:1 !function=plus1
+
+VLDST_single   1111 0100 1 . l:1 0 rn:4 .... 00 n:2 reg_idx:3 align:1 rm:4 \
+               vd=%vd_dp size=0 stride=1
+VLDST_single   1111 0100 1 . l:1 0 rn:4 .... 01 n:2 reg_idx:2 align:2 rm:4 \
+               vd=%vd_dp size=1 stride=%imm1_5_p1
+VLDST_single   1111 0100 1 . l:1 0 rn:4 .... 10 n:2 reg_idx:1 align:3 rm:4 \
+               vd=%vd_dp size=2 stride=%imm1_6_p1
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@
  * It might be possible to convert it to a standalone .c file eventually.
  */

+static inline int plus1(DisasContext *s, int x)
+{
+    return x + 1;
+}
+
 /* Include the generated Neon decoder */
 #include "decode-neon-dp.inc.c"
 #include "decode-neon-ls.inc.c"
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)

     return true;
 }
+
+static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
+{
+    /* Neon load/store single structure to one lane */
+    int reg;
+    int nregs = a->n + 1;
+    int vd = a->vd;
+    TCGv_i32 addr, tmp;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
+        return false;
+    }
+
+    /* Catch the UNDEF cases. This is unavoidably a bit messy. */
+    switch (nregs) {
+    case 1:
+        if (((a->align & (1 << a->size)) != 0) ||
+            (a->size == 2 && ((a->align & 3) == 1 || (a->align & 3) == 2))) {
+            return false;
+        }
+        break;
+    case 3:
+        if ((a->align & 1) != 0) {
+            return false;
+        }
+        /* fall through */
+    case 2:
+        if (a->size == 2 && (a->align & 2) != 0) {
+            return false;
+        }
+        break;
+    case 4:
+        if ((a->size == 2) && ((a->align & 3) == 3)) {
+            return false;
+        }
+        break;
+    default:
+        abort();
+    }
+    if ((vd + a->stride * (nregs - 1)) > 31) {
+        /*
+         * Attempts to write off the end of the register file are
+         * UNPREDICTABLE; we choose to UNDEF because otherwise we would
+         * access off the end of the array that holds the register data.
+         */
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    tmp = tcg_temp_new_i32();
+    addr = tcg_temp_new_i32();
+    load_reg_var(s, addr, a->rn);
+    /*
+     * TODO: if we implemented alignment exceptions, we should check
+     * addr against the alignment encoded in a->align here.
+     */
+    for (reg = 0; reg < nregs; reg++) {
+        if (a->l) {
+            gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+                            s->be_data | a->size);
+            neon_store_element(vd, a->reg_idx, a->size, tmp);
+        } else { /* Store */
+            neon_load_element(tmp, vd, a->reg_idx, a->size);
+            gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
+                            s->be_data | a->size);
+        }
+        vd += a->stride;
+        tcg_gen_addi_i32(addr, addr, 1 << a->size);
+    }
+    tcg_temp_free_i32(addr);
+    tcg_temp_free_i32(tmp);
+
+    gen_neon_ldst_base_update(s, a->rm, a->rn, (1 << a->size) * nregs);
+
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_neon_trn_u16(TCGv_i32 t0, TCGv_i32 t1)
     tcg_temp_free_i32(rd);
 }

-
-/* Translate a NEON load/store element instruction.  Return nonzero if the
-   instruction is invalid.  */
-static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
-{
-    int rd, rn, rm;
-    int nregs;
-    int stride;
-    int size;
-    int reg;
-    int load;
-    TCGv_i32 addr;
-    TCGv_i32 tmp;
-
-    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return 1;
-    }
-
-    /* FIXME: this access check should not take precedence over UNDEF
-     * for invalid encodings; we will generate incorrect syndrome information
-     * for attempts to execute invalid vfp/neon encodings with FP disabled.
-     */
-    if (s->fp_excp_el) {
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
-                           syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
-        return 0;
-    }
-
-    if (!s->vfp_enabled)
-        return 1;
-    VFP_DREG_D(rd, insn);
-    rn = (insn >> 16) & 0xf;
-    rm = insn & 0xf;
-    load = (insn & (1 << 21)) != 0;
-    if ((insn & (1 << 23)) == 0) {
-        /* Load store all elements -- handled already by decodetree */
-        return 1;
-    } else {
-        size = (insn >> 10) & 3;
-        if (size == 3) {
-            /* Load single element to all lanes -- handled by decodetree */
-            return 1;
-        } else {
-            /* Single element. */
-            int idx = (insn >> 4) & 0xf;
-            int reg_idx;
-            switch (size) {
-            case 0:
-                reg_idx = (insn >> 5) & 7;
-                stride = 1;
-                break;
-            case 1:
-                reg_idx = (insn >> 6) & 3;
-                stride = (insn & (1 << 5)) ? 2 : 1;
-                break;
-            case 2:
-                reg_idx = (insn >> 7) & 1;
-                stride = (insn & (1 << 6)) ? 2 : 1;
-                break;
-            default:
-                abort();
-            }
-            nregs = ((insn >> 8) & 3) + 1;
-            /* Catch the UNDEF cases. This is unavoidably a bit messy. */
-            switch (nregs) {
-            case 1:
-                if (((idx & (1 << size)) != 0) ||
-                    (size == 2 && ((idx & 3) == 1 || (idx & 3) == 2))) {
-                    return 1;
-                }
-                break;
-            case 3:
-                if ((idx & 1) != 0) {
-                    return 1;
-                }
-                /* fall through */
-            case 2:
-                if (size == 2 && (idx & 2) != 0) {
-                    return 1;
-                }
-                break;
-            case 4:
-                if ((size == 2) && ((idx & 3) == 3)) {
-                    return 1;
-                }
-                break;
-            default:
-                abort();
-            }
-            if ((rd + stride * (nregs - 1)) > 31) {
-                /* Attempts to write off the end of the register file
-                 * are UNPREDICTABLE; we choose to UNDEF because otherwise
-                 * the neon_load_reg() would write off the end of the array.
-                 */
-                return 1;
-            }
-            tmp = tcg_temp_new_i32();
-            addr = tcg_temp_new_i32();
-            load_reg_var(s, addr, rn);
-            for (reg = 0; reg < nregs; reg++) {
-                if (load) {
-                    gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
-                                    s->be_data | size);
-                    neon_store_element(rd, reg_idx, size, tmp);
-                } else { /* Store */
-                    neon_load_element(tmp, rd, reg_idx, size);
-                    gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
-                                    s->be_data | size);
-                }
-                rd += stride;
-                tcg_gen_addi_i32(addr, addr, 1 << size);
-            }
-            tcg_temp_free_i32(addr);
-            tcg_temp_free_i32(tmp);
-            stride = nregs * (1 << size);
-        }
-    }
-    if (rm != 15) {
-        TCGv_i32 base;
-
-        base = load_reg(s, rn);
-        if (rm == 13) {
-            tcg_gen_addi_i32(base, base, stride);
-        } else {
-            TCGv_i32 index;
-            index = load_reg(s, rm);
-            tcg_gen_add_i32(base, base, index);
-            tcg_temp_free_i32(index);
-        }
-        store_reg(s, rn, base);
-    }
-    return 0;
-}
-
 static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
 {
     switch (size) {
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
         }
         return;
     }
-    if ((insn & 0x0f100000) == 0x04000000) {
-        /* NEON load/store. */
-        if (disas_neon_ls_insn(s, insn)) {
-            goto illegal_op;
-        }
-        return;
-    }
     if ((insn & 0x0e000f00) == 0x0c000100) {
         if (arm_dc_feature(s, ARM_FEATURE_IWMMXT)) {
             /* iWMMXt register transfer. */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
         }
         break;
     case 12:
-        if ((insn & 0x01100000) == 0x01000000) {
-            if (disas_neon_ls_insn(s, insn)) {
-                goto illegal_op;
-            }
-            break;
-        }
         goto illegal_op;
     default:
     illegal_op:
--
2.20.1

Implement the MVE VDUP insn, which duplicates a value from
a general-purpose register into every lane of a vector
register (subject to predication).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-11-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  2 ++
 target/arm/mve.decode      | 10 ++++++++++
 target/arm/mve_helper.c    | 16 ++++++++++++++++
 target/arm/translate-mve.c | 27 +++++++++++++++++++++++++++
 4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 %qd 22:1 13:3
 %qm 5:1 1:3
+%qn 7:1 17:3

 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
@@ -XXX,XX +XXX,XX @@ VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
 VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
 VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
 VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
+
+&vdup qd rt size
+# Qd is in the fields usually named Qn
+@vdup .... .... . . .. ... . rt:4 .... . . . . .... qd=%qn &vdup
+
+# B and E bits encode size, which we decode here to the usual size values
+VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
+VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
+VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
         uint64_t *: mergemask_uq,               \
         int64_t *:  mergemask_sq)(D, R, M)

+void HELPER(mve_vdup)(CPUARMState *env, void *vd, uint32_t val)
+{
+    /*
+     * The generated code already replicated an 8 or 16 bit constant
+     * into the 32-bit value, so we only need to write the 32-bit
+     * value to all elements of the Qreg, allowing for predication.
+     */
+    uint32_t *d = vd;
+    uint16_t mask = mve_element_mask(env);
+    unsigned e;
+    for (e = 0; e < 16 / 4; e++, mask >>= 4) {
+        mergemask(&d[H4(e)], val, mask);
+    }
+    mve_advance_vpt(env);
+}
+
 #define DO_1OP(OP, ESIZE, TYPE, FN)                                     \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm)         \
     {                                                                   \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
 DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
 DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)

+static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
+{
+    TCGv_ptr qd;
+    TCGv_i32 rt;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd)) {
+        return false;
+    }
+    if (a->rt == 13 || a->rt == 15) {
+        /* UNPREDICTABLE; we choose to UNDEF */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    rt = load_reg(s, a->rt);
+    tcg_gen_dup_i32(a->size, rt, rt);
+    gen_helper_mve_vdup(cpu_env, qd, rt);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_i32(rt);
+    mve_update_eci(s);
+    return true;
+}
+
 static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
 {
     TCGv_ptr qd, qm;
--
2.20.1

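A rough C model of the predicated write the VDUP helper performs (a sketch
only: it assumes the one-predicate-bit-per-byte mask layout returned by
mve_element_mask() and a little-endian lane layout):

    #include <stdint.h>

    /* Write the already-replicated 32-bit value into each byte lane of a
     * 16-byte Q register whose predicate bit is set; predicated-off bytes
     * keep their old contents, which is what mergemask() implements.
     */
    static void vdup_model(uint8_t q[16], uint32_t val, uint16_t mask)
    {
        for (int byte = 0; byte < 16; byte++) {
            if (mask & (1u << byte)) {
                q[byte] = (uint8_t)(val >> ((byte % 4) * 8));
            }
        }
    }
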
Convert the VCADD (vector) insns to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-6-peter.maydell@linaro.org
---
 target/arm/neon-shared.decode   |  3 +++
 target/arm/translate-neon.inc.c | 37 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 11 +---------
 3 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -XXX,XX +XXX,XX @@

 VCMLA 1111 110 rot:2 . 1 size:1 .... .... 1000 . q:1 . 0 .... \
       vm=%vm_dp vn=%vn_dp vd=%vd_dp
+
+VCADD 1111 110 rot:1 1 . 0 size:1 .... .... 1000 . q:1 . 0 .... \
+      vm=%vm_dp vn=%vn_dp vd=%vd_dp
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
     tcg_temp_free_ptr(fpst);
     return true;
 }
+
+static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
+{
+    int opr_sz;
+    TCGv_ptr fpst;
+    gen_helper_gvec_3_ptr *fn_gvec_ptr;
+
+    if (!dc_isar_feature(aa32_vcma, s)
+        || (!a->size && !dc_isar_feature(aa32_fp16_arith, s))) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vn | a->vm | a->vd) & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    opr_sz = (1 + a->q) * 8;
+    fpst = get_fpstatus_ptr(1);
+    fn_gvec_ptr = a->size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
+    tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
+                       vfp_reg_offset(1, a->vn),
+                       vfp_reg_offset(1, a->vm),
+                       fpst, opr_sz, opr_sz, a->rot,
+                       fn_gvec_ptr);
+    tcg_temp_free_ptr(fpst);
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
     bool is_long = false, q = extract32(insn, 6, 1);
     bool ptr_is_env = false;

-    if ((insn & 0xfea00f10) == 0xfc800800) {
-        /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
-        int size = extract32(insn, 20, 1);
-        data = extract32(insn, 24, 1); /* rot */
-        if (!dc_isar_feature(aa32_vcma, s)
-            || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
-            return 1;
-        }
-        fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
-    } else if ((insn & 0xfeb00f00) == 0xfc200d00) {
+    if ((insn & 0xfeb00f00) == 0xfc200d00) {
         /* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
         bool u = extract32(insn, 4, 1);
         if (!dc_isar_feature(aa32_dp, s)) {
--
2.20.1

Implement the MVE vector logical operations operating
on two registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-12-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  6 ++++++
 target/arm/mve.decode      |  9 +++++++++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c | 37 +++++++++++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
+&2op qd qm qn size

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@

 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
+@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
 VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
           size=2 p=1

+# Vector 2-op
+VAND 1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VBIC 1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VORR 1110 1111 0 . 10 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VORN 1110 1111 0 . 11 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VEOR 1111 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vnegw, 4, int32_t, DO_NEG)
 /* We can do these 64 bits at a time */
 DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
 DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
+
+#define DO_2OP(OP, ESIZE, TYPE, FN)                                     \
+    void HELPER(glue(mve_, OP))(CPUARMState *env,                       \
+                                void *vd, void *vn, void *vm)           \
+    {                                                                   \
+        TYPE *d = vd, *n = vn, *m = vm;                                 \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            mergemask(&d[H##ESIZE(e)],                                  \
+                      FN(n[H##ESIZE(e)], m[H##ESIZE(e)]), mask);        \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_AND(N, M)  ((N) & (M))
+#define DO_BIC(N, M)  ((N) & ~(M))
+#define DO_ORR(N, M)  ((N) | (M))
+#define DO_ORN(N, M)  ((N) | ~(M))
+#define DO_EOR(N, M)  ((N) ^ (M))
+
+DO_2OP(vand, 8, uint64_t, DO_AND)
+DO_2OP(vbic, 8, uint64_t, DO_BIC)
+DO_2OP(vorr, 8, uint64_t, DO_ORR)
+DO_2OP(vorn, 8, uint64_t, DO_ORN)
+DO_2OP(veor, 8, uint64_t, DO_EOR)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@

 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
+typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
     }
     return do_1op(s, a, fns[a->size]);
 }
+
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
+{
+    TCGv_ptr qd, qn, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qn | a->qm) ||
+        !fn) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qn = mve_qreg_ptr(a->qn);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qd, qn, qm);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_LOGIC(INSN, HELPER)                                  \
+    static bool trans_##INSN(DisasContext *s, arg_2op *a)       \
+    {                                                           \
+        return do_2op(s, a, HELPER);                            \
+    }
+
+DO_LOGIC(VAND, gen_helper_mve_vand)
+DO_LOGIC(VBIC, gen_helper_mve_vbic)
+DO_LOGIC(VORR, gen_helper_mve_vorr)
+DO_LOGIC(VORN, gen_helper_mve_vorn)
+DO_LOGIC(VEOR, gen_helper_mve_veor)
--
2.20.1

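For concreteness, this is roughly what one instantiation of the DO_2OP macro
above expands to. This is a hand-expanded sketch of the vand case, not
generated source; it assumes the H8() lane macro and the mergemask()/
mve_element_mask()/mve_advance_vpt() helpers already in mve_helper.c:

    void helper_vand_sketch(CPUARMState *env, void *vd, void *vn, void *vm)
    {
        uint64_t *d = vd, *n = vn, *m = vm;
        uint16_t mask = mve_element_mask(env);
        unsigned e;

        /* Two 64-bit chunks cover the 16-byte Q register; the mask is
         * consumed 8 predicate bits (one per byte) at a time.
         */
        for (e = 0; e < 16 / 8; e++, mask >>= 8) {
            mergemask(&d[H8(e)], n[H8(e)] & m[H8(e)], mask);
        }
        mve_advance_vpt(env);
    }
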
For ARMv8.2-TTS2UXN, the stage 2 page table walk wants to know
whether the stage 1 access is for EL0 or not, because whether
exec permission is given can depend on whether this is an EL0
or EL1 access. Add a new argument to get_phys_addr_lpae() so
the call sites can pass this information in.

Since get_phys_addr_lpae() doesn't already have a doc comment,
add one so we have a place to put the documentation of the
semantics of the new s1_is_el0 argument.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@

 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool s1_is_el0,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
                                target_ulong *page_size_ptr,
                                ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     }

     ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, ARMMMUIdx_Stage2,
+                             false,
                              &s2pa, &txattrs, &s2prot, &s2size, fi,
                              pcacheattrs);
     if (ret) {
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
     };
 }

+/**
+ * get_phys_addr_lpae: perform one stage of page table walk, LPAE format
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a long-format
+ * DFSR/IFSR fault register, with the following caveats:
+ *  * the WnR bit is never set (the caller must do this).
+ *
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
+ * @mmu_idx: MMU index indicating required translation regime
+ * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page table
+ *             walk), must be true if this is stage 2 of a stage 1+2 walk for an
+ *             EL0 access. If @mmu_idx is anything else, @s1_is_el0 is ignored.
+ * @phys_ptr: set to the physical address corresponding to the virtual address
+ * @attrs: set to the memory transaction attributes to use
+ * @prot: set to the permissions for the page containing phys_ptr
+ * @page_size_ptr: set to the size of the page containing phys_ptr
+ * @fi: set to fault info if the translation fails
+ * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
+ */
 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool s1_is_el0,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
                                target_ulong *page_size_ptr,
                                ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

             /* S1 is done. Now do S2 translation. */
             ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_Stage2,
+                                     mmu_idx == ARMMMUIdx_E10_0,
                                      phys_ptr, attrs, &s2_prot,
                                      page_size, fi,
                                      cacheattrs != NULL ? &cacheattrs2 : NULL);
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }

     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
                                   phys_ptr, attrs, prot, page_size,
                                   fi, cacheattrs);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
--
2.20.1

Implement the MVE VADD, VSUB and VMUL insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-13-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 12 ++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 14 ++++++++++++++
 target/arm/translate-mve.c | 16 ++++++++++++++++
 4 files changed, 47 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vsubb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vsubh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vsubw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmulb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
+@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VORR 1110 1111 0 . 10 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
 VORN 1110 1111 0 . 11 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
 VEOR 1111 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz

+VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
+VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
+VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
         mve_advance_vpt(env);                                           \
     }

+/* provide unsigned 2-op helpers for all sizes */
+#define DO_2OP_U(OP, FN)                        \
+    DO_2OP(OP##b, 1, uint8_t, FN)               \
+    DO_2OP(OP##h, 2, uint16_t, FN)              \
+    DO_2OP(OP##w, 4, uint32_t, FN)
+
 #define DO_AND(N, M)  ((N) & (M))
 #define DO_BIC(N, M)  ((N) & ~(M))
 #define DO_ORR(N, M)  ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP(vbic, 8, uint64_t, DO_BIC)
 DO_2OP(vorr, 8, uint64_t, DO_ORR)
 DO_2OP(vorn, 8, uint64_t, DO_ORN)
 DO_2OP(veor, 8, uint64_t, DO_EOR)
+
+#define DO_ADD(N, M) ((N) + (M))
+#define DO_SUB(N, M) ((N) - (M))
+#define DO_MUL(N, M) ((N) * (M))
+
+DO_2OP_U(vadd, DO_ADD)
+DO_2OP_U(vsub, DO_SUB)
+DO_2OP_U(vmul, DO_MUL)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VBIC, gen_helper_mve_vbic)
 DO_LOGIC(VORR, gen_helper_mve_vorr)
 DO_LOGIC(VORN, gen_helper_mve_vorn)
 DO_LOGIC(VEOR, gen_helper_mve_veor)
+
+#define DO_2OP(INSN, FN)                                        \
+    static bool trans_##INSN(DisasContext *s, arg_2op *a)       \
+    {                                                           \
+        static MVEGenTwoOpFn * const fns[] = {                  \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_2op(s, a, fns[a->size]);                      \
+    }
+
+DO_2OP(VADD, vadd)
+DO_2OP(VSUB, vsub)
+DO_2OP(VMUL, vmul)
--
2.20.1

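A quick worked model of the element-wise, modulo arithmetic these helpers
perform, using the byte case (illustrative C only, with all predicate bits
assumed set):

    #include <stdint.h>

    /* VADD.I8-style lanewise add: each byte lane wraps independently,
     * e.g. 0xff + 0x02 -> 0x01 in that lane without touching its
     * neighbours.
     */
    static void vadd8_model(uint8_t d[16], const uint8_t n[16],
                            const uint8_t m[16])
    {
        for (int e = 0; e < 16; e++) {
            d[e] = (uint8_t)(n[e] + m[e]);
        }
    }
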
Implement the MVE VMULH insn, which performs a vector
multiply and returns the high half of the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-14-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 38 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vsubw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmulhsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
 VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
 VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op

+VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
+VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(veor, 8, uint64_t, DO_EOR)
 DO_2OP_U(vadd, DO_ADD)
 DO_2OP_U(vsub, DO_SUB)
 DO_2OP_U(vmul, DO_MUL)
+
+/*
+ * Because the computation type is at least twice as large as required,
+ * these work for both signed and unsigned source types.
+ */
+static inline uint8_t do_mulh_b(int32_t n, int32_t m)
+{
+    return (n * m) >> 8;
+}
+
+static inline uint16_t do_mulh_h(int32_t n, int32_t m)
+{
+    return (n * m) >> 16;
+}
+
+static inline uint32_t do_mulh_w(int64_t n, int64_t m)
+{
+    return (n * m) >> 32;
+}
+
+DO_2OP(vmulhsb, 1, int8_t, do_mulh_b)
+DO_2OP(vmulhsh, 2, int16_t, do_mulh_h)
+DO_2OP(vmulhsw, 4, int32_t, do_mulh_w)
+DO_2OP(vmulhub, 1, uint8_t, do_mulh_b)
+DO_2OP(vmulhuh, 2, uint16_t, do_mulh_h)
+DO_2OP(vmulhuw, 4, uint32_t, do_mulh_w)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VEOR, gen_helper_mve_veor)
 DO_2OP(VADD, vadd)
 DO_2OP(VSUB, vsub)
 DO_2OP(VMUL, vmul)
+DO_2OP(VMULH_S, vmulhs)
+DO_2OP(VMULH_U, vmulhu)
--
2.20.1

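The do_mulh_* helpers rely on the computation type being at least twice the
element width, so one expression serves both signednesses. A worked byte
example (illustrative C, not patch code):

    #include <stdint.h>

    /* 8x8->16 multiply, keep the high 8 bits: with n = 200 and m = 100
     * (both widened to int32_t on the way in), n * m = 20000 = 0x4e20,
     * and 0x4e20 >> 8 = 0x4e = 78.
     */
    static uint8_t mulh_b_model(int32_t n, int32_t m)
    {
        return (uint8_t)((n * m) >> 8);
    }
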
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
Implement the MVE VRMULH insn, which performs a rounding multiply
2
and then returns the high half.
2
3
3
Add support for the RTC.
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210617121628.20116-15-peter.maydell@linaro.org
7
---
8
target/arm/helper-mve.h | 7 +++++++
9
target/arm/mve.decode | 3 +++
10
target/arm/mve_helper.c | 22 ++++++++++++++++++++++
11
target/arm/translate-mve.c | 2 ++
12
4 files changed, 34 insertions(+)
4
13
5
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
6
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
7
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
8
Message-id: 20200427181649.26851-12-edgar.iglesias@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/arm/xlnx-versal-virt.c | 22 ++++++++++++++++++++++
12
1 file changed, 22 insertions(+)
13
14
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/xlnx-versal-virt.c
16
--- a/target/arm/helper-mve.h
17
+++ b/hw/arm/xlnx-versal-virt.c
17
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ static void fdt_add_sd_nodes(VersalVirt *s)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
}
19
DEF_HELPER_FLAGS_4(mve_vmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
DEF_HELPER_FLAGS_4(mve_vmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
DEF_HELPER_FLAGS_4(mve_vmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
+
23
+DEF_HELPER_FLAGS_4(mve_vrmulhsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
24
+DEF_HELPER_FLAGS_4(mve_vrmulhsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
+DEF_HELPER_FLAGS_4(mve_vrmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
+DEF_HELPER_FLAGS_4(mve_vrmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
+DEF_HELPER_FLAGS_4(mve_vrmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
28
+DEF_HELPER_FLAGS_4(mve_vrmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
32
+++ b/target/arm/mve.decode
33
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
34
VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
35
VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
36
37
+VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
38
+VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
39
+
40
# Vector miscellaneous
41
42
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
43
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/mve_helper.c
46
+++ b/target/arm/mve_helper.c
47
@@ -XXX,XX +XXX,XX @@ static inline uint32_t do_mulh_w(int64_t n, int64_t m)
48
return (n * m) >> 32;
20
}
49
}
21
50
22
+static void fdt_add_rtc_node(VersalVirt *s)
51
+static inline uint8_t do_rmulh_b(int32_t n, int32_t m)
23
+{
52
+{
24
+ const char compat[] = "xlnx,zynqmp-rtc";
53
+ return (n * m + (1U << 7)) >> 8;
25
+ const char interrupt_names[] = "alarm\0sec";
26
+ char *name = g_strdup_printf("/rtc@%x", MM_PMC_RTC);
27
+
28
+ qemu_fdt_add_subnode(s->fdt, name);
29
+
30
+ qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
31
+ GIC_FDT_IRQ_TYPE_SPI, VERSAL_RTC_ALARM_IRQ,
32
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI,
33
+ GIC_FDT_IRQ_TYPE_SPI, VERSAL_RTC_SECONDS_IRQ,
34
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
35
+ qemu_fdt_setprop(s->fdt, name, "interrupt-names",
36
+ interrupt_names, sizeof(interrupt_names));
37
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
38
+ 2, MM_PMC_RTC, 2, MM_PMC_RTC_SIZE);
39
+ qemu_fdt_setprop(s->fdt, name, "compatible", compat, sizeof(compat));
40
+ g_free(name);
41
+}
54
+}
42
+
55
+
43
static void fdt_nop_memory_nodes(void *fdt, Error **errp)
56
+static inline uint16_t do_rmulh_h(int32_t n, int32_t m)
44
{
57
+{
45
Error *err = NULL;
58
+ return (n * m + (1U << 15)) >> 16;
46
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
59
+}
47
fdt_add_timer_nodes(s);
60
+
48
fdt_add_zdma_nodes(s);
61
+static inline uint32_t do_rmulh_w(int64_t n, int64_t m)
49
fdt_add_sd_nodes(s);
62
+{
50
+ fdt_add_rtc_node(s);
63
+ return (n * m + (1U << 31)) >> 32;
51
fdt_add_cpu_nodes(s, psci_conduit);
64
+}
52
fdt_add_clk_node(s, "/clk125", 125000000, s->phandle.clk_125Mhz);
65
+
53
fdt_add_clk_node(s, "/clk25", 25000000, s->phandle.clk_25Mhz);
66
DO_2OP(vmulhsb, 1, int8_t, do_mulh_b)
67
DO_2OP(vmulhsh, 2, int16_t, do_mulh_h)
68
DO_2OP(vmulhsw, 4, int32_t, do_mulh_w)
69
DO_2OP(vmulhub, 1, uint8_t, do_mulh_b)
70
DO_2OP(vmulhuh, 2, uint16_t, do_mulh_h)
71
DO_2OP(vmulhuw, 4, uint32_t, do_mulh_w)
72
+
73
+DO_2OP(vrmulhsb, 1, int8_t, do_rmulh_b)
74
+DO_2OP(vrmulhsh, 2, int16_t, do_rmulh_h)
75
+DO_2OP(vrmulhsw, 4, int32_t, do_rmulh_w)
76
+DO_2OP(vrmulhub, 1, uint8_t, do_rmulh_b)
77
+DO_2OP(vrmulhuh, 2, uint16_t, do_rmulh_h)
78
+DO_2OP(vrmulhuw, 4, uint32_t, do_rmulh_w)
79
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/target/arm/translate-mve.c
82
+++ b/target/arm/translate-mve.c
83
@@ -XXX,XX +XXX,XX @@ DO_2OP(VSUB, vsub)
84
DO_2OP(VMUL, vmul)
85
DO_2OP(VMULH_S, vmulhs)
86
DO_2OP(VMULH_U, vmulhu)
87
+DO_2OP(VRMULH_S, vrmulhs)
88
+DO_2OP(VRMULH_U, vrmulhu)
54
--
89
--
55
2.20.1
90
2.20.1
56
91
57
92
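The rounding constant in each do_rmulh_* helper is half the weight of the
bits being discarded. A worked example for the byte case (illustrative C):

    #include <stdint.h>

    /* Rounding high-half multiply: 100 * 100 = 10000 = 0x2710;
     * 0x2710 + 0x80 = 0x2790, and 0x2790 >> 8 = 0x27 = 39, i.e.
     * 39.0625 rounded to nearest. Without the +0x80 bias the
     * fractional part would always truncate downwards.
     */
    static uint8_t rmulh_b_model(int32_t n, int32_t m)
    {
        return (uint8_t)((n * m + (1 << 7)) >> 8);
    }
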
Implement the MVE VMAX and VMIN insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-16-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 14 ++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 14 ++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 37 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vrmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vrmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vrmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmaxsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vminsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
 VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
 VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op

+VMAX_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
+VMAX_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
+VMIN_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
+VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
     DO_2OP(OP##h, 2, uint16_t, FN)              \
     DO_2OP(OP##w, 4, uint32_t, FN)

+/* provide signed 2-op helpers for all sizes */
+#define DO_2OP_S(OP, FN)                        \
+    DO_2OP(OP##b, 1, int8_t, FN)                \
+    DO_2OP(OP##h, 2, int16_t, FN)               \
+    DO_2OP(OP##w, 4, int32_t, FN)
+
 #define DO_AND(N, M)  ((N) & (M))
 #define DO_BIC(N, M)  ((N) & ~(M))
 #define DO_ORR(N, M)  ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP(vrmulhsw, 4, int32_t, do_rmulh_w)
 DO_2OP(vrmulhub, 1, uint8_t, do_rmulh_b)
 DO_2OP(vrmulhuh, 2, uint16_t, do_rmulh_h)
 DO_2OP(vrmulhuw, 4, uint32_t, do_rmulh_w)
+
+#define DO_MAX(N, M)  ((N) >= (M) ? (N) : (M))
+#define DO_MIN(N, M)  ((N) >= (M) ? (M) : (N))
+
+DO_2OP_S(vmaxs, DO_MAX)
+DO_2OP_U(vmaxu, DO_MAX)
+DO_2OP_S(vmins, DO_MIN)
+DO_2OP_U(vminu, DO_MIN)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULH_S, vmulhs)
 DO_2OP(VMULH_U, vmulhu)
 DO_2OP(VRMULH_S, vrmulhs)
 DO_2OP(VRMULH_U, vrmulhu)
+DO_2OP(VMAX_S, vmaxs)
+DO_2OP(VMAX_U, vmaxu)
+DO_2OP(VMIN_S, vmins)
+DO_2OP(VMIN_U, vminu)
--
2.20.1

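The same DO_MAX/DO_MIN expressions give signed or unsigned behaviour purely
from the element type the macro is instantiated with. A small standalone C
illustration of why that works:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t  sn = -1, sm = 1;    /* bit patterns 0xff and 0x01 */
        uint8_t un = 0xff, um = 1;

        /* Same expression as DO_MAX, different operand types: */
        printf("signed max:   %d\n", sn >= sm ? sn : sm);   /* prints 1 */
        printf("unsigned max: %u\n", un >= um ? un : um);   /* prints 255 */
        return 0;
    }
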
Implement the MVE VABD insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-17-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 7 +++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 5 +++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 17 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vminsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vabdsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMAX_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
 VMIN_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
 VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op

+VABD_S 111 0 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
+VABD_U 111 1 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vmaxs, DO_MAX)
 DO_2OP_U(vmaxu, DO_MAX)
 DO_2OP_S(vmins, DO_MIN)
 DO_2OP_U(vminu, DO_MIN)
+
+#define DO_ABD(N, M) ((N) >= (M) ? (N) - (M) : (M) - (N))
+
+DO_2OP_S(vabds, DO_ABD)
+DO_2OP_U(vabdu, DO_ABD)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMAX_S, vmaxs)
 DO_2OP(VMAX_U, vmaxu)
 DO_2OP(VMIN_S, vmins)
 DO_2OP(VMIN_U, vminu)
+DO_2OP(VABD_S, vabds)
+DO_2OP(VABD_U, vabdu)
--
2.20.1

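DO_ABD evaluates in the promoted type before the result is narrowed back
into the lane, so the subtraction cannot wrap. A hedged C model of the worst
signed byte case:

    #include <stdint.h>

    /* With n = -128 and m = 127 the difference is 255, which only fits
     * because the int8_t operands are promoted to int before subtracting;
     * narrowing 255 back into the 8-bit lane then yields the correct
     * bit pattern 0xff.
     */
    static uint8_t vabd_s8_model(int8_t n, int8_t m)
    {
        return (uint8_t)(n >= m ? n - m : m - n);
    }
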
Implement the MVE VHADD and VHSUB insns, which perform an addition
or subtraction and then halve the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-18-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 14 ++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 25 +++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 48 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
 VABD_S 111 0 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
 VABD_U 111 1 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op

+VHADD_S 111 0 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
+VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
+VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
+VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vminu, DO_MIN)

 DO_2OP_S(vabds, DO_ABD)
 DO_2OP_U(vabdu, DO_ABD)
+
+static inline uint32_t do_vhadd_u(uint32_t n, uint32_t m)
+{
+    return ((uint64_t)n + m) >> 1;
+}
+
+static inline int32_t do_vhadd_s(int32_t n, int32_t m)
+{
+    return ((int64_t)n + m) >> 1;
+}
+
+static inline uint32_t do_vhsub_u(uint32_t n, uint32_t m)
+{
+    return ((uint64_t)n - m) >> 1;
+}
+
+static inline int32_t do_vhsub_s(int32_t n, int32_t m)
+{
+    return ((int64_t)n - m) >> 1;
+}
+
+DO_2OP_S(vhadds, do_vhadd_s)
+DO_2OP_U(vhaddu, do_vhadd_u)
+DO_2OP_S(vhsubs, do_vhsub_s)
+DO_2OP_U(vhsubu, do_vhsub_u)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMIN_S, vmins)
 DO_2OP(VMIN_U, vminu)
 DO_2OP(VABD_S, vabds)
 DO_2OP(VABD_U, vabdu)
+DO_2OP(VHADD_S, vhadds)
+DO_2OP(VHADD_U, vhaddu)
+DO_2OP(VHSUB_S, vhsubs)
+DO_2OP(VHSUB_U, vhsubu)
--
2.20.1

diff view generated by jsdifflib
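The helpers above compute the sum or difference at double width before shifting, so the transient carry or borrow out of the top bit is kept. A hedged standalone check of that behaviour (reference code, not part of the patch):

#include <stdint.h>
#include <assert.h>

/* Mirrors do_vhadd_u: widen to 64 bits so the carry out of bit 31 survives */
static uint32_t vhadd_u32_ref(uint32_t n, uint32_t m)
{
    return (uint32_t)(((uint64_t)n + m) >> 1);
}

int main(void)
{
    /* 0xffffffff + 3 = 0x100000002; halving gives 0x80000001, not 1 */
    assert(vhadd_u32_ref(0xffffffffu, 3) == 0x80000001u);
    return 0;
}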
Implement the MVE VMULL insn, which multiplies two single
width integer elements to produce a double width result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-19-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 14 ++++++++++++++
 target/arm/mve.decode | 5 +++++
 target/arm/mve_helper.c | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 4 ++++
 4 files changed, 57 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmullbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmulltsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
 VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 
+VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+
 # Vector miscellaneous
 
 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
     DO_2OP(OP##h, 2, int16_t, FN) \
     DO_2OP(OP##w, 4, int32_t, FN)
 
+/*
+ * "Long" operations where two half-sized inputs (taken from either the
+ * top or the bottom of the input vector) produce a double-width result.
+ * Here ESIZE, TYPE are for the input, and LESIZE, LTYPE for the output.
+ */
+#define DO_2OP_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    { \
+        LTYPE *d = vd; \
+        TYPE *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned le; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            LTYPE r = FN((LTYPE)n[H##ESIZE(le * 2 + TOP)], \
+                         m[H##ESIZE(le * 2 + TOP)]); \
+            mergemask(&d[H##LESIZE(le)], r, mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vadd, DO_ADD)
 DO_2OP_U(vsub, DO_SUB)
 DO_2OP_U(vmul, DO_MUL)
 
+DO_2OP_L(vmullbsb, 0, 1, int8_t, 2, int16_t, DO_MUL)
+DO_2OP_L(vmullbsh, 0, 2, int16_t, 4, int32_t, DO_MUL)
+DO_2OP_L(vmullbsw, 0, 4, int32_t, 8, int64_t, DO_MUL)
+DO_2OP_L(vmullbub, 0, 1, uint8_t, 2, uint16_t, DO_MUL)
+DO_2OP_L(vmullbuh, 0, 2, uint16_t, 4, uint32_t, DO_MUL)
+DO_2OP_L(vmullbuw, 0, 4, uint32_t, 8, uint64_t, DO_MUL)
+
+DO_2OP_L(vmulltsb, 1, 1, int8_t, 2, int16_t, DO_MUL)
+DO_2OP_L(vmulltsh, 1, 2, int16_t, 4, int32_t, DO_MUL)
+DO_2OP_L(vmulltsw, 1, 4, int32_t, 8, int64_t, DO_MUL)
+DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL)
+DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL)
+DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL)
+
 /*
  * Because the computation type is at least twice as large as required,
  * these work for both signed and unsigned source types.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VHADD_S, vhadds)
 DO_2OP(VHADD_U, vhaddu)
 DO_2OP(VHSUB_S, vhsubs)
 DO_2OP(VHSUB_U, vhsubu)
+DO_2OP(VMULL_BS, vmullbs)
+DO_2OP(VMULL_BU, vmullbu)
+DO_2OP(VMULL_TS, vmullts)
+DO_2OP(VMULL_TU, vmulltu)
--
2.20.1
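The DO_2OP_L indexing is the subtle part: output element le is produced from input element le * 2 + TOP, i.e. from the bottom (TOP=0) or top (TOP=1) half of each pair of single-width elements. A hedged reference model of that pairing (function name invented for illustration, not QEMU code):

#include <stdint.h>
#include <stdio.h>

/* Output lane le comes from input lane le * 2 + top, widened before multiply */
static void vmull_s8_ref(int16_t d[8], const int8_t n[16],
                         const int8_t m[16], int top)
{
    for (int le = 0; le < 8; le++) {
        int idx = le * 2 + top;
        d[le] = (int16_t)n[idx] * m[idx];
    }
}

int main(void)
{
    int8_t n[16], m[16];
    int16_t d[8];
    for (int i = 0; i < 16; i++) {
        n[i] = (int8_t)i;
        m[i] = 2;
    }
    vmull_s8_ref(d, n, m, 1);       /* "top": odd-numbered input lanes */
    printf("%d\n", d[0]);           /* n[1] * m[1] = 2 */
    return 0;
}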
Implement the MVE VMLALDAV insn, which multiplies pairs of integer
elements, accumulating them into a 64-bit result in a pair of
general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-20-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 8 ++++
 target/arm/translate.h | 10 ++++
 target/arm/mve.decode | 15 ++++
 target/arm/mve_helper.c | 34 ++++
 target/arm/translate-mve.c | 96 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 163 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vmlaldavuh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline int negate(DisasContext *s, int x)
     return -x;
 }
 
+static inline int plus_1(DisasContext *s, int x)
+{
+    return x + 1;
+}
+
 static inline int plus_2(DisasContext *s, int x)
 {
     return x + 2;
@@ -XXX,XX +XXX,XX @@ static inline int times_4(DisasContext *s, int x)
     return x * 4;
 }
 
+static inline int times_2_plus_1(DisasContext *s, int x)
+{
+    return x * 2 + 1;
+}
+
 static inline int arm_dc_feature(DisasContext *dc, int feature)
 {
     return (dc->features & (1ULL << feature)) != 0;
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
 VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
+
+# multiply-add long dual accumulate
+# rdahi: bits [3:1] from insn, bit 0 is 1
+# rdalo: bits [3:1] from insn, bit 0 is 0
+%rdahi 20:3 !function=times_2_plus_1
+%rdalo 13:3 !function=times_2
+# size bit is 0 for 16 bit, 1 for 32 bit
+%size_16 16:1 !function=plus_1
+
+&vmlaldav rdahi rdalo size qn qm x a
+
+@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
+    qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
+VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
+VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vhadds, do_vhadd_s)
 DO_2OP_U(vhaddu, do_vhadd_u)
 DO_2OP_S(vhsubs, do_vhsub_s)
 DO_2OP_U(vhsubu, do_vhsub_u)
+
+
+/*
+ * Multiply add long dual accumulate ops.
+ */
+#define DO_LDAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC) \
+    uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+                                    void *vm, uint64_t a) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *n = vn, *m = vm; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                if (e & 1) { \
+                    a ODDACC \
+                        (int64_t)n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \
+                } else { \
+                    a EVENACC \
+                        (int64_t)n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \
+                } \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return a; \
+    }
+
+DO_LDAV(vmlaldavsh, 2, int16_t, false, +=, +=)
+DO_LDAV(vmlaldavxsh, 2, int16_t, true, +=, +=)
+DO_LDAV(vmlaldavsw, 4, int32_t, false, +=, +=)
+DO_LDAV(vmlaldavxsw, 4, int32_t, true, +=, +=)
+
+DO_LDAV(vmlaldavuh, 2, uint16_t, false, +=, +=)
+DO_LDAV(vmlaldavuw, 4, uint32_t, false, +=, +=)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
+typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
 
 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
 }
 
+static bool mve_skip_first_beat(DisasContext *s)
+{
+    /* Return true if PSR.ECI says we must skip the first beat of this insn */
+    switch (s->eci) {
+    case ECI_NONE:
+        return false;
+    case ECI_A0:
+    case ECI_A0A1:
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        return true;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
 {
     TCGv_i32 addr;
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BS, vmullbs)
 DO_2OP(VMULL_BU, vmullbu)
 DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
+
+static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
+                             MVEGenDualAccOpFn *fn)
+{
+    TCGv_ptr qn, qm;
+    TCGv_i64 rda;
+    TCGv_i32 rdalo, rdahi;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qn | a->qm) ||
+        !fn) {
+        return false;
+    }
+    /*
+     * rdahi == 13 is UNPREDICTABLE; rdahi == 15 is a related
+     * encoding; rdalo always has bit 0 clear so cannot be 13 or 15.
+     */
+    if (a->rdahi == 13 || a->rdahi == 15) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qn = mve_qreg_ptr(a->qn);
+    qm = mve_qreg_ptr(a->qm);
+
+    /*
+     * This insn is subject to beat-wise execution. Partial execution
+     * of an A=0 (no-accumulate) insn which does not execute the first
+     * beat must start with the current rda value, not 0.
+     */
+    if (a->a || mve_skip_first_beat(s)) {
+        rda = tcg_temp_new_i64();
+        rdalo = load_reg(s, a->rdalo);
+        rdahi = load_reg(s, a->rdahi);
+        tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
+        tcg_temp_free_i32(rdalo);
+        tcg_temp_free_i32(rdahi);
+    } else {
+        rda = tcg_const_i64(0);
+    }
+
+    fn(rda, cpu_env, qn, qm, rda);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_ptr(qm);
+
+    rdalo = tcg_temp_new_i32();
+    rdahi = tcg_temp_new_i32();
+    tcg_gen_extrl_i64_i32(rdalo, rda);
+    tcg_gen_extrh_i64_i32(rdahi, rda);
+    store_reg(s, a->rdalo, rdalo);
+    store_reg(s, a->rdahi, rdahi);
+    tcg_temp_free_i64(rda);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh },
+        { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw },
+        { NULL, NULL },
+    };
+    return do_long_dual_acc(s, a, fns[a->size][a->x]);
+}
+
+static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_mve_vmlaldavuh, NULL },
+        { gen_helper_mve_vmlaldavuw, NULL },
+        { NULL, NULL },
+    };
+    return do_long_dual_acc(s, a, fns[a->size][a->x]);
+}
--
2.20.1
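Setting aside predication and beat-wise execution, the arithmetic the DO_LDAV expansion performs reduces to a 64-bit dot-product accumulate whose result the translator splits back into the RdaLo/RdaHi pair. A hedged reference model (all lanes active; names invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* VMLALDAV (signed 16-bit, no exchange) with every lane active */
static int64_t vmlaldav_s16_ref(const int16_t n[8], const int16_t m[8],
                                int64_t acc)
{
    for (int e = 0; e < 8; e++) {
        acc += (int64_t)n[e] * m[e];
    }
    return acc;
}

int main(void)
{
    int16_t n[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    int16_t m[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };
    int64_t acc = vmlaldav_s16_ref(n, m, 0);
    uint32_t rdalo = (uint32_t)acc;            /* low half, as extrl does */
    uint32_t rdahi = (uint32_t)(acc >> 32);    /* high half, as extrh does */
    printf("%u %u\n", rdalo, rdahi);           /* 36 0 */
    return 0;
}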
Implement the MVE insn VMLSLDAV, which multiplies source elements,
alternately adding and subtracting them, and accumulates into a
64-bit result in a pair of general purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-21-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 5 +++++
 target/arm/mve.decode | 2 ++
 target/arm/mve_helper.c | 5 +++++
 target/arm/translate-mve.c | 11 +++++++++++
 4 files changed, 23 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlaldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 
 DEF_HELPER_FLAGS_4(mve_vmlaldavuh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vmlsldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlsldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlsldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlsldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
     qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
 VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
 VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
+
+VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlaldavxsw, 4, int32_t, true, +=, +=)
 
 DO_LDAV(vmlaldavuh, 2, uint16_t, false, +=, +=)
 DO_LDAV(vmlaldavuw, 4, uint32_t, false, +=, +=)
+
+DO_LDAV(vmlsldavsh, 2, int16_t, false, +=, -=)
+DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
+DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
+DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
     };
     return do_long_dual_acc(s, a, fns[a->size][a->x]);
 }
+
+static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh },
+        { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw },
+        { NULL, NULL },
+    };
+    return do_long_dual_acc(s, a, fns[a->size][a->x]);
+}
--
2.20.1
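In other words VMLSLDAV is the DO_LDAV expansion with ODDACC set to '-=': products from even-numbered lanes are added and products from odd-numbered lanes subtracted. A hedged unpredicated reference model:

#include <stdint.h>
#include <stdio.h>

/* Even lanes accumulate with +=, odd lanes with -=, as in DO_LDAV above */
static int64_t vmlsldav_s16_ref(const int16_t n[8], const int16_t m[8],
                                int64_t acc)
{
    for (int e = 0; e < 8; e++) {
        int64_t p = (int64_t)n[e] * m[e];
        acc = (e & 1) ? acc - p : acc + p;
    }
    return acc;
}

int main(void)
{
    int16_t n[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    int16_t m[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };
    /* 1 - 2 + 3 - 4 + 5 - 6 + 7 - 8 = -4 */
    printf("%lld\n", (long long)vmlsldav_s16_ref(n, m, 0));
    return 0;
}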
Implement the MVE VRMLALDAVH and VRMLSLDAVH insns, which accumulate
the results of a rounded multiply of pairs of elements into a 72-bit
accumulator, returning the top 64 bits in a pair of general purpose
registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-22-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 7 +++++++
 target/arm/mve_helper.c | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 24 ++++++++++++++++++++++++
 4 files changed, 76 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlsldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlsldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlsldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlsldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
 
 @vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
     qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
+@vmlaldav_nosz .... .... . ... ... . ... . .... .... qm:3 . \
+    qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
 VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
 VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
 
 VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
+
+VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
+VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
+
+VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/int128.h"
 #include "cpu.h"
 #include "internals.h"
 #include "vec_internal.h"
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavsh, 2, int16_t, false, +=, -=)
 DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
 DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
 DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
+
+/*
+ * Rounding multiply add long dual accumulate high: we must keep
+ * a 72-bit internal accumulator value and return the top 64 bits.
+ */
+#define DO_LDAVH(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC, TO128) \
+    uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+                                    void *vm, uint64_t a) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *n = vn, *m = vm; \
+        Int128 acc = int128_lshift(TO128(a), 8); \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                if (e & 1) { \
+                    acc = ODDACC(acc, TO128(n[H##ESIZE(e - 1 * XCHG)] * \
+                                            m[H##ESIZE(e)])); \
+                } else { \
+                    acc = EVENACC(acc, TO128(n[H##ESIZE(e + 1 * XCHG)] * \
+                                             m[H##ESIZE(e)])); \
+                } \
+                acc = int128_add(acc, 1 << 7); \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return int128_getlo(int128_rshift(acc, 8)); \
+    }
+
+DO_LDAVH(vrmlaldavhsw, 4, int32_t, false, int128_add, int128_add, int128_makes64)
+DO_LDAVH(vrmlaldavhxsw, 4, int32_t, true, int128_add, int128_add, int128_makes64)
+
+DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64)
+
+DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
+DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
     };
     return do_long_dual_acc(s, a, fns[a->size][a->x]);
 }
+
+static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[] = {
+        gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw,
+    };
+    return do_long_dual_acc(s, a, fns[a->x]);
+}
+
+static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[] = {
+        gen_helper_mve_vrmlaldavhuw, NULL,
+    };
+    return do_long_dual_acc(s, a, fns[a->x]);
+}
+
+static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[] = {
+        gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw,
+    };
+    return do_long_dual_acc(s, a, fns[a->x]);
+}
--
2.20.1
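The Int128 trick above keeps the 72-bit accumulator as "value << 8", so the per-lane rounding constant is 1 << 7 and the final right shift by 8 recovers the top 64 bits. A hedged unpredicated model of the same scheme, assuming the GCC/Clang __int128 extension in place of QEMU's Int128:

#include <stdint.h>
#include <stdio.h>

/* Accumulator held shifted left by 8; round each lane at bit 8; return top 64 */
static uint64_t vrmlaldavh_s32_ref(const int32_t n[4], const int32_t m[4],
                                   uint64_t a)
{
    __int128 acc = (__int128)(int64_t)a << 8;
    for (int e = 0; e < 4; e++) {
        acc += (__int128)((int64_t)n[e] * m[e]);
        acc += 1 << 7;
    }
    return (uint64_t)(acc >> 8);
}

int main(void)
{
    int32_t n[4] = { 1 << 30, 0, 0, 0 };
    int32_t m[4] = { 4, 0, 0, 0 };
    /* product 2^32 plus four roundings of 128: (2^32 + 512) >> 8 = 16777218 */
    printf("%llu\n", (unsigned long long)vrmlaldavh_s32_ref(n, m, 0));
    return 0;
}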
Implement the scalar form of the MVE VADD insn. This takes the
scalar operand from a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-23-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 4 ++++
 target/arm/mve.decode | 7 ++++++
 target/arm/mve_helper.c | 22 +++++++++++++++++++
 target/arm/translate-mve.c | 45 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
 &2op qd qm qn size
+&2scalar qd qn rm size
 
 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
 
+@2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
+
 # Vector loads and stores
 
 # Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_no
 VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
 
 VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
+
+# Scalar operations
+
+VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vhsubs, do_vhsub_s)
 DO_2OP_U(vhsubu, do_vhsub_u)
 
 
+#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm) \
+    { \
+        TYPE *d = vd, *n = vn; \
+        TYPE m = rm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            mergemask(&d[H##ESIZE(e)], FN(n[H##ESIZE(e)], m), mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+/* provide unsigned 2-op scalar helpers for all sizes */
+#define DO_2OP_SCALAR_U(OP, FN) \
+    DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
+    DO_2OP_SCALAR(OP##h, 2, uint16_t, FN) \
+    DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
+
+DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
+
 /*
  * Multiply add long dual accumulate ops.
  */
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
+typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
 
 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BU, vmullbu)
 DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
 
+static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
+                          MVEGenTwoOpScalarFn fn)
+{
+    TCGv_ptr qd, qn;
+    TCGv_i32 rm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qn) ||
+        !fn) {
+        return false;
+    }
+    if (a->rm == 13 || a->rm == 15) {
+        /* UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qn = mve_qreg_ptr(a->qn);
+    rm = load_reg(s, a->rm);
+    fn(cpu_env, qd, qn, rm);
+    tcg_temp_free_i32(rm);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qn);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_2OP_SCALAR(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_2scalar *a) \
+    { \
+        static MVEGenTwoOpScalarFn * const fns[] = { \
+            gen_helper_mve_##FN##b, \
+            gen_helper_mve_##FN##h, \
+            gen_helper_mve_##FN##w, \
+            NULL, \
+        }; \
+        return do_2op_scalar(s, a, fns[a->size]); \
+    }
+
+DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
+
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
 {
--
2.20.1
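For the scalar forms, the general-purpose register value is truncated to the element type and then applied uniformly to every lane, as the "TYPE m = rm;" line in DO_2OP_SCALAR shows. A hedged reference model of that broadcast (names invented for illustration):

#include <stdint.h>

/* The scalar is truncated to the element type and broadcast to all lanes */
static void vadd_scalar_u8_ref(uint8_t d[16], const uint8_t n[16], uint32_t rm)
{
    uint8_t m = (uint8_t)rm;    /* only the low byte participates */
    for (int e = 0; e < 16; e++) {
        d[e] = n[e] + m;
    }
}

int main(void)
{
    uint8_t n[16] = { 0 }, d[16];
    vadd_scalar_u8_ref(d, n, 0x1234);   /* every lane sees 0x34 */
    return d[0] == 0x34 ? 0 : 1;
}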
Implement the scalar forms of the MVE VSUB and VMUL insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-24-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 2 ++
 target/arm/mve_helper.c | 2 ++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 14 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vsub_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vsub_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vsub_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vmul_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmul_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmul_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no
 # Scalar operations
 
 VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
+VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
+VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
     DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
 
 DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
+DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
+DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
 
 /*
  * Multiply add long dual accumulate ops.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
 }
 
 DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
+DO_2OP_SCALAR(VSUB_scalar, vsub_scalar)
+DO_2OP_SCALAR(VMUL_scalar, vmul_scalar)
 
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
--
2.20.1
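Because these reuse DO_MUL at element width, the product is truncated to the element size, unlike the widening VMULL forms. A hedged one-line check of that truncation:

#include <stdint.h>
#include <assert.h>

/* Element-width multiply: only the low byte of the product is kept */
static uint8_t vmul_scalar_u8_ref(uint8_t n, uint32_t rm)
{
    return (uint8_t)(n * (uint8_t)rm);
}

int main(void)
{
    assert(vmul_scalar_u8_ref(0x40, 8) == 0);   /* 0x200 truncates to 0 */
    return 0;
}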
1
Convert the Neon logic ops in the 3-reg-same grouping to decodetree.
1
Implement the scalar variants of the MVE VHADD and VHSUB insns.
2
Note that for the logic ops the 'size' field forms part of their
3
decode and the actual operations are always bitwise.
4
2
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20200430181003.21682-16-peter.maydell@linaro.org
5
Message-id: 20210617121628.20116-25-peter.maydell@linaro.org
8
---
6
---
9
target/arm/neon-dp.decode | 12 +++++++++++
7
target/arm/helper-mve.h | 16 ++++++++++++++++
10
target/arm/translate-neon.inc.c | 19 +++++++++++++++++
8
target/arm/mve.decode | 4 ++++
11
target/arm/translate.c | 38 +--------------------------------
9
target/arm/mve_helper.c | 8 ++++++++
12
3 files changed, 32 insertions(+), 37 deletions(-)
10
target/arm/translate-mve.c | 4 ++++
11
4 files changed, 32 insertions(+)
13
12
14
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/neon-dp.decode
15
--- a/target/arm/helper-mve.h
17
+++ b/target/arm/neon-dp.decode
16
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmul_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
19
@3same .... ... . . . size:2 .... .... .... . q:1 . . .... \
18
DEF_HELPER_FLAGS_4(mve_vmul_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
20
&3same vm=%vm_dp vn=%vn_dp vd=%vd_dp
19
DEF_HELPER_FLAGS_4(mve_vmul_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
20
22
+@3same_logic .... ... . . . .. .... .... .... . q:1 .. .... \
21
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
+ &3same vm=%vm_dp vn=%vn_dp vd=%vd_dp size=0
22
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
+
24
+
25
+VAND_3s 1111 001 0 0 . 00 .... .... 0001 ... 1 .... @3same_logic
25
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+VBIC_3s 1111 001 0 0 . 01 .... .... 0001 ... 1 .... @3same_logic
26
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+VORR_3s 1111 001 0 0 . 10 .... .... 0001 ... 1 .... @3same_logic
27
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+VORN_3s 1111 001 0 0 . 11 .... .... 0001 ... 1 .... @3same_logic
29
+VEOR_3s  1111 001 1 0 . 00 .... .... 0001 ... 1 .... @3same_logic
+VBSL_3s  1111 001 1 0 . 01 .... .... 0001 ... 1 .... @3same_logic
+VBIT_3s  1111 001 1 0 . 10 .... .... 0001 ... 1 .... @3same_logic
+VBIF_3s  1111 001 1 0 . 11 .... .... 0001 ... 1 .... @3same_logic
+
 VADD_3s  1111 001 0 0 . .. .... .... 1000 . . . 0 .... @3same
 VSUB_3s  1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)

 DO_3SAME(VADD, tcg_gen_gvec_add)
 DO_3SAME(VSUB, tcg_gen_gvec_sub)
+DO_3SAME(VAND, tcg_gen_gvec_and)
+DO_3SAME(VBIC, tcg_gen_gvec_andc)
+DO_3SAME(VORR, tcg_gen_gvec_or)
+DO_3SAME(VORN, tcg_gen_gvec_orc)
+DO_3SAME(VEOR, tcg_gen_gvec_xor)
+
+/* These insns are all gvec_bitsel but with the inputs in various orders. */
+#define DO_3SAME_BITSEL(INSN, O1, O2, O3) \
+    static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
+                                uint32_t rn_ofs, uint32_t rm_ofs, \
+                                uint32_t oprsz, uint32_t maxsz) \
+    { \
+        tcg_gen_gvec_bitsel(vece, rd_ofs, O1, O2, O3, oprsz, maxsz); \
+    } \
+    DO_3SAME(INSN, gen_##INSN##_3s)
+
+DO_3SAME_BITSEL(VBSL, rd_ofs, rn_ofs, rm_ofs)
+DO_3SAME_BITSEL(VBIT, rm_ofs, rn_ofs, rd_ofs)
+DO_3SAME_BITSEL(VBIF, rm_ofs, rd_ofs, rn_ofs)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 }
 return 1;

- case NEON_3R_LOGIC: /* Logic ops. */
- switch ((u << 2) | size) {
- case 0: /* VAND */
- tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- break;
- case 1: /* VBIC */
- tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- break;
- case 2: /* VORR */
- tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- break;
- case 3: /* VORN */
- tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- break;
- case 4: /* VEOR */
- tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- break;
- case 5: /* VBSL */
- tcg_gen_gvec_bitsel(MO_8, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- break;
- case 6: /* VBIT */
- tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
- vec_size, vec_size);
- break;
- case 7: /* VBIF */
- tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
- vec_size, vec_size);
- break;
- }
- return 0;
-
 case NEON_3R_VQADD:
 tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
 rn_ofs, rm_ofs, vec_size, vec_size,
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 return 0;

 case NEON_3R_VADD_VSUB:
+ case NEON_3R_LOGIC:
 /* Already handled by decodetree */
 return 1;
 }
--
2.20.1

+
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no

 VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
 VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
 VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
+VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
+VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
+VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
 DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
 DO_2OP_SCALAR(OP##h, 2, uint16_t, FN) \
 DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
+#define DO_2OP_SCALAR_S(OP, FN) \
+    DO_2OP_SCALAR(OP##b, 1, int8_t, FN) \
+    DO_2OP_SCALAR(OP##h, 2, int16_t, FN) \
+    DO_2OP_SCALAR(OP##w, 4, int32_t, FN)

 DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
 DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
 DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
+DO_2OP_SCALAR_S(vhadds_scalar, do_vhadd_s)
+DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
+DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
+DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)

 /*
 * Multiply add long dual accumulate ops.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
 DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
 DO_2OP_SCALAR(VSUB_scalar, vsub_scalar)
 DO_2OP_SCALAR(VMUL_scalar, vmul_scalar)
+DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
+DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
+DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
+DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)

 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
 MVEGenDualAccOpFn *fn)
--
2.20.1

diff view generated by jsdifflib
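As a side note to the VBSL/VBIT/VBIF conversion above: all three insns are the same gvec_bitsel operation with the operands permuted, which is why the patch can express them with a single DO_3SAME_BITSEL macro. The following standalone C sketch (illustration only, not QEMU code; the bitsel() name is invented here) shows the underlying semantics and the three operand orders:

#include <stdint.h>
#include <stdio.h>

/* bitsel(mask, t, f): for each bit, take t where mask is 1, else f */
static uint64_t bitsel(uint64_t mask, uint64_t t, uint64_t f)
{
    return (t & mask) | (f & ~mask);
}

int main(void)
{
    uint64_t rd = 0xf0f0, rn = 0xff00, rm = 0x0ff0;

    /* VBSL: rd = bitsel(rd, rn, rm) -- rd itself is the select mask */
    printf("VBSL: %#06llx\n", (unsigned long long)bitsel(rd, rn, rm));
    /* VBIT: rd = bitsel(rm, rn, rd) -- insert rn bits where rm is set */
    printf("VBIT: %#06llx\n", (unsigned long long)bitsel(rm, rn, rd));
    /* VBIF: rd = bitsel(rm, rd, rn) -- insert rn bits where rm is clear */
    printf("VBIF: %#06llx\n", (unsigned long long)bitsel(rm, rd, rn));
    return 0;
}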
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Add support for SD.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-11-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/xlnx-versal-virt.c | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/sysbus-fdt.h"
 #include "hw/arm/fdt.h"
 #include "cpu.h"
+#include "hw/qdev-properties.h"
 #include "hw/arm/xlnx-versal.h"

 #define TYPE_XLNX_VERSAL_VIRT_MACHINE MACHINE_TYPE_NAME("xlnx-versal-virt")
@@ -XXX,XX +XXX,XX @@ static void fdt_add_zdma_nodes(VersalVirt *s)
 }
 }

+static void fdt_add_sd_nodes(VersalVirt *s)
+{
+    const char clocknames[] = "clk_xin\0clk_ahb";
+    const char compat[] = "arasan,sdhci-8.9a";
+    int i;
+
+    for (i = ARRAY_SIZE(s->soc.pmc.iou.sd) - 1; i >= 0; i--) {
+        uint64_t addr = MM_PMC_SD0 + MM_PMC_SD0_SIZE * i;
+        char *name = g_strdup_printf("/sdhci@%" PRIx64, addr);
+
+        qemu_fdt_add_subnode(s->fdt, name);
+
+        qemu_fdt_setprop_cells(s->fdt, name, "clocks",
+                               s->phandle.clk_25Mhz, s->phandle.clk_25Mhz);
+        qemu_fdt_setprop(s->fdt, name, "clock-names",
+                         clocknames, sizeof(clocknames));
+        qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
+                               GIC_FDT_IRQ_TYPE_SPI, VERSAL_SD0_IRQ_0 + i * 2,
+                               GIC_FDT_IRQ_FLAGS_LEVEL_HI);
+        qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
+                                     2, addr, 2, MM_PMC_SD0_SIZE);
+        qemu_fdt_setprop(s->fdt, name, "compatible", compat, sizeof(compat));
+        g_free(name);
+    }
+}
+
 static void fdt_nop_memory_nodes(void *fdt, Error **errp)
 {
 Error *err = NULL;
@@ -XXX,XX +XXX,XX @@ static void create_virtio_regions(VersalVirt *s)
 }
 }

+static void sd_plugin_card(SDHCIState *sd, DriveInfo *di)
+{
+    BlockBackend *blk = di ? blk_by_legacy_dinfo(di) : NULL;
+    DeviceState *card;
+
+    card = qdev_create(qdev_get_child_bus(DEVICE(sd), "sd-bus"), TYPE_SD_CARD);
+    object_property_add_child(OBJECT(sd), "card[*]", OBJECT(card),
+                              &error_fatal);
+    qdev_prop_set_drive(card, "drive", blk, &error_fatal);
+    object_property_set_bool(OBJECT(card), true, "realized", &error_fatal);
+}
+
 static void versal_virt_init(MachineState *machine)
 {
 VersalVirt *s = XLNX_VERSAL_VIRT_MACHINE(machine);
 int psci_conduit = QEMU_PSCI_CONDUIT_DISABLED;
+ int i;

 /*
 * If the user provides an Operating System to be loaded, we expect them
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
 fdt_add_gic_nodes(s);
 fdt_add_timer_nodes(s);
 fdt_add_zdma_nodes(s);
+ fdt_add_sd_nodes(s);
 fdt_add_cpu_nodes(s, psci_conduit);
 fdt_add_clk_node(s, "/clk125", 125000000, s->phandle.clk_125Mhz);
 fdt_add_clk_node(s, "/clk25", 25000000, s->phandle.clk_25Mhz);
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
 memory_region_add_subregion_overlap(get_system_memory(),
 0, &s->soc.fpd.apu.mr, 0);

+ /* Plugin SD cards. */
+ for (i = 0; i < ARRAY_SIZE(s->soc.pmc.iou.sd); i++) {
+ sd_plugin_card(&s->soc.pmc.iou.sd[i], drive_get_next(IF_SD));
+ }
+
 s->binfo.ram_size = machine->ram_size;
 s->binfo.loader_start = 0x0;
 s->binfo.get_dtb = versal_virt_get_dtb;
--
2.20.1

Implement the MVE VBRSR insn, which reverses a specified
number of bits in each element, setting the rest to zero.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-26-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 4 ++++
 target/arm/mve.decode | 1 +
 target/arm/mve_helper.c | 43 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 1 +
 4 files changed, 49 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
+VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
 DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
 DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)

+static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
+{
+    m &= 0xff;
+    if (m == 0) {
+        return 0;
+    }
+    n = revbit8(n);
+    if (m < 8) {
+        n >>= 8 - m;
+    }
+    return n;
+}
+
+static inline uint32_t do_vbrsrh(uint32_t n, uint32_t m)
+{
+    m &= 0xff;
+    if (m == 0) {
+        return 0;
+    }
+    n = revbit16(n);
+    if (m < 16) {
+        n >>= 16 - m;
+    }
+    return n;
+}
+
+static inline uint32_t do_vbrsrw(uint32_t n, uint32_t m)
+{
+    m &= 0xff;
+    if (m == 0) {
+        return 0;
+    }
+    n = revbit32(n);
+    if (m < 32) {
+        n >>= 32 - m;
+    }
+    return n;
+}
+
+DO_2OP_SCALAR(vbrsrb, 1, uint8_t, do_vbrsrb)
+DO_2OP_SCALAR(vbrsrh, 2, uint16_t, do_vbrsrh)
+DO_2OP_SCALAR(vbrsrw, 4, uint32_t, do_vbrsrw)
+
 /*
 * Multiply add long dual accumulate ops.
 */
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
 DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
 DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
 DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
+DO_2OP_SCALAR(VBRSR, vbrsr)

 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
 MVEGenDualAccOpFn *fn)
--
2.20.1

diff view generated by jsdifflib
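For reference, the bit-reverse operation implemented by do_vbrsrb() in the VBRSR patch above can be modelled in plain C like this (a minimal standalone sketch; the brsr8() name is invented here, and the loop stands in for QEMU's revbit8()):

#include <stdint.h>
#include <stdio.h>

/* Reverse the low m bits of an 8-bit value, zeroing the rest */
static uint8_t brsr8(uint8_t n, uint32_t m)
{
    uint8_t r = 0;
    unsigned i;

    m &= 0xff;
    if (m == 0) {
        return 0;
    }
    /* reverse all 8 bits... */
    for (i = 0; i < 8; i++) {
        r = (r << 1) | ((n >> i) & 1);
    }
    /* ...then keep only the low m of them */
    if (m < 8) {
        r >>= 8 - m;
    }
    return r;
}

int main(void)
{
    /* 0b00000110 with m = 3: low three bits 110 reversed -> 011 */
    printf("%u\n", brsr8(0x06, 3)); /* prints 3 */
    return 0;
}

The same pattern applies at 16 and 32 bits; only the reverse width and the shift change.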
Convert the Neon "load/store multiple structures" insns to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-12-peter.maydell@linaro.org
---
 target/arm/neon-ls.decode | 7 ++
 target/arm/translate-neon.inc.c | 124 ++++++++++++++++++++++++++++++++
 target/arm/translate.c | 91 +----------------------
 3 files changed, 133 insertions(+), 89 deletions(-)

diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-ls.decode
+++ b/target/arm/neon-ls.decode
@@ -XXX,XX +XXX,XX @@
 # 0b1111_1001_xxx0_xxxx_xxxx_xxxx_xxxx_xxxx
 # This file works on the A32 encoding only; calling code for T32 has to
 # transform the insn into the A32 version first.
+
+%vd_dp 22:1 12:4
+
+# Neon load/store multiple structures
+
+VLDST_multiple 1111 0100 0 . l:1 0 rn:4 .... itype:4 size:2 align:2 rm:4 \
+    vd=%vd_dp
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
 gen_helper_gvec_fmlal_idx_a32);
 return true;
 }
+
+static struct {
+    int nregs;
+    int interleave;
+    int spacing;
+} const neon_ls_element_type[11] = {
+    {1, 4, 1},
+    {1, 4, 2},
+    {4, 1, 1},
+    {2, 2, 2},
+    {1, 3, 1},
+    {1, 3, 2},
+    {3, 1, 1},
+    {1, 1, 1},
+    {1, 2, 1},
+    {1, 2, 2},
+    {2, 1, 1}
+};
+
+static void gen_neon_ldst_base_update(DisasContext *s, int rm, int rn,
+                                      int stride)
+{
+    if (rm != 15) {
+        TCGv_i32 base;
+
+        base = load_reg(s, rn);
+        if (rm == 13) {
+            tcg_gen_addi_i32(base, base, stride);
+        } else {
+            TCGv_i32 index;
+            index = load_reg(s, rm);
+            tcg_gen_add_i32(base, base, index);
+            tcg_temp_free_i32(index);
+        }
+        store_reg(s, rn, base);
+    }
+}
+
+static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
+{
+    /* Neon load/store multiple structures */
+    int nregs, interleave, spacing, reg, n;
+    MemOp endian = s->be_data;
+    int mmu_idx = get_mem_index(s);
+    int size = a->size;
+    TCGv_i64 tmp64;
+    TCGv_i32 addr, tmp;
+
+    if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
+        return false;
+    }
+    if (a->itype > 10) {
+        return false;
+    }
+    /* Catch UNDEF cases for bad values of align field */
+    switch (a->itype & 0xc) {
+    case 4:
+        if (a->align >= 2) {
+            return false;
+        }
+        break;
+    case 8:
+        if (a->align == 3) {
+            return false;
+        }
+        break;
+    default:
+        break;
+    }
+    nregs = neon_ls_element_type[a->itype].nregs;
+    interleave = neon_ls_element_type[a->itype].interleave;
+    spacing = neon_ls_element_type[a->itype].spacing;
+    if (size == 3 && (interleave | spacing) != 1) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    /* For our purposes, bytes are always little-endian. */
+    if (size == 0) {
+        endian = MO_LE;
+    }
+    /*
+     * Consecutive little-endian elements from a single register
+     * can be promoted to a larger little-endian operation.
+     */
+    if (interleave == 1 && endian == MO_LE) {
+        size = 3;
+    }
+    tmp64 = tcg_temp_new_i64();
+    addr = tcg_temp_new_i32();
+    tmp = tcg_const_i32(1 << size);
+    load_reg_var(s, addr, a->rn);
+    for (reg = 0; reg < nregs; reg++) {
+        for (n = 0; n < 8 >> size; n++) {
+            int xs;
+            for (xs = 0; xs < interleave; xs++) {
+                int tt = a->vd + reg + spacing * xs;
+
+                if (a->l) {
+                    gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
+                    neon_store_element64(tt, n, size, tmp64);
+                } else {
+                    neon_load_element64(tmp64, tt, n, size);
+                    gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
+                }
+                tcg_gen_add_i32(addr, addr, tmp);
+            }
+        }
+    }
+    tcg_temp_free_i32(addr);
+    tcg_temp_free_i32(tmp);
+    tcg_temp_free_i64(tmp64);
+
+    gen_neon_ldst_base_update(s, a->rm, a->rn, nregs * interleave * 8);
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_neon_trn_u16(TCGv_i32 t0, TCGv_i32 t1)
 }


-static struct {
- int nregs;
- int interleave;
- int spacing;
-} const neon_ls_element_type[11] = {
- {1, 4, 1},
- {1, 4, 2},
- {4, 1, 1},
- {2, 2, 2},
- {1, 3, 1},
- {1, 3, 2},
- {3, 1, 1},
- {1, 1, 1},
- {1, 2, 1},
- {1, 2, 2},
- {2, 1, 1}
-};
-
 /* Translate a NEON load/store element instruction. Return nonzero if the
 instruction is invalid. */
 static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 {
 int rd, rn, rm;
- int op;
 int nregs;
- int interleave;
- int spacing;
 int stride;
 int size;
 int reg;
 int load;
- int n;
 int vec_size;
- int mmu_idx;
- MemOp endian;
 TCGv_i32 addr;
 TCGv_i32 tmp;
- TCGv_i32 tmp2;
- TCGv_i64 tmp64;

 if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
 return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 rn = (insn >> 16) & 0xf;
 rm = insn & 0xf;
 load = (insn & (1 << 21)) != 0;
- endian = s->be_data;
- mmu_idx = get_mem_index(s);
 if ((insn & (1 << 23)) == 0) {
- /* Load store all elements. */
- op = (insn >> 8) & 0xf;
- size = (insn >> 6) & 3;
- if (op > 10)
- return 1;
- /* Catch UNDEF cases for bad values of align field */
- switch (op & 0xc) {
- case 4:
- if (((insn >> 5) & 1) == 1) {
- return 1;
- }
- break;
- case 8:
- if (((insn >> 4) & 3) == 3) {
- return 1;
- }
- break;
- default:
- break;
- }
- nregs = neon_ls_element_type[op].nregs;
- interleave = neon_ls_element_type[op].interleave;
- spacing = neon_ls_element_type[op].spacing;
- if (size == 3 && (interleave | spacing) != 1) {
- return 1;
- }
- /* For our purposes, bytes are always little-endian. */
- if (size == 0) {
- endian = MO_LE;
- }
- /* Consecutive little-endian elements from a single register
- * can be promoted to a larger little-endian operation.
- */
- if (interleave == 1 && endian == MO_LE) {
- size = 3;
- }
- tmp64 = tcg_temp_new_i64();
- addr = tcg_temp_new_i32();
- tmp2 = tcg_const_i32(1 << size);
- load_reg_var(s, addr, rn);
- for (reg = 0; reg < nregs; reg++) {
- for (n = 0; n < 8 >> size; n++) {
- int xs;
- for (xs = 0; xs < interleave; xs++) {
- int tt = rd + reg + spacing * xs;
-
- if (load) {
- gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
- neon_store_element64(tt, n, size, tmp64);
- } else {
- neon_load_element64(tmp64, tt, n, size);
- gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
- }
- tcg_gen_add_i32(addr, addr, tmp2);
- }
- }
- }
- tcg_temp_free_i32(addr);
- tcg_temp_free_i32(tmp2);
- tcg_temp_free_i64(tmp64);
- stride = nregs * interleave * 8;
+ /* Load store all elements -- handled already by decodetree */
+ return 1;
 } else {
 size = (insn >> 10) & 3;
 if (size == 3) {
--
2.20.1

Implement the MVE VPST insn, which sets the predicate mask
fields in the VPR to the immediate value encoded in the insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-27-peter.maydell@linaro.org
---
 target/arm/mve.decode | 4 +++
 target/arm/translate-mve.c | 59 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+
+# Predicate operations
+%mask_22_13 22:1 13:3
+VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
 }
 }

+static void mve_update_and_store_eci(DisasContext *s)
+{
+    /*
+     * For insns which don't call a helper function that will call
+     * mve_advance_vpt(), this version updates s->eci and also stores
+     * it out to the CPUState field.
+     */
+    if (s->eci) {
+        mve_update_eci(s);
+        store_cpu_field(tcg_constant_i32(s->eci << 4), condexec_bits);
+    }
+}
+
 static bool mve_skip_first_beat(DisasContext *s)
 {
 /* Return true if PSR.ECI says we must skip the first beat of this insn */
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
 };
 return do_long_dual_acc(s, a, fns[a->x]);
 }
+
+static bool trans_VPST(DisasContext *s, arg_VPST *a)
+{
+    TCGv_i32 vpr;
+
+    /* mask == 0 is a "related encoding" */
+    if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+    /*
+     * Set the VPR mask fields. We take advantage of MASK01 and MASK23
+     * being adjacent fields in the register.
+     *
+     * This insn is not predicated, but it is subject to beat-wise
+     * execution, and the mask is updated on the odd-numbered beats.
+     * So if PSR.ECI says we should skip beat 1, we mustn't update the
+     * 01 mask field.
+     */
+    vpr = load_cpu_field(v7m.vpr);
+    switch (s->eci) {
+    case ECI_NONE:
+    case ECI_A0:
+        /* Update both 01 and 23 fields */
+        tcg_gen_deposit_i32(vpr, vpr,
+                            tcg_constant_i32(a->mask | (a->mask << 4)),
+                            R_V7M_VPR_MASK01_SHIFT,
+                            R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH);
+        break;
+    case ECI_A0A1:
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        /* Update only the 23 mask field */
+        tcg_gen_deposit_i32(vpr, vpr,
+                            tcg_constant_i32(a->mask),
+                            R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    store_cpu_field(vpr, v7m.vpr);
+    mve_update_and_store_eci(s);
+    return true;
+}
--
2.20.1

diff view generated by jsdifflib
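A small illustration of how the {nregs, interleave, spacing} table in the load/store-multiple conversion above maps structure elements onto D registers. This is a standalone sketch under the assumption that itype indexes the table exactly as in trans_VLDST_multiple(); the per-element loop and the address stepping are omitted for brevity:

#include <stdio.h>

/* Same shape as the neon_ls_element_type[] table in the patch above */
static const struct { int nregs, interleave, spacing; } elt[11] = {
    {1, 4, 1}, {1, 4, 2}, {4, 1, 1}, {2, 2, 2}, {1, 3, 1}, {1, 3, 2},
    {3, 1, 1}, {1, 1, 1}, {1, 2, 1}, {1, 2, 2}, {2, 1, 1},
};

int main(void)
{
    int itype = 0; /* entry {1, 4, 1}: 4-element structures spread over D0..D3 */
    int vd = 0;    /* base register D0 */
    int reg, xs;

    for (reg = 0; reg < elt[itype].nregs; reg++) {
        for (xs = 0; xs < elt[itype].interleave; xs++) {
            /* register index exactly as computed in trans_VLDST_multiple() */
            printf("element %d -> D%d\n", xs,
                   vd + reg + elt[itype].spacing * xs);
        }
    }
    return 0;
}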
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Fix typo xlnx-ve -> xlnx-versal.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-4-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/xlnx-versal-virt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal-virt.c
+++ b/hw/arm/xlnx-versal-virt.c
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
 psci_conduit = QEMU_PSCI_CONDUIT_SMC;
 }

- sysbus_init_child_obj(OBJECT(machine), "xlnx-ve", &s->soc,
+ sysbus_init_child_obj(OBJECT(machine), "xlnx-versal", &s->soc,
 sizeof(s->soc), TYPE_XLNX_VERSAL);
 object_property_set_link(OBJECT(&s->soc), OBJECT(machine->ram),
 "ddr", &error_abort);
--
2.20.1

Implement the MVE VQADD and VQSUB insns, which perform saturating
addition or subtraction of a scalar to each element. Note that
individual bytes of each result element are used or discarded
according to the predicate mask, but FPSCR.QC is only set if the
predicate mask for the lowest byte of the element is set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-28-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 16 ++++++++++
 target/arm/mve.decode | 5 +++
 target/arm/mve_helper.c | 62 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 4 +++
 4 files changed, 87 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vqadds_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
+
+VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
+VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
+VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
 VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar

 # Predicate operations
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhaddu, do_vhadd_u)
 DO_2OP_S(vhsubs, do_vhsub_s)
 DO_2OP_U(vhsubu, do_vhsub_u)

+static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
+{
+    if (val > max) {
+        *s = true;
+        return max;
+    } else if (val < min) {
+        *s = true;
+        return min;
+    }
+    return val;
+}
+
+#define DO_SQADD_B(n, m, s) do_sat_bhw((int64_t)n + m, INT8_MIN, INT8_MAX, s)
+#define DO_SQADD_H(n, m, s) do_sat_bhw((int64_t)n + m, INT16_MIN, INT16_MAX, s)
+#define DO_SQADD_W(n, m, s) do_sat_bhw((int64_t)n + m, INT32_MIN, INT32_MAX, s)
+
+#define DO_UQADD_B(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT8_MAX, s)
+#define DO_UQADD_H(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT16_MAX, s)
+#define DO_UQADD_W(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT32_MAX, s)
+
+#define DO_SQSUB_B(n, m, s) do_sat_bhw((int64_t)n - m, INT8_MIN, INT8_MAX, s)
+#define DO_SQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, INT16_MIN, INT16_MAX, s)
+#define DO_SQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, INT32_MIN, INT32_MAX, s)
+
+#define DO_UQSUB_B(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT8_MAX, s)
+#define DO_UQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT16_MAX, s)
+#define DO_UQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT32_MAX, s)

 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
 void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
 mve_advance_vpt(env); \
 }

+#define DO_2OP_SAT_SCALAR(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm) \
+    { \
+        TYPE *d = vd, *n = vn; \
+        TYPE m = rm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        bool qc = false; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            bool sat = false; \
+            mergemask(&d[H##ESIZE(e)], FN(n[H##ESIZE(e)], m, &sat), \
+                      mask); \
+            qc |= sat & mask & 1; \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
 /* provide unsigned 2-op scalar helpers for all sizes */
 #define DO_2OP_SCALAR_U(OP, FN) \
 DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
 DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
 DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)

+DO_2OP_SAT_SCALAR(vqaddu_scalarb, 1, uint8_t, DO_UQADD_B)
+DO_2OP_SAT_SCALAR(vqaddu_scalarh, 2, uint16_t, DO_UQADD_H)
+DO_2OP_SAT_SCALAR(vqaddu_scalarw, 4, uint32_t, DO_UQADD_W)
+DO_2OP_SAT_SCALAR(vqadds_scalarb, 1, int8_t, DO_SQADD_B)
+DO_2OP_SAT_SCALAR(vqadds_scalarh, 2, int16_t, DO_SQADD_H)
+DO_2OP_SAT_SCALAR(vqadds_scalarw, 4, int32_t, DO_SQADD_W)
+
+DO_2OP_SAT_SCALAR(vqsubu_scalarb, 1, uint8_t, DO_UQSUB_B)
+DO_2OP_SAT_SCALAR(vqsubu_scalarh, 2, uint16_t, DO_UQSUB_H)
+DO_2OP_SAT_SCALAR(vqsubu_scalarw, 4, uint32_t, DO_UQSUB_W)
+DO_2OP_SAT_SCALAR(vqsubs_scalarb, 1, int8_t, DO_SQSUB_B)
+DO_2OP_SAT_SCALAR(vqsubs_scalarh, 2, int16_t, DO_SQSUB_H)
+DO_2OP_SAT_SCALAR(vqsubs_scalarw, 4, int32_t, DO_SQSUB_W)
+
 static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
 {
 m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
 DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
 DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
 DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
+DO_2OP_SCALAR(VQADD_S_scalar, vqadds_scalar)
+DO_2OP_SCALAR(VQADD_U_scalar, vqaddu_scalar)
+DO_2OP_SCALAR(VQSUB_S_scalar, vqsubs_scalar)
+DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)

 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
 MVEGenDualAccOpFn *fn)
--
2.20.1

diff view generated by jsdifflib
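The clamp at the heart of the VQADD/VQSUB patch above is easy to model standalone. This sketch mirrors the shape of do_sat_bhw(); it is an illustration only, and the sat() name is invented here:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Clamp val into [min, max], recording saturation in *s */
static int64_t sat(int64_t val, int64_t min, int64_t max, bool *s)
{
    if (val > max) {
        *s = true;
        return max;
    } else if (val < min) {
        *s = true;
        return min;
    }
    return val;
}

int main(void)
{
    bool q = false;

    /* unsigned byte add: 200 + 100 exceeds UINT8_MAX, so saturate to 255 */
    printf("%lld q=%d\n", (long long)sat(200 + 100, 0, UINT8_MAX, &q), q);
    /* signed byte sub: -100 - 100 undershoots INT8_MIN, so saturate to -128 */
    q = false;
    printf("%lld q=%d\n", (long long)sat(-100 - 100, INT8_MIN, INT8_MAX, &q), q);
    return 0;
}

Because the arithmetic is done in int64_t before clamping, none of the byte/half/word cases can overflow the intermediate value, which is the same trick the helper macros rely on.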
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Embed the ADMAs into the SoC type.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-7-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h | 3 ++-
 hw/arm/xlnx-versal.c | 14 +++++++-------
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/boot.h"
 #include "hw/intc/arm_gicv3.h"
 #include "hw/char/pl011.h"
+#include "hw/dma/xlnx-zdma.h"
 #include "hw/net/cadence_gem.h"

 #define TYPE_XLNX_VERSAL "xlnx-versal"
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
 struct {
 PL011State uart[XLNX_VERSAL_NR_UARTS];
 CadenceGEMState gem[XLNX_VERSAL_NR_GEMS];
- SysBusDevice *adma[XLNX_VERSAL_NR_ADMAS];
+ XlnxZDMA adma[XLNX_VERSAL_NR_ADMAS];
 } iou;
 } lpd;
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_admas(Versal *s, qemu_irq *pic)
 DeviceState *dev;
 MemoryRegion *mr;

- dev = qdev_create(NULL, "xlnx.zdma");
- s->lpd.iou.adma[i] = SYS_BUS_DEVICE(dev);
- object_property_set_int(OBJECT(s->lpd.iou.adma[i]), 128, "bus-width",
- &error_abort);
- object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
+ sysbus_init_child_obj(OBJECT(s), name,
+ &s->lpd.iou.adma[i], sizeof(s->lpd.iou.adma[i]),
+ TYPE_XLNX_ZDMA);
+ dev = DEVICE(&s->lpd.iou.adma[i]);
+ object_property_set_int(OBJECT(dev), 128, "bus-width", &error_abort);
 qdev_init_nofail(dev);

- mr = sysbus_mmio_get_region(s->lpd.iou.adma[i], 0);
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
 memory_region_add_subregion(&s->mr_ps,
 MM_ADMA_CH0 + i * MM_ADMA_CH0_SIZE, mr);

- sysbus_connect_irq(s->lpd.iou.adma[i], 0, pic[VERSAL_ADMA_IRQ_0 + i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[VERSAL_ADMA_IRQ_0 + i]);
 g_free(name);
 }
 }
--
2.20.1

Implement the MVE VQDMULH and VQRDMULH scalar insns, which multiply
elements by the scalar, double, possibly round, take the high half
and saturate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-29-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 25 +++++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 38 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
 VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
 VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar

+VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
+VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
+
 # Predicate operations
 %mask_22_13 22:1 13:3
 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 #define DO_UQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT16_MAX, s)
 #define DO_UQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT32_MAX, s)

+/*
+ * For QDMULH and QRDMULH we simplify "double and shift by esize" into
+ * "shift by esize-1", adjusting the QRDMULH rounding constant to match.
+ */
+#define DO_QDMULH_B(n, m, s) do_sat_bhw(((int64_t)n * m) >> 7, \
+                                        INT8_MIN, INT8_MAX, s)
+#define DO_QDMULH_H(n, m, s) do_sat_bhw(((int64_t)n * m) >> 15, \
+                                        INT16_MIN, INT16_MAX, s)
+#define DO_QDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m) >> 31, \
+                                        INT32_MIN, INT32_MAX, s)
+
+#define DO_QRDMULH_B(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 6)) >> 7, \
+                                         INT8_MIN, INT8_MAX, s)
+#define DO_QRDMULH_H(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 14)) >> 15, \
+                                         INT16_MIN, INT16_MAX, s)
+#define DO_QRDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 30)) >> 31, \
+                                         INT32_MIN, INT32_MAX, s)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
 void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
 uint32_t rm) \
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqsubs_scalarb, 1, int8_t, DO_SQSUB_B)
 DO_2OP_SAT_SCALAR(vqsubs_scalarh, 2, int16_t, DO_SQSUB_H)
 DO_2OP_SAT_SCALAR(vqsubs_scalarw, 4, int32_t, DO_SQSUB_W)

+DO_2OP_SAT_SCALAR(vqdmulh_scalarb, 1, int8_t, DO_QDMULH_B)
+DO_2OP_SAT_SCALAR(vqdmulh_scalarh, 2, int16_t, DO_QDMULH_H)
+DO_2OP_SAT_SCALAR(vqdmulh_scalarw, 4, int32_t, DO_QDMULH_W)
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
+
 static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
 {
 m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQADD_S_scalar, vqadds_scalar)
 DO_2OP_SCALAR(VQADD_U_scalar, vqaddu_scalar)
 DO_2OP_SCALAR(VQSUB_S_scalar, vqsubs_scalar)
 DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
+DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
+DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)

 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
 MVEGenDualAccOpFn *fn)
--
2.20.1

diff view generated by jsdifflib
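The "shift by esize-1" simplification described in the QDMULH/QRDMULH comment above can be checked numerically: doubling a product before an esize-bit shift is the same as halving the shift, provided the rounding constant is halved too. A standalone check in plain C (values chosen here for illustration; this is not the QEMU helper):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int8_t n = INT8_MIN, m = INT8_MIN;
    /* architectural form: double, round with 1 << (esize - 1), shift by esize */
    int64_t spec = (2 * (int64_t)n * m + (1 << 7)) >> 8;
    /* simplified form used by DO_QRDMULH_B above: halved shift and rounding */
    int64_t simp = ((int64_t)n * m + (1 << 6)) >> 7;

    /* both print 128, which the helper then saturates to INT8_MAX (127) */
    printf("spec=%lld simp=%lld\n", (long long)spec, (long long)simp);
    return 0;
}

INT8_MIN * INT8_MIN is in fact the only byte-sized input pair for which the high-half result overflows and the saturation path is taken.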
Convert the VCMLA (vector) insns in the 3same extension group to
decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-5-peter.maydell@linaro.org
---
 target/arm/neon-shared.decode | 11 ++++++++++
 target/arm/translate-neon.inc.c | 37 +++++++++++++++++++++++++++++++++
 target/arm/translate.c | 11 +---------
 3 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -XXX,XX +XXX,XX @@
 # More specifically, this covers:
 # 2reg scalar ext: 0b1111_1110_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
 # 3same ext: 0b1111_110x_xxxx_xxxx_xxxx_1x0x_xxxx_xxxx
+
+# VFP/Neon register fields; same as vfp.decode
+%vm_dp 5:1 0:4
+%vm_sp 0:4 5:1
+%vn_dp 7:1 16:4
+%vn_sp 16:4 7:1
+%vd_dp 22:1 12:4
+%vd_sp 12:4 22:1
+
+VCMLA 1111 110 rot:2 . 1 size:1 .... .... 1000 . q:1 . 0 .... \
+    vm=%vm_dp vn=%vn_dp vd=%vd_dp
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@
 #include "decode-neon-dp.inc.c"
 #include "decode-neon-ls.inc.c"
 #include "decode-neon-shared.inc.c"
+
+static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
+{
+    int opr_sz;
+    TCGv_ptr fpst;
+    gen_helper_gvec_3_ptr *fn_gvec_ptr;
+
+    if (!dc_isar_feature(aa32_vcma, s)
+        || (!a->size && !dc_isar_feature(aa32_fp16_arith, s))) {
+        return false;
+    }
+
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn | a->vm) & 0x10)) {
+        return false;
+    }
+
+    if ((a->vn | a->vm | a->vd) & a->q) {
+        return false;
+    }
+
+    if (!vfp_access_check(s)) {
+        return true;
+    }
+
+    opr_sz = (1 + a->q) * 8;
+    fpst = get_fpstatus_ptr(1);
+    fn_gvec_ptr = a->size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
+    tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
+                       vfp_reg_offset(1, a->vn),
+                       vfp_reg_offset(1, a->vm),
+                       fpst, opr_sz, opr_sz, a->rot,
+                       fn_gvec_ptr);
+    tcg_temp_free_ptr(fpst);
+    return true;
+}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
 bool is_long = false, q = extract32(insn, 6, 1);
 bool ptr_is_env = false;

- if ((insn & 0xfe200f10) == 0xfc200800) {
- /* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
- int size = extract32(insn, 20, 1);
- data = extract32(insn, 23, 2); /* rot */
- if (!dc_isar_feature(aa32_vcma, s)
- || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
- return 1;
- }
- fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
- } else if ((insn & 0xfea00f10) == 0xfc800800) {
+ if ((insn & 0xfea00f10) == 0xfc800800) {
 /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
 int size = extract32(insn, 20, 1);
 data = extract32(insn, 24, 1); /* rot */
--
2.20.1

Implement the MVE VQDMULL scalar insn. This multiplies the top or
bottom half of each element by the scalar, doubles and saturates
to a double-width result.

Note that this encoding overlaps with VQADD and VQSUB; it uses
what in VQADD and VQSUB would be the 'size=0b11' encoding.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-30-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 5 +++
 target/arm/mve.decode | 23 +++++++++++---
 target/arm/mve_helper.c | 65 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 30 ++++++++++++++++++
 4 files changed, 119 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 %qm 5:1 1:3
 %qn 7:1 17:3

+# VQDMULL has size in bit 28: 0 for 16 bit, 1 for 32 bit
+%size_28 28:1 !function=plus_1
+
 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
 &2op qd qm qn size
@@ -XXX,XX +XXX,XX @@
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0

 @2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
+@2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar

-VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
-VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
-VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
-VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+{
+  VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
+  VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
+  VQDMULLB_scalar 111 . 1110 0 . 11 ... 0 ... 0 1111 . 110 .... @2scalar_nosz \
+    size=%size_28
+}
+
+{
+  VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+  VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+  VQDMULLT_scalar 111 . 1110 0 . 11 ... 0 ... 1 1111 . 110 .... @2scalar_nosz \
+    size=%size_28
+}
+
 VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar

 VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

+
 # Predicate operations
 %mask_22_13 22:1 13:3
 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+/*
+ * Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the
+ * input (smaller) type and LESIZE, LTYPE, LH for the output (long) type.
+ * SATMASK specifies which bits of the predicate mask matter for determining
+ * whether to propagate a saturation indication into FPSCR.QC -- for
+ * the 16x16->32 case we must check only the bit corresponding to the T or B
+ * half that we used, but for the 32x32->64 case we propagate if the mask
+ * bit is set for either half.
+ */
+#define DO_2OP_SAT_SCALAR_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN, SATMASK) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm) \
+    { \
+        LTYPE *d = vd; \
+        TYPE *n = vn; \
+        TYPE m = rm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned le; \
+        bool qc = false; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            bool sat = false; \
+            LTYPE r = FN((LTYPE)n[H##ESIZE(le * 2 + TOP)], m, &sat); \
+            mergemask(&d[H##LESIZE(le)], r, mask); \
+            qc |= sat && (mask & SATMASK); \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+static inline int32_t do_qdmullh(int16_t n, int16_t m, bool *sat)
+{
+    int64_t r = ((int64_t)n * m) * 2;
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat);
+}
+
+static inline int64_t do_qdmullw(int32_t n, int32_t m, bool *sat)
+{
+    /* The multiply can't overflow, but the doubling might */
+    int64_t r = (int64_t)n * m;
+    if (r > INT64_MAX / 2) {
+        *sat = true;
+        return INT64_MAX;
+    } else if (r < INT64_MIN / 2) {
+        *sat = true;
+        return INT64_MIN;
+    } else {
+        return r * 2;
+    }
+}
+
+#define SATMASK16B 1
+#define SATMASK16T (1 << 2)
+#define SATMASK32 ((1 << 4) | 1)
+
+DO_2OP_SAT_SCALAR_L(vqdmullb_scalarh, 0, 2, int16_t, 4, int32_t, \
+                    do_qdmullh, SATMASK16B)
+DO_2OP_SAT_SCALAR_L(vqdmullb_scalarw, 0, 4, int32_t, 8, int64_t, \
+                    do_qdmullw, SATMASK32)
+DO_2OP_SAT_SCALAR_L(vqdmullt_scalarh, 1, 2, int16_t, 4, int32_t, \
+                    do_qdmullh, SATMASK16T)
+DO_2OP_SAT_SCALAR_L(vqdmullt_scalarw, 1, 4, int32_t, 8, int64_t, \
+                    do_qdmullw, SATMASK32)
+
 static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
 {
 m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
 DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)

+static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
+{
+    static MVEGenTwoOpScalarFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullb_scalarh,
+        gen_helper_mve_vqdmullb_scalarw,
+        NULL,
+    };
+    if (a->qd == a->qn && a->size == MO_32) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op_scalar(s, a, fns[a->size]);
+}
+
+static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a)
+{
+    static MVEGenTwoOpScalarFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullt_scalarh,
+        gen_helper_mve_vqdmullt_scalarw,
+        NULL,
+    };
+    if (a->qd == a->qn && a->size == MO_32) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op_scalar(s, a, fns[a->size]);
+}
+
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
 MVEGenDualAccOpFn *fn)
 {
--
2.20.1

diff view generated by jsdifflib
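The doubling step in do_qdmullw() above is the only place where the 32x32->64 case can overflow, since the 64-bit product of two 32-bit values always fits. A standalone model of that check (the qdmull32() name is invented for this illustration):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Same shape as do_qdmullw(): widening multiply, then saturating doubling */
static int64_t qdmull32(int32_t n, int32_t m, bool *sat)
{
    int64_t r = (int64_t)n * m; /* cannot overflow int64_t */

    if (r > INT64_MAX / 2) {
        *sat = true;
        return INT64_MAX;
    } else if (r < INT64_MIN / 2) {
        *sat = true;
        return INT64_MIN;
    }
    return r * 2;
}

int main(void)
{
    bool q = false;

    /* INT32_MIN * INT32_MIN = 2^62, and doubling 2^62 overflows int64_t */
    printf("%lld q=%d\n", (long long)qdmull32(INT32_MIN, INT32_MIN, &q), q);
    return 0;
}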
Convert the Neon 3-reg-same VMAX and VMIN insns to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-17-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode | 5 +++++
 target/arm/translate-neon.inc.c | 14 ++++++++++++++
 target/arm/translate.c | 21 ++-------------------
 3 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VBSL_3s 1111 001 1 0 . 01 .... .... 0001 ... 1 .... @3same_logic
 VBIT_3s 1111 001 1 0 . 10 .... .... 0001 ... 1 .... @3same_logic
 VBIF_3s 1111 001 1 0 . 11 .... .... 0001 ... 1 .... @3same_logic

+VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
+VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
+VMIN_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 1 .... @3same
+VMIN_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 1 .... @3same
+
 VADD_3s 1111 001 0 0 . .. .... .... 1000 . . . 0 .... @3same
 VSUB_3s 1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ DO_3SAME(VEOR, tcg_gen_gvec_xor)
 DO_3SAME_BITSEL(VBSL, rd_ofs, rn_ofs, rm_ofs)
 DO_3SAME_BITSEL(VBIT, rm_ofs, rn_ofs, rd_ofs)
 DO_3SAME_BITSEL(VBIF, rm_ofs, rd_ofs, rn_ofs)
+
+#define DO_3SAME_NO_SZ_3(INSN, FUNC) \
+    static bool trans_##INSN##_3s(DisasContext *s, arg_3same *a) \
+    { \
+        if (a->size == 3) { \
+            return false; \
+        } \
+        return do_3same(s, a, FUNC); \
+    }
+
+DO_3SAME_NO_SZ_3(VMAX_S, tcg_gen_gvec_smax)
+DO_3SAME_NO_SZ_3(VMAX_U, tcg_gen_gvec_umax)
+DO_3SAME_NO_SZ_3(VMIN_S, tcg_gen_gvec_smin)
+DO_3SAME_NO_SZ_3(VMIN_U, tcg_gen_gvec_umin)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
 return 0;

- case NEON_3R_VMAX:
- if (u) {
- tcg_gen_gvec_umax(size, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- } else {
- tcg_gen_gvec_smax(size, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- }
- return 0;
- case NEON_3R_VMIN:
- if (u) {
- tcg_gen_gvec_umin(size, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- } else {
- tcg_gen_gvec_smin(size, rd_ofs, rn_ofs, rm_ofs,
- vec_size, vec_size);
- }
- return 0;
-
 case NEON_3R_VSHL:
 /* Note the operation is vshl vd,vm,vn */
 tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, vec_size, vec_size,
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)

 case NEON_3R_VADD_VSUB:
 case NEON_3R_LOGIC:
+ case NEON_3R_VMAX:
+ case NEON_3R_VMIN:
 /* Already handled by decodetree */
 return 1;
 }
--
2.20.1

Implement the vector forms of the MVE VQDMULH and VQRDMULH insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-31-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 27 +++++++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 40 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
 VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
 VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op

+VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
+VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
 mve_advance_vpt(env); \
 }

+#define DO_2OP_SAT(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    { \
+        TYPE *d = vd, *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        bool qc = false; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            bool sat = false; \
+            TYPE r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)], &sat); \
+            mergemask(&d[H##ESIZE(e)], r, mask); \
+            qc |= sat & mask & 1; \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 #define DO_QRDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 30)) >> 31, \
 INT32_MIN, INT32_MAX, s)

+DO_2OP_SAT(vqdmulhb, 1, int8_t, DO_QDMULH_B)
+DO_2OP_SAT(vqdmulhh, 2, int16_t, DO_QDMULH_H)
+DO_2OP_SAT(vqdmulhw, 4, int32_t, DO_QDMULH_W)
+
+DO_2OP_SAT(vqrdmulhb, 1, int8_t, DO_QRDMULH_B)
+DO_2OP_SAT(vqrdmulhh, 2, int16_t, DO_QRDMULH_H)
+DO_2OP_SAT(vqrdmulhw, 4, int32_t, DO_QRDMULH_W)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
 void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
 uint32_t rm) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BS, vmullbs)
 DO_2OP(VMULL_BU, vmullbu)
 DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
+DO_2OP(VQDMULH, vqdmulh)
+DO_2OP(VQRDMULH, vqrdmulh)

 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
 MVEGenTwoOpScalarFn fn)
--
2.20.1

diff view generated by jsdifflib
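The per-byte predication used by DO_2OP_SAT in the vector VQDMULH/VQRDMULH patch above (and by the vector VQADD/VQSUB patch that follows) can be modelled like this. This is a little-endian-only sketch of the mergemask() idea, not QEMU's host-endian-aware implementation; the merge_u32() name is invented here:

#include <stdint.h>
#include <stdio.h>

/*
 * Per-byte predicated write: only the bytes of the result whose
 * predicate bits are set reach the destination element.
 */
static void merge_u32(uint32_t *d, uint32_t r, uint16_t mask)
{
    uint32_t bytemask = 0;
    int i;

    for (i = 0; i < 4; i++) {
        if (mask & (1 << i)) {
            bytemask |= 0xffu << (i * 8);
        }
    }
    *d = (*d & ~bytemask) | (r & bytemask);
}

int main(void)
{
    uint32_t d = 0x11223344;

    merge_u32(&d, 0xaabbccdd, 0x3); /* only the low two bytes enabled */
    printf("%#010x\n", (unsigned)d); /* prints 0x1122ccdd */
    return 0;
}

This also shows why the helpers test "sat & mask & 1": the saturation of an element only reaches FPSCR.QC when the predicate bit for that element's lowest byte is set.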
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Embed the GEMs into the SoC type.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-6-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h |  3 ++-
 hw/arm/xlnx-versal.c         | 15 ++++++++-------
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/boot.h"
 #include "hw/intc/arm_gicv3.h"
 #include "hw/char/pl011.h"
+#include "hw/net/cadence_gem.h"

 #define TYPE_XLNX_VERSAL "xlnx-versal"
 #define XLNX_VERSAL(obj) OBJECT_CHECK(Versal, (obj), TYPE_XLNX_VERSAL)
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {

     struct {
         PL011State uart[XLNX_VERSAL_NR_UARTS];
-        SysBusDevice *gem[XLNX_VERSAL_NR_GEMS];
+        CadenceGEMState gem[XLNX_VERSAL_NR_GEMS];
         SysBusDevice *adma[XLNX_VERSAL_NR_ADMAS];
     } iou;
 } lpd;
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_gems(Versal *s, qemu_irq *pic)
         DeviceState *dev;
         MemoryRegion *mr;

-        dev = qdev_create(NULL, "cadence_gem");
-        s->lpd.iou.gem[i] = SYS_BUS_DEVICE(dev);
-        object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
+        sysbus_init_child_obj(OBJECT(s), name,
+                              &s->lpd.iou.gem[i], sizeof(s->lpd.iou.gem[i]),
+                              TYPE_CADENCE_GEM);
+        dev = DEVICE(&s->lpd.iou.gem[i]);
         if (nd->used) {
             qemu_check_nic_model(nd, "cadence_gem");
             qdev_set_nic_properties(dev, nd);
         }
-        object_property_set_int(OBJECT(s->lpd.iou.gem[i]),
+        object_property_set_int(OBJECT(dev),
                                 2, "num-priority-queues",
                                 &error_abort);
-        object_property_set_link(OBJECT(s->lpd.iou.gem[i]),
+        object_property_set_link(OBJECT(dev),
                                  OBJECT(&s->mr_ps), "dma",
                                  &error_abort);
         qdev_init_nofail(dev);

-        mr = sysbus_mmio_get_region(s->lpd.iou.gem[i], 0);
+        mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
         memory_region_add_subregion(&s->mr_ps, addrs[i], mr);

-        sysbus_connect_irq(s->lpd.iou.gem[i], 0, pic[irqs[i]]);
+        sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irqs[i]]);
         g_free(name);
     }
 }
--
2.20.1


Implement the vector forms of the MVE VQADD and VQSUB insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-32-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 16 ++++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 14 ++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 39 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
 VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
 VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op

+VQADD_S 111 0 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
+VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
+VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
+VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqrdmulhb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT(vqrdmulhh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT(vqrdmulhw, 4, int32_t, DO_QRDMULH_W)

+DO_2OP_SAT(vqaddub, 1, uint8_t, DO_UQADD_B)
+DO_2OP_SAT(vqadduh, 2, uint16_t, DO_UQADD_H)
+DO_2OP_SAT(vqadduw, 4, uint32_t, DO_UQADD_W)
+DO_2OP_SAT(vqaddsb, 1, int8_t, DO_SQADD_B)
+DO_2OP_SAT(vqaddsh, 2, int16_t, DO_SQADD_H)
+DO_2OP_SAT(vqaddsw, 4, int32_t, DO_SQADD_W)
+
+DO_2OP_SAT(vqsubub, 1, uint8_t, DO_UQSUB_B)
+DO_2OP_SAT(vqsubuh, 2, uint16_t, DO_UQSUB_H)
+DO_2OP_SAT(vqsubuw, 4, uint32_t, DO_UQSUB_W)
+DO_2OP_SAT(vqsubsb, 1, int8_t, DO_SQSUB_B)
+DO_2OP_SAT(vqsubsh, 2, int16_t, DO_SQSUB_H)
+DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN)                              \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
                                 uint32_t rm)                            \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
 DO_2OP(VQDMULH, vqdmulh)
 DO_2OP(VQRDMULH, vqrdmulh)
+DO_2OP(VQADD_S, vqadds)
+DO_2OP(VQADD_U, vqaddu)
+DO_2OP(VQSUB_S, vqsubs)
+DO_2OP(VQSUB_U, vqsubu)

 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
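The DO_SQADD_*/DO_UQSUB_* macros this patch references are defined elsewhere in mve_helper.c and are not shown in the hunks above. As a sketch of the per-lane behaviour they are assumed to implement (names here are illustrative, not the QEMU macros):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative int8_t saturating add in the style of DO_SQADD_B:
     * clamp to the type's range and report saturation for FPSCR.QC. */
    static int8_t sqadd8_sketch(int8_t a, int8_t b, bool *sat)
    {
        int r = a + b;          /* widen so the sum cannot wrap */
        if (r > INT8_MAX) {
            *sat = true;
            return INT8_MAX;
        }
        if (r < INT8_MIN) {
            *sat = true;
            return INT8_MIN;
        }
        return r;
    }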
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

By using the TYPE_* definitions for devices, we can:
- quickly find where devices are used with 'git-grep'
- easily rename a device (one-line change).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200428154650.21991-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/mps2-tz.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
         exit(EXIT_FAILURE);
     }

-    sysbus_init_child_obj(OBJECT(machine), "iotkit", &mms->iotkit,
+    sysbus_init_child_obj(OBJECT(machine), TYPE_IOTKIT, &mms->iotkit,
                           sizeof(mms->iotkit), mmc->armsse_type);
     iotkitdev = DEVICE(&mms->iotkit);
     object_property_set_link(OBJECT(&mms->iotkit), OBJECT(system_memory),
--
2.20.1


Implement the MVE VQSHL insn (encoding T4, which is the
vector-shift-by-vector version).

The DO_SQSHL_OP and DO_UQSHL_OP macros here are derived from
the neon_helper.c code for qshl_u{8,16,32} and qshl_s{8,16,32}.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-33-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      | 12 ++++++++++++
 target/arm/mve_helper.c    | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 56 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 @2op .... .... .. size:2 .... .... .... .... ....  &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... ....  &2op qd=%qd qm=%qm qn=%qn size=0

+# The _rev suffix indicates that Vn and Vm are reversed. This is
+# the case for shifts. In the Arm ARM these insns are documented
+# with the Vm and Vn fields in their usual places, but in the
+# assembly the operands are listed "backwards", ie in the order
+# Qd, Qm, Qn where other insns use Qd, Qn, Qm. For QEMU we choose
+# to consider Vm and Vn as being in different fields in the insn.
+# This gives us consistency with A64 and Neon.
+@2op_rev .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qn qn=%qm
+
 @2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
 @2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn

@@ -XXX,XX +XXX,XX @@ VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
 VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op

+VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
+VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
         mve_advance_vpt(env);                                   \
     }

+/* provide unsigned 2-op helpers for all sizes */
+#define DO_2OP_SAT_U(OP, FN)                    \
+    DO_2OP_SAT(OP##b, 1, uint8_t, FN)           \
+    DO_2OP_SAT(OP##h, 2, uint16_t, FN)          \
+    DO_2OP_SAT(OP##w, 4, uint32_t, FN)
+
+/* provide signed 2-op helpers for all sizes */
+#define DO_2OP_SAT_S(OP, FN)                    \
+    DO_2OP_SAT(OP##b, 1, int8_t, FN)            \
+    DO_2OP_SAT(OP##h, 2, int16_t, FN)           \
+    DO_2OP_SAT(OP##w, 4, int32_t, FN)
+
 #define DO_AND(N, M)  ((N) & (M))
 #define DO_BIC(N, M)  ((N) & ~(M))
 #define DO_ORR(N, M)  ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsb, 1, int8_t, DO_SQSUB_B)
 DO_2OP_SAT(vqsubsh, 2, int16_t, DO_SQSUB_H)
 DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)

+/*
+ * This wrapper fixes up the impedance mismatch between do_sqrshl_bhs()
+ * and friends wanting a uint32_t* sat and our needing a bool*.
+ */
+#define WRAP_QRSHL_HELPER(FN, N, M, ROUND, satp)                        \
+    ({                                                                  \
+        uint32_t su32 = 0;                                              \
+        typeof(N) r = FN(N, (int8_t)(M), sizeof(N) * 8, ROUND, &su32);  \
+        if (su32) {                                                     \
+            *satp = true;                                               \
+        }                                                               \
+        r;                                                              \
+    })
+
+#define DO_SQSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
+#define DO_UQSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, false, satp)
+
+DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
+DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN)                              \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
                                 uint32_t rm)                            \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQADD_S, vqadds)
 DO_2OP(VQADD_U, vqaddu)
 DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
+DO_2OP(VQSHL_S, vqshls)
+DO_2OP(VQSHL_U, vqshlu)

 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
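A note on the semantics being wrapped here: the shift count for VQSHL comes from the second source vector and is signed, so a negative count shifts right (which cannot saturate). An unsigned 8-bit sketch of the behaviour assumed of do_uqrshl_bhs() (illustrative only; the QEMU helper handles all element sizes and the rounding variants too):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: unsigned 8-bit saturating shift by a signed count. */
    static uint8_t uqshl8_sketch(uint8_t n, int8_t sh, bool *sat)
    {
        if (sh < 0) {                   /* negative count: shift right */
            return sh <= -8 ? 0 : n >> -sh;
        }
        if (sh >= 8) {                  /* everything shifts out */
            if (n != 0) {
                *sat = true;
                return UINT8_MAX;
            }
            return 0;
        }
        if ((uint8_t)(n << sh) >> sh != n) {
            *sat = true;                /* high bits lost: saturate */
            return UINT8_MAX;
        }
        return n << sh;
    }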
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Embed the UARTs into the SoC type.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-5-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h |  3 ++-
 hw/arm/xlnx-versal.c         | 12 ++++++------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/sysbus.h"
 #include "hw/arm/boot.h"
 #include "hw/intc/arm_gicv3.h"
+#include "hw/char/pl011.h"

 #define TYPE_XLNX_VERSAL "xlnx-versal"
 #define XLNX_VERSAL(obj) OBJECT_CHECK(Versal, (obj), TYPE_XLNX_VERSAL)
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
     MemoryRegion mr_ocm;

     struct {
-        SysBusDevice *uart[XLNX_VERSAL_NR_UARTS];
+        PL011State uart[XLNX_VERSAL_NR_UARTS];
         SysBusDevice *gem[XLNX_VERSAL_NR_GEMS];
         SysBusDevice *adma[XLNX_VERSAL_NR_ADMAS];
     } iou;
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@
 #include "kvm_arm.h"
 #include "hw/misc/unimp.h"
 #include "hw/arm/xlnx-versal.h"
-#include "hw/char/pl011.h"

 #define XLNX_VERSAL_ACPU_TYPE ARM_CPU_TYPE_NAME("cortex-a72")
 #define GEM_REVISION 0x40070106
@@ -XXX,XX +XXX,XX @@ static void versal_create_uarts(Versal *s, qemu_irq *pic)
         DeviceState *dev;
         MemoryRegion *mr;

-        dev = qdev_create(NULL, TYPE_PL011);
-        s->lpd.iou.uart[i] = SYS_BUS_DEVICE(dev);
+        sysbus_init_child_obj(OBJECT(s), name,
+                              &s->lpd.iou.uart[i], sizeof(s->lpd.iou.uart[i]),
+                              TYPE_PL011);
+        dev = DEVICE(&s->lpd.iou.uart[i]);
         qdev_prop_set_chr(dev, "chardev", serial_hd(i));
-        object_property_add_child(OBJECT(s), name, OBJECT(dev), &error_fatal);
         qdev_init_nofail(dev);

-        mr = sysbus_mmio_get_region(s->lpd.iou.uart[i], 0);
+        mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
         memory_region_add_subregion(&s->mr_ps, addrs[i], mr);

-        sysbus_connect_irq(s->lpd.iou.uart[i], 0, pic[irqs[i]]);
+        sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irqs[i]]);
         g_free(name);
     }
 }
--
2.20.1


Implement the MVE VQRSHL (vector) insn. Again, the code to perform
the actual shifts is borrowed from neon_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-34-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 6 ++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 19 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqrshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev

+VQRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
+VQRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
     WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
 #define DO_UQSHL_OP(N, M, satp) \
     WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, false, satp)
+#define DO_SQRSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, true, satp)
+#define DO_UQRSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, true, satp)

 DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
 DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
+DO_2OP_SAT_S(vqrshls, DO_SQRSHL_OP)
+DO_2OP_SAT_U(vqrshlu, DO_UQRSHL_OP)

 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN)                              \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
 DO_2OP(VQSHL_S, vqshls)
 DO_2OP(VQSHL_U, vqshlu)
+DO_2OP(VQRSHL_S, vqrshls)
+DO_2OP(VQRSHL_U, vqrshlu)

 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
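The only difference from the VQSHL patch is the ROUND argument to WRAP_QRSHL_HELPER. For a right shift, the rounding variants are assumed to add half of the last bit shifted out before shifting, i.e. roughly:

    #include <stdint.h>

    /* Sketch: rounding right shift of a 32-bit lane by sh, 0 < sh < 32. */
    static int32_t rshr32_round_sketch(int32_t n, int sh)
    {
        return (int32_t)(((int64_t)n + (1LL << (sh - 1))) >> sh);
    }

    /* e.g. rshr32_round_sketch(5, 1) == 3, where a plain 5 >> 1 == 2 */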
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Move misplaced comment.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-3-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/xlnx-versal.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)

         obj = object_new(XLNX_VERSAL_ACPU_TYPE);
         if (!obj) {
-            /* Secondary CPUs start in PSCI powered-down state */
             error_report("Unable to create apu.cpu[%d] of type %s",
                          i, XLNX_VERSAL_ACPU_TYPE);
             exit(EXIT_FAILURE);
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_cpus(Versal *s)
         object_property_set_int(obj, s->cfg.psci_conduit,
                                 "psci-conduit", &error_abort);
         if (i) {
+            /* Secondary CPUs start in PSCI powered-down state */
            object_property_set_bool(obj, true,
                                     "start-powered-off", &error_abort);
         }
--
2.20.1


Implement the MVE VSHL insn (vector form).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-35-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 6 ++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 19 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
 VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op

+VSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
+VSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
+
 VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhaddu, do_vhadd_u)
 DO_2OP_S(vhsubs, do_vhsub_s)
 DO_2OP_U(vhsubu, do_vhsub_u)

+#define DO_VSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
+#define DO_VSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
+
+DO_2OP_S(vshls, DO_VSHLS)
+DO_2OP_U(vshlu, DO_VSHLU)
+
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
     if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQADD_S, vqadds)
 DO_2OP(VQADD_U, vqaddu)
 DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
+DO_2OP(VSHL_S, vshls)
+DO_2OP(VSHL_U, vshlu)
 DO_2OP(VQSHL_S, vqshls)
 DO_2OP(VQSHL_U, vqshlu)
 DO_2OP(VQRSHL_S, vqrshls)
--
2.20.1
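The (int8_t)(M) casts above are the interesting part: only the bottom byte of each element of the shift operand is used as the count, and a negative count turns VSHL into a right shift. A per-lane sketch for 32-bit signed elements (illustrative only; the QEMU code funnels this through do_sqrshl_bhs()):

    #include <stdint.h>

    /* Sketch: one signed 32-bit VSHL lane, count from the low byte of m. */
    static int32_t vshls_lane_sketch(int32_t n, uint32_t m)
    {
        int8_t sh = (int8_t)m;          /* e.g. 0xfe means shift by -2 */
        if (sh >= 0) {
            return sh >= 32 ? 0 : (int32_t)((uint32_t)n << sh);
        }
        return n >> (-sh >= 32 ? 31 : -sh);   /* arithmetic shift right */
    }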
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

MIDR_EL1 is a 64-bit system register with the top 32 bits being RES0.
Represent it in QEMU's ARMCPU struct with a uint64_t, not a
uint32_t.

This fixes an error when compiling with -Werror=conversion
because we were manipulating the register value using a
local uint64_t variable:

  target/arm/cpu64.c: In function ‘aarch64_max_initfn’:
  target/arm/cpu64.c:628:21: error: conversion from ‘uint64_t’ {aka ‘long unsigned int’} to ‘uint32_t’ {aka ‘unsigned int’} may change value [-Werror=conversion]
    628 |     cpu->midr = t;
        |                 ^

and future-proofs us against a possible future architecture
change using some of the top 32 bits.

Suggested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20200428172634.29707-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 +-
 target/arm/cpu.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
         uint64_t id_aa64dfr0;
         uint64_t id_aa64dfr1;
     } isar;
-    uint32_t midr;
+    uint64_t midr;
     uint32_t revidr;
     uint32_t reset_fpsid;
     uint32_t ctr;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_cpus[] = {
 static Property arm_cpu_properties[] = {
     DEFINE_PROP_BOOL("start-powered-off", ARMCPU, start_powered_off, false),
     DEFINE_PROP_UINT32("psci-conduit", ARMCPU, psci_conduit, 0),
-    DEFINE_PROP_UINT32("midr", ARMCPU, midr, 0),
+    DEFINE_PROP_UINT64("midr", ARMCPU, midr, 0),
     DEFINE_PROP_UINT64("mp-affinity", ARMCPU,
                        mp_affinity, ARM64_AFFINITY_INVALID),
     DEFINE_PROP_INT32("node-id", ARMCPU, node_id, CPU_UNSET_NUMA_NODE_ID),
--
2.20.1


Implement the MVE VRSHL insn (vector form).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-36-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 4 ++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 17 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vrshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
 VSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev

+VRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 0 ... 0 @2op_rev
+VRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 0 ... 0 @2op_rev
+
 VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)

 #define DO_VSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
 #define DO_VSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
+#define DO_VRSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, true, NULL)
+#define DO_VRSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, true, NULL)

 DO_2OP_S(vshls, DO_VSHLS)
 DO_2OP_U(vshlu, DO_VSHLU)
+DO_2OP_S(vrshls, DO_VRSHLS)
+DO_2OP_U(vrshlu, DO_VRSHLU)

 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
 DO_2OP(VSHL_S, vshls)
 DO_2OP(VSHL_U, vshlu)
+DO_2OP(VRSHL_S, vrshls)
+DO_2OP(VRSHL_U, vrshlu)
 DO_2OP(VQSHL_S, vqshls)
 DO_2OP(VQSHL_U, vqshlu)
 DO_2OP(VQRSHL_S, vqrshls)
--
2.20.1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

hw/arm: versal: Add support for the RTC.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20200427181649.26851-10-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h |  8 ++++++++
 hw/arm/xlnx-versal.c         | 21 +++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/char/pl011.h"
 #include "hw/dma/xlnx-zdma.h"
 #include "hw/net/cadence_gem.h"
+#include "hw/rtc/xlnx-zynqmp-rtc.h"

 #define TYPE_XLNX_VERSAL "xlnx-versal"
 #define XLNX_VERSAL(obj) OBJECT_CHECK(Versal, (obj), TYPE_XLNX_VERSAL)
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
     struct {
         SDHCIState sd[XLNX_VERSAL_NR_SDS];
     } iou;
+
+    XlnxZynqMPRTC rtc;
 } pmc;

 struct {
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
 #define VERSAL_GEM1_IRQ_0 58
 #define VERSAL_GEM1_WAKE_IRQ_0 59
 #define VERSAL_ADMA_IRQ_0 60
+#define VERSAL_RTC_APB_ERR_IRQ 121
 #define VERSAL_SD0_IRQ_0 126
+#define VERSAL_RTC_ALARM_IRQ 142
+#define VERSAL_RTC_SECONDS_IRQ 143

 /* Architecturally reserved IRQs suitable for virtualization. */
 #define VERSAL_RSVD_IRQ_FIRST 111
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
 #define MM_PMC_SD0_SIZE 0x10000
 #define MM_PMC_CRP 0xf1260000U
 #define MM_PMC_CRP_SIZE 0x10000
+#define MM_PMC_RTC 0xf12a0000
+#define MM_PMC_RTC_SIZE 0x10000
 #endif
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_create_sds(Versal *s, qemu_irq *pic)
     }
 }

+static void versal_create_rtc(Versal *s, qemu_irq *pic)
+{
+    SysBusDevice *sbd;
+    MemoryRegion *mr;
+
+    sysbus_init_child_obj(OBJECT(s), "rtc", &s->pmc.rtc, sizeof(s->pmc.rtc),
+                          TYPE_XLNX_ZYNQMP_RTC);
+    sbd = SYS_BUS_DEVICE(&s->pmc.rtc);
+    qdev_init_nofail(DEVICE(sbd));
+
+    mr = sysbus_mmio_get_region(sbd, 0);
+    memory_region_add_subregion(&s->mr_ps, MM_PMC_RTC, mr);
+
+    /*
+     * TODO: Connect the ALARM and SECONDS interrupts once our RTC model
+     * supports them.
+     */
+    sysbus_connect_irq(sbd, 1, pic[VERSAL_RTC_APB_ERR_IRQ]);
+}
+
 /* This takes the board allocated linear DDR memory and creates aliases
  * for each split DDR range/aperture on the Versal address map.
  */
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
     versal_create_gems(s, pic);
     versal_create_admas(s, pic);
     versal_create_sds(s, pic);
+    versal_create_rtc(s, pic);
     versal_map_ddr(s);
     versal_unimp(s);

--
2.20.1


Implement the MVE VQDMLADH and VQRDMLADH insns. These multiply
elements, and then add pairs of products, double, possibly round,
saturate and return the high half of the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-37-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 16 +++++++
 target/arm/mve.decode      |  5 +++
 target/arm/mve_helper.c    | 89 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 ++
 4 files changed, 114 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmladhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmladhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 VQRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
 VQRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev

+VQDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 0 @2op
+VQDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
+VQRDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
+VQRDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
 DO_2OP_SAT_S(vqrshls, DO_SQRSHL_OP)
 DO_2OP_SAT_U(vqrshlu, DO_UQRSHL_OP)

+/*
+ * Multiply add dual returning high half
+ * The 'FN' here takes four inputs A, B, C, D, a 0/1 indicator of
+ * whether to add the rounding constant, and the pointer to the
+ * saturation flag, and should do "(A * B + C * D) * 2 + rounding constant",
+ * saturate to twice the input size and return the high half; or
+ * (A * B - C * D) etc for VQDMLSDH.
+ */
+#define DO_VQDMLADH_OP(OP, ESIZE, TYPE, XCHG, ROUND, FN)                \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
+                                void *vm)                               \
+    {                                                                   \
+        TYPE *d = vd, *n = vn, *m = vm;                                 \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        bool qc = false;                                                \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            bool sat = false;                                           \
+            if ((e & 1) == XCHG) {                                      \
+                TYPE r = FN(n[H##ESIZE(e)],                             \
+                            m[H##ESIZE(e - XCHG)],                      \
+                            n[H##ESIZE(e + (1 - 2 * XCHG))],            \
+                            m[H##ESIZE(e + (1 - XCHG))],                \
+                            ROUND, &sat);                               \
+                mergemask(&d[H##ESIZE(e)], r, mask);                    \
+                qc |= sat & mask & 1;                                   \
+            }                                                           \
+        }                                                               \
+        if (qc) {                                                       \
+            env->vfp.qc[0] = qc;                                        \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+static int8_t do_vqdmladh_b(int8_t a, int8_t b, int8_t c, int8_t d,
+                            int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2 + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmladh_h(int16_t a, int16_t b, int16_t c, int16_t d,
+                             int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2 + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmladh_w(int32_t a, int32_t b, int32_t c, int32_t d,
+                             int round, bool *sat)
+{
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c * d;
+    int64_t r;
+    /*
+     * Architecturally we should do the entire add, double, round
+     * and then check for saturation. We do three saturating adds,
+     * but we need to be careful about the order. If the first
+     * m1 + m2 saturates then it's impossible for the *2+rc to
+     * bring it back into the non-saturated range. However, if
+     * m1 + m2 is negative then it's possible that doing the doubling
+     * would take the intermediate result below INT64_MAX and the
+     * addition of the rounding constant then brings it back in range.
+     * So we add half the rounding constant before doubling rather
+     * than adding the rounding constant after the doubling.
+     */
+    if (sadd64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
+DO_VQDMLADH_OP(vqdmladhb, 1, int8_t, 0, 0, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqdmladhh, 2, int16_t, 0, 0, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqdmladhw, 4, int32_t, 0, 0, do_vqdmladh_w)
+DO_VQDMLADH_OP(vqdmladhxb, 1, int8_t, 1, 0, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqdmladhxh, 2, int16_t, 1, 0, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqdmladhxw, 4, int32_t, 1, 0, do_vqdmladh_w)
+
+DO_VQDMLADH_OP(vqrdmladhb, 1, int8_t, 0, 1, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqrdmladhh, 2, int16_t, 0, 1, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqrdmladhw, 4, int32_t, 0, 1, do_vqdmladh_w)
+DO_VQDMLADH_OP(vqrdmladhxb, 1, int8_t, 1, 1, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqrdmladhxh, 2, int16_t, 1, 1, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqrdmladhxw, 4, int32_t, 1, 1, do_vqdmladh_w)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN)                              \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
                                 uint32_t rm)                            \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSHL_S, vqshls)
 DO_2OP(VQSHL_U, vqshlu)
 DO_2OP(VQRSHL_S, vqrshls)
 DO_2OP(VQRSHL_U, vqrshlu)
+DO_2OP(VQDMLADH, vqdmladh)
+DO_2OP(VQDMLADHX, vqdmladhx)
+DO_2OP(VQRDMLADH, vqrdmladh)
+DO_2OP(VQRDMLADHX, vqrdmladhx)

 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
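A worked byte-lane example of the "(A * B + C * D) * 2" arithmetic described above, with round = 0 and illustrative input values:

    #include <stdbool.h>
    #include <stdint.h>

    /* One lane of VQDMLADH.8 with inputs a = b = c = d = 100. */
    static int8_t vqdmladh_lane_example(bool *sat)
    {
        int8_t a = 100, b = 100, c = 100, d = 100;
        int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2;  /* 40000 */
        if (r > INT16_MAX) {        /* saturate to twice the input size */
            *sat = true;            /* this is what sets FPSCR.QC */
            r = INT16_MAX;
        } else if (r < INT16_MIN) {
            *sat = true;
            r = INT16_MIN;
        }
        return r >> 8;              /* high half of the result: 127 */
    }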
The ARMv8.2-TTS2UXN feature extends the XN field in stage 2
translation table descriptors from just bit [54] to bits [54:53],
allowing stage 2 to control execution permissions separately for EL0
and EL1. Implement the new semantics of the XN field and enable
the feature for our 'max' CPU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-5-peter.maydell@linaro.org
---
 target/arm/cpu.h    | 15 +++++++++++++++
 target/arm/cpu.c    |  1 +
 target/arm/cpu64.c  |  2 ++
 target/arm/helper.c | 37 +++++++++++++++++++++++++++++++------
 4 files changed, 49 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ccidx(const ARMISARegisters *id)
     return FIELD_EX32(id->id_mmfr4, ID_MMFR4, CCIDX) != 0;
 }

+static inline bool isar_feature_aa32_tts2uxn(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_mmfr4, ID_MMFR4, XNX) != 0;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
 }

+static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_ccidx(const ARMISARegisters *id)
     return isar_feature_aa64_ccidx(id) || isar_feature_aa32_ccidx(id);
 }

+static inline bool isar_feature_any_tts2uxn(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_tts2uxn(id) || isar_feature_aa32_tts2uxn(id);
+}
+
 /*
  * Forward to the above feature tests given an ARMCPU pointer.
  */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
         t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
         t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
+        t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
         cpu->isar.id_mmfr4 = t;
     }
 #endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
         t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
         t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
+        t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
         cpu->isar.id_aa64mmfr1 = t;

         t = cpu->isar.id_aa64mmfr2;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
         u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
         u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
+        u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
         cpu->isar.id_mmfr4 = u;

         u = cpu->isar.id_aa64dfr0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
  *
  * @env:     CPUARMState
  * @s2ap:    The 2-bit stage2 access permissions (S2AP)
- * @xn:      XN (execute-never) bit
+ * @xn:      XN (execute-never) bits
+ * @s1_is_el0: true if this is S2 of an S1+2 walk for EL0
  */
-static int get_S2prot(CPUARMState *env, int s2ap, int xn)
+static int get_S2prot(CPUARMState *env, int s2ap, int xn, bool s1_is_el0)
 {
     int prot = 0;

@@ -XXX,XX +XXX,XX @@ static int get_S2prot(CPUARMState *env, int s2ap, int xn)
     if (s2ap & 2) {
         prot |= PAGE_WRITE;
     }
-    if (!xn) {
-        if (arm_el_is_aa64(env, 2) || prot & PAGE_READ) {
+
+    if (cpu_isar_feature(any_tts2uxn, env_archcpu(env))) {
+        switch (xn) {
+        case 0:
             prot |= PAGE_EXEC;
+            break;
+        case 1:
+            if (s1_is_el0) {
+                prot |= PAGE_EXEC;
+            }
+            break;
+        case 2:
+            break;
+        case 3:
+            if (!s1_is_el0) {
+                prot |= PAGE_EXEC;
+            }
+            break;
+        default:
+            g_assert_not_reached();
+        }
+    } else {
+        if (!extract32(xn, 1, 1)) {
+            if (arm_el_is_aa64(env, 2) || prot & PAGE_READ) {
+                prot |= PAGE_EXEC;
+            }
         }
     }
     return prot;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
     }

     ap = extract32(attrs, 4, 2);
-    xn = extract32(attrs, 12, 1);

     if (mmu_idx == ARMMMUIdx_Stage2) {
         ns = true;
-        *prot = get_S2prot(env, ap, xn);
+        xn = extract32(attrs, 11, 2);
+        *prot = get_S2prot(env, ap, xn, s1_is_el0);
     } else {
         ns = extract32(attrs, 3, 1);
+        xn = extract32(attrs, 12, 1);
         pxn = extract32(attrs, 11, 1);
         *prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
     }
--
2.20.1


Implement the MVE VQDMLSDH and VQRDMLSDH insns, which are
like VQDMLADH and VQRDMLADH except that products are subtracted
rather than added.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-38-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 16 ++++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 44 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 69 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmlsdhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
 VQRDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
 VQRDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op

+VQDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 0 @2op
+VQDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
+VQRDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
+VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static int32_t do_vqdmladh_w(int32_t a, int32_t b, int32_t c, int32_t d,
     return r >> 32;
 }

+static int8_t do_vqdmlsdh_b(int8_t a, int8_t b, int8_t c, int8_t d,
+                            int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2 + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmlsdh_h(int16_t a, int16_t b, int16_t c, int16_t d,
+                             int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2 + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmlsdh_w(int32_t a, int32_t b, int32_t c, int32_t d,
+                             int round, bool *sat)
+{
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c * d;
+    int64_t r;
+    /* The same ordering issue as in do_vqdmladh_w applies here too */
+    if (ssub64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
 DO_VQDMLADH_OP(vqdmladhb, 1, int8_t, 0, 0, do_vqdmladh_b)
 DO_VQDMLADH_OP(vqdmladhh, 2, int16_t, 0, 0, do_vqdmladh_h)
 DO_VQDMLADH_OP(vqdmladhw, 4, int32_t, 0, 0, do_vqdmladh_w)
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmladhxb, 1, int8_t, 1, 1, do_vqdmladh_b)
 DO_VQDMLADH_OP(vqrdmladhxh, 2, int16_t, 1, 1, do_vqdmladh_h)
 DO_VQDMLADH_OP(vqrdmladhxw, 4, int32_t, 1, 1, do_vqdmladh_w)

+DO_VQDMLADH_OP(vqdmlsdhb, 1, int8_t, 0, 0, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqdmlsdhh, 2, int16_t, 0, 0, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqdmlsdhw, 4, int32_t, 0, 0, do_vqdmlsdh_w)
+DO_VQDMLADH_OP(vqdmlsdhxb, 1, int8_t, 1, 0, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqdmlsdhxh, 2, int16_t, 1, 0, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqdmlsdhxw, 4, int32_t, 1, 0, do_vqdmlsdh_w)
+
+DO_VQDMLADH_OP(vqrdmlsdhb, 1, int8_t, 0, 1, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqrdmlsdhh, 2, int16_t, 0, 1, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqrdmlsdhw, 4, int32_t, 0, 1, do_vqdmlsdh_w)
+DO_VQDMLADH_OP(vqrdmlsdhxb, 1, int8_t, 1, 1, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqrdmlsdhxh, 2, int16_t, 1, 1, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN)                              \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
                                 uint32_t rm)                            \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLADH, vqdmladh)
 DO_2OP(VQDMLADHX, vqdmladhx)
 DO_2OP(VQRDMLADH, vqrdmladh)
 DO_2OP(VQRDMLADHX, vqrdmladhx)
+DO_2OP(VQDMLSDH, vqdmlsdh)
+DO_2OP(VQDMLSDHX, vqdmlsdhx)
+DO_2OP(VQRDMLSDH, vqrdmlsdh)
+DO_2OP(VQRDMLSDHX, vqrdmlsdhx)

 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
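Read directly off the get_S2prot() switch in the TTS2UXN patch above, the two stage 2 XN bits (descriptor bits [54:53]) decode as:

    XN[1:0]   EL1 walk may execute   EL0 walk may execute
    0b00      yes                    yes
    0b01      no                     yes
    0b10      no                     no
    0b11      yes                    no

Without FEAT_TTS2UXN only XN[1] (bit [54]) is honoured, which is what the else branch implements.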
Convert the Neon VMUL, VMLA, VMLS and VSHL insns in the
3-reg-same grouping to decodetree.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-20-peter.maydell@linaro.org
---
 target/arm/neon-dp.decode       |  9 +++++++
 target/arm/translate-neon.inc.c | 44 +++++++++++++++++++++++++++++++++
 target/arm/translate.c          | 28 +++------------------
 3 files changed, 56 insertions(+), 25 deletions(-)

diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -XXX,XX +XXX,XX @@ VCGT_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 0 .... @3same
 VCGE_S_3s 1111 001 0 0 . .. .... .... 0011 . . . 1 .... @3same
 VCGE_U_3s 1111 001 1 0 . .. .... .... 0011 . . . 1 .... @3same

+VSHL_S_3s 1111 001 0 0 . .. .... .... 0100 . . . 0 .... @3same
+VSHL_U_3s 1111 001 1 0 . .. .... .... 0100 . . . 0 .... @3same
+
 VMAX_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 0 .... @3same
 VMAX_U_3s 1111 001 1 0 . .. .... .... 0110 . . . 0 .... @3same
 VMIN_S_3s 1111 001 0 0 . .. .... .... 0110 . . . 1 .... @3same
@@ -XXX,XX +XXX,XX @@ VSUB_3s 1111 001 1 0 . .. .... .... 1000 . . . 0 .... @3same

 VTST_3s 1111 001 0 0 . .. .... .... 1000 . . . 1 .... @3same
 VCEQ_3s 1111 001 1 0 . .. .... .... 1000 . . . 1 .... @3same

+VMLA_3s 1111 001 0 0 . .. .... .... 1001 . . . 0 .... @3same
+VMLS_3s 1111 001 1 0 . .. .... .... 1001 . . . 0 .... @3same
+
+VMUL_3s 1111 001 0 0 . .. .... .... 1001 . . . 1 .... @3same
+VMUL_p_3s 1111 001 1 0 . .. .... .... 1001 . . . 1 .... @3same
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.inc.c
+++ b/target/arm/translate-neon.inc.c
@@ -XXX,XX +XXX,XX @@ DO_3SAME_NO_SZ_3(VMAX_S, tcg_gen_gvec_smax)
 DO_3SAME_NO_SZ_3(VMAX_U, tcg_gen_gvec_umax)
 DO_3SAME_NO_SZ_3(VMIN_S, tcg_gen_gvec_smin)
 DO_3SAME_NO_SZ_3(VMIN_U, tcg_gen_gvec_umin)
+DO_3SAME_NO_SZ_3(VMUL, tcg_gen_gvec_mul)

 #define DO_3SAME_CMP(INSN, COND) \
     static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
@@ -XXX,XX +XXX,XX @@ DO_3SAME_GVEC4(VQADD_S, sqadd_op)
 DO_3SAME_GVEC4(VQADD_U, uqadd_op)
 DO_3SAME_GVEC4(VQSUB_S, sqsub_op)
 DO_3SAME_GVEC4(VQSUB_U, uqsub_op)
+
+static void gen_VMUL_p_3s(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+                          uint32_t rm_ofs, uint32_t oprsz, uint32_t maxsz)
+{
+    tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, oprsz, maxsz,
+                       0, gen_helper_gvec_pmul_b);
+}
+
+static bool trans_VMUL_p_3s(DisasContext *s, arg_3same *a)


Implement the vector form of the MVE VQDMULL insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-39-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  5 +++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 30 ++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 30 ++++++++++++++++++++++++++++++
 4 files changed, 70 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmullbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmullbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmullth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmulltw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
 @2op .... .... .. size:2 .... .... .... .... ....  &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... ....  &2op qd=%qd qm=%qm qn=%qn size=0
+@2op_sz28 .... .... .... .... .... .... .... ....  &2op qd=%qd qm=%qm qn=%qn \
+     size=%size_28

 # The _rev suffix indicates that Vn and Vm are reversed. This is
 # the case for shifts. In the Arm ARM these insns are documented
@@ -XXX,XX +XXX,XX @@ VQDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
 VQRDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
 VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op

+VQDMULLB 111 . 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 1 @2op_sz28
+VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR_L(vqdmullt_scalarh, 1, 2, int16_t, 4, int32_t, \
 DO_2OP_SAT_SCALAR_L(vqdmullt_scalarw, 1, 4, int32_t, 8, int64_t, \
                     do_qdmullw, SATMASK32)

+/*
+ * Long saturating ops
+ */
+#define DO_2OP_SAT_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN, SATMASK)  \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn,   \
+                                void *vm)                               \
+    {                                                                   \
+        LTYPE *d = vd;                                                  \
+        TYPE *n = vn, *m = vm;                                          \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned le;                                                    \
+        bool qc = false;                                                \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) {         \
+            bool sat = false;                                           \
+            LTYPE op1 = n[H##ESIZE(le * 2 + TOP)];                      \
+            LTYPE op2 = m[H##ESIZE(le * 2 + TOP)];                      \
+            mergemask(&d[H##LESIZE(le)], FN(op1, op2, &sat), mask);     \
+            qc |= sat && (mask & SATMASK);                              \
+        }                                                               \
+        if (qc) {                                                       \
+            env->vfp.qc[0] = qc;                                        \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+DO_2OP_SAT_L(vqdmullbh, 0, 2, int16_t, 4, int32_t, do_qdmullh, SATMASK16B)
+DO_2OP_SAT_L(vqdmullbw, 0, 4, int32_t, 8, int64_t, do_qdmullw, SATMASK32)
+DO_2OP_SAT_L(vqdmullth, 1, 2, int16_t, 4, int32_t, do_qdmullh, SATMASK16T)
+DO_2OP_SAT_L(vqdmulltw, 1, 4, int32_t, 8, int64_t, do_qdmullw, SATMASK32)
+
 static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
 {
     m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLSDHX, vqdmlsdhx)
 DO_2OP(VQRDMLSDH, vqrdmlsdh)
 DO_2OP(VQRDMLSDHX, vqrdmlsdhx)

+static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
+{
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullbh,
+        gen_helper_mve_vqdmullbw,
+        NULL,
+    };
+    if (a->size == MO_32 && (a->qd == a->qm || a->qd == a->qn)) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op(s, a, fns[a->size]);
+}
+
+static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
62
+{
117
+{
63
+ if (a->size != 0) {
118
+ static MVEGenTwoOpFn * const fns[] = {
119
+ NULL,
120
+ gen_helper_mve_vqdmullth,
121
+ gen_helper_mve_vqdmulltw,
122
+ NULL,
123
+ };
124
+ if (a->size == MO_32 && (a->qd == a->qm || a->qd == a->qn)) {
125
+ /* UNPREDICTABLE; we choose to undef */
64
+ return false;
126
+ return false;
65
+ }
127
+ }
66
+ return do_3same(s, a, gen_VMUL_p_3s);
128
+ return do_2op(s, a, fns[a->size]);
67
+}
129
+}
68
+
130
+
69
+#define DO_3SAME_GVEC3_NO_SZ_3(INSN, OPARRAY) \
131
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
70
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
132
MVEGenTwoOpScalarFn fn)
71
+ uint32_t rn_ofs, uint32_t rm_ofs, \
133
{
72
+ uint32_t oprsz, uint32_t maxsz) \
73
+ { \
74
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, \
75
+ oprsz, maxsz, &OPARRAY[vece]); \
76
+ } \
77
+ DO_3SAME_NO_SZ_3(INSN, gen_##INSN##_3s)
78
+
79
+
80
+DO_3SAME_GVEC3_NO_SZ_3(VMLA, mla_op)
81
+DO_3SAME_GVEC3_NO_SZ_3(VMLS, mls_op)
82
+
83
+#define DO_3SAME_GVEC3_SHIFT(INSN, OPARRAY) \
84
+ static void gen_##INSN##_3s(unsigned vece, uint32_t rd_ofs, \
85
+ uint32_t rn_ofs, uint32_t rm_ofs, \
86
+ uint32_t oprsz, uint32_t maxsz) \
87
+ { \
88
+ /* Note the operation is vshl vd,vm,vn */ \
89
+ tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, \
90
+ oprsz, maxsz, &OPARRAY[vece]); \
91
+ } \
92
+ DO_3SAME(INSN, gen_##INSN##_3s)
93
+
94
+DO_3SAME_GVEC3_SHIFT(VSHL_S, sshl_op)
95
+DO_3SAME_GVEC3_SHIFT(VSHL_U, ushl_op)
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
101
}
102
return 1;
103
104
- case NEON_3R_VMUL: /* VMUL */
105
- if (u) {
106
- /* Polynomial case allows only P8. */
107
- if (size != 0) {
108
- return 1;
109
- }
110
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
111
- 0, gen_helper_gvec_pmul_b);
112
- } else {
113
- tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
114
- vec_size, vec_size);
115
- }
116
- return 0;
117
-
118
- case NEON_3R_VML: /* VMLA, VMLS */
119
- tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
120
- u ? &mls_op[size] : &mla_op[size]);
121
- return 0;
122
-
123
- case NEON_3R_VSHL:
124
- /* Note the operation is vshl vd,vm,vn */
125
- tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, vec_size, vec_size,
126
- u ? &ushl_op[size] : &sshl_op[size]);
127
- return 0;
128
-
129
case NEON_3R_VADD_VSUB:
130
case NEON_3R_LOGIC:
131
case NEON_3R_VMAX:
132
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
133
case NEON_3R_VCGE:
134
case NEON_3R_VQADD:
135
case NEON_3R_VQSUB:
136
+ case NEON_3R_VMUL:
137
+ case NEON_3R_VML:
138
+ case NEON_3R_VSHL:
139
/* Already handled by decodetree */
140
return 1;
141
}
142
--
134
--
143
2.20.1
135
2.20.1
144
136
145
137
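A quick illustration of the arithmetic above: the saturating doubling multiply-long step that the new vqdmull helpers delegate to do_qdmullh/do_qdmullw can be modelled standalone. This is only a sketch, not QEMU code; qdmull_h below is a hypothetical stand-in for do_qdmullh, and the SATMASK and predication plumbing of DO_2OP_SAT_L is deliberately omitted:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Saturating doubling multiply-long of one pair of 16-bit elements */
static int32_t qdmull_h(int16_t n, int16_t m, bool *sat)
{
    int64_t r = 2 * (int64_t)n * m;
    if (r > INT32_MAX) {
        *sat = true;
        return INT32_MAX;
    }
    if (r < INT32_MIN) {
        *sat = true;
        return INT32_MIN;
    }
    return (int32_t)r;
}

int main(void)
{
    bool sat = false;
    /* 2 * -32768 * -32768 == 0x80000000: the only input pair that saturates */
    int32_t r = qdmull_h(INT16_MIN, INT16_MIN, &sat);
    printf("%" PRId32 " (sat=%d)\n", r, sat);
    return 0;
}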
Implement the MVE VRHADD insn, which performs a rounded halving
addition.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-40-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 8 ++++++++
target/arm/mve.decode | 3 +++
target/arm/mve_helper.c | 6 ++++++
target/arm/translate-mve.c | 2 ++
4 files changed, 19 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqdmullth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqdmulltw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vrhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vrhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
VQDMULLB 111 . 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 1 @2op_sz28
VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28

+VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
+VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
+
# Vector miscellaneous

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vshlu, DO_VSHLU)
DO_2OP_S(vrshls, DO_VRSHLS)
DO_2OP_U(vrshlu, DO_VRSHLU)

+#define DO_RHADD_S(N, M) (((int64_t)(N) + (M) + 1) >> 1)
+#define DO_RHADD_U(N, M) (((uint64_t)(N) + (M) + 1) >> 1)
+
+DO_2OP_S(vrhadds, DO_RHADD_S)
+DO_2OP_U(vrhaddu, DO_RHADD_U)
+
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
{
if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLSDH, vqdmlsdh)
DO_2OP(VQDMLSDHX, vqdmlsdhx)
DO_2OP(VQRDMLSDH, vqrdmlsdh)
DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
+DO_2OP(VRHADD_S, vrhadds)
+DO_2OP(VRHADD_U, vrhaddu)

static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
{
--
2.20.1
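The per-element operation here is (n + m + 1) >> 1 with the sum formed in a wider type, which is why the DO_RHADD macros above cast to int64_t/uint64_t. A standalone illustration, not QEMU code:

#include <stdint.h>
#include <stdio.h>

/* Rounded halving add of two u8 lanes; the u16 sum cannot overflow */
static uint8_t rhadd_u8(uint8_t n, uint8_t m)
{
    return (uint8_t)(((uint16_t)n + m + 1) >> 1);
}

int main(void)
{
    /* (250 + 251 + 1) >> 1 == 251: odd sums round up */
    printf("%u\n", (unsigned)rhadd_u8(250, 251));
    return 0;
}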
Implement the MVE VADC and VSBC insns. These perform an
add-with-carry or subtract-with-carry of the 32-bit elements in each
lane of the input vectors, where the carry-out of each add is the
carry-in of the next. The initial carry input is either 1 or is from
FPSCR.C; the carry out at the end is written back to FPSCR.C.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-41-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 5 ++++
target/arm/mve.decode | 5 ++++
target/arm/mve_helper.c | 52 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-mve.c | 37 +++++++++++++++++++++++++++
4 files changed, 99 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vrhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vrhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vadc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vadci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vsbc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vsbci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op

+VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+
# Vector miscellaneous

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vrshlu, DO_VRSHLU)
DO_2OP_S(vrhadds, DO_RHADD_S)
DO_2OP_U(vrhaddu, DO_RHADD_U)

+static void do_vadc(CPUARMState *env, uint32_t *d, uint32_t *n, uint32_t *m,
+                    uint32_t inv, uint32_t carry_in, bool update_flags)
+{
+    uint16_t mask = mve_element_mask(env);
+    unsigned e;
+
+    /* If any additions trigger, we will update flags. */
+    if (mask & 0x1111) {
+        update_flags = true;
+    }
+
+    for (e = 0; e < 16 / 4; e++, mask >>= 4) {
+        uint64_t r = carry_in;
+        r += n[H4(e)];
+        r += m[H4(e)] ^ inv;
+        if (mask & 1) {
+            carry_in = r >> 32;
+        }
+        mergemask(&d[H4(e)], r, mask);
+    }
+
+    if (update_flags) {
+        /* Store C, clear NZV. */
+        env->vfp.xregs[ARM_VFP_FPSCR] &= ~FPCR_NZCV_MASK;
+        env->vfp.xregs[ARM_VFP_FPSCR] |= carry_in * FPCR_C;
+    }
+    mve_advance_vpt(env);
+}
+
+void HELPER(mve_vadc)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    bool carry_in = env->vfp.xregs[ARM_VFP_FPSCR] & FPCR_C;
+    do_vadc(env, vd, vn, vm, 0, carry_in, false);
+}
+
+void HELPER(mve_vsbc)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    bool carry_in = env->vfp.xregs[ARM_VFP_FPSCR] & FPCR_C;
+    do_vadc(env, vd, vn, vm, -1, carry_in, false);
+}
+
+void HELPER(mve_vadci)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    do_vadc(env, vd, vn, vm, 0, 0, true);
+}
+
+void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    do_vadc(env, vd, vn, vm, -1, 1, true);
+}
+
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
{
if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
return do_2op(s, a, fns[a->size]);
}

+/*
+ * VADC and VSBC: these perform an add-with-carry or subtract-with-carry
+ * of the 32-bit elements in each lane of the input vectors, where the
+ * carry-out of each add is the carry-in of the next. The initial carry
+ * input is either fixed (0 for VADCI, 1 for VSBCI) or is from FPSCR.C
+ * (for VADC and VSBC); the carry out at the end is written back to FPSCR.C.
+ * These insns are subject to beat-wise execution. Partial execution
+ * of an I=1 (initial carry input fixed) insn which does not
+ * execute the first beat must start with the current FPSCR.NZCV
+ * value, not the fixed constant input.
+ */
+static bool trans_VADC(DisasContext *s, arg_2op *a)
+{
+    return do_2op(s, a, gen_helper_mve_vadc);
+}
+
+static bool trans_VADCI(DisasContext *s, arg_2op *a)
+{
+    if (mve_skip_first_beat(s)) {
+        return trans_VADC(s, a);
+    }
+    return do_2op(s, a, gen_helper_mve_vadci);
+}
+
+static bool trans_VSBC(DisasContext *s, arg_2op *a)
+{
+    return do_2op(s, a, gen_helper_mve_vsbc);
+}
+
+static bool trans_VSBCI(DisasContext *s, arg_2op *a)
+{
+    if (mve_skip_first_beat(s)) {
+        return trans_VSBC(s, a);
+    }
+    return do_2op(s, a, gen_helper_mve_vsbci);
+}
+
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                          MVEGenTwoOpScalarFn fn)
{
--
2.20.1
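The lane-to-lane carry chain described in this commit message is easy to model in isolation. A minimal sketch, assuming all lanes are active; the real do_vadc only propagates the carry for predicated-in lanes and merges results via mergemask():

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* inv == 0 gives the VADC chain; inv == 0xffffffff gives VSBC */
static uint32_t vadc_chain(uint32_t d[4], const uint32_t n[4],
                           const uint32_t m[4], uint32_t inv,
                           uint32_t carry_in)
{
    for (int e = 0; e < 4; e++) {
        uint64_t r = (uint64_t)carry_in + n[e] + (m[e] ^ inv);
        d[e] = (uint32_t)r;
        carry_in = (uint32_t)(r >> 32);
    }
    return carry_in; /* what would be written back to FPSCR.C */
}

int main(void)
{
    uint32_t n[4] = { 0xffffffff, 0, 0, 0 };
    uint32_t m[4] = { 1, 0, 0, 0 };
    uint32_t d[4];
    /* lane 0 wraps to 0; its carry-out becomes lane 1's carry-in */
    uint32_t c = vadc_chain(d, n, m, 0, 0);
    printf("d[1]=%" PRIu32 " final carry=%" PRIu32 "\n", d[1], c);
    return 0;
}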
Implement the MVE VCADD insn, which performs a complex add with
rotate. Note that the size=0b11 encoding is VSBC.

The architecture grants some leeway for the "destination and Vm
source overlap" case for the size MO_32 case, but we choose not to
make use of it, instead always calculating all 16 bytes worth of
results before setting the destination register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-42-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 8 ++++++++
target/arm/mve.decode | 9 +++++++--
target/arm/mve_helper.c | 29 +++++++++++++++++++++++++++++
target/arm/translate-mve.c | 7 +++++++
4 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vadci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vsbc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vsbci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vcadd90b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd90h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd90w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op

VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
-VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
-VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+
+{
+ VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+ VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+ VCADD90 1111 1110 0 . .. ... 0 ... 0 1111 . 0 . 0 ... 0 @2op
+ VCADD270 1111 1110 0 . .. ... 0 ... 1 1111 . 0 . 0 ... 0 @2op
+}

# Vector miscellaneous

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
do_vadc(env, vd, vn, vm, -1, 1, true);
}

+#define DO_VCADD(OP, ESIZE, TYPE, FN0, FN1) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    { \
+        TYPE *d = vd, *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE r[16 / ESIZE]; \
+        /* Calculate all results first to avoid overwriting inputs */ \
+        for (e = 0; e < 16 / ESIZE; e++) { \
+            if (!(e & 1)) { \
+                r[e] = FN0(n[H##ESIZE(e)], m[H##ESIZE(e + 1)]); \
+            } else { \
+                r[e] = FN1(n[H##ESIZE(e)], m[H##ESIZE(e - 1)]); \
+            } \
+        } \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            mergemask(&d[H##ESIZE(e)], r[e], mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+#define DO_VCADD_ALL(OP, FN0, FN1) \
+    DO_VCADD(OP##b, 1, int8_t, FN0, FN1) \
+    DO_VCADD(OP##h, 2, int16_t, FN0, FN1) \
+    DO_VCADD(OP##w, 4, int32_t, FN0, FN1)
+
+DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
+DO_VCADD_ALL(vcadd270, DO_ADD, DO_SUB)
+
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
{
if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQRDMLSDH, vqrdmlsdh)
DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
DO_2OP(VRHADD_S, vrhadds)
DO_2OP(VRHADD_U, vrhaddu)
+/*
+ * VCADD Qd == Qm at size MO_32 is UNPREDICTABLE; we choose not to diagnose
+ * so we can reuse the DO_2OP macro. (Our implementation calculates the
+ * "expected" results in this case.)
+ */
+DO_2OP(VCADD90, vcadd90)
+DO_2OP(VCADD270, vcadd270)

static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
{
--
2.20.1
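The rotate-90 lane pairing above is: even results are n[e] - m[e + 1], odd results are n[e] + m[e - 1]. The sketch below is illustrative only; vcadd90_w is a hypothetical stand-in for the generated mve_vcadd90w helper, with predication omitted. It also mirrors the patch's choice of computing every result into a temporary before any write-back, so an overlapping destination still gets the "expected" values:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void vcadd90_w(int32_t d[4], const int32_t n[4], const int32_t m[4])
{
    int32_t r[4];
    /* Calculate all results first, as the helper macro does */
    for (int e = 0; e < 4; e++) {
        r[e] = (e & 1) ? n[e] + m[e - 1] : n[e] - m[e + 1];
    }
    for (int e = 0; e < 4; e++) {
        d[e] = r[e];
    }
}

int main(void)
{
    int32_t n[4] = { 10, 20, 30, 40 };
    int32_t m[4] = { 1, 2, 3, 4 };
    int32_t d[4];
    vcadd90_w(d, n, m);
    /* prints 8 21 26 43 */
    printf("%" PRId32 " %" PRId32 " %" PRId32 " %" PRId32 "\n",
           d[0], d[1], d[2], d[3]);
    return 0;
}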
Implement the MVE VHCADD insn, which is similar to VCADD
but performs a halving step. This one overlaps with VADC.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-43-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 8 ++++++++
target/arm/mve.decode | 8 ++++++--
target/arm/mve_helper.c | 2 ++
target/arm/translate-mve.c | 4 +++-
4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vhcadd90b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd90h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd90w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op

-VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
-VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+{
+ VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+ VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+ VHCADD90 1110 1110 0 . .. ... 0 ... 0 1111 . 0 . 0 ... 0 @2op
+ VHCADD270 1110 1110 0 . .. ... 0 ... 1 1111 . 0 . 0 ... 0 @2op
+}

{
VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)

DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
DO_VCADD_ALL(vcadd270, DO_ADD, DO_SUB)
+DO_VCADD_ALL(vhcadd90, do_vhsub_s, do_vhadd_s)
+DO_VCADD_ALL(vhcadd270, do_vhadd_s, do_vhsub_s)

static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
{
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VRHADD_U, vrhaddu)
/*
 * VCADD Qd == Qm at size MO_32 is UNPREDICTABLE; we choose not to diagnose
 * so we can reuse the DO_2OP macro. (Our implementation calculates the
- * "expected" results in this case.)
+ * "expected" results in this case.) Similarly for VHCADD.
 */
DO_2OP(VCADD90, vcadd90)
DO_2OP(VCADD270, vcadd270)
+DO_2OP(VHCADD90, vhcadd90)
+DO_2OP(VHCADD270, vhcadd270)

static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
{
--
2.20.1
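The only difference from VCADD is the halving step supplied by do_vhadd_s/do_vhsub_s. The functions below are simplified stand-ins for those helpers, not the QEMU definitions: the sum or difference is formed in 64 bits and shifted right once, so it can never wrap (this sketch assumes the usual arithmetic right shift for negative values):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int32_t vhadd_s32(int32_t n, int32_t m)
{
    return (int32_t)(((int64_t)n + m) >> 1);
}

static int32_t vhsub_s32(int32_t n, int32_t m)
{
    return (int32_t)(((int64_t)n - m) >> 1);
}

int main(void)
{
    /* A plain 32-bit add would overflow here; the halving add cannot */
    printf("%" PRId32 "\n", vhadd_s32(INT32_MAX, INT32_MAX)); /* INT32_MAX */
    /* (-7 - 4) >> 1 == -6: the shift rounds toward minus infinity */
    printf("%" PRId32 "\n", vhsub_s32(-7, 4));
    return 0;
}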
Implement the MVE VADDV insn, which performs an addition
across vector lanes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-44-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 7 +++++++
target/arm/mve.decode | 2 ++
target/arm/mve_helper.c | 24 ++++++++++++++++++++++++
target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++
4 files changed, 76 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)

DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

+# Vector add across vector
+VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo

# Predicate operations
%mask_22_13 22:1 13:3
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64

DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
+
+/* Vector add across vector */
+#define DO_VADDV(OP, ESIZE, TYPE) \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
+                                    uint32_t ra) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *m = vm; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                ra += m[H##ESIZE(e)]; \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return ra; \
+    } \
+
+DO_VADDV(vaddvsb, 1, uint8_t)
+DO_VADDV(vaddvsh, 2, uint16_t)
+DO_VADDV(vaddvsw, 4, uint32_t)
+DO_VADDV(vaddvub, 1, uint8_t)
+DO_VADDV(vaddvuh, 2, uint16_t)
+DO_VADDV(vaddvuw, 4, uint32_t)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);

/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
mve_update_and_store_eci(s);
return true;
}

+static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
+{
+    /* VADDV: vector add across vector */
+    static MVEGenVADDVFn * const fns[4][2] = {
+        { gen_helper_mve_vaddvsb, gen_helper_mve_vaddvub },
+        { gen_helper_mve_vaddvsh, gen_helper_mve_vaddvuh },
+        { gen_helper_mve_vaddvsw, gen_helper_mve_vaddvuw },
+        { NULL, NULL }
+    };
+    TCGv_ptr qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        a->size == 3) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * This insn is subject to beat-wise execution. Partial execution
+     * of an A=0 (no-accumulate) insn which does not execute the first
+     * beat must start with the current value of Rda, not zero.
+     */
+    if (a->a || mve_skip_first_beat(s)) {
+        /* Accumulate input from Rda */
+        rda = load_reg(s, a->rda);
+    } else {
+        /* Accumulate starting at zero */
+        rda = tcg_const_i32(0);
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    fns[a->size][a->u](rda, cpu_env, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+
+    mve_update_eci(s);
+    return true;
+}
--
2.20.1
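The accumulation the new vaddv helpers perform can be shown in isolation. A simplified sketch, not the patch's code: the mask here is one bit per 32-bit lane, where the real mve_element_mask() is byte-granular, and the ECI/beatwise handling done in trans_VADDV is omitted:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Add the active lanes of m into the running accumulator ra */
static uint32_t vaddv_w(const uint32_t m[4], uint16_t mask, uint32_t ra)
{
    for (int e = 0; e < 4; e++, mask >>= 1) {
        if (mask & 1) {
            ra += m[e]; /* wraps modulo 2^32, matching the 32-bit Rda */
        }
    }
    return ra;
}

int main(void)
{
    uint32_t m[4] = { 1, 2, 3, 4 };
    /* lanes 0 and 2 active, accumulator starts at 100: prints 104 */
    printf("%" PRIu32 "\n", vaddv_w(m, 0x5, 100));
    return 0;
}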
In a CPU with MVE, the VMOV (vector lane to general-purpose register)
and VMOV (general-purpose register to vector lane) insns are not
predicated, but they are subject to beatwise execution if they
are not in an IT block.

Since our implementation always executes all 4 beats in one tick,
this means only that we need to handle PSR.ECI:
* we must do the usual check for bad ECI state
* we must advance ECI state if the insn succeeds
* if ECI says we should not be executing the beat corresponding
  to the lane of the vector register being accessed then we
  should skip performing the move

Note that if PSR.ECI is non-zero then we cannot be in an IT block.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-45-peter.maydell@linaro.org
---
target/arm/translate-a32.h | 2 +
target/arm/translate-mve.c | 4 +-
target/arm/translate-vfp.c | 77 +++++++++++++++++++++++++++++++++++---
3 files changed, 75 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ long neon_full_reg_offset(unsigned reg);
long neon_element_offset(int reg, int element, MemOp memop);
void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
void clear_eci_state(DisasContext *s);
+bool mve_eci_check(DisasContext *s);
+void mve_update_and_store_eci(DisasContext *s);

static inline TCGv_i32 load_cpu_offset(int offset)
{
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool mve_check_qreg_bank(DisasContext *s, int qmask)
return qmask < 8;
}

-static bool mve_eci_check(DisasContext *s)
+bool mve_eci_check(DisasContext *s)
{
/*
 * This is a beatwise insn: check that ECI is valid (not a
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
}
}

-static void mve_update_and_store_eci(DisasContext *s)
+void mve_update_and_store_eci(DisasContext *s)
{
/*
 * For insns which don't call a helper function that will call
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
return true;
}

+static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
+{
+    /*
+     * In a CPU with MVE, the VMOV (vector lane to general-purpose register)
+     * and VMOV (general-purpose register to vector lane) insns are not
+     * predicated, but they are subject to beatwise execution if they are
+     * not in an IT block.
+     *
+     * Since our implementation always executes all 4 beats in one tick,
+     * this means only that if PSR.ECI says we should not be executing
+     * the beat corresponding to the lane of the vector register being
+     * accessed then we should skip performing the move, and that we need
+     * to do the usual check for bad ECI state and advance of ECI state.
+     *
+     * Note that if PSR.ECI is non-zero then we cannot be in an IT block.
+     *
+     * Return true if this VMOV scalar <-> gpreg should be skipped because
+     * the MVE PSR.ECI state says we skip the beat where the store happens.
+     */
+
+    /* Calculate the byte offset into Qn which we're going to access */
+    int ofs = (index << size) + ((vn & 1) * 8);
42
+static bool trans_VFML(DisasContext *s, arg_VFML *a)
88
+ /* Calculate the byte offset into Qn which we're going to access */
43
+{
89
+ int ofs = (index << size) + ((vn & 1) * 8);
44
+ int opr_sz;
45
+
90
+
46
+ if (!dc_isar_feature(aa32_fhm, s)) {
91
+ if (!dc_isar_feature(aa32_mve, s)) {
47
+ return false;
92
+ return false;
48
+ }
93
+ }
49
+
94
+
50
+ /* UNDEF accesses to D16-D31 if they don't exist. */
95
+ switch (s->eci) {
51
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
96
+ case ECI_NONE:
52
+ (a->vd & 0x10)) {
53
+ return false;
97
+ return false;
98
+ case ECI_A0:
99
+ return ofs < 4;
100
+ case ECI_A0A1:
101
+ return ofs < 8;
102
+ case ECI_A0A1A2:
103
+ case ECI_A0A1A2B0:
104
+ return ofs < 12;
105
+ default:
106
+ g_assert_not_reached();
107
+ }
108
+}
109
+
110
static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
111
{
112
/* VMOV scalar to general purpose register */
113
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
114
return false;
115
}
116
117
+ if (dc_isar_feature(aa32_mve, s)) {
118
+ if (!mve_eci_check(s)) {
119
+ return true;
120
+ }
54
+ }
121
+ }
55
+
122
+
56
+ if (a->vd & a->q) {
123
if (!vfp_access_check(s)) {
57
+ return false;
124
return true;
125
}
126
127
- tmp = tcg_temp_new_i32();
128
- read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
129
- store_reg(s, a->rt, tmp);
130
+ if (!mve_skip_vmov(s, a->vn, a->index, a->size)) {
131
+ tmp = tcg_temp_new_i32();
132
+ read_neon_element32(tmp, a->vn, a->index,
133
+ a->size | (a->u ? 0 : MO_SIGN));
134
+ store_reg(s, a->rt, tmp);
135
+ }
136
137
+ if (dc_isar_feature(aa32_mve, s)) {
138
+ mve_update_and_store_eci(s);
139
+ }
140
return true;
141
}
142
143
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
144
return false;
145
}
146
147
+ if (dc_isar_feature(aa32_mve, s)) {
148
+ if (!mve_eci_check(s)) {
149
+ return true;
150
+ }
58
+ }
151
+ }
59
+
152
+
60
+ if (!vfp_access_check(s)) {
153
if (!vfp_access_check(s)) {
61
+ return true;
154
return true;
155
}
156
157
- tmp = load_reg(s, a->rt);
158
- write_neon_element32(tmp, a->vn, a->index, a->size);
159
- tcg_temp_free_i32(tmp);
160
+ if (!mve_skip_vmov(s, a->vn, a->index, a->size)) {
161
+ tmp = load_reg(s, a->rt);
162
+ write_neon_element32(tmp, a->vn, a->index, a->size);
163
+ tcg_temp_free_i32(tmp);
62
+ }
164
+ }
63
+
165
64
+ opr_sz = (1 + a->q) * 8;
166
+ if (dc_isar_feature(aa32_mve, s)) {
65
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, a->vd),
167
+ mve_update_and_store_eci(s);
66
+ vfp_reg_offset(a->q, a->vn),
168
+ }
67
+ vfp_reg_offset(a->q, a->vm),
169
return true;
68
+ cpu_env, opr_sz, opr_sz, a->s, /* is_2 == 0 */
69
+ gen_helper_gvec_fmlal_a32);
70
+ return true;
71
+}
72
diff --git a/target/arm/translate.c b/target/arm/translate.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/translate.c
75
+++ b/target/arm/translate.c
76
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
77
return 0;
78
}
170
}
79
171
80
-/* Advanced SIMD three registers of the same length extension.
81
- * 31 25 23 22 20 16 12 11 10 9 8 3 0
82
- * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
83
- * | 1 1 1 1 1 1 0 | op1 | D | op2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
84
- * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
85
- */
86
-static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
87
-{
88
- gen_helper_gvec_3 *fn_gvec = NULL;
89
- gen_helper_gvec_3_ptr *fn_gvec_ptr = NULL;
90
- int rd, rn, rm, opr_sz;
91
- int data = 0;
92
- int off_rn, off_rm;
93
- bool is_long = false, q = extract32(insn, 6, 1);
94
- bool ptr_is_env = false;
95
-
96
- if ((insn & 0xff300f10) == 0xfc200810) {
97
- /* VFM[AS]L -- 1111 1100 S.10 .... .... 1000 .Q.1 .... */
98
- int is_s = extract32(insn, 23, 1);
99
- if (!dc_isar_feature(aa32_fhm, s)) {
100
- return 1;
101
- }
102
- is_long = true;
103
- data = is_s; /* is_2 == 0 */
104
- fn_gvec_ptr = gen_helper_gvec_fmlal_a32;
105
- ptr_is_env = true;
106
- } else {
107
- return 1;
108
- }
109
-
110
- VFP_DREG_D(rd, insn);
111
- if (rd & q) {
112
- return 1;
113
- }
114
- if (q || !is_long) {
115
- VFP_DREG_N(rn, insn);
116
- VFP_DREG_M(rm, insn);
117
- if ((rn | rm) & q & !is_long) {
118
- return 1;
119
- }
120
- off_rn = vfp_reg_offset(1, rn);
121
- off_rm = vfp_reg_offset(1, rm);
122
- } else {
123
- rn = VFP_SREG_N(insn);
124
- rm = VFP_SREG_M(insn);
125
- off_rn = vfp_reg_offset(0, rn);
126
- off_rm = vfp_reg_offset(0, rm);
127
- }
128
-
129
- if (s->fp_excp_el) {
130
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
131
- syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
132
- return 0;
133
- }
134
- if (!s->vfp_enabled) {
135
- return 1;
136
- }
137
-
138
- opr_sz = (1 + q) * 8;
139
- if (fn_gvec_ptr) {
140
- TCGv_ptr ptr;
141
- if (ptr_is_env) {
142
- ptr = cpu_env;
143
- } else {
144
- ptr = get_fpstatus_ptr(1);
145
- }
146
- tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd), off_rn, off_rm, ptr,
147
- opr_sz, opr_sz, data, fn_gvec_ptr);
148
- if (!ptr_is_env) {
149
- tcg_temp_free_ptr(ptr);
150
- }
151
- } else {
152
- tcg_gen_gvec_3_ool(vfp_reg_offset(1, rd), off_rn, off_rm,
153
- opr_sz, opr_sz, data, fn_gvec);
154
- }
155
- return 0;
156
-}
157
-
158
/* Advanced SIMD two registers and a scalar extension.
159
* 31 24 23 22 20 16 12 11 10 9 8 3 0
160
* +-----------------+----+---+----+----+----+---+----+---+----+---------+----+
161
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
162
}
163
}
164
}
165
- } else if ((insn & 0x0e000a00) == 0x0c000800
166
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
167
- if (disas_neon_insn_3same_ext(s, insn)) {
168
- goto illegal_op;
169
- }
170
- return;
171
} else if ((insn & 0x0f000a00) == 0x0e000800
172
&& arm_dc_feature(s, ARM_FEATURE_V8)) {
173
if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
174
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
175
}
176
break;
177
}
178
- if ((insn & 0xfe000a00) == 0xfc000800
179
+ if ((insn & 0xff000a00) == 0xfe000800
180
&& arm_dc_feature(s, ARM_FEATURE_V8)) {
181
/* The Thumb2 and ARM encodings are identical. */
182
- if (disas_neon_insn_3same_ext(s, insn)) {
183
- goto illegal_op;
184
- }
185
- } else if ((insn & 0xff000a00) == 0xfe000800
186
- && arm_dc_feature(s, ARM_FEATURE_V8)) {
187
- /* The Thumb2 and ARM encodings are identical. */
188
if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
189
goto illegal_op;
190
}
191
--
172
--
192
2.20.1
173
2.20.1
193
174
194
175
diff view generated by jsdifflib
In aarch64_max_initfn() we update both 32-bit and 64-bit ID
registers. The intended pattern is that for 64-bit ID registers we
use FIELD_DP64 and the uint64_t 't' register, while 32-bit ID
registers use FIELD_DP32 and the uint32_t 'u' register. For
ID_AA64DFR0 we accidentally used 'u', meaning that the top 32 bits of
this 64-bit ID register would end up always zero. Luckily at the
moment that's what they should be anyway, so this bug has no visible
effects.

Use the right-sized variable.
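
The width mismatch is easy to demonstrate in isolation (a minimal
sketch, with deposit64 as a simplified stand-in for the bit insertion
that the FIELD_DP64 macro performs; none of this is the QEMU code
itself):

    #include <stdint.h>
    #include <stdio.h>

    /* Insert 'val' into reg at bit positions [start, start+len). */
    static uint64_t deposit64(uint64_t reg, int start, int len, uint64_t val)
    {
        uint64_t mask = (~0ULL >> (64 - len)) << start;
        return (reg & ~mask) | ((val << start) & mask);
    }

    int main(void)
    {
        uint64_t id_reg = 0xffff0000ffff0000ULL;

        /* Correct: a uint64_t accumulator keeps bits [63:32]. */
        uint64_t t = deposit64(id_reg, 0, 4, 5);

        /* Buggy: going through a uint32_t silently drops bits [63:32]. */
        uint32_t u = (uint32_t)deposit64(id_reg, 0, 4, 5);

        printf("t = %016llx\n", (unsigned long long)t);
        printf("u = %016llx\n", (unsigned long long)u);
        return 0;
    }

Here u ends up as 00000000ffff0005: the low field is updated, but the
top 32 bits of the ID register value are lost.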

Fixes: 3bec78447a958d481991
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200423110915.10527-1-peter.maydell@linaro.org
---
target/arm/cpu64.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
cpu->isar.id_mmfr4 = u;

- u = cpu->isar.id_aa64dfr0;
- u = FIELD_DP64(u, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
- cpu->isar.id_aa64dfr0 = u;
+ t = cpu->isar.id_aa64dfr0;
+ t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
+ cpu->isar.id_aa64dfr0 = t;

u = cpu->isar.id_dfr0;
u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
--
2.20.1

From: Peter Collingbourne <pcc@google.com>

MTE3 introduces an asymmetric tag checking mode, in which loads are
checked synchronously and stores are checked asynchronously. Add
support for it.
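
In outline, the new mode just dispatches on the access direction; a
simplified standalone sketch (the stub helpers stand in for the
sync/async failure paths the patch factors out below):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stubs standing in for the two helpers the patch factors out. */
    static void sync_check_fail(void)  { puts("synchronous data abort"); }
    static void async_check_fail(void) { puts("TFSR flag set (async)"); }

    /* TCF: 0 = ignore, 1 = sync, 2 = async, 3 = asymmetric (MTE3) */
    static void tag_check_fail(int tcf, bool is_write)
    {
        switch (tcf) {
        case 1:
            sync_check_fail();
            break;
        case 2:
            async_check_fail();
            break;
        case 3:
            /* MTE3: stores fault asynchronously, loads synchronously */
            if (is_write) {
                async_check_fail();
            } else {
                sync_check_fail();
            }
            break;
        default:
            break; /* tcf == 0: tag check faults are ignored */
        }
    }

    int main(void)
    {
        tag_check_fail(3, true);   /* store -> async */
        tag_check_fail(3, false);  /* load  -> sync  */
        return 0;
    }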

Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210616195614.11785-1-pcc@google.com
[PMM: Add line to emulation.rst]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 2 +-
target/arm/mte_helper.c | 82 ++++++++++++++++++++++-------------
3 files changed, 53 insertions(+), 32 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_LSE (Large System Extensions)
- FEAT_MTE (Memory Tagging Extension)
- FEAT_MTE2 (Memory Tagging Extension)
+- FEAT_MTE3 (MTE Asymmetric Fault Handling)
- FEAT_PAN (Privileged access never)
- FEAT_PAN2 (AT S1E1R and AT S1E1W instruction variants affected by PSTATE.PAN)
- FEAT_PAuth (Pointer authentication)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
* during realize if the board provides no tag memory, much like
* we do for EL2 with the virtualization=on property.
*/
- t = FIELD_DP64(t, ID_AA64PFR1, MTE, 2);
+ t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
cpu->isar.id_aa64pfr1 = t;

t = cpu->isar.id_aa64mmfr0;
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
}
}

+static void mte_sync_check_fail(CPUARMState *env, uint32_t desc,
+ uint64_t dirty_ptr, uintptr_t ra)
+{
+ int is_write, syn;
+
+ env->exception.vaddress = dirty_ptr;
+
+ is_write = FIELD_EX32(desc, MTEDESC, WRITE);
+ syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0, is_write,
+ 0x11);
+ raise_exception_ra(env, EXCP_DATA_ABORT, syn, exception_target_el(env), ra);
+ g_assert_not_reached();
+}
+
+static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr,
+ uintptr_t ra, ARMMMUIdx arm_mmu_idx, int el)
+{
+ int select;
+
+ if (regime_has_2_ranges(arm_mmu_idx)) {
+ select = extract64(dirty_ptr, 55, 1);
+ } else {
+ select = 0;
+ }
+ env->cp15.tfsr_el[el] |= 1 << select;
+#ifdef CONFIG_USER_ONLY
+ /*
+ * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
+ * which then sends a SIGSEGV when the thread is next scheduled.
+ * This cpu will return to the main loop at the end of the TB,
+ * which is rather sooner than "normal". But the alternative
+ * is waiting until the next syscall.
+ */
+ qemu_cpu_kick(env_cpu(env));
+#endif
+}
+
/* Record a tag check failure. */
static void mte_check_fail(CPUARMState *env, uint32_t desc,
uint64_t dirty_ptr, uintptr_t ra)
{
int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
- int el, reg_el, tcf, select, is_write, syn;
+ int el, reg_el, tcf;
uint64_t sctlr;

reg_el = regime_el(env, arm_mmu_idx);
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
switch (tcf) {
case 1:
/* Tag check fail causes a synchronous exception. */
- env->exception.vaddress = dirty_ptr;
-
- is_write = FIELD_EX32(desc, MTEDESC, WRITE);
- syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0,
- is_write, 0x11);
- raise_exception_ra(env, EXCP_DATA_ABORT, syn,
- exception_target_el(env), ra);
- /* noreturn, but fall through to the assert anyway */
+ mte_sync_check_fail(env, desc, dirty_ptr, ra);
+ break;

case 0:
/*
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,

case 2:
/* Tag check fail causes asynchronous flag set. */
- if (regime_has_2_ranges(arm_mmu_idx)) {
- select = extract64(dirty_ptr, 55, 1);
- } else {
- select = 0;
- }
- env->cp15.tfsr_el[el] |= 1 << select;
-#ifdef CONFIG_USER_ONLY
- /*
- * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
- * which then sends a SIGSEGV when the thread is next scheduled.
- * This cpu will return to the main loop at the end of the TB,
- * which is rather sooner than "normal". But the alternative
- * is waiting until the next syscall.
- */
- qemu_cpu_kick(env_cpu(env));
-#endif
+ mte_async_check_fail(env, dirty_ptr, ra, arm_mmu_idx, el);
break;

- default:
- /* Case 3: Reserved. */
- qemu_log_mask(LOG_GUEST_ERROR,
- "Tag check failure with SCTLR_EL%d.TCF%s "
- "set to reserved value %d\n",
- reg_el, el ? "" : "0", tcf);
+ case 3:
+ /*
+ * Tag check fail causes asynchronous flag set for stores, or
+ * a synchronous exception for loads.
+ */
+ if (FIELD_EX32(desc, MTEDESC, WRITE)) {
+ mte_async_check_fail(env, dirty_ptr, ra, arm_mmu_idx, el);
+ } else {
+ mte_sync_check_fail(env, desc, dirty_ptr, ra);
+ }
break;
}
}
--
2.20.1
diff view generated by jsdifflib
From: Fredrik Strupe <fredrik@strupe.net>

According to the Arm ARM, VQDMULL is only valid when U=0, while having
U=1 is unallocated.
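
For context, the fourth column of the decoder table touched below is
an 'undefreq' bitmask; assuming the legacy decoder's convention (bits
0-2: UNDEF for size 0/1/2, bit 3: UNDEF when U == 1), a small sketch
of the effect of changing the entry from 1 to 9:

    #include <stdbool.h>
    #include <stdio.h>

    /* Assumed undefreq check: UNDEF for a masked size, or for U=1. */
    static bool undef_3reg_wide(int undefreq, int size, int u)
    {
        return (undefreq & (1 << size)) || ((undefreq & 8) && u);
    }

    int main(void)
    {
        /* Old entry 1: VQDMULL with U=1 was wrongly accepted for size > 0 */
        printf("old, size=1, U=1: UNDEF=%d\n", undef_3reg_wide(1, 1, 1));
        /* New entry 9 (= 1 | 8): U=1 now UNDEFs as the Arm ARM requires */
        printf("new, size=1, U=1: UNDEF=%d\n", undef_3reg_wide(9, 1, 1));
        return 0;
    }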

Signed-off-by: Fredrik Strupe <fredrik@strupe.net>
Fixes: 695272dcb976 ("target-arm: Handle UNDEF cases for Neon 3-regs-different-widths")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
{0, 0, 0, 0}, /* VMLSL */
{0, 0, 0, 9}, /* VQDMLSL */
{0, 0, 0, 0}, /* Integer VMULL */
- {0, 0, 0, 1}, /* VQDMULL */
+ {0, 0, 0, 9}, /* VQDMULL */
{0, 0, 0, 0xa}, /* Polynomial VMULL */
{0, 0, 0, 7}, /* Reserved: always UNDEF */
};
--
2.20.1

From: Alexandre Iooss <erdnaxe@crans.org>

This adds the target guide for the BBC Micro:bit.

Information is taken from https://wiki.qemu.org/Features/MicroBit
and from hw/arm/nrf51_soc.c.

Signed-off-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20210621075625.540471-1-erdnaxe@crans.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/nrf.rst | 51 ++++++++++++++++++++++++++++++++++++++
docs/system/target-arm.rst | 1 +
MAINTAINERS | 1 +
3 files changed, 53 insertions(+)
create mode 100644 docs/system/arm/nrf.rst

diff --git a/docs/system/arm/nrf.rst b/docs/system/arm/nrf.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/nrf.rst
@@ -XXX,XX +XXX,XX @@
+Nordic nRF boards (``microbit``)
+================================
+
+The `Nordic nRF`_ chips are a family of ARM-based System-on-Chip that
+are designed to be used for low-power and short-range wireless solutions.
+
+.. _Nordic nRF: https://www.nordicsemi.com/Products
+
+The nRF51 series is the first series for short range wireless applications.
+It is superseded by the nRF52 series.
+The following machines are based on this chip :
+
+- ``microbit`` BBC micro:bit board with nRF51822 SoC
+
+There are other series such as nRF52, nRF53 and nRF91 which are currently not
+supported by QEMU.
+
+Supported devices
+-----------------
+
+ * ARM Cortex-M0 (ARMv6-M)
+ * Serial ports (UART)
+ * Clock controller
+ * Timers
+ * Random Number Generator (RNG)
+ * GPIO controller
+ * NVMC
+ * SWI
+
+Missing devices
+---------------
+
+ * Watchdog
+ * Real-Time Clock (RTC) controller
+ * TWI (i2c)
+ * SPI controller
+ * Analog to Digital Converter (ADC)
+ * Quadrature decoder
+ * Radio
+
+Boot options
+------------
+
+The Micro:bit machine can be started using the ``-device`` option to load a
+firmware in `ihex format`_. Example:
+
+.. _ihex format: https://en.wikipedia.org/wiki/Intel_HEX
+
+.. code-block:: bash
+
+ $ qemu-system-arm -M microbit -device loader,file=test.hex
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
arm/digic
arm/musicpal
arm/gumstix
+ arm/nrf
arm/nseries
arm/nuvoton
arm/orangepi
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/*/microbit*.c
F: include/hw/*/nrf51*.h
F: include/hw/*/microbit*.h
F: tests/qtest/microbit-test.c
+F: docs/system/arm/nrf.rst

AVR Machines
-------------
--
2.20.1
diff view generated by jsdifflib