Another target-arm queue, since we're over 30 patches
already. Most of this is RTH's SVE-patches-part-1.

thanks
-- PMM

The following changes since commit d32e41a1188e929cc0fb16829ce3736046951e39:

  Merge remote-tracking branch 'remotes/famz/tags/docker-and-block-pull-request' into staging (2018-05-18 14:11:52 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180518

for you to fetch changes up to b94f8f60bd841c5b737185cd38263e26822f77ab:

  target/arm: Implement SVE Permute - Extract Group (2018-05-18 17:48:09 +0100)

----------------------------------------------------------------
target-arm queue:
 * Initial part of SVE implementation (currently disabled)
 * smmuv3: fix some minor Coverity issues
 * add model of Xilinx ZynqMP generic DMA controller
 * expose (most) Arm coprocessor/system registers to
   gdb via QEMU's gdbstub, for reads only

----------------------------------------------------------------
Abdallah Bouassida (3):
      target/arm: Add "ARM_CP_NO_GDB" as a new bit field for ARMCPRegInfo type
      target/arm: Add "_S" suffix to the secure version of a sysreg
      target/arm: Add the XML dynamic generation

Eric Auger (2):
      hw/arm/smmuv3: Fix Coverity issue in smmuv3_record_event
      hw/arm/smmu-common: Fix coverity issue in get_block_pte_address

Francisco Iglesias (2):
      xlnx-zdma: Add a model of the Xilinx ZynqMP generic DMA
      xlnx-zynqmp: Connect the ZynqMP GDMA and ADMA

Richard Henderson (25):
      target/arm: Introduce translate-a64.h
      target/arm: Add SVE decode skeleton
      target/arm: Implement SVE Bitwise Logical - Unpredicated Group
      target/arm: Implement SVE load vector/predicate
      target/arm: Implement SVE predicate test
      target/arm: Implement SVE Predicate Logical Operations Group
      target/arm: Implement SVE Predicate Misc Group
      target/arm: Implement SVE Integer Binary Arithmetic - Predicated Group
      target/arm: Implement SVE Integer Reduction Group
      target/arm: Implement SVE bitwise shift by immediate (predicated)
      target/arm: Implement SVE bitwise shift by vector (predicated)
      target/arm: Implement SVE bitwise shift by wide elements (predicated)
      target/arm: Implement SVE Integer Arithmetic - Unary Predicated Group
      target/arm: Implement SVE Integer Multiply-Add Group
      target/arm: Implement SVE Integer Arithmetic - Unpredicated Group
      target/arm: Implement SVE Index Generation Group
      target/arm: Implement SVE Stack Allocation Group
      target/arm: Implement SVE Bitwise Shift - Unpredicated Group
      target/arm: Implement SVE Compute Vector Address Group
      target/arm: Implement SVE floating-point exponential accelerator
      target/arm: Implement SVE floating-point trig select coefficient
      target/arm: Implement SVE Element Count Group
      target/arm: Implement SVE Bitwise Immediate Group
      target/arm: Implement SVE Integer Wide Immediate - Predicated Group
      target/arm: Implement SVE Permute - Extract Group

 hw/dma/Makefile.objs | 1 +
 target/arm/Makefile.objs | 10 +
 include/hw/arm/xlnx-zynqmp.h | 5 +
 include/hw/dma/xlnx-zdma.h | 84 ++
 include/qom/cpu.h | 5 +-
 target/arm/cpu.h | 37 +-
 target/arm/helper-sve.h | 427 +++++++++
 target/arm/helper.h | 1 +
 target/arm/translate-a64.h | 118 +++
 gdbstub.c | 10 +
 hw/arm/smmu-common.c | 4 +-
 hw/arm/smmuv3.c | 2 +-
 hw/arm/xlnx-zynqmp.c | 53 ++
 hw/dma/xlnx-zdma.c | 832 +++++++++++++++++
 target/arm/cpu.c | 1 +
 target/arm/gdbstub.c | 76 ++
 target/arm/helper.c | 57 +-
 target/arm/sve_helper.c | 1562 +++++++++++++++++++++++++++++++
 target/arm/translate-a64.c | 119 +--
 target/arm/translate-sve.c | 2070 ++++++++++++++++++++++++++++++++++++++++++
 .gitignore | 1 +
 target/arm/sve.decode | 419 +++++++++
 22 files changed, 5778 insertions(+), 116 deletions(-)
 create mode 100644 include/hw/dma/xlnx-zdma.h
 create mode 100644 target/arm/helper-sve.h
 create mode 100644 target/arm/translate-a64.h
 create mode 100644 hw/dma/xlnx-zdma.c
 create mode 100644 target/arm/sve_helper.c
 create mode 100644 target/arm/translate-sve.c
 create mode 100644 target/arm/sve.decode


target-arm queue: nothing big, just a collection of minor things.

-- PMM

The following changes since commit ae3aa5da96f4ccf0c2a28851449d92db9fcfad71:

  Merge remote-tracking branch 'remotes/berrange/tags/socket-next-pull-request' into staging (2020-05-21 16:47:28 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200521

for you to fetch changes up to 17b5df7b65d0192c5d775b5e1581518580774c77:

  linux-user/arm/signal.c: Drop TARGET_CONFIG_CPU_32 (2020-05-21 20:00:19 +0100)

----------------------------------------------------------------
target-arm queue:
 * tests/acceptance: Add a test for the canon-a1100 machine
 * docs/system: Document some of the Arm development boards
 * linux-user: make BKPT insn cause SIGTRAP, not be a syscall
 * target/arm: Remove unused GEN_NEON_INTEGER_OP macro
 * fsl-imx25, fsl-imx31, fsl-imx6, fsl-imx6ul, fsl-imx7: implement watchdog
 * hw/arm: Use qemu_log_mask() instead of hw_error() in various places
 * ARM: PL061: Introduce N_GPIOS
 * target/arm: Improve clear_vec_high() usage
 * target/arm: Allow user-mode code to write CPSR.E via MSR
 * linux-user/arm: Reset CPSR_E when entering a signal handler
 * linux-user/arm/signal.c: Drop TARGET_CONFIG_CPU_32

----------------------------------------------------------------
Amanieu d'Antras (1):
      linux-user/arm: Reset CPSR_E when entering a signal handler

Geert Uytterhoeven (1):
      ARM: PL061: Introduce N_GPIOS

Guenter Roeck (8):
      hw: Move i.MX watchdog driver to hw/watchdog
      hw/watchdog: Implement full i.MX watchdog support
      hw/arm/fsl-imx25: Wire up watchdog
      hw/arm/fsl-imx31: Wire up watchdog
      hw/arm/fsl-imx6: Connect watchdog interrupts
      hw/arm/fsl-imx6ul: Connect watchdog interrupts
      hw/arm/fsl-imx7: Instantiate various unimplemented devices
      hw/arm/fsl-imx7: Connect watchdog interrupts

Peter Maydell (12):
      docs/system: Add 'Arm' to the Integrator/CP document title
      docs/system: Sort Arm board index into alphabetical order
      docs/system: Document Arm Versatile Express boards
      docs/system: Document the various MPS2 models
      docs/system: Document Musca boards
      linux-user/arm: BKPT should cause SIGTRAP, not be a syscall
      linux-user/arm: Remove bogus SVC 0xf0002 handling
      linux-user/arm: Handle invalid arm-specific syscalls correctly
      linux-user/arm: Fix identification of syscall numbers
      target/arm: Remove unused GEN_NEON_INTEGER_OP macro
      target/arm: Allow user-mode code to write CPSR.E via MSR
      linux-user/arm/signal.c: Drop TARGET_CONFIG_CPU_32

Philippe Mathieu-Daudé (4):
      hw/arm/integratorcp: Replace hw_error() by qemu_log_mask()
      hw/arm/pxa2xx: Replace hw_error() by qemu_log_mask()
      hw/char/xilinx_uartlite: Replace hw_error() by qemu_log_mask()
      hw/timer/exynos4210_mct: Replace hw_error() by qemu_log_mask()

Richard Henderson (2):
      target/arm: Use tcg_gen_gvec_mov for clear_vec_high
      target/arm: Use clear_vec_high more effectively

Thomas Huth (1):
      tests/acceptance: Add a test for the canon-a1100 machine

 docs/system/arm/integratorcp.rst | 4 +-
 docs/system/arm/mps2.rst | 29 +++
 docs/system/arm/musca.rst | 31 +++
 docs/system/arm/vexpress.rst | 60 ++++++
 docs/system/target-arm.rst | 20 +-
 include/hw/arm/fsl-imx25.h | 5 +
 include/hw/arm/fsl-imx31.h | 4 +
 include/hw/arm/fsl-imx6.h | 2 +-
 include/hw/arm/fsl-imx6ul.h | 2 +-
 include/hw/arm/fsl-imx7.h | 23 ++-
 include/hw/misc/imx2_wdt.h | 33 ----
 include/hw/watchdog/wdt_imx2.h | 90 +++++++++
 target/arm/cpu.h | 2 +-
 hw/arm/fsl-imx25.c | 10 +
 hw/arm/fsl-imx31.c | 6 +
 hw/arm/fsl-imx6.c | 9 +
 hw/arm/fsl-imx6ul.c | 10 +
 hw/arm/fsl-imx7.c | 35 ++++
 hw/arm/integratorcp.c | 23 ++-
 hw/arm/pxa2xx_gpio.c | 7 +-
 hw/char/xilinx_uartlite.c | 5 +-
 hw/display/pxa2xx_lcd.c | 8 +-
 hw/dma/pxa2xx_dma.c | 14 +-
 hw/gpio/pl061.c | 12 +-
 hw/misc/imx2_wdt.c | 90 ---------
 hw/timer/exynos4210_mct.c | 12 +-
 hw/watchdog/wdt_imx2.c | 303 +++++++++++++++++++++++++++++
 linux-user/arm/cpu_loop.c | 145 ++++++++------
 linux-user/arm/signal.c | 15 +-
 target/arm/translate-a64.c | 63 +++---
 target/arm/translate.c | 23 ---
 MAINTAINERS | 6 +
 hw/arm/Kconfig | 5 +
 hw/misc/Makefile.objs | 1 -
 hw/watchdog/Kconfig | 3 +
 hw/watchdog/Makefile.objs | 1 +
 tests/acceptance/machine_arm_canona1100.py | 35 ++++
 37 files changed, 854 insertions(+), 292 deletions(-)
 create mode 100644 docs/system/arm/mps2.rst
 create mode 100644 docs/system/arm/musca.rst
 create mode 100644 docs/system/arm/vexpress.rst
 delete mode 100644 include/hw/misc/imx2_wdt.h
 create mode 100644 include/hw/watchdog/wdt_imx2.h
 delete mode 100644 hw/misc/imx2_wdt.c
 create mode 100644 hw/watchdog/wdt_imx2.c
 create mode 100644 tests/acceptance/machine_arm_canona1100.py

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 2 +
 target/arm/sve_helper.c | 81 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 34 ++++++++++++++++
 target/arm/sve.decode | 7 ++++
 4 files changed, 124 insertions(+)

From: Thomas Huth <thuth@redhat.com>

The canon-a1100 machine can be used with the Barebox firmware. The
QEMU Advent Calendar 2018 features a pre-compiled image which we
can use for testing.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200514190422.23645-1-f4bug@amsat.org
Message-Id: <20200129090420.13954-1-thuth@redhat.com>
[PMD: Rebased MAINTAINERS]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 1 +
 tests/acceptance/machine_arm_canona1100.py | 35 ++++++++++++++++++++++
 2 files changed, 36 insertions(+)
 create mode 100644 tests/acceptance/machine_arm_canona1100.py
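As a rough usage note (these commands are an assumption about a typical
checkout, not part of the patch itself): the new test is driven by the
Avocado framework, so it is normally picked up by 'make check-acceptance'
or can be run individually with
'avocado run tests/acceptance/machine_arm_canona1100.py'. To reproduce by
hand what the test does with the same Advent Calendar day18 image,
something like

    qemu-system-arm -M canon-a1100 -bios day18/barebox.canon-a1100.bin -nographic

should boot Barebox to the 'running /env/bin/init' console output the
test waits for.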
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
24
diff --git a/MAINTAINERS b/MAINTAINERS
15
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
26
--- a/MAINTAINERS
17
+++ b/target/arm/helper-sve.h
27
+++ b/MAINTAINERS
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
28
@@ -XXX,XX +XXX,XX @@ S: Odd Fixes
19
DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
29
F: include/hw/arm/digic.h
20
DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
30
F: hw/*/digic*
21
31
F: include/hw/*/digic*
22
+DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
+F: tests/acceptance/machine_arm_canona1100.py
33
34
Goldfish RTC
35
M: Anup Patel <anup.patel@wdc.com>
36
diff --git a/tests/acceptance/machine_arm_canona1100.py b/tests/acceptance/machine_arm_canona1100.py
37
new file mode 100644
38
index XXXXXXX..XXXXXXX
39
--- /dev/null
40
+++ b/tests/acceptance/machine_arm_canona1100.py
41
@@ -XXX,XX +XXX,XX @@
42
+# Functional test that boots the canon-a1100 machine with firmware
43
+#
44
+# Copyright (c) 2020 Red Hat, Inc.
45
+#
46
+# Author:
47
+# Thomas Huth <thuth@redhat.com>
48
+#
49
+# This work is licensed under the terms of the GNU GPL, version 2 or
50
+# later. See the COPYING file in the top-level directory.
23
+
51
+
24
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
52
+from avocado_qemu import Test
25
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
53
+from avocado_qemu import wait_for_console_pattern
26
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
54
+from avocado.utils import archive
27
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/sve_helper.c
30
+++ b/target/arm/sve_helper.c
31
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
32
d[i] = (pg[H1(i)] & 1 ? val : 0);
33
}
34
}
35
+
55
+
36
+/* Big-endian hosts need to frob the byte indicies. If the copy
56
+class CanonA1100Machine(Test):
37
+ * happens to be 8-byte aligned, then no frobbing necessary.
57
+ """Boots the barebox firmware and checks that the console is operational"""
38
+ */
39
+static void swap_memmove(void *vd, void *vs, size_t n)
40
+{
41
+ uintptr_t d = (uintptr_t)vd;
42
+ uintptr_t s = (uintptr_t)vs;
43
+ uintptr_t o = (d | s | n) & 7;
44
+ size_t i;
45
+
58
+
46
+#ifndef HOST_WORDS_BIGENDIAN
59
+ timeout = 90
47
+ o = 0;
48
+#endif
49
+ switch (o) {
50
+ case 0:
51
+ memmove(vd, vs, n);
52
+ break;
53
+
60
+
54
+ case 4:
61
+ def test_arm_canona1100(self):
55
+ if (d < s || d >= s + n) {
62
+ """
56
+ for (i = 0; i < n; i += 4) {
63
+ :avocado: tags=arch:arm
57
+ *(uint32_t *)H1_4(d + i) = *(uint32_t *)H1_4(s + i);
64
+ :avocado: tags=machine:canon-a1100
58
+ }
65
+ :avocado: tags=device:pflash_cfi02
59
+ } else {
66
+ """
60
+ for (i = n; i > 0; ) {
67
+ tar_url = ('https://www.qemu-advent-calendar.org'
61
+ i -= 4;
68
+ '/2018/download/day18.tar.xz')
62
+ *(uint32_t *)H1_4(d + i) = *(uint32_t *)H1_4(s + i);
69
+ tar_hash = '068b5fc4242b29381acee94713509f8a876e9db6'
63
+ }
70
+ file_path = self.fetch_asset(tar_url, asset_hash=tar_hash)
64
+ }
71
+ archive.extract(file_path, self.workdir)
65
+ break;
72
+ self.vm.set_console()
66
+
73
+ self.vm.add_args('-bios',
67
+ case 2:
74
+ self.workdir + '/day18/barebox.canon-a1100.bin')
68
+ case 6:
75
+ self.vm.launch()
69
+ if (d < s || d >= s + n) {
76
+ wait_for_console_pattern(self, 'running /env/bin/init')
70
+ for (i = 0; i < n; i += 2) {
71
+ *(uint16_t *)H1_2(d + i) = *(uint16_t *)H1_2(s + i);
72
+ }
73
+ } else {
74
+ for (i = n; i > 0; ) {
75
+ i -= 2;
76
+ *(uint16_t *)H1_2(d + i) = *(uint16_t *)H1_2(s + i);
77
+ }
78
+ }
79
+ break;
80
+
81
+ default:
82
+ if (d < s || d >= s + n) {
83
+ for (i = 0; i < n; i++) {
84
+ *(uint8_t *)H1(d + i) = *(uint8_t *)H1(s + i);
85
+ }
86
+ } else {
87
+ for (i = n; i > 0; ) {
88
+ i -= 1;
89
+ *(uint8_t *)H1(d + i) = *(uint8_t *)H1(s + i);
90
+ }
91
+ }
92
+ break;
93
+ }
94
+}
95
+
96
+void HELPER(sve_ext)(void *vd, void *vn, void *vm, uint32_t desc)
97
+{
98
+ intptr_t opr_sz = simd_oprsz(desc);
99
+ size_t n_ofs = simd_data(desc);
100
+ size_t n_siz = opr_sz - n_ofs;
101
+
102
+ if (vd != vm) {
103
+ swap_memmove(vd, vn + n_ofs, n_siz);
104
+ swap_memmove(vd + n_siz, vm, n_ofs);
105
+ } else if (vd != vn) {
106
+ swap_memmove(vd + n_siz, vd, n_ofs);
107
+ swap_memmove(vd, vn + n_ofs, n_siz);
108
+ } else {
109
+ /* vd == vn == vm. Need temp space. */
110
+ ARMVectorReg tmp;
111
+ swap_memmove(&tmp, vm, n_ofs);
112
+ swap_memmove(vd, vd + n_ofs, n_siz);
113
+ memcpy(vd + n_siz, &tmp, n_ofs);
114
+ }
115
+}
116
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate-sve.c
119
+++ b/target/arm/translate-sve.c
120
@@ -XXX,XX +XXX,XX @@ static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a, uint32_t insn)
121
return true;
122
}
123
124
+/*
125
+ *** SVE Permute Extract Group
126
+ */
127
+
128
+static bool trans_EXT(DisasContext *s, arg_EXT *a, uint32_t insn)
129
+{
130
+ if (!sve_access_check(s)) {
131
+ return true;
132
+ }
133
+
134
+ unsigned vsz = vec_full_reg_size(s);
135
+ unsigned n_ofs = a->imm >= vsz ? 0 : a->imm;
136
+ unsigned n_siz = vsz - n_ofs;
137
+ unsigned d = vec_full_reg_offset(s, a->rd);
138
+ unsigned n = vec_full_reg_offset(s, a->rn);
139
+ unsigned m = vec_full_reg_offset(s, a->rm);
140
+
141
+ /* Use host vector move insns if we have appropriate sizes
142
+ * and no unfortunate overlap.
143
+ */
144
+ if (m != d
145
+ && n_ofs == size_for_gvec(n_ofs)
146
+ && n_siz == size_for_gvec(n_siz)
147
+ && (d != n || n_siz <= n_ofs)) {
148
+ tcg_gen_gvec_mov(0, d, n + n_ofs, n_siz, n_siz);
149
+ if (n_ofs != 0) {
150
+ tcg_gen_gvec_mov(0, d + n_siz, m, n_ofs, n_ofs);
151
+ }
152
+ } else {
153
+ tcg_gen_gvec_3_ool(d, n, m, vsz, vsz, n_ofs, gen_helper_sve_ext);
154
+ }
155
+ return true;
156
+}
157
+
158
/*
159
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
160
*/
161
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/sve.decode
164
+++ b/target/arm/sve.decode
165
@@ -XXX,XX +XXX,XX @@
166
167
%imm4_16_p1 16:4 !function=plus1
168
%imm6_22_5 22:1 5:5
169
+%imm8_16_10 16:5 10:3
170
%imm9_16_10 16:s6 10:3
171
172
# A combination of tsz:imm3 -- extract esize.
173
@@ -XXX,XX +XXX,XX @@ FCPY 00000101 .. 01 .... 110 imm:8 ..... @rdn_pg4
174
CPY_m_i 00000101 .. 01 .... 01 . ........ ..... @rdn_pg4 imm=%sh8_i8s
175
CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
176
177
+### SVE Permute - Extract Group
178
+
179
+# SVE extract vector (immediate offset)
180
+EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
181
+ &rrri rn=%reg_movprfx imm=%imm8_16_10
182
+
183
### SVE Predicate Logical Operations Group
184
185
# SVE predicate logical operations
186
--
77
--
187
2.17.0
78
2.20.1
188
79
189
80
Add 'Arm' to the Integrator/CP document title, for consistency with
the titling of the other documentation of Arm devboard models
(versatile, realview).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20200507151819.28444-2-peter.maydell@linaro.org
---
 docs/system/arm/integratorcp.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

From: Eric Auger <eric.auger@redhat.com>

Coverity points out that the block size computed in
get_block_pte_address() can overflow if n > 31, because the shift is
only done in 32-bit arithmetic. Let's use 1ULL instead of 1. Also the
formula used to compute n can be replaced by the level_shift() macro.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1526493784-25328-3-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
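To make the arithmetic concrete, a minimal sketch of the overflow
described above (this snippet is not part of the patch; granule_sz = 12
and level = 0 are simply assumed example inputs):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int granule_sz = 12, level = 0;
        /* Same formula the old code used: here n = 39, which is too
         * large for a 32-bit "1 << n". */
        int n = (granule_sz - 3) * (4 - level) + 3;
        uint64_t bsz = 1ULL << n;   /* the ULL suffix keeps the shift 64-bit */

        printf("n=%d block size=0x%" PRIx64 "\n", n, bsz);
        return 0;
    }

With 64-bit arithmetic the block size comes out as 0x8000000000; with a
plain 32-bit "1 << n" the shift is undefined for any n > 31.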
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
14
diff --git a/docs/system/arm/integratorcp.rst b/docs/system/arm/integratorcp.rst
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/smmu-common.c
16
--- a/docs/system/arm/integratorcp.rst
21
+++ b/hw/arm/smmu-common.c
17
+++ b/docs/system/arm/integratorcp.rst
22
@@ -XXX,XX +XXX,XX @@ static inline hwaddr get_table_pte_address(uint64_t pte, int granule_sz)
18
@@ -XXX,XX +XXX,XX @@
23
static inline hwaddr get_block_pte_address(uint64_t pte, int level,
19
-Integrator/CP (``integratorcp``)
24
int granule_sz, uint64_t *bsz)
20
-================================
25
{
21
+Arm Integrator/CP (``integratorcp``)
26
- int n = (granule_sz - 3) * (4 - level) + 3;
22
+====================================
27
+ int n = level_shift(level, granule_sz);
23
28
24
The Arm Integrator/CP board is emulated with the following devices:
29
- *bsz = 1 << n;
30
+ *bsz = 1ULL << n;
31
return PTE_ADDRESS(pte, n);
32
}
33
25
34
--
26
--
35
2.17.0
27
2.20.1
36
28
37
29
Sort the board index into alphabetical order. (Note that we need to
sort alphabetically by the title text of each file, which isn't the
same ordering as sorting by the filename.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20200507151819.28444-3-peter.maydell@linaro.org
---
 docs/system/target-arm.rst | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-sve.h | 10 ++++
 target/arm/sve_helper.c | 108 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 88 ++++++++++++++++++++++++++++++
 target/arm/sve.decode | 19 ++++++-
 4 files changed, 224 insertions(+), 1 deletion(-)
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/docs/system/target-arm.rst
17
+++ b/target/arm/helper-sve.h
17
+++ b/docs/system/target-arm.rst
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_uqaddi_s, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
18
@@ -XXX,XX +XXX,XX @@ Unfortunately many of the Arm boards QEMU supports are currently
19
DEF_HELPER_FLAGS_4(sve_uqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
19
undocumented; you can get a complete list by running
20
DEF_HELPER_FLAGS_4(sve_uqsubi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
20
``qemu-system-aarch64 --machine help``.
21
21
22
+DEF_HELPER_FLAGS_5(sve_cpy_m_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
22
+..
23
+DEF_HELPER_FLAGS_5(sve_cpy_m_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
23
+ This table of contents should be kept sorted alphabetically
24
+DEF_HELPER_FLAGS_5(sve_cpy_m_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
24
+ by the title text of each file, which isn't the same ordering
25
+DEF_HELPER_FLAGS_5(sve_cpy_m_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i64, i32)
25
+ as an alphabetical sort by filename.
26
+
26
+
27
+DEF_HELPER_FLAGS_4(sve_cpy_z_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
27
.. toctree::
28
+DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
28
:maxdepth: 1
29
+DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
29
30
+DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
30
arm/integratorcp
31
+
31
- arm/versatile
32
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
32
arm/realview
33
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
33
- arm/xscale
34
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
34
- arm/palm
35
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
35
- arm/nseries
36
index XXXXXXX..XXXXXXX 100644
36
- arm/stellaris
37
--- a/target/arm/sve_helper.c
37
+ arm/versatile
38
+++ b/target/arm/sve_helper.c
38
arm/musicpal
39
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uqsubi_d)(void *d, void *a, uint64_t b, uint32_t desc)
39
- arm/sx1
40
*(uint64_t *)(d + i) = (ai < b ? 0 : ai - b);
40
+ arm/nseries
41
}
41
arm/orangepi
42
}
42
+ arm/palm
43
+
43
+ arm/xscale
44
+/* Two operand predicated copy immediate with merge. All valid immediates
44
+ arm/sx1
45
+ * can fit within 17 signed bits in the simd_data field.
45
+ arm/stellaris
46
+ */
46
47
+void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
47
Arm CPU features
48
+ uint64_t mm, uint32_t desc)
48
================
49
+{
50
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
51
+ uint64_t *d = vd, *n = vn;
52
+ uint8_t *pg = vg;
53
+
54
+ mm = dup_const(MO_8, mm);
55
+ for (i = 0; i < opr_sz; i += 1) {
56
+ uint64_t nn = n[i];
57
+ uint64_t pp = expand_pred_b(pg[H1(i)]);
58
+ d[i] = (mm & pp) | (nn & ~pp);
59
+ }
60
+}
61
+
62
+void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
63
+ uint64_t mm, uint32_t desc)
64
+{
65
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
66
+ uint64_t *d = vd, *n = vn;
67
+ uint8_t *pg = vg;
68
+
69
+ mm = dup_const(MO_16, mm);
70
+ for (i = 0; i < opr_sz; i += 1) {
71
+ uint64_t nn = n[i];
72
+ uint64_t pp = expand_pred_h(pg[H1(i)]);
73
+ d[i] = (mm & pp) | (nn & ~pp);
74
+ }
75
+}
76
+
77
+void HELPER(sve_cpy_m_s)(void *vd, void *vn, void *vg,
78
+ uint64_t mm, uint32_t desc)
79
+{
80
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
81
+ uint64_t *d = vd, *n = vn;
82
+ uint8_t *pg = vg;
83
+
84
+ mm = dup_const(MO_32, mm);
85
+ for (i = 0; i < opr_sz; i += 1) {
86
+ uint64_t nn = n[i];
87
+ uint64_t pp = expand_pred_s(pg[H1(i)]);
88
+ d[i] = (mm & pp) | (nn & ~pp);
89
+ }
90
+}
91
+
92
+void HELPER(sve_cpy_m_d)(void *vd, void *vn, void *vg,
93
+ uint64_t mm, uint32_t desc)
94
+{
95
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
96
+ uint64_t *d = vd, *n = vn;
97
+ uint8_t *pg = vg;
98
+
99
+ for (i = 0; i < opr_sz; i += 1) {
100
+ uint64_t nn = n[i];
101
+ d[i] = (pg[H1(i)] & 1 ? mm : nn);
102
+ }
103
+}
104
+
105
+void HELPER(sve_cpy_z_b)(void *vd, void *vg, uint64_t val, uint32_t desc)
106
+{
107
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
108
+ uint64_t *d = vd;
109
+ uint8_t *pg = vg;
110
+
111
+ val = dup_const(MO_8, val);
112
+ for (i = 0; i < opr_sz; i += 1) {
113
+ d[i] = val & expand_pred_b(pg[H1(i)]);
114
+ }
115
+}
116
+
117
+void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
118
+{
119
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
120
+ uint64_t *d = vd;
121
+ uint8_t *pg = vg;
122
+
123
+ val = dup_const(MO_16, val);
124
+ for (i = 0; i < opr_sz; i += 1) {
125
+ d[i] = val & expand_pred_h(pg[H1(i)]);
126
+ }
127
+}
128
+
129
+void HELPER(sve_cpy_z_s)(void *vd, void *vg, uint64_t val, uint32_t desc)
130
+{
131
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
132
+ uint64_t *d = vd;
133
+ uint8_t *pg = vg;
134
+
135
+ val = dup_const(MO_32, val);
136
+ for (i = 0; i < opr_sz; i += 1) {
137
+ d[i] = val & expand_pred_s(pg[H1(i)]);
138
+ }
139
+}
140
+
141
+void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t val, uint32_t desc)
142
+{
143
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
144
+ uint64_t *d = vd;
145
+ uint8_t *pg = vg;
146
+
147
+ for (i = 0; i < opr_sz; i += 1) {
148
+ d[i] = (pg[H1(i)] & 1 ? val : 0);
149
+ }
150
+}
151
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
152
index XXXXXXX..XXXXXXX 100644
153
--- a/target/arm/translate-sve.c
154
+++ b/target/arm/translate-sve.c
155
@@ -XXX,XX +XXX,XX @@ static inline int plus1(int x)
156
return x + 1;
157
}
158
159
+/* The SH bit is in bit 8. Extract the low 8 and shift. */
160
+static inline int expand_imm_sh8s(int x)
161
+{
162
+ return (int8_t)x << (x & 0x100 ? 8 : 0);
163
+}
164
+
165
/*
166
* Include the generated decoder.
167
*/
168
@@ -XXX,XX +XXX,XX @@ static bool trans_DUPM(DisasContext *s, arg_DUPM *a, uint32_t insn)
169
return true;
170
}
171
172
+/*
173
+ *** SVE Integer Wide Immediate - Predicated Group
174
+ */
175
+
176
+/* Implement all merging copies. This is used for CPY (immediate),
177
+ * FCPY, CPY (scalar), CPY (SIMD&FP scalar).
178
+ */
179
+static void do_cpy_m(DisasContext *s, int esz, int rd, int rn, int pg,
180
+ TCGv_i64 val)
181
+{
182
+ typedef void gen_cpy(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_i32);
183
+ static gen_cpy * const fns[4] = {
184
+ gen_helper_sve_cpy_m_b, gen_helper_sve_cpy_m_h,
185
+ gen_helper_sve_cpy_m_s, gen_helper_sve_cpy_m_d,
186
+ };
187
+ unsigned vsz = vec_full_reg_size(s);
188
+ TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
189
+ TCGv_ptr t_zd = tcg_temp_new_ptr();
190
+ TCGv_ptr t_zn = tcg_temp_new_ptr();
191
+ TCGv_ptr t_pg = tcg_temp_new_ptr();
192
+
193
+ tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, rd));
194
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, rn));
195
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
196
+
197
+ fns[esz](t_zd, t_zn, t_pg, val, desc);
198
+
199
+ tcg_temp_free_ptr(t_zd);
200
+ tcg_temp_free_ptr(t_zn);
201
+ tcg_temp_free_ptr(t_pg);
202
+ tcg_temp_free_i32(desc);
203
+}
204
+
205
+static bool trans_FCPY(DisasContext *s, arg_FCPY *a, uint32_t insn)
206
+{
207
+ if (a->esz == 0) {
208
+ return false;
209
+ }
210
+ if (sve_access_check(s)) {
211
+ /* Decode the VFP immediate. */
212
+ uint64_t imm = vfp_expand_imm(a->esz, a->imm);
213
+ TCGv_i64 t_imm = tcg_const_i64(imm);
214
+ do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, t_imm);
215
+ tcg_temp_free_i64(t_imm);
216
+ }
217
+ return true;
218
+}
219
+
220
+static bool trans_CPY_m_i(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
221
+{
222
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
223
+ return false;
224
+ }
225
+ if (sve_access_check(s)) {
226
+ TCGv_i64 t_imm = tcg_const_i64(a->imm);
227
+ do_cpy_m(s, a->esz, a->rd, a->rn, a->pg, t_imm);
228
+ tcg_temp_free_i64(t_imm);
229
+ }
230
+ return true;
231
+}
232
+
233
+static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i *a, uint32_t insn)
234
+{
235
+ static gen_helper_gvec_2i * const fns[4] = {
236
+ gen_helper_sve_cpy_z_b, gen_helper_sve_cpy_z_h,
237
+ gen_helper_sve_cpy_z_s, gen_helper_sve_cpy_z_d,
238
+ };
239
+
240
+ if (a->esz == 0 && extract32(insn, 13, 1)) {
241
+ return false;
242
+ }
243
+ if (sve_access_check(s)) {
244
+ unsigned vsz = vec_full_reg_size(s);
245
+ TCGv_i64 t_imm = tcg_const_i64(a->imm);
246
+ tcg_gen_gvec_2i_ool(vec_full_reg_offset(s, a->rd),
247
+ pred_full_reg_offset(s, a->pg),
248
+ t_imm, vsz, vsz, 0, fns[a->esz]);
249
+ tcg_temp_free_i64(t_imm);
250
+ }
251
+ return true;
252
+}
253
+
254
/*
255
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
256
*/
257
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
258
index XXXXXXX..XXXXXXX 100644
259
--- a/target/arm/sve.decode
260
+++ b/target/arm/sve.decode
261
@@ -XXX,XX +XXX,XX @@
262
###########################################################################
263
# Named fields. These are primarily for disjoint fields.
264
265
-%imm4_16_p1 16:4 !function=plus1
266
+%imm4_16_p1 16:4 !function=plus1
267
%imm6_22_5 22:1 5:5
268
%imm9_16_10 16:s6 10:3
269
270
@@ -XXX,XX +XXX,XX @@
271
%tszimm16_shr 22:2 16:5 !function=tszimm_shr
272
%tszimm16_shl 22:2 16:5 !function=tszimm_shl
273
274
+# Signed 8-bit immediate, optionally shifted left by 8.
275
+%sh8_i8s 5:9 !function=expand_imm_sh8s
276
+
277
# Either a copy of rd (at bit 0), or a different source
278
# as propagated via the MOVPRFX instruction.
279
%reg_movprfx 0:5
280
@@ -XXX,XX +XXX,XX @@
281
@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
282
&rri_esz esz=%tszimm16_esz
283
284
+# Two register operand, one immediate operand, with 4-bit predicate.
285
+# User must fill in imm.
286
+@rdn_pg4 ........ esz:2 .. pg:4 ... ........ rd:5 \
287
+ &rpri_esz rn=%reg_movprfx
288
+
289
# Two register operand, one encoded bitmask.
290
@rdn_dbm ........ .. .... dbm:13 rd:5 \
291
&rr_dbm rn=%reg_movprfx
292
@@ -XXX,XX +XXX,XX @@ AND_zzi 00000101 10 0000 ............. ..... @rdn_dbm
293
# SVE broadcast bitmask immediate
294
DUPM 00000101 11 0000 dbm:13 rd:5
295
296
+### SVE Integer Wide Immediate - Predicated Group
297
+
298
+# SVE copy floating-point immediate (predicated)
299
+FCPY 00000101 .. 01 .... 110 imm:8 ..... @rdn_pg4
300
+
301
+# SVE copy integer immediate (predicated)
302
+CPY_m_i 00000101 .. 01 .... 01 . ........ ..... @rdn_pg4 imm=%sh8_i8s
303
+CPY_z_i 00000101 .. 01 .... 00 . ........ ..... @rdn_pg4 imm=%sh8_i8s
304
+
305
### SVE Predicate Logical Operations Group
306
307
# SVE predicate logical operations
308
--
49
--
309
2.17.0
50
2.20.1
310
51
311
52
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 49 ++++++++++++++++++++++++++++++++++++++
 target/arm/sve.decode | 17 +++++++++++++
 2 files changed, 66 insertions(+)

Provide minimal documentation of the Versatile Express boards
(vexpress-a9, vexpress-a15).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20200507151819.28444-4-peter.maydell@linaro.org
---
 docs/system/arm/vexpress.rst | 60 ++++++
 docs/system/target-arm.rst | 1 +
 MAINTAINERS | 1 +
 3 files changed, 62 insertions(+)
 create mode 100644 docs/system/arm/vexpress.rst
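As an illustrative command line for the options the new documentation
mentions (the zImage and sd.img names here are placeholders, not files
shipped with QEMU):

    qemu-system-arm -machine vexpress-a15,secure=on,virtualization=on \
        -smp 2 -m 1024 -kernel zImage -append "console=ttyAMA0" \
        -drive file=sd.img,if=sd,format=raw -nographic

This enables TrustZone and the Virtualization Extensions and attaches an
emulated SD card, which is the board's only storage option.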
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
diff --git a/docs/system/arm/vexpress.rst b/docs/system/arm/vexpress.rst
17
new file mode 100644
18
index XXXXXXX..XXXXXXX
19
--- /dev/null
20
+++ b/docs/system/arm/vexpress.rst
21
@@ -XXX,XX +XXX,XX @@
22
+Arm Versatile Express boards (``vexpress-a9``, ``vexpress-a15``)
23
+================================================================
24
+
25
+QEMU models two variants of the Arm Versatile Express development
26
+board family:
27
+
28
+- ``vexpress-a9`` models the combination of the Versatile Express
29
+ motherboard and the CoreTile Express A9x4 daughterboard
30
+- ``vexpress-a15`` models the combination of the Versatile Express
31
+ motherboard and the CoreTile Express A15x2 daughterboard
32
+
33
+Note that as this hardware does not have PCI, IDE or SCSI,
34
+the only available storage option is emulated SD card.
35
+
36
+Implemented devices:
37
+
38
+- PL041 audio
39
+- PL181 SD controller
40
+- PL050 keyboard and mouse
41
+- PL011 UARTs
42
+- SP804 timers
43
+- I2C controller
44
+- PL031 RTC
45
+- PL111 LCD display controller
46
+- Flash memory
47
+- LAN9118 ethernet
48
+
49
+Unimplemented devices:
50
+
51
+- SP810 system control block
52
+- PCI-express
53
+- USB controller (Philips ISP1761)
54
+- Local DAP ROM
55
+- CoreSight interfaces
56
+- PL301 AXI interconnect
57
+- SCC
58
+- System counter
59
+- HDLCD controller (``vexpress-a15``)
60
+- SP805 watchdog
61
+- PL341 dynamic memory controller
62
+- DMA330 DMA controller
63
+- PL354 static memory controller
64
+- BP147 TrustZone Protection Controller
65
+- TrustZone Address Space Controller
66
+
67
+Other differences between the hardware and the QEMU model:
68
+
69
+- QEMU will default to creating one CPU unless you pass a different
70
+ ``-smp`` argument
71
+- QEMU allows the amount of RAM provided to be specified with the
72
+ ``-m`` argument
73
+- QEMU defaults to providing a CPU which does not provide either
74
+ TrustZone or the Virtualization Extensions: if you want these you
75
+ must enable them with ``-machine secure=on`` and ``-machine
76
+ virtualization=on``
77
+- QEMU provides 4 virtio-mmio virtio transports; these start at
78
+ address ``0x10013000`` for ``vexpress-a9`` and at ``0x1c130000`` for
79
+ ``vexpress-a15``, and have IRQs from 40 upwards. If a dtb is
80
+ provided on the command line then QEMU will edit it to include
81
+ suitable entries describing these transports for the guest.
82
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
13
index XXXXXXX..XXXXXXX 100644
83
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
84
--- a/docs/system/target-arm.rst
15
+++ b/target/arm/translate-sve.c
85
+++ b/docs/system/target-arm.rst
16
@@ -XXX,XX +XXX,XX @@ static bool trans_SINCDEC_v(DisasContext *s, arg_incdec2_cnt *a,
86
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
17
return true;
87
arm/integratorcp
18
}
88
arm/realview
19
89
arm/versatile
20
+/*
90
+ arm/vexpress
21
+ *** SVE Bitwise Immediate Group
91
arm/musicpal
22
+ */
92
arm/nseries
23
+
93
arm/orangepi
24
+static bool do_zz_dbm(DisasContext *s, arg_rr_dbm *a, GVecGen2iFn *gvec_fn)
94
diff --git a/MAINTAINERS b/MAINTAINERS
25
+{
26
+ uint64_t imm;
27
+ if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
28
+ extract32(a->dbm, 0, 6),
29
+ extract32(a->dbm, 6, 6))) {
30
+ return false;
31
+ }
32
+ if (sve_access_check(s)) {
33
+ unsigned vsz = vec_full_reg_size(s);
34
+ gvec_fn(MO_64, vec_full_reg_offset(s, a->rd),
35
+ vec_full_reg_offset(s, a->rn), imm, vsz, vsz);
36
+ }
37
+ return true;
38
+}
39
+
40
+static bool trans_AND_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
41
+{
42
+ return do_zz_dbm(s, a, tcg_gen_gvec_andi);
43
+}
44
+
45
+static bool trans_ORR_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
46
+{
47
+ return do_zz_dbm(s, a, tcg_gen_gvec_ori);
48
+}
49
+
50
+static bool trans_EOR_zzi(DisasContext *s, arg_rr_dbm *a, uint32_t insn)
51
+{
52
+ return do_zz_dbm(s, a, tcg_gen_gvec_xori);
53
+}
54
+
55
+static bool trans_DUPM(DisasContext *s, arg_DUPM *a, uint32_t insn)
56
+{
57
+ uint64_t imm;
58
+ if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
59
+ extract32(a->dbm, 0, 6),
60
+ extract32(a->dbm, 6, 6))) {
61
+ return false;
62
+ }
63
+ if (sve_access_check(s)) {
64
+ do_dupi_z(s, a->rd, imm);
65
+ }
66
+ return true;
67
+}
68
+
69
/*
70
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
71
*/
72
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
73
index XXXXXXX..XXXXXXX 100644
95
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/sve.decode
96
--- a/MAINTAINERS
75
+++ b/target/arm/sve.decode
97
+++ b/MAINTAINERS
76
@@ -XXX,XX +XXX,XX @@
98
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
77
99
L: qemu-arm@nongnu.org
78
&rr_esz rd rn esz
100
S: Maintained
79
&rri rd rn imm
101
F: hw/arm/vexpress.c
80
+&rr_dbm rd rn dbm
102
+F: docs/system/arm/vexpress.rst
81
&rrri rd rn rm imm
103
82
&rri_esz rd rn imm esz
104
Versatile PB
83
&rrr_esz rd rn rm esz
105
M: Peter Maydell <peter.maydell@linaro.org>
84
@@ -XXX,XX +XXX,XX @@
85
@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
86
&rri_esz esz=%tszimm16_esz
87
88
+# Two register operand, one encoded bitmask.
89
+@rdn_dbm ........ .. .... dbm:13 rd:5 \
90
+ &rr_dbm rn=%reg_movprfx
91
+
92
# Basic Load/Store with 9-bit immediate offset
93
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
94
&rri imm=%imm9_16_10
95
@@ -XXX,XX +XXX,XX @@ INCDEC_v 00000100 .. 1 1 .... 1100 0 d:1 ..... ..... @incdec2_cnt u=1
96
# Note these require esz != 0.
97
SINCDEC_v 00000100 .. 1 0 .... 1100 d:1 u:1 ..... ..... @incdec2_cnt
98
99
+### SVE Bitwise Immediate Group
100
+
101
+# SVE bitwise logical with immediate (unpredicated)
102
+ORR_zzi 00000101 00 0000 ............. ..... @rdn_dbm
103
+EOR_zzi 00000101 01 0000 ............. ..... @rdn_dbm
104
+AND_zzi 00000101 10 0000 ............. ..... @rdn_dbm
105
+
106
+# SVE broadcast bitmask immediate
107
+DUPM 00000101 11 0000 dbm:13 rd:5
108
+
109
+### SVE Predicate Logical Operations Group
110
+
111
# SVE predicate logical operations
112
AND_pppp 00100101 0. 00 .... 01 .... 0 .... 0 .... @pd_pg_pn_pm_s
113
BIC_pppp 00100101 0. 00 .... 01 .... 0 .... 1 .... @pd_pg_pn_pm_s
114
--
106
--
115
2.17.0
107
2.20.1
116
108
117
109
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/Makefile.objs | 2 +-
 target/arm/helper-sve.h | 21 ++++++++++
 target/arm/helper.h | 1 +
 target/arm/sve_helper.c | 78 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 65 +++++++++++++++++++++++++++++
 target/arm/sve.decode | 5 +++
 6 files changed, 171 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/helper-sve.h
 create mode 100644 target/arm/sve_helper.c

Add basic documentation of the MPS2 board models.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20200507151819.28444-5-peter.maydell@linaro.org
---
 docs/system/arm/mps2.rst | 29 +++++++++++++++++++++++++++++
 docs/system/target-arm.rst | 1 +
 MAINTAINERS | 1 +
 3 files changed, 31 insertions(+)
 create mode 100644 docs/system/arm/mps2.rst
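For anyone wanting to try these models, a minimal invocation looks
something like the following (firmware.elf is a placeholder for a
suitable M-profile guest image, not something provided by QEMU):

    qemu-system-arm -M mps2-an385 -kernel firmware.elf -nographic

with mps2-an505, mps2-an511 or mps2-an521 substituted to pick one of the
other documented FPGA images.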
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
15
diff --git a/docs/system/arm/mps2.rst b/docs/system/arm/mps2.rst
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/Makefile.objs
21
+++ b/target/arm/Makefile.objs
22
@@ -XXX,XX +XXX,XX @@ target/arm/decode-sve.inc.c: $(SRC_PATH)/target/arm/sve.decode $(DECODETREE)
23
     "GEN", $(TARGET_DIR)$@)
24
25
target/arm/translate-sve.o: target/arm/decode-sve.inc.c
26
-obj-$(TARGET_AARCH64) += translate-sve.o
27
+obj-$(TARGET_AARCH64) += translate-sve.o sve_helper.o
28
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
29
new file mode 100644
16
new file mode 100644
30
index XXXXXXX..XXXXXXX
17
index XXXXXXX..XXXXXXX
31
--- /dev/null
18
--- /dev/null
32
+++ b/target/arm/helper-sve.h
19
+++ b/docs/system/arm/mps2.rst
33
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
34
+/*
21
+Arm MPS2 boards (``mps2-an385``, ``mps2-an505``, ``mps2-an511``, ``mps2-an521``)
35
+ * AArch64 SVE specific helper definitions
22
+================================================================================
36
+ *
37
+ * Copyright (c) 2018 Linaro, Ltd
38
+ *
39
+ * This library is free software; you can redistribute it and/or
40
+ * modify it under the terms of the GNU Lesser General Public
41
+ * License as published by the Free Software Foundation; either
42
+ * version 2 of the License, or (at your option) any later version.
43
+ *
44
+ * This library is distributed in the hope that it will be useful,
45
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
46
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
47
+ * Lesser General Public License for more details.
48
+ *
49
+ * You should have received a copy of the GNU Lesser General Public
50
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
51
+ */
52
+
23
+
53
+DEF_HELPER_FLAGS_2(sve_predtest1, TCG_CALL_NO_WG, i32, i64, i64)
24
+These board models all use Arm M-profile CPUs.
54
+DEF_HELPER_FLAGS_3(sve_predtest, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
25
+
55
diff --git a/target/arm/helper.h b/target/arm/helper.h
26
+The Arm MPS2 and MPS2+ dev boards are FPGA based (the 2+ has a bigger
27
+FPGA but is otherwise the same as the 2). Since the CPU itself
28
+and most of the devices are in the FPGA, the details of the board
29
+as seen by the guest depend significantly on the FPGA image.
30
+
31
+QEMU models the following FPGA images:
32
+
33
+``mps2-an385``
34
+ Cortex-M3 as documented in ARM Application Note AN385
35
+``mps2-an511``
36
+ Cortex-M3 'DesignStart' as documented in AN511
37
+``mps2-an505``
38
+ Cortex-M33 as documented in ARM Application Note AN505
39
+``mps2-an521``
40
+ Dual Cortex-M33 as documented in Application Note AN521
41
+
42
+Differences between QEMU and real hardware:
43
+
44
+- AN385 remapping of low 16K of memory to either ZBT SSRAM1 or to
45
+ block RAM is unimplemented (QEMU always maps this to ZBT SSRAM1, as
46
+ if zbt_boot_ctrl is always zero)
47
+- QEMU provides a LAN9118 ethernet rather than LAN9220; the only guest
48
+ visible difference is that the LAN9118 doesn't support checksum
49
+ offloading
50
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
56
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/helper.h
52
--- a/docs/system/target-arm.rst
58
+++ b/target/arm/helper.h
53
+++ b/docs/system/target-arm.rst
59
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
54
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
60
55
:maxdepth: 1
61
#ifdef TARGET_AARCH64
56
62
#include "helper-a64.h"
57
arm/integratorcp
63
+#include "helper-sve.h"
58
+ arm/mps2
64
#endif
59
arm/realview
65
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
60
arm/versatile
66
new file mode 100644
61
arm/vexpress
67
index XXXXXXX..XXXXXXX
62
diff --git a/MAINTAINERS b/MAINTAINERS
68
--- /dev/null
69
+++ b/target/arm/sve_helper.c
70
@@ -XXX,XX +XXX,XX @@
71
+/*
72
+ * ARM SVE Operations
73
+ *
74
+ * Copyright (c) 2018 Linaro, Ltd.
75
+ *
76
+ * This library is free software; you can redistribute it and/or
77
+ * modify it under the terms of the GNU Lesser General Public
78
+ * License as published by the Free Software Foundation; either
79
+ * version 2 of the License, or (at your option) any later version.
80
+ *
81
+ * This library is distributed in the hope that it will be useful,
82
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
83
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
84
+ * Lesser General Public License for more details.
85
+ *
86
+ * You should have received a copy of the GNU Lesser General Public
87
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
88
+ */
89
+
90
+#include "qemu/osdep.h"
91
+#include "cpu.h"
92
+#include "exec/exec-all.h"
93
+#include "exec/cpu_ldst.h"
94
+#include "exec/helper-proto.h"
95
+#include "tcg/tcg-gvec-desc.h"
96
+
97
+
98
+/* Return a value for NZCV as per the ARM PredTest pseudofunction.
99
+ *
100
+ * The return value has bit 31 set if N is set, bit 1 set if Z is clear,
101
+ * and bit 0 set if C is set. Compare the definitions of these variables
102
+ * within CPUARMState.
103
+ */
104
+
105
+/* For no G bits set, NZCV = C. */
106
+#define PREDTEST_INIT 1
107
+
108
+/* This is an iterative function, called for each Pd and Pg word
109
+ * moving forward.
110
+ */
111
+static uint32_t iter_predtest_fwd(uint64_t d, uint64_t g, uint32_t flags)
112
+{
113
+ if (likely(g)) {
114
+ /* Compute N from first D & G.
115
+ Use bit 2 to signal first G bit seen. */
116
+ if (!(flags & 4)) {
117
+ flags |= ((d & (g & -g)) != 0) << 31;
118
+ flags |= 4;
119
+ }
120
+
121
+ /* Accumulate Z from each D & G. */
122
+ flags |= ((d & g) != 0) << 1;
123
+
124
+ /* Compute C from last !(D & G). Replace previous. */
125
+ flags = deposit32(flags, 0, 1, (d & pow2floor(g)) == 0);
126
+ }
127
+ return flags;
128
+}
129
+
130
+/* The same for a single word predicate. */
131
+uint32_t HELPER(sve_predtest1)(uint64_t d, uint64_t g)
132
+{
133
+ return iter_predtest_fwd(d, g, PREDTEST_INIT);
134
+}
135
+
136
+/* The same for a multi-word predicate. */
137
+uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
138
+{
139
+ uint32_t flags = PREDTEST_INIT;
140
+ uint64_t *d = vd, *g = vg;
141
+ uintptr_t i = 0;
142
+
143
+ do {
144
+ flags = iter_predtest_fwd(d[i], g[i], flags);
145
+ } while (++i < words);
146
+
147
+ return flags;
148
+}
149
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
150
index XXXXXXX..XXXXXXX 100644
63
index XXXXXXX..XXXXXXX 100644
151
--- a/target/arm/translate-sve.c
64
--- a/MAINTAINERS
152
+++ b/target/arm/translate-sve.c
65
+++ b/MAINTAINERS
153
@@ -XXX,XX +XXX,XX @@ static bool do_mov_z(DisasContext *s, int rd, int rn)
66
@@ -XXX,XX +XXX,XX @@ F: hw/misc/armsse-cpuid.c
154
return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
67
F: include/hw/misc/armsse-cpuid.h
155
}
68
F: hw/misc/armsse-mhu.c
156
69
F: include/hw/misc/armsse-mhu.h
157
+/* Set the cpu flags as per a return from an SVE helper. */
70
+F: docs/system/arm/mps2.rst
158
+static void do_pred_flags(TCGv_i32 t)
71
159
+{
72
Musca
160
+ tcg_gen_mov_i32(cpu_NF, t);
73
M: Peter Maydell <peter.maydell@linaro.org>
161
+ tcg_gen_andi_i32(cpu_ZF, t, 2);
162
+ tcg_gen_andi_i32(cpu_CF, t, 1);
163
+ tcg_gen_movi_i32(cpu_VF, 0);
164
+}
165
+
166
+/* Subroutines computing the ARM PredTest psuedofunction. */
167
+static void do_predtest1(TCGv_i64 d, TCGv_i64 g)
168
+{
169
+ TCGv_i32 t = tcg_temp_new_i32();
170
+
171
+ gen_helper_sve_predtest1(t, d, g);
172
+ do_pred_flags(t);
173
+ tcg_temp_free_i32(t);
174
+}
175
+
176
+static void do_predtest(DisasContext *s, int dofs, int gofs, int words)
177
+{
178
+ TCGv_ptr dptr = tcg_temp_new_ptr();
179
+ TCGv_ptr gptr = tcg_temp_new_ptr();
180
+ TCGv_i32 t;
181
+
182
+ tcg_gen_addi_ptr(dptr, cpu_env, dofs);
183
+ tcg_gen_addi_ptr(gptr, cpu_env, gofs);
184
+ t = tcg_const_i32(words);
185
+
186
+ gen_helper_sve_predtest(t, dptr, gptr, t);
187
+ tcg_temp_free_ptr(dptr);
188
+ tcg_temp_free_ptr(gptr);
189
+
190
+ do_pred_flags(t);
191
+ tcg_temp_free_i32(t);
192
+}
193
+
194
/*
195
*** SVE Logical - Unpredicated Group
196
*/
197
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
198
return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
199
}
200
201
+/*
202
+ *** SVE Predicate Misc Group
203
+ */
204
+
205
+static bool trans_PTEST(DisasContext *s, arg_PTEST *a, uint32_t insn)
206
+{
207
+ if (sve_access_check(s)) {
208
+ int nofs = pred_full_reg_offset(s, a->rn);
209
+ int gofs = pred_full_reg_offset(s, a->pg);
210
+ int words = DIV_ROUND_UP(pred_full_reg_size(s), 8);
211
+
212
+ if (words == 1) {
213
+ TCGv_i64 pn = tcg_temp_new_i64();
214
+ TCGv_i64 pg = tcg_temp_new_i64();
215
+
216
+ tcg_gen_ld_i64(pn, cpu_env, nofs);
217
+ tcg_gen_ld_i64(pg, cpu_env, gofs);
218
+ do_predtest1(pn, pg);
219
+
220
+ tcg_temp_free_i64(pn);
221
+ tcg_temp_free_i64(pg);
222
+ } else {
223
+ do_predtest(s, nofs, gofs, words);
224
+ }
225
+ }
226
+ return true;
227
+}
228
+
229
/*
230
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
231
*/
232
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
233
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/sve.decode
235
+++ b/target/arm/sve.decode
236
@@ -XXX,XX +XXX,XX @@ ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
237
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
238
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
239
240
+### SVE Predicate Misc Group
241
+
242
+# SVE predicate test
243
+PTEST 00100101 01 010000 11 pg:4 0 rn:4 0 0000
244
+
245
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
246
247
# SVE load predicate register
248
--
74
--
249
2.17.0
75
2.20.1
250
76
251
77
Provide minimal documentation of the Musca boards.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20200507151819.28444-6-peter.maydell@linaro.org
---
 docs/system/arm/musca.rst | 31 +++++++++++++++++++++++++++++++
 docs/system/target-arm.rst | 1 +
 MAINTAINERS | 1 +
 3 files changed, 33 insertions(+)
 create mode 100644 docs/system/arm/musca.rst
diff --git a/docs/system/arm/musca.rst b/docs/system/arm/musca.rst

From: Francisco Iglesias <frasse.iglesias@gmail.com>

Add a model of the generic DMA found on Xilinx ZynqMP.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180503214201.29082-2-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/Makefile.objs | 1 +
 include/hw/dma/xlnx-zdma.h | 84 ++++
 hw/dma/xlnx-zdma.c | 832 +++++++++++++++++++++++++++++++++++++
 3 files changed, 917 insertions(+)
 create mode 100644 include/hw/dma/xlnx-zdma.h
 create mode 100644 hw/dma/xlnx-zdma.c
diff --git a/hw/dma/Makefile.objs b/hw/dma/Makefile.objs
19
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/dma/Makefile.objs
21
+++ b/hw/dma/Makefile.objs
22
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_ETRAXFS) += etraxfs_dma.o
23
common-obj-$(CONFIG_STP2000) += sparc32_dma.o
24
obj-$(CONFIG_XLNX_ZYNQMP) += xlnx_dpdma.o
25
obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx_dpdma.o
26
+common-obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx-zdma.o
27
28
obj-$(CONFIG_OMAP) += omap_dma.o soc_dma.o
29
obj-$(CONFIG_PXA2XX) += pxa2xx_dma.o
30
diff --git a/include/hw/dma/xlnx-zdma.h b/include/hw/dma/xlnx-zdma.h
31
new file mode 100644
16
new file mode 100644
32
index XXXXXXX..XXXXXXX
17
index XXXXXXX..XXXXXXX
33
--- /dev/null
18
--- /dev/null
34
+++ b/include/hw/dma/xlnx-zdma.h
19
+++ b/docs/system/arm/musca.rst
35
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@
36
+/*
21
+Arm Musca boards (``musca-a``, ``musca-b1``)
37
+ * QEMU model of the ZynqMP generic DMA
22
+============================================
38
+ *
39
+ * Copyright (c) 2014 Xilinx Inc.
40
+ * Copyright (c) 2018 FEIMTECH AB
41
+ *
42
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>,
43
+ * Francisco Iglesias <francisco.iglesias@feimtech.se>
44
+ *
45
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
46
+ * of this software and associated documentation files (the "Software"), to deal
47
+ * in the Software without restriction, including without limitation the rights
48
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
49
+ * copies of the Software, and to permit persons to whom the Software is
50
+ * furnished to do so, subject to the following conditions:
51
+ *
52
+ * The above copyright notice and this permission notice shall be included in
53
+ * all copies or substantial portions of the Software.
54
+ *
55
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
56
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
57
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
58
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
59
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
60
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
61
+ * THE SOFTWARE.
62
+ */
63
+
23
+
64
+#ifndef XLNX_ZDMA_H
24
+The Arm Musca development boards are a reference implementation
65
+#define XLNX_ZDMA_H
25
+of a system using the SSE-200 Subsystem for Embedded. They are
26
+dual Cortex-M33 systems.
66
+
27
+
67
+#include "hw/sysbus.h"
28
+QEMU provides models of the A and B1 variants of this board.
68
+#include "hw/register.h"
69
+#include "sysemu/dma.h"
70
+
29
+
71
+#define ZDMA_R_MAX (0x204 / 4)
30
+Unimplemented devices:
72
+
31
+
73
+typedef enum {
32
+- SPI
74
+ DISABLED = 0,
33
+- |I2C|
75
+ ENABLED = 1,
34
+- |I2S|
76
+ PAUSED = 2,
35
+- PWM
77
+} XlnxZDMAState;
36
+- QSPI
37
+- Timer
38
+- SCC
39
+- GPIO
40
+- eFlash
41
+- MHU
42
+- PVT
43
+- SDIO
44
+- CryptoCell
78
+
45
+
79
+typedef union {
46
+Note that (like the real hardware) the Musca-A machine is
80
+ struct {
47
+asymmetric: CPU 0 does not have the FPU or DSP extensions,
81
+ uint64_t addr;
48
+but CPU 1 does. Also like the real hardware, the memory maps
82
+ uint32_t size;
49
+for the A and B1 variants differ significantly, so guest
83
+ uint32_t attr;
50
+software must be built for the right variant.
84
+ };
85
+ uint32_t words[4];
86
+} XlnxZDMADescr;
87
+
51
+
88
+typedef struct XlnxZDMA {
52
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
89
+ SysBusDevice parent_obj;
53
index XXXXXXX..XXXXXXX 100644
90
+ MemoryRegion iomem;
54
--- a/docs/system/target-arm.rst
91
+ MemTxAttrs attr;
55
+++ b/docs/system/target-arm.rst
92
+ MemoryRegion *dma_mr;
56
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
93
+ AddressSpace *dma_as;
57
94
+ qemu_irq irq_zdma_ch_imr;
58
arm/integratorcp
95
+
59
arm/mps2
96
+ struct {
60
+ arm/musca
97
+ uint32_t bus_width;
61
arm/realview
98
+ } cfg;
62
arm/versatile
99
+
63
arm/vexpress
100
+ XlnxZDMAState state;
64
diff --git a/MAINTAINERS b/MAINTAINERS
101
+ bool error;
65
index XXXXXXX..XXXXXXX 100644
102
+
66
--- a/MAINTAINERS
103
+ XlnxZDMADescr dsc_src;
67
+++ b/MAINTAINERS
104
+ XlnxZDMADescr dsc_dst;
68
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
105
+
69
L: qemu-arm@nongnu.org
106
+ uint32_t regs[ZDMA_R_MAX];
70
S: Maintained
107
+ RegisterInfo regs_info[ZDMA_R_MAX];
71
F: hw/arm/musca.c
108
+
72
+F: docs/system/arm/musca.rst
109
+ /* We don't model the common bufs. Must be at least 16 bytes
73
110
+ to model write only mode. */
74
Musicpal
111
+ uint8_t buf[2048];
75
M: Jan Kiszka <jan.kiszka@web.de>
112
+} XlnxZDMA;
113
+
114
+#define TYPE_XLNX_ZDMA "xlnx.zdma"
115
+
116
+#define XLNX_ZDMA(obj) \
117
+ OBJECT_CHECK(XlnxZDMA, (obj), TYPE_XLNX_ZDMA)
118
+
119
+#endif /* XLNX_ZDMA_H */
120
diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
121
new file mode 100644
122
index XXXXXXX..XXXXXXX
123
--- /dev/null
124
+++ b/hw/dma/xlnx-zdma.c
125
@@ -XXX,XX +XXX,XX @@
126
+/*
127
+ * QEMU model of the ZynqMP generic DMA
128
+ *
129
+ * Copyright (c) 2014 Xilinx Inc.
130
+ * Copyright (c) 2018 FEIMTECH AB
131
+ *
132
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>,
133
+ * Francisco Iglesias <francisco.iglesias@feimtech.se>
134
+ *
135
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
136
+ * of this software and associated documentation files (the "Software"), to deal
137
+ * in the Software without restriction, including without limitation the rights
138
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
139
+ * copies of the Software, and to permit persons to whom the Software is
140
+ * furnished to do so, subject to the following conditions:
141
+ *
142
+ * The above copyright notice and this permission notice shall be included in
143
+ * all copies or substantial portions of the Software.
144
+ *
145
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
146
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
147
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
148
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
149
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
150
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
151
+ * THE SOFTWARE.
152
+ */
153
+
154
+#include "qemu/osdep.h"
155
+#include "hw/dma/xlnx-zdma.h"
156
+#include "qemu/bitops.h"
157
+#include "qemu/log.h"
158
+#include "qapi/error.h"
159
+
160
+#ifndef XLNX_ZDMA_ERR_DEBUG
161
+#define XLNX_ZDMA_ERR_DEBUG 0
162
+#endif
163
+
164
+REG32(ZDMA_ERR_CTRL, 0x0)
165
+ FIELD(ZDMA_ERR_CTRL, APB_ERR_RES, 0, 1)
166
+REG32(ZDMA_CH_ISR, 0x100)
167
+ FIELD(ZDMA_CH_ISR, DMA_PAUSE, 11, 1)
168
+ FIELD(ZDMA_CH_ISR, DMA_DONE, 10, 1)
169
+ FIELD(ZDMA_CH_ISR, AXI_WR_DATA, 9, 1)
170
+ FIELD(ZDMA_CH_ISR, AXI_RD_DATA, 8, 1)
171
+ FIELD(ZDMA_CH_ISR, AXI_RD_DST_DSCR, 7, 1)
172
+ FIELD(ZDMA_CH_ISR, AXI_RD_SRC_DSCR, 6, 1)
173
+ FIELD(ZDMA_CH_ISR, IRQ_DST_ACCT_ERR, 5, 1)
174
+ FIELD(ZDMA_CH_ISR, IRQ_SRC_ACCT_ERR, 4, 1)
175
+ FIELD(ZDMA_CH_ISR, BYTE_CNT_OVRFL, 3, 1)
176
+ FIELD(ZDMA_CH_ISR, DST_DSCR_DONE, 2, 1)
177
+ FIELD(ZDMA_CH_ISR, SRC_DSCR_DONE, 1, 1)
178
+ FIELD(ZDMA_CH_ISR, INV_APB, 0, 1)
179
+REG32(ZDMA_CH_IMR, 0x104)
180
+ FIELD(ZDMA_CH_IMR, DMA_PAUSE, 11, 1)
181
+ FIELD(ZDMA_CH_IMR, DMA_DONE, 10, 1)
182
+ FIELD(ZDMA_CH_IMR, AXI_WR_DATA, 9, 1)
183
+ FIELD(ZDMA_CH_IMR, AXI_RD_DATA, 8, 1)
184
+ FIELD(ZDMA_CH_IMR, AXI_RD_DST_DSCR, 7, 1)
185
+ FIELD(ZDMA_CH_IMR, AXI_RD_SRC_DSCR, 6, 1)
186
+ FIELD(ZDMA_CH_IMR, IRQ_DST_ACCT_ERR, 5, 1)
187
+ FIELD(ZDMA_CH_IMR, IRQ_SRC_ACCT_ERR, 4, 1)
188
+ FIELD(ZDMA_CH_IMR, BYTE_CNT_OVRFL, 3, 1)
189
+ FIELD(ZDMA_CH_IMR, DST_DSCR_DONE, 2, 1)
190
+ FIELD(ZDMA_CH_IMR, SRC_DSCR_DONE, 1, 1)
191
+ FIELD(ZDMA_CH_IMR, INV_APB, 0, 1)
192
+REG32(ZDMA_CH_IEN, 0x108)
193
+ FIELD(ZDMA_CH_IEN, DMA_PAUSE, 11, 1)
194
+ FIELD(ZDMA_CH_IEN, DMA_DONE, 10, 1)
195
+ FIELD(ZDMA_CH_IEN, AXI_WR_DATA, 9, 1)
196
+ FIELD(ZDMA_CH_IEN, AXI_RD_DATA, 8, 1)
197
+ FIELD(ZDMA_CH_IEN, AXI_RD_DST_DSCR, 7, 1)
198
+ FIELD(ZDMA_CH_IEN, AXI_RD_SRC_DSCR, 6, 1)
199
+ FIELD(ZDMA_CH_IEN, IRQ_DST_ACCT_ERR, 5, 1)
200
+ FIELD(ZDMA_CH_IEN, IRQ_SRC_ACCT_ERR, 4, 1)
201
+ FIELD(ZDMA_CH_IEN, BYTE_CNT_OVRFL, 3, 1)
202
+ FIELD(ZDMA_CH_IEN, DST_DSCR_DONE, 2, 1)
203
+ FIELD(ZDMA_CH_IEN, SRC_DSCR_DONE, 1, 1)
204
+ FIELD(ZDMA_CH_IEN, INV_APB, 0, 1)
205
+REG32(ZDMA_CH_IDS, 0x10c)
206
+ FIELD(ZDMA_CH_IDS, DMA_PAUSE, 11, 1)
207
+ FIELD(ZDMA_CH_IDS, DMA_DONE, 10, 1)
208
+ FIELD(ZDMA_CH_IDS, AXI_WR_DATA, 9, 1)
209
+ FIELD(ZDMA_CH_IDS, AXI_RD_DATA, 8, 1)
210
+ FIELD(ZDMA_CH_IDS, AXI_RD_DST_DSCR, 7, 1)
211
+ FIELD(ZDMA_CH_IDS, AXI_RD_SRC_DSCR, 6, 1)
212
+ FIELD(ZDMA_CH_IDS, IRQ_DST_ACCT_ERR, 5, 1)
213
+ FIELD(ZDMA_CH_IDS, IRQ_SRC_ACCT_ERR, 4, 1)
214
+ FIELD(ZDMA_CH_IDS, BYTE_CNT_OVRFL, 3, 1)
215
+ FIELD(ZDMA_CH_IDS, DST_DSCR_DONE, 2, 1)
216
+ FIELD(ZDMA_CH_IDS, SRC_DSCR_DONE, 1, 1)
217
+ FIELD(ZDMA_CH_IDS, INV_APB, 0, 1)
218
+REG32(ZDMA_CH_CTRL0, 0x110)
219
+ FIELD(ZDMA_CH_CTRL0, OVR_FETCH, 7, 1)
220
+ FIELD(ZDMA_CH_CTRL0, POINT_TYPE, 6, 1)
221
+ FIELD(ZDMA_CH_CTRL0, MODE, 4, 2)
222
+ FIELD(ZDMA_CH_CTRL0, RATE_CTRL, 3, 1)
223
+ FIELD(ZDMA_CH_CTRL0, CONT_ADDR, 2, 1)
224
+ FIELD(ZDMA_CH_CTRL0, CONT, 1, 1)
225
+REG32(ZDMA_CH_CTRL1, 0x114)
226
+ FIELD(ZDMA_CH_CTRL1, DST_ISSUE, 5, 5)
227
+ FIELD(ZDMA_CH_CTRL1, SRC_ISSUE, 0, 5)
228
+REG32(ZDMA_CH_FCI, 0x118)
229
+ FIELD(ZDMA_CH_FCI, PROG_CELL_CNT, 2, 2)
230
+ FIELD(ZDMA_CH_FCI, SIDE, 1, 1)
231
+ FIELD(ZDMA_CH_FCI, EN, 0, 1)
232
+REG32(ZDMA_CH_STATUS, 0x11c)
233
+ FIELD(ZDMA_CH_STATUS, STATE, 0, 2)
234
+REG32(ZDMA_CH_DATA_ATTR, 0x120)
235
+ FIELD(ZDMA_CH_DATA_ATTR, ARBURST, 26, 2)
236
+ FIELD(ZDMA_CH_DATA_ATTR, ARCACHE, 22, 4)
237
+ FIELD(ZDMA_CH_DATA_ATTR, ARQOS, 18, 4)
238
+ FIELD(ZDMA_CH_DATA_ATTR, ARLEN, 14, 4)
239
+ FIELD(ZDMA_CH_DATA_ATTR, AWBURST, 12, 2)
240
+ FIELD(ZDMA_CH_DATA_ATTR, AWCACHE, 8, 4)
241
+ FIELD(ZDMA_CH_DATA_ATTR, AWQOS, 4, 4)
242
+ FIELD(ZDMA_CH_DATA_ATTR, AWLEN, 0, 4)
243
+REG32(ZDMA_CH_DSCR_ATTR, 0x124)
244
+ FIELD(ZDMA_CH_DSCR_ATTR, AXCOHRNT, 8, 1)
245
+ FIELD(ZDMA_CH_DSCR_ATTR, AXCACHE, 4, 4)
246
+ FIELD(ZDMA_CH_DSCR_ATTR, AXQOS, 0, 4)
247
+REG32(ZDMA_CH_SRC_DSCR_WORD0, 0x128)
248
+REG32(ZDMA_CH_SRC_DSCR_WORD1, 0x12c)
249
+ FIELD(ZDMA_CH_SRC_DSCR_WORD1, MSB, 0, 17)
250
+REG32(ZDMA_CH_SRC_DSCR_WORD2, 0x130)
251
+ FIELD(ZDMA_CH_SRC_DSCR_WORD2, SIZE, 0, 30)
252
+REG32(ZDMA_CH_SRC_DSCR_WORD3, 0x134)
253
+ FIELD(ZDMA_CH_SRC_DSCR_WORD3, CMD, 3, 2)
254
+ FIELD(ZDMA_CH_SRC_DSCR_WORD3, INTR, 2, 1)
255
+ FIELD(ZDMA_CH_SRC_DSCR_WORD3, TYPE, 1, 1)
256
+ FIELD(ZDMA_CH_SRC_DSCR_WORD3, COHRNT, 0, 1)
257
+REG32(ZDMA_CH_DST_DSCR_WORD0, 0x138)
258
+REG32(ZDMA_CH_DST_DSCR_WORD1, 0x13c)
259
+ FIELD(ZDMA_CH_DST_DSCR_WORD1, MSB, 0, 17)
260
+REG32(ZDMA_CH_DST_DSCR_WORD2, 0x140)
261
+ FIELD(ZDMA_CH_DST_DSCR_WORD2, SIZE, 0, 30)
262
+REG32(ZDMA_CH_DST_DSCR_WORD3, 0x144)
263
+ FIELD(ZDMA_CH_DST_DSCR_WORD3, INTR, 2, 1)
264
+ FIELD(ZDMA_CH_DST_DSCR_WORD3, TYPE, 1, 1)
265
+ FIELD(ZDMA_CH_DST_DSCR_WORD3, COHRNT, 0, 1)
266
+REG32(ZDMA_CH_WR_ONLY_WORD0, 0x148)
267
+REG32(ZDMA_CH_WR_ONLY_WORD1, 0x14c)
268
+REG32(ZDMA_CH_WR_ONLY_WORD2, 0x150)
269
+REG32(ZDMA_CH_WR_ONLY_WORD3, 0x154)
270
+REG32(ZDMA_CH_SRC_START_LSB, 0x158)
271
+REG32(ZDMA_CH_SRC_START_MSB, 0x15c)
272
+ FIELD(ZDMA_CH_SRC_START_MSB, ADDR, 0, 17)
273
+REG32(ZDMA_CH_DST_START_LSB, 0x160)
274
+REG32(ZDMA_CH_DST_START_MSB, 0x164)
275
+ FIELD(ZDMA_CH_DST_START_MSB, ADDR, 0, 17)
276
+REG32(ZDMA_CH_RATE_CTRL, 0x18c)
277
+ FIELD(ZDMA_CH_RATE_CTRL, CNT, 0, 12)
278
+REG32(ZDMA_CH_SRC_CUR_PYLD_LSB, 0x168)
279
+REG32(ZDMA_CH_SRC_CUR_PYLD_MSB, 0x16c)
280
+ FIELD(ZDMA_CH_SRC_CUR_PYLD_MSB, ADDR, 0, 17)
281
+REG32(ZDMA_CH_DST_CUR_PYLD_LSB, 0x170)
282
+REG32(ZDMA_CH_DST_CUR_PYLD_MSB, 0x174)
283
+ FIELD(ZDMA_CH_DST_CUR_PYLD_MSB, ADDR, 0, 17)
284
+REG32(ZDMA_CH_SRC_CUR_DSCR_LSB, 0x178)
285
+REG32(ZDMA_CH_SRC_CUR_DSCR_MSB, 0x17c)
286
+ FIELD(ZDMA_CH_SRC_CUR_DSCR_MSB, ADDR, 0, 17)
287
+REG32(ZDMA_CH_DST_CUR_DSCR_LSB, 0x180)
288
+REG32(ZDMA_CH_DST_CUR_DSCR_MSB, 0x184)
289
+ FIELD(ZDMA_CH_DST_CUR_DSCR_MSB, ADDR, 0, 17)
290
+REG32(ZDMA_CH_TOTAL_BYTE, 0x188)
291
+REG32(ZDMA_CH_RATE_CNTL, 0x18c)
292
+ FIELD(ZDMA_CH_RATE_CNTL, CNT, 0, 12)
293
+REG32(ZDMA_CH_IRQ_SRC_ACCT, 0x190)
294
+ FIELD(ZDMA_CH_IRQ_SRC_ACCT, CNT, 0, 8)
295
+REG32(ZDMA_CH_IRQ_DST_ACCT, 0x194)
296
+ FIELD(ZDMA_CH_IRQ_DST_ACCT, CNT, 0, 8)
297
+REG32(ZDMA_CH_DBG0, 0x198)
298
+ FIELD(ZDMA_CH_DBG0, CMN_BUF_FREE, 0, 9)
299
+REG32(ZDMA_CH_DBG1, 0x19c)
300
+ FIELD(ZDMA_CH_DBG1, CMN_BUF_OCC, 0, 9)
301
+REG32(ZDMA_CH_CTRL2, 0x200)
302
+ FIELD(ZDMA_CH_CTRL2, EN, 0, 1)
303
+
304
+enum {
305
+ PT_REG = 0,
306
+ PT_MEM = 1,
307
+};
308
+
309
+enum {
310
+ CMD_HALT = 1,
311
+ CMD_STOP = 2,
312
+};
313
+
314
+enum {
315
+ RW_MODE_RW = 0,
316
+ RW_MODE_WO = 1,
317
+ RW_MODE_RO = 2,
318
+};
319
+
320
+enum {
321
+ DTYPE_LINEAR = 0,
322
+ DTYPE_LINKED = 1,
323
+};
324
+
325
+enum {
326
+ AXI_BURST_FIXED = 0,
327
+ AXI_BURST_INCR = 1,
328
+};
329
+
330
+static void zdma_ch_imr_update_irq(XlnxZDMA *s)
331
+{
332
+ bool pending;
333
+
334
+ pending = s->regs[R_ZDMA_CH_ISR] & ~s->regs[R_ZDMA_CH_IMR];
335
+
336
+ qemu_set_irq(s->irq_zdma_ch_imr, pending);
337
+}
338
+
339
+static void zdma_ch_isr_postw(RegisterInfo *reg, uint64_t val64)
340
+{
341
+ XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
342
+ zdma_ch_imr_update_irq(s);
343
+}
344
+
345
+static uint64_t zdma_ch_ien_prew(RegisterInfo *reg, uint64_t val64)
346
+{
347
+ XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
348
+ uint32_t val = val64;
349
+
350
+ s->regs[R_ZDMA_CH_IMR] &= ~val;
351
+ zdma_ch_imr_update_irq(s);
352
+ return 0;
353
+}
354
+
355
+static uint64_t zdma_ch_ids_prew(RegisterInfo *reg, uint64_t val64)
356
+{
357
+ XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
358
+ uint32_t val = val64;
359
+
360
+ s->regs[R_ZDMA_CH_IMR] |= val;
361
+ zdma_ch_imr_update_irq(s);
362
+ return 0;
363
+}
364
+
365
+static void zdma_set_state(XlnxZDMA *s, XlnxZDMAState state)
366
+{
367
+ s->state = state;
368
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_STATUS, STATE, state);
369
+
370
+ /* Signal error if we have an error condition. */
371
+ if (s->error) {
372
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_STATUS, STATE, 3);
373
+ }
374
+}
375
+
376
+static void zdma_src_done(XlnxZDMA *s)
377
+{
378
+ unsigned int cnt;
379
+ cnt = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_SRC_ACCT, CNT);
380
+ cnt++;
381
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_IRQ_SRC_ACCT, CNT, cnt);
382
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, SRC_DSCR_DONE, true);
383
+
384
+ /* Did we overflow? */
385
+ if (cnt != ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_SRC_ACCT, CNT)) {
386
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, IRQ_SRC_ACCT_ERR, true);
387
+ }
388
+ zdma_ch_imr_update_irq(s);
389
+}
390
+
391
+static void zdma_dst_done(XlnxZDMA *s)
392
+{
393
+ unsigned int cnt;
394
+ cnt = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_DST_ACCT, CNT);
395
+ cnt++;
396
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_IRQ_DST_ACCT, CNT, cnt);
397
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, DST_DSCR_DONE, true);
398
+
399
+ /* Did we overflow? */
400
+ if (cnt != ARRAY_FIELD_EX32(s->regs, ZDMA_CH_IRQ_DST_ACCT, CNT)) {
401
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, IRQ_DST_ACCT_ERR, true);
402
+ }
403
+ zdma_ch_imr_update_irq(s);
404
+}
405
+
406
+static uint64_t zdma_get_regaddr64(XlnxZDMA *s, unsigned int basereg)
407
+{
408
+ uint64_t addr;
409
+
410
+ addr = s->regs[basereg + 1];
411
+ addr <<= 32;
412
+ addr |= s->regs[basereg];
413
+
414
+ return addr;
415
+}
416
+
417
+static void zdma_put_regaddr64(XlnxZDMA *s, unsigned int basereg, uint64_t addr)
418
+{
419
+ s->regs[basereg] = addr;
420
+ s->regs[basereg + 1] = addr >> 32;
421
+}
422
+
423
+static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
424
+{
425
+ /* ZDMA descriptors must be aligned to their own size. */
426
+ if (addr % sizeof(XlnxZDMADescr)) {
427
+ qemu_log_mask(LOG_GUEST_ERROR,
428
+ "zdma: unaligned descriptor at %" PRIx64,
429
+ addr);
430
+ memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
431
+ s->error = true;
432
+ return false;
433
+ }
434
+
435
+ address_space_rw(s->dma_as, addr, s->attr,
436
+ buf, sizeof(XlnxZDMADescr), false);
437
+ return true;
438
+}
439
+
440
+static void zdma_load_src_descriptor(XlnxZDMA *s)
441
+{
442
+ uint64_t src_addr;
443
+ unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
444
+
445
+ if (ptype == PT_REG) {
446
+ memcpy(&s->dsc_src, &s->regs[R_ZDMA_CH_SRC_DSCR_WORD0],
447
+ sizeof(s->dsc_src));
448
+ return;
449
+ }
450
+
451
+ src_addr = zdma_get_regaddr64(s, R_ZDMA_CH_SRC_CUR_DSCR_LSB);
452
+
453
+ if (!zdma_load_descriptor(s, src_addr, &s->dsc_src)) {
454
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, AXI_RD_SRC_DSCR, true);
455
+ }
456
+}
457
+
458
+static void zdma_load_dst_descriptor(XlnxZDMA *s)
459
+{
460
+ uint64_t dst_addr;
461
+ unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
462
+
463
+ if (ptype == PT_REG) {
464
+ memcpy(&s->dsc_dst, &s->regs[R_ZDMA_CH_DST_DSCR_WORD0],
465
+ sizeof(s->dsc_dst));
466
+ return;
467
+ }
468
+
469
+ dst_addr = zdma_get_regaddr64(s, R_ZDMA_CH_DST_CUR_DSCR_LSB);
470
+
471
+ if (!zdma_load_descriptor(s, dst_addr, &s->dsc_dst)) {
472
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, AXI_RD_DST_DSCR, true);
473
+ }
474
+}
475
+
476
+static uint64_t zdma_update_descr_addr(XlnxZDMA *s, bool type,
477
+ unsigned int basereg)
478
+{
479
+ uint64_t addr, next;
480
+
481
+ if (type == DTYPE_LINEAR) {
482
+ next = zdma_get_regaddr64(s, basereg);
483
+ next += sizeof(s->dsc_dst);
484
+ zdma_put_regaddr64(s, basereg, next);
485
+ } else {
486
+ addr = zdma_get_regaddr64(s, basereg);
487
+ addr += sizeof(s->dsc_dst);
488
+ address_space_rw(s->dma_as, addr, s->attr, (void *) &next, 8, false);
489
+ zdma_put_regaddr64(s, basereg, next);
490
+ }
491
+ return next;
492
+}
493
+
494
+static void zdma_write_dst(XlnxZDMA *s, uint8_t *buf, uint32_t len)
495
+{
496
+ uint32_t dst_size, dlen;
497
+ bool dst_intr, dst_type;
498
+ unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
499
+ unsigned int rw_mode = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, MODE);
500
+ unsigned int burst_type = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_DATA_ATTR,
501
+ AWBURST);
502
+
503
+ /* FIXED burst types are only supported in simple dma mode. */
504
+ if (ptype != PT_REG) {
505
+ burst_type = AXI_BURST_INCR;
506
+ }
507
+
508
+ while (len) {
509
+ dst_size = FIELD_EX32(s->dsc_dst.words[2], ZDMA_CH_DST_DSCR_WORD2,
510
+ SIZE);
511
+ dst_type = FIELD_EX32(s->dsc_dst.words[3], ZDMA_CH_DST_DSCR_WORD3,
512
+ TYPE);
513
+ if (dst_size == 0 && ptype == PT_MEM) {
514
+ uint64_t next;
515
+ next = zdma_update_descr_addr(s, dst_type,
516
+ R_ZDMA_CH_DST_CUR_DSCR_LSB);
517
+ zdma_load_descriptor(s, next, &s->dsc_dst);
518
+ dst_size = FIELD_EX32(s->dsc_dst.words[2], ZDMA_CH_DST_DSCR_WORD2,
519
+ SIZE);
520
+ dst_type = FIELD_EX32(s->dsc_dst.words[3], ZDMA_CH_DST_DSCR_WORD3,
521
+ TYPE);
522
+ }
523
+
524
+ /* Match what hardware does by ignoring the dst_size and only using
525
+ * the src size for Simple register mode. */
526
+ if (ptype == PT_REG && rw_mode != RW_MODE_WO) {
527
+ dst_size = len;
528
+ }
529
+
530
+ dst_intr = FIELD_EX32(s->dsc_dst.words[3], ZDMA_CH_DST_DSCR_WORD3,
531
+ INTR);
532
+
533
+ dlen = len > dst_size ? dst_size : len;
534
+ if (burst_type == AXI_BURST_FIXED) {
535
+ if (dlen > (s->cfg.bus_width / 8)) {
536
+ dlen = s->cfg.bus_width / 8;
537
+ }
538
+ }
539
+
540
+ address_space_rw(s->dma_as, s->dsc_dst.addr, s->attr, buf, dlen,
541
+ true);
542
+ if (burst_type == AXI_BURST_INCR) {
543
+ s->dsc_dst.addr += dlen;
544
+ }
545
+ dst_size -= dlen;
546
+ buf += dlen;
547
+ len -= dlen;
548
+
549
+ if (dst_size == 0 && dst_intr) {
550
+ zdma_dst_done(s);
551
+ }
552
+
553
+ /* Write back to buffered descriptor. */
554
+ s->dsc_dst.words[2] = FIELD_DP32(s->dsc_dst.words[2],
555
+ ZDMA_CH_DST_DSCR_WORD2,
556
+ SIZE,
557
+ dst_size);
558
+ }
559
+}
560
+
561
+static void zdma_process_descr(XlnxZDMA *s)
562
+{
563
+ uint64_t src_addr;
564
+ uint32_t src_size, len;
565
+ unsigned int src_cmd;
566
+ bool src_intr, src_type;
567
+ unsigned int ptype = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, POINT_TYPE);
568
+ unsigned int rw_mode = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, MODE);
569
+ unsigned int burst_type = ARRAY_FIELD_EX32(s->regs, ZDMA_CH_DATA_ATTR,
570
+ ARBURST);
571
+
572
+ src_addr = s->dsc_src.addr;
573
+ src_size = FIELD_EX32(s->dsc_src.words[2], ZDMA_CH_SRC_DSCR_WORD2, SIZE);
574
+ src_cmd = FIELD_EX32(s->dsc_src.words[3], ZDMA_CH_SRC_DSCR_WORD3, CMD);
575
+ src_type = FIELD_EX32(s->dsc_src.words[3], ZDMA_CH_SRC_DSCR_WORD3, TYPE);
576
+ src_intr = FIELD_EX32(s->dsc_src.words[3], ZDMA_CH_SRC_DSCR_WORD3, INTR);
577
+
578
+ /* FIXED burst types and non-rw modes are only supported in
579
+ * simple dma mode.
580
+ */
581
+ if (ptype != PT_REG) {
582
+ if (rw_mode != RW_MODE_RW) {
583
+ qemu_log_mask(LOG_GUEST_ERROR,
584
+ "zDMA: rw-mode=%d but not simple DMA mode.\n",
585
+ rw_mode);
586
+ }
587
+ if (burst_type != AXI_BURST_INCR) {
588
+ qemu_log_mask(LOG_GUEST_ERROR,
589
+ "zDMA: burst_type=%d but not simple DMA mode.\n",
590
+ burst_type);
591
+ }
592
+ burst_type = AXI_BURST_INCR;
593
+ rw_mode = RW_MODE_RW;
594
+ }
595
+
596
+ if (rw_mode == RW_MODE_WO) {
597
+ /* In Simple DMA Write-Only, we need to push DST size bytes
598
+ * regardless of what SRC size is set to. */
599
+ src_size = FIELD_EX32(s->dsc_dst.words[2], ZDMA_CH_DST_DSCR_WORD2,
600
+ SIZE);
601
+ memcpy(s->buf, &s->regs[R_ZDMA_CH_WR_ONLY_WORD0], s->cfg.bus_width / 8);
602
+ }
603
+
604
+ while (src_size) {
605
+ len = src_size > ARRAY_SIZE(s->buf) ? ARRAY_SIZE(s->buf) : src_size;
606
+ if (burst_type == AXI_BURST_FIXED) {
607
+ if (len > (s->cfg.bus_width / 8)) {
608
+ len = s->cfg.bus_width / 8;
609
+ }
610
+ }
611
+
612
+ if (rw_mode == RW_MODE_WO) {
613
+ if (len > s->cfg.bus_width / 8) {
614
+ len = s->cfg.bus_width / 8;
615
+ }
616
+ } else {
617
+ address_space_rw(s->dma_as, src_addr, s->attr, s->buf, len,
618
+ false);
619
+ if (burst_type == AXI_BURST_INCR) {
620
+ src_addr += len;
621
+ }
622
+ }
623
+
624
+ if (rw_mode != RW_MODE_RO) {
625
+ zdma_write_dst(s, s->buf, len);
626
+ }
627
+
628
+ s->regs[R_ZDMA_CH_TOTAL_BYTE] += len;
629
+ src_size -= len;
630
+ }
631
+
632
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, DMA_DONE, true);
633
+
634
+ if (src_intr) {
635
+ zdma_src_done(s);
636
+ }
637
+
638
+ /* Load next descriptor. */
639
+ if (ptype == PT_REG || src_cmd == CMD_STOP) {
640
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_CTRL2, EN, 0);
641
+ zdma_set_state(s, DISABLED);
642
+ return;
643
+ }
644
+
645
+ if (src_cmd == CMD_HALT) {
646
+ zdma_set_state(s, PAUSED);
647
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, DMA_PAUSE, 1);
648
+ zdma_ch_imr_update_irq(s);
649
+ return;
650
+ }
651
+
652
+ zdma_update_descr_addr(s, src_type, R_ZDMA_CH_SRC_CUR_DSCR_LSB);
653
+}
654
+
655
+static void zdma_run(XlnxZDMA *s)
656
+{
657
+ while (s->state == ENABLED && !s->error) {
658
+ zdma_load_src_descriptor(s);
659
+
660
+ if (s->error) {
661
+ zdma_set_state(s, DISABLED);
662
+ } else {
663
+ zdma_process_descr(s);
664
+ }
665
+ }
666
+
667
+ zdma_ch_imr_update_irq(s);
668
+}
669
+
670
+static void zdma_update_descr_addr_from_start(XlnxZDMA *s)
671
+{
672
+ uint64_t src_addr, dst_addr;
673
+
674
+ src_addr = zdma_get_regaddr64(s, R_ZDMA_CH_SRC_START_LSB);
675
+ zdma_put_regaddr64(s, R_ZDMA_CH_SRC_CUR_DSCR_LSB, src_addr);
676
+ dst_addr = zdma_get_regaddr64(s, R_ZDMA_CH_DST_START_LSB);
677
+ zdma_put_regaddr64(s, R_ZDMA_CH_DST_CUR_DSCR_LSB, dst_addr);
678
+ zdma_load_dst_descriptor(s);
679
+}
680
+
681
+static void zdma_ch_ctrlx_postw(RegisterInfo *reg, uint64_t val64)
682
+{
683
+ XlnxZDMA *s = XLNX_ZDMA(reg->opaque);
684
+
685
+ if (ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL2, EN)) {
686
+ s->error = false;
687
+
688
+ if (s->state == PAUSED &&
689
+ ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, CONT)) {
690
+ if (ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, CONT_ADDR) == 1) {
691
+ zdma_update_descr_addr_from_start(s);
692
+ } else {
693
+ bool src_type = FIELD_EX32(s->dsc_src.words[3],
694
+ ZDMA_CH_SRC_DSCR_WORD3, TYPE);
695
+ zdma_update_descr_addr(s, src_type,
696
+ R_ZDMA_CH_SRC_CUR_DSCR_LSB);
697
+ }
698
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_CTRL0, CONT, false);
699
+ zdma_set_state(s, ENABLED);
700
+ } else if (s->state == DISABLED) {
701
+ zdma_update_descr_addr_from_start(s);
702
+ zdma_set_state(s, ENABLED);
703
+ }
704
+ } else {
705
+ /* Leave Paused state? */
706
+ if (s->state == PAUSED &&
707
+ ARRAY_FIELD_EX32(s->regs, ZDMA_CH_CTRL0, CONT)) {
708
+ zdma_set_state(s, DISABLED);
709
+ }
710
+ }
711
+
712
+ zdma_run(s);
713
+}
714
+
715
+static RegisterAccessInfo zdma_regs_info[] = {
716
+ { .name = "ZDMA_ERR_CTRL", .addr = A_ZDMA_ERR_CTRL,
717
+ .rsvd = 0xfffffffe,
718
+ },{ .name = "ZDMA_CH_ISR", .addr = A_ZDMA_CH_ISR,
719
+ .rsvd = 0xfffff000,
720
+ .w1c = 0xfff,
721
+ .post_write = zdma_ch_isr_postw,
722
+ },{ .name = "ZDMA_CH_IMR", .addr = A_ZDMA_CH_IMR,
723
+ .reset = 0xfff,
724
+ .rsvd = 0xfffff000,
725
+ .ro = 0xfff,
726
+ },{ .name = "ZDMA_CH_IEN", .addr = A_ZDMA_CH_IEN,
727
+ .rsvd = 0xfffff000,
728
+ .pre_write = zdma_ch_ien_prew,
729
+ },{ .name = "ZDMA_CH_IDS", .addr = A_ZDMA_CH_IDS,
730
+ .rsvd = 0xfffff000,
731
+ .pre_write = zdma_ch_ids_prew,
732
+ },{ .name = "ZDMA_CH_CTRL0", .addr = A_ZDMA_CH_CTRL0,
733
+ .reset = 0x80,
734
+ .rsvd = 0xffffff01,
735
+ .post_write = zdma_ch_ctrlx_postw,
736
+ },{ .name = "ZDMA_CH_CTRL1", .addr = A_ZDMA_CH_CTRL1,
737
+ .reset = 0x3ff,
738
+ .rsvd = 0xfffffc00,
739
+ },{ .name = "ZDMA_CH_FCI", .addr = A_ZDMA_CH_FCI,
740
+ .rsvd = 0xffffffc0,
741
+ },{ .name = "ZDMA_CH_STATUS", .addr = A_ZDMA_CH_STATUS,
742
+ .rsvd = 0xfffffffc,
743
+ .ro = 0x3,
744
+ },{ .name = "ZDMA_CH_DATA_ATTR", .addr = A_ZDMA_CH_DATA_ATTR,
745
+ .reset = 0x483d20f,
746
+ .rsvd = 0xf0000000,
747
+ },{ .name = "ZDMA_CH_DSCR_ATTR", .addr = A_ZDMA_CH_DSCR_ATTR,
748
+ .rsvd = 0xfffffe00,
749
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD0", .addr = A_ZDMA_CH_SRC_DSCR_WORD0,
750
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD1", .addr = A_ZDMA_CH_SRC_DSCR_WORD1,
751
+ .rsvd = 0xfffe0000,
752
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD2", .addr = A_ZDMA_CH_SRC_DSCR_WORD2,
753
+ .rsvd = 0xc0000000,
754
+ },{ .name = "ZDMA_CH_SRC_DSCR_WORD3", .addr = A_ZDMA_CH_SRC_DSCR_WORD3,
755
+ .rsvd = 0xffffffe0,
756
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD0", .addr = A_ZDMA_CH_DST_DSCR_WORD0,
757
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD1", .addr = A_ZDMA_CH_DST_DSCR_WORD1,
758
+ .rsvd = 0xfffe0000,
759
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD2", .addr = A_ZDMA_CH_DST_DSCR_WORD2,
760
+ .rsvd = 0xc0000000,
761
+ },{ .name = "ZDMA_CH_DST_DSCR_WORD3", .addr = A_ZDMA_CH_DST_DSCR_WORD3,
762
+ .rsvd = 0xfffffffa,
763
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD0", .addr = A_ZDMA_CH_WR_ONLY_WORD0,
764
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD1", .addr = A_ZDMA_CH_WR_ONLY_WORD1,
765
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD2", .addr = A_ZDMA_CH_WR_ONLY_WORD2,
766
+ },{ .name = "ZDMA_CH_WR_ONLY_WORD3", .addr = A_ZDMA_CH_WR_ONLY_WORD3,
767
+ },{ .name = "ZDMA_CH_SRC_START_LSB", .addr = A_ZDMA_CH_SRC_START_LSB,
768
+ },{ .name = "ZDMA_CH_SRC_START_MSB", .addr = A_ZDMA_CH_SRC_START_MSB,
769
+ .rsvd = 0xfffe0000,
770
+ },{ .name = "ZDMA_CH_DST_START_LSB", .addr = A_ZDMA_CH_DST_START_LSB,
771
+ },{ .name = "ZDMA_CH_DST_START_MSB", .addr = A_ZDMA_CH_DST_START_MSB,
772
+ .rsvd = 0xfffe0000,
773
+ },{ .name = "ZDMA_CH_SRC_CUR_PYLD_LSB", .addr = A_ZDMA_CH_SRC_CUR_PYLD_LSB,
774
+ .ro = 0xffffffff,
775
+ },{ .name = "ZDMA_CH_SRC_CUR_PYLD_MSB", .addr = A_ZDMA_CH_SRC_CUR_PYLD_MSB,
776
+ .rsvd = 0xfffe0000,
777
+ .ro = 0x1ffff,
778
+ },{ .name = "ZDMA_CH_DST_CUR_PYLD_LSB", .addr = A_ZDMA_CH_DST_CUR_PYLD_LSB,
779
+ .ro = 0xffffffff,
780
+ },{ .name = "ZDMA_CH_DST_CUR_PYLD_MSB", .addr = A_ZDMA_CH_DST_CUR_PYLD_MSB,
781
+ .rsvd = 0xfffe0000,
782
+ .ro = 0x1ffff,
783
+ },{ .name = "ZDMA_CH_SRC_CUR_DSCR_LSB", .addr = A_ZDMA_CH_SRC_CUR_DSCR_LSB,
784
+ .ro = 0xffffffff,
785
+ },{ .name = "ZDMA_CH_SRC_CUR_DSCR_MSB", .addr = A_ZDMA_CH_SRC_CUR_DSCR_MSB,
786
+ .rsvd = 0xfffe0000,
787
+ .ro = 0x1ffff,
788
+ },{ .name = "ZDMA_CH_DST_CUR_DSCR_LSB", .addr = A_ZDMA_CH_DST_CUR_DSCR_LSB,
789
+ .ro = 0xffffffff,
790
+ },{ .name = "ZDMA_CH_DST_CUR_DSCR_MSB", .addr = A_ZDMA_CH_DST_CUR_DSCR_MSB,
791
+ .rsvd = 0xfffe0000,
792
+ .ro = 0x1ffff,
793
+ },{ .name = "ZDMA_CH_TOTAL_BYTE", .addr = A_ZDMA_CH_TOTAL_BYTE,
794
+ .w1c = 0xffffffff,
795
+ },{ .name = "ZDMA_CH_RATE_CNTL", .addr = A_ZDMA_CH_RATE_CNTL,
796
+ .rsvd = 0xfffff000,
797
+ },{ .name = "ZDMA_CH_IRQ_SRC_ACCT", .addr = A_ZDMA_CH_IRQ_SRC_ACCT,
798
+ .rsvd = 0xffffff00,
799
+ .ro = 0xff,
800
+ .cor = 0xff,
801
+ },{ .name = "ZDMA_CH_IRQ_DST_ACCT", .addr = A_ZDMA_CH_IRQ_DST_ACCT,
802
+ .rsvd = 0xffffff00,
803
+ .ro = 0xff,
804
+ .cor = 0xff,
805
+ },{ .name = "ZDMA_CH_DBG0", .addr = A_ZDMA_CH_DBG0,
806
+ .rsvd = 0xfffffe00,
807
+ .ro = 0x1ff,
808
+ },{ .name = "ZDMA_CH_DBG1", .addr = A_ZDMA_CH_DBG1,
809
+ .rsvd = 0xfffffe00,
810
+ .ro = 0x1ff,
811
+ },{ .name = "ZDMA_CH_CTRL2", .addr = A_ZDMA_CH_CTRL2,
812
+ .rsvd = 0xfffffffe,
813
+ .post_write = zdma_ch_ctrlx_postw,
814
+ }
815
+};
816
+
817
+static void zdma_reset(DeviceState *dev)
818
+{
819
+ XlnxZDMA *s = XLNX_ZDMA(dev);
820
+ unsigned int i;
821
+
822
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
823
+ register_reset(&s->regs_info[i]);
824
+ }
825
+
826
+ zdma_ch_imr_update_irq(s);
827
+}
828
+
829
+static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
830
+{
831
+ XlnxZDMA *s = XLNX_ZDMA(opaque);
832
+ RegisterInfo *r = &s->regs_info[addr / 4];
833
+
834
+ if (!r->data) {
835
+ qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
836
+ object_get_canonical_path(OBJECT(s)),
837
+ addr);
838
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
839
+ zdma_ch_imr_update_irq(s);
840
+ return 0;
841
+ }
842
+ return register_read(r, ~0, NULL, false);
843
+}
844
+
845
+static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
846
+ unsigned size)
847
+{
848
+ XlnxZDMA *s = XLNX_ZDMA(opaque);
849
+ RegisterInfo *r = &s->regs_info[addr / 4];
850
+
851
+ if (!r->data) {
852
+ qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
853
+ object_get_canonical_path(OBJECT(s)),
854
+ addr, value);
855
+ ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
856
+ zdma_ch_imr_update_irq(s);
857
+ return;
858
+ }
859
+ register_write(r, value, ~0, NULL, false);
860
+}
861
+
862
+static const MemoryRegionOps zdma_ops = {
863
+ .read = zdma_read,
864
+ .write = zdma_write,
865
+ .endianness = DEVICE_LITTLE_ENDIAN,
866
+ .valid = {
867
+ .min_access_size = 4,
868
+ .max_access_size = 4,
869
+ },
870
+};
871
+
872
+static void zdma_realize(DeviceState *dev, Error **errp)
873
+{
874
+ XlnxZDMA *s = XLNX_ZDMA(dev);
875
+ unsigned int i;
876
+
877
+ for (i = 0; i < ARRAY_SIZE(zdma_regs_info); ++i) {
878
+ RegisterInfo *r = &s->regs_info[zdma_regs_info[i].addr / 4];
879
+
880
+ *r = (RegisterInfo) {
881
+ .data = (uint8_t *)&s->regs[
882
+ zdma_regs_info[i].addr / 4],
883
+ .data_size = sizeof(uint32_t),
884
+ .access = &zdma_regs_info[i],
885
+ .opaque = s,
886
+ };
887
+ }
888
+
889
+ if (s->dma_mr) {
890
+ s->dma_as = g_malloc0(sizeof(AddressSpace));
891
+ address_space_init(s->dma_as, s->dma_mr, NULL);
892
+ } else {
893
+ s->dma_as = &address_space_memory;
894
+ }
895
+ s->attr = MEMTXATTRS_UNSPECIFIED;
896
+}
897
+
898
+static void zdma_init(Object *obj)
899
+{
900
+ XlnxZDMA *s = XLNX_ZDMA(obj);
901
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
902
+
903
+ memory_region_init_io(&s->iomem, obj, &zdma_ops, s,
904
+ TYPE_XLNX_ZDMA, ZDMA_R_MAX * 4);
905
+ sysbus_init_mmio(sbd, &s->iomem);
906
+ sysbus_init_irq(sbd, &s->irq_zdma_ch_imr);
907
+
908
+ object_property_add_link(obj, "dma", TYPE_MEMORY_REGION,
909
+ (Object **)&s->dma_mr,
910
+ qdev_prop_allow_set_link_before_realize,
911
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
912
+ &error_abort);
913
+}
914
+
915
+static const VMStateDescription vmstate_zdma = {
916
+ .name = TYPE_XLNX_ZDMA,
917
+ .version_id = 1,
918
+ .minimum_version_id = 1,
919
+ .minimum_version_id_old = 1,
920
+ .fields = (VMStateField[]) {
921
+ VMSTATE_UINT32_ARRAY(regs, XlnxZDMA, ZDMA_R_MAX),
922
+ VMSTATE_UINT32(state, XlnxZDMA),
923
+ VMSTATE_UINT32_ARRAY(dsc_src.words, XlnxZDMA, 4),
924
+ VMSTATE_UINT32_ARRAY(dsc_dst.words, XlnxZDMA, 4),
925
+ VMSTATE_END_OF_LIST(),
926
+ }
927
+};
928
+
929
+static Property zdma_props[] = {
930
+ DEFINE_PROP_UINT32("bus-width", XlnxZDMA, cfg.bus_width, 64),
931
+ DEFINE_PROP_END_OF_LIST(),
932
+};
933
+
934
+static void zdma_class_init(ObjectClass *klass, void *data)
935
+{
936
+ DeviceClass *dc = DEVICE_CLASS(klass);
937
+
938
+ dc->reset = zdma_reset;
939
+ dc->realize = zdma_realize;
940
+ dc->props = zdma_props;
941
+ dc->vmsd = &vmstate_zdma;
942
+}
943
+
944
+static const TypeInfo zdma_info = {
945
+ .name = TYPE_XLNX_ZDMA,
946
+ .parent = TYPE_SYS_BUS_DEVICE,
947
+ .instance_size = sizeof(XlnxZDMA),
948
+ .class_init = zdma_class_init,
949
+ .instance_init = zdma_init,
950
+};
951
+
952
+static void zdma_register_types(void)
953
+{
954
+ type_register_static(&zdma_info);
955
+}
956
+
957
+type_init(zdma_register_types)
958
--
76
--
959
2.17.0
77
2.20.1
960
78
961
79
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In linux-user/arm/cpu_loop.c we incorrectly treat EXCP_BKPT similarly
2
to EXCP_SWI, which means that if the guest executes a BKPT insn then
3
QEMU will perform a syscall for it (which syscall depends on what
4
value happens to be in r7...). The correct behaviour is that the
5
guest process should take a SIGTRAP.
2
6
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
This code has been like this (more or less) since commit
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
06c949e62a098f in 2006 which added BKPT in the first place. This is
5
Message-id: 20180516223007.10256-23-richard.henderson@linaro.org
9
probably because at the time the same code path was used to handle
10
both Linux syscalls and semihosting calls, and (on M profile) BKPT
11
with a suitable magic number is used for semihosting calls. But
12
these days we've moved handling of semihosting out to an entirely
13
different codepath, so we can fix this bug by simply removing this
14
handling of EXCP_BKPT and instead making it deliver a SIGTRAP like
15
EXCP_DEBUG (as we do already on aarch64).
16
17
Reported-by: <omerg681@gmail.com>
18
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
19
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Message-id: 20200420212206.12776-2-peter.maydell@linaro.org
22
Fixes: https://bugs.launchpad.net/qemu/+bug/1873898
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
24
---
8
target/arm/helper-sve.h | 11 ++
25
linux-user/arm/cpu_loop.c | 30 ++++++++----------------------
9
target/arm/sve_helper.c | 136 ++++++++++++++++++
26
1 file changed, 8 insertions(+), 22 deletions(-)
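As a sketch of the guest-visible behaviour this fixes (assuming a 32-bit Arm build run under the qemu-arm linux-user binary; the handler and message here are illustrative), a program that executes BKPT should now receive SIGTRAP rather than having whatever syscall r7 selects performed on its behalf:

    #include <signal.h>
    #include <unistd.h>

    /* Handler exits directly: returning would re-execute the BKPT insn. */
    static void on_trap(int sig)
    {
        static const char msg[] = "got SIGTRAP as expected\n";
        (void)sig;
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGTRAP, on_trap);
        __asm__ volatile("bkpt #0");   /* should raise SIGTRAP, not enter a syscall */
        return 1;                      /* only reached if the old behaviour persists */
    }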
10
target/arm/translate-sve.c | 288 +++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 31 +++-
12
4 files changed, 465 insertions(+), 1 deletion(-)
13
27
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
28
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
15
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
30
--- a/linux-user/arm/cpu_loop.c
17
+++ b/target/arm/helper-sve.h
31
+++ b/linux-user/arm/cpu_loop.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_ftssel_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
19
DEF_HELPER_FLAGS_4(sve_ftssel_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
}
20
DEF_HELPER_FLAGS_4(sve_ftssel_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
break;
21
35
case EXCP_SWI:
22
+DEF_HELPER_FLAGS_4(sve_sqaddi_b, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
36
- case EXCP_BKPT:
23
+DEF_HELPER_FLAGS_4(sve_sqaddi_h, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
37
{
24
+DEF_HELPER_FLAGS_4(sve_sqaddi_s, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
38
env->eabi = 1;
25
+DEF_HELPER_FLAGS_4(sve_sqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
39
/* system call */
26
+
40
- if (trapnr == EXCP_BKPT) {
27
+DEF_HELPER_FLAGS_4(sve_uqaddi_b, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
41
- if (env->thumb) {
28
+DEF_HELPER_FLAGS_4(sve_uqaddi_h, TCG_CALL_NO_RWG, void, ptr, ptr, s32, i32)
42
- /* FIXME - what to do if get_user() fails? */
29
+DEF_HELPER_FLAGS_4(sve_uqaddi_s, TCG_CALL_NO_RWG, void, ptr, ptr, s64, i32)
43
- get_user_code_u16(insn, env->regs[15], env);
30
+DEF_HELPER_FLAGS_4(sve_uqaddi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
44
- n = insn & 0xff;
31
+DEF_HELPER_FLAGS_4(sve_uqsubi_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
45
- env->regs[15] += 2;
32
+
46
- } else {
33
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
47
- /* FIXME - what to do if get_user() fails? */
34
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
48
- get_user_code_u32(insn, env->regs[15], env);
35
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
49
- n = (insn & 0xf) | ((insn >> 4) & 0xff0);
36
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
50
- env->regs[15] += 4;
37
index XXXXXXX..XXXXXXX 100644
51
- }
38
--- a/target/arm/sve_helper.c
52
+ if (env->thumb) {
39
+++ b/target/arm/sve_helper.c
53
+ /* FIXME - what to do if get_user() fails? */
40
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ftssel_d)(void *vd, void *vn, void *vm, uint32_t desc)
54
+ get_user_code_u16(insn, env->regs[15] - 2, env);
41
d[i] = nn ^ (mm & 2) << 62;
55
+ n = insn & 0xff;
42
}
56
} else {
43
}
57
- if (env->thumb) {
44
+
58
- /* FIXME - what to do if get_user() fails? */
45
+/*
59
- get_user_code_u16(insn, env->regs[15] - 2, env);
46
+ * Signed saturating addition with scalar operand.
60
- n = insn & 0xff;
47
+ */
61
- } else {
48
+
62
- /* FIXME - what to do if get_user() fails? */
49
+void HELPER(sve_sqaddi_b)(void *d, void *a, int32_t b, uint32_t desc)
63
- get_user_code_u32(insn, env->regs[15] - 4, env);
50
+{
64
- n = insn & 0xffffff;
51
+ intptr_t i, oprsz = simd_oprsz(desc);
65
- }
52
+
66
+ /* FIXME - what to do if get_user() fails? */
53
+ for (i = 0; i < oprsz; i += sizeof(int8_t)) {
67
+ get_user_code_u32(insn, env->regs[15] - 4, env);
54
+ int r = *(int8_t *)(a + i) + b;
68
+ n = insn & 0xffffff;
55
+ if (r > INT8_MAX) {
69
}
56
+ r = INT8_MAX;
70
57
+ } else if (r < INT8_MIN) {
71
if (n == ARM_NR_cacheflush) {
58
+ r = INT8_MIN;
72
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
59
+ }
73
}
60
+ *(int8_t *)(d + i) = r;
74
break;
61
+ }
75
case EXCP_DEBUG:
62
+}
76
+ case EXCP_BKPT:
63
+
77
excp_debug:
64
+void HELPER(sve_sqaddi_h)(void *d, void *a, int32_t b, uint32_t desc)
78
info.si_signo = TARGET_SIGTRAP;
65
+{
79
info.si_errno = 0;
66
+ intptr_t i, oprsz = simd_oprsz(desc);
67
+
68
+ for (i = 0; i < oprsz; i += sizeof(int16_t)) {
69
+ int r = *(int16_t *)(a + i) + b;
70
+ if (r > INT16_MAX) {
71
+ r = INT16_MAX;
72
+ } else if (r < INT16_MIN) {
73
+ r = INT16_MIN;
74
+ }
75
+ *(int16_t *)(d + i) = r;
76
+ }
77
+}
78
+
79
+void HELPER(sve_sqaddi_s)(void *d, void *a, int64_t b, uint32_t desc)
80
+{
81
+ intptr_t i, oprsz = simd_oprsz(desc);
82
+
83
+ for (i = 0; i < oprsz; i += sizeof(int32_t)) {
84
+ int64_t r = *(int32_t *)(a + i) + b;
85
+ if (r > INT32_MAX) {
86
+ r = INT32_MAX;
87
+ } else if (r < INT32_MIN) {
88
+ r = INT32_MIN;
89
+ }
90
+ *(int32_t *)(d + i) = r;
91
+ }
92
+}
93
+
94
+void HELPER(sve_sqaddi_d)(void *d, void *a, int64_t b, uint32_t desc)
95
+{
96
+ intptr_t i, oprsz = simd_oprsz(desc);
97
+
98
+ for (i = 0; i < oprsz; i += sizeof(int64_t)) {
99
+ int64_t ai = *(int64_t *)(a + i);
100
+ int64_t r = ai + b;
101
+ if (((r ^ ai) & ~(ai ^ b)) < 0) {
102
+ /* Signed overflow. */
103
+ r = (r < 0 ? INT64_MAX : INT64_MIN);
104
+ }
105
+ *(int64_t *)(d + i) = r;
106
+ }
107
+}
108
+
109
+/*
110
+ * Unsigned saturating addition with scalar operand.
111
+ */
112
+
113
+void HELPER(sve_uqaddi_b)(void *d, void *a, int32_t b, uint32_t desc)
114
+{
115
+ intptr_t i, oprsz = simd_oprsz(desc);
116
+
117
+ for (i = 0; i < oprsz; i += sizeof(uint8_t)) {
118
+ int r = *(uint8_t *)(a + i) + b;
119
+ if (r > UINT8_MAX) {
120
+ r = UINT8_MAX;
121
+ } else if (r < 0) {
122
+ r = 0;
123
+ }
124
+ *(uint8_t *)(d + i) = r;
125
+ }
126
+}
127
+
128
+void HELPER(sve_uqaddi_h)(void *d, void *a, int32_t b, uint32_t desc)
129
+{
130
+ intptr_t i, oprsz = simd_oprsz(desc);
131
+
132
+ for (i = 0; i < oprsz; i += sizeof(uint16_t)) {
133
+ int r = *(uint16_t *)(a + i) + b;
134
+ if (r > UINT16_MAX) {
135
+ r = UINT16_MAX;
136
+ } else if (r < 0) {
137
+ r = 0;
138
+ }
139
+ *(uint16_t *)(d + i) = r;
140
+ }
141
+}
142
+
143
+void HELPER(sve_uqaddi_s)(void *d, void *a, int64_t b, uint32_t desc)
144
+{
145
+ intptr_t i, oprsz = simd_oprsz(desc);
146
+
147
+ for (i = 0; i < oprsz; i += sizeof(uint32_t)) {
148
+ int64_t r = *(uint32_t *)(a + i) + b;
149
+ if (r > UINT32_MAX) {
150
+ r = UINT32_MAX;
151
+ } else if (r < 0) {
152
+ r = 0;
153
+ }
154
+ *(uint32_t *)(d + i) = r;
155
+ }
156
+}
157
+
158
+void HELPER(sve_uqaddi_d)(void *d, void *a, uint64_t b, uint32_t desc)
159
+{
160
+ intptr_t i, oprsz = simd_oprsz(desc);
161
+
162
+ for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
163
+ uint64_t r = *(uint64_t *)(a + i) + b;
164
+ if (r < b) {
165
+ r = UINT64_MAX;
166
+ }
167
+ *(uint64_t *)(d + i) = r;
168
+ }
169
+}
170
+
171
+void HELPER(sve_uqsubi_d)(void *d, void *a, uint64_t b, uint32_t desc)
172
+{
173
+ intptr_t i, oprsz = simd_oprsz(desc);
174
+
175
+ for (i = 0; i < oprsz; i += sizeof(uint64_t)) {
176
+ uint64_t ai = *(uint64_t *)(a + i);
177
+ *(uint64_t *)(d + i) = (ai < b ? 0 : ai - b);
178
+ }
179
+}
180
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
181
index XXXXXXX..XXXXXXX 100644
182
--- a/target/arm/translate-sve.c
183
+++ b/target/arm/translate-sve.c
184
@@ -XXX,XX +XXX,XX @@ static int tszimm_shl(int x)
185
return x - (8 << tszimm_esz(x));
186
}
187
188
+static inline int plus1(int x)
189
+{
190
+ return x + 1;
191
+}
192
+
193
/*
194
* Include the generated decoder.
195
*/
196
@@ -XXX,XX +XXX,XX @@ static bool trans_PNEXT(DisasContext *s, arg_rr_esz *a, uint32_t insn)
197
return do_pfirst_pnext(s, a, gen_helper_sve_pnext);
198
}
199
200
+/*
201
+ *** SVE Element Count Group
202
+ */
203
+
204
+/* Perform an inline saturating addition of a 32-bit value within
205
+ * a 64-bit register. The second operand is known to be positive,
206
+ * which halves the comparisons we must perform to bound the result.
207
+ */
208
+static void do_sat_addsub_32(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
209
+{
210
+ int64_t ibound;
211
+ TCGv_i64 bound;
212
+ TCGCond cond;
213
+
214
+ /* Use normal 64-bit arithmetic to detect 32-bit overflow. */
215
+ if (u) {
216
+ tcg_gen_ext32u_i64(reg, reg);
217
+ } else {
218
+ tcg_gen_ext32s_i64(reg, reg);
219
+ }
220
+ if (d) {
221
+ tcg_gen_sub_i64(reg, reg, val);
222
+ ibound = (u ? 0 : INT32_MIN);
223
+ cond = TCG_COND_LT;
224
+ } else {
225
+ tcg_gen_add_i64(reg, reg, val);
226
+ ibound = (u ? UINT32_MAX : INT32_MAX);
227
+ cond = TCG_COND_GT;
228
+ }
229
+ bound = tcg_const_i64(ibound);
230
+ tcg_gen_movcond_i64(cond, reg, reg, bound, bound, reg);
231
+ tcg_temp_free_i64(bound);
232
+}
233
+
234
+/* Similarly with 64-bit values. */
235
+static void do_sat_addsub_64(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
236
+{
237
+ TCGv_i64 t0 = tcg_temp_new_i64();
238
+ TCGv_i64 t1 = tcg_temp_new_i64();
239
+ TCGv_i64 t2;
240
+
241
+ if (u) {
242
+ if (d) {
243
+ tcg_gen_sub_i64(t0, reg, val);
244
+ tcg_gen_movi_i64(t1, 0);
245
+ tcg_gen_movcond_i64(TCG_COND_LTU, reg, reg, val, t1, t0);
246
+ } else {
247
+ tcg_gen_add_i64(t0, reg, val);
248
+ tcg_gen_movi_i64(t1, -1);
249
+ tcg_gen_movcond_i64(TCG_COND_LTU, reg, t0, reg, t1, t0);
250
+ }
251
+ } else {
252
+ if (d) {
253
+ /* Detect signed overflow for subtraction. */
254
+ tcg_gen_xor_i64(t0, reg, val);
255
+ tcg_gen_sub_i64(t1, reg, val);
256
+ tcg_gen_xor_i64(reg, reg, t0);
257
+ tcg_gen_and_i64(t0, t0, reg);
258
+
259
+ /* Bound the result. */
260
+ tcg_gen_movi_i64(reg, INT64_MIN);
261
+ t2 = tcg_const_i64(0);
262
+ tcg_gen_movcond_i64(TCG_COND_LT, reg, t0, t2, reg, t1);
263
+ } else {
264
+ /* Detect signed overflow for addition. */
265
+ tcg_gen_xor_i64(t0, reg, val);
266
+ tcg_gen_add_i64(reg, reg, val);
267
+ tcg_gen_xor_i64(t1, reg, val);
268
+ tcg_gen_andc_i64(t0, t1, t0);
269
+
270
+ /* Bound the result. */
271
+ tcg_gen_movi_i64(t1, INT64_MAX);
272
+ t2 = tcg_const_i64(0);
273
+ tcg_gen_movcond_i64(TCG_COND_LT, reg, t0, t2, t1, reg);
274
+ }
275
+ tcg_temp_free_i64(t2);
276
+ }
277
+ tcg_temp_free_i64(t0);
278
+ tcg_temp_free_i64(t1);
279
+}
280
+
281
+/* Similarly with a vector and a scalar operand. */
282
+static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
283
+ TCGv_i64 val, bool u, bool d)
284
+{
285
+ unsigned vsz = vec_full_reg_size(s);
286
+ TCGv_ptr dptr, nptr;
287
+ TCGv_i32 t32, desc;
288
+ TCGv_i64 t64;
289
+
290
+ dptr = tcg_temp_new_ptr();
291
+ nptr = tcg_temp_new_ptr();
292
+ tcg_gen_addi_ptr(dptr, cpu_env, vec_full_reg_offset(s, rd));
293
+ tcg_gen_addi_ptr(nptr, cpu_env, vec_full_reg_offset(s, rn));
294
+ desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
295
+
296
+ switch (esz) {
297
+ case MO_8:
298
+ t32 = tcg_temp_new_i32();
299
+ tcg_gen_extrl_i64_i32(t32, val);
300
+ if (d) {
301
+ tcg_gen_neg_i32(t32, t32);
302
+ }
303
+ if (u) {
304
+ gen_helper_sve_uqaddi_b(dptr, nptr, t32, desc);
305
+ } else {
306
+ gen_helper_sve_sqaddi_b(dptr, nptr, t32, desc);
307
+ }
308
+ tcg_temp_free_i32(t32);
309
+ break;
310
+
311
+ case MO_16:
312
+ t32 = tcg_temp_new_i32();
313
+ tcg_gen_extrl_i64_i32(t32, val);
314
+ if (d) {
315
+ tcg_gen_neg_i32(t32, t32);
316
+ }
317
+ if (u) {
318
+ gen_helper_sve_uqaddi_h(dptr, nptr, t32, desc);
319
+ } else {
320
+ gen_helper_sve_sqaddi_h(dptr, nptr, t32, desc);
321
+ }
322
+ tcg_temp_free_i32(t32);
323
+ break;
324
+
325
+ case MO_32:
326
+ t64 = tcg_temp_new_i64();
327
+ if (d) {
328
+ tcg_gen_neg_i64(t64, val);
329
+ } else {
330
+ tcg_gen_mov_i64(t64, val);
331
+ }
332
+ if (u) {
333
+ gen_helper_sve_uqaddi_s(dptr, nptr, t64, desc);
334
+ } else {
335
+ gen_helper_sve_sqaddi_s(dptr, nptr, t64, desc);
336
+ }
337
+ tcg_temp_free_i64(t64);
338
+ break;
339
+
340
+ case MO_64:
341
+ if (u) {
342
+ if (d) {
343
+ gen_helper_sve_uqsubi_d(dptr, nptr, val, desc);
344
+ } else {
345
+ gen_helper_sve_uqaddi_d(dptr, nptr, val, desc);
346
+ }
347
+ } else if (d) {
348
+ t64 = tcg_temp_new_i64();
349
+ tcg_gen_neg_i64(t64, val);
350
+ gen_helper_sve_sqaddi_d(dptr, nptr, t64, desc);
351
+ tcg_temp_free_i64(t64);
352
+ } else {
353
+ gen_helper_sve_sqaddi_d(dptr, nptr, val, desc);
354
+ }
355
+ break;
356
+
357
+ default:
358
+ g_assert_not_reached();
359
+ }
360
+
361
+ tcg_temp_free_ptr(dptr);
362
+ tcg_temp_free_ptr(nptr);
363
+ tcg_temp_free_i32(desc);
364
+}
365
+
366
+static bool trans_CNT_r(DisasContext *s, arg_CNT_r *a, uint32_t insn)
367
+{
368
+ if (sve_access_check(s)) {
369
+ unsigned fullsz = vec_full_reg_size(s);
370
+ unsigned numelem = decode_pred_count(fullsz, a->pat, a->esz);
371
+ tcg_gen_movi_i64(cpu_reg(s, a->rd), numelem * a->imm);
372
+ }
373
+ return true;
374
+}
375
+
376
+static bool trans_INCDEC_r(DisasContext *s, arg_incdec_cnt *a, uint32_t insn)
377
+{
378
+ if (sve_access_check(s)) {
379
+ unsigned fullsz = vec_full_reg_size(s);
380
+ unsigned numelem = decode_pred_count(fullsz, a->pat, a->esz);
381
+ int inc = numelem * a->imm * (a->d ? -1 : 1);
382
+ TCGv_i64 reg = cpu_reg(s, a->rd);
383
+
384
+ tcg_gen_addi_i64(reg, reg, inc);
385
+ }
386
+ return true;
387
+}
388
+
389
+static bool trans_SINCDEC_r_32(DisasContext *s, arg_incdec_cnt *a,
390
+ uint32_t insn)
391
+{
392
+ if (!sve_access_check(s)) {
393
+ return true;
394
+ }
395
+
396
+ unsigned fullsz = vec_full_reg_size(s);
397
+ unsigned numelem = decode_pred_count(fullsz, a->pat, a->esz);
398
+ int inc = numelem * a->imm;
399
+ TCGv_i64 reg = cpu_reg(s, a->rd);
400
+
401
+ /* Use normal 64-bit arithmetic to detect 32-bit overflow. */
402
+ if (inc == 0) {
403
+ if (a->u) {
404
+ tcg_gen_ext32u_i64(reg, reg);
405
+ } else {
406
+ tcg_gen_ext32s_i64(reg, reg);
407
+ }
408
+ } else {
409
+ TCGv_i64 t = tcg_const_i64(inc);
410
+ do_sat_addsub_32(reg, t, a->u, a->d);
411
+ tcg_temp_free_i64(t);
412
+ }
413
+ return true;
414
+}
415
+
416
+static bool trans_SINCDEC_r_64(DisasContext *s, arg_incdec_cnt *a,
417
+ uint32_t insn)
418
+{
419
+ if (!sve_access_check(s)) {
420
+ return true;
421
+ }
422
+
423
+ unsigned fullsz = vec_full_reg_size(s);
424
+ unsigned numelem = decode_pred_count(fullsz, a->pat, a->esz);
425
+ int inc = numelem * a->imm;
426
+ TCGv_i64 reg = cpu_reg(s, a->rd);
427
+
428
+ if (inc != 0) {
429
+ TCGv_i64 t = tcg_const_i64(inc);
430
+ do_sat_addsub_64(reg, t, a->u, a->d);
431
+ tcg_temp_free_i64(t);
432
+ }
433
+ return true;
434
+}
435
+
436
+static bool trans_INCDEC_v(DisasContext *s, arg_incdec2_cnt *a, uint32_t insn)
437
+{
438
+ if (a->esz == 0) {
439
+ return false;
440
+ }
441
+
442
+ unsigned fullsz = vec_full_reg_size(s);
443
+ unsigned numelem = decode_pred_count(fullsz, a->pat, a->esz);
444
+ int inc = numelem * a->imm;
445
+
446
+ if (inc != 0) {
447
+ if (sve_access_check(s)) {
448
+ TCGv_i64 t = tcg_const_i64(a->d ? -inc : inc);
449
+ tcg_gen_gvec_adds(a->esz, vec_full_reg_offset(s, a->rd),
450
+ vec_full_reg_offset(s, a->rn),
451
+ t, fullsz, fullsz);
452
+ tcg_temp_free_i64(t);
453
+ }
454
+ } else {
455
+ do_mov_z(s, a->rd, a->rn);
456
+ }
457
+ return true;
458
+}
459
+
460
+static bool trans_SINCDEC_v(DisasContext *s, arg_incdec2_cnt *a,
461
+ uint32_t insn)
462
+{
463
+ if (a->esz == 0) {
464
+ return false;
465
+ }
466
+
467
+ unsigned fullsz = vec_full_reg_size(s);
468
+ unsigned numelem = decode_pred_count(fullsz, a->pat, a->esz);
469
+ int inc = numelem * a->imm;
470
+
471
+ if (inc != 0) {
472
+ if (sve_access_check(s)) {
473
+ TCGv_i64 t = tcg_const_i64(inc);
474
+ do_sat_addsub_vec(s, a->esz, a->rd, a->rn, t, a->u, a->d);
475
+ tcg_temp_free_i64(t);
476
+ }
477
+ } else {
478
+ do_mov_z(s, a->rd, a->rn);
479
+ }
480
+ return true;
481
+}
482
+
483
/*
484
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
485
*/
486
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
487
index XXXXXXX..XXXXXXX 100644
488
--- a/target/arm/sve.decode
489
+++ b/target/arm/sve.decode
490
@@ -XXX,XX +XXX,XX @@
491
###########################################################################
492
# Named fields. These are primarily for disjoint fields.
493
494
+%imm4_16_p1 16:4 !function=plus1
495
%imm6_22_5 22:1 5:5
496
%imm9_16_10 16:s6 10:3
497
498
@@ -XXX,XX +XXX,XX @@
499
&rprr_esz rd pg rn rm esz
500
&rprrr_esz rd pg rn rm ra esz
501
&rpri_esz rd pg rn imm esz
502
+&ptrue rd esz pat s
503
+&incdec_cnt rd pat esz imm d u
504
+&incdec2_cnt rd rn pat esz imm d u
505
506
###########################################################################
507
# Named instruction formats. These are generally used to
508
@@ -XXX,XX +XXX,XX @@
509
@rd_rn_i9 ........ ........ ...... rn:5 rd:5 \
510
&rri imm=%imm9_16_10
511
512
+# One register, pattern, and uint4+1.
513
+# User must fill in U and D.
514
+@incdec_cnt ........ esz:2 .. .... ...... pat:5 rd:5 \
515
+ &incdec_cnt imm=%imm4_16_p1
516
+@incdec2_cnt ........ esz:2 .. .... ...... pat:5 rd:5 \
517
+ &incdec2_cnt imm=%imm4_16_p1 rn=%reg_movprfx
518
+
519
###########################################################################
520
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
521
522
@@ -XXX,XX +XXX,XX @@ FEXPA 00000100 .. 1 00000 101110 ..... ..... @rd_rn
523
# Note esz != 0
524
FTSSEL 00000100 .. 1 ..... 101100 ..... ..... @rd_rn_rm
525
526
-### SVE Predicate Logical Operations Group
527
+### SVE Element Count Group
528
+
529
+# SVE element count
530
+CNT_r 00000100 .. 10 .... 1110 0 0 ..... ..... @incdec_cnt d=0 u=1
531
+
532
+# SVE inc/dec register by element count
533
+INCDEC_r 00000100 .. 11 .... 1110 0 d:1 ..... ..... @incdec_cnt u=1
534
+
535
+# SVE saturating inc/dec register by element count
536
+SINCDEC_r_32 00000100 .. 10 .... 1111 d:1 u:1 ..... ..... @incdec_cnt
537
+SINCDEC_r_64 00000100 .. 11 .... 1111 d:1 u:1 ..... ..... @incdec_cnt
538
+
539
+# SVE inc/dec vector by element count
540
+# Note this requires esz != 0.
541
+INCDEC_v 00000100 .. 1 1 .... 1100 0 d:1 ..... ..... @incdec2_cnt u=1
542
+
543
+# SVE saturating inc/dec vector by element count
544
+# Note these require esz != 0.
545
+SINCDEC_v 00000100 .. 1 0 .... 1100 d:1 u:1 ..... ..... @incdec2_cnt
546
547
# SVE predicate logical operations
548
AND_pppp 00100101 0. 00 .... 01 .... 0 .... 0 .... @pd_pg_pn_pm_s
549
--
80
--
550
2.17.0
81
2.20.1
551
82
552
83
1
From: Richard Henderson <richard.henderson@linaro.org>
1
We incorrectly treat SVC 0xf0002 as a cacheflush request (which is a
2
NOP for QEMU). This is the wrong syscall number, because in the
3
SVC immediate, OABI syscall numbers are all offset by the
4
ARM_SYSCALL_BASE value and so the correct insn is SVC 0x9f0002.
5
(This is handled further down in the code with the other Arm-specific
6
syscalls like NR_breakpoint.)
2
7
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
When this code was initially added in commit 6f1f31c069b20611 in
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
2004, ARM_NR_cacheflush was defined as (ARM_SYSCALL_BASE + 0xf0000 + 2)
5
Message-id: 20180516223007.10256-22-richard.henderson@linaro.org
10
so the value in the comparison took account of the extra 0x900000
11
offset. In commit fbb4a2e371f2fa7 in 2008, the ARM_SYSCALL_BASE
12
was removed from the definition of ARM_NR_cacheflush and handling
13
for this group of syscalls was added below the point where we subtract
14
ARM_SYSCALL_BASE from the SVC immediate value. However that commit
15
forgot to remove the now-obsolete earlier handling code.
16
17
Remove the spurious ARM_NR_cacheflush condition.
18
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
22
Message-id: 20200420212206.12776-3-peter.maydell@linaro.org
7
---
23
---
8
target/arm/helper-sve.h | 4 ++++
24
linux-user/arm/cpu_loop.c | 4 +---
9
target/arm/sve_helper.c | 43 ++++++++++++++++++++++++++++++++++++++
25
1 file changed, 1 insertion(+), 3 deletions(-)
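For reference, the numbering works out as follows (a sketch using the constants from QEMU's linux-user/arm headers; OABI userspace encodes the 0x900000 base directly in the SVC immediate):

    #define ARM_SYSCALL_BASE   0x900000           /* OABI swi/svc immediate base */
    #define ARM_NR_BASE        0xf0000            /* Arm-private group, base already stripped */
    #define ARM_NR_cacheflush  (ARM_NR_BASE + 2)  /* 0x0f0002 */

    /* So the encoding OABI userspace actually issues is:
     *   SVC (ARM_SYSCALL_BASE + ARM_NR_cacheflush)  ==  SVC 0x9f0002
     * and plain 0xf0002 is only meaningful after the base has been removed. */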
10
target/arm/translate-sve.c | 21 +++++++++++++++++++
11
target/arm/sve.decode | 4 ++++
12
4 files changed, 72 insertions(+)
13
26
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
27
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
15
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
29
--- a/linux-user/arm/cpu_loop.c
17
+++ b/target/arm/helper-sve.h
30
+++ b/linux-user/arm/cpu_loop.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_fexpa_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
31
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
19
DEF_HELPER_FLAGS_3(sve_fexpa_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
32
n = insn & 0xffffff;
20
DEF_HELPER_FLAGS_3(sve_fexpa_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
33
}
21
34
22
+DEF_HELPER_FLAGS_4(sve_ftssel_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
- if (n == ARM_NR_cacheflush) {
23
+DEF_HELPER_FLAGS_4(sve_ftssel_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
- /* nop */
24
+DEF_HELPER_FLAGS_4(sve_ftssel_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
- } else if (n == 0 || n >= ARM_SYSCALL_BASE || env->thumb) {
25
+
38
+ if (n == 0 || n >= ARM_SYSCALL_BASE || env->thumb) {
26
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
39
/* linux syscall */
27
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
40
if (env->thumb || n == 0) {
28
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
41
n = env->regs[7];
29
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/sve_helper.c
32
+++ b/target/arm/sve_helper.c
33
@@ -XXX,XX +XXX,XX @@
34
#include "exec/cpu_ldst.h"
35
#include "exec/helper-proto.h"
36
#include "tcg/tcg-gvec-desc.h"
37
+#include "fpu/softfloat.h"
38
39
40
/* Note that vector data is stored in host-endian 64-bit chunks,
41
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t desc)
42
d[i] = coeff[idx] | (exp << 52);
43
}
44
}
45
+
46
+void HELPER(sve_ftssel_h)(void *vd, void *vn, void *vm, uint32_t desc)
47
+{
48
+ intptr_t i, opr_sz = simd_oprsz(desc) / 2;
49
+ uint16_t *d = vd, *n = vn, *m = vm;
50
+ for (i = 0; i < opr_sz; i += 1) {
51
+ uint16_t nn = n[i];
52
+ uint16_t mm = m[i];
53
+ if (mm & 1) {
54
+ nn = float16_one;
55
+ }
56
+ d[i] = nn ^ (mm & 2) << 14;
57
+ }
58
+}
59
+
60
+void HELPER(sve_ftssel_s)(void *vd, void *vn, void *vm, uint32_t desc)
61
+{
62
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
63
+ uint32_t *d = vd, *n = vn, *m = vm;
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ uint32_t nn = n[i];
66
+ uint32_t mm = m[i];
67
+ if (mm & 1) {
68
+ nn = float32_one;
69
+ }
70
+ d[i] = nn ^ (mm & 2) << 30;
71
+ }
72
+}
73
+
74
+void HELPER(sve_ftssel_d)(void *vd, void *vn, void *vm, uint32_t desc)
75
+{
76
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
77
+ uint64_t *d = vd, *n = vn, *m = vm;
78
+ for (i = 0; i < opr_sz; i += 1) {
79
+ uint64_t nn = n[i];
80
+ uint64_t mm = m[i];
81
+ if (mm & 1) {
82
+ nn = float64_one;
83
+ }
84
+ d[i] = nn ^ (mm & 2) << 62;
85
+ }
86
+}
87
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
88
index XXXXXXX..XXXXXXX 100644
89
--- a/target/arm/translate-sve.c
90
+++ b/target/arm/translate-sve.c
91
@@ -XXX,XX +XXX,XX @@ static bool trans_FEXPA(DisasContext *s, arg_rr_esz *a, uint32_t insn)
92
return true;
93
}
94
95
+static bool trans_FTSSEL(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
96
+{
97
+ static gen_helper_gvec_3 * const fns[4] = {
98
+ NULL,
99
+ gen_helper_sve_ftssel_h,
100
+ gen_helper_sve_ftssel_s,
101
+ gen_helper_sve_ftssel_d,
102
+ };
103
+ if (a->esz == 0) {
104
+ return false;
105
+ }
106
+ if (sve_access_check(s)) {
107
+ unsigned vsz = vec_full_reg_size(s);
108
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
109
+ vec_full_reg_offset(s, a->rn),
110
+ vec_full_reg_offset(s, a->rm),
111
+ vsz, vsz, 0, fns[a->esz]);
112
+ }
113
+ return true;
114
+}
115
+
116
/*
117
*** SVE Predicate Logical Operations Group
118
*/
119
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/sve.decode
122
+++ b/target/arm/sve.decode
123
@@ -XXX,XX +XXX,XX @@ ADR_p64 00000100 11 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
124
# Note esz != 0
125
FEXPA 00000100 .. 1 00000 101110 ..... ..... @rd_rn
126
127
+# SVE floating-point trig select coefficient
128
+# Note esz != 0
129
+FTSSEL 00000100 .. 1 ..... 101100 ..... ..... @rd_rn_rm
130
+
131
### SVE Predicate Logical Operations Group
132
133
# SVE predicate logical operations
134
--
42
--
135
2.17.0
43
2.20.1
136
44
137
45
1
From: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
1
The kernel has different handling for syscalls with invalid
2
numbers that are in the "arm-specific" range 0x9f0000 and up:
3
* 0x9f0000..0x9f07ff return -ENOSYS if not implemented
4
* other out of range syscalls cause a SIGILL
5
(see the kernel's arch/arm/kernel/traps.c:arm_syscall())
2
6
3
This is a preparation for the coming feature of dynamically creating an XML
7
Implement this distinction. (Note that our code doesn't look
4
description for the ARM sysregs.
8
quite like the kernel's, because we have removed the
5
Add "_S" suffix to the secure version of sysregs that have both S and NS views
9
0x900000 prefix by this point, whereas the kernel retains
6
Replace (S) and (NS) by _S and _NS for the registers that are manually defined,
10
it in arm_syscall().)
7
so all the registers follow the same convention.
8
11
9
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Tested-by: Alex Bennée <alex.bennee@linaro.org>
13
Message-id: 1524153386-3550-3-git-send-email-abdallah.bouassida@lauterbach.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Message-id: 20200420212206.12776-4-peter.maydell@linaro.org
15
---
15
---
16
target/arm/helper.c | 29 ++++++++++++++++++-----------
16
linux-user/arm/cpu_loop.c | 30 ++++++++++++++++++++++++++----
17
1 file changed, 18 insertions(+), 11 deletions(-)
17
1 file changed, 26 insertions(+), 4 deletions(-)
18
18
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
20
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
21
--- a/linux-user/arm/cpu_loop.c
22
+++ b/target/arm/helper.c
22
+++ b/linux-user/arm/cpu_loop.c
23
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
23
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
24
* the secure register to be properly reset and migrated. There is also no
24
env->regs[0] = cpu_get_tls(env);
25
* v8 EL1 version of the register so the non-secure instance stands alone.
26
*/
27
- { .name = "FCSEIDR(NS)",
28
+ { .name = "FCSEIDR",
29
.cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0,
30
.access = PL1_RW, .secure = ARM_CP_SECSTATE_NS,
31
.fieldoffset = offsetof(CPUARMState, cp15.fcseidr_ns),
32
.resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, },
33
- { .name = "FCSEIDR(S)",
34
+ { .name = "FCSEIDR_S",
35
.cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 0,
36
.access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
37
.fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s),
38
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
39
.access = PL1_RW, .secure = ARM_CP_SECSTATE_NS,
40
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]),
41
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
42
- { .name = "CONTEXTIDR(S)", .state = ARM_CP_STATE_AA32,
43
+ { .name = "CONTEXTIDR_S", .state = ARM_CP_STATE_AA32,
44
.cp = 15, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1,
45
.access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
46
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
47
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
48
cp15.c14_timer[GTIMER_PHYS].ctl),
49
.writefn = gt_phys_ctl_write, .raw_writefn = raw_write,
50
},
51
- { .name = "CNTP_CTL(S)",
52
+ { .name = "CNTP_CTL_S",
53
.cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1,
54
.secure = ARM_CP_SECSTATE_S,
55
.type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL1_RW | PL0_R,
56
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
57
.accessfn = gt_ptimer_access,
58
.readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write,
59
},
60
- { .name = "CNTP_TVAL(S)",
61
+ { .name = "CNTP_TVAL_S",
62
.cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 0,
63
.secure = ARM_CP_SECSTATE_S,
64
.type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL1_RW | PL0_R,
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
66
.accessfn = gt_ptimer_access,
67
.writefn = gt_phys_cval_write, .raw_writefn = raw_write,
68
},
69
- { .name = "CNTP_CVAL(S)", .cp = 15, .crm = 14, .opc1 = 2,
70
+ { .name = "CNTP_CVAL_S", .cp = 15, .crm = 14, .opc1 = 2,
71
.secure = ARM_CP_SECSTATE_S,
72
.access = PL1_RW | PL0_R,
73
.type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS,
74
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *arch_query_cpu_definitions(Error **errp)
75
76
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
77
void *opaque, int state, int secstate,
78
- int crm, int opc1, int opc2)
79
+ int crm, int opc1, int opc2,
80
+ const char *name)
81
{
82
/* Private utility function for define_one_arm_cp_reg_with_opaque():
83
* add a single reginfo struct to the hash table.
84
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
85
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
86
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
87
88
+ r2->name = g_strdup(name);
89
/* Reset the secure state to the specific incoming state. This is
90
* necessary as the register may have been defined with both states.
91
*/
92
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
93
/* Under AArch32 CP registers can be common
94
* (same for secure and non-secure world) or banked.
95
*/
96
+ char *name;
97
+
98
switch (r->secure) {
99
case ARM_CP_SECSTATE_S:
100
case ARM_CP_SECSTATE_NS:
101
add_cpreg_to_hashtable(cpu, r, opaque, state,
102
- r->secure, crm, opc1, opc2);
103
+ r->secure, crm, opc1, opc2,
104
+ r->name);
105
break;
25
break;
106
default:
26
default:
107
+ name = g_strdup_printf("%s_S", r->name);
27
- qemu_log_mask(LOG_UNIMP,
108
add_cpreg_to_hashtable(cpu, r, opaque, state,
28
- "qemu: Unsupported ARM syscall: 0x%x\n",
109
ARM_CP_SECSTATE_S,
29
- n);
110
- crm, opc1, opc2);
30
- env->regs[0] = -TARGET_ENOSYS;
111
+ crm, opc1, opc2, name);
31
+ if (n < 0xf0800) {
112
+ g_free(name);
32
+ /*
113
add_cpreg_to_hashtable(cpu, r, opaque, state,
33
+ * Syscalls 0xf0000..0xf07ff (or 0x9f0000..
114
ARM_CP_SECSTATE_NS,
34
+ * 0x9f07ff in OABI numbering) are defined
115
- crm, opc1, opc2);
35
+ * to return -ENOSYS rather than raising
116
+ crm, opc1, opc2, r->name);
36
+ * SIGILL. Note that we have already
37
+ * removed the 0x900000 prefix.
38
+ */
39
+ qemu_log_mask(LOG_UNIMP,
40
+ "qemu: Unsupported ARM syscall: 0x%x\n",
41
+ n);
42
+ env->regs[0] = -TARGET_ENOSYS;
43
+ } else {
44
+ /* Otherwise SIGILL */
45
+ info.si_signo = TARGET_SIGILL;
46
+ info.si_errno = 0;
47
+ info.si_code = TARGET_ILL_ILLTRP;
48
+ info._sifields._sigfault._addr = env->regs[15];
49
+ if (env->thumb) {
50
+ info._sifields._sigfault._addr -= 2;
51
+ } else {
52
+ info._sifields._sigfault._addr -= 4;
53
+ }
54
+ queue_signal(env, info.si_signo,
55
+ QEMU_SI_FAULT, &info);
56
+ }
117
break;
57
break;
118
}
58
}
119
} else {
59
} else {
120
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
121
* of AArch32 */
122
add_cpreg_to_hashtable(cpu, r, opaque, state,
123
ARM_CP_SECSTATE_NS,
124
- crm, opc1, opc2);
125
+ crm, opc1, opc2, r->name);
126
}
127
}
128
}
129
--
60
--
130
2.17.0
61
2.20.1
131
62
132
63
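The n < 0xf0800 bound in the cpu_loop hunk above follows from stripping the 0x900000 OABI prefix: the "arm-specific" immediates 0x9f0000..0x9f07ff become 0xf0000..0xf07ff, so numbers in that window merely return -ENOSYS while anything larger raises SIGILL. A hedged sketch of that split (the function and enum names are illustrative; only the 0xf0800 constant comes from the patch):

    #include <stdint.h>

    /* Classify an unsupported syscall number after the 0x900000 prefix has
     * been removed: 0x9f0000..0x9f07ff maps to 0xf0000..0xf07ff -> -ENOSYS,
     * everything above 0xf07ff -> SIGILL.
     */
    enum bad_syscall_action { ACT_ENOSYS, ACT_SIGILL };

    static enum bad_syscall_action classify_bad_syscall(uint32_t n)
    {
        return (n < 0xf0800) ? ACT_ENOSYS : ACT_SIGILL;
    }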
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Our code to identify syscall numbers has some issues:
2
* for Thumb mode, we never need the immediate value from the insn,
3
but we always read it anyway
4
* bad immediate values in the svc insn should cause a SIGILL, but we
5
were abort()ing instead (via "goto error")
2
6
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
We can fix both these things by refactoring the code that identifies
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
the syscall number to more closely follow the kernel COMPAT_OABI code:
5
Message-id: 20180516223007.10256-21-richard.henderson@linaro.org
9
* for Thumb it is always r7
10
* for Arm, if the immediate value is 0, then this is an EABI call
11
with the syscall number in r7
12
* otherwise, we XOR the immediate value with 0x900000
13
(ARM_SYSCALL_BASE for QEMU; __NR_OABI_SYSCALL_BASE in the kernel),
14
which converts valid syscall immediates into the desired value,
15
and puts all invalid immediates in the range 0x100000 or above
16
* then we can just let the existing "value too large, deliver
17
SIGILL" case handle invalid numbers, and drop the 'goto error'
18
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
21
Message-id: 20200420212206.12776-5-peter.maydell@linaro.org
7
---
22
---
8
target/arm/helper-sve.h | 4 ++
23
linux-user/arm/cpu_loop.c | 143 ++++++++++++++++++++------------------
9
target/arm/sve_helper.c | 90 ++++++++++++++++++++++++++++++++++++++
24
1 file changed, 77 insertions(+), 66 deletions(-)
10
target/arm/translate-sve.c | 24 ++++++++++
11
target/arm/sve.decode | 7 +++
12
4 files changed, 125 insertions(+)
13
25
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
26
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
15
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
28
--- a/linux-user/arm/cpu_loop.c
17
+++ b/target/arm/helper-sve.h
29
+++ b/linux-user/arm/cpu_loop.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_adr_p64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
19
DEF_HELPER_FLAGS_4(sve_adr_s32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
env->eabi = 1;
20
DEF_HELPER_FLAGS_4(sve_adr_u32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
/* system call */
21
33
if (env->thumb) {
22
+DEF_HELPER_FLAGS_3(sve_fexpa_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
34
- /* FIXME - what to do if get_user() fails? */
23
+DEF_HELPER_FLAGS_3(sve_fexpa_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
35
- get_user_code_u16(insn, env->regs[15] - 2, env);
24
+DEF_HELPER_FLAGS_3(sve_fexpa_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
36
- n = insn & 0xff;
37
+ /* Thumb is always EABI style with syscall number in r7 */
38
+ n = env->regs[7];
39
} else {
40
+ /*
41
+ * Equivalent of kernel CONFIG_OABI_COMPAT: read the
42
+ * Arm SVC insn to extract the immediate, which is the
43
+ * syscall number in OABI.
44
+ */
45
/* FIXME - what to do if get_user() fails? */
46
get_user_code_u32(insn, env->regs[15] - 4, env);
47
n = insn & 0xffffff;
48
- }
49
-
50
- if (n == 0 || n >= ARM_SYSCALL_BASE || env->thumb) {
51
- /* linux syscall */
52
- if (env->thumb || n == 0) {
53
+ if (n == 0) {
54
+ /* zero immediate: EABI, syscall number in r7 */
55
n = env->regs[7];
56
} else {
57
- n -= ARM_SYSCALL_BASE;
58
+ /*
59
+ * This XOR matches the kernel code: an immediate
60
+ * in the valid range (0x900000 .. 0x9fffff) is
61
+ * converted into the correct EABI-style syscall
62
+ * number; invalid immediates end up as values
63
+ * > 0xfffff and are handled below as out-of-range.
64
+ */
65
+ n ^= ARM_SYSCALL_BASE;
66
env->eabi = 0;
67
}
68
- if ( n > ARM_NR_BASE) {
69
- switch (n) {
70
- case ARM_NR_cacheflush:
71
- /* nop */
72
- break;
73
- case ARM_NR_set_tls:
74
- cpu_set_tls(env, env->regs[0]);
75
- env->regs[0] = 0;
76
- break;
77
- case ARM_NR_breakpoint:
78
- env->regs[15] -= env->thumb ? 2 : 4;
79
- goto excp_debug;
80
- case ARM_NR_get_tls:
81
- env->regs[0] = cpu_get_tls(env);
82
- break;
83
- default:
84
- if (n < 0xf0800) {
85
- /*
86
- * Syscalls 0xf0000..0xf07ff (or 0x9f0000..
87
- * 0x9f07ff in OABI numbering) are defined
88
- * to return -ENOSYS rather than raising
89
- * SIGILL. Note that we have already
90
- * removed the 0x900000 prefix.
91
- */
92
- qemu_log_mask(LOG_UNIMP,
93
- "qemu: Unsupported ARM syscall: 0x%x\n",
94
- n);
95
- env->regs[0] = -TARGET_ENOSYS;
96
+ }
25
+
97
+
26
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
98
+ if (n > ARM_NR_BASE) {
27
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
99
+ switch (n) {
28
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
100
+ case ARM_NR_cacheflush:
29
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
101
+ /* nop */
30
index XXXXXXX..XXXXXXX 100644
102
+ break;
31
--- a/target/arm/sve_helper.c
103
+ case ARM_NR_set_tls:
32
+++ b/target/arm/sve_helper.c
104
+ cpu_set_tls(env, env->regs[0]);
33
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_adr_u32)(void *vd, void *vn, void *vm, uint32_t desc)
105
+ env->regs[0] = 0;
34
d[i] = n[i] + ((uint64_t)(uint32_t)m[i] << sh);
106
+ break;
35
}
107
+ case ARM_NR_breakpoint:
36
}
108
+ env->regs[15] -= env->thumb ? 2 : 4;
37
+
109
+ goto excp_debug;
38
+void HELPER(sve_fexpa_h)(void *vd, void *vn, uint32_t desc)
110
+ case ARM_NR_get_tls:
39
+{
111
+ env->regs[0] = cpu_get_tls(env);
40
+ /* These constants are cut-and-paste directly from the ARM pseudocode. */
112
+ break;
41
+ static const uint16_t coeff[] = {
113
+ default:
42
+ 0x0000, 0x0016, 0x002d, 0x0045, 0x005d, 0x0075, 0x008e, 0x00a8,
114
+ if (n < 0xf0800) {
43
+ 0x00c2, 0x00dc, 0x00f8, 0x0114, 0x0130, 0x014d, 0x016b, 0x0189,
115
+ /*
44
+ 0x01a8, 0x01c8, 0x01e8, 0x0209, 0x022b, 0x024e, 0x0271, 0x0295,
116
+ * Syscalls 0xf0000..0xf07ff (or 0x9f0000..
45
+ 0x02ba, 0x02e0, 0x0306, 0x032e, 0x0356, 0x037f, 0x03a9, 0x03d4,
117
+ * 0x9f07ff in OABI numbering) are defined
46
+ };
118
+ * to return -ENOSYS rather than raising
47
+ intptr_t i, opr_sz = simd_oprsz(desc) / 2;
119
+ * SIGILL. Note that we have already
48
+ uint16_t *d = vd, *n = vn;
120
+ * removed the 0x900000 prefix.
49
+
121
+ */
50
+ for (i = 0; i < opr_sz; i++) {
122
+ qemu_log_mask(LOG_UNIMP,
51
+ uint16_t nn = n[i];
123
+ "qemu: Unsupported ARM syscall: 0x%x\n",
52
+ intptr_t idx = extract32(nn, 0, 5);
124
+ n);
53
+ uint16_t exp = extract32(nn, 5, 5);
125
+ env->regs[0] = -TARGET_ENOSYS;
54
+ d[i] = coeff[idx] | (exp << 10);
126
+ } else {
55
+ }
127
+ /*
56
+}
128
+ * Otherwise SIGILL. This includes any SWI with
57
+
129
+ * immediate not originally 0x9fxxxx, because
58
+void HELPER(sve_fexpa_s)(void *vd, void *vn, uint32_t desc)
130
+ * of the earlier XOR.
59
+{
131
+ */
60
+ /* These constants are cut-and-paste directly from the ARM pseudocode. */
132
+ info.si_signo = TARGET_SIGILL;
61
+ static const uint32_t coeff[] = {
133
+ info.si_errno = 0;
62
+ 0x000000, 0x0164d2, 0x02cd87, 0x043a29,
134
+ info.si_code = TARGET_ILL_ILLTRP;
63
+ 0x05aac3, 0x071f62, 0x08980f, 0x0a14d5,
135
+ info._sifields._sigfault._addr = env->regs[15];
64
+ 0x0b95c2, 0x0d1adf, 0x0ea43a, 0x1031dc,
136
+ if (env->thumb) {
65
+ 0x11c3d3, 0x135a2b, 0x14f4f0, 0x16942d,
137
+ info._sifields._sigfault._addr -= 2;
66
+ 0x1837f0, 0x19e046, 0x1b8d3a, 0x1d3eda,
138
} else {
67
+ 0x1ef532, 0x20b051, 0x227043, 0x243516,
139
- /* Otherwise SIGILL */
68
+ 0x25fed7, 0x27cd94, 0x29a15b, 0x2b7a3a,
140
- info.si_signo = TARGET_SIGILL;
69
+ 0x2d583f, 0x2f3b79, 0x3123f6, 0x3311c4,
141
- info.si_errno = 0;
70
+ 0x3504f3, 0x36fd92, 0x38fbaf, 0x3aff5b,
142
- info.si_code = TARGET_ILL_ILLTRP;
71
+ 0x3d08a4, 0x3f179a, 0x412c4d, 0x4346cd,
143
- info._sifields._sigfault._addr = env->regs[15];
72
+ 0x45672a, 0x478d75, 0x49b9be, 0x4bec15,
144
- if (env->thumb) {
73
+ 0x4e248c, 0x506334, 0x52a81e, 0x54f35b,
145
- info._sifields._sigfault._addr -= 2;
74
+ 0x5744fd, 0x599d16, 0x5bfbb8, 0x5e60f5,
146
- } else {
75
+ 0x60ccdf, 0x633f89, 0x65b907, 0x68396a,
147
- info._sifields._sigfault._addr -= 4;
76
+ 0x6ac0c7, 0x6d4f30, 0x6fe4ba, 0x728177,
148
- }
77
+ 0x75257d, 0x77d0df, 0x7a83b3, 0x7d3e0c,
149
- queue_signal(env, info.si_signo,
78
+ };
150
- QEMU_SI_FAULT, &info);
79
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
151
+ info._sifields._sigfault._addr -= 4;
80
+ uint32_t *d = vd, *n = vn;
152
}
81
+
153
- break;
82
+ for (i = 0; i < opr_sz; i++) {
154
- }
83
+ uint32_t nn = n[i];
155
- } else {
84
+ intptr_t idx = extract32(nn, 0, 6);
156
- ret = do_syscall(env,
85
+ uint32_t exp = extract32(nn, 6, 8);
157
- n,
86
+ d[i] = coeff[idx] | (exp << 23);
158
- env->regs[0],
87
+ }
159
- env->regs[1],
88
+}
160
- env->regs[2],
89
+
161
- env->regs[3],
90
+void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t desc)
162
- env->regs[4],
91
+{
163
- env->regs[5],
92
+ /* These constants are cut-and-paste directly from the ARM pseudocode. */
164
- 0, 0);
93
+ static const uint64_t coeff[] = {
165
- if (ret == -TARGET_ERESTARTSYS) {
94
+ 0x0000000000000ull, 0x02C9A3E778061ull, 0x059B0D3158574ull,
166
- env->regs[15] -= env->thumb ? 2 : 4;
95
+ 0x0874518759BC8ull, 0x0B5586CF9890Full, 0x0E3EC32D3D1A2ull,
167
- } else if (ret != -TARGET_QEMU_ESIGRETURN) {
96
+ 0x11301D0125B51ull, 0x1429AAEA92DE0ull, 0x172B83C7D517Bull,
168
- env->regs[0] = ret;
97
+ 0x1A35BEB6FCB75ull, 0x1D4873168B9AAull, 0x2063B88628CD6ull,
169
+ queue_signal(env, info.si_signo,
98
+ 0x2387A6E756238ull, 0x26B4565E27CDDull, 0x29E9DF51FDEE1ull,
170
+ QEMU_SI_FAULT, &info);
99
+ 0x2D285A6E4030Bull, 0x306FE0A31B715ull, 0x33C08B26416FFull,
171
}
100
+ 0x371A7373AA9CBull, 0x3A7DB34E59FF7ull, 0x3DEA64C123422ull,
172
+ break;
101
+ 0x4160A21F72E2Aull, 0x44E086061892Dull, 0x486A2B5C13CD0ull,
173
}
102
+ 0x4BFDAD5362A27ull, 0x4F9B2769D2CA7ull, 0x5342B569D4F82ull,
174
} else {
103
+ 0x56F4736B527DAull, 0x5AB07DD485429ull, 0x5E76F15AD2148ull,
175
- goto error;
104
+ 0x6247EB03A5585ull, 0x6623882552225ull, 0x6A09E667F3BCDull,
176
+ ret = do_syscall(env,
105
+ 0x6DFB23C651A2Full, 0x71F75E8EC5F74ull, 0x75FEB564267C9ull,
177
+ n,
106
+ 0x7A11473EB0187ull, 0x7E2F336CF4E62ull, 0x82589994CCE13ull,
178
+ env->regs[0],
107
+ 0x868D99B4492EDull, 0x8ACE5422AA0DBull, 0x8F1AE99157736ull,
179
+ env->regs[1],
108
+ 0x93737B0CDC5E5ull, 0x97D829FDE4E50ull, 0x9C49182A3F090ull,
180
+ env->regs[2],
109
+ 0xA0C667B5DE565ull, 0xA5503B23E255Dull, 0xA9E6B5579FDBFull,
181
+ env->regs[3],
110
+ 0xAE89F995AD3ADull, 0xB33A2B84F15FBull, 0xB7F76F2FB5E47ull,
182
+ env->regs[4],
111
+ 0xBCC1E904BC1D2ull, 0xC199BDD85529Cull, 0xC67F12E57D14Bull,
183
+ env->regs[5],
112
+ 0xCB720DCEF9069ull, 0xD072D4A07897Cull, 0xD5818DCFBA487ull,
184
+ 0, 0);
113
+ 0xDA9E603DB3285ull, 0xDFC97337B9B5Full, 0xE502EE78B3FF6ull,
185
+ if (ret == -TARGET_ERESTARTSYS) {
114
+ 0xEA4AFA2A490DAull, 0xEFA1BEE615A27ull, 0xF50765B6E4540ull,
186
+ env->regs[15] -= env->thumb ? 2 : 4;
115
+ 0xFA7C1819E90D8ull,
187
+ } else if (ret != -TARGET_QEMU_ESIGRETURN) {
116
+ };
188
+ env->regs[0] = ret;
117
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
189
+ }
118
+ uint64_t *d = vd, *n = vn;
190
}
119
+
191
}
120
+ for (i = 0; i < opr_sz; i++) {
192
break;
121
+ uint64_t nn = n[i];
122
+ intptr_t idx = extract32(nn, 0, 6);
123
+ uint64_t exp = extract32(nn, 6, 11);
124
+ d[i] = coeff[idx] | (exp << 52);
125
+ }
126
+}
127
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/translate-sve.c
130
+++ b/target/arm/translate-sve.c
131
@@ -XXX,XX +XXX,XX @@ static bool trans_ADR_u32(DisasContext *s, arg_rrri *a, uint32_t insn)
132
return do_adr(s, a, gen_helper_sve_adr_u32);
133
}
134
135
+/*
136
+ *** SVE Integer Misc - Unpredicated Group
137
+ */
138
+
139
+static bool trans_FEXPA(DisasContext *s, arg_rr_esz *a, uint32_t insn)
140
+{
141
+ static gen_helper_gvec_2 * const fns[4] = {
142
+ NULL,
143
+ gen_helper_sve_fexpa_h,
144
+ gen_helper_sve_fexpa_s,
145
+ gen_helper_sve_fexpa_d,
146
+ };
147
+ if (a->esz == 0) {
148
+ return false;
149
+ }
150
+ if (sve_access_check(s)) {
151
+ unsigned vsz = vec_full_reg_size(s);
152
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, a->rd),
153
+ vec_full_reg_offset(s, a->rn),
154
+ vsz, vsz, 0, fns[a->esz]);
155
+ }
156
+ return true;
157
+}
158
+
159
/*
160
*** SVE Predicate Logical Operations Group
161
*/
162
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
163
index XXXXXXX..XXXXXXX 100644
164
--- a/target/arm/sve.decode
165
+++ b/target/arm/sve.decode
166
@@ -XXX,XX +XXX,XX @@
167
168
# Two operand
169
@pd_pn ........ esz:2 .. .... ....... rn:4 . rd:4 &rr_esz
170
+@rd_rn ........ esz:2 ...... ...... rn:5 rd:5 &rr_esz
171
172
# Three operand with unused vector element size
173
@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
174
@@ -XXX,XX +XXX,XX @@ ADR_u32 00000100 01 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
175
ADR_p32 00000100 10 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
176
ADR_p64 00000100 11 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
177
178
+### SVE Integer Misc - Unpredicated Group
179
+
180
+# SVE floating-point exponential accelerator
181
+# Note esz != 0
182
+FEXPA 00000100 .. 1 00000 101110 ..... ..... @rd_rn
183
+
184
### SVE Predicate Logical Operations Group
185
186
# SVE predicate logical operations
187
--
193
--
188
2.17.0
194
2.20.1
189
195
190
196
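The key step in the refactored cpu_loop above is the XOR with ARM_SYSCALL_BASE: for any SVC immediate in the valid OABI range 0x900000..0x9fffff the XOR simply clears the prefix bits, and for every other 24-bit immediate it leaves at least one bit set above bit 19, pushing the value past 0xfffff so the existing out-of-range check rejects it. A small self-contained check of that property (the helper name is illustrative; the constants are the ones quoted in the commit message):

    #include <assert.h>
    #include <stdint.h>

    #define ARM_SYSCALL_BASE 0x900000   /* __NR_OABI_SYSCALL_BASE in the kernel */

    /* Map a 24-bit OABI SVC immediate to an EABI-style syscall number. */
    static uint32_t oabi_imm_to_syscall(uint32_t imm24)
    {
        return imm24 ^ ARM_SYSCALL_BASE;
    }

    int main(void)
    {
        assert(oabi_imm_to_syscall(0x900004) == 4);          /* write() */
        assert(oabi_imm_to_syscall(0x9f0002) == 0xf0002);    /* cacheflush */
        assert(oabi_imm_to_syscall(0x123456) >= 0x100000);   /* invalid imm */
        return 0;
    }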
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The GEN_NEON_INTEGER_OP macro is no longer used; remove it.
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180516223007.10256-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
---
5
---
8
target/arm/helper-sve.h | 12 ++++++
6
target/arm/translate.c | 23 -----------------------
9
target/arm/sve_helper.c | 30 ++++++++++++++
7
1 file changed, 23 deletions(-)
10
target/arm/translate-sve.c | 85 ++++++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 26 ++++++++++++
12
4 files changed, 153 insertions(+)
13
8
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
9
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
10
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
11
--- a/target/arm/translate.c
17
+++ b/target/arm/helper-sve.h
12
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_index_h, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
13
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
19
DEF_HELPER_FLAGS_4(sve_index_s, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
14
default: return 1; \
20
DEF_HELPER_FLAGS_4(sve_index_d, TCG_CALL_NO_RWG, void, ptr, i64, i64, i32)
15
}} while (0)
21
16
22
+DEF_HELPER_FLAGS_4(sve_asr_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
17
-#define GEN_NEON_INTEGER_OP(name) do { \
23
+DEF_HELPER_FLAGS_4(sve_asr_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
- switch ((size << 1) | u) { \
24
+DEF_HELPER_FLAGS_4(sve_asr_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
- case 0: \
25
+
20
- gen_helper_neon_##name##_s8(tmp, tmp, tmp2); \
26
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
- break; \
27
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
- case 1: \
28
+DEF_HELPER_FLAGS_4(sve_lsr_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
- gen_helper_neon_##name##_u8(tmp, tmp, tmp2); \
29
+
24
- break; \
30
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
- case 2: \
31
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
- gen_helper_neon_##name##_s16(tmp, tmp, tmp2); \
32
+DEF_HELPER_FLAGS_4(sve_lsl_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
- break; \
33
+
28
- case 3: \
34
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
29
- gen_helper_neon_##name##_u16(tmp, tmp, tmp2); \
35
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
- break; \
36
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
- case 4: \
37
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
32
- gen_helper_neon_##name##_s32(tmp, tmp, tmp2); \
38
index XXXXXXX..XXXXXXX 100644
33
- break; \
39
--- a/target/arm/sve_helper.c
34
- case 5: \
40
+++ b/target/arm/sve_helper.c
35
- gen_helper_neon_##name##_u32(tmp, tmp, tmp2); \
41
@@ -XXX,XX +XXX,XX @@ DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
36
- break; \
42
DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
37
- default: return 1; \
43
DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
38
- }} while (0)
44
39
-
45
+/* Three-operand expander, unpredicated, in which the third operand is "wide".
40
static TCGv_i32 neon_load_scratch(int scratch)
46
+ */
41
{
47
+#define DO_ZZW(NAME, TYPE, TYPEW, H, OP) \
42
TCGv_i32 tmp = tcg_temp_new_i32();
48
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
49
+{ \
50
+ intptr_t i, opr_sz = simd_oprsz(desc); \
51
+ for (i = 0; i < opr_sz; ) { \
52
+ TYPEW mm = *(TYPEW *)(vm + i); \
53
+ do { \
54
+ TYPE nn = *(TYPE *)(vn + H(i)); \
55
+ *(TYPE *)(vd + H(i)) = OP(nn, mm); \
56
+ i += sizeof(TYPE); \
57
+ } while (i & 7); \
58
+ } \
59
+}
60
+
61
+DO_ZZW(sve_asr_zzw_b, int8_t, uint64_t, H1, DO_ASR)
62
+DO_ZZW(sve_lsr_zzw_b, uint8_t, uint64_t, H1, DO_LSR)
63
+DO_ZZW(sve_lsl_zzw_b, uint8_t, uint64_t, H1, DO_LSL)
64
+
65
+DO_ZZW(sve_asr_zzw_h, int16_t, uint64_t, H1_2, DO_ASR)
66
+DO_ZZW(sve_lsr_zzw_h, uint16_t, uint64_t, H1_2, DO_LSR)
67
+DO_ZZW(sve_lsl_zzw_h, uint16_t, uint64_t, H1_2, DO_LSL)
68
+
69
+DO_ZZW(sve_asr_zzw_s, int32_t, uint64_t, H1_4, DO_ASR)
70
+DO_ZZW(sve_lsr_zzw_s, uint32_t, uint64_t, H1_4, DO_LSR)
71
+DO_ZZW(sve_lsl_zzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
72
+
73
+#undef DO_ZZW
74
+
75
#undef DO_CLS_B
76
#undef DO_CLS_H
77
#undef DO_CLZ_B
78
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate-sve.c
81
+++ b/target/arm/translate-sve.c
82
@@ -XXX,XX +XXX,XX @@ static bool do_mov_z(DisasContext *s, int rd, int rn)
83
return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
84
}
85
86
+/* Initialize a Zreg with replications of a 64-bit immediate. */
87
+static void do_dupi_z(DisasContext *s, int rd, uint64_t word)
88
+{
89
+ unsigned vsz = vec_full_reg_size(s);
90
+ tcg_gen_gvec_dup64i(vec_full_reg_offset(s, rd), vsz, vsz, word);
91
+}
92
+
93
/* Invoke a vector expander on two Pregs. */
94
static bool do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
95
int esz, int rd, int rn)
96
@@ -XXX,XX +XXX,XX @@ DO_ZPZW(LSL, lsl)
97
98
#undef DO_ZPZW
99
100
+/*
101
+ *** SVE Bitwise Shift - Unpredicated Group
102
+ */
103
+
104
+static bool do_shift_imm(DisasContext *s, arg_rri_esz *a, bool asr,
105
+ void (*gvec_fn)(unsigned, uint32_t, uint32_t,
106
+ int64_t, uint32_t, uint32_t))
107
+{
108
+ if (a->esz < 0) {
109
+ /* Invalid tsz encoding -- see tszimm_esz. */
110
+ return false;
111
+ }
112
+ if (sve_access_check(s)) {
113
+ unsigned vsz = vec_full_reg_size(s);
114
+ /* Shift by element size is architecturally valid. For
115
+ arithmetic right-shift, it's the same as by one less.
116
+ Otherwise it is a zeroing operation. */
117
+ if (a->imm >= 8 << a->esz) {
118
+ if (asr) {
119
+ a->imm = (8 << a->esz) - 1;
120
+ } else {
121
+ do_dupi_z(s, a->rd, 0);
122
+ return true;
123
+ }
124
+ }
125
+ gvec_fn(a->esz, vec_full_reg_offset(s, a->rd),
126
+ vec_full_reg_offset(s, a->rn), a->imm, vsz, vsz);
127
+ }
128
+ return true;
129
+}
130
+
131
+static bool trans_ASR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
132
+{
133
+ return do_shift_imm(s, a, true, tcg_gen_gvec_sari);
134
+}
135
+
136
+static bool trans_LSR_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
137
+{
138
+ return do_shift_imm(s, a, false, tcg_gen_gvec_shri);
139
+}
140
+
141
+static bool trans_LSL_zzi(DisasContext *s, arg_rri_esz *a, uint32_t insn)
142
+{
143
+ return do_shift_imm(s, a, false, tcg_gen_gvec_shli);
144
+}
145
+
146
+static bool do_zzw_ool(DisasContext *s, arg_rrr_esz *a, gen_helper_gvec_3 *fn)
147
+{
148
+ if (fn == NULL) {
149
+ return false;
150
+ }
151
+ if (sve_access_check(s)) {
152
+ unsigned vsz = vec_full_reg_size(s);
153
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
154
+ vec_full_reg_offset(s, a->rn),
155
+ vec_full_reg_offset(s, a->rm),
156
+ vsz, vsz, 0, fn);
157
+ }
158
+ return true;
159
+}
160
+
161
+#define DO_ZZW(NAME, name) \
162
+static bool trans_##NAME##_zzw(DisasContext *s, arg_rrr_esz *a, \
163
+ uint32_t insn) \
164
+{ \
165
+ static gen_helper_gvec_3 * const fns[4] = { \
166
+ gen_helper_sve_##name##_zzw_b, gen_helper_sve_##name##_zzw_h, \
167
+ gen_helper_sve_##name##_zzw_s, NULL \
168
+ }; \
169
+ return do_zzw_ool(s, a, fns[a->esz]); \
170
+}
171
+
172
+DO_ZZW(ASR, asr)
173
+DO_ZZW(LSR, lsr)
174
+DO_ZZW(LSL, lsl)
175
+
176
+#undef DO_ZZW
177
+
178
/*
179
*** SVE Integer Multiply-Add Group
180
*/
181
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
182
index XXXXXXX..XXXXXXX 100644
183
--- a/target/arm/sve.decode
184
+++ b/target/arm/sve.decode
185
@@ -XXX,XX +XXX,XX @@
186
# A combination of tsz:imm3 -- extract (tsz:imm3) - esize
187
%tszimm_shl 22:2 5:5 !function=tszimm_shl
188
189
+# Similarly for the tszh/tszl pair at 22/16 for zzi
190
+%tszimm16_esz 22:2 16:5 !function=tszimm_esz
191
+%tszimm16_shr 22:2 16:5 !function=tszimm_shr
192
+%tszimm16_shl 22:2 16:5 !function=tszimm_shl
193
+
194
# Either a copy of rd (at bit 0), or a different source
195
# as propagated via the MOVPRFX instruction.
196
%reg_movprfx 0:5
197
@@ -XXX,XX +XXX,XX @@
198
199
&rr_esz rd rn esz
200
&rri rd rn imm
201
+&rri_esz rd rn imm esz
202
&rrr_esz rd rn rm esz
203
&rpr_esz rd pg rn esz
204
&rprr_s rd pg rn rm s
205
@@ -XXX,XX +XXX,XX @@
206
@rdn_pg_tszimm ........ .. ... ... ... pg:3 ..... rd:5 \
207
&rpri_esz rn=%reg_movprfx esz=%tszimm_esz
208
209
+# Similarly without predicate.
210
+@rd_rn_tszimm ........ .. ... ... ...... rn:5 rd:5 \
211
+ &rri_esz esz=%tszimm16_esz
212
+
213
# Basic Load/Store with 9-bit immediate offset
214
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
215
&rri imm=%imm9_16_10
216
@@ -XXX,XX +XXX,XX @@ ADDPL 00000100 011 ..... 01010 ...... ..... @rd_rn_i6
217
# SVE stack frame size
218
RDVL 00000100 101 11111 01010 imm:s6 rd:5
219
220
+### SVE Bitwise Shift - Unpredicated Group
221
+
222
+# SVE bitwise shift by immediate (unpredicated)
223
+ASR_zzi 00000100 .. 1 ..... 1001 00 ..... ..... \
224
+ @rd_rn_tszimm imm=%tszimm16_shr
225
+LSR_zzi 00000100 .. 1 ..... 1001 01 ..... ..... \
226
+ @rd_rn_tszimm imm=%tszimm16_shr
227
+LSL_zzi 00000100 .. 1 ..... 1001 11 ..... ..... \
228
+ @rd_rn_tszimm imm=%tszimm16_shl
229
+
230
+# SVE bitwise shift by wide elements (unpredicated)
231
+# Note esz != 3
232
+ASR_zzw 00000100 .. 1 ..... 1000 00 ..... ..... @rd_rn_rm
233
+LSR_zzw 00000100 .. 1 ..... 1000 01 ..... ..... @rd_rn_rm
234
+LSL_zzw 00000100 .. 1 ..... 1000 11 ..... ..... @rd_rn_rm
235
+
236
### SVE Predicate Logical Operations Group
237
238
# SVE predicate logical operations
239
--
43
--
240
2.17.0
44
2.20.1
241
45
242
46
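One subtlety encoded in do_shift_imm() above: an immediate shift count equal to the element size is handled specially, because ASR by the full width gives the same result as ASR by width-1 (all copies of the sign bit) while LSR/LSL by the full width simply zero the destination, which is why the translator clamps the count or emits a zeroing move instead of passing it through. A tiny sketch of those semantics on 8-bit elements (plain C, not QEMU APIs; arithmetic right shift on signed values is assumed):

    #include <stdint.h>

    /* ASR by >= 8 on a byte behaves like ASR by 7. */
    static int8_t sve_asr8(int8_t x, unsigned shift)
    {
        if (shift >= 8) {
            shift = 7;
        }
        return (int8_t)(x >> shift);
    }

    /* LSR (and LSL) by >= 8 on a byte yields zero. */
    static uint8_t sve_lsr8(uint8_t x, unsigned shift)
    {
        return shift >= 8 ? 0 : (uint8_t)(x >> shift);
    }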
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
In preparation for a full implementation, move the i.MX watchdog driver
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
from hw/misc to hw/watchdog. While at it, add the watchdog files
5
Message-id: 20180516223007.10256-17-richard.henderson@linaro.org
5
to MAINTAINERS.
6
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
9
Message-id: 20200517162135.110364-2-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 5 +++
12
include/hw/arm/fsl-imx6.h | 2 +-
9
target/arm/sve_helper.c | 40 +++++++++++++++++++
13
include/hw/arm/fsl-imx6ul.h | 2 +-
10
target/arm/translate-sve.c | 79 ++++++++++++++++++++++++++++++++++++++
14
include/hw/arm/fsl-imx7.h | 2 +-
11
target/arm/sve.decode | 14 +++++++
15
include/hw/{misc/imx2_wdt.h => watchdog/wdt_imx2.h} | 0
12
4 files changed, 138 insertions(+)
16
hw/{misc/imx2_wdt.c => watchdog/wdt_imx2.c} | 2 +-
17
MAINTAINERS | 2 ++
18
hw/arm/Kconfig | 3 +++
19
hw/misc/Makefile.objs | 1 -
20
hw/watchdog/Kconfig | 3 +++
21
hw/watchdog/Makefile.objs | 1 +
22
10 files changed, 13 insertions(+), 5 deletions(-)
23
rename include/hw/{misc/imx2_wdt.h => watchdog/wdt_imx2.h} (100%)
24
rename hw/{misc/imx2_wdt.c => watchdog/wdt_imx2.c} (98%)
13
25
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
26
diff --git a/include/hw/arm/fsl-imx6.h b/include/hw/arm/fsl-imx6.h
15
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
28
--- a/include/hw/arm/fsl-imx6.h
17
+++ b/target/arm/helper-sve.h
29
+++ b/include/hw/arm/fsl-imx6.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_mls_s, TCG_CALL_NO_RWG,
30
@@ -XXX,XX +XXX,XX @@
19
DEF_HELPER_FLAGS_6(sve_mls_d, TCG_CALL_NO_RWG,
31
#include "hw/cpu/a9mpcore.h"
20
void, ptr, ptr, ptr, ptr, ptr, i32)
32
#include "hw/misc/imx6_ccm.h"
21
33
#include "hw/misc/imx6_src.h"
22
+DEF_HELPER_FLAGS_4(sve_index_b, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
34
-#include "hw/misc/imx2_wdt.h"
23
+DEF_HELPER_FLAGS_4(sve_index_h, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
35
+#include "hw/watchdog/wdt_imx2.h"
24
+DEF_HELPER_FLAGS_4(sve_index_s, TCG_CALL_NO_RWG, void, ptr, i32, i32, i32)
36
#include "hw/char/imx_serial.h"
25
+DEF_HELPER_FLAGS_4(sve_index_d, TCG_CALL_NO_RWG, void, ptr, i64, i64, i32)
37
#include "hw/timer/imx_gpt.h"
38
#include "hw/timer/imx_epit.h"
39
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
40
index XXXXXXX..XXXXXXX 100644
41
--- a/include/hw/arm/fsl-imx6ul.h
42
+++ b/include/hw/arm/fsl-imx6ul.h
43
@@ -XXX,XX +XXX,XX @@
44
#include "hw/misc/imx7_snvs.h"
45
#include "hw/misc/imx7_gpr.h"
46
#include "hw/intc/imx_gpcv2.h"
47
-#include "hw/misc/imx2_wdt.h"
48
+#include "hw/watchdog/wdt_imx2.h"
49
#include "hw/gpio/imx_gpio.h"
50
#include "hw/char/imx_serial.h"
51
#include "hw/timer/imx_gpt.h"
52
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
53
index XXXXXXX..XXXXXXX 100644
54
--- a/include/hw/arm/fsl-imx7.h
55
+++ b/include/hw/arm/fsl-imx7.h
56
@@ -XXX,XX +XXX,XX @@
57
#include "hw/misc/imx7_snvs.h"
58
#include "hw/misc/imx7_gpr.h"
59
#include "hw/misc/imx6_src.h"
60
-#include "hw/misc/imx2_wdt.h"
61
+#include "hw/watchdog/wdt_imx2.h"
62
#include "hw/gpio/imx_gpio.h"
63
#include "hw/char/imx_serial.h"
64
#include "hw/timer/imx_gpt.h"
65
diff --git a/include/hw/misc/imx2_wdt.h b/include/hw/watchdog/wdt_imx2.h
66
similarity index 100%
67
rename from include/hw/misc/imx2_wdt.h
68
rename to include/hw/watchdog/wdt_imx2.h
69
diff --git a/hw/misc/imx2_wdt.c b/hw/watchdog/wdt_imx2.c
70
similarity index 98%
71
rename from hw/misc/imx2_wdt.c
72
rename to hw/watchdog/wdt_imx2.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/misc/imx2_wdt.c
75
+++ b/hw/watchdog/wdt_imx2.c
76
@@ -XXX,XX +XXX,XX @@
77
#include "qemu/module.h"
78
#include "sysemu/watchdog.h"
79
80
-#include "hw/misc/imx2_wdt.h"
81
+#include "hw/watchdog/wdt_imx2.h"
82
83
#define IMX2_WDT_WCR_WDA BIT(5) /* -> External Reset WDOG_B */
84
#define IMX2_WDT_WCR_SRS BIT(4) /* -> Software Reset Signal */
85
diff --git a/MAINTAINERS b/MAINTAINERS
86
index XXXXXXX..XXXXXXX 100644
87
--- a/MAINTAINERS
88
+++ b/MAINTAINERS
89
@@ -XXX,XX +XXX,XX @@ S: Odd Fixes
90
F: hw/arm/fsl-imx25.c
91
F: hw/arm/imx25_pdk.c
92
F: hw/misc/imx25_ccm.c
93
+F: hw/watchdog/wdt_imx2.c
94
F: include/hw/arm/fsl-imx25.h
95
F: include/hw/misc/imx25_ccm.h
96
+F: include/hw/watchdog/wdt_imx2.h
97
98
i.MX31 (kzm)
99
M: Peter Chubb <peter.chubb@nicta.com.au>
100
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
101
index XXXXXXX..XXXXXXX 100644
102
--- a/hw/arm/Kconfig
103
+++ b/hw/arm/Kconfig
104
@@ -XXX,XX +XXX,XX @@ config FSL_IMX6
105
select IMX_FEC
106
select IMX_I2C
107
select IMX_USBPHY
108
+ select WDT_IMX2
109
select SDHCI
110
111
config ASPEED_SOC
112
@@ -XXX,XX +XXX,XX @@ config FSL_IMX7
113
select IMX
114
select IMX_FEC
115
select IMX_I2C
116
+ select WDT_IMX2
117
select PCI_EXPRESS_DESIGNWARE
118
select SDHCI
119
select UNIMP
120
@@ -XXX,XX +XXX,XX @@ config FSL_IMX6UL
121
select IMX
122
select IMX_FEC
123
select IMX_I2C
124
+ select WDT_IMX2
125
select SDHCI
126
select UNIMP
127
128
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
129
index XXXXXXX..XXXXXXX 100644
130
--- a/hw/misc/Makefile.objs
131
+++ b/hw/misc/Makefile.objs
132
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_IMX) += imx6_ccm.o
133
common-obj-$(CONFIG_IMX) += imx6ul_ccm.o
134
obj-$(CONFIG_IMX) += imx6_src.o
135
common-obj-$(CONFIG_IMX) += imx7_ccm.o
136
-common-obj-$(CONFIG_IMX) += imx2_wdt.o
137
common-obj-$(CONFIG_IMX) += imx7_snvs.o
138
common-obj-$(CONFIG_IMX) += imx7_gpr.o
139
common-obj-$(CONFIG_IMX) += imx_rngc.o
140
diff --git a/hw/watchdog/Kconfig b/hw/watchdog/Kconfig
141
index XXXXXXX..XXXXXXX 100644
142
--- a/hw/watchdog/Kconfig
143
+++ b/hw/watchdog/Kconfig
144
@@ -XXX,XX +XXX,XX @@ config WDT_IB700
145
146
config WDT_DIAG288
147
bool
26
+
148
+
27
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
149
+config WDT_IMX2
28
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
150
+ bool
29
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
151
diff --git a/hw/watchdog/Makefile.objs b/hw/watchdog/Makefile.objs
30
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
31
index XXXXXXX..XXXXXXX 100644
152
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/sve_helper.c
153
--- a/hw/watchdog/Makefile.objs
33
+++ b/target/arm/sve_helper.c
154
+++ b/hw/watchdog/Makefile.objs
34
@@ -XXX,XX +XXX,XX @@ DO_ZPZZZ_D(sve_mls_d, uint64_t, DO_MLS)
155
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_WDT_IB6300ESB) += wdt_i6300esb.o
35
#undef DO_MLS
156
common-obj-$(CONFIG_WDT_IB700) += wdt_ib700.o
36
#undef DO_ZPZZZ
157
common-obj-$(CONFIG_WDT_DIAG288) += wdt_diag288.o
37
#undef DO_ZPZZZ_D
158
common-obj-$(CONFIG_ASPEED_SOC) += wdt_aspeed.o
38
+
159
+common-obj-$(CONFIG_WDT_IMX2) += wdt_imx2.o
39
+void HELPER(sve_index_b)(void *vd, uint32_t start,
40
+ uint32_t incr, uint32_t desc)
41
+{
42
+ intptr_t i, opr_sz = simd_oprsz(desc);
43
+ uint8_t *d = vd;
44
+ for (i = 0; i < opr_sz; i += 1) {
45
+ d[H1(i)] = start + i * incr;
46
+ }
47
+}
48
+
49
+void HELPER(sve_index_h)(void *vd, uint32_t start,
50
+ uint32_t incr, uint32_t desc)
51
+{
52
+ intptr_t i, opr_sz = simd_oprsz(desc) / 2;
53
+ uint16_t *d = vd;
54
+ for (i = 0; i < opr_sz; i += 1) {
55
+ d[H2(i)] = start + i * incr;
56
+ }
57
+}
58
+
59
+void HELPER(sve_index_s)(void *vd, uint32_t start,
60
+ uint32_t incr, uint32_t desc)
61
+{
62
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
63
+ uint32_t *d = vd;
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ d[H4(i)] = start + i * incr;
66
+ }
67
+}
68
+
69
+void HELPER(sve_index_d)(void *vd, uint64_t start,
70
+ uint64_t incr, uint32_t desc)
71
+{
72
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
73
+ uint64_t *d = vd;
74
+ for (i = 0; i < opr_sz; i += 1) {
75
+ d[i] = start + i * incr;
76
+ }
77
+}
78
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate-sve.c
81
+++ b/target/arm/translate-sve.c
82
@@ -XXX,XX +XXX,XX @@ DO_ZPZZZ(MLS, mls)
83
84
#undef DO_ZPZZZ
85
86
+/*
87
+ *** SVE Index Generation Group
88
+ */
89
+
90
+static void do_index(DisasContext *s, int esz, int rd,
91
+ TCGv_i64 start, TCGv_i64 incr)
92
+{
93
+ unsigned vsz = vec_full_reg_size(s);
94
+ TCGv_i32 desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
95
+ TCGv_ptr t_zd = tcg_temp_new_ptr();
96
+
97
+ tcg_gen_addi_ptr(t_zd, cpu_env, vec_full_reg_offset(s, rd));
98
+ if (esz == 3) {
99
+ gen_helper_sve_index_d(t_zd, start, incr, desc);
100
+ } else {
101
+ typedef void index_fn(TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
102
+ static index_fn * const fns[3] = {
103
+ gen_helper_sve_index_b,
104
+ gen_helper_sve_index_h,
105
+ gen_helper_sve_index_s,
106
+ };
107
+ TCGv_i32 s32 = tcg_temp_new_i32();
108
+ TCGv_i32 i32 = tcg_temp_new_i32();
109
+
110
+ tcg_gen_extrl_i64_i32(s32, start);
111
+ tcg_gen_extrl_i64_i32(i32, incr);
112
+ fns[esz](t_zd, s32, i32, desc);
113
+
114
+ tcg_temp_free_i32(s32);
115
+ tcg_temp_free_i32(i32);
116
+ }
117
+ tcg_temp_free_ptr(t_zd);
118
+ tcg_temp_free_i32(desc);
119
+}
120
+
121
+static bool trans_INDEX_ii(DisasContext *s, arg_INDEX_ii *a, uint32_t insn)
122
+{
123
+ if (sve_access_check(s)) {
124
+ TCGv_i64 start = tcg_const_i64(a->imm1);
125
+ TCGv_i64 incr = tcg_const_i64(a->imm2);
126
+ do_index(s, a->esz, a->rd, start, incr);
127
+ tcg_temp_free_i64(start);
128
+ tcg_temp_free_i64(incr);
129
+ }
130
+ return true;
131
+}
132
+
133
+static bool trans_INDEX_ir(DisasContext *s, arg_INDEX_ir *a, uint32_t insn)
134
+{
135
+ if (sve_access_check(s)) {
136
+ TCGv_i64 start = tcg_const_i64(a->imm);
137
+ TCGv_i64 incr = cpu_reg(s, a->rm);
138
+ do_index(s, a->esz, a->rd, start, incr);
139
+ tcg_temp_free_i64(start);
140
+ }
141
+ return true;
142
+}
143
+
144
+static bool trans_INDEX_ri(DisasContext *s, arg_INDEX_ri *a, uint32_t insn)
145
+{
146
+ if (sve_access_check(s)) {
147
+ TCGv_i64 start = cpu_reg(s, a->rn);
148
+ TCGv_i64 incr = tcg_const_i64(a->imm);
149
+ do_index(s, a->esz, a->rd, start, incr);
150
+ tcg_temp_free_i64(incr);
151
+ }
152
+ return true;
153
+}
154
+
155
+static bool trans_INDEX_rr(DisasContext *s, arg_INDEX_rr *a, uint32_t insn)
156
+{
157
+ if (sve_access_check(s)) {
158
+ TCGv_i64 start = cpu_reg(s, a->rn);
159
+ TCGv_i64 incr = cpu_reg(s, a->rm);
160
+ do_index(s, a->esz, a->rd, start, incr);
161
+ }
162
+ return true;
163
+}
164
+
165
/*
166
*** SVE Predicate Logical Operations Group
167
*/
168
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
169
index XXXXXXX..XXXXXXX 100644
170
--- a/target/arm/sve.decode
171
+++ b/target/arm/sve.decode
172
@@ -XXX,XX +XXX,XX @@ ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
173
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
174
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
175
176
+### SVE Index Generation Group
177
+
178
+# SVE index generation (immediate start, immediate increment)
179
+INDEX_ii 00000100 esz:2 1 imm2:s5 010000 imm1:s5 rd:5
180
+
181
+# SVE index generation (immediate start, register increment)
182
+INDEX_ir 00000100 esz:2 1 rm:5 010010 imm:s5 rd:5
183
+
184
+# SVE index generation (register start, immediate increment)
185
+INDEX_ri 00000100 esz:2 1 imm:s5 010001 rn:5 rd:5
186
+
187
+# SVE index generation (register start, register increment)
188
+INDEX_rr 00000100 .. 1 ..... 010011 ..... ..... @rd_rn_rm
189
+
190
### SVE Predicate Logical Operations Group
191
192
# SVE predicate logical operations
193
--
160
--
194
2.17.0
161
2.20.1
195
162
196
163
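All four sve_index_* helpers above compute the same recurrence at different element widths: d[i] = start + i * incr, truncated to the element size. A quick illustration for byte elements with start = 1 and incr = 2 (plain C, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t d[16];

        /* d[i] = start + i * incr, truncated to 8 bits. */
        for (int i = 0; i < 16; i++) {
            d[i] = (uint8_t)(1 + i * 2);
        }
        for (int i = 0; i < 16; i++) {
            printf("%u%c", d[i], i == 15 ? '\n' : ' ');   /* 1 3 5 ... 31 */
        }
        return 0;
    }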
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Implement full support for the watchdog in i.MX systems.
4
Message-id: 20180516223007.10256-8-richard.henderson@linaro.org
4
Pretimeout support is optional because the watchdog hardware
5
on i.MX31 does not support pretimeouts.
6
7
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
8
Message-id: 20200517162135.110364-3-linux@roeck-us.net
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
---
11
---
7
target/arm/cpu.h | 4 +
12
include/hw/watchdog/wdt_imx2.h | 61 ++++++++-
8
target/arm/helper-sve.h | 3 +
13
hw/watchdog/wdt_imx2.c | 239 +++++++++++++++++++++++++++++++--
9
target/arm/sve_helper.c | 84 +++++++++++++++
14
2 files changed, 285 insertions(+), 15 deletions(-)
10
target/arm/translate-sve.c | 209 +++++++++++++++++++++++++++++++++++++
15
11
target/arm/sve.decode | 31 ++++++
16
diff --git a/include/hw/watchdog/wdt_imx2.h b/include/hw/watchdog/wdt_imx2.h
12
5 files changed, 331 insertions(+)
13
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
18
--- a/include/hw/watchdog/wdt_imx2.h
17
+++ b/target/arm/cpu.h
19
+++ b/include/hw/watchdog/wdt_imx2.h
18
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
20
@@ -XXX,XX +XXX,XX @@
19
21
#ifndef IMX2_WDT_H
20
#ifdef TARGET_AARCH64
22
#define IMX2_WDT_H
21
/* Store FFR as pregs[16] to make it easier to treat as any other. */
23
22
+#define FFR_PRED_NUM 16
24
+#include "qemu/bitops.h"
23
ARMPredicateReg pregs[17];
25
#include "hw/sysbus.h"
24
/* Scratch space for aa64 sve predicate temporary. */
26
+#include "hw/irq.h"
25
ARMPredicateReg preg_tmp;
27
+#include "hw/ptimer.h"
26
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
28
27
return &env->vfp.zregs[regno].d[0];
29
#define TYPE_IMX2_WDT "imx2.wdt"
30
#define IMX2_WDT(obj) OBJECT_CHECK(IMX2WdtState, (obj), TYPE_IMX2_WDT)
31
32
enum IMX2WdtRegisters {
33
- IMX2_WDT_WCR = 0x0000,
34
- IMX2_WDT_REG_NUM = 0x0008 / sizeof(uint16_t) + 1,
35
+ IMX2_WDT_WCR = 0x0000, /* Control Register */
36
+ IMX2_WDT_WSR = 0x0002, /* Service Register */
37
+ IMX2_WDT_WRSR = 0x0004, /* Reset Status Register */
38
+ IMX2_WDT_WICR = 0x0006, /* Interrupt Control Register */
39
+ IMX2_WDT_WMCR = 0x0008, /* Misc Register */
40
};
41
42
+#define IMX2_WDT_MMIO_SIZE 0x000a
43
+
44
+/* Control Register definitions */
45
+#define IMX2_WDT_WCR_WT (0xFF << 8) /* Watchdog Timeout Field */
46
+#define IMX2_WDT_WCR_WDW BIT(7) /* WDOG Disable for Wait */
47
+#define IMX2_WDT_WCR_WDA BIT(5) /* WDOG Assertion */
48
+#define IMX2_WDT_WCR_SRS BIT(4) /* Software Reset Signal */
49
+#define IMX2_WDT_WCR_WDT BIT(3) /* WDOG Timeout Assertion */
50
+#define IMX2_WDT_WCR_WDE BIT(2) /* Watchdog Enable */
51
+#define IMX2_WDT_WCR_WDBG BIT(1) /* Watchdog Debug Enable */
52
+#define IMX2_WDT_WCR_WDZST BIT(0) /* Watchdog Timer Suspend */
53
+
54
+#define IMX2_WDT_WCR_LOCK_MASK (IMX2_WDT_WCR_WDZST | IMX2_WDT_WCR_WDBG \
55
+ | IMX2_WDT_WCR_WDW)
56
+
57
+/* Service Register definitions */
58
+#define IMX2_WDT_SEQ1 0x5555 /* service sequence 1 */
59
+#define IMX2_WDT_SEQ2 0xAAAA /* service sequence 2 */
60
+
61
+/* Reset Status Register definitions */
62
+#define IMX2_WDT_WRSR_TOUT BIT(1) /* Reset due to Timeout */
63
+#define IMX2_WDT_WRSR_SFTW BIT(0) /* Reset due to software reset */
64
+
65
+/* Interrupt Control Register definitions */
66
+#define IMX2_WDT_WICR_WIE BIT(15) /* Interrupt Enable */
67
+#define IMX2_WDT_WICR_WTIS BIT(14) /* Interrupt Status */
68
+#define IMX2_WDT_WICR_WICT 0xff /* Interrupt Timeout */
69
+#define IMX2_WDT_WICR_WICT_DEF 0x04 /* Default interrupt timeout (2s) */
70
+
71
+#define IMX2_WDT_WICR_LOCK_MASK (IMX2_WDT_WICR_WIE | IMX2_WDT_WICR_WICT)
72
+
73
+/* Misc Control Register definitions */
74
+#define IMX2_WDT_WMCR_PDE BIT(0) /* Power-Down Enable */
75
76
typedef struct IMX2WdtState {
77
/* <private> */
78
SysBusDevice parent_obj;
79
80
+ /*< public >*/
81
MemoryRegion mmio;
82
+ qemu_irq irq;
83
+
84
+ struct ptimer_state *timer;
85
+ struct ptimer_state *itimer;
86
+
87
+ bool pretimeout_support;
88
+ bool wicr_locked;
89
+
90
+ uint16_t wcr;
91
+ uint16_t wsr;
92
+ uint16_t wrsr;
93
+ uint16_t wicr;
94
+ uint16_t wmcr;
95
+
96
+ bool wcr_locked; /* affects WDZST, WDBG, and WDW */
97
+ bool wcr_wde_locked; /* affects WDE */
98
+ bool wcr_wdt_locked; /* affects WDT (never cleared) */
99
} IMX2WdtState;
100
101
#endif /* IMX2_WDT_H */
102
diff --git a/hw/watchdog/wdt_imx2.c b/hw/watchdog/wdt_imx2.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/hw/watchdog/wdt_imx2.c
105
+++ b/hw/watchdog/wdt_imx2.c
106
@@ -XXX,XX +XXX,XX @@
107
#include "qemu/bitops.h"
108
#include "qemu/module.h"
109
#include "sysemu/watchdog.h"
110
+#include "migration/vmstate.h"
111
+#include "hw/qdev-properties.h"
112
113
#include "hw/watchdog/wdt_imx2.h"
114
115
-#define IMX2_WDT_WCR_WDA BIT(5) /* -> External Reset WDOG_B */
116
-#define IMX2_WDT_WCR_SRS BIT(4) /* -> Software Reset Signal */
117
-
118
-static uint64_t imx2_wdt_read(void *opaque, hwaddr addr,
119
- unsigned int size)
120
+static void imx2_wdt_interrupt(void *opaque)
121
{
122
+ IMX2WdtState *s = IMX2_WDT(opaque);
123
+
124
+ s->wicr |= IMX2_WDT_WICR_WTIS;
125
+ qemu_set_irq(s->irq, 1);
126
+}
127
+
128
+static void imx2_wdt_expired(void *opaque)
129
+{
130
+ IMX2WdtState *s = IMX2_WDT(opaque);
131
+
132
+ s->wrsr = IMX2_WDT_WRSR_TOUT;
133
+
134
+ /* Perform watchdog action if watchdog is enabled */
135
+ if (s->wcr & IMX2_WDT_WCR_WDE) {
136
+ s->wrsr = IMX2_WDT_WRSR_TOUT;
137
+ watchdog_perform_action();
138
+ }
139
+}
140
+
141
+static void imx2_wdt_reset(DeviceState *dev)
142
+{
143
+ IMX2WdtState *s = IMX2_WDT(dev);
144
+
145
+ ptimer_transaction_begin(s->timer);
146
+ ptimer_stop(s->timer);
147
+ ptimer_transaction_commit(s->timer);
148
+
149
+ if (s->pretimeout_support) {
150
+ ptimer_transaction_begin(s->itimer);
151
+ ptimer_stop(s->itimer);
152
+ ptimer_transaction_commit(s->itimer);
153
+ }
154
+
155
+ s->wicr_locked = false;
156
+ s->wcr_locked = false;
157
+ s->wcr_wde_locked = false;
158
+
159
+ s->wcr = IMX2_WDT_WCR_WDA | IMX2_WDT_WCR_SRS;
160
+ s->wsr = 0;
161
+ s->wrsr &= ~(IMX2_WDT_WRSR_TOUT | IMX2_WDT_WRSR_SFTW);
162
+ s->wicr = IMX2_WDT_WICR_WICT_DEF;
163
+ s->wmcr = IMX2_WDT_WMCR_PDE;
164
+}
165
+
166
+static uint64_t imx2_wdt_read(void *opaque, hwaddr addr, unsigned int size)
167
+{
168
+ IMX2WdtState *s = IMX2_WDT(opaque);
169
+
170
+ switch (addr) {
171
+ case IMX2_WDT_WCR:
172
+ return s->wcr;
173
+ case IMX2_WDT_WSR:
174
+ return s->wsr;
175
+ case IMX2_WDT_WRSR:
176
+ return s->wrsr;
177
+ case IMX2_WDT_WICR:
178
+ return s->wicr;
179
+ case IMX2_WDT_WMCR:
180
+ return s->wmcr;
181
+ }
182
return 0;
28
}
183
}
29
184
30
+/* Shared between translate-sve.c and sve_helper.c. */
185
+static void imx_wdt2_update_itimer(IMX2WdtState *s, bool start)
31
+extern const uint64_t pred_esz_masks[4];
186
+{
32
+
187
+ bool running = (s->wcr & IMX2_WDT_WCR_WDE) && (s->wcr & IMX2_WDT_WCR_WT);
33
#endif
188
+ bool enabled = s->wicr & IMX2_WDT_WICR_WIE;
34
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
189
+
35
index XXXXXXX..XXXXXXX 100644
190
+ ptimer_transaction_begin(s->itimer);
36
--- a/target/arm/helper-sve.h
191
+ if (start || !enabled) {
37
+++ b/target/arm/helper-sve.h
192
+ ptimer_stop(s->itimer);
38
@@ -XXX,XX +XXX,XX @@
193
+ }
39
DEF_HELPER_FLAGS_2(sve_predtest1, TCG_CALL_NO_WG, i32, i64, i64)
194
+ if (running && enabled) {
40
DEF_HELPER_FLAGS_3(sve_predtest, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
195
+ int count = ptimer_get_count(s->timer);
41
196
+ int pretimeout = s->wicr & IMX2_WDT_WICR_WICT;
42
+DEF_HELPER_FLAGS_3(sve_pfirst, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
197
+
43
+DEF_HELPER_FLAGS_3(sve_pnext, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
198
+ /*
44
+
199
+ * Only (re-)start pretimeout timer if its counter value is larger
45
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
200
+ * than 0. Otherwise it will fire right away and we'll get an
46
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
201
+ * interrupt loop.
47
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
202
+ */
48
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
203
+ if (count > pretimeout) {
49
index XXXXXXX..XXXXXXX 100644
204
+ ptimer_set_count(s->itimer, count - pretimeout);
50
--- a/target/arm/sve_helper.c
205
+ if (start) {
51
+++ b/target/arm/sve_helper.c
206
+ ptimer_run(s->itimer, 1);
52
@@ -XXX,XX +XXX,XX @@ LOGICAL_PPPP(sve_nand_pppp, DO_NAND)
53
#undef DO_NAND
54
#undef DO_SEL
55
#undef LOGICAL_PPPP
56
+
57
+/* Similar to the ARM LastActiveElement pseudocode function, except the
58
+ result is multiplied by the element size. This includes the not found
59
+ indication; e.g. not found for esz=3 is -8. */
60
+static intptr_t last_active_element(uint64_t *g, intptr_t words, intptr_t esz)
61
+{
62
+ uint64_t mask = pred_esz_masks[esz];
63
+ intptr_t i = words;
64
+
65
+ do {
66
+ uint64_t this_g = g[--i] & mask;
67
+ if (this_g) {
68
+ return i * 64 + (63 - clz64(this_g));
69
+ }
70
+ } while (i > 0);
71
+ return (intptr_t)-1 << esz;
72
+}
73
+
74
+uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t words)
75
+{
76
+ uint32_t flags = PREDTEST_INIT;
77
+ uint64_t *d = vd, *g = vg;
78
+ intptr_t i = 0;
79
+
80
+ do {
81
+ uint64_t this_d = d[i];
82
+ uint64_t this_g = g[i];
83
+
84
+ if (this_g) {
85
+ if (!(flags & 4)) {
86
+ /* Set in D the first bit of G. */
87
+ this_d |= this_g & -this_g;
88
+ d[i] = this_d;
89
+ }
207
+ }
90
+ flags = iter_predtest_fwd(this_d, this_g, flags);
208
+ }
91
+ }
209
+ }
92
+ } while (++i < words);
210
+ ptimer_transaction_commit(s->itimer);
93
+
211
+}
94
+ return flags;
212
+
95
+}
213
+static void imx_wdt2_update_timer(IMX2WdtState *s, bool start)
96
+
214
+{
97
+uint32_t HELPER(sve_pnext)(void *vd, void *vg, uint32_t pred_desc)
215
+ ptimer_transaction_begin(s->timer);
98
+{
216
+ if (start) {
99
+ intptr_t words = extract32(pred_desc, 0, SIMD_OPRSZ_BITS);
217
+ ptimer_stop(s->timer);
100
+ intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
218
+ }
101
+ uint32_t flags = PREDTEST_INIT;
219
+ if ((s->wcr & IMX2_WDT_WCR_WDE) && (s->wcr & IMX2_WDT_WCR_WT)) {
102
+ uint64_t *d = vd, *g = vg, esz_mask;
220
+ int count = (s->wcr & IMX2_WDT_WCR_WT) >> 8;
103
+ intptr_t i, next;
221
+
104
+
222
+ /* A value of 0 reflects one period (0.5s). */
105
+ next = last_active_element(vd, words, esz) + (1 << esz);
223
+ ptimer_set_count(s->timer, count + 1);
106
+ esz_mask = pred_esz_masks[esz];
224
+ if (start) {
107
+
225
+ ptimer_run(s->timer, 1);
108
+ /* Similar to the pseudocode for pnext, but scaled by ESZ
226
+ }
109
+ so that we find the correct bit. */
227
+ }
110
+ if (next < words * 64) {
228
+ ptimer_transaction_commit(s->timer);
111
+ uint64_t mask = -1;
229
+ if (s->pretimeout_support) {
112
+
230
+ imx_wdt2_update_itimer(s, start);
113
+ if (next & 63) {
231
+ }
114
+ mask = ~((1ull << (next & 63)) - 1);
232
+}
115
+ next &= -64;
233
+
116
+ }
234
static void imx2_wdt_write(void *opaque, hwaddr addr,
117
+ do {
235
uint64_t value, unsigned int size)
118
+ uint64_t this_g = g[next / 64] & esz_mask & mask;
236
{
119
+ if (this_g != 0) {
237
- if (addr == IMX2_WDT_WCR &&
120
+ next = (next & -64) + ctz64(this_g);
238
- (~value & (IMX2_WDT_WCR_WDA | IMX2_WDT_WCR_SRS))) {
121
+ break;
239
- watchdog_perform_action();
122
+ }
240
+ IMX2WdtState *s = IMX2_WDT(opaque);
123
+ next += 64;
241
+
124
+ mask = -1;
242
+ switch (addr) {
125
+ } while (next < words * 64);
243
+ case IMX2_WDT_WCR:
126
+ }
244
+ if (s->wcr_locked) {
127
+
245
+ value &= ~IMX2_WDT_WCR_LOCK_MASK;
128
+ i = 0;
246
+ value |= (s->wicr & IMX2_WDT_WCR_LOCK_MASK);
129
+ do {
247
+ }
130
+ uint64_t this_d = 0;
248
+ s->wcr_locked = true;
131
+ if (i == next / 64) {
249
+ if (s->wcr_wde_locked) {
132
+ this_d = 1ull << (next & 63);
250
+ value &= ~IMX2_WDT_WCR_WDE;
133
+ }
251
+ value |= (s->wicr & ~IMX2_WDT_WCR_WDE);
134
+ d[i] = this_d;
252
+ } else if (value & IMX2_WDT_WCR_WDE) {
135
+ flags = iter_predtest_fwd(this_d, g[i] & esz_mask, flags);
253
+ s->wcr_wde_locked = true;
136
+ } while (++i < words);
254
+ }
137
+
255
+ if (s->wcr_wdt_locked) {
138
+ return flags;
256
+ value &= ~IMX2_WDT_WCR_WDT;
139
+}
257
+ value |= (s->wicr & ~IMX2_WDT_WCR_WDT);
140
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
258
+ } else if (value & IMX2_WDT_WCR_WDT) {
141
index XXXXXXX..XXXXXXX 100644
259
+ s->wcr_wdt_locked = true;
142
--- a/target/arm/translate-sve.c
260
+ }
143
+++ b/target/arm/translate-sve.c
261
+
144
@@ -XXX,XX +XXX,XX @@
262
+ s->wcr = value;
145
#include "exec/exec-all.h"
263
+ if (!(value & IMX2_WDT_WCR_SRS)) {
146
#include "tcg-op.h"
264
+ s->wrsr = IMX2_WDT_WRSR_SFTW;
147
#include "tcg-op-gvec.h"
265
+ }
148
+#include "tcg-gvec-desc.h"
266
+ if (!(value & (IMX2_WDT_WCR_WDA | IMX2_WDT_WCR_SRS)) ||
149
#include "qemu/log.h"
267
+ (!(value & IMX2_WDT_WCR_WT) && (value & IMX2_WDT_WCR_WDE))) {
150
#include "arm_ldst.h"
268
+ watchdog_perform_action();
151
#include "translate.h"
269
+ }
152
@@ -XXX,XX +XXX,XX @@ static void do_predtest(DisasContext *s, int dofs, int gofs, int words)
270
+ s->wcr |= IMX2_WDT_WCR_SRS;
153
tcg_temp_free_i32(t);
271
+ imx_wdt2_update_timer(s, true);
272
+ break;
273
+ case IMX2_WDT_WSR:
274
+ if (s->wsr == IMX2_WDT_SEQ1 && value == IMX2_WDT_SEQ2) {
275
+ imx_wdt2_update_timer(s, false);
276
+ }
277
+ s->wsr = value;
278
+ break;
279
+ case IMX2_WDT_WRSR:
280
+ break;
281
+ case IMX2_WDT_WICR:
282
+ if (!s->pretimeout_support) {
283
+ return;
284
+ }
285
+ value &= IMX2_WDT_WICR_LOCK_MASK | IMX2_WDT_WICR_WTIS;
286
+ if (s->wicr_locked) {
287
+ value &= IMX2_WDT_WICR_WTIS;
288
+ value |= (s->wicr & IMX2_WDT_WICR_LOCK_MASK);
289
+ }
290
+ s->wicr = value | (s->wicr & IMX2_WDT_WICR_WTIS);
291
+ if (value & IMX2_WDT_WICR_WTIS) {
292
+ s->wicr &= ~IMX2_WDT_WICR_WTIS;
293
+ qemu_set_irq(s->irq, 0);
294
+ }
295
+ imx_wdt2_update_itimer(s, true);
296
+ s->wicr_locked = true;
297
+ break;
298
+ case IMX2_WDT_WMCR:
299
+ s->wmcr = value & IMX2_WDT_WMCR_PDE;
300
+ break;
301
}
154
}
302
}
155
303
156
+/* For each element size, the bits within a predicate word that are active. */
304
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps imx2_wdt_ops = {
157
+const uint64_t pred_esz_masks[4] = {
305
* real device but in practice there is no reason for a guest
158
+ 0xffffffffffffffffull, 0x5555555555555555ull,
306
* to access this device unaligned.
159
+ 0x1111111111111111ull, 0x0101010101010101ull
307
*/
308
- .min_access_size = 4,
309
- .max_access_size = 4,
310
+ .min_access_size = 2,
311
+ .max_access_size = 2,
312
.unaligned = false,
313
},
314
};
315
316
+static const VMStateDescription vmstate_imx2_wdt = {
317
+ .name = "imx2.wdt",
318
+ .fields = (VMStateField[]) {
319
+ VMSTATE_PTIMER(timer, IMX2WdtState),
320
+ VMSTATE_PTIMER(itimer, IMX2WdtState),
321
+ VMSTATE_BOOL(wicr_locked, IMX2WdtState),
322
+ VMSTATE_BOOL(wcr_locked, IMX2WdtState),
323
+ VMSTATE_BOOL(wcr_wde_locked, IMX2WdtState),
324
+ VMSTATE_BOOL(wcr_wdt_locked, IMX2WdtState),
325
+ VMSTATE_UINT16(wcr, IMX2WdtState),
326
+ VMSTATE_UINT16(wsr, IMX2WdtState),
327
+ VMSTATE_UINT16(wrsr, IMX2WdtState),
328
+ VMSTATE_UINT16(wmcr, IMX2WdtState),
329
+ VMSTATE_UINT16(wicr, IMX2WdtState),
330
+ VMSTATE_END_OF_LIST()
331
+ }
160
+};
332
+};
161
+
333
+
162
/*
334
static void imx2_wdt_realize(DeviceState *dev, Error **errp)
163
*** SVE Logical - Unpredicated Group
335
{
164
*/
336
IMX2WdtState *s = IMX2_WDT(dev);
165
@@ -XXX,XX +XXX,XX @@ static bool trans_PTEST(DisasContext *s, arg_PTEST *a, uint32_t insn)
337
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
166
return true;
338
339
memory_region_init_io(&s->mmio, OBJECT(dev),
340
&imx2_wdt_ops, s,
341
- TYPE_IMX2_WDT".mmio",
342
- IMX2_WDT_REG_NUM * sizeof(uint16_t));
343
- sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->mmio);
344
+ TYPE_IMX2_WDT,
345
+ IMX2_WDT_MMIO_SIZE);
346
+ sysbus_init_mmio(sbd, &s->mmio);
347
+ sysbus_init_irq(sbd, &s->irq);
348
+
349
+ s->timer = ptimer_init(imx2_wdt_expired, s,
350
+ PTIMER_POLICY_NO_IMMEDIATE_TRIGGER |
351
+ PTIMER_POLICY_NO_IMMEDIATE_RELOAD |
352
+ PTIMER_POLICY_NO_COUNTER_ROUND_DOWN);
353
+ ptimer_transaction_begin(s->timer);
354
+ ptimer_set_freq(s->timer, 2);
355
+ ptimer_set_limit(s->timer, 0xff, 1);
356
+ ptimer_transaction_commit(s->timer);
357
+ if (s->pretimeout_support) {
358
+ s->itimer = ptimer_init(imx2_wdt_interrupt, s,
359
+ PTIMER_POLICY_NO_IMMEDIATE_TRIGGER |
360
+ PTIMER_POLICY_NO_IMMEDIATE_RELOAD |
361
+ PTIMER_POLICY_NO_COUNTER_ROUND_DOWN);
362
+ ptimer_transaction_begin(s->itimer);
363
+ ptimer_set_freq(s->itimer, 2);
364
+ ptimer_set_limit(s->itimer, 0xff, 1);
365
+ ptimer_transaction_commit(s->itimer);
366
+ }
167
}
367
}
168
368
169
+/* See the ARM pseudocode DecodePredCount. */
369
+static Property imx2_wdt_properties[] = {
170
+static unsigned decode_pred_count(unsigned fullsz, int pattern, int esz)
370
+ DEFINE_PROP_BOOL("pretimeout-support", IMX2WdtState, pretimeout_support,
171
+{
371
+ false),
172
+ unsigned elements = fullsz >> esz;
372
+};
173
+ unsigned bound;
373
+
174
+
374
static void imx2_wdt_class_init(ObjectClass *klass, void *data)
175
+ switch (pattern) {
375
{
176
+ case 0x0: /* POW2 */
376
DeviceClass *dc = DEVICE_CLASS(klass);
177
+ return pow2floor(elements);
377
178
+ case 0x1: /* VL1 */
378
+ device_class_set_props(dc, imx2_wdt_properties);
179
+ case 0x2: /* VL2 */
379
dc->realize = imx2_wdt_realize;
180
+ case 0x3: /* VL3 */
380
+ dc->reset = imx2_wdt_reset;
181
+ case 0x4: /* VL4 */
381
+ dc->vmsd = &vmstate_imx2_wdt;
182
+ case 0x5: /* VL5 */
382
+ dc->desc = "i.MX watchdog timer";
183
+ case 0x6: /* VL6 */
383
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
184
+ case 0x7: /* VL7 */
384
}
185
+ case 0x8: /* VL8 */
385
186
+ bound = pattern;
187
+ break;
188
+ case 0x9: /* VL16 */
189
+ case 0xa: /* VL32 */
190
+ case 0xb: /* VL64 */
191
+ case 0xc: /* VL128 */
192
+ case 0xd: /* VL256 */
193
+ bound = 16 << (pattern - 9);
194
+ break;
195
+ case 0x1d: /* MUL4 */
196
+ return elements - elements % 4;
197
+ case 0x1e: /* MUL3 */
198
+ return elements - elements % 3;
199
+ case 0x1f: /* ALL */
200
+ return elements;
201
+ default: /* #uimm5 */
202
+ return 0;
203
+ }
204
+ return elements >= bound ? bound : 0;
205
+}
206
+
207
+/* This handles all of the predicate initialization instructions,
208
+ * PTRUE, PFALSE, SETFFR. For PFALSE, we will have set PAT == 32
209
+ * so that decode_pred_count returns 0. For SETFFR, we will have
210
+ * set RD == 16 == FFR.
211
+ */
212
+static bool do_predset(DisasContext *s, int esz, int rd, int pat, bool setflag)
213
+{
214
+ if (!sve_access_check(s)) {
215
+ return true;
216
+ }
217
+
218
+ unsigned fullsz = vec_full_reg_size(s);
219
+ unsigned ofs = pred_full_reg_offset(s, rd);
220
+ unsigned numelem, setsz, i;
221
+ uint64_t word, lastword;
222
+ TCGv_i64 t;
223
+
224
+ numelem = decode_pred_count(fullsz, pat, esz);
225
+
226
+ /* Determine what we must store into each bit, and how many. */
227
+ if (numelem == 0) {
228
+ lastword = word = 0;
229
+ setsz = fullsz;
230
+ } else {
231
+ setsz = numelem << esz;
232
+ lastword = word = pred_esz_masks[esz];
233
+ if (setsz % 64) {
234
+ lastword &= ~(-1ull << (setsz % 64));
235
+ }
236
+ }
237
+
238
+ t = tcg_temp_new_i64();
239
+ if (fullsz <= 64) {
240
+ tcg_gen_movi_i64(t, lastword);
241
+ tcg_gen_st_i64(t, cpu_env, ofs);
242
+ goto done;
243
+ }
244
+
245
+ if (word == lastword) {
246
+ unsigned maxsz = size_for_gvec(fullsz / 8);
247
+ unsigned oprsz = size_for_gvec(setsz / 8);
248
+
249
+ if (oprsz * 8 == setsz) {
250
+ tcg_gen_gvec_dup64i(ofs, oprsz, maxsz, word);
251
+ goto done;
252
+ }
253
+ if (oprsz * 8 == setsz + 8) {
254
+ tcg_gen_gvec_dup64i(ofs, oprsz, maxsz, word);
255
+ tcg_gen_movi_i64(t, 0);
256
+ tcg_gen_st_i64(t, cpu_env, ofs + oprsz - 8);
257
+ goto done;
258
+ }
259
+ }
260
+
261
+ setsz /= 8;
262
+ fullsz /= 8;
263
+
264
+ tcg_gen_movi_i64(t, word);
265
+ for (i = 0; i < setsz; i += 8) {
266
+ tcg_gen_st_i64(t, cpu_env, ofs + i);
267
+ }
268
+ if (lastword != word) {
269
+ tcg_gen_movi_i64(t, lastword);
270
+ tcg_gen_st_i64(t, cpu_env, ofs + i);
271
+ i += 8;
272
+ }
273
+ if (i < fullsz) {
274
+ tcg_gen_movi_i64(t, 0);
275
+ for (; i < fullsz; i += 8) {
276
+ tcg_gen_st_i64(t, cpu_env, ofs + i);
277
+ }
278
+ }
279
+
280
+ done:
281
+ tcg_temp_free_i64(t);
282
+
283
+ /* PTRUES */
284
+ if (setflag) {
285
+ tcg_gen_movi_i32(cpu_NF, -(word != 0));
286
+ tcg_gen_movi_i32(cpu_CF, word == 0);
287
+ tcg_gen_movi_i32(cpu_VF, 0);
288
+ tcg_gen_mov_i32(cpu_ZF, cpu_NF);
289
+ }
290
+ return true;
291
+}
292
+
293
+static bool trans_PTRUE(DisasContext *s, arg_PTRUE *a, uint32_t insn)
294
+{
295
+ return do_predset(s, a->esz, a->rd, a->pat, a->s);
296
+}
297
+
298
+static bool trans_SETFFR(DisasContext *s, arg_SETFFR *a, uint32_t insn)
299
+{
300
+ /* Note pat == 31 is #all, to set all elements. */
301
+ return do_predset(s, 0, FFR_PRED_NUM, 31, false);
302
+}
303
+
304
+static bool trans_PFALSE(DisasContext *s, arg_PFALSE *a, uint32_t insn)
305
+{
306
+ /* Note pat == 32 is #unimp, to set no elements. */
307
+ return do_predset(s, 0, a->rd, 32, false);
308
+}
309
+
310
+static bool trans_RDFFR_p(DisasContext *s, arg_RDFFR_p *a, uint32_t insn)
311
+{
312
+ /* The path through do_pppp_flags is complicated enough to want to avoid
313
+ * duplication. Frob the arguments into the form of a predicated AND.
314
+ */
315
+ arg_rprr_s alt_a = {
316
+ .rd = a->rd, .pg = a->pg, .s = a->s,
317
+ .rn = FFR_PRED_NUM, .rm = FFR_PRED_NUM,
318
+ };
319
+ return trans_AND_pppp(s, &alt_a, insn);
320
+}
321
+
322
+static bool trans_RDFFR(DisasContext *s, arg_RDFFR *a, uint32_t insn)
323
+{
324
+ return do_mov_p(s, a->rd, FFR_PRED_NUM);
325
+}
326
+
327
+static bool trans_WRFFR(DisasContext *s, arg_WRFFR *a, uint32_t insn)
328
+{
329
+ return do_mov_p(s, FFR_PRED_NUM, a->rn);
330
+}
331
+
332
+static bool do_pfirst_pnext(DisasContext *s, arg_rr_esz *a,
333
+ void (*gen_fn)(TCGv_i32, TCGv_ptr,
334
+ TCGv_ptr, TCGv_i32))
335
+{
336
+ if (!sve_access_check(s)) {
337
+ return true;
338
+ }
339
+
340
+ TCGv_ptr t_pd = tcg_temp_new_ptr();
341
+ TCGv_ptr t_pg = tcg_temp_new_ptr();
342
+ TCGv_i32 t;
343
+ unsigned desc;
344
+
345
+ desc = DIV_ROUND_UP(pred_full_reg_size(s), 8);
346
+ desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
347
+
348
+ tcg_gen_addi_ptr(t_pd, cpu_env, pred_full_reg_offset(s, a->rd));
349
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->rn));
350
+ t = tcg_const_i32(desc);
351
+
352
+ gen_fn(t, t_pd, t_pg, t);
353
+ tcg_temp_free_ptr(t_pd);
354
+ tcg_temp_free_ptr(t_pg);
355
+
356
+ do_pred_flags(t);
357
+ tcg_temp_free_i32(t);
358
+ return true;
359
+}
360
+
361
+static bool trans_PFIRST(DisasContext *s, arg_rr_esz *a, uint32_t insn)
362
+{
363
+ return do_pfirst_pnext(s, a, gen_helper_sve_pfirst);
364
+}
365
+
366
+static bool trans_PNEXT(DisasContext *s, arg_rr_esz *a, uint32_t insn)
367
+{
368
+ return do_pfirst_pnext(s, a, gen_helper_sve_pnext);
369
+}
370
+
371
/*
372
*** SVE Memory - 32-bit Gather and Unsized Contiguous Group
373
*/
374
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
375
index XXXXXXX..XXXXXXX 100644
376
--- a/target/arm/sve.decode
377
+++ b/target/arm/sve.decode
378
@@ -XXX,XX +XXX,XX @@
379
# when creating helpers common to those for the individual
380
# instruction patterns.
381
382
+&rr_esz rd rn esz
383
&rri rd rn imm
384
&rrr_esz rd rn rm esz
385
&rprr_s rd pg rn rm s
386
@@ -XXX,XX +XXX,XX @@
387
# Named instruction formats. These are generally used to
388
# reduce the amount of duplication between instruction patterns.
389
390
+# Two operand with unused vector element size
391
+@pd_pn_e0 ........ ........ ....... rn:4 . rd:4 &rr_esz esz=0
392
+
393
+# Two operand
394
+@pd_pn ........ esz:2 .. .... ....... rn:4 . rd:4 &rr_esz
395
+
396
# Three operand with unused vector element size
397
@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
398
399
@@ -XXX,XX +XXX,XX @@ NAND_pppp 00100101 1. 00 .... 01 .... 1 .... 1 .... @pd_pg_pn_pm_s
400
# SVE predicate test
401
PTEST 00100101 01 010000 11 pg:4 0 rn:4 0 0000
402
403
+# SVE predicate initialize
404
+PTRUE 00100101 esz:2 01100 s:1 111000 pat:5 0 rd:4
405
+
406
+# SVE initialize FFR
407
+SETFFR 00100101 0010 1100 1001 0000 0000 0000
408
+
409
+# SVE zero predicate register
410
+PFALSE 00100101 0001 1000 1110 0100 0000 rd:4
411
+
412
+# SVE predicate read from FFR (predicated)
413
+RDFFR_p 00100101 0 s:1 0110001111000 pg:4 0 rd:4
414
+
415
+# SVE predicate read from FFR (unpredicated)
416
+RDFFR 00100101 0001 1001 1111 0000 0000 rd:4
417
+
418
+# SVE FFR write from predicate (WRFFR)
419
+WRFFR 00100101 0010 1000 1001 000 rn:4 00000
420
+
421
+# SVE predicate first active
422
+PFIRST 00100101 01 011 000 11000 00 .... 0 .... @pd_pn_e0
423
+
424
+# SVE predicate next active
425
+PNEXT 00100101 .. 011 001 11000 10 .... 0 .... @pd_pn
426
+
427
### SVE Memory - 32-bit Gather and Unsized Contiguous Group
428
429
# SVE load predicate register
430
--
386
--
431
2.17.0
387
2.20.1
432
388
433
389
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
With this commit, the watchdog on imx25-pdk is fully operational,
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
including pretimeout support.
5
Message-id: 20180516223007.10256-18-richard.henderson@linaro.org
5
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
8
Message-id: 20200517162135.110364-4-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate-sve.c | 27 +++++++++++++++++++++++++++
11
include/hw/arm/fsl-imx25.h | 5 +++++
9
target/arm/sve.decode | 12 ++++++++++++
12
hw/arm/fsl-imx25.c | 10 ++++++++++
10
2 files changed, 39 insertions(+)
13
hw/arm/Kconfig | 1 +
14
3 files changed, 16 insertions(+)
11
15
12
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
16
diff --git a/include/hw/arm/fsl-imx25.h b/include/hw/arm/fsl-imx25.h
13
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
18
--- a/include/hw/arm/fsl-imx25.h
15
+++ b/target/arm/translate-sve.c
19
+++ b/include/hw/arm/fsl-imx25.h
16
@@ -XXX,XX +XXX,XX @@ static bool trans_INDEX_rr(DisasContext *s, arg_INDEX_rr *a, uint32_t insn)
20
@@ -XXX,XX +XXX,XX @@
17
return true;
21
#include "hw/gpio/imx_gpio.h"
22
#include "hw/sd/sdhci.h"
23
#include "hw/usb/chipidea.h"
24
+#include "hw/watchdog/wdt_imx2.h"
25
#include "exec/memory.h"
26
#include "target/arm/cpu.h"
27
28
@@ -XXX,XX +XXX,XX @@ typedef struct FslIMX25State {
29
IMXGPIOState gpio[FSL_IMX25_NUM_GPIOS];
30
SDHCIState esdhc[FSL_IMX25_NUM_ESDHCS];
31
ChipideaState usb[FSL_IMX25_NUM_USBS];
32
+ IMX2WdtState wdt;
33
MemoryRegion rom[2];
34
MemoryRegion iram;
35
MemoryRegion iram_alias;
36
@@ -XXX,XX +XXX,XX @@ typedef struct FslIMX25State {
37
#define FSL_IMX25_GPIO1_SIZE 0x4000
38
#define FSL_IMX25_GPIO2_ADDR 0x53FD0000
39
#define FSL_IMX25_GPIO2_SIZE 0x4000
40
+#define FSL_IMX25_WDT_ADDR 0x53FDC000
41
+#define FSL_IMX25_WDT_SIZE 0x4000
42
#define FSL_IMX25_USB1_ADDR 0x53FF4000
43
#define FSL_IMX25_USB1_SIZE 0x0200
44
#define FSL_IMX25_USB2_ADDR 0x53FF4400
45
@@ -XXX,XX +XXX,XX @@ typedef struct FslIMX25State {
46
#define FSL_IMX25_ESDHC2_IRQ 8
47
#define FSL_IMX25_USB1_IRQ 37
48
#define FSL_IMX25_USB2_IRQ 35
49
+#define FSL_IMX25_WDT_IRQ 55
50
51
#endif /* FSL_IMX25_H */
52
diff --git a/hw/arm/fsl-imx25.c b/hw/arm/fsl-imx25.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/arm/fsl-imx25.c
55
+++ b/hw/arm/fsl-imx25.c
56
@@ -XXX,XX +XXX,XX @@ static void fsl_imx25_init(Object *obj)
57
TYPE_CHIPIDEA);
58
}
59
60
+ sysbus_init_child_obj(obj, "wdt", &s->wdt, sizeof(s->wdt), TYPE_IMX2_WDT);
18
}
61
}
19
62
20
+/*
63
static void fsl_imx25_realize(DeviceState *dev, Error **errp)
21
+ *** SVE Stack Allocation Group
64
@@ -XXX,XX +XXX,XX @@ static void fsl_imx25_realize(DeviceState *dev, Error **errp)
22
+ */
65
usb_table[i].irq));
66
}
67
68
+ /* Watchdog */
69
+ object_property_set_bool(OBJECT(&s->wdt), true, "pretimeout-support",
70
+ &error_abort);
71
+ object_property_set_bool(OBJECT(&s->wdt), true, "realized", &error_abort);
72
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt), 0, FSL_IMX25_WDT_ADDR);
73
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->wdt), 0,
74
+ qdev_get_gpio_in(DEVICE(&s->avic),
75
+ FSL_IMX25_WDT_IRQ));
23
+
76
+
24
+static bool trans_ADDVL(DisasContext *s, arg_ADDVL *a, uint32_t insn)
77
/* initialize 2 x 16 KB ROM */
25
+{
78
memory_region_init_rom(&s->rom[0], OBJECT(dev), "imx25.rom0",
26
+ TCGv_i64 rd = cpu_reg_sp(s, a->rd);
79
FSL_IMX25_ROM0_SIZE, &err);
27
+ TCGv_i64 rn = cpu_reg_sp(s, a->rn);
80
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
28
+ tcg_gen_addi_i64(rd, rn, a->imm * vec_full_reg_size(s));
29
+ return true;
30
+}
31
+
32
+static bool trans_ADDPL(DisasContext *s, arg_ADDPL *a, uint32_t insn)
33
+{
34
+ TCGv_i64 rd = cpu_reg_sp(s, a->rd);
35
+ TCGv_i64 rn = cpu_reg_sp(s, a->rn);
36
+ tcg_gen_addi_i64(rd, rn, a->imm * pred_full_reg_size(s));
37
+ return true;
38
+}
39
+
40
+static bool trans_RDVL(DisasContext *s, arg_RDVL *a, uint32_t insn)
41
+{
42
+ TCGv_i64 reg = cpu_reg(s, a->rd);
43
+ tcg_gen_movi_i64(reg, a->imm * vec_full_reg_size(s));
44
+ return true;
45
+}
46
+
47
/*
48
*** SVE Predicate Logical Operations Group
49
*/
50
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
51
index XXXXXXX..XXXXXXX 100644
81
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/sve.decode
82
--- a/hw/arm/Kconfig
53
+++ b/target/arm/sve.decode
83
+++ b/hw/arm/Kconfig
54
@@ -XXX,XX +XXX,XX @@
84
@@ -XXX,XX +XXX,XX @@ config FSL_IMX25
55
# One register operand, with governing predicate, vector element size
85
select IMX
56
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
86
select IMX_FEC
57
87
select IMX_I2C
58
+# Two register operands with a 6-bit signed immediate.
88
+ select WDT_IMX2
59
+@rd_rn_i6 ........ ... rn:5 ..... imm:s6 rd:5 &rri
89
select DS1338
60
+
90
61
# Two register operand, one immediate operand, with predicate,
91
config FSL_IMX31
62
# element size encoded as TSZHL. User must fill in imm.
63
@rdn_pg_tszimm ........ .. ... ... ... pg:3 ..... rd:5 \
64
@@ -XXX,XX +XXX,XX @@ INDEX_ri 00000100 esz:2 1 imm:s5 010001 rn:5 rd:5
65
# SVE index generation (register start, register increment)
66
INDEX_rr 00000100 .. 1 ..... 010011 ..... ..... @rd_rn_rm
67
68
+### SVE Stack Allocation Group
69
+
70
+# SVE stack frame adjustment
71
+ADDVL 00000100 001 ..... 01010 ...... ..... @rd_rn_i6
72
+ADDPL 00000100 011 ..... 01010 ...... ..... @rd_rn_i6
73
+
74
+# SVE stack frame size
75
+RDVL 00000100 101 11111 01010 imm:s6 rd:5
76
+
77
### SVE Predicate Logical Operations Group
78
79
# SVE predicate logical operations
80
--
92
--
81
2.17.0
93
2.20.1
82
94
83
95
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
With this patch, the watchdog on i.MX31 emulations is fully operational.
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
5
Message-id: 20180516223007.10256-16-richard.henderson@linaro.org
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
7
Message-id: 20200517162135.110364-5-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/translate-sve.c | 34 ++++++++++++++++++++++++++++++++++
10
include/hw/arm/fsl-imx31.h | 4 ++++
9
target/arm/sve.decode | 13 +++++++++++++
11
hw/arm/fsl-imx31.c | 6 ++++++
10
2 files changed, 47 insertions(+)
12
hw/arm/Kconfig | 1 +
13
3 files changed, 11 insertions(+)
11
14
12
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
15
diff --git a/include/hw/arm/fsl-imx31.h b/include/hw/arm/fsl-imx31.h
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate-sve.c
17
--- a/include/hw/arm/fsl-imx31.h
15
+++ b/target/arm/translate-sve.c
18
+++ b/include/hw/arm/fsl-imx31.h
16
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
19
@@ -XXX,XX +XXX,XX @@
17
return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
20
#include "hw/timer/imx_epit.h"
21
#include "hw/i2c/imx_i2c.h"
22
#include "hw/gpio/imx_gpio.h"
23
+#include "hw/watchdog/wdt_imx2.h"
24
#include "exec/memory.h"
25
#include "target/arm/cpu.h"
26
27
@@ -XXX,XX +XXX,XX @@ typedef struct FslIMX31State {
28
IMXEPITState epit[FSL_IMX31_NUM_EPITS];
29
IMXI2CState i2c[FSL_IMX31_NUM_I2CS];
30
IMXGPIOState gpio[FSL_IMX31_NUM_GPIOS];
31
+ IMX2WdtState wdt;
32
MemoryRegion secure_rom;
33
MemoryRegion rom;
34
MemoryRegion iram;
35
@@ -XXX,XX +XXX,XX @@ typedef struct FslIMX31State {
36
#define FSL_IMX31_GPIO1_SIZE 0x4000
37
#define FSL_IMX31_GPIO2_ADDR 0x53FD0000
38
#define FSL_IMX31_GPIO2_SIZE 0x4000
39
+#define FSL_IMX31_WDT_ADDR 0x53FDC000
40
+#define FSL_IMX31_WDT_SIZE 0x4000
41
#define FSL_IMX31_AVIC_ADDR 0x68000000
42
#define FSL_IMX31_AVIC_SIZE 0x100
43
#define FSL_IMX31_SDRAM0_ADDR 0x80000000
44
diff --git a/hw/arm/fsl-imx31.c b/hw/arm/fsl-imx31.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/hw/arm/fsl-imx31.c
47
+++ b/hw/arm/fsl-imx31.c
48
@@ -XXX,XX +XXX,XX @@ static void fsl_imx31_init(Object *obj)
49
sysbus_init_child_obj(obj, "gpio[*]", &s->gpio[i], sizeof(s->gpio[i]),
50
TYPE_IMX_GPIO);
51
}
52
+
53
+ sysbus_init_child_obj(obj, "wdt", &s->wdt, sizeof(s->wdt), TYPE_IMX2_WDT);
18
}
54
}
19
55
20
+/*
56
static void fsl_imx31_realize(DeviceState *dev, Error **errp)
21
+ *** SVE Integer Arithmetic - Unpredicated Group
57
@@ -XXX,XX +XXX,XX @@ static void fsl_imx31_realize(DeviceState *dev, Error **errp)
22
+ */
58
gpio_table[i].irq));
59
}
60
61
+ /* Watchdog */
62
+ object_property_set_bool(OBJECT(&s->wdt), true, "realized", &error_abort);
63
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt), 0, FSL_IMX31_WDT_ADDR);
23
+
64
+
24
+static bool trans_ADD_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
65
/* On a real system, the first 16k is a `secure boot rom' */
25
+{
66
memory_region_init_rom(&s->secure_rom, OBJECT(dev), "imx31.secure_rom",
26
+ return do_vector3_z(s, tcg_gen_gvec_add, a->esz, a->rd, a->rn, a->rm);
67
FSL_IMX31_SECURE_ROM_SIZE, &err);
27
+}
68
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
28
+
29
+static bool trans_SUB_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
30
+{
31
+ return do_vector3_z(s, tcg_gen_gvec_sub, a->esz, a->rd, a->rn, a->rm);
32
+}
33
+
34
+static bool trans_SQADD_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
35
+{
36
+ return do_vector3_z(s, tcg_gen_gvec_ssadd, a->esz, a->rd, a->rn, a->rm);
37
+}
38
+
39
+static bool trans_SQSUB_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
40
+{
41
+ return do_vector3_z(s, tcg_gen_gvec_sssub, a->esz, a->rd, a->rn, a->rm);
42
+}
43
+
44
+static bool trans_UQADD_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
45
+{
46
+ return do_vector3_z(s, tcg_gen_gvec_usadd, a->esz, a->rd, a->rn, a->rm);
47
+}
48
+
49
+static bool trans_UQSUB_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
50
+{
51
+ return do_vector3_z(s, tcg_gen_gvec_ussub, a->esz, a->rd, a->rn, a->rm);
52
+}
53
+
54
/*
55
*** SVE Integer Arithmetic - Binary Predicated Group
56
*/
57
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
58
index XXXXXXX..XXXXXXX 100644
69
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/sve.decode
70
--- a/hw/arm/Kconfig
60
+++ b/target/arm/sve.decode
71
+++ b/hw/arm/Kconfig
61
@@ -XXX,XX +XXX,XX @@
72
@@ -XXX,XX +XXX,XX @@ config FSL_IMX31
62
# Three predicate operand, with governing predicate, flag setting
73
select SERIAL
63
@pd_pg_pn_pm_s ........ . s:1 .. rm:4 .. pg:4 . rn:4 . rd:4 &rprr_s
74
select IMX
64
75
select IMX_I2C
65
+# Three operand, vector element size
76
+ select WDT_IMX2
66
+@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
77
select LAN9118
67
+
78
68
# Two register operand, with governing predicate, vector element size
79
config FSL_IMX6
69
@rdn_pg_rm ........ esz:2 ... ... ... pg:3 rm:5 rd:5 \
70
&rprr_esz rn=%reg_movprfx
71
@@ -XXX,XX +XXX,XX @@ MLS 00000100 .. 0 ..... 011 ... ..... ..... @rda_pg_rn_rm
72
MLA 00000100 .. 0 ..... 110 ... ..... ..... @rdn_pg_ra_rm # MAD
73
MLS 00000100 .. 0 ..... 111 ... ..... ..... @rdn_pg_ra_rm # MSB
74
75
+### SVE Integer Arithmetic - Unpredicated Group
76
+
77
+# SVE integer add/subtract vectors (unpredicated)
78
+ADD_zzz 00000100 .. 1 ..... 000 000 ..... ..... @rd_rn_rm
79
+SUB_zzz 00000100 .. 1 ..... 000 001 ..... ..... @rd_rn_rm
80
+SQADD_zzz 00000100 .. 1 ..... 000 100 ..... ..... @rd_rn_rm
81
+UQADD_zzz 00000100 .. 1 ..... 000 101 ..... ..... @rd_rn_rm
82
+SQSUB_zzz 00000100 .. 1 ..... 000 110 ..... ..... @rd_rn_rm
83
+UQSUB_zzz 00000100 .. 1 ..... 000 111 ..... ..... @rd_rn_rm
84
+
85
### SVE Logical - Unpredicated Group
86
87
# SVE bitwise logical operations (unpredicated)
88
--
80
--
89
2.17.0
81
2.20.1
90
82
91
83
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
With this patch applied, the watchdog in the sabrelite emulation
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
is fully operational, including pretimeout support.
5
Message-id: 20180516223007.10256-15-richard.henderson@linaro.org
5
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
8
Message-id: 20200517162135.110364-6-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 18 ++++++++++++
11
hw/arm/fsl-imx6.c | 9 +++++++++
9
target/arm/sve_helper.c | 57 ++++++++++++++++++++++++++++++++++++++
12
1 file changed, 9 insertions(+)
10
target/arm/translate-sve.c | 34 +++++++++++++++++++++++
11
target/arm/sve.decode | 17 ++++++++++++
12
4 files changed, 126 insertions(+)
13
13
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
diff --git a/hw/arm/fsl-imx6.c b/hw/arm/fsl-imx6.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
16
--- a/hw/arm/fsl-imx6.c
17
+++ b/target/arm/helper-sve.h
17
+++ b/hw/arm/fsl-imx6.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_neg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_realize(DeviceState *dev, Error **errp)
19
DEF_HELPER_FLAGS_4(sve_neg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
FSL_IMX6_WDOG1_ADDR,
20
DEF_HELPER_FLAGS_4(sve_neg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
FSL_IMX6_WDOG2_ADDR,
21
21
};
22
+DEF_HELPER_FLAGS_6(sve_mla_b, TCG_CALL_NO_RWG,
22
+ static const int FSL_IMX6_WDOGn_IRQ[FSL_IMX6_NUM_WDTS] = {
23
+ void, ptr, ptr, ptr, ptr, ptr, i32)
23
+ FSL_IMX6_WDOG1_IRQ,
24
+DEF_HELPER_FLAGS_6(sve_mla_h, TCG_CALL_NO_RWG,
24
+ FSL_IMX6_WDOG2_IRQ,
25
+ void, ptr, ptr, ptr, ptr, ptr, i32)
25
+ };
26
+DEF_HELPER_FLAGS_6(sve_mla_s, TCG_CALL_NO_RWG,
26
27
+ void, ptr, ptr, ptr, ptr, ptr, i32)
27
+ object_property_set_bool(OBJECT(&s->wdt[i]), true, "pretimeout-support",
28
+DEF_HELPER_FLAGS_6(sve_mla_d, TCG_CALL_NO_RWG,
28
+ &error_abort);
29
+ void, ptr, ptr, ptr, ptr, ptr, i32)
29
object_property_set_bool(OBJECT(&s->wdt[i]), true, "realized",
30
+
30
&error_abort);
31
+DEF_HELPER_FLAGS_6(sve_mls_b, TCG_CALL_NO_RWG,
31
32
+ void, ptr, ptr, ptr, ptr, ptr, i32)
32
sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt[i]), 0, FSL_IMX6_WDOGn_ADDR[i]);
33
+DEF_HELPER_FLAGS_6(sve_mls_h, TCG_CALL_NO_RWG,
33
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->wdt[i]), 0,
34
+ void, ptr, ptr, ptr, ptr, ptr, i32)
34
+ qdev_get_gpio_in(DEVICE(&s->a9mpcore),
35
+DEF_HELPER_FLAGS_6(sve_mls_s, TCG_CALL_NO_RWG,
35
+ FSL_IMX6_WDOGn_IRQ[i]));
36
+ void, ptr, ptr, ptr, ptr, ptr, i32)
36
}
37
+DEF_HELPER_FLAGS_6(sve_mls_d, TCG_CALL_NO_RWG,
37
38
+ void, ptr, ptr, ptr, ptr, ptr, i32)
38
/* ROM memory */
39
+
40
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
41
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
42
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
43
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/sve_helper.c
46
+++ b/target/arm/sve_helper.c
47
@@ -XXX,XX +XXX,XX @@ DO_ZPZI_D(sve_asrd_d, int64_t, DO_ASRD)
48
#undef DO_ASRD
49
#undef DO_ZPZI
50
#undef DO_ZPZI_D
51
+
52
+/* Fully general four-operand expander, controlled by a predicate.
53
+ */
54
+#define DO_ZPZZZ(NAME, TYPE, H, OP) \
55
+void HELPER(NAME)(void *vd, void *va, void *vn, void *vm, \
56
+ void *vg, uint32_t desc) \
57
+{ \
58
+ intptr_t i, opr_sz = simd_oprsz(desc); \
59
+ for (i = 0; i < opr_sz; ) { \
60
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
61
+ do { \
62
+ if (pg & 1) { \
63
+ TYPE nn = *(TYPE *)(vn + H(i)); \
64
+ TYPE mm = *(TYPE *)(vm + H(i)); \
65
+ TYPE aa = *(TYPE *)(va + H(i)); \
66
+ *(TYPE *)(vd + H(i)) = OP(aa, nn, mm); \
67
+ } \
68
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
69
+ } while (i & 15); \
70
+ } \
71
+}
72
+
73
+/* Similarly, specialized for 64-bit operands. */
74
+#define DO_ZPZZZ_D(NAME, TYPE, OP) \
75
+void HELPER(NAME)(void *vd, void *va, void *vn, void *vm, \
76
+ void *vg, uint32_t desc) \
77
+{ \
78
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
79
+ TYPE *d = vd, *a = va, *n = vn, *m = vm; \
80
+ uint8_t *pg = vg; \
81
+ for (i = 0; i < opr_sz; i += 1) { \
82
+ if (pg[H1(i)] & 1) { \
83
+ TYPE aa = a[i], nn = n[i], mm = m[i]; \
84
+ d[i] = OP(aa, nn, mm); \
85
+ } \
86
+ } \
87
+}
88
+
89
+#define DO_MLA(A, N, M) (A + N * M)
90
+#define DO_MLS(A, N, M) (A - N * M)
91
+
92
+DO_ZPZZZ(sve_mla_b, uint8_t, H1, DO_MLA)
93
+DO_ZPZZZ(sve_mls_b, uint8_t, H1, DO_MLS)
94
+
95
+DO_ZPZZZ(sve_mla_h, uint16_t, H1_2, DO_MLA)
96
+DO_ZPZZZ(sve_mls_h, uint16_t, H1_2, DO_MLS)
97
+
98
+DO_ZPZZZ(sve_mla_s, uint32_t, H1_4, DO_MLA)
99
+DO_ZPZZZ(sve_mls_s, uint32_t, H1_4, DO_MLS)
100
+
101
+DO_ZPZZZ_D(sve_mla_d, uint64_t, DO_MLA)
102
+DO_ZPZZZ_D(sve_mls_d, uint64_t, DO_MLS)
103
+
104
+#undef DO_MLA
105
+#undef DO_MLS
106
+#undef DO_ZPZZZ
107
+#undef DO_ZPZZZ_D
108
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
109
index XXXXXXX..XXXXXXX 100644
110
--- a/target/arm/translate-sve.c
111
+++ b/target/arm/translate-sve.c
112
@@ -XXX,XX +XXX,XX @@ DO_ZPZW(LSL, lsl)
113
114
#undef DO_ZPZW
115
116
+/*
117
+ *** SVE Integer Multiply-Add Group
118
+ */
119
+
120
+static bool do_zpzzz_ool(DisasContext *s, arg_rprrr_esz *a,
121
+ gen_helper_gvec_5 *fn)
122
+{
123
+ if (sve_access_check(s)) {
124
+ unsigned vsz = vec_full_reg_size(s);
125
+ tcg_gen_gvec_5_ool(vec_full_reg_offset(s, a->rd),
126
+ vec_full_reg_offset(s, a->ra),
127
+ vec_full_reg_offset(s, a->rn),
128
+ vec_full_reg_offset(s, a->rm),
129
+ pred_full_reg_offset(s, a->pg),
130
+ vsz, vsz, 0, fn);
131
+ }
132
+ return true;
133
+}
134
+
135
+#define DO_ZPZZZ(NAME, name) \
136
+static bool trans_##NAME(DisasContext *s, arg_rprrr_esz *a, uint32_t insn) \
137
+{ \
138
+ static gen_helper_gvec_5 * const fns[4] = { \
139
+ gen_helper_sve_##name##_b, gen_helper_sve_##name##_h, \
140
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d, \
141
+ }; \
142
+ return do_zpzzz_ool(s, a, fns[a->esz]); \
143
+}
144
+
145
+DO_ZPZZZ(MLA, mla)
146
+DO_ZPZZZ(MLS, mls)
147
+
148
+#undef DO_ZPZZZ
149
+
150
/*
151
*** SVE Predicate Logical Operations Group
152
*/
153
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
154
index XXXXXXX..XXXXXXX 100644
155
--- a/target/arm/sve.decode
156
+++ b/target/arm/sve.decode
157
@@ -XXX,XX +XXX,XX @@
158
&rpr_esz rd pg rn esz
159
&rprr_s rd pg rn rm s
160
&rprr_esz rd pg rn rm esz
161
+&rprrr_esz rd pg rn rm ra esz
162
&rpri_esz rd pg rn imm esz
163
164
###########################################################################
165
@@ -XXX,XX +XXX,XX @@
166
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
167
&rprr_esz rm=%reg_movprfx
168
169
+# Three register operand, with governing predicate, vector element size
170
+@rda_pg_rn_rm ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5 \
171
+ &rprrr_esz ra=%reg_movprfx
172
+@rdn_pg_ra_rm ........ esz:2 . rm:5 ... pg:3 ra:5 rd:5 \
173
+ &rprrr_esz rn=%reg_movprfx
174
+
175
# One register operand, with governing predicate, vector element size
176
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
177
178
@@ -XXX,XX +XXX,XX @@ UXTH 00000100 .. 010 011 101 ... ..... ..... @rd_pg_rn
179
SXTW 00000100 .. 010 100 101 ... ..... ..... @rd_pg_rn
180
UXTW 00000100 .. 010 101 101 ... ..... ..... @rd_pg_rn
181
182
+### SVE Integer Multiply-Add Group
183
+
184
+# SVE integer multiply-add writing addend (predicated)
185
+MLA 00000100 .. 0 ..... 010 ... ..... ..... @rda_pg_rn_rm
186
+MLS 00000100 .. 0 ..... 011 ... ..... ..... @rda_pg_rn_rm
187
+
188
+# SVE integer multiply-add writing multiplicand (predicated)
189
+MLA 00000100 .. 0 ..... 110 ... ..... ..... @rdn_pg_ra_rm # MAD
190
+MLS 00000100 .. 0 ..... 111 ... ..... ..... @rdn_pg_ra_rm # MSB
191
+
192
### SVE Logical - Unpredicated Group
193
194
# SVE bitwise logical operations (unpredicated)
195
--
39
--
196
2.17.0
40
2.20.1
197
41
198
42
1
From: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
1
From: Guenter Roeck <linux@roeck-us.net>
2
2
3
This is a preparation for the upcoming feature of dynamically creating an XML
3
With this commit, the watchdog on mcimx6ul-evk is fully operational,
4
description for the ARM sysregs.
4
including pretimeout support.
5
A register that has ARM_CP_NO_GDB enabled will not be shown in the dynamic XML.
6
This bit is enabled automatically when creating CP_ANY wildcard aliases.
7
This bit could be enabled manually for any register we want to remove from the
8
dynamic XML description.
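
As a rough sketch of that manual case (the register name and encoding
below are invented for illustration and are not part of this series),
a coprocessor register definition opts out of the generated XML simply
by ORing ARM_CP_NO_GDB into its .type field:

    /* Hypothetical register: emulated normally, but hidden from gdb. */
    static const ARMCPRegInfo example_hidden_reginfo[] = {
        { .name = "EXAMPLE_HIDDEN", .cp = 15, .crn = 9, .crm = 0,
          .opc1 = 0, .opc2 = 0, .access = PL1_RW,
          .type = ARM_CP_CONST | ARM_CP_NO_GDB, .resetvalue = 0 },
        REGINFO_SENTINEL
    };

    /* Registered from the usual cpu init path; only the gdbstub XML
     * generation skips it. */
    define_arm_cp_regs(cpu, example_hidden_reginfo);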
9
5
10
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Message-id: 20200517162135.110364-7-linux@roeck-us.net
13
Tested-by: Alex Bennée <alex.bennee@linaro.org>
14
Message-id: 1524153386-3550-2-git-send-email-abdallah.bouassida@lauterbach.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
10
---
17
target/arm/cpu.h | 3 ++-
11
hw/arm/fsl-imx6ul.c | 10 ++++++++++
18
target/arm/helper.c | 2 +-
12
1 file changed, 10 insertions(+)
19
2 files changed, 3 insertions(+), 2 deletions(-)
20
13
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
22
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
16
--- a/hw/arm/fsl-imx6ul.c
24
+++ b/target/arm/cpu.h
17
+++ b/hw/arm/fsl-imx6ul.c
25
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
18
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
26
#define ARM_LAST_SPECIAL ARM_CP_DC_ZVA
19
FSL_IMX6UL_WDOG2_ADDR,
27
#define ARM_CP_FPU 0x1000
20
FSL_IMX6UL_WDOG3_ADDR,
28
#define ARM_CP_SVE 0x2000
21
};
29
+#define ARM_CP_NO_GDB 0x4000
22
+ static const int FSL_IMX6UL_WDOGn_IRQ[FSL_IMX6UL_NUM_WDTS] = {
30
/* Used only as a terminator for ARMCPRegInfo lists */
23
+ FSL_IMX6UL_WDOG1_IRQ,
31
#define ARM_CP_SENTINEL 0xffff
24
+ FSL_IMX6UL_WDOG2_IRQ,
32
/* Mask of only the flag bits in a type field */
25
+ FSL_IMX6UL_WDOG3_IRQ,
33
-#define ARM_CP_FLAG_MASK 0x30ff
26
+ };
34
+#define ARM_CP_FLAG_MASK 0x70ff
27
35
28
+ object_property_set_bool(OBJECT(&s->wdt[i]), true, "pretimeout-support",
36
/* Valid values for ARMCPRegInfo state field, indicating which of
29
+ &error_abort);
37
* the AArch32 and AArch64 execution states this register is visible in.
30
object_property_set_bool(OBJECT(&s->wdt[i]), true, "realized",
38
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
&error_abort);
39
index XXXXXXX..XXXXXXX 100644
32
40
--- a/target/arm/helper.c
33
sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt[i]), 0,
41
+++ b/target/arm/helper.c
34
FSL_IMX6UL_WDOGn_ADDR[i]);
42
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
35
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->wdt[i]), 0,
43
if (((r->crm == CP_ANY) && crm != 0) ||
36
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
44
((r->opc1 == CP_ANY) && opc1 != 0) ||
37
+ FSL_IMX6UL_WDOGn_IRQ[i]));
45
((r->opc2 == CP_ANY) && opc2 != 0)) {
46
- r2->type |= ARM_CP_ALIAS;
47
+ r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB;
48
}
38
}
49
39
50
/* Check that raw accesses are either forbidden or handled. Note that
40
/*
51
--
41
--
52
2.17.0
42
2.20.1
53
43
54
44
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
3
Instantiating PWM, CAN, CAAM, and OCOTP devices is necessary to avoid
4
crashes when booting mainline Linux.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
5
Message-id: 20180516223007.10256-14-richard.henderson@linaro.org
8
Message-id: 20200517162135.110364-8-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/helper-sve.h | 60 ++++++++++++++++++
11
include/hw/arm/fsl-imx7.h | 16 ++++++++++++++++
9
target/arm/sve_helper.c | 127 +++++++++++++++++++++++++++++++++++++
12
hw/arm/fsl-imx7.c | 24 ++++++++++++++++++++++++
10
target/arm/translate-sve.c | 113 +++++++++++++++++++++++++++++++++
13
2 files changed, 40 insertions(+)
11
target/arm/sve.decode | 23 +++++++
12
4 files changed, 323 insertions(+)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/include/hw/arm/fsl-imx7.h
17
+++ b/target/arm/helper-sve.h
18
+++ b/include/hw/arm/fsl-imx7.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_asrd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
19
DEF_HELPER_FLAGS_4(sve_asrd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
20
FSL_IMX7_IOMUXC_GPR_ADDR = 0x30340000,
20
DEF_HELPER_FLAGS_4(sve_asrd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
FSL_IMX7_IOMUXCn_SIZE = 0x1000,
21
22
22
+DEF_HELPER_FLAGS_4(sve_cls_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
+ FSL_IMX7_OCOTP_ADDR = 0x30350000,
23
+DEF_HELPER_FLAGS_4(sve_cls_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
24
+ FSL_IMX7_OCOTP_SIZE = 0x10000,
24
+DEF_HELPER_FLAGS_4(sve_cls_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_4(sve_cls_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
+
25
+
27
+DEF_HELPER_FLAGS_4(sve_clz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
FSL_IMX7_ANALOG_ADDR = 0x30360000,
28
+DEF_HELPER_FLAGS_4(sve_clz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
FSL_IMX7_SNVS_ADDR = 0x30370000,
29
+DEF_HELPER_FLAGS_4(sve_clz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
FSL_IMX7_CCM_ADDR = 0x30380000,
30
+DEF_HELPER_FLAGS_4(sve_clz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
29
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
30
FSL_IMX7_ADC2_ADDR = 0x30620000,
31
FSL_IMX7_ADCn_SIZE = 0x1000,
32
33
+ FSL_IMX7_PWM1_ADDR = 0x30660000,
34
+ FSL_IMX7_PWM2_ADDR = 0x30670000,
35
+ FSL_IMX7_PWM3_ADDR = 0x30680000,
36
+ FSL_IMX7_PWM4_ADDR = 0x30690000,
37
+ FSL_IMX7_PWMn_SIZE = 0x10000,
31
+
38
+
32
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
39
FSL_IMX7_PCIE_PHY_ADDR = 0x306D0000,
33
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
FSL_IMX7_PCIE_PHY_SIZE = 0x10000,
34
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
35
+DEF_HELPER_FLAGS_4(sve_cnt_zpz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
FSL_IMX7_GPC_ADDR = 0x303A0000,
43
44
+ FSL_IMX7_CAAM_ADDR = 0x30900000,
45
+ FSL_IMX7_CAAM_SIZE = 0x40000,
36
+
46
+
37
+DEF_HELPER_FLAGS_4(sve_cnot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
47
+ FSL_IMX7_CAN1_ADDR = 0x30A00000,
38
+DEF_HELPER_FLAGS_4(sve_cnot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
48
+ FSL_IMX7_CAN2_ADDR = 0x30A10000,
39
+DEF_HELPER_FLAGS_4(sve_cnot_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
49
+ FSL_IMX7_CANn_SIZE = 0x10000,
40
+DEF_HELPER_FLAGS_4(sve_cnot_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
+
50
+
42
+DEF_HELPER_FLAGS_4(sve_fabs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
51
FSL_IMX7_I2C1_ADDR = 0x30A20000,
43
+DEF_HELPER_FLAGS_4(sve_fabs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
52
FSL_IMX7_I2C2_ADDR = 0x30A30000,
44
+DEF_HELPER_FLAGS_4(sve_fabs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
53
FSL_IMX7_I2C3_ADDR = 0x30A40000,
54
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/arm/fsl-imx7.c
57
+++ b/hw/arm/fsl-imx7.c
58
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
59
*/
60
create_unimplemented_device("sdma", FSL_IMX7_SDMA_ADDR, FSL_IMX7_SDMA_SIZE);
61
62
+ /*
63
+ * CAAM
64
+ */
65
+ create_unimplemented_device("caam", FSL_IMX7_CAAM_ADDR, FSL_IMX7_CAAM_SIZE);
45
+
66
+
46
+DEF_HELPER_FLAGS_4(sve_fneg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
67
+ /*
47
+DEF_HELPER_FLAGS_4(sve_fneg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
68
+ * PWM
48
+DEF_HELPER_FLAGS_4(sve_fneg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
69
+ */
70
+ create_unimplemented_device("pwm1", FSL_IMX7_PWM1_ADDR, FSL_IMX7_PWMn_SIZE);
71
+ create_unimplemented_device("pwm2", FSL_IMX7_PWM2_ADDR, FSL_IMX7_PWMn_SIZE);
72
+ create_unimplemented_device("pwm3", FSL_IMX7_PWM3_ADDR, FSL_IMX7_PWMn_SIZE);
73
+ create_unimplemented_device("pwm4", FSL_IMX7_PWM4_ADDR, FSL_IMX7_PWMn_SIZE);
49
+
74
+
50
+DEF_HELPER_FLAGS_4(sve_not_zpz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
75
+ /*
51
+DEF_HELPER_FLAGS_4(sve_not_zpz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
76
+ * CAN
52
+DEF_HELPER_FLAGS_4(sve_not_zpz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
77
+ */
53
+DEF_HELPER_FLAGS_4(sve_not_zpz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
78
+ create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE);
79
+ create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE);
54
+
80
+
55
+DEF_HELPER_FLAGS_4(sve_sxtb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
81
+ /*
56
+DEF_HELPER_FLAGS_4(sve_sxtb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
82
+ * OCOTP
57
+DEF_HELPER_FLAGS_4(sve_sxtb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
83
+ */
58
+
84
+ create_unimplemented_device("ocotp", FSL_IMX7_OCOTP_ADDR,
59
+DEF_HELPER_FLAGS_4(sve_uxtb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
85
+ FSL_IMX7_OCOTP_SIZE);
60
+DEF_HELPER_FLAGS_4(sve_uxtb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
86
61
+DEF_HELPER_FLAGS_4(sve_uxtb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
87
object_property_set_bool(OBJECT(&s->gpr), true, "realized",
62
+
88
&error_abort);
63
+DEF_HELPER_FLAGS_4(sve_sxth_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
64
+DEF_HELPER_FLAGS_4(sve_sxth_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
65
+
66
+DEF_HELPER_FLAGS_4(sve_uxth_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
67
+DEF_HELPER_FLAGS_4(sve_uxth_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
68
+
69
+DEF_HELPER_FLAGS_4(sve_sxtw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
70
+DEF_HELPER_FLAGS_4(sve_uxtw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
71
+
72
+DEF_HELPER_FLAGS_4(sve_abs_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
73
+DEF_HELPER_FLAGS_4(sve_abs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
74
+DEF_HELPER_FLAGS_4(sve_abs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
75
+DEF_HELPER_FLAGS_4(sve_abs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
76
+
77
+DEF_HELPER_FLAGS_4(sve_neg_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
78
+DEF_HELPER_FLAGS_4(sve_neg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
79
+DEF_HELPER_FLAGS_4(sve_neg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
80
+DEF_HELPER_FLAGS_4(sve_neg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
81
+
82
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
83
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
84
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
85
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
86
index XXXXXXX..XXXXXXX 100644
87
--- a/target/arm/sve_helper.c
88
+++ b/target/arm/sve_helper.c
89
@@ -XXX,XX +XXX,XX @@ DO_ZPZW(sve_lsl_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
90
91
#undef DO_ZPZW
92
93
+/* Fully general two-operand expander, controlled by a predicate.
94
+ */
95
+#define DO_ZPZ(NAME, TYPE, H, OP) \
96
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
97
+{ \
98
+ intptr_t i, opr_sz = simd_oprsz(desc); \
99
+ for (i = 0; i < opr_sz; ) { \
100
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
101
+ do { \
102
+ if (pg & 1) { \
103
+ TYPE nn = *(TYPE *)(vn + H(i)); \
104
+ *(TYPE *)(vd + H(i)) = OP(nn); \
105
+ } \
106
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
107
+ } while (i & 15); \
108
+ } \
109
+}
110
+
111
+/* Similarly, specialized for 64-bit operands. */
112
+#define DO_ZPZ_D(NAME, TYPE, OP) \
113
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
114
+{ \
115
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
116
+ TYPE *d = vd, *n = vn; \
117
+ uint8_t *pg = vg; \
118
+ for (i = 0; i < opr_sz; i += 1) { \
119
+ if (pg[H1(i)] & 1) { \
120
+ TYPE nn = n[i]; \
121
+ d[i] = OP(nn); \
122
+ } \
123
+ } \
124
+}
125
+
126
+#define DO_CLS_B(N) (clrsb32(N) - 24)
127
+#define DO_CLS_H(N) (clrsb32(N) - 16)
128
+
129
+DO_ZPZ(sve_cls_b, int8_t, H1, DO_CLS_B)
130
+DO_ZPZ(sve_cls_h, int16_t, H1_2, DO_CLS_H)
131
+DO_ZPZ(sve_cls_s, int32_t, H1_4, clrsb32)
132
+DO_ZPZ_D(sve_cls_d, int64_t, clrsb64)
133
+
134
+#define DO_CLZ_B(N) (clz32(N) - 24)
135
+#define DO_CLZ_H(N) (clz32(N) - 16)
136
+
137
+DO_ZPZ(sve_clz_b, uint8_t, H1, DO_CLZ_B)
138
+DO_ZPZ(sve_clz_h, uint16_t, H1_2, DO_CLZ_H)
139
+DO_ZPZ(sve_clz_s, uint32_t, H1_4, clz32)
140
+DO_ZPZ_D(sve_clz_d, uint64_t, clz64)
141
+
142
+DO_ZPZ(sve_cnt_zpz_b, uint8_t, H1, ctpop8)
143
+DO_ZPZ(sve_cnt_zpz_h, uint16_t, H1_2, ctpop16)
144
+DO_ZPZ(sve_cnt_zpz_s, uint32_t, H1_4, ctpop32)
145
+DO_ZPZ_D(sve_cnt_zpz_d, uint64_t, ctpop64)
146
+
147
+#define DO_CNOT(N) (N == 0)
148
+
149
+DO_ZPZ(sve_cnot_b, uint8_t, H1, DO_CNOT)
150
+DO_ZPZ(sve_cnot_h, uint16_t, H1_2, DO_CNOT)
151
+DO_ZPZ(sve_cnot_s, uint32_t, H1_4, DO_CNOT)
152
+DO_ZPZ_D(sve_cnot_d, uint64_t, DO_CNOT)
153
+
154
+#define DO_FABS(N) (N & ((__typeof(N))-1 >> 1))
155
+
156
+DO_ZPZ(sve_fabs_h, uint16_t, H1_2, DO_FABS)
157
+DO_ZPZ(sve_fabs_s, uint32_t, H1_4, DO_FABS)
158
+DO_ZPZ_D(sve_fabs_d, uint64_t, DO_FABS)
159
+
160
+#define DO_FNEG(N) (N ^ ~((__typeof(N))-1 >> 1))
161
+
162
+DO_ZPZ(sve_fneg_h, uint16_t, H1_2, DO_FNEG)
163
+DO_ZPZ(sve_fneg_s, uint32_t, H1_4, DO_FNEG)
164
+DO_ZPZ_D(sve_fneg_d, uint64_t, DO_FNEG)
165
+
166
+#define DO_NOT(N) (~N)
167
+
168
+DO_ZPZ(sve_not_zpz_b, uint8_t, H1, DO_NOT)
169
+DO_ZPZ(sve_not_zpz_h, uint16_t, H1_2, DO_NOT)
170
+DO_ZPZ(sve_not_zpz_s, uint32_t, H1_4, DO_NOT)
171
+DO_ZPZ_D(sve_not_zpz_d, uint64_t, DO_NOT)
172
+
173
+#define DO_SXTB(N) ((int8_t)N)
174
+#define DO_SXTH(N) ((int16_t)N)
175
+#define DO_SXTS(N) ((int32_t)N)
176
+#define DO_UXTB(N) ((uint8_t)N)
177
+#define DO_UXTH(N) ((uint16_t)N)
178
+#define DO_UXTS(N) ((uint32_t)N)
179
+
180
+DO_ZPZ(sve_sxtb_h, uint16_t, H1_2, DO_SXTB)
181
+DO_ZPZ(sve_sxtb_s, uint32_t, H1_4, DO_SXTB)
182
+DO_ZPZ(sve_sxth_s, uint32_t, H1_4, DO_SXTH)
183
+DO_ZPZ_D(sve_sxtb_d, uint64_t, DO_SXTB)
184
+DO_ZPZ_D(sve_sxth_d, uint64_t, DO_SXTH)
185
+DO_ZPZ_D(sve_sxtw_d, uint64_t, DO_SXTS)
186
+
187
+DO_ZPZ(sve_uxtb_h, uint16_t, H1_2, DO_UXTB)
188
+DO_ZPZ(sve_uxtb_s, uint32_t, H1_4, DO_UXTB)
189
+DO_ZPZ(sve_uxth_s, uint32_t, H1_4, DO_UXTH)
190
+DO_ZPZ_D(sve_uxtb_d, uint64_t, DO_UXTB)
191
+DO_ZPZ_D(sve_uxth_d, uint64_t, DO_UXTH)
192
+DO_ZPZ_D(sve_uxtw_d, uint64_t, DO_UXTS)
193
+
194
+#define DO_ABS(N) (N < 0 ? -N : N)
195
+
196
+DO_ZPZ(sve_abs_b, int8_t, H1, DO_ABS)
197
+DO_ZPZ(sve_abs_h, int16_t, H1_2, DO_ABS)
198
+DO_ZPZ(sve_abs_s, int32_t, H1_4, DO_ABS)
199
+DO_ZPZ_D(sve_abs_d, int64_t, DO_ABS)
200
+
201
+#define DO_NEG(N) (-N)
202
+
203
+DO_ZPZ(sve_neg_b, uint8_t, H1, DO_NEG)
204
+DO_ZPZ(sve_neg_h, uint16_t, H1_2, DO_NEG)
205
+DO_ZPZ(sve_neg_s, uint32_t, H1_4, DO_NEG)
206
+DO_ZPZ_D(sve_neg_d, uint64_t, DO_NEG)
207
+
208
+#undef DO_CLS_B
209
+#undef DO_CLS_H
210
+#undef DO_CLZ_B
211
+#undef DO_CLZ_H
212
+#undef DO_CNOT
213
+#undef DO_FABS
214
+#undef DO_FNEG
215
+#undef DO_ABS
216
+#undef DO_NEG
217
+#undef DO_ZPZ
218
+#undef DO_ZPZ_D
219
+
220
/* Two-operand reduction expander, controlled by a predicate.
221
* The difference between TYPERED and TYPERET has to do with
222
* sign-extension. E.g. for SMAX, TYPERED must be signed,
223
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
224
index XXXXXXX..XXXXXXX 100644
225
--- a/target/arm/translate-sve.c
226
+++ b/target/arm/translate-sve.c
227
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
228
229
#undef DO_ZPZZ
230
231
+/*
232
+ *** SVE Integer Arithmetic - Unary Predicated Group
233
+ */
234
+
235
+static bool do_zpz_ool(DisasContext *s, arg_rpr_esz *a, gen_helper_gvec_3 *fn)
236
+{
237
+ if (fn == NULL) {
238
+ return false;
239
+ }
240
+ if (sve_access_check(s)) {
241
+ unsigned vsz = vec_full_reg_size(s);
242
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
243
+ vec_full_reg_offset(s, a->rn),
244
+ pred_full_reg_offset(s, a->pg),
245
+ vsz, vsz, 0, fn);
246
+ }
247
+ return true;
248
+}
249
+
250
+#define DO_ZPZ(NAME, name) \
251
+static bool trans_##NAME(DisasContext *s, arg_rpr_esz *a, uint32_t insn) \
252
+{ \
253
+ static gen_helper_gvec_3 * const fns[4] = { \
254
+ gen_helper_sve_##name##_b, gen_helper_sve_##name##_h, \
255
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d, \
256
+ }; \
257
+ return do_zpz_ool(s, a, fns[a->esz]); \
258
+}
259
+
260
+DO_ZPZ(CLS, cls)
261
+DO_ZPZ(CLZ, clz)
262
+DO_ZPZ(CNT_zpz, cnt_zpz)
263
+DO_ZPZ(CNOT, cnot)
264
+DO_ZPZ(NOT_zpz, not_zpz)
265
+DO_ZPZ(ABS, abs)
266
+DO_ZPZ(NEG, neg)
267
+
268
+static bool trans_FABS(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
269
+{
270
+ static gen_helper_gvec_3 * const fns[4] = {
271
+ NULL,
272
+ gen_helper_sve_fabs_h,
273
+ gen_helper_sve_fabs_s,
274
+ gen_helper_sve_fabs_d
275
+ };
276
+ return do_zpz_ool(s, a, fns[a->esz]);
277
+}
278
+
279
+static bool trans_FNEG(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
280
+{
281
+ static gen_helper_gvec_3 * const fns[4] = {
282
+ NULL,
283
+ gen_helper_sve_fneg_h,
284
+ gen_helper_sve_fneg_s,
285
+ gen_helper_sve_fneg_d
286
+ };
287
+ return do_zpz_ool(s, a, fns[a->esz]);
288
+}
289
+
290
+static bool trans_SXTB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
291
+{
292
+ static gen_helper_gvec_3 * const fns[4] = {
293
+ NULL,
294
+ gen_helper_sve_sxtb_h,
295
+ gen_helper_sve_sxtb_s,
296
+ gen_helper_sve_sxtb_d
297
+ };
298
+ return do_zpz_ool(s, a, fns[a->esz]);
299
+}
300
+
301
+static bool trans_UXTB(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
302
+{
303
+ static gen_helper_gvec_3 * const fns[4] = {
304
+ NULL,
305
+ gen_helper_sve_uxtb_h,
306
+ gen_helper_sve_uxtb_s,
307
+ gen_helper_sve_uxtb_d
308
+ };
309
+ return do_zpz_ool(s, a, fns[a->esz]);
310
+}
311
+
312
+static bool trans_SXTH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
313
+{
314
+ static gen_helper_gvec_3 * const fns[4] = {
315
+ NULL, NULL,
316
+ gen_helper_sve_sxth_s,
317
+ gen_helper_sve_sxth_d
318
+ };
319
+ return do_zpz_ool(s, a, fns[a->esz]);
320
+}
321
+
322
+static bool trans_UXTH(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
323
+{
324
+ static gen_helper_gvec_3 * const fns[4] = {
325
+ NULL, NULL,
326
+ gen_helper_sve_uxth_s,
327
+ gen_helper_sve_uxth_d
328
+ };
329
+ return do_zpz_ool(s, a, fns[a->esz]);
330
+}
331
+
332
+static bool trans_SXTW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
333
+{
334
+ return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_sxtw_d : NULL);
335
+}
336
+
337
+static bool trans_UXTW(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
338
+{
339
+ return do_zpz_ool(s, a, a->esz == 3 ? gen_helper_sve_uxtw_d : NULL);
340
+}
341
+
342
+#undef DO_ZPZ
343
+
344
/*
345
*** SVE Integer Reduction Group
346
*/
347
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
348
index XXXXXXX..XXXXXXX 100644
349
--- a/target/arm/sve.decode
350
+++ b/target/arm/sve.decode
351
@@ -XXX,XX +XXX,XX @@ ASR_zpzw 00000100 .. 011 000 100 ... ..... ..... @rdn_pg_rm
352
LSR_zpzw 00000100 .. 011 001 100 ... ..... ..... @rdn_pg_rm
353
LSL_zpzw 00000100 .. 011 011 100 ... ..... ..... @rdn_pg_rm
354
355
+### SVE Integer Arithmetic - Unary Predicated Group
356
+
357
+# SVE unary bit operations (predicated)
358
+# Note esz != 0 for FABS and FNEG.
359
+CLS 00000100 .. 011 000 101 ... ..... ..... @rd_pg_rn
360
+CLZ 00000100 .. 011 001 101 ... ..... ..... @rd_pg_rn
361
+CNT_zpz 00000100 .. 011 010 101 ... ..... ..... @rd_pg_rn
362
+CNOT 00000100 .. 011 011 101 ... ..... ..... @rd_pg_rn
363
+NOT_zpz 00000100 .. 011 110 101 ... ..... ..... @rd_pg_rn
364
+FABS 00000100 .. 011 100 101 ... ..... ..... @rd_pg_rn
365
+FNEG 00000100 .. 011 101 101 ... ..... ..... @rd_pg_rn
366
+
367
+# SVE integer unary operations (predicated)
368
+# Note esz > original size for extensions.
369
+ABS 00000100 .. 010 110 101 ... ..... ..... @rd_pg_rn
370
+NEG 00000100 .. 010 111 101 ... ..... ..... @rd_pg_rn
371
+SXTB 00000100 .. 010 000 101 ... ..... ..... @rd_pg_rn
372
+UXTB 00000100 .. 010 001 101 ... ..... ..... @rd_pg_rn
373
+SXTH 00000100 .. 010 010 101 ... ..... ..... @rd_pg_rn
374
+UXTH 00000100 .. 010 011 101 ... ..... ..... @rd_pg_rn
375
+SXTW 00000100 .. 010 100 101 ... ..... ..... @rd_pg_rn
376
+UXTW 00000100 .. 010 101 101 ... ..... ..... @rd_pg_rn
377
+
378
### SVE Logical - Unpredicated Group
379
380
# SVE bitwise logical operations (unpredicated)
381
--
89
--
382
2.17.0
90
2.20.1
383
91
384
92
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Guenter Roeck <linux@roeck-us.net>
2
3
i.MX7 supports watchdog pretimeout interrupts. With this commit,
4
the watchdog in mcimx7d-sabre is fully operational, including
5
pretimeout support.
2
6
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
5
Message-id: 20180516223007.10256-12-richard.henderson@linaro.org
9
Message-id: 20200517162135.110364-9-linux@roeck-us.net
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 27 +++++++++++++++++++++++++++
12
include/hw/arm/fsl-imx7.h | 5 +++++
9
target/arm/sve_helper.c | 25 +++++++++++++++++++++++++
13
hw/arm/fsl-imx7.c | 11 +++++++++++
10
target/arm/translate-sve.c | 4 ++++
14
2 files changed, 16 insertions(+)
11
target/arm/sve.decode | 8 ++++++++
12
4 files changed, 64 insertions(+)
13
15
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
16
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
18
--- a/include/hw/arm/fsl-imx7.h
17
+++ b/target/arm/helper-sve.h
19
+++ b/include/hw/arm/fsl-imx7.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_udiv_zpzz_s, TCG_CALL_NO_RWG,
20
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
19
DEF_HELPER_FLAGS_5(sve_udiv_zpzz_d, TCG_CALL_NO_RWG,
21
FSL_IMX7_USB2_IRQ = 42,
20
void, ptr, ptr, ptr, ptr, i32)
22
FSL_IMX7_USB3_IRQ = 40,
21
23
22
+DEF_HELPER_FLAGS_5(sve_asr_zpzz_b, TCG_CALL_NO_RWG,
24
+ FSL_IMX7_WDOG1_IRQ = 78,
23
+ void, ptr, ptr, ptr, ptr, i32)
25
+ FSL_IMX7_WDOG2_IRQ = 79,
24
+DEF_HELPER_FLAGS_5(sve_asr_zpzz_h, TCG_CALL_NO_RWG,
26
+ FSL_IMX7_WDOG3_IRQ = 10,
25
+ void, ptr, ptr, ptr, ptr, i32)
27
+ FSL_IMX7_WDOG4_IRQ = 109,
26
+DEF_HELPER_FLAGS_5(sve_asr_zpzz_s, TCG_CALL_NO_RWG,
27
+ void, ptr, ptr, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_5(sve_asr_zpzz_d, TCG_CALL_NO_RWG,
29
+ void, ptr, ptr, ptr, ptr, i32)
30
+
28
+
31
+DEF_HELPER_FLAGS_5(sve_lsr_zpzz_b, TCG_CALL_NO_RWG,
29
FSL_IMX7_PCI_INTA_IRQ = 125,
32
+ void, ptr, ptr, ptr, ptr, i32)
30
FSL_IMX7_PCI_INTB_IRQ = 124,
33
+DEF_HELPER_FLAGS_5(sve_lsr_zpzz_h, TCG_CALL_NO_RWG,
31
FSL_IMX7_PCI_INTC_IRQ = 123,
34
+ void, ptr, ptr, ptr, ptr, i32)
32
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
35
+DEF_HELPER_FLAGS_5(sve_lsr_zpzz_s, TCG_CALL_NO_RWG,
36
+ void, ptr, ptr, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_5(sve_lsr_zpzz_d, TCG_CALL_NO_RWG,
38
+ void, ptr, ptr, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_5(sve_lsl_zpzz_b, TCG_CALL_NO_RWG,
41
+ void, ptr, ptr, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_5(sve_lsl_zpzz_h, TCG_CALL_NO_RWG,
43
+ void, ptr, ptr, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
45
+ void, ptr, ptr, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
47
+ void, ptr, ptr, ptr, ptr, i32)
48
+
49
DEF_HELPER_FLAGS_3(sve_orv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
50
DEF_HELPER_FLAGS_3(sve_orv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
51
DEF_HELPER_FLAGS_3(sve_orv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
52
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
53
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/sve_helper.c
34
--- a/hw/arm/fsl-imx7.c
55
+++ b/target/arm/sve_helper.c
35
+++ b/hw/arm/fsl-imx7.c
56
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve_sdiv_zpzz_d, int64_t, DO_DIV)
36
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
57
DO_ZPZZ(sve_udiv_zpzz_s, uint32_t, H1_4, DO_DIV)
37
FSL_IMX7_WDOG3_ADDR,
58
DO_ZPZZ_D(sve_udiv_zpzz_d, uint64_t, DO_DIV)
38
FSL_IMX7_WDOG4_ADDR,
59
39
};
60
+/* Note that all bits of the shift are significant
40
+ static const int FSL_IMX7_WDOGn_IRQ[FSL_IMX7_NUM_WDTS] = {
61
+ and not modulo the element size. */
41
+ FSL_IMX7_WDOG1_IRQ,
62
+#define DO_ASR(N, M) (N >> MIN(M, sizeof(N) * 8 - 1))
42
+ FSL_IMX7_WDOG2_IRQ,
63
+#define DO_LSR(N, M) (M < sizeof(N) * 8 ? N >> M : 0)
43
+ FSL_IMX7_WDOG3_IRQ,
64
+#define DO_LSL(N, M) (M < sizeof(N) * 8 ? N << M : 0)
44
+ FSL_IMX7_WDOG4_IRQ,
65
+
45
+ };
66
+DO_ZPZZ(sve_asr_zpzz_b, int8_t, H1, DO_ASR)
46
67
+DO_ZPZZ(sve_lsr_zpzz_b, uint8_t, H1_2, DO_LSR)
47
+ object_property_set_bool(OBJECT(&s->wdt[i]), true, "pretimeout-support",
68
+DO_ZPZZ(sve_lsl_zpzz_b, uint8_t, H1_4, DO_LSL)
48
+ &error_abort);
69
+
49
object_property_set_bool(OBJECT(&s->wdt[i]), true, "realized",
70
+DO_ZPZZ(sve_asr_zpzz_h, int16_t, H1, DO_ASR)
50
&error_abort);
71
+DO_ZPZZ(sve_lsr_zpzz_h, uint16_t, H1_2, DO_LSR)
51
72
+DO_ZPZZ(sve_lsl_zpzz_h, uint16_t, H1_4, DO_LSL)
52
sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt[i]), 0, FSL_IMX7_WDOGn_ADDR[i]);
73
+
53
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->wdt[i]), 0,
74
+DO_ZPZZ(sve_asr_zpzz_s, int32_t, H1, DO_ASR)
54
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
75
+DO_ZPZZ(sve_lsr_zpzz_s, uint32_t, H1_2, DO_LSR)
55
+ FSL_IMX7_WDOGn_IRQ[i]));
76
+DO_ZPZZ(sve_lsl_zpzz_s, uint32_t, H1_4, DO_LSL)
56
}
77
+
57
78
+DO_ZPZZ_D(sve_asr_zpzz_d, int64_t, DO_ASR)
58
/*
79
+DO_ZPZZ_D(sve_lsr_zpzz_d, uint64_t, DO_LSR)
80
+DO_ZPZZ_D(sve_lsl_zpzz_d, uint64_t, DO_LSL)
81
+
82
#undef DO_ZPZZ
83
#undef DO_ZPZZ_D
84
85
@@ -XXX,XX +XXX,XX @@ DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
86
#undef DO_ABD
87
#undef DO_MUL
88
#undef DO_DIV
89
+#undef DO_ASR
90
+#undef DO_LSR
91
+#undef DO_LSL
92
93
/* Similar to the ARM LastActiveElement pseudocode function, except the
94
result is multiplied by the element size. This includes the not found
95
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/target/arm/translate-sve.c
98
+++ b/target/arm/translate-sve.c
99
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ(MUL, mul)
100
DO_ZPZZ(SMULH, smulh)
101
DO_ZPZZ(UMULH, umulh)
102
103
+DO_ZPZZ(ASR, asr)
104
+DO_ZPZZ(LSR, lsr)
105
+DO_ZPZZ(LSL, lsl)
106
+
107
static bool trans_SDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
108
{
109
static gen_helper_gvec_4 * const fns[4] = {
110
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/sve.decode
113
+++ b/target/arm/sve.decode
114
@@ -XXX,XX +XXX,XX @@ LSL_zpzi 00000100 .. 000 011 100 ... .. ... ..... \
115
ASRD 00000100 .. 000 100 100 ... .. ... ..... \
116
@rdn_pg_tszimm imm=%tszimm_shr
117
118
+# SVE bitwise shift by vector (predicated)
119
+ASR_zpzz 00000100 .. 010 000 100 ... ..... ..... @rdn_pg_rm
120
+LSR_zpzz 00000100 .. 010 001 100 ... ..... ..... @rdn_pg_rm
121
+LSL_zpzz 00000100 .. 010 011 100 ... ..... ..... @rdn_pg_rm
122
+ASR_zpzz 00000100 .. 010 100 100 ... ..... ..... @rdm_pg_rn # ASRR
123
+LSR_zpzz 00000100 .. 010 101 100 ... ..... ..... @rdm_pg_rn # LSRR
124
+LSL_zpzz 00000100 .. 010 111 100 ... ..... ..... @rdm_pg_rn # LSLR
125
+
126
### SVE Logical - Unpredicated Group
127
128
# SVE bitwise logical operations (unpredicated)
129
--
59
--
130
2.17.0
60
2.20.1
131
61
132
62
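The DO_ASR/DO_LSR/DO_LSL macros added to sve_helper.c above deliberately do not reduce the shift count modulo the element size. A standalone sketch of what that means in practice (the local MIN definition and the sample values are illustrative only, not code from the patch):

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
/* Same definitions as in sve_helper.c above: the shift count is taken
 * at face value, not reduced modulo the element size.
 */
#define DO_ASR(N, M) (N >> MIN(M, sizeof(N) * 8 - 1))
#define DO_LSR(N, M) (M < sizeof(N) * 8 ? N >> M : 0)
#define DO_LSL(N, M) (M < sizeof(N) * 8 ? N << M : 0)

int main(void)
{
    int8_t  sn = -64;   /* 0xc0 */
    uint8_t un = 0xc0;

    /* Count 200 is clamped to 7, so the sign bit fills the element: -1. */
    printf("ASR: %d\n", (int)DO_ASR(sn, 200u));
    /* Count 200 >= 8, so the value is shifted out completely: 0. */
    printf("LSR: %u\n", (unsigned)DO_LSR(un, 200u));
    printf("LSL: %u\n", (unsigned)DO_LSL(un, 200u));
    return 0;
}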
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
hw_error() calls exit(). This is a bit overkill when we can log
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
the accesses as unimplemented or guest error.
5
Message-id: 20180516223007.10256-20-richard.henderson@linaro.org
5
6
When fuzzing the devices, we don't want the whole process to
7
exit. Replace some hw_error() calls by qemu_log_mask().
8
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Message-id: 20200518140309.5220-2-f4bug@amsat.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
13
---
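A minimal sketch of the conversion this patch (and the rest of the series) applies, using a hypothetical MMIO read handler; the device name, offset and return values below are placeholders rather than code taken from integratorcp.c:

#include "qemu/osdep.h"
#include "qemu/log.h"
#include "exec/hwaddr.h"

static uint64_t demo_dev_read(void *opaque, hwaddr offset, unsigned size)
{
    switch (offset) {
    case 0x00:                  /* the one register this sketch models */
        return 0x11;
    default:
        /* Log the stray access and keep the guest running, instead of
         * hw_error(), which would exit() the whole QEMU process.
         */
        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
                      __func__, offset);
        return 0;
    }
}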
8
target/arm/helper-sve.h | 5 +++++
14
hw/arm/integratorcp.c | 23 +++++++++++++++--------
9
target/arm/sve_helper.c | 40 ++++++++++++++++++++++++++++++++++++++
15
1 file changed, 15 insertions(+), 8 deletions(-)
10
target/arm/translate-sve.c | 36 ++++++++++++++++++++++++++++++++++
11
target/arm/sve.decode | 12 ++++++++++++
12
4 files changed, 93 insertions(+)
13
16
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
17
diff --git a/hw/arm/integratorcp.c b/hw/arm/integratorcp.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
19
--- a/hw/arm/integratorcp.c
17
+++ b/target/arm/helper-sve.h
20
+++ b/hw/arm/integratorcp.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(sve_lsl_zzw_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
21
@@ -XXX,XX +XXX,XX @@
19
DEF_HELPER_FLAGS_4(sve_lsl_zzw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
22
#include "exec/address-spaces.h"
20
DEF_HELPER_FLAGS_4(sve_lsl_zzw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
23
#include "sysemu/runstate.h"
21
24
#include "sysemu/sysemu.h"
22
+DEF_HELPER_FLAGS_4(sve_adr_p32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
+#include "qemu/log.h"
23
+DEF_HELPER_FLAGS_4(sve_adr_p64, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
#include "qemu/error-report.h"
24
+DEF_HELPER_FLAGS_4(sve_adr_s32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
#include "hw/char/pl011.h"
25
+DEF_HELPER_FLAGS_4(sve_adr_u32, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
#include "hw/hw.h"
26
+
29
@@ -XXX,XX +XXX,XX @@ static uint64_t integratorcm_read(void *opaque, hwaddr offset,
27
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
30
/* ??? Voltage control unimplemented. */
28
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
31
return 0;
29
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
32
default:
30
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
33
- hw_error("integratorcm_read: Unimplemented offset 0x%x\n",
31
index XXXXXXX..XXXXXXX 100644
34
- (int)offset);
32
--- a/target/arm/sve_helper.c
35
+ qemu_log_mask(LOG_UNIMP,
33
+++ b/target/arm/sve_helper.c
36
+ "%s: Unimplemented offset 0x%" HWADDR_PRIX "\n",
34
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_index_d)(void *vd, uint64_t start,
37
+ __func__, offset);
35
d[i] = start + i * incr;
38
return 0;
36
}
39
}
37
}
40
}
38
+
41
@@ -XXX,XX +XXX,XX @@ static void integratorcm_write(void *opaque, hwaddr offset,
39
+void HELPER(sve_adr_p32)(void *vd, void *vn, void *vm, uint32_t desc)
42
/* ??? Voltage control unimplemented. */
40
+{
43
break;
41
+ intptr_t i, opr_sz = simd_oprsz(desc) / 4;
44
default:
42
+ uint32_t sh = simd_data(desc);
45
- hw_error("integratorcm_write: Unimplemented offset 0x%x\n",
43
+ uint32_t *d = vd, *n = vn, *m = vm;
46
- (int)offset);
44
+ for (i = 0; i < opr_sz; i += 1) {
47
+ qemu_log_mask(LOG_UNIMP,
45
+ d[i] = n[i] + (m[i] << sh);
48
+ "%s: Unimplemented offset 0x%" HWADDR_PRIX "\n",
46
+ }
49
+ __func__, offset);
47
+}
50
break;
48
+
51
}
49
+void HELPER(sve_adr_p64)(void *vd, void *vn, void *vm, uint32_t desc)
50
+{
51
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
52
+ uint64_t sh = simd_data(desc);
53
+ uint64_t *d = vd, *n = vn, *m = vm;
54
+ for (i = 0; i < opr_sz; i += 1) {
55
+ d[i] = n[i] + (m[i] << sh);
56
+ }
57
+}
58
+
59
+void HELPER(sve_adr_s32)(void *vd, void *vn, void *vm, uint32_t desc)
60
+{
61
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
62
+ uint64_t sh = simd_data(desc);
63
+ uint64_t *d = vd, *n = vn, *m = vm;
64
+ for (i = 0; i < opr_sz; i += 1) {
65
+ d[i] = n[i] + ((uint64_t)(int32_t)m[i] << sh);
66
+ }
67
+}
68
+
69
+void HELPER(sve_adr_u32)(void *vd, void *vn, void *vm, uint32_t desc)
70
+{
71
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
72
+ uint64_t sh = simd_data(desc);
73
+ uint64_t *d = vd, *n = vn, *m = vm;
74
+ for (i = 0; i < opr_sz; i += 1) {
75
+ d[i] = n[i] + ((uint64_t)(uint32_t)m[i] << sh);
76
+ }
77
+}
78
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate-sve.c
81
+++ b/target/arm/translate-sve.c
82
@@ -XXX,XX +XXX,XX @@ static bool trans_RDVL(DisasContext *s, arg_RDVL *a, uint32_t insn)
83
return true;
84
}
52
}
85
53
@@ -XXX,XX +XXX,XX @@ static uint64_t icp_pic_read(void *opaque, hwaddr offset,
86
+/*
54
case 5: /* INT_SOFTCLR */
87
+ *** SVE Compute Vector Address Group
55
case 11: /* FRQ_ENABLECLR */
88
+ */
56
default:
89
+
57
- printf ("icp_pic_read: Bad register offset 0x%x\n", (int)offset);
90
+static bool do_adr(DisasContext *s, arg_rrri *a, gen_helper_gvec_3 *fn)
58
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
91
+{
59
+ __func__, offset);
92
+ if (sve_access_check(s)) {
60
return 0;
93
+ unsigned vsz = vec_full_reg_size(s);
61
}
94
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
62
}
95
+ vec_full_reg_offset(s, a->rn),
63
@@ -XXX,XX +XXX,XX @@ static void icp_pic_write(void *opaque, hwaddr offset,
96
+ vec_full_reg_offset(s, a->rm),
64
case 8: /* FRQ_STATUS */
97
+ vsz, vsz, a->imm, fn);
65
case 9: /* FRQ_RAWSTAT */
98
+ }
66
default:
99
+ return true;
67
- printf ("icp_pic_write: Bad register offset 0x%x\n", (int)offset);
100
+}
68
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
101
+
69
+ __func__, offset);
102
+static bool trans_ADR_p32(DisasContext *s, arg_rrri *a, uint32_t insn)
70
return;
103
+{
71
}
104
+ return do_adr(s, a, gen_helper_sve_adr_p32);
72
icp_pic_update(s);
105
+}
73
@@ -XXX,XX +XXX,XX @@ static uint64_t icp_control_read(void *opaque, hwaddr offset,
106
+
74
case 3: /* CP_DECODE */
107
+static bool trans_ADR_p64(DisasContext *s, arg_rrri *a, uint32_t insn)
75
return 0x11;
108
+{
76
default:
109
+ return do_adr(s, a, gen_helper_sve_adr_p64);
77
- hw_error("icp_control_read: Bad offset %x\n", (int)offset);
110
+}
78
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
111
+
79
+ __func__, offset);
112
+static bool trans_ADR_s32(DisasContext *s, arg_rrri *a, uint32_t insn)
80
return 0;
113
+{
81
}
114
+ return do_adr(s, a, gen_helper_sve_adr_s32);
82
}
115
+}
83
@@ -XXX,XX +XXX,XX @@ static void icp_control_write(void *opaque, hwaddr offset,
116
+
84
/* Nothing interesting implemented yet. */
117
+static bool trans_ADR_u32(DisasContext *s, arg_rrri *a, uint32_t insn)
85
break;
118
+{
86
default:
119
+ return do_adr(s, a, gen_helper_sve_adr_u32);
87
- hw_error("icp_control_write: Bad offset %x\n", (int)offset);
120
+}
88
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
121
+
89
+ __func__, offset);
122
/*
90
}
123
*** SVE Predicate Logical Operations Group
91
}
124
*/
92
125
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/arm/sve.decode
128
+++ b/target/arm/sve.decode
129
@@ -XXX,XX +XXX,XX @@
130
131
&rr_esz rd rn esz
132
&rri rd rn imm
133
+&rrri rd rn rm imm
134
&rri_esz rd rn imm esz
135
&rrr_esz rd rn rm esz
136
&rpr_esz rd pg rn esz
137
@@ -XXX,XX +XXX,XX @@
138
# Three operand, vector element size
139
@rd_rn_rm ........ esz:2 . rm:5 ... ... rn:5 rd:5 &rrr_esz
140
141
+# Three operand with "memory" size, aka immediate left shift
142
+@rd_rn_msz_rm ........ ... rm:5 .... imm:2 rn:5 rd:5 &rrri
143
+
144
# Two register operand, with governing predicate, vector element size
145
@rdn_pg_rm ........ esz:2 ... ... ... pg:3 rm:5 rd:5 \
146
&rprr_esz rn=%reg_movprfx
147
@@ -XXX,XX +XXX,XX @@ ASR_zzw 00000100 .. 1 ..... 1000 00 ..... ..... @rd_rn_rm
148
LSR_zzw 00000100 .. 1 ..... 1000 01 ..... ..... @rd_rn_rm
149
LSL_zzw 00000100 .. 1 ..... 1000 11 ..... ..... @rd_rn_rm
150
151
+### SVE Compute Vector Address Group
152
+
153
+# SVE vector address generation
154
+ADR_s32 00000100 00 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
155
+ADR_u32 00000100 01 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
156
+ADR_p32 00000100 10 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
157
+ADR_p64 00000100 11 1 ..... 1010 .. ..... ..... @rd_rn_msz_rm
158
+
159
### SVE Predicate Logical Operations Group
160
161
# SVE predicate logical operations
162
--
93
--
163
2.17.0
94
2.20.1
164
95
165
96
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
These were the instructions that were stubbed out when
3
hw_error() calls exit(). This is a bit overkill when we can log
4
introducing the decode skeleton.
4
the accesses as unimplemented or guest error.
5
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
When fuzzing the devices, we don't want the whole process to
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
exit. Replace some hw_error() calls by qemu_log_mask().
8
Message-id: 20180516223007.10256-4-richard.henderson@linaro.org
8
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
11
Message-id: 20200518140309.5220-3-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
target/arm/translate-sve.c | 55 ++++++++++++++++++++++++++++++++------
14
hw/arm/pxa2xx_gpio.c | 7 ++++---
12
1 file changed, 47 insertions(+), 8 deletions(-)
15
hw/display/pxa2xx_lcd.c | 8 +++++---
16
hw/dma/pxa2xx_dma.c | 14 +++++++++-----
17
3 files changed, 18 insertions(+), 11 deletions(-)
13
18
14
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
19
diff --git a/hw/arm/pxa2xx_gpio.c b/hw/arm/pxa2xx_gpio.c
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-sve.c
21
--- a/hw/arm/pxa2xx_gpio.c
17
+++ b/target/arm/translate-sve.c
22
+++ b/hw/arm/pxa2xx_gpio.c
18
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@
19
* Implement all of the translator functions referenced by the decoder.
24
25
#include "qemu/osdep.h"
26
#include "cpu.h"
27
-#include "hw/hw.h"
28
#include "hw/irq.h"
29
#include "hw/qdev-properties.h"
30
#include "hw/sysbus.h"
31
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_gpio_read(void *opaque, hwaddr offset,
32
return s->status[bank];
33
34
default:
35
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
36
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
37
+ __func__, offset);
38
}
39
40
return 0;
41
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_gpio_write(void *opaque, hwaddr offset,
42
break;
43
44
default:
45
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
46
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
47
+ __func__, offset);
48
}
49
}
50
51
diff --git a/hw/display/pxa2xx_lcd.c b/hw/display/pxa2xx_lcd.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/hw/display/pxa2xx_lcd.c
54
+++ b/hw/display/pxa2xx_lcd.c
55
@@ -XXX,XX +XXX,XX @@
20
*/
56
*/
21
57
22
-static bool trans_AND_zzz(DisasContext *s, arg_AND_zzz *a, uint32_t insn)
58
#include "qemu/osdep.h"
23
+/* Invoke a vector expander on two Zregs. */
59
-#include "hw/hw.h"
24
+static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
60
+#include "qemu/log.h"
25
+ int esz, int rd, int rn)
61
#include "hw/irq.h"
26
{
62
#include "migration/vmstate.h"
27
- return false;
63
#include "ui/console.h"
28
+ if (sve_access_check(s)) {
64
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_lcdc_read(void *opaque, hwaddr offset,
29
+ unsigned vsz = vec_full_reg_size(s);
65
30
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
66
default:
31
+ vec_full_reg_offset(s, rn), vsz, vsz);
67
fail:
32
+ }
68
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
33
+ return true;
69
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
70
+ __func__, offset);
71
}
72
73
return 0;
74
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_write(void *opaque, hwaddr offset,
75
76
default:
77
fail:
78
- hw_error("%s: Bad offset " REG_FMT "\n", __func__, offset);
79
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
80
+ __func__, offset);
81
}
34
}
82
}
35
83
36
-static bool trans_ORR_zzz(DisasContext *s, arg_ORR_zzz *a, uint32_t insn)
84
diff --git a/hw/dma/pxa2xx_dma.c b/hw/dma/pxa2xx_dma.c
37
+/* Invoke a vector expander on three Zregs. */
85
index XXXXXXX..XXXXXXX 100644
38
+static bool do_vector3_z(DisasContext *s, GVecGen3Fn *gvec_fn,
86
--- a/hw/dma/pxa2xx_dma.c
39
+ int esz, int rd, int rn, int rm)
87
+++ b/hw/dma/pxa2xx_dma.c
40
{
88
@@ -XXX,XX +XXX,XX @@
41
- return false;
89
*/
42
+ if (sve_access_check(s)) {
90
43
+ unsigned vsz = vec_full_reg_size(s);
91
#include "qemu/osdep.h"
44
+ gvec_fn(esz, vec_full_reg_offset(s, rd),
92
+#include "qemu/log.h"
45
+ vec_full_reg_offset(s, rn),
93
#include "hw/hw.h"
46
+ vec_full_reg_offset(s, rm), vsz, vsz);
94
#include "hw/irq.h"
47
+ }
95
#include "hw/qdev-properties.h"
48
+ return true;
96
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_dma_read(void *opaque, hwaddr offset,
97
unsigned int channel;
98
99
if (size != 4) {
100
- hw_error("%s: Bad access width\n", __func__);
101
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad access width %u\n",
102
+ __func__, size);
103
return 5;
104
}
105
106
@@ -XXX,XX +XXX,XX @@ static uint64_t pxa2xx_dma_read(void *opaque, hwaddr offset,
107
return s->chan[channel].cmd;
108
}
109
}
110
-
111
- hw_error("%s: Bad offset 0x" TARGET_FMT_plx "\n", __func__, offset);
112
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
113
+ __func__, offset);
114
return 7;
49
}
115
}
50
116
51
-static bool trans_EOR_zzz(DisasContext *s, arg_EOR_zzz *a, uint32_t insn)
117
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_dma_write(void *opaque, hwaddr offset,
52
+/* Invoke a vector move on two Zregs. */
118
unsigned int channel;
53
+static bool do_mov_z(DisasContext *s, int rd, int rn)
119
54
{
120
if (size != 4) {
55
- return false;
121
- hw_error("%s: Bad access width\n", __func__);
56
+ return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
122
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad access width %u\n",
123
+ __func__, size);
124
return;
125
}
126
127
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_dma_write(void *opaque, hwaddr offset,
128
break;
129
}
130
fail:
131
- hw_error("%s: Bad offset " TARGET_FMT_plx "\n", __func__, offset);
132
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
133
+ __func__, offset);
134
}
57
}
135
}
58
136
59
-static bool trans_BIC_zzz(DisasContext *s, arg_BIC_zzz *a, uint32_t insn)
60
+/*
61
+ *** SVE Logical - Unpredicated Group
62
+ */
63
+
64
+static bool trans_AND_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
65
{
66
- return false;
67
+ return do_vector3_z(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
68
+}
69
+
70
+static bool trans_ORR_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
71
+{
72
+ if (a->rn == a->rm) { /* MOV */
73
+ return do_mov_z(s, a->rd, a->rn);
74
+ } else {
75
+ return do_vector3_z(s, tcg_gen_gvec_or, 0, a->rd, a->rn, a->rm);
76
+ }
77
+}
78
+
79
+static bool trans_EOR_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
80
+{
81
+ return do_vector3_z(s, tcg_gen_gvec_xor, 0, a->rd, a->rn, a->rm);
82
+}
83
+
84
+static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
85
+{
86
+ return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
87
}
88
--
137
--
89
2.17.0
138
2.20.1
90
139
91
140
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
hw_error() calls exit(). This is a bit overkill when we can log
4
Message-id: 20180516223007.10256-5-richard.henderson@linaro.org
4
the accesses as unimplemented or guest error.
5
6
When fuzzing the devices, we don't want the whole process to
7
exit. Replace some hw_error() calls by qemu_log_mask().
8
9
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
12
Message-id: 20200518140309.5220-4-f4bug@amsat.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
---
14
---
7
target/arm/translate-sve.c | 127 +++++++++++++++++++++++++++++++++++++
15
hw/char/xilinx_uartlite.c | 5 +++--
8
target/arm/sve.decode | 20 ++++++
16
1 file changed, 3 insertions(+), 2 deletions(-)
9
2 files changed, 147 insertions(+)
10
17
11
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
18
diff --git a/hw/char/xilinx_uartlite.c b/hw/char/xilinx_uartlite.c
12
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-sve.c
20
--- a/hw/char/xilinx_uartlite.c
14
+++ b/target/arm/translate-sve.c
21
+++ b/hw/char/xilinx_uartlite.c
15
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@
16
* Implement all of the translator functions referenced by the decoder.
17
*/
23
*/
18
24
19
+/* Return the offset info CPUARMState of the predicate vector register Pn.
25
#include "qemu/osdep.h"
20
+ * Note for this purpose, FFR is P16.
26
-#include "hw/hw.h"
21
+ */
27
+#include "qemu/log.h"
22
+static inline int pred_full_reg_offset(DisasContext *s, int regno)
28
#include "hw/irq.h"
23
+{
29
#include "hw/qdev-properties.h"
24
+ return offsetof(CPUARMState, vfp.pregs[regno]);
30
#include "hw/sysbus.h"
25
+}
31
@@ -XXX,XX +XXX,XX @@ uart_write(void *opaque, hwaddr addr,
26
+
32
switch (addr)
27
+/* Return the byte size of the whole predicate register, VL / 64. */
33
{
28
+static inline int pred_full_reg_size(DisasContext *s)
34
case R_STATUS:
29
+{
35
- hw_error("write to UART STATUS?\n");
30
+ return s->sve_len >> 3;
36
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: write to UART STATUS\n",
31
+}
37
+ __func__);
32
+
38
break;
33
/* Invoke a vector expander on two Zregs. */
39
34
static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
40
case R_CTRL:
35
int esz, int rd, int rn)
36
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
37
{
38
return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
39
}
40
+
41
+/*
42
+ *** SVE Memory - 32-bit Gather and Unsized Contiguous Group
43
+ */
44
+
45
+/* Subroutine loading a vector register at VOFS of LEN bytes.
46
+ * The load should begin at the address Rn + IMM.
47
+ */
48
+
49
+static void do_ldr(DisasContext *s, uint32_t vofs, uint32_t len,
50
+ int rn, int imm)
51
+{
52
+ uint32_t len_align = QEMU_ALIGN_DOWN(len, 8);
53
+ uint32_t len_remain = len % 8;
54
+ uint32_t nparts = len / 8 + ctpop8(len_remain);
55
+ int midx = get_mem_index(s);
56
+ TCGv_i64 addr, t0, t1;
57
+
58
+ addr = tcg_temp_new_i64();
59
+ t0 = tcg_temp_new_i64();
60
+
61
+ /* Note that unpredicated load/store of vector/predicate registers
62
+ * are defined as a stream of bytes, which equates to little-endian
63
+ * operations on larger quantities. There is no nice way to force
64
+ * a little-endian load for aarch64_be-linux-user out of line.
65
+ *
66
+ * Attempt to keep code expansion to a minimum by limiting the
67
+ * amount of unrolling done.
68
+ */
69
+ if (nparts <= 4) {
70
+ int i;
71
+
72
+ for (i = 0; i < len_align; i += 8) {
73
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + i);
74
+ tcg_gen_qemu_ld_i64(t0, addr, midx, MO_LEQ);
75
+ tcg_gen_st_i64(t0, cpu_env, vofs + i);
76
+ }
77
+ } else {
78
+ TCGLabel *loop = gen_new_label();
79
+ TCGv_ptr tp, i = tcg_const_local_ptr(0);
80
+
81
+ gen_set_label(loop);
82
+
83
+ /* Minimize the number of local temps that must be re-read from
84
+ * the stack each iteration. Instead, re-compute values other
85
+ * than the loop counter.
86
+ */
87
+ tp = tcg_temp_new_ptr();
88
+ tcg_gen_addi_ptr(tp, i, imm);
89
+ tcg_gen_extu_ptr_i64(addr, tp);
90
+ tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, rn));
91
+
92
+ tcg_gen_qemu_ld_i64(t0, addr, midx, MO_LEQ);
93
+
94
+ tcg_gen_add_ptr(tp, cpu_env, i);
95
+ tcg_gen_addi_ptr(i, i, 8);
96
+ tcg_gen_st_i64(t0, tp, vofs);
97
+ tcg_temp_free_ptr(tp);
98
+
99
+ tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
100
+ tcg_temp_free_ptr(i);
101
+ }
102
+
103
+ /* Predicate register loads can be any multiple of 2.
104
+ * Note that we still store the entire 64-bit unit into cpu_env.
105
+ */
106
+ if (len_remain) {
107
+ tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm + len_align);
108
+
109
+ switch (len_remain) {
110
+ case 2:
111
+ case 4:
112
+ case 8:
113
+ tcg_gen_qemu_ld_i64(t0, addr, midx, MO_LE | ctz32(len_remain));
114
+ break;
115
+
116
+ case 6:
117
+ t1 = tcg_temp_new_i64();
118
+ tcg_gen_qemu_ld_i64(t0, addr, midx, MO_LEUL);
119
+ tcg_gen_addi_i64(addr, addr, 4);
120
+ tcg_gen_qemu_ld_i64(t1, addr, midx, MO_LEUW);
121
+ tcg_gen_deposit_i64(t0, t0, t1, 32, 32);
122
+ tcg_temp_free_i64(t1);
123
+ break;
124
+
125
+ default:
126
+ g_assert_not_reached();
127
+ }
128
+ tcg_gen_st_i64(t0, cpu_env, vofs + len_align);
129
+ }
130
+ tcg_temp_free_i64(addr);
131
+ tcg_temp_free_i64(t0);
132
+}
133
+
134
+static bool trans_LDR_zri(DisasContext *s, arg_rri *a, uint32_t insn)
135
+{
136
+ if (sve_access_check(s)) {
137
+ int size = vec_full_reg_size(s);
138
+ int off = vec_full_reg_offset(s, a->rd);
139
+ do_ldr(s, off, size, a->rn, a->imm * size);
140
+ }
141
+ return true;
142
+}
143
+
144
+static bool trans_LDR_pri(DisasContext *s, arg_rri *a, uint32_t insn)
145
+{
146
+ if (sve_access_check(s)) {
147
+ int size = pred_full_reg_size(s);
148
+ int off = pred_full_reg_offset(s, a->rd);
149
+ do_ldr(s, off, size, a->rn, a->imm * size);
150
+ }
151
+ return true;
152
+}
153
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
154
index XXXXXXX..XXXXXXX 100644
155
--- a/target/arm/sve.decode
156
+++ b/target/arm/sve.decode
157
@@ -XXX,XX +XXX,XX @@
158
# This file is processed by scripts/decodetree.py
159
#
160
161
+###########################################################################
162
+# Named fields. These are primarily for disjoint fields.
163
+
164
+%imm9_16_10 16:s6 10:3
165
+
166
###########################################################################
167
# Named attribute sets. These are used to make nice(er) names
168
# when creating helpers common to those for the individual
169
# instruction patterns.
170
171
+&rri rd rn imm
172
&rrr_esz rd rn rm esz
173
174
###########################################################################
175
@@ -XXX,XX +XXX,XX @@
176
# Three operand with unused vector element size
177
@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
178
179
+# Basic Load/Store with 9-bit immediate offset
180
+@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
181
+ &rri imm=%imm9_16_10
182
+@rd_rn_i9 ........ ........ ...... rn:5 rd:5 \
183
+ &rri imm=%imm9_16_10
184
+
185
###########################################################################
186
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
187
188
@@ -XXX,XX +XXX,XX @@ AND_zzz 00000100 00 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
189
ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
190
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
191
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
192
+
193
+### SVE Memory - 32-bit Gather and Unsized Contiguous Group
194
+
195
+# SVE load predicate register
196
+LDR_pri 10000101 10 ...... 000 ... ..... 0 .... @pd_rn_i9
197
+
198
+# SVE load vector register
199
+LDR_zri 10000101 10 ...... 010 ... ..... ..... @rd_rn_i9
200
--
41
--
201
2.17.0
42
2.20.1
202
43
203
44
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
hw_error() calls exit(). This is a bit overkill when we can log
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
the accesses as unimplemented or guest error.
5
Message-id: 20180516223007.10256-13-richard.henderson@linaro.org
5
6
When fuzzing the devices, we don't want the whole process to
7
exit. Replace some hw_error() calls by qemu_log_mask().
8
9
Per the datasheet "Exynos 4412 RISC Microprocessor Rev 1.00"
10
Chapter 25 "Multi Core Timer (MCT)" figure 1 and table 4,
11
the default value on the APB bus is 0.
12
13
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
15
Message-id: 20200518140309.5220-5-f4bug@amsat.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
17
---
8
target/arm/helper-sve.h | 21 +++++++++++++++++++++
18
hw/timer/exynos4210_mct.c | 12 +++++-------
9
target/arm/sve_helper.c | 35 +++++++++++++++++++++++++++++++++++
19
1 file changed, 5 insertions(+), 7 deletions(-)
10
target/arm/translate-sve.c | 24 ++++++++++++++++++++++++
11
target/arm/sve.decode | 6 ++++++
12
4 files changed, 86 insertions(+)
13
20
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
21
diff --git a/hw/timer/exynos4210_mct.c b/hw/timer/exynos4210_mct.c
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
23
--- a/hw/timer/exynos4210_mct.c
17
+++ b/target/arm/helper-sve.h
24
+++ b/hw/timer/exynos4210_mct.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_lsl_zpzz_s, TCG_CALL_NO_RWG,
25
@@ -XXX,XX +XXX,XX @@
19
DEF_HELPER_FLAGS_5(sve_lsl_zpzz_d, TCG_CALL_NO_RWG,
26
20
void, ptr, ptr, ptr, ptr, i32)
27
#include "qemu/osdep.h"
21
28
#include "qemu/log.h"
22
+DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
29
-#include "hw/hw.h"
23
+ void, ptr, ptr, ptr, ptr, i32)
30
#include "hw/sysbus.h"
24
+DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
31
#include "migration/vmstate.h"
25
+ void, ptr, ptr, ptr, ptr, i32)
32
#include "qemu/timer.h"
26
+DEF_HELPER_FLAGS_5(sve_asr_zpzw_s, TCG_CALL_NO_RWG,
33
@@ -XXX,XX +XXX,XX @@
27
+ void, ptr, ptr, ptr, ptr, i32)
34
#include "hw/ptimer.h"
28
+
35
29
+DEF_HELPER_FLAGS_5(sve_lsr_zpzw_b, TCG_CALL_NO_RWG,
36
#include "hw/arm/exynos4210.h"
30
+ void, ptr, ptr, ptr, ptr, i32)
37
-#include "hw/hw.h"
31
+DEF_HELPER_FLAGS_5(sve_lsr_zpzw_h, TCG_CALL_NO_RWG,
38
#include "hw/irq.h"
32
+ void, ptr, ptr, ptr, ptr, i32)
39
33
+DEF_HELPER_FLAGS_5(sve_lsr_zpzw_s, TCG_CALL_NO_RWG,
40
//#define DEBUG_MCT
34
+ void, ptr, ptr, ptr, ptr, i32)
41
@@ -XXX,XX +XXX,XX @@ static uint64_t exynos4210_mct_read(void *opaque, hwaddr offset,
35
+
42
int index;
36
+DEF_HELPER_FLAGS_5(sve_lsl_zpzw_b, TCG_CALL_NO_RWG,
43
int shift;
37
+ void, ptr, ptr, ptr, ptr, i32)
44
uint64_t count;
38
+DEF_HELPER_FLAGS_5(sve_lsl_zpzw_h, TCG_CALL_NO_RWG,
45
- uint32_t value;
39
+ void, ptr, ptr, ptr, ptr, i32)
46
+ uint32_t value = 0;
40
+DEF_HELPER_FLAGS_5(sve_lsl_zpzw_s, TCG_CALL_NO_RWG,
47
int lt_i;
41
+ void, ptr, ptr, ptr, ptr, i32)
48
42
+
49
switch (offset) {
43
DEF_HELPER_FLAGS_3(sve_orv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
50
@@ -XXX,XX +XXX,XX @@ static uint64_t exynos4210_mct_read(void *opaque, hwaddr offset,
44
DEF_HELPER_FLAGS_3(sve_orv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
51
break;
45
DEF_HELPER_FLAGS_3(sve_orv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
52
46
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
53
default:
47
index XXXXXXX..XXXXXXX 100644
54
- hw_error("exynos4210.mct: bad read offset "
48
--- a/target/arm/sve_helper.c
55
- TARGET_FMT_plx "\n", offset);
49
+++ b/target/arm/sve_helper.c
56
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
50
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve_lsl_zpzz_d, uint64_t, DO_LSL)
57
+ __func__, offset);
51
#undef DO_ZPZZ
58
break;
52
#undef DO_ZPZZ_D
59
}
53
60
return value;
54
+/* Three-operand expander, controlled by a predicate, in which the
61
@@ -XXX,XX +XXX,XX @@ static void exynos4210_mct_write(void *opaque, hwaddr offset,
55
+ * third operand is "wide". That is, for D = N op M, the same 64-bit
62
break;
56
+ * value of M is used with all of the narrower values of N.
63
57
+ */
64
default:
58
+#define DO_ZPZW(NAME, TYPE, TYPEW, H, OP) \
65
- hw_error("exynos4210.mct: bad write offset "
59
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
66
- TARGET_FMT_plx "\n", offset);
60
+{ \
67
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset 0x%" HWADDR_PRIX "\n",
61
+ intptr_t i, opr_sz = simd_oprsz(desc); \
68
+ __func__, offset);
62
+ for (i = 0; i < opr_sz; ) { \
69
break;
63
+ uint8_t pg = *(uint8_t *)(vg + H1(i >> 3)); \
64
+ TYPEW mm = *(TYPEW *)(vm + i); \
65
+ do { \
66
+ if (pg & 1) { \
67
+ TYPE nn = *(TYPE *)(vn + H(i)); \
68
+ *(TYPE *)(vd + H(i)) = OP(nn, mm); \
69
+ } \
70
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
71
+ } while (i & 7); \
72
+ } \
73
+}
74
+
75
+DO_ZPZW(sve_asr_zpzw_b, int8_t, uint64_t, H1, DO_ASR)
76
+DO_ZPZW(sve_lsr_zpzw_b, uint8_t, uint64_t, H1, DO_LSR)
77
+DO_ZPZW(sve_lsl_zpzw_b, uint8_t, uint64_t, H1, DO_LSL)
78
+
79
+DO_ZPZW(sve_asr_zpzw_h, int16_t, uint64_t, H1_2, DO_ASR)
80
+DO_ZPZW(sve_lsr_zpzw_h, uint16_t, uint64_t, H1_2, DO_LSR)
81
+DO_ZPZW(sve_lsl_zpzw_h, uint16_t, uint64_t, H1_2, DO_LSL)
82
+
83
+DO_ZPZW(sve_asr_zpzw_s, int32_t, uint64_t, H1_4, DO_ASR)
84
+DO_ZPZW(sve_lsr_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSR)
85
+DO_ZPZW(sve_lsl_zpzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
86
+
87
+#undef DO_ZPZW
88
+
89
/* Two-operand reduction expander, controlled by a predicate.
90
* The difference between TYPERED and TYPERET has to do with
91
* sign-extension. E.g. for SMAX, TYPERED must be signed,
92
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/translate-sve.c
95
+++ b/target/arm/translate-sve.c
96
@@ -XXX,XX +XXX,XX @@ static bool trans_ASRD(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
97
}
70
}
98
}
71
}
99
100
+/*
101
+ *** SVE Bitwise Shift - Predicated Group
102
+ */
103
+
104
+#define DO_ZPZW(NAME, name) \
105
+static bool trans_##NAME##_zpzw(DisasContext *s, arg_rprr_esz *a, \
106
+ uint32_t insn) \
107
+{ \
108
+ static gen_helper_gvec_4 * const fns[3] = { \
109
+ gen_helper_sve_##name##_zpzw_b, gen_helper_sve_##name##_zpzw_h, \
110
+ gen_helper_sve_##name##_zpzw_s, \
111
+ }; \
112
+ if (a->esz < 0 || a->esz >= 3) { \
113
+ return false; \
114
+ } \
115
+ return do_zpzz_ool(s, a, fns[a->esz]); \
116
+}
117
+
118
+DO_ZPZW(ASR, asr)
119
+DO_ZPZW(LSR, lsr)
120
+DO_ZPZW(LSL, lsl)
121
+
122
+#undef DO_ZPZW
123
+
124
/*
125
*** SVE Predicate Logical Operations Group
126
*/
127
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/sve.decode
130
+++ b/target/arm/sve.decode
131
@@ -XXX,XX +XXX,XX @@ ASR_zpzz 00000100 .. 010 100 100 ... ..... ..... @rdm_pg_rn # ASRR
132
LSR_zpzz 00000100 .. 010 101 100 ... ..... ..... @rdm_pg_rn # LSRR
133
LSL_zpzz 00000100 .. 010 111 100 ... ..... ..... @rdm_pg_rn # LSLR
134
135
+# SVE bitwise shift by wide elements (predicated)
136
+# Note these require size != 3.
137
+ASR_zpzw 00000100 .. 011 000 100 ... ..... ..... @rdn_pg_rm
138
+LSR_zpzw 00000100 .. 011 001 100 ... ..... ..... @rdn_pg_rm
139
+LSL_zpzw 00000100 .. 011 011 100 ... ..... ..... @rdn_pg_rm
140
+
141
### SVE Logical - Unpredicated Group
142
143
# SVE bitwise logical operations (unpredicated)
144
--
72
--
145
2.17.0
73
2.20.1
146
74
147
75
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Geert Uytterhoeven <geert+renesas@glider.be>
2
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Add a definition for the number of GPIO lines controlled by a PL061
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
instance, and use it instead of the hardcoded magic value 8.
5
Message-id: 20180516223007.10256-11-richard.henderson@linaro.org
5
6
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20200519085143.1376-1-geert+renesas@glider.be
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/helper-sve.h | 25 ++++
12
hw/gpio/pl061.c | 12 +++++++-----
9
target/arm/sve_helper.c | 264 +++++++++++++++++++++++++++++++++++++
13
1 file changed, 7 insertions(+), 5 deletions(-)
10
target/arm/translate-sve.c | 130 ++++++++++++++++++
11
target/arm/sve.decode | 26 ++++
12
4 files changed, 445 insertions(+)
13
14
14
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
15
diff --git a/hw/gpio/pl061.c b/hw/gpio/pl061.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-sve.h
17
--- a/hw/gpio/pl061.c
17
+++ b/target/arm/helper-sve.h
18
+++ b/hw/gpio/pl061.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_uminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ static const uint8_t pl061_id_luminary[12] =
19
DEF_HELPER_FLAGS_3(sve_uminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
20
#define TYPE_PL061 "pl061"
20
DEF_HELPER_FLAGS_3(sve_uminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
21
#define PL061(obj) OBJECT_CHECK(PL061State, (obj), TYPE_PL061)
21
22
22
+DEF_HELPER_FLAGS_3(sve_clr_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
23
+#define N_GPIOS 8
23
+DEF_HELPER_FLAGS_3(sve_clr_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_3(sve_clr_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_3(sve_clr_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
26
+
24
+
27
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
25
typedef struct PL061State {
28
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
26
SysBusDevice parent_obj;
29
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
27
30
+DEF_HELPER_FLAGS_4(sve_asr_zpzi_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
28
@@ -XXX,XX +XXX,XX @@ typedef struct PL061State {
31
+
29
uint32_t cr;
32
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
30
uint32_t amsel;
33
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
31
qemu_irq irq;
34
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
- qemu_irq out[8];
35
+DEF_HELPER_FLAGS_4(sve_lsr_zpzi_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
+ qemu_irq out[N_GPIOS];
36
+
34
const unsigned char *id;
37
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
uint32_t rsvd_start; /* reserved area: [rsvd_start, 0xfcc] */
38
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
} PL061State;
39
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
@@ -XXX,XX +XXX,XX @@ static void pl061_update(PL061State *s)
40
+DEF_HELPER_FLAGS_4(sve_lsl_zpzi_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
38
changed = s->old_out_data ^ out;
41
+
39
if (changed) {
42
+DEF_HELPER_FLAGS_4(sve_asrd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
40
s->old_out_data = out;
43
+DEF_HELPER_FLAGS_4(sve_asrd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
41
- for (i = 0; i < 8; i++) {
44
+DEF_HELPER_FLAGS_4(sve_asrd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
42
+ for (i = 0; i < N_GPIOS; i++) {
45
+DEF_HELPER_FLAGS_4(sve_asrd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
43
mask = 1 << i;
46
+
44
if (changed & mask) {
47
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
45
DPRINTF("Set output %d = %d\n", i, (out & mask) != 0);
48
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
46
@@ -XXX,XX +XXX,XX @@ static void pl061_update(PL061State *s)
49
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
47
changed = (s->old_in_data ^ s->data) & ~s->dir;
50
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
48
if (changed) {
51
index XXXXXXX..XXXXXXX 100644
49
s->old_in_data = s->data;
52
--- a/target/arm/sve_helper.c
50
- for (i = 0; i < 8; i++) {
53
+++ b/target/arm/sve_helper.c
51
+ for (i = 0; i < N_GPIOS; i++) {
54
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
52
mask = 1 << i;
55
return flags;
53
if (changed & mask) {
54
DPRINTF("Changed input %d = %d\n", i, (s->data & mask) != 0);
55
@@ -XXX,XX +XXX,XX @@ static void pl061_init(Object *obj)
56
memory_region_init_io(&s->iomem, obj, &pl061_ops, s, "pl061", 0x1000);
57
sysbus_init_mmio(sbd, &s->iomem);
58
sysbus_init_irq(sbd, &s->irq);
59
- qdev_init_gpio_in(dev, pl061_set_irq, 8);
60
- qdev_init_gpio_out(dev, s->out, 8);
61
+ qdev_init_gpio_in(dev, pl061_set_irq, N_GPIOS);
62
+ qdev_init_gpio_out(dev, s->out, N_GPIOS);
56
}
63
}
57
64
58
+/* Expand active predicate bits to bytes, for byte elements.
65
static void pl061_class_init(ObjectClass *klass, void *data)
59
+ * for (i = 0; i < 256; ++i) {
60
+ * unsigned long m = 0;
61
+ * for (j = 0; j < 8; j++) {
62
+ * if ((i >> j) & 1) {
63
+ * m |= 0xfful << (j << 3);
64
+ * }
65
+ * }
66
+ * printf("0x%016lx,\n", m);
67
+ * }
68
+ */
69
+static inline uint64_t expand_pred_b(uint8_t byte)
70
+{
71
+ static const uint64_t word[256] = {
72
+ 0x0000000000000000, 0x00000000000000ff, 0x000000000000ff00,
73
+ 0x000000000000ffff, 0x0000000000ff0000, 0x0000000000ff00ff,
74
+ 0x0000000000ffff00, 0x0000000000ffffff, 0x00000000ff000000,
75
+ 0x00000000ff0000ff, 0x00000000ff00ff00, 0x00000000ff00ffff,
76
+ 0x00000000ffff0000, 0x00000000ffff00ff, 0x00000000ffffff00,
77
+ 0x00000000ffffffff, 0x000000ff00000000, 0x000000ff000000ff,
78
+ 0x000000ff0000ff00, 0x000000ff0000ffff, 0x000000ff00ff0000,
79
+ 0x000000ff00ff00ff, 0x000000ff00ffff00, 0x000000ff00ffffff,
80
+ 0x000000ffff000000, 0x000000ffff0000ff, 0x000000ffff00ff00,
81
+ 0x000000ffff00ffff, 0x000000ffffff0000, 0x000000ffffff00ff,
82
+ 0x000000ffffffff00, 0x000000ffffffffff, 0x0000ff0000000000,
83
+ 0x0000ff00000000ff, 0x0000ff000000ff00, 0x0000ff000000ffff,
84
+ 0x0000ff0000ff0000, 0x0000ff0000ff00ff, 0x0000ff0000ffff00,
85
+ 0x0000ff0000ffffff, 0x0000ff00ff000000, 0x0000ff00ff0000ff,
86
+ 0x0000ff00ff00ff00, 0x0000ff00ff00ffff, 0x0000ff00ffff0000,
87
+ 0x0000ff00ffff00ff, 0x0000ff00ffffff00, 0x0000ff00ffffffff,
88
+ 0x0000ffff00000000, 0x0000ffff000000ff, 0x0000ffff0000ff00,
89
+ 0x0000ffff0000ffff, 0x0000ffff00ff0000, 0x0000ffff00ff00ff,
90
+ 0x0000ffff00ffff00, 0x0000ffff00ffffff, 0x0000ffffff000000,
91
+ 0x0000ffffff0000ff, 0x0000ffffff00ff00, 0x0000ffffff00ffff,
92
+ 0x0000ffffffff0000, 0x0000ffffffff00ff, 0x0000ffffffffff00,
93
+ 0x0000ffffffffffff, 0x00ff000000000000, 0x00ff0000000000ff,
94
+ 0x00ff00000000ff00, 0x00ff00000000ffff, 0x00ff000000ff0000,
95
+ 0x00ff000000ff00ff, 0x00ff000000ffff00, 0x00ff000000ffffff,
96
+ 0x00ff0000ff000000, 0x00ff0000ff0000ff, 0x00ff0000ff00ff00,
97
+ 0x00ff0000ff00ffff, 0x00ff0000ffff0000, 0x00ff0000ffff00ff,
98
+ 0x00ff0000ffffff00, 0x00ff0000ffffffff, 0x00ff00ff00000000,
99
+ 0x00ff00ff000000ff, 0x00ff00ff0000ff00, 0x00ff00ff0000ffff,
100
+ 0x00ff00ff00ff0000, 0x00ff00ff00ff00ff, 0x00ff00ff00ffff00,
101
+ 0x00ff00ff00ffffff, 0x00ff00ffff000000, 0x00ff00ffff0000ff,
102
+ 0x00ff00ffff00ff00, 0x00ff00ffff00ffff, 0x00ff00ffffff0000,
103
+ 0x00ff00ffffff00ff, 0x00ff00ffffffff00, 0x00ff00ffffffffff,
104
+ 0x00ffff0000000000, 0x00ffff00000000ff, 0x00ffff000000ff00,
105
+ 0x00ffff000000ffff, 0x00ffff0000ff0000, 0x00ffff0000ff00ff,
106
+ 0x00ffff0000ffff00, 0x00ffff0000ffffff, 0x00ffff00ff000000,
107
+ 0x00ffff00ff0000ff, 0x00ffff00ff00ff00, 0x00ffff00ff00ffff,
108
+ 0x00ffff00ffff0000, 0x00ffff00ffff00ff, 0x00ffff00ffffff00,
109
+ 0x00ffff00ffffffff, 0x00ffffff00000000, 0x00ffffff000000ff,
110
+ 0x00ffffff0000ff00, 0x00ffffff0000ffff, 0x00ffffff00ff0000,
111
+ 0x00ffffff00ff00ff, 0x00ffffff00ffff00, 0x00ffffff00ffffff,
112
+ 0x00ffffffff000000, 0x00ffffffff0000ff, 0x00ffffffff00ff00,
113
+ 0x00ffffffff00ffff, 0x00ffffffffff0000, 0x00ffffffffff00ff,
114
+ 0x00ffffffffffff00, 0x00ffffffffffffff, 0xff00000000000000,
115
+ 0xff000000000000ff, 0xff0000000000ff00, 0xff0000000000ffff,
116
+ 0xff00000000ff0000, 0xff00000000ff00ff, 0xff00000000ffff00,
117
+ 0xff00000000ffffff, 0xff000000ff000000, 0xff000000ff0000ff,
118
+ 0xff000000ff00ff00, 0xff000000ff00ffff, 0xff000000ffff0000,
119
+ 0xff000000ffff00ff, 0xff000000ffffff00, 0xff000000ffffffff,
120
+ 0xff0000ff00000000, 0xff0000ff000000ff, 0xff0000ff0000ff00,
121
+ 0xff0000ff0000ffff, 0xff0000ff00ff0000, 0xff0000ff00ff00ff,
122
+ 0xff0000ff00ffff00, 0xff0000ff00ffffff, 0xff0000ffff000000,
123
+ 0xff0000ffff0000ff, 0xff0000ffff00ff00, 0xff0000ffff00ffff,
124
+ 0xff0000ffffff0000, 0xff0000ffffff00ff, 0xff0000ffffffff00,
125
+ 0xff0000ffffffffff, 0xff00ff0000000000, 0xff00ff00000000ff,
126
+ 0xff00ff000000ff00, 0xff00ff000000ffff, 0xff00ff0000ff0000,
127
+ 0xff00ff0000ff00ff, 0xff00ff0000ffff00, 0xff00ff0000ffffff,
128
+ 0xff00ff00ff000000, 0xff00ff00ff0000ff, 0xff00ff00ff00ff00,
129
+ 0xff00ff00ff00ffff, 0xff00ff00ffff0000, 0xff00ff00ffff00ff,
130
+ 0xff00ff00ffffff00, 0xff00ff00ffffffff, 0xff00ffff00000000,
131
+ 0xff00ffff000000ff, 0xff00ffff0000ff00, 0xff00ffff0000ffff,
132
+ 0xff00ffff00ff0000, 0xff00ffff00ff00ff, 0xff00ffff00ffff00,
133
+ 0xff00ffff00ffffff, 0xff00ffffff000000, 0xff00ffffff0000ff,
134
+ 0xff00ffffff00ff00, 0xff00ffffff00ffff, 0xff00ffffffff0000,
135
+ 0xff00ffffffff00ff, 0xff00ffffffffff00, 0xff00ffffffffffff,
136
+ 0xffff000000000000, 0xffff0000000000ff, 0xffff00000000ff00,
137
+ 0xffff00000000ffff, 0xffff000000ff0000, 0xffff000000ff00ff,
138
+ 0xffff000000ffff00, 0xffff000000ffffff, 0xffff0000ff000000,
139
+ 0xffff0000ff0000ff, 0xffff0000ff00ff00, 0xffff0000ff00ffff,
140
+ 0xffff0000ffff0000, 0xffff0000ffff00ff, 0xffff0000ffffff00,
141
+ 0xffff0000ffffffff, 0xffff00ff00000000, 0xffff00ff000000ff,
142
+ 0xffff00ff0000ff00, 0xffff00ff0000ffff, 0xffff00ff00ff0000,
143
+ 0xffff00ff00ff00ff, 0xffff00ff00ffff00, 0xffff00ff00ffffff,
144
+ 0xffff00ffff000000, 0xffff00ffff0000ff, 0xffff00ffff00ff00,
145
+ 0xffff00ffff00ffff, 0xffff00ffffff0000, 0xffff00ffffff00ff,
146
+ 0xffff00ffffffff00, 0xffff00ffffffffff, 0xffffff0000000000,
147
+ 0xffffff00000000ff, 0xffffff000000ff00, 0xffffff000000ffff,
148
+ 0xffffff0000ff0000, 0xffffff0000ff00ff, 0xffffff0000ffff00,
149
+ 0xffffff0000ffffff, 0xffffff00ff000000, 0xffffff00ff0000ff,
150
+ 0xffffff00ff00ff00, 0xffffff00ff00ffff, 0xffffff00ffff0000,
151
+ 0xffffff00ffff00ff, 0xffffff00ffffff00, 0xffffff00ffffffff,
152
+ 0xffffffff00000000, 0xffffffff000000ff, 0xffffffff0000ff00,
153
+ 0xffffffff0000ffff, 0xffffffff00ff0000, 0xffffffff00ff00ff,
154
+ 0xffffffff00ffff00, 0xffffffff00ffffff, 0xffffffffff000000,
155
+ 0xffffffffff0000ff, 0xffffffffff00ff00, 0xffffffffff00ffff,
156
+ 0xffffffffffff0000, 0xffffffffffff00ff, 0xffffffffffffff00,
157
+ 0xffffffffffffffff,
158
+ };
159
+ return word[byte];
160
+}
161
+
162
+/* Similarly for half-word elements.
163
+ * for (i = 0; i < 256; ++i) {
164
+ * unsigned long m = 0;
165
+ * if (i & 0xaa) {
166
+ * continue;
167
+ * }
168
+ * for (j = 0; j < 8; j += 2) {
169
+ * if ((i >> j) & 1) {
170
+ * m |= 0xfffful << (j << 3);
171
+ * }
172
+ * }
173
+ * printf("[0x%x] = 0x%016lx,\n", i, m);
174
+ * }
175
+ */
176
+static inline uint64_t expand_pred_h(uint8_t byte)
177
+{
178
+ static const uint64_t word[] = {
179
+ [0x01] = 0x000000000000ffff, [0x04] = 0x00000000ffff0000,
180
+ [0x05] = 0x00000000ffffffff, [0x10] = 0x0000ffff00000000,
181
+ [0x11] = 0x0000ffff0000ffff, [0x14] = 0x0000ffffffff0000,
182
+ [0x15] = 0x0000ffffffffffff, [0x40] = 0xffff000000000000,
183
+ [0x41] = 0xffff00000000ffff, [0x44] = 0xffff0000ffff0000,
184
+ [0x45] = 0xffff0000ffffffff, [0x50] = 0xffffffff00000000,
185
+ [0x51] = 0xffffffff0000ffff, [0x54] = 0xffffffffffff0000,
186
+ [0x55] = 0xffffffffffffffff,
187
+ };
188
+ return word[byte & 0x55];
189
+}
190
+
191
+/* Similarly for single word elements. */
192
+static inline uint64_t expand_pred_s(uint8_t byte)
193
+{
194
+ static const uint64_t word[] = {
195
+ [0x01] = 0x00000000ffffffffull,
196
+ [0x10] = 0xffffffff00000000ull,
197
+ [0x11] = 0xffffffffffffffffull,
198
+ };
199
+ return word[byte & 0x11];
200
+}
201
+
202
#define LOGICAL_PPPP(NAME, FUNC) \
203
void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
204
{ \
205
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_pnext)(void *vd, void *vg, uint32_t pred_desc)
206
207
return flags;
208
}
209
+
210
+/* Store zero into every active element of Zd. We will use this for two
211
+ * and three-operand predicated instructions for which logic dictates a
212
+ * zero result. In particular, logical shift by element size, which is
213
+ * otherwise undefined on the host.
214
+ *
215
+ * For element sizes smaller than uint64_t, we use tables to expand
216
+ * the N bits of the controlling predicate to a byte mask, and clear
217
+ * those bytes.
218
+ */
219
+void HELPER(sve_clr_b)(void *vd, void *vg, uint32_t desc)
220
+{
221
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
222
+ uint64_t *d = vd;
223
+ uint8_t *pg = vg;
224
+ for (i = 0; i < opr_sz; i += 1) {
225
+ d[i] &= ~expand_pred_b(pg[H1(i)]);
226
+ }
227
+}
228
+
229
+void HELPER(sve_clr_h)(void *vd, void *vg, uint32_t desc)
230
+{
231
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
232
+ uint64_t *d = vd;
233
+ uint8_t *pg = vg;
234
+ for (i = 0; i < opr_sz; i += 1) {
235
+ d[i] &= ~expand_pred_h(pg[H1(i)]);
236
+ }
237
+}
238
+
239
+void HELPER(sve_clr_s)(void *vd, void *vg, uint32_t desc)
240
+{
241
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
242
+ uint64_t *d = vd;
243
+ uint8_t *pg = vg;
244
+ for (i = 0; i < opr_sz; i += 1) {
245
+ d[i] &= ~expand_pred_s(pg[H1(i)]);
246
+ }
247
+}
248
+
249
+void HELPER(sve_clr_d)(void *vd, void *vg, uint32_t desc)
250
+{
251
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8;
252
+ uint64_t *d = vd;
253
+ uint8_t *pg = vg;
254
+ for (i = 0; i < opr_sz; i += 1) {
255
+ if (pg[H1(i)] & 1) {
256
+ d[i] = 0;
257
+ }
258
+ }
259
+}
260
+
261
+/* Three-operand expander, immediate operand, controlled by a predicate.
262
+ */
263
+#define DO_ZPZI(NAME, TYPE, H, OP) \
264
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
265
+{ \
266
+ intptr_t i, opr_sz = simd_oprsz(desc); \
267
+ TYPE imm = simd_data(desc); \
268
+ for (i = 0; i < opr_sz; ) { \
269
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
270
+ do { \
271
+ if (pg & 1) { \
272
+ TYPE nn = *(TYPE *)(vn + H(i)); \
273
+ *(TYPE *)(vd + H(i)) = OP(nn, imm); \
274
+ } \
275
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
276
+ } while (i & 15); \
277
+ } \
278
+}
279
+
280
+/* Similarly, specialized for 64-bit operands. */
281
+#define DO_ZPZI_D(NAME, TYPE, OP) \
282
+void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc) \
283
+{ \
284
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
285
+ TYPE *d = vd, *n = vn; \
286
+ TYPE imm = simd_data(desc); \
287
+ uint8_t *pg = vg; \
288
+ for (i = 0; i < opr_sz; i += 1) { \
289
+ if (pg[H1(i)] & 1) { \
290
+ TYPE nn = n[i]; \
291
+ d[i] = OP(nn, imm); \
292
+ } \
293
+ } \
294
+}
295
+
296
+#define DO_SHR(N, M) (N >> M)
297
+#define DO_SHL(N, M) (N << M)
298
+
299
+/* Arithmetic shift right for division. This rounds negative numbers
300
+ toward zero as per signed division. Therefore before shifting,
301
+ when N is negative, add 2**M-1. */
302
+#define DO_ASRD(N, M) ((N + (N < 0 ? ((__typeof(N))1 << M) - 1 : 0)) >> M)
303
+
304
+DO_ZPZI(sve_asr_zpzi_b, int8_t, H1, DO_SHR)
305
+DO_ZPZI(sve_asr_zpzi_h, int16_t, H1_2, DO_SHR)
306
+DO_ZPZI(sve_asr_zpzi_s, int32_t, H1_4, DO_SHR)
307
+DO_ZPZI_D(sve_asr_zpzi_d, int64_t, DO_SHR)
308
+
309
+DO_ZPZI(sve_lsr_zpzi_b, uint8_t, H1, DO_SHR)
310
+DO_ZPZI(sve_lsr_zpzi_h, uint16_t, H1_2, DO_SHR)
311
+DO_ZPZI(sve_lsr_zpzi_s, uint32_t, H1_4, DO_SHR)
312
+DO_ZPZI_D(sve_lsr_zpzi_d, uint64_t, DO_SHR)
313
+
314
+DO_ZPZI(sve_lsl_zpzi_b, uint8_t, H1, DO_SHL)
315
+DO_ZPZI(sve_lsl_zpzi_h, uint16_t, H1_2, DO_SHL)
316
+DO_ZPZI(sve_lsl_zpzi_s, uint32_t, H1_4, DO_SHL)
317
+DO_ZPZI_D(sve_lsl_zpzi_d, uint64_t, DO_SHL)
318
+
319
+DO_ZPZI(sve_asrd_b, int8_t, H1, DO_ASRD)
320
+DO_ZPZI(sve_asrd_h, int16_t, H1_2, DO_ASRD)
321
+DO_ZPZI(sve_asrd_s, int32_t, H1_4, DO_ASRD)
322
+DO_ZPZI_D(sve_asrd_d, int64_t, DO_ASRD)
323
+
324
+#undef DO_SHR
325
+#undef DO_SHL
326
+#undef DO_ASRD
327
+#undef DO_ZPZI
328
+#undef DO_ZPZI_D
329
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
330
index XXXXXXX..XXXXXXX 100644
331
--- a/target/arm/translate-sve.c
332
+++ b/target/arm/translate-sve.c
333
@@ -XXX,XX +XXX,XX @@
334
#include "trace-tcg.h"
335
#include "translate-a64.h"
336
337
+/*
338
+ * Helpers for extracting complex instruction fields.
339
+ */
340
+
341
+/* See e.g. ASR (immediate, predicated).
342
+ * Returns -1 for unallocated encoding; diagnose later.
343
+ */
344
+static int tszimm_esz(int x)
345
+{
346
+ x >>= 3; /* discard imm3 */
347
+ return 31 - clz32(x);
348
+}
349
+
350
+static int tszimm_shr(int x)
351
+{
352
+ return (16 << tszimm_esz(x)) - x;
353
+}
354
+
355
+/* See e.g. LSL (immediate, predicated). */
356
+static int tszimm_shl(int x)
357
+{
358
+ return x - (8 << tszimm_esz(x));
359
+}
360
+
361
/*
362
* Include the generated decoder.
363
*/
364
@@ -XXX,XX +XXX,XX @@ static bool trans_SADDV(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
365
366
#undef DO_VPZ
367
368
+/*
369
+ *** SVE Shift by Immediate - Predicated Group
370
+ */
371
+
372
+/* Store zero into every active element of Zd. We will use this for two
373
+ * and three-operand predicated instructions for which logic dictates a
374
+ * zero result.
375
+ */
376
+static bool do_clr_zp(DisasContext *s, int rd, int pg, int esz)
377
+{
378
+ static gen_helper_gvec_2 * const fns[4] = {
379
+ gen_helper_sve_clr_b, gen_helper_sve_clr_h,
380
+ gen_helper_sve_clr_s, gen_helper_sve_clr_d,
381
+ };
382
+ if (sve_access_check(s)) {
383
+ unsigned vsz = vec_full_reg_size(s);
384
+ tcg_gen_gvec_2_ool(vec_full_reg_offset(s, rd),
385
+ pred_full_reg_offset(s, pg),
386
+ vsz, vsz, 0, fns[esz]);
387
+ }
388
+ return true;
389
+}
390
+
391
+static bool do_zpzi_ool(DisasContext *s, arg_rpri_esz *a,
392
+ gen_helper_gvec_3 *fn)
393
+{
394
+ if (sve_access_check(s)) {
395
+ unsigned vsz = vec_full_reg_size(s);
396
+ tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
397
+ vec_full_reg_offset(s, a->rn),
398
+ pred_full_reg_offset(s, a->pg),
399
+ vsz, vsz, a->imm, fn);
400
+ }
401
+ return true;
402
+}
403
+
404
+static bool trans_ASR_zpzi(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
405
+{
406
+ static gen_helper_gvec_3 * const fns[4] = {
407
+ gen_helper_sve_asr_zpzi_b, gen_helper_sve_asr_zpzi_h,
408
+ gen_helper_sve_asr_zpzi_s, gen_helper_sve_asr_zpzi_d,
409
+ };
410
+ if (a->esz < 0) {
411
+ /* Invalid tsz encoding -- see tszimm_esz. */
412
+ return false;
413
+ }
414
+ /* Shift by element size is architecturally valid. For
415
+ arithmetic right-shift, it's the same as by one less. */
416
+ a->imm = MIN(a->imm, (8 << a->esz) - 1);
417
+ return do_zpzi_ool(s, a, fns[a->esz]);
418
+}
419
+
420
+static bool trans_LSR_zpzi(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
421
+{
422
+ static gen_helper_gvec_3 * const fns[4] = {
423
+ gen_helper_sve_lsr_zpzi_b, gen_helper_sve_lsr_zpzi_h,
424
+ gen_helper_sve_lsr_zpzi_s, gen_helper_sve_lsr_zpzi_d,
425
+ };
426
+ if (a->esz < 0) {
427
+ return false;
428
+ }
429
+ /* Shift by element size is architecturally valid.
430
+ For logical shifts, it is a zeroing operation. */
431
+ if (a->imm >= (8 << a->esz)) {
432
+ return do_clr_zp(s, a->rd, a->pg, a->esz);
433
+ } else {
434
+ return do_zpzi_ool(s, a, fns[a->esz]);
435
+ }
436
+}
437
+
438
+static bool trans_LSL_zpzi(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
439
+{
440
+ static gen_helper_gvec_3 * const fns[4] = {
441
+ gen_helper_sve_lsl_zpzi_b, gen_helper_sve_lsl_zpzi_h,
442
+ gen_helper_sve_lsl_zpzi_s, gen_helper_sve_lsl_zpzi_d,
443
+ };
444
+ if (a->esz < 0) {
445
+ return false;
446
+ }
447
+ /* Shift by element size is architecturally valid.
448
+ For logical shifts, it is a zeroing operation. */
449
+ if (a->imm >= (8 << a->esz)) {
450
+ return do_clr_zp(s, a->rd, a->pg, a->esz);
451
+ } else {
452
+ return do_zpzi_ool(s, a, fns[a->esz]);
453
+ }
454
+}
455
+
456
+static bool trans_ASRD(DisasContext *s, arg_rpri_esz *a, uint32_t insn)
457
+{
458
+ static gen_helper_gvec_3 * const fns[4] = {
459
+ gen_helper_sve_asrd_b, gen_helper_sve_asrd_h,
460
+ gen_helper_sve_asrd_s, gen_helper_sve_asrd_d,
461
+ };
462
+ if (a->esz < 0) {
463
+ return false;
464
+ }
465
+ /* Shift by element size is architecturally valid. For arithmetic
466
+ right shift for division, it is a zeroing operation. */
467
+ if (a->imm >= (8 << a->esz)) {
468
+ return do_clr_zp(s, a->rd, a->pg, a->esz);
469
+ } else {
470
+ return do_zpzi_ool(s, a, fns[a->esz]);
471
+ }
472
+}
473
+
474
/*
475
*** SVE Predicate Logical Operations Group
476
*/
477
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
478
index XXXXXXX..XXXXXXX 100644
479
--- a/target/arm/sve.decode
480
+++ b/target/arm/sve.decode
481
@@ -XXX,XX +XXX,XX @@
482
###########################################################################
483
# Named fields. These are primarily for disjoint fields.
484
485
+%imm6_22_5 22:1 5:5
486
%imm9_16_10 16:s6 10:3
487
488
+# A combination of tsz:imm3 -- extract esize.
489
+%tszimm_esz 22:2 5:5 !function=tszimm_esz
490
+# A combination of tsz:imm3 -- extract (2 * esize) - (tsz:imm3)
491
+%tszimm_shr 22:2 5:5 !function=tszimm_shr
492
+# A combination of tsz:imm3 -- extract (tsz:imm3) - esize
493
+%tszimm_shl 22:2 5:5 !function=tszimm_shl
494
+
495
# Either a copy of rd (at bit 0), or a different source
496
# as propagated via the MOVPRFX instruction.
497
%reg_movprfx 0:5
498
@@ -XXX,XX +XXX,XX @@
499
&rpr_esz rd pg rn esz
500
&rprr_s rd pg rn rm s
501
&rprr_esz rd pg rn rm esz
502
+&rpri_esz rd pg rn imm esz
503
504
###########################################################################
505
# Named instruction formats. These are generally used to
506
@@ -XXX,XX +XXX,XX @@
507
# One register operand, with governing predicate, vector element size
508
@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
509
510
+# Two register operand, one immediate operand, with predicate,
511
+# element size encoded as TSZHL. User must fill in imm.
512
+@rdn_pg_tszimm ........ .. ... ... ... pg:3 ..... rd:5 \
513
+ &rpri_esz rn=%reg_movprfx esz=%tszimm_esz
514
+
515
# Basic Load/Store with 9-bit immediate offset
516
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
517
&rri imm=%imm9_16_10
518
@@ -XXX,XX +XXX,XX @@ UMAXV 00000100 .. 001 001 001 ... ..... ..... @rd_pg_rn
519
SMINV 00000100 .. 001 010 001 ... ..... ..... @rd_pg_rn
520
UMINV 00000100 .. 001 011 001 ... ..... ..... @rd_pg_rn
521
522
+### SVE Shift by Immediate - Predicated Group
523
+
524
+# SVE bitwise shift by immediate (predicated)
525
+ASR_zpzi 00000100 .. 000 000 100 ... .. ... ..... \
526
+ @rdn_pg_tszimm imm=%tszimm_shr
527
+LSR_zpzi 00000100 .. 000 001 100 ... .. ... ..... \
528
+ @rdn_pg_tszimm imm=%tszimm_shr
529
+LSL_zpzi 00000100 .. 000 011 100 ... .. ... ..... \
530
+ @rdn_pg_tszimm imm=%tszimm_shl
531
+ASRD 00000100 .. 000 100 100 ... .. ... ..... \
532
+ @rdn_pg_tszimm imm=%tszimm_shr
533
+
534
### SVE Logical - Unpredicated Group
535
536
# SVE bitwise logical operations (unpredicated)
--
2.17.0
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Including only 4, as-yet unimplemented, instruction patterns
so that the whole thing compiles.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/Makefile.objs | 10 ++++++
target/arm/translate-a64.c | 7 ++++-
target/arm/translate-sve.c | 63 ++++++++++++++++++++++++++++++++++++++
.gitignore | 1 +
target/arm/sve.decode | 45 +++++++++++++++++++++++++++
5 files changed, 125 insertions(+), 1 deletion(-)
create mode 100644 target/arm/translate-sve.c
create mode 100644 target/arm/sve.decode

From: Richard Henderson <richard.henderson@linaro.org>

The 8-byte store for the end of a !is_q operation can be
merged with the other stores. Use a no-op vector move
to trigger the expand_clr portion of tcg_gen_gvec_mov.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200519212453.28494-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
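As a plain-C model of the tail-clearing behaviour described above (this is
deliberately not the TCG gvec API; the register size and function name are
invented for illustration): a gvec operation does its work on the first
oprsz bytes and zeroes the destination up to maxsz, so a move whose source
and destination are the same register is effectively just "clear everything
past the written part".

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VREG_BYTES 32   /* assumed vector register size for the demo */

/* Model of a gvec op: do the work on oprsz bytes, zero the tail to maxsz. */
static void gvec_mov_model(uint8_t *dofs, const uint8_t *aofs,
                           unsigned oprsz, unsigned maxsz)
{
    memmove(dofs, aofs, oprsz);
    memset(dofs + oprsz, 0, maxsz - oprsz);
}

int main(void)
{
    uint8_t zreg[VREG_BYTES];
    memset(zreg, 0xff, sizeof(zreg));

    /* A !is_q (64-bit) result: keep bytes [0, 8), clear [8, VREG_BYTES). */
    gvec_mov_model(zreg, zreg, 8, VREG_BYTES);

    for (unsigned i = 0; i < VREG_BYTES; i++) {
        printf("%02x%c", zreg[i], (i % 16 == 15) ? '\n' : ' ');
    }
    return 0;
}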
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/Makefile.objs
23
+++ b/target/arm/Makefile.objs
24
@@ -XXX,XX +XXX,XX @@ obj-y += gdbstub.o
25
obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
26
obj-y += crypto_helper.o
27
obj-$(CONFIG_SOFTMMU) += arm-powerctl.o
28
+
29
+DECODETREE = $(SRC_PATH)/scripts/decodetree.py
30
+
31
+target/arm/decode-sve.inc.c: $(SRC_PATH)/target/arm/sve.decode $(DECODETREE)
32
+    $(call quiet-command,\
33
+     $(PYTHON) $(DECODETREE) --decode disas_sve -o $@ $<,\
34
+     "GEN", $(TARGET_DIR)$@)
35
+
36
+target/arm/translate-sve.o: target/arm/decode-sve.inc.c
37
+obj-$(TARGET_AARCH64) += translate-sve.o
38
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-a64.c
17
--- a/target/arm/translate-a64.c
41
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
19
@@ -XXX,XX +XXX,XX @@ static void clear_vec_high(DisasContext *s, bool is_q, int rd)
43
s->fp_access_checked = false;
20
unsigned ofs = fp_reg_offset(s, rd, MO_64);
44
21
unsigned vsz = vec_full_reg_size(s);
45
switch (extract32(insn, 25, 4)) {
22
46
- case 0x0: case 0x1: case 0x2: case 0x3: /* UNALLOCATED */
23
- if (!is_q) {
47
+ case 0x0: case 0x1: case 0x3: /* UNALLOCATED */
24
- TCGv_i64 tcg_zero = tcg_const_i64(0);
48
unallocated_encoding(s);
25
- tcg_gen_st_i64(tcg_zero, cpu_env, ofs + 8);
49
break;
26
- tcg_temp_free_i64(tcg_zero);
50
+ case 0x2:
27
- }
51
+ if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
28
- if (vsz > 16) {
52
+ unallocated_encoding(s);
29
- tcg_gen_gvec_dup_imm(MO_64, ofs + 16, vsz - 16, vsz - 16, 0);
53
+ }
30
- }
54
+ break;
31
+ /* Nop move, with side effect of clearing the tail. */
55
case 0x8: case 0x9: /* Data processing - immediate */
32
+ tcg_gen_gvec_mov(MO_64, ofs, ofs, is_q ? 16 : 8, vsz);
56
disas_data_proc_imm(s, insn);
33
}
57
break;
34
58
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
35
void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
59
new file mode 100644
60
index XXXXXXX..XXXXXXX
61
--- /dev/null
62
+++ b/target/arm/translate-sve.c
63
@@ -XXX,XX +XXX,XX @@
64
+/*
65
+ * AArch64 SVE translation
66
+ *
67
+ * Copyright (c) 2018 Linaro, Ltd
68
+ *
69
+ * This library is free software; you can redistribute it and/or
70
+ * modify it under the terms of the GNU Lesser General Public
71
+ * License as published by the Free Software Foundation; either
72
+ * version 2 of the License, or (at your option) any later version.
73
+ *
74
+ * This library is distributed in the hope that it will be useful,
75
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
76
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
77
+ * Lesser General Public License for more details.
78
+ *
79
+ * You should have received a copy of the GNU Lesser General Public
80
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
81
+ */
82
+
83
+#include "qemu/osdep.h"
84
+#include "cpu.h"
85
+#include "exec/exec-all.h"
86
+#include "tcg-op.h"
87
+#include "tcg-op-gvec.h"
88
+#include "qemu/log.h"
89
+#include "arm_ldst.h"
90
+#include "translate.h"
91
+#include "internals.h"
92
+#include "exec/helper-proto.h"
93
+#include "exec/helper-gen.h"
94
+#include "exec/log.h"
95
+#include "trace-tcg.h"
96
+#include "translate-a64.h"
97
+
98
+/*
99
+ * Include the generated decoder.
100
+ */
101
+
102
+#include "decode-sve.inc.c"
103
+
104
+/*
105
+ * Implement all of the translator functions referenced by the decoder.
106
+ */
107
+
108
+static bool trans_AND_zzz(DisasContext *s, arg_AND_zzz *a, uint32_t insn)
109
+{
110
+ return false;
111
+}
112
+
113
+static bool trans_ORR_zzz(DisasContext *s, arg_ORR_zzz *a, uint32_t insn)
114
+{
115
+ return false;
116
+}
117
+
118
+static bool trans_EOR_zzz(DisasContext *s, arg_EOR_zzz *a, uint32_t insn)
119
+{
120
+ return false;
121
+}
122
+
123
+static bool trans_BIC_zzz(DisasContext *s, arg_BIC_zzz *a, uint32_t insn)
124
+{
125
+ return false;
126
+}
127
diff --git a/.gitignore b/.gitignore
128
index XXXXXXX..XXXXXXX 100644
129
--- a/.gitignore
130
+++ b/.gitignore
131
@@ -XXX,XX +XXX,XX @@ trace-dtrace-root.h
132
trace-dtrace-root.dtrace
133
trace-ust-all.h
134
trace-ust-all.c
135
+/target/arm/decode-sve.inc.c
136
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
137
new file mode 100644
138
index XXXXXXX..XXXXXXX
139
--- /dev/null
140
+++ b/target/arm/sve.decode
141
@@ -XXX,XX +XXX,XX @@
142
+# AArch64 SVE instruction descriptions
143
+#
144
+# Copyright (c) 2017 Linaro, Ltd
145
+#
146
+# This library is free software; you can redistribute it and/or
147
+# modify it under the terms of the GNU Lesser General Public
148
+# License as published by the Free Software Foundation; either
149
+# version 2 of the License, or (at your option) any later version.
150
+#
151
+# This library is distributed in the hope that it will be useful,
152
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
153
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
154
+# Lesser General Public License for more details.
155
+#
156
+# You should have received a copy of the GNU Lesser General Public
157
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
158
+
159
+#
160
+# This file is processed by scripts/decodetree.py
161
+#
162
+
163
+###########################################################################
164
+# Named attribute sets. These are used to make nice(er) names
165
+# when creating helpers common to those for the individual
166
+# instruction patterns.
167
+
168
+&rrr_esz rd rn rm esz
169
+
170
+###########################################################################
171
+# Named instruction formats. These are generally used to
172
+# reduce the amount of duplication between instruction patterns.
173
+
174
+# Three operand with unused vector element size
175
+@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
176
+
177
+###########################################################################
178
+# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
179
+
180
+### SVE Logical - Unpredicated Group
181
+
182
+# SVE bitwise logical operations (unpredicated)
183
+AND_zzz 00000100 00 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
184
+ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
185
+EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
186
+BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
187
--
2.17.0
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Move some stuff that will be common to both translate-a64.c
and translate-sve.c.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180516223007.10256-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.h | 118 +++++++++++++++++++++++++++++++++++++
target/arm/translate-a64.c | 112 +++++------------------------------
2 files changed, 133 insertions(+), 97 deletions(-)
create mode 100644 target/arm/translate-a64.h

From: Richard Henderson <richard.henderson@linaro.org>

Do not explicitly store zero to the NEON high part
when we can pass !is_q to clear_vec_high.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200519212453.28494-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 53 +++++++++++++++++++++++---------------
1 file changed, 32 insertions(+), 21 deletions(-)
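The refactor follows the usual shared-header pattern; in miniature (the file
and function names here are made up for illustration, not taken from the
patch): a helper that used to be static in one .c file loses the static,
gains a declaration in a small header, and a second translation unit can
then call it.

/* translate-shared.h -- hypothetical shared header */
#ifndef TRANSLATE_SHARED_H
#define TRANSLATE_SHARED_H

int read_reg(int regno);   /* was 'static' before the split */

#endif

/* first.c */
#include "translate-shared.h"

int read_reg(int regno)
{
    return regno * 2;      /* stand-in for the real work */
}

/* second.c -- a new user of the now-shared helper */
#include "translate-shared.h"

int twice_reg_zero(void)
{
    return read_reg(0);
}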
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
18
new file mode 100644
19
index XXXXXXX..XXXXXXX
20
--- /dev/null
21
+++ b/target/arm/translate-a64.h
22
@@ -XXX,XX +XXX,XX @@
23
+/*
24
+ * AArch64 translation, common definitions.
25
+ *
26
+ * This library is free software; you can redistribute it and/or
27
+ * modify it under the terms of the GNU Lesser General Public
28
+ * License as published by the Free Software Foundation; either
29
+ * version 2 of the License, or (at your option) any later version.
30
+ *
31
+ * This library is distributed in the hope that it will be useful,
32
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
33
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
34
+ * Lesser General Public License for more details.
35
+ *
36
+ * You should have received a copy of the GNU Lesser General Public
37
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
38
+ */
39
+
40
+#ifndef TARGET_ARM_TRANSLATE_A64_H
41
+#define TARGET_ARM_TRANSLATE_A64_H
42
+
43
+void unallocated_encoding(DisasContext *s);
44
+
45
+#define unsupported_encoding(s, insn) \
46
+ do { \
47
+ qemu_log_mask(LOG_UNIMP, \
48
+ "%s:%d: unsupported instruction encoding 0x%08x " \
49
+ "at pc=%016" PRIx64 "\n", \
50
+ __FILE__, __LINE__, insn, s->pc - 4); \
51
+ unallocated_encoding(s); \
52
+ } while (0)
53
+
54
+TCGv_i64 new_tmp_a64(DisasContext *s);
55
+TCGv_i64 new_tmp_a64_zero(DisasContext *s);
56
+TCGv_i64 cpu_reg(DisasContext *s, int reg);
57
+TCGv_i64 cpu_reg_sp(DisasContext *s, int reg);
58
+TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf);
59
+TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf);
60
+void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v);
61
+TCGv_ptr get_fpstatus_ptr(bool);
62
+bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
63
+ unsigned int imms, unsigned int immr);
64
+uint64_t vfp_expand_imm(int size, uint8_t imm8);
65
+bool sve_access_check(DisasContext *s);
66
+
67
+/* We should have at some point before trying to access an FP register
68
+ * done the necessary access check, so assert that
69
+ * (a) we did the check and
70
+ * (b) we didn't then just plough ahead anyway if it failed.
71
+ * Print the instruction pattern in the abort message so we can figure
72
+ * out what we need to fix if a user encounters this problem in the wild.
73
+ */
74
+static inline void assert_fp_access_checked(DisasContext *s)
75
+{
76
+#ifdef CONFIG_DEBUG_TCG
77
+ if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
78
+ fprintf(stderr, "target-arm: FP access check missing for "
79
+ "instruction 0x%08x\n", s->insn);
80
+ abort();
81
+ }
82
+#endif
83
+}
84
+
85
+/* Return the offset into CPUARMState of an element of specified
86
+ * size, 'element' places in from the least significant end of
87
+ * the FP/vector register Qn.
88
+ */
89
+static inline int vec_reg_offset(DisasContext *s, int regno,
90
+ int element, TCGMemOp size)
91
+{
92
+ int offs = 0;
93
+#ifdef HOST_WORDS_BIGENDIAN
94
+ /* This is complicated slightly because vfp.zregs[n].d[0] is
95
+ * still the low half and vfp.zregs[n].d[1] the high half
96
+ * of the 128 bit vector, even on big endian systems.
97
+ * Calculate the offset assuming a fully bigendian 128 bits,
98
+ * then XOR to account for the order of the two 64 bit halves.
99
+ */
100
+ offs += (16 - ((element + 1) * (1 << size)));
101
+ offs ^= 8;
102
+#else
103
+ offs += element * (1 << size);
104
+#endif
105
+ offs += offsetof(CPUARMState, vfp.zregs[regno]);
106
+ assert_fp_access_checked(s);
107
+ return offs;
108
+}
109
+
110
+/* Return the offset info CPUARMState of the "whole" vector register Qn. */
111
+static inline int vec_full_reg_offset(DisasContext *s, int regno)
112
+{
113
+ assert_fp_access_checked(s);
114
+ return offsetof(CPUARMState, vfp.zregs[regno]);
115
+}
116
+
117
+/* Return a newly allocated pointer to the vector register. */
118
+static inline TCGv_ptr vec_full_reg_ptr(DisasContext *s, int regno)
119
+{
120
+ TCGv_ptr ret = tcg_temp_new_ptr();
121
+ tcg_gen_addi_ptr(ret, cpu_env, vec_full_reg_offset(s, regno));
122
+ return ret;
123
+}
124
+
125
+/* Return the byte size of the "whole" vector register, VL / 8. */
126
+static inline int vec_full_reg_size(DisasContext *s)
127
+{
128
+ return s->sve_len;
129
+}
130
+
131
+bool disas_sve(DisasContext *, uint32_t);
132
+
133
+/* Note that the gvec expanders operate on offsets + sizes. */
134
+typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
135
+typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t, int64_t,
136
+ uint32_t, uint32_t);
137
+typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
138
+ uint32_t, uint32_t, uint32_t);
139
+
140
+#endif /* TARGET_ARM_TRANSLATE_A64_H */
141
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
142
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
143
--- a/target/arm/translate-a64.c
16
--- a/target/arm/translate-a64.c
144
+++ b/target/arm/translate-a64.c
17
+++ b/target/arm/translate-a64.c
145
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
146
#include "exec/log.h"
19
{
147
20
/* This always zero-extends and writes to a full 128 bit wide vector */
148
#include "trace-tcg.h"
21
TCGv_i64 tmplo = tcg_temp_new_i64();
149
+#include "translate-a64.h"
22
- TCGv_i64 tmphi;
150
23
+ TCGv_i64 tmphi = NULL;
151
static TCGv_i64 cpu_X[32];
24
152
static TCGv_i64 cpu_pc;
25
if (size < 4) {
153
26
MemOp memop = s->be_data + size;
154
/* Load/store exclusive handling */
27
- tmphi = tcg_const_i64(0);
155
static TCGv_i64 cpu_exclusive_high;
28
tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
156
-static TCGv_i64 cpu_reg(DisasContext *s, int reg);
29
} else {
157
30
bool be = s->be_data == MO_BE;
158
static const char *regnames[] = {
31
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
159
"x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
32
}
160
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
33
161
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
34
tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64));
162
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
35
- tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(s, destidx));
163
164
-/* Note that the gvec expanders operate on offsets + sizes. */
165
-typedef void GVecGen2Fn(unsigned, uint32_t, uint32_t, uint32_t, uint32_t);
166
-typedef void GVecGen2iFn(unsigned, uint32_t, uint32_t, int64_t,
167
- uint32_t, uint32_t);
168
-typedef void GVecGen3Fn(unsigned, uint32_t, uint32_t,
169
- uint32_t, uint32_t, uint32_t);
170
-
36
-
171
/* initialize TCG globals. */
37
tcg_temp_free_i64(tmplo);
172
void a64_translate_init(void)
38
- tcg_temp_free_i64(tmphi);
173
{
39
174
@@ -XXX,XX +XXX,XX @@ static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
40
- clear_vec_high(s, true, destidx);
41
+ if (tmphi) {
42
+ tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(s, destidx));
43
+ tcg_temp_free_i64(tmphi);
44
+ }
45
+ clear_vec_high(s, tmphi != NULL, destidx);
46
}
47
48
/*
49
@@ -XXX,XX +XXX,XX @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
50
read_vec_element(s, tcg_resh, rm, 0, MO_64);
51
do_ext64(s, tcg_resh, tcg_resl, pos);
52
}
53
- tcg_gen_movi_i64(tcg_resh, 0);
54
} else {
55
TCGv_i64 tcg_hh;
56
typedef struct {
57
@@ -XXX,XX +XXX,XX @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
58
59
write_vec_element(s, tcg_resl, rd, 0, MO_64);
60
tcg_temp_free_i64(tcg_resl);
61
- write_vec_element(s, tcg_resh, rd, 1, MO_64);
62
+ if (is_q) {
63
+ write_vec_element(s, tcg_resh, rd, 1, MO_64);
64
+ }
65
tcg_temp_free_i64(tcg_resh);
66
- clear_vec_high(s, true, rd);
67
+ clear_vec_high(s, is_q, rd);
68
}
69
70
/* TBL/TBX
71
@@ -XXX,XX +XXX,XX @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
72
* the input.
73
*/
74
tcg_resl = tcg_temp_new_i64();
75
- tcg_resh = tcg_temp_new_i64();
76
+ tcg_resh = NULL;
77
78
if (is_tblx) {
79
read_vec_element(s, tcg_resl, rd, 0, MO_64);
80
} else {
81
tcg_gen_movi_i64(tcg_resl, 0);
175
}
82
}
83
- if (is_tblx && is_q) {
84
- read_vec_element(s, tcg_resh, rd, 1, MO_64);
85
- } else {
86
- tcg_gen_movi_i64(tcg_resh, 0);
87
+
88
+ if (is_q) {
89
+ tcg_resh = tcg_temp_new_i64();
90
+ if (is_tblx) {
91
+ read_vec_element(s, tcg_resh, rd, 1, MO_64);
92
+ } else {
93
+ tcg_gen_movi_i64(tcg_resh, 0);
94
+ }
95
}
96
97
tcg_idx = tcg_temp_new_i64();
98
@@ -XXX,XX +XXX,XX @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
99
100
write_vec_element(s, tcg_resl, rd, 0, MO_64);
101
tcg_temp_free_i64(tcg_resl);
102
- write_vec_element(s, tcg_resh, rd, 1, MO_64);
103
- tcg_temp_free_i64(tcg_resh);
104
- clear_vec_high(s, true, rd);
105
+
106
+ if (is_q) {
107
+ write_vec_element(s, tcg_resh, rd, 1, MO_64);
108
+ tcg_temp_free_i64(tcg_resh);
109
+ }
110
+ clear_vec_high(s, is_q, rd);
176
}
111
}
177
112
178
-static void unallocated_encoding(DisasContext *s)
113
/* ZIP/UZP/TRN
179
+void unallocated_encoding(DisasContext *s)
114
@@ -XXX,XX +XXX,XX @@ static void disas_simd_zip_trn(DisasContext *s, uint32_t insn)
180
{
115
}
181
/* Unallocated and reserved encodings are uncategorized */
116
182
gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
117
tcg_resl = tcg_const_i64(0);
183
default_exception_el(s));
118
- tcg_resh = tcg_const_i64(0);
119
+ tcg_resh = is_q ? tcg_const_i64(0) : NULL;
120
tcg_res = tcg_temp_new_i64();
121
122
for (i = 0; i < elements; i++) {
123
@@ -XXX,XX +XXX,XX @@ static void disas_simd_zip_trn(DisasContext *s, uint32_t insn)
124
125
write_vec_element(s, tcg_resl, rd, 0, MO_64);
126
tcg_temp_free_i64(tcg_resl);
127
- write_vec_element(s, tcg_resh, rd, 1, MO_64);
128
- tcg_temp_free_i64(tcg_resh);
129
- clear_vec_high(s, true, rd);
130
+
131
+ if (is_q) {
132
+ write_vec_element(s, tcg_resh, rd, 1, MO_64);
133
+ tcg_temp_free_i64(tcg_resh);
134
+ }
135
+ clear_vec_high(s, is_q, rd);
184
}
136
}
185
137
186
-#define unsupported_encoding(s, insn) \
187
- do { \
188
- qemu_log_mask(LOG_UNIMP, \
189
- "%s:%d: unsupported instruction encoding 0x%08x " \
190
- "at pc=%016" PRIx64 "\n", \
191
- __FILE__, __LINE__, insn, s->pc - 4); \
192
- unallocated_encoding(s); \
193
- } while (0)
194
-
195
static void init_tmp_a64_array(DisasContext *s)
196
{
197
#ifdef CONFIG_DEBUG_TCG
198
@@ -XXX,XX +XXX,XX @@ static void free_tmp_a64(DisasContext *s)
199
init_tmp_a64_array(s);
200
}
201
202
-static TCGv_i64 new_tmp_a64(DisasContext *s)
203
+TCGv_i64 new_tmp_a64(DisasContext *s)
204
{
205
assert(s->tmp_a64_count < TMP_A64_MAX);
206
return s->tmp_a64[s->tmp_a64_count++] = tcg_temp_new_i64();
207
}
208
209
-static TCGv_i64 new_tmp_a64_zero(DisasContext *s)
210
+TCGv_i64 new_tmp_a64_zero(DisasContext *s)
211
{
212
TCGv_i64 t = new_tmp_a64(s);
213
tcg_gen_movi_i64(t, 0);
214
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 new_tmp_a64_zero(DisasContext *s)
215
* to cpu_X[31] and ZR accesses to a temporary which can be discarded.
216
* This is the point of the _sp forms.
217
*/
218
-static TCGv_i64 cpu_reg(DisasContext *s, int reg)
219
+TCGv_i64 cpu_reg(DisasContext *s, int reg)
220
{
221
if (reg == 31) {
222
return new_tmp_a64_zero(s);
223
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_reg(DisasContext *s, int reg)
224
}
225
226
/* register access for when 31 == SP */
227
-static TCGv_i64 cpu_reg_sp(DisasContext *s, int reg)
228
+TCGv_i64 cpu_reg_sp(DisasContext *s, int reg)
229
{
230
return cpu_X[reg];
231
}
232
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_reg_sp(DisasContext *s, int reg)
233
* representing the register contents. This TCGv is an auto-freed
234
* temporary so it need not be explicitly freed, and may be modified.
235
*/
236
-static TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf)
237
+TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf)
238
{
239
TCGv_i64 v = new_tmp_a64(s);
240
if (reg != 31) {
241
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 read_cpu_reg(DisasContext *s, int reg, int sf)
242
return v;
243
}
244
245
-static TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
246
+TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
247
{
248
TCGv_i64 v = new_tmp_a64(s);
249
if (sf) {
250
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
251
return v;
252
}
253
254
-/* We should have at some point before trying to access an FP register
255
- * done the necessary access check, so assert that
256
- * (a) we did the check and
257
- * (b) we didn't then just plough ahead anyway if it failed.
258
- * Print the instruction pattern in the abort message so we can figure
259
- * out what we need to fix if a user encounters this problem in the wild.
260
- */
261
-static inline void assert_fp_access_checked(DisasContext *s)
262
-{
263
-#ifdef CONFIG_DEBUG_TCG
264
- if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
265
- fprintf(stderr, "target-arm: FP access check missing for "
266
- "instruction 0x%08x\n", s->insn);
267
- abort();
268
- }
269
-#endif
270
-}
271
-
272
-/* Return the offset into CPUARMState of an element of specified
273
- * size, 'element' places in from the least significant end of
274
- * the FP/vector register Qn.
275
- */
276
-static inline int vec_reg_offset(DisasContext *s, int regno,
277
- int element, TCGMemOp size)
278
-{
279
- int offs = 0;
280
-#ifdef HOST_WORDS_BIGENDIAN
281
- /* This is complicated slightly because vfp.zregs[n].d[0] is
282
- * still the low half and vfp.zregs[n].d[1] the high half
283
- * of the 128 bit vector, even on big endian systems.
284
- * Calculate the offset assuming a fully bigendian 128 bits,
285
- * then XOR to account for the order of the two 64 bit halves.
286
- */
287
- offs += (16 - ((element + 1) * (1 << size)));
288
- offs ^= 8;
289
-#else
290
- offs += element * (1 << size);
291
-#endif
292
- offs += offsetof(CPUARMState, vfp.zregs[regno]);
293
- assert_fp_access_checked(s);
294
- return offs;
295
-}
296
-
297
-/* Return the offset info CPUARMState of the "whole" vector register Qn. */
298
-static inline int vec_full_reg_offset(DisasContext *s, int regno)
299
-{
300
- assert_fp_access_checked(s);
301
- return offsetof(CPUARMState, vfp.zregs[regno]);
302
-}
303
-
304
-/* Return a newly allocated pointer to the vector register. */
305
-static TCGv_ptr vec_full_reg_ptr(DisasContext *s, int regno)
306
-{
307
- TCGv_ptr ret = tcg_temp_new_ptr();
308
- tcg_gen_addi_ptr(ret, cpu_env, vec_full_reg_offset(s, regno));
309
- return ret;
310
-}
311
-
312
-/* Return the byte size of the "whole" vector register, VL / 8. */
313
-static inline int vec_full_reg_size(DisasContext *s)
314
-{
315
- /* FIXME SVE: We should put the composite ZCR_EL* value into tb->flags.
316
- In the meantime this is just the AdvSIMD length of 128. */
317
- return 128 / 8;
318
-}
319
-
320
/* Return the offset into CPUARMState of a slice (from
321
* the least significant end) of FP register Qn (ie
322
* Dn, Sn, Hn or Bn).
323
@@ -XXX,XX +XXX,XX @@ static void clear_vec_high(DisasContext *s, bool is_q, int rd)
324
}
325
}
326
327
-static void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
328
+void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
329
{
330
unsigned ofs = fp_reg_offset(s, reg, MO_64);
331
332
@@ -XXX,XX +XXX,XX @@ static void write_fp_sreg(DisasContext *s, int reg, TCGv_i32 v)
333
tcg_temp_free_i64(tmp);
334
}
335
336
-static TCGv_ptr get_fpstatus_ptr(bool is_f16)
337
+TCGv_ptr get_fpstatus_ptr(bool is_f16)
338
{
339
TCGv_ptr statusptr = tcg_temp_new_ptr();
340
int offset;
341
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
342
/* Check that SVE access is enabled. If it is, return true.
343
* If not, emit code to generate an appropriate exception and return false.
344
*/
345
-static inline bool sve_access_check(DisasContext *s)
346
+bool sve_access_check(DisasContext *s)
347
{
348
if (s->sve_excp_el) {
349
gen_exception_insn(s, 4, EXCP_UDEF, syn_sve_access_trap(),
350
s->sve_excp_el);
351
return false;
352
}
353
- return true;
354
+ return fp_access_check(s);
355
}
356
357
/*
138
/*
358
@@ -XXX,XX +XXX,XX @@ static inline uint64_t bitmask64(unsigned int length)
359
* value (ie should cause a guest UNDEF exception), and true if they are
360
* valid, in which case the decoded bit pattern is written to result.
361
*/
362
-static bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
363
- unsigned int imms, unsigned int immr)
364
+bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
365
+ unsigned int imms, unsigned int immr)
366
{
367
uint64_t mask;
368
unsigned e, levels, s, r;
369
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
370
* the range 01....1xx to 10....0xx, and the most significant 4 bits of
371
* the mantissa; see VFPExpandImm() in the v8 ARM ARM.
372
*/
373
-static uint64_t vfp_expand_imm(int size, uint8_t imm8)
374
+uint64_t vfp_expand_imm(int size, uint8_t imm8)
375
{
376
uint64_t imm;
377
378
--
2.17.0
--
2.20.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
Using the MSR instruction to write to CPSR.E is deprecated, but it is
2
required to work from any mode including unprivileged code. We were
3
incorrectly forbidding usermode code from writing it because
4
CPSR_USER did not include the CPSR_E bit.
2
5
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
We use CPSR_USER in only three places:
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
* as the mask of what to allow userspace MSR to write to CPSR
5
Message-id: 20180516223007.10256-7-richard.henderson@linaro.org
8
* when deciding what bits a linux-user signal-return should be
9
able to write from the sigcontext structure
10
* in target_user_copy_regs() when we set up the initial
11
registers for the linux-user process
12
13
In the first two cases not being able to update CPSR.E is a bug, and
14
in the third case it doesn't matter because CPSR.E is always 0 there.
15
So we can fix both bugs by adding CPSR_E to CPSR_USER.
16
17
Because the cpsr_write() in restore_sigcontext() is now changing
18
a CPSR bit which is cached in hflags, we need to add an
19
arm_rebuild_hflags() call there; the callsite in
20
target_user_copy_regs() was already rebuilding hflags for other
21
reasons.
22
23
(The recommended way to change CPSR.E is to use the 'SETEND'
24
instruction, which we do correctly allow from usermode code.)
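A minimal standalone sketch of the masking idea (this is not QEMU's actual
cpsr_write(); the helper name, the starting value and the reduced CPSR_USER
mask below are illustrative). Only bits that are set in the caller's mask
can change, which is why adding CPSR_E to the user-writable mask is what
lets an unprivileged MSR flip the endianness bit:

#include <stdint.h>
#include <stdio.h>

#define CPSR_E    (1u << 9)            /* CPSR endianness bit */
#define CPSR_NZCV (0xfu << 28)
#define CPSR_USER (CPSR_NZCV | CPSR_E) /* assumed mask; Q and GE omitted for brevity */

/* Copy only the bits selected by 'mask' from 'val' into *cpsr. */
static void cpsr_write_masked(uint32_t *cpsr, uint32_t val, uint32_t mask)
{
    *cpsr = (*cpsr & ~mask) | (val & mask);
}

int main(void)
{
    uint32_t cpsr = 0x000001d3;        /* arbitrary starting value, E clear */

    /* With CPSR_E absent from the mask this write could not set the E bit. */
    cpsr_write_masked(&cpsr, cpsr | CPSR_E, CPSR_USER);
    printf("CPSR.E is now %u\n", (cpsr >> 9) & 1u);
    return 0;
}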
25
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
28
Message-id: 20200518142801.20503-1-peter.maydell@linaro.org
7
---
29
---
8
target/arm/cpu.h | 4 +-
30
target/arm/cpu.h | 2 +-
9
target/arm/helper-sve.h | 10 +
31
linux-user/arm/signal.c | 1 +
10
target/arm/sve_helper.c | 39 ++++
32
2 files changed, 2 insertions(+), 1 deletion(-)
11
target/arm/translate-sve.c | 361 +++++++++++++++++++++++++++++++++++++
12
target/arm/sve.decode | 16 ++
13
5 files changed, 429 insertions(+), 1 deletion(-)
14
33
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
36
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
37
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
38
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
20
#ifdef TARGET_AARCH64
39
#define CACHED_CPSR_BITS (CPSR_T | CPSR_AIF | CPSR_GE | CPSR_IT | CPSR_Q \
21
/* Store FFR as pregs[16] to make it easier to treat as any other. */
40
| CPSR_NZCV)
22
ARMPredicateReg pregs[17];
41
/* Bits writable in user mode. */
23
+ /* Scratch space for aa64 sve predicate temporary. */
42
-#define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE)
24
+ ARMPredicateReg preg_tmp;
43
+#define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE | CPSR_E)
44
/* Execution state bits. MRS read as zero, MSR writes ignored. */
45
#define CPSR_EXEC (CPSR_T | CPSR_IT | CPSR_J | CPSR_IL)
46
47
diff --git a/linux-user/arm/signal.c b/linux-user/arm/signal.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/linux-user/arm/signal.c
50
+++ b/linux-user/arm/signal.c
51
@@ -XXX,XX +XXX,XX @@ restore_sigcontext(CPUARMState *env, struct target_sigcontext *sc)
52
#ifdef TARGET_CONFIG_CPU_32
53
__get_user(cpsr, &sc->arm_cpsr);
54
cpsr_write(env, cpsr, CPSR_USER | CPSR_EXEC, CPSRWriteByInstr);
55
+ arm_rebuild_hflags(env);
25
#endif
56
#endif
26
57
27
uint32_t xregs[16];
58
err |= !valid_user_regs(env);
28
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
29
int vec_len;
30
int vec_stride;
31
32
- /* scratch space when Tn are not sufficient. */
33
+ /* Scratch space for aa32 neon expansion. */
34
uint32_t scratch[8];
35
36
/* There are a number of distinct float control structures:
37
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper-sve.h
40
+++ b/target/arm/helper-sve.h
41
@@ -XXX,XX +XXX,XX @@
42
43
DEF_HELPER_FLAGS_2(sve_predtest1, TCG_CALL_NO_WG, i32, i64, i64)
44
DEF_HELPER_FLAGS_3(sve_predtest, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
45
+
46
+DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
48
+DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
49
+DEF_HELPER_FLAGS_5(sve_sel_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_5(sve_orr_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_5(sve_orn_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_5(sve_nor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_5(sve_nand_pppp, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, i32)
55
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/target/arm/sve_helper.c
58
+++ b/target/arm/sve_helper.c
59
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
60
61
return flags;
62
}
63
+
64
+#define LOGICAL_PPPP(NAME, FUNC) \
65
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
66
+{ \
67
+ uintptr_t opr_sz = simd_oprsz(desc); \
68
+ uint64_t *d = vd, *n = vn, *m = vm, *g = vg; \
69
+ uintptr_t i; \
70
+ for (i = 0; i < opr_sz / 8; ++i) { \
71
+ d[i] = FUNC(n[i], m[i], g[i]); \
72
+ } \
73
+}
74
+
75
+#define DO_AND(N, M, G) (((N) & (M)) & (G))
76
+#define DO_BIC(N, M, G) (((N) & ~(M)) & (G))
77
+#define DO_EOR(N, M, G) (((N) ^ (M)) & (G))
78
+#define DO_ORR(N, M, G) (((N) | (M)) & (G))
79
+#define DO_ORN(N, M, G) (((N) | ~(M)) & (G))
80
+#define DO_NOR(N, M, G) (~((N) | (M)) & (G))
81
+#define DO_NAND(N, M, G) (~((N) & (M)) & (G))
82
+#define DO_SEL(N, M, G) (((N) & (G)) | ((M) & ~(G)))
83
+
84
+LOGICAL_PPPP(sve_and_pppp, DO_AND)
85
+LOGICAL_PPPP(sve_bic_pppp, DO_BIC)
86
+LOGICAL_PPPP(sve_eor_pppp, DO_EOR)
87
+LOGICAL_PPPP(sve_sel_pppp, DO_SEL)
88
+LOGICAL_PPPP(sve_orr_pppp, DO_ORR)
89
+LOGICAL_PPPP(sve_orn_pppp, DO_ORN)
90
+LOGICAL_PPPP(sve_nor_pppp, DO_NOR)
91
+LOGICAL_PPPP(sve_nand_pppp, DO_NAND)
92
+
93
+#undef DO_AND
94
+#undef DO_BIC
95
+#undef DO_EOR
96
+#undef DO_ORR
97
+#undef DO_ORN
98
+#undef DO_NOR
99
+#undef DO_NAND
100
+#undef DO_SEL
101
+#undef LOGICAL_PPPP
102
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/translate-sve.c
105
+++ b/target/arm/translate-sve.c
106
@@ -XXX,XX +XXX,XX @@ static inline int pred_full_reg_size(DisasContext *s)
107
return s->sve_len >> 3;
108
}
109
110
+/* Round up the size of a register to a size allowed by
111
+ * the tcg vector infrastructure. Any operation which uses this
112
+ * size may assume that the bits above pred_full_reg_size are zero,
113
+ * and must leave them the same way.
114
+ *
115
+ * Note that this is not needed for the vector registers as they
116
+ * are always properly sized for tcg vectors.
117
+ */
118
+static int size_for_gvec(int size)
119
+{
120
+ if (size <= 8) {
121
+ return 8;
122
+ } else {
123
+ return QEMU_ALIGN_UP(size, 16);
124
+ }
125
+}
126
+
127
+static int pred_gvec_reg_size(DisasContext *s)
128
+{
129
+ return size_for_gvec(pred_full_reg_size(s));
130
+}
131
+
132
/* Invoke a vector expander on two Zregs. */
133
static bool do_vector2_z(DisasContext *s, GVecGen2Fn *gvec_fn,
134
int esz, int rd, int rn)
135
@@ -XXX,XX +XXX,XX @@ static bool do_mov_z(DisasContext *s, int rd, int rn)
136
return do_vector2_z(s, tcg_gen_gvec_mov, 0, rd, rn);
137
}
138
139
+/* Invoke a vector expander on two Pregs. */
140
+static bool do_vector2_p(DisasContext *s, GVecGen2Fn *gvec_fn,
141
+ int esz, int rd, int rn)
142
+{
143
+ if (sve_access_check(s)) {
144
+ unsigned psz = pred_gvec_reg_size(s);
145
+ gvec_fn(esz, pred_full_reg_offset(s, rd),
146
+ pred_full_reg_offset(s, rn), psz, psz);
147
+ }
148
+ return true;
149
+}
150
+
151
+/* Invoke a vector expander on three Pregs. */
152
+static bool do_vector3_p(DisasContext *s, GVecGen3Fn *gvec_fn,
153
+ int esz, int rd, int rn, int rm)
154
+{
155
+ if (sve_access_check(s)) {
156
+ unsigned psz = pred_gvec_reg_size(s);
157
+ gvec_fn(esz, pred_full_reg_offset(s, rd),
158
+ pred_full_reg_offset(s, rn),
159
+ pred_full_reg_offset(s, rm), psz, psz);
160
+ }
161
+ return true;
162
+}
163
+
164
+/* Invoke a vector operation on four Pregs. */
165
+static bool do_vecop4_p(DisasContext *s, const GVecGen4 *gvec_op,
166
+ int rd, int rn, int rm, int rg)
167
+{
168
+ if (sve_access_check(s)) {
169
+ unsigned psz = pred_gvec_reg_size(s);
170
+ tcg_gen_gvec_4(pred_full_reg_offset(s, rd),
171
+ pred_full_reg_offset(s, rn),
172
+ pred_full_reg_offset(s, rm),
173
+ pred_full_reg_offset(s, rg),
174
+ psz, psz, gvec_op);
175
+ }
176
+ return true;
177
+}
178
+
179
+/* Invoke a vector move on two Pregs. */
180
+static bool do_mov_p(DisasContext *s, int rd, int rn)
181
+{
182
+ return do_vector2_p(s, tcg_gen_gvec_mov, 0, rd, rn);
183
+}
184
+
185
/* Set the cpu flags as per a return from an SVE helper. */
186
static void do_pred_flags(TCGv_i32 t)
187
{
188
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
189
return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
190
}
191
192
+/*
193
+ *** SVE Predicate Logical Operations Group
194
+ */
195
+
196
+static bool do_pppp_flags(DisasContext *s, arg_rprr_s *a,
197
+ const GVecGen4 *gvec_op)
198
+{
199
+ if (!sve_access_check(s)) {
200
+ return true;
201
+ }
202
+
203
+ unsigned psz = pred_gvec_reg_size(s);
204
+ int dofs = pred_full_reg_offset(s, a->rd);
205
+ int nofs = pred_full_reg_offset(s, a->rn);
206
+ int mofs = pred_full_reg_offset(s, a->rm);
207
+ int gofs = pred_full_reg_offset(s, a->pg);
208
+
209
+ if (psz == 8) {
210
+ /* Do the operation and the flags generation in temps. */
211
+ TCGv_i64 pd = tcg_temp_new_i64();
212
+ TCGv_i64 pn = tcg_temp_new_i64();
213
+ TCGv_i64 pm = tcg_temp_new_i64();
214
+ TCGv_i64 pg = tcg_temp_new_i64();
215
+
216
+ tcg_gen_ld_i64(pn, cpu_env, nofs);
217
+ tcg_gen_ld_i64(pm, cpu_env, mofs);
218
+ tcg_gen_ld_i64(pg, cpu_env, gofs);
219
+
220
+ gvec_op->fni8(pd, pn, pm, pg);
221
+ tcg_gen_st_i64(pd, cpu_env, dofs);
222
+
223
+ do_predtest1(pd, pg);
224
+
225
+ tcg_temp_free_i64(pd);
226
+ tcg_temp_free_i64(pn);
227
+ tcg_temp_free_i64(pm);
228
+ tcg_temp_free_i64(pg);
229
+ } else {
230
+ /* The operation and flags generation is large. The computation
231
+ * of the flags depends on the original contents of the guarding
232
+ * predicate. If the destination overwrites the guarding predicate,
233
+ * then the easiest way to get this right is to save a copy.
234
+ */
235
+ int tofs = gofs;
236
+ if (a->rd == a->pg) {
237
+ tofs = offsetof(CPUARMState, vfp.preg_tmp);
238
+ tcg_gen_gvec_mov(0, tofs, gofs, psz, psz);
239
+ }
240
+
241
+ tcg_gen_gvec_4(dofs, nofs, mofs, gofs, psz, psz, gvec_op);
242
+ do_predtest(s, dofs, tofs, psz / 8);
243
+ }
244
+ return true;
245
+}
246
+
247
+static void gen_and_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
248
+{
249
+ tcg_gen_and_i64(pd, pn, pm);
250
+ tcg_gen_and_i64(pd, pd, pg);
251
+}
252
+
253
+static void gen_and_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
254
+ TCGv_vec pm, TCGv_vec pg)
255
+{
256
+ tcg_gen_and_vec(vece, pd, pn, pm);
257
+ tcg_gen_and_vec(vece, pd, pd, pg);
258
+}
259
+
260
+static bool trans_AND_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
261
+{
262
+ static const GVecGen4 op = {
263
+ .fni8 = gen_and_pg_i64,
264
+ .fniv = gen_and_pg_vec,
265
+ .fno = gen_helper_sve_and_pppp,
266
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
267
+ };
268
+ if (a->s) {
269
+ return do_pppp_flags(s, a, &op);
270
+ } else if (a->rn == a->rm) {
271
+ if (a->pg == a->rn) {
272
+ return do_mov_p(s, a->rd, a->rn);
273
+ } else {
274
+ return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->pg);
275
+ }
276
+ } else if (a->pg == a->rn || a->pg == a->rm) {
277
+ return do_vector3_p(s, tcg_gen_gvec_and, 0, a->rd, a->rn, a->rm);
278
+ } else {
279
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
280
+ }
281
+}
282
+
283
+static void gen_bic_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
284
+{
285
+ tcg_gen_andc_i64(pd, pn, pm);
286
+ tcg_gen_and_i64(pd, pd, pg);
287
+}
288
+
289
+static void gen_bic_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
290
+ TCGv_vec pm, TCGv_vec pg)
291
+{
292
+ tcg_gen_andc_vec(vece, pd, pn, pm);
293
+ tcg_gen_and_vec(vece, pd, pd, pg);
294
+}
295
+
296
+static bool trans_BIC_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
297
+{
298
+ static const GVecGen4 op = {
299
+ .fni8 = gen_bic_pg_i64,
300
+ .fniv = gen_bic_pg_vec,
301
+ .fno = gen_helper_sve_bic_pppp,
302
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
303
+ };
304
+ if (a->s) {
305
+ return do_pppp_flags(s, a, &op);
306
+ } else if (a->pg == a->rn) {
307
+ return do_vector3_p(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
308
+ } else {
309
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
310
+ }
311
+}
312
+
313
+static void gen_eor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
314
+{
315
+ tcg_gen_xor_i64(pd, pn, pm);
316
+ tcg_gen_and_i64(pd, pd, pg);
317
+}
318
+
319
+static void gen_eor_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
320
+ TCGv_vec pm, TCGv_vec pg)
321
+{
322
+ tcg_gen_xor_vec(vece, pd, pn, pm);
323
+ tcg_gen_and_vec(vece, pd, pd, pg);
324
+}
325
+
326
+static bool trans_EOR_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
327
+{
328
+ static const GVecGen4 op = {
329
+ .fni8 = gen_eor_pg_i64,
330
+ .fniv = gen_eor_pg_vec,
331
+ .fno = gen_helper_sve_eor_pppp,
332
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
333
+ };
334
+ if (a->s) {
335
+ return do_pppp_flags(s, a, &op);
336
+ } else {
337
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
338
+ }
339
+}
340
+
341
+static void gen_sel_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
342
+{
343
+ tcg_gen_and_i64(pn, pn, pg);
344
+ tcg_gen_andc_i64(pm, pm, pg);
345
+ tcg_gen_or_i64(pd, pn, pm);
346
+}
347
+
348
+static void gen_sel_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
349
+ TCGv_vec pm, TCGv_vec pg)
350
+{
351
+ tcg_gen_and_vec(vece, pn, pn, pg);
352
+ tcg_gen_andc_vec(vece, pm, pm, pg);
353
+ tcg_gen_or_vec(vece, pd, pn, pm);
354
+}
355
+
356
+static bool trans_SEL_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
357
+{
358
+ static const GVecGen4 op = {
359
+ .fni8 = gen_sel_pg_i64,
360
+ .fniv = gen_sel_pg_vec,
361
+ .fno = gen_helper_sve_sel_pppp,
362
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
363
+ };
364
+ if (a->s) {
365
+ return false;
366
+ } else {
367
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
368
+ }
369
+}
370
+
371
+static void gen_orr_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
372
+{
373
+ tcg_gen_or_i64(pd, pn, pm);
374
+ tcg_gen_and_i64(pd, pd, pg);
375
+}
376
+
377
+static void gen_orr_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
378
+ TCGv_vec pm, TCGv_vec pg)
379
+{
380
+ tcg_gen_or_vec(vece, pd, pn, pm);
381
+ tcg_gen_and_vec(vece, pd, pd, pg);
382
+}
383
+
384
+static bool trans_ORR_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
385
+{
386
+ static const GVecGen4 op = {
387
+ .fni8 = gen_orr_pg_i64,
388
+ .fniv = gen_orr_pg_vec,
389
+ .fno = gen_helper_sve_orr_pppp,
390
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
391
+ };
392
+ if (a->s) {
393
+ return do_pppp_flags(s, a, &op);
394
+ } else if (a->pg == a->rn && a->rn == a->rm) {
395
+ return do_mov_p(s, a->rd, a->rn);
396
+ } else {
397
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
398
+ }
399
+}
400
+
401
+static void gen_orn_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
402
+{
403
+ tcg_gen_orc_i64(pd, pn, pm);
404
+ tcg_gen_and_i64(pd, pd, pg);
405
+}
406
+
407
+static void gen_orn_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
408
+ TCGv_vec pm, TCGv_vec pg)
409
+{
410
+ tcg_gen_orc_vec(vece, pd, pn, pm);
411
+ tcg_gen_and_vec(vece, pd, pd, pg);
412
+}
413
+
414
+static bool trans_ORN_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
415
+{
416
+ static const GVecGen4 op = {
417
+ .fni8 = gen_orn_pg_i64,
418
+ .fniv = gen_orn_pg_vec,
419
+ .fno = gen_helper_sve_orn_pppp,
420
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
421
+ };
422
+ if (a->s) {
423
+ return do_pppp_flags(s, a, &op);
424
+ } else {
425
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
426
+ }
427
+}
428
+
429
+static void gen_nor_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
430
+{
431
+ tcg_gen_or_i64(pd, pn, pm);
432
+ tcg_gen_andc_i64(pd, pg, pd);
433
+}
434
+
435
+static void gen_nor_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
436
+ TCGv_vec pm, TCGv_vec pg)
437
+{
438
+ tcg_gen_or_vec(vece, pd, pn, pm);
439
+ tcg_gen_andc_vec(vece, pd, pg, pd);
440
+}
441
+
442
+static bool trans_NOR_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
443
+{
444
+ static const GVecGen4 op = {
445
+ .fni8 = gen_nor_pg_i64,
446
+ .fniv = gen_nor_pg_vec,
447
+ .fno = gen_helper_sve_nor_pppp,
448
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
449
+ };
450
+ if (a->s) {
451
+ return do_pppp_flags(s, a, &op);
452
+ } else {
453
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
454
+ }
455
+}
456
+
457
+static void gen_nand_pg_i64(TCGv_i64 pd, TCGv_i64 pn, TCGv_i64 pm, TCGv_i64 pg)
458
+{
459
+ tcg_gen_and_i64(pd, pn, pm);
460
+ tcg_gen_andc_i64(pd, pg, pd);
461
+}
462
+
463
+static void gen_nand_pg_vec(unsigned vece, TCGv_vec pd, TCGv_vec pn,
464
+ TCGv_vec pm, TCGv_vec pg)
465
+{
466
+ tcg_gen_and_vec(vece, pd, pn, pm);
467
+ tcg_gen_andc_vec(vece, pd, pg, pd);
468
+}
469
+
470
+static bool trans_NAND_pppp(DisasContext *s, arg_rprr_s *a, uint32_t insn)
471
+{
472
+ static const GVecGen4 op = {
473
+ .fni8 = gen_nand_pg_i64,
474
+ .fniv = gen_nand_pg_vec,
475
+ .fno = gen_helper_sve_nand_pppp,
476
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
477
+ };
478
+ if (a->s) {
479
+ return do_pppp_flags(s, a, &op);
480
+ } else {
481
+ return do_vecop4_p(s, &op, a->rd, a->rn, a->rm, a->pg);
482
+ }
483
+}
484
+
485
/*
486
*** SVE Predicate Misc Group
487
*/
488
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
489
index XXXXXXX..XXXXXXX 100644
490
--- a/target/arm/sve.decode
491
+++ b/target/arm/sve.decode
492
@@ -XXX,XX +XXX,XX @@
493
494
&rri rd rn imm
495
&rrr_esz rd rn rm esz
496
+&rprr_s rd pg rn rm s
497
498
###########################################################################
499
# Named instruction formats. These are generally used to
500
@@ -XXX,XX +XXX,XX @@
501
# Three operand with unused vector element size
502
@rd_rn_rm_e0 ........ ... rm:5 ... ... rn:5 rd:5 &rrr_esz esz=0
503
504
+# Three predicate operand, with governing predicate, flag setting
505
+@pd_pg_pn_pm_s ........ . s:1 .. rm:4 .. pg:4 . rn:4 . rd:4 &rprr_s
506
+
507
# Basic Load/Store with 9-bit immediate offset
508
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
509
&rri imm=%imm9_16_10
510
@@ -XXX,XX +XXX,XX @@ ORR_zzz 00000100 01 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
511
EOR_zzz 00000100 10 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
512
BIC_zzz 00000100 11 1 ..... 001 100 ..... ..... @rd_rn_rm_e0
513
514
+### SVE Predicate Logical Operations Group
515
+
516
+# SVE predicate logical operations
517
+AND_pppp 00100101 0. 00 .... 01 .... 0 .... 0 .... @pd_pg_pn_pm_s
518
+BIC_pppp 00100101 0. 00 .... 01 .... 0 .... 1 .... @pd_pg_pn_pm_s
519
+EOR_pppp 00100101 0. 00 .... 01 .... 1 .... 0 .... @pd_pg_pn_pm_s
520
+SEL_pppp 00100101 0. 00 .... 01 .... 1 .... 1 .... @pd_pg_pn_pm_s
521
+ORR_pppp 00100101 1. 00 .... 01 .... 0 .... 0 .... @pd_pg_pn_pm_s
522
+ORN_pppp 00100101 1. 00 .... 01 .... 0 .... 1 .... @pd_pg_pn_pm_s
523
+NOR_pppp 00100101 1. 00 .... 01 .... 1 .... 0 .... @pd_pg_pn_pm_s
524
+NAND_pppp 00100101 1. 00 .... 01 .... 1 .... 1 .... @pd_pg_pn_pm_s
525
+
526
### SVE Predicate Misc Group
527
528
# SVE predicate test
529
--
2.17.0
--
2.20.1

1
From: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
1
From: Amanieu d'Antras <amanieu@gmail.com>
2
2
3
Generate an XML description for the cp-regs.
3
This fixes signal handlers running with the wrong endianness if the
4
Register these regs with the gdb_register_coprocessor().
4
interrupted code used SETEND to dynamically switch endianness.
5
Add arm_gdb_get_sysreg() to use it as a callback to read those regs.
6
Add a dummy arm_gdb_set_sysreg().
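For a feel of what the dynamically generated description looks like, here is
a tiny standalone sketch that emits a feature block with the same GLib
GString calls the patch uses; the two register names are invented examples,
and the real output is built per-CPU from the cp_regs hash table rather than
hard-coded:

#include <glib.h>
#include <stdio.h>

int main(void)
{
    GString *s = g_string_new(NULL);

    g_string_printf(s, "<?xml version=\"1.0\"?>");
    g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
    g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
    /* One <reg> tag per system register; these two are made-up examples. */
    g_string_append_printf(s, "<reg name=\"%s\" bitsize=\"%d\" group=\"cp_regs\"/>",
                           "EXAMPLE_REG_A", 64);
    g_string_append_printf(s, "<reg name=\"%s\" bitsize=\"%d\" group=\"cp_regs\"/>",
                           "EXAMPLE_REG_B", 32);
    g_string_append_printf(s, "</feature>");

    puts(s->str);
    g_string_free(s, TRUE);
    return 0;
}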
7
5
8
Signed-off-by: Abdallah Bouassida <abdallah.bouassida@lauterbach.com>
6
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
9
Tested-by: Alex Bennée <alex.bennee@linaro.org>
10
Message-id: 1524153386-3550-4-git-send-email-abdallah.bouassida@lauterbach.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20200511131117.2486486-1-amanieu@gmail.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
include/qom/cpu.h | 5 ++-
11
linux-user/arm/signal.c | 8 +++++++-
15
target/arm/cpu.h | 26 +++++++++++++++
12
1 file changed, 7 insertions(+), 1 deletion(-)
16
gdbstub.c | 10 ++++++
17
target/arm/cpu.c | 1 +
18
target/arm/gdbstub.c | 76 ++++++++++++++++++++++++++++++++++++++++++++
19
target/arm/helper.c | 26 +++++++++++++++
20
6 files changed, 143 insertions(+), 1 deletion(-)
21
13
22
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
14
diff --git a/linux-user/arm/signal.c b/linux-user/arm/signal.c
23
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
24
--- a/include/qom/cpu.h
16
--- a/linux-user/arm/signal.c
25
+++ b/include/qom/cpu.h
17
+++ b/linux-user/arm/signal.c
26
@@ -XXX,XX +XXX,XX @@ struct TranslationBlock;
18
@@ -XXX,XX +XXX,XX @@ setup_return(CPUARMState *env, struct target_sigaction *ka,
27
* before the insn which triggers a watchpoint rather than after it.
19
} else {
28
* @gdb_arch_name: Optional callback that returns the architecture name known
20
cpsr &= ~CPSR_T;
29
* to GDB. The caller must free the returned string with g_free.
30
+ * @gdb_get_dynamic_xml: Callback to return dynamically generated XML for the
31
+ * gdb stub. Returns a pointer to the XML contents for the specified XML file
32
+ * or NULL if the CPU doesn't have a dynamically generated content for it.
33
* @cpu_exec_enter: Callback for cpu_exec preparation.
34
* @cpu_exec_exit: Callback for cpu_exec cleanup.
35
* @cpu_exec_interrupt: Callback for processing interrupts in cpu_exec.
36
@@ -XXX,XX +XXX,XX @@ typedef struct CPUClass {
37
const struct VMStateDescription *vmsd;
38
const char *gdb_core_xml_file;
39
gchar * (*gdb_arch_name)(CPUState *cpu);
40
-
41
+ const char * (*gdb_get_dynamic_xml)(CPUState *cpu, const char *xmlname);
42
void (*cpu_exec_enter)(CPUState *cpu);
43
void (*cpu_exec_exit)(CPUState *cpu);
44
bool (*cpu_exec_interrupt)(CPUState *cpu, int interrupt_request);
45
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/cpu.h
48
+++ b/target/arm/cpu.h
49
@@ -XXX,XX +XXX,XX @@ enum {
50
s<2n+1> maps to the most significant half of d<n>
51
*/
52
53
+/**
54
+ * DynamicGDBXMLInfo:
55
+ * @desc: Contains the XML descriptions.
56
+ * @num_cpregs: Number of the Coprocessor registers seen by GDB.
57
+ * @cpregs_keys: Array that contains the corresponding Key of
58
+ * a given cpreg with the same order of the cpreg in the XML description.
59
+ */
60
+typedef struct DynamicGDBXMLInfo {
61
+ char *desc;
62
+ int num_cpregs;
63
+ uint32_t *cpregs_keys;
64
+} DynamicGDBXMLInfo;
65
+
66
/* CPU state for each instance of a generic timer (in cp15 c14) */
67
typedef struct ARMGenericTimer {
68
uint64_t cval; /* Timer CompareValue register */
69
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
70
uint64_t *cpreg_vmstate_values;
71
int32_t cpreg_vmstate_array_len;
72
73
+ DynamicGDBXMLInfo dyn_xml;
74
+
75
/* Timers used by the generic (architected) timer */
76
QEMUTimer *gt_timer[NUM_GTIMERS];
77
/* GPIO outputs for generic timer */
78
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
79
int arm_cpu_gdb_read_register(CPUState *cpu, uint8_t *buf, int reg);
80
int arm_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
81
82
+/* Dynamically generates for gdb stub an XML description of the sysregs from
83
+ * the cp_regs hashtable. Returns the registered sysregs number.
84
+ */
85
+int arm_gen_dynamic_xml(CPUState *cpu);
86
+
87
+/* Returns the dynamically generated XML for the gdb stub.
88
+ * Returns a pointer to the XML contents for the specified XML file or NULL
89
+ * if the XML name doesn't match the predefined one.
90
+ */
91
+const char *arm_gdb_get_dynamic_xml(CPUState *cpu, const char *xmlname);
92
+
93
int arm_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
94
int cpuid, void *opaque);
95
int arm_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
96
diff --git a/gdbstub.c b/gdbstub.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/gdbstub.c
99
+++ b/gdbstub.c
100
@@ -XXX,XX +XXX,XX @@ static const char *get_feature_xml(const char *p, const char **newp,
}
return target_xml;
}
+ if (cc->gdb_get_dynamic_xml) {
+ CPUState *cpu = first_cpu;
+ char *xmlname = g_strndup(p, len);
+ const char *xml = cc->gdb_get_dynamic_xml(cpu, xmlname);
+
+ g_free(xmlname);
+ if (xml) {
+ return xml;
+ }
+ }
for (i = 0; ; i++) {
name = xml_builtin[i][0];
if (!name || (strncmp(name, p, len) == 0 && strlen(name) == len))
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
cc->gdb_num_core_regs = 26;
cc->gdb_core_xml_file = "arm-core.xml";
cc->gdb_arch_name = arm_gdb_arch_name;
+ cc->gdb_get_dynamic_xml = arm_gdb_get_dynamic_xml;
cc->gdb_stop_before_watchpoint = true;
cc->debug_excp_handler = arm_debug_excp_handler;
cc->debug_check_watchpoint = arm_debug_check_watchpoint;
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "exec/gdbstub.h"

+typedef struct RegisterSysregXmlParam {
+ CPUState *cs;
+ GString *s;
+} RegisterSysregXmlParam;
+
/* Old gdb always expect FPA registers. Newer (xml-aware) gdb only expect
whatever the target description contains. Due to a historical mishap
the FPA registers appear in between core integer regs and the CPSR.
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)
/* Unknown register. */
return 0;
}
+
+static void arm_gen_one_xml_reg_tag(GString *s, DynamicGDBXMLInfo *dyn_xml,
+ ARMCPRegInfo *ri, uint32_t ri_key,
+ int bitsize)
+{
+ g_string_append_printf(s, "<reg name=\"%s\"", ri->name);
+ g_string_append_printf(s, " bitsize=\"%d\"", bitsize);
+ g_string_append_printf(s, " group=\"cp_regs\"/>");
+ dyn_xml->num_cpregs++;
+ dyn_xml->cpregs_keys[dyn_xml->num_cpregs - 1] = ri_key;
+}
+
+static void arm_register_sysreg_for_xml(gpointer key, gpointer value,
+ gpointer p)
+{
+ uint32_t ri_key = *(uint32_t *)key;
+ ARMCPRegInfo *ri = value;
+ RegisterSysregXmlParam *param = (RegisterSysregXmlParam *)p;
+ GString *s = param->s;
+ ARMCPU *cpu = ARM_CPU(param->cs);
+ CPUARMState *env = &cpu->env;
+ DynamicGDBXMLInfo *dyn_xml = &cpu->dyn_xml;
+
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_NO_GDB))) {
+ if (arm_feature(env, ARM_FEATURE_AARCH64)) {
+ if (ri->state == ARM_CP_STATE_AA64) {
+ arm_gen_one_xml_reg_tag(s , dyn_xml, ri, ri_key, 64);
+ }
+ } else {
+ if (ri->state == ARM_CP_STATE_AA32) {
+ if (!arm_feature(env, ARM_FEATURE_EL3) &&
+ (ri->secure & ARM_CP_SECSTATE_S)) {
+ return;
+ }
+ if (ri->type & ARM_CP_64BIT) {
+ arm_gen_one_xml_reg_tag(s , dyn_xml, ri, ri_key, 64);
+ } else {
+ arm_gen_one_xml_reg_tag(s , dyn_xml, ri, ri_key, 32);
+ }
+ }
+ }
+ }
+}
+
+int arm_gen_dynamic_xml(CPUState *cs)
+{
+ ARMCPU *cpu = ARM_CPU(cs);
+ GString *s = g_string_new(NULL);
+ RegisterSysregXmlParam param = {cs, s};
+
+ cpu->dyn_xml.num_cpregs = 0;
+ cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
+ g_hash_table_size(cpu->cp_regs));
+ g_string_printf(s, "<?xml version=\"1.0\"?>");
+ g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
+ g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
+ g_hash_table_foreach(cpu->cp_regs, arm_register_sysreg_for_xml, &param);
+ g_string_append_printf(s, "</feature>");
+ cpu->dyn_xml.desc = g_string_free(s, false);
+ return cpu->dyn_xml.num_cpregs;
+}
+
+const char *arm_gdb_get_dynamic_xml(CPUState *cs, const char *xmlname)
+{
+ ARMCPU *cpu = ARM_CPU(cs);
+
+ if (strcmp(xmlname, "system-registers.xml") == 0) {
+ return cpu->dyn_xml.desc;
+ }
+ return NULL;
+}
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
}
}

+static int arm_gdb_get_sysreg(CPUARMState *env, uint8_t *buf, int reg)
+{
+ ARMCPU *cpu = arm_env_get_cpu(env);
+ const ARMCPRegInfo *ri;
+ uint32_t key;
+
+ key = cpu->dyn_xml.cpregs_keys[reg];
+ ri = get_arm_cp_reginfo(cpu->cp_regs, key);
+ if (ri) {
+ if (cpreg_field_is_64bit(ri)) {
+ return gdb_get_reg64(buf, (uint64_t)read_raw_cp_reg(env, ri));
+ } else {
+ return gdb_get_reg32(buf, (uint32_t)read_raw_cp_reg(env, ri));
+ }
+ }
+ return 0;
+}
+
+static int arm_gdb_set_sysreg(CPUARMState *env, uint8_t *buf, int reg)
+{
+ return 0;
+}
+
static bool raw_accessors_invalid(const ARMCPRegInfo *ri)
{
/* Return true if the regdef would cause an assertion if you called
@@ -XXX,XX +XXX,XX @@ void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
19, "arm-vfp.xml", 0);
}
+ gdb_register_coprocessor(cs, arm_gdb_get_sysreg, arm_gdb_set_sysreg,
+ arm_gen_dynamic_xml(cs),
+ "system-registers.xml", 0);
}

/* Sort alphabetically by type name, except for "any". */
--
2.17.0

}
+ if (env->cp15.sctlr_el[1] & SCTLR_E0E) {
+ cpsr |= CPSR_E;
+ } else {
+ cpsr &= ~CPSR_E;
+ }

if (ka->sa_flags & TARGET_SA_RESTORER) {
if (is_fdpic) {
@@ -XXX,XX +XXX,XX @@ setup_return(CPUARMState *env, struct target_sigaction *ka,
env->regs[13] = frame_addr;
env->regs[14] = retcode;
env->regs[15] = handler & (thumb ? ~1 : ~3);
- cpsr_write(env, cpsr, CPSR_IT | CPSR_T, CPSRWriteByInstr);
+ cpsr_write(env, cpsr, CPSR_IT | CPSR_T | CPSR_E, CPSRWriteByInstr);
+ arm_rebuild_hflags(env);

return 0;
}
--
2.20.1
diff view generated by jsdifflib
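As a rough standalone sketch (not QEMU code), the XML target description that arm_gen_dynamic_xml() assembles above with GStrings has the shape shown below; the register names here are invented for illustration, and the real list comes from the CPU's cp_regs hash table.

/*
 * Standalone sketch, not QEMU code: prints the same shape of gdb target
 * description that the dynamic XML patch above generates.  DEMO_* names
 * and sizes are made up for the demo.
 */
#include <stdio.h>

struct demo_reg {
    const char *name;
    int bitsize;
};

int main(void)
{
    static const struct demo_reg regs[] = {
        { "DEMO_SCTLR", 32 },   /* hypothetical 32-bit AArch32 sysreg */
        { "DEMO_TTBR0", 64 },   /* hypothetical 64-bit sysreg */
    };
    size_t i;

    printf("<?xml version=\"1.0\"?>");
    printf("<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
    printf("<feature name=\"org.qemu.gdb.arm.sys.regs\">");
    for (i = 0; i < sizeof(regs) / sizeof(regs[0]); i++) {
        printf("<reg name=\"%s\" bitsize=\"%d\" group=\"cp_regs\"/>",
               regs[i].name, regs[i].bitsize);
    }
    printf("</feature>\n");
    return 0;
}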
From: Francisco Iglesias <frasse.iglesias@gmail.com>

The ZynqMP contains two instances of a generic DMA, the GDMA, located in the
FPD (full power domain), and the ADMA, located in LPD (low power domain). This
patch adds these two DMAs to the ZynqMP board.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180503214201.29082-3-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/xlnx-zynqmp.h | 5 ++++
hw/arm/xlnx-zynqmp.c | 53 ++++++++++++++++++++++++++++++++++++
2 files changed, 58 insertions(+)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -XXX,XX +XXX,XX @@
#include "hw/sd/sdhci.h"
#include "hw/ssi/xilinx_spips.h"
#include "hw/dma/xlnx_dpdma.h"
+#include "hw/dma/xlnx-zdma.h"
#include "hw/display/xlnx_dp.h"
#include "hw/intc/xlnx-zynqmp-ipi.h"
#include "hw/timer/xlnx-zynqmp-rtc.h"
@@ -XXX,XX +XXX,XX @@
#define XLNX_ZYNQMP_NUM_UARTS 2
#define XLNX_ZYNQMP_NUM_SDHCI 2
#define XLNX_ZYNQMP_NUM_SPIS 2
+#define XLNX_ZYNQMP_NUM_GDMA_CH 8
+#define XLNX_ZYNQMP_NUM_ADMA_CH 8

#define XLNX_ZYNQMP_NUM_QSPI_BUS 2
#define XLNX_ZYNQMP_NUM_QSPI_BUS_CS 2
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPState {
XlnxDPDMAState dpdma;
XlnxZynqMPIPI ipi;
XlnxZynqMPRTC rtc;
+ XlnxZDMA gdma[XLNX_ZYNQMP_NUM_GDMA_CH];
+ XlnxZDMA adma[XLNX_ZYNQMP_NUM_ADMA_CH];

char *boot_cpu;
ARMCPU *boot_cpu_ptr;
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -XXX,XX +XXX,XX @@ static const int spi_intr[XLNX_ZYNQMP_NUM_SPIS] = {
19, 20,
};

+static const uint64_t gdma_ch_addr[XLNX_ZYNQMP_NUM_GDMA_CH] = {
+ 0xFD500000, 0xFD510000, 0xFD520000, 0xFD530000,
+ 0xFD540000, 0xFD550000, 0xFD560000, 0xFD570000
+};
+
+static const int gdma_ch_intr[XLNX_ZYNQMP_NUM_GDMA_CH] = {
+ 124, 125, 126, 127, 128, 129, 130, 131
+};
+
+static const uint64_t adma_ch_addr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
+ 0xFFA80000, 0xFFA90000, 0xFFAA0000, 0xFFAB0000,
+ 0xFFAC0000, 0xFFAD0000, 0xFFAE0000, 0xFFAF0000
+};
+
+static const int adma_ch_intr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
+ 77, 78, 79, 80, 81, 82, 83, 84
+};
+
typedef struct XlnxZynqMPGICRegion {
int region_index;
uint32_t address;
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)

object_initialize(&s->rtc, sizeof(s->rtc), TYPE_XLNX_ZYNQMP_RTC);
qdev_set_parent_bus(DEVICE(&s->rtc), sysbus_get_default());
+
+ for (i = 0; i < XLNX_ZYNQMP_NUM_GDMA_CH; i++) {
+ object_initialize(&s->gdma[i], sizeof(s->gdma[i]), TYPE_XLNX_ZDMA);
+ qdev_set_parent_bus(DEVICE(&s->gdma[i]), sysbus_get_default());
+ }
+
+ for (i = 0; i < XLNX_ZYNQMP_NUM_ADMA_CH; i++) {
+ object_initialize(&s->adma[i], sizeof(s->adma[i]), TYPE_XLNX_ZDMA);
+ qdev_set_parent_bus(DEVICE(&s->adma[i]), sysbus_get_default());
+ }
}

static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
}
sysbus_mmio_map(SYS_BUS_DEVICE(&s->rtc), 0, RTC_ADDR);
sysbus_connect_irq(SYS_BUS_DEVICE(&s->rtc), 0, gic_spi[RTC_IRQ]);
+
+ for (i = 0; i < XLNX_ZYNQMP_NUM_GDMA_CH; i++) {
+ object_property_set_uint(OBJECT(&s->gdma[i]), 128, "bus-width", &err);
+ object_property_set_bool(OBJECT(&s->gdma[i]), true, "realized", &err);
+ if (err) {
+ error_propagate(errp, err);
+ return;
+ }
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gdma[i]), 0, gdma_ch_addr[i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gdma[i]), 0,
+ gic_spi[gdma_ch_intr[i]]);
+ }
+
+ for (i = 0; i < XLNX_ZYNQMP_NUM_ADMA_CH; i++) {
+ object_property_set_bool(OBJECT(&s->adma[i]), true, "realized", &err);
+ if (err) {
+ error_propagate(errp, err);
+ return;
+ }
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->adma[i]), 0, adma_ch_addr[i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->adma[i]), 0,
+ gic_spi[adma_ch_intr[i]]);
+ }
}

static Property xlnx_zynqmp_props[] = {
--
2.17.0

The Arm signal-handling code has some parts ifdeffed with a
TARGET_CONFIG_CPU_32, which is always defined. This is a leftover
from when this code's structure was based on the Linux kernel
signal handling code, where it was intended to support 26-bit
Arm CPUs. The kernel dropped its CONFIG_CPU_32 in kernel commit
4da8b8208eded0ba21e3 in 2009.

QEMU has never had 26-bit CPU support and is unlikely to ever
add it; we certainly aren't going to support 26-bit Linux
binaries via linux-user mode. The ifdef is just unhelpful
noise, so remove it entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200518143014.20689-1-peter.maydell@linaro.org
---
linux-user/arm/signal.c | 6 ------
1 file changed, 6 deletions(-)

diff --git a/linux-user/arm/signal.c b/linux-user/arm/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/signal.c
+++ b/linux-user/arm/signal.c
@@ -XXX,XX +XXX,XX @@ struct rt_sigframe_v2
abi_ulong retcode[4];
};

-#define TARGET_CONFIG_CPU_32 1
-
/*
* For ARM syscalls, we encode the syscall number into the instruction.
*/
@@ -XXX,XX +XXX,XX @@ setup_sigcontext(struct target_sigcontext *sc, /*struct _fpstate *fpstate,*/
__put_user(env->regs[13], &sc->arm_sp);
__put_user(env->regs[14], &sc->arm_lr);
__put_user(env->regs[15], &sc->arm_pc);
-#ifdef TARGET_CONFIG_CPU_32
__put_user(cpsr_read(env), &sc->arm_cpsr);
-#endif

__put_user(/* current->thread.trap_no */ 0, &sc->trap_no);
__put_user(/* current->thread.error_code */ 0, &sc->error_code);
@@ -XXX,XX +XXX,XX @@ restore_sigcontext(CPUARMState *env, struct target_sigcontext *sc)
__get_user(env->regs[13], &sc->arm_sp);
__get_user(env->regs[14], &sc->arm_lr);
__get_user(env->regs[15], &sc->arm_pc);
-#ifdef TARGET_CONFIG_CPU_32
__get_user(cpsr, &sc->arm_cpsr);
cpsr_write(env, cpsr, CPSR_USER | CPSR_EXEC, CPSRWriteByInstr);
arm_rebuild_hflags(env);
-#endif

err |= !valid_user_regs(env);

--
2.20.1
diff view generated by jsdifflib
Deleted patch
From: Eric Auger <eric.auger@redhat.com>

Coverity complains about use of uninitialized Evt struct.
The EVT_SET_TYPE and similar setters use deposit32() on fields
in the struct, so they read the uninitialized existing values.
In cases where we don't set all the fields in the event struct
we'll end up leaking random uninitialized data from QEMU's
stack into the guest.

Initializing the struct with "Evt evt = {};" ought to satisfy
Coverity and fix the data leak.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1526493784-25328-2-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/smmuv3.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)

void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
{
- Evt evt;
+ Evt evt = {};
MemTxResult r;

if (!smmuv3_eventq_enabled(s)) {
--
2.17.0
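A minimal standalone illustration (not QEMU code) of why the "Evt evt = {};" change above matters: a deposit-style field setter only writes some bits and leaves the rest of the word as it was, so starting from uninitialized stack memory would carry whatever garbage was there into the record. The DemoEvt type and demo_deposit32() helper below are invented for the demo; QEMU's real deposit32() takes and returns a value rather than a pointer.

/* not QEMU code: shows zero-init + partial bit deposit */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t word[8];          /* stand-in for the real event record */
} DemoEvt;

/* deposit 'len' bits of 'val' at bit 'start' of *p */
static void demo_deposit32(uint32_t *p, int start, int len, uint32_t val)
{
    uint32_t mask = ((len < 32 ? (1u << len) : 0u) - 1u) << start;
    *p = (*p & ~mask) | ((val << start) & mask);
}

int main(void)
{
    DemoEvt evt = {};          /* zero first, so untouched bits stay 0 */

    demo_deposit32(&evt.word[0], 0, 8, 0x2a);   /* set only one field */
    printf("word[0] = 0x%08x, word[1] = 0x%08x\n", evt.word[0], evt.word[1]);
    return 0;
}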
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20180516223007.10256-9-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
---
7
target/arm/helper-sve.h | 145 +++++++++++++++++++++++++++
8
target/arm/sve_helper.c | 194 +++++++++++++++++++++++++++++++++++++
9
target/arm/translate-sve.c | 68 +++++++++++++
10
target/arm/sve.decode | 42 ++++++++
11
4 files changed, 449 insertions(+)
12
13
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper-sve.h
16
+++ b/target/arm/helper-sve.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(sve_predtest, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
18
DEF_HELPER_FLAGS_3(sve_pfirst, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
19
DEF_HELPER_FLAGS_3(sve_pnext, TCG_CALL_NO_WG, i32, ptr, ptr, i32)
20
21
+DEF_HELPER_FLAGS_5(sve_and_zpzz_b, TCG_CALL_NO_RWG,
22
+ void, ptr, ptr, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_5(sve_and_zpzz_h, TCG_CALL_NO_RWG,
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(sve_and_zpzz_s, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(sve_and_zpzz_d, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_5(sve_eor_zpzz_b, TCG_CALL_NO_RWG,
31
+ void, ptr, ptr, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_5(sve_eor_zpzz_h, TCG_CALL_NO_RWG,
33
+ void, ptr, ptr, ptr, ptr, i32)
34
+DEF_HELPER_FLAGS_5(sve_eor_zpzz_s, TCG_CALL_NO_RWG,
35
+ void, ptr, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_5(sve_eor_zpzz_d, TCG_CALL_NO_RWG,
37
+ void, ptr, ptr, ptr, ptr, i32)
38
+
39
+DEF_HELPER_FLAGS_5(sve_orr_zpzz_b, TCG_CALL_NO_RWG,
40
+ void, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(sve_orr_zpzz_h, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_5(sve_orr_zpzz_s, TCG_CALL_NO_RWG,
44
+ void, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_5(sve_orr_zpzz_d, TCG_CALL_NO_RWG,
46
+ void, ptr, ptr, ptr, ptr, i32)
47
+
48
+DEF_HELPER_FLAGS_5(sve_bic_zpzz_b, TCG_CALL_NO_RWG,
49
+ void, ptr, ptr, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_5(sve_bic_zpzz_h, TCG_CALL_NO_RWG,
51
+ void, ptr, ptr, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_5(sve_bic_zpzz_s, TCG_CALL_NO_RWG,
53
+ void, ptr, ptr, ptr, ptr, i32)
54
+DEF_HELPER_FLAGS_5(sve_bic_zpzz_d, TCG_CALL_NO_RWG,
55
+ void, ptr, ptr, ptr, ptr, i32)
56
+
57
+DEF_HELPER_FLAGS_5(sve_add_zpzz_b, TCG_CALL_NO_RWG,
58
+ void, ptr, ptr, ptr, ptr, i32)
59
+DEF_HELPER_FLAGS_5(sve_add_zpzz_h, TCG_CALL_NO_RWG,
60
+ void, ptr, ptr, ptr, ptr, i32)
61
+DEF_HELPER_FLAGS_5(sve_add_zpzz_s, TCG_CALL_NO_RWG,
62
+ void, ptr, ptr, ptr, ptr, i32)
63
+DEF_HELPER_FLAGS_5(sve_add_zpzz_d, TCG_CALL_NO_RWG,
64
+ void, ptr, ptr, ptr, ptr, i32)
65
+
66
+DEF_HELPER_FLAGS_5(sve_sub_zpzz_b, TCG_CALL_NO_RWG,
67
+ void, ptr, ptr, ptr, ptr, i32)
68
+DEF_HELPER_FLAGS_5(sve_sub_zpzz_h, TCG_CALL_NO_RWG,
69
+ void, ptr, ptr, ptr, ptr, i32)
70
+DEF_HELPER_FLAGS_5(sve_sub_zpzz_s, TCG_CALL_NO_RWG,
71
+ void, ptr, ptr, ptr, ptr, i32)
72
+DEF_HELPER_FLAGS_5(sve_sub_zpzz_d, TCG_CALL_NO_RWG,
73
+ void, ptr, ptr, ptr, ptr, i32)
74
+
75
+DEF_HELPER_FLAGS_5(sve_smax_zpzz_b, TCG_CALL_NO_RWG,
76
+ void, ptr, ptr, ptr, ptr, i32)
77
+DEF_HELPER_FLAGS_5(sve_smax_zpzz_h, TCG_CALL_NO_RWG,
78
+ void, ptr, ptr, ptr, ptr, i32)
79
+DEF_HELPER_FLAGS_5(sve_smax_zpzz_s, TCG_CALL_NO_RWG,
80
+ void, ptr, ptr, ptr, ptr, i32)
81
+DEF_HELPER_FLAGS_5(sve_smax_zpzz_d, TCG_CALL_NO_RWG,
82
+ void, ptr, ptr, ptr, ptr, i32)
83
+
84
+DEF_HELPER_FLAGS_5(sve_umax_zpzz_b, TCG_CALL_NO_RWG,
85
+ void, ptr, ptr, ptr, ptr, i32)
86
+DEF_HELPER_FLAGS_5(sve_umax_zpzz_h, TCG_CALL_NO_RWG,
87
+ void, ptr, ptr, ptr, ptr, i32)
88
+DEF_HELPER_FLAGS_5(sve_umax_zpzz_s, TCG_CALL_NO_RWG,
89
+ void, ptr, ptr, ptr, ptr, i32)
90
+DEF_HELPER_FLAGS_5(sve_umax_zpzz_d, TCG_CALL_NO_RWG,
91
+ void, ptr, ptr, ptr, ptr, i32)
92
+
93
+DEF_HELPER_FLAGS_5(sve_smin_zpzz_b, TCG_CALL_NO_RWG,
94
+ void, ptr, ptr, ptr, ptr, i32)
95
+DEF_HELPER_FLAGS_5(sve_smin_zpzz_h, TCG_CALL_NO_RWG,
96
+ void, ptr, ptr, ptr, ptr, i32)
97
+DEF_HELPER_FLAGS_5(sve_smin_zpzz_s, TCG_CALL_NO_RWG,
98
+ void, ptr, ptr, ptr, ptr, i32)
99
+DEF_HELPER_FLAGS_5(sve_smin_zpzz_d, TCG_CALL_NO_RWG,
100
+ void, ptr, ptr, ptr, ptr, i32)
101
+
102
+DEF_HELPER_FLAGS_5(sve_umin_zpzz_b, TCG_CALL_NO_RWG,
103
+ void, ptr, ptr, ptr, ptr, i32)
104
+DEF_HELPER_FLAGS_5(sve_umin_zpzz_h, TCG_CALL_NO_RWG,
105
+ void, ptr, ptr, ptr, ptr, i32)
106
+DEF_HELPER_FLAGS_5(sve_umin_zpzz_s, TCG_CALL_NO_RWG,
107
+ void, ptr, ptr, ptr, ptr, i32)
108
+DEF_HELPER_FLAGS_5(sve_umin_zpzz_d, TCG_CALL_NO_RWG,
109
+ void, ptr, ptr, ptr, ptr, i32)
110
+
111
+DEF_HELPER_FLAGS_5(sve_sabd_zpzz_b, TCG_CALL_NO_RWG,
112
+ void, ptr, ptr, ptr, ptr, i32)
113
+DEF_HELPER_FLAGS_5(sve_sabd_zpzz_h, TCG_CALL_NO_RWG,
114
+ void, ptr, ptr, ptr, ptr, i32)
115
+DEF_HELPER_FLAGS_5(sve_sabd_zpzz_s, TCG_CALL_NO_RWG,
116
+ void, ptr, ptr, ptr, ptr, i32)
117
+DEF_HELPER_FLAGS_5(sve_sabd_zpzz_d, TCG_CALL_NO_RWG,
118
+ void, ptr, ptr, ptr, ptr, i32)
119
+
120
+DEF_HELPER_FLAGS_5(sve_uabd_zpzz_b, TCG_CALL_NO_RWG,
121
+ void, ptr, ptr, ptr, ptr, i32)
122
+DEF_HELPER_FLAGS_5(sve_uabd_zpzz_h, TCG_CALL_NO_RWG,
123
+ void, ptr, ptr, ptr, ptr, i32)
124
+DEF_HELPER_FLAGS_5(sve_uabd_zpzz_s, TCG_CALL_NO_RWG,
125
+ void, ptr, ptr, ptr, ptr, i32)
126
+DEF_HELPER_FLAGS_5(sve_uabd_zpzz_d, TCG_CALL_NO_RWG,
127
+ void, ptr, ptr, ptr, ptr, i32)
128
+
129
+DEF_HELPER_FLAGS_5(sve_mul_zpzz_b, TCG_CALL_NO_RWG,
130
+ void, ptr, ptr, ptr, ptr, i32)
131
+DEF_HELPER_FLAGS_5(sve_mul_zpzz_h, TCG_CALL_NO_RWG,
132
+ void, ptr, ptr, ptr, ptr, i32)
133
+DEF_HELPER_FLAGS_5(sve_mul_zpzz_s, TCG_CALL_NO_RWG,
134
+ void, ptr, ptr, ptr, ptr, i32)
135
+DEF_HELPER_FLAGS_5(sve_mul_zpzz_d, TCG_CALL_NO_RWG,
136
+ void, ptr, ptr, ptr, ptr, i32)
137
+
138
+DEF_HELPER_FLAGS_5(sve_smulh_zpzz_b, TCG_CALL_NO_RWG,
139
+ void, ptr, ptr, ptr, ptr, i32)
140
+DEF_HELPER_FLAGS_5(sve_smulh_zpzz_h, TCG_CALL_NO_RWG,
141
+ void, ptr, ptr, ptr, ptr, i32)
142
+DEF_HELPER_FLAGS_5(sve_smulh_zpzz_s, TCG_CALL_NO_RWG,
143
+ void, ptr, ptr, ptr, ptr, i32)
144
+DEF_HELPER_FLAGS_5(sve_smulh_zpzz_d, TCG_CALL_NO_RWG,
145
+ void, ptr, ptr, ptr, ptr, i32)
146
+
147
+DEF_HELPER_FLAGS_5(sve_umulh_zpzz_b, TCG_CALL_NO_RWG,
148
+ void, ptr, ptr, ptr, ptr, i32)
149
+DEF_HELPER_FLAGS_5(sve_umulh_zpzz_h, TCG_CALL_NO_RWG,
150
+ void, ptr, ptr, ptr, ptr, i32)
151
+DEF_HELPER_FLAGS_5(sve_umulh_zpzz_s, TCG_CALL_NO_RWG,
152
+ void, ptr, ptr, ptr, ptr, i32)
153
+DEF_HELPER_FLAGS_5(sve_umulh_zpzz_d, TCG_CALL_NO_RWG,
154
+ void, ptr, ptr, ptr, ptr, i32)
155
+
156
+DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
157
+ void, ptr, ptr, ptr, ptr, i32)
158
+DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
159
+ void, ptr, ptr, ptr, ptr, i32)
160
+
161
+DEF_HELPER_FLAGS_5(sve_udiv_zpzz_s, TCG_CALL_NO_RWG,
162
+ void, ptr, ptr, ptr, ptr, i32)
163
+DEF_HELPER_FLAGS_5(sve_udiv_zpzz_d, TCG_CALL_NO_RWG,
164
+ void, ptr, ptr, ptr, ptr, i32)
165
+
166
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
167
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
168
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
169
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
170
index XXXXXXX..XXXXXXX 100644
171
--- a/target/arm/sve_helper.c
172
+++ b/target/arm/sve_helper.c
173
@@ -XXX,XX +XXX,XX @@
174
#include "tcg/tcg-gvec-desc.h"
175
176
177
+/* Note that vector data is stored in host-endian 64-bit chunks,
178
+ so addressing units smaller than that needs a host-endian fixup. */
179
+#ifdef HOST_WORDS_BIGENDIAN
180
+#define H1(x) ((x) ^ 7)
181
+#define H1_2(x) ((x) ^ 6)
182
+#define H1_4(x) ((x) ^ 4)
183
+#define H2(x) ((x) ^ 3)
184
+#define H4(x) ((x) ^ 1)
185
+#else
186
+#define H1(x) (x)
187
+#define H1_2(x) (x)
188
+#define H1_4(x) (x)
189
+#define H2(x) (x)
190
+#define H4(x) (x)
191
+#endif
192
+
193
/* Return a value for NZCV as per the ARM PredTest pseudofunction.
194
*
195
* The return value has bit 31 set if N is set, bit 1 set if Z is clear,
196
@@ -XXX,XX +XXX,XX @@ LOGICAL_PPPP(sve_nand_pppp, DO_NAND)
197
#undef DO_SEL
198
#undef LOGICAL_PPPP
199
200
+/* Fully general three-operand expander, controlled by a predicate.
201
+ * This is complicated by the host-endian storage of the register file.
202
+ */
203
+/* ??? I don't expect the compiler could ever vectorize this itself.
204
+ * With some tables we can convert bit masks to byte masks, and with
205
+ * extra care wrt byte/word ordering we could use gcc generic vectors
206
+ * and do 16 bytes at a time.
207
+ */
208
+#define DO_ZPZZ(NAME, TYPE, H, OP) \
209
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
210
+{ \
211
+ intptr_t i, opr_sz = simd_oprsz(desc); \
212
+ for (i = 0; i < opr_sz; ) { \
213
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
214
+ do { \
215
+ if (pg & 1) { \
216
+ TYPE nn = *(TYPE *)(vn + H(i)); \
217
+ TYPE mm = *(TYPE *)(vm + H(i)); \
218
+ *(TYPE *)(vd + H(i)) = OP(nn, mm); \
219
+ } \
220
+ i += sizeof(TYPE), pg >>= sizeof(TYPE); \
221
+ } while (i & 15); \
222
+ } \
223
+}
224
+
225
+/* Similarly, specialized for 64-bit operands. */
226
+#define DO_ZPZZ_D(NAME, TYPE, OP) \
227
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
228
+{ \
229
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
230
+ TYPE *d = vd, *n = vn, *m = vm; \
231
+ uint8_t *pg = vg; \
232
+ for (i = 0; i < opr_sz; i += 1) { \
233
+ if (pg[H1(i)] & 1) { \
234
+ TYPE nn = n[i], mm = m[i]; \
235
+ d[i] = OP(nn, mm); \
236
+ } \
237
+ } \
238
+}
239
+
240
+#define DO_AND(N, M) (N & M)
241
+#define DO_EOR(N, M) (N ^ M)
242
+#define DO_ORR(N, M) (N | M)
243
+#define DO_BIC(N, M) (N & ~M)
244
+#define DO_ADD(N, M) (N + M)
245
+#define DO_SUB(N, M) (N - M)
246
+#define DO_MAX(N, M) ((N) >= (M) ? (N) : (M))
247
+#define DO_MIN(N, M) ((N) >= (M) ? (M) : (N))
248
+#define DO_ABD(N, M) ((N) >= (M) ? (N) - (M) : (M) - (N))
249
+#define DO_MUL(N, M) (N * M)
250
+#define DO_DIV(N, M) (M ? N / M : 0)
251
+
252
+DO_ZPZZ(sve_and_zpzz_b, uint8_t, H1, DO_AND)
253
+DO_ZPZZ(sve_and_zpzz_h, uint16_t, H1_2, DO_AND)
254
+DO_ZPZZ(sve_and_zpzz_s, uint32_t, H1_4, DO_AND)
255
+DO_ZPZZ_D(sve_and_zpzz_d, uint64_t, DO_AND)
256
+
257
+DO_ZPZZ(sve_orr_zpzz_b, uint8_t, H1, DO_ORR)
258
+DO_ZPZZ(sve_orr_zpzz_h, uint16_t, H1_2, DO_ORR)
259
+DO_ZPZZ(sve_orr_zpzz_s, uint32_t, H1_4, DO_ORR)
260
+DO_ZPZZ_D(sve_orr_zpzz_d, uint64_t, DO_ORR)
261
+
262
+DO_ZPZZ(sve_eor_zpzz_b, uint8_t, H1, DO_EOR)
263
+DO_ZPZZ(sve_eor_zpzz_h, uint16_t, H1_2, DO_EOR)
264
+DO_ZPZZ(sve_eor_zpzz_s, uint32_t, H1_4, DO_EOR)
265
+DO_ZPZZ_D(sve_eor_zpzz_d, uint64_t, DO_EOR)
266
+
267
+DO_ZPZZ(sve_bic_zpzz_b, uint8_t, H1, DO_BIC)
268
+DO_ZPZZ(sve_bic_zpzz_h, uint16_t, H1_2, DO_BIC)
269
+DO_ZPZZ(sve_bic_zpzz_s, uint32_t, H1_4, DO_BIC)
270
+DO_ZPZZ_D(sve_bic_zpzz_d, uint64_t, DO_BIC)
271
+
272
+DO_ZPZZ(sve_add_zpzz_b, uint8_t, H1, DO_ADD)
273
+DO_ZPZZ(sve_add_zpzz_h, uint16_t, H1_2, DO_ADD)
274
+DO_ZPZZ(sve_add_zpzz_s, uint32_t, H1_4, DO_ADD)
275
+DO_ZPZZ_D(sve_add_zpzz_d, uint64_t, DO_ADD)
276
+
277
+DO_ZPZZ(sve_sub_zpzz_b, uint8_t, H1, DO_SUB)
278
+DO_ZPZZ(sve_sub_zpzz_h, uint16_t, H1_2, DO_SUB)
279
+DO_ZPZZ(sve_sub_zpzz_s, uint32_t, H1_4, DO_SUB)
280
+DO_ZPZZ_D(sve_sub_zpzz_d, uint64_t, DO_SUB)
281
+
282
+DO_ZPZZ(sve_smax_zpzz_b, int8_t, H1, DO_MAX)
283
+DO_ZPZZ(sve_smax_zpzz_h, int16_t, H1_2, DO_MAX)
284
+DO_ZPZZ(sve_smax_zpzz_s, int32_t, H1_4, DO_MAX)
285
+DO_ZPZZ_D(sve_smax_zpzz_d, int64_t, DO_MAX)
286
+
287
+DO_ZPZZ(sve_umax_zpzz_b, uint8_t, H1, DO_MAX)
288
+DO_ZPZZ(sve_umax_zpzz_h, uint16_t, H1_2, DO_MAX)
289
+DO_ZPZZ(sve_umax_zpzz_s, uint32_t, H1_4, DO_MAX)
290
+DO_ZPZZ_D(sve_umax_zpzz_d, uint64_t, DO_MAX)
291
+
292
+DO_ZPZZ(sve_smin_zpzz_b, int8_t, H1, DO_MIN)
293
+DO_ZPZZ(sve_smin_zpzz_h, int16_t, H1_2, DO_MIN)
294
+DO_ZPZZ(sve_smin_zpzz_s, int32_t, H1_4, DO_MIN)
295
+DO_ZPZZ_D(sve_smin_zpzz_d, int64_t, DO_MIN)
296
+
297
+DO_ZPZZ(sve_umin_zpzz_b, uint8_t, H1, DO_MIN)
298
+DO_ZPZZ(sve_umin_zpzz_h, uint16_t, H1_2, DO_MIN)
299
+DO_ZPZZ(sve_umin_zpzz_s, uint32_t, H1_4, DO_MIN)
300
+DO_ZPZZ_D(sve_umin_zpzz_d, uint64_t, DO_MIN)
301
+
302
+DO_ZPZZ(sve_sabd_zpzz_b, int8_t, H1, DO_ABD)
303
+DO_ZPZZ(sve_sabd_zpzz_h, int16_t, H1_2, DO_ABD)
304
+DO_ZPZZ(sve_sabd_zpzz_s, int32_t, H1_4, DO_ABD)
305
+DO_ZPZZ_D(sve_sabd_zpzz_d, int64_t, DO_ABD)
306
+
307
+DO_ZPZZ(sve_uabd_zpzz_b, uint8_t, H1, DO_ABD)
308
+DO_ZPZZ(sve_uabd_zpzz_h, uint16_t, H1_2, DO_ABD)
309
+DO_ZPZZ(sve_uabd_zpzz_s, uint32_t, H1_4, DO_ABD)
310
+DO_ZPZZ_D(sve_uabd_zpzz_d, uint64_t, DO_ABD)
311
+
312
+/* Because the computation type is at least twice as large as required,
313
+ these work for both signed and unsigned source types. */
314
+static inline uint8_t do_mulh_b(int32_t n, int32_t m)
315
+{
316
+ return (n * m) >> 8;
317
+}
318
+
319
+static inline uint16_t do_mulh_h(int32_t n, int32_t m)
320
+{
321
+ return (n * m) >> 16;
322
+}
323
+
324
+static inline uint32_t do_mulh_s(int64_t n, int64_t m)
325
+{
326
+ return (n * m) >> 32;
327
+}
328
+
329
+static inline uint64_t do_smulh_d(uint64_t n, uint64_t m)
330
+{
331
+ uint64_t lo, hi;
332
+ muls64(&lo, &hi, n, m);
333
+ return hi;
334
+}
335
+
336
+static inline uint64_t do_umulh_d(uint64_t n, uint64_t m)
337
+{
338
+ uint64_t lo, hi;
339
+ mulu64(&lo, &hi, n, m);
340
+ return hi;
341
+}
342
+
343
+DO_ZPZZ(sve_mul_zpzz_b, uint8_t, H1, DO_MUL)
344
+DO_ZPZZ(sve_mul_zpzz_h, uint16_t, H1_2, DO_MUL)
345
+DO_ZPZZ(sve_mul_zpzz_s, uint32_t, H1_4, DO_MUL)
346
+DO_ZPZZ_D(sve_mul_zpzz_d, uint64_t, DO_MUL)
347
+
348
+DO_ZPZZ(sve_smulh_zpzz_b, int8_t, H1, do_mulh_b)
349
+DO_ZPZZ(sve_smulh_zpzz_h, int16_t, H1_2, do_mulh_h)
350
+DO_ZPZZ(sve_smulh_zpzz_s, int32_t, H1_4, do_mulh_s)
351
+DO_ZPZZ_D(sve_smulh_zpzz_d, uint64_t, do_smulh_d)
352
+
353
+DO_ZPZZ(sve_umulh_zpzz_b, uint8_t, H1, do_mulh_b)
354
+DO_ZPZZ(sve_umulh_zpzz_h, uint16_t, H1_2, do_mulh_h)
355
+DO_ZPZZ(sve_umulh_zpzz_s, uint32_t, H1_4, do_mulh_s)
356
+DO_ZPZZ_D(sve_umulh_zpzz_d, uint64_t, do_umulh_d)
357
+
358
+DO_ZPZZ(sve_sdiv_zpzz_s, int32_t, H1_4, DO_DIV)
359
+DO_ZPZZ_D(sve_sdiv_zpzz_d, int64_t, DO_DIV)
360
+
361
+DO_ZPZZ(sve_udiv_zpzz_s, uint32_t, H1_4, DO_DIV)
362
+DO_ZPZZ_D(sve_udiv_zpzz_d, uint64_t, DO_DIV)
363
+
364
+#undef DO_ZPZZ
365
+#undef DO_ZPZZ_D
366
+#undef DO_AND
367
+#undef DO_ORR
368
+#undef DO_EOR
369
+#undef DO_BIC
370
+#undef DO_ADD
371
+#undef DO_SUB
372
+#undef DO_MAX
373
+#undef DO_MIN
374
+#undef DO_ABD
375
+#undef DO_MUL
376
+#undef DO_DIV
377
+
378
/* Similar to the ARM LastActiveElement pseudocode function, except the
379
result is multiplied by the element size. This includes the not found
380
indication; e.g. not found for esz=3 is -8. */
381
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
382
index XXXXXXX..XXXXXXX 100644
383
--- a/target/arm/translate-sve.c
384
+++ b/target/arm/translate-sve.c
385
@@ -XXX,XX +XXX,XX @@ static bool trans_BIC_zzz(DisasContext *s, arg_rrr_esz *a, uint32_t insn)
386
return do_vector3_z(s, tcg_gen_gvec_andc, 0, a->rd, a->rn, a->rm);
387
}
388
389
+/*
390
+ *** SVE Integer Arithmetic - Binary Predicated Group
391
+ */
392
+
393
+static bool do_zpzz_ool(DisasContext *s, arg_rprr_esz *a, gen_helper_gvec_4 *fn)
394
+{
395
+ unsigned vsz = vec_full_reg_size(s);
396
+ if (fn == NULL) {
397
+ return false;
398
+ }
399
+ if (sve_access_check(s)) {
400
+ tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
401
+ vec_full_reg_offset(s, a->rn),
402
+ vec_full_reg_offset(s, a->rm),
403
+ pred_full_reg_offset(s, a->pg),
404
+ vsz, vsz, 0, fn);
405
+ }
406
+ return true;
407
+}
408
+
409
+#define DO_ZPZZ(NAME, name) \
410
+static bool trans_##NAME##_zpzz(DisasContext *s, arg_rprr_esz *a, \
411
+ uint32_t insn) \
412
+{ \
413
+ static gen_helper_gvec_4 * const fns[4] = { \
414
+ gen_helper_sve_##name##_zpzz_b, gen_helper_sve_##name##_zpzz_h, \
415
+ gen_helper_sve_##name##_zpzz_s, gen_helper_sve_##name##_zpzz_d, \
416
+ }; \
417
+ return do_zpzz_ool(s, a, fns[a->esz]); \
418
+}
419
+
420
+DO_ZPZZ(AND, and)
421
+DO_ZPZZ(EOR, eor)
422
+DO_ZPZZ(ORR, orr)
423
+DO_ZPZZ(BIC, bic)
424
+
425
+DO_ZPZZ(ADD, add)
426
+DO_ZPZZ(SUB, sub)
427
+
428
+DO_ZPZZ(SMAX, smax)
429
+DO_ZPZZ(UMAX, umax)
430
+DO_ZPZZ(SMIN, smin)
431
+DO_ZPZZ(UMIN, umin)
432
+DO_ZPZZ(SABD, sabd)
433
+DO_ZPZZ(UABD, uabd)
434
+
435
+DO_ZPZZ(MUL, mul)
436
+DO_ZPZZ(SMULH, smulh)
437
+DO_ZPZZ(UMULH, umulh)
438
+
439
+static bool trans_SDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
440
+{
441
+ static gen_helper_gvec_4 * const fns[4] = {
442
+ NULL, NULL, gen_helper_sve_sdiv_zpzz_s, gen_helper_sve_sdiv_zpzz_d
443
+ };
444
+ return do_zpzz_ool(s, a, fns[a->esz]);
445
+}
446
+
447
+static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
448
+{
449
+ static gen_helper_gvec_4 * const fns[4] = {
450
+ NULL, NULL, gen_helper_sve_udiv_zpzz_s, gen_helper_sve_udiv_zpzz_d
451
+ };
452
+ return do_zpzz_ool(s, a, fns[a->esz]);
453
+}
454
+
455
+#undef DO_ZPZZ
456
+
457
/*
458
*** SVE Predicate Logical Operations Group
459
*/
460
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
461
index XXXXXXX..XXXXXXX 100644
462
--- a/target/arm/sve.decode
463
+++ b/target/arm/sve.decode
464
@@ -XXX,XX +XXX,XX @@
465
466
%imm9_16_10 16:s6 10:3
467
468
+# Either a copy of rd (at bit 0), or a different source
469
+# as propagated via the MOVPRFX instruction.
470
+%reg_movprfx 0:5
471
+
472
###########################################################################
473
# Named attribute sets. These are used to make nice(er) names
474
# when creating helpers common to those for the individual
475
@@ -XXX,XX +XXX,XX @@
476
&rri rd rn imm
477
&rrr_esz rd rn rm esz
478
&rprr_s rd pg rn rm s
479
+&rprr_esz rd pg rn rm esz
480
481
###########################################################################
482
# Named instruction formats. These are generally used to
483
@@ -XXX,XX +XXX,XX @@
484
# Three predicate operand, with governing predicate, flag setting
485
@pd_pg_pn_pm_s ........ . s:1 .. rm:4 .. pg:4 . rn:4 . rd:4 &rprr_s
486
487
+# Two register operand, with governing predicate, vector element size
488
+@rdn_pg_rm ........ esz:2 ... ... ... pg:3 rm:5 rd:5 \
489
+ &rprr_esz rn=%reg_movprfx
490
+@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
491
+ &rprr_esz rm=%reg_movprfx
492
+
493
# Basic Load/Store with 9-bit immediate offset
494
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
495
&rri imm=%imm9_16_10
496
@@ -XXX,XX +XXX,XX @@
497
###########################################################################
498
# Instruction patterns. Grouped according to the SVE encodingindex.xhtml.
499
500
+### SVE Integer Arithmetic - Binary Predicated Group
501
+
502
+# SVE bitwise logical vector operations (predicated)
503
+ORR_zpzz 00000100 .. 011 000 000 ... ..... ..... @rdn_pg_rm
504
+EOR_zpzz 00000100 .. 011 001 000 ... ..... ..... @rdn_pg_rm
505
+AND_zpzz 00000100 .. 011 010 000 ... ..... ..... @rdn_pg_rm
506
+BIC_zpzz 00000100 .. 011 011 000 ... ..... ..... @rdn_pg_rm
507
+
508
+# SVE integer add/subtract vectors (predicated)
509
+ADD_zpzz 00000100 .. 000 000 000 ... ..... ..... @rdn_pg_rm
510
+SUB_zpzz 00000100 .. 000 001 000 ... ..... ..... @rdn_pg_rm
511
+SUB_zpzz 00000100 .. 000 011 000 ... ..... ..... @rdm_pg_rn # SUBR
512
+
513
+# SVE integer min/max/difference (predicated)
514
+SMAX_zpzz 00000100 .. 001 000 000 ... ..... ..... @rdn_pg_rm
515
+UMAX_zpzz 00000100 .. 001 001 000 ... ..... ..... @rdn_pg_rm
516
+SMIN_zpzz 00000100 .. 001 010 000 ... ..... ..... @rdn_pg_rm
517
+UMIN_zpzz 00000100 .. 001 011 000 ... ..... ..... @rdn_pg_rm
518
+SABD_zpzz 00000100 .. 001 100 000 ... ..... ..... @rdn_pg_rm
519
+UABD_zpzz 00000100 .. 001 101 000 ... ..... ..... @rdn_pg_rm
520
+
521
+# SVE integer multiply/divide (predicated)
522
+MUL_zpzz 00000100 .. 010 000 000 ... ..... ..... @rdn_pg_rm
523
+SMULH_zpzz 00000100 .. 010 010 000 ... ..... ..... @rdn_pg_rm
524
+UMULH_zpzz 00000100 .. 010 011 000 ... ..... ..... @rdn_pg_rm
525
+# Note that divide requires size >= 2; below 2 is unallocated.
526
+SDIV_zpzz 00000100 .. 010 100 000 ... ..... ..... @rdn_pg_rm
527
+UDIV_zpzz 00000100 .. 010 101 000 ... ..... ..... @rdn_pg_rm
528
+SDIV_zpzz 00000100 .. 010 110 000 ... ..... ..... @rdm_pg_rn # SDIVR
529
+UDIV_zpzz 00000100 .. 010 111 000 ... ..... ..... @rdm_pg_rn # UDIVR
530
+
531
### SVE Logical - Unpredicated Group
532
533
# SVE bitwise logical operations (unpredicated)
534
--
535
2.17.0
536
537
diff view generated by jsdifflib
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
Excepting MOVPRFX, which isn't a reduction. Presumably it is
4
placed within the group because of its encoding.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180516223007.10256-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/helper-sve.h | 44 ++++++++++++++++++
12
target/arm/sve_helper.c | 91 ++++++++++++++++++++++++++++++++++++++
13
target/arm/translate-sve.c | 68 ++++++++++++++++++++++++++++
14
target/arm/sve.decode | 22 +++++++++
15
4 files changed, 225 insertions(+)
16
17
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper-sve.h
20
+++ b/target/arm/helper-sve.h
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(sve_udiv_zpzz_s, TCG_CALL_NO_RWG,
22
DEF_HELPER_FLAGS_5(sve_udiv_zpzz_d, TCG_CALL_NO_RWG,
23
void, ptr, ptr, ptr, ptr, i32)
24
25
+DEF_HELPER_FLAGS_3(sve_orv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
26
+DEF_HELPER_FLAGS_3(sve_orv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_3(sve_orv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_3(sve_orv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_3(sve_eorv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_3(sve_eorv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_3(sve_eorv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
33
+DEF_HELPER_FLAGS_3(sve_eorv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_3(sve_andv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_3(sve_andv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_3(sve_andv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_3(sve_andv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
39
+
40
+DEF_HELPER_FLAGS_3(sve_saddv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_3(sve_saddv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
42
+DEF_HELPER_FLAGS_3(sve_saddv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_3(sve_uaddv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_3(sve_uaddv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_3(sve_uaddv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_3(sve_uaddv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_3(sve_smaxv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_3(sve_smaxv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_3(sve_smaxv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_3(sve_smaxv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
53
+
54
+DEF_HELPER_FLAGS_3(sve_umaxv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_3(sve_umaxv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
56
+DEF_HELPER_FLAGS_3(sve_umaxv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
57
+DEF_HELPER_FLAGS_3(sve_umaxv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
58
+
59
+DEF_HELPER_FLAGS_3(sve_sminv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
60
+DEF_HELPER_FLAGS_3(sve_sminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
61
+DEF_HELPER_FLAGS_3(sve_sminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
62
+DEF_HELPER_FLAGS_3(sve_sminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
63
+
64
+DEF_HELPER_FLAGS_3(sve_uminv_b, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
65
+DEF_HELPER_FLAGS_3(sve_uminv_h, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
66
+DEF_HELPER_FLAGS_3(sve_uminv_s, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
67
+DEF_HELPER_FLAGS_3(sve_uminv_d, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
68
+
69
DEF_HELPER_FLAGS_5(sve_and_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
70
DEF_HELPER_FLAGS_5(sve_bic_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
71
DEF_HELPER_FLAGS_5(sve_eor_pppp, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
72
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/sve_helper.c
75
+++ b/target/arm/sve_helper.c
76
@@ -XXX,XX +XXX,XX @@ DO_ZPZZ_D(sve_udiv_zpzz_d, uint64_t, DO_DIV)
77
78
#undef DO_ZPZZ
79
#undef DO_ZPZZ_D
80
+
81
+/* Two-operand reduction expander, controlled by a predicate.
82
+ * The difference between TYPERED and TYPERET has to do with
83
+ * sign-extension. E.g. for SMAX, TYPERED must be signed,
84
+ * but TYPERET must be unsigned so that e.g. a 32-bit value
85
+ * is not sign-extended to the ABI uint64_t return type.
86
+ */
87
+/* ??? If we were to vectorize this by hand the reduction ordering
88
+ * would change. For integer operands, this is perfectly fine.
89
+ */
90
+#define DO_VPZ(NAME, TYPEELT, TYPERED, TYPERET, H, INIT, OP) \
91
+uint64_t HELPER(NAME)(void *vn, void *vg, uint32_t desc) \
92
+{ \
93
+ intptr_t i, opr_sz = simd_oprsz(desc); \
94
+ TYPERED ret = INIT; \
95
+ for (i = 0; i < opr_sz; ) { \
96
+ uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
97
+ do { \
98
+ if (pg & 1) { \
99
+ TYPEELT nn = *(TYPEELT *)(vn + H(i)); \
100
+ ret = OP(ret, nn); \
101
+ } \
102
+ i += sizeof(TYPEELT), pg >>= sizeof(TYPEELT); \
103
+ } while (i & 15); \
104
+ } \
105
+ return (TYPERET)ret; \
106
+}
107
+
108
+#define DO_VPZ_D(NAME, TYPEE, TYPER, INIT, OP) \
109
+uint64_t HELPER(NAME)(void *vn, void *vg, uint32_t desc) \
110
+{ \
111
+ intptr_t i, opr_sz = simd_oprsz(desc) / 8; \
112
+ TYPEE *n = vn; \
113
+ uint8_t *pg = vg; \
114
+ TYPER ret = INIT; \
115
+ for (i = 0; i < opr_sz; i += 1) { \
116
+ if (pg[H1(i)] & 1) { \
117
+ TYPEE nn = n[i]; \
118
+ ret = OP(ret, nn); \
119
+ } \
120
+ } \
121
+ return ret; \
122
+}
123
+
124
+DO_VPZ(sve_orv_b, uint8_t, uint8_t, uint8_t, H1, 0, DO_ORR)
125
+DO_VPZ(sve_orv_h, uint16_t, uint16_t, uint16_t, H1_2, 0, DO_ORR)
126
+DO_VPZ(sve_orv_s, uint32_t, uint32_t, uint32_t, H1_4, 0, DO_ORR)
127
+DO_VPZ_D(sve_orv_d, uint64_t, uint64_t, 0, DO_ORR)
128
+
129
+DO_VPZ(sve_eorv_b, uint8_t, uint8_t, uint8_t, H1, 0, DO_EOR)
130
+DO_VPZ(sve_eorv_h, uint16_t, uint16_t, uint16_t, H1_2, 0, DO_EOR)
131
+DO_VPZ(sve_eorv_s, uint32_t, uint32_t, uint32_t, H1_4, 0, DO_EOR)
132
+DO_VPZ_D(sve_eorv_d, uint64_t, uint64_t, 0, DO_EOR)
133
+
134
+DO_VPZ(sve_andv_b, uint8_t, uint8_t, uint8_t, H1, -1, DO_AND)
135
+DO_VPZ(sve_andv_h, uint16_t, uint16_t, uint16_t, H1_2, -1, DO_AND)
136
+DO_VPZ(sve_andv_s, uint32_t, uint32_t, uint32_t, H1_4, -1, DO_AND)
137
+DO_VPZ_D(sve_andv_d, uint64_t, uint64_t, -1, DO_AND)
138
+
139
+DO_VPZ(sve_saddv_b, int8_t, uint64_t, uint64_t, H1, 0, DO_ADD)
140
+DO_VPZ(sve_saddv_h, int16_t, uint64_t, uint64_t, H1_2, 0, DO_ADD)
141
+DO_VPZ(sve_saddv_s, int32_t, uint64_t, uint64_t, H1_4, 0, DO_ADD)
142
+
143
+DO_VPZ(sve_uaddv_b, uint8_t, uint64_t, uint64_t, H1, 0, DO_ADD)
144
+DO_VPZ(sve_uaddv_h, uint16_t, uint64_t, uint64_t, H1_2, 0, DO_ADD)
145
+DO_VPZ(sve_uaddv_s, uint32_t, uint64_t, uint64_t, H1_4, 0, DO_ADD)
146
+DO_VPZ_D(sve_uaddv_d, uint64_t, uint64_t, 0, DO_ADD)
147
+
148
+DO_VPZ(sve_smaxv_b, int8_t, int8_t, uint8_t, H1, INT8_MIN, DO_MAX)
149
+DO_VPZ(sve_smaxv_h, int16_t, int16_t, uint16_t, H1_2, INT16_MIN, DO_MAX)
150
+DO_VPZ(sve_smaxv_s, int32_t, int32_t, uint32_t, H1_4, INT32_MIN, DO_MAX)
151
+DO_VPZ_D(sve_smaxv_d, int64_t, int64_t, INT64_MIN, DO_MAX)
152
+
153
+DO_VPZ(sve_umaxv_b, uint8_t, uint8_t, uint8_t, H1, 0, DO_MAX)
154
+DO_VPZ(sve_umaxv_h, uint16_t, uint16_t, uint16_t, H1_2, 0, DO_MAX)
155
+DO_VPZ(sve_umaxv_s, uint32_t, uint32_t, uint32_t, H1_4, 0, DO_MAX)
156
+DO_VPZ_D(sve_umaxv_d, uint64_t, uint64_t, 0, DO_MAX)
157
+
158
+DO_VPZ(sve_sminv_b, int8_t, int8_t, uint8_t, H1, INT8_MAX, DO_MIN)
159
+DO_VPZ(sve_sminv_h, int16_t, int16_t, uint16_t, H1_2, INT16_MAX, DO_MIN)
160
+DO_VPZ(sve_sminv_s, int32_t, int32_t, uint32_t, H1_4, INT32_MAX, DO_MIN)
161
+DO_VPZ_D(sve_sminv_d, int64_t, int64_t, INT64_MAX, DO_MIN)
162
+
163
+DO_VPZ(sve_uminv_b, uint8_t, uint8_t, uint8_t, H1, -1, DO_MIN)
164
+DO_VPZ(sve_uminv_h, uint16_t, uint16_t, uint16_t, H1_2, -1, DO_MIN)
165
+DO_VPZ(sve_uminv_s, uint32_t, uint32_t, uint32_t, H1_4, -1, DO_MIN)
166
+DO_VPZ_D(sve_uminv_d, uint64_t, uint64_t, -1, DO_MIN)
167
+
168
+#undef DO_VPZ
169
+#undef DO_VPZ_D
170
+
171
#undef DO_AND
172
#undef DO_ORR
173
#undef DO_EOR
174
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/target/arm/translate-sve.c
177
+++ b/target/arm/translate-sve.c
178
@@ -XXX,XX +XXX,XX @@ static bool trans_UDIV_zpzz(DisasContext *s, arg_rprr_esz *a, uint32_t insn)
179
180
#undef DO_ZPZZ
181
182
+/*
183
+ *** SVE Integer Reduction Group
184
+ */
185
+
186
+typedef void gen_helper_gvec_reduc(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_i32);
187
+static bool do_vpz_ool(DisasContext *s, arg_rpr_esz *a,
188
+ gen_helper_gvec_reduc *fn)
189
+{
190
+ unsigned vsz = vec_full_reg_size(s);
191
+ TCGv_ptr t_zn, t_pg;
192
+ TCGv_i32 desc;
193
+ TCGv_i64 temp;
194
+
195
+ if (fn == NULL) {
196
+ return false;
197
+ }
198
+ if (!sve_access_check(s)) {
199
+ return true;
200
+ }
201
+
202
+ desc = tcg_const_i32(simd_desc(vsz, vsz, 0));
203
+ temp = tcg_temp_new_i64();
204
+ t_zn = tcg_temp_new_ptr();
205
+ t_pg = tcg_temp_new_ptr();
206
+
207
+ tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
208
+ tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
209
+ fn(temp, t_zn, t_pg, desc);
210
+ tcg_temp_free_ptr(t_zn);
211
+ tcg_temp_free_ptr(t_pg);
212
+ tcg_temp_free_i32(desc);
213
+
214
+ write_fp_dreg(s, a->rd, temp);
215
+ tcg_temp_free_i64(temp);
216
+ return true;
217
+}
218
+
219
+#define DO_VPZ(NAME, name) \
220
+static bool trans_##NAME(DisasContext *s, arg_rpr_esz *a, uint32_t insn) \
221
+{ \
222
+ static gen_helper_gvec_reduc * const fns[4] = { \
223
+ gen_helper_sve_##name##_b, gen_helper_sve_##name##_h, \
224
+ gen_helper_sve_##name##_s, gen_helper_sve_##name##_d, \
225
+ }; \
226
+ return do_vpz_ool(s, a, fns[a->esz]); \
227
+}
228
+
229
+DO_VPZ(ORV, orv)
230
+DO_VPZ(ANDV, andv)
231
+DO_VPZ(EORV, eorv)
232
+
233
+DO_VPZ(UADDV, uaddv)
234
+DO_VPZ(SMAXV, smaxv)
235
+DO_VPZ(UMAXV, umaxv)
236
+DO_VPZ(SMINV, sminv)
237
+DO_VPZ(UMINV, uminv)
238
+
239
+static bool trans_SADDV(DisasContext *s, arg_rpr_esz *a, uint32_t insn)
240
+{
241
+ static gen_helper_gvec_reduc * const fns[4] = {
242
+ gen_helper_sve_saddv_b, gen_helper_sve_saddv_h,
243
+ gen_helper_sve_saddv_s, NULL
244
+ };
245
+ return do_vpz_ool(s, a, fns[a->esz]);
246
+}
247
+
248
+#undef DO_VPZ
249
+
250
/*
251
*** SVE Predicate Logical Operations Group
252
*/
253
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
254
index XXXXXXX..XXXXXXX 100644
255
--- a/target/arm/sve.decode
256
+++ b/target/arm/sve.decode
257
@@ -XXX,XX +XXX,XX @@
258
&rr_esz rd rn esz
259
&rri rd rn imm
260
&rrr_esz rd rn rm esz
261
+&rpr_esz rd pg rn esz
262
&rprr_s rd pg rn rm s
263
&rprr_esz rd pg rn rm esz
264
265
@@ -XXX,XX +XXX,XX @@
266
@rdm_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 \
267
&rprr_esz rm=%reg_movprfx
268
269
+# One register operand, with governing predicate, vector element size
270
+@rd_pg_rn ........ esz:2 ... ... ... pg:3 rn:5 rd:5 &rpr_esz
271
+
272
# Basic Load/Store with 9-bit immediate offset
273
@pd_rn_i9 ........ ........ ...... rn:5 . rd:4 \
274
&rri imm=%imm9_16_10
275
@@ -XXX,XX +XXX,XX @@ UDIV_zpzz 00000100 .. 010 101 000 ... ..... ..... @rdn_pg_rm
276
SDIV_zpzz 00000100 .. 010 110 000 ... ..... ..... @rdm_pg_rn # SDIVR
277
UDIV_zpzz 00000100 .. 010 111 000 ... ..... ..... @rdm_pg_rn # UDIVR
278
279
+### SVE Integer Reduction Group
280
+
281
+# SVE bitwise logical reduction (predicated)
282
+ORV 00000100 .. 011 000 001 ... ..... ..... @rd_pg_rn
283
+EORV 00000100 .. 011 001 001 ... ..... ..... @rd_pg_rn
284
+ANDV 00000100 .. 011 010 001 ... ..... ..... @rd_pg_rn
285
+
286
+# SVE integer add reduction (predicated)
287
+# Note that saddv requires size != 3.
288
+UADDV 00000100 .. 000 001 001 ... ..... ..... @rd_pg_rn
289
+SADDV 00000100 .. 000 000 001 ... ..... ..... @rd_pg_rn
290
+
291
+# SVE integer min/max reduction (predicated)
292
+SMAXV 00000100 .. 001 000 001 ... ..... ..... @rd_pg_rn
293
+UMAXV 00000100 .. 001 001 001 ... ..... ..... @rd_pg_rn
294
+SMINV 00000100 .. 001 010 001 ... ..... ..... @rd_pg_rn
295
+UMINV 00000100 .. 001 011 001 ... ..... ..... @rd_pg_rn
296
+
297
### SVE Logical - Unpredicated Group
298
299
# SVE bitwise logical operations (unpredicated)
300
--
301
2.17.0
302
303
diff view generated by jsdifflib
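The two SVE patches above are mostly generated helper boilerplate; as a rough standalone sketch (not QEMU code), the pattern behind the DO_ZPZZ and DO_VPZ expanders is a predicate that gates each element-wise operation and each reduction step. The demo below is simplified: it keeps one predicate flag per element in its own byte, uses only 1-byte elements, and invents the demo_* names, whereas the real helpers pack predicate bits, handle 8/16/32/64-bit elements and deal with host-endian vector layout.

/* not QEMU code: predicated element-wise op and predicated reduction */
#include <stdint.h>
#include <stdio.h>

#define VLEN 8   /* demo vector of eight 1-byte elements */

/* predicated add: d[i] = n[i] + m[i] only where the predicate is set */
static void demo_add_zpzz_b(uint8_t *d, const uint8_t *n, const uint8_t *m,
                            const uint8_t *pg)
{
    for (int i = 0; i < VLEN; i++) {
        if (pg[i] & 1) {
            d[i] = n[i] + m[i];
        }
    }
}

/* predicated unsigned max reduction over the active elements only */
static uint8_t demo_umaxv_b(const uint8_t *n, const uint8_t *pg)
{
    uint8_t ret = 0;             /* identity value for unsigned max */
    for (int i = 0; i < VLEN; i++) {
        if (pg[i] & 1) {
            ret = n[i] > ret ? n[i] : ret;
        }
    }
    return ret;
}

int main(void)
{
    uint8_t n[VLEN]  = { 1, 2, 3, 4, 5, 6, 7, 8 };
    uint8_t m[VLEN]  = { 10, 10, 10, 10, 10, 10, 10, 10 };
    uint8_t d[VLEN]  = { 0 };
    uint8_t pg[VLEN] = { 1, 0, 1, 0, 1, 0, 1, 0 };   /* every other lane */

    demo_add_zpzz_b(d, n, m, pg);
    printf("d[0]=%d d[1]=%d umaxv=%d\n", d[0], d[1], demo_umaxv_b(n, pg));
    return 0;
}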