First pullreq for 6.0: mostly my v8.1M work, plus some other
bits and pieces. (I still have a lot of stuff in my to-review
folder, which I may or may not get to before the Christmas break...)

thanks
-- PMM

The following changes since commit 5e7b204dbfae9a562fc73684986f936b97f63877:

  Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging (2020-12-09 20:08:54 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201210

for you to fetch changes up to 71f916be1c7e9ede0e37d9cabc781b5a9e8638ff:

  hw/arm/armv7m: Correct typo in QOM object name (2020-12-10 11:44:56 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Fix up L1STD_SPAN decoding
 * xlnx-zynqmp: Support Xilinx ZynqMP CAN controllers
 * sbsa-ref: allow to use Cortex-A53/57/72 cpus
 * Various minor code cleanups
 * hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
 * Implement more pieces of ARMv8.1M support

----------------------------------------------------------------
Alex Chen (4):
      i.MX25: Fix bad printf format specifiers
      i.MX31: Fix bad printf format specifiers
      i.MX6: Fix bad printf format specifiers
      i.MX6ul: Fix bad printf format specifiers

Havard Skinnemoen (1):
      tests/qtest/npcm7xx_rng-test: dump random data on failure

Kunkun Jiang (1):
      hw/arm/smmuv3: Fix up L1STD_SPAN decoding

Marcin Juszkiewicz (1):
      sbsa-ref: allow to use Cortex-A53/57/72 cpus

Peter Maydell (25):
      hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
      target/arm: Implement v8.1M PXN extension
      target/arm: Don't clobber ID_PFR1.Security on M-profile cores
      target/arm: Implement VSCCLRM insn
      target/arm: Implement CLRM instruction
      target/arm: Enforce M-profile VMRS/VMSR register restrictions
      target/arm: Refactor M-profile VMSR/VMRS handling
      target/arm: Move general-use constant expanders up in translate.c
      target/arm: Implement VLDR/VSTR system register
      target/arm: Implement M-profile FPSCR_nzcvqc
      target/arm: Use new FPCR_NZCV_MASK constant
      target/arm: Factor out preserve-fp-state from full_vfp_access_check()
      target/arm: Implement FPCXT_S fp system register
      hw/intc/armv7m_nvic: Update FPDSCR masking for v8.1M
      target/arm: For v8.1M, always clear R0-R3, R12, APSR, EPSR on exception entry
      target/arm: In v8.1M, don't set HFSR.FORCED on vector table fetch failures
      target/arm: Implement v8.1M REVIDR register
      target/arm: Implement new v8.1M NOCP check for exception return
      target/arm: Implement new v8.1M VLLDM and VLSTM encodings
      hw/intc/armv7m_nvic: Support v8.1M CCR.TRD bit
      target/arm: Implement CCR_S.TRD behaviour for SG insns
      hw/intc/armv7m_nvic: Fix "return from inactive handler" check
      target/arm: Implement M-profile "minimal RAS implementation"
      hw/intc/armv7m_nvic: Implement read/write for RAS register block
      hw/arm/armv7m: Correct typo in QOM object name

Vikram Garhwal (4):
      hw/net/can: Introduce Xilinx ZynqMP CAN controller
      xlnx-zynqmp: Connect Xilinx ZynqMP CAN controllers
      tests/qtest: Introduce tests for Xilinx ZynqMP CAN controller
      MAINTAINERS: Add maintainer entry for Xilinx ZynqMP CAN controller

 meson.build                      |    1 +
 hw/arm/smmuv3-internal.h         |    2 +-
 hw/net/can/trace.h               |    1 +
 include/hw/arm/xlnx-zynqmp.h     |    8 +
 include/hw/intc/armv7m_nvic.h    |    2 +
 include/hw/net/xlnx-zynqmp-can.h |   78 +++
 target/arm/cpu.h                 |   46 ++
 target/arm/m-nocp.decode         |   10 +-
 target/arm/t32.decode            |   10 +-
 target/arm/vfp.decode            |   14 +
 hw/arm/armv7m.c                  |    4 +-
 hw/arm/sbsa-ref.c                |   23 +-
 hw/arm/xlnx-zcu102.c             |   20 +
 hw/arm/xlnx-zynqmp.c             |   34 ++
 hw/intc/armv7m_nvic.c            |  246 ++++++--
 hw/misc/imx25_ccm.c              |   12 +-
 hw/misc/imx31_ccm.c              |   14 +-
 hw/misc/imx6_ccm.c               |   20 +-
 hw/misc/imx6_src.c               |    2 +-
 hw/misc/imx6ul_ccm.c             |    4 +-
 hw/misc/imx_ccm.c                |    4 +-
 hw/net/can/xlnx-zynqmp-can.c     | 1161 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c                 |    5 +-
 target/arm/helper.c              |    7 +-
 target/arm/m_helper.c            |  130 ++++-
 target/arm/translate.c           |  105 +++-
 tests/qtest/npcm7xx_rng-test.c   |   12 +
 tests/qtest/xlnx-can-test.c      |  360 ++++++++++++
 MAINTAINERS                      |    8 +
 hw/Kconfig                       |    1 +
 hw/net/can/meson.build           |    1 +
 hw/net/can/trace-events          |    9 +
 target/arm/translate-vfp.c.inc   |  511 ++++++++++++++++-
 tests/qtest/meson.build          |    1 +
 34 files changed, 2713 insertions(+), 153 deletions(-)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 tests/qtest/xlnx-can-test.c
 create mode 100644 hw/net/can/trace-events

The following changes since commit 53f306f316549d20c76886903181413d20842423:

  Merge remote-tracking branch 'remotes/ehabkost-gl/tags/x86-next-pull-request' into staging (2021-06-21 11:26:04 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210621

for you to fetch changes up to a83f1d9263d281f938a3984cda7104d55affd43a:

  docs/system: arm: Add nRF boards description (2021-06-21 17:24:33 +0100)

----------------------------------------------------------------
target-arm queue:
 * Don't require 'virt' board to be compiled in for ACPI GHES code
 * docs: Document which architecture extensions we emulate
 * Fix bugs in M-profile FPCXT_NS accesses
 * First slice of MVE patches
 * Implement MTE3
 * docs/system: arm: Add nRF boards description

----------------------------------------------------------------
Alexandre Iooss (1):
      docs/system: arm: Add nRF boards description

Peter Collingbourne (1):
      target/arm: Implement MTE3

Peter Maydell (55):
      hw/acpi: Provide stub version of acpi_ghes_record_errors()
      hw/acpi: Provide function acpi_ghes_present()
      target/arm: Use acpi_ghes_present() to see if we report ACPI memory errors
      docs/system/arm: Document which architecture extensions we emulate
      target/arm/translate-vfp.c: Whitespace fixes
      target/arm: Handle FPU being disabled in FPCXT_NS accesses
      target/arm: Don't NOCP fault for FPCXT_NS accesses
      target/arm: Handle writeback in VLDR/VSTR sysreg with no memory access
      target/arm: Factor FP context update code out into helper function
      target/arm: Split vfp_access_check() into A and M versions
      target/arm: Handle FPU check for FPCXT_NS insns via vfp_access_check_m()
      target/arm: Implement MVE VLDR/VSTR (non-widening forms)
      target/arm: Implement widening/narrowing MVE VLDR/VSTR insns
      target/arm: Implement MVE VCLZ
      target/arm: Implement MVE VCLS
      target/arm: Implement MVE VREV16, VREV32, VREV64
      target/arm: Implement MVE VMVN (register)
      target/arm: Implement MVE VABS
      target/arm: Implement MVE VNEG
      tcg: Make gen_dup_i32/i64() public as tcg_gen_dup_i32/i64
      target/arm: Implement MVE VDUP
      target/arm: Implement MVE VAND, VBIC, VORR, VORN, VEOR
      target/arm: Implement MVE VADD, VSUB, VMUL
      target/arm: Implement MVE VMULH
      target/arm: Implement MVE VRMULH
      target/arm: Implement MVE VMAX, VMIN
      target/arm: Implement MVE VABD
      target/arm: Implement MVE VHADD, VHSUB
      target/arm: Implement MVE VMULL
      target/arm: Implement MVE VMLALDAV
      target/arm: Implement MVE VMLSLDAV
      target/arm: Implement MVE VRMLALDAVH, VRMLSLDAVH
      target/arm: Implement MVE VADD (scalar)
      target/arm: Implement MVE VSUB, VMUL (scalar)
      target/arm: Implement MVE VHADD, VHSUB (scalar)
      target/arm: Implement MVE VBRSR
      target/arm: Implement MVE VPST
      target/arm: Implement MVE VQADD and VQSUB
      target/arm: Implement MVE VQDMULH and VQRDMULH (scalar)
      target/arm: Implement MVE VQDMULL scalar
      target/arm: Implement MVE VQDMULH, VQRDMULH (vector)
      target/arm: Implement MVE VQADD, VQSUB (vector)
      target/arm: Implement MVE VQSHL (vector)
      target/arm: Implement MVE VQRSHL
      target/arm: Implement MVE VSHL insn
      target/arm: Implement MVE VRSHL
      target/arm: Implement MVE VQDMLADH and VQRDMLADH
      target/arm: Implement MVE VQDMLSDH and VQRDMLSDH
      target/arm: Implement MVE VQDMULL (vector)
      target/arm: Implement MVE VRHADD
      target/arm: Implement MVE VADC, VSBC
      target/arm: Implement MVE VCADD
      target/arm: Implement MVE VHCADD
      target/arm: Implement MVE VADDV
      target/arm: Make VMOV scalar <-> gpreg beatwise for MVE

 docs/system/arm/emulation.rst |  103 ++++
 docs/system/arm/nrf.rst       |   51 ++
 docs/system/target-arm.rst    |    7 +
 include/hw/acpi/ghes.h        |    9 +
 include/tcg/tcg-op.h          |    8 +
 include/tcg/tcg.h             |    1 -
 target/arm/helper-mve.h       |  357 +++++++++++++
 target/arm/helper.h           |    2 +
 target/arm/internals.h        |   11 +
 target/arm/translate-a32.h    |    3 +
 target/arm/translate.h        |   10 +
 target/arm/m-nocp.decode      |   24 +
 target/arm/mve.decode         |  240 +++++++++
 target/arm/vfp.decode         |   14 -
 hw/acpi/ghes-stub.c           |   22 +
 hw/acpi/ghes.c                |   17 +
 target/arm/cpu64.c            |    2 +-
 target/arm/kvm64.c            |    6 +-
 target/arm/mte_helper.c       |   82 +--
 target/arm/mve_helper.c       | 1160 +++++++++++++++++++++++++++++++++++++++++
 target/arm/translate-m-nocp.c |  550 +++++++++++++++++++
 target/arm/translate-mve.c    |  759 +++++++++++++++++++++++++++
 target/arm/translate-vfp.c    |  741 +++++++-------------------
 tcg/tcg-op-gvec.c             |   20 +-
 MAINTAINERS                   |    1 +
 hw/acpi/meson.build           |    6 +-
 target/arm/meson.build        |    1 +
 27 files changed, 3578 insertions(+), 629 deletions(-)
 create mode 100644 docs/system/arm/emulation.rst
 create mode 100644 docs/system/arm/nrf.rst
 create mode 100644 target/arm/helper-mve.h
 create mode 100644 hw/acpi/ghes-stub.c
 create mode 100644 target/arm/mve_helper.c
From: Vikram Garhwal <fnu.vikram@xilinx.com>

The QTests perform five tests on the Xilinx ZynqMP CAN controller:
    Tests the CAN controller in loopback, sleep and snoop mode.
    Tests filtering of incoming CAN messages.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
 tests/qtest/meson.build     |   1 +
 2 files changed, 361 insertions(+)
 create mode 100644 tests/qtest/xlnx-can-test.c

diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qtest/xlnx-can-test.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QTests for the Xilinx ZynqMP CAN controller.
+ *
+ * Copyright (c) 2020 Xilinx Inc.
+ *
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "libqos/libqtest.h"
+
+/* Base address. */
+#define CAN0_BASE_ADDR 0xFF060000
+#define CAN1_BASE_ADDR 0xFF070000
+
+/* Register addresses. */
+#define R_SRR_OFFSET 0x00
+#define R_MSR_OFFSET 0x04
+#define R_SR_OFFSET 0x18
+#define R_ISR_OFFSET 0x1C
+#define R_ICR_OFFSET 0x24
+#define R_TXID_OFFSET 0x30
+#define R_TXDLC_OFFSET 0x34
+#define R_TXDATA1_OFFSET 0x38
+#define R_TXDATA2_OFFSET 0x3C
+#define R_RXID_OFFSET 0x50
+#define R_RXDLC_OFFSET 0x54
+#define R_RXDATA1_OFFSET 0x58
+#define R_RXDATA2_OFFSET 0x5C
+#define R_AFR 0x60
+#define R_AFMR1 0x64
+#define R_AFIR1 0x68
+#define R_AFMR2 0x6C
+#define R_AFIR2 0x70
+#define R_AFMR3 0x74
+#define R_AFIR3 0x78
+#define R_AFMR4 0x7C
+#define R_AFIR4 0x80
+
+/* CAN modes. */
+#define CONFIG_MODE 0x00
+#define NORMAL_MODE 0x00
+#define LOOPBACK_MODE 0x02
+#define SNOOP_MODE 0x04
+#define SLEEP_MODE 0x01
+#define ENABLE_CAN (1 << 1)
+#define STATUS_NORMAL_MODE (1 << 3)
+#define STATUS_LOOPBACK_MODE (1 << 1)
+#define STATUS_SNOOP_MODE (1 << 12)
+#define STATUS_SLEEP_MODE (1 << 2)
+#define ISR_TXOK (1 << 1)
+#define ISR_RXOK (1 << 4)
+
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
+                             uint8_t can_timestamp)
+{
+    uint16_t size = 0;
+    uint8_t len = 4;
+
+    while (size < len) {
+        if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
+            g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
+        } else {
+            g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
+        }
+
+        size++;
+    }
+}
+
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
+{
+    uint32_t int_status;
+
+    /* Read the interrupt on CAN rx. */
+    int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
+
+    g_assert_cmpint(int_status, ==, ISR_RXOK);
+
+    /* Read the RX register data for CAN. */
+    buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
+    buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
+    buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
+    buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
+
+    /* Clear the RX interrupt. */
+    qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
+}
+
+static void send_data(QTestState *qts, uint64_t can_base_addr,
+                      const uint32_t *buf_tx)
+{
+    uint32_t int_status;
+
+    /* Write the TX register data for CAN. */
+    qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
+    qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
+    qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
+    qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
+
+    /* Read the interrupt on CAN for tx. */
+    int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
+
+    g_assert_cmpint(int_status, ==, ISR_TXOK);
+
+    /* Clear the interrupt for tx. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
+}
+
+/*
+ * This test transfers data from CAN0 to CAN1 through the can-bus: CAN0
+ * initiates the data transfer and CAN1 receives it. The test compares the
+ * data sent from CAN0 with the data received on CAN1.
+ */
+static void test_can_bus(void)
+{
+    const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0 and CAN1. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 and CAN1 are in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    send_data(qts, CAN0_BASE_ADDR, buf_tx);
+
+    read_data(qts, CAN1_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+/*
+ * This test exercises loopback mode on CAN0 and CAN1. Data sent from the TX
+ * of each CAN is compared with the RX register data of the same CAN.
+ */
+static void test_can_loopback(void)
+{
+    uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0 in loopback mode. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    /* Check here if CAN0 is set in loopback mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+
+    g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
+
+    send_data(qts, CAN0_BASE_ADDR, buf_tx);
+    read_data(qts, CAN0_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, 0);
+
+    /* Configure the CAN1 in loopback mode. */
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    /* Check here if CAN1 is set in loopback mode. */
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+
+    g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
+
+    send_data(qts, CAN1_BASE_ADDR, buf_tx);
+    read_data(qts, CAN1_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, 0);
+
+    qtest_quit(qts);
+}
+
+/*
+ * Enable filters for CAN1. These will filter incoming messages by ID. In
+ * this test the message passes through filter 2.
+ */
+static void test_can_filter(void)
+{
+    uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0 and CAN1. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 and CAN1 are in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    /* Set filter for CAN1 for incoming messages. */
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
+
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
+
+    send_data(qts, CAN0_BASE_ADDR, buf_tx);
+
+    read_data(qts, CAN1_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
+static void test_can_sleepmode(void)
+{
+    uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    send_data(qts, CAN1_BASE_ADDR, buf_tx);
+
+    /*
+     * Once CAN1 sends data on the can-bus, CAN0 should exit sleep mode.
+     * Check the CAN0 status now. It should exit the sleep mode and receive
+     * the incoming data.
+     */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    read_data(qts, CAN0_BASE_ADDR, buf_rx);
+
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
+static void test_can_snoopmode(void)
+{
+    uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    send_data(qts, CAN1_BASE_ADDR, buf_tx);
+
+    read_data(qts, CAN0_BASE_ADDR, buf_rx);
+
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    qtest_add_func("/net/can/can_bus", test_can_bus);
+    qtest_add_func("/net/can/can_loopback", test_can_loopback);
+    qtest_add_func("/net/can/can_filter", test_can_filter);
+    qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
+    qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
+
+    return g_test_run();
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
   ['arm-cpu-features',
    'numa-test',
    'boot-serial-test',
+   'xlnx-can-test',
    'migration-test']

 qtests_s390x = \
--
2.20.1

Generic code in target/arm wants to call acpi_ghes_record_errors();
provide a stub version so that we don't fail to link when
CONFIG_ACPI_APEI is not set. This requires us to add a new
ghes-stub.c file to contain it and the meson.build mechanics
to use it when appropriate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
Message-id: 20210603171259.27962-2-peter.maydell@linaro.org
---
 hw/acpi/ghes-stub.c | 17 +++++++++++++++++
 hw/acpi/meson.build |  6 +++---
 2 files changed, 20 insertions(+), 3 deletions(-)
 create mode 100644 hw/acpi/ghes-stub.c

diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/acpi/ghes-stub.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Support for generating APEI tables and recording CPER for Guests:
+ * stub functions.
+ *
+ * Copyright (c) 2021 Linaro, Ltd
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/acpi/ghes.h"
+
+int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
+{
+    return -1;
+}
diff --git a/hw/acpi/meson.build b/hw/acpi/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/meson.build
+++ b/hw/acpi/meson.build
@@ -XXX,XX +XXX,XX @@ acpi_ss.add(when: 'CONFIG_ACPI_PCI', if_true: files('pci.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_VMGENID', if_true: files('vmgenid.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_HW_REDUCED', if_true: files('generic_event_device.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_HMAT', if_true: files('hmat.c'))
-acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'))
+acpi_ss.add(when: 'CONFIG_ACPI_APEI', if_true: files('ghes.c'), if_false: files('ghes-stub.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_X86', if_true: files('core.c', 'piix4.c', 'pcihp.c'), if_false: files('acpi-stub.c'))
 acpi_ss.add(when: 'CONFIG_ACPI_X86_ICH', if_true: files('ich9.c', 'tco.c'))
 acpi_ss.add(when: 'CONFIG_IPMI', if_true: files('ipmi.c'), if_false: files('ipmi-stub.c'))
 acpi_ss.add(when: 'CONFIG_PC', if_false: files('acpi-x86-stub.c'))
 acpi_ss.add(when: 'CONFIG_TPM', if_true: files('tpm.c'))
-softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c'))
+softmmu_ss.add(when: 'CONFIG_ACPI', if_false: files('acpi-stub.c', 'aml-build-stub.c', 'ghes-stub.c'))
 softmmu_ss.add_all(when: 'CONFIG_ACPI', if_true: acpi_ss)
 softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('acpi-stub.c', 'aml-build-stub.c',
-                                                  'acpi-x86-stub.c', 'ipmi-stub.c'))
+                                                  'acpi-x86-stub.c', 'ipmi-stub.c', 'ghes-stub.c'))
--
2.20.1
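As a minimal usage sketch of what the stub buys us (the caller below is
hypothetical and not from this series; only acpi_ghes_record_errors() and
the header are real): with ghes-stub.c in the build, generic code can
reference the GHES API unconditionally, and the link resolves to either
the real implementation or the stub depending on CONFIG_ACPI_APEI:

    #include "qemu/osdep.h"
    #include "hw/acpi/ghes.h"

    /* Hypothetical caller, for illustration only. */
    static void try_record_memory_error(uint64_t paddr)
    {
        /* Source id 0 is a placeholder; real callers pass a HEST source id. */
        if (acpi_ghes_record_errors(0, paddr) < 0) {
            /* No APEI support built in, or recording failed: carry on. */
        }
    }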
Allow code elsewhere in the system to check whether the ACPI GHES
table is present, so it can determine whether it is OK to try to
record an error by calling acpi_ghes_record_errors().

(We don't need to migrate the new 'present' field in AcpiGhesState,
because it is set once at system initialization and doesn't change.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
Message-id: 20210603171259.27962-3-peter.maydell@linaro.org
---
 include/hw/acpi/ghes.h |  9 +++++++++
 hw/acpi/ghes-stub.c    |  5 +++++
 hw/acpi/ghes.c         | 17 +++++++++++++++++
 3 files changed, 31 insertions(+)

diff --git a/include/hw/acpi/ghes.h b/include/hw/acpi/ghes.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/acpi/ghes.h
+++ b/include/hw/acpi/ghes.h
@@ -XXX,XX +XXX,XX @@ enum {

 typedef struct AcpiGhesState {
     uint64_t ghes_addr_le;
+    bool present; /* True if GHES is present at all on this board */
 } AcpiGhesState;

 void build_ghes_error_table(GArray *hardware_errors, BIOSLinker *linker);
@@ -XXX,XX +XXX,XX @@ void acpi_build_hest(GArray *table_data, BIOSLinker *linker,
 void acpi_ghes_add_fw_cfg(AcpiGhesState *vms, FWCfgState *s,
                           GArray *hardware_errors);
 int acpi_ghes_record_errors(uint8_t notify, uint64_t error_physical_addr);
+
+/**
+ * acpi_ghes_present: Report whether ACPI GHES table is present
+ *
+ * Returns: true if the system has an ACPI GHES table and it is
+ * safe to call acpi_ghes_record_errors() to record a memory error.
+ */
+bool acpi_ghes_present(void);
 #endif
diff --git a/hw/acpi/ghes-stub.c b/hw/acpi/ghes-stub.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/ghes-stub.c
+++ b/hw/acpi/ghes-stub.c
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
 {
     return -1;
 }
+
+bool acpi_ghes_present(void)
+{
+    return false;
+}
diff --git a/hw/acpi/ghes.c b/hw/acpi/ghes.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/ghes.c
+++ b/hw/acpi/ghes.c
@@ -XXX,XX +XXX,XX @@ void acpi_ghes_add_fw_cfg(AcpiGhesState *ags, FWCfgState *s,
     /* Create a read-write fw_cfg file for Address */
     fw_cfg_add_file_callback(s, ACPI_GHES_DATA_ADDR_FW_CFG_FILE, NULL, NULL,
         NULL, &(ags->ghes_addr_le), sizeof(ags->ghes_addr_le), false);
+
+    ags->present = true;
 }

 int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)
@@ -XXX,XX +XXX,XX @@ int acpi_ghes_record_errors(uint8_t source_id, uint64_t physical_address)

     return ret;
 }
+
+bool acpi_ghes_present(void)
+{
+    AcpiGedState *acpi_ged_state;
+    AcpiGhesState *ags;
+
+    acpi_ged_state = ACPI_GED(object_resolve_path_type("", TYPE_ACPI_GED,
+                                                       NULL));
+
+    if (!acpi_ged_state) {
+        return false;
+    }
+    ags = &acpi_ged_state->ghes_state;
+    return ags->present;
+}
--
2.20.1
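A brief usage sketch of the new predicate (illustrative only; the
argument names are placeholders, and the next patch in this series
applies exactly this pattern in target/arm/kvm64.c):

    /* Only try to record an error if a GHES table actually exists. */
    if (acpi_ghes_present()) {
        acpi_ghes_record_errors(source_id, physical_address);
    }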
The virt_is_acpi_enabled() function is specific to the virt board, as
is the check for its 'ras' property. Use the new acpi_ghes_present()
function to check whether we should report memory errors via
acpi_ghes_record_errors().

This avoids a link error if QEMU was built without support for the
virt board, and provides a mechanism that can be used by any future
board models that want to add ACPI memory error reporting support
(they only need to call acpi_ghes_add_fw_cfg()).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Dongjiu Geng <gengdongjiu1@gmail.com>
Message-id: 20210603171259.27962-4-peter.maydell@linaro.org
---
 target/arm/kvm64.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ void kvm_arch_on_sigbus_vcpu(CPUState *c, int code, void *addr)
 {
     ram_addr_t ram_addr;
     hwaddr paddr;
-    Object *obj = qdev_get_machine();
-    VirtMachineState *vms = VIRT_MACHINE(obj);
-    bool acpi_enabled = virt_is_acpi_enabled(vms);

     assert(code == BUS_MCEERR_AR || code == BUS_MCEERR_AO);

-    if (acpi_enabled && addr &&
-        object_property_get_bool(obj, "ras", NULL)) {
+    if (acpi_ghes_present() && addr) {
         ram_addr = qemu_ram_addr_from_host(addr);
         if (ram_addr != RAM_ADDR_INVALID &&
             kvm_physical_memory_addr_from_host(c->kvm_state, addr, &paddr)) {
--
2.20.1
Correct a typo in the name we give the NVIC object.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
---
 hw/arm/armv7m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)

     memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);

-    object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
+    object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
     object_property_add_alias(obj, "num-irq",
                               OBJECT(&s->nvic), "num-irq");

--
2.20.1

These days the Arm architecture has a wide range of fine-grained
optional extra architectural features. We implement quite a lot
of these but by no means all of them. Document what we do implement,
so that users can find out without having to dig through back-issues
of our Changelog on the wiki.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210617140328.28622-1-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 docs/system/arm/emulation.rst | 102 ++++++++++++++++++++++++++++++++++
 docs/system/target-arm.rst    |   6 ++
 2 files changed, 108 insertions(+)
 create mode 100644 docs/system/arm/emulation.rst

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@
+A-profile CPU architecture support
+==================================
+
+QEMU's TCG emulation includes support for the Armv5, Armv6, Armv7 and
+Armv8 versions of the A-profile architecture. It also has support for
+the following architecture extensions:
+
+- FEAT_AA32BF16 (AArch32 BFloat16 instructions)
+- FEAT_AA32HPD (AArch32 hierarchical permission disables)
+- FEAT_AA32I8MM (AArch32 Int8 matrix multiplication instructions)
+- FEAT_AES (AESD and AESE instructions)
+- FEAT_BF16 (AArch64 BFloat16 instructions)
+- FEAT_BTI (Branch Target Identification)
+- FEAT_DIT (Data Independent Timing instructions)
+- FEAT_DPB (DC CVAP instruction)
+- FEAT_DotProd (Advanced SIMD dot product instructions)
+- FEAT_FCMA (Floating-point complex number instructions)
+- FEAT_FHM (Floating-point half-precision multiplication instructions)
+- FEAT_FP16 (Half-precision floating-point data processing)
+- FEAT_FRINTTS (Floating-point to integer instructions)
+- FEAT_FlagM (Flag manipulation instructions v2)
+- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
+- FEAT_HPDS (Hierarchical permission disables)
+- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_JSCVT (JavaScript conversion instructions)
+- FEAT_LOR (Limited ordering regions)
+- FEAT_LRCPC (Load-acquire RCpc instructions)
+- FEAT_LRCPC2 (Load-acquire RCpc instructions v2)
+- FEAT_LSE (Large System Extensions)
+- FEAT_MTE (Memory Tagging Extension)
+- FEAT_MTE2 (Memory Tagging Extension)
+- FEAT_PAN (Privileged access never)
+- FEAT_PAN2 (AT S1E1R and AT S1E1W instruction variants affected by PSTATE.PAN)
+- FEAT_PAuth (Pointer authentication)
+- FEAT_PMULL (PMULL, PMULL2 instructions)
+- FEAT_PMUv3p1 (PMU Extensions v3.1)
+- FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
+- FEAT_RNG (Random number generator)
+- FEAT_SB (Speculation Barrier)
+- FEAT_SEL2 (Secure EL2)
+- FEAT_SHA1 (SHA1 instructions)
+- FEAT_SHA256 (SHA256 instructions)
+- FEAT_SHA3 (Advanced SIMD SHA3 instructions)
+- FEAT_SHA512 (Advanced SIMD SHA512 instructions)
+- FEAT_SM3 (Advanced SIMD SM3 instructions)
+- FEAT_SM4 (Advanced SIMD SM4 instructions)
+- FEAT_SPECRES (Speculation restriction instructions)
+- FEAT_SSBS (Speculative Store Bypass Safe)
+- FEAT_TLBIOS (TLB invalidate instructions in Outer Shareable domain)
+- FEAT_TLBIRANGE (TLB invalidate range instructions)
+- FEAT_TTCNP (Translation table Common not private translations)
+- FEAT_TTST (Small translation tables)
+- FEAT_UAO (Unprivileged Access Override control)
+- FEAT_VHE (Virtualization Host Extensions)
+- FEAT_VMID16 (16-bit VMID)
+- FEAT_XNX (Translation table stage 2 Unprivileged Execute-never)
+- SVE (The Scalable Vector Extension)
+- SVE2 (The Scalable Vector Extension v2)
+
+For information on the specifics of these extensions, please refer
+to the `Armv8-A Arm Architecture Reference Manual
+<https://developer.arm.com/documentation/ddi0487/latest>`_.
+
+When a specific named CPU is being emulated, only those features which
+are present in hardware for that CPU are emulated. (If a feature is
+not in the list above then it is not supported, even if the real
+hardware should have it.) The ``max`` CPU enables all features.
+
+R-profile CPU architecture support
+==================================
+
+QEMU's TCG emulation support for R-profile CPUs is currently limited.
+We emulate only the Cortex-R5 and Cortex-R5F CPUs.
+
+M-profile CPU architecture support
+==================================
+
+QEMU's TCG emulation includes support for Armv6-M, Armv7-M, Armv8-M, and
+Armv8.1-M versions of the M-profile architecture. It also has support
+for the following architecture extensions:
+
+- FP (Floating-point Extension)
+- FPCXT (FPCXT access instructions)
+- HP (Half-precision floating-point instructions)
+- LOB (Low Overhead loops and Branch future)
+- M (Main Extension)
+- MPU (Memory Protection Unit Extension)
+- PXN (Privileged Execute Never)
+- RAS (Reliability, Serviceability and Availability): "minimum RAS Extension" only
+- S (Security Extension)
+- ST (System Timer Extension)
+
+For information on the specifics of these extensions, please refer
+to the `Armv8-M Arm Architecture Reference Manual
+<https://developer.arm.com/documentation/ddi0553/latest>`_.
+
+When a specific named CPU is being emulated, only those features which
+are present in hardware for that CPU are emulated. (If a feature is
+not in the list above then it is not supported, even if the real
+hardware should have it.) There is no equivalent of the ``max`` CPU for
+M-profile.
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
    arm/virt
    arm/xlnx-versal-virt

+Emulated CPU architecture support
+=================================
+
+.. toctree::
+   arm/emulation
+
 Arm CPU features
 ================

--
2.20.1
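As a rough sketch of what "only those features which are present in
hardware for that CPU are emulated" means inside QEMU (an illustrative
fragment, not part of this patch; dc_isar_feature() is the real helper
used elsewhere in this series): the translator checks the emulated CPU's
ID register fields before accepting an instruction, so a named CPU model
without an extension makes its instructions UNDEF just as hardware would:

    /* Fragment, for illustration: reject an MVE insn on a CPU whose
     * ID registers say MVE is absent, so it UNDEFs as on hardware. */
    if (!dc_isar_feature(aa32_mve, s)) {
        return false;
    }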
In the code for handling VFP system register accesses there is some
stray whitespace after a unary '-' operator, and also some incorrect
indent in a couple of function prototypes. We're about to move this
code to another file, so fix the code style issues first so
checkpatch doesn't complain about the code-movement patch.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-2-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
 }

 static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
-
                                   fp_sysreg_loadfn *loadfn,
-                                 void *opaque)
+                                  void *opaque)
 {
     /* Do a write to an M-profile floating point system register */
     TCGv_i32 tmp;
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
 }

 static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
-                                fp_sysreg_storefn *storefn,
-                                void *opaque)
+                                 fp_sysreg_storefn *storefn,
+                                 void *opaque)
 {
     /* Do a read from an M-profile floating point system register */
     TCGv_i32 tmp;
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
     TCGv_i32 addr;

     if (!a->a) {
-        offset = - offset;
+        offset = -offset;
     }

     addr = load_reg(s, a->rn);
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
     TCGv_i32 value = tcg_temp_new_i32();

     if (!a->a) {
-        offset = - offset;
+        offset = -offset;
     }

     addr = load_reg(s, a->rn);
--
2.20.1
If the guest makes an FPCXT_NS access when the FPU is disabled,
one of two things happens:
 * if there is no active FP context, then the insn behaves the
   same way as if the FPU was enabled: writes ignored, reads
   same value as FPDSCR_NS
 * if there is an active FP context, then we take a NOCP
   exception

Add code to the sysreg read/write functions which emits
code to take the NOCP exception in the latter case.

At the moment this will never be used, because the NOCP checks in
m-nocp.decode happen first, and so the trans functions are never
called when the FPU is disabled. The code will be needed when we
move the sysreg access insns to before the NOCP patterns in the
following commit.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         lab_end = gen_new_label();
         /* fpInactive case: write is a NOP, so branch to end */
         gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
-        /* !fpInactive: PreserveFPState(), and reads same as FPCXT_S */
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS writes
+         * behave the same as FPCXT_S writes.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
         gen_preserve_fp_state(s);
         /* fall through */
     case ARM_VFP_FPCXT_S:
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
         tcg_gen_br(lab_end);

         gen_set_label(lab_active);
-        /* !fpInactive: Reads the same as FPCXT_S, but side effects differ */
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS
+         * reads the same as FPCXT_S.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
         gen_preserve_fp_state(s);
         tmp = tcg_temp_new_i32();
         sfpa = tcg_temp_new_i32();
--
2.20.1
Currently M-profile borrows the A-profile code for VMSR and VMRS
(access to the FP system registers), because all it needs to support
is the FPSCR. In v8.1M things become significantly more complicated
in two ways:

 * there are several new FP system registers; some have side effects
   on read, and one (FPCXT_NS) needs to avoid the usual
   vfp_access_check() and the "only if FPU implemented" check

 * all sysregs are now accessible both by VMRS/VMSR (which
   reads/writes a general purpose register) and also by VLDR/VSTR
   (which reads/writes them directly to memory)

Refactor the structure of how we handle VMSR/VMRS to cope with this:

 * keep the M-profile code entirely separate from the A-profile code

 * abstract out the "read or write the general purpose register" part
   of the code into a loadfn or storefn function pointer, so we can
   reuse it for VLDR/VSTR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
---
 target/arm/cpu.h               |   3 +
 target/arm/translate-vfp.c.inc | 182 ++++++++++++++++++++++++++++++---
 2 files changed, 171 insertions(+), 14 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
 #define ARM_VFP_FPINST 9
 #define ARM_VFP_FPINST2 10

+/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
+#define QEMU_VFP_FPSCR_NZCV 0xffff
+
 /* iwMMXt coprocessor control registers. */
 #define ARM_IWMMXT_wCID 0
 #define ARM_IWMMXT_wCon 1
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
     return true;
 }

+/*
+ * M-profile provides two different sets of instructions that can
...
+    FPSysRegCheckContinue, /* caller should continue generating code */
+} FPSysRegCheckResult;
+
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
+{
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return FPSysRegCheckFailed;
+    }
+
+    switch (regno) {
+    case ARM_VFP_FPSCR:
+    case QEMU_VFP_FPSCR_NZCV:
+        break;
+    default:
+        return FPSysRegCheckFailed;
+    }
+
+    if (!vfp_access_check(s)) {
+        return FPSysRegCheckDone;
+    }
+
+    return FPSysRegCheckContinue;
+}
+
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
+
+                                  fp_sysreg_loadfn *loadfn,
+                                void *opaque)
+{
+    /* Do a write to an M-profile floating point system register */
+    TCGv_i32 tmp;
+
+    switch (fp_sysreg_checks(s, regno)) {
+    case FPSysRegCheckFailed:
+        return false;
+    case FPSysRegCheckDone:
...
+        tmp = loadfn(s, opaque);
+        gen_helper_vfp_set_fpscr(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
+        gen_lookup_tb(s);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    return true;

The M-profile architecture requires that accesses to FPCXT_NS when
there is no active FP state must not take a NOCP fault even if the
FPU is disabled. We were not implementing this correctly, because
in our decode we catch the NOCP faults early in m-nocp.decode.

Fix this bug by moving all the handling of M-profile FP system
register accesses from vfp.decode into m-nocp.decode and putting
it above the NOCP blocks. This provides the correct behaviour:
 * for accesses other than FPCXT_NS the trans functions call
   vfp_access_check(), which will check for FPU disabled and
   raise a NOCP exception if necessary
 * for FPCXT_NS we have the special case code that doesn't
   call vfp_access_check()
 * when these trans functions want to raise an UNDEF they return
   false, so the decoder will fall through into the NOCP blocks.
   This means that NOCP correctly takes precedence over UNDEF
   for these insns. (This is a difference from the other insns
   handled by m-nocp.decode, where UNDEF takes precedence and
   which we implement by having those trans functions call
   unallocated_encoding() in the appropriate places.)

[Note for backport to stable: this commit has a semantic dependency
on commit 9a486856e9173af, which was not marked as cc-stable because
we didn't know we'd need it for a for-stable bugfix.]

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-4-peter.maydell@linaro.org
---
 target/arm/translate-a32.h    |   1 +
 target/arm/m-nocp.decode      |  24 ++
 target/arm/vfp.decode         |  14 -
 target/arm/translate-m-nocp.c | 514 +++++++++++++++++++++++++++++
 target/arm/translate-vfp.c    | 517 +---------------------------------
 5 files changed, 542 insertions(+), 528 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ bool disas_neon_shared(DisasContext *s, uint32_t insn);
 void load_reg_var(DisasContext *s, TCGv_i32 var, int reg);
 void arm_gen_condlabel(DisasContext *s);
 bool vfp_access_check(DisasContext *s);
+void gen_preserve_fp_state(DisasContext *s);
 void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop);
 void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop);
 void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop);
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m-nocp.decode
+++ b/target/arm/m-nocp.decode
@@ -XXX,XX +XXX,XX @@

 &nocp cp

+# M-profile VLDR/VSTR to sysreg
+%vldr_sysreg 22:1 13:3
+%imm7_0x4 0:7 !function=times_4
+
+&vldr_sysreg rn reg imm a w p
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
+             reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
+
 {
   # Special cases which do not take an early NOCP: VLLDM and VLSTM
   VLLDM_VLSTM  1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
@@ -XXX,XX +XXX,XX @@
   VSCCLRM      1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
   VSCCLRM      1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2

+  # FP system register accesses: these are a special case because accesses
+  # to FPCXT_NS succeed even if the FPU is disabled. We therefore need
+  # to handle them before the big NOCP blocks. Note that within these
+  # insns NOCP still has higher priority than UNDEFs; this is implemented
+  # by their returning 'false' for UNDEF so as to fall through into the
+  # NOCP check (in contrast to VLLDM etc, which call unallocated_encoding()
+  # for the UNDEFs there that must take precedence over NOCP.)
+
+  VMSR_VMRS    ---- 1110 111 l:1 reg:4 rt:4 1010 0001 0000
+
+  # P=0 W=0 is SEE "Related encodings", so split into two patterns
+  VLDR_sysreg  ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
+  VLDR_sysreg  ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
+  VSTR_sysreg  ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
+  VSTR_sysreg  ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
+
   NOCP         111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
   NOCP         111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
   # From v8.1M onwards this range will also NOCP:
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp.decode
+++ b/target/arm/vfp.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
 VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
 VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp

-# M-profile VLDR/VSTR to sysreg
-%vldr_sysreg 22:1 13:3
-%imm7_0x4 0:7 !function=times_4
-
-&vldr_sysreg rn reg imm a w p
-@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
-             reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
-
-# P=0 W=0 is SEE "Related encodings", so split into two patterns
-VLDR_sysreg  ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
-VLDR_sysreg  ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
-VSTR_sysreg  ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
-VSTR_sysreg  ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
-
 # We split the load/store multiple up into two patterns to avoid
 # overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
 # grouping:
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@

 #include "qemu/osdep.h"
 #include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "translate.h"
 #include "translate-a32.h"

@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
     return true;
 }

+/*
+ * M-profile provides two different sets of instructions that can
...
+    FPSysRegCheckContinue, /* caller should continue generating code */
+} FPSysRegCheckResult;
+
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
+{
+    if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
+        return FPSysRegCheckFailed;
+    }
+
+    switch (regno) {
+    case ARM_VFP_FPSCR:
+    case QEMU_VFP_FPSCR_NZCV:
+        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return FPSysRegCheckFailed;
+        }
+        break;
+    case ARM_VFP_FPCXT_S:
+    case ARM_VFP_FPCXT_NS:
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return FPSysRegCheckFailed;
+        }
+        if (!s->v8m_secure) {
+            return FPSysRegCheckFailed;
+        }
+        break;
+    case ARM_VFP_VPR:
+    case ARM_VFP_P0:
+        if (!dc_isar_feature(aa32_mve, s)) {
+            return FPSysRegCheckFailed;
+        }
+        break;
+    default:
+        return FPSysRegCheckFailed;
+    }
+
+    /*
+     * FPCXT_NS is a special case: it has specific handling for
+     * "current FP state is inactive", and must do the PreserveFPState()
+     * but not the usual full set of actions done by ExecuteFPCheck().
+     * So we don't call vfp_access_check() and the callers must handle this.
+     */
+    if (regno != ARM_VFP_FPCXT_NS && !vfp_access_check(s)) {
+        return FPSysRegCheckDone;
+    }
+
+    return FPSysRegCheckContinue;
+}
+
+static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
+                                  TCGLabel *label)
+{
+    /*
+     * FPCXT_NS is a special case: it has specific handling for
+     * "current FP state is inactive", and must do the PreserveFPState()
+     * but not the usual full set of actions done by ExecuteFPCheck().
+     * We don't have a TB flag that matches the fpInactive check, so we
+     * do it at runtime as we don't expect FPCXT_NS accesses to be frequent.
+     *
+     * Emit code that checks fpInactive and does a conditional
+     * branch to label based on it:
+     * if cond is TCG_COND_NE then branch if fpInactive != 0 (ie if inactive)
+     * if cond is TCG_COND_EQ then branch if fpInactive == 0 (ie if active)
+     */
+    assert(cond == TCG_COND_EQ || cond == TCG_COND_NE);
+
+    /* fpInactive = FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0 */
+    TCGv_i32 aspen, fpca;
+    aspen = load_cpu_field(v7m.fpccr[M_REG_NS]);
+    fpca = load_cpu_field(v7m.control[M_REG_S]);
+    tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
+    tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
+    tcg_gen_andi_i32(fpca, fpca, R_V7M_CONTROL_FPCA_MASK);
+    tcg_gen_or_i32(fpca, fpca, aspen);
+    tcg_gen_brcondi_i32(tcg_invert_cond(cond), fpca, 0, label);
+    tcg_temp_free_i32(aspen);
+    tcg_temp_free_i32(fpca);
+}
+
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
+                                  fp_sysreg_loadfn *loadfn,
+                                  void *opaque)
+{
+    /* Do a write to an M-profile floating point system register */
+    TCGv_i32 tmp;
+    TCGLabel *lab_end = NULL;
+
+    switch (fp_sysreg_checks(s, regno)) {
+    case FPSysRegCheckFailed:
+        return false;
+    case FPSysRegCheckDone:
...
+        tmp = loadfn(s, opaque);
+        gen_helper_vfp_set_fpscr(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
+        gen_lookup_tb(s);
+        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+    {
+        TCGv_i32 fpscr;
+        tmp = loadfn(s, opaque);
+        if (dc_isar_feature(aa32_mve, s)) {
+            /* QC is only present for MVE; otherwise RES0 */
+            TCGv_i32 qc = tcg_temp_new_i32();
+            tcg_gen_andi_i32(qc, tmp, FPCR_QC);
+            /*
+             * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
+             * here writing the same value into all elements is simplest.
+             */
+            tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
+                                 16, 16, qc);
+        }
+        tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
+        fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
+        tcg_gen_or_i32(fpscr, fpscr, tmp);
+        store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
+    case ARM_VFP_FPCXT_NS:
+        lab_end = gen_new_label();
+        /* fpInactive case: write is a NOP, so branch to end */
+        gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
+        /*
+         * !fpInactive: if FPU disabled, take NOCP exception;
+         * otherwise PreserveFPState(), and then FPCXT_NS writes
+         * behave the same as FPCXT_S writes.
+         */
+        if (s->fp_excp_el) {
+            gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+                               syn_uncategorized(), s->fp_excp_el);
+            /*
+             * This was only a conditional exception, so override
+             * gen_exception_insn()'s default to DISAS_NORETURN
+             */
+            s->base.is_jmp = DISAS_NEXT;
+            break;
+        }
+        gen_preserve_fp_state(s);
+        /* fall through */
+    case ARM_VFP_FPCXT_S:
+    {
+        TCGv_i32 sfpa, control;
+        /*
+         * Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
+         * bits [27:0] from value and zeroes bits [31:28].
+         */
+        tmp = loadfn(s, opaque);
+        sfpa = tcg_temp_new_i32();
+        tcg_gen_shri_i32(sfpa, tmp, 31);
+        control = load_cpu_field(v7m.control[M_REG_S]);
+        tcg_gen_deposit_i32(control, control, sfpa,
+                            R_V7M_CONTROL_SFPA_SHIFT, 1);
+        store_cpu_field(control, v7m.control[M_REG_S]);
+        tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
+        gen_helper_vfp_set_fpscr(cpu_env, tmp);
+        tcg_temp_free_i32(tmp);
+        tcg_temp_free_i32(sfpa);
+        break;
+    }
+    case ARM_VFP_VPR:
+        /* Behaves as NOP if not privileged */
+        if (IS_USER(s)) {
+            break;
+        }
+        tmp = loadfn(s, opaque);
+        store_cpu_field(tmp, v7m.vpr);
+        break;
+    case ARM_VFP_P0:
+    {
+        TCGv_i32 vpr;
+        tmp = loadfn(s, opaque);
+        vpr = load_cpu_field(v7m.vpr);
+        tcg_gen_deposit_i32(vpr, vpr, tmp,
+                            R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
+        store_cpu_field(vpr, v7m.vpr);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
+    default:
+        g_assert_not_reached();
+    }
+    if (lab_end) {
+        gen_set_label(lab_end);
+    }
+    return true;
132
+}
354
+}
133
+
355
+
134
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
356
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
135
+ fp_sysreg_storefn *storefn,
357
+ fp_sysreg_storefn *storefn,
136
+ void *opaque)
358
+ void *opaque)
137
+{
359
+{
138
+ /* Do a read from an M-profile floating point system register */
360
+ /* Do a read from an M-profile floating point system register */
139
+ TCGv_i32 tmp;
361
+ TCGv_i32 tmp;
362
+ TCGLabel *lab_end = NULL;
363
+ bool lookup_tb = false;
140
+
364
+
141
+ switch (fp_sysreg_checks(s, regno)) {
365
+ switch (fp_sysreg_checks(s, regno)) {
142
+ case FPSysRegCheckFailed:
366
+ case FPSysRegCheckFailed:
143
+ return false;
367
+ return false;
144
+ case FPSysRegCheckDone:
368
+ case FPSysRegCheckDone:
145
+ return true;
369
+ return true;
146
+ case FPSysRegCheckContinue:
370
+ case FPSysRegCheckContinue:
147
+ break;
371
+ break;
148
+ }
372
+ }
149
+
373
+
374
+ if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
375
+ /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
376
+ regno = QEMU_VFP_FPSCR_NZCV;
377
+ }
378
+
150
+ switch (regno) {
379
+ switch (regno) {
151
+ case ARM_VFP_FPSCR:
380
+ case ARM_VFP_FPSCR:
152
+ tmp = tcg_temp_new_i32();
381
+ tmp = tcg_temp_new_i32();
153
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
382
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
383
+ storefn(s, opaque, tmp);
384
+ break;
385
+ case ARM_VFP_FPSCR_NZCVQC:
386
+ tmp = tcg_temp_new_i32();
387
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
388
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
154
+ storefn(s, opaque, tmp);
389
+ storefn(s, opaque, tmp);
155
+ break;
390
+ break;
156
+ case QEMU_VFP_FPSCR_NZCV:
391
+ case QEMU_VFP_FPSCR_NZCV:
157
+ /*
392
+ /*
158
+ * Read just NZCV; this is a special case to avoid the
393
+ * Read just NZCV; this is a special case to avoid the
159
+ * helper call for the "VMRS to CPSR.NZCV" insn.
394
+ * helper call for the "VMRS to CPSR.NZCV" insn.
160
+ */
395
+ */
161
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
396
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
162
+ tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
397
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
398
+ storefn(s, opaque, tmp);
399
+ break;
400
+ case ARM_VFP_FPCXT_S:
401
+ {
402
+ TCGv_i32 control, sfpa, fpscr;
403
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
404
+ tmp = tcg_temp_new_i32();
405
+ sfpa = tcg_temp_new_i32();
406
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
407
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
408
+ control = load_cpu_field(v7m.control[M_REG_S]);
409
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
410
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
411
+ tcg_gen_or_i32(tmp, tmp, sfpa);
412
+ tcg_temp_free_i32(sfpa);
413
+ /*
414
+ * Store result before updating FPSCR etc, in case
415
+ * it is a memory write which causes an exception.
416
+ */
417
+ storefn(s, opaque, tmp);
418
+ /*
419
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
420
+ * CONTROL.SFPA; so we'll end the TB here.
421
+ */
422
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
423
+ store_cpu_field(control, v7m.control[M_REG_S]);
424
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
425
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
426
+ tcg_temp_free_i32(fpscr);
427
+ lookup_tb = true;
428
+ break;
429
+ }
430
+ case ARM_VFP_FPCXT_NS:
431
+ {
432
+ TCGv_i32 control, sfpa, fpscr, fpdscr, zero;
433
+ TCGLabel *lab_active = gen_new_label();
434
+
435
+ lookup_tb = true;
436
+
437
+ gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
438
+ /* fpInactive case: reads as FPDSCR_NS */
439
+ TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
440
+ storefn(s, opaque, tmp);
441
+ lab_end = gen_new_label();
442
+ tcg_gen_br(lab_end);
443
+
444
+ gen_set_label(lab_active);
445
+ /*
446
+ * !fpInactive: if FPU disabled, take NOCP exception;
447
+ * otherwise PreserveFPState(), and then FPCXT_NS
448
+ * reads the same as FPCXT_S.
449
+ */
450
+ if (s->fp_excp_el) {
451
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
452
+ syn_uncategorized(), s->fp_excp_el);
453
+ /*
454
+ * This was only a conditional exception, so override
455
+ * gen_exception_insn()'s default to DISAS_NORETURN
456
+ */
457
+ s->base.is_jmp = DISAS_NEXT;
458
+ break;
459
+ }
460
+ gen_preserve_fp_state(s);
461
+ tmp = tcg_temp_new_i32();
462
+ sfpa = tcg_temp_new_i32();
463
+ fpscr = tcg_temp_new_i32();
464
+ gen_helper_vfp_get_fpscr(fpscr, cpu_env);
465
+ tcg_gen_andi_i32(tmp, fpscr, ~FPCR_NZCV_MASK);
466
+ control = load_cpu_field(v7m.control[M_REG_S]);
467
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
468
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
469
+ tcg_gen_or_i32(tmp, tmp, sfpa);
470
+ tcg_temp_free_i32(control);
471
+ /* Store result before updating FPSCR, in case it faults */
472
+ storefn(s, opaque, tmp);
473
+ /* If SFPA is zero then set FPSCR from FPDSCR_NS */
474
+ fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
475
+ zero = tcg_const_i32(0);
476
+ tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr);
477
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
478
+ tcg_temp_free_i32(zero);
479
+ tcg_temp_free_i32(sfpa);
480
+ tcg_temp_free_i32(fpdscr);
481
+ tcg_temp_free_i32(fpscr);
482
+ break;
483
+ }
484
+ case ARM_VFP_VPR:
485
+ /* Behaves as NOP if not privileged */
486
+ if (IS_USER(s)) {
487
+ break;
488
+ }
489
+ tmp = load_cpu_field(v7m.vpr);
490
+ storefn(s, opaque, tmp);
491
+ break;
492
+ case ARM_VFP_P0:
493
+ tmp = load_cpu_field(v7m.vpr);
494
+ tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
163
+ storefn(s, opaque, tmp);
495
+ storefn(s, opaque, tmp);
164
+ break;
496
+ break;
165
+ default:
497
+ default:
166
+ g_assert_not_reached();
498
+ g_assert_not_reached();
499
+ }
500
+
501
+ if (lab_end) {
502
+ gen_set_label(lab_end);
503
+ }
504
+ if (lookup_tb) {
505
+ gen_lookup_tb(s);
167
+ }
506
+ }
168
+ return true;
507
+ return true;
169
+}
508
+}
170
+
509
+
171
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
510
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
...
...
186
+ arg_VMSR_VMRS *a = opaque;
525
+ arg_VMSR_VMRS *a = opaque;
187
+
526
+
188
+ return load_reg(s, a->rt);
527
+ return load_reg(s, a->rt);
189
+}
528
+}
190
+
529
+
191
+static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
530
+static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
192
+{
531
+{
193
+ /*
532
+ /*
194
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
533
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
195
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
534
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
196
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
535
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
...
...
211
+ /* VMSR, move gp register to FP system register */
550
+ /* VMSR, move gp register to FP system register */
212
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
551
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
213
+ }
552
+ }
214
+}
553
+}
215
+
554
+
555
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
556
+{
557
+ arg_vldr_sysreg *a = opaque;
558
+ uint32_t offset = a->imm;
559
+ TCGv_i32 addr;
560
+
561
+ if (!a->a) {
562
+ offset = -offset;
563
+ }
564
+
565
+ addr = load_reg(s, a->rn);
566
+ if (a->p) {
567
+ tcg_gen_addi_i32(addr, addr, offset);
568
+ }
569
+
570
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
571
+ gen_helper_v8m_stackcheck(cpu_env, addr);
572
+ }
573
+
574
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
575
+ MO_UL | MO_ALIGN | s->be_data);
576
+ tcg_temp_free_i32(value);
577
+
578
+ if (a->w) {
579
+ /* writeback */
580
+ if (!a->p) {
581
+ tcg_gen_addi_i32(addr, addr, offset);
582
+ }
583
+ store_reg(s, a->rn, addr);
584
+ } else {
585
+ tcg_temp_free_i32(addr);
586
+ }
587
+}
588
+
589
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
590
+{
591
+ arg_vldr_sysreg *a = opaque;
592
+ uint32_t offset = a->imm;
593
+ TCGv_i32 addr;
594
+ TCGv_i32 value = tcg_temp_new_i32();
595
+
596
+ if (!a->a) {
597
+ offset = -offset;
598
+ }
599
+
600
+ addr = load_reg(s, a->rn);
601
+ if (a->p) {
602
+ tcg_gen_addi_i32(addr, addr, offset);
603
+ }
604
+
605
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
606
+ gen_helper_v8m_stackcheck(cpu_env, addr);
607
+ }
608
+
609
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
610
+ MO_UL | MO_ALIGN | s->be_data);
611
+
612
+ if (a->w) {
613
+ /* writeback */
614
+ if (!a->p) {
615
+ tcg_gen_addi_i32(addr, addr, offset);
616
+ }
617
+ store_reg(s, a->rn, addr);
618
+ } else {
619
+ tcg_temp_free_i32(addr);
620
+ }
621
+ return value;
622
+}
623
+
624
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
625
+{
626
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
627
+ return false;
628
+ }
629
+ if (a->rn == 15) {
630
+ return false;
631
+ }
632
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
633
+}
634
+
635
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
636
+{
637
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
638
+ return false;
639
+ }
640
+ if (a->rn == 15) {
641
+ return false;
642
+ }
643
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
644
+}
645
+
646
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
647
{
648
/*
649
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
650
index XXXXXXX..XXXXXXX 100644
651
--- a/target/arm/translate-vfp.c
652
+++ b/target/arm/translate-vfp.c
653
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
654
* Generate code for M-profile lazy FP state preservation if needed;
655
* this corresponds to the pseudocode PreserveFPState() function.
656
*/
657
-static void gen_preserve_fp_state(DisasContext *s)
658
+void gen_preserve_fp_state(DisasContext *s)
659
{
660
if (s->v7m_lspact) {
661
/*
662
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
663
return true;
664
}
665
666
-/*
667
- * M-profile provides two different sets of instructions that can
668
- * access floating point system registers: VMSR/VMRS (which move
669
- * to/from a general purpose register) and VLDR/VSTR sysreg (which
670
- * move directly to/from memory). In some cases there are also side
671
- * effects which must happen after any write to memory (which could
672
- * cause an exception). So we implement the common logic for the
673
- * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
674
- * which take pointers to callback functions which will perform the
675
- * actual "read/write general purpose register" and "read/write
676
- * memory" operations.
677
- */
678
-
679
-/*
680
- * Emit code to store the sysreg to its final destination; frees the
681
- * TCG temp 'value' it is passed.
682
- */
683
-typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
684
-/*
685
- * Emit code to load the value to be copied to the sysreg; returns
686
- * a new TCG temporary
687
- */
688
-typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
689
-
690
-/* Common decode/access checks for fp sysreg read/write */
691
-typedef enum FPSysRegCheckResult {
692
- FPSysRegCheckFailed, /* caller should return false */
693
- FPSysRegCheckDone, /* caller should return true */
694
- FPSysRegCheckContinue, /* caller should continue generating code */
695
-} FPSysRegCheckResult;
696
-
697
-static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
698
-{
699
- if (!dc_isar_feature(aa32_fpsp_v2, s) && !dc_isar_feature(aa32_mve, s)) {
700
- return FPSysRegCheckFailed;
701
- }
702
-
703
- switch (regno) {
704
- case ARM_VFP_FPSCR:
705
- case QEMU_VFP_FPSCR_NZCV:
706
- break;
707
- case ARM_VFP_FPSCR_NZCVQC:
708
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
709
- return FPSysRegCheckFailed;
710
- }
711
- break;
712
- case ARM_VFP_FPCXT_S:
713
- case ARM_VFP_FPCXT_NS:
714
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
715
- return FPSysRegCheckFailed;
716
- }
717
- if (!s->v8m_secure) {
718
- return FPSysRegCheckFailed;
719
- }
720
- break;
721
- case ARM_VFP_VPR:
722
- case ARM_VFP_P0:
723
- if (!dc_isar_feature(aa32_mve, s)) {
724
- return FPSysRegCheckFailed;
725
- }
726
- break;
727
- default:
728
- return FPSysRegCheckFailed;
729
- }
730
-
731
- /*
732
- * FPCXT_NS is a special case: it has specific handling for
733
- * "current FP state is inactive", and must do the PreserveFPState()
734
- * but not the usual full set of actions done by ExecuteFPCheck().
735
- * So we don't call vfp_access_check() and the callers must handle this.
736
- */
737
- if (regno != ARM_VFP_FPCXT_NS && !vfp_access_check(s)) {
738
- return FPSysRegCheckDone;
739
- }
740
- return FPSysRegCheckContinue;
741
-}
742
-
743
-static void gen_branch_fpInactive(DisasContext *s, TCGCond cond,
744
- TCGLabel *label)
745
-{
746
- /*
747
- * FPCXT_NS is a special case: it has specific handling for
748
- * "current FP state is inactive", and must do the PreserveFPState()
749
- * but not the usual full set of actions done by ExecuteFPCheck().
750
- * We don't have a TB flag that matches the fpInactive check, so we
751
- * do it at runtime as we don't expect FPCXT_NS accesses to be frequent.
752
- *
753
- * Emit code that checks fpInactive and does a conditional
754
- * branch to label based on it:
755
- * if cond is TCG_COND_NE then branch if fpInactive != 0 (ie if inactive)
756
- * if cond is TCG_COND_EQ then branch if fpInactive == 0 (ie if active)
757
- */
758
- assert(cond == TCG_COND_EQ || cond == TCG_COND_NE);
759
-
760
- /* fpInactive = FPCCR_NS.ASPEN == 1 && CONTROL.FPCA == 0 */
761
- TCGv_i32 aspen, fpca;
762
- aspen = load_cpu_field(v7m.fpccr[M_REG_NS]);
763
- fpca = load_cpu_field(v7m.control[M_REG_S]);
764
- tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
765
- tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
766
- tcg_gen_andi_i32(fpca, fpca, R_V7M_CONTROL_FPCA_MASK);
767
- tcg_gen_or_i32(fpca, fpca, aspen);
768
- tcg_gen_brcondi_i32(tcg_invert_cond(cond), fpca, 0, label);
769
- tcg_temp_free_i32(aspen);
770
- tcg_temp_free_i32(fpca);
771
-}
772
-
773
-static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
774
- fp_sysreg_loadfn *loadfn,
775
- void *opaque)
776
-{
777
- /* Do a write to an M-profile floating point system register */
778
- TCGv_i32 tmp;
779
- TCGLabel *lab_end = NULL;
780
-
781
- switch (fp_sysreg_checks(s, regno)) {
782
- case FPSysRegCheckFailed:
783
- return false;
784
- case FPSysRegCheckDone:
785
- return true;
786
- case FPSysRegCheckContinue:
787
- break;
788
- }
789
-
790
- switch (regno) {
791
- case ARM_VFP_FPSCR:
792
- tmp = loadfn(s, opaque);
793
- gen_helper_vfp_set_fpscr(cpu_env, tmp);
794
- tcg_temp_free_i32(tmp);
795
- gen_lookup_tb(s);
796
- break;
797
- case ARM_VFP_FPSCR_NZCVQC:
798
- {
799
- TCGv_i32 fpscr;
800
- tmp = loadfn(s, opaque);
801
- if (dc_isar_feature(aa32_mve, s)) {
802
- /* QC is only present for MVE; otherwise RES0 */
803
- TCGv_i32 qc = tcg_temp_new_i32();
804
- tcg_gen_andi_i32(qc, tmp, FPCR_QC);
805
- /*
806
- * The 4 vfp.qc[] fields need only be "zero" vs "non-zero";
807
- * here writing the same value into all elements is simplest.
808
- */
809
- tcg_gen_gvec_dup_i32(MO_32, offsetof(CPUARMState, vfp.qc),
810
- 16, 16, qc);
811
- }
812
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
813
- fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
814
- tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
815
- tcg_gen_or_i32(fpscr, fpscr, tmp);
816
- store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
817
- tcg_temp_free_i32(tmp);
818
- break;
819
- }
820
- case ARM_VFP_FPCXT_NS:
821
- lab_end = gen_new_label();
822
- /* fpInactive case: write is a NOP, so branch to end */
823
- gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
824
- /*
825
- * !fpInactive: if FPU disabled, take NOCP exception;
826
- * otherwise PreserveFPState(), and then FPCXT_NS writes
827
- * behave the same as FPCXT_S writes.
828
- */
829
- if (s->fp_excp_el) {
830
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
831
- syn_uncategorized(), s->fp_excp_el);
832
- /*
833
- * This was only a conditional exception, so override
834
- * gen_exception_insn()'s default to DISAS_NORETURN
835
- */
836
- s->base.is_jmp = DISAS_NEXT;
837
- break;
838
- }
839
- gen_preserve_fp_state(s);
840
- /* fall through */
841
- case ARM_VFP_FPCXT_S:
842
- {
843
- TCGv_i32 sfpa, control;
844
- /*
845
- * Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
846
- * bits [27:0] from value and zeroes bits [31:28].
847
- */
848
- tmp = loadfn(s, opaque);
849
- sfpa = tcg_temp_new_i32();
850
- tcg_gen_shri_i32(sfpa, tmp, 31);
851
- control = load_cpu_field(v7m.control[M_REG_S]);
852
- tcg_gen_deposit_i32(control, control, sfpa,
853
- R_V7M_CONTROL_SFPA_SHIFT, 1);
854
- store_cpu_field(control, v7m.control[M_REG_S]);
855
- tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
856
- gen_helper_vfp_set_fpscr(cpu_env, tmp);
857
- tcg_temp_free_i32(tmp);
858
- tcg_temp_free_i32(sfpa);
859
- break;
860
- }
861
- case ARM_VFP_VPR:
862
- /* Behaves as NOP if not privileged */
863
- if (IS_USER(s)) {
864
- break;
865
- }
866
- tmp = loadfn(s, opaque);
867
- store_cpu_field(tmp, v7m.vpr);
868
- break;
869
- case ARM_VFP_P0:
870
- {
871
- TCGv_i32 vpr;
872
- tmp = loadfn(s, opaque);
873
- vpr = load_cpu_field(v7m.vpr);
874
- tcg_gen_deposit_i32(vpr, vpr, tmp,
875
- R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
876
- store_cpu_field(vpr, v7m.vpr);
877
- tcg_temp_free_i32(tmp);
878
- break;
879
- }
880
- default:
881
- g_assert_not_reached();
882
- }
883
- if (lab_end) {
884
- gen_set_label(lab_end);
885
- }
886
- return true;
887
-}
888
-
889
-static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
890
- fp_sysreg_storefn *storefn,
891
- void *opaque)
892
-{
893
- /* Do a read from an M-profile floating point system register */
894
- TCGv_i32 tmp;
895
- TCGLabel *lab_end = NULL;
896
- bool lookup_tb = false;
897
-
898
- switch (fp_sysreg_checks(s, regno)) {
899
- case FPSysRegCheckFailed:
900
- return false;
901
- case FPSysRegCheckDone:
902
- return true;
903
- case FPSysRegCheckContinue:
904
- break;
905
- }
906
-
907
- if (regno == ARM_VFP_FPSCR_NZCVQC && !dc_isar_feature(aa32_mve, s)) {
908
- /* QC is RES0 without MVE, so NZCVQC simplifies to NZCV */
909
- regno = QEMU_VFP_FPSCR_NZCV;
910
- }
911
-
912
- switch (regno) {
913
- case ARM_VFP_FPSCR:
914
- tmp = tcg_temp_new_i32();
915
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
916
- storefn(s, opaque, tmp);
917
- break;
918
- case ARM_VFP_FPSCR_NZCVQC:
919
- tmp = tcg_temp_new_i32();
920
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
921
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
922
- storefn(s, opaque, tmp);
923
- break;
924
- case QEMU_VFP_FPSCR_NZCV:
925
- /*
926
- * Read just NZCV; this is a special case to avoid the
927
- * helper call for the "VMRS to CPSR.NZCV" insn.
928
- */
929
- tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
930
- tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
931
- storefn(s, opaque, tmp);
932
- break;
933
- case ARM_VFP_FPCXT_S:
934
- {
935
- TCGv_i32 control, sfpa, fpscr;
936
- /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
937
- tmp = tcg_temp_new_i32();
938
- sfpa = tcg_temp_new_i32();
939
- gen_helper_vfp_get_fpscr(tmp, cpu_env);
940
- tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
941
- control = load_cpu_field(v7m.control[M_REG_S]);
942
- tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
943
- tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
944
- tcg_gen_or_i32(tmp, tmp, sfpa);
945
- tcg_temp_free_i32(sfpa);
946
- /*
947
- * Store result before updating FPSCR etc, in case
948
- * it is a memory write which causes an exception.
949
- */
950
- storefn(s, opaque, tmp);
951
- /*
952
- * Now we must reset FPSCR from FPDSCR_NS, and clear
953
- * CONTROL.SFPA; so we'll end the TB here.
954
- */
955
- tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
956
- store_cpu_field(control, v7m.control[M_REG_S]);
957
- fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
958
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
959
- tcg_temp_free_i32(fpscr);
960
- lookup_tb = true;
961
- break;
962
- }
963
- case ARM_VFP_FPCXT_NS:
964
- {
965
- TCGv_i32 control, sfpa, fpscr, fpdscr, zero;
966
- TCGLabel *lab_active = gen_new_label();
967
-
968
- lookup_tb = true;
969
-
970
- gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
971
- /* fpInactive case: reads as FPDSCR_NS */
972
- TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
973
- storefn(s, opaque, tmp);
974
- lab_end = gen_new_label();
975
- tcg_gen_br(lab_end);
976
-
977
- gen_set_label(lab_active);
978
- /*
979
- * !fpInactive: if FPU disabled, take NOCP exception;
980
- * otherwise PreserveFPState(), and then FPCXT_NS
981
- * reads the same as FPCXT_S.
982
- */
983
- if (s->fp_excp_el) {
984
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
985
- syn_uncategorized(), s->fp_excp_el);
986
- /*
987
- * This was only a conditional exception, so override
988
- * gen_exception_insn()'s default to DISAS_NORETURN
989
- */
990
- s->base.is_jmp = DISAS_NEXT;
991
- break;
992
- }
993
- gen_preserve_fp_state(s);
994
- tmp = tcg_temp_new_i32();
995
- sfpa = tcg_temp_new_i32();
996
- fpscr = tcg_temp_new_i32();
997
- gen_helper_vfp_get_fpscr(fpscr, cpu_env);
998
- tcg_gen_andi_i32(tmp, fpscr, ~FPCR_NZCV_MASK);
999
- control = load_cpu_field(v7m.control[M_REG_S]);
1000
- tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
1001
- tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
1002
- tcg_gen_or_i32(tmp, tmp, sfpa);
1003
- tcg_temp_free_i32(control);
1004
- /* Store result before updating FPSCR, in case it faults */
1005
- storefn(s, opaque, tmp);
1006
- /* If SFPA is zero then set FPSCR from FPDSCR_NS */
1007
- fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
1008
- zero = tcg_const_i32(0);
1009
- tcg_gen_movcond_i32(TCG_COND_EQ, fpscr, sfpa, zero, fpdscr, fpscr);
1010
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
1011
- tcg_temp_free_i32(zero);
1012
- tcg_temp_free_i32(sfpa);
1013
- tcg_temp_free_i32(fpdscr);
1014
- tcg_temp_free_i32(fpscr);
1015
- break;
1016
- }
1017
- case ARM_VFP_VPR:
1018
- /* Behaves as NOP if not privileged */
1019
- if (IS_USER(s)) {
1020
- break;
1021
- }
1022
- tmp = load_cpu_field(v7m.vpr);
1023
- storefn(s, opaque, tmp);
1024
- break;
1025
- case ARM_VFP_P0:
1026
- tmp = load_cpu_field(v7m.vpr);
1027
- tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
1028
- storefn(s, opaque, tmp);
1029
- break;
1030
- default:
1031
- g_assert_not_reached();
1032
- }
1033
-
1034
- if (lab_end) {
1035
- gen_set_label(lab_end);
1036
- }
1037
- if (lookup_tb) {
1038
- gen_lookup_tb(s);
1039
- }
1040
- return true;
1041
-}
1042
-
1043
-static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
1044
-{
1045
- arg_VMSR_VMRS *a = opaque;
1046
-
1047
- if (a->rt == 15) {
1048
- /* Set the 4 flag bits in the CPSR */
1049
- gen_set_nzcv(value);
1050
- tcg_temp_free_i32(value);
1051
- } else {
1052
- store_reg(s, a->rt, value);
1053
- }
1054
-}
1055
-
1056
-static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
1057
-{
1058
- arg_VMSR_VMRS *a = opaque;
1059
-
1060
- return load_reg(s, a->rt);
1061
-}
1062
-
1063
-static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1064
-{
1065
- /*
1066
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
1067
- * FPSCR -> r15 is a special case which writes to the PSR flags;
1068
- * set a->reg to a special value to tell gen_M_fp_sysreg_read()
1069
- * we only care about the top 4 bits of FPSCR there.
1070
- */
1071
- if (a->rt == 15) {
1072
- if (a->l && a->reg == ARM_VFP_FPSCR) {
1073
- a->reg = QEMU_VFP_FPSCR_NZCV;
1074
- } else {
1075
- return false;
1076
- }
1077
- }
1078
-
1079
- if (a->l) {
1080
- /* VMRS, move FP system register to gp register */
1081
- return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
1082
- } else {
1083
- /* VMSR, move gp register to FP system register */
1084
- return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
1085
- }
1086
-}
1087
-
216
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1088
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
217
{
1089
{
218
TCGv_i32 tmp;
1090
TCGv_i32 tmp;
219
bool ignore_vfp_enabled = false;
1091
bool ignore_vfp_enabled = false;
220
1092
221
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
1093
if (arm_dc_feature(s, ARM_FEATURE_M)) {
222
- return false;
1094
- return gen_M_VMSR_VMRS(s, a);
223
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
1095
+ /* M profile version was already handled in m-nocp.decode */
224
+ return gen_M_VMSR_VMRS(s, a);
225
}
226
227
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
228
- /*
229
- * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
230
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
231
- * (FPSCR -> r15 is a special case which writes to the PSR flags.)
232
- */
233
- if (a->reg != ARM_VFP_FPSCR) {
234
- return false;
235
- }
236
- if (a->rt == 15 && !a->l) {
237
- return false;
238
- }
239
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
240
+ return false;
1096
+ return false;
241
}
1097
}
242
1098
243
switch (a->reg) {
1099
if (!dc_isar_feature(aa32_fpsp_v2, s)) {
1100
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
1101
return true;
1102
}
1103
1104
-static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
1105
-{
1106
- arg_vldr_sysreg *a = opaque;
1107
- uint32_t offset = a->imm;
1108
- TCGv_i32 addr;
1109
-
1110
- if (!a->a) {
1111
- offset = -offset;
1112
- }
1113
-
1114
- addr = load_reg(s, a->rn);
1115
- if (a->p) {
1116
- tcg_gen_addi_i32(addr, addr, offset);
1117
- }
1118
-
1119
- if (s->v8m_stackcheck && a->rn == 13 && a->w) {
1120
- gen_helper_v8m_stackcheck(cpu_env, addr);
1121
- }
1122
-
1123
- gen_aa32_st_i32(s, value, addr, get_mem_index(s),
1124
- MO_UL | MO_ALIGN | s->be_data);
1125
- tcg_temp_free_i32(value);
1126
-
1127
- if (a->w) {
1128
- /* writeback */
1129
- if (!a->p) {
1130
- tcg_gen_addi_i32(addr, addr, offset);
1131
- }
1132
- store_reg(s, a->rn, addr);
1133
- } else {
1134
- tcg_temp_free_i32(addr);
1135
- }
1136
-}
1137
-
1138
-static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
1139
-{
1140
- arg_vldr_sysreg *a = opaque;
1141
- uint32_t offset = a->imm;
1142
- TCGv_i32 addr;
1143
- TCGv_i32 value = tcg_temp_new_i32();
1144
-
1145
- if (!a->a) {
1146
- offset = -offset;
1147
- }
1148
-
1149
- addr = load_reg(s, a->rn);
1150
- if (a->p) {
1151
- tcg_gen_addi_i32(addr, addr, offset);
1152
- }
1153
-
1154
- if (s->v8m_stackcheck && a->rn == 13 && a->w) {
1155
- gen_helper_v8m_stackcheck(cpu_env, addr);
1156
- }
1157
-
1158
- gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
1159
- MO_UL | MO_ALIGN | s->be_data);
1160
-
1161
- if (a->w) {
1162
- /* writeback */
1163
- if (!a->p) {
1164
- tcg_gen_addi_i32(addr, addr, offset);
1165
- }
1166
- store_reg(s, a->rn, addr);
1167
- } else {
1168
- tcg_temp_free_i32(addr);
1169
- }
1170
- return value;
1171
-}
1172
-
1173
-static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
1174
-{
1175
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
1176
- return false;
1177
- }
1178
- if (a->rn == 15) {
1179
- return false;
1180
- }
1181
- return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
1182
-}
1183
-
1184
-static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
1185
-{
1186
- if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
1187
- return false;
1188
- }
1189
- if (a->rn == 15) {
1190
- return false;
1191
- }
1192
- return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
1193
-}
1194
1195
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
1196
{
244
--
1197
--
245
2.20.1
1198
2.20.1
246
1199
247
1200
1
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
1
A few subcases of VLDR/VSTR sysreg succeed but do not perform a
2
in the previous commit; use it in a couple of places in existing code,
2
memory access:
3
where we're masking out everything except NZCV for the "load to Rt=15
3
* VSTR of VPR when unprivileged
4
sets CPSR.NZCV" special case.
4
* VLDR to VPR when unprivileged
5
5
* VLDR to FPCXT_NS when fpInactive
6
7
In these cases, even though we don't do the memory access we should
8
still update the base register and perform the stack limit check if
9
the insn's addressing mode specifies writeback. Our implementation
10
failed to do this, because we handle these side-effects inside the
11
memory_to_fp_sysreg() and fp_sysreg_to_memory() callback functions,
12
which are only called if there's something to load or store.
13
14
Fix this by adding an extra argument to the callbacks which is set to
15
true to actually perform the access and false to only do side effects
16
like writeback, and calling the callback with do_access = false
17
for the three cases listed above.
18
19
This produces slightly suboptimal code for the case of a write
20
to FPCXT_NS when the FPU is inactive and the insn didn't have
21
side effects (ie no writeback, or via VMSR), in which case we'll
22
generate a conditional branch over an unconditional branch.
23
But this doesn't seem to be important enough to merit requiring
24
the callback to report back whether it generated any code or not.
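As a minimal standalone sketch of the callback pattern described
above (all names here are invented for illustration; this is not the
QEMU code), do_access gates only the access itself, while addressing
side effects such as base-register writeback happen either way:

    #include <stdbool.h>
    #include <stdio.h>

    static unsigned int base_reg = 0x1000;   /* toy guest register */

    static void store_to_memory(unsigned int value, bool do_access,
                                bool writeback)
    {
        unsigned int addr = base_reg;

        if (do_access) {
            printf("store %u to 0x%x\n", value, addr);
        }
        if (writeback) {
            base_reg = addr + 4;   /* writeback happens even when the
                                    * access itself was skipped */
        }
    }

    int main(void)
    {
        store_to_memory(42, true, true);   /* normal access + writeback */
        store_to_memory(42, false, true);  /* side effects only */
        printf("base register now 0x%x\n", base_reg);
        return 0;
    }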
25
26
Cc: qemu-stable@nongnu.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
29
Message-id: 20210618141019.10671-5-peter.maydell@linaro.org
9
---
30
---
10
target/arm/translate-vfp.c.inc | 4 ++--
31
target/arm/translate-m-nocp.c | 102 ++++++++++++++++++++++++----------
11
1 file changed, 2 insertions(+), 2 deletions(-)
32
1 file changed, 72 insertions(+), 30 deletions(-)
12
33
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
34
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
14
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
36
--- a/target/arm/translate-m-nocp.c
16
+++ b/target/arm/translate-vfp.c.inc
37
+++ b/target/arm/translate-m-nocp.c
17
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
18
* helper call for the "VMRS to CPSR.NZCV" insn.
39
40
/*
41
* Emit code to store the sysreg to its final destination; frees the
42
- * TCG temp 'value' it is passed.
43
+ * TCG temp 'value' it is passed. do_access is true to do the store,
44
+ * and false to skip it and only perform side-effects like base
45
+ * register writeback.
46
*/
47
-typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
48
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value,
49
+ bool do_access);
50
/*
51
* Emit code to load the value to be copied to the sysreg; returns
52
- * a new TCG temporary
53
+ * a new TCG temporary. do_access is true to do the load,
54
+ * and false to skip it and only perform side-effects like base
55
+ * register writeback.
56
*/
57
-typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
58
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque,
59
+ bool do_access);
60
61
/* Common decode/access checks for fp sysreg read/write */
62
typedef enum FPSysRegCheckResult {
63
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
64
65
switch (regno) {
66
case ARM_VFP_FPSCR:
67
- tmp = loadfn(s, opaque);
68
+ tmp = loadfn(s, opaque, true);
69
gen_helper_vfp_set_fpscr(cpu_env, tmp);
70
tcg_temp_free_i32(tmp);
71
gen_lookup_tb(s);
72
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
73
case ARM_VFP_FPSCR_NZCVQC:
74
{
75
TCGv_i32 fpscr;
76
- tmp = loadfn(s, opaque);
77
+ tmp = loadfn(s, opaque, true);
78
if (dc_isar_feature(aa32_mve, s)) {
79
/* QC is only present for MVE; otherwise RES0 */
80
TCGv_i32 qc = tcg_temp_new_i32();
81
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
82
break;
83
}
84
case ARM_VFP_FPCXT_NS:
85
+ {
86
+ TCGLabel *lab_active = gen_new_label();
87
+
88
lab_end = gen_new_label();
89
- /* fpInactive case: write is a NOP, so branch to end */
90
- gen_branch_fpInactive(s, TCG_COND_NE, lab_end);
91
+ gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
92
+ /*
93
+ * fpInactive case: write is a NOP, so only do side effects
94
+ * like register writeback before we branch to end
95
+ */
96
+ loadfn(s, opaque, false);
97
+ tcg_gen_br(lab_end);
98
+
99
+ gen_set_label(lab_active);
100
/*
101
* !fpInactive: if FPU disabled, take NOCP exception;
102
* otherwise PreserveFPState(), and then FPCXT_NS writes
103
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
104
break;
105
}
106
gen_preserve_fp_state(s);
107
- /* fall through */
108
+ }
109
+ /* fall through */
110
case ARM_VFP_FPCXT_S:
111
{
112
TCGv_i32 sfpa, control;
113
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
114
* Set FPSCR and CONTROL.SFPA from value; the new FPSCR takes
115
* bits [27:0] from value and zeroes bits [31:28].
116
*/
117
- tmp = loadfn(s, opaque);
118
+ tmp = loadfn(s, opaque, true);
119
sfpa = tcg_temp_new_i32();
120
tcg_gen_shri_i32(sfpa, tmp, 31);
121
control = load_cpu_field(v7m.control[M_REG_S]);
122
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
123
case ARM_VFP_VPR:
124
/* Behaves as NOP if not privileged */
125
if (IS_USER(s)) {
126
+ loadfn(s, opaque, false);
127
break;
128
}
129
- tmp = loadfn(s, opaque);
130
+ tmp = loadfn(s, opaque, true);
131
store_cpu_field(tmp, v7m.vpr);
132
break;
133
case ARM_VFP_P0:
134
{
135
TCGv_i32 vpr;
136
- tmp = loadfn(s, opaque);
137
+ tmp = loadfn(s, opaque, true);
138
vpr = load_cpu_field(v7m.vpr);
139
tcg_gen_deposit_i32(vpr, vpr, tmp,
140
R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
141
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
142
case ARM_VFP_FPSCR:
143
tmp = tcg_temp_new_i32();
144
gen_helper_vfp_get_fpscr(tmp, cpu_env);
145
- storefn(s, opaque, tmp);
146
+ storefn(s, opaque, tmp, true);
147
break;
148
case ARM_VFP_FPSCR_NZCVQC:
149
tmp = tcg_temp_new_i32();
150
gen_helper_vfp_get_fpscr(tmp, cpu_env);
151
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCVQC_MASK);
152
- storefn(s, opaque, tmp);
153
+ storefn(s, opaque, tmp, true);
154
break;
155
case QEMU_VFP_FPSCR_NZCV:
156
/*
157
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
19
*/
158
*/
20
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
159
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
21
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
160
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
22
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
161
- storefn(s, opaque, tmp);
23
storefn(s, opaque, tmp);
162
+ storefn(s, opaque, tmp, true);
163
break;
164
case ARM_VFP_FPCXT_S:
165
{
166
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
167
* Store result before updating FPSCR etc, in case
168
* it is a memory write which causes an exception.
169
*/
170
- storefn(s, opaque, tmp);
171
+ storefn(s, opaque, tmp, true);
172
/*
173
* Now we must reset FPSCR from FPDSCR_NS, and clear
174
* CONTROL.SFPA; so we'll end the TB here.
175
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
176
gen_branch_fpInactive(s, TCG_COND_EQ, lab_active);
177
/* fpInactive case: reads as FPDSCR_NS */
178
TCGv_i32 tmp = load_cpu_field(v7m.fpdscr[M_REG_NS]);
179
- storefn(s, opaque, tmp);
180
+ storefn(s, opaque, tmp, true);
181
lab_end = gen_new_label();
182
tcg_gen_br(lab_end);
183
184
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
185
tcg_gen_or_i32(tmp, tmp, sfpa);
186
tcg_temp_free_i32(control);
187
/* Store result before updating FPSCR, in case it faults */
188
- storefn(s, opaque, tmp);
189
+ storefn(s, opaque, tmp, true);
190
/* If SFPA is zero then set FPSCR from FPDSCR_NS */
191
fpdscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
192
zero = tcg_const_i32(0);
193
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
194
case ARM_VFP_VPR:
195
/* Behaves as NOP if not privileged */
196
if (IS_USER(s)) {
197
+ storefn(s, opaque, NULL, false);
198
break;
199
}
200
tmp = load_cpu_field(v7m.vpr);
201
- storefn(s, opaque, tmp);
202
+ storefn(s, opaque, tmp, true);
203
break;
204
case ARM_VFP_P0:
205
tmp = load_cpu_field(v7m.vpr);
206
tcg_gen_extract_i32(tmp, tmp, R_V7M_VPR_P0_SHIFT, R_V7M_VPR_P0_LENGTH);
207
- storefn(s, opaque, tmp);
208
+ storefn(s, opaque, tmp, true);
24
break;
209
break;
25
default:
210
default:
211
g_assert_not_reached();
212
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
213
return true;
214
}
215
216
-static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
217
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value,
218
+ bool do_access)
219
{
220
arg_VMSR_VMRS *a = opaque;
221
222
+ if (!do_access) {
223
+ return;
224
+ }
225
+
226
if (a->rt == 15) {
227
/* Set the 4 flag bits in the CPSR */
228
gen_set_nzcv(value);
229
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
230
}
231
}
232
233
-static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
234
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque, bool do_access)
235
{
236
arg_VMSR_VMRS *a = opaque;
237
238
+ if (!do_access) {
239
+ return NULL;
240
+ }
241
return load_reg(s, a->rt);
242
}
243
26
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
244
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
27
case ARM_VFP_FPSCR:
245
}
28
if (a->rt == 15) {
246
}
29
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
247
30
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
248
-static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
31
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
249
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value,
32
} else {
250
+ bool do_access)
33
tmp = tcg_temp_new_i32();
251
{
34
gen_helper_vfp_get_fpscr(tmp, cpu_env);
252
arg_vldr_sysreg *a = opaque;
253
uint32_t offset = a->imm;
254
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
255
offset = -offset;
256
}
257
258
+ if (!do_access && !a->w) {
259
+ return;
260
+ }
261
+
262
addr = load_reg(s, a->rn);
263
if (a->p) {
264
tcg_gen_addi_i32(addr, addr, offset);
265
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
266
gen_helper_v8m_stackcheck(cpu_env, addr);
267
}
268
269
- gen_aa32_st_i32(s, value, addr, get_mem_index(s),
270
- MO_UL | MO_ALIGN | s->be_data);
271
- tcg_temp_free_i32(value);
272
+ if (do_access) {
273
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
274
+ MO_UL | MO_ALIGN | s->be_data);
275
+ tcg_temp_free_i32(value);
276
+ }
277
278
if (a->w) {
279
/* writeback */
280
@@ -XXX,XX +XXX,XX @@ static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
281
}
282
}
283
284
-static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
285
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque,
286
+ bool do_access)
287
{
288
arg_vldr_sysreg *a = opaque;
289
uint32_t offset = a->imm;
290
TCGv_i32 addr;
291
- TCGv_i32 value = tcg_temp_new_i32();
292
+ TCGv_i32 value = NULL;
293
294
if (!a->a) {
295
offset = -offset;
296
}
297
298
+ if (!do_access && !a->w) {
299
+ return NULL;
300
+ }
301
+
302
addr = load_reg(s, a->rn);
303
if (a->p) {
304
tcg_gen_addi_i32(addr, addr, offset);
305
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
306
gen_helper_v8m_stackcheck(cpu_env, addr);
307
}
308
309
- gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
310
- MO_UL | MO_ALIGN | s->be_data);
311
+ if (do_access) {
312
+ value = tcg_temp_new_i32();
313
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
314
+ MO_UL | MO_ALIGN | s->be_data);
315
+ }
316
317
if (a->w) {
318
/* writeback */
35
--
319
--
36
2.20.1
320
2.20.1
37
321
38
322
1
Factor out the code which handles M-profile lazy FP state preservation
1
Factor the code in full_vfp_access_check() which updates the
2
from full_vfp_access_check(); accesses to the FPCXT_NS register are
2
ownership of the FP context and creates a new FP context
3
a special case which needs to do just this part (corresponding in the
3
out into its own function.
4
pseudocode to the PreserveFPState() function), and not the full
5
set of actions matching the pseudocode ExecuteFPCheck() which
6
normal FP instructions need to do.
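To make the split concrete, here is a rough standalone outline (the
pseudocode names come from the commit message above; UpdateFPContext
is an invented stand-in for "the rest of ExecuteFPCheck", and the
bodies are placeholders, not the real operations):

    #include <stdio.h>

    /* Lazy FP state preservation: the only part FPCXT_NS needs */
    static void PreserveFPState(void)
    {
        puts("preserve lazy FP state");
    }

    /* Invented stand-in for the remaining ExecuteFPCheck() actions */
    static void UpdateFPContext(void)
    {
        puts("update FP context ownership, create new context");
    }

    /* What a normal FP instruction's check amounts to */
    static void ExecuteFPCheck(void)
    {
        PreserveFPState();
        UpdateFPContext();
    }

    int main(void)
    {
        ExecuteFPCheck();    /* ordinary FP insn */
        PreserveFPState();   /* FPCXT_NS access: just this part */
        return 0;
    }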
7
4
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20210618141019.10671-6-peter.maydell@linaro.org
11
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
12
---
8
---
13
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
9
target/arm/translate-vfp.c | 104 +++++++++++++++++++++----------------
14
1 file changed, 27 insertions(+), 18 deletions(-)
10
1 file changed, 58 insertions(+), 46 deletions(-)
15
11
16
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
12
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-vfp.c.inc
14
--- a/target/arm/translate-vfp.c
19
+++ b/target/arm/translate-vfp.c.inc
15
+++ b/target/arm/translate-vfp.c
20
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
16
@@ -XXX,XX +XXX,XX @@ void gen_preserve_fp_state(DisasContext *s)
21
return offs;
17
}
22
}
18
}
23
19
24
+/*
20
+/*
25
+ * Generate code for M-profile lazy FP state preservation if needed;
21
+ * Generate code for M-profile FP context handling: update the
26
+ * this corresponds to the pseudocode PreserveFPState() function.
22
+ * ownership of the FP context, and create a new context if
23
+ * necessary. This corresponds to the parts of the pseudocode
24
+ * ExecuteFPCheck() after the inital PreserveFPState() call.
27
+ */
25
+ */
28
+static void gen_preserve_fp_state(DisasContext *s)
26
+static void gen_update_fp_context(DisasContext *s)
29
+{
27
+{
30
+ if (s->v7m_lspact) {
28
+ /* Update ownership of FP context: set FPCCR.S to match current state */
29
+ if (s->v8m_fpccr_s_wrong) {
30
+ TCGv_i32 tmp;
31
+
32
+ tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
33
+ if (s->v8m_secure) {
34
+ tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
35
+ } else {
36
+ tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
37
+ }
38
+ store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
39
+ /* Don't need to do this for any further FP insns in this TB */
40
+ s->v8m_fpccr_s_wrong = false;
41
+ }
42
+
43
+ if (s->v7m_new_fp_ctxt_needed) {
31
+ /*
44
+ /*
32
+ * Lazy state saving affects external memory and also the NVIC,
45
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
33
+ * so we must mark it as an IO operation for icount (and cause
46
+ * the FPSCR, and VPR.
34
+ * this to be the last insn in the TB).
35
+ */
47
+ */
36
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
48
+ TCGv_i32 control, fpscr;
37
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
49
+ uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
38
+ gen_io_start();
50
+
51
+ fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
52
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
53
+ tcg_temp_free_i32(fpscr);
54
+ if (dc_isar_feature(aa32_mve, s)) {
55
+ TCGv_i32 z32 = tcg_const_i32(0);
56
+ store_cpu_field(z32, v7m.vpr);
39
+ }
57
+ }
40
+ gen_helper_v7m_preserve_fp_state(cpu_env);
58
+
41
+ /*
59
+ /*
42
+ * If the preserve_fp_state helper doesn't throw an exception
60
+ * We don't need to arrange to end the TB, because the only
43
+ * then it will clear LSPACT; we don't need to repeat this for
61
+ * parts of FPSCR which we cache in the TB flags are the VECLEN
44
+ * any further FP insns in this TB.
62
+ * and VECSTRIDE, and those don't exist for M-profile.
45
+ */
63
+ */
46
+ s->v7m_lspact = false;
64
+
65
+ if (s->v8m_secure) {
66
+ bits |= R_V7M_CONTROL_SFPA_MASK;
67
+ }
68
+ control = load_cpu_field(v7m.control[M_REG_S]);
69
+ tcg_gen_ori_i32(control, control, bits);
70
+ store_cpu_field(control, v7m.control[M_REG_S]);
71
+ /* Don't need to do this for any further FP insns in this TB */
72
+ s->v7m_new_fp_ctxt_needed = false;
47
+ }
73
+ }
48
+}
74
+}
49
+
75
+
50
/*
76
/*
51
* Check that VFP access is enabled. If it is, do the necessary
77
* Check that VFP access is enabled. If it is, do the necessary
52
* M-profile lazy-FP handling and then return true.
78
* M-profile lazy-FP handling and then return true.
53
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
79
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
54
/* Handle M-profile lazy FP state mechanics */
55
56
/* Trigger lazy-state preservation if necessary */
80
/* Trigger lazy-state preservation if necessary */
57
- if (s->v7m_lspact) {
81
gen_preserve_fp_state(s);
82
83
- /* Update ownership of FP context: set FPCCR.S to match current state */
84
- if (s->v8m_fpccr_s_wrong) {
85
- TCGv_i32 tmp;
86
-
87
- tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
88
- if (s->v8m_secure) {
89
- tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
90
- } else {
91
- tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
92
- }
93
- store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
94
- /* Don't need to do this for any further FP insns in this TB */
95
- s->v8m_fpccr_s_wrong = false;
96
- }
97
-
98
- if (s->v7m_new_fp_ctxt_needed) {
58
- /*
99
- /*
59
- * Lazy state saving affects external memory and also the NVIC,
100
- * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA,
60
- * so we must mark it as an IO operation for icount (and cause
101
- * the FPSCR, and VPR.
61
- * this to be the last insn in the TB).
62
- */
102
- */
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
103
- TCGv_i32 control, fpscr;
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
104
- uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
65
- gen_io_start();
105
-
106
- fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
107
- gen_helper_vfp_set_fpscr(cpu_env, fpscr);
108
- tcg_temp_free_i32(fpscr);
109
- if (dc_isar_feature(aa32_mve, s)) {
110
- TCGv_i32 z32 = tcg_const_i32(0);
111
- store_cpu_field(z32, v7m.vpr);
66
- }
112
- }
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
113
-
68
- /*
114
- /*
69
- * If the preserve_fp_state helper doesn't throw an exception
115
- * We don't need to arrange to end the TB, because the only
70
- * then it will clear LSPACT; we don't need to repeat this for
116
- * parts of FPSCR which we cache in the TB flags are the VECLEN
71
- * any further FP insns in this TB.
117
- * and VECSTRIDE, and those don't exist for M-profile.
72
- */
118
- */
73
- s->v7m_lspact = false;
119
-
120
- if (s->v8m_secure) {
121
- bits |= R_V7M_CONTROL_SFPA_MASK;
122
- }
123
- control = load_cpu_field(v7m.control[M_REG_S]);
124
- tcg_gen_ori_i32(control, control, bits);
125
- store_cpu_field(control, v7m.control[M_REG_S]);
126
- /* Don't need to do this for any further FP insns in this TB */
127
- s->v7m_new_fp_ctxt_needed = false;
74
- }
128
- }
75
+ gen_preserve_fp_state(s);
129
+ /* Update ownership of FP context and create new FP context if needed */
76
130
+ gen_update_fp_context(s);
77
/* Update ownership of FP context: set FPCCR.S to match current state */
131
}
78
if (s->v8m_fpccr_s_wrong) {
132
133
return true;
79
--
134
--
80
2.20.1
135
2.20.1
81
136
82
137
1
From: Alex Chen <alex.chen@huawei.com>
1
vfp_access_check and its helper routine full_vfp_access_check() have
2
gradually grown and are now an awkward mix of A-profile only and
3
M-profile only pieces. Refactor them into an A-profile only and an
4
M-profile only version, taking advantage of the fact that now the
5
only direct call to full_vfp_access_check() is in A-profile-only
6
code.
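Reduced to a runnable skeleton (simplified stand-ins, not the real
DisasContext or the real checks), the resulting shape is a thin
dispatcher over two profile-specific checkers:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { bool is_m_profile; } Ctx;  /* DisasContext stand-in */

    static bool access_check_a(Ctx *s, bool ignore_vfp_enabled)
    {
        if (ignore_vfp_enabled) {
            puts("(ignoring FPEXC.EN)");
        }
        puts("A-profile checks: UDEF trap if FP access disabled");
        return true;
    }

    static bool access_check_m(Ctx *s)
    {
        puts("M-profile checks: NOCP, lazy preservation, context update");
        return true;
    }

    static bool access_check(Ctx *s)
    {
        return s->is_m_profile ? access_check_m(s)
                               : access_check_a(s, false);
    }

    int main(void)
    {
        Ctx a = { false }, m = { true };
        access_check(&a);
        access_check(&m);
        return 0;
    }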
2
7
3
We should use the printf format specifier "%u" instead of "%d" for
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
arguments of type "unsigned int".
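A quick standalone illustration of the bug class (not part of the
patch): once the value exceeds INT_MAX, "%d" misinterprets the bit
pattern, while "%u" prints it correctly:

    #include <stdio.h>

    int main(void)
    {
        unsigned int freq = 3000000000u;  /* e.g. 3 GHz, above INT_MAX */

        printf("freq = %d\n", freq);      /* wrong: typically -1294967296 */
        printf("freq = %u\n", freq);      /* right: 3000000000 */
        return 0;
    }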
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210618141019.10671-7-peter.maydell@linaro.org
11
---
12
target/arm/translate-vfp.c | 79 +++++++++++++++++++++++---------------
13
1 file changed, 48 insertions(+), 31 deletions(-)
5
14
6
Reported-by: Euler Robot <euler.robot@huawei.com>
15
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/misc/imx31_ccm.c | 14 +++++++-------
13
hw/misc/imx_ccm.c | 4 ++--
14
2 files changed, 9 insertions(+), 9 deletions(-)
15
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static void gen_update_fp_context(DisasContext *s)
}

/*
- * Check that VFP access is enabled. If it is, do the necessary
- * M-profile lazy-FP handling and then return true.
- * If not, emit code to generate an appropriate exception and
- * return false.
+ * Check that VFP access is enabled, A-profile specific version.
+ *
+ * If VFP is enabled, return true. If not, emit code to generate an
+ * appropriate exception and return false.
* The ignore_vfp_enabled argument specifies that we should ignore
- * whether VFP is enabled via FPEXC[EN]: this should be true for FMXR/FMRX
+ * whether VFP is enabled via FPEXC.EN: this should be true for FMXR/FMRX
* accesses to FPSID, FPEXC, MVFR0, MVFR1, MVFR2, and false for all other insns.
*/
-static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
+static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
{
if (s->fp_excp_el) {
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
- /*
- * M-profile mostly catches the "FPU disabled" case early, in
- * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
- * which do coprocessor-checks are outside the large ranges of
- * the encoding space handled by the patterns in m-nocp.decode,
- * and for them we may need to raise NOCP here.
- */
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
- syn_uncategorized(), s->fp_excp_el);
- } else {
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
- syn_fp_access_trap(1, 0xe, false),
- s->fp_excp_el);
- }
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
unallocated_encoding(s);
return false;
}
+ return true;
+}

- if (arm_dc_feature(s, ARM_FEATURE_M)) {
- /* Handle M-profile lazy FP state mechanics */
-
- /* Trigger lazy-state preservation if necessary */
- gen_preserve_fp_state(s);
-
- /* Update ownership of FP context and create new FP context if needed */
- gen_update_fp_context(s);
+/*
+ * Check that VFP access is enabled, M-profile specific version.
+ *
+ * If VFP is enabled, do the necessary M-profile lazy-FP handling and then
+ * return true. If not, emit code to generate an appropriate exception and
+ * return false.
+ */
+static bool vfp_access_check_m(DisasContext *s)
+{
+ if (s->fp_excp_el) {
+ /*
+ * M-profile mostly catches the "FPU disabled" case early, in
+ * disas_m_nocp(), but a few insns (eg LCTP, WLSTP, DLSTP)
+ * which do coprocessor-checks are outside the large ranges of
+ * the encoding space handled by the patterns in m-nocp.decode,
+ * and for them we may need to raise NOCP here.
+ */
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
+ syn_uncategorized(), s->fp_excp_el);
+ return false;
}

+ /* Handle M-profile lazy FP state mechanics */
+
+ /* Trigger lazy-state preservation if necessary */
+ gen_preserve_fp_state(s);
+
+ /* Update ownership of FP context and create new FP context if needed */
+ gen_update_fp_context(s);
+
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
*/
bool vfp_access_check(DisasContext *s)
{
- return full_vfp_access_check(s, false);
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
+ return vfp_access_check_m(s);
+ } else {
+ return vfp_access_check_a(s, false);
+ }
}

static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
return false;
}

- if (!full_vfp_access_check(s, ignore_vfp_enabled)) {
+ /*
+ * Call vfp_access_check_a() directly, because we need to tell
+ * it to ignore FPEXC.EN for some register accesses.
+ */
+ if (!vfp_access_check_a(s, ignore_vfp_enabled)) {
return true;
}
--
2.20.1

Instead of open-coding the "take NOCP exception if FPU disabled,
otherwise call gen_preserve_fp_state()" code in the accessors for
FPCXT_NS, add an argument to vfp_access_check_m() which tells it to
skip the gen_update_fp_context() call, so we can use it for the
FPCXT_NS case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210618141019.10671-8-peter.maydell@linaro.org
---
target/arm/translate-a32.h | 2 +-
target/arm/translate-m-nocp.c | 10 ++--------
target/arm/translate-vfp.c | 13 ++++++++-----
3 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ bool disas_neon_shared(DisasContext *s, uint32_t insn);
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg);
void arm_gen_condlabel(DisasContext *s);
bool vfp_access_check(DisasContext *s);
-void gen_preserve_fp_state(DisasContext *s);
+bool vfp_access_check_m(DisasContext *s, bool skip_context_update);
void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop);
void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop);
void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop);
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
* otherwise PreserveFPState(), and then FPCXT_NS writes
* behave the same as FPCXT_S writes.
*/
- if (s->fp_excp_el) {
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
- syn_uncategorized(), s->fp_excp_el);
+ if (!vfp_access_check_m(s, true)) {
/*
* This was only a conditional exception, so override
* gen_exception_insn()'s default to DISAS_NORETURN
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
s->base.is_jmp = DISAS_NEXT;
break;
}
- gen_preserve_fp_state(s);
/* fall through */
case ARM_VFP_FPCXT_S:
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
* otherwise PreserveFPState(), and then FPCXT_NS
* reads the same as FPCXT_S.
*/
- if (s->fp_excp_el) {
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
- syn_uncategorized(), s->fp_excp_el);
+ if (!vfp_access_check_m(s, true)) {
/*
* This was only a conditional exception, so override
* gen_exception_insn()'s default to DISAS_NORETURN
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
s->base.is_jmp = DISAS_NEXT;
break;
}
- gen_preserve_fp_state(s);
tmp = tcg_temp_new_i32();
sfpa = tcg_temp_new_i32();
fpscr = tcg_temp_new_i32();
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
* Generate code for M-profile lazy FP state preservation if needed;
* this corresponds to the pseudocode PreserveFPState() function.
*/
-void gen_preserve_fp_state(DisasContext *s)
+static void gen_preserve_fp_state(DisasContext *s)
{
if (s->v7m_lspact) {
/*
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
* If VFP is enabled, do the necessary M-profile lazy-FP handling and then
* return true. If not, emit code to generate an appropriate exception and
* return false.
+ * skip_context_update is true to skip the "update FP context" part of this.
*/
-static bool vfp_access_check_m(DisasContext *s)
+bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
{
if (s->fp_excp_el) {
/*
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_m(DisasContext *s)
/* Trigger lazy-state preservation if necessary */
gen_preserve_fp_state(s);

- /* Update ownership of FP context and create new FP context if needed */
- gen_update_fp_context(s);
+ if (!skip_context_update) {
+ /* Update ownership of FP context and create new FP context if needed */
+ gen_update_fp_context(s);
+ }

return true;
}
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_m(DisasContext *s)
bool vfp_access_check(DisasContext *s)
{
if (arm_dc_feature(s, ARM_FEATURE_M)) {
- return vfp_access_check_m(s);
+ return vfp_access_check_m(s, false);
} else {
return vfp_access_check_a(s, false);
}
}
--
2.20.1

Implement the forms of the MVE VLDR and VSTR insns which perform
non-widening loads of bytes, halfwords or words from memory into
vector elements of the same width (encodings T5, T6, T7).

(At the moment we know for MVE and M-profile in general that
vfp_access_check() can never return false, but we include the
conventional return-true-on-failure check for consistency
with non-M-profile translation code.)
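
To make the predication behaviour concrete, here is a standalone C
sketch (purely illustrative, not part of the change; the names are
invented): for 32-bit elements, lane e of a load is gated by bit
(4 * e) of the 16-bit VPR.P0-style mask, and inactive lanes keep
their previous contents, mirroring the DO_VLDR helper below.

    #include <stdint.h>

    /* Illustrative only: 4-lane, 32-bit-element predicated load */
    static void sketch_vldrw(uint32_t d[4], const uint32_t *mem,
                             uint16_t mask)
    {
        for (unsigned e = 0; e < 4; e++) {
            if (mask & (1u << (e * 4))) {
                d[e] = mem[e];   /* active lane: load from memory */
            }                    /* inactive lane: left unchanged */
        }
    }
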
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-2-peter.maydell@linaro.org
---
target/arm/{translate-mve.c => helper-mve.h} | 19 +-
target/arm/helper.h | 2 +
target/arm/internals.h | 11 ++
target/arm/mve.decode | 22 +++
target/arm/mve_helper.c | 172 +++++++++++++++++++
target/arm/translate-mve.c | 119 +++++++++++++
target/arm/meson.build | 1 +
7 files changed, 334 insertions(+), 12 deletions(-)
copy target/arm/{translate-mve.c => helper-mve.h} (61%)
create mode 100644 target/arm/mve_helper.c

diff --git a/target/arm/translate-mve.c b/target/arm/helper-mve.h
similarity index 61%
copy from target/arm/translate-mve.c
copy to target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@
/*
- * ARM translation: M-profile MVE instructions
+ * M-profile MVE specific helper definitions
*
* Copyright (c) 2021 Linaro, Ltd.
*
@@ -XXX,XX +XXX,XX @@
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
-
-#include "qemu/osdep.h"
-#include "tcg/tcg-op.h"
-#include "tcg/tcg-op-gvec.h"
-#include "exec/exec-all.h"
-#include "exec/gen-icount.h"
-#include "translate.h"
-#include "translate-a32.h"
-
-/* Include the generated decoder */
-#include "decode-mve.c.inc"
+DEF_HELPER_FLAGS_3(mve_vldrb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrw, TCG_CALL_NO_WG, void, env, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
#include "helper-a64.h"
#include "helper-sve.h"
#endif
+
+#include "helper-mve.h"
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t useronly_maybe_clean_ptr(uint32_t desc, uint64_t ptr)
return ptr;
}

+/* Values for M-profile PSR.ECI for MVE insns */
+enum MVEECIState {
+ ECI_NONE = 0, /* No completed beats */
+ ECI_A0 = 1, /* Completed: A0 */
+ ECI_A0A1 = 2, /* Completed: A0, A1 */
+ /* 3 is reserved */
+ ECI_A0A1A2 = 4, /* Completed: A0, A1, A2 */
+ ECI_A0A1A2B0 = 5, /* Completed: A0, A1, A2, B0 */
+ /* All other values reserved */
+};
+
#endif
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
#
# This file is processed by scripts/decodetree.py
#
+
+%qd 22:1 13:3
+
+&vldr_vstr rn qd imm p a w size l
+
+@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd
+
+# Vector loads and stores
+
+# Non-widening loads/stores (P=0 W=0 is 'related encoding')
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111100 ....... @vldr_vstr \
+ size=0 p=0 w=1
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111101 ....... @vldr_vstr \
+ size=1 p=0 w=1
+VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111110 ....... @vldr_vstr \
+ size=2 p=0 w=1
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111100 ....... @vldr_vstr \
+ size=0 p=1
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
+ size=1 p=1
+VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
+ size=2 p=1
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * M-profile MVE Operations
+ *
+ * Copyright (c) 2021 Linaro, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "vec_internal.h"
+#include "exec/helper-proto.h"
+#include "exec/cpu_ldst.h"
+#include "exec/exec-all.h"
+
+static uint16_t mve_element_mask(CPUARMState *env)
+{
+ /*
+ * Return the mask of which elements in the MVE vector should be
+ * updated. This is a combination of multiple things:
+ * (1) by default, we update every lane in the vector
+ * (2) VPT predication stores its state in the VPR register;
+ * (3) low-overhead-branch tail predication will mask out part
+ * of the vector on the final iteration of the loop
+ * (4) if EPSR.ECI is set then we must execute only some beats
+ * of the insn
+ * We combine all these into a 16-bit result with the same semantics
+ * as VPR.P0: 0 to mask the lane, 1 if it is active.
+ * 8-bit vector ops will look at all bits of the result;
+ * 16-bit ops will look at bits 0, 2, 4, ...;
+ * 32-bit ops will look at bits 0, 4, 8 and 12.
+ * Compare pseudocode GetCurInstrBeat(), though that only returns
+ * the 4-bit slice of the mask corresponding to a single beat.
+ */
+ uint16_t mask = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);
+
+ if (!(env->v7m.vpr & R_V7M_VPR_MASK01_MASK)) {
+ mask |= 0xff;
+ }
+ if (!(env->v7m.vpr & R_V7M_VPR_MASK23_MASK)) {
+ mask |= 0xff00;
+ }
+
+ if (env->v7m.ltpsize < 4 &&
+ env->regs[14] <= (1 << (4 - env->v7m.ltpsize))) {
+ /*
+ * Tail predication active, and this is the last loop iteration.
+ * The element size is (1 << ltpsize), and we only want to process
+ * loopcount elements, so we want to retain the least significant
+ * (loopcount * esize) predicate bits and zero out bits above that.
+ */
+ int masklen = env->regs[14] << env->v7m.ltpsize;
+ assert(masklen <= 16);
+ mask &= MAKE_64BIT_MASK(0, masklen);
+ }
+
+ if ((env->condexec_bits & 0xf) == 0) {
+ /*
+ * ECI bits indicate which beats are already executed;
+ * we handle this by effectively predicating them out.
+ */
+ int eci = env->condexec_bits >> 4;
+ switch (eci) {
+ case ECI_NONE:
+ break;
+ case ECI_A0:
+ mask &= 0xfff0;
+ break;
+ case ECI_A0A1:
+ mask &= 0xff00;
+ break;
+ case ECI_A0A1A2:
+ case ECI_A0A1A2B0:
+ mask &= 0xf000;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ }
+
+ return mask;
+}
+
+static void mve_advance_vpt(CPUARMState *env)
+{
+ /* Advance the VPT and ECI state if necessary */
+ uint32_t vpr = env->v7m.vpr;
+ unsigned mask01, mask23;
+
+ if ((env->condexec_bits & 0xf) == 0) {
+ env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ?
+ (ECI_A0 << 4) : (ECI_NONE << 4);
+ }
+
+ if (!(vpr & (R_V7M_VPR_MASK01_MASK | R_V7M_VPR_MASK23_MASK))) {
+ /* VPT not enabled, nothing to do */
+ return;
+ }
+
+ mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01);
+ mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23);
+ if (mask01 > 8) {
+ /* high bit set, but not 0b1000: invert the relevant half of P0 */
+ vpr ^= 0xff;
+ }
+ if (mask23 > 8) {
+ /* high bit set, but not 0b1000: invert the relevant half of P0 */
+ vpr ^= 0xff00;
+ }
+ vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+ vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1);
+ env->v7m.vpr = vpr;
+}
+
+
+#define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE) \
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
+ { \
+ TYPE *d = vd; \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned b, e; \
+ /* \
+ * R_SXTM allows the dest reg to become UNKNOWN for abandoned \
+ * beats so we don't care if we update part of the dest and \
+ * then take an exception. \
+ */ \
+ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
+ if (mask & (1 << b)) { \
+ d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \
+ } \
+ addr += MSIZE; \
+ } \
+ mve_advance_vpt(env); \
+ }
+
+#define DO_VSTR(OP, MSIZE, STTYPE, ESIZE, TYPE) \
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
+ { \
+ TYPE *d = vd; \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned b, e; \
+ for (b = 0, e = 0; b < 16; b += ESIZE, e++) { \
+ if (mask & (1 << b)) { \
+ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
+ } \
+ addr += MSIZE; \
+ } \
+ mve_advance_vpt(env); \
+ }
+
+DO_VLDR(vldrb, 1, ldub, 1, uint8_t)
+DO_VLDR(vldrh, 2, lduw, 2, uint16_t)
+DO_VLDR(vldrw, 4, ldl, 4, uint32_t)
+
+DO_VSTR(vstrb, 1, stb, 1, uint8_t)
+DO_VSTR(vstrh, 2, stw, 2, uint16_t)
+DO_VSTR(vstrw, 4, stl, 4, uint32_t)
+
+#undef DO_VLDR
+#undef DO_VSTR
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@

/* Include the generated decoder */
#include "decode-mve.c.inc"
+
+typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+
+/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
+static inline long mve_qreg_offset(unsigned reg)
+{
+ return offsetof(CPUARMState, vfp.zregs[reg].d[0]);
+}
+
+static TCGv_ptr mve_qreg_ptr(unsigned reg)
+{
+ TCGv_ptr ret = tcg_temp_new_ptr();
+ tcg_gen_addi_ptr(ret, cpu_env, mve_qreg_offset(reg));
+ return ret;
+}
+
+static bool mve_check_qreg_bank(DisasContext *s, int qmask)
+{
+ /*
+ * Check whether Qregs are in range. For v8.1M only Q0..Q7
+ * are supported, see VFPSmallRegisterBank().
+ */
+ return qmask < 8;
+}
+
+static bool mve_eci_check(DisasContext *s)
+{
+ /*
+ * This is a beatwise insn: check that ECI is valid (not a
+ * reserved value) and note that we are handling it.
+ * Return true if OK, false if we generated an exception.
+ */
+ s->eci_handled = true;
+ switch (s->eci) {
+ case ECI_NONE:
+ case ECI_A0:
+ case ECI_A0A1:
+ case ECI_A0A1A2:
+ case ECI_A0A1A2B0:
+ return true;
+ default:
+ /* Reserved value: INVSTATE UsageFault */
+ gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized(),
+ default_exception_el(s));
+ return false;
+ }
+}
+
+static void mve_update_eci(DisasContext *s)
+{
+ /*
+ * The helper function will always update the CPUState field,
+ * so we only need to update the DisasContext field.
+ */
+ if (s->eci) {
+ s->eci = (s->eci == ECI_A0A1A2B0) ? ECI_A0 : ECI_NONE;
+ }
+}
+
+static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
+{
+ TCGv_i32 addr;
+ uint32_t offset;
+ TCGv_ptr qreg;
+
+ if (!dc_isar_feature(aa32_mve, s) ||
+ !mve_check_qreg_bank(s, a->qd) ||
+ !fn) {
+ return false;
+ }
+
+ /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
+ if (a->rn == 15 || (a->rn == 13 && a->w)) {
+ return false;
+ }
+
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ offset = a->imm << a->size;
+ if (!a->a) {
+ offset = -offset;
+ }
+ addr = load_reg(s, a->rn);
+ if (a->p) {
+ tcg_gen_addi_i32(addr, addr, offset);
+ }
+
+ qreg = mve_qreg_ptr(a->qd);
+ fn(cpu_env, qreg, addr);
+ tcg_temp_free_ptr(qreg);
+
+ /*
+ * Writeback always happens after the last beat of the insn,
+ * regardless of predication
+ */
+ if (a->w) {
+ if (!a->p) {
+ tcg_gen_addi_i32(addr, addr, offset);
+ }
+ store_reg(s, a->rn, addr);
+ } else {
+ tcg_temp_free_i32(addr);
+ }
+ mve_update_eci(s);
+ return true;
+}
+
+static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
+{
+ static MVEGenLdStFn * const ldstfns[4][2] = {
+ { gen_helper_mve_vstrb, gen_helper_mve_vldrb },
+ { gen_helper_mve_vstrh, gen_helper_mve_vldrh },
+ { gen_helper_mve_vstrw, gen_helper_mve_vldrw },
+ { NULL, NULL }
+ };
+ return do_ldst(s, a, ldstfns[a->size][a->l]);
+}
diff --git a/target/arm/meson.build b/target/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
'helper.c',
'iwmmxt_helper.c',
'm_helper.c',
+ 'mve_helper.c',
'neon_helper.c',
'op_helper.c',
'tlb_helper.c',
--
2.20.1

Implement the variants of MVE VLDR (encodings T1, T2) which perform
"widening" loads where bytes or halfwords are loaded from memory and
zero or sign-extended into halfword or word length vector elements,
and the narrowing MVE VSTR (encodings T1, T2) where bytes or
halfwords are stored from halfword or word elements.
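
As an illustration (standalone C sketch, not part of the change; the
name is invented), a VLDRB.S16-style widening load reads one byte per
halfword lane and sign-extends it, with lane e gated by mask bit
(2 * e):

    #include <stdint.h>

    /* Illustrative only: byte in memory -> signed 16-bit lane */
    static void sketch_vldrb_sh(int16_t d[8], const int8_t *mem,
                                uint16_t mask)
    {
        for (unsigned e = 0; e < 8; e++) {
            if (mask & (1u << (e * 2))) {
                d[e] = mem[e];   /* implicit sign extension to 16 bits */
            }
        }
    }
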
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-3-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 10 ++++++++++
target/arm/mve.decode | 25 +++++++++++++++++++++++--
target/arm/mve_helper.c | 11 +++++++++++
target/arm/translate-mve.c | 14 ++++++++++++++
4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vldrw, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrb, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrh, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vldrb_sh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrb_sw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrb_uh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrb_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrh_sw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

%qd 22:1 13:3

-&vldr_vstr rn qd imm p a w size l
+&vldr_vstr rn qd imm p a w size l u

-@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd
+@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
+# Note that both Rn and Qd are 3 bits only (no D bit)
+@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr

# Vector loads and stores

+# Widening loads and narrowing stores:
+# for these P=0 W=0 is 'related encoding'; sz=11 is 'related encoding'
+# This means we need to expand out to multiple patterns for P, W, SZ.
+# For stores the U bit must be 0 but we catch that in the trans_ function.
+# The naming scheme here is "VLDSTB_H == in-memory byte load/store to/from
+# signed halfword element in register", etc.
+VLDSTB_H 111 . 110 0 a:1 0 1 . 0 ... ... 0 111 01 ....... @vldst_wn \
+ p=0 w=1 size=1
+VLDSTB_H 111 . 110 1 a:1 0 w:1 . 0 ... ... 0 111 01 ....... @vldst_wn \
+ p=1 size=1
+VLDSTB_W 111 . 110 0 a:1 0 1 . 0 ... ... 0 111 10 ....... @vldst_wn \
+ p=0 w=1 size=2
+VLDSTB_W 111 . 110 1 a:1 0 w:1 . 0 ... ... 0 111 10 ....... @vldst_wn \
+ p=1 size=2
+VLDSTH_W 111 . 110 0 a:1 0 1 . 1 ... ... 0 111 10 ....... @vldst_wn \
+ p=0 w=1 size=2
+VLDSTH_W 111 . 110 1 a:1 0 w:1 . 1 ... ... 0 111 10 ....... @vldst_wn \
+ p=1 size=2
+
# Non-widening loads/stores (P=0 W=0 is 'related encoding')
VLDR_VSTR 1110110 0 a:1 . 1 . .... ... 111100 ....... @vldr_vstr \
size=0 p=0 w=1
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrb, 1, stb, 1, uint8_t)
DO_VSTR(vstrh, 2, stw, 2, uint16_t)
DO_VSTR(vstrw, 4, stl, 4, uint32_t)

+DO_VLDR(vldrb_sh, 1, ldsb, 2, int16_t)
+DO_VLDR(vldrb_sw, 1, ldsb, 4, int32_t)
+DO_VLDR(vldrb_uh, 1, ldub, 2, uint16_t)
+DO_VLDR(vldrb_uw, 1, ldub, 4, uint32_t)
+DO_VLDR(vldrh_sw, 2, ldsw, 4, int32_t)
+DO_VLDR(vldrh_uw, 2, lduw, 4, uint32_t)
+
+DO_VSTR(vstrb_h, 1, stb, 2, int16_t)
+DO_VSTR(vstrb_w, 1, stb, 4, int32_t)
+DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
+
#undef DO_VLDR
#undef DO_VSTR
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
};
return do_ldst(s, a, ldstfns[a->size][a->l]);
}
+
+#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST) \
+ static bool trans_##OP(DisasContext *s, arg_VLDR_VSTR *a) \
+ { \
+ static MVEGenLdStFn * const ldstfns[2][2] = { \
+ { gen_helper_mve_##ST, gen_helper_mve_##SLD }, \
+ { NULL, gen_helper_mve_##ULD }, \
+ }; \
+ return do_ldst(s, a, ldstfns[a->u][a->l]); \
+ }
+
+DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
+DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
+DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
--
2.20.1

Implement the MVE VCLZ insn (and the necessary machinery
for MVE 1-input vector ops).

Note that for non-load instructions predication is always performed
at a byte level granularity regardless of element size (R_ZLSJ),
and so the masking logic here differs from that used in the VLDR
and VSTR helpers.
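
To illustrate what byte-level granularity means (standalone C sketch,
not part of the change; the name is invented), a 16-bit lane can be
partially updated because each of its two mask bits commits one byte
independently, which is what the mergemask() machinery below does via
expand_pred_b_data[]:

    #include <stdint.h>

    /* Illustrative only: byte-granular merge of a 16-bit result */
    static void sketch_mergemask_h(uint16_t *d, uint16_t r, uint16_t mask)
    {
        if (mask & 1) {
            *d = (*d & 0xff00) | (r & 0x00ff); /* commit low byte */
        }
        if (mask & 2) {
            *d = (*d & 0x00ff) | (r & 0xff00); /* commit high byte */
        }
    }
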
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-4-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 4 ++
target/arm/mve.decode | 8 ++++
target/arm/mve_helper.c | 82 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-mve.c | 38 ++++++++++++++++++
4 files changed, 132 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vldrh_uw, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
#

%qd 22:1 13:3
+%qm 5:1 1:3

&vldr_vstr rn qd imm p a w size l u
+&1op qd qm size

@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
# Note that both Rn and Qd are 3 bits only (no D bit)
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr

+@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
+
# Vector loads and stores

# Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
size=1 p=1
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
size=2 p=1
+
+# Vector miscellaneous
+
+VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)

#undef DO_VLDR
#undef DO_VSTR
+
+/*
+ * The mergemask(D, R, M) macro performs the operation "*D = R" but
+ * storing only the bytes which correspond to 1 bits in M,
+ * leaving other bytes in *D unchanged. We use _Generic
+ * to select the correct implementation based on the type of D.
+ */
+
+static void mergemask_ub(uint8_t *d, uint8_t r, uint16_t mask)
+{
+ if (mask & 1) {
+ *d = r;
+ }
+}
+
+static void mergemask_sb(int8_t *d, int8_t r, uint16_t mask)
+{
+ mergemask_ub((uint8_t *)d, r, mask);
+}
+
+static void mergemask_uh(uint16_t *d, uint16_t r, uint16_t mask)
+{
+ uint16_t bmask = expand_pred_b_data[mask & 3];
+ *d = (*d & ~bmask) | (r & bmask);
+}
+
+static void mergemask_sh(int16_t *d, int16_t r, uint16_t mask)
+{
+ mergemask_uh((uint16_t *)d, r, mask);
+}
+
+static void mergemask_uw(uint32_t *d, uint32_t r, uint16_t mask)
+{
+ uint32_t bmask = expand_pred_b_data[mask & 0xf];
+ *d = (*d & ~bmask) | (r & bmask);
+}
+
+static void mergemask_sw(int32_t *d, int32_t r, uint16_t mask)
+{
+ mergemask_uw((uint32_t *)d, r, mask);
+}
+
+static void mergemask_uq(uint64_t *d, uint64_t r, uint16_t mask)
+{
+ uint64_t bmask = expand_pred_b_data[mask & 0xff];
+ *d = (*d & ~bmask) | (r & bmask);
+}
+
+static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
+{
+ mergemask_uq((uint64_t *)d, r, mask);
+}
+
+#define mergemask(D, R, M) \
+ _Generic(D, \
+ uint8_t *: mergemask_ub, \
+ int8_t *: mergemask_sb, \
+ uint16_t *: mergemask_uh, \
+ int16_t *: mergemask_sh, \
+ uint32_t *: mergemask_uw, \
+ int32_t *: mergemask_sw, \
+ uint64_t *: mergemask_uq, \
+ int64_t *: mergemask_sq)(D, R, M)
+
+#define DO_1OP(OP, ESIZE, TYPE, FN) \
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+ { \
+ TYPE *d = vd, *m = vm; \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned e; \
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+ mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)]), mask); \
+ } \
+ mve_advance_vpt(env); \
+ }
+
+#define DO_CLZ_B(N) (clz32(N) - 24)
+#define DO_CLZ_H(N) (clz32(N) - 16)
+
+DO_1OP(vclzb, 1, uint8_t, DO_CLZ_B)
+DO_1OP(vclzh, 2, uint16_t, DO_CLZ_H)
+DO_1OP(vclzw, 4, uint32_t, clz32)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
#include "decode-mve.c.inc"

typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);

/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
+
+static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
+{
+ TCGv_ptr qd, qm;
+
+ if (!dc_isar_feature(aa32_mve, s) ||
+ !mve_check_qreg_bank(s, a->qd | a->qm) ||
+ !fn) {
+ return false;
+ }
+
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ qd = mve_qreg_ptr(a->qd);
+ qm = mve_qreg_ptr(a->qm);
+ fn(cpu_env, qd, qm);
+ tcg_temp_free_ptr(qd);
+ tcg_temp_free_ptr(qm);
+ mve_update_eci(s);
+ return true;
+}
+
+#define DO_1OP(INSN, FN) \
+ static bool trans_##INSN(DisasContext *s, arg_1op *a) \
+ { \
+ static MVEGenOneOpFn * const fns[] = { \
+ gen_helper_mve_##FN##b, \
+ gen_helper_mve_##FN##h, \
+ gen_helper_mve_##FN##w, \
+ NULL, \
+ }; \
+ return do_1op(s, a, fns[a->size]); \
+ }
+
+DO_1OP(VCLZ, vclz)
--
2.20.1

Implement the MVE VCLS insn.
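
For reference (standalone C sketch, not part of the change; the name
is invented), "count leading sign" for a byte is the number of bits
immediately below the sign bit that are equal to it, which is what
the DO_CLS_B macro below computes via clrsb32():

    #include <stdint.h>

    /* Illustrative only: count leading sign bits of an int8_t */
    static int sketch_cls8(int8_t n)
    {
        int sign = (n >> 7) & 1;
        int count = 0;
        for (int bit = 6; bit >= 0; bit--) {
            if (((n >> bit) & 1) == sign) {
                count++;
            } else {
                break;
            }
        }
        return count;  /* e.g. 0x03 -> 5, 0xff -> 7 */
    }
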
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-5-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 4 ++++
target/arm/mve.decode | 1 +
target/arm/mve_helper.c | 7 +++++++
target/arm/translate-mve.c | 1 +
4 files changed, 13 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \

# Vector miscellaneous

+VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
mve_advance_vpt(env); \
}

+#define DO_CLS_B(N) (clrsb32(N) - 24)
+#define DO_CLS_H(N) (clrsb32(N) - 16)
+
+DO_1OP(vclsb, 1, int8_t, DO_CLS_B)
+DO_1OP(vclsh, 2, int16_t, DO_CLS_H)
+DO_1OP(vclsw, 4, int32_t, clrsb32)
+
#define DO_CLZ_B(N) (clz32(N) - 24)
#define DO_CLZ_H(N) (clz32(N) - 16)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
}

DO_1OP(VCLZ, vclz)
+DO_1OP(VCLS, vcls)
--
2.20.1

Implement the MVE instructions VREV16, VREV32 and VREV64.
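
As a quick reminder of the semantics (standalone C sketch, not part
of the change; the name is invented), VREV32.16 swaps the two
halfwords inside each 32-bit chunk, matching the hswap32 helper used
below:

    #include <stdint.h>

    /* Illustrative only: swap halfwords within a 32-bit value */
    static uint32_t sketch_hswap32(uint32_t x)
    {
        return (x << 16) | (x >> 16);  /* 0xAAAABBBB -> 0xBBBBAAAA */
    }
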
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-6-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 7 +++++++
target/arm/mve.decode | 4 ++++
target/arm/mve_helper.c | 7 +++++++
target/arm/translate-mve.c | 33 +++++++++++++++++++++++++++++++++
4 files changed, 51 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vclzb, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vclzh, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vclzw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vrev16b, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev32b, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev32h, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev64b, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
+
+VREV16 1111 1111 1 . 11 .. 00 ... 0 0001 01 . 0 ... 0 @1op
+VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
+VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vclsw, 4, int32_t, clrsb32)
DO_1OP(vclzb, 1, uint8_t, DO_CLZ_B)
DO_1OP(vclzh, 2, uint16_t, DO_CLZ_H)
DO_1OP(vclzw, 4, uint32_t, clz32)
+
+DO_1OP(vrev16b, 2, uint16_t, bswap16)
+DO_1OP(vrev32b, 4, uint32_t, bswap32)
+DO_1OP(vrev32h, 4, uint32_t, hswap32)
+DO_1OP(vrev64b, 8, uint64_t, bswap64)
+DO_1OP(vrev64h, 8, uint64_t, hswap64)
+DO_1OP(vrev64w, 8, uint64_t, wswap64)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)

DO_1OP(VCLZ, vclz)
DO_1OP(VCLS, vcls)
+
+static bool trans_VREV16(DisasContext *s, arg_1op *a)
+{
+ static MVEGenOneOpFn * const fns[] = {
+ gen_helper_mve_vrev16b,
+ NULL,
+ NULL,
+ NULL,
+ };
+ return do_1op(s, a, fns[a->size]);
+}
+
+static bool trans_VREV32(DisasContext *s, arg_1op *a)
+{
+ static MVEGenOneOpFn * const fns[] = {
+ gen_helper_mve_vrev32b,
+ gen_helper_mve_vrev32h,
+ NULL,
+ NULL,
+ };
+ return do_1op(s, a, fns[a->size]);
+}
+
+static bool trans_VREV64(DisasContext *s, arg_1op *a)
+{
+ static MVEGenOneOpFn * const fns[] = {
+ gen_helper_mve_vrev64b,
+ gen_helper_mve_vrev64h,
+ gen_helper_mve_vrev64w,
+ NULL,
+ };
+ return do_1op(s, a, fns[a->size]);
+}
--
2.20.1

Implement the MVE VMVN(register) operation. Note that for
predication this operation is byte-by-byte.
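
Because bitwise NOT treats every byte independently, the helper can
operate on whole 64-bit chunks and let the byte-granular merge mask
pick out the active bytes. A standalone C sketch (illustrative only,
not part of the change; the name is invented):

    #include <stdint.h>

    /* Illustrative only: predicated NOT of one 64-bit chunk, where
     * byte_mask has 0xff in each byte position that is active.
     */
    static uint64_t sketch_vmvn_chunk(uint64_t d, uint64_t n,
                                      uint64_t byte_mask)
    {
        return (d & ~byte_mask) | (~n & byte_mask);
    }
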
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-7-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 2 ++
target/arm/mve.decode | 3 +++
target/arm/mve_helper.c | 4 ++++
target/arm/translate-mve.c | 5 +++++
4 files changed, 14 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vrev32h, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vrev64b, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vmvn, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr

@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
+@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0

# Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VCLZ 1111 1111 1 . 11 .. 00 ... 0 0100 11 . 0 ... 0 @1op
VREV16 1111 1111 1 . 11 .. 00 ... 0 0001 01 . 0 ... 0 @1op
VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op
+
+VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vrev32h, 4, uint32_t, hswap32)
DO_1OP(vrev64b, 8, uint64_t, bswap64)
DO_1OP(vrev64h, 8, uint64_t, hswap64)
DO_1OP(vrev64w, 8, uint64_t, wswap64)
+
+#define DO_NOT(N) (~(N))
+
+DO_1OP(vmvn, 8, uint64_t, DO_NOT)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_1op *a)
};
return do_1op(s, a, fns[a->size]);
}
+
+static bool trans_VMVN(DisasContext *s, arg_1op *a)
+{
+ return do_1op(s, a, gen_helper_mve_vmvn);
+}
--
2.20.1

Implement the MVE VABS functions (both integer and floating point).
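
The floating point form reduces to a pure bit operation, which is why
the helpers below can work on raw 64-bit chunks: clearing bit 31 of an
IEEE single is fabs(). A standalone C sketch (illustrative only, not
part of the change; the name is invented):

    #include <stdint.h>

    /* Illustrative only: fabs() on the raw bits of an IEEE single */
    static uint32_t sketch_fabs_f32_bits(uint32_t f)
    {
        return f & 0x7fffffff;  /* clear the sign bit */
    }
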
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-8-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 6 ++++++
target/arm/mve.decode | 3 +++
target/arm/mve_helper.c | 13 +++++++++++++
target/arm/translate-mve.c | 15 +++++++++++++++
4 files changed, 37 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vrev64h, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vrev64w, TCG_CALL_NO_WG, void, env, ptr, ptr)

DEF_HELPER_FLAGS_3(mve_vmvn, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vabsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vfabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vfabss, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VREV32 1111 1111 1 . 11 .. 00 ... 0 0000 11 . 0 ... 0 @1op
VREV64 1111 1111 1 . 11 .. 00 ... 0 0000 01 . 0 ... 0 @1op

VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz
+
+VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
+VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
#include "exec/helper-proto.h"
#include "exec/cpu_ldst.h"
#include "exec/exec-all.h"
+#include "tcg/tcg.h"

static uint16_t mve_element_mask(CPUARMState *env)
{
@@ -XXX,XX +XXX,XX @@ DO_1OP(vrev64w, 8, uint64_t, wswap64)
#define DO_NOT(N) (~(N))

DO_1OP(vmvn, 8, uint64_t, DO_NOT)
+
+#define DO_ABS(N) ((N) < 0 ? -(N) : (N))
+#define DO_FABSH(N) ((N) & dup_const(MO_16, 0x7fff))
+#define DO_FABSS(N) ((N) & dup_const(MO_32, 0x7fffffff))
+
+DO_1OP(vabsb, 1, int8_t, DO_ABS)
+DO_1OP(vabsh, 2, int16_t, DO_ABS)
+DO_1OP(vabsw, 4, int32_t, DO_ABS)
+
+/* We can do these 64 bits at a time */
+DO_1OP(vfabsh, 8, uint64_t, DO_FABSH)
+DO_1OP(vfabss, 8, uint64_t, DO_FABSS)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)

DO_1OP(VCLZ, vclz)
DO_1OP(VCLS, vcls)
+DO_1OP(VABS, vabs)

static bool trans_VREV16(DisasContext *s, arg_1op *a)
{
@@ -XXX,XX +XXX,XX @@ static bool trans_VMVN(DisasContext *s, arg_1op *a)
{
return do_1op(s, a, gen_helper_mve_vmvn);
}
+
+static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
+{
+ static MVEGenOneOpFn * const fns[] = {
+ NULL,
+ gen_helper_mve_vfabsh,
+ gen_helper_mve_vfabss,
+ NULL,
+ };
+ if (!dc_isar_feature(aa32_mve_fp, s)) {
+ return false;
+ }
+ return do_1op(s, a, fns[a->size]);
+}
--
2.20.1

Implement the MVE VNEG insn (both integer and floating point forms).
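
As with VABS, the floating point form is a pure bit operation:
negating an IEEE single is an XOR of the sign bit, matching the
DO_FNEGS macro below. A standalone C sketch (illustrative only, not
part of the change; the name is invented):

    #include <stdint.h>

    /* Illustrative only: negation on the raw bits of an IEEE single */
    static uint32_t sketch_fneg_f32_bits(uint32_t f)
    {
        return f ^ 0x80000000u;  /* flip the sign bit */
    }
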
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-9-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 6 ++++++
target/arm/mve.decode | 2 ++
target/arm/mve_helper.c | 12 ++++++++++++
target/arm/translate-mve.c | 15 +++++++++++++++
4 files changed, 35 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vfabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vfabss, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMVN 1111 1111 1 . 11 00 00 ... 0 0101 11 . 0 ... 0 @1op_nosz

VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
+VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
+VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vabsw, 4, int32_t, DO_ABS)
/* We can do these 64 bits at a time */
DO_1OP(vfabsh, 8, uint64_t, DO_FABSH)
DO_1OP(vfabss, 8, uint64_t, DO_FABSS)
+
+#define DO_NEG(N) (-(N))
+#define DO_FNEGH(N) ((N) ^ dup_const(MO_16, 0x8000))
+#define DO_FNEGS(N) ((N) ^ dup_const(MO_32, 0x80000000))
+
+DO_1OP(vnegb, 1, int8_t, DO_NEG)
+DO_1OP(vnegh, 2, int16_t, DO_NEG)
+DO_1OP(vnegw, 4, int32_t, DO_NEG)
+
+/* We can do these 64 bits at a time */
+DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
+DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
DO_1OP(VCLZ, vclz)
DO_1OP(VCLS, vcls)
DO_1OP(VABS, vabs)
+DO_1OP(VNEG, vneg)

static bool trans_VREV16(DisasContext *s, arg_1op *a)
{
@@ -XXX,XX +XXX,XX @@ static bool trans_VABS_fp(DisasContext *s, arg_1op *a)
}
return do_1op(s, a, fns[a->size]);
}
+
+static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
+{
+ static MVEGenOneOpFn * const fns[] = {
+ NULL,
+ gen_helper_mve_vfnegh,
+ gen_helper_mve_vfnegs,
+ NULL,
+ };
+ if (!dc_isar_feature(aa32_mve_fp, s)) {
+ return false;
+ }
+ return do_1op(s, a, fns[a->size]);
+}
--
2.20.1

diff view generated by jsdifflib
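An illustrative aside, not part of the series: the DO_FNEGH/DO_FNEGS trick negates several IEEE values at once by XORing just the sign bit of each lane. A minimal standalone sketch, with dup16() as a stand-in for QEMU's dup_const(MO_16, ...):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for dup_const(MO_16, c): replicate a 16-bit constant. */
    static uint64_t dup16(uint64_t c)
    {
        return c * 0x0001000100010001ULL;
    }

    int main(void)
    {
        /* four float16 values in one uint64_t: 1.0, -1.0, 2.0, 4.0 */
        uint64_t lanes = 0x3c00bc0040004400ULL;
        uint64_t negated = lanes ^ dup16(0x8000);   /* flip each sign bit */
        printf("%016llx\n", (unsigned long long)negated);
        /* -> bc003c00c000c400, i.e. -1.0, 1.0, -2.0, -4.0 */
        return 0;
    }
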
The Arm MVE VDUP implementation would like to be able to emit code to
duplicate a byte or halfword value into an i32. We have code to do
this already in tcg-op-gvec.c, so all we need to do is make the
functions global.

For consistency with other functions made available to the frontends:
 * we rename to tcg_gen_dup_*
 * we expose both the _i32 and _i64 forms
 * we provide the #define for a _tl form

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210617121628.20116-10-peter.maydell@linaro.org
---
 include/tcg/tcg-op.h | 8 ++++++++
 include/tcg/tcg.h | 1 -
 tcg/tcg-op-gvec.c | 20 ++++++++++----------
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg-op.h
+++ b/include/tcg/tcg-op.h
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umin_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
 void tcg_gen_umax_i32(TCGv_i32, TCGv_i32 arg1, TCGv_i32 arg2);
 void tcg_gen_abs_i32(TCGv_i32, TCGv_i32);

+/* Replicate a value of size @vece from @in to all the lanes in @out */
+void tcg_gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in);
+
 static inline void tcg_gen_discard_i32(TCGv_i32 arg)
 {
     tcg_gen_op1_i32(INDEX_op_discard, arg);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umin_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
 void tcg_gen_umax_i64(TCGv_i64, TCGv_i64 arg1, TCGv_i64 arg2);
 void tcg_gen_abs_i64(TCGv_i64, TCGv_i64);

+/* Replicate a value of size @vece from @in to all the lanes in @out */
+void tcg_gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in);
+
 #if TCG_TARGET_REG_BITS == 64
 static inline void tcg_gen_discard_i64(TCGv_i64 arg)
 {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
 #define tcg_gen_atomic_smax_fetch_tl tcg_gen_atomic_smax_fetch_i64
 #define tcg_gen_atomic_umax_fetch_tl tcg_gen_atomic_umax_fetch_i64
 #define tcg_gen_dup_tl_vec tcg_gen_dup_i64_vec
+#define tcg_gen_dup_tl tcg_gen_dup_i64
 #else
 #define tcg_gen_movi_tl tcg_gen_movi_i32
 #define tcg_gen_mov_tl tcg_gen_mov_i32
@@ -XXX,XX +XXX,XX @@ void tcg_gen_stl_vec(TCGv_vec r, TCGv_ptr base, TCGArg offset, TCGType t);
 #define tcg_gen_atomic_smax_fetch_tl tcg_gen_atomic_smax_fetch_i32
 #define tcg_gen_atomic_umax_fetch_tl tcg_gen_atomic_umax_fetch_i32
 #define tcg_gen_dup_tl_vec tcg_gen_dup_i32_vec
+#define tcg_gen_dup_tl tcg_gen_dup_i32
 #endif

 #if UINTPTR_MAX == UINT32_MAX
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ uint64_t dup_const(unsigned vece, uint64_t c);
        : (qemu_build_not_reached_always(), 0)) \
     : dup_const(VECE, C))

-
 /*
  * Memory helpers that will be used by TCG generated code.
  */
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -XXX,XX +XXX,XX @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
 }

 /* Duplicate IN into OUT as per VECE. */
-static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
+void tcg_gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
 {
     switch (vece) {
     case MO_8:
@@ -XXX,XX +XXX,XX @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
     }
 }

-static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
+void tcg_gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
 {
     switch (vece) {
     case MO_8:
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
             && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
             t_64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(t_64, in_32);
-            gen_dup_i64(vece, t_64, t_64);
+            tcg_gen_dup_i64(vece, t_64, t_64);
         } else {
             t_32 = tcg_temp_new_i32();
-            gen_dup_i32(vece, t_32, in_32);
+            tcg_gen_dup_i32(vece, t_32, in_32);
         }
     } else if (in_64) {
         /* We are given a 64-bit variable input. */
         t_64 = tcg_temp_new_i64();
-        gen_dup_i64(vece, t_64, in_64);
+        tcg_gen_dup_i64(vece, t_64, in_64);
     } else {
         /* We are given a constant input. */
         /* For 64-bit hosts, use 64-bit constants for "simple" constants
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_2s(uint32_t dofs, uint32_t aofs, uint32_t oprsz,
     } else if (g->fni8 && check_size_impl(oprsz, 8)) {
         TCGv_i64 t64 = tcg_temp_new_i64();

-        gen_dup_i64(g->vece, t64, c);
+        tcg_gen_dup_i64(g->vece, t64, c);
         expand_2s_i64(dofs, aofs, oprsz, t64, g->scalar_first, g->fni8);
         tcg_temp_free_i64(t64);
     } else if (g->fni4 && check_size_impl(oprsz, 4)) {
         TCGv_i32 t32 = tcg_temp_new_i32();

         tcg_gen_extrl_i64_i32(t32, c);
-        gen_dup_i32(g->vece, t32, t32);
+        tcg_gen_dup_i32(g->vece, t32, t32);
         expand_2s_i32(dofs, aofs, oprsz, t32, g->scalar_first, g->fni4);
         tcg_temp_free_i32(t32);
     } else {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    gen_dup_i64(vece, tmp, c);
+    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ands);
     tcg_temp_free_i64(tmp);
 }
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    gen_dup_i64(vece, tmp, c);
+    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_xors);
     tcg_temp_free_i64(tmp);
 }
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
                       TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
     TCGv_i64 tmp = tcg_temp_new_i64();
-    gen_dup_i64(vece, tmp, c);
+    tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ors);
     tcg_temp_free_i64(tmp);
 }
--
2.20.1

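For readers unfamiliar with the gvec replication helpers, here is a pure value-level sketch of the result tcg_gen_dup_i32() emits code to compute; the constant-multiply idiom for MO_8 is used for illustration only and is not a claim about the exact TCG ops generated:

    #include <stdint.h>
    #include <stdio.h>

    /* Value-level picture of "replicate element of size 2^vece across 32 bits". */
    static uint32_t dup32(unsigned vece, uint32_t in)
    {
        switch (vece) {
        case 0:                       /* MO_8 */
            return (in & 0xff) * 0x01010101u;
        case 1:                       /* MO_16 */
            in &= 0xffff;
            return in | (in << 16);
        default:                      /* MO_32 */
            return in;
        }
    }

    int main(void)
    {
        printf("%08x\n", dup32(0, 0xab));    /* abababab */
        printf("%08x\n", dup32(1, 0x12cd));  /* 12cd12cd */
        return 0;
    }
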
Implement the MVE VDUP insn, which duplicates a value from
a general-purpose register into every lane of a vector
register (subject to predication).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-11-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 2 ++
 target/arm/mve.decode | 10 ++++++++++
 target/arm/mve_helper.c | 16 ++++++++++++++++
 target/arm/translate-mve.c | 27 +++++++++++++++++++++++++++
 4 files changed, 55 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
 DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 %qd 22:1 13:3
 %qm 5:1 1:3
+%qn 7:1 17:3

 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
@@ -XXX,XX +XXX,XX @@ VABS 1111 1111 1 . 11 .. 01 ... 0 0011 01 . 0 ... 0 @1op
 VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
 VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
 VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
+
+&vdup qd rt size
+# Qd is in the fields usually named Qn
+@vdup .... .... . . .. ... . rt:4 .... . . . . .... qd=%qn &vdup
+
+# B and E bits encode size, which we decode here to the usual size values
+VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
+VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
+VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mergemask_sq(int64_t *d, int64_t r, uint16_t mask)
         uint64_t *: mergemask_uq, \
         int64_t *:  mergemask_sq)(D, R, M)

+void HELPER(mve_vdup)(CPUARMState *env, void *vd, uint32_t val)
+{
+    /*
+     * The generated code already replicated an 8 or 16 bit constant
+     * into the 32-bit value, so we only need to write the 32-bit
+     * value to all elements of the Qreg, allowing for predication.
+     */
+    uint32_t *d = vd;
+    uint16_t mask = mve_element_mask(env);
+    unsigned e;
+    for (e = 0; e < 16 / 4; e++, mask >>= 4) {
+        mergemask(&d[H4(e)], val, mask);
+    }
+    mve_advance_vpt(env);
+}
+
 #define DO_1OP(OP, ESIZE, TYPE, FN) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
     { \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
 DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
 DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)

+static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
+{
+    TCGv_ptr qd;
+    TCGv_i32 rt;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd)) {
+        return false;
+    }
+    if (a->rt == 13 || a->rt == 15) {
+        /* UNPREDICTABLE; we choose to UNDEF */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    rt = load_reg(s, a->rt);
+    tcg_gen_dup_i32(a->size, rt, rt);
+    gen_helper_mve_vdup(cpu_env, qd, rt);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_i32(rt);
+    mve_update_eci(s);
+    return true;
+}
+
 static bool do_1op(DisasContext *s, arg_1op *a, MVEGenOneOpFn fn)
 {
     TCGv_ptr qd, qm;
--
2.20.1

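A simplified model of the predicated write that mve_vdup performs, as plain C; the real mergemask() dispatches on pointer type, whereas this sketch just applies the per-byte predicate mask directly:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * The 16-bit mask carries one predicate bit per byte of the 128-bit
     * Q register; only bytes whose bit is set are updated.
     */
    static void vdup_sketch(uint32_t q[4], uint32_t val, uint16_t mask)
    {
        for (unsigned e = 0; e < 4; e++, mask >>= 4) {
            for (unsigned b = 0; b < 4; b++) {
                if (mask & (1u << b)) {
                    uint32_t bytemask = 0xffu << (b * 8);
                    q[e] = (q[e] & ~bytemask) | (val & bytemask);
                }
            }
        }
    }

    int main(void)
    {
        uint32_t q[4] = { 0, 0, 0, 0 };
        vdup_sketch(q, 0xdeadbeef, 0x00ff);  /* predicate covers lanes 0 and 1 */
        printf("%08x %08x %08x %08x\n", q[0], q[1], q[2], q[3]);
        /* -> deadbeef deadbeef 00000000 00000000 */
        return 0;
    }
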
Implement the MVE vector logical operations operating
on two registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-12-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 6 ++++++
 target/arm/mve.decode | 9 +++++++++
 target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c | 37 +++++++++++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
+&2op qd qm qn size

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@

 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
+@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
 VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
 size=2 p=1

+# Vector 2-op
+VAND 1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VBIC 1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VORR 1110 1111 0 . 10 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VORN 1110 1111 0 . 11 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+VEOR 1111 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vnegw, 4, int32_t, DO_NEG)
 /* We can do these 64 bits at a time */
 DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
 DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
+
+#define DO_2OP(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, \
+                                void *vd, void *vn, void *vm) \
+    { \
+        TYPE *d = vd, *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            mergemask(&d[H##ESIZE(e)], \
+                      FN(n[H##ESIZE(e)], m[H##ESIZE(e)]), mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+#define DO_AND(N, M) ((N) & (M))
+#define DO_BIC(N, M) ((N) & ~(M))
+#define DO_ORR(N, M) ((N) | (M))
+#define DO_ORN(N, M) ((N) | ~(M))
+#define DO_EOR(N, M) ((N) ^ (M))
+
+DO_2OP(vand, 8, uint64_t, DO_AND)
+DO_2OP(vbic, 8, uint64_t, DO_BIC)
+DO_2OP(vorr, 8, uint64_t, DO_ORR)
+DO_2OP(vorn, 8, uint64_t, DO_ORN)
+DO_2OP(veor, 8, uint64_t, DO_EOR)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@

 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
+typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VNEG_fp(DisasContext *s, arg_1op *a)
     }
     return do_1op(s, a, fns[a->size]);
 }
+
+static bool do_2op(DisasContext *s, arg_2op *a, MVEGenTwoOpFn fn)
+{
+    TCGv_ptr qd, qn, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qn | a->qm) ||
+        !fn) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qn = mve_qreg_ptr(a->qn);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qd, qn, qm);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_LOGIC(INSN, HELPER) \
+    static bool trans_##INSN(DisasContext *s, arg_2op *a) \
+    { \
+        return do_2op(s, a, HELPER); \
+    }
+
+DO_LOGIC(VAND, gen_helper_mve_vand)
+DO_LOGIC(VBIC, gen_helper_mve_vbic)
+DO_LOGIC(VORR, gen_helper_mve_vorr)
+DO_LOGIC(VORN, gen_helper_mve_vorn)
+DO_LOGIC(VEOR, gen_helper_mve_veor)
--
2.20.1

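Roughly what one instantiation of the DO_2OP macro above boils down to, with the QEMU plumbing (helper registration, mve_element_mask, mve_advance_vpt) stripped out; a sketch only:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * DO_2OP(vand, 8, uint64_t, DO_AND), minus the plumbing: the 16-byte
     * vector is processed as two uint64_t chunks, and the 16-bit predicate
     * mask (one bit per vector byte) is consumed 8 bits per chunk; only
     * predicated bytes are written back.
     */
    static void vand_sketch(uint64_t d[2], const uint64_t n[2],
                            const uint64_t m[2], uint16_t mask)
    {
        for (unsigned e = 0; e < 2; e++, mask >>= 8) {
            uint64_t r = n[e] & m[e];
            for (unsigned b = 0; b < 8; b++) {   /* mergemask, per byte */
                if (mask & (1u << b)) {
                    uint64_t bytemask = 0xffULL << (b * 8);
                    d[e] = (d[e] & ~bytemask) | (r & bytemask);
                }
            }
        }
    }

    int main(void)
    {
        uint64_t d[2] = { 0, 0 };
        uint64_t n[2] = { ~0ULL, ~0ULL }, m[2] = { 0x1122334455667788ULL, 0 };
        vand_sketch(d, n, m, 0x000f);   /* only bytes 0..3 are predicated */
        printf("%016llx %016llx\n", (unsigned long long)d[0],
               (unsigned long long)d[1]);  /* -> 0000000055667788 0000000000000000 */
        return 0;
    }
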
Implement the MVE VADD, VSUB and VMUL insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-13-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 12 ++++++++++++
 target/arm/mve.decode | 5 +++++
 target/arm/mve_helper.c | 14 ++++++++++++++
 target/arm/translate-mve.c | 16 ++++++++++++++++
 4 files changed, 47 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vsubb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vsubh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vsubw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmulb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@

 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
+@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VORR 1110 1111 0 . 10 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
 VORN 1110 1111 0 . 11 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
 VEOR 1111 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz

+VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
+VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
+VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
         mve_advance_vpt(env); \
     }

+/* provide unsigned 2-op helpers for all sizes */
+#define DO_2OP_U(OP, FN) \
+    DO_2OP(OP##b, 1, uint8_t, FN) \
+    DO_2OP(OP##h, 2, uint16_t, FN) \
+    DO_2OP(OP##w, 4, uint32_t, FN)
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP(vbic, 8, uint64_t, DO_BIC)
 DO_2OP(vorr, 8, uint64_t, DO_ORR)
 DO_2OP(vorn, 8, uint64_t, DO_ORN)
 DO_2OP(veor, 8, uint64_t, DO_EOR)
+
+#define DO_ADD(N, M) ((N) + (M))
+#define DO_SUB(N, M) ((N) - (M))
+#define DO_MUL(N, M) ((N) * (M))
+
+DO_2OP_U(vadd, DO_ADD)
+DO_2OP_U(vsub, DO_SUB)
+DO_2OP_U(vmul, DO_MUL)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VBIC, gen_helper_mve_vbic)
 DO_LOGIC(VORR, gen_helper_mve_vorr)
 DO_LOGIC(VORN, gen_helper_mve_vorn)
 DO_LOGIC(VEOR, gen_helper_mve_veor)
+
+#define DO_2OP(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_2op *a) \
+    { \
+        static MVEGenTwoOpFn * const fns[] = { \
+            gen_helper_mve_##FN##b, \
+            gen_helper_mve_##FN##h, \
+            gen_helper_mve_##FN##w, \
+            NULL, \
+        }; \
+        return do_2op(s, a, fns[a->size]); \
+    }
+
+DO_2OP(VADD, vadd)
+DO_2OP(VSUB, vsub)
+DO_2OP(VMUL, vmul)
--
2.20.1

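Why DO_2OP_U (unsigned lane types) is enough for VADD, VSUB and VMUL even in their signed forms: two's-complement add, subtract and low-half multiply produce the same bit pattern either way. A quick demonstration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t sa = -100, sb = -50;                 /* signed view */
        uint8_t ua = (uint8_t)sa, ub = (uint8_t)sb; /* same bits, unsigned view */

        uint8_t ur = (uint8_t)(ua + ub);  /* modular 8-bit add, as in DO_ADD */
        printf("%d\n", (int8_t)ur);       /* 106: exactly the wrapped signed sum */
        return 0;
    }
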
Implement the MVE VMULH insn, which performs a vector
multiply and returns the high half of the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-14-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 7 +++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 38 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vsubw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmulhsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VADD 1110 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
 VSUB 1111 1111 0 . .. ... 0 ... 0 1000 . 1 . 0 ... 0 @2op
 VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op

+VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
+VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(veor, 8, uint64_t, DO_EOR)
 DO_2OP_U(vadd, DO_ADD)
 DO_2OP_U(vsub, DO_SUB)
 DO_2OP_U(vmul, DO_MUL)
+
+/*
+ * Because the computation type is at least twice as large as required,
+ * these work for both signed and unsigned source types.
+ */
+static inline uint8_t do_mulh_b(int32_t n, int32_t m)
+{
+    return (n * m) >> 8;
+}
+
+static inline uint16_t do_mulh_h(int32_t n, int32_t m)
+{
+    return (n * m) >> 16;
+}
+
+static inline uint32_t do_mulh_w(int64_t n, int64_t m)
+{
+    return (n * m) >> 32;
+}
+
+DO_2OP(vmulhsb, 1, int8_t, do_mulh_b)
+DO_2OP(vmulhsh, 2, int16_t, do_mulh_h)
+DO_2OP(vmulhsw, 4, int32_t, do_mulh_w)
+DO_2OP(vmulhub, 1, uint8_t, do_mulh_b)
+DO_2OP(vmulhuh, 2, uint16_t, do_mulh_h)
+DO_2OP(vmulhuw, 4, uint32_t, do_mulh_w)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VEOR, gen_helper_mve_veor)
 DO_2OP(VADD, vadd)
 DO_2OP(VSUB, vsub)
 DO_2OP(VMUL, vmul)
+DO_2OP(VMULH_S, vmulhs)
+DO_2OP(VMULH_U, vmulhu)
--
2.20.1

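A worked example of the widening trick described in the do_mulh_* comment: because the arguments arrive sign- or zero-extended into a type at least twice the lane width, one helper body serves both signednesses:

    #include <stdint.h>
    #include <stdio.h>

    /* Same shape as do_mulh_b in the patch. */
    static uint8_t mulh_b(int32_t n, int32_t m)
    {
        return (n * m) >> 8;
    }

    int main(void)
    {
        /* signed use: callers pass sign-extended lane values */
        printf("%d\n", (int8_t)mulh_b(-128, 127));   /* -16256 >> 8 = -64 */
        /* unsigned use: callers pass zero-extended lane values */
        printf("%u\n", (unsigned)mulh_b(200, 200));  /* 40000 >> 8 = 156 */
        return 0;
    }
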
Implement the MVE VRMULH insn, which performs a rounding multiply
and then returns the high half.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-15-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 7 +++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 22 ++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 34 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vrmulhsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrmulhsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op

 VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
 VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op

+VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
+VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t do_mulh_w(int64_t n, int64_t m)
     return (n * m) >> 32;
 }

+static inline uint8_t do_rmulh_b(int32_t n, int32_t m)
+{
+    return (n * m + (1U << 7)) >> 8;
+}
+
+static inline uint16_t do_rmulh_h(int32_t n, int32_t m)
+{
+    return (n * m + (1U << 15)) >> 16;
+}
+
+static inline uint32_t do_rmulh_w(int64_t n, int64_t m)
+{
+    return (n * m + (1U << 31)) >> 32;
+}
+
 DO_2OP(vmulhsb, 1, int8_t, do_mulh_b)
 DO_2OP(vmulhsh, 2, int16_t, do_mulh_h)
 DO_2OP(vmulhsw, 4, int32_t, do_mulh_w)
 DO_2OP(vmulhub, 1, uint8_t, do_mulh_b)
 DO_2OP(vmulhuh, 2, uint16_t, do_mulh_h)
 DO_2OP(vmulhuw, 4, uint32_t, do_mulh_w)
+
+DO_2OP(vrmulhsb, 1, int8_t, do_rmulh_b)
+DO_2OP(vrmulhsh, 2, int16_t, do_rmulh_h)
+DO_2OP(vrmulhsw, 4, int32_t, do_rmulh_w)
+DO_2OP(vrmulhub, 1, uint8_t, do_rmulh_b)
+DO_2OP(vrmulhuh, 2, uint16_t, do_rmulh_h)
+DO_2OP(vrmulhuw, 4, uint32_t, do_rmulh_w)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VSUB, vsub)
 DO_2OP(VMUL, vmul)
 DO_2OP(VMULH_S, vmulhs)
 DO_2OP(VMULH_U, vmulhu)
+DO_2OP(VRMULH_S, vrmulhs)
+DO_2OP(VRMULH_U, vrmulhu)
--
2.20.1

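The rounding constant here is half of the discarded low part, so results round to nearest rather than truncate. A small example where the two forms differ:

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t mulh_b(int32_t n, int32_t m)
    {
        return (n * m) >> 8;            /* truncating, as in VMULH */
    }

    static uint8_t rmulh_b(int32_t n, int32_t m)
    {
        return (n * m + (1 << 7)) >> 8; /* rounding, as in VRMULH */
    }

    int main(void)
    {
        /* 100 * 110 = 11000 = 0x2af8: low byte >= 0x80, so rounding bumps it */
        printf("%u %u\n", (unsigned)mulh_b(100, 110),
               (unsigned)rmulh_b(100, 110));   /* -> 42 43 */
        return 0;
    }
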
Implement the MVE VMAX and VMIN insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-16-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 14 ++++++++++++++
 target/arm/mve.decode | 5 +++++
 target/arm/mve_helper.c | 14 ++++++++++++++
 target/arm/translate-mve.c | 4 ++++
 4 files changed, 37 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmulhsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vrmulhub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vrmulhuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vrmulhuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmaxsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmaxuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vminsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vminuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
 VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
 VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op

+VMAX_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
+VMAX_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
+VMIN_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
+VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
     DO_2OP(OP##h, 2, uint16_t, FN) \
     DO_2OP(OP##w, 4, uint32_t, FN)

+/* provide signed 2-op helpers for all sizes */
+#define DO_2OP_S(OP, FN) \
+    DO_2OP(OP##b, 1, int8_t, FN) \
+    DO_2OP(OP##h, 2, int16_t, FN) \
+    DO_2OP(OP##w, 4, int32_t, FN)
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP(vrmulhsw, 4, int32_t, do_rmulh_w)
 DO_2OP(vrmulhub, 1, uint8_t, do_rmulh_b)
 DO_2OP(vrmulhuh, 2, uint16_t, do_rmulh_h)
 DO_2OP(vrmulhuw, 4, uint32_t, do_rmulh_w)
+
+#define DO_MAX(N, M) ((N) >= (M) ? (N) : (M))
+#define DO_MIN(N, M) ((N) >= (M) ? (M) : (N))
+
+DO_2OP_S(vmaxs, DO_MAX)
+DO_2OP_U(vmaxu, DO_MAX)
+DO_2OP_S(vmins, DO_MIN)
+DO_2OP_U(vminu, DO_MIN)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULH_S, vmulhs)
 DO_2OP(VMULH_U, vmulhu)
 DO_2OP(VRMULH_S, vrmulhs)
 DO_2OP(VRMULH_U, vrmulhu)
+DO_2OP(VMAX_S, vmaxs)
+DO_2OP(VMAX_U, vmaxu)
+DO_2OP(VMIN_S, vmins)
+DO_2OP(VMIN_U, vminu)
--
2.20.1

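Note that DO_MAX and DO_MIN pick up their signedness purely from the lane type the macro is instantiated with; the same bits compare differently:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t ua = 0xff, ub = 0x01;
        int8_t sa = (int8_t)ua, sb = (int8_t)ub;

        /* DO_MAX over unsigned lanes: 255 vs 1 */
        printf("%d\n", (ua >= ub) ? ua : ub);   /* 255 */
        /* DO_MAX over signed lanes: -1 vs 1, same bits */
        printf("%d\n", (sa >= sb) ? sa : sb);   /* 1 */
        return 0;
    }
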
Implement the MVE VABD insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-17-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 7 +++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 5 +++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 17 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vminsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vminuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vabdsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMAX_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 0 ... 0 @2op
 VMIN_S 111 0 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
 VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op

+VABD_S 111 0 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
+VABD_U 111 1 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vmaxs, DO_MAX)
 DO_2OP_U(vmaxu, DO_MAX)
 DO_2OP_S(vmins, DO_MIN)
 DO_2OP_U(vminu, DO_MIN)
+
+#define DO_ABD(N, M) ((N) >= (M) ? (N) - (M) : (M) - (N))
+
+DO_2OP_S(vabds, DO_ABD)
+DO_2OP_U(vabdu, DO_ABD)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMAX_S, vmaxs)
 DO_2OP(VMAX_U, vmaxu)
 DO_2OP(VMIN_S, vmins)
 DO_2OP(VMIN_U, vminu)
+DO_2OP(VABD_S, vabds)
+DO_2OP(VABD_U, vabdu)
--
2.20.1

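A note on DO_ABD: it orders the subtraction instead of taking an absolute value afterwards, so the comparison (and hence the result) follows the lane signedness; the wide intermediate is then truncated back into the lane:

    #include <stdint.h>
    #include <stdio.h>

    /* DO_ABD evaluated at int precision, then truncated into the lane. */
    int main(void)
    {
        int8_t n = -100, m = 100;
        int r = (n >= m) ? n - m : m - n;    /* |n - m| = 200 */
        printf("%d -> lane byte 0x%02x\n", r, (unsigned)(uint8_t)r);
        /* -> 200 -> 0xc8 */
        return 0;
    }
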
Implement MVE VHADD and VHSUB insns, which perform an addition
or subtraction and then halve the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-18-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 14 ++++++++++++++
 target/arm/mve.decode | 5 +++++
 target/arm/mve_helper.c | 25 +++++++++++++++++++++++++
 target/arm/translate-mve.c | 4 ++++
 4 files changed, 48 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vabdsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vabdub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vabduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vabduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMIN_U 111 1 1111 0 . .. ... 0 ... 0 0110 . 1 . 1 ... 0 @2op
 VABD_S 111 0 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op
 VABD_U 111 1 1111 0 . .. ... 0 ... 0 0111 . 1 . 0 ... 0 @2op

+VHADD_S 111 0 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
+VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
+VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
+VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vminu, DO_MIN)

 DO_2OP_S(vabds, DO_ABD)
 DO_2OP_U(vabdu, DO_ABD)
+
+static inline uint32_t do_vhadd_u(uint32_t n, uint32_t m)
+{
+    return ((uint64_t)n + m) >> 1;
+}
+
+static inline int32_t do_vhadd_s(int32_t n, int32_t m)
+{
+    return ((int64_t)n + m) >> 1;
+}
+
+static inline uint32_t do_vhsub_u(uint32_t n, uint32_t m)
+{
+    return ((uint64_t)n - m) >> 1;
+}
+
+static inline int32_t do_vhsub_s(int32_t n, int32_t m)
+{
+    return ((int64_t)n - m) >> 1;
+}
+
+DO_2OP_S(vhadds, do_vhadd_s)
+DO_2OP_U(vhaddu, do_vhadd_u)
+DO_2OP_S(vhsubs, do_vhsub_s)
+DO_2OP_U(vhsubu, do_vhsub_u)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMIN_S, vmins)
 DO_2OP(VMIN_U, vminu)
 DO_2OP(VABD_S, vabds)
 DO_2OP(VABD_U, vabdu)
+DO_2OP(VHADD_S, vhadds)
+DO_2OP(VHADD_U, vhaddu)
+DO_2OP(VHSUB_S, vhsubs)
+DO_2OP(VHSUB_U, vhsubu)
--
2.20.1

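The point of the (uint64_t)/(int64_t) casts in do_vhadd_u and friends: the un-halved sum may not fit the 32-bit type, but the halved result always does. For example:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t vhadd_u(uint32_t n, uint32_t m)
    {
        return ((uint64_t)n + m) >> 1;  /* as in do_vhadd_u */
    }

    int main(void)
    {
        uint32_t n = 0xffffffffu, m = 0xffffffffu;
        printf("%08x\n", vhadd_u(n, m));   /* ffffffff: correct */
        printf("%08x\n", (n + m) >> 1);    /* 7fffffff: 32-bit sum wrapped */
        return 0;
    }
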
Implement the MVE VMULL insn, which multiplies two single
width integer elements to produce a double width result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-19-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 14 ++++++++++++++
 target/arm/mve.decode | 5 +++++
 target/arm/mve_helper.c | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 4 ++++
 4 files changed, 57 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vhsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vhsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vhsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmullbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullbuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmulltsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
 VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op

+VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+
 # Vector miscellaneous

 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
     DO_2OP(OP##h, 2, int16_t, FN) \
     DO_2OP(OP##w, 4, int32_t, FN)

+/*
+ * "Long" operations where two half-sized inputs (taken from either the
+ * top or the bottom of the input vector) produce a double-width result.
+ * Here ESIZE, TYPE are for the input, and LESIZE, LTYPE for the output.
+ */
+#define DO_2OP_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    { \
+        LTYPE *d = vd; \
+        TYPE *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned le; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            LTYPE r = FN((LTYPE)n[H##ESIZE(le * 2 + TOP)], \
+                         m[H##ESIZE(le * 2 + TOP)]); \
+            mergemask(&d[H##LESIZE(le)], r, mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vadd, DO_ADD)
 DO_2OP_U(vsub, DO_SUB)
 DO_2OP_U(vmul, DO_MUL)

+DO_2OP_L(vmullbsb, 0, 1, int8_t, 2, int16_t, DO_MUL)
+DO_2OP_L(vmullbsh, 0, 2, int16_t, 4, int32_t, DO_MUL)
+DO_2OP_L(vmullbsw, 0, 4, int32_t, 8, int64_t, DO_MUL)
+DO_2OP_L(vmullbub, 0, 1, uint8_t, 2, uint16_t, DO_MUL)
+DO_2OP_L(vmullbuh, 0, 2, uint16_t, 4, uint32_t, DO_MUL)
+DO_2OP_L(vmullbuw, 0, 4, uint32_t, 8, uint64_t, DO_MUL)
+
+DO_2OP_L(vmulltsb, 1, 1, int8_t, 2, int16_t, DO_MUL)
+DO_2OP_L(vmulltsh, 1, 2, int16_t, 4, int32_t, DO_MUL)
+DO_2OP_L(vmulltsw, 1, 4, int32_t, 8, int64_t, DO_MUL)
+DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL)
+DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL)
+DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL)
+
 /*
  * Because the computation type is at least twice as large as required,
  * these work for both signed and unsigned source types.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VHADD_S, vhadds)
 DO_2OP(VHADD_U, vhaddu)
 DO_2OP(VHSUB_S, vhsubs)
 DO_2OP(VHSUB_U, vhsubu)
+DO_2OP(VMULL_BS, vmullbs)
+DO_2OP(VMULL_BU, vmullbu)
+DO_2OP(VMULL_TS, vmullts)
+DO_2OP(VMULL_TU, vmulltu)
--
2.20.1

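How the le * 2 + TOP indexing in DO_2OP_L selects lanes, as a plain-C sketch; vmull_s8_sketch is a hypothetical name for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Output (wide) element le is built from input (narrow) element
     * le * 2 + top: VMULLB takes the even lanes, VMULLT the odd ones.
     */
    static void vmull_s8_sketch(int16_t d[8], const int8_t n[16],
                                const int8_t m[16], int top)
    {
        for (unsigned le = 0; le < 8; le++) {
            unsigned e = le * 2 + top;
            d[le] = (int16_t)(n[e] * m[e]);
        }
    }

    int main(void)
    {
        int8_t n[16], m[16];
        int16_t d[8];
        for (int i = 0; i < 16; i++) {
            n[i] = (int8_t)i;
            m[i] = 10;
        }
        vmull_s8_sketch(d, n, m, 1);            /* "top" form: lanes 1, 3, 5, ... */
        printf("%d %d %d\n", d[0], d[1], d[2]); /* -> 10 30 50 */
        return 0;
    }
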
Implement the MVE VMLALDAV insn, which multiplies pairs of integer
elements, accumulating them into a 64-bit result in a pair of
general-purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-20-peter.maydell@linaro.org
---
 target/arm/helper-mve.h | 8 ++++
 target/arm/translate.h | 10 ++++
 target/arm/mve.decode | 15 ++++++
 target/arm/mve_helper.c | 34 ++++++++++++++
 target/arm/translate-mve.c | 96 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 163 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vmlaldavuh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlaldavuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline int negate(DisasContext *s, int x)
     return -x;
 }

+static inline int plus_1(DisasContext *s, int x)
+{
+    return x + 1;
+}
+
 static inline int plus_2(DisasContext *s, int x)
 {
     return x + 2;
@@ -XXX,XX +XXX,XX @@ static inline int times_4(DisasContext *s, int x)
     return x * 4;
 }

+static inline int times_2_plus_1(DisasContext *s, int x)
+{
+    return x * 2 + 1;
+}
+
 static inline int arm_dc_feature(DisasContext *dc, int feature)
 {
     return (dc->features & (1ULL << feature)) != 0;
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op
 VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
67
VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
68
+
69
+# multiply-add long dual accumulate
70
+# rdahi: bits [3:1] from insn, bit 0 is 1
71
+# rdalo: bits [3:1] from insn, bit 0 is 0
72
+%rdahi 20:3 !function=times_2_plus_1
73
+%rdalo 13:3 !function=times_2
74
+# size bit is 0 for 16 bit, 1 for 32 bit
75
+%size_16 16:1 !function=plus_1
76
+
77
+&vmlaldav rdahi rdalo size qn qm x a
78
+
79
+@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
80
+ qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
81
+VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
82
+VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
83
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/mve_helper.c
86
+++ b/target/arm/mve_helper.c
87
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vhadds, do_vhadd_s)
88
DO_2OP_U(vhaddu, do_vhadd_u)
89
DO_2OP_S(vhsubs, do_vhsub_s)
90
DO_2OP_U(vhsubu, do_vhsub_u)
91
+
92
+
93
+/*
94
+ * Multiply add long dual accumulate ops.
95
+ */
96
+#define DO_LDAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC) \
97
+ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
98
+ void *vm, uint64_t a) \
99
+ { \
100
+ uint16_t mask = mve_element_mask(env); \
101
+ unsigned e; \
102
+ TYPE *n = vn, *m = vm; \
103
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
104
+ if (mask & 1) { \
105
+ if (e & 1) { \
106
+ a ODDACC \
107
+ (int64_t)n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \
108
+ } else { \
109
+ a EVENACC \
110
+ (int64_t)n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \
111
+ } \
112
+ } \
113
+ } \
114
+ mve_advance_vpt(env); \
115
+ return a; \
116
+ }
117
+
118
+DO_LDAV(vmlaldavsh, 2, int16_t, false, +=, +=)
119
+DO_LDAV(vmlaldavxsh, 2, int16_t, true, +=, +=)
120
+DO_LDAV(vmlaldavsw, 4, int32_t, false, +=, +=)
121
+DO_LDAV(vmlaldavxsw, 4, int32_t, true, +=, +=)
122
+
123
+DO_LDAV(vmlaldavuh, 2, uint16_t, false, +=, +=)
124
+DO_LDAV(vmlaldavuw, 4, uint32_t, false, +=, +=)
125
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
126
index XXXXXXX..XXXXXXX 100644
127
--- a/target/arm/translate-mve.c
128
+++ b/target/arm/translate-mve.c
129
@@ -XXX,XX +XXX,XX @@
130
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
131
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
132
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
133
+typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
134
135
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
136
static inline long mve_qreg_offset(unsigned reg)
137
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
138
}
139
}
140
141
+static bool mve_skip_first_beat(DisasContext *s)
142
+{
143
+ /* Return true if PSR.ECI says we must skip the first beat of this insn */
144
+ switch (s->eci) {
145
+ case ECI_NONE:
146
+ return false;
147
+ case ECI_A0:
148
+ case ECI_A0A1:
149
+ case ECI_A0A1A2:
150
+ case ECI_A0A1A2B0:
151
+ return true;
152
+ default:
153
+ g_assert_not_reached();
154
+ }
155
+}
156
+
157
static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
158
{
159
TCGv_i32 addr;
160
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BS, vmullbs)
161
DO_2OP(VMULL_BU, vmullbu)
162
DO_2OP(VMULL_TS, vmullts)
163
DO_2OP(VMULL_TU, vmulltu)
164
+
165
+static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
166
+ MVEGenDualAccOpFn *fn)
167
+{
168
+ TCGv_ptr qn, qm;
169
+ TCGv_i64 rda;
170
+ TCGv_i32 rdalo, rdahi;
171
+
172
+ if (!dc_isar_feature(aa32_mve, s) ||
173
+ !mve_check_qreg_bank(s, a->qn | a->qm) ||
174
+ !fn) {
175
+ return false;
176
+ }
30
+ /*
177
+ /*
31
+ * Return true if M-profile state handling insns
178
+ * rdahi == 13 is UNPREDICTABLE; rdahi == 15 is a related
32
+ * (VSCCLRM, CLRM, FPCTX access insns) are implemented
179
+ * encoding; rdalo always has bit 0 clear so cannot be 13 or 15.
33
+ */
180
+ */
34
+ return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
181
+ if (a->rdahi == 13 || a->rdahi == 15) {
35
+}
36
+
37
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
38
{
39
/* Sadly this is encoded differently for A-profile and M-profile */
40
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/m-nocp.decode
43
+++ b/target/arm/m-nocp.decode
44
@@ -XXX,XX +XXX,XX @@
45
# If the coprocessor is not present or disabled then we will generate
46
# the NOCP exception; otherwise we let the insn through to the main decode.
47
48
+%vd_dp 22:1 12:4
49
+%vd_sp 12:4 22:1
50
+
51
&nocp cp
52
53
{
54
# Special cases which do not take an early NOCP: VLLDM and VLSTM
55
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
56
- # TODO: VSCCLRM (new in v8.1M) is similar:
57
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
58
+ # VSCCLRM (new in v8.1M) is similar:
59
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
60
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
61
62
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
63
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
64
diff --git a/target/arm/translate.c b/target/arm/translate.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate.c
67
+++ b/target/arm/translate.c
68
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
69
a64_translate_init();
70
}
71
72
+/* Generate a label used for skipping this instruction */
73
+static void arm_gen_condlabel(DisasContext *s)
74
+{
75
+ if (!s->condjmp) {
76
+ s->condlabel = gen_new_label();
77
+ s->condjmp = 1;
78
+ }
79
+}
80
+
81
/* Flags for the disas_set_da_iss info argument:
82
* lower bits hold the Rt register number, higher bits are flags.
83
*/
84
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
85
long off = neon_element_offset(reg, ele, memop);
86
87
switch (memop) {
88
+ case MO_32:
89
+ tcg_gen_st32_i64(src, cpu_env, off);
90
+ break;
91
case MO_64:
92
tcg_gen_st_i64(src, cpu_env, off);
93
break;
94
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
95
s->base.is_jmp = DISAS_UPDATE_EXIT;
96
}
97
98
-/* Generate a label used for skipping this instruction */
99
-static void arm_gen_condlabel(DisasContext *s)
100
-{
101
- if (!s->condjmp) {
102
- s->condlabel = gen_new_label();
103
- s->condjmp = 1;
104
- }
105
-}
106
-
107
/* Skip this instruction if the ARM condition is false */
108
static void arm_skip_unless(DisasContext *s, uint32_t cond)
109
{
110
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/translate-vfp.c.inc
113
+++ b/target/arm/translate-vfp.c.inc
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
115
return true;
116
}
117
118
+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
119
+{
120
+ int btmreg, topreg;
121
+ TCGv_i64 zero;
122
+ TCGv_i32 aspen, sfpa;
123
+
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
125
+ /* Before v8.1M, fall through in decode to NOCP check */
126
+ return false;
182
+ return false;
127
+ }
183
+ }
128
+
184
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
131
+ unallocated_encoding(s);
132
+ return true;
185
+ return true;
133
+ }
186
+ }
134
+
187
+
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
188
+ qn = mve_qreg_ptr(a->qn);
136
+ /* NOP if we have neither FP nor MVE */
189
+ qm = mve_qreg_ptr(a->qm);
137
+ return true;
138
+ }
139
+
190
+
140
+ /*
191
+ /*
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
192
+ * This insn is subject to beat-wise execution. Partial execution
142
+ * active floating point context so we must NOP (without doing
193
+ * of an A=0 (no-accumulate) insn which does not execute the first
143
+ * any lazy state preservation or the NOCP check).
194
+ * beat must start with the current rda value, not 0.
144
+ */
195
+ */
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
196
+ if (a->a || mve_skip_first_beat(s)) {
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
197
+ rda = tcg_temp_new_i64();
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
198
+ rdalo = load_reg(s, a->rdalo);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
199
+ rdahi = load_reg(s, a->rdahi);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
200
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
201
+ tcg_temp_free_i32(rdalo);
151
+ arm_gen_condlabel(s);
202
+ tcg_temp_free_i32(rdahi);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
203
+ } else {
153
+
204
+ rda = tcg_const_i64(0);
154
+ if (s->fp_excp_el != 0) {
205
+ }
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
206
+
156
+ syn_uncategorized(), s->fp_excp_el);
207
+ fn(rda, cpu_env, qn, qm, rda);
157
+ return true;
208
+ tcg_temp_free_ptr(qn);
158
+ }
209
+ tcg_temp_free_ptr(qm);
159
+
210
+
160
+ topreg = a->vd + a->imm - 1;
211
+ rdalo = tcg_temp_new_i32();
161
+ btmreg = a->vd;
212
+ rdahi = tcg_temp_new_i32();
162
+
213
+ tcg_gen_extrl_i64_i32(rdalo, rda);
163
+ /* Convert to Sreg numbers if the insn specified in Dregs */
214
+ tcg_gen_extrh_i64_i32(rdahi, rda);
164
+ if (a->size == 3) {
215
+ store_reg(s, a->rdalo, rdalo);
165
+ topreg = topreg * 2 + 1;
216
+ store_reg(s, a->rdahi, rdahi);
166
+ btmreg *= 2;
217
+ tcg_temp_free_i64(rda);
167
+ }
218
+ mve_update_eci(s);
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
178
+ }
179
+
180
+ if (!vfp_access_check(s)) {
181
+ return true;
182
+ }
183
+
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
185
+ zero = tcg_const_i64(0);
186
+ if (btmreg & 1) {
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
188
+ btmreg++;
189
+ }
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
192
+ }
193
+ if (btmreg == topreg) {
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
195
+ btmreg++;
196
+ }
197
+ assert(btmreg == topreg + 1);
198
+ /* TODO: when MVE is implemented, zero VPR here */
199
+ return true;
219
+ return true;
200
+}
220
+}
201
+
221
+
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
222
+static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
203
{
223
+{
204
/*
224
+ static MVEGenDualAccOpFn * const fns[4][2] = {
225
+ { NULL, NULL },
226
+ { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh },
227
+ { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw },
228
+ { NULL, NULL },
229
+ };
230
+ return do_long_dual_acc(s, a, fns[a->size][a->x]);
231
+}
232
+
233
+static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
234
+{
235
+ static MVEGenDualAccOpFn * const fns[4][2] = {
236
+ { NULL, NULL },
237
+ { gen_helper_mve_vmlaldavuh, NULL },
238
+ { gen_helper_mve_vmlaldavuw, NULL },
239
+ { NULL, NULL },
240
+ };
241
+ return do_long_dual_acc(s, a, fns[a->size][a->x]);
242
+}
205
--
243
--
206
2.20.1
244
2.20.1
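
A stand-alone C model of the DO_LDAV pattern may help here (a sketch,
not QEMU code; the name is illustrative). It shows VMLALDAV.S16
semantics: products of element pairs accumulate into a 64-bit value,
the X (exchange) form swaps which element of each pair is taken from
the first operand, and the VMLSLDAV variant in the next patch simply
subtracts on odd-numbered elements instead of adding.

#include <stdint.h>

static int64_t sketch_vmlaldav_s16(const int16_t n[8], const int16_t m[8],
                                   uint16_t mask, int xchg, int64_t acc)
{
    for (int e = 0; e < 8; e++, mask >>= 2) {
        if (mask & 1) { /* predicate bit for this element */
            int idx = (e & 1) ? e - xchg : e + xchg;
            acc += (int64_t)n[idx] * m[e];
        }
    }
    return acc;
}
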
Implement the MVE insn VMLSLDAV, which multiplies source elements,
alternately adding and subtracting them, and accumulates into a
64-bit result in a pair of general purpose registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-21-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  5 +++++
 target/arm/mve.decode      |  2 ++
 target/arm/mve_helper.c    |  5 +++++
 target/arm/translate-mve.c | 11 +++++++++++
 4 files changed, 23 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlaldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 
 DEF_HELPER_FLAGS_4(mve_vmlaldavuh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vmlsldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlsldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlsldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vmlsldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
                  qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
 VMLALDAV_S       1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
 VMLALDAV_U       1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
+
+VMLSLDAV         1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlaldavxsw, 4, int32_t, true, +=, +=)
 
 DO_LDAV(vmlaldavuh, 2, uint16_t, false, +=, +=)
 DO_LDAV(vmlaldavuw, 4, uint32_t, false, +=, +=)
+
+DO_LDAV(vmlsldavsh, 2, int16_t, false, +=, -=)
+DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
+DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
+DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
     };
     return do_long_dual_acc(s, a, fns[a->size][a->x]);
 }
+
+static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[4][2] = {
+        { NULL, NULL },
+        { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh },
+        { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw },
+        { NULL, NULL },
+    };
+    return do_long_dual_acc(s, a, fns[a->size][a->x]);
+}
-- 
2.20.1
Implement the MVE VRMLALDAVH and VRMLSLDAVH insns, which accumulate
the results of a rounded multiply of pairs of elements into a 72-bit
accumulator, returning the top 64 bits in a pair of general purpose
registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-22-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  7 +++++++
 target/arm/mve_helper.c    | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 24 ++++++++++++++++++++++++
 4 files changed, 76 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlsldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlsldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlsldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlsldavxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
 
 @vmlaldav        .... .... . ... ... . ... . .... .... qm:3 . \
                  qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
+@vmlaldav_nosz   .... .... . ... ... . ... . .... .... qm:3 . \
+                 qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
 VMLALDAV_S       1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
 VMLALDAV_U       1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
 
 VMLSLDAV         1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
+
+VRMLALDAVH_S     1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
+VRMLALDAVH_U     1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
+
+VRMLSLDAVH       1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/int128.h"
 #include "cpu.h"
 #include "internals.h"
 #include "vec_internal.h"
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavsh, 2, int16_t, false, +=, -=)
 DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
 DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
 DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
+
+/*
+ * Rounding multiply add long dual accumulate high: we must keep
+ * a 72-bit internal accumulator value and return the top 64 bits.
+ */
+#define DO_LDAVH(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC, TO128) \
+    uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+                                    void *vm, uint64_t a) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *n = vn, *m = vm; \
+        Int128 acc = int128_lshift(TO128(a), 8); \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                if (e & 1) { \
+                    acc = ODDACC(acc, TO128(n[H##ESIZE(e - 1 * XCHG)] * \
+                                            m[H##ESIZE(e)])); \
+                } else { \
+                    acc = EVENACC(acc, TO128(n[H##ESIZE(e + 1 * XCHG)] * \
+                                             m[H##ESIZE(e)])); \
+                } \
+                acc = int128_add(acc, 1 << 7); \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return int128_getlo(int128_rshift(acc, 8)); \
+    }
+
+DO_LDAVH(vrmlaldavhsw, 4, int32_t, false, int128_add, int128_add, int128_makes64)
+DO_LDAVH(vrmlaldavhxsw, 4, int32_t, true, int128_add, int128_add, int128_makes64)
+
+DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64)
+
+DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
+DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
     };
     return do_long_dual_acc(s, a, fns[a->size][a->x]);
 }
+
+static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[] = {
+        gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw,
+    };
+    return do_long_dual_acc(s, a, fns[a->x]);
+}
+
+static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[] = {
+        gen_helper_mve_vrmlaldavhuw, NULL,
+    };
+    return do_long_dual_acc(s, a, fns[a->x]);
+}
+
+static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
+{
+    static MVEGenDualAccOpFn * const fns[] = {
+        gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw,
+    };
+    return do_long_dual_acc(s, a, fns[a->x]);
+}
-- 
2.20.1
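
The 72-bit accumulator trick above is easier to see in a stand-alone
sketch. This is not QEMU code: it uses the GCC/Clang __int128
extension instead of QEMU's Int128 type, the name is illustrative, and
the multiply is done in 64 bits so it cannot overflow. The live
accumulator is kept shifted left by 8 bits, a rounding constant of
1 << 7 is added once per active element, and the top 64 bits come back
out via the final right shift.

#include <stdint.h>

static uint64_t sketch_vrmlaldavh_s32(const int32_t n[4], const int32_t m[4],
                                      uint16_t mask, int64_t a)
{
    __int128 acc = (__int128)a << 8;   /* 72-bit value, pre-shifted */
    for (int e = 0; e < 4; e++, mask >>= 4) {
        if (mask & 1) {
            acc += (__int128)((int64_t)n[e] * m[e]);
            acc += 1 << 7;             /* rounding, one per element */
        }
    }
    return (uint64_t)(acc >> 8);       /* return bits [71:8] */
}
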
From: Vikram Garhwal <fnu.vikram@xilinx.com>

Connect CAN0 and CAN1 on the ZynqMP.

Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-zynqmp.h |  8 ++++++++
 hw/arm/xlnx-zcu102.c         | 20 ++++++++++++++++++++
 hw/arm/xlnx-zynqmp.c         | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/intc/arm_gic.h"
 #include "hw/net/cadence_gem.h"
 #include "hw/char/cadence_uart.h"
+#include "hw/net/xlnx-zynqmp-can.h"
 #include "hw/ide/ahci.h"
 #include "hw/sd/sdhci.h"
 #include "hw/ssi/xilinx_spips.h"
@@ -XXX,XX +XXX,XX @@
 #include "hw/cpu/cluster.h"
 #include "target/arm/cpu.h"
 #include "qom/object.h"
+#include "net/can_emu.h"
 
 #define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
 OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
 #define XLNX_ZYNQMP_NUM_RPU_CPUS 2
 #define XLNX_ZYNQMP_NUM_GEMS 4
 #define XLNX_ZYNQMP_NUM_UARTS 2
+#define XLNX_ZYNQMP_NUM_CAN 2
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
 #define XLNX_ZYNQMP_NUM_SDHCI 2
 #define XLNX_ZYNQMP_NUM_SPIS 2
 #define XLNX_ZYNQMP_NUM_GDMA_CH 8
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
 
     CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
     CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
+    XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
     SysbusAHCIState sata;
     SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
     XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
     bool virt;
     /* Has the RPU subsystem?  */
     bool has_rpu;
+
+    /* CAN bus. */
+    CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
 };
 
 #endif
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zcu102.c
+++ b/hw/arm/xlnx-zcu102.c
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/qtest.h"
 #include "sysemu/device_tree.h"
 #include "qom/object.h"
+#include "net/can_emu.h"
 
 struct XlnxZCU102 {
     MachineState parent_obj;
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
     bool secure;
     bool virt;
 
+    CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
+
     struct arm_boot_info binfo;
 };
 
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
     object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
                              &error_fatal);
 
+    for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
+        gchar *bus_name = g_strdup_printf("canbus%d", i);
+
+        object_property_set_link(OBJECT(&s->soc), bus_name,
+                                 OBJECT(s->canbus[i]), &error_fatal);
+        g_free(bus_name);
+    }
+
     qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
 
     /* Create and plug in the SD cards */
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
     s->secure = false;
     /* Default to virt (EL2) being disabled */
     s->virt = false;
+    object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
+                             (Object **)&s->canbus[0],
+                             object_property_allow_set_link,
+                             0);
+
+    object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
+                             (Object **)&s->canbus[1],
+                             object_property_allow_set_link,
+                             0);
 }
 
 static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
     21, 22,
 };
 
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
+    0xFF060000, 0xFF070000,
+};
+
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
+    23, 24,
+};
+
 static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
     0xFF160000, 0xFF170000,
 };
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
                                 TYPE_CADENCE_UART);
     }
 
+    for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
+        object_initialize_child(obj, "can[*]", &s->can[i],
+                                TYPE_XLNX_ZYNQMP_CAN);
+    }
+
     object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
 
     for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
                            gic_spi[uart_intr[i]]);
     }
 
+    for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
+        object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
+                                XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
+
+        object_property_set_link(OBJECT(&s->can[i]), "canbus",
+                                 OBJECT(s->canbus[i]), &error_fatal);
+
+        sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
+        if (err) {
+            error_propagate(errp, err);
+            return;
+        }
+        sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
+                           gic_spi[can_intr[i]]);
+    }
+
     object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
                             &error_abort);
     if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
     DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
     DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
                      MemoryRegion *),
+    DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
+                     CanBusState *),
+    DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
+                     CanBusState *),
     DEFINE_PROP_END_OF_LIST()
 };
 
-- 
2.20.1


Implement the scalar form of the MVE VADD insn. This takes the
scalar operand from a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-23-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  4 ++++
 target/arm/mve.decode      |  7 ++++++
 target/arm/mve_helper.c    | 22 +++++++++++++++++++
 target/arm/translate-mve.c | 45 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
 &2op qd qm qn size
+&2scalar qd qn rm size
 
 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
 
+@2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
+
 # Vector loads and stores
 
 # Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
 VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
 
 VRMLSLDAVH   1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
+
+# Scalar operations
+
+VADD_scalar  1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_S(vhsubs, do_vhsub_s)
 DO_2OP_U(vhsubu, do_vhsub_u)
 
 
+#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm) \
+    { \
+        TYPE *d = vd, *n = vn; \
+        TYPE m = rm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            mergemask(&d[H##ESIZE(e)], FN(n[H##ESIZE(e)], m), mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+/* provide unsigned 2-op scalar helpers for all sizes */
+#define DO_2OP_SCALAR_U(OP, FN) \
+    DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
+    DO_2OP_SCALAR(OP##h, 2, uint16_t, FN) \
+    DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
+
+DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
+
 /*
  * Multiply add long dual accumulate ops.
  */
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
+typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
 
 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BU, vmullbu)
 DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
 
+static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
+                          MVEGenTwoOpScalarFn fn)
+{
+    TCGv_ptr qd, qn;
+    TCGv_i32 rm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qn) ||
+        !fn) {
+        return false;
+    }
+    if (a->rm == 13 || a->rm == 15) {
+        /* UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qn = mve_qreg_ptr(a->qn);
+    rm = load_reg(s, a->rm);
+    fn(cpu_env, qd, qn, rm);
+    tcg_temp_free_i32(rm);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qn);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_2OP_SCALAR(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_2scalar *a) \
+    { \
+        static MVEGenTwoOpScalarFn * const fns[] = { \
+            gen_helper_mve_##FN##b, \
+            gen_helper_mve_##FN##h, \
+            gen_helper_mve_##FN##w, \
+            NULL, \
+        }; \
+        return do_2op_scalar(s, a, fns[a->size]); \
+    }
+
+DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
+
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
 {
-- 
2.20.1
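
The DO_2OP_SCALAR shape used above reduces to very little code when
written stand-alone. A sketch (not QEMU code, name illustrative,
element-granular predication simplified to one mask bit per byte):

#include <stdint.h>

static void sketch_vadd_scalar_u8(uint8_t d[16], const uint8_t n[16],
                                  uint32_t rm, uint16_t mask)
{
    uint8_t m = (uint8_t)rm;            /* scalar broadcast from GPR */
    for (int e = 0; e < 16; e++, mask >>= 1) {
        if (mask & 1) {
            d[e] = (uint8_t)(n[e] + m); /* predicated element write */
        }
    }
}
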
Implement the scalar forms of the MVE VSUB and VMUL insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-24-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 2 ++
 target/arm/mve_helper.c    | 2 ++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 14 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vsub_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vsub_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vsub_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vmul_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmul_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmul_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
 # Scalar operations
 
 VADD_scalar  1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
+VSUB_scalar  1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
+VMUL_scalar  1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
     DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
 
 DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
+DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
+DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
 
 /*
  * Multiply add long dual accumulate ops.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
 }
 
 DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
+DO_2OP_SCALAR(VSUB_scalar, vsub_scalar)
+DO_2OP_SCALAR(VMUL_scalar, vmul_scalar)
 
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
-- 
2.20.1
Implement the scalar variants of the MVE VHADD and VHSUB insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-25-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 16 ++++++++++++++++
 target/arm/mve.decode      |  4 ++++
 target/arm/mve_helper.c    |  8 ++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 32 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmul_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmul_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmul_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhadds_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhaddu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubs_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
 VADD_scalar  1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
 VSUB_scalar  1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
 VMUL_scalar  1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
+VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
+VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
+VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
     DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
     DO_2OP_SCALAR(OP##h, 2, uint16_t, FN) \
     DO_2OP_SCALAR(OP##w, 4, uint32_t, FN)
+#define DO_2OP_SCALAR_S(OP, FN) \
+    DO_2OP_SCALAR(OP##b, 1, int8_t, FN) \
+    DO_2OP_SCALAR(OP##h, 2, int16_t, FN) \
+    DO_2OP_SCALAR(OP##w, 4, int32_t, FN)
 
 DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
 DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
 DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
+DO_2OP_SCALAR_S(vhadds_scalar, do_vhadd_s)
+DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
+DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
+DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)
 
 /*
  * Multiply add long dual accumulate ops.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
 DO_2OP_SCALAR(VADD_scalar, vadd_scalar)
 DO_2OP_SCALAR(VSUB_scalar, vsub_scalar)
 DO_2OP_SCALAR(VMUL_scalar, vmul_scalar)
+DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
+DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
+DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
+DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
 
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
-- 
2.20.1
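
The do_vhadd_s/do_vhadd_u helpers that the patch above reuses rely on
the comment quoted earlier in this series: the arithmetic is done in a
type at least twice as wide as the elements, so one expression serves
both signednesses and cannot overflow. A stand-alone sketch for the
byte case (names illustrative, not QEMU code):

#include <stdint.h>

static int8_t sketch_vhadd_s8(int8_t n, int8_t m)
{
    return (int8_t)(((int16_t)n + m) >> 1);  /* halving add, no overflow */
}

static uint8_t sketch_vhsub_u8(uint8_t n, uint8_t m)
{
    return (uint8_t)(((int16_t)n - m) >> 1); /* halving subtract */
}
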
The RAS feature has a block of memory-mapped registers at offset
0x5000 within the PPB. For a "minimal RAS" implementation we provide
no error records and so the only registers that exist in the block
are ERRIIDR and ERRDEVID.

The "RAZ/WI for privileged, BusFault for nonprivileged" behaviour
of the "nvic-default" region is actually valid for minimal-RAS,
so the main benefit of providing an explicit implementation of
the register block is more accurate LOG_UNIMP messages, and a
framework for where we could add a real RAS implementation later
if necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-27-peter.maydell@linaro.org
---
 include/hw/intc/armv7m_nvic.h |  1 +
 hw/intc/armv7m_nvic.c         | 56 +++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ struct NVICState {
     MemoryRegion sysreg_ns_mem;
     MemoryRegion systickmem;
     MemoryRegion systick_ns_mem;
+    MemoryRegion ras_mem;
     MemoryRegion container;
     MemoryRegion defaultmem;
 
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
+static MemTxResult ras_read(void *opaque, hwaddr addr,
+                            uint64_t *data, unsigned size,
+                            MemTxAttrs attrs)
+{
+    if (attrs.user) {
+        return MEMTX_ERROR;
+    }
+
+    switch (addr) {
+    case 0xe10: /* ERRIIDR */
+        /* architect field = Arm; product/variant/revision 0 */
+        *data = 0x43b;
+        break;
+    case 0xfc8: /* ERRDEVID */
+        /* Minimal RAS: we implement 0 error record indexes */
+        *data = 0;
+        break;
+    default:
+        qemu_log_mask(LOG_UNIMP, "Read RAS register offset 0x%x\n",
+                      (uint32_t)addr);
+        *data = 0;
+        break;
+    }
+    return MEMTX_OK;
+}
+
+static MemTxResult ras_write(void *opaque, hwaddr addr,
+                             uint64_t value, unsigned size,
+                             MemTxAttrs attrs)
+{
+    if (attrs.user) {
+        return MEMTX_ERROR;
+    }
+
+    switch (addr) {
+    default:
+        qemu_log_mask(LOG_UNIMP, "Write to RAS register offset 0x%x\n",
+                      (uint32_t)addr);
+        break;
+    }
+    return MEMTX_OK;
+}
+
+static const MemoryRegionOps ras_ops = {
+    .read_with_attrs = ras_read,
+    .write_with_attrs = ras_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+};
+
 /*
  * Unassigned portions of the PPB space are RAZ/WI for privileged
  * accesses, and fault for non-privileged accesses.
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
                                     &s->systick_ns_mem, 1);
     }
 
+    if (cpu_isar_feature(aa32_ras, s->cpu)) {
+        memory_region_init_io(&s->ras_mem, OBJECT(s),
+                              &ras_ops, s, "nvic_ras", 0x1000);
+        memory_region_add_subregion(&s->container, 0x5000, &s->ras_mem);
+    }
+
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
 }
 
-- 
2.20.1


Implement the MVE VBRSR insn, which reverses a specified
number of bits in each element, setting the rest to zero.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-26-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  4 ++++
 target/arm/mve.decode      |  1 +
 target/arm/mve_helper.c    | 43 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  1 +
 4 files changed, 49 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
+
+VBRSR          1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
 DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
 DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)
 
+static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
+{
+    m &= 0xff;
+    if (m == 0) {
+        return 0;
+    }
+    n = revbit8(n);
+    if (m < 8) {
+        n >>= 8 - m;
+    }
+    return n;
+}
+
+static inline uint32_t do_vbrsrh(uint32_t n, uint32_t m)
+{
+    m &= 0xff;
+    if (m == 0) {
+        return 0;
+    }
+    n = revbit16(n);
+    if (m < 16) {
+        n >>= 16 - m;
+    }
+    return n;
+}
+
+static inline uint32_t do_vbrsrw(uint32_t n, uint32_t m)
+{
+    m &= 0xff;
+    if (m == 0) {
+        return 0;
+    }
+    n = revbit32(n);
+    if (m < 32) {
+        n >>= 32 - m;
+    }
+    return n;
+}
+
+DO_2OP_SCALAR(vbrsrb, 1, uint8_t, do_vbrsrb)
+DO_2OP_SCALAR(vbrsrh, 2, uint16_t, do_vbrsrh)
+DO_2OP_SCALAR(vbrsrw, 4, uint32_t, do_vbrsrw)
+
 /*
  * Multiply add long dual accumulate ops.
  */
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
 DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
 DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
 DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
+DO_2OP_SCALAR(VBRSR, vbrsr)
 
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
-- 
2.20.1
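
The do_vbrsrb/h/w helpers above call QEMU's revbit8/16/32 functions
(from include/qemu/host-utils.h). For reference, a classic
mask-and-shift full bit reversal that such a helper can be built on,
shown stand-alone (a sketch; the name is illustrative):

#include <stdint.h>

static uint32_t sketch_revbit32(uint32_t x)
{
    x = ((x & 0x55555555u) << 1) | ((x >> 1) & 0x55555555u); /* bits   */
    x = ((x & 0x33333333u) << 2) | ((x >> 2) & 0x33333333u); /* pairs  */
    x = ((x & 0x0f0f0f0fu) << 4) | ((x >> 4) & 0x0f0f0f0fu); /* nibbles*/
    x = ((x & 0x00ff00ffu) << 8) | ((x >> 8) & 0x00ff00ffu); /* bytes  */
    return (x << 16) | (x >> 16);                            /* halves */
}

VBRSR then keeps only the reversed bottom m bits:
sketch_revbit32(n) >> (32 - m), with m == 0 yielding 0.
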
From: Alex Chen <alex.chen@huawei.com>

We should use printf format specifier "%u" instead of "%d" for
argument of type "unsigned int".

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx25_ccm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx25_ccm.c
+++ b/hw/misc/imx25_ccm.c
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
     case IMX25_CCM_LPIMR1_REG:
         return "lpimr1";
     default:
-        sprintf(unknown, "[%d ?]", reg);
+        sprintf(unknown, "[%u ?]", reg);
         return unknown;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
         freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
     }
 
-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);
 
     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
 
     freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
 
-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);
 
     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
     freq = imx25_ccm_get_mcu_clk(dev)
            / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
 
-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);
 
     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
 
     freq = imx25_ccm_get_ahb_clk(dev) / 2;
 
-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);
 
     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
         break;
     }
 
-    DPRINTF("Clock = %d) = %d\n", clock, freq);
+    DPRINTF("Clock = %d) = %u\n", clock, freq);
 
     return freq;
 }
-- 
2.20.1


Implement the MVE VPST insn, which sets the predicate mask
fields in the VPR to the immediate value encoded in the insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-27-peter.maydell@linaro.org
---
 target/arm/mve.decode      |  4 +++
 target/arm/translate-mve.c | 59 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VBRSR          1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+
+# Predicate operations
+%mask_22_13    22:1 13:3
+VPST           1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
     }
 }
 
+static void mve_update_and_store_eci(DisasContext *s)
+{
+    /*
+     * For insns which don't call a helper function that will call
+     * mve_advance_vpt(), this version updates s->eci and also stores
+     * it out to the CPUState field.
+     */
+    if (s->eci) {
+        mve_update_eci(s);
+        store_cpu_field(tcg_constant_i32(s->eci << 4), condexec_bits);
+    }
+}
+
 static bool mve_skip_first_beat(DisasContext *s)
 {
     /* Return true if PSR.ECI says we must skip the first beat of this insn */
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
     };
     return do_long_dual_acc(s, a, fns[a->x]);
 }
+
+static bool trans_VPST(DisasContext *s, arg_VPST *a)
+{
+    TCGv_i32 vpr;
+
+    /* mask == 0 is a "related encoding" */
+    if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+    /*
+     * Set the VPR mask fields. We take advantage of MASK01 and MASK23
+     * being adjacent fields in the register.
+     *
+     * This insn is not predicated, but it is subject to beat-wise
+     * execution, and the mask is updated on the odd-numbered beats.
+     * So if PSR.ECI says we should skip beat 1, we mustn't update the
+     * 01 mask field.
+     */
+    vpr = load_cpu_field(v7m.vpr);
+    switch (s->eci) {
+    case ECI_NONE:
+    case ECI_A0:
+        /* Update both 01 and 23 fields */
+        tcg_gen_deposit_i32(vpr, vpr,
+                            tcg_constant_i32(a->mask | (a->mask << 4)),
+                            R_V7M_VPR_MASK01_SHIFT,
+                            R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH);
+        break;
+    case ECI_A0A1:
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        /* Update only the 23 mask field */
+        tcg_gen_deposit_i32(vpr, vpr,
+                            tcg_constant_i32(a->mask),
+                            R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    store_cpu_field(vpr, v7m.vpr);
+    mve_update_and_store_eci(s);
+    return true;
+}
-- 
2.20.1
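
The deposit operations in trans_VPST above are easier to follow as
plain C over a 32-bit VPR value. This sketch (not QEMU code; it
assumes the v8.1M VPR layout with MASK01 in bits [19:16] and MASK23 in
bits [23:20]) mirrors the two cases in the switch:

#include <stdint.h>

static uint32_t sketch_vpst(uint32_t vpr, uint32_t mask, int skip_beat1)
{
    if (!skip_beat1) {
        /* update both MASK01 and MASK23 */
        vpr = (vpr & ~0x00ff0000u) | ((mask | (mask << 4)) << 16);
    } else {
        /* ECI says beat 1 already executed: update only MASK23 */
        vpr = (vpr & ~0x00f00000u) | (mask << 20);
    }
    return vpr;
}
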
v8.1M introduces a new TRD flag in the CCR register, which enables
checking for stack frame integrity signatures on SG instructions.
This bit is not banked, and is always RAZ/WI to Non-secure code.
Adjust the code for handling CCR reads and writes to handle this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  2 ++
 hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++--------
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKOFHFNMIGN, 10, 1)
 FIELD(V7M_CCR, DC, 16, 1)
 FIELD(V7M_CCR, IC, 17, 1)
 FIELD(V7M_CCR, BP, 18, 1)
+FIELD(V7M_CCR, LOB, 19, 1)
+FIELD(V7M_CCR, TRD, 20, 1)
 
 /* V7M SCR bits */
 FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return cpu->env.v7m.scr[attrs.secure];
     case 0xd14: /* Configuration Control. */
-        /* The BFHFNMIGN bit is the only non-banked bit; we
-         * keep it in the non-secure copy of the register.
+        /*
+         * Non-banked bits: BFHFNMIGN (stored in the NS copy of the register)
+         * and TRD (stored in the S copy of the register)
          */
         val = cpu->env.v7m.ccr[attrs.secure];
         val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;

Implement the MVE VQADD and VQSUB insns, which perform saturating
addition of a scalar to each element. Note that individual bytes of
each result element are used or discarded according to the predicate
mask, but FPSCR.QC is only set if the predicate mask for the lowest
byte of the element is set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-28-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 16 ++++++++++
 target/arm/mve.decode      |  5 +++
 target/arm/mve_helper.c    | 62 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  4 +++
 4 files changed, 87 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vhsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vhsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqadds_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqaddu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubs_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
44
cpu->env.v7m.scr[attrs.secure] = value;
45
break;
46
case 0xd14: /* Configuration Control. */
47
+ {
48
+ uint32_t mask;
49
+
52
+
50
if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
53
+VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
51
goto bad_offset;
54
+VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
52
}
55
+VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
53
56
+VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
54
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
57
VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
55
- value &= (R_V7M_CCR_STKALIGN_MASK |
58
56
- R_V7M_CCR_BFHFNMIGN_MASK |
59
# Predicate operations
57
- R_V7M_CCR_DIV_0_TRP_MASK |
60
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
58
- R_V7M_CCR_UNALIGN_TRP_MASK |
61
index XXXXXXX..XXXXXXX 100644
59
- R_V7M_CCR_USERSETMPEND_MASK |
62
--- a/target/arm/mve_helper.c
60
- R_V7M_CCR_NONBASETHRDENA_MASK);
63
+++ b/target/arm/mve_helper.c
61
+ mask = R_V7M_CCR_STKALIGN_MASK |
64
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhaddu, do_vhadd_u)
62
+ R_V7M_CCR_BFHFNMIGN_MASK |
65
DO_2OP_S(vhsubs, do_vhsub_s)
63
+ R_V7M_CCR_DIV_0_TRP_MASK |
66
DO_2OP_U(vhsubu, do_vhsub_u)
64
+ R_V7M_CCR_UNALIGN_TRP_MASK |
67
65
+ R_V7M_CCR_USERSETMPEND_MASK |
68
+static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
66
+ R_V7M_CCR_NONBASETHRDENA_MASK;
69
+{
67
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
70
+ if (val > max) {
68
+ /* TRD is always RAZ/WI from NS */
71
+ *s = true;
69
+ mask |= R_V7M_CCR_TRD_MASK;
72
+ return max;
70
+ }
73
+ } else if (val < min) {
71
+ value &= mask;
74
+ *s = true;
72
75
+ return min;
73
if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
74
/* v8M makes NONBASETHRDENA and STKALIGN be RES1 */
75
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
76
77
cpu->env.v7m.ccr[attrs.secure] = value;
78
break;
79
+ }
76
+ }
80
case 0xd24: /* System Handler Control and State (SHCSR) */
77
+ return val;
81
if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
78
+}
82
goto bad_offset;
79
+
80
+#define DO_SQADD_B(n, m, s) do_sat_bhw((int64_t)n + m, INT8_MIN, INT8_MAX, s)
81
+#define DO_SQADD_H(n, m, s) do_sat_bhw((int64_t)n + m, INT16_MIN, INT16_MAX, s)
82
+#define DO_SQADD_W(n, m, s) do_sat_bhw((int64_t)n + m, INT32_MIN, INT32_MAX, s)
83
+
84
+#define DO_UQADD_B(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT8_MAX, s)
85
+#define DO_UQADD_H(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT16_MAX, s)
86
+#define DO_UQADD_W(n, m, s) do_sat_bhw((int64_t)n + m, 0, UINT32_MAX, s)
87
+
88
+#define DO_SQSUB_B(n, m, s) do_sat_bhw((int64_t)n - m, INT8_MIN, INT8_MAX, s)
89
+#define DO_SQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, INT16_MIN, INT16_MAX, s)
90
+#define DO_SQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, INT32_MIN, INT32_MAX, s)
91
+
92
+#define DO_UQSUB_B(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT8_MAX, s)
93
+#define DO_UQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT16_MAX, s)
94
+#define DO_UQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT32_MAX, s)
95
96
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
97
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
98
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)
99
mve_advance_vpt(env); \
100
}
101
102
+#define DO_2OP_SAT_SCALAR(OP, ESIZE, TYPE, FN) \
103
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
104
+ uint32_t rm) \
105
+ { \
106
+ TYPE *d = vd, *n = vn; \
107
+ TYPE m = rm; \
108
+ uint16_t mask = mve_element_mask(env); \
109
+ unsigned e; \
110
+ bool qc = false; \
111
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
112
+ bool sat = false; \
113
+ mergemask(&d[H##ESIZE(e)], FN(n[H##ESIZE(e)], m, &sat), \
114
+ mask); \
115
+ qc |= sat & mask & 1; \
116
+ } \
117
+ if (qc) { \
118
+ env->vfp.qc[0] = qc; \
119
+ } \
120
+ mve_advance_vpt(env); \
121
+ }
122
+
123
/* provide unsigned 2-op scalar helpers for all sizes */
124
#define DO_2OP_SCALAR_U(OP, FN) \
125
DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
126
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR_U(vhaddu_scalar, do_vhadd_u)
127
DO_2OP_SCALAR_S(vhsubs_scalar, do_vhsub_s)
128
DO_2OP_SCALAR_U(vhsubu_scalar, do_vhsub_u)
129
130
+DO_2OP_SAT_SCALAR(vqaddu_scalarb, 1, uint8_t, DO_UQADD_B)
131
+DO_2OP_SAT_SCALAR(vqaddu_scalarh, 2, uint16_t, DO_UQADD_H)
132
+DO_2OP_SAT_SCALAR(vqaddu_scalarw, 4, uint32_t, DO_UQADD_W)
133
+DO_2OP_SAT_SCALAR(vqadds_scalarb, 1, int8_t, DO_SQADD_B)
134
+DO_2OP_SAT_SCALAR(vqadds_scalarh, 2, int16_t, DO_SQADD_H)
135
+DO_2OP_SAT_SCALAR(vqadds_scalarw, 4, int32_t, DO_SQADD_W)
136
+
137
+DO_2OP_SAT_SCALAR(vqsubu_scalarb, 1, uint8_t, DO_UQSUB_B)
138
+DO_2OP_SAT_SCALAR(vqsubu_scalarh, 2, uint16_t, DO_UQSUB_H)
139
+DO_2OP_SAT_SCALAR(vqsubu_scalarw, 4, uint32_t, DO_UQSUB_W)
140
+DO_2OP_SAT_SCALAR(vqsubs_scalarb, 1, int8_t, DO_SQSUB_B)
141
+DO_2OP_SAT_SCALAR(vqsubs_scalarh, 2, int16_t, DO_SQSUB_H)
142
+DO_2OP_SAT_SCALAR(vqsubs_scalarw, 4, int32_t, DO_SQSUB_W)
143
+
144
static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
145
{
146
m &= 0xff;
147
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
148
index XXXXXXX..XXXXXXX 100644
149
--- a/target/arm/translate-mve.c
150
+++ b/target/arm/translate-mve.c
151
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VHADD_S_scalar, vhadds_scalar)
152
DO_2OP_SCALAR(VHADD_U_scalar, vhaddu_scalar)
153
DO_2OP_SCALAR(VHSUB_S_scalar, vhsubs_scalar)
154
DO_2OP_SCALAR(VHSUB_U_scalar, vhsubu_scalar)
155
+DO_2OP_SCALAR(VQADD_S_scalar, vqadds_scalar)
156
+DO_2OP_SCALAR(VQADD_U_scalar, vqaddu_scalar)
157
+DO_2OP_SCALAR(VQSUB_S_scalar, vqsubs_scalar)
158
+DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
159
DO_2OP_SCALAR(VBRSR, vbrsr)
160
161
static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
83
--
162
--
84
2.20.1
163
2.20.1
85
164
86
165
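The do_sat_bhw() helper introduced above is the clamp-and-flag primitive shared by all of the saturating MVE ops. A standalone sketch of how it behaves for an unsigned byte VQADD lane (demo code with the same semantics, not the QEMU source):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static int64_t sat_range(int64_t val, int64_t min, int64_t max, bool *sat)
    {
        if (val > max) {
            *sat = true;
            return max;
        }
        if (val < min) {
            *sat = true;
            return min;
        }
        return val;
    }

    int main(void)
    {
        bool sat = false;
        /* unsigned byte lane: 200 + 100 clamps to 255 and records saturation */
        uint8_t r = sat_range((int64_t)200 + 100, 0, UINT8_MAX, &sat);
        assert(r == 255 && sat);
        return 0;
    }

Note the widening cast before the add: the DO_UQADD/DO_SQSUB macros above do the same, so the intermediate sum can never wrap before it is clamped.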
In v8.1M a new exception return check is added which may cause a NOCP
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
we must check whether access to CP10 from the Security state of the
returning exception is disabled; if it is then we must take a fault.

(Note that for our implementation CPPWR is always RAZ/WI and so can
never cause CP10 accesses to fail.)

The other v8.1M change to this register-clearing code is that if MVE
is implemented VPR must also be cleared, so add a TODO comment to
that effect.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             v7m_exception_taken(cpu, excret, true, false);
             return;
         } else {
-            /* Clear s0..s15 and FPSCR */
+            if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+                /* v8.1M adds this NOCP check */
+                bool nsacr_pass = exc_secure ||
+                    extract32(env->v7m.nsacr, 10, 1);
+                bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
+                if (!nsacr_pass) {
+                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
+                    env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+                    qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                        "stackframe: NSACR prevents clearing FPU registers\n");
+                    v7m_exception_taken(cpu, excret, true, false);
+                } else if (!cpacr_pass) {
+                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                            exc_secure);
+                    env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
+                    qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                        "stackframe: CPACR prevents clearing FPU registers\n");
+                    v7m_exception_taken(cpu, excret, true, false);
+                }
+            }
+            /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
             int i;
 
             for (i = 0; i < 16; i += 2) {
--
2.20.1


Implement the MVE VQDMULH and VQRDMULH scalar insns, which multiply
elements by the scalar, double, possibly round, take the high half
and saturate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-29-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 25 +++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 38 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubu_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqsubu_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqsubu_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmulh_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmulh_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
 VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
 VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
 
+VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
+VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
+
 # Predicate operations
 %mask_22_13 22:1 13:3
 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 #define DO_UQSUB_H(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT16_MAX, s)
 #define DO_UQSUB_W(n, m, s) do_sat_bhw((int64_t)n - m, 0, UINT32_MAX, s)
 
+/*
+ * For QDMULH and QRDMULH we simplify "double and shift by esize" into
+ * "shift by esize-1", adjusting the QRDMULH rounding constant to match.
+ */
+#define DO_QDMULH_B(n, m, s) do_sat_bhw(((int64_t)n * m) >> 7, \
+                                        INT8_MIN, INT8_MAX, s)
+#define DO_QDMULH_H(n, m, s) do_sat_bhw(((int64_t)n * m) >> 15, \
+                                        INT16_MIN, INT16_MAX, s)
+#define DO_QDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m) >> 31, \
+                                        INT32_MIN, INT32_MAX, s)
+
+#define DO_QRDMULH_B(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 6)) >> 7, \
+                                         INT8_MIN, INT8_MAX, s)
+#define DO_QRDMULH_H(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 14)) >> 15, \
+                                         INT16_MIN, INT16_MAX, s)
+#define DO_QRDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 30)) >> 31, \
+                                         INT32_MIN, INT32_MAX, s)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
                                 uint32_t rm) \
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqsubs_scalarb, 1, int8_t, DO_SQSUB_B)
 DO_2OP_SAT_SCALAR(vqsubs_scalarh, 2, int16_t, DO_SQSUB_H)
 DO_2OP_SAT_SCALAR(vqsubs_scalarw, 4, int32_t, DO_SQSUB_W)
 
+DO_2OP_SAT_SCALAR(vqdmulh_scalarb, 1, int8_t, DO_QDMULH_B)
+DO_2OP_SAT_SCALAR(vqdmulh_scalarh, 2, int16_t, DO_QDMULH_H)
+DO_2OP_SAT_SCALAR(vqdmulh_scalarw, 4, int32_t, DO_QDMULH_W)
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
+DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
+
 static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
 {
     m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQADD_S_scalar, vqadds_scalar)
 DO_2OP_SCALAR(VQADD_U_scalar, vqaddu_scalar)
 DO_2OP_SCALAR(VQSUB_S_scalar, vqsubs_scalar)
 DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
+DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
+DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
 
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
--
2.20.1
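The helper comment above compresses "double, take the high half" into a single shift. The identity for 16-bit elements is (2*n*m) >> 16 == (n*m) >> 15, and halving the pre-shift value also halves the rounding constant for the QRDMULH form. A short demonstration (demo code only):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t n = 12345, m = -23456;
        int64_t p = (int64_t)n * m;

        /* VQDMULH.16: double then take the high half == arithmetic shift by 15 */
        assert((2 * p) >> 16 == p >> 15);

        /* VQRDMULH.16: the 1 << 15 rounding constant becomes 1 << 14 */
        assert((2 * p + (1 << 15)) >> 16 == (p + (1 << 14)) >> 15);
        return 0;
    }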
In v8.1M a REVIDR register is defined, which is at address 0xe00ecfc
and is a read-only IMPDEF register providing implementation specific
minor revision information, like the v8A REVIDR_EL1. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-19-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     }
+    case 0xcfc:
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V8_1M)) {
+            goto bad_offset;
+        }
+        return cpu->revidr;
     case 0xd00: /* CPUID Base. */
         return cpu->midr;
     case 0xd04: /* Interrupt Control State (ICSR) */
--
2.20.1


Implement the MVE VQDMULL scalar insn. This multiplies the top or
bottom half of each element by the scalar, doubles and saturates
to a double-width result.

Note that this encoding overlaps with VQADD and VQSUB; it uses
what in VQADD and VQSUB would be the 'size=0b11' encoding.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-30-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  5 +++
 target/arm/mve.decode      | 23 +++++++++++---
 target/arm/mve_helper.c    | 65 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 30 ++++++++++++++++++
 4 files changed, 119 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vbrsrb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vbrsrw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 %qm 5:1 1:3
 %qn 7:1 17:3
 
+# VQDMULL has size in bit 28: 0 for 16 bit, 1 for 32 bit
+%size_28 28:1 !function=plus_1
+
 &vldr_vstr rn qd imm p a w size l u
 &1op qd qm size
 &2op qd qm qn size
@@ -XXX,XX +XXX,XX @@
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
 
 @2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
+@2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
 
 # Vector loads and stores
 
@@ -XXX,XX +XXX,XX @@ VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
 VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 
-VQADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
-VQADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
-VQSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
-VQSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+{
+  VQADD_S_scalar  1110 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
+  VQADD_U_scalar  1111 1110 0 . .. ... 0 ... 0 1111 . 110 .... @2scalar
+  VQDMULLB_scalar 111 . 1110 0 . 11 ... 0 ... 0 1111 . 110 .... @2scalar_nosz \
+                  size=%size_28
+}
+
+{
+  VQSUB_S_scalar  1110 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+  VQSUB_U_scalar  1111 1110 0 . .. ... 0 ... 1 1111 . 110 .... @2scalar
+  VQDMULLT_scalar 111 . 1110 0 . 11 ... 0 ... 1 1111 . 110 .... @2scalar_nosz \
+                  size=%size_28
+}
+
 VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
 
 VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 
+
 # Predicate operations
 %mask_22_13 22:1 13:3
 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
 
+/*
+ * Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the
+ * input (smaller) type and LESIZE, LTYPE, LH for the output (long) type.
+ * SATMASK specifies which bits of the predicate mask matter for determining
+ * whether to propagate a saturation indication into FPSCR.QC -- for
+ * the 16x16->32 case we must check only the bit corresponding to the T or B
+ * half that we used, but for the 32x32->64 case we propagate if the mask
+ * bit is set for either half.
+ */
+#define DO_2OP_SAT_SCALAR_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN, SATMASK) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm) \
+    { \
+        LTYPE *d = vd; \
+        TYPE *n = vn; \
+        TYPE m = rm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned le; \
+        bool qc = false; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            bool sat = false; \
+            LTYPE r = FN((LTYPE)n[H##ESIZE(le * 2 + TOP)], m, &sat); \
+            mergemask(&d[H##LESIZE(le)], r, mask); \
+            qc |= sat && (mask & SATMASK); \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+static inline int32_t do_qdmullh(int16_t n, int16_t m, bool *sat)
+{
+    int64_t r = ((int64_t)n * m) * 2;
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat);
+}
+
+static inline int64_t do_qdmullw(int32_t n, int32_t m, bool *sat)
+{
+    /* The multiply can't overflow, but the doubling might */
+    int64_t r = (int64_t)n * m;
+    if (r > INT64_MAX / 2) {
+        *sat = true;
+        return INT64_MAX;
+    } else if (r < INT64_MIN / 2) {
+        *sat = true;
+        return INT64_MIN;
+    } else {
+        return r * 2;
+    }
+}
+
+#define SATMASK16B 1
+#define SATMASK16T (1 << 2)
+#define SATMASK32 ((1 << 4) | 1)
+
+DO_2OP_SAT_SCALAR_L(vqdmullb_scalarh, 0, 2, int16_t, 4, int32_t, \
+                    do_qdmullh, SATMASK16B)
+DO_2OP_SAT_SCALAR_L(vqdmullb_scalarw, 0, 4, int32_t, 8, int64_t, \
+                    do_qdmullw, SATMASK32)
+DO_2OP_SAT_SCALAR_L(vqdmullt_scalarh, 1, 2, int16_t, 4, int32_t, \
+                    do_qdmullh, SATMASK16T)
+DO_2OP_SAT_SCALAR_L(vqdmullt_scalarw, 1, 4, int32_t, 8, int64_t, \
+                    do_qdmullw, SATMASK32)
+
 static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
 {
     m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
 DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
 
+static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
+{
+    static MVEGenTwoOpScalarFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullb_scalarh,
+        gen_helper_mve_vqdmullb_scalarw,
+        NULL,
+    };
+    if (a->qd == a->qn && a->size == MO_32) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op_scalar(s, a, fns[a->size]);
+}
+
+static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a)
+{
+    static MVEGenTwoOpScalarFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullt_scalarh,
+        gen_helper_mve_vqdmullt_scalarw,
+        NULL,
+    };
+    if (a->qd == a->qn && a->size == MO_32) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op_scalar(s, a, fns[a->size]);
+}
+
 static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
                              MVEGenDualAccOpFn *fn)
 {
--
2.20.1
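do_qdmullw() above leans on the fact that a 32x32 signed multiply always fits in 64 bits, so only the final doubling can overflow; the boundary case is INT32_MIN * INT32_MIN, whose product is 2^62 and whose double is one past INT64_MAX. A sketch of that check in isolation (demo code):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static int64_t qdmull32(int32_t n, int32_t m, bool *sat)
    {
        int64_t r = (int64_t)n * m;    /* cannot overflow: |r| <= 2^62 */
        if (r > INT64_MAX / 2) {
            *sat = true;
            return INT64_MAX;
        }
        if (r < INT64_MIN / 2) {
            *sat = true;
            return INT64_MIN;
        }
        return r * 2;
    }

    int main(void)
    {
        bool sat = false;
        assert(qdmull32(INT32_MIN, INT32_MIN, &sat) == INT64_MAX && sat);
        return 0;
    }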
In arm_cpu_realizefn() we check whether the board code disabled EL3
via the has_el3 CPU object property, which we create if the CPU
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
the ID_PFR1 and ID_AA64PFR0 registers.

This codepath was incorrectly being taken for M-profile CPUs, which
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
the M-profile Security extension and so should have non-zero values
in the ID_PFR1.Security field.

Restrict the handling of the feature flag to A/R-profile cores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
---
 target/arm/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
 
-    if (!cpu->has_el3) {
+    if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
         /* If the has_el3 CPU property is disabled then we need to disable the
          * feature.
          */
--
2.20.1


Implement the vector forms of the MVE VQDMULH and VQRDMULH insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-31-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 27 +++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 40 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
 VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
 VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
 
+VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
+VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
+
 # Vector miscellaneous
 
 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
         mve_advance_vpt(env); \
     }
 
+#define DO_2OP_SAT(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    { \
+        TYPE *d = vd, *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        bool qc = false; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            bool sat = false; \
+            TYPE r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)], &sat); \
+            mergemask(&d[H##ESIZE(e)], r, mask); \
+            qc |= sat & mask & 1; \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 #define DO_QRDMULH_W(n, m, s) do_sat_bhw(((int64_t)n * m + (1 << 30)) >> 31, \
                                          INT32_MIN, INT32_MAX, s)
 
+DO_2OP_SAT(vqdmulhb, 1, int8_t, DO_QDMULH_B)
+DO_2OP_SAT(vqdmulhh, 2, int16_t, DO_QDMULH_H)
+DO_2OP_SAT(vqdmulhw, 4, int32_t, DO_QDMULH_W)
+
+DO_2OP_SAT(vqrdmulhb, 1, int8_t, DO_QRDMULH_B)
+DO_2OP_SAT(vqrdmulhh, 2, int16_t, DO_QRDMULH_H)
+DO_2OP_SAT(vqrdmulhw, 4, int32_t, DO_QRDMULH_W)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
                                 uint32_t rm) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_BS, vmullbs)
 DO_2OP(VMULL_BU, vmullbu)
 DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
+DO_2OP(VQDMULH, vqdmulh)
+DO_2OP(VQRDMULH, vqrdmulh)
 
 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
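In DO_2OP_SAT above, qc |= sat & mask & 1 encodes the rule stated back in the scalar VQADD/VQSUB patch: a lane's saturation reaches FPSCR.QC only when the predicate bit for the lowest byte of that element is set. A plain-C sketch of one 16-bit element step, with the mergemask() byte selection spelled out (demo code, not the QEMU source):

    #include <stdbool.h>
    #include <stdint.h>

    /* one element step: mask holds one predicate bit per byte */
    static bool elem_step(int16_t *d, int16_t r, bool sat, unsigned mask)
    {
        bool qc = false;
        if (mask & 1) {                      /* low byte predicated in */
            qc = sat;                        /* only this lane can set QC */
            *d = (*d & ~0xff) | (r & 0xff);
        }
        if (mask & 2) {                      /* high byte predicated in */
            *d = (*d & 0xff) | (r & ~0xff);
        }
        return qc;
    }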
In v8.1M, vector table fetch failures don't set HFSR.FORCED (see rule
R_LLRP). (In previous versions of the architecture this was either
required or IMPDEF.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-18-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ load_fail:
      * The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
      * secure); otherwise it targets the same security state as the
      * underlying exception.
+     * In v8.1M HardFaults from vector table fetch fails don't set FORCED.
      */
     if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
         exc_secure = true;
     }
-    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
+    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK;
+    if (!arm_feature(env, ARM_FEATURE_V8_1M)) {
+        env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
+    }
     armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
     return false;
 }
--
2.20.1


Implement the vector forms of the MVE VQADD and VQSUB insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-32-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 16 ++++++++++++++++
 target/arm/mve.decode      |  5 +++++
 target/arm/mve_helper.c    | 14 ++++++++++++++
 target/arm/translate-mve.c |  4 ++++
 4 files changed, 39 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqrdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vqaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqsubsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
 VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
 VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
 
+VQADD_S 111 0 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
+VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
+VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
+VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
+
 # Vector miscellaneous
 
 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqrdmulhb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT(vqrdmulhh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT(vqrdmulhw, 4, int32_t, DO_QRDMULH_W)
 
+DO_2OP_SAT(vqaddub, 1, uint8_t, DO_UQADD_B)
+DO_2OP_SAT(vqadduh, 2, uint16_t, DO_UQADD_H)
+DO_2OP_SAT(vqadduw, 4, uint32_t, DO_UQADD_W)
+DO_2OP_SAT(vqaddsb, 1, int8_t, DO_SQADD_B)
+DO_2OP_SAT(vqaddsh, 2, int16_t, DO_SQADD_H)
+DO_2OP_SAT(vqaddsw, 4, int32_t, DO_SQADD_W)
+
+DO_2OP_SAT(vqsubub, 1, uint8_t, DO_UQSUB_B)
+DO_2OP_SAT(vqsubuh, 2, uint16_t, DO_UQSUB_H)
+DO_2OP_SAT(vqsubuw, 4, uint32_t, DO_UQSUB_W)
+DO_2OP_SAT(vqsubsb, 1, int8_t, DO_SQSUB_B)
+DO_2OP_SAT(vqsubsh, 2, int16_t, DO_SQSUB_H)
+DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
                                 uint32_t rm) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VMULL_TS, vmullts)
 DO_2OP(VMULL_TU, vmulltu)
 DO_2OP(VQDMULH, vqdmulh)
 DO_2OP(VQRDMULH, vqrdmulh)
+DO_2OP(VQADD_S, vqadds)
+DO_2OP(VQADD_U, vqaddu)
+DO_2OP(VQSUB_S, vqsubs)
+DO_2OP(VQSUB_U, vqsubu)
 
 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
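For the old-series HFSR patch above, the behaviour change is easy to state as a small truth table: a vector-table fetch failure always sets HFSR.VECTTBL, and sets HFSR.FORCED only before v8.1M. A tiny sketch (demo code; bit positions follow the M-profile HFSR layout, VECTTBL at bit 1 and FORCED at bit 30):

    #include <stdbool.h>
    #include <stdint.h>

    #define HFSR_VECTTBL (1u << 1)
    #define HFSR_FORCED  (1u << 30)

    static uint32_t hfsr_for_vecttbl_fault(bool is_v8_1m)
    {
        uint32_t hfsr = HFSR_VECTTBL;
        if (!is_v8_1m) {
            hfsr |= HFSR_FORCED;   /* pre-v8.1M: the escalated fault is FORCED */
        }
        return hfsr;
    }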
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
it for QEMU as well. A53 was already enabled there.

1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
     [SBSA_GWDT] = 16,
 };
 
+static const char * const valid_cpus[] = {
+    ARM_CPU_TYPE_NAME("cortex-a53"),
+    ARM_CPU_TYPE_NAME("cortex-a57"),
+    ARM_CPU_TYPE_NAME("cortex-a72"),
+};
+
+static bool cpu_type_valid(const char *cpu)
+{
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
+        if (strcmp(cpu, valid_cpus[i]) == 0) {
+            return true;
+        }
+    }
+    return false;
+}
+
 static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
 {
     uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
     const CPUArchIdList *possible_cpus;
     int n, sbsa_max_cpus;
 
-    if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
-        error_report("sbsa-ref: CPU type other than the built-in "
-                     "cortex-a57 not supported");
+    if (!cpu_type_valid(machine->cpu_type)) {
+        error_report("mach-virt: CPU type %s not supported", machine->cpu_type);
         exit(1);
     }
--
2.20.1


Implement the MVE VQSHL insn (encoding T4, which is the
vector-shift-by-vector version).

The DO_SQSHL_OP and DO_UQSHL_OP macros here are derived from
the neon_helper.c code for qshl_u{8,16,32} and qshl_s{8,16,32}.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-33-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      | 12 ++++++++++++
 target/arm/mve_helper.c    | 34 ++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 56 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
 @2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
 
+# The _rev suffix indicates that Vn and Vm are reversed. This is
+# the case for shifts. In the Arm ARM these insns are documented
+# with the Vm and Vn fields in their usual places, but in the
+# assembly the operands are listed "backwards", ie in the order
+# Qd, Qm, Qn where other insns use Qd, Qn, Qm. For QEMU we choose
+# to consider Vm and Vn as being in different fields in the insn.
+# This gives us consistency with A64 and Neon.
+@2op_rev .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qn qn=%qm
+
 @2scalar .... .... .. size:2 .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
 @2scalar_nosz .... .... .... .... .... .... .... rm:4 &2scalar qd=%qd qn=%qn
 
@@ -XXX,XX +XXX,XX @@ VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
 VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 
+VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
+VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
+
 # Vector miscellaneous
 
 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
         mve_advance_vpt(env); \
     }
 
+/* provide unsigned 2-op helpers for all sizes */
+#define DO_2OP_SAT_U(OP, FN) \
+    DO_2OP_SAT(OP##b, 1, uint8_t, FN) \
+    DO_2OP_SAT(OP##h, 2, uint16_t, FN) \
+    DO_2OP_SAT(OP##w, 4, uint32_t, FN)
+
+/* provide signed 2-op helpers for all sizes */
+#define DO_2OP_SAT_S(OP, FN) \
+    DO_2OP_SAT(OP##b, 1, int8_t, FN) \
+    DO_2OP_SAT(OP##h, 2, int16_t, FN) \
+    DO_2OP_SAT(OP##w, 4, int32_t, FN)
+
 #define DO_AND(N, M) ((N) & (M))
 #define DO_BIC(N, M) ((N) & ~(M))
 #define DO_ORR(N, M) ((N) | (M))
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsb, 1, int8_t, DO_SQSUB_B)
 DO_2OP_SAT(vqsubsh, 2, int16_t, DO_SQSUB_H)
 DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
 
+/*
+ * This wrapper fixes up the impedance mismatch between do_sqrshl_bhs()
+ * and friends wanting a uint32_t* sat and our needing a bool*.
+ */
+#define WRAP_QRSHL_HELPER(FN, N, M, ROUND, satp) \
+    ({ \
+        uint32_t su32 = 0; \
+        typeof(N) r = FN(N, (int8_t)(M), sizeof(N) * 8, ROUND, &su32); \
+        if (su32) { \
+            *satp = true; \
+        } \
+        r; \
+    })
+
+#define DO_SQSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
+#define DO_UQSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, false, satp)
+
+DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
+DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
+
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
                                 uint32_t rm) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQADD_S, vqadds)
 DO_2OP(VQADD_U, vqaddu)
 DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
+DO_2OP(VQSHL_S, vqshls)
+DO_2OP(VQSHL_U, vqshlu)
 
 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
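The qshl-style helpers that VQSHL borrows treat the per-element shift count as a signed byte: positive counts shift left and saturate on overflow, negative counts shift right. A simplified sketch of the unsigned non-rounding case (demo code; the real do_uqrshl_bhs() also handles rounding and edge-case counts more precisely):

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t uqshl32(uint32_t v, int8_t shift, bool *sat)
    {
        if (shift < 0) {
            int s = -shift;
            return s >= 32 ? 0 : v >> s;    /* right shifts cannot saturate */
        }
        if (shift >= 32) {
            if (v) {
                *sat = true;
                return UINT32_MAX;
            }
            return 0;
        }
        uint32_t r = v << shift;
        if ((r >> shift) != v) {            /* bits were shifted out */
            *sat = true;
            return UINT32_MAX;
        }
        return r;
    }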
In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
are zeroed for an exception taken to Non-secure state; for an
exception taken to Secure state they become UNKNOWN, and we chose to
leave them at their previous values.

In v8.1M the behaviour is specified more tightly and these registers
are always zeroed regardless of the security state that the exception
targets (see rule R_KPZV). Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
      * Clear registers if necessary to prevent non-secure exception
      * code being able to see register values from secure code.
      * Where register values become architecturally UNKNOWN we leave
-     * them with their previous values.
+     * them with their previous values. v8.1M is tighter than v8.0M
+     * here and always zeroes the caller-saved registers regardless
+     * of the security state the exception is targeting.
      */
     if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        if (!targets_secure) {
+        if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
             /*
              * Always clear the caller-saved registers (they have been
              * pushed to the stack earlier in v7m_push_stack()).
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
              * v7m_push_callee_stack()).
              */
             int i;
+            /*
+             * r4..r11 are callee-saves, zero only if background
+             * state was Secure (EXCRET.S == 1) and exception
+             * targets Non-secure state
+             */
+            bool zero_callee_saves = !targets_secure &&
+                (lr & R_V7M_EXCRET_S_MASK);
 
             for (i = 0; i < 13; i++) {
-                /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
-                if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
+                if (i < 4 || i > 11 || zero_callee_saves) {
                     env->regs[i] = 0;
                 }
             }
--
2.20.1


Implement the MVE VQRSHL (vector) insn. Again, the code to perform
the actual shifts is borrowed from neon_helper.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-34-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 6 ++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 19 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vqrshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 
+VQRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
+VQRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
+
 # Vector miscellaneous
 
 VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT(vqsubsw, 4, int32_t, DO_SQSUB_W)
     WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, false, satp)
 #define DO_UQSHL_OP(N, M, satp) \
     WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, false, satp)
+#define DO_SQRSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_sqrshl_bhs, N, M, true, satp)
+#define DO_UQRSHL_OP(N, M, satp) \
+    WRAP_QRSHL_HELPER(do_uqrshl_bhs, N, M, true, satp)
 
 DO_2OP_SAT_S(vqshls, DO_SQSHL_OP)
 DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
+DO_2OP_SAT_S(vqrshls, DO_SQRSHL_OP)
+DO_2OP_SAT_U(vqrshlu, DO_UQRSHL_OP)
 
 #define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
     void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
 DO_2OP(VQSHL_S, vqshls)
 DO_2OP(VQSHL_U, vqshlu)
+DO_2OP(VQRSHL_S, vqrshls)
+DO_2OP(VQRSHL_U, vqrshlu)
 
 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
--
2.20.1
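The only difference between the QSHL and QRSHL helper macros above is the ROUND argument to the wrapper: a rounding right shift adds 1 << (count - 1) before shifting, so it rounds to nearest instead of truncating. A short demonstration (demo code; the count must be at least 1):

    #include <assert.h>
    #include <stdint.h>

    static int32_t round_shr32(int32_t v, unsigned s)
    {
        return (int32_t)(((int64_t)v + ((int64_t)1 << (s - 1))) >> s);
    }

    int main(void)
    {
        assert((7 >> 2) == 1);            /* truncating shift: 7/4 -> 1 */
        assert(round_shr32(7, 2) == 2);   /* rounding: 1.75 -> 2 */
        assert(round_shr32(-7, 2) == -2); /* -1.75 rounds to -2 */
        return 0;
    }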
The FPDSCR register has a similar layout to the FPSCR. In v8.1M it
gains new fields FZ16 (if half-precision floating point is supported)
and LTPSIZE (always reads as 4). Update the reset value and the code
that handles writes to this register accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-16-peter.maydell@linaro.org
---
 target/arm/cpu.h      | 5 +++++
 hw/intc/armv7m_nvic.c | 9 ++++++++-
 target/arm/cpu.c      | 3 +++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
 #define FPCR_IXE (1 << 12) /* Inexact exception trap enable */
 #define FPCR_IDE (1 << 15) /* Input Denormal exception trap enable */
 #define FPCR_FZ16 (1 << 19) /* ARMv8.2+, FP16 flush-to-zero */
+#define FPCR_RMODE_MASK (3 << 22) /* Rounding mode */
 #define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
 #define FPCR_DN (1 << 25) /* Default NaN enable bit */
+#define FPCR_AHP (1 << 26) /* Alternative half-precision */
 #define FPCR_QC (1 << 27) /* Cumulative saturation bit */
 #define FPCR_V (1 << 28) /* FP overflow flag */
 #define FPCR_C (1 << 29) /* FP carry flag */
 #define FPCR_Z (1 << 30) /* FP zero flag */
 #define FPCR_N (1 << 31) /* FP negative flag */
 
+#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
+#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
+
 #define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
 #define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
 
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         break;
     case 0xf3c: /* FPDSCR */
         if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
-            value &= 0x07c00000;
+            uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;
+            if (cpu_isar_feature(any_fp16, cpu)) {
+                mask |= FPCR_FZ16;
+            }
+            value &= mask;
+            if (cpu_isar_feature(aa32_lob, cpu)) {
+                value |= 4 << FPCR_LTPSIZE_SHIFT;
+            }
             cpu->env.v7m.fpdscr[attrs.secure] = value;
         }
         break;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
          * always reset to 4.
          */
         env->v7m.ltpsize = 4;
+        /* The LTPSIZE field in FPDSCR is constant and reads as 4. */
+        env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
+        env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
     }
 
     if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
--
2.20.1


Implement the MVE VSHL insn (vector form).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-35-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 3 +++
 target/arm/mve_helper.c    | 6 ++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 19 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqsubub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqsubuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 1 ... 0 @2op
 VQSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
 
+VSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
+VSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
+
 VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
 
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhaddu, do_vhadd_u)
 DO_2OP_S(vhsubs, do_vhsub_s)
 DO_2OP_U(vhsubu, do_vhsub_u)
 
+#define DO_VSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
+#define DO_VSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
+
+DO_2OP_S(vshls, DO_VSHLS)
+DO_2OP_U(vshlu, DO_VSHLU)
+
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
     if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQADD_S, vqadds)
 DO_2OP(VQADD_U, vqaddu)
 DO_2OP(VQSUB_S, vqsubs)
 DO_2OP(VQSUB_U, vqsubu)
+DO_2OP(VSHL_S, vshls)
+DO_2OP(VSHL_U, vshlu)
 DO_2OP(VQSHL_S, vqshls)
 DO_2OP(VQSHL_U, vqshlu)
 DO_2OP(VQRSHL_S, vqrshls)
--
2.20.1
diff view generated by jsdifflib
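A note on the shift semantics above: DO_VSHLS and DO_VSHLU pass each lane, plus the signed least significant byte of the other operand's lane, to do_sqrshl_bhs()/do_uqrshl_bhs(), which this excerpt does not show. As a reading aid, here is a minimal plain-C sketch of the signed, non-saturating, non-rounding case only; the function name is invented and this is not the QEMU code:

    #include <stdint.h>

    /* Minimal sketch: one signed 32-bit lane of VSHL (vector). */
    static int32_t vshl_lane_s32(int32_t n, int8_t shift)
    {
        if (shift <= -32) {
            return n >> 31;              /* all bits shifted out: sign fill */
        } else if (shift < 0) {
            return n >> -shift;          /* negative count shifts right */
        } else if (shift < 32) {
            return (uint32_t)n << shift; /* cast avoids signed-overflow UB */
        }
        return 0;                        /* left shift by >= 32 gives 0 */
    }

The real helper additionally takes the rounding and saturation knobs that the VRSHL and VQSHL variants below select.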
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
like the existing FPSCR, except that it reads and writes only bits
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
permitted.)

Implement the register. Since we don't yet implement MVE, we handle
the QC bit as RES0, with todo comments for where we will need to add
support later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
---
target/arm/cpu.h | 13 +++++++++++++
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
2 files changed, 40 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
#define FPCR_FZ (1 << 24)   /* Flush-to-zero enable bit */
#define FPCR_DN (1 << 25)   /* Default NaN enable bit */
#define FPCR_QC (1 << 27)   /* Cumulative saturation bit */
+#define FPCR_V (1 << 28)    /* FP overflow flag */
+#define FPCR_C (1 << 29)    /* FP carry flag */
+#define FPCR_Z (1 << 30)    /* FP zero flag */
+#define FPCR_N (1 << 31)    /* FP negative flag */
+
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)

static inline uint32_t vfp_get_fpsr(CPUARMState *env)
{
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
#define ARM_VFP_FPEXC 8
#define ARM_VFP_FPINST 9
#define ARM_VFP_FPINST2 10
+/* These ones are M-profile only */
+#define ARM_VFP_FPSCR_NZCVQC 2
+#define ARM_VFP_VPR 12
+#define ARM_VFP_P0 13
+#define ARM_VFP_FPCXT_NS 14
+#define ARM_VFP_FPCXT_S 15

/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
#define QEMU_VFP_FPSCR_NZCV 0xffff
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
    case ARM_VFP_FPSCR:
    case QEMU_VFP_FPSCR_NZCV:
        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return false;
+        }
+        break;
    default:
        return FPSysRegCheckFailed;
    }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
        tcg_temp_free_i32(tmp);
        gen_lookup_tb(s);
        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+    {
+        TCGv_i32 fpscr;
+        tmp = loadfn(s, opaque);
+        /*
+         * TODO: when we implement MVE, write the QC bit.
+         * For non-MVE, QC is RES0.
+         */
+        tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
+        fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
+        tcg_gen_or_i32(fpscr, fpscr, tmp);
+        store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
    default:
        g_assert_not_reached();
    }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
        gen_helper_vfp_get_fpscr(tmp, cpu_env);
        storefn(s, opaque, tmp);
        break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        /*
+         * TODO: MVE has a QC bit, which we probably won't store
+         * in the xregs[] field. For non-MVE, where QC is RES0,
+         * we can just fall through to the FPSCR_NZCV case.
+         */
    case QEMU_VFP_FPSCR_NZCV:
        /*
         * Read just NZCV; this is a special case to avoid the
--
2.20.1

Implement the MVE VRSHL insn (vector form).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-36-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 8 ++++++++
target/arm/mve.decode | 3 +++
target/arm/mve_helper.c | 4 ++++
target/arm/translate-mve.c | 2 ++
4 files changed, 17 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vrshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vqshlsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqshlsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqshlsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 1 ... 0 @2op
VSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev
VSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 0 ... 0 @2op_rev

+VRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 0 ... 0 @2op_rev
+VRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 0 ... 0 @2op_rev
+
VQSHL_S 111 0 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vhsubu, do_vhsub_u)

#define DO_VSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
#define DO_VSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, false, NULL)
+#define DO_VRSHLS(N, M) do_sqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, true, NULL)
+#define DO_VRSHLU(N, M) do_uqrshl_bhs(N, (int8_t)(M), sizeof(N) * 8, true, NULL)

DO_2OP_S(vshls, DO_VSHLS)
DO_2OP_U(vshlu, DO_VSHLU)
+DO_2OP_S(vrshls, DO_VRSHLS)
+DO_2OP_U(vrshlu, DO_VRSHLU)

static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
{
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSUB_S, vqsubs)
DO_2OP(VQSUB_U, vqsubu)
DO_2OP(VSHL_S, vshls)
DO_2OP(VSHL_U, vshlu)
+DO_2OP(VRSHL_S, vrshls)
+DO_2OP(VRSHL_U, vrshlu)
DO_2OP(VQSHL_S, vqshls)
DO_2OP(VQSHL_U, vqshlu)
DO_2OP(VQRSHL_S, vqrshls)
--
2.20.1
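The only difference between DO_VSHLS and DO_VRSHLS above is the 'true' round argument. What rounding means for a right shift, as a minimal plain-C sketch (name invented, not QEMU code): half of the unit in the last surviving bit position is added before the shifted-out bits are discarded, so VRSHL of -5 by a count of -1 gives (-5 + 1) >> 1 = -2, where plain VSHL gives -3.

    #include <stdint.h>

    /* Minimal sketch: rounded right shift of one signed 32-bit lane. */
    static int32_t rshr_round_s32(int32_t n, unsigned shift) /* 1..32 */
    {
        int64_t round = (int64_t)1 << (shift - 1); /* half a result ULP */
        return (int32_t)(((int64_t)n + round) >> shift);
    }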
The constant-expander functions like negate, plus_2, etc, are
generally useful; move them up in translate.c so we can use them in
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
---
target/arm/translate.c | 46 +++++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_gen_condlabel(DisasContext *s)
    }
}

+/*
+ * Constant expanders for the decoders.
+ */
+
+static int negate(DisasContext *s, int x)
+{
+    return -x;
+}
+
+static int plus_2(DisasContext *s, int x)
+{
+    return x + 2;
+}
+
+static int times_2(DisasContext *s, int x)
+{
+    return x * 2;
+}
+
+static int times_4(DisasContext *s, int x)
+{
+    return x * 4;
+}
+
/* Flags for the disas_set_da_iss info argument:
 * lower bits hold the Rt register number, higher bits are flags.
 */
@@ -XXX,XX +XXX,XX @@ static void arm_skip_unless(DisasContext *s, uint32_t cond)


/*
- * Constant expanders for the decoders.
+ * Constant expanders used by T16/T32 decode
 */

-static int negate(DisasContext *s, int x)
-{
-    return -x;
-}
-
-static int plus_2(DisasContext *s, int x)
-{
-    return x + 2;
-}
-
-static int times_2(DisasContext *s, int x)
-{
-    return x * 2;
-}
-
-static int times_4(DisasContext *s, int x)
-{
-    return x * 4;
-}
-
/* Return only the rotation part of T32ExpandImm. */
static int t32_expandimm_rot(DisasContext *s, int x)
{
--
2.20.1

Implement the MVE VQDMLADH and VQRDMLADH insns. These multiply
elements, and then add pairs of products, double, possibly round,
saturate and return the high half of the result.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-37-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 16 +++++++
target/arm/mve.decode | 5 +++
target/arm/mve_helper.c | 89 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-mve.c | 4 ++
4 files changed, 114 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshlub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqrshluh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqrshluw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmladhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmladhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQSHL_U 111 1 1111 0 . .. ... 0 ... 0 0100 . 1 . 1 ... 0 @2op_rev
VQRSHL_S 111 0 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev
VQRSHL_U 111 1 1111 0 . .. ... 0 ... 0 0101 . 1 . 1 ... 0 @2op_rev

+VQDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 0 @2op
+VQDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
+VQRDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
+VQRDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
+
# Vector miscellaneous

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_U(vqshlu, DO_UQSHL_OP)
DO_2OP_SAT_S(vqrshls, DO_SQRSHL_OP)
DO_2OP_SAT_U(vqrshlu, DO_UQRSHL_OP)

+/*
+ * Multiply add dual returning high half
+ * The 'FN' here takes four inputs A, B, C, D, a 0/1 indicator of
+ * whether to add the rounding constant, and the pointer to the
+ * saturation flag, and should do "(A * B + C * D) * 2 + rounding constant",
+ * saturate to twice the input size and return the high half; or
+ * (A * B - C * D) etc for VQDMLSDH.
+ */
+#define DO_VQDMLADH_OP(OP, ESIZE, TYPE, XCHG, ROUND, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                void *vm) \
+    { \
+        TYPE *d = vd, *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        bool qc = false; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            bool sat = false; \
+            if ((e & 1) == XCHG) { \
+                TYPE r = FN(n[H##ESIZE(e)], \
+                            m[H##ESIZE(e - XCHG)], \
+                            n[H##ESIZE(e + (1 - 2 * XCHG))], \
+                            m[H##ESIZE(e + (1 - XCHG))], \
+                            ROUND, &sat); \
+                mergemask(&d[H##ESIZE(e)], r, mask); \
+                qc |= sat & mask & 1; \
+            } \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+static int8_t do_vqdmladh_b(int8_t a, int8_t b, int8_t c, int8_t d,
+                            int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2 + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmladh_h(int16_t a, int16_t b, int16_t c, int16_t d,
+                             int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2 + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmladh_w(int32_t a, int32_t b, int32_t c, int32_t d,
+                             int round, bool *sat)
+{
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c * d;
+    int64_t r;
+    /*
+     * Architecturally we should do the entire add, double, round
+     * and then check for saturation. We do three saturating adds,
+     * but we need to be careful about the order. If the first
+     * m1 + m2 saturates then it's impossible for the *2+rc to
+     * bring it back into the non-saturated range. However, if
+     * m1 + m2 is negative then it's possible that doing the doubling
+     * would take the intermediate result below INT64_MAX and the
+     * addition of the rounding constant then brings it back in range.
+     * So we add half the rounding constant before doubling rather
+     * than adding the rounding constant after the doubling.
+     */
+    if (sadd64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
+DO_VQDMLADH_OP(vqdmladhb, 1, int8_t, 0, 0, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqdmladhh, 2, int16_t, 0, 0, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqdmladhw, 4, int32_t, 0, 0, do_vqdmladh_w)
+DO_VQDMLADH_OP(vqdmladhxb, 1, int8_t, 1, 0, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqdmladhxh, 2, int16_t, 1, 0, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqdmladhxw, 4, int32_t, 1, 0, do_vqdmladh_w)
+
+DO_VQDMLADH_OP(vqrdmladhb, 1, int8_t, 0, 1, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqrdmladhh, 2, int16_t, 0, 1, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqrdmladhw, 4, int32_t, 0, 1, do_vqdmladh_w)
+DO_VQDMLADH_OP(vqrdmladhxb, 1, int8_t, 1, 1, do_vqdmladh_b)
+DO_VQDMLADH_OP(vqrdmladhxh, 2, int16_t, 1, 1, do_vqdmladh_h)
+DO_VQDMLADH_OP(vqrdmladhxw, 4, int32_t, 1, 1, do_vqdmladh_w)
+
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
                                uint32_t rm) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQSHL_S, vqshls)
DO_2OP(VQSHL_U, vqshlu)
DO_2OP(VQRSHL_S, vqrshls)
DO_2OP(VQRSHL_U, vqrshlu)
+DO_2OP(VQDMLADH, vqdmladh)
+DO_2OP(VQDMLADHX, vqdmladhx)
+DO_2OP(VQRDMLADH, vqrdmladh)
+DO_2OP(VQRDMLADHX, vqrdmladhx)

static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                          MVEGenTwoOpScalarFn fn)
--
2.20.1
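A concrete way to read the arithmetic above is as Q15 fixed point: the doubling turns a Q30 product sum back into Q31, and taking the high half returns to Q15. A standalone re-creation of the 16-bit lane (for experimentation only, not QEMU code, and without the saturation flag plumbing):

    #include <stdint.h>
    #include <stdio.h>

    static int16_t qdmladh_h(int16_t a, int16_t b, int16_t c, int16_t d,
                             int round)
    {
        int64_t r = ((int64_t)a * b + (int64_t)c * d) * 2
                    + ((int64_t)round << 15);
        if (r > INT32_MAX) {
            r = INT32_MAX;      /* saturate to twice the input size */
        } else if (r < INT32_MIN) {
            r = INT32_MIN;
        }
        return r >> 16;         /* return the high half */
    }

    int main(void)
    {
        /* In Q15: 0.5 * 0.5 + 0.5 * 0.5 = 0.5, i.e. 0x4000 */
        printf("%#x\n", (uint16_t)qdmladh_h(0x4000, 0x4000, 0x4000, 0x4000, 0));
        /* 0x7fff * 0x7fff summed twice overflows and saturates to 0x7fff */
        printf("%#x\n", (uint16_t)qdmladh_h(0x7fff, 0x7fff, 0x7fff, 0x7fff, 0));
        return 0;
    }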
For v8.1M the architecture mandates that CPUs must provide at
least the "minimal RAS implementation" from the Reliability,
Availability and Serviceability extension. This consists of:
 * an ESB instruction which is a NOP
   -- since it is in the HINT space we need only add a comment
 * an RFSR register which is RAZ/WI
 * a RAZ/WI AIRCR.IESB bit
   -- the code which handles writes to AIRCR does not allow setting
      of RES0 bits, so we already treat this as RAZ/WI; add a comment
      noting that this is deliberate
 * minimal implementation of the RAS register block at 0xe0005000
   -- this will be in a subsequent commit
 * setting the ID_PFR0.RAS field to 0b0010
   -- we will do this when we add the Cortex-M55 CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
---
target/arm/cpu.h | 14 ++++++++++++++
target/arm/t32.decode | 4 ++++
hw/intc/armv7m_nvic.c | 13 +++++++++++++
3 files changed, 31 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
FIELD(ID_MMFR4, CCIDX, 24, 4)
FIELD(ID_MMFR4, EVT, 28, 4)

+FIELD(ID_PFR0, STATE0, 0, 4)
+FIELD(ID_PFR0, STATE1, 4, 4)
+FIELD(ID_PFR0, STATE2, 8, 4)
+FIELD(ID_PFR0, STATE3, 12, 4)
+FIELD(ID_PFR0, CSV2, 16, 4)
+FIELD(ID_PFR0, AMU, 20, 4)
+FIELD(ID_PFR0, DIT, 24, 4)
+FIELD(ID_PFR0, RAS, 28, 4)
+
FIELD(ID_PFR1, PROGMOD, 0, 4)
FIELD(ID_PFR1, SECURITY, 4, 4)
FIELD(ID_PFR1, MPROGMOD, 8, 4)
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
    return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
}

+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
+}
+
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
{
    return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101

+  # For M-profile minimal-RAS ESB can be a NOP, which is the
+  # default behaviour since it is in the hint space.
+  # ESB 1111 0011 1010 1111 1000 0000 0001 0000
+
# The canonical nop ends in 0000 0000, but the whole rest
# of the space is "reserved hint, behaves as nop".
NOP 1111 0011 1010 1111 1000 0000 ---- ----
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
            return 0;
        }
        return cpu->env.v7m.sfar;
+    case 0xf04: /* RFSR */
+        if (!cpu_isar_feature(aa32_ras, cpu)) {
+            goto bad_offset;
+        }
+        /* We provide minimal-RAS only: RFSR is RAZ/WI */
+        return 0;
    case 0xf34: /* FPCCR */
        if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
            return 0;
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
                                      R_V7M_AIRCR_PRIGROUP_SHIFT,
                                      R_V7M_AIRCR_PRIGROUP_LENGTH);
        }
+        /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
        if (attrs.secure) {
            /* These bits are only writable by secure */
            cpu->env.v7m.aircr = value &
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
        }
        break;
    }
+    case 0xf04: /* RFSR */
+        if (!cpu_isar_feature(aa32_ras, cpu)) {
+            goto bad_offset;
+        }
+        /* We provide minimal-RAS only: RFSR is RAZ/WI */
+        break;
    case 0xf34: /* FPCCR */
        if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
            /* Not all bits here are banked. */
--
2.20.1

Implement the MVE VQDMLSDH and VQRDMLSDH insns, which are
like VQDMLADH and VQRDMLADH except that products are subtracted
rather than added.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-38-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 16 ++++++++++++++
target/arm/mve.decode | 5 +++++
target/arm/mve_helper.c | 44 ++++++++++++++++++++++++++++++++++++++
target/arm/translate-mve.c | 4 ++++
4 files changed, 69 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmladhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqrdmladhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqrdmladhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmlsdhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
VQRDMLADH 1110 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
VQRDMLADHX 1110 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op

+VQDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 0 @2op
+VQDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
+VQRDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
+VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
+
# Vector miscellaneous

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static int32_t do_vqdmladh_w(int32_t a, int32_t b, int32_t c, int32_t d,
    return r >> 32;
}

+static int8_t do_vqdmlsdh_b(int8_t a, int8_t b, int8_t c, int8_t d,
+                            int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2 + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmlsdh_h(int16_t a, int16_t b, int16_t c, int16_t d,
+                             int round, bool *sat)
+{
+    int64_t r = ((int64_t)a * b - (int64_t)c * d) * 2 + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmlsdh_w(int32_t a, int32_t b, int32_t c, int32_t d,
+                             int round, bool *sat)
+{
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c * d;
+    int64_t r;
+    /* The same ordering issue as in do_vqdmladh_w applies here too */
+    if (ssub64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
DO_VQDMLADH_OP(vqdmladhb, 1, int8_t, 0, 0, do_vqdmladh_b)
DO_VQDMLADH_OP(vqdmladhh, 2, int16_t, 0, 0, do_vqdmladh_h)
DO_VQDMLADH_OP(vqdmladhw, 4, int32_t, 0, 0, do_vqdmladh_w)
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmladhxb, 1, int8_t, 1, 1, do_vqdmladh_b)
DO_VQDMLADH_OP(vqrdmladhxh, 2, int16_t, 1, 1, do_vqdmladh_h)
DO_VQDMLADH_OP(vqrdmladhxw, 4, int32_t, 1, 1, do_vqdmladh_w)

+DO_VQDMLADH_OP(vqdmlsdhb, 1, int8_t, 0, 0, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqdmlsdhh, 2, int16_t, 0, 0, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqdmlsdhw, 4, int32_t, 0, 0, do_vqdmlsdh_w)
+DO_VQDMLADH_OP(vqdmlsdhxb, 1, int8_t, 1, 0, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqdmlsdhxh, 2, int16_t, 1, 0, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqdmlsdhxw, 4, int32_t, 1, 0, do_vqdmlsdh_w)
+
+DO_VQDMLADH_OP(vqrdmlsdhb, 1, int8_t, 0, 1, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqrdmlsdhh, 2, int16_t, 0, 1, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqrdmlsdhw, 4, int32_t, 0, 1, do_vqdmlsdh_w)
+DO_VQDMLADH_OP(vqrdmlsdhxb, 1, int8_t, 1, 1, do_vqdmlsdh_b)
+DO_VQDMLADH_OP(vqrdmlsdhxh, 2, int16_t, 1, 1, do_vqdmlsdh_h)
+DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
+
#define DO_2OP_SCALAR(OP, ESIZE, TYPE, FN) \
    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
                                uint32_t rm) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLADH, vqdmladh)
DO_2OP(VQDMLADHX, vqdmladhx)
DO_2OP(VQRDMLADH, vqrdmladh)
DO_2OP(VQRDMLADHX, vqrdmladhx)
+DO_2OP(VQDMLSDH, vqdmlsdh)
+DO_2OP(VQDMLSDHX, vqdmlsdhx)
+DO_2OP(VQRDMLSDH, vqrdmlsdh)
+DO_2OP(VQRDMLSDHX, vqrdmlsdhx)

static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                          MVEGenTwoOpScalarFn fn)
--
2.20.1
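The "ordering issue" referenced in do_vqdmlsdh_w() is easiest to see outside QEMU. A standalone sketch of the same trick, with the GCC/Clang builtins standing in for QEMU's ssub64_overflow()/sadd64_overflow() (function name invented, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    static int32_t qdmlsdh_w(int32_t a, int32_t b, int32_t c, int32_t d,
                             int round, bool *sat)
    {
        int64_t m1 = (int64_t)a * b;
        int64_t m2 = (int64_t)c * d;
        int64_t r;
        /*
         * Half the rounding constant is added *before* the doubling:
         * if we instead doubled first and then added the full constant,
         * a negative m1 - m2 could overflow on the doubling step and
         * then be dragged back into range by the rounding add, hiding
         * a case that must count as saturation.
         */
        if (__builtin_sub_overflow(m1, m2, &r) ||
            __builtin_add_overflow(r, (int64_t)round << 30, &r) ||
            __builtin_add_overflow(r, r, &r)) {
            *sat = true;
            /* the wrapped sign tells us which way to saturate */
            return r < 0 ? INT32_MAX : INT32_MIN;
        }
        return r >> 32;
    }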
For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
Private Peripheral Bus range, which includes all of the memory mapped
devices and registers that are part of the CPU itself, including the
NVIC, systick timer, and debug and trace components like the Data
Watchpoint and Trace unit (DWT). Within this large region, the range
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
registers, systick) and 0xe002e000 to 0xe002efff is its Non-secure
alias.

The architecture is clear that within the SCS unimplemented registers
should be RES0 for privileged accesses and generate BusFault for
unprivileged accesses, and we currently implement this.

It is less clear about how to handle accesses to unimplemented
regions of the wider PPB. Unprivileged accesses should definitely
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
not given as a general rule. However, the register definitions of
individual registers for components like the DWT all state that they
are RES0 if the relevant component is not implemented, so the
simplest way to provide that is to provide RAZ/WI for the whole range
for privileged accesses. (The v7M Arm ARM does say that reserved
registers should be UNK/SBZP.)

Expand the container MemoryRegion that the NVIC exposes so that
it covers the whole PPB space. This means:
 * moving the address that the ARMV7M device maps it to down by
   0xe000 bytes
 * moving the offsets within the container of all the
   subregions forward by 0xe000 bytes
 * adding a new default MemoryRegion that covers the whole container
   at a lower priority than anything else and which provides the
   RAZWI/BusFault behaviour

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
---
include/hw/intc/armv7m_nvic.h | 1 +
hw/arm/armv7m.c | 2 +-
hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
3 files changed, 69 insertions(+), 12 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ struct NVICState {
    MemoryRegion systickmem;
    MemoryRegion systick_ns_mem;
    MemoryRegion container;
+    MemoryRegion defaultmem;

    uint32_t num_irq;
    qemu_irq excpout;
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/armv7m.c
+++ b/hw/arm/armv7m.c
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
    sysbus_connect_irq(sbd, 0,
                       qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));

-    memory_region_add_subregion(&s->container, 0xe000e000,
+    memory_region_add_subregion(&s->container, 0xe0000000,
                                sysbus_mmio_get_region(sbd, 0));

    for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
    .endianness = DEVICE_NATIVE_ENDIAN,
};

+/*
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
+ * accesses, and fault for non-privileged accesses.
+ */
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
+                                    uint64_t *data, unsigned size,
+                                    MemTxAttrs attrs)
+{
+    qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
+                  (uint32_t)addr);
+    if (attrs.user) {
+        return MEMTX_ERROR;
+    }
+    *data = 0;
+    return MEMTX_OK;
+}
+
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
+                                     uint64_t value, unsigned size,
+                                     MemTxAttrs attrs)
+{
+    qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
+                  (uint32_t)addr);
+    if (attrs.user) {
+        return MEMTX_ERROR;
+    }
+    return MEMTX_OK;
+}
+
+static const MemoryRegionOps ppb_default_ops = {
+    .read_with_attrs = ppb_default_read,
+    .write_with_attrs = ppb_default_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 8,
+};
+
static int nvic_post_load(void *opaque, int version_id)
{
    NVICState *s = opaque;
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
{
    NVICState *s = NVIC(dev);
-    int regionlen;

    /* The armv7m container object will have set our CPU pointer */
    if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
                           M_REG_S));
    }

-    /* The NVIC and System Control Space (SCS) starts at 0xe000e000
+    /*
+     * This device provides a single sysbus memory region which
+     * represents the whole of the "System PPB" space. This is the
+     * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
+     * the System Control Space (system registers), the systick timer,
+     * and for CPUs with the Security extension an NS banked version
+     * of all of these.
+     *
+     * The default behaviour for unimplemented registers/ranges
+     * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
+     * is to RAZ/WI for privileged access and BusFault for non-privileged
+     * access.
+     *
+     * The NVIC and System Control Space (SCS) starts at 0xe000e000
     * and looks like this:
     *  0x004 - ICTR
     *  0x010 - 0xff - systick
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
     * generally code determining which banked register to use should
     * use attrs.secure; code determining actual behaviour of the system
     * should use env->v7m.secure.
+     *
+     * The container covers the whole PPB space. Within it the priority
+     * of overlapping regions is:
+     *  - default region (for RAZ/WI and BusFault) : -1
+     *  - system register regions : 0
+     *  - systick : 1
+     * This is because the systick device is a small block of registers
+     * in the middle of the other system control registers.
     */
-    regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
-    memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
-    /* The system register region goes at the bottom of the priority
-     * stack as it covers the whole page.
-     */
+    memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
+    memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
+                          "nvic-default", 0x100000);
+    memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
    memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
                          "nvic_sysregs", 0x1000);
-    memory_region_add_subregion(&s->container, 0, &s->sysregmem);
+    memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);

    memory_region_init_io(&s->systickmem, OBJECT(s),
                          &nvic_systick_ops, s,
                          "nvic_systick", 0xe0);

-    memory_region_add_subregion_overlap(&s->container, 0x10,
+    memory_region_add_subregion_overlap(&s->container, 0xe010,
                                        &s->systickmem, 1);

    if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
        memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
                              &nvic_sysreg_ns_ops, &s->sysregmem,
                              "nvic_sysregs_ns", 0x1000);
-        memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
+        memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
        memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
                              &nvic_sysreg_ns_ops, &s->systickmem,
                              "nvic_systick_ns", 0xe0);
-        memory_region_add_subregion_overlap(&s->container, 0x20010,
+        memory_region_add_subregion_overlap(&s->container, 0x2e010,
                                            &s->systick_ns_mem, 1);
    }
--
2.20.1

Implement the vector form of the MVE VQDMULL insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-39-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 5 +++++
target/arm/mve.decode | 5 +++++
target/arm/mve_helper.c | 30 ++++++++++++++++++++++++++++++
target/arm/translate-mve.c | 30 ++++++++++++++++++++++++++++++
4 files changed, 70 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqrdmlsdhxw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vqdmullbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmullbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmullth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vqdmulltw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
+@2op_sz28 .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn \
+    size=%size_28

# The _rev suffix indicates that Vn and Vm are reversed. This is
# the case for shifts. In the Arm ARM these insns are documented
@@ -XXX,XX +XXX,XX @@ VQDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 0 @2op
VQRDMLSDH 1111 1110 0 . .. ... 0 ... 0 1110 . 0 . 0 ... 1 @2op
VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op

+VQDMULLB 111 . 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 1 @2op_sz28
+VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
+
# Vector miscellaneous

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR_L(vqdmullt_scalarh, 1, 2, int16_t, 4, int32_t, \
DO_2OP_SAT_SCALAR_L(vqdmullt_scalarw, 1, 4, int32_t, 8, int64_t, \
                    do_qdmullw, SATMASK32)

+/*
+ * Long saturating ops
+ */
+#define DO_2OP_SAT_L(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN, SATMASK) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                void *vm) \
+    { \
+        LTYPE *d = vd; \
+        TYPE *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned le; \
+        bool qc = false; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            bool sat = false; \
+            LTYPE op1 = n[H##ESIZE(le * 2 + TOP)]; \
+            LTYPE op2 = m[H##ESIZE(le * 2 + TOP)]; \
+            mergemask(&d[H##LESIZE(le)], FN(op1, op2, &sat), mask); \
+            qc |= sat && (mask & SATMASK); \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+DO_2OP_SAT_L(vqdmullbh, 0, 2, int16_t, 4, int32_t, do_qdmullh, SATMASK16B)
+DO_2OP_SAT_L(vqdmullbw, 0, 4, int32_t, 8, int64_t, do_qdmullw, SATMASK32)
+DO_2OP_SAT_L(vqdmullth, 1, 2, int16_t, 4, int32_t, do_qdmullh, SATMASK16T)
+DO_2OP_SAT_L(vqdmulltw, 1, 4, int32_t, 8, int64_t, do_qdmullw, SATMASK32)
+
static inline uint32_t do_vbrsrb(uint32_t n, uint32_t m)
{
    m &= 0xff;
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLSDHX, vqdmlsdhx)
DO_2OP(VQRDMLSDH, vqrdmlsdh)
DO_2OP(VQRDMLSDHX, vqrdmlsdhx)

+static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
+{
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullbh,
+        gen_helper_mve_vqdmullbw,
+        NULL,
+    };
+    if (a->size == MO_32 && (a->qd == a->qm || a->qd == a->qn)) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op(s, a, fns[a->size]);
+}
+
+static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
+{
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vqdmullth,
+        gen_helper_mve_vqdmulltw,
+        NULL,
+    };
+    if (a->size == MO_32 && (a->qd == a->qm || a->qd == a->qn)) {
+        /* UNPREDICTABLE; we choose to undef */
+        return false;
+    }
+    return do_2op(s, a, fns[a->size]);
+}
+
static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                          MVEGenTwoOpScalarFn fn)
{
--
2.20.1
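The TOP parameter in DO_2OP_SAT_L above selects which half of each source element pair feeds the widening multiply: VQDMULLB reads the even-numbered narrow elements, VQDMULLT the odd ones. A minimal plain-C sketch of one 16-to-32 lane (name invented, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* One 16->32 lane of VQDMULLB (top = 0) or VQDMULLT (top = 1). */
    static int32_t qdmull_lane_h(const int16_t *n, const int16_t *m,
                                 unsigned le, int top, bool *sat)
    {
        int64_t r = 2 * (int64_t)n[le * 2 + top] * m[le * 2 + top];

        /* Only INT16_MIN * INT16_MIN overflows the doubled 32-bit result. */
        if (r > INT32_MAX) {
            *sat = true;
            return INT32_MAX;
        }
        return (int32_t)r;
    }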
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
---
target/arm/translate-vfp.c.inc | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
     * Accesses to R15 are UNPREDICTABLE; we choose to undef.
     * (FPSCR -> r15 is a special case which writes to the PSR flags.)
     */
-    if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
+    if (a->reg != ARM_VFP_FPSCR) {
+        return false;
+    }
+    if (a->rt == 15 && !a->l) {
        return false;
    }
}
--
2.20.1

Implement the MVE VRHADD insn, which performs a rounded halving
addition.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-40-peter.maydell@linaro.org
---
target/arm/helper-mve.h | 8 ++++++++
target/arm/mve.decode | 3 +++
target/arm/mve_helper.c | 6 ++++++
target/arm/translate-mve.c | 2 ++
4 files changed, 19 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqdmullth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vqdmulltw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vrhaddsb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhaddsh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhaddsw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vrhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vrhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQRDMLSDHX 1111 1110 0 . .. ... 0 ... 1 1110 . 0 . 0 ... 1 @2op
VQDMULLB 111 . 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 1 @2op_sz28
VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28

+VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
+VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
+
# Vector miscellaneous

VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vshlu, DO_VSHLU)
DO_2OP_S(vrshls, DO_VRSHLS)
DO_2OP_U(vrshlu, DO_VRSHLU)

+#define DO_RHADD_S(N, M) (((int64_t)(N) + (M) + 1) >> 1)
+#define DO_RHADD_U(N, M) (((uint64_t)(N) + (M) + 1) >> 1)
+
+DO_2OP_S(vrhadds, DO_RHADD_S)
+DO_2OP_U(vrhaddu, DO_RHADD_U)
+
static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
{
    if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQDMLSDH, vqdmlsdh)
DO_2OP(VQDMLSDHX, vqdmlsdhx)
DO_2OP(VQRDMLSDH, vqrdmlsdh)
DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
+DO_2OP(VRHADD_S, vrhadds)
+DO_2OP(VRHADD_U, vrhaddu)

static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
{
--
2.20.1
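The DO_RHADD_S/DO_RHADD_U macros above fit in one line of arithmetic: widen, add, add 1, then halve, so the result rounds up rather than truncating. A standalone demo (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    static int32_t rhadd_s32(int32_t n, int32_t m)
    {
        /* widening to 64 bits makes the +1 safe at the type's limits */
        return (int32_t)(((int64_t)n + m + 1) >> 1);
    }

    int main(void)
    {
        printf("%d\n", rhadd_s32(3, 4));    /* (3 + 4 + 1) >> 1 = 4 */
        printf("%d\n", rhadd_s32(-3, -4));  /* (-7 + 1) >> 1 = -3 */
        return 0;
    }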
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
Implement the MVE VADC and VSBC insns. These perform an
2
add-with-carry or subtract-with-carry of the 32-bit elements in each
3
lane of the input vectors, where the carry-out of each add is the
4
carry-in of the next. The initial carry input is either 1 or is from
5
FPSCR.C; the carry out at the end is written back to FPSCR.C.
2
6
3
The Xilinx ZynqMP CAN controller is developed based on SocketCAN, QEMU CAN bus
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
implementation. Bus connection and socketCAN connection for each CAN module
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
can be set through command lines.
9
Message-id: 20210617121628.20116-41-peter.maydell@linaro.org
10
---
11
target/arm/helper-mve.h | 5 ++++
12
target/arm/mve.decode | 5 ++++
13
target/arm/mve_helper.c | 52 ++++++++++++++++++++++++++++++++++++++
14
target/arm/translate-mve.c | 37 +++++++++++++++++++++++++++
15
4 files changed, 99 insertions(+)
6
16
7
Example for using single CAN:
17
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
8
-object can-bus,id=canbus0 \
9
-machine xlnx-zcu102.canbus0=canbus0 \
10
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0
11
12
Example for connecting both CAN to same virtual CAN on host machine:
13
-object can-bus,id=canbus0 -object can-bus,id=canbus1 \
14
-machine xlnx-zcu102.canbus0=canbus0 \
15
-machine xlnx-zcu102.canbus1=canbus1 \
16
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
17
-object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1
18
19
To create virtual CAN on the host machine, please check the QEMU CAN docs:
20
https://github.com/qemu/qemu/blob/master/docs/can.txt
21
22
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
23
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
27
meson.build | 1 +
28
hw/net/can/trace.h | 1 +
29
include/hw/net/xlnx-zynqmp-can.h | 78 ++
30
hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++
31
hw/Kconfig | 1 +
32
hw/net/can/meson.build | 1 +
33
hw/net/can/trace-events | 9 +
34
7 files changed, 1252 insertions(+)
35
create mode 100644 hw/net/can/trace.h
36
create mode 100644 include/hw/net/xlnx-zynqmp-can.h
37
create mode 100644 hw/net/can/xlnx-zynqmp-can.c
38
create mode 100644 hw/net/can/trace-events
39
40
diff --git a/meson.build b/meson.build
41
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
42
--- a/meson.build
19
--- a/target/arm/helper-mve.h
43
+++ b/meson.build
20
+++ b/target/arm/helper-mve.h
44
@@ -XXX,XX +XXX,XX @@ if have_system
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrhaddub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
45
'hw/misc',
22
DEF_HELPER_FLAGS_4(mve_vrhadduh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
46
'hw/misc/macio',
23
DEF_HELPER_FLAGS_4(mve_vrhadduw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
47
'hw/net',
24
48
+ 'hw/net/can',
25
+DEF_HELPER_FLAGS_4(mve_vadc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
49
'hw/nvram',
26
+DEF_HELPER_FLAGS_4(mve_vadci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
50
'hw/pci',
27
+DEF_HELPER_FLAGS_4(mve_vsbc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
51
'hw/pci-host',
28
+DEF_HELPER_FLAGS_4(mve_vsbci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
52
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
53
new file mode 100644
54
index XXXXXXX..XXXXXXX
55
--- /dev/null
56
+++ b/hw/net/can/trace.h
57
@@ -0,0 +1 @@
58
+#include "trace/trace-hw_net_can.h"
59
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
60
new file mode 100644
61
index XXXXXXX..XXXXXXX
62
--- /dev/null
63
+++ b/include/hw/net/xlnx-zynqmp-can.h
64
@@ -XXX,XX +XXX,XX @@
65
+/*
66
+ * QEMU model of the Xilinx ZynqMP CAN controller.
67
+ *
68
+ * Copyright (c) 2020 Xilinx Inc.
69
+ *
70
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
71
+ *
72
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
73
+ * Pavel Pisa.
74
+ *
75
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
76
+ * of this software and associated documentation files (the "Software"), to deal
77
+ * in the Software without restriction, including without limitation the rights
78
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
79
+ * copies of the Software, and to permit persons to whom the Software is
80
+ * furnished to do so, subject to the following conditions:
81
+ *
82
+ * The above copyright notice and this permission notice shall be included in
83
+ * all copies or substantial portions of the Software.
84
+ *
85
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
86
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
87
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
88
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
89
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
90
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
91
+ * THE SOFTWARE.
92
+ */
93
+
29
+
94
+#ifndef XLNX_ZYNQMP_CAN_H
30
DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
95
+#define XLNX_ZYNQMP_CAN_H
31
DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
37
@@ -XXX,XX +XXX,XX @@ VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
38
VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
39
VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
40
41
+VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
42
+VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
43
+VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
44
+VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
96
+
45
+
97
+#include "hw/register.h"
46
# Vector miscellaneous
98
+#include "net/can_emu.h"
47
99
+#include "net/can_host.h"
48
VCLS 1111 1111 1 . 11 .. 00 ... 0 0100 01 . 0 ... 0 @1op
100
+#include "qemu/fifo32.h"
49
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
101
+#include "hw/ptimer.h"
50
index XXXXXXX..XXXXXXX 100644
102
+#include "hw/qdev-clock.h"
51
--- a/target/arm/mve_helper.c
52
+++ b/target/arm/mve_helper.c
53
@@ -XXX,XX +XXX,XX @@ DO_2OP_U(vrshlu, DO_VRSHLU)
54
DO_2OP_S(vrhadds, DO_RHADD_S)
55
DO_2OP_U(vrhaddu, DO_RHADD_U)
56
57
+static void do_vadc(CPUARMState *env, uint32_t *d, uint32_t *n, uint32_t *m,
58
+ uint32_t inv, uint32_t carry_in, bool update_flags)
59
+{
60
+ uint16_t mask = mve_element_mask(env);
61
+ unsigned e;
103
+
62
+
104
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
63
+ /* If any additions trigger, we will update flags. */
105
+
64
+ if (mask & 0x1111) {
106
+#define XLNX_ZYNQMP_CAN(obj) \
65
+ update_flags = true;
107
+ OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
108
+
109
+#define MAX_CAN_CTRLS 2
110
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
111
+#define MAILBOX_CAPACITY 64
112
+#define CAN_TIMER_MAX 0XFFFFUL
113
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
114
+
115
+/* Each CAN_FRAME will have 4 * 32bit size. */
116
+#define CAN_FRAME_SIZE 4
117
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
118
+
119
+typedef struct XlnxZynqMPCANState {
120
+ SysBusDevice parent_obj;
121
+ MemoryRegion iomem;
122
+
123
+ qemu_irq irq;
124
+
125
+ CanBusClientState bus_client;
126
+ CanBusState *canbus;
127
+
128
+ struct {
129
+ uint32_t ext_clk_freq;
130
+ } cfg;
131
+
132
+ RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
133
+ uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
134
+
135
+ Fifo32 rx_fifo;
136
+ Fifo32 tx_fifo;
137
+ Fifo32 txhpb_fifo;
138
+
139
+ ptimer_state *can_timer;
140
+} XlnxZynqMPCANState;
141
+
142
+#endif
143
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/xlnx-zynqmp-can.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the Xilinx ZynqMP CAN controller.
+ * This implementation is based on the following datasheet:
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
+ *
+ * Copyright (c) 2020 Xilinx Inc.
+ *
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
+ * Pavel Pisa
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/sysbus.h"
+#include "hw/register.h"
+#include "hw/irq.h"
+#include "qapi/error.h"
+#include "qemu/bitops.h"
+#include "qemu/log.h"
+#include "qemu/cutils.h"
+#include "sysemu/sysemu.h"
+#include "migration/vmstate.h"
+#include "hw/qdev-properties.h"
+#include "net/can_emu.h"
+#include "net/can_host.h"
+#include "qemu/event_notifier.h"
+#include "qom/object_interfaces.h"
+#include "hw/net/xlnx-zynqmp-can.h"
+#include "trace.h"
+
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
+#endif
+
+#define MAX_DLC 8
+#undef ERROR
+
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
+    FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
+    FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
+REG32(MODE_SELECT_REGISTER, 0x4)
+    FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
+    FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
+    FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
+    FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
+    FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
+    FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
+    FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
+REG32(ERROR_COUNTER_REGISTER, 0x10)
+    FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
+    FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
+REG32(ERROR_STATUS_REGISTER, 0x14)
+    FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
+    FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
+    FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
+    FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
+    FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
+REG32(STATUS_REGISTER, 0x18)
+    FIELD(STATUS_REGISTER, SNOOP, 12, 1)
+    FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
+    FIELD(STATUS_REGISTER, TXFLL, 10, 1)
+    FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
+    FIELD(STATUS_REGISTER, ESTAT, 7, 2)
+    FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
+    FIELD(STATUS_REGISTER, BBSY, 5, 1)
+    FIELD(STATUS_REGISTER, BIDLE, 4, 1)
+    FIELD(STATUS_REGISTER, NORMAL, 3, 1)
+    FIELD(STATUS_REGISTER, SLEEP, 2, 1)
+    FIELD(STATUS_REGISTER, LBACK, 1, 1)
+    FIELD(STATUS_REGISTER, CONFIG, 0, 1)
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
+    FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
+    FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
+    FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
+    FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
+REG32(TIMESTAMP_REGISTER, 0x28)
+    FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
+REG32(WIR, 0x2c)
+    FIELD(WIR, EW, 8, 8)
+    FIELD(WIR, FW, 0, 8)
+REG32(TXFIFO_ID, 0x30)
+    FIELD(TXFIFO_ID, IDH, 21, 11)
+    FIELD(TXFIFO_ID, SRRRTR, 20, 1)
+    FIELD(TXFIFO_ID, IDE, 19, 1)
+    FIELD(TXFIFO_ID, IDL, 1, 18)
+    FIELD(TXFIFO_ID, RTR, 0, 1)
+REG32(TXFIFO_DLC, 0x34)
+    FIELD(TXFIFO_DLC, DLC, 28, 4)
+REG32(TXFIFO_DATA1, 0x38)
+    FIELD(TXFIFO_DATA1, DB0, 24, 8)
+    FIELD(TXFIFO_DATA1, DB1, 16, 8)
+    FIELD(TXFIFO_DATA1, DB2, 8, 8)
+    FIELD(TXFIFO_DATA1, DB3, 0, 8)
+REG32(TXFIFO_DATA2, 0x3c)
+    FIELD(TXFIFO_DATA2, DB4, 24, 8)
+    FIELD(TXFIFO_DATA2, DB5, 16, 8)
+    FIELD(TXFIFO_DATA2, DB6, 8, 8)
+    FIELD(TXFIFO_DATA2, DB7, 0, 8)
+REG32(TXHPB_ID, 0x40)
+    FIELD(TXHPB_ID, IDH, 21, 11)
+    FIELD(TXHPB_ID, SRRRTR, 20, 1)
+    FIELD(TXHPB_ID, IDE, 19, 1)
+    FIELD(TXHPB_ID, IDL, 1, 18)
+    FIELD(TXHPB_ID, RTR, 0, 1)
+REG32(TXHPB_DLC, 0x44)
+    FIELD(TXHPB_DLC, DLC, 28, 4)
+REG32(TXHPB_DATA1, 0x48)
+    FIELD(TXHPB_DATA1, DB0, 24, 8)
+    FIELD(TXHPB_DATA1, DB1, 16, 8)
+    FIELD(TXHPB_DATA1, DB2, 8, 8)
+    FIELD(TXHPB_DATA1, DB3, 0, 8)
+REG32(TXHPB_DATA2, 0x4c)
+    FIELD(TXHPB_DATA2, DB4, 24, 8)
+    FIELD(TXHPB_DATA2, DB5, 16, 8)
+    FIELD(TXHPB_DATA2, DB6, 8, 8)
+    FIELD(TXHPB_DATA2, DB7, 0, 8)
+REG32(RXFIFO_ID, 0x50)
+    FIELD(RXFIFO_ID, IDH, 21, 11)
+    FIELD(RXFIFO_ID, SRRRTR, 20, 1)
+    FIELD(RXFIFO_ID, IDE, 19, 1)
+    FIELD(RXFIFO_ID, IDL, 1, 18)
+    FIELD(RXFIFO_ID, RTR, 0, 1)
+REG32(RXFIFO_DLC, 0x54)
+    FIELD(RXFIFO_DLC, DLC, 28, 4)
+    FIELD(RXFIFO_DLC, RXT, 0, 16)
+REG32(RXFIFO_DATA1, 0x58)
+    FIELD(RXFIFO_DATA1, DB0, 24, 8)
+    FIELD(RXFIFO_DATA1, DB1, 16, 8)
+    FIELD(RXFIFO_DATA1, DB2, 8, 8)
+    FIELD(RXFIFO_DATA1, DB3, 0, 8)
+REG32(RXFIFO_DATA2, 0x5c)
+    FIELD(RXFIFO_DATA2, DB4, 24, 8)
+    FIELD(RXFIFO_DATA2, DB5, 16, 8)
+    FIELD(RXFIFO_DATA2, DB6, 8, 8)
+    FIELD(RXFIFO_DATA2, DB7, 0, 8)
+REG32(AFR, 0x60)
+    FIELD(AFR, UAF4, 3, 1)
+    FIELD(AFR, UAF3, 2, 1)
+    FIELD(AFR, UAF2, 1, 1)
+    FIELD(AFR, UAF1, 0, 1)
+REG32(AFMR1, 0x64)
+    FIELD(AFMR1, AMIDH, 21, 11)
+    FIELD(AFMR1, AMSRR, 20, 1)
+    FIELD(AFMR1, AMIDE, 19, 1)
+    FIELD(AFMR1, AMIDL, 1, 18)
+    FIELD(AFMR1, AMRTR, 0, 1)
+REG32(AFIR1, 0x68)
+    FIELD(AFIR1, AIIDH, 21, 11)
+    FIELD(AFIR1, AISRR, 20, 1)
+    FIELD(AFIR1, AIIDE, 19, 1)
+    FIELD(AFIR1, AIIDL, 1, 18)
+    FIELD(AFIR1, AIRTR, 0, 1)
+REG32(AFMR2, 0x6c)
+    FIELD(AFMR2, AMIDH, 21, 11)
+    FIELD(AFMR2, AMSRR, 20, 1)
+    FIELD(AFMR2, AMIDE, 19, 1)
+    FIELD(AFMR2, AMIDL, 1, 18)
+    FIELD(AFMR2, AMRTR, 0, 1)
+REG32(AFIR2, 0x70)
+    FIELD(AFIR2, AIIDH, 21, 11)
+    FIELD(AFIR2, AISRR, 20, 1)
+    FIELD(AFIR2, AIIDE, 19, 1)
+    FIELD(AFIR2, AIIDL, 1, 18)
+    FIELD(AFIR2, AIRTR, 0, 1)
+REG32(AFMR3, 0x74)
+    FIELD(AFMR3, AMIDH, 21, 11)
+    FIELD(AFMR3, AMSRR, 20, 1)
+    FIELD(AFMR3, AMIDE, 19, 1)
+    FIELD(AFMR3, AMIDL, 1, 18)
+    FIELD(AFMR3, AMRTR, 0, 1)
+REG32(AFIR3, 0x78)
+    FIELD(AFIR3, AIIDH, 21, 11)
+    FIELD(AFIR3, AISRR, 20, 1)
+    FIELD(AFIR3, AIIDE, 19, 1)
+    FIELD(AFIR3, AIIDL, 1, 18)
+    FIELD(AFIR3, AIRTR, 0, 1)
+REG32(AFMR4, 0x7c)
+    FIELD(AFMR4, AMIDH, 21, 11)
+    FIELD(AFMR4, AMSRR, 20, 1)
+    FIELD(AFMR4, AMIDE, 19, 1)
+    FIELD(AFMR4, AMIDL, 1, 18)
+    FIELD(AFMR4, AMRTR, 0, 1)
+REG32(AFIR4, 0x80)
+    FIELD(AFIR4, AIIDH, 21, 11)
+    FIELD(AFIR4, AISRR, 20, 1)
+    FIELD(AFIR4, AIIDE, 19, 1)
+    FIELD(AFIR4, AIIDL, 1, 18)
+    FIELD(AFIR4, AIRTR, 0, 1)
+
+static void can_update_irq(XlnxZynqMPCANState *s)
+{
+    uint32_t irq;
+
+    /* Watermark register interrupts. */
+    if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
+        ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
+    }
+
+    if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
+        ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
+    }
+
+    /* RX Interrupts. */
+    if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
+    }
+
+    /* TX interrupts. */
+    if (fifo32_is_empty(&s->tx_fifo)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
+    }
+
+    if (fifo32_is_full(&s->tx_fifo)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
+    }
+
+    if (fifo32_is_full(&s->txhpb_fifo)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
+    }
+
+    irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
+    irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
+
+    trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
+                              s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
+    qemu_set_irq(s->irq, irq);
+}
+
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    can_update_irq(s);
+}
+
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
+    can_update_irq(s);
+
+    return 0;
+}
+
+static void can_config_reset(XlnxZynqMPCANState *s)
+{
+    /* Reset all the configuration registers. */
+    register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
+    register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
+    register_reset(
+        &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
+    register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
+    register_reset(&s->reg_info[R_STATUS_REGISTER]);
+    register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
+    register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
+    register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
+    register_reset(&s->reg_info[R_WIR]);
+}
+
+static void can_config_mode(XlnxZynqMPCANState *s)
+{
+    register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
+    register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
+
+    /* Put XlnxZynqMPCAN in configuration mode. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
+
+    can_update_irq(s);
+}
+
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
+{
+    bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
+    bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
+    /* Wake up interrupt bit. */
+    bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
+    /* Sleep interrupt bit. */
+    bool sleep_irq_val = sleep_mode && (sleep_status == 0);
+
+    /* Clear previous core mode status bits. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
+
+    /* Set the current mode bit and generate IRQs accordingly. */
+    if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
+    } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
+                         sleep_irq_val);
+    } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
+    } else {
+        /*
+         * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
+         */
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
+        /* Set wakeup interrupt bit. */
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
+                         wakeup_irq_val);
+    }
+
+    can_update_irq(s);
+}
+
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
+{
+    ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
+    update_status_register_mode_bits(s);
+}
+
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
+{
+    frame->can_id = data[0];
+    frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
+
+    frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
+    frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
+    frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
+    frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
+
+    frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
+    frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
+    frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
+    frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
+}
+
+static bool tx_ready_check(XlnxZynqMPCANState *s)
+{
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data"
+                      " while controller is in reset mode.\n",
+                      path);
+        return false;
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
+                      " data while controller is in configuration mode. Reset"
+                      " the core so operations can start fresh.\n",
+                      path);
+        return false;
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
+                      " data while controller is in SNOOP MODE.\n",
+                      path);
+        return false;
+    }
+
+    return true;
+}
+
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
+{
+    qemu_can_frame frame;
+    uint32_t data[CAN_FRAME_SIZE];
+    int i;
+    bool can_tx = tx_ready_check(s);
+
+    if (!can_tx) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
+                      " transfer.\n", path);
+        can_update_irq(s);
+        return;
+    }
+
+    while (!fifo32_is_empty(fifo)) {
+        for (i = 0; i < CAN_FRAME_SIZE; i++) {
+            data[i] = fifo32_pop(fifo);
+        }
+
+        if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
+            /*
+             * Controller is in loopback. In Loopback mode, the CAN core
+             * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
+             * Any message transmitted is looped back to the RX line and
+             * acknowledged. The XlnxZynqMPCAN core receives any message
+             * that it transmits.
+             */
+            if (fifo32_is_full(&s->rx_fifo)) {
+                ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
+            } else {
+                for (i = 0; i < CAN_FRAME_SIZE; i++) {
+                    fifo32_push(&s->rx_fifo, data[i]);
+                }
+
+                ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
+            }
+        } else {
+            /* Normal mode Tx. */
+            generate_frame(&frame, data);
+
+            trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
+                                   frame.data[0], frame.data[1],
+                                   frame.data[2], frame.data[3],
+                                   frame.data[4], frame.data[5],
+                                   frame.data[6], frame.data[7]);
+            can_bus_client_send(&s->bus_client, &frame, 1);
+        }
+    }
+
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
+
+    if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
+        can_exit_sleep_mode(s);
+    }
+
+    can_update_irq(s);
+}
+
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
+                     FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
+
+    if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
+        trace_xlnx_can_reset(val);
+
+        /* First the core does a software reset, then enters config mode. */
+        can_config_reset(s);
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        can_config_mode(s);
+    } else {
+        /*
+         * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
+         * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
+         * register states.
+         */
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
+
+        ptimer_transaction_begin(s->can_timer);
+        ptimer_set_count(s->can_timer, 0);
+        ptimer_transaction_commit(s->can_timer);
+
+        /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
+        transfer_fifo(s, &s->txhpb_fifo);
+        transfer_fifo(s, &s->tx_fifo);
+    }
+
+    update_status_register_mode_bits(s);
+
+    return s->regs[R_SOFTWARE_RESET_REGISTER];
+}
+
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+    uint8_t multi_mode;
+
+    /*
+     * Multiple mode set check. This is done to make sure the user doesn't
+     * set multiple modes.
+     */
+    multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
+                 FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
+                 FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
+
+    if (multi_mode > 1) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to configure"
+                      " several modes simultaneously. One mode will be selected"
+                      " according to their priority: LBACK > SLEEP > SNOOP.\n",
+                      path);
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        /* We are in configuration mode, any mode can be selected. */
+        s->regs[R_MODE_SELECT_REGISTER] = val;
+    } else {
+        bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
+
+        ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
+
+        if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
+                          " LBACK mode without setting CEN bit as 0.\n",
+                          path);
+        } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
+                          " SNOOP mode without setting CEN bit as 0.\n",
+                          path);
+        }
+
+        update_status_register_mode_bits(s);
+    }
+
+    return s->regs[R_MODE_SELECT_REGISTER];
+}
+
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    /* Only allow writes when in config mode. */
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
+        return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
+    }
+
+    return val;
+}
+
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    /* Only allow writes when in config mode. */
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
+        return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
+    }
+
+    return val;
+}
+
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
+        ptimer_transaction_begin(s->can_timer);
+        ptimer_set_count(s->can_timer, 0);
+        ptimer_transaction_commit(s->can_timer);
+    }
+
+    return 0;
+}
+
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
+{
+    bool filter_pass = false;
+    uint16_t timestamp = 0;
+
+    /* If no filter is enabled, the message will be stored in the FIFO. */
+    if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
+          (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
+          (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
+          (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
+        filter_pass = true;
+    }
+
+    /*
+     * Messages that pass any of the acceptance filters will be stored in
+     * the RX FIFO.
+     */
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
+        uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
+        uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
+        uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
+        uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
+        uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
+
+        if (filter_id_masked == id_masked) {
+            filter_pass = true;
+        }
+    }
+
+    if (!filter_pass) {
+        trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
+        return;
+    }
+
+    /* Store the message in fifo if it passed through any of the filters. */
+    if (filter_pass && frame->can_dlc <= MAX_DLC) {
+
+        if (fifo32_is_full(&s->rx_fifo)) {
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
+        } else {
+            timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
+
+            fifo32_push(&s->rx_fifo, frame->can_id);
+
+            fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
+                                               R_RXFIFO_DLC_DLC_LENGTH,
+                                               frame->can_dlc) |
+                                     deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
+                                               R_RXFIFO_DLC_RXT_LENGTH,
+                                               timestamp));
+
+            /* First 32 bit of the data. */
+            fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
+                                               R_TXFIFO_DATA1_DB3_LENGTH,
+                                               frame->data[0]) |
+                                     deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
+                                               R_TXFIFO_DATA1_DB2_LENGTH,
+                                               frame->data[1]) |
+                                     deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
+                                               R_TXFIFO_DATA1_DB1_LENGTH,
+                                               frame->data[2]) |
+                                     deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
+                                               R_TXFIFO_DATA1_DB0_LENGTH,
+                                               frame->data[3]));
+            /* Last 32 bit of the data. */
+            fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
+                                               R_TXFIFO_DATA2_DB7_LENGTH,
+                                               frame->data[4]) |
+                                     deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
+                                               R_TXFIFO_DATA2_DB6_LENGTH,
+                                               frame->data[5]) |
+                                     deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
+                                               R_TXFIFO_DATA2_DB5_LENGTH,
+                                               frame->data[6]) |
+                                     deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
+                                               R_TXFIFO_DATA2_DB4_LENGTH,
+                                               frame->data[7]));
+
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
+            trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
+                                   frame->data[0], frame->data[1],
+                                   frame->data[2], frame->data[3],
+                                   frame->data[4], frame->data[5],
+                                   frame->data[6], frame->data[7]);
+        }
+
+        can_update_irq(s);
+    }
+}
+
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    if (!fifo32_is_empty(&s->rx_fifo)) {
+        val = fifo32_pop(&s->rx_fifo);
+    } else {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
+    }
+
+    can_update_irq(s);
+    return val;
+}
+
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
+        ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
+        ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
+        ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
+    } else {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
+    }
+}
+
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+    uint32_t reg_idx = (reg->access->addr) / 4;
+    uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
+
+    /* To modify an acceptance filter, the corresponding UAF bit must be '0'. */
+    if (!(s->regs[R_AFR] & (1 << filter_number))) {
+        s->regs[reg_idx] = val;
+
+        trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
+                      " mask is not set as corresponding UAF bit is not 0.\n",
+                      path, filter_number + 1);
+    }
+
+    return s->regs[reg_idx];
+}
+
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+    uint32_t reg_idx = (reg->access->addr) / 4;
+    uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
+
+    if (!(s->regs[R_AFR] & (1 << filter_number))) {
+        s->regs[reg_idx] = val;
+
+        trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
+                      " id is not set as corresponding UAF bit is not 0.\n",
+                      path, filter_number + 1);
+    }
+
+    return s->regs[reg_idx];
+}
+
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
+
+    bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
+
+    bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
+                             (reg->access->addr == A_TXHPB_DATA2);
+
+    Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
+
+    if (!fifo32_is_full(f)) {
+        fifo32_push(f, val);
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
+    }
+
+    /* Initiate the message send if TX register is written. */
+    if (initiate_transfer &&
+        ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
+        transfer_fifo(s, f);
+    }
+
+    can_update_irq(s);
+}
+
+static const RegisterAccessInfo can_regs_info[] = {
+    { .name = "SOFTWARE_RESET_REGISTER",
+      .addr = A_SOFTWARE_RESET_REGISTER,
+      .rsvd = 0xfffffffc,
+      .pre_write = can_srr_pre_write,
+    },{ .name = "MODE_SELECT_REGISTER",
+        .addr = A_MODE_SELECT_REGISTER,
+        .rsvd = 0xfffffff8,
+        .pre_write = can_msr_pre_write,
+    },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
+        .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
+        .rsvd = 0xffffff00,
+        .pre_write = can_brpr_pre_write,
+    },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
+        .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
+        .rsvd = 0xfffffe00,
+        .pre_write = can_btr_pre_write,
+    },{ .name = "ERROR_COUNTER_REGISTER",
+        .addr = A_ERROR_COUNTER_REGISTER,
+        .rsvd = 0xffff0000,
+        .ro = 0xffffffff,
+    },{ .name = "ERROR_STATUS_REGISTER",
+        .addr = A_ERROR_STATUS_REGISTER,
+        .rsvd = 0xffffffe0,
+        .w1c = 0x1f,
+    },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
+        .reset = 0x1,
+        .rsvd = 0xffffe000,
+        .ro = 0x1fff,
+    },{ .name = "INTERRUPT_STATUS_REGISTER",
+        .addr = A_INTERRUPT_STATUS_REGISTER,
+        .reset = 0x6000,
+        .rsvd = 0xffff8000,
+        .ro = 0x7fff,
+    },{ .name = "INTERRUPT_ENABLE_REGISTER",
+        .addr = A_INTERRUPT_ENABLE_REGISTER,
+        .rsvd = 0xffff8000,
+        .post_write = can_ier_post_write,
+    },{ .name = "INTERRUPT_CLEAR_REGISTER",
+        .addr = A_INTERRUPT_CLEAR_REGISTER,
+        .rsvd = 0xffff8000,
+        .pre_write = can_icr_pre_write,
+    },{ .name = "TIMESTAMP_REGISTER",
+        .addr = A_TIMESTAMP_REGISTER,
+        .rsvd = 0xfffffffe,
+        .pre_write = can_tcr_pre_write,
+    },{ .name = "WIR", .addr = A_WIR,
+        .reset = 0x3f3f,
+        .rsvd = 0xffff0000,
+    },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
+        .rsvd = 0xfffffff,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
+        .rsvd = 0xfffffff,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
+        .post_write = can_tx_post_write,
+    },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
+        .post_write = can_tx_post_write,
+    },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
+        .ro = 0xffffffff,
+        .post_read = can_rxfifo_pre_read,
+    },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
+        .rsvd = 0xfff0000,
+        .post_read = can_rxfifo_pre_read,
+    },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
+        .post_read = can_rxfifo_pre_read,
+    },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
+        .post_read = can_rxfifo_pre_read,
+    },{ .name = "AFR", .addr = A_AFR,
+        .rsvd = 0xfffffff0,
+        .post_write = can_filter_enable_post_write,
+    },{ .name = "AFMR1", .addr = A_AFMR1,
+        .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR1", .addr = A_AFIR1,
+        .pre_write = can_filter_id_pre_write,
+    },{ .name = "AFMR2", .addr = A_AFMR2,
+        .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR2", .addr = A_AFIR2,
+        .pre_write = can_filter_id_pre_write,
+    },{ .name = "AFMR3", .addr = A_AFMR3,
+        .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR3", .addr = A_AFIR3,
+        .pre_write = can_filter_id_pre_write,
+    },{ .name = "AFMR4", .addr = A_AFMR4,
+        .pre_write = can_filter_mask_pre_write,
+    },{ .name = "AFIR4", .addr = A_AFIR4,
+        .pre_write = can_filter_id_pre_write,
+    }
+};
+
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
+{
+    /* No action required on the timer rollover. */
+}
+
+static const MemoryRegionOps can_ops = {
+    .read = register_read_memory,
+    .write = register_write_memory,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+};
+
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+    unsigned int i;
+
+    for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
+        register_reset(&s->reg_info[i]);
+    }
+
+    ptimer_transaction_begin(s->can_timer);
+    ptimer_set_count(s->can_timer, 0);
+    ptimer_transaction_commit(s->can_timer);
+}
+
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+    unsigned int i;
+
+    for (i = 0; i < R_RXFIFO_ID; ++i) {
+        register_reset(&s->reg_info[i]);
+    }
+
+    /*
+     * Reset the FIFOs when the CAN model is reset. This clears the FIFO
+     * writes done by post_write, which gets called from register_reset;
+     * the post_write handler cannot trigger a TX because the CAN core is
+     * disabled when SOFTWARE_RESET_REGISTER is cleared first.
+     */
+    fifo32_reset(&s->rx_fifo);
+    fifo32_reset(&s->tx_fifo);
+    fifo32_reset(&s->txhpb_fifo);
+}
+
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
+{
+    XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
+                                         bus_client);
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
+                      path);
+        return false;
+    }
+
+    if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
+                      " messages will be discarded.\n", path);
+        return false;
+    }
+
+    return true;
+}
+
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
+                                       const qemu_can_frame *buf, size_t buf_size) {
+    XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
+                                         bus_client);
+    const qemu_can_frame *frame = buf;
+
+    if (buf_size <= 0) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
+                      path);
+        return 0;
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
+        /* Snoop mode: just keep the data, no response back. */
+        update_rx_fifo(s, frame);
+    } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
+        /*
+         * XlnxZynqMPCAN is in sleep mode. Any data on the bus will bring it
+         * to the wake up state.
+         */
+        can_exit_sleep_mode(s);
+        update_rx_fifo(s, frame);
+    } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
+        update_rx_fifo(s, frame);
+    } else {
+        /*
+         * XlnxZynqMPCAN will not participate in normal bus communication
+         * and will not receive any messages transmitted by other CAN nodes.
+         */
+        trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
+    }
+
+    return 1;
+}
+
+static CanBusClientInfo can_xilinx_bus_client_info = {
+    .can_receive = xlnx_zynqmp_can_can_receive,
+    .receive = xlnx_zynqmp_can_receive,
+};
+
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
+                                          CanBusState *bus)
+{
+    s->bus_client.info = &can_xilinx_bus_client_info;
+
+    if (can_bus_insert_client(bus, &s->bus_client) < 0) {
+        return -1;
+    }
+    return 0;
+}
+
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
+
+    if (s->canbus) {
+        if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
+                       " failed.", path);
+            return;
+        }
+    }
+
+    /* Create RX FIFO, TXFIFO, TXHPB storage. */
+    fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
+    fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
+    fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
+
+    /* Allocate a new timer. */
+    s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
+                               PTIMER_POLICY_DEFAULT);
+
+    ptimer_transaction_begin(s->can_timer);
+
+    ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
+    ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
+    ptimer_run(s->can_timer, 0);
+    ptimer_transaction_commit(s->can_timer);
+}
+
+static void xlnx_zynqmp_can_init(Object *obj)
+{
+    XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
+    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+
+    RegisterInfoArray *reg_array;
+
+    memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
+                       XLNX_ZYNQMP_CAN_R_MAX * 4);
+    reg_array = register_init_block32(DEVICE(obj), can_regs_info,
+                                      ARRAY_SIZE(can_regs_info),
+                                      s->reg_info, s->regs,
+                                      &can_ops,
+                                      XLNX_ZYNQMP_CAN_ERR_DEBUG,
+                                      XLNX_ZYNQMP_CAN_R_MAX * 4);
+
+    memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
+    sysbus_init_mmio(sbd, &s->iomem);
+    sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
+}
+
+static const VMStateDescription vmstate_can = {
+    .name = TYPE_XLNX_ZYNQMP_CAN,
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
+        VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
+        VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
+        VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
+        VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
+        VMSTATE_END_OF_LIST(),
+    }
+};
+
+static Property xlnx_zynqmp_can_properties[] = {
+    DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
+                       CAN_DEFAULT_CLOCK),
+    DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
+                     CanBusState *),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+
+    rc->phases.enter = xlnx_zynqmp_can_reset_init;
+    rc->phases.hold = xlnx_zynqmp_can_reset_hold;
+    dc->realize = xlnx_zynqmp_can_realize;
+    device_class_set_props(dc, xlnx_zynqmp_can_properties);
+    dc->vmsd = &vmstate_can;
+}
+
+static const TypeInfo can_info = {
+    .name          = TYPE_XLNX_ZYNQMP_CAN,
+    .parent        = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(XlnxZynqMPCANState),
+    .class_init    = xlnx_zynqmp_can_class_init,
+    .instance_init = xlnx_zynqmp_can_init,
+};
+
+static void can_register_types(void)
+{
+    type_register_static(&can_info);
+}
+
+type_init(can_register_types)
diff --git a/hw/Kconfig b/hw/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
 config XLNX_ZYNQMP
     bool
     select REGISTER
+    select CAN_BUS
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/can/meson.build
+++ b/hw/net/can/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace-events
@@ -XXX,XX +XXX,XX @@
+# xlnx-zynqmp-can.c
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
--
2.20.1
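
For readers who want to exercise the model: the canbus0/canbus1 machine
properties are added by the ZCU102 board patch elsewhere in this series,
so the exact option spellings below are assumptions to check against that
patch; the can-bus and can-host-socketcan objects themselves already exist
in QEMU. A minimal sketch wiring CAN0 to a host vcan0 interface:

    # sketch only; verify option names against the board patch in this series
    qemu-system-aarch64 -M xlnx-zcu102 \
        -object can-bus,id=canbus0 \
        -machine canbus0=canbus0 \
        -object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0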

[...]
+    }
+
+    for (e = 0; e < 16 / 4; e++, mask >>= 4) {
+        uint64_t r = carry_in;
+        r += n[H4(e)];
+        r += m[H4(e)] ^ inv;
+        if (mask & 1) {
+            carry_in = r >> 32;
+        }
+        mergemask(&d[H4(e)], r, mask);
+    }
+
+    if (update_flags) {
+        /* Store C, clear NZV. */
+        env->vfp.xregs[ARM_VFP_FPSCR] &= ~FPCR_NZCV_MASK;
+        env->vfp.xregs[ARM_VFP_FPSCR] |= carry_in * FPCR_C;
+    }
+    mve_advance_vpt(env);
+}
+
+void HELPER(mve_vadc)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    bool carry_in = env->vfp.xregs[ARM_VFP_FPSCR] & FPCR_C;
+    do_vadc(env, vd, vn, vm, 0, carry_in, false);
+}
+
+void HELPER(mve_vsbc)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    bool carry_in = env->vfp.xregs[ARM_VFP_FPSCR] & FPCR_C;
+    do_vadc(env, vd, vn, vm, -1, carry_in, false);
+}
+
+
+void HELPER(mve_vadci)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    do_vadc(env, vd, vn, vm, 0, 0, true);
+}
+
+void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    do_vadc(env, vd, vn, vm, -1, 1, true);
+}
+
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
     if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
     return do_2op(s, a, fns[a->size]);
 }
 
+/*
+ * VADC and VSBC: these perform an add-with-carry or subtract-with-carry
+ * of the 32-bit elements in each lane of the input vectors, where the
+ * carry-out of each add is the carry-in of the next. The initial carry
+ * input is either fixed (0 for VADCI, 1 for VSBCI) or is from FPSCR.C
+ * (for VADC and VSBC); the carry out at the end is written back to FPSCR.C.
+ * These insns are subject to beat-wise execution. Partial execution
+ * of an I=1 (initial carry input fixed) insn which does not
+ * execute the first beat must start with the current FPSCR.NZCV
+ * value, not the fixed constant input.
+ */
+static bool trans_VADC(DisasContext *s, arg_2op *a)
+{
+    return do_2op(s, a, gen_helper_mve_vadc);
+}
+
+static bool trans_VADCI(DisasContext *s, arg_2op *a)
+{
+    if (mve_skip_first_beat(s)) {
+        return trans_VADC(s, a);
+    }
+    return do_2op(s, a, gen_helper_mve_vadci);
+}
+
+static bool trans_VSBC(DisasContext *s, arg_2op *a)
+{
+    return do_2op(s, a, gen_helper_mve_vsbc);
+}
+
+static bool trans_VSBCI(DisasContext *s, arg_2op *a)
+{
+    if (mve_skip_first_beat(s)) {
+        return trans_VSBC(s, a);
+    }
+    return do_2op(s, a, gen_helper_mve_vsbci);
+}
+
 static bool do_2op_scalar(DisasContext *s, arg_2scalar *a,
                           MVEGenTwoOpScalarFn fn)
 {
--
2.20.1
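
As a cross-check of the carry chain implemented by do_vadc() in the
fragment above, here is a stand-alone C sketch of the same idea:
each 32-bit lane's carry-out feeds the next lane's carry-in, so the
four lanes behave like one wide addition. Predication and the FPSCR
plumbing are deliberately omitted; this is an illustration, not the
QEMU helper.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of the VADC/VSBC lane-carry chain. */
    static uint32_t vadc_sketch(uint32_t d[4], const uint32_t n[4],
                                const uint32_t m[4], uint32_t inv,
                                uint64_t carry_in)
    {
        for (int e = 0; e < 4; e++) {
            uint64_t r = carry_in;
            r += n[e];
            r += m[e] ^ inv;       /* inv = 0 for VADC, 0xffffffff for VSBC */
            carry_in = r >> 32;    /* carry-out becomes the next carry-in */
            d[e] = (uint32_t)r;
        }
        return (uint32_t)carry_in; /* final carry, written to FPSCR.C */
    }

    int main(void)
    {
        const uint32_t n[4] = { 0xffffffff, 0, 0, 0 };
        const uint32_t m[4] = { 1, 0, 0, 0 };
        uint32_t d[4];
        uint32_t c = vadc_sketch(d, n, m, 0, 0);
        /* lane 0 overflows; the carry ripples into lane 1 */
        printf("d = {%u, %u, %u, %u}, carry-out = %u\n",
               d[0], d[1], d[2], d[3], c); /* prints d = {0, 1, 0, 0}, 0 */
        return 0;
    }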
From: Kunkun Jiang <jiangkunkun@huawei.com>

According to the SMMUv3 spec, the SPAN field of the Level 1 Stream Table
Descriptor is 5 bits ([4:0]).

Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
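
The one-line fix below is easy to misread, so here is a self-contained
sketch of the failure mode: with only four bits extracted, any SPAN
value of 16 or more is silently truncated. extract32() is reimplemented
here just for the example; in QEMU it lives in include/qemu/bitops.h
with the same signature.

    #include <stdint.h>
    #include <stdio.h>

    /* extract 'length' bits of 'value' starting at bit 'start' */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t word0 = 0x10;  /* a descriptor whose SPAN field is 16 */
        printf("4-bit decode: %u\n", extract32(word0, 0, 4)); /* 0: wrong */
        printf("5-bit decode: %u\n", extract32(word0, 0, 5)); /* 16: correct */
        return 0;
    }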
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
     return hi << 32 | lo;
 }
 
-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))
 
 #endif
--
2.20.1


Implement the MVE VCADD insn, which performs a complex add with
rotate. Note that the size=0b11 encoding is VSBC.

The architecture grants some leeway for the "destination and Vm
source overlap" case for the size MO_32 case, but we choose not to
make use of it, instead always calculating all 16 bytes worth of
results before setting the destination register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-42-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  9 +++++++--
 target/arm/mve_helper.c    | 29 +++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  7 +++++++
 4 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vadci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vsbc, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vsbci, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vcadd90b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd90h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd90w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 
 VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
-VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
 VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
-VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+
+{
+  VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+  VSBCI 1111 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+  VCADD90 1111 1110 0 . .. ... 0 ... 0 1111 . 0 . 0 ... 0 @2op
+  VCADD270 1111 1110 0 . .. ... 0 ... 1 1111 . 0 . 0 ... 0 @2op
+}
 
 # Vector miscellaneous
 
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
     do_vadc(env, vd, vn, vm, -1, 1, true);
 }
 
+#define DO_VCADD(OP, ESIZE, TYPE, FN0, FN1) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, void *vm) \
+    { \
+        TYPE *d = vd, *n = vn, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE r[16 / ESIZE]; \
+        /* Calculate all results first to avoid overwriting inputs */ \
+        for (e = 0; e < 16 / ESIZE; e++) { \
+            if (!(e & 1)) { \
+                r[e] = FN0(n[H##ESIZE(e)], m[H##ESIZE(e + 1)]); \
+            } else { \
+                r[e] = FN1(n[H##ESIZE(e)], m[H##ESIZE(e - 1)]); \
+            } \
+        } \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            mergemask(&d[H##ESIZE(e)], r[e], mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+#define DO_VCADD_ALL(OP, FN0, FN1) \
+    DO_VCADD(OP##b, 1, int8_t, FN0, FN1) \
+    DO_VCADD(OP##h, 2, int16_t, FN0, FN1) \
+    DO_VCADD(OP##w, 4, int32_t, FN0, FN1)
+
+DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
+DO_VCADD_ALL(vcadd270, DO_ADD, DO_SUB)
+
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
     if (val > max) {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VQRDMLSDH, vqrdmlsdh)
 DO_2OP(VQRDMLSDHX, vqrdmlsdhx)
 DO_2OP(VRHADD_S, vrhadds)
 DO_2OP(VRHADD_U, vrhaddu)
+/*
+ * VCADD Qd == Qm at size MO_32 is UNPREDICTABLE; we choose not to diagnose
+ * so we can reuse the DO_2OP macro. (Our implementation calculates the
+ * "expected" results in this case.)
+ */
+DO_2OP(VCADD90, vcadd90)
+DO_2OP(VCADD270, vcadd270)
 
 static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
 {
--
2.20.1
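
To make the even/odd lane pairing of the DO_VCADD macro concrete, here
is a scalar sketch of one beat over 32-bit lanes. Illustration only: it
ignores predication and beat-wise execution, and the function names are
invented for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* rot 90:  even lane = n[e] - m[e+1], odd lane = n[e] + m[e-1]
     * rot 270: even lane = n[e] + m[e+1], odd lane = n[e] - m[e-1]
     */
    static void vcadd_sketch(int32_t d[4], const int32_t n[4],
                             const int32_t m[4], int rot270)
    {
        int32_t r[4]; /* compute all results first, so d == m is safe */
        for (int e = 0; e < 4; e++) {
            if (!(e & 1)) {
                r[e] = rot270 ? n[e] + m[e + 1] : n[e] - m[e + 1];
            } else {
                r[e] = rot270 ? n[e] - m[e - 1] : n[e] + m[e - 1];
            }
        }
        for (int e = 0; e < 4; e++) {
            d[e] = r[e];
        }
    }

    int main(void)
    {
        const int32_t n[4] = { 1, 2, 3, 4 }, m[4] = { 10, 20, 30, 40 };
        int32_t d[4];
        vcadd_sketch(d, n, m, 0);
        printf("VCADD90: %d %d %d %d\n", d[0], d[1], d[2], d[3]);
        /* prints: VCADD90: -19 12 -37 34 */
        return 0;
    }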
In v8.1M the PXN architecture extension adds a new PXN bit to the
MPU_RLAR registers, which forbids execution of code in the region
from a privileged mode.

This is another feature which is just in the generic "in v8.1M" set
and has no ID register field indicating its presence.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
     } else {
         uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
         uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
+        bool pxn = false;
+
+        if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+            pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
+        }
 
         if (m_is_system_region(env, address)) {
             /* System space is always execute never */
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
         }
 
         *prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
-        if (*prot && !xn) {
+        if (*prot && !xn && !(pxn && !is_user)) {
             *prot |= PAGE_EXEC;
         }
         /* We don't need to look the attribute up in the MAIR0/MAIR1
--
2.20.1


Implement the MVE VHCADD insn, which is similar to VCADD
but performs a halving step. This one overlaps with VADC.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-43-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    | 8 ++++++++
 target/arm/mve.decode      | 8 ++++++--
 target/arm/mve_helper.c    | 2 ++
 target/arm/translate-mve.c | 4 +++-
 4 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vhcadd90b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd90h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd90w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
+DEF_HELPER_FLAGS_4(mve_vhcadd270b, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd270h, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vhcadd270w, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vadd_scalarb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vadd_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULLT 111 . 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 1 @2op_sz28
 VRHADD_S 111 0 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 VRHADD_U 111 1 1111 0 . .. ... 0 ... 0 0001 . 1 . 0 ... 0 @2op
 
-VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
-VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+{
+  VADC 1110 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
+  VADCI 1110 1110 0 . 11 ... 0 ... 1 1111 . 0 . 0 ... 0 @2op_nosz
+  VHCADD90 1110 1110 0 . .. ... 0 ... 0 1111 . 0 . 0 ... 0 @2op
+  VHCADD270 1110 1110 0 . .. ... 0 ... 1 1111 . 0 . 0 ... 0 @2op
+}
 
 {
   VSBC 1111 1110 0 . 11 ... 0 ... 0 1111 . 0 . 0 ... 0 @2op_nosz
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vsbci)(CPUARMState *env, void *vd, void *vn, void *vm)
 
 DO_VCADD_ALL(vcadd90, DO_SUB, DO_ADD)
 DO_VCADD_ALL(vcadd270, DO_ADD, DO_SUB)
+DO_VCADD_ALL(vhcadd90, do_vhsub_s, do_vhadd_s)
+DO_VCADD_ALL(vhcadd270, do_vhadd_s, do_vhsub_s)
 
 static inline int32_t do_sat_bhw(int64_t val, int64_t min, int64_t max, bool *s)
 {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP(VRHADD_U, vrhaddu)
 /*
  * VCADD Qd == Qm at size MO_32 is UNPREDICTABLE; we choose not to diagnose
  * so we can reuse the DO_2OP macro. (Our implementation calculates the
- * "expected" results in this case.)
+ * "expected" results in this case.) Similarly for VHCADD.
  */
 DO_2OP(VCADD90, vcadd90)
 DO_2OP(VCADD270, vcadd270)
+DO_2OP(VHCADD90, vhcadd90)
+DO_2OP(VHCADD270, vhcadd270)
 
 static bool trans_VQDMULLB(DisasContext *s, arg_2op *a)
 {
--
2.20.1
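
The halving step can be seen in isolation in a small sketch. QEMU's
do_vhadd_s/do_vhsub_s do the arithmetic at double width before the
shift, so there is no intermediate overflow; hadd/hsub below are
stand-ins for the example, not the QEMU helpers.

    #include <stdint.h>
    #include <stdio.h>

    static int32_t hadd(int32_t a, int32_t b)
    {
        return (int32_t)(((int64_t)a + b) >> 1);
    }

    static int32_t hsub(int32_t a, int32_t b)
    {
        return (int32_t)(((int64_t)a - b) >> 1);
    }

    int main(void)
    {
        /* VHCADD90 on one lane pair: even = hsub(n0, m1), odd = hadd(n1, m0) */
        printf("%d %d\n", hsub(1, 20), hadd(2, 10)); /* prints: -10 6 */
        return 0;
    }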
In commit 077d7449100d824a4 we added code to handle the v8M
requirement that returns from NMI or HardFault forcibly deactivate
those exceptions regardless of what interrupt the guest is trying to
deactivate. Unfortunately this broke the handling of the "illegal
exception return because the returning exception number is not
active" check for those cases. In the pseudocode this test is done
on the exception the guest asks to return from, but because our
implementation was doing this in armv7m_nvic_complete_irq() after the
new "deactivate NMI/HardFault regardless" code we ended up doing the
test on the VecInfo for that exception instead, which usually meant
failing to raise the illegal exception return fault.

In the case for "configurable exception targeting the opposite
security state" we detected the illegal-return case but went ahead
and deactivated the VecInfo anyway, which is wrong because that is
the VecInfo for the other security state.

Rearrange the code so that we first identify the illegal return
cases, then see if we really need to deactivate NMI or HardFault
instead, and finally do the deactivation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-25-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 59 +++++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 27 deletions(-)
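
As a reading aid, a condensed sketch of the reordered flow (this is
not the committed code: the NMI/HardFault redirection step in the
middle is elided, and the names are simply those used in the hunk
below):

    int nvic_complete_irq_sketch(NVICState *s, int irq, bool secure)
    {
        VecInfo *vec = (secure && exc_is_banked(irq)) ? &s->sec_vectors[irq]
                                                      : &s->vectors[irq];
        int ret;

        /* Step 1: classify the return, looking at the requested exception */
        if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
            ret = -1;    /* illegal: opposite security state; leave vec alone */
            vec = NULL;
        } else if (!vec->active) {
            ret = -1;    /* illegal: returning from an inactive exception */
        } else {
            ret = nvic_rettobase(s);
        }

        /* Step 2: possibly redirect vec to NMI/HardFault instead (elided) */

        /* Step 3: only now deactivate whichever vector we ended up with */
        if (vec) {
            vec->active = 0;
        }
        return ret;
    }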

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
 {
     NVICState *s = (NVICState *)opaque;
     VecInfo *vec = NULL;
-    int ret;
+    int ret = 0;

     assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);

+    trace_nvic_complete_irq(irq, secure);
+
+    if (secure && exc_is_banked(irq)) {
+        vec = &s->sec_vectors[irq];
+    } else {
+        vec = &s->vectors[irq];
+    }
+
+    /*
+     * Identify illegal exception return cases. We can't immediately
+     * return at this point because we still need to deactivate
+     * (either this exception or NMI/HardFault) first.
+     */
+    if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
+        /*
+         * Return from a configurable exception targeting the opposite
+         * security state from the one we're trying to complete it for.
+         * Clear vec because it's not really the VecInfo for this
+         * (irq, secstate) so we mustn't deactivate it.
+         */
+        ret = -1;
+        vec = NULL;
+    } else if (!vec->active) {
+        /* Return from an inactive interrupt */
+        ret = -1;
+    } else {
+        /* Legal return, we will return the RETTOBASE bit value to the caller */
+        ret = nvic_rettobase(s);
+    }
+
     /*
      * For negative priorities, v8M will forcibly deactivate the appropriate
      * NMI or HardFault regardless of what interrupt we're being asked to
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
     }

     if (!vec) {
-        if (secure && exc_is_banked(irq)) {
-            vec = &s->sec_vectors[irq];
-        } else {
-            vec = &s->vectors[irq];
-        }
-    }
-
-    trace_nvic_complete_irq(irq, secure);
-
-    if (!vec->active) {
-        /* Tell the caller this was an illegal exception return */
-        return -1;
-    }
-
-    /*
-     * If this is a configurable exception and it is currently
-     * targeting the opposite security state from the one we're trying
-     * to complete it for, this counts as an illegal exception return.
-     * We still need to deactivate whatever vector the logic above has
-     * selected, though, as it might not be the same as the one for the
-     * requested exception number.
-     */
-    if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
-        ret = -1;
-    } else {
-        ret = nvic_rettobase(s);
+        return ret;
     }

     vec->active = 0;
--
2.20.1

Implement the MVE VADDV insn, which performs an addition
across vector lanes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-44-peter.maydell@linaro.org
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  2 ++
 target/arm/mve_helper.c    | 24 +++++++++++++++++++++
 target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+)
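
As a reading aid: VADDV folds the predicated lanes of Qm into a
general-purpose accumulator. A sketch of the semantics for the
unsigned byte case (the function name and the mask parameter, which
stands in for the MVE predicate mask, are illustrative, not QEMU
code):

    uint32_t vaddv_u8_sketch(const uint8_t qm[16], uint16_t mask, uint32_t rda)
    {
        int e;

        for (e = 0; e < 16; e++, mask >>= 1) {
            if (mask & 1) {          /* lane is predicated in */
                rda += qm[e];
            }
        }
        return rda;
    }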

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)

 DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
+
+DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
 VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

+# Vector add across vector
+VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo

 # Predicate operations
 %mask_22_13 22:1 13:3
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64

 DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
 DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
+
+/* Vector add across vector */
+#define DO_VADDV(OP, ESIZE, TYPE) \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
+                                    uint32_t ra) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *m = vm; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                ra += m[H##ESIZE(e)]; \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return ra; \
+    } \
+
+DO_VADDV(vaddvsb, 1, uint8_t)
+DO_VADDV(vaddvsh, 2, uint16_t)
+DO_VADDV(vaddvsw, 4, uint32_t)
+DO_VADDV(vaddvub, 1, uint8_t)
+DO_VADDV(vaddvuh, 2, uint16_t)
+DO_VADDV(vaddvuw, 4, uint32_t)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
     mve_update_and_store_eci(s);
     return true;
 }
+
+static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
+{
+    /* VADDV: vector add across vector */
+    static MVEGenVADDVFn * const fns[4][2] = {
+        { gen_helper_mve_vaddvsb, gen_helper_mve_vaddvub },
+        { gen_helper_mve_vaddvsh, gen_helper_mve_vaddvuh },
+        { gen_helper_mve_vaddvsw, gen_helper_mve_vaddvuw },
+        { NULL, NULL }
+    };
+    TCGv_ptr qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        a->size == 3) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * This insn is subject to beat-wise execution. Partial execution
+     * of an A=0 (no-accumulate) insn which does not execute the first
+     * beat must start with the current value of Rda, not zero.
+     */
+    if (a->a || mve_skip_first_beat(s)) {
+        /* Accumulate input from Rda */
+        rda = load_reg(s, a->rda);
+    } else {
+        /* Accumulate starting at zero */
+        rda = tcg_const_i32(0);
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    fns[a->size][a->u](rda, cpu_env, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+
+    mve_update_eci(s);
+    return true;
+}
--
2.20.1

From: Havard Skinnemoen <hskinnemoen@google.com>

Dump the collected random data after a randomness test failure.

Note that this relies on the test having called
g_test_set_nonfatal_assertions() so we don't abort immediately on the
assertion failure.

Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: minor commit message tweak]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
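
For context, the pattern the note above relies on looks like this in
a glib test harness (a sketch, not this file's actual main()):

    #include <glib.h>

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        /* A failed g_assert_cmpfloat() then marks the test failed but
         * keeps running, so dump_buf_if_failed() still gets to run.
         */
        g_test_set_nonfatal_assertions();
        /* ... g_test_add_func(...) registrations ... */
        return g_test_run();
    }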

diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/npcm7xx_rng-test.c
+++ b/tests/qtest/npcm7xx_rng-test.c
@@ -XXX,XX +XXX,XX @@

 #include "libqtest-single.h"
 #include "qemu/bitops.h"
+#include "qemu-common.h"

 #define RNG_BASE_ADDR 0xf000b000

@@ -XXX,XX +XXX,XX @@
 /* Number of bits to collect for randomness tests. */
 #define TEST_INPUT_BITS (128)

+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
+{
+    if (g_test_failed()) {
+        qemu_hexdump(stderr, "", buf, size);
+    }
+}
+
 static void rng_writeb(unsigned int offset, uint8_t value)
 {
     writeb(RNG_BASE_ADDR + offset, value);
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
     }

     g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
+    dump_buf_if_failed(buf, sizeof(buf));
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
     }

     g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
+    dump_buf_if_failed(buf.c, sizeof(buf));
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
     }

     g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
+    dump_buf_if_failed(buf, sizeof(buf));
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
     }

     g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
+    dump_buf_if_failed(buf.c, sizeof(buf));
 }

 int main(int argc, char **argv)
--
2.20.1

In a CPU with MVE, the VMOV (vector lane to general-purpose register)
and VMOV (general-purpose register to vector lane) insns are not
predicated, but they are subject to beatwise execution if they
are not in an IT block.

Since our implementation always executes all 4 beats in one tick,
this means only that we need to handle PSR.ECI:
 * we must do the usual check for bad ECI state
 * we must advance ECI state if the insn succeeds
 * if ECI says we should not be executing the beat corresponding
   to the lane of the vector register being accessed then we
   should skip performing the move

Note that if PSR.ECI is non-zero then we cannot be in an IT block.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210617121628.20116-45-peter.maydell@linaro.org
---
 target/arm/translate-a32.h |  2 +
 target/arm/translate-mve.c |  4 +-
 target/arm/translate-vfp.c | 77 +++++++++++++++++++++++++++++++++++---
 3 files changed, 75 insertions(+), 8 deletions(-)
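
As a reading aid: each beat covers 4 bytes of the 16-byte Qn
register, so the move must be skipped when the accessed lane lies in
a beat that ECI reports as already executed (ECI_A0 = 1 beat done,
ECI_A0A1 = 2, ECI_A0A1A2 and ECI_A0A1A2B0 = 3). A sketch of that
decision, with an illustrative beats_done parameter standing in for
the decoded ECI value used by the new helper below:

    static bool skip_vmov_sketch(int beats_done, int index, int size, int vn)
    {
        /* Byte offset of the accessed lane within the Q register */
        int ofs = (index << size) + ((vn & 1) * 8);

        return ofs < beats_done * 4;  /* lane was handled in an earlier beat */
    }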

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ long neon_full_reg_offset(unsigned reg);
 long neon_element_offset(int reg, int element, MemOp memop);
 void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
 void clear_eci_state(DisasContext *s);
+bool mve_eci_check(DisasContext *s);
+void mve_update_and_store_eci(DisasContext *s);

 static inline TCGv_i32 load_cpu_offset(int offset)
 {
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool mve_check_qreg_bank(DisasContext *s, int qmask)
     return qmask < 8;
 }

-static bool mve_eci_check(DisasContext *s)
+bool mve_eci_check(DisasContext *s)
 {
     /*
      * This is a beatwise insn: check that ECI is valid (not a
@@ -XXX,XX +XXX,XX @@ static void mve_update_eci(DisasContext *s)
     }
 }

-static void mve_update_and_store_eci(DisasContext *s)
+void mve_update_and_store_eci(DisasContext *s)
 {
     /*
      * For insns which don't call a helper function that will call
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
     return true;
 }

+static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
+{
+    /*
+     * In a CPU with MVE, the VMOV (vector lane to general-purpose register)
+     * and VMOV (general-purpose register to vector lane) insns are not
+     * predicated, but they are subject to beatwise execution if they are
+     * not in an IT block.
+     *
+     * Since our implementation always executes all 4 beats in one tick,
+     * this means only that if PSR.ECI says we should not be executing
+     * the beat corresponding to the lane of the vector register being
+     * accessed then we should skip performing the move, and that we need
+     * to do the usual check for bad ECI state and advance of ECI state.
+     *
+     * Note that if PSR.ECI is non-zero then we cannot be in an IT block.
+     *
+     * Return true if this VMOV scalar <-> gpreg should be skipped because
+     * the MVE PSR.ECI state says we skip the beat where the store happens.
+     */
+
+    /* Calculate the byte offset into Qn which we're going to access */
+    int ofs = (index << size) + ((vn & 1) * 8);
+
+    if (!dc_isar_feature(aa32_mve, s)) {
+        return false;
+    }
+
+    switch (s->eci) {
+    case ECI_NONE:
+        return false;
+    case ECI_A0:
+        return ofs < 4;
+    case ECI_A0A1:
+        return ofs < 8;
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        return ofs < 12;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
 {
     /* VMOV scalar to general purpose register */
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
         return false;
     }

+    if (dc_isar_feature(aa32_mve, s)) {
+        if (!mve_eci_check(s)) {
+            return true;
+        }
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }

-    tmp = tcg_temp_new_i32();
-    read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
-    store_reg(s, a->rt, tmp);
+    if (!mve_skip_vmov(s, a->vn, a->index, a->size)) {
+        tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, a->vn, a->index,
+                            a->size | (a->u ? 0 : MO_SIGN));
+        store_reg(s, a->rt, tmp);
+    }

+    if (dc_isar_feature(aa32_mve, s)) {
+        mve_update_and_store_eci(s);
+    }
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
         return false;
     }

+    if (dc_isar_feature(aa32_mve, s)) {
+        if (!mve_eci_check(s)) {
+            return true;
+        }
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }

-    tmp = load_reg(s, a->rt);
-    write_neon_element32(tmp, a->vn, a->index, a->size);
-    tcg_temp_free_i32(tmp);
+    if (!mve_skip_vmov(s, a->vn, a->index, a->size)) {
+        tmp = load_reg(s, a->rt);
+        write_neon_element32(tmp, a->vn, a->index, a->size);
+        tcg_temp_free_i32(tmp);
+    }

+    if (dc_isar_feature(aa32_mve, s)) {
+        mve_update_and_store_eci(s);
+    }
     return true;
 }

--
2.20.1

1
From: Alex Chen <alex.chen@huawei.com>
1
From: Peter Collingbourne <pcc@google.com>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
MTE3 introduces an asymmetric tag checking mode, in which loads are
4
argument of type "unsigned int".
4
checked synchronously and stores are checked asynchronously. Add
5
support for it.
5
6
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Signed-off-by: Peter Collingbourne <pcc@google.com>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
9
Message-id: 20210616195614.11785-1-pcc@google.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
[PMM: Add line to emulation.rst]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
hw/misc/imx6_ccm.c | 20 ++++++++++----------
13
docs/system/arm/emulation.rst | 1 +
13
hw/misc/imx6_src.c | 2 +-
14
target/arm/cpu64.c | 2 +-
14
2 files changed, 11 insertions(+), 11 deletions(-)
15
target/arm/mte_helper.c | 82 ++++++++++++++++++++++-------------
16
3 files changed, 53 insertions(+), 32 deletions(-)
15
17
16
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
18
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx6_ccm.c
20
--- a/docs/system/arm/emulation.rst
19
+++ b/hw/misc/imx6_ccm.c
21
+++ b/docs/system/arm/emulation.rst
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
22
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
case CCM_CMEOR:
23
- FEAT_LSE (Large System Extensions)
22
return "CMEOR";
24
- FEAT_MTE (Memory Tagging Extension)
23
default:
25
- FEAT_MTE2 (Memory Tagging Extension)
24
- sprintf(unknown, "%d ?", reg);
26
+- FEAT_MTE3 (MTE Asymmetric Fault Handling)
25
+ sprintf(unknown, "%u ?", reg);
27
- FEAT_PAN (Privileged access never)
26
return unknown;
28
- FEAT_PAN2 (AT S1E1R and AT S1E1W instruction variants affected by PSTATE.PAN)
29
- FEAT_PAuth (Pointer authentication)
30
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu64.c
33
+++ b/target/arm/cpu64.c
34
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
35
* during realize if the board provides no tag memory, much like
36
* we do for EL2 with the virtualization=on property.
37
*/
38
- t = FIELD_DP64(t, ID_AA64PFR1, MTE, 2);
39
+ t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
40
cpu->isar.id_aa64pfr1 = t;
41
42
t = cpu->isar.id_aa64mmfr0;
43
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/mte_helper.c
46
+++ b/target/arm/mte_helper.c
47
@@ -XXX,XX +XXX,XX @@ void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
27
}
48
}
28
}
49
}
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
50
30
case USB_ANALOG_DIGPROG:
51
+static void mte_sync_check_fail(CPUARMState *env, uint32_t desc,
31
return "USB_ANALOG_DIGPROG";
52
+ uint64_t dirty_ptr, uintptr_t ra)
32
default:
53
+{
33
- sprintf(unknown, "%d ?", reg);
54
+ int is_write, syn;
34
+ sprintf(unknown, "%u ?", reg);
55
+
35
return unknown;
56
+ env->exception.vaddress = dirty_ptr;
36
}
57
+
37
}
58
+ is_write = FIELD_EX32(desc, MTEDESC, WRITE);
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
59
+ syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0, is_write,
39
freq *= 20;
60
+ 0x11);
40
}
61
+ raise_exception_ra(env, EXCP_DATA_ABORT, syn, exception_target_el(env), ra);
41
62
+ g_assert_not_reached();
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
63
+}
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
64
+
44
65
+static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr,
45
return freq;
66
+ uintptr_t ra, ARMMMUIdx arm_mmu_idx, int el)
46
}
67
+{
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
68
+ int select;
48
freq = imx6_analog_get_pll2_clk(dev) * 18
69
+
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
70
+ if (regime_has_2_ranges(arm_mmu_idx)) {
50
71
+ select = extract64(dirty_ptr, 55, 1);
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
72
+ } else {
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
73
+ select = 0;
53
74
+ }
54
return freq;
75
+ env->cp15.tfsr_el[el] |= 1 << select;
55
}
76
+#ifdef CONFIG_USER_ONLY
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
77
+ /*
57
freq = imx6_analog_get_pll2_clk(dev) * 18
78
+ * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
79
+ * which then sends a SIGSEGV when the thread is next scheduled.
59
80
+ * This cpu will return to the main loop at the end of the TB,
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
81
+ * which is rather sooner than "normal". But the alternative
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
82
+ * is waiting until the next syscall.
62
83
+ */
63
return freq;
84
+ qemu_cpu_kick(env_cpu(env));
64
}
85
+#endif
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
86
+}
87
+
88
/* Record a tag check failure. */
89
static void mte_check_fail(CPUARMState *env, uint32_t desc,
90
uint64_t dirty_ptr, uintptr_t ra)
91
{
92
int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
93
ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx);
94
- int el, reg_el, tcf, select, is_write, syn;
95
+ int el, reg_el, tcf;
96
uint64_t sctlr;
97
98
reg_el = regime_el(env, arm_mmu_idx);
99
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
100
switch (tcf) {
101
case 1:
102
/* Tag check fail causes a synchronous exception. */
103
- env->exception.vaddress = dirty_ptr;
104
-
105
- is_write = FIELD_EX32(desc, MTEDESC, WRITE);
106
- syn = syn_data_abort_no_iss(arm_current_el(env) != 0, 0, 0, 0, 0,
107
- is_write, 0x11);
108
- raise_exception_ra(env, EXCP_DATA_ABORT, syn,
109
- exception_target_el(env), ra);
110
- /* noreturn, but fall through to the assert anyway */
111
+ mte_sync_check_fail(env, desc, dirty_ptr, ra);
112
+ break;
113
114
case 0:
115
/*
116
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
117
118
case 2:
119
/* Tag check fail causes asynchronous flag set. */
120
- if (regime_has_2_ranges(arm_mmu_idx)) {
121
- select = extract64(dirty_ptr, 55, 1);
122
- } else {
123
- select = 0;
124
- }
125
- env->cp15.tfsr_el[el] |= 1 << select;
126
-#ifdef CONFIG_USER_ONLY
127
- /*
128
- * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
129
- * which then sends a SIGSEGV when the thread is next scheduled.
130
- * This cpu will return to the main loop at the end of the TB,
131
- * which is rather sooner than "normal". But the alternative
132
- * is waiting until the next syscall.
133
- */
134
- qemu_cpu_kick(env_cpu(env));
135
-#endif
136
+ mte_async_check_fail(env, dirty_ptr, ra, arm_mmu_idx, el);
66
break;
137
break;
67
}
138
68
139
- default:
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
140
- /* Case 3: Reserved. */
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
141
- qemu_log_mask(LOG_GUEST_ERROR,
71
142
- "Tag check failure with SCTLR_EL%d.TCF%s "
72
return freq;
143
- "set to reserved value %d\n",
73
}
144
- reg_el, el ? "" : "0", tcf);
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
145
+ case 3:
75
freq = imx6_analog_get_periph_clk(dev)
146
+ /*
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
147
+ * Tag check fail causes asynchronous flag set for stores, or
77
148
+ * a synchronous exception for loads.
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
149
+ */
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
150
+ if (FIELD_EX32(desc, MTEDESC, WRITE)) {
80
151
+ mte_async_check_fail(env, dirty_ptr, ra, arm_mmu_idx, el);
81
return freq;
152
+ } else {
82
}
153
+ mte_sync_check_fail(env, desc, dirty_ptr, ra);
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
154
+ }
84
freq = imx6_ccm_get_ahb_clk(dev)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
86
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
89
90
return freq;
91
}
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
93
freq = imx6_ccm_get_ipg_clk(dev)
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
95
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
98
99
return freq;
100
}
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
102
break;
155
break;
103
}
104
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
107
108
return freq;
109
}
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/hw/misc/imx6_src.c
113
+++ b/hw/misc/imx6_src.c
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
115
case SRC_GPR10:
116
return "SRC_GPR10";
117
default:
118
- sprintf(unknown, "%d ?", reg);
119
+ sprintf(unknown, "%u ?", reg);
120
return unknown;
121
}
156
}
122
}
157
}
123
--
158
--
124
2.20.1
159
2.20.1
125
160
126
161
From: Vikram Garhwal <fnu.vikram@xilinx.com>

Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c

 Devices
 -------
+Xilinx CAN
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
+S: Maintained
+F: hw/net/can/xlnx-*
+F: include/hw/net/xlnx-*
+F: tests/qtest/xlnx-can-test*
+
 EDU
 M: Jiri Slaby <jslaby@suse.cz>
 S: Maintained
--
2.20.1

From: Alexandre Iooss <erdnaxe@crans.org>

This adds the target guide for BBC Micro:bit.

Information is taken from https://wiki.qemu.org/Features/MicroBit
and from hw/arm/nrf51_soc.c.

Signed-off-by: Alexandre Iooss <erdnaxe@crans.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20210621075625.540471-1-erdnaxe@crans.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nrf.rst    | 51 ++++++++++++++++++++++++++++++++++++++
 docs/system/target-arm.rst |  1 +
 MAINTAINERS                |  1 +
 3 files changed, 53 insertions(+)
 create mode 100644 docs/system/arm/nrf.rst

diff --git a/docs/system/arm/nrf.rst b/docs/system/arm/nrf.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/nrf.rst
@@ -XXX,XX +XXX,XX @@
+Nordic nRF boards (``microbit``)
+================================
+
+The `Nordic nRF`_ chips are a family of ARM-based System-on-Chips
+designed for low-power and short-range wireless solutions.
+
+.. _Nordic nRF: https://www.nordicsemi.com/Products
+
+The nRF51 series is the first series for short-range wireless applications.
+It is superseded by the nRF52 series.
+The following machines are based on this chip:
+
+- ``microbit`` BBC micro:bit board with nRF51822 SoC
+
+There are other series such as nRF52, nRF53 and nRF91, which are currently
+not supported by QEMU.
+
+Supported devices
+-----------------
+
+ * ARM Cortex-M0 (ARMv6-M)
+ * Serial ports (UART)
+ * Clock controller
+ * Timers
+ * Random Number Generator (RNG)
+ * GPIO controller
+ * NVMC
+ * SWI
+
+Missing devices
+---------------
+
+ * Watchdog
+ * Real-Time Clock (RTC) controller
+ * TWI (i2c)
+ * SPI controller
+ * Analog to Digital Converter (ADC)
+ * Quadrature decoder
+ * Radio
+
+Boot options
+------------
+
+The Micro:bit machine can be started using the ``-device`` option to load a
+firmware in `ihex format`_. Example:
+
+.. _ihex format: https://en.wikipedia.org/wiki/Intel_HEX
+
+.. code-block:: bash
+
+  $ qemu-system-arm -M microbit -device loader,file=test.hex
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
    arm/digic
    arm/musicpal
    arm/gumstix
+   arm/nrf
    arm/nseries
    arm/nuvoton
    arm/orangepi
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/*/microbit*.c
 F: include/hw/*/nrf51*.h
 F: include/hw/*/microbit*.h
 F: tests/qtest/microbit-test.c
+F: docs/system/arm/nrf.rst

 AVR Machines
 ------------
--
2.20.1