First pullreq for 6.0: mostly my v8.1M work, plus some other
bits and pieces. (I still have a lot of stuff in my to-review
folder, which I may or may not get to before the Christmas break...)

thanks
-- PMM

The following changes since commit 5e7b204dbfae9a562fc73684986f936b97f63877:

  Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging (2020-12-09 20:08:54 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201210

for you to fetch changes up to 71f916be1c7e9ede0e37d9cabc781b5a9e8638ff:

  hw/arm/armv7m: Correct typo in QOM object name (2020-12-10 11:44:56 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Fix up L1STD_SPAN decoding
 * xlnx-zynqmp: Support Xilinx ZynqMP CAN controllers
 * sbsa-ref: allow to use Cortex-A53/57/72 cpus
 * Various minor code cleanups
 * hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
 * Implement more pieces of ARMv8.1M support

----------------------------------------------------------------
Alex Chen (4):
      i.MX25: Fix bad printf format specifiers
      i.MX31: Fix bad printf format specifiers
      i.MX6: Fix bad printf format specifiers
      i.MX6ul: Fix bad printf format specifiers

Havard Skinnemoen (1):
      tests/qtest/npcm7xx_rng-test: dump random data on failure

Kunkun Jiang (1):
      hw/arm/smmuv3: Fix up L1STD_SPAN decoding

Marcin Juszkiewicz (1):
      sbsa-ref: allow to use Cortex-A53/57/72 cpus

Peter Maydell (25):
      hw/intc/armv7m_nvic: Make all of system PPB range be RAZWI/BusFault
      target/arm: Implement v8.1M PXN extension
      target/arm: Don't clobber ID_PFR1.Security on M-profile cores
      target/arm: Implement VSCCLRM insn
      target/arm: Implement CLRM instruction
      target/arm: Enforce M-profile VMRS/VMSR register restrictions
      target/arm: Refactor M-profile VMSR/VMRS handling
      target/arm: Move general-use constant expanders up in translate.c
      target/arm: Implement VLDR/VSTR system register
      target/arm: Implement M-profile FPSCR_nzcvqc
      target/arm: Use new FPCR_NZCV_MASK constant
      target/arm: Factor out preserve-fp-state from full_vfp_access_check()
      target/arm: Implement FPCXT_S fp system register
      hw/intc/armv7m_nvic: Update FPDSCR masking for v8.1M
      target/arm: For v8.1M, always clear R0-R3, R12, APSR, EPSR on exception entry
      target/arm: In v8.1M, don't set HFSR.FORCED on vector table fetch failures
      target/arm: Implement v8.1M REVIDR register
      target/arm: Implement new v8.1M NOCP check for exception return
      target/arm: Implement new v8.1M VLLDM and VLSTM encodings
      hw/intc/armv7m_nvic: Support v8.1M CCR.TRD bit
      target/arm: Implement CCR_S.TRD behaviour for SG insns
      hw/intc/armv7m_nvic: Fix "return from inactive handler" check
      target/arm: Implement M-profile "minimal RAS implementation"
      hw/intc/armv7m_nvic: Implement read/write for RAS register block
      hw/arm/armv7m: Correct typo in QOM object name

Vikram Garhwal (4):
      hw/net/can: Introduce Xilinx ZynqMP CAN controller
      xlnx-zynqmp: Connect Xilinx ZynqMP CAN controllers
      tests/qtest: Introduce tests for Xilinx ZynqMP CAN controller
      MAINTAINERS: Add maintainer entry for Xilinx ZynqMP CAN controller

 meson.build                      |    1 +
 hw/arm/smmuv3-internal.h         |    2 +-
 hw/net/can/trace.h               |    1 +
 include/hw/arm/xlnx-zynqmp.h     |    8 +
 include/hw/intc/armv7m_nvic.h    |    2 +
 include/hw/net/xlnx-zynqmp-can.h |   78 +++
 target/arm/cpu.h                 |   46 ++
 target/arm/m-nocp.decode         |   10 +-
 target/arm/t32.decode            |   10 +-
 target/arm/vfp.decode            |   14 +
 hw/arm/armv7m.c                  |    4 +-
 hw/arm/sbsa-ref.c                |   23 +-
 hw/arm/xlnx-zcu102.c             |   20 +
 hw/arm/xlnx-zynqmp.c             |   34 ++
 hw/intc/armv7m_nvic.c            |  246 ++++--
 hw/misc/imx25_ccm.c              |   12 +-
 hw/misc/imx31_ccm.c              |   14 +-
 hw/misc/imx6_ccm.c               |   20 +-
 hw/misc/imx6_src.c               |    2 +-
 hw/misc/imx6ul_ccm.c             |    4 +-
 hw/misc/imx_ccm.c                |    4 +-
 hw/net/can/xlnx-zynqmp-can.c     | 1161 ++++++++++++++++++++++++++++++
 target/arm/cpu.c                 |    5 +-
 target/arm/helper.c              |    7 +-
 target/arm/m_helper.c            |  130 ++++-
 target/arm/translate.c           |  105 +++-
 tests/qtest/npcm7xx_rng-test.c   |   12 +
 tests/qtest/xlnx-can-test.c      |  360 ++++++++++++
 MAINTAINERS                      |    8 +
 hw/Kconfig                       |    1 +
 hw/net/can/meson.build           |    1 +
 hw/net/can/trace-events          |    9 +
 target/arm/translate-vfp.c.inc   |  511 ++++++++++++++++-
 tests/qtest/meson.build          |    1 +
 34 files changed, 2713 insertions(+), 153 deletions(-)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 tests/qtest/xlnx-can-test.c
 create mode 100644 hw/net/can/trace-events
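A hedged usage note (standard git request-pull workflow, not text from the original mail): a request in this form is applied by pointing git pull at the repository URL and tag exactly as listed above, for example:

    git pull https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201210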
From: Kunkun Jiang <jiangkunkun@huawei.com>

According to the SMMUv3 spec, the SPAN field of the Level1 Stream Table
Descriptor is 5 bits ([4:0]).

Fixes: 9bde7f0674f ("hw/arm/smmuv3: Implement translate callback")
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
     return hi << 32 | lo;
 }

-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))

 #endif
--
2.20.1
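As an editor's illustration of the one-line change above (not part of the patch): extract32(value, start, length) returns the given bit-field, so widening the field from 4 to 5 bits is what lets SPAN values above 15 survive decoding. The extract32() below is a simplified stand-in for QEMU's helper from include/qemu/bitops.h, and the descriptor word is hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for QEMU's extract32() from include/qemu/bitops.h. */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t word0 = 0x1f; /* hypothetical L1 STE descriptor word[0], SPAN = 31 */

        printf("4-bit decode: %u\n", extract32(word0, 0, 4)); /* 15: top bit of SPAN lost */
        printf("5-bit decode: %u\n", extract32(word0, 0, 5)); /* 31: full [4:0] field */
        return 0;
    }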
From: Vikram Garhwal <fnu.vikram@xilinx.com>

The Xilinx ZynqMP CAN controller model is built on QEMU's CAN bus and
SocketCAN support. The bus connection and the SocketCAN connection for
each CAN module can be configured on the command line.

Example for using a single CAN:
    -object can-bus,id=canbus0 \
    -machine xlnx-zcu102.canbus0=canbus0 \
    -object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0

Example for connecting both CANs to the same virtual CAN on the host machine:
    -object can-bus,id=canbus0 -object can-bus,id=canbus1 \
    -machine xlnx-zcu102.canbus0=canbus0 \
    -machine xlnx-zcu102.canbus1=canbus1 \
    -object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
    -object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1

To create a virtual CAN interface on the host machine, please check the QEMU CAN docs:
https://github.com/qemu/qemu/blob/master/docs/can.txt
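As a hedged aside (the host-side steps live in the docs linked above, not in this patch): on a typical Linux host the vcan0 interface used in the examples can be created with something like

    sudo modprobe vcan
    sudo ip link add dev vcan0 type vcan
    sudo ip link set up vcan0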

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 meson.build                      |    1 +
 hw/net/can/trace.h               |    1 +
 include/hw/net/xlnx-zynqmp-can.h |   78 ++
 hw/net/can/xlnx-zynqmp-can.c     | 1161 ++++++++++++++++++++++++++++++
 hw/Kconfig                       |    1 +
 hw/net/can/meson.build           |    1 +
 hw/net/can/trace-events          |    9 +
 7 files changed, 1252 insertions(+)
 create mode 100644 hw/net/can/trace.h
 create mode 100644 include/hw/net/xlnx-zynqmp-can.h
 create mode 100644 hw/net/can/xlnx-zynqmp-can.c
 create mode 100644 hw/net/can/trace-events
diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ if have_system
   'hw/misc',
   'hw/misc/macio',
   'hw/net',
+  'hw/net/can',
   'hw/nvram',
   'hw/pci',
   'hw/pci-host',
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace.h
@@ -0,0 +1 @@
+#include "trace/trace-hw_net_can.h"
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/xlnx-zynqmp-can.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the Xilinx ZynqMP CAN controller.
+ *
+ * Copyright (c) 2020 Xilinx Inc.
+ *
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
+ *
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
+ * Pavel Pisa.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef XLNX_ZYNQMP_CAN_H
+#define XLNX_ZYNQMP_CAN_H
+
+#include "hw/register.h"
+#include "net/can_emu.h"
+#include "net/can_host.h"
+#include "qemu/fifo32.h"
+#include "hw/ptimer.h"
+#include "hw/qdev-clock.h"
+
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
+
+#define XLNX_ZYNQMP_CAN(obj) \
+     OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
+
+#define MAX_CAN_CTRLS 2
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
+#define MAILBOX_CAPACITY 64
+#define CAN_TIMER_MAX 0XFFFFUL
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
+
+/* Each CAN_FRAME will have 4 * 32bit size. */
+#define CAN_FRAME_SIZE 4
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
+
+typedef struct XlnxZynqMPCANState {
+    SysBusDevice parent_obj;
+    MemoryRegion iomem;
+
+    qemu_irq irq;
+
+    CanBusClientState bus_client;
+    CanBusState *canbus;
+
+    struct {
+        uint32_t ext_clk_freq;
+    } cfg;
+
+    RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
+    uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
+
+    Fifo32 rx_fifo;
+    Fifo32 tx_fifo;
+    Fifo32 txhpb_fifo;
+
+    ptimer_state *can_timer;
+} XlnxZynqMPCANState;
+
+#endif
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
144
new file mode 100644
145
index XXXXXXX..XXXXXXX
146
--- /dev/null
147
+++ b/hw/net/can/xlnx-zynqmp-can.c
148
@@ -XXX,XX +XXX,XX @@
149
+/*
150
+ * QEMU model of the Xilinx ZynqMP CAN controller.
151
+ * This implementation is based on the following datasheet:
152
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
153
+ *
154
+ * Copyright (c) 2020 Xilinx Inc.
155
+ *
156
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
157
+ *
158
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
159
+ * Pavel Pisa
160
+ *
161
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
162
+ * of this software and associated documentation files (the "Software"), to deal
163
+ * in the Software without restriction, including without limitation the rights
164
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
165
+ * copies of the Software, and to permit persons to whom the Software is
166
+ * furnished to do so, subject to the following conditions:
167
+ *
168
+ * The above copyright notice and this permission notice shall be included in
169
+ * all copies or substantial portions of the Software.
170
+ *
171
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
172
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
173
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
174
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
175
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
176
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
177
+ * THE SOFTWARE.
178
+ */
179
+
180
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
192
+#include "net/can_host.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
197
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
200
+#endif
201
+
202
+#define MAX_DLC 8
203
+#undef ERROR
204
+
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
208
+REG32(MODE_SELECT_REGISTER, 0x4)
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
227
+REG32(STATUS_REGISTER, 0x18)
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
288
+REG32(TIMESTAMP_REGISTER, 0x28)
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
290
+REG32(WIR, 0x2c)
291
+ FIELD(WIR, EW, 8, 8)
292
+ FIELD(WIR, FW, 0, 8)
293
+REG32(TXFIFO_ID, 0x30)
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
299
+REG32(TXFIFO_DLC, 0x34)
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
301
+REG32(TXFIFO_DATA1, 0x38)
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
306
+REG32(TXFIFO_DATA2, 0x3c)
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
311
+REG32(TXHPB_ID, 0x40)
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
317
+REG32(TXHPB_DLC, 0x44)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
319
+REG32(TXHPB_DATA1, 0x48)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
324
+REG32(TXHPB_DATA2, 0x4c)
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
329
+REG32(RXFIFO_ID, 0x50)
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
335
+REG32(RXFIFO_DLC, 0x54)
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
338
+REG32(RXFIFO_DATA1, 0x58)
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
343
+REG32(RXFIFO_DATA2, 0x5c)
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
348
+REG32(AFR, 0x60)
349
+ FIELD(AFR, UAF4, 3, 1)
350
+ FIELD(AFR, UAF3, 2, 1)
351
+ FIELD(AFR, UAF2, 1, 1)
352
+ FIELD(AFR, UAF1, 0, 1)
353
+REG32(AFMR1, 0x64)
354
+ FIELD(AFMR1, AMIDH, 21, 11)
355
+ FIELD(AFMR1, AMSRR, 20, 1)
356
+ FIELD(AFMR1, AMIDE, 19, 1)
357
+ FIELD(AFMR1, AMIDL, 1, 18)
358
+ FIELD(AFMR1, AMRTR, 0, 1)
359
+REG32(AFIR1, 0x68)
360
+ FIELD(AFIR1, AIIDH, 21, 11)
361
+ FIELD(AFIR1, AISRR, 20, 1)
362
+ FIELD(AFIR1, AIIDE, 19, 1)
363
+ FIELD(AFIR1, AIIDL, 1, 18)
364
+ FIELD(AFIR1, AIRTR, 0, 1)
365
+REG32(AFMR2, 0x6c)
366
+ FIELD(AFMR2, AMIDH, 21, 11)
367
+ FIELD(AFMR2, AMSRR, 20, 1)
368
+ FIELD(AFMR2, AMIDE, 19, 1)
369
+ FIELD(AFMR2, AMIDL, 1, 18)
370
+ FIELD(AFMR2, AMRTR, 0, 1)
371
+REG32(AFIR2, 0x70)
372
+ FIELD(AFIR2, AIIDH, 21, 11)
373
+ FIELD(AFIR2, AISRR, 20, 1)
374
+ FIELD(AFIR2, AIIDE, 19, 1)
375
+ FIELD(AFIR2, AIIDL, 1, 18)
376
+ FIELD(AFIR2, AIRTR, 0, 1)
377
+REG32(AFMR3, 0x74)
378
+ FIELD(AFMR3, AMIDH, 21, 11)
379
+ FIELD(AFMR3, AMSRR, 20, 1)
380
+ FIELD(AFMR3, AMIDE, 19, 1)
381
+ FIELD(AFMR3, AMIDL, 1, 18)
382
+ FIELD(AFMR3, AMRTR, 0, 1)
383
+REG32(AFIR3, 0x78)
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
411
+
412
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
413
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
414
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
415
+ }
416
+
417
+ /* RX Interrupts. */
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
420
+ }
421
+
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* set current mode bit and generate irqs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data"
+ " while controller is in reset mode.\n",
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
584
+}
585
+
586
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
587
+{
588
+ qemu_can_frame frame;
589
+ uint32_t data[CAN_FRAME_SIZE];
590
+ int i;
591
+ bool can_tx = tx_ready_check(s);
592
+
593
+ if (!can_tx) {
594
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
595
+
596
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
597
+ " transfer.\n", path);
598
+ can_update_irq(s);
599
+ return;
600
+ }
601
+
602
+ while (!fifo32_is_empty(fifo)) {
603
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
604
+ data[i] = fifo32_pop(fifo);
605
+ }
606
+
607
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
608
+ /*
609
+ * Controller is in loopback. In Loopback mode, the CAN core
610
+ * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
611
+ * Any message transmitted is looped back to the RX line and
612
+ * acknowledged. The XlnxZynqMPCAN core receives any message
613
+ * that it transmits.
614
+ */
615
+ if (fifo32_is_full(&s->rx_fifo)) {
616
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
617
+ } else {
618
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
619
+ fifo32_push(&s->rx_fifo, data[i]);
620
+ }
621
+
622
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
623
+ }
624
+ } else {
625
+ /* Normal mode Tx. */
626
+ generate_frame(&frame, data);
627
+
628
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
629
+ frame.data[0], frame.data[1],
630
+ frame.data[2], frame.data[3],
631
+ frame.data[4], frame.data[5],
632
+ frame.data[6], frame.data[7]);
633
+ can_bus_client_send(&s->bus_client, &frame, 1);
634
+ }
635
+ }
636
+
637
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, core will do software reset then will enter in config mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
686
+{
687
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
688
+ uint8_t multi_mode;
689
+
690
+ /*
691
+ * Multiple mode set check. This is done to make sure user doesn't set
692
+ * multiple modes.
693
+ */
694
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
695
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
696
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
697
+
698
+ if (multi_mode > 1) {
699
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
700
+
701
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to config"
702
+ " several modes simultaneously. One mode will be selected"
703
+ " according to their priority: LBACK > SLEEP > SNOOP.\n",
704
+ path);
705
+ }
706
+
707
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
708
+ /* We are in configuration mode, any mode can be selected. */
709
+ s->regs[R_MODE_SELECT_REGISTER] = val;
710
+ } else {
711
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
712
+
713
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
714
+
715
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
716
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
717
+
718
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
719
+ " LBACK mode without setting CEN bit as 0.\n",
720
+ path);
721
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
722
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
723
+
724
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
725
+ " SNOOP mode without setting CEN bit as 0.\n",
726
+ path);
727
+ }
728
+
729
+ update_status_register_mode_bits(s);
730
+ }
731
+
732
+ return s->regs[R_MODE_SELECT_REGISTER];
733
+}
734
+
735
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
736
+{
737
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
738
+
739
+ /* Only allow writes when in config mode. */
740
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
741
+ return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
742
+ }
743
+
744
+ return val;
745
+}
746
+
747
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
748
+{
749
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
750
+
751
+ /* Only allow writes when in config mode. */
752
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
753
+ return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
754
+ }
755
+
756
+ return val;
757
+}
758
+
759
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
760
+{
761
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
762
+
763
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
764
+ ptimer_transaction_begin(s->can_timer);
765
+ ptimer_set_count(s->can_timer, 0);
766
+ ptimer_transaction_commit(s->can_timer);
767
+ }
768
+
769
+ return 0;
770
+}
771
+
772
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
773
+{
774
+ bool filter_pass = false;
775
+ uint16_t timestamp = 0;
776
+
777
+ /* If no filter is enabled. Message will be stored in FIFO. */
778
+ if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
779
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
780
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
781
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
782
+ filter_pass = true;
783
+ }
784
+
785
+ /*
786
+ * Messages that pass any of the acceptance filters will be stored in
787
+ * the RX FIFO.
788
+ */
789
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
790
+ uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
791
+ uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
792
+
793
+ if (filter_id_masked == id_masked) {
794
+ filter_pass = true;
795
+ }
796
+ }
797
+
798
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
799
+ uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
800
+ uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
801
+
802
+ if (filter_id_masked == id_masked) {
803
+ filter_pass = true;
804
+ }
805
+ }
806
+
807
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
808
+ uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
809
+ uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
810
+
811
+ if (filter_id_masked == id_masked) {
812
+ filter_pass = true;
813
+ }
814
+ }
815
+
816
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
817
+ uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
818
+ uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
819
+
820
+ if (filter_id_masked == id_masked) {
821
+ filter_pass = true;
822
+ }
823
+ }
824
+
825
+ if (!filter_pass) {
826
+ trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
827
+ return;
828
+ }
829
+
830
+ /* Store the message in fifo if it passed through any of the filters. */
831
+ if (filter_pass && frame->can_dlc <= MAX_DLC) {
832
+
833
+ if (fifo32_is_full(&s->rx_fifo)) {
834
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
835
+ } else {
836
+ timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
837
+
838
+ fifo32_push(&s->rx_fifo, frame->can_id);
839
+
840
+ fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
841
+ R_RXFIFO_DLC_DLC_LENGTH,
842
+ frame->can_dlc) |
843
+ deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
844
+ R_RXFIFO_DLC_RXT_LENGTH,
845
+ timestamp));
846
+
847
+ /* First 32 bit of the data. */
848
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
849
+ R_TXFIFO_DATA1_DB3_LENGTH,
850
+ frame->data[0]) |
851
+ deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
852
+ R_TXFIFO_DATA1_DB2_LENGTH,
853
+ frame->data[1]) |
854
+ deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
855
+ R_TXFIFO_DATA1_DB1_LENGTH,
856
+ frame->data[2]) |
857
+ deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
858
+ R_TXFIFO_DATA1_DB0_LENGTH,
859
+ frame->data[3]));
860
+ /* Last 32 bit of the data. */
861
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
862
+ R_TXFIFO_DATA2_DB7_LENGTH,
863
+ frame->data[4]) |
864
+ deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
865
+ R_TXFIFO_DATA2_DB6_LENGTH,
866
+ frame->data[5]) |
867
+ deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
868
+ R_TXFIFO_DATA2_DB5_LENGTH,
869
+ frame->data[6]) |
870
+ deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
871
+ R_TXFIFO_DATA2_DB4_LENGTH,
872
+ frame->data[7]));
873
+
874
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
875
+ trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
876
+ frame->data[0], frame->data[1],
877
+ frame->data[2], frame->data[3],
878
+ frame->data[4], frame->data[5],
879
+ frame->data[6], frame->data[7]);
880
+ }
881
+
882
+ can_update_irq(s);
883
+ }
884
+}
885
+
886
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
887
+{
888
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
889
+
890
+ if (!fifo32_is_empty(&s->rx_fifo)) {
891
+ val = fifo32_pop(&s->rx_fifo);
892
+ } else {
893
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
894
+ }
895
+
896
+ can_update_irq(s);
897
+ return val;
898
+}
899
+
900
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
901
+{
902
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
903
+
904
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
905
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
906
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
907
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
909
+ } else {
910
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
911
+ }
912
+}
913
+
914
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
915
+{
916
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
917
+ uint32_t reg_idx = (reg->access->addr) / 4;
918
+ uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
919
+
920
+ /* modify an acceptance filter, the corresponding UAF bit should be '0'. */
921
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
922
+ s->regs[reg_idx] = val;
923
+
924
+ trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
925
+ } else {
926
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
927
+
928
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
929
+ " mask is not set as corresponding UAF bit is not 0.\n",
930
+ path, filter_number + 1);
931
+ }
932
+
933
+ return s->regs[reg_idx];
934
+}
935
+
936
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
937
+{
938
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
939
+ uint32_t reg_idx = (reg->access->addr) / 4;
940
+ uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
941
+
942
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
943
+ s->regs[reg_idx] = val;
944
+
945
+ trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
946
+ } else {
947
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
948
+
949
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
950
+ " id is not set as corresponding UAF bit is not 0.\n",
951
+ path, filter_number + 1);
952
+ }
953
+
954
+ return s->regs[reg_idx];
955
+}
956
+
957
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
958
+{
959
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
960
+
961
+ bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
962
+
963
+ bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
964
+ (reg->access->addr == A_TXHPB_DATA2);
965
+
966
+ Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
967
+
968
+ if (!fifo32_is_full(f)) {
969
+ fifo32_push(f, val);
970
+ } else {
971
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
972
+
973
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
974
+ }
975
+
976
+ /* Initiate the message send if TX register is written. */
977
+ if (initiate_transfer &&
978
+ ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
979
+ transfer_fifo(s, f);
980
+ }
981
+
982
+ can_update_irq(s);
983
+}
984
+
985
+static const RegisterAccessInfo can_regs_info[] = {
986
+ { .name = "SOFTWARE_RESET_REGISTER",
987
+ .addr = A_SOFTWARE_RESET_REGISTER,
988
+ .rsvd = 0xfffffffc,
989
+ .pre_write = can_srr_pre_write,
990
+ },{ .name = "MODE_SELECT_REGISTER",
991
+ .addr = A_MODE_SELECT_REGISTER,
992
+ .rsvd = 0xfffffff8,
993
+ .pre_write = can_msr_pre_write,
994
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
995
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
996
+ .rsvd = 0xffffff00,
997
+ .pre_write = can_brpr_pre_write,
998
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
999
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1000
+ .rsvd = 0xfffffe00,
1001
+ .pre_write = can_btr_pre_write,
1002
+ },{ .name = "ERROR_COUNTER_REGISTER",
1003
+ .addr = A_ERROR_COUNTER_REGISTER,
1004
+ .rsvd = 0xffff0000,
1005
+ .ro = 0xffffffff,
1006
+ },{ .name = "ERROR_STATUS_REGISTER",
1007
+ .addr = A_ERROR_STATUS_REGISTER,
1008
+ .rsvd = 0xffffffe0,
1009
+ .w1c = 0x1f,
1010
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1011
+ .reset = 0x1,
1012
+ .rsvd = 0xffffe000,
1013
+ .ro = 0x1fff,
1014
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1015
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1016
+ .reset = 0x6000,
1017
+ .rsvd = 0xffff8000,
1018
+ .ro = 0x7fff,
1019
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1020
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1021
+ .rsvd = 0xffff8000,
1022
+ .post_write = can_ier_post_write,
1023
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1024
+ .addr = A_INTERRUPT_CLEAR_REGISTER,
1025
+ .rsvd = 0xffff8000,
1026
+ .pre_write = can_icr_pre_write,
1027
+ },{ .name = "TIMESTAMP_REGISTER",
1028
+ .addr = A_TIMESTAMP_REGISTER,
1029
+ .rsvd = 0xfffffffe,
1030
+ .pre_write = can_tcr_pre_write,
1031
+ },{ .name = "WIR", .addr = A_WIR,
1032
+ .reset = 0x3f3f,
1033
+ .rsvd = 0xffff0000,
1034
+ },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
1035
+ .post_write = can_tx_post_write,
1036
+ },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
1037
+ .rsvd = 0xfffffff,
1038
+ .post_write = can_tx_post_write,
1039
+ },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
1040
+ .post_write = can_tx_post_write,
1041
+ },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
1042
+ .post_write = can_tx_post_write,
1043
+ },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
1044
+ .post_write = can_tx_post_write,
1045
+ },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
1046
+ .rsvd = 0xfffffff,
1047
+ .post_write = can_tx_post_write,
1048
+ },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
1049
+ .post_write = can_tx_post_write,
1050
+ },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
1051
+ .post_write = can_tx_post_write,
1052
+ },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
1053
+ .ro = 0xffffffff,
1054
+ .post_read = can_rxfifo_pre_read,
1055
+ },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
1056
+ .rsvd = 0xfff0000,
1057
+ .post_read = can_rxfifo_pre_read,
1058
+ },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
1059
+ .post_read = can_rxfifo_pre_read,
1060
+ },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
1061
+ .post_read = can_rxfifo_pre_read,
1062
+ },{ .name = "AFR", .addr = A_AFR,
1063
+ .rsvd = 0xfffffff0,
1064
+ .post_write = can_filter_enable_post_write,
1065
+ },{ .name = "AFMR1", .addr = A_AFMR1,
1066
+ .pre_write = can_filter_mask_pre_write,
1067
+ },{ .name = "AFIR1", .addr = A_AFIR1,
1068
+ .pre_write = can_filter_id_pre_write,
1069
+ },{ .name = "AFMR2", .addr = A_AFMR2,
1070
+ .pre_write = can_filter_mask_pre_write,
1071
+ },{ .name = "AFIR2", .addr = A_AFIR2,
1072
+ .pre_write = can_filter_id_pre_write,
1073
+ },{ .name = "AFMR3", .addr = A_AFMR3,
1074
+ .pre_write = can_filter_mask_pre_write,
1075
+ },{ .name = "AFIR3", .addr = A_AFIR3,
1076
+ .pre_write = can_filter_id_pre_write,
1077
+ },{ .name = "AFMR4", .addr = A_AFMR4,
1078
+ .pre_write = can_filter_mask_pre_write,
1079
+ },{ .name = "AFIR4", .addr = A_AFIR4,
1080
+ .pre_write = can_filter_id_pre_write,
1081
+ }
1082
+};
1083
+
1084
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
1085
+{
1086
+ /* No action required on the timer rollover. */
1087
+}
1088
+
1089
+static const MemoryRegionOps can_ops = {
1090
+ .read = register_read_memory,
1091
+ .write = register_write_memory,
1092
+ .endianness = DEVICE_LITTLE_ENDIAN,
1093
+ .valid = {
1094
+ .min_access_size = 4,
1095
+ .max_access_size = 4,
1096
+ },
1097
+};
1098
+
1099
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
1100
+{
1101
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1102
+ unsigned int i;
1103
+
1104
+ for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
1105
+ register_reset(&s->reg_info[i]);
1106
+ }
1107
+
1108
+ ptimer_transaction_begin(s->can_timer);
1109
+ ptimer_set_count(s->can_timer, 0);
1110
+ ptimer_transaction_commit(s->can_timer);
1111
+}
1112
+
1113
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
1114
+{
1115
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1116
+ unsigned int i;
1117
+
1118
+ for (i = 0; i < R_RXFIFO_ID; ++i) {
1119
+ register_reset(&s->reg_info[i]);
1120
+ }
1121
+
1122
+ /*
1123
+ * Reset FIFOs when CAN model is reset. This will clear the fifo writes
1124
+ * done by post_write which gets called from register_reset function,
1125
+ * post_write handle will not be able to trigger tx because CAN will be
1126
+ * disabled when software_reset_register is cleared first.
1127
+ */
1128
+ fifo32_reset(&s->rx_fifo);
1129
+ fifo32_reset(&s->tx_fifo);
1130
+ fifo32_reset(&s->txhpb_fifo);
1131
+}
1132
+
1133
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
1134
+{
1135
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1136
+ bus_client);
1137
+
1138
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1139
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1140
+
1141
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
1142
+ path);
1143
+ return false;
1144
+ }
1145
+
1146
+ if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
1147
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1148
+
1149
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
1150
+ " messages will be discarded.\n", path);
1151
+ return false;
1152
+ }
1153
+
1154
+ return true;
1155
+}
1156
+
1157
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
1158
+ const qemu_can_frame *buf, size_t buf_size) {
1159
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1160
+ bus_client);
1161
+ const qemu_can_frame *frame = buf;
1162
+
1163
+ if (buf_size <= 0) {
1164
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1165
+
1166
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
1167
+ path);
1168
+ return 0;
1169
+ }
1170
+
1171
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1172
+ /* Snoop Mode: Just keep the data. no response back. */
1173
+ update_rx_fifo(s, frame);
1174
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1175
+ /*
1176
+ * XlnxZynqMPCAN is in sleep mode. Any data on bus will bring it to wake
1177
+ * up state.
1178
+ */
1179
+ can_exit_sleep_mode(s);
1180
+ update_rx_fifo(s, frame);
1181
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
1182
+ update_rx_fifo(s, frame);
1183
+ } else {
1184
+ /*
1185
+ * XlnxZynqMPCAN will not participate in normal bus communication
1186
+ * and will not receive any messages transmitted by other CAN nodes.
1187
+ */
1188
+ trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
1189
+ }
1190
+
1191
+ return 1;
1192
+}
1193
+
1194
+static CanBusClientInfo can_xilinx_bus_client_info = {
1195
+ .can_receive = xlnx_zynqmp_can_can_receive,
1196
+ .receive = xlnx_zynqmp_can_receive,
1197
+};
1198
+
1199
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
1200
+ CanBusState *bus)
1201
+{
1202
+ s->bus_client.info = &can_xilinx_bus_client_info;
1203
+
1204
+ if (can_bus_insert_client(bus, &s->bus_client) < 0) {
1205
+ return -1;
1206
+ }
1207
+ return 0;
1208
+}
1209
+
1210
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
1211
+{
1212
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
1213
+
1214
+ if (s->canbus) {
1215
+ if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
1216
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1217
+
1218
+ error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
1219
+ " failed.", path);
1220
+ return;
1221
+ }
1222
+ }
1223
+
1224
+ /* Create RX FIFO, TXFIFO, TXHPB storage. */
1225
+ fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
1226
+ fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
1227
+ fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
1228
+
1229
+ /* Allocate a new timer. */
1230
+ s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
1231
+ PTIMER_POLICY_DEFAULT);
1232
+
1233
+ ptimer_transaction_begin(s->can_timer);
1234
+
1235
+ ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
1236
+ ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
1237
+ ptimer_run(s->can_timer, 0);
1238
+ ptimer_transaction_commit(s->can_timer);
1239
+}
1240
+
1241
+static void xlnx_zynqmp_can_init(Object *obj)
1242
+{
1243
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1244
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1245
+
1246
+ RegisterInfoArray *reg_array;
1247
+
1248
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
1249
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1250
+ reg_array = register_init_block32(DEVICE(obj), can_regs_info,
1251
+ ARRAY_SIZE(can_regs_info),
1252
+ s->reg_info, s->regs,
1253
+ &can_ops,
1254
+ XLNX_ZYNQMP_CAN_ERR_DEBUG,
1255
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1256
+
1257
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
1258
+ sysbus_init_mmio(sbd, &s->iomem);
1259
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
1260
+}
1261
+
1262
+static const VMStateDescription vmstate_can = {
1263
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1264
+ .version_id = 1,
1265
+ .minimum_version_id = 1,
1266
+ .fields = (VMStateField[]) {
1267
+ VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
1268
+ VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
1269
+ VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
1270
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
1271
+ VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
1272
+ VMSTATE_END_OF_LIST(),
1273
+ }
1274
+};
1275
+
1276
+static Property xlnx_zynqmp_can_properties[] = {
1277
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
1278
+ CAN_DEFAULT_CLOCK),
1279
+ DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
1280
+ CanBusState *),
1281
+ DEFINE_PROP_END_OF_LIST(),
1282
+};
1283
+
1284
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
1285
+{
1286
+ DeviceClass *dc = DEVICE_CLASS(klass);
1287
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1288
+
1289
+ rc->phases.enter = xlnx_zynqmp_can_reset_init;
1290
+ rc->phases.hold = xlnx_zynqmp_can_reset_hold;
1291
+ dc->realize = xlnx_zynqmp_can_realize;
1292
+ device_class_set_props(dc, xlnx_zynqmp_can_properties);
1293
+ dc->vmsd = &vmstate_can;
1294
+}
1295
+
1296
+static const TypeInfo can_info = {
1297
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1298
+ .parent = TYPE_SYS_BUS_DEVICE,
1299
+ .instance_size = sizeof(XlnxZynqMPCANState),
1300
+ .class_init = xlnx_zynqmp_can_class_init,
1301
+ .instance_init = xlnx_zynqmp_can_init,
1302
+};
1303
+
1304
+static void can_register_types(void)
1305
+{
1306
+ type_register_static(&can_info);
1307
+}
1308
+
1309
+type_init(can_register_types)
1310
diff --git a/hw/Kconfig b/hw/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
 config XLNX_ZYNQMP
     bool
     select REGISTER
+    select CAN_BUS
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/can/meson.build
+++ b/hw/net/can/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
 softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace-events
@@ -XXX,XX +XXX,XX @@
+# xlnx-zynqmp-can.c
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
--
2.20.1
1
From: Guenter Roeck <linux@roeck-us.net>
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
2
3
Set vendor property to IMX to enable IMX specific functionality
3
Connect CAN0 and CAN1 on the ZynqMP.
4
in sdhci code.
5
4
6
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
7
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
9
Message-id: 20200603145258.195920-3-linux@roeck-us.net
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
hw/arm/fsl-imx25.c | 6 ++++++
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
13
hw/arm/fsl-imx6.c | 6 ++++++
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
14
hw/arm/fsl-imx6ul.c | 2 ++
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
15
hw/arm/fsl-imx7.c | 2 ++
14
3 files changed, 62 insertions(+)
16
4 files changed, 16 insertions(+)
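A quick usage note on the ZynqMP CAN wiring above: once the SoC exposes the "canbus0"/"canbus1" link properties and the ZCU102 machine forwards them as "xlnx-zcu102.canbus0"/"xlnx-zcu102.canbus1", both controllers can be attached to one can-bus object. The fragment below simply mirrors the invocation used by the qtest later in this series (shown as a libqtest call; the same -object/-machine options work on a plain QEMU command line):

    /* Attach both ZCU102 CAN controllers to one can-bus object, so a frame
     * transmitted by CAN0 is delivered to CAN1 over the shared bus. */
    QTestState *qts = qtest_init("-machine xlnx-zcu102"
                                 " -object can-bus,id=canbus0"
                                 " -machine xlnx-zcu102.canbus0=canbus0"
                                 " -machine xlnx-zcu102.canbus1=canbus0");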
17
15
18
diff --git a/hw/arm/fsl-imx25.c b/hw/arm/fsl-imx25.c
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/fsl-imx25.c
18
--- a/include/hw/arm/xlnx-zynqmp.h
21
+++ b/hw/arm/fsl-imx25.c
19
+++ b/include/hw/arm/xlnx-zynqmp.h
22
@@ -XXX,XX +XXX,XX @@ static void fsl_imx25_realize(DeviceState *dev, Error **errp)
20
@@ -XXX,XX +XXX,XX @@
23
&err);
21
#include "hw/intc/arm_gic.h"
24
object_property_set_uint(OBJECT(&s->esdhc[i]), IMX25_ESDHC_CAPABILITIES,
22
#include "hw/net/cadence_gem.h"
25
"capareg", &err);
23
#include "hw/char/cadence_uart.h"
26
+ object_property_set_uint(OBJECT(&s->esdhc[i]), SDHCI_VENDOR_IMX,
24
+#include "hw/net/xlnx-zynqmp-can.h"
27
+ "vendor", &err);
25
#include "hw/ide/ahci.h"
26
#include "hw/sd/sdhci.h"
27
#include "hw/ssi/xilinx_spips.h"
28
@@ -XXX,XX +XXX,XX @@
29
#include "hw/cpu/cluster.h"
30
#include "target/arm/cpu.h"
31
#include "qom/object.h"
32
+#include "net/can_emu.h"
33
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
38
#define XLNX_ZYNQMP_NUM_GEMS 4
39
#define XLNX_ZYNQMP_NUM_UARTS 2
40
+#define XLNX_ZYNQMP_NUM_CAN 2
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
43
#define XLNX_ZYNQMP_NUM_SPIS 2
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
46
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
50
SysbusAHCIState sata;
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
54
bool virt;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
57
+
58
+ /* CAN bus. */
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
60
};
61
62
#endif
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/xlnx-zcu102.c
66
+++ b/hw/arm/xlnx-zcu102.c
67
@@ -XXX,XX +XXX,XX @@
68
#include "sysemu/qtest.h"
69
#include "sysemu/device_tree.h"
70
#include "qom/object.h"
71
+#include "net/can_emu.h"
72
73
struct XlnxZCU102 {
74
MachineState parent_obj;
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
76
bool secure;
77
bool virt;
78
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
80
+
81
struct arm_boot_info binfo;
82
};
83
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
86
&error_fatal);
87
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
90
+
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
92
+ OBJECT(s->canbus[i]), &error_fatal);
93
+ g_free(bus_name);
94
+ }
95
+
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
97
98
/* Create and plug in the SD cards */
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
100
s->secure = false;
101
/* Default to virt (EL2) being disabled */
102
s->virt = false;
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
104
+ (Object **)&s->canbus[0],
105
+ object_property_allow_set_link,
106
+ 0);
107
+
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
109
+ (Object **)&s->canbus[1],
110
+ object_property_allow_set_link,
111
+ 0);
112
}
113
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/hw/arm/xlnx-zynqmp.c
118
+++ b/hw/arm/xlnx-zynqmp.c
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
120
21, 22,
121
};
122
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
124
+ 0xFF060000, 0xFF070000,
125
+};
126
+
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
128
+ 23, 24,
129
+};
130
+
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
132
0xFF160000, 0xFF170000,
133
};
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
135
TYPE_CADENCE_UART);
136
}
137
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
140
+ TYPE_XLNX_ZYNQMP_CAN);
141
+ }
142
+
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
144
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
147
gic_spi[uart_intr[i]]);
148
}
149
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
153
+
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
155
+ OBJECT(s->canbus[i]), &error_fatal);
156
+
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
28
+ if (err) {
158
+ if (err) {
29
+ error_propagate(errp, err);
159
+ error_propagate(errp, err);
30
+ return;
160
+ return;
31
+ }
161
+ }
32
object_property_set_bool(OBJECT(&s->esdhc[i]), true, "realized", &err);
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
33
if (err) {
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
34
error_propagate(errp, err);
164
+ gic_spi[can_intr[i]]);
35
diff --git a/hw/arm/fsl-imx6.c b/hw/arm/fsl-imx6.c
165
+ }
36
index XXXXXXX..XXXXXXX 100644
166
+
37
--- a/hw/arm/fsl-imx6.c
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
38
+++ b/hw/arm/fsl-imx6.c
168
&error_abort);
39
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6_realize(DeviceState *dev, Error **errp)
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
40
&err);
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
41
object_property_set_uint(OBJECT(&s->esdhc[i]), IMX6_ESDHC_CAPABILITIES,
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
42
"capareg", &err);
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
43
+ object_property_set_uint(OBJECT(&s->esdhc[i]), SDHCI_VENDOR_IMX,
173
MemoryRegion *),
44
+ "vendor", &err);
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
45
+ if (err) {
175
+ CanBusState *),
46
+ error_propagate(errp, err);
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
47
+ return;
177
+ CanBusState *),
48
+ }
178
DEFINE_PROP_END_OF_LIST()
49
object_property_set_bool(OBJECT(&s->esdhc[i]), true, "realized", &err);
179
};
50
if (err) {
51
error_propagate(errp, err);
52
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/arm/fsl-imx6ul.c
55
+++ b/hw/arm/fsl-imx6ul.c
56
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
57
FSL_IMX6UL_USDHC2_IRQ,
58
};
59
60
+ object_property_set_uint(OBJECT(&s->usdhc[i]), SDHCI_VENDOR_IMX,
61
+ "vendor", &error_abort);
62
object_property_set_bool(OBJECT(&s->usdhc[i]), true, "realized",
63
&error_abort);
64
65
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/hw/arm/fsl-imx7.c
68
+++ b/hw/arm/fsl-imx7.c
69
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
70
FSL_IMX7_USDHC3_IRQ,
71
};
72
73
+ object_property_set_uint(OBJECT(&s->usdhc[i]), SDHCI_VENDOR_IMX,
74
+ "vendor", &error_abort);
75
object_property_set_bool(OBJECT(&s->usdhc[i]), true, "realized",
76
&error_abort);
77
180
78
--
181
--
79
2.20.1
182
2.20.1
80
183
81
184
1
From: Guenter Roeck <linux@roeck-us.net>
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
2
2
3
The Linux kernel's IMX code now uses vendor specific commands.
3
The QTests perform five tests on the Xilinx ZynqMP CAN controller:
4
This results in endless warnings when booting the Linux kernel.
4
Tests the CAN controller in loopback, sleep and snoop mode.
5
5
Tests filtering of incoming CAN messages.
6
sdhci-esdhc-imx 2194000.usdhc: esdhc_wait_for_card_clock_gate_off:
7
    card clock still not gate off in 100us!.
8
9
Implement support for the vendor specific register provided by the IMX hardware
10
so that this warning can be avoided.
11
6
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
14
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
9
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
15
Message-id: 20200603145258.195920-2-linux@roeck-us.net
10
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
12
---
18
hw/sd/sdhci-internal.h | 5 +++++
13
tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
19
include/hw/sd/sdhci.h | 5 +++++
14
tests/qtest/meson.build | 1 +
20
hw/sd/sdhci.c | 18 +++++++++++++++++-
15
2 files changed, 361 insertions(+)
21
3 files changed, 27 insertions(+), 1 deletion(-)
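For context on the warning quoted above: the eSDHC change models the handshake the guest driver performs when it gates the SD card clock. The sketch below is illustrative only, not kernel source; the register names and offsets are the ones used in the patch, while the mmio helpers and the regs base pointer are stand-ins for real MMIO accessors.

    #include <stdbool.h>
    #include <stdint.h>

    #define ESDHC_PRSSTAT            0x24        /* standard SDHCI present-state reg */
    #define ESDHC_CLOCK_GATE_OFF     (1u << 7)   /* SDHC_IMX_CLOCK_GATE_OFF in the patch */
    #define ESDHC_VENDOR_SPEC        0xc0
    #define ESDHC_IMX_FRC_SDCLK_ON   (1u << 8)

    static volatile uint32_t *regs;              /* mapped controller registers */

    static uint32_t mmio_read(unsigned off)          { return regs[off / 4]; }
    static void mmio_write(unsigned off, uint32_t v) { regs[off / 4] = v; }

    /* Gate the card clock: clear FRC_SDCLK_ON, then poll the present-state
     * register until the controller reports the clock as gated off. */
    static bool esdhc_gate_card_clock(void)
    {
        int timeout = 100;

        mmio_write(ESDHC_VENDOR_SPEC,
                   mmio_read(ESDHC_VENDOR_SPEC) & ~ESDHC_IMX_FRC_SDCLK_ON);

        /* Before this patch the emulated controller never set the gate-off
         * bit, so a loop like this always timed out and the guest logged
         * "card clock still not gate off in 100us!" on every clock change. */
        while (!(mmio_read(ESDHC_PRSSTAT) & ESDHC_CLOCK_GATE_OFF)) {
            if (--timeout == 0) {
                return false;
            }
        }
        return true;
    }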
16
create mode 100644 tests/qtest/xlnx-can-test.c
22
17
23
diff --git a/hw/sd/sdhci-internal.h b/hw/sd/sdhci-internal.h
18
diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
19
new file mode 100644
20
index XXXXXXX..XXXXXXX
21
--- /dev/null
22
+++ b/tests/qtest/xlnx-can-test.c
23
@@ -XXX,XX +XXX,XX @@
24
+/*
25
+ * QTests for the Xilinx ZynqMP CAN controller.
26
+ *
27
+ * Copyright (c) 2020 Xilinx Inc.
28
+ *
29
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
30
+ *
31
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
32
+ * of this software and associated documentation files (the "Software"), to deal
33
+ * in the Software without restriction, including without limitation the rights
34
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
35
+ * copies of the Software, and to permit persons to whom the Software is
36
+ * furnished to do so, subject to the following conditions:
37
+ *
38
+ * The above copyright notice and this permission notice shall be included in
39
+ * all copies or substantial portions of the Software.
40
+ *
41
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
42
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
43
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
44
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
45
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
46
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
47
+ * THE SOFTWARE.
48
+ */
49
+
50
+#include "qemu/osdep.h"
51
+#include "libqos/libqtest.h"
52
+
53
+/* Base address. */
54
+#define CAN0_BASE_ADDR 0xFF060000
55
+#define CAN1_BASE_ADDR 0xFF070000
56
+
57
+/* Register addresses. */
58
+#define R_SRR_OFFSET 0x00
59
+#define R_MSR_OFFSET 0x04
60
+#define R_SR_OFFSET 0x18
61
+#define R_ISR_OFFSET 0x1C
62
+#define R_ICR_OFFSET 0x24
63
+#define R_TXID_OFFSET 0x30
64
+#define R_TXDLC_OFFSET 0x34
65
+#define R_TXDATA1_OFFSET 0x38
66
+#define R_TXDATA2_OFFSET 0x3C
67
+#define R_RXID_OFFSET 0x50
68
+#define R_RXDLC_OFFSET 0x54
69
+#define R_RXDATA1_OFFSET 0x58
70
+#define R_RXDATA2_OFFSET 0x5C
71
+#define R_AFR 0x60
72
+#define R_AFMR1 0x64
73
+#define R_AFIR1 0x68
74
+#define R_AFMR2 0x6C
75
+#define R_AFIR2 0x70
76
+#define R_AFMR3 0x74
77
+#define R_AFIR3 0x78
78
+#define R_AFMR4 0x7C
79
+#define R_AFIR4 0x80
80
+
81
+/* CAN modes. */
82
+#define CONFIG_MODE 0x00
83
+#define NORMAL_MODE 0x00
84
+#define LOOPBACK_MODE 0x02
85
+#define SNOOP_MODE 0x04
86
+#define SLEEP_MODE 0x01
87
+#define ENABLE_CAN (1 << 1)
88
+#define STATUS_NORMAL_MODE (1 << 3)
89
+#define STATUS_LOOPBACK_MODE (1 << 1)
90
+#define STATUS_SNOOP_MODE (1 << 12)
91
+#define STATUS_SLEEP_MODE (1 << 2)
92
+#define ISR_TXOK (1 << 1)
93
+#define ISR_RXOK (1 << 4)
94
+
95
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
96
+ uint8_t can_timestamp)
97
+{
98
+ uint16_t size = 0;
99
+ uint8_t len = 4;
100
+
101
+ while (size < len) {
102
+ if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
103
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
104
+ } else {
105
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
106
+ }
107
+
108
+ size++;
109
+ }
110
+}
111
+
112
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
113
+{
114
+ uint32_t int_status;
115
+
116
+ /* Read the interrupt on CAN rx. */
117
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
118
+
119
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
120
+
121
+ /* Read the RX register data for CAN. */
122
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
123
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
124
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
125
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
126
+
127
+ /* Clear the RX interrupt. */
128
+ qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
129
+}
130
+
131
+static void send_data(QTestState *qts, uint64_t can_base_addr,
132
+ const uint32_t *buf_tx)
133
+{
134
+ uint32_t int_status;
135
+
136
+ /* Write the TX register data for CAN. */
137
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
138
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
139
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
140
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
141
+
142
+ /* Read the interrupt on CAN for tx. */
143
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
144
+
145
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
146
+
147
+ /* Clear the interrupt for tx. */
148
+ qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
149
+}
150
+
151
+/*
152
+ * This test transfers data from CAN0 to CAN1 through the canbus. CAN0
153
+ * initiates the data transfer to the can-bus, CAN1 receives the data. Test compares
154
+ * the data sent from CAN0 with the data received on CAN1.
155
+ */
156
+static void test_can_bus(void)
157
+{
158
+ const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
159
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
160
+ uint32_t status = 0;
161
+ uint8_t can_timestamp = 1;
162
+
163
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
164
+ " -object can-bus,id=canbus0"
165
+ " -machine xlnx-zcu102.canbus0=canbus0"
166
+ " -machine xlnx-zcu102.canbus1=canbus0"
167
+ );
168
+
169
+ /* Configure the CAN0 and CAN1. */
170
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
171
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
172
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
173
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
174
+
175
+ /* Check here if CAN0 and CAN1 are in normal mode. */
176
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
177
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
178
+
179
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
180
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
181
+
182
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
183
+
184
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
185
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
186
+
187
+ qtest_quit(qts);
188
+}
189
+
190
+/*
191
+ * This test performs loopback mode on CAN0 and CAN1. Data sent from the TX of
192
+ * each of CAN0 and CAN1 is compared with the RX register data of the respective CAN.
193
+ */
194
+static void test_can_loopback(void)
195
+{
196
+ uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
197
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
198
+ uint32_t status = 0;
199
+
200
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
201
+ " -object can-bus,id=canbus0"
202
+ " -machine xlnx-zcu102.canbus0=canbus0"
203
+ " -machine xlnx-zcu102.canbus1=canbus0"
204
+ );
205
+
206
+ /* Configure the CAN0 in loopback mode. */
207
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
208
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
209
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
210
+
211
+ /* Check here if CAN0 is set in loopback mode. */
212
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
213
+
214
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
215
+
216
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
217
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
218
+ match_rx_tx_data(buf_tx, buf_rx, 0);
219
+
220
+ /* Configure the CAN1 in loopback mode. */
221
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
222
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
223
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
224
+
225
+ /* Check here if CAN1 is set in loopback mode. */
226
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
227
+
228
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
229
+
230
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
231
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
232
+ match_rx_tx_data(buf_tx, buf_rx, 0);
233
+
234
+ qtest_quit(qts);
235
+}
236
+
237
+/*
238
+ * Enable filters for CAN1. This will filter incoming messages by ID. In this
239
+ * test the message will pass through filter 2.
240
+ */
241
+static void test_can_filter(void)
242
+{
243
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
244
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
245
+ uint32_t status = 0;
246
+ uint8_t can_timestamp = 1;
247
+
248
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
249
+ " -object can-bus,id=canbus0"
250
+ " -machine xlnx-zcu102.canbus0=canbus0"
251
+ " -machine xlnx-zcu102.canbus1=canbus0"
252
+ );
253
+
254
+ /* Configure the CAN0 and CAN1. */
255
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
256
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
257
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
258
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
259
+
260
+ /* Check here if CAN0 and CAN1 are in normal mode. */
261
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
262
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
263
+
264
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
265
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
266
+
267
+ /* Set filter for CAN1 for incoming messages. */
268
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
269
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
270
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
271
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
272
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
273
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
274
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
275
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
276
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
277
+
278
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
279
+
280
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
281
+
282
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
283
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
284
+
285
+ qtest_quit(qts);
286
+}
287
+
288
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
289
+static void test_can_sleepmode(void)
290
+{
291
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
292
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
293
+ uint32_t status = 0;
294
+ uint8_t can_timestamp = 1;
295
+
296
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
297
+ " -object can-bus,id=canbus0"
298
+ " -machine xlnx-zcu102.canbus0=canbus0"
299
+ " -machine xlnx-zcu102.canbus1=canbus0"
300
+ );
301
+
302
+ /* Configure the CAN0. */
303
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
304
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
305
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
306
+
307
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
308
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
309
+
310
+ /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
311
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
312
+ g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
313
+
314
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
315
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
316
+
317
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
318
+
319
+ /*
320
+ * Once CAN1 sends data on the can-bus, CAN0 should exit sleep mode.
321
+ * Check the CAN0 status now. It should exit the sleep mode and receive the
322
+ * incoming data.
323
+ */
324
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
325
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
326
+
327
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
328
+
329
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
330
+
331
+ qtest_quit(qts);
332
+}
333
+
334
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
335
+static void test_can_snoopmode(void)
336
+{
337
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
338
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
339
+ uint32_t status = 0;
340
+ uint8_t can_timestamp = 1;
341
+
342
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
343
+ " -object can-bus,id=canbus0"
344
+ " -machine xlnx-zcu102.canbus0=canbus0"
345
+ " -machine xlnx-zcu102.canbus1=canbus0"
346
+ );
347
+
348
+ /* Configure the CAN0. */
349
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
350
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
351
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
352
+
353
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
354
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
355
+
356
+ /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
357
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
358
+ g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
359
+
360
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
361
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
362
+
363
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
364
+
365
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
366
+
367
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
368
+
369
+ qtest_quit(qts);
370
+}
371
+
372
+int main(int argc, char **argv)
373
+{
374
+ g_test_init(&argc, &argv, NULL);
375
+
376
+ qtest_add_func("/net/can/can_bus", test_can_bus);
377
+ qtest_add_func("/net/can/can_loopback", test_can_loopback);
378
+ qtest_add_func("/net/can/can_filter", test_can_filter);
379
+ qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
380
+ qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
381
+
382
+ return g_test_run();
383
+}
384
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
24
index XXXXXXX..XXXXXXX 100644
385
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/sd/sdhci-internal.h
386
--- a/tests/qtest/meson.build
26
+++ b/hw/sd/sdhci-internal.h
387
+++ b/tests/qtest/meson.build
27
@@ -XXX,XX +XXX,XX @@
388
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
28
#define SDHC_CMD_INHIBIT 0x00000001
389
['arm-cpu-features',
29
#define SDHC_DATA_INHIBIT 0x00000002
390
'numa-test',
30
#define SDHC_DAT_LINE_ACTIVE 0x00000004
391
'boot-serial-test',
31
+#define SDHC_IMX_CLOCK_GATE_OFF 0x00000080
392
+ 'xlnx-can-test',
32
#define SDHC_DOING_WRITE 0x00000100
393
'migration-test']
33
#define SDHC_DOING_READ 0x00000200
394
34
#define SDHC_SPACE_AVAILABLE 0x00000400
395
qtests_s390x = \
35
@@ -XXX,XX +XXX,XX @@ extern const VMStateDescription sdhci_vmstate;
36
37
38
#define ESDHC_MIX_CTRL 0x48
39
+
40
#define ESDHC_VENDOR_SPEC 0xc0
41
+#define ESDHC_IMX_FRC_SDCLK_ON (1 << 8)
42
+
43
#define ESDHC_DLL_CTRL 0x60
44
45
#define ESDHC_TUNING_CTRL 0xcc
46
@@ -XXX,XX +XXX,XX @@ extern const VMStateDescription sdhci_vmstate;
47
#define DEFINE_SDHCI_COMMON_PROPERTIES(_state) \
48
DEFINE_PROP_UINT8("sd-spec-version", _state, sd_spec_version, 2), \
49
DEFINE_PROP_UINT8("uhs", _state, uhs_mode, UHS_NOT_SUPPORTED), \
50
+ DEFINE_PROP_UINT8("vendor", _state, vendor, SDHCI_VENDOR_NONE), \
51
\
52
/* Capabilities registers provide information on supported
53
* features of this specific host controller implementation */ \
54
diff --git a/include/hw/sd/sdhci.h b/include/hw/sd/sdhci.h
55
index XXXXXXX..XXXXXXX 100644
56
--- a/include/hw/sd/sdhci.h
57
+++ b/include/hw/sd/sdhci.h
58
@@ -XXX,XX +XXX,XX @@ typedef struct SDHCIState {
59
uint16_t acmd12errsts; /* Auto CMD12 error status register */
60
uint16_t hostctl2; /* Host Control 2 */
61
uint64_t admasysaddr; /* ADMA System Address Register */
62
+ uint16_t vendor_spec; /* Vendor specific register */
63
64
/* Read-only registers */
65
uint64_t capareg; /* Capabilities Register */
66
@@ -XXX,XX +XXX,XX @@ typedef struct SDHCIState {
67
uint32_t quirks;
68
uint8_t sd_spec_version;
69
uint8_t uhs_mode;
70
+ uint8_t vendor; /* For vendor specific functionality */
71
} SDHCIState;
72
73
+#define SDHCI_VENDOR_NONE 0
74
+#define SDHCI_VENDOR_IMX 1
75
+
76
/*
77
* Controller does not provide transfer-complete interrupt when not
78
* busy.
79
diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/sd/sdhci.c
82
+++ b/hw/sd/sdhci.c
83
@@ -XXX,XX +XXX,XX @@ static uint64_t usdhc_read(void *opaque, hwaddr offset, unsigned size)
84
}
85
break;
86
87
+ case ESDHC_VENDOR_SPEC:
88
+ ret = s->vendor_spec;
89
+ break;
90
case ESDHC_DLL_CTRL:
91
case ESDHC_TUNE_CTRL_STATUS:
92
case ESDHC_UNDOCUMENTED_REG27:
93
case ESDHC_TUNING_CTRL:
94
- case ESDHC_VENDOR_SPEC:
95
case ESDHC_MIX_CTRL:
96
case ESDHC_WTMK_LVL:
97
ret = 0;
98
@@ -XXX,XX +XXX,XX @@ usdhc_write(void *opaque, hwaddr offset, uint64_t val, unsigned size)
99
case ESDHC_UNDOCUMENTED_REG27:
100
case ESDHC_TUNING_CTRL:
101
case ESDHC_WTMK_LVL:
102
+ break;
103
+
104
case ESDHC_VENDOR_SPEC:
105
+ s->vendor_spec = value;
106
+ switch (s->vendor) {
107
+ case SDHCI_VENDOR_IMX:
108
+ if (value & ESDHC_IMX_FRC_SDCLK_ON) {
109
+ s->prnsts &= ~SDHC_IMX_CLOCK_GATE_OFF;
110
+ } else {
111
+ s->prnsts |= SDHC_IMX_CLOCK_GATE_OFF;
112
+ }
113
+ break;
114
+ default:
115
+ break;
116
+ }
117
break;
118
119
case SDHC_HOSTCTL:
120
--
396
--
121
2.20.1
397
2.20.1
122
398
123
399
New patch
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
2
3
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
4
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
5
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
6
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
MAINTAINERS | 8 ++++++++
10
1 file changed, 8 insertions(+)
11
12
diff --git a/MAINTAINERS b/MAINTAINERS
13
index XXXXXXX..XXXXXXX 100644
14
--- a/MAINTAINERS
15
+++ b/MAINTAINERS
16
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c
17
18
Devices
19
-------
20
+Xilinx CAN
21
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
22
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
23
+S: Maintained
24
+F: hw/net/can/xlnx-*
25
+F: include/hw/net/xlnx-*
26
+F: tests/qtest/xlnx-can-test*
27
+
28
EDU
29
M: Jiri Slaby <jslaby@suse.cz>
30
S: Maintained
31
--
32
2.20.1
33
34
1
Convert the float versions of VMLA, VMLS and VMUL in the Neon
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
2
2-reg-scalar group to decodetree.
3
2
3
Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
4
it for QEMU as well. A53 was already enabled there.
5
6
1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117
7
8
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
---
13
---
6
As noted in the comment on the WRAP_FP_FN macro, we could have
14
hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
7
had a do_2scalar_fp() function, but for 3 insns it seemed
15
1 file changed, 20 insertions(+), 3 deletions(-)
8
simpler to just do the wrapping to get hold of the fpstatus ptr.
9
(These are the only fp insns in the group.)
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
---
12
target/arm/neon-dp.decode | 3 ++
13
target/arm/translate-neon.inc.c | 65 +++++++++++++++++++++++++++++++++
14
target/arm/translate.c | 37 ++-----------------
15
3 files changed, 71 insertions(+), 34 deletions(-)
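To make the design note above concrete: WRAP_FP_FN() is just a hand-written adapter, and expanding WRAP_FP_FN(gen_VMUL_F_mul, gen_helper_vfp_muls) by hand gives roughly the following (an illustrative expansion of the macro in the patch, not additional code):

    static void gen_VMUL_F_mul(TCGv_i32 rd, TCGv_i32 rn, TCGv_i32 rm)
    {
        /* Create the float-status pointer that the VFP helper needs... */
        TCGv_ptr fpstatus = get_fpstatus_ptr(1);
        /* ...call the NeonGenTwoSingleOpFn-shaped helper with it... */
        gen_helper_vfp_muls(rd, rn, rm, fpstatus);
        /* ...and free it, so the wrapper matches the NeonGenTwoOpFn
         * signature that do_2scalar() expects. */
        tcg_temp_free_ptr(fpstatus);
    }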
16
16
17
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/neon-dp.decode
19
--- a/hw/arm/sbsa-ref.c
20
+++ b/target/arm/neon-dp.decode
20
+++ b/hw/arm/sbsa-ref.c
21
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
21
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
22
&2scalar vm=%vm_dp vn=%vn_dp vd=%vd_dp
22
[SBSA_GWDT] = 16,
23
23
};
24
VMLA_2sc 1111 001 . 1 . .. .... .... 0000 . 1 . 0 .... @2scalar
24
25
+ VMLA_F_2sc 1111 001 . 1 . .. .... .... 0001 . 1 . 0 .... @2scalar
25
+static const char * const valid_cpus[] = {
26
26
+ ARM_CPU_TYPE_NAME("cortex-a53"),
27
VMLS_2sc 1111 001 . 1 . .. .... .... 0100 . 1 . 0 .... @2scalar
27
+ ARM_CPU_TYPE_NAME("cortex-a57"),
28
+ VMLS_F_2sc 1111 001 . 1 . .. .... .... 0101 . 1 . 0 .... @2scalar
28
+ ARM_CPU_TYPE_NAME("cortex-a72"),
29
29
+};
30
VMUL_2sc 1111 001 . 1 . .. .... .... 1000 . 1 . 0 .... @2scalar
31
+ VMUL_F_2sc 1111 001 . 1 . .. .... .... 1001 . 1 . 0 .... @2scalar
32
]
33
}
34
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-neon.inc.c
37
+++ b/target/arm/translate-neon.inc.c
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLS_2sc(DisasContext *s, arg_2scalar *a)
39
40
return do_2scalar(s, a, opfn[a->size], accfn[a->size]);
41
}
42
+
30
+
43
+/*
31
+static bool cpu_type_valid(const char *cpu)
44
+ * Rather than have a float-specific version of do_2scalar just for
32
+{
45
+ * three insns, we wrap a NeonGenTwoSingleOpFn to turn it into
33
+ int i;
46
+ * a NeonGenTwoOpFn.
34
+
47
+ */
35
+ for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
48
+#define WRAP_FP_FN(WRAPNAME, FUNC) \
36
+ if (strcmp(cpu, valid_cpus[i]) == 0) {
49
+ static void WRAPNAME(TCGv_i32 rd, TCGv_i32 rn, TCGv_i32 rm) \
37
+ return true;
50
+ { \
38
+ }
51
+ TCGv_ptr fpstatus = get_fpstatus_ptr(1); \
52
+ FUNC(rd, rn, rm, fpstatus); \
53
+ tcg_temp_free_ptr(fpstatus); \
54
+ }
39
+ }
55
+
40
+ return false;
56
+WRAP_FP_FN(gen_VMUL_F_mul, gen_helper_vfp_muls)
57
+WRAP_FP_FN(gen_VMUL_F_add, gen_helper_vfp_adds)
58
+WRAP_FP_FN(gen_VMUL_F_sub, gen_helper_vfp_subs)
59
+
60
+static bool trans_VMUL_F_2sc(DisasContext *s, arg_2scalar *a)
61
+{
62
+ static NeonGenTwoOpFn * const opfn[] = {
63
+ NULL,
64
+ NULL, /* TODO: fp16 support */
65
+ gen_VMUL_F_mul,
66
+ NULL,
67
+ };
68
+
69
+ return do_2scalar(s, a, opfn[a->size], NULL);
70
+}
41
+}
71
+
42
+
72
+static bool trans_VMLA_F_2sc(DisasContext *s, arg_2scalar *a)
43
static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
73
+{
44
{
74
+ static NeonGenTwoOpFn * const opfn[] = {
45
uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
75
+ NULL,
46
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
76
+ NULL, /* TODO: fp16 support */
47
const CPUArchIdList *possible_cpus;
77
+ gen_VMUL_F_mul,
48
int n, sbsa_max_cpus;
78
+ NULL,
49
79
+ };
50
- if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
80
+ static NeonGenTwoOpFn * const accfn[] = {
51
- error_report("sbsa-ref: CPU type other than the built-in "
81
+ NULL,
52
- "cortex-a57 not supported");
82
+ NULL, /* TODO: fp16 support */
53
+ if (!cpu_type_valid(machine->cpu_type)) {
83
+ gen_VMUL_F_add,
54
+ error_report("mach-virt: CPU type %s not supported", machine->cpu_type);
84
+ NULL,
55
exit(1);
85
+ };
56
}
86
+
57
87
+ return do_2scalar(s, a, opfn[a->size], accfn[a->size]);
88
+}
89
+
90
+static bool trans_VMLS_F_2sc(DisasContext *s, arg_2scalar *a)
91
+{
92
+ static NeonGenTwoOpFn * const opfn[] = {
93
+ NULL,
94
+ NULL, /* TODO: fp16 support */
95
+ gen_VMUL_F_mul,
96
+ NULL,
97
+ };
98
+ static NeonGenTwoOpFn * const accfn[] = {
99
+ NULL,
100
+ NULL, /* TODO: fp16 support */
101
+ gen_VMUL_F_sub,
102
+ NULL,
103
+ };
104
+
105
+ return do_2scalar(s, a, opfn[a->size], accfn[a->size]);
106
+}
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
112
case 0: /* Integer VMLA scalar */
113
case 4: /* Integer VMLS scalar */
114
case 8: /* Integer VMUL scalar */
115
- return 1; /* handled by decodetree */
116
-
117
case 1: /* Float VMLA scalar */
118
case 5: /* Floating point VMLS scalar */
119
case 9: /* Floating point VMUL scalar */
120
- if (size == 1) {
121
- return 1;
122
- }
123
- /* fall through */
124
+ return 1; /* handled by decodetree */
125
+
126
case 12: /* VQDMULH scalar */
127
case 13: /* VQRDMULH scalar */
128
if (u && ((rd | rn) & 1)) {
129
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
130
} else {
131
gen_helper_neon_qdmulh_s32(tmp, cpu_env, tmp, tmp2);
132
}
133
- } else if (op == 13) {
134
+ } else {
135
if (size == 1) {
136
gen_helper_neon_qrdmulh_s16(tmp, cpu_env, tmp, tmp2);
137
} else {
138
gen_helper_neon_qrdmulh_s32(tmp, cpu_env, tmp, tmp2);
139
}
140
- } else {
141
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
142
- gen_helper_vfp_muls(tmp, tmp, tmp2, fpstatus);
143
- tcg_temp_free_ptr(fpstatus);
144
}
145
tcg_temp_free_i32(tmp2);
146
- if (op < 8) {
147
- /* Accumulate. */
148
- tmp2 = neon_load_reg(rd, pass);
149
- switch (op) {
150
- case 1:
151
- {
152
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
153
- gen_helper_vfp_adds(tmp, tmp, tmp2, fpstatus);
154
- tcg_temp_free_ptr(fpstatus);
155
- break;
156
- }
157
- case 5:
158
- {
159
- TCGv_ptr fpstatus = get_fpstatus_ptr(1);
160
- gen_helper_vfp_subs(tmp, tmp2, tmp, fpstatus);
161
- tcg_temp_free_ptr(fpstatus);
162
- break;
163
- }
164
- default:
165
- abort();
166
- }
167
- tcg_temp_free_i32(tmp2);
168
- }
169
neon_store_reg(rd, pass, tmp);
170
}
171
break;
172
--
58
--
173
2.20.1
59
2.20.1
174
60
175
61
1
From: Erik Smit <erik.lucas.smit@gmail.com>
1
From: Havard Skinnemoen <hskinnemoen@google.com>
2
2
3
The hardware supports configurable descriptor sizes, configured in the DBLAC
3
Dump the collected random data after a randomness test failure.
4
register.
5
4
6
Most drivers use the default 4 word descriptor, which is currently hardcoded,
5
Note that this relies on the test having called
7
but Aspeed SDK configures 8 words to store extra data.
6
g_test_set_nonfatal_assertions() so we don't abort immediately on the
7
assertion failure.
8
8
9
Signed-off-by: Erik Smit <erik.lucas.smit@gmail.com>
9
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
10
Reviewed-by: Cédric Le Goater <clg@kaod.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
[PMM: removed unnecessary parens]
11
[PMM: minor commit message tweak]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
13
---
14
hw/net/ftgmac100.c | 26 ++++++++++++++++++++++++--
14
tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
15
1 file changed, 24 insertions(+), 2 deletions(-)
15
1 file changed, 12 insertions(+)
16
16
17
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
17
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/net/ftgmac100.c
19
--- a/tests/qtest/npcm7xx_rng-test.c
20
+++ b/hw/net/ftgmac100.c
20
+++ b/tests/qtest/npcm7xx_rng-test.c
21
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
22
#define FTGMAC100_APTC_TXPOLL_CNT(x) (((x) >> 8) & 0xf)
22
23
#define FTGMAC100_APTC_TXPOLL_TIME_SEL (1 << 12)
23
#include "libqtest-single.h"
24
24
#include "qemu/bitops.h"
25
+/*
25
+#include "qemu-common.h"
26
+ * DMA burst length and arbitration control register
26
27
+ */
27
#define RNG_BASE_ADDR 0xf000b000
28
+#define FTGMAC100_DBLAC_RXBURST_SIZE(x) (((x) >> 8) & 0x3)
28
29
+#define FTGMAC100_DBLAC_TXBURST_SIZE(x) (((x) >> 10) & 0x3)
29
@@ -XXX,XX +XXX,XX @@
30
+#define FTGMAC100_DBLAC_RXDES_SIZE(x) ((((x) >> 12) & 0xf) * 8)
30
/* Number of bits to collect for randomness tests. */
31
+#define FTGMAC100_DBLAC_TXDES_SIZE(x) ((((x) >> 16) & 0xf) * 8)
31
#define TEST_INPUT_BITS (128)
32
+#define FTGMAC100_DBLAC_IFG_CNT(x) (((x) >> 20) & 0x7)
32
33
+#define FTGMAC100_DBLAC_IFG_INC (1 << 23)
33
+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
34
+{
35
+ if (g_test_failed()) {
36
+ qemu_hexdump(stderr, "", buf, size);
37
+ }
38
+}
34
+
39
+
40
static void rng_writeb(unsigned int offset, uint8_t value)
41
{
42
writeb(RNG_BASE_ADDR + offset, value);
43
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
44
}
45
46
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
47
+ dump_buf_if_failed(buf, sizeof(buf));
48
}
49
35
/*
50
/*
36
* PHY control register
51
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
37
*/
38
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_do_tx(FTGMAC100State *s, uint32_t tx_ring,
39
if (bd.des0 & s->txdes0_edotr) {
40
addr = tx_ring;
41
} else {
42
- addr += sizeof(FTGMAC100Desc);
43
+ addr += FTGMAC100_DBLAC_TXDES_SIZE(s->dblac);
44
}
45
}
52
}
46
53
47
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_write(void *opaque, hwaddr addr,
54
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
48
s->phydata = value & 0xffff;
55
+ dump_buf_if_failed(buf.c, sizeof(buf));
49
break;
56
}
50
case FTGMAC100_DBLAC: /* DMA Burst Length and Arbitration Control */
57
51
+ if (FTGMAC100_DBLAC_TXDES_SIZE(s->dblac) < sizeof(FTGMAC100Desc)) {
58
/*
52
+ qemu_log_mask(LOG_GUEST_ERROR,
59
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
53
+ "%s: transmit descriptor too small : %d bytes\n",
54
+ __func__, FTGMAC100_DBLAC_TXDES_SIZE(s->dblac));
55
+ break;
56
+ }
57
+ if (FTGMAC100_DBLAC_RXDES_SIZE(s->dblac) < sizeof(FTGMAC100Desc)) {
58
+ qemu_log_mask(LOG_GUEST_ERROR,
59
+ "%s: receive descriptor too small : %d bytes\n",
60
+ __func__, FTGMAC100_DBLAC_RXDES_SIZE(s->dblac));
61
+ break;
62
+ }
63
s->dblac = value;
64
break;
65
case FTGMAC100_REVR: /* Feature Register */
66
@@ -XXX,XX +XXX,XX @@ static ssize_t ftgmac100_receive(NetClientState *nc, const uint8_t *buf,
67
if (bd.des0 & s->rxdes0_edorr) {
68
addr = s->rx_ring;
69
} else {
70
- addr += sizeof(FTGMAC100Desc);
71
+ addr += FTGMAC100_DBLAC_RXDES_SIZE(s->dblac);
72
}
73
}
60
}
74
s->rx_descriptor = addr;
61
62
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
63
+ dump_buf_if_failed(buf, sizeof(buf));
64
}
65
66
/*
67
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
68
}
69
70
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
71
+ dump_buf_if_failed(buf.c, sizeof(buf));
72
}
73
74
int main(int argc, char **argv)
75
--
75
--
76
2.20.1
76
2.20.1
77
77
78
78
New patch
1
From: Alex Chen <alex.chen@huawei.com>
1
2
3
We should use printf format specifier "%u" instead of "%d" for
4
argument of type "unsigned int".
5
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/misc/imx25_ccm.c | 12 ++++++------
13
1 file changed, 6 insertions(+), 6 deletions(-)
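For readers wondering why the format-specifier change matters: with a plain "%d" an unsigned int above INT_MAX is reinterpreted as a negative number, so the debug output becomes misleading. A minimal standalone illustration (not QEMU code):

    #include <stdio.h>

    int main(void)
    {
        unsigned int freq = 3000000000u;          /* e.g. a clock above INT_MAX */

        printf("freq = %d\n", freq);   /* wrong: typically prints -1294967296 */
        printf("freq = %u\n", freq);   /* right: prints 3000000000 */
        return 0;
    }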
14
15
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/imx25_ccm.c
18
+++ b/hw/misc/imx25_ccm.c
19
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
20
case IMX25_CCM_LPIMR1_REG:
21
return "lpimr1";
22
default:
23
- sprintf(unknown, "[%d ?]", reg);
24
+ sprintf(unknown, "[%u ?]", reg);
25
return unknown;
26
}
27
}
28
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
29
freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
30
}
31
32
- DPRINTF("freq = %d\n", freq);
33
+ DPRINTF("freq = %u\n", freq);
34
35
return freq;
36
}
37
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
38
39
freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
40
41
- DPRINTF("freq = %d\n", freq);
42
+ DPRINTF("freq = %u\n", freq);
43
44
return freq;
45
}
46
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
47
freq = imx25_ccm_get_mcu_clk(dev)
48
/ (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
49
50
- DPRINTF("freq = %d\n", freq);
51
+ DPRINTF("freq = %u\n", freq);
52
53
return freq;
54
}
55
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
56
57
freq = imx25_ccm_get_ahb_clk(dev) / 2;
58
59
- DPRINTF("freq = %d\n", freq);
60
+ DPRINTF("freq = %u\n", freq);
61
62
return freq;
63
}
64
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
65
break;
66
}
67
68
- DPRINTF("Clock = %d) = %d\n", clock, freq);
69
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
70
71
return freq;
72
}
73
--
74
2.20.1
75
76
New patch
1
From: Alex Chen <alex.chen@huawei.com>
1
2
3
We should use printf format specifier "%u" instead of "%d" for
4
argument of type "unsigned int".
5
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/misc/imx31_ccm.c | 14 +++++++-------
13
hw/misc/imx_ccm.c | 4 ++--
14
2 files changed, 9 insertions(+), 9 deletions(-)
15
16
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx31_ccm.c
19
+++ b/hw/misc/imx31_ccm.c
20
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
21
case IMX31_CCM_PDR2_REG:
22
return "PDR2";
23
default:
24
- sprintf(unknown, "[%d ?]", reg);
25
+ sprintf(unknown, "[%u ?]", reg);
26
return unknown;
27
}
28
}
29
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
30
freq = CKIH_FREQ;
31
}
32
33
- DPRINTF("freq = %d\n", freq);
34
+ DPRINTF("freq = %u\n", freq);
35
36
return freq;
37
}
38
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
39
freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
40
imx31_ccm_get_pll_ref_clk(dev));
41
42
- DPRINTF("freq = %d\n", freq);
43
+ DPRINTF("freq = %u\n", freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
48
freq = imx31_ccm_get_mpll_clk(dev);
49
}
50
51
- DPRINTF("freq = %d\n", freq);
52
+ DPRINTF("freq = %u\n", freq);
53
54
return freq;
55
}
56
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
57
freq = imx31_ccm_get_mcu_main_clk(dev)
58
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));
59
60
- DPRINTF("freq = %d\n", freq);
61
+ DPRINTF("freq = %u\n", freq);
62
63
return freq;
64
}
65
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
66
freq = imx31_ccm_get_hclk_clk(dev)
67
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));
68
69
- DPRINTF("freq = %d\n", freq);
70
+ DPRINTF("freq = %u\n", freq);
71
72
return freq;
73
}
74
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
75
break;
76
}
77
78
- DPRINTF("Clock = %d) = %d\n", clock, freq);
79
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
80
81
return freq;
82
}
83
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/hw/misc/imx_ccm.c
86
+++ b/hw/misc/imx_ccm.c
87
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
88
freq = klass->get_clock_frequency(dev, clock);
89
}
90
91
- DPRINTF("(clock = %d) = %d\n", clock, freq);
92
+ DPRINTF("(clock = %d) = %u\n", clock, freq);
93
94
return freq;
95
}
96
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
97
freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
98
(mfd * pd)) << 10;
99
100
- DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
101
+ DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
102
freq);
103
104
return freq;
105
--
106
2.20.1
107
108
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
1
From: Alex Chen <alex.chen@huawei.com>
2
2
3
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
3
We should use printf format specifier "%u" instead of "%d" for
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
argument of type "unsigned int".
5
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
6
[PMD: Fixed 32-bit format string using PRIx32/PRIx64]
6
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
hw/net/imx_fec.c | 106 +++++++++++++++++++-------------------------
12
hw/misc/imx6_ccm.c | 20 ++++++++++----------
11
hw/net/trace-events | 18 ++++++++
13
hw/misc/imx6_src.c | 2 +-
12
2 files changed, 63 insertions(+), 61 deletions(-)
14
2 files changed, 11 insertions(+), 11 deletions(-)
13
15
14
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
16
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/net/imx_fec.c
18
--- a/hw/misc/imx6_ccm.c
17
+++ b/hw/net/imx_fec.c
19
+++ b/hw/misc/imx6_ccm.c
18
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
19
#include "qemu/module.h"
21
case CCM_CMEOR:
20
#include "net/checksum.h"
22
return "CMEOR";
21
#include "net/eth.h"
23
default:
22
+#include "trace.h"
24
- sprintf(unknown, "%d ?", reg);
23
25
+ sprintf(unknown, "%u ?", reg);
24
/* For crc32 */
26
return unknown;
25
#include <zlib.h>
27
}
26
27
-#ifndef DEBUG_IMX_FEC
28
-#define DEBUG_IMX_FEC 0
29
-#endif
30
-
31
-#define FEC_PRINTF(fmt, args...) \
32
- do { \
33
- if (DEBUG_IMX_FEC) { \
34
- fprintf(stderr, "[%s]%s: " fmt , TYPE_IMX_FEC, \
35
- __func__, ##args); \
36
- } \
37
- } while (0)
38
-
39
-#ifndef DEBUG_IMX_PHY
40
-#define DEBUG_IMX_PHY 0
41
-#endif
42
-
43
-#define PHY_PRINTF(fmt, args...) \
44
- do { \
45
- if (DEBUG_IMX_PHY) { \
46
- fprintf(stderr, "[%s.phy]%s: " fmt , TYPE_IMX_FEC, \
47
- __func__, ##args); \
48
- } \
49
- } while (0)
50
-
51
#define IMX_MAX_DESC 1024
52
53
static const char *imx_default_reg_name(IMXFECState *s, uint32_t index)
54
@@ -XXX,XX +XXX,XX @@ static void imx_eth_update(IMXFECState *s);
55
* For now we don't handle any GPIO/interrupt line, so the OS will
56
* have to poll for the PHY status.
57
*/
58
-static void phy_update_irq(IMXFECState *s)
59
+static void imx_phy_update_irq(IMXFECState *s)
60
{
61
imx_eth_update(s);
62
}
28
}
63
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
64
-static void phy_update_link(IMXFECState *s)
30
case USB_ANALOG_DIGPROG:
65
+static void imx_phy_update_link(IMXFECState *s)
31
return "USB_ANALOG_DIGPROG";
66
{
32
default:
67
/* Autonegotiation status mirrors link status. */
33
- sprintf(unknown, "%d ?", reg);
68
if (qemu_get_queue(s->nic)->link_down) {
34
+ sprintf(unknown, "%u ?", reg);
69
- PHY_PRINTF("link is down\n");
35
return unknown;
70
+ trace_imx_phy_update_link("down");
71
s->phy_status &= ~0x0024;
72
s->phy_int |= PHY_INT_DOWN;
73
} else {
74
- PHY_PRINTF("link is up\n");
75
+ trace_imx_phy_update_link("up");
76
s->phy_status |= 0x0024;
77
s->phy_int |= PHY_INT_ENERGYON;
78
s->phy_int |= PHY_INT_AUTONEG_COMPLETE;
79
}
36
}
80
- phy_update_irq(s);
81
+ imx_phy_update_irq(s);
82
}
37
}
83
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
84
static void imx_eth_set_link(NetClientState *nc)
39
freq *= 20;
85
{
40
}
86
- phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
41
87
+ imx_phy_update_link(IMX_FEC(qemu_get_nic_opaque(nc)));
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
44
45
return freq;
88
}
46
}
89
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
90
-static void phy_reset(IMXFECState *s)
48
freq = imx6_analog_get_pll2_clk(dev) * 18
91
+static void imx_phy_reset(IMXFECState *s)
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
92
{
50
93
+ trace_imx_phy_reset();
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
94
+
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
95
s->phy_status = 0x7809;
53
96
s->phy_control = 0x3000;
54
return freq;
97
s->phy_advertise = 0x01e1;
98
s->phy_int_mask = 0;
99
s->phy_int = 0;
100
- phy_update_link(s);
101
+ imx_phy_update_link(s);
102
}
55
}
103
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
104
-static uint32_t do_phy_read(IMXFECState *s, int reg)
57
freq = imx6_analog_get_pll2_clk(dev) * 18
105
+static uint32_t imx_phy_read(IMXFECState *s, int reg)
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
106
{
59
107
uint32_t val;
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
108
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
109
@@ -XXX,XX +XXX,XX @@ static uint32_t do_phy_read(IMXFECState *s, int reg)
62
110
case 29: /* Interrupt source. */
63
return freq;
111
val = s->phy_int;
64
}
112
s->phy_int = 0;
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
113
- phy_update_irq(s);
114
+ imx_phy_update_irq(s);
115
break;
116
case 30: /* Interrupt mask */
117
val = s->phy_int_mask;
118
@@ -XXX,XX +XXX,XX @@ static uint32_t do_phy_read(IMXFECState *s, int reg)
119
break;
66
break;
120
}
67
}
121
68
122
- PHY_PRINTF("read 0x%04x @ %d\n", val, reg);
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
123
+ trace_imx_phy_read(val, reg);
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
124
71
125
return val;
72
return freq;
126
}
73
}
127
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
128
-static void do_phy_write(IMXFECState *s, int reg, uint32_t val)
75
freq = imx6_analog_get_periph_clk(dev)
129
+static void imx_phy_write(IMXFECState *s, int reg, uint32_t val)
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
130
{
77
131
- PHY_PRINTF("write 0x%04x @ %d\n", val, reg);
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
132
+ trace_imx_phy_write(val, reg);
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
133
80
134
if (reg > 31) {
81
return freq;
135
/* we only advertise one phy */
136
@@ -XXX,XX +XXX,XX @@ static void do_phy_write(IMXFECState *s, int reg, uint32_t val)
137
switch (reg) {
138
case 0: /* Basic Control */
139
if (val & 0x8000) {
140
- phy_reset(s);
141
+ imx_phy_reset(s);
142
} else {
143
s->phy_control = val & 0x7980;
144
/* Complete autonegotiation immediately. */
145
@@ -XXX,XX +XXX,XX @@ static void do_phy_write(IMXFECState *s, int reg, uint32_t val)
146
break;
147
case 30: /* Interrupt mask */
148
s->phy_int_mask = val & 0xff;
149
- phy_update_irq(s);
150
+ imx_phy_update_irq(s);
151
break;
152
case 17:
153
case 18:
154
@@ -XXX,XX +XXX,XX @@ static void do_phy_write(IMXFECState *s, int reg, uint32_t val)
155
static void imx_fec_read_bd(IMXFECBufDesc *bd, dma_addr_t addr)
156
{
157
dma_memory_read(&address_space_memory, addr, bd, sizeof(*bd));
158
+
159
+ trace_imx_fec_read_bd(addr, bd->flags, bd->length, bd->data);
160
}
82
}
161
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
162
static void imx_fec_write_bd(IMXFECBufDesc *bd, dma_addr_t addr)
84
freq = imx6_ccm_get_ahb_clk(dev)
163
@@ -XXX,XX +XXX,XX @@ static void imx_fec_write_bd(IMXFECBufDesc *bd, dma_addr_t addr)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
164
static void imx_enet_read_bd(IMXENETBufDesc *bd, dma_addr_t addr)
86
165
{
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
166
dma_memory_read(&address_space_memory, addr, bd, sizeof(*bd));
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
167
+
89
168
+ trace_imx_enet_read_bd(addr, bd->flags, bd->length, bd->data,
90
return freq;
169
+ bd->option, bd->status);
170
}
91
}
171
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
172
static void imx_enet_write_bd(IMXENETBufDesc *bd, dma_addr_t addr)
93
freq = imx6_ccm_get_ipg_clk(dev)
173
@@ -XXX,XX +XXX,XX @@ static void imx_fec_do_tx(IMXFECState *s)
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
174
int len;
95
175
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
176
imx_fec_read_bd(&bd, addr);
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
177
- FEC_PRINTF("tx_bd %x flags %04x len %d data %08x\n",
98
178
- addr, bd.flags, bd.length, bd.data);
99
return freq;
179
if ((bd.flags & ENET_BD_R) == 0) {
180
+
181
/* Run out of descriptors to transmit. */
182
- FEC_PRINTF("tx_bd ran out of descriptors to transmit\n");
183
+ trace_imx_eth_tx_bd_busy();
184
+
185
break;
186
}
187
len = bd.length;
188
@@ -XXX,XX +XXX,XX @@ static void imx_enet_do_tx(IMXFECState *s, uint32_t index)
189
int len;
190
191
imx_enet_read_bd(&bd, addr);
192
- FEC_PRINTF("tx_bd %x flags %04x len %d data %08x option %04x "
193
- "status %04x\n", addr, bd.flags, bd.length, bd.data,
194
- bd.option, bd.status);
195
if ((bd.flags & ENET_BD_R) == 0) {
196
/* Run out of descriptors to transmit. */
197
+
198
+ trace_imx_eth_tx_bd_busy();
199
+
200
break;
201
}
202
len = bd.length;
203
@@ -XXX,XX +XXX,XX @@ static void imx_eth_enable_rx(IMXFECState *s, bool flush)
204
s->regs[ENET_RDAR] = (bd.flags & ENET_BD_E) ? ENET_RDAR_RDAR : 0;
205
206
if (!s->regs[ENET_RDAR]) {
207
- FEC_PRINTF("RX buffer full\n");
208
+ trace_imx_eth_rx_bd_full();
209
} else if (flush) {
210
qemu_flush_queued_packets(qemu_get_queue(s->nic));
211
}
212
@@ -XXX,XX +XXX,XX @@ static void imx_eth_reset(DeviceState *d)
213
memset(s->tx_descriptor, 0, sizeof(s->tx_descriptor));
214
215
/* We also reset the PHY */
216
- phy_reset(s);
217
+ imx_phy_reset(s);
218
}
100
}
219
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
220
static uint32_t imx_default_read(IMXFECState *s, uint32_t index)
221
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_eth_read(void *opaque, hwaddr offset, unsigned size)
222
break;
102
break;
223
}
103
}
224
104
225
- FEC_PRINTF("reg[%s] => 0x%" PRIx32 "\n", imx_eth_reg_name(s, index),
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
226
- value);
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
227
+ trace_imx_eth_read(index, imx_eth_reg_name(s, index), value);
107
228
108
return freq;
229
return value;
230
}
109
}
231
@@ -XXX,XX +XXX,XX @@ static void imx_eth_write(void *opaque, hwaddr offset, uint64_t value,
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
232
const bool single_tx_ring = !imx_eth_is_multi_tx_ring(s);
111
index XXXXXXX..XXXXXXX 100644
233
uint32_t index = offset >> 2;
112
--- a/hw/misc/imx6_src.c
234
113
+++ b/hw/misc/imx6_src.c
235
- FEC_PRINTF("reg[%s] <= 0x%" PRIx32 "\n", imx_eth_reg_name(s, index),
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
236
- (uint32_t)value);
115
case SRC_GPR10:
237
+ trace_imx_eth_write(index, imx_eth_reg_name(s, index), value);
116
return "SRC_GPR10";
238
117
default:
239
switch (index) {
118
- sprintf(unknown, "%d ?", reg);
240
case ENET_EIR:
119
+ sprintf(unknown, "%u ?", reg);
241
@@ -XXX,XX +XXX,XX @@ static void imx_eth_write(void *opaque, hwaddr offset, uint64_t value,
120
return unknown;
242
if (extract32(value, 29, 1)) {
121
}
243
/* This is a read operation */
244
s->regs[ENET_MMFR] = deposit32(s->regs[ENET_MMFR], 0, 16,
245
- do_phy_read(s,
246
+ imx_phy_read(s,
247
extract32(value,
248
18, 10)));
249
} else {
250
/* This a write operation */
251
- do_phy_write(s, extract32(value, 18, 10), extract32(value, 0, 16));
252
+ imx_phy_write(s, extract32(value, 18, 10), extract32(value, 0, 16));
253
}
254
/* raise the interrupt as the PHY operation is done */
255
s->regs[ENET_EIR] |= ENET_INT_MII;
256
@@ -XXX,XX +XXX,XX @@ static bool imx_eth_can_receive(NetClientState *nc)
257
{
258
IMXFECState *s = IMX_FEC(qemu_get_nic_opaque(nc));
259
260
- FEC_PRINTF("\n");
261
-
262
return !!s->regs[ENET_RDAR];
263
}
122
}
264
265
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_fec_receive(NetClientState *nc, const uint8_t *buf,
266
unsigned int buf_len;
267
size_t size = len;
268
269
- FEC_PRINTF("len %d\n", (int)size);
270
+ trace_imx_fec_receive(size);
271
272
if (!s->regs[ENET_RDAR]) {
273
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Unexpected packet\n",
274
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_fec_receive(NetClientState *nc, const uint8_t *buf,
275
bd.length = buf_len;
276
size -= buf_len;
277
278
- FEC_PRINTF("rx_bd 0x%x length %d\n", addr, bd.length);
279
+ trace_imx_fec_receive_len(addr, bd.length);
280
281
/* The last 4 bytes are the CRC. */
282
if (size < 4) {
283
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_fec_receive(NetClientState *nc, const uint8_t *buf,
284
if (size == 0) {
285
/* Last buffer in frame. */
286
bd.flags |= flags | ENET_BD_L;
287
- FEC_PRINTF("rx frame flags %04x\n", bd.flags);
288
+
289
+ trace_imx_fec_receive_last(bd.flags);
290
+
291
s->regs[ENET_EIR] |= ENET_INT_RXF;
292
} else {
293
s->regs[ENET_EIR] |= ENET_INT_RXB;
294
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
295
size_t size = len;
296
bool shift16 = s->regs[ENET_RACC] & ENET_RACC_SHIFT16;
297
298
- FEC_PRINTF("len %d\n", (int)size);
299
+ trace_imx_enet_receive(size);
300
301
if (!s->regs[ENET_RDAR]) {
302
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Unexpected packet\n",
303
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
304
bd.length = buf_len;
305
size -= buf_len;
306
307
- FEC_PRINTF("rx_bd 0x%x length %d\n", addr, bd.length);
308
+ trace_imx_enet_receive_len(addr, bd.length);
309
310
/* The last 4 bytes are the CRC. */
311
if (size < 4) {
312
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
313
if (size == 0) {
314
/* Last buffer in frame. */
315
bd.flags |= flags | ENET_BD_L;
316
- FEC_PRINTF("rx frame flags %04x\n", bd.flags);
317
+
318
+ trace_imx_enet_receive_last(bd.flags);
319
+
320
/* Indicate that we've updated the last buffer descriptor. */
321
bd.last_buffer = ENET_BD_BDU;
322
if (bd.option & ENET_BD_RX_INT) {
323
diff --git a/hw/net/trace-events b/hw/net/trace-events
324
index XXXXXXX..XXXXXXX 100644
325
--- a/hw/net/trace-events
326
+++ b/hw/net/trace-events
327
@@ -XXX,XX +XXX,XX @@ i82596_receive_packet(size_t sz) "len=%zu"
328
i82596_new_mac(const char *id_with_mac) "New MAC for: %s"
329
i82596_set_multicast(uint16_t count) "Added %d multicast entries"
330
i82596_channel_attention(void *s) "%p: Received CHANNEL ATTENTION"
331
+
332
+# imx_fec.c
333
+imx_phy_read(uint32_t val, int reg) "0x%04"PRIx32" <= reg[%d]"
334
+imx_phy_write(uint32_t val, int reg) "0x%04"PRIx32" => reg[%d]"
335
+imx_phy_update_link(const char *s) "%s"
336
+imx_phy_reset(void) ""
337
+imx_fec_read_bd(uint64_t addr, int flags, int len, int data) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x"
338
+imx_enet_read_bd(uint64_t addr, int flags, int len, int data, int options, int status) "tx_bd 0x%"PRIx64" flags 0x%04x len %d data 0x%08x option 0x%04x status 0x%04x"
339
+imx_eth_tx_bd_busy(void) "tx_bd ran out of descriptors to transmit"
340
+imx_eth_rx_bd_full(void) "RX buffer is full"
341
+imx_eth_read(int reg, const char *reg_name, uint32_t value) "reg[%d:%s] => 0x%08"PRIx32
342
+imx_eth_write(int reg, const char *reg_name, uint64_t value) "reg[%d:%s] <= 0x%08"PRIx64
343
+imx_fec_receive(size_t size) "len %zu"
344
+imx_fec_receive_len(uint64_t addr, int len) "rx_bd 0x%"PRIx64" length %d"
345
+imx_fec_receive_last(int last) "rx frame flags 0x%04x"
346
+imx_enet_receive(size_t size) "len %zu"
347
+imx_enet_receive_len(uint64_t addr, int len) "rx_bd 0x%"PRIx64" length %d"
348
+imx_enet_receive_last(int last) "rx frame flags 0x%04x"
349
--
2.20.1

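(Not part of the patch above, just a usage note: once QEMU is rebuilt with the trace events added above, they can typically be enabled at run time with the usual trace options, for example

    qemu-system-arm ... -trace "imx_phy_*" -trace "imx_eth_*"

The exact invocation depends on which trace backend was selected at configure time.)
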
From: Jean-Christophe Dubois <jcd@tribudubois.net>

Some bits of the CCM registers are non-writable.

This was left undone in the initial commit (all bits of registers were
writable).

This patch adds the required code to protect the non-writable bits.

Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
Message-id: 20200608133508.550046-1-jcd@tribudubois.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx6ul_ccm.c | 76 ++++++++++++++++++++++++++++++--------
 1 file changed, 63 insertions(+), 13 deletions(-)

From: Alex Chen <alex.chen@huawei.com>

We should use printf format specifier "%u" instead of "%d" for
argument of type "unsigned int".

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Message-id: 20201126111109.112238-5-alex.chen@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx6ul_ccm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
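As a minimal illustration of the write-protection scheme described above (hypothetical helper, not code from either patch): bits set in the mask keep their current value, everything else takes the value written by the guest.

    #include <stdint.h>

    /* Bits set in 'ro_mask' are read-only and keep their old value;
     * the remaining bits accept the newly written value. */
    static uint32_t masked_write(uint32_t old, uint32_t ro_mask, uint32_t val)
    {
        return (old & ro_mask) | (val & ~ro_mask);
    }

    /* Example: with ro_mask = 0xf01fef80 (the CCM_CCR entry in the
     * table below), writing 0 clears only the writable bits. */
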
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
15
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/misc/imx6ul_ccm.c
17
--- a/hw/misc/imx6ul_ccm.c
21
+++ b/hw/misc/imx6ul_ccm.c
18
+++ b/hw/misc/imx6ul_ccm.c
22
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_ccm_reg_name(uint32_t reg)
23
20
case CCM_CMEOR:
24
#include "trace.h"
21
return "CMEOR";
25
22
default:
26
+static const uint32_t ccm_mask[CCM_MAX] = {
23
- sprintf(unknown, "%d ?", reg);
27
+ [CCM_CCR] = 0xf01fef80,
24
+ sprintf(unknown, "%u ?", reg);
28
+ [CCM_CCDR] = 0xfffeffff,
25
return unknown;
29
+ [CCM_CSR] = 0xffffffff,
26
}
30
+ [CCM_CCSR] = 0xfffffef2,
31
+ [CCM_CACRR] = 0xfffffff8,
32
+ [CCM_CBCDR] = 0xc1f8e000,
33
+ [CCM_CBCMR] = 0xfc03cfff,
34
+ [CCM_CSCMR1] = 0x80700000,
35
+ [CCM_CSCMR2] = 0xe01ff003,
36
+ [CCM_CSCDR1] = 0xfe00c780,
37
+ [CCM_CS1CDR] = 0xfe00fe00,
38
+ [CCM_CS2CDR] = 0xf8007000,
39
+ [CCM_CDCDR] = 0xf00fffff,
40
+ [CCM_CHSCCDR] = 0xfffc01ff,
41
+ [CCM_CSCDR2] = 0xfe0001ff,
42
+ [CCM_CSCDR3] = 0xffffc1ff,
43
+ [CCM_CDHIPR] = 0xffffffff,
44
+ [CCM_CTOR] = 0x00000000,
45
+ [CCM_CLPCR] = 0xf39ff01c,
46
+ [CCM_CISR] = 0xfb85ffbe,
47
+ [CCM_CIMR] = 0xfb85ffbf,
48
+ [CCM_CCOSR] = 0xfe00fe00,
49
+ [CCM_CGPR] = 0xfffc3fea,
50
+ [CCM_CCGR0] = 0x00000000,
51
+ [CCM_CCGR1] = 0x00000000,
52
+ [CCM_CCGR2] = 0x00000000,
53
+ [CCM_CCGR3] = 0x00000000,
54
+ [CCM_CCGR4] = 0x00000000,
55
+ [CCM_CCGR5] = 0x00000000,
56
+ [CCM_CCGR6] = 0x00000000,
57
+ [CCM_CMEOR] = 0xafffff1f,
58
+};
59
+
60
+static const uint32_t analog_mask[CCM_ANALOG_MAX] = {
61
+ [CCM_ANALOG_PLL_ARM] = 0xfff60f80,
62
+ [CCM_ANALOG_PLL_USB1] = 0xfffe0fbc,
63
+ [CCM_ANALOG_PLL_USB2] = 0xfffe0fbc,
64
+ [CCM_ANALOG_PLL_SYS] = 0xfffa0ffe,
65
+ [CCM_ANALOG_PLL_SYS_SS] = 0x00000000,
66
+ [CCM_ANALOG_PLL_SYS_NUM] = 0xc0000000,
67
+ [CCM_ANALOG_PLL_SYS_DENOM] = 0xc0000000,
68
+ [CCM_ANALOG_PLL_AUDIO] = 0xffe20f80,
69
+ [CCM_ANALOG_PLL_AUDIO_NUM] = 0xc0000000,
70
+ [CCM_ANALOG_PLL_AUDIO_DENOM] = 0xc0000000,
71
+ [CCM_ANALOG_PLL_VIDEO] = 0xffe20f80,
72
+ [CCM_ANALOG_PLL_VIDEO_NUM] = 0xc0000000,
73
+ [CCM_ANALOG_PLL_VIDEO_DENOM] = 0xc0000000,
74
+ [CCM_ANALOG_PLL_ENET] = 0xffc20ff0,
75
+ [CCM_ANALOG_PFD_480] = 0x40404040,
76
+ [CCM_ANALOG_PFD_528] = 0x40404040,
77
+ [PMU_MISC0] = 0x01fe8306,
78
+ [PMU_MISC1] = 0x07fcede0,
79
+ [PMU_MISC2] = 0x005f5f5f,
80
+};
81
+
82
static const char *imx6ul_ccm_reg_name(uint32_t reg)
83
{
84
static char unknown[20];
85
@@ -XXX,XX +XXX,XX @@ static void imx6ul_ccm_write(void *opaque, hwaddr offset, uint64_t value,
86
87
trace_ccm_write_reg(imx6ul_ccm_reg_name(index), (uint32_t)value);
88
89
- /*
90
- * We will do a better implementation later. In particular some bits
91
- * cannot be written to.
92
- */
93
- s->ccm[index] = (uint32_t)value;
94
+ s->ccm[index] = (s->ccm[index] & ccm_mask[index]) |
95
+ ((uint32_t)value & ~ccm_mask[index]);
96
}
27
}
97
28
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_analog_reg_name(uint32_t reg)
98
static uint64_t imx6ul_analog_read(void *opaque, hwaddr offset, unsigned size)
29
case USB_ANALOG_DIGPROG:
99
@@ -XXX,XX +XXX,XX @@ static void imx6ul_analog_write(void *opaque, hwaddr offset, uint64_t value,
30
return "USB_ANALOG_DIGPROG";
100
* the REG_NAME register. So we change the value of the
101
* REG_NAME register, setting bits passed in the value.
102
*/
103
- s->analog[index - 1] |= value;
104
+ s->analog[index - 1] |= (value & ~analog_mask[index - 1]);
105
break;
106
case CCM_ANALOG_PLL_ARM_CLR:
107
case CCM_ANALOG_PLL_USB1_CLR:
108
@@ -XXX,XX +XXX,XX @@ static void imx6ul_analog_write(void *opaque, hwaddr offset, uint64_t value,
109
* the REG_NAME register. So we change the value of the
110
* REG_NAME register, unsetting bits passed in the value.
111
*/
112
- s->analog[index - 2] &= ~value;
113
+ s->analog[index - 2] &= ~(value & ~analog_mask[index - 2]);
114
break;
115
case CCM_ANALOG_PLL_ARM_TOG:
116
case CCM_ANALOG_PLL_USB1_TOG:
117
@@ -XXX,XX +XXX,XX @@ static void imx6ul_analog_write(void *opaque, hwaddr offset, uint64_t value,
118
* the REG_NAME register. So we change the value of the
119
* REG_NAME register, toggling bits passed in the value.
120
*/
121
- s->analog[index - 3] ^= value;
122
+ s->analog[index - 3] ^= (value & ~analog_mask[index - 3]);
123
break;
124
default:
31
default:
125
- /*
32
- sprintf(unknown, "%d ?", reg);
126
- * We will do a better implementation later. In particular some bits
33
+ sprintf(unknown, "%u ?", reg);
127
- * cannot be written to.
34
return unknown;
128
- */
129
- s->analog[index] = value;
130
+ s->analog[index] = (s->analog[index] & analog_mask[index]) |
131
+ (value & ~analog_mask[index]);
132
break;
133
}
35
}
134
}
36
}
135
--
37
--
136
2.20.1
38
2.20.1
137
39
138
40
1
Convert the Neon 3-reg-diff insns VQDMULL, VQDMLAL and VQDMLSL:
these are all saturating doubling long multiplies with a possible
accumulate step.

These are the last insns in the group which use the pass-over-each
elements loop, so we can delete that code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/neon-dp.decode | 6 +++
 target/arm/translate-neon.inc.c | 82 +++++++++++++++++++++++++++++++++
 target/arm/translate.c | 59 ++----------------------
 3 files changed, 92 insertions(+), 55 deletions(-)

For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
Private Peripheral Bus range, which includes all of the memory mapped
devices and registers that are part of the CPU itself, including the
NVIC, systick timer, and debug and trace components like the Data
Watchpoint and Trace unit (DWT). Within this large region, the range
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
registers, systick) and 0xe002e000 to 0xe002efff is its Non-secure
alias.

The architecture is clear that within the SCS unimplemented registers
should be RES0 for privileged accesses and generate BusFault for
unprivileged accesses, and we currently implement this.

It is less clear about how to handle accesses to unimplemented
regions of the wider PPB. Unprivileged accesses should definitely
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
not given as a general rule. However, the register definitions of
individual registers for components like the DWT all state that they
are RES0 if the relevant component is not implemented, so the
simplest way to provide that is to provide RAZ/WI for the whole range
for privileged accesses. (The v7M Arm ARM does say that reserved
registers should be UNK/SBZP.)

Expand the container MemoryRegion that the NVIC exposes so that
it covers the whole PPB space. This means:
 * moving the address that the ARMV7M device maps it to down by
   0xe000 bytes
 * moving the offsets within the container of all the
   subregions forward by 0xe000 bytes
 * adding a new default MemoryRegion that covers the whole container
   at a lower priority than anything else and which provides the
   RAZWI/BusFault behaviour

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
---
 include/hw/intc/armv7m_nvic.h | 1 +
 hw/arm/armv7m.c | 2 +-
 hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
 3 files changed, 69 insertions(+), 12 deletions(-)
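A quick arithmetic check of the new layout (illustrative constants only, not code from the patch): with the container now covering the whole PPB from 0xe0000000, the System Control Space and its Non-secure alias land at the container offsets the patch uses.

    #include <assert.h>

    #define PPB_BASE   0xe0000000u  /* new base of the "nvic" container */
    #define SCS_ABS    0xe000e000u  /* NVIC/system registers */
    #define SCS_NS_ABS 0xe002e000u  /* Non-secure alias of the SCS */

    int main(void)
    {
        assert(SCS_ABS - PPB_BASE == 0xe000);     /* sysregmem offset */
        assert(SCS_NS_ABS - PPB_BASE == 0x2e000); /* sysreg_ns_mem offset */
        return 0;
    }
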
16
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
43
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
17
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/neon-dp.decode
45
--- a/include/hw/intc/armv7m_nvic.h
19
+++ b/target/arm/neon-dp.decode
46
+++ b/include/hw/intc/armv7m_nvic.h
20
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
47
@@ -XXX,XX +XXX,XX @@ struct NVICState {
21
VMLAL_S_3d 1111 001 0 1 . .. .... .... 1000 . 0 . 0 .... @3diff
48
MemoryRegion systickmem;
22
VMLAL_U_3d 1111 001 1 1 . .. .... .... 1000 . 0 . 0 .... @3diff
49
MemoryRegion systick_ns_mem;
23
50
MemoryRegion container;
24
+ VQDMLAL_3d 1111 001 0 1 . .. .... .... 1001 . 0 . 0 .... @3diff
51
+ MemoryRegion defaultmem;
25
+
52
26
VMLSL_S_3d 1111 001 0 1 . .. .... .... 1010 . 0 . 0 .... @3diff
53
uint32_t num_irq;
27
VMLSL_U_3d 1111 001 1 1 . .. .... .... 1010 . 0 . 0 .... @3diff
54
qemu_irq excpout;
28
55
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
29
+ VQDMLSL_3d 1111 001 0 1 . .. .... .... 1011 . 0 . 0 .... @3diff
30
+
31
VMULL_S_3d 1111 001 0 1 . .. .... .... 1100 . 0 . 0 .... @3diff
32
VMULL_U_3d 1111 001 1 1 . .. .... .... 1100 . 0 . 0 .... @3diff
33
+
34
+ VQDMULL_3d 1111 001 0 1 . .. .... .... 1101 . 0 . 0 .... @3diff
35
]
36
}
37
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
38
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate-neon.inc.c
57
--- a/hw/arm/armv7m.c
40
+++ b/target/arm/translate-neon.inc.c
58
+++ b/hw/arm/armv7m.c
41
@@ -XXX,XX +XXX,XX @@ DO_VMLAL(VMLAL_S,mull_s,add)
59
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
42
DO_VMLAL(VMLAL_U,mull_u,add)
60
sysbus_connect_irq(sbd, 0,
43
DO_VMLAL(VMLSL_S,mull_s,sub)
61
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
44
DO_VMLAL(VMLSL_U,mull_u,sub)
62
45
+
63
- memory_region_add_subregion(&s->container, 0xe000e000,
46
+static void gen_VQDMULL_16(TCGv_i64 rd, TCGv_i32 rn, TCGv_i32 rm)
64
+ memory_region_add_subregion(&s->container, 0xe0000000,
65
sysbus_mmio_get_region(sbd, 0));
66
67
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
71
+++ b/hw/intc/armv7m_nvic.c
72
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
73
.endianness = DEVICE_NATIVE_ENDIAN,
74
};
75
76
+/*
77
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
78
+ * accesses, and fault for non-privileged accesses.
79
+ */
80
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
81
+ uint64_t *data, unsigned size,
82
+ MemTxAttrs attrs)
47
+{
83
+{
48
+ gen_helper_neon_mull_s16(rd, rn, rm);
84
+ qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
49
+ gen_helper_neon_addl_saturate_s32(rd, cpu_env, rd, rd);
85
+ (uint32_t)addr);
86
+ if (attrs.user) {
87
+ return MEMTX_ERROR;
88
+ }
89
+ *data = 0;
90
+ return MEMTX_OK;
50
+}
91
+}
51
+
92
+
52
+static void gen_VQDMULL_32(TCGv_i64 rd, TCGv_i32 rn, TCGv_i32 rm)
93
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
94
+ uint64_t value, unsigned size,
95
+ MemTxAttrs attrs)
53
+{
96
+{
54
+ gen_mull_s32(rd, rn, rm);
97
+ qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
55
+ gen_helper_neon_addl_saturate_s64(rd, cpu_env, rd, rd);
98
+ (uint32_t)addr);
99
+ if (attrs.user) {
100
+ return MEMTX_ERROR;
101
+ }
102
+ return MEMTX_OK;
56
+}
103
+}
57
+
104
+
58
+static bool trans_VQDMULL_3d(DisasContext *s, arg_3diff *a)
105
+static const MemoryRegionOps ppb_default_ops = {
59
+{
106
+ .read_with_attrs = ppb_default_read,
60
+ static NeonGenTwoOpWidenFn * const opfn[] = {
107
+ .write_with_attrs = ppb_default_write,
61
+ NULL,
108
+ .endianness = DEVICE_NATIVE_ENDIAN,
62
+ gen_VQDMULL_16,
109
+ .valid.min_access_size = 1,
63
+ gen_VQDMULL_32,
110
+ .valid.max_access_size = 8,
64
+ NULL,
111
+};
65
+ };
66
+
112
+
67
+ return do_long_3d(s, a, opfn[a->size], NULL);
113
static int nvic_post_load(void *opaque, int version_id)
68
+}
114
{
69
+
115
NVICState *s = opaque;
70
+static void gen_VQDMLAL_acc_16(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
116
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
71
+{
117
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
72
+ gen_helper_neon_addl_saturate_s32(rd, cpu_env, rn, rm);
118
{
73
+}
119
NVICState *s = NVIC(dev);
74
+
120
- int regionlen;
75
+static void gen_VQDMLAL_acc_32(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
121
76
+{
122
/* The armv7m container object will have set our CPU pointer */
77
+ gen_helper_neon_addl_saturate_s64(rd, cpu_env, rn, rm);
123
if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
78
+}
124
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
79
+
125
M_REG_S));
80
+static bool trans_VQDMLAL_3d(DisasContext *s, arg_3diff *a)
126
}
81
+{
127
82
+ static NeonGenTwoOpWidenFn * const opfn[] = {
128
- /* The NVIC and System Control Space (SCS) starts at 0xe000e000
83
+ NULL,
129
+ /*
84
+ gen_VQDMULL_16,
130
+ * This device provides a single sysbus memory region which
85
+ gen_VQDMULL_32,
131
+ * represents the whole of the "System PPB" space. This is the
86
+ NULL,
132
+ * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
87
+ };
133
+ * the System Control Space (system registers), the systick timer,
88
+ static NeonGenTwo64OpFn * const accfn[] = {
134
+ * and for CPUs with the Security extension an NS banked version
89
+ NULL,
135
+ * of all of these.
90
+ gen_VQDMLAL_acc_16,
136
+ *
91
+ gen_VQDMLAL_acc_32,
137
+ * The default behaviour for unimplemented registers/ranges
92
+ NULL,
138
+ * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
93
+ };
139
+ * is to RAZ/WI for privileged access and BusFault for non-privileged
94
+
140
+ * access.
95
+ return do_long_3d(s, a, opfn[a->size], accfn[a->size]);
141
+ *
96
+}
142
+ * The NVIC and System Control Space (SCS) starts at 0xe000e000
97
+
143
* and looks like this:
98
+static void gen_VQDMLSL_acc_16(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
144
* 0x004 - ICTR
99
+{
145
* 0x010 - 0xff - systick
100
+ gen_helper_neon_negl_u32(rm, rm);
146
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
101
+ gen_helper_neon_addl_saturate_s32(rd, cpu_env, rn, rm);
147
* generally code determining which banked register to use should
102
+}
148
* use attrs.secure; code determining actual behaviour of the system
103
+
149
* should use env->v7m.secure.
104
+static void gen_VQDMLSL_acc_32(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
150
+ *
105
+{
151
+ * The container covers the whole PPB space. Within it the priority
106
+ tcg_gen_neg_i64(rm, rm);
152
+ * of overlapping regions is:
107
+ gen_helper_neon_addl_saturate_s64(rd, cpu_env, rn, rm);
153
+ * - default region (for RAZ/WI and BusFault) : -1
108
+}
154
+ * - system register regions : 0
109
+
155
+ * - systick : 1
110
+static bool trans_VQDMLSL_3d(DisasContext *s, arg_3diff *a)
156
+ * This is because the systick device is a small block of registers
111
+{
157
+ * in the middle of the other system control registers.
112
+ static NeonGenTwoOpWidenFn * const opfn[] = {
158
*/
113
+ NULL,
159
- regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
114
+ gen_VQDMULL_16,
160
- memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
115
+ gen_VQDMULL_32,
161
- /* The system register region goes at the bottom of the priority
116
+ NULL,
162
- * stack as it covers the whole page.
117
+ };
163
- */
118
+ static NeonGenTwo64OpFn * const accfn[] = {
164
+ memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
119
+ NULL,
165
+ memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
120
+ gen_VQDMLSL_acc_16,
166
+ "nvic-default", 0x100000);
121
+ gen_VQDMLSL_acc_32,
167
+ memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
122
+ NULL,
168
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
123
+ };
169
"nvic_sysregs", 0x1000);
124
+
170
- memory_region_add_subregion(&s->container, 0, &s->sysregmem);
125
+ return do_long_3d(s, a, opfn[a->size], accfn[a->size]);
171
+ memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);
126
+}
172
127
diff --git a/target/arm/translate.c b/target/arm/translate.c
173
memory_region_init_io(&s->systickmem, OBJECT(s),
128
index XXXXXXX..XXXXXXX 100644
174
&nvic_systick_ops, s,
129
--- a/target/arm/translate.c
175
"nvic_systick", 0xe0);
130
+++ b/target/arm/translate.c
176
131
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
177
- memory_region_add_subregion_overlap(&s->container, 0x10,
132
{0, 0, 0, 7}, /* VSUBHN: handled by decodetree */
178
+ memory_region_add_subregion_overlap(&s->container, 0xe010,
133
{0, 0, 0, 7}, /* VABDL */
179
&s->systickmem, 1);
134
{0, 0, 0, 7}, /* VMLAL */
180
135
- {0, 0, 0, 9}, /* VQDMLAL */
181
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
136
+ {0, 0, 0, 7}, /* VQDMLAL */
182
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
137
{0, 0, 0, 7}, /* VMLSL */
183
&nvic_sysreg_ns_ops, &s->sysregmem,
138
- {0, 0, 0, 9}, /* VQDMLSL */
184
"nvic_sysregs_ns", 0x1000);
139
+ {0, 0, 0, 7}, /* VQDMLSL */
185
- memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
140
{0, 0, 0, 7}, /* Integer VMULL */
186
+ memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
141
- {0, 0, 0, 9}, /* VQDMULL */
187
memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
142
+ {0, 0, 0, 7}, /* VQDMULL */
188
&nvic_sysreg_ns_ops, &s->systickmem,
143
{0, 0, 0, 0xa}, /* Polynomial VMULL */
189
"nvic_systick_ns", 0xe0);
144
{0, 0, 0, 7}, /* Reserved: always UNDEF */
190
- memory_region_add_subregion_overlap(&s->container, 0x20010,
145
};
191
+ memory_region_add_subregion_overlap(&s->container, 0x2e010,
146
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
192
&s->systick_ns_mem, 1);
147
}
193
}
148
return 0;
194
149
}
150
-
151
- /* Avoid overlapping operands. Wide source operands are
152
- always aligned so will never overlap with wide
153
- destinations in problematic ways. */
154
- if (rd == rm) {
155
- tmp = neon_load_reg(rm, 1);
156
- neon_store_scratch(2, tmp);
157
- } else if (rd == rn) {
158
- tmp = neon_load_reg(rn, 1);
159
- neon_store_scratch(2, tmp);
160
- }
161
- tmp3 = NULL;
162
- for (pass = 0; pass < 2; pass++) {
163
- if (pass == 1 && rd == rn) {
164
- tmp = neon_load_scratch(2);
165
- } else {
166
- tmp = neon_load_reg(rn, pass);
167
- }
168
- if (pass == 1 && rd == rm) {
169
- tmp2 = neon_load_scratch(2);
170
- } else {
171
- tmp2 = neon_load_reg(rm, pass);
172
- }
173
- switch (op) {
174
- case 9: case 11: case 13:
175
- /* VQDMLAL, VQDMLSL, VQDMULL */
176
- gen_neon_mull(cpu_V0, tmp, tmp2, size, u);
177
- break;
178
- default: /* 15 is RESERVED: caught earlier */
179
- abort();
180
- }
181
- if (op == 13) {
182
- /* VQDMULL */
183
- gen_neon_addl_saturate(cpu_V0, cpu_V0, size);
184
- neon_store_reg64(cpu_V0, rd + pass);
185
- } else {
186
- /* Accumulate. */
187
- neon_load_reg64(cpu_V1, rd + pass);
188
- switch (op) {
189
- case 9: case 11: /* VQDMLAL, VQDMLSL */
190
- gen_neon_addl_saturate(cpu_V0, cpu_V0, size);
191
- if (op == 11) {
192
- gen_neon_negl(cpu_V0, size);
193
- }
194
- gen_neon_addl_saturate(cpu_V0, cpu_V1, size);
195
- break;
196
- default:
197
- abort();
198
- }
199
- neon_store_reg64(cpu_V0, rd + pass);
200
- }
201
- }
202
+ abort(); /* all others handled by decodetree */
203
} else {
204
/* Two registers and a scalar. NB that for ops of this form
205
* the ARM ARM labels bit 24 as Q, but it is in our variable
206
--
195
--
207
2.20.1
196
2.20.1
208
197
209
198
New patch

In v8.1M the PXN architecture extension adds a new PXN bit to the
MPU_RLAR registers, which forbids execution of code in the region
from a privileged mode.

This is another feature which is just in the generic "in v8.1M" set
and has no ID register field indicating its presence.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
} else {
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
+ bool pxn = false;
+
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+ pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
+ }

if (m_is_system_region(env, address)) {
/* System space is always execute never */
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
}

*prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
- if (*prot && !xn) {
+ if (*prot && !xn && !(pxn && !is_user)) {
*prot |= PAGE_EXEC;
}
/* We don't need to look the attribute up in the MAIR0/MAIR1
--
2.20.1

From: fangying <fangying1@huawei.com>
1
In arm_cpu_realizefn() we check whether the board code disabled EL3
2
via the has_el3 CPU object property, which we create if the CPU
3
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
4
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
5
the ID_PFR1 and ID_AA64PFR0 registers.
2
6
3
Virtual time adjustment was implemented for virt-5.0 machine type,
7
This codepath was incorrectly being taken for M-profile CPUs, which
4
but the cpu property was enabled only for host-passthrough and max
8
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
5
cpu model. Let's add it for any KVM arm cpu which has the generic
9
the M-profile Security extension and so should have non-zero values
6
timer feature enabled.
10
in the ID_PFR1.Security field.
7
11
8
Signed-off-by: Ying Fang <fangying1@huawei.com>
12
Restrict the handling of the feature flag to A/R-profile cores.
9
Reviewed-by: Andrew Jones <drjones@redhat.com>
13
10
Message-id: 20200608121243.2076-1-fangying1@huawei.com
11
[PMM: minor commit message tweak, removed inaccurate
12
suggested-by tag]
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
14
---
17
---
15
target/arm/cpu.c | 6 ++++--
18
target/arm/cpu.c | 2 +-
16
target/arm/cpu64.c | 1 -
19
1 file changed, 1 insertion(+), 1 deletion(-)
17
target/arm/kvm.c | 21 +++++++++++----------
18
3 files changed, 15 insertions(+), 13 deletions(-)
19
20
20
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
21
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
21
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.c
23
--- a/target/arm/cpu.c
23
+++ b/target/arm/cpu.c
24
+++ b/target/arm/cpu.c
24
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
25
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
25
if (arm_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER)) {
26
}
26
qdev_property_add_static(DEVICE(cpu), &arm_cpu_gt_cntfrq_property);
27
}
27
}
28
+
28
29
+ if (kvm_enabled()) {
29
- if (!cpu->has_el3) {
30
+ kvm_arm_add_vcpu_properties(obj);
30
+ if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
31
+ }
31
/* If the has_el3 CPU property is disabled then we need to disable the
32
}
32
* feature.
33
33
*/
34
static void arm_cpu_finalizefn(Object *obj)
35
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
36
37
if (kvm_enabled()) {
38
kvm_arm_set_cpu_features_from_host(cpu);
39
- kvm_arm_add_vcpu_properties(obj);
40
} else {
41
cortex_a15_initfn(obj);
42
43
@@ -XXX,XX +XXX,XX @@ static void arm_host_initfn(Object *obj)
44
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
45
aarch64_add_sve_properties(obj);
46
}
47
- kvm_arm_add_vcpu_properties(obj);
48
arm_cpu_post_init(obj);
49
}
50
51
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/cpu64.c
54
+++ b/target/arm/cpu64.c
55
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
56
57
if (kvm_enabled()) {
58
kvm_arm_set_cpu_features_from_host(cpu);
59
- kvm_arm_add_vcpu_properties(obj);
60
} else {
61
uint64_t t;
62
uint32_t u;
63
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/kvm.c
66
+++ b/target/arm/kvm.c
67
@@ -XXX,XX +XXX,XX @@ static void kvm_no_adjvtime_set(Object *obj, bool value, Error **errp)
68
/* KVM VCPU properties should be prefixed with "kvm-". */
69
void kvm_arm_add_vcpu_properties(Object *obj)
70
{
71
- if (!kvm_enabled()) {
72
- return;
73
- }
74
+ ARMCPU *cpu = ARM_CPU(obj);
75
+ CPUARMState *env = &cpu->env;
76
77
- ARM_CPU(obj)->kvm_adjvtime = true;
78
- object_property_add_bool(obj, "kvm-no-adjvtime", kvm_no_adjvtime_get,
79
- kvm_no_adjvtime_set);
80
- object_property_set_description(obj, "kvm-no-adjvtime",
81
- "Set on to disable the adjustment of "
82
- "the virtual counter. VM stopped time "
83
- "will be counted.");
84
+ if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
85
+ cpu->kvm_adjvtime = true;
86
+ object_property_add_bool(obj, "kvm-no-adjvtime", kvm_no_adjvtime_get,
87
+ kvm_no_adjvtime_set);
88
+ object_property_set_description(obj, "kvm-no-adjvtime",
89
+ "Set on to disable the adjustment of "
90
+ "the virtual counter. VM stopped time "
91
+ "will be counted.");
92
+ }
93
}
94
95
bool kvm_arm_pmu_supported(CPUState *cpu)
96
--
34
--
97
2.20.1
35
2.20.1
98
36
99
37
diff view generated by jsdifflib
1
Convert the VQRDMLAH and VQRDMLSH insns in the 2-reg-scalar
1
Implement the v8.1M VSCCLRM insn, which zeros floating point
2
group to decodetree.
2
registers if there is an active floating point context.
3
This requires support in write_neon_element32() for the MO_32
4
element size, so add it.
5
6
Because we want to use arm_gen_condlabel(), we need to move
7
the definition of that function up in translate.c so it is
8
before the #include of translate-vfp.c.inc.
3
9
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-5-peter.maydell@linaro.org
6
---
13
---
7
target/arm/neon-dp.decode | 3 ++
14
target/arm/cpu.h | 9 ++++
8
target/arm/translate-neon.inc.c | 74 +++++++++++++++++++++++++++++++++
15
target/arm/m-nocp.decode | 8 +++-
9
target/arm/translate.c | 38 +----------------
16
target/arm/translate.c | 21 +++++----
10
3 files changed, 79 insertions(+), 36 deletions(-)
17
target/arm/translate-vfp.c.inc | 84 ++++++++++++++++++++++++++++++++++
11
18
4 files changed, 111 insertions(+), 11 deletions(-)
12
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
19
13
index XXXXXXX..XXXXXXX 100644
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
--- a/target/arm/neon-dp.decode
21
index XXXXXXX..XXXXXXX 100644
15
+++ b/target/arm/neon-dp.decode
22
--- a/target/arm/cpu.h
16
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
23
+++ b/target/arm/cpu.h
17
24
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
18
VQDMULH_2sc 1111 001 . 1 . .. .... .... 1100 . 1 . 0 .... @2scalar
25
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
19
VQRDMULH_2sc 1111 001 . 1 . .. .... .... 1101 . 1 . 0 .... @2scalar
26
}
20
+
27
21
+ VQRDMLAH_2sc 1111 001 . 1 . .. .... .... 1110 . 1 . 0 .... @2scalar
28
+static inline bool isar_feature_aa32_m_sec_state(const ARMISARegisters *id)
22
+ VQRDMLSH_2sc 1111 001 . 1 . .. .... .... 1111 . 1 . 0 .... @2scalar
23
]
24
}
25
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/translate-neon.inc.c
28
+++ b/target/arm/translate-neon.inc.c
29
@@ -XXX,XX +XXX,XX @@ static bool trans_VQRDMULH_2sc(DisasContext *s, arg_2scalar *a)
30
31
return do_2scalar(s, a, opfn[a->size], NULL);
32
}
33
+
34
+static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
35
+ NeonGenThreeOpEnvFn *opfn)
36
+{
29
+{
37
+ /*
30
+ /*
38
+ * VQRDMLAH/VQRDMLSH: this is like do_2scalar, but the opfn
31
+ * Return true if M-profile state handling insns
39
+ * performs a kind of fused op-then-accumulate using a helper
32
+ * (VSCCLRM, CLRM, FPCTX access insns) are implemented
40
+ * function that takes all of rd, rn and the scalar at once.
41
+ */
33
+ */
42
+ TCGv_i32 scalar;
34
+ return FIELD_EX32(id->id_pfr1, ID_PFR1, SECURITY) >= 3;
43
+ int pass;
35
+}
44
+
36
+
45
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
37
static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
38
{
39
/* Sadly this is encoded differently for A-profile and M-profile */
40
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/m-nocp.decode
43
+++ b/target/arm/m-nocp.decode
44
@@ -XXX,XX +XXX,XX @@
45
# If the coprocessor is not present or disabled then we will generate
46
# the NOCP exception; otherwise we let the insn through to the main decode.
47
48
+%vd_dp 22:1 12:4
49
+%vd_sp 12:4 22:1
50
+
51
&nocp cp
52
53
{
54
# Special cases which do not take an early NOCP: VLLDM and VLSTM
55
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
56
- # TODO: VSCCLRM (new in v8.1M) is similar:
57
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
58
+ # VSCCLRM (new in v8.1M) is similar:
59
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
60
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
61
62
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
63
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
64
diff --git a/target/arm/translate.c b/target/arm/translate.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate.c
67
+++ b/target/arm/translate.c
68
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
69
a64_translate_init();
70
}
71
72
+/* Generate a label used for skipping this instruction */
73
+static void arm_gen_condlabel(DisasContext *s)
74
+{
75
+ if (!s->condjmp) {
76
+ s->condlabel = gen_new_label();
77
+ s->condjmp = 1;
78
+ }
79
+}
80
+
81
/* Flags for the disas_set_da_iss info argument:
82
* lower bits hold the Rt register number, higher bits are flags.
83
*/
84
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
85
long off = neon_element_offset(reg, ele, memop);
86
87
switch (memop) {
88
+ case MO_32:
89
+ tcg_gen_st32_i64(src, cpu_env, off);
90
+ break;
91
case MO_64:
92
tcg_gen_st_i64(src, cpu_env, off);
93
break;
94
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
95
s->base.is_jmp = DISAS_UPDATE_EXIT;
96
}
97
98
-/* Generate a label used for skipping this instruction */
99
-static void arm_gen_condlabel(DisasContext *s)
100
-{
101
- if (!s->condjmp) {
102
- s->condlabel = gen_new_label();
103
- s->condjmp = 1;
104
- }
105
-}
106
-
107
/* Skip this instruction if the ARM condition is false */
108
static void arm_skip_unless(DisasContext *s, uint32_t cond)
109
{
110
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/translate-vfp.c.inc
113
+++ b/target/arm/translate-vfp.c.inc
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
115
return true;
116
}
117
118
+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
119
+{
120
+ int btmreg, topreg;
121
+ TCGv_i64 zero;
122
+ TCGv_i32 aspen, sfpa;
123
+
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
125
+ /* Before v8.1M, fall through in decode to NOCP check */
46
+ return false;
126
+ return false;
47
+ }
127
+ }
48
+
128
+
49
+ if (!dc_isar_feature(aa32_rdm, s)) {
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
50
+ return false;
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
51
+ }
131
+ unallocated_encoding(s);
52
+
132
+ return true;
53
+ /* UNDEF accesses to D16-D31 if they don't exist. */
133
+ }
54
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
134
+
55
+ ((a->vd | a->vn | a->vm) & 0x10)) {
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
56
+ return false;
136
+ /* NOP if we have neither FP nor MVE */
57
+ }
137
+ return true;
58
+
138
+ }
59
+ if (!opfn) {
139
+
60
+ /* Bad size (including size == 3, which is a different insn group) */
140
+ /*
61
+ return false;
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
62
+ }
142
+ * active floating point context so we must NOP (without doing
63
+
143
+ * any lazy state preservation or the NOCP check).
64
+ if (a->q && ((a->vd | a->vn) & 1)) {
144
+ */
65
+ return false;
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
151
+ arm_gen_condlabel(s);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
153
+
154
+ if (s->fp_excp_el != 0) {
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
156
+ syn_uncategorized(), s->fp_excp_el);
157
+ return true;
158
+ }
159
+
160
+ topreg = a->vd + a->imm - 1;
161
+ btmreg = a->vd;
162
+
163
+ /* Convert to Sreg numbers if the insn specified in Dregs */
164
+ if (a->size == 3) {
165
+ topreg = topreg * 2 + 1;
166
+ btmreg *= 2;
167
+ }
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
66
+ }
178
+ }
67
+
179
+
68
+ if (!vfp_access_check(s)) {
180
+ if (!vfp_access_check(s)) {
69
+ return true;
181
+ return true;
70
+ }
182
+ }
71
+
183
+
72
+ scalar = neon_get_scalar(a->size, a->vm);
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
73
+
185
+ zero = tcg_const_i64(0);
74
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
186
+ if (btmreg & 1) {
75
+ TCGv_i32 rn = neon_load_reg(a->vn, pass);
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
76
+ TCGv_i32 rd = neon_load_reg(a->vd, pass);
188
+ btmreg++;
77
+ opfn(rd, cpu_env, rn, scalar, rd);
189
+ }
78
+ tcg_temp_free_i32(rn);
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
79
+ neon_store_reg(a->vd, pass, rd);
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
80
+ }
192
+ }
81
+ tcg_temp_free_i32(scalar);
193
+ if (btmreg == topreg) {
82
+
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
195
+ btmreg++;
196
+ }
197
+ assert(btmreg == topreg + 1);
198
+ /* TODO: when MVE is implemented, zero VPR here */
83
+ return true;
199
+ return true;
84
+}
200
+}
85
+
201
+
86
+static bool trans_VQRDMLAH_2sc(DisasContext *s, arg_2scalar *a)
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
87
+{
203
{
88
+ static NeonGenThreeOpEnvFn *opfn[] = {
204
/*
89
+ NULL,
90
+ gen_helper_neon_qrdmlah_s16,
91
+ gen_helper_neon_qrdmlah_s32,
92
+ NULL,
93
+ };
94
+ return do_vqrdmlah_2sc(s, a, opfn[a->size]);
95
+}
96
+
97
+static bool trans_VQRDMLSH_2sc(DisasContext *s, arg_2scalar *a)
98
+{
99
+ static NeonGenThreeOpEnvFn *opfn[] = {
100
+ NULL,
101
+ gen_helper_neon_qrdmlsh_s16,
102
+ gen_helper_neon_qrdmlsh_s32,
103
+ NULL,
104
+ };
105
+ return do_vqrdmlah_2sc(s, a, opfn[a->size]);
106
+}
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
112
case 9: /* Floating point VMUL scalar */
113
case 12: /* VQDMULH scalar */
114
case 13: /* VQRDMULH scalar */
115
+ case 14: /* VQRDMLAH scalar */
116
+ case 15: /* VQRDMLSH scalar */
117
return 1; /* handled by decodetree */
118
119
case 3: /* VQDMLAL scalar */
120
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
121
neon_store_reg64(cpu_V0, rd + pass);
122
}
123
break;
124
- case 14: /* VQRDMLAH scalar */
125
- case 15: /* VQRDMLSH scalar */
126
- {
127
- NeonGenThreeOpEnvFn *fn;
128
-
129
- if (!dc_isar_feature(aa32_rdm, s)) {
130
- return 1;
131
- }
132
- if (u && ((rd | rn) & 1)) {
133
- return 1;
134
- }
135
- if (op == 14) {
136
- if (size == 1) {
137
- fn = gen_helper_neon_qrdmlah_s16;
138
- } else {
139
- fn = gen_helper_neon_qrdmlah_s32;
140
- }
141
- } else {
142
- if (size == 1) {
143
- fn = gen_helper_neon_qrdmlsh_s16;
144
- } else {
145
- fn = gen_helper_neon_qrdmlsh_s32;
146
- }
147
- }
148
-
149
- tmp2 = neon_get_scalar(size, rm);
150
- for (pass = 0; pass < (u ? 4 : 2); pass++) {
151
- tmp = neon_load_reg(rn, pass);
152
- tmp3 = neon_load_reg(rd, pass);
153
- fn(tmp, cpu_env, tmp, tmp2, tmp3);
154
- tcg_temp_free_i32(tmp3);
155
- neon_store_reg(rd, pass, tmp);
156
- }
157
- tcg_temp_free_i32(tmp2);
158
- }
159
- break;
160
default:
161
g_assert_not_reached();
162
}
163
--
205
--
164
2.20.1
206
2.20.1
165
207
166
208
diff view generated by jsdifflib
1
Convert the Neon VDUP (scalar) insn to decodetree. (Note that we
1
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
2
can't call this just "VDUP" as we used that already in vfp.decode for
2
the general-purpose registers and APSR. Implement this.
3
the "VDUP (general purpose register" insn.)
3
4
The encoding is a subset of the LDMIA T2 encoding, using what would
5
be Rn=0b1111 (which UNDEFs for LDMIA).
4
6
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
7
---
10
---
8
target/arm/neon-dp.decode | 7 +++++++
11
target/arm/t32.decode | 6 +++++-
9
target/arm/translate-neon.inc.c | 26 ++++++++++++++++++++++++++
12
target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++++++
10
target/arm/translate.c | 25 +------------------------
13
2 files changed, 43 insertions(+), 1 deletion(-)
11
3 files changed, 34 insertions(+), 24 deletions(-)
12
14
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
15
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
17
--- a/target/arm/t32.decode
16
+++ b/target/arm/neon-dp.decode
18
+++ b/target/arm/t32.decode
17
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
19
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot
18
20
19
VTBL 1111 001 1 1 . 11 .... .... 10 len:2 . op:1 . 0 .... \
21
STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
20
vm=%vm_dp vn=%vn_dp vd=%vd_dp
22
STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
21
+
23
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
22
+ VDUP_scalar 1111 001 1 1 . 11 index:3 1 .... 11 000 q:1 . 0 .... \
23
+ vm=%vm_dp vd=%vd_dp size=0
24
+ VDUP_scalar 1111 001 1 1 . 11 index:2 10 .... 11 000 q:1 . 0 .... \
25
+ vm=%vm_dp vd=%vd_dp size=1
26
+ VDUP_scalar 1111 001 1 1 . 11 index:1 100 .... 11 000 q:1 . 0 .... \
27
+ vm=%vm_dp vd=%vd_dp size=2
28
]
29
30
# Subgroup for size != 0b11
31
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-neon.inc.c
34
+++ b/target/arm/translate-neon.inc.c
35
@@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
36
tcg_temp_free_i32(tmp);
37
return true;
38
}
39
+
40
+static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
41
+{
24
+{
42
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
25
+ # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
43
+ return false;
26
+ CLRM 1110 1000 1001 1111 list:16
44
+ }
27
+ LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
45
+
46
+ /* UNDEF accesses to D16-D31 if they don't exist. */
47
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
48
+ ((a->vd | a->vm) & 0x10)) {
49
+ return false;
50
+ }
51
+
52
+ if (a->vd & a->q) {
53
+ return false;
54
+ }
55
+
56
+ if (!vfp_access_check(s)) {
57
+ return true;
58
+ }
59
+
60
+ tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0),
61
+ neon_element_offset(a->vm, a->index, a->size),
62
+ a->q ? 16 : 8, a->q ? 16 : 8);
63
+ return true;
64
+}
28
+}
29
LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1
30
31
&rfe !extern rn w pu
65
diff --git a/target/arm/translate.c b/target/arm/translate.c
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
66
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate.c
34
--- a/target/arm/translate.c
68
+++ b/target/arm/translate.c
35
+++ b/target/arm/translate.c
69
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
36
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
70
}
37
return do_ldm(s, a, 1);
71
break;
38
}
72
}
39
73
- } else if ((insn & (1 << 10)) == 0) {
40
+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
74
- /* VTBL, VTBX: handled by decodetree */
41
+{
75
- return 1;
42
+ int i;
76
- } else if ((insn & 0x380) == 0) {
43
+ TCGv_i32 zero;
77
- /* VDUP */
44
+
78
- int element;
45
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
79
- MemOp size;
46
+ return false;
80
-
47
+ }
81
- if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
48
+
82
- return 1;
49
+ if (extract32(a->list, 13, 1)) {
83
- }
50
+ return false;
84
- if (insn & (1 << 16)) {
51
+ }
85
- size = MO_8;
52
+
86
- element = (insn >> 17) & 7;
53
+ if (!a->list) {
87
- } else if (insn & (1 << 17)) {
54
+ /* UNPREDICTABLE; we choose to UNDEF */
88
- size = MO_16;
55
+ return false;
89
- element = (insn >> 18) & 3;
56
+ }
90
- } else {
57
+
91
- size = MO_32;
58
+ zero = tcg_const_i32(0);
92
- element = (insn >> 19) & 1;
59
+ for (i = 0; i < 15; i++) {
93
- }
60
+ if (extract32(a->list, i, 1)) {
94
- tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
61
+ /* Clear R[i] */
95
- neon_element_offset(rm, element, size),
62
+ tcg_gen_mov_i32(cpu_R[i], zero);
96
- q ? 16 : 8, q ? 16 : 8);
63
+ }
97
} else {
64
+ }
98
+ /* VTBL, VTBX, VDUP: handled by decodetree */
65
+ if (extract32(a->list, 15, 1)) {
99
return 1;
66
+ /*
100
}
67
+ * Clear APSR (by calling the MSR helper with the same argument
101
}
68
+ * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
69
+ */
70
+ TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
71
+ gen_helper_v7m_msr(cpu_env, maskreg, zero);
72
+ tcg_temp_free_i32(maskreg);
73
+ }
74
+ tcg_temp_free_i32(zero);
75
+ return true;
76
+}
77
+
78
/*
79
* Branch, branch with link
80
*/
102
--
81
--
103
2.20.1
82
2.20.1
104
83
105
84
New patch

For M-profile CPUs before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c.inc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
* Accesses to R15 are UNPREDICTABLE; we choose to undef.
* (FPSCR -> r15 is a special case which writes to the PSR flags.)
*/
- if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
+ if (a->reg != ARM_VFP_FPSCR) {
+ return false;
+ }
+ if (a->rt == 15 && !a->l) {
return false;
}
}
--
2.20.1

1
Convert the Neon VTBL, VTBX instructions to decodetree. The actual
1
Currently M-profile borrows the A-profile code for VMSR and VMRS
2
implementation of the insn is copied across to the new trans function
2
(access to the FP system registers), because all it needs to support
3
unchanged except for renaming 'tmp5' to 'tmp4'.
3
is the FPSCR. In v8.1M things become significantly more complicated
4
in two ways:
5
6
* there are several new FP system registers; some have side effects
7
on read, and one (FPCXT_NS) needs to avoid the usual
8
vfp_access_check() and the "only if FPU implemented" check
9
10
* all sysregs are now accessible both by VMRS/VMSR (which
11
reads/writes a general purpose register) and also by VLDR/VSTR
12
(which reads/writes them directly to memory)
13
14
Refactor the structure of how we handle VMSR/VMRS to cope with this:
15
16
* keep the M-profile code entirely separate from the A-profile code
17
18
* abstract out the "read or write the general purpose register" part
19
of the code into a loadfn or storefn function pointer, so we can
20
reuse it for VLDR/VSTR.
4
21
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
7
---
25
---
8
target/arm/neon-dp.decode | 3 ++
26
target/arm/cpu.h | 3 +
9
target/arm/translate-neon.inc.c | 56 +++++++++++++++++++++++++++++++++
27
target/arm/translate-vfp.c.inc | 182 ++++++++++++++++++++++++++++++---
10
target/arm/translate.c | 41 +++---------------------
28
2 files changed, 171 insertions(+), 14 deletions(-)
11
3 files changed, 63 insertions(+), 37 deletions(-)
29
12
30
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
14
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
32
--- a/target/arm/cpu.h
16
+++ b/target/arm/neon-dp.decode
33
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
34
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
18
##################################################################
35
#define ARM_VFP_FPINST 9
19
VEXT 1111 001 0 1 . 11 .... .... imm:4 . q:1 . 0 .... \
36
#define ARM_VFP_FPINST2 10
20
vm=%vm_dp vn=%vn_dp vd=%vd_dp
37
21
+
38
+/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
22
+ VTBL 1111 001 1 1 . 11 .... .... 10 len:2 . op:1 . 0 .... \
39
+#define QEMU_VFP_FPSCR_NZCV 0xffff
23
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
40
+
24
]
41
/* iwMMXt coprocessor control registers. */
25
42
#define ARM_IWMMXT_wCID 0
26
# Subgroup for size != 0b11
43
#define ARM_IWMMXT_wCon 1
27
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
44
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
28
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/translate-neon.inc.c
46
--- a/target/arm/translate-vfp.c.inc
30
+++ b/target/arm/translate-neon.inc.c
47
+++ b/target/arm/translate-vfp.c.inc
31
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
48
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
32
}
33
return true;
49
return true;
34
}
50
}
35
+
51
36
+static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
52
+/*
37
+{
53
+ * M-profile provides two different sets of instructions that can
38
+ int n;
54
+ * access floating point system registers: VMSR/VMRS (which move
39
+ TCGv_i32 tmp, tmp2, tmp3, tmp4;
55
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
40
+ TCGv_ptr ptr1;
56
+ * move directly to/from memory). In some cases there are also side
41
+
57
+ * effects which must happen after any write to memory (which could
42
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
58
+ * cause an exception). So we implement the common logic for the
59
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
60
+ * which take pointers to callback functions which will perform the
61
+ * actual "read/write general purpose register" and "read/write
62
+ * memory" operations.
63
+ */
64
+
65
+/*
66
+ * Emit code to store the sysreg to its final destination; frees the
67
+ * TCG temp 'value' it is passed.
68
+ */
69
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
70
+/*
71
+ * Emit code to load the value to be copied to the sysreg; returns
72
+ * a new TCG temporary
73
+ */
74
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
75
+
76
+/* Common decode/access checks for fp sysreg read/write */
77
+typedef enum FPSysRegCheckResult {
78
+ FPSysRegCheckFailed, /* caller should return false */
79
+ FPSysRegCheckDone, /* caller should return true */
80
+ FPSysRegCheckContinue, /* caller should continue generating code */
81
+} FPSysRegCheckResult;
82
+
83
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
84
+{
85
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
86
+ return FPSysRegCheckFailed;
87
+ }
88
+
89
+ switch (regno) {
90
+ case ARM_VFP_FPSCR:
91
+ case QEMU_VFP_FPSCR_NZCV:
92
+ break;
93
+ default:
94
+ return FPSysRegCheckFailed;
95
+ }
96
+
97
+ if (!vfp_access_check(s)) {
98
+ return FPSysRegCheckDone;
99
+ }
100
+
101
+ return FPSysRegCheckContinue;
102
+}
103
+
104
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
105
+
106
+ fp_sysreg_loadfn *loadfn,
107
+ void *opaque)
108
+{
109
+ /* Do a write to an M-profile floating point system register */
110
+ TCGv_i32 tmp;
111
+
112
+ switch (fp_sysreg_checks(s, regno)) {
113
+ case FPSysRegCheckFailed:
43
+ return false;
114
+ return false;
44
+ }
115
+ case FPSysRegCheckDone:
45
+
116
+ return true;
46
+ /* UNDEF accesses to D16-D31 if they don't exist. */
117
+ case FPSysRegCheckContinue:
47
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
118
+ break;
48
+ ((a->vd | a->vn | a->vm) & 0x10)) {
119
+ }
120
+
121
+ switch (regno) {
122
+ case ARM_VFP_FPSCR:
123
+ tmp = loadfn(s, opaque);
124
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
125
+ tcg_temp_free_i32(tmp);
126
+ gen_lookup_tb(s);
127
+ break;
128
+ default:
129
+ g_assert_not_reached();
130
+ }
131
+ return true;
132
+}
133
+
134
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
135
+ fp_sysreg_storefn *storefn,
136
+ void *opaque)
137
+{
138
+ /* Do a read from an M-profile floating point system register */
139
+ TCGv_i32 tmp;
140
+
141
+ switch (fp_sysreg_checks(s, regno)) {
142
+ case FPSysRegCheckFailed:
49
+ return false;
143
+ return false;
50
+ }
144
+ case FPSysRegCheckDone:
51
+
52
+ if (!vfp_access_check(s)) {
53
+ return true;
145
+ return true;
54
+ }
146
+ case FPSysRegCheckContinue:
55
+
147
+ break;
56
+ n = a->len + 1;
148
+ }
57
+ if ((a->vn + n) > 32) {
149
+
150
+ switch (regno) {
151
+ case ARM_VFP_FPSCR:
152
+ tmp = tcg_temp_new_i32();
153
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
154
+ storefn(s, opaque, tmp);
155
+ break;
156
+ case QEMU_VFP_FPSCR_NZCV:
58
+ /*
157
+ /*
59
+ * This is UNPREDICTABLE; we choose to UNDEF to avoid the
158
+ * Read just NZCV; this is a special case to avoid the
60
+ * helper function running off the end of the register file.
159
+ * helper call for the "VMRS to CPSR.NZCV" insn.
61
+ */
160
+ */
161
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
162
+ tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
163
+ storefn(s, opaque, tmp);
164
+ break;
165
+ default:
166
+ g_assert_not_reached();
167
+ }
168
+ return true;
169
+}
170
+
171
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
172
+{
173
+ arg_VMSR_VMRS *a = opaque;
174
+
175
+ if (a->rt == 15) {
176
+ /* Set the 4 flag bits in the CPSR */
177
+ gen_set_nzcv(value);
178
+ tcg_temp_free_i32(value);
179
+ } else {
180
+ store_reg(s, a->rt, value);
181
+ }
182
+}
183
+
184
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
185
+{
186
+ arg_VMSR_VMRS *a = opaque;
187
+
188
+ return load_reg(s, a->rt);
189
+}
190
+
191
+static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
192
+{
193
+ /*
194
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
195
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
196
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
197
+ * we only care about the top 4 bits of FPSCR there.
198
+ */
199
+ if (a->rt == 15) {
200
+ if (a->l && a->reg == ARM_VFP_FPSCR) {
201
+ a->reg = QEMU_VFP_FPSCR_NZCV;
202
+ } else {
203
+ return false;
204
+ }
205
+ }
206
+
207
+ if (a->l) {
208
+ /* VMRS, move FP system register to gp register */
209
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
210
+ } else {
211
+ /* VMSR, move gp register to FP system register */
212
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
213
+ }
214
+}
215
+
216
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
217
{
218
TCGv_i32 tmp;
219
bool ignore_vfp_enabled = false;
220
221
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
222
- return false;
223
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
224
+ return gen_M_VMSR_VMRS(s, a);
225
}
226
227
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
228
- /*
229
- * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
230
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
231
- * (FPSCR -> r15 is a special case which writes to the PSR flags.)
232
- */
233
- if (a->reg != ARM_VFP_FPSCR) {
234
- return false;
235
- }
236
- if (a->rt == 15 && !a->l) {
237
- return false;
238
- }
239
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
62
+ return false;
240
+ return false;
63
+ }
241
}
64
+ n <<= 3;
242
65
+ if (a->op) {
243
switch (a->reg) {
66
+ tmp = neon_load_reg(a->vd, 0);
67
+ } else {
68
+ tmp = tcg_temp_new_i32();
69
+ tcg_gen_movi_i32(tmp, 0);
70
+ }
71
+ tmp2 = neon_load_reg(a->vm, 0);
72
+ ptr1 = vfp_reg_ptr(true, a->vn);
73
+ tmp4 = tcg_const_i32(n);
74
+ gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4);
75
+ tcg_temp_free_i32(tmp);
76
+ if (a->op) {
77
+ tmp = neon_load_reg(a->vd, 1);
78
+ } else {
79
+ tmp = tcg_temp_new_i32();
80
+ tcg_gen_movi_i32(tmp, 0);
81
+ }
82
+ tmp3 = neon_load_reg(a->vm, 1);
83
+ gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4);
84
+ tcg_temp_free_i32(tmp4);
85
+ tcg_temp_free_ptr(ptr1);
86
+ neon_store_reg(a->vd, 0, tmp2);
87
+ neon_store_reg(a->vd, 1, tmp3);
88
+ tcg_temp_free_i32(tmp);
89
+ return true;
90
+}
91
diff --git a/target/arm/translate.c b/target/arm/translate.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/translate.c
94
+++ b/target/arm/translate.c
95
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
96
{
97
int op;
98
int q;
99
- int rd, rn, rm, rd_ofs, rm_ofs;
100
+ int rd, rm, rd_ofs, rm_ofs;
101
int size;
102
int pass;
103
int u;
104
int vec_size;
105
- TCGv_i32 tmp, tmp2, tmp3, tmp5;
106
- TCGv_ptr ptr1;
107
+ TCGv_i32 tmp, tmp2, tmp3;
108
109
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
110
return 1;
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
112
q = (insn & (1 << 6)) != 0;
113
u = (insn >> 24) & 1;
114
VFP_DREG_D(rd, insn);
115
- VFP_DREG_N(rn, insn);
116
VFP_DREG_M(rm, insn);
117
size = (insn >> 20) & 3;
118
vec_size = q ? 16 : 8;
119
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
120
break;
121
}
122
} else if ((insn & (1 << 10)) == 0) {
123
- /* VTBL, VTBX. */
124
- int n = ((insn >> 8) & 3) + 1;
125
- if ((rn + n) > 32) {
126
- /* This is UNPREDICTABLE; we choose to UNDEF to avoid the
127
- * helper function running off the end of the register file.
128
- */
129
- return 1;
130
- }
131
- n <<= 3;
132
- if (insn & (1 << 6)) {
133
- tmp = neon_load_reg(rd, 0);
134
- } else {
135
- tmp = tcg_temp_new_i32();
136
- tcg_gen_movi_i32(tmp, 0);
137
- }
138
- tmp2 = neon_load_reg(rm, 0);
139
- ptr1 = vfp_reg_ptr(true, rn);
140
- tmp5 = tcg_const_i32(n);
141
- gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp5);
142
- tcg_temp_free_i32(tmp);
143
- if (insn & (1 << 6)) {
144
- tmp = neon_load_reg(rd, 1);
145
- } else {
146
- tmp = tcg_temp_new_i32();
147
- tcg_gen_movi_i32(tmp, 0);
148
- }
149
- tmp3 = neon_load_reg(rm, 1);
150
- gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp5);
151
- tcg_temp_free_i32(tmp5);
152
- tcg_temp_free_ptr(ptr1);
153
- neon_store_reg(rd, 0, tmp2);
154
- neon_store_reg(rd, 1, tmp3);
155
- tcg_temp_free_i32(tmp);
156
+ /* VTBL, VTBX: handled by decodetree */
157
+ return 1;
158
} else if ((insn & 0x380) == 0) {
159
/* VDUP */
160
int element;
161
--
244
--
162
2.20.1
245
2.20.1
163
246
164
247
1
Convert the Neon 2-reg-scalar long multiplies to decodetree.
1
The constant-expander functions like negate, plus_2, etc., are
2
These are the last instructions in the group.
2
generally useful; move them up in translate.c so we can use them in
3
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.
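
  For reference, a constant expander is just a C function that decodetree
  runs on an extracted field, referenced from a .decode file with
  !function= (the %imm7_0x4 line is the one added later in this series
  for the VLDR/VSTR sysreg encoding; this is only a sketch of the pattern):

    /* translate.c: scale the 7-bit field to a byte offset (word-aligned) */
    static int times_4(DisasContext *s, int x)
    {
        return x * 4;
    }

    # vfp.decode: extract insn bits [6:0] and run them through times_4
    %imm7_0x4 0:7 !function=times_4
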
3
4
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
6
---
8
---
7
target/arm/neon-dp.decode | 18 ++++
9
target/arm/translate.c | 46 +++++++++++++++++++++++-------------------
8
target/arm/translate-neon.inc.c | 163 ++++++++++++++++++++++++++++
10
1 file changed, 25 insertions(+), 21 deletions(-)
9
target/arm/translate.c | 182 ++------------------------------
10
3 files changed, 187 insertions(+), 176 deletions(-)
11
11
12
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/neon-dp.decode
15
+++ b/target/arm/neon-dp.decode
16
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
17
18
@2scalar .... ... q:1 . . size:2 .... .... .... . . . . .... \
19
&2scalar vm=%vm_dp vn=%vn_dp vd=%vd_dp
20
+ # For the 'long' ops the Q bit is part of insn decode
21
+ @2scalar_q0 .... ... . . . size:2 .... .... .... . . . . .... \
22
+ &2scalar vm=%vm_dp vn=%vn_dp vd=%vd_dp q=0
23
24
VMLA_2sc 1111 001 . 1 . .. .... .... 0000 . 1 . 0 .... @2scalar
25
VMLA_F_2sc 1111 001 . 1 . .. .... .... 0001 . 1 . 0 .... @2scalar
26
27
+ VMLAL_S_2sc 1111 001 0 1 . .. .... .... 0010 . 1 . 0 .... @2scalar_q0
28
+ VMLAL_U_2sc 1111 001 1 1 . .. .... .... 0010 . 1 . 0 .... @2scalar_q0
29
+
30
+ VQDMLAL_2sc 1111 001 0 1 . .. .... .... 0011 . 1 . 0 .... @2scalar_q0
31
+
32
VMLS_2sc 1111 001 . 1 . .. .... .... 0100 . 1 . 0 .... @2scalar
33
VMLS_F_2sc 1111 001 . 1 . .. .... .... 0101 . 1 . 0 .... @2scalar
34
35
+ VMLSL_S_2sc 1111 001 0 1 . .. .... .... 0110 . 1 . 0 .... @2scalar_q0
36
+ VMLSL_U_2sc 1111 001 1 1 . .. .... .... 0110 . 1 . 0 .... @2scalar_q0
37
+
38
+ VQDMLSL_2sc 1111 001 0 1 . .. .... .... 0111 . 1 . 0 .... @2scalar_q0
39
+
40
VMUL_2sc 1111 001 . 1 . .. .... .... 1000 . 1 . 0 .... @2scalar
41
VMUL_F_2sc 1111 001 . 1 . .. .... .... 1001 . 1 . 0 .... @2scalar
42
43
+ VMULL_S_2sc 1111 001 0 1 . .. .... .... 1010 . 1 . 0 .... @2scalar_q0
44
+ VMULL_U_2sc 1111 001 1 1 . .. .... .... 1010 . 1 . 0 .... @2scalar_q0
45
+
46
+ VQDMULL_2sc 1111 001 0 1 . .. .... .... 1011 . 1 . 0 .... @2scalar_q0
47
+
48
VQDMULH_2sc 1111 001 . 1 . .. .... .... 1100 . 1 . 0 .... @2scalar
49
VQRDMULH_2sc 1111 001 . 1 . .. .... .... 1101 . 1 . 0 .... @2scalar
50
51
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/translate-neon.inc.c
54
+++ b/target/arm/translate-neon.inc.c
55
@@ -XXX,XX +XXX,XX @@ static bool trans_VQRDMLSH_2sc(DisasContext *s, arg_2scalar *a)
56
};
57
return do_vqrdmlah_2sc(s, a, opfn[a->size]);
58
}
59
+
60
+static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
61
+ NeonGenTwoOpWidenFn *opfn,
62
+ NeonGenTwo64OpFn *accfn)
63
+{
64
+ /*
65
+ * Two registers and a scalar, long operations: perform an
66
+ * operation on the input elements and the scalar which produces
67
+ * a double-width result, and then possibly perform an accumulation
68
+ * operation of that result into the destination.
69
+ */
70
+ TCGv_i32 scalar, rn;
71
+ TCGv_i64 rn0_64, rn1_64;
72
+
73
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
74
+ return false;
75
+ }
76
+
77
+ /* UNDEF accesses to D16-D31 if they don't exist. */
78
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
79
+ ((a->vd | a->vn | a->vm) & 0x10)) {
80
+ return false;
81
+ }
82
+
83
+ if (!opfn) {
84
+ /* Bad size (including size == 3, which is a different insn group) */
85
+ return false;
86
+ }
87
+
88
+ if (a->vd & 1) {
89
+ return false;
90
+ }
91
+
92
+ if (!vfp_access_check(s)) {
93
+ return true;
94
+ }
95
+
96
+ scalar = neon_get_scalar(a->size, a->vm);
97
+
98
+ /* Load all inputs before writing any outputs, in case of overlap */
99
+ rn = neon_load_reg(a->vn, 0);
100
+ rn0_64 = tcg_temp_new_i64();
101
+ opfn(rn0_64, rn, scalar);
102
+ tcg_temp_free_i32(rn);
103
+
104
+ rn = neon_load_reg(a->vn, 1);
105
+ rn1_64 = tcg_temp_new_i64();
106
+ opfn(rn1_64, rn, scalar);
107
+ tcg_temp_free_i32(rn);
108
+ tcg_temp_free_i32(scalar);
109
+
110
+ if (accfn) {
111
+ TCGv_i64 t64 = tcg_temp_new_i64();
112
+ neon_load_reg64(t64, a->vd);
113
+ accfn(t64, t64, rn0_64);
114
+ neon_store_reg64(t64, a->vd);
115
+ neon_load_reg64(t64, a->vd + 1);
116
+ accfn(t64, t64, rn1_64);
117
+ neon_store_reg64(t64, a->vd + 1);
118
+ tcg_temp_free_i64(t64);
119
+ } else {
120
+ neon_store_reg64(rn0_64, a->vd);
121
+ neon_store_reg64(rn1_64, a->vd + 1);
122
+ }
123
+ tcg_temp_free_i64(rn0_64);
124
+ tcg_temp_free_i64(rn1_64);
125
+ return true;
126
+}
127
+
128
+static bool trans_VMULL_S_2sc(DisasContext *s, arg_2scalar *a)
129
+{
130
+ static NeonGenTwoOpWidenFn * const opfn[] = {
131
+ NULL,
132
+ gen_helper_neon_mull_s16,
133
+ gen_mull_s32,
134
+ NULL,
135
+ };
136
+
137
+ return do_2scalar_long(s, a, opfn[a->size], NULL);
138
+}
139
+
140
+static bool trans_VMULL_U_2sc(DisasContext *s, arg_2scalar *a)
141
+{
142
+ static NeonGenTwoOpWidenFn * const opfn[] = {
143
+ NULL,
144
+ gen_helper_neon_mull_u16,
145
+ gen_mull_u32,
146
+ NULL,
147
+ };
148
+
149
+ return do_2scalar_long(s, a, opfn[a->size], NULL);
150
+}
151
+
152
+#define DO_VMLAL_2SC(INSN, MULL, ACC) \
153
+ static bool trans_##INSN##_2sc(DisasContext *s, arg_2scalar *a) \
154
+ { \
155
+ static NeonGenTwoOpWidenFn * const opfn[] = { \
156
+ NULL, \
157
+ gen_helper_neon_##MULL##16, \
158
+ gen_##MULL##32, \
159
+ NULL, \
160
+ }; \
161
+ static NeonGenTwo64OpFn * const accfn[] = { \
162
+ NULL, \
163
+ gen_helper_neon_##ACC##l_u32, \
164
+ tcg_gen_##ACC##_i64, \
165
+ NULL, \
166
+ }; \
167
+ return do_2scalar_long(s, a, opfn[a->size], accfn[a->size]); \
168
+ }
169
+
170
+DO_VMLAL_2SC(VMLAL_S, mull_s, add)
171
+DO_VMLAL_2SC(VMLAL_U, mull_u, add)
172
+DO_VMLAL_2SC(VMLSL_S, mull_s, sub)
173
+DO_VMLAL_2SC(VMLSL_U, mull_u, sub)
174
+
175
+static bool trans_VQDMULL_2sc(DisasContext *s, arg_2scalar *a)
176
+{
177
+ static NeonGenTwoOpWidenFn * const opfn[] = {
178
+ NULL,
179
+ gen_VQDMULL_16,
180
+ gen_VQDMULL_32,
181
+ NULL,
182
+ };
183
+
184
+ return do_2scalar_long(s, a, opfn[a->size], NULL);
185
+}
186
+
187
+static bool trans_VQDMLAL_2sc(DisasContext *s, arg_2scalar *a)
188
+{
189
+ static NeonGenTwoOpWidenFn * const opfn[] = {
190
+ NULL,
191
+ gen_VQDMULL_16,
192
+ gen_VQDMULL_32,
193
+ NULL,
194
+ };
195
+ static NeonGenTwo64OpFn * const accfn[] = {
196
+ NULL,
197
+ gen_VQDMLAL_acc_16,
198
+ gen_VQDMLAL_acc_32,
199
+ NULL,
200
+ };
201
+
202
+ return do_2scalar_long(s, a, opfn[a->size], accfn[a->size]);
203
+}
204
+
205
+static bool trans_VQDMLSL_2sc(DisasContext *s, arg_2scalar *a)
206
+{
207
+ static NeonGenTwoOpWidenFn * const opfn[] = {
208
+ NULL,
209
+ gen_VQDMULL_16,
210
+ gen_VQDMULL_32,
211
+ NULL,
212
+ };
213
+ static NeonGenTwo64OpFn * const accfn[] = {
214
+ NULL,
215
+ gen_VQDMLSL_acc_16,
216
+ gen_VQDMLSL_acc_32,
217
+ NULL,
218
+ };
219
+
220
+ return do_2scalar_long(s, a, opfn[a->size], accfn[a->size]);
221
+}
222
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
223
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
224
--- a/target/arm/translate.c
14
--- a/target/arm/translate.c
225
+++ b/target/arm/translate.c
15
+++ b/target/arm/translate.c
226
@@ -XXX,XX +XXX,XX @@ static void gen_revsh(TCGv_i32 dest, TCGv_i32 var)
16
@@ -XXX,XX +XXX,XX @@ static void arm_gen_condlabel(DisasContext *s)
227
tcg_gen_ext16s_i32(dest, var);
17
}
228
}
18
}
229
19
230
-/* 32x32->64 multiply. Marks inputs as dead. */
20
+/*
231
-static TCGv_i64 gen_mulu_i64_i32(TCGv_i32 a, TCGv_i32 b)
21
+ * Constant expanders for the decoders.
22
+ */
23
+
24
+static int negate(DisasContext *s, int x)
25
+{
26
+ return -x;
27
+}
28
+
29
+static int plus_2(DisasContext *s, int x)
30
+{
31
+ return x + 2;
32
+}
33
+
34
+static int times_2(DisasContext *s, int x)
35
+{
36
+ return x * 2;
37
+}
38
+
39
+static int times_4(DisasContext *s, int x)
40
+{
41
+ return x * 4;
42
+}
43
+
44
/* Flags for the disas_set_da_iss info argument:
45
* lower bits hold the Rt register number, higher bits are flags.
46
*/
47
@@ -XXX,XX +XXX,XX @@ static void arm_skip_unless(DisasContext *s, uint32_t cond)
48
49
50
/*
51
- * Constant expanders for the decoders.
52
+ * Constant expanders used by T16/T32 decode
53
*/
54
55
-static int negate(DisasContext *s, int x)
232
-{
56
-{
233
- TCGv_i32 lo = tcg_temp_new_i32();
57
- return -x;
234
- TCGv_i32 hi = tcg_temp_new_i32();
235
- TCGv_i64 ret;
236
-
237
- tcg_gen_mulu2_i32(lo, hi, a, b);
238
- tcg_temp_free_i32(a);
239
- tcg_temp_free_i32(b);
240
-
241
- ret = tcg_temp_new_i64();
242
- tcg_gen_concat_i32_i64(ret, lo, hi);
243
- tcg_temp_free_i32(lo);
244
- tcg_temp_free_i32(hi);
245
-
246
- return ret;
247
-}
58
-}
248
-
59
-
249
-static TCGv_i64 gen_muls_i64_i32(TCGv_i32 a, TCGv_i32 b)
60
-static int plus_2(DisasContext *s, int x)
250
-{
61
-{
251
- TCGv_i32 lo = tcg_temp_new_i32();
62
- return x + 2;
252
- TCGv_i32 hi = tcg_temp_new_i32();
253
- TCGv_i64 ret;
254
-
255
- tcg_gen_muls2_i32(lo, hi, a, b);
256
- tcg_temp_free_i32(a);
257
- tcg_temp_free_i32(b);
258
-
259
- ret = tcg_temp_new_i64();
260
- tcg_gen_concat_i32_i64(ret, lo, hi);
261
- tcg_temp_free_i32(lo);
262
- tcg_temp_free_i32(hi);
263
-
264
- return ret;
265
-}
63
-}
266
-
64
-
267
/* Swap low and high halfwords. */
65
-static int times_2(DisasContext *s, int x)
268
static void gen_swap_half(TCGv_i32 var)
269
{
270
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_addl(int size)
271
}
272
}
273
274
-static inline void gen_neon_negl(TCGv_i64 var, int size)
275
-{
66
-{
276
- switch (size) {
67
- return x * 2;
277
- case 0: gen_helper_neon_negl_u16(var, var); break;
278
- case 1: gen_helper_neon_negl_u32(var, var); break;
279
- case 2:
280
- tcg_gen_neg_i64(var, var);
281
- break;
282
- default: abort();
283
- }
284
-}
68
-}
285
-
69
-
286
-static inline void gen_neon_addl_saturate(TCGv_i64 op0, TCGv_i64 op1, int size)
70
-static int times_4(DisasContext *s, int x)
287
-{
71
-{
288
- switch (size) {
72
- return x * 4;
289
- case 1: gen_helper_neon_addl_saturate_s32(op0, cpu_env, op0, op1); break;
290
- case 2: gen_helper_neon_addl_saturate_s64(op0, cpu_env, op0, op1); break;
291
- default: abort();
292
- }
293
-}
73
-}
294
-
74
-
295
-static inline void gen_neon_mull(TCGv_i64 dest, TCGv_i32 a, TCGv_i32 b,
75
/* Return only the rotation part of T32ExpandImm. */
296
- int size, int u)
76
static int t32_expandimm_rot(DisasContext *s, int x)
297
-{
298
- TCGv_i64 tmp;
299
-
300
- switch ((size << 1) | u) {
301
- case 0: gen_helper_neon_mull_s8(dest, a, b); break;
302
- case 1: gen_helper_neon_mull_u8(dest, a, b); break;
303
- case 2: gen_helper_neon_mull_s16(dest, a, b); break;
304
- case 3: gen_helper_neon_mull_u16(dest, a, b); break;
305
- case 4:
306
- tmp = gen_muls_i64_i32(a, b);
307
- tcg_gen_mov_i64(dest, tmp);
308
- tcg_temp_free_i64(tmp);
309
- break;
310
- case 5:
311
- tmp = gen_mulu_i64_i32(a, b);
312
- tcg_gen_mov_i64(dest, tmp);
313
- tcg_temp_free_i64(tmp);
314
- break;
315
- default: abort();
316
- }
317
-
318
- /* gen_helper_neon_mull_[su]{8|16} do not free their parameters.
319
- Don't forget to clean them now. */
320
- if (size < 2) {
321
- tcg_temp_free_i32(a);
322
- tcg_temp_free_i32(b);
323
- }
324
-}
325
-
326
static void gen_neon_narrow_op(int op, int u, int size,
327
TCGv_i32 dest, TCGv_i64 src)
328
{
77
{
329
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
330
int u;
331
int vec_size;
332
uint32_t imm;
333
- TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
334
+ TCGv_i32 tmp, tmp2, tmp3, tmp5;
335
TCGv_ptr ptr1;
336
TCGv_i64 tmp64;
337
338
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
339
return 1;
340
} else { /* (insn & 0x00800010 == 0x00800000) */
341
if (size != 3) {
342
- op = (insn >> 8) & 0xf;
343
- if ((insn & (1 << 6)) == 0) {
344
- /* Three registers of different lengths: handled by decodetree */
345
- return 1;
346
- } else {
347
- /* Two registers and a scalar. NB that for ops of this form
348
- * the ARM ARM labels bit 24 as Q, but it is in our variable
349
- * 'u', not 'q'.
350
- */
351
- if (size == 0) {
352
- return 1;
353
- }
354
- switch (op) {
355
- case 0: /* Integer VMLA scalar */
356
- case 4: /* Integer VMLS scalar */
357
- case 8: /* Integer VMUL scalar */
358
- case 1: /* Float VMLA scalar */
359
- case 5: /* Floating point VMLS scalar */
360
- case 9: /* Floating point VMUL scalar */
361
- case 12: /* VQDMULH scalar */
362
- case 13: /* VQRDMULH scalar */
363
- case 14: /* VQRDMLAH scalar */
364
- case 15: /* VQRDMLSH scalar */
365
- return 1; /* handled by decodetree */
366
-
367
- case 3: /* VQDMLAL scalar */
368
- case 7: /* VQDMLSL scalar */
369
- case 11: /* VQDMULL scalar */
370
- if (u == 1) {
371
- return 1;
372
- }
373
- /* fall through */
374
- case 2: /* VMLAL sclar */
375
- case 6: /* VMLSL scalar */
376
- case 10: /* VMULL scalar */
377
- if (rd & 1) {
378
- return 1;
379
- }
380
- tmp2 = neon_get_scalar(size, rm);
381
- /* We need a copy of tmp2 because gen_neon_mull
382
- * deletes it during pass 0. */
383
- tmp4 = tcg_temp_new_i32();
384
- tcg_gen_mov_i32(tmp4, tmp2);
385
- tmp3 = neon_load_reg(rn, 1);
386
-
387
- for (pass = 0; pass < 2; pass++) {
388
- if (pass == 0) {
389
- tmp = neon_load_reg(rn, 0);
390
- } else {
391
- tmp = tmp3;
392
- tmp2 = tmp4;
393
- }
394
- gen_neon_mull(cpu_V0, tmp, tmp2, size, u);
395
- if (op != 11) {
396
- neon_load_reg64(cpu_V1, rd + pass);
397
- }
398
- switch (op) {
399
- case 6:
400
- gen_neon_negl(cpu_V0, size);
401
- /* Fall through */
402
- case 2:
403
- gen_neon_addl(size);
404
- break;
405
- case 3: case 7:
406
- gen_neon_addl_saturate(cpu_V0, cpu_V0, size);
407
- if (op == 7) {
408
- gen_neon_negl(cpu_V0, size);
409
- }
410
- gen_neon_addl_saturate(cpu_V0, cpu_V1, size);
411
- break;
412
- case 10:
413
- /* no-op */
414
- break;
415
- case 11:
416
- gen_neon_addl_saturate(cpu_V0, cpu_V0, size);
417
- break;
418
- default:
419
- abort();
420
- }
421
- neon_store_reg64(cpu_V0, rd + pass);
422
- }
423
- break;
424
- default:
425
- g_assert_not_reached();
426
- }
427
- }
428
+ /*
429
+ * Three registers of different lengths, or two registers and
430
+ * a scalar: handled by decodetree
431
+ */
432
+ return 1;
433
} else { /* size == 3 */
434
if (!u) {
435
/* Extract. */
436
--
78
--
437
2.20.1
79
2.20.1
438
80
439
81
1
Convert the VMLA, VMLS and VMUL insns in the Neon "2 registers and a
1
Implement the new-in-v8.1M VLDR/VSTR variants which directly
2
scalar" group to decodetree. These are 32x32->32 operations where
2
read or write FP system registers to memory.
3
one of the inputs is the scalar, followed by a possible accumulate
4
operation of the 32-bit result.
5
6
The refactoring removes some of the oddities of the old decoder:
7
* operands to the operation and accumulation were often
8
reversed (taking advantage of the fact that most of these ops
9
are commutative); the new code follows the pseudocode order
10
* the Q bit in the insn was in a local variable 'u'; in the
11
new code it is decoded into a->q
12
3
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
15
---
7
---
16
target/arm/neon-dp.decode | 15 ++++
8
target/arm/vfp.decode | 14 ++++++
17
target/arm/translate-neon.inc.c | 133 ++++++++++++++++++++++++++++++++
9
target/arm/translate-vfp.c.inc | 91 ++++++++++++++++++++++++++++++++++
18
target/arm/translate.c | 77 ++----------------
10
2 files changed, 105 insertions(+)
19
3 files changed, 154 insertions(+), 71 deletions(-)
20
11
21
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
12
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
22
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/neon-dp.decode
14
--- a/target/arm/vfp.decode
24
+++ b/target/arm/neon-dp.decode
15
+++ b/target/arm/vfp.decode
25
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
16
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
26
VQDMULL_3d 1111 001 0 1 . .. .... .... 1101 . 0 . 0 .... @3diff
17
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
27
18
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
28
VMULL_P_3d 1111 001 0 1 . .. .... .... 1110 . 0 . 0 .... @3diff
19
20
+# M-profile VLDR/VSTR to sysreg
21
+%vldr_sysreg 22:1 13:3
22
+%imm7_0x4 0:7 !function=times_4
29
+
23
+
30
+ ##################################################################
24
+&vldr_sysreg rn reg imm a w p
31
+ # 2-regs-plus-scalar grouping:
25
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
32
+ # 1111 001 Q 1 D sz!=11 Vn:4 Vd:4 opc:4 N 1 M 0 Vm:4
26
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
33
+ ##################################################################
34
+ &2scalar vm vn vd size q
35
+
27
+
36
+ @2scalar .... ... q:1 . . size:2 .... .... .... . . . . .... \
28
+# P=0 W=0 is SEE "Related encodings", so split into two patterns
37
+ &2scalar vm=%vm_dp vn=%vn_dp vd=%vd_dp
29
+VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
30
+VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
31
+VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
32
+VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
38
+
33
+
39
+ VMLA_2sc 1111 001 . 1 . .. .... .... 0000 . 1 . 0 .... @2scalar
34
# We split the load/store multiple up into two patterns to avoid
40
+
35
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
41
+ VMLS_2sc 1111 001 . 1 . .. .... .... 0100 . 1 . 0 .... @2scalar
36
# grouping:
42
+
37
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
43
+ VMUL_2sc 1111 001 . 1 . .. .... .... 1000 . 1 . 0 .... @2scalar
44
]
45
}
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
47
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-neon.inc.c
39
--- a/target/arm/translate-vfp.c.inc
49
+++ b/target/arm/translate-neon.inc.c
40
+++ b/target/arm/translate-vfp.c.inc
50
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
41
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
51
16, 16, 0, fn_gvec);
52
return true;
42
return true;
53
}
43
}
44
45
+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
46
+{
47
+ arg_vldr_sysreg *a = opaque;
48
+ uint32_t offset = a->imm;
49
+ TCGv_i32 addr;
54
+
50
+
55
+static void gen_neon_dup_low16(TCGv_i32 var)
51
+ if (!a->a) {
56
+{
52
+ offset = - offset;
57
+ TCGv_i32 tmp = tcg_temp_new_i32();
53
+ }
58
+ tcg_gen_ext16u_i32(var, var);
54
+
59
+ tcg_gen_shli_i32(tmp, var, 16);
55
+ addr = load_reg(s, a->rn);
60
+ tcg_gen_or_i32(var, var, tmp);
56
+ if (a->p) {
61
+ tcg_temp_free_i32(tmp);
57
+ tcg_gen_addi_i32(addr, addr, offset);
58
+ }
59
+
60
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
61
+ gen_helper_v8m_stackcheck(cpu_env, addr);
62
+ }
63
+
64
+ gen_aa32_st_i32(s, value, addr, get_mem_index(s),
65
+ MO_UL | MO_ALIGN | s->be_data);
66
+ tcg_temp_free_i32(value);
67
+
68
+ if (a->w) {
69
+ /* writeback */
70
+ if (!a->p) {
71
+ tcg_gen_addi_i32(addr, addr, offset);
72
+ }
73
+ store_reg(s, a->rn, addr);
74
+ } else {
75
+ tcg_temp_free_i32(addr);
76
+ }
62
+}
77
+}
63
+
78
+
64
+static void gen_neon_dup_high16(TCGv_i32 var)
79
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
65
+{
80
+{
66
+ TCGv_i32 tmp = tcg_temp_new_i32();
81
+ arg_vldr_sysreg *a = opaque;
67
+ tcg_gen_andi_i32(var, var, 0xffff0000);
82
+ uint32_t offset = a->imm;
68
+ tcg_gen_shri_i32(tmp, var, 16);
83
+ TCGv_i32 addr;
69
+ tcg_gen_or_i32(var, var, tmp);
84
+ TCGv_i32 value = tcg_temp_new_i32();
70
+ tcg_temp_free_i32(tmp);
85
+
86
+ if (!a->a) {
87
+ offset = - offset;
88
+ }
89
+
90
+ addr = load_reg(s, a->rn);
91
+ if (a->p) {
92
+ tcg_gen_addi_i32(addr, addr, offset);
93
+ }
94
+
95
+ if (s->v8m_stackcheck && a->rn == 13 && a->w) {
96
+ gen_helper_v8m_stackcheck(cpu_env, addr);
97
+ }
98
+
99
+ gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
100
+ MO_UL | MO_ALIGN | s->be_data);
101
+
102
+ if (a->w) {
103
+ /* writeback */
104
+ if (!a->p) {
105
+ tcg_gen_addi_i32(addr, addr, offset);
106
+ }
107
+ store_reg(s, a->rn, addr);
108
+ } else {
109
+ tcg_temp_free_i32(addr);
110
+ }
111
+ return value;
71
+}
112
+}
72
+
113
+
73
+static inline TCGv_i32 neon_get_scalar(int size, int reg)
114
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
74
+{
115
+{
75
+ TCGv_i32 tmp;
116
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
76
+ if (size == 1) {
117
+ return false;
77
+ tmp = neon_load_reg(reg & 7, reg >> 4);
78
+ if (reg & 8) {
79
+ gen_neon_dup_high16(tmp);
80
+ } else {
81
+ gen_neon_dup_low16(tmp);
82
+ }
83
+ } else {
84
+ tmp = neon_load_reg(reg & 15, reg >> 4);
85
+ }
118
+ }
86
+ return tmp;
119
+ if (a->rn == 15) {
120
+ return false;
121
+ }
122
+ return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
87
+}
123
+}
88
+
124
+
89
+static bool do_2scalar(DisasContext *s, arg_2scalar *a,
125
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
90
+ NeonGenTwoOpFn *opfn, NeonGenTwoOpFn *accfn)
91
+{
126
+{
92
+ /*
127
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
93
+ * Two registers and a scalar: perform an operation between
94
+ * the input elements and the scalar, and then possibly
95
+ * perform an accumulation operation of that result into the
96
+ * destination.
97
+ */
98
+ TCGv_i32 scalar;
99
+ int pass;
100
+
101
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
102
+ return false;
128
+ return false;
103
+ }
129
+ }
104
+
130
+ if (a->rn == 15) {
105
+ /* UNDEF accesses to D16-D31 if they don't exist. */
106
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
107
+ ((a->vd | a->vn | a->vm) & 0x10)) {
108
+ return false;
131
+ return false;
109
+ }
132
+ }
110
+
133
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
111
+ if (!opfn) {
112
+ /* Bad size (including size == 3, which is a different insn group) */
113
+ return false;
114
+ }
115
+
116
+ if (a->q && ((a->vd | a->vn) & 1)) {
117
+ return false;
118
+ }
119
+
120
+ if (!vfp_access_check(s)) {
121
+ return true;
122
+ }
123
+
124
+ scalar = neon_get_scalar(a->size, a->vm);
125
+
126
+ for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
127
+ TCGv_i32 tmp = neon_load_reg(a->vn, pass);
128
+ opfn(tmp, tmp, scalar);
129
+ if (accfn) {
130
+ TCGv_i32 rd = neon_load_reg(a->vd, pass);
131
+ accfn(tmp, rd, tmp);
132
+ tcg_temp_free_i32(rd);
133
+ }
134
+ neon_store_reg(a->vd, pass, tmp);
135
+ }
136
+ tcg_temp_free_i32(scalar);
137
+ return true;
138
+}
134
+}
139
+
135
+
140
+static bool trans_VMUL_2sc(DisasContext *s, arg_2scalar *a)
136
static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
141
+{
142
+ static NeonGenTwoOpFn * const opfn[] = {
143
+ NULL,
144
+ gen_helper_neon_mul_u16,
145
+ tcg_gen_mul_i32,
146
+ NULL,
147
+ };
148
+
149
+ return do_2scalar(s, a, opfn[a->size], NULL);
150
+}
151
+
152
+static bool trans_VMLA_2sc(DisasContext *s, arg_2scalar *a)
153
+{
154
+ static NeonGenTwoOpFn * const opfn[] = {
155
+ NULL,
156
+ gen_helper_neon_mul_u16,
157
+ tcg_gen_mul_i32,
158
+ NULL,
159
+ };
160
+ static NeonGenTwoOpFn * const accfn[] = {
161
+ NULL,
162
+ gen_helper_neon_add_u16,
163
+ tcg_gen_add_i32,
164
+ NULL,
165
+ };
166
+
167
+ return do_2scalar(s, a, opfn[a->size], accfn[a->size]);
168
+}
169
+
170
+static bool trans_VMLS_2sc(DisasContext *s, arg_2scalar *a)
171
+{
172
+ static NeonGenTwoOpFn * const opfn[] = {
173
+ NULL,
174
+ gen_helper_neon_mul_u16,
175
+ tcg_gen_mul_i32,
176
+ NULL,
177
+ };
178
+ static NeonGenTwoOpFn * const accfn[] = {
179
+ NULL,
180
+ gen_helper_neon_sub_u16,
181
+ tcg_gen_sub_i32,
182
+ NULL,
183
+ };
184
+
185
+ return do_2scalar(s, a, opfn[a->size], accfn[a->size]);
186
+}
187
diff --git a/target/arm/translate.c b/target/arm/translate.c
188
index XXXXXXX..XXXXXXX 100644
189
--- a/target/arm/translate.c
190
+++ b/target/arm/translate.c
191
@@ -XXX,XX +XXX,XX @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
192
#define VFP_DREG_N(reg, insn) VFP_DREG(reg, insn, 16, 7)
193
#define VFP_DREG_M(reg, insn) VFP_DREG(reg, insn, 0, 5)
194
195
-static void gen_neon_dup_low16(TCGv_i32 var)
196
-{
197
- TCGv_i32 tmp = tcg_temp_new_i32();
198
- tcg_gen_ext16u_i32(var, var);
199
- tcg_gen_shli_i32(tmp, var, 16);
200
- tcg_gen_or_i32(var, var, tmp);
201
- tcg_temp_free_i32(tmp);
202
-}
203
-
204
-static void gen_neon_dup_high16(TCGv_i32 var)
205
-{
206
- TCGv_i32 tmp = tcg_temp_new_i32();
207
- tcg_gen_andi_i32(var, var, 0xffff0000);
208
- tcg_gen_shri_i32(tmp, var, 16);
209
- tcg_gen_or_i32(var, var, tmp);
210
- tcg_temp_free_i32(tmp);
211
-}
212
-
213
static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
214
{
137
{
215
#ifndef CONFIG_USER_ONLY
138
TCGv_i32 tmp;
216
@@ -XXX,XX +XXX,XX @@ static void gen_exception_return(DisasContext *s, TCGv_i32 pc)
217
218
#define CPU_V001 cpu_V0, cpu_V0, cpu_V1
219
220
-static inline void gen_neon_add(int size, TCGv_i32 t0, TCGv_i32 t1)
221
-{
222
- switch (size) {
223
- case 0: gen_helper_neon_add_u8(t0, t0, t1); break;
224
- case 1: gen_helper_neon_add_u16(t0, t0, t1); break;
225
- case 2: tcg_gen_add_i32(t0, t0, t1); break;
226
- default: abort();
227
- }
228
-}
229
-
230
-static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
231
-{
232
- switch (size) {
233
- case 0: gen_helper_neon_sub_u8(t0, t1, t0); break;
234
- case 1: gen_helper_neon_sub_u16(t0, t1, t0); break;
235
- case 2: tcg_gen_sub_i32(t0, t1, t0); break;
236
- default: return;
237
- }
238
-}
239
-
240
static TCGv_i32 neon_load_scratch(int scratch)
241
{
242
TCGv_i32 tmp = tcg_temp_new_i32();
243
@@ -XXX,XX +XXX,XX @@ static void neon_store_scratch(int scratch, TCGv_i32 var)
244
tcg_temp_free_i32(var);
245
}
246
247
-static inline TCGv_i32 neon_get_scalar(int size, int reg)
248
-{
249
- TCGv_i32 tmp;
250
- if (size == 1) {
251
- tmp = neon_load_reg(reg & 7, reg >> 4);
252
- if (reg & 8) {
253
- gen_neon_dup_high16(tmp);
254
- } else {
255
- gen_neon_dup_low16(tmp);
256
- }
257
- } else {
258
- tmp = neon_load_reg(reg & 15, reg >> 4);
259
- }
260
- return tmp;
261
-}
262
-
263
static int gen_neon_unzip(int rd, int rm, int size, int q)
264
{
265
TCGv_ptr pd, pm;
266
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
267
return 1;
268
}
269
switch (op) {
270
+ case 0: /* Integer VMLA scalar */
271
+ case 4: /* Integer VMLS scalar */
272
+ case 8: /* Integer VMUL scalar */
273
+ return 1; /* handled by decodetree */
274
+
275
case 1: /* Float VMLA scalar */
276
case 5: /* Floating point VMLS scalar */
277
case 9: /* Floating point VMUL scalar */
278
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
279
return 1;
280
}
281
/* fall through */
282
- case 0: /* Integer VMLA scalar */
283
- case 4: /* Integer VMLS scalar */
284
- case 8: /* Integer VMUL scalar */
285
case 12: /* VQDMULH scalar */
286
case 13: /* VQRDMULH scalar */
287
if (u && ((rd | rn) & 1)) {
288
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
289
} else {
290
gen_helper_neon_qrdmulh_s32(tmp, cpu_env, tmp, tmp2);
291
}
292
- } else if (op & 1) {
293
+ } else {
294
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
295
gen_helper_vfp_muls(tmp, tmp, tmp2, fpstatus);
296
tcg_temp_free_ptr(fpstatus);
297
- } else {
298
- switch (size) {
299
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
300
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
301
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
302
- default: abort();
303
- }
304
}
305
tcg_temp_free_i32(tmp2);
306
if (op < 8) {
307
/* Accumulate. */
308
tmp2 = neon_load_reg(rd, pass);
309
switch (op) {
310
- case 0:
311
- gen_neon_add(size, tmp, tmp2);
312
- break;
313
case 1:
314
{
315
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
316
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
317
tcg_temp_free_ptr(fpstatus);
318
break;
319
}
320
- case 4:
321
- gen_neon_rsb(size, tmp, tmp2);
322
- break;
323
case 5:
324
{
325
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
326
--
139
--
327
2.20.1
140
2.20.1
328
141
329
142
New patch
1
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
2
like the existing FPSCR, except that it reads and writes only bits
3
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
4
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
5
permitted.)
1
6
7
Implement the register. Since we don't yet implement MVE, we handle
8
the QC bit as RES0, with todo comments for where we will need to add
9
support later.
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
14
---
15
target/arm/cpu.h | 13 +++++++++++++
16
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
17
2 files changed, 40 insertions(+)
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
26
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
27
+#define FPCR_V (1 << 28) /* FP overflow flag */
28
+#define FPCR_C (1 << 29) /* FP carry flag */
29
+#define FPCR_Z (1 << 30) /* FP zero flag */
30
+#define FPCR_N (1 << 31) /* FP negative flag */
31
+
32
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
33
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
34
35
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
36
{
37
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
38
#define ARM_VFP_FPEXC 8
39
#define ARM_VFP_FPINST 9
40
#define ARM_VFP_FPINST2 10
41
+/* These ones are M-profile only */
42
+#define ARM_VFP_FPSCR_NZCVQC 2
43
+#define ARM_VFP_VPR 12
44
+#define ARM_VFP_P0 13
45
+#define ARM_VFP_FPCXT_NS 14
46
+#define ARM_VFP_FPCXT_S 15
47
48
/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
49
#define QEMU_VFP_FPSCR_NZCV 0xffff
50
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-vfp.c.inc
53
+++ b/target/arm/translate-vfp.c.inc
54
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
55
case ARM_VFP_FPSCR:
56
case QEMU_VFP_FPSCR_NZCV:
57
break;
58
+ case ARM_VFP_FPSCR_NZCVQC:
59
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
60
+ return false;
61
+ }
62
+ break;
63
default:
64
return FPSysRegCheckFailed;
65
}
66
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
67
tcg_temp_free_i32(tmp);
68
gen_lookup_tb(s);
69
break;
70
+ case ARM_VFP_FPSCR_NZCVQC:
71
+ {
72
+ TCGv_i32 fpscr;
73
+ tmp = loadfn(s, opaque);
74
+ /*
75
+ * TODO: when we implement MVE, write the QC bit.
76
+ * For non-MVE, QC is RES0.
77
+ */
78
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
79
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
80
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
81
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
82
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
83
+ tcg_temp_free_i32(tmp);
84
+ break;
85
+ }
86
default:
87
g_assert_not_reached();
88
}
89
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
90
gen_helper_vfp_get_fpscr(tmp, cpu_env);
91
storefn(s, opaque, tmp);
92
break;
93
+ case ARM_VFP_FPSCR_NZCVQC:
94
+ /*
95
+ * TODO: MVE has a QC bit, which we probably won't store
96
+ * in the xregs[] field. For non-MVE, where QC is RES0,
97
+ * we can just fall through to the FPSCR_NZCV case.
98
+ */
99
case QEMU_VFP_FPSCR_NZCV:
100
/*
101
* Read just NZCV; this is a special case to avoid the
102
--
103
2.20.1
104
105
1
Mark the arrays of function pointers in trans_VSHLL_S_2sh() and
1
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
2
trans_VSHLL_U_2sh() as both 'static' and 'const'.
2
in the previous commit; use it in a couple of places in existing code,
3
where we're masking out everything except NZCV for the "load to Rt=15
4
sets CPSR.NZCV" special case.
3
5
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
6
---
9
---
7
target/arm/translate-neon.inc.c | 4 ++--
10
target/arm/translate-vfp.c.inc | 4 ++--
8
1 file changed, 2 insertions(+), 2 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
9
12
10
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
11
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-neon.inc.c
15
--- a/target/arm/translate-vfp.c.inc
13
+++ b/target/arm/translate-neon.inc.c
16
+++ b/target/arm/translate-vfp.c.inc
14
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
17
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
15
18
* helper call for the "VMRS to CPSR.NZCV" insn.
16
static bool trans_VSHLL_S_2sh(DisasContext *s, arg_2reg_shift *a)
19
*/
17
{
20
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
18
- NeonGenWidenFn *widenfn[] = {
21
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
19
+ static NeonGenWidenFn * const widenfn[] = {
22
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
20
gen_helper_neon_widen_s8,
23
storefn(s, opaque, tmp);
21
gen_helper_neon_widen_s16,
24
break;
22
tcg_gen_ext_i32_i64,
25
default:
23
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL_S_2sh(DisasContext *s, arg_2reg_shift *a)
26
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
24
27
case ARM_VFP_FPSCR:
25
static bool trans_VSHLL_U_2sh(DisasContext *s, arg_2reg_shift *a)
28
if (a->rt == 15) {
26
{
29
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
27
- NeonGenWidenFn *widenfn[] = {
30
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
28
+ static NeonGenWidenFn * const widenfn[] = {
31
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
29
gen_helper_neon_widen_u8,
32
} else {
30
gen_helper_neon_widen_u16,
33
tmp = tcg_temp_new_i32();
31
tcg_gen_extu_i32_i64,
34
gen_helper_vfp_get_fpscr(tmp, cpu_env);
32
--
35
--
33
2.20.1
36
2.20.1
34
37
35
38
1
The widenfn() in do_vshll_2sh() does not free the input 32-bit
1
Factor out the code which handles M-profile lazy FP state preservation
2
TCGv, so we need to do this in the calling code.
2
from full_vfp_access_check(); accesses to the FPCXT_NS register are
3
a special case which need to do just this part (corresponding in the
4
pseudocode to the PreserveFPState() function), and not the full
5
set of actions matching the pseudocode ExecuteFPCheck() which
6
normal FP instructions need to do.
3
7
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
7
---
12
---
8
target/arm/translate-neon.inc.c | 2 ++
13
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
9
1 file changed, 2 insertions(+)
14
1 file changed, 27 insertions(+), 18 deletions(-)
10
15
11
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
16
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-neon.inc.c
18
--- a/target/arm/translate-vfp.c.inc
14
+++ b/target/arm/translate-neon.inc.c
19
+++ b/target/arm/translate-vfp.c.inc
15
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
20
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
16
tmp = tcg_temp_new_i64();
21
return offs;
17
22
}
18
widenfn(tmp, rm0);
23
19
+ tcg_temp_free_i32(rm0);
24
+/*
20
if (a->shift != 0) {
25
+ * Generate code for M-profile lazy FP state preservation if needed;
21
tcg_gen_shli_i64(tmp, tmp, a->shift);
26
+ * this corresponds to the pseudocode PreserveFPState() function.
22
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
27
+ */
23
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
28
+static void gen_preserve_fp_state(DisasContext *s)
24
neon_store_reg64(tmp, a->vd);
29
+{
25
30
+ if (s->v7m_lspact) {
26
widenfn(tmp, rm1);
31
+ /*
27
+ tcg_temp_free_i32(rm1);
32
+ * Lazy state saving affects external memory and also the NVIC,
28
if (a->shift != 0) {
33
+ * so we must mark it as an IO operation for icount (and cause
29
tcg_gen_shli_i64(tmp, tmp, a->shift);
34
+ * this to be the last insn in the TB).
30
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
35
+ */
36
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
37
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
38
+ gen_io_start();
39
+ }
40
+ gen_helper_v7m_preserve_fp_state(cpu_env);
41
+ /*
42
+ * If the preserve_fp_state helper doesn't throw an exception
43
+ * then it will clear LSPACT; we don't need to repeat this for
44
+ * any further FP insns in this TB.
45
+ */
46
+ s->v7m_lspact = false;
47
+ }
48
+}
49
+
50
/*
51
* Check that VFP access is enabled. If it is, do the necessary
52
* M-profile lazy-FP handling and then return true.
53
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
54
/* Handle M-profile lazy FP state mechanics */
55
56
/* Trigger lazy-state preservation if necessary */
57
- if (s->v7m_lspact) {
58
- /*
59
- * Lazy state saving affects external memory and also the NVIC,
60
- * so we must mark it as an IO operation for icount (and cause
61
- * this to be the last insn in the TB).
62
- */
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
65
- gen_io_start();
66
- }
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
68
- /*
69
- * If the preserve_fp_state helper doesn't throw an exception
70
- * then it will clear LSPACT; we don't need to repeat this for
71
- * any further FP insns in this TB.
72
- */
73
- s->v7m_lspact = false;
74
- }
75
+ gen_preserve_fp_state(s);
76
77
/* Update ownership of FP context: set FPCCR.S to match current state */
78
if (s->v8m_fpccr_s_wrong) {
31
--
79
--
32
2.20.1
80
2.20.1
33
81
34
82
1
Convert the Neon 3-reg-diff insn polynomial VMULL. This is the last
1
Implement the new-in-v8.1M FPCXT_S floating point system register.
2
insn in this group to be converted.
2
This is for saving and restoring the secure floating point context,
3
and it reads and writes bits [27:0] from the FPSCR and the
4
CONTROL.SFPA bit in bit [31].
3
5
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-14-peter.maydell@linaro.org
6
---
9
---
7
target/arm/neon-dp.decode | 2 ++
10
target/arm/translate-vfp.c.inc | 58 ++++++++++++++++++++++++++++++++++
8
target/arm/translate-neon.inc.c | 43 +++++++++++++++++++++++
11
1 file changed, 58 insertions(+)
9
target/arm/translate.c | 60 ++-------------------------------
10
3 files changed, 48 insertions(+), 57 deletions(-)
11
12
12
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/neon-dp.decode
15
--- a/target/arm/translate-vfp.c.inc
15
+++ b/target/arm/neon-dp.decode
16
+++ b/target/arm/translate-vfp.c.inc
16
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
17
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
17
VMULL_U_3d 1111 001 1 1 . .. .... .... 1100 . 0 . 0 .... @3diff
18
return false;
18
19
}
19
VQDMULL_3d 1111 001 0 1 . .. .... .... 1101 . 0 . 0 .... @3diff
20
break;
20
+
21
+ case ARM_VFP_FPCXT_S:
21
+ VMULL_P_3d 1111 001 0 1 . .. .... .... 1110 . 0 . 0 .... @3diff
22
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
22
]
23
}
24
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/translate-neon.inc.c
27
+++ b/target/arm/translate-neon.inc.c
28
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMLSL_3d(DisasContext *s, arg_3diff *a)
29
30
return do_long_3d(s, a, opfn[a->size], accfn[a->size]);
31
}
32
+
33
+static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
34
+{
35
+ gen_helper_gvec_3 *fn_gvec;
36
+
37
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
38
+ return false;
39
+ }
40
+
41
+ /* UNDEF accesses to D16-D31 if they don't exist. */
42
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
43
+ ((a->vd | a->vn | a->vm) & 0x10)) {
44
+ return false;
45
+ }
46
+
47
+ if (a->vd & 1) {
48
+ return false;
49
+ }
50
+
51
+ switch (a->size) {
52
+ case 0:
53
+ fn_gvec = gen_helper_neon_pmull_h;
54
+ break;
55
+ case 2:
56
+ if (!dc_isar_feature(aa32_pmull, s)) {
57
+ return false;
23
+ return false;
58
+ }
24
+ }
59
+ fn_gvec = gen_helper_gvec_pmull_q;
25
+ if (!s->v8m_secure) {
26
+ return false;
27
+ }
60
+ break;
28
+ break;
61
+ default:
29
default:
62
+ return false;
30
return FPSysRegCheckFailed;
31
}
32
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
33
tcg_temp_free_i32(tmp);
34
break;
35
}
36
+ case ARM_VFP_FPCXT_S:
37
+ {
38
+ TCGv_i32 sfpa, control, fpscr;
39
+ /* Set FPSCR[27:0] and CONTROL.SFPA from value */
40
+ tmp = loadfn(s, opaque);
41
+ sfpa = tcg_temp_new_i32();
42
+ tcg_gen_shri_i32(sfpa, tmp, 31);
43
+ control = load_cpu_field(v7m.control[M_REG_S]);
44
+ tcg_gen_deposit_i32(control, control, sfpa,
45
+ R_V7M_CONTROL_SFPA_SHIFT, 1);
46
+ store_cpu_field(control, v7m.control[M_REG_S]);
47
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
48
+ tcg_gen_andi_i32(fpscr, fpscr, FPCR_NZCV_MASK);
49
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
50
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
51
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
52
+ tcg_temp_free_i32(tmp);
53
+ tcg_temp_free_i32(sfpa);
54
+ break;
63
+ }
55
+ }
64
+
56
default:
65
+ if (!vfp_access_check(s)) {
57
g_assert_not_reached();
66
+ return true;
58
}
59
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
60
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
61
storefn(s, opaque, tmp);
62
break;
63
+ case ARM_VFP_FPCXT_S:
64
+ {
65
+ TCGv_i32 control, sfpa, fpscr;
66
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
67
+ tmp = tcg_temp_new_i32();
68
+ sfpa = tcg_temp_new_i32();
69
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
70
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
71
+ control = load_cpu_field(v7m.control[M_REG_S]);
72
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
73
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
74
+ tcg_gen_or_i32(tmp, tmp, sfpa);
75
+ tcg_temp_free_i32(sfpa);
76
+ /*
77
+ * Store result before updating FPSCR etc, in case
78
+ * it is a memory write which causes an exception.
79
+ */
80
+ storefn(s, opaque, tmp);
81
+ /*
82
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
83
+ * CONTROL.SFPA; so we'll end the TB here.
84
+ */
85
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
86
+ store_cpu_field(control, v7m.control[M_REG_S]);
87
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
88
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
89
+ tcg_temp_free_i32(fpscr);
90
+ gen_lookup_tb(s);
91
+ break;
67
+ }
92
+ }
68
+
93
default:
69
+ tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0),
94
g_assert_not_reached();
70
+ neon_reg_offset(a->vn, 0),
95
}
71
+ neon_reg_offset(a->vm, 0),
72
+ 16, 16, 0, fn_gvec);
73
+ return true;
74
+}
75
diff --git a/target/arm/translate.c b/target/arm/translate.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/translate.c
78
+++ b/target/arm/translate.c
79
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
80
{
81
int op;
82
int q;
83
- int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
84
+ int rd, rn, rm, rd_ofs, rm_ofs;
85
int size;
86
int pass;
87
int u;
88
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
89
size = (insn >> 20) & 3;
90
vec_size = q ? 16 : 8;
91
rd_ofs = neon_reg_offset(rd, 0);
92
- rn_ofs = neon_reg_offset(rn, 0);
93
rm_ofs = neon_reg_offset(rm, 0);
94
95
if ((insn & (1 << 23)) == 0) {
96
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
97
if (size != 3) {
98
op = (insn >> 8) & 0xf;
99
if ((insn & (1 << 6)) == 0) {
100
- /* Three registers of different lengths. */
101
- /* undefreq: bit 0 : UNDEF if size == 0
102
- * bit 1 : UNDEF if size == 1
103
- * bit 2 : UNDEF if size == 2
104
- * bit 3 : UNDEF if U == 1
105
- * Note that [2:0] set implies 'always UNDEF'
106
- */
107
- int undefreq;
108
- /* prewiden, src1_wide, src2_wide, undefreq */
109
- static const int neon_3reg_wide[16][4] = {
110
- {0, 0, 0, 7}, /* VADDL: handled by decodetree */
111
- {0, 0, 0, 7}, /* VADDW: handled by decodetree */
112
- {0, 0, 0, 7}, /* VSUBL: handled by decodetree */
113
- {0, 0, 0, 7}, /* VSUBW: handled by decodetree */
114
- {0, 0, 0, 7}, /* VADDHN: handled by decodetree */
115
- {0, 0, 0, 7}, /* VABAL */
116
- {0, 0, 0, 7}, /* VSUBHN: handled by decodetree */
117
- {0, 0, 0, 7}, /* VABDL */
118
- {0, 0, 0, 7}, /* VMLAL */
119
- {0, 0, 0, 7}, /* VQDMLAL */
120
- {0, 0, 0, 7}, /* VMLSL */
121
- {0, 0, 0, 7}, /* VQDMLSL */
122
- {0, 0, 0, 7}, /* Integer VMULL */
123
- {0, 0, 0, 7}, /* VQDMULL */
124
- {0, 0, 0, 0xa}, /* Polynomial VMULL */
125
- {0, 0, 0, 7}, /* Reserved: always UNDEF */
126
- };
127
-
128
- undefreq = neon_3reg_wide[op][3];
129
-
130
- if ((undefreq & (1 << size)) ||
131
- ((undefreq & 8) && u)) {
132
- return 1;
133
- }
134
- if (rd & 1) {
135
- return 1;
136
- }
137
-
138
- /* Handle polynomial VMULL in a single pass. */
139
- if (op == 14) {
140
- if (size == 0) {
141
- /* VMULL.P8 */
142
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, 16, 16,
143
- 0, gen_helper_neon_pmull_h);
144
- } else {
145
- /* VMULL.P64 */
146
- if (!dc_isar_feature(aa32_pmull, s)) {
147
- return 1;
148
- }
149
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, 16, 16,
150
- 0, gen_helper_gvec_pmull_q);
151
- }
152
- return 0;
153
- }
154
- abort(); /* all others handled by decodetree */
155
+ /* Three registers of different lengths: handled by decodetree */
156
+ return 1;
157
} else {
158
/* Two registers and a scalar. NB that for ops of this form
159
* the ARM ARM labels bit 24 as Q, but it is in our variable
160
--
96
--
161
2.20.1
97
2.20.1
162
98
163
99
diff view generated by jsdifflib
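
For reference (not part of the series): a minimal standalone C sketch of the value assembled by the read sequence above, which combines FPSCR bits [27:0] with CONTROL.SFPA reported in bit [31]. The function name and free-standing form are invented purely for illustration.

#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch only: value = FPSCR bits [27:0], SFPA in bit [31] */
static uint32_t compose_read_value(uint32_t fpscr, bool sfpa)
{
    uint32_t val = fpscr & 0x0fffffffu;   /* keep bits [27:0], drop NZCV */
    if (sfpa) {
        val |= 1u << 31;
    }
    return val;
}
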
New patch
1
The FPDSCR register has a similar layout to the FPSCR. In v8.1M it
2
gains new fields FZ16 (if half-precision floating point is supported)
3
and LTPSIZE (always reads as 4). Update the reset value and the code
4
that handles writes to this register accordingly.
1
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-16-peter.maydell@linaro.org
9
---
10
target/arm/cpu.h | 5 +++++
11
hw/intc/armv7m_nvic.c | 9 ++++++++-
12
target/arm/cpu.c | 3 +++
13
3 files changed, 16 insertions(+), 1 deletion(-)
14
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
20
#define FPCR_IXE (1 << 12) /* Inexact exception trap enable */
21
#define FPCR_IDE (1 << 15) /* Input Denormal exception trap enable */
22
#define FPCR_FZ16 (1 << 19) /* ARMv8.2+, FP16 flush-to-zero */
23
+#define FPCR_RMODE_MASK (3 << 22) /* Rounding mode */
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
26
+#define FPCR_AHP (1 << 26) /* Alternative half-precision */
27
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
28
#define FPCR_V (1 << 28) /* FP overflow flag */
29
#define FPCR_C (1 << 29) /* FP carry flag */
30
#define FPCR_Z (1 << 30) /* FP zero flag */
31
#define FPCR_N (1 << 31) /* FP negative flag */
32
33
+#define FPCR_LTPSIZE_SHIFT 16 /* LTPSIZE, M-profile only */
34
+#define FPCR_LTPSIZE_MASK (7 << FPCR_LTPSIZE_SHIFT)
35
+
36
#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
37
#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
38
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/intc/armv7m_nvic.c
42
+++ b/hw/intc/armv7m_nvic.c
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
44
break;
45
case 0xf3c: /* FPDSCR */
46
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
47
- value &= 0x07c00000;
48
+ uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;
49
+ if (cpu_isar_feature(any_fp16, cpu)) {
50
+ mask |= FPCR_FZ16;
51
+ }
52
+ value &= mask;
53
+ if (cpu_isar_feature(aa32_lob, cpu)) {
54
+ value |= 4 << FPCR_LTPSIZE_SHIFT;
55
+ }
56
cpu->env.v7m.fpdscr[attrs.secure] = value;
57
}
58
break;
59
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/cpu.c
62
+++ b/target/arm/cpu.c
63
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
64
* always reset to 4.
65
*/
66
env->v7m.ltpsize = 4;
67
+ /* The LTPSIZE field in FPDSCR is constant and reads as 4. */
68
+ env->v7m.fpdscr[M_REG_NS] = 4 << FPCR_LTPSIZE_SHIFT;
69
+ env->v7m.fpdscr[M_REG_S] = 4 << FPCR_LTPSIZE_SHIFT;
70
}
71
72
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
73
--
74
2.20.1
75
76
diff view generated by jsdifflib
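
As a rough standalone illustration of the FPDSCR write behaviour described in the patch above (the function and the have_fp16/have_lob flags are invented for this sketch; the FPCR_* values mirror the definitions added to cpu.h):

#include <stdint.h>
#include <stdbool.h>

#define FPCR_LTPSIZE_SHIFT 16
#define FPCR_FZ16       (1u << 19)
#define FPCR_RMODE_MASK (3u << 22)
#define FPCR_FZ         (1u << 24)
#define FPCR_DN         (1u << 25)
#define FPCR_AHP        (1u << 26)

/* Hypothetical model of a v8.1M FPDSCR write: only AHP/DN/FZ/RMODE (plus
 * FZ16 when half-precision FP is present) are writable, and LTPSIZE reads
 * as 4 when the low-overhead-branch extension is present. */
static uint32_t fpdscr_after_write(uint32_t value, bool have_fp16, bool have_lob)
{
    uint32_t mask = FPCR_AHP | FPCR_DN | FPCR_FZ | FPCR_RMODE_MASK;

    if (have_fp16) {
        mask |= FPCR_FZ16;
    }
    value &= mask;
    if (have_lob) {
        value |= 4u << FPCR_LTPSIZE_SHIFT;
    }
    return value;
}
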
1
Convert the Neon 3-reg-diff insns VMULL, VMLAL and VMLSL; these perform
1
In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
2
a 32x32->64 multiply with possible accumulate.
2
are zeroed for an exception taken to Non-secure state; for an
3
exception taken to Secure state they become UNKNOWN, and we chose to
4
leave them at their previous values.
3
5
4
Note that for VMLSL we do the accumulate directly with a subtraction
6
In v8.1M the behaviour is specified more tightly and these registers
5
rather than doing a negate-then-add as the old code did.
7
are always zeroed regardless of the security state that the exception
8
targets (see rule R_KPZV). Implement this.
6
9
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
9
---
13
---
10
target/arm/neon-dp.decode | 9 +++++
14
target/arm/m_helper.c | 16 ++++++++++++----
11
target/arm/translate-neon.inc.c | 71 +++++++++++++++++++++++++++++++++
15
1 file changed, 12 insertions(+), 4 deletions(-)
12
target/arm/translate.c | 21 +++-------
13
3 files changed, 86 insertions(+), 15 deletions(-)
14
16
15
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
17
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/neon-dp.decode
19
--- a/target/arm/m_helper.c
18
+++ b/target/arm/neon-dp.decode
20
+++ b/target/arm/m_helper.c
19
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
21
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
20
22
* Clear registers if necessary to prevent non-secure exception
21
VABDL_S_3d 1111 001 0 1 . .. .... .... 0111 . 0 . 0 .... @3diff
23
* code being able to see register values from secure code.
22
VABDL_U_3d 1111 001 1 1 . .. .... .... 0111 . 0 . 0 .... @3diff
24
* Where register values become architecturally UNKNOWN we leave
23
+
25
- * them with their previous values.
24
+ VMLAL_S_3d 1111 001 0 1 . .. .... .... 1000 . 0 . 0 .... @3diff
26
+ * them with their previous values. v8.1M is tighter than v8.0M
25
+ VMLAL_U_3d 1111 001 1 1 . .. .... .... 1000 . 0 . 0 .... @3diff
27
+ * here and always zeroes the caller-saved registers regardless
26
+
28
+ * of the security state the exception is targeting.
27
+ VMLSL_S_3d 1111 001 0 1 . .. .... .... 1010 . 0 . 0 .... @3diff
29
*/
28
+ VMLSL_U_3d 1111 001 1 1 . .. .... .... 1010 . 0 . 0 .... @3diff
30
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
29
+
31
- if (!targets_secure) {
30
+ VMULL_S_3d 1111 001 0 1 . .. .... .... 1100 . 0 . 0 .... @3diff
32
+ if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
31
+ VMULL_U_3d 1111 001 1 1 . .. .... .... 1100 . 0 . 0 .... @3diff
33
/*
32
]
34
* Always clear the caller-saved registers (they have been
33
}
35
* pushed to the stack earlier in v7m_push_stack()).
34
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
36
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
35
index XXXXXXX..XXXXXXX 100644
37
* v7m_push_callee_stack()).
36
--- a/target/arm/translate-neon.inc.c
38
*/
37
+++ b/target/arm/translate-neon.inc.c
39
int i;
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VABAL_U_3d(DisasContext *s, arg_3diff *a)
40
+ /*
39
41
+ * r4..r11 are callee-saves, zero only if background
40
return do_long_3d(s, a, opfn[a->size], addfn[a->size]);
42
+ * state was Secure (EXCRET.S == 1) and exception
41
}
43
+ * targets Non-secure state
42
+
44
+ */
43
+static void gen_mull_s32(TCGv_i64 rd, TCGv_i32 rn, TCGv_i32 rm)
45
+ bool zero_callee_saves = !targets_secure &&
44
+{
46
+ (lr & R_V7M_EXCRET_S_MASK);
45
+ TCGv_i32 lo = tcg_temp_new_i32();
47
46
+ TCGv_i32 hi = tcg_temp_new_i32();
48
for (i = 0; i < 13; i++) {
47
+
49
- /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
48
+ tcg_gen_muls2_i32(lo, hi, rn, rm);
50
- if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
49
+ tcg_gen_concat_i32_i64(rd, lo, hi);
51
+ if (i < 4 || i > 11 || zero_callee_saves) {
50
+
52
env->regs[i] = 0;
51
+ tcg_temp_free_i32(lo);
52
+ tcg_temp_free_i32(hi);
53
+}
54
+
55
+static void gen_mull_u32(TCGv_i64 rd, TCGv_i32 rn, TCGv_i32 rm)
56
+{
57
+ TCGv_i32 lo = tcg_temp_new_i32();
58
+ TCGv_i32 hi = tcg_temp_new_i32();
59
+
60
+ tcg_gen_mulu2_i32(lo, hi, rn, rm);
61
+ tcg_gen_concat_i32_i64(rd, lo, hi);
62
+
63
+ tcg_temp_free_i32(lo);
64
+ tcg_temp_free_i32(hi);
65
+}
66
+
67
+static bool trans_VMULL_S_3d(DisasContext *s, arg_3diff *a)
68
+{
69
+ static NeonGenTwoOpWidenFn * const opfn[] = {
70
+ gen_helper_neon_mull_s8,
71
+ gen_helper_neon_mull_s16,
72
+ gen_mull_s32,
73
+ NULL,
74
+ };
75
+
76
+ return do_long_3d(s, a, opfn[a->size], NULL);
77
+}
78
+
79
+static bool trans_VMULL_U_3d(DisasContext *s, arg_3diff *a)
80
+{
81
+ static NeonGenTwoOpWidenFn * const opfn[] = {
82
+ gen_helper_neon_mull_u8,
83
+ gen_helper_neon_mull_u16,
84
+ gen_mull_u32,
85
+ NULL,
86
+ };
87
+
88
+ return do_long_3d(s, a, opfn[a->size], NULL);
89
+}
90
+
91
+#define DO_VMLAL(INSN,MULL,ACC) \
92
+ static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
93
+ { \
94
+ static NeonGenTwoOpWidenFn * const opfn[] = { \
95
+ gen_helper_neon_##MULL##8, \
96
+ gen_helper_neon_##MULL##16, \
97
+ gen_##MULL##32, \
98
+ NULL, \
99
+ }; \
100
+ static NeonGenTwo64OpFn * const accfn[] = { \
101
+ gen_helper_neon_##ACC##l_u16, \
102
+ gen_helper_neon_##ACC##l_u32, \
103
+ tcg_gen_##ACC##_i64, \
104
+ NULL, \
105
+ }; \
106
+ return do_long_3d(s, a, opfn[a->size], accfn[a->size]); \
107
+ }
108
+
109
+DO_VMLAL(VMLAL_S,mull_s,add)
110
+DO_VMLAL(VMLAL_U,mull_u,add)
111
+DO_VMLAL(VMLSL_S,mull_s,sub)
112
+DO_VMLAL(VMLSL_U,mull_u,sub)
113
diff --git a/target/arm/translate.c b/target/arm/translate.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/translate.c
116
+++ b/target/arm/translate.c
117
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
118
{0, 0, 0, 7}, /* VABAL */
119
{0, 0, 0, 7}, /* VSUBHN: handled by decodetree */
120
{0, 0, 0, 7}, /* VABDL */
121
- {0, 0, 0, 0}, /* VMLAL */
122
+ {0, 0, 0, 7}, /* VMLAL */
123
{0, 0, 0, 9}, /* VQDMLAL */
124
- {0, 0, 0, 0}, /* VMLSL */
125
+ {0, 0, 0, 7}, /* VMLSL */
126
{0, 0, 0, 9}, /* VQDMLSL */
127
- {0, 0, 0, 0}, /* Integer VMULL */
128
+ {0, 0, 0, 7}, /* Integer VMULL */
129
{0, 0, 0, 9}, /* VQDMULL */
130
{0, 0, 0, 0xa}, /* Polynomial VMULL */
131
{0, 0, 0, 7}, /* Reserved: always UNDEF */
132
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
133
tmp2 = neon_load_reg(rm, pass);
134
}
135
switch (op) {
136
- case 8: case 9: case 10: case 11: case 12: case 13:
137
- /* VMLAL, VQDMLAL, VMLSL, VQDMLSL, VMULL, VQDMULL */
138
+ case 9: case 11: case 13:
139
+ /* VQDMLAL, VQDMLSL, VQDMULL */
140
gen_neon_mull(cpu_V0, tmp, tmp2, size, u);
141
break;
142
default: /* 15 is RESERVED: caught earlier */
143
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
144
/* VQDMULL */
145
gen_neon_addl_saturate(cpu_V0, cpu_V0, size);
146
neon_store_reg64(cpu_V0, rd + pass);
147
- } else if (op == 5 || (op >= 8 && op <= 11)) {
148
+ } else {
149
/* Accumulate. */
150
neon_load_reg64(cpu_V1, rd + pass);
151
switch (op) {
152
- case 10: /* VMLSL */
153
- gen_neon_negl(cpu_V0, size);
154
- /* Fall through */
155
- case 8: /* VABAL, VMLAL */
156
- gen_neon_addl(size);
157
- break;
158
case 9: case 11: /* VQDMLAL, VQDMLSL */
159
gen_neon_addl_saturate(cpu_V0, cpu_V0, size);
160
if (op == 11) {
161
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
162
abort();
163
}
164
neon_store_reg64(cpu_V0, rd + pass);
165
- } else {
166
- /* Write back the result. */
167
- neon_store_reg64(cpu_V0, rd + pass);
168
}
53
}
169
}
54
}
170
} else {
171
--
55
--
172
2.20.1
56
2.20.1
173
57
174
58
diff view generated by jsdifflib
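
A compact standalone model of the v8.1M register-clearing rule (R_KPZV) described above, assuming the Security Extension is implemented; the function and parameter names are invented for the illustration:

#include <stdbool.h>

/* Sketch: should r<i> (i in 0..12) be zeroed on exception entry?
 * v8.0M: only for exceptions taken to Non-secure state.
 * v8.1M: caller-saved registers are always zeroed; r4-r11 only when a
 * Secure background state takes an exception to Non-secure state. */
static bool zero_reg_on_entry(int i, bool v8_1m, bool targets_secure,
                              bool background_secure)
{
    bool any_clearing = !targets_secure || v8_1m;
    bool callee_saved = (i >= 4 && i <= 11);
    bool zero_callee_saves = !targets_secure && background_secure;

    return any_clearing && (!callee_saved || zero_callee_saves);
}
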
New patch
1
In v8.1M, vector table fetch failures don't set HFSR.FORCED (see rule
2
R_LLRP). (In previous versions of the architecture this was either
3
required or IMPDEF.)
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-18-peter.maydell@linaro.org
8
---
9
target/arm/m_helper.c | 6 +++++-
10
1 file changed, 5 insertions(+), 1 deletion(-)
11
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/m_helper.c
15
+++ b/target/arm/m_helper.c
16
@@ -XXX,XX +XXX,XX @@ load_fail:
17
* The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
18
* secure); otherwise it targets the same security state as the
19
* underlying exception.
20
+ * In v8.1M HardFaults from vector table fetch fails don't set FORCED.
21
*/
22
if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
23
exc_secure = true;
24
}
25
- env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
26
+ env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK;
27
+ if (!arm_feature(env, ARM_FEATURE_V8_1M)) {
28
+ env->v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
29
+ }
30
armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
31
return false;
32
}
33
--
34
2.20.1
35
36
diff view generated by jsdifflib
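
A minimal sketch of the guest-visible HFSR difference described in the patch above; the helper is hypothetical and the bit positions are the architected HFSR.VECTTBL and HFSR.FORCED fields:

#include <stdint.h>
#include <stdbool.h>

#define HFSR_VECTTBL (1u << 1)
#define HFSR_FORCED  (1u << 30)

/* Sketch: HFSR bits recorded for a vector table fetch failure */
static uint32_t hfsr_after_vecttbl_fail(uint32_t hfsr, bool v8_1m)
{
    hfsr |= HFSR_VECTTBL;
    if (!v8_1m) {
        hfsr |= HFSR_FORCED;   /* not set in v8.1M (rule R_LLRP) */
    }
    return hfsr;
}
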
New patch
1
In v8.1M a REVIDR register is defined, which is at address 0xe00ecfc
2
and is a read-only IMPDEF register providing implementation-specific
3
minor revision information, like the v8A REVIDR_EL1. Implement this.
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-19-peter.maydell@linaro.org
8
---
9
hw/intc/armv7m_nvic.c | 5 +++++
10
1 file changed, 5 insertions(+)
11
12
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/intc/armv7m_nvic.c
15
+++ b/hw/intc/armv7m_nvic.c
16
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
17
}
18
return val;
19
}
20
+ case 0xcfc:
21
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8_1M)) {
22
+ goto bad_offset;
23
+ }
24
+ return cpu->revidr;
25
case 0xd00: /* CPUID Base. */
26
return cpu->midr;
27
case 0xd04: /* Interrupt Control State (ICSR) */
28
--
29
2.20.1
30
31
diff view generated by jsdifflib
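
For context, this is roughly how guest code might read the new register once it exists; the macro name is made up, and the address is the architected 0xE00ECFC from the commit message:

#include <stdint.h>

/* Hypothetical guest-side accessor for the v8.1M REVIDR */
#define SCS_REVIDR (*(volatile uint32_t *)0xe00ecfcUL)

static uint32_t read_revidr(void)
{
    return SCS_REVIDR;   /* read-only, IMPDEF minor revision information */
}
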
New patch
1
In v8.1M a new exception return check is added which may cause a NOCP
2
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
3
we must check whether access to CP10 from the Security state of the
4
returning exception is disabled; if it is then we must take a fault.
1
5
6
(Note that for our implementation CPPWR is always RAZ/WI and so can
7
never cause CP10 accesses to fail.)
8
9
The other v8.1M change to this register-clearing code is that if MVE
10
is implemented, VPR must also be cleared, so add a TODO comment to
11
that effect.
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
16
---
17
target/arm/m_helper.c | 22 +++++++++++++++++++++-
18
1 file changed, 21 insertions(+), 1 deletion(-)
19
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/m_helper.c
23
+++ b/target/arm/m_helper.c
24
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
25
v7m_exception_taken(cpu, excret, true, false);
26
return;
27
} else {
28
- /* Clear s0..s15 and FPSCR */
29
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
30
+ /* v8.1M adds this NOCP check */
31
+ bool nsacr_pass = exc_secure ||
32
+ extract32(env->v7m.nsacr, 10, 1);
33
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
34
+ if (!nsacr_pass) {
35
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
36
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
37
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
38
+ "stackframe: NSACR prevents clearing FPU registers\n");
39
+ v7m_exception_taken(cpu, excret, true, false);
40
+ } else if (!cpacr_pass) {
41
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
42
+ exc_secure);
43
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
44
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
45
+ "stackframe: CPACR prevents clearing FPU registers\n");
46
+ v7m_exception_taken(cpu, excret, true, false);
47
+ }
48
+ }
49
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
50
int i;
51
52
for (i = 0; i < 16; i += 2) {
53
--
54
2.20.1
55
56
diff view generated by jsdifflib
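
The ordering of the two new v8.1M checks made before clearing the FP registers can be summarised with a small standalone sketch; the enum and function are invented, and "cp10 enabled" stands for the relevant NSACR/CPACR permission:

#include <stdbool.h>

typedef enum {
    FP_CLEAR_OK,          /* go ahead and clear s0..s15 / FPSCR */
    FAULT_NOCP_SECURE,    /* NSACR denies CP10: UsageFault targets Secure */
    FAULT_NOCP_RETSTATE   /* CPACR denies CP10: UsageFault targets the
                             security state of the returning exception */
} fp_clear_outcome;

/* Sketch of the v8.1M NOCP check on exception return */
static fp_clear_outcome check_fp_clear(bool exc_secure, bool nsacr_cp10,
                                       bool cpacr_cp10)
{
    if (!exc_secure && !nsacr_cp10) {
        return FAULT_NOCP_SECURE;
    }
    if (!cpacr_cp10) {
        return FAULT_NOCP_RETSTATE;
    }
    return FP_CLEAR_OK;
}
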
1
Convert the Neon VEXT insn to decodetree. Rather than keeping the
1
v8.1M adds new encodings of VLLDM and VLSTM (where bit 7 is set).
2
old implementation which used fixed temporaries cpu_V0 and cpu_V1
2
The only difference is that:
3
and did the extraction with by-hand shift and logic ops, we use
3
* the old T1 encodings UNDEF if the implementation implements 32
4
the TCG extract2 insn.
4
Dregs (this is currently architecturally impossible for M-profile)
5
* the new T2 encodings have the implementation-defined option to
6
read from memory (discarding the data) or write UNKNOWN values to
7
memory for the stack slots that would be D16-D31
5
8
6
We don't need to special case 0 or 8 immediates any more as the
9
We choose not to make those accesses, so for us the two
7
optimizer is smart enough to throw away the dead code.
10
instructions behave identically assuming they don't UNDEF.
8
11
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20201119215617.29887-21-peter.maydell@linaro.org
11
---
15
---
12
target/arm/neon-dp.decode | 8 +++-
16
target/arm/m-nocp.decode | 2 +-
13
target/arm/translate-neon.inc.c | 76 +++++++++++++++++++++++++++++++++
17
target/arm/translate-vfp.c.inc | 25 +++++++++++++++++++++++++
14
target/arm/translate.c | 58 +------------------------
18
2 files changed, 26 insertions(+), 1 deletion(-)
15
3 files changed, 85 insertions(+), 57 deletions(-)
16
19
17
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
20
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/neon-dp.decode
22
--- a/target/arm/m-nocp.decode
20
+++ b/target/arm/neon-dp.decode
23
+++ b/target/arm/m-nocp.decode
21
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
24
@@ -XXX,XX +XXX,XX @@
22
# return false for size==3.
25
23
######################################################################
24
{
26
{
25
- # 0b11 subgroup will go here
27
# Special cases which do not take an early NOCP: VLLDM and VLSTM
26
+ [
28
- VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
27
+ ##################################################################
29
+ VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
28
+ # Miscellaneous size=0b11 insns
30
# VSCCLRM (new in v8.1M) is similar:
29
+ ##################################################################
31
VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
30
+ VEXT 1111 001 0 1 . 11 .... .... imm:4 . q:1 . 0 .... \
32
VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
31
+ vm=%vm_dp vn=%vn_dp vd=%vd_dp
33
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
32
+ ]
33
34
# Subgroup for size != 0b11
35
[
36
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
37
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-neon.inc.c
35
--- a/target/arm/translate-vfp.c.inc
39
+++ b/target/arm/translate-neon.inc.c
36
+++ b/target/arm/translate-vfp.c.inc
40
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMLSL_2sc(DisasContext *s, arg_2scalar *a)
37
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
41
38
!arm_dc_feature(s, ARM_FEATURE_V8)) {
42
return do_2scalar_long(s, a, opfn[a->size], accfn[a->size]);
39
return false;
43
}
40
}
44
+
41
+
45
+static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
42
+ if (a->op) {
46
+{
43
+ /*
47
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
44
+ * T2 encoding ({D0-D31} reglist): v8.1M and up. We choose not
48
+ return false;
45
+ * to take the IMPDEF option to make memory accesses to the stack
46
+ * slots that correspond to the D16-D31 registers (discarding
47
+ * read data and writing UNKNOWN values), so for us the T2
48
+ * encoding behaves identically to the T1 encoding.
49
+ */
50
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
51
+ return false;
52
+ }
53
+ } else {
54
+ /*
55
+ * T1 encoding ({D0-D15} reglist); undef if we have 32 Dregs.
56
+ * This is currently architecturally impossible, but we add the
57
+ * check to stay in line with the pseudocode. Note that we must
58
+ * emit code for the UNDEF so it takes precedence over the NOCP.
59
+ */
60
+ if (dc_isar_feature(aa32_simd_r32, s)) {
61
+ unallocated_encoding(s);
62
+ return true;
63
+ }
49
+ }
64
+ }
50
+
65
+
51
+ /* UNDEF accesses to D16-D31 if they don't exist. */
66
/*
52
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
67
* If not secure, UNDEF. We must emit code for this
53
+ ((a->vd | a->vn | a->vm) & 0x10)) {
68
* rather than returning false so that this takes
54
+ return false;
55
+ }
56
+
57
+ if ((a->vn | a->vm | a->vd) & a->q) {
58
+ return false;
59
+ }
60
+
61
+ if (a->imm > 7 && !a->q) {
62
+ return false;
63
+ }
64
+
65
+ if (!vfp_access_check(s)) {
66
+ return true;
67
+ }
68
+
69
+ if (!a->q) {
70
+ /* Extract 64 bits from <Vm:Vn> */
71
+ TCGv_i64 left, right, dest;
72
+
73
+ left = tcg_temp_new_i64();
74
+ right = tcg_temp_new_i64();
75
+ dest = tcg_temp_new_i64();
76
+
77
+ neon_load_reg64(right, a->vn);
78
+ neon_load_reg64(left, a->vm);
79
+ tcg_gen_extract2_i64(dest, right, left, a->imm * 8);
80
+ neon_store_reg64(dest, a->vd);
81
+
82
+ tcg_temp_free_i64(left);
83
+ tcg_temp_free_i64(right);
84
+ tcg_temp_free_i64(dest);
85
+ } else {
86
+ /* Extract 128 bits from <Vm+1:Vm:Vn+1:Vn> */
87
+ TCGv_i64 left, middle, right, destleft, destright;
88
+
89
+ left = tcg_temp_new_i64();
90
+ middle = tcg_temp_new_i64();
91
+ right = tcg_temp_new_i64();
92
+ destleft = tcg_temp_new_i64();
93
+ destright = tcg_temp_new_i64();
94
+
95
+ if (a->imm < 8) {
96
+ neon_load_reg64(right, a->vn);
97
+ neon_load_reg64(middle, a->vn + 1);
98
+ tcg_gen_extract2_i64(destright, right, middle, a->imm * 8);
99
+ neon_load_reg64(left, a->vm);
100
+ tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8);
101
+ } else {
102
+ neon_load_reg64(right, a->vn + 1);
103
+ neon_load_reg64(middle, a->vm);
104
+ tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8);
105
+ neon_load_reg64(left, a->vm + 1);
106
+ tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8);
107
+ }
108
+
109
+ neon_store_reg64(destright, a->vd);
110
+ neon_store_reg64(destleft, a->vd + 1);
111
+
112
+ tcg_temp_free_i64(destright);
113
+ tcg_temp_free_i64(destleft);
114
+ tcg_temp_free_i64(right);
115
+ tcg_temp_free_i64(middle);
116
+ tcg_temp_free_i64(left);
117
+ }
118
+ return true;
119
+}
120
diff --git a/target/arm/translate.c b/target/arm/translate.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/target/arm/translate.c
123
+++ b/target/arm/translate.c
124
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
125
int pass;
126
int u;
127
int vec_size;
128
- uint32_t imm;
129
TCGv_i32 tmp, tmp2, tmp3, tmp5;
130
TCGv_ptr ptr1;
131
- TCGv_i64 tmp64;
132
133
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
134
return 1;
135
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
136
return 1;
137
} else { /* size == 3 */
138
if (!u) {
139
- /* Extract. */
140
- imm = (insn >> 8) & 0xf;
141
-
142
- if (imm > 7 && !q)
143
- return 1;
144
-
145
- if (q && ((rd | rn | rm) & 1)) {
146
- return 1;
147
- }
148
-
149
- if (imm == 0) {
150
- neon_load_reg64(cpu_V0, rn);
151
- if (q) {
152
- neon_load_reg64(cpu_V1, rn + 1);
153
- }
154
- } else if (imm == 8) {
155
- neon_load_reg64(cpu_V0, rn + 1);
156
- if (q) {
157
- neon_load_reg64(cpu_V1, rm);
158
- }
159
- } else if (q) {
160
- tmp64 = tcg_temp_new_i64();
161
- if (imm < 8) {
162
- neon_load_reg64(cpu_V0, rn);
163
- neon_load_reg64(tmp64, rn + 1);
164
- } else {
165
- neon_load_reg64(cpu_V0, rn + 1);
166
- neon_load_reg64(tmp64, rm);
167
- }
168
- tcg_gen_shri_i64(cpu_V0, cpu_V0, (imm & 7) * 8);
169
- tcg_gen_shli_i64(cpu_V1, tmp64, 64 - ((imm & 7) * 8));
170
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
171
- if (imm < 8) {
172
- neon_load_reg64(cpu_V1, rm);
173
- } else {
174
- neon_load_reg64(cpu_V1, rm + 1);
175
- imm -= 8;
176
- }
177
- tcg_gen_shli_i64(cpu_V1, cpu_V1, 64 - (imm * 8));
178
- tcg_gen_shri_i64(tmp64, tmp64, imm * 8);
179
- tcg_gen_or_i64(cpu_V1, cpu_V1, tmp64);
180
- tcg_temp_free_i64(tmp64);
181
- } else {
182
- /* BUGFIX */
183
- neon_load_reg64(cpu_V0, rn);
184
- tcg_gen_shri_i64(cpu_V0, cpu_V0, imm * 8);
185
- neon_load_reg64(cpu_V1, rm);
186
- tcg_gen_shli_i64(cpu_V1, cpu_V1, 64 - (imm * 8));
187
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
188
- }
189
- neon_store_reg64(cpu_V0, rd);
190
- if (q) {
191
- neon_store_reg64(cpu_V1, rd + 1);
192
- }
193
+ /* Extract: handled by decodetree */
194
+ return 1;
195
} else if ((insn & (1 << 11)) == 0) {
196
/* Two register misc. */
197
op = ((insn >> 12) & 0x30) | ((insn >> 7) & 0xf);
198
--
69
--
199
2.20.1
70
2.20.1
200
71
201
72
diff view generated by jsdifflib
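
A scalar model of the 64-bit (non-quad) VEXT extraction that the conversion above expresses with tcg_gen_extract2_i64(); the standalone function is for illustration only:

#include <stdint.h>

/* dest = bytes imm..imm+7 of the 16-byte value <Vm:Vn> (Vn is the low half) */
static uint64_t vext_d(uint64_t vn, uint64_t vm, unsigned imm /* 0..7 */)
{
    if (imm == 0) {
        return vn;    /* avoid the undefined shift by 64 below */
    }
    return (vn >> (imm * 8)) | (vm << (64 - imm * 8));
}
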
1
In commit 37bfce81b10450071 we accidentally introduced a leak of a TCG
1
v8.1M introduces a new TRD flag in the CCR register, which enables
2
temporary in do_2shift_env_64(); free it.
2
checking for stack frame integrity signatures on SG instructions.
3
This bit is not banked, and is always RAZ/WI to Non-secure code.
4
Adjust the code for handling CCR reads and writes to handle this.
3
5
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
6
---
9
---
7
target/arm/translate-neon.inc.c | 1 +
10
target/arm/cpu.h | 2 ++
8
1 file changed, 1 insertion(+)
11
hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++--------
12
2 files changed, 20 insertions(+), 8 deletions(-)
9
13
10
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
11
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/translate-neon.inc.c
16
--- a/target/arm/cpu.h
13
+++ b/target/arm/translate-neon.inc.c
17
+++ b/target/arm/cpu.h
14
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
18
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKOFHFNMIGN, 10, 1)
15
neon_load_reg64(tmp, a->vm + pass);
19
FIELD(V7M_CCR, DC, 16, 1)
16
fn(tmp, cpu_env, tmp, constimm);
20
FIELD(V7M_CCR, IC, 17, 1)
17
neon_store_reg64(tmp, a->vd + pass);
21
FIELD(V7M_CCR, BP, 18, 1)
18
+ tcg_temp_free_i64(tmp);
22
+FIELD(V7M_CCR, LOB, 19, 1)
19
}
23
+FIELD(V7M_CCR, TRD, 20, 1)
20
tcg_temp_free_i64(constimm);
24
21
return true;
25
/* V7M SCR bits */
26
FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
27
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/intc/armv7m_nvic.c
30
+++ b/hw/intc/armv7m_nvic.c
31
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
32
}
33
return cpu->env.v7m.scr[attrs.secure];
34
case 0xd14: /* Configuration Control. */
35
- /* The BFHFNMIGN bit is the only non-banked bit; we
36
- * keep it in the non-secure copy of the register.
37
+ /*
38
+ * Non-banked bits: BFHFNMIGN (stored in the NS copy of the register)
39
+ * and TRD (stored in the S copy of the register)
40
*/
41
val = cpu->env.v7m.ccr[attrs.secure];
42
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
44
cpu->env.v7m.scr[attrs.secure] = value;
45
break;
46
case 0xd14: /* Configuration Control. */
47
+ {
48
+ uint32_t mask;
49
+
50
if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
51
goto bad_offset;
52
}
53
54
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
55
- value &= (R_V7M_CCR_STKALIGN_MASK |
56
- R_V7M_CCR_BFHFNMIGN_MASK |
57
- R_V7M_CCR_DIV_0_TRP_MASK |
58
- R_V7M_CCR_UNALIGN_TRP_MASK |
59
- R_V7M_CCR_USERSETMPEND_MASK |
60
- R_V7M_CCR_NONBASETHRDENA_MASK);
61
+ mask = R_V7M_CCR_STKALIGN_MASK |
62
+ R_V7M_CCR_BFHFNMIGN_MASK |
63
+ R_V7M_CCR_DIV_0_TRP_MASK |
64
+ R_V7M_CCR_UNALIGN_TRP_MASK |
65
+ R_V7M_CCR_USERSETMPEND_MASK |
66
+ R_V7M_CCR_NONBASETHRDENA_MASK;
67
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
68
+ /* TRD is always RAZ/WI from NS */
69
+ mask |= R_V7M_CCR_TRD_MASK;
70
+ }
71
+ value &= mask;
72
73
if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
74
/* v8M makes NONBASETHRDENA and STKALIGN be RES1 */
75
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
76
77
cpu->env.v7m.ccr[attrs.secure] = value;
78
break;
79
+ }
80
case 0xd24: /* System Handler Control and State (SHCSR) */
81
if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
82
goto bad_offset;
22
--
83
--
23
2.20.1
84
2.20.1
24
85
25
86
diff view generated by jsdifflib
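
A small sketch of how the non-banked CCR bits behave after the change above; the helper is hypothetical, and the bit values mirror the fields used in the patch:

#include <stdint.h>
#include <stdbool.h>

#define CCR_BFHFNMIGN (1u << 8)
#define CCR_TRD       (1u << 20)

/* Sketch: BFHFNMIGN is kept in the Non-secure copy and reflected into both
 * views; TRD is kept in the Secure copy, is only writable via the Secure
 * view on a v8.1M core, and is RAZ/WI to Non-secure accesses. */
static uint32_t ccr_read(uint32_t ccr_s, uint32_t ccr_ns, bool secure)
{
    uint32_t val = secure ? ccr_s : ccr_ns;
    val |= ccr_ns & CCR_BFHFNMIGN;
    return val;
}
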
1
Convert the Neon 3-reg-diff insns VABAL and VABDL to decodetree.
1
v8.1M introduces a new TRD flag in the CCR register, which enables
2
Like almost all the remaining insns in this group, these are
2
checking for stack frame integrity signatures on SG instructions.
3
a combination of a two-input operation which returns a double-width
3
Add the code in the SG insn implementation for the new behaviour.
4
result and then a possible accumulation of that double-width
5
result into the destination.
6
4
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
9
---
8
---
10
target/arm/translate.h | 1 +
9
target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
11
target/arm/neon-dp.decode | 6 ++
10
1 file changed, 86 insertions(+)
12
target/arm/translate-neon.inc.c | 132 ++++++++++++++++++++++++++++++++
13
target/arm/translate.c | 31 +-------
14
4 files changed, 142 insertions(+), 28 deletions(-)
15
11
16
diff --git a/target/arm/translate.h b/target/arm/translate.h
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate.h
14
--- a/target/arm/m_helper.c
19
+++ b/target/arm/translate.h
15
+++ b/target/arm/m_helper.c
20
@@ -XXX,XX +XXX,XX @@ typedef void NeonGenTwo64OpEnvFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i64);
16
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
21
typedef void NeonGenNarrowFn(TCGv_i32, TCGv_i64);
17
return true;
22
typedef void NeonGenNarrowEnvFn(TCGv_i32, TCGv_ptr, TCGv_i64);
23
typedef void NeonGenWidenFn(TCGv_i64, TCGv_i32);
24
+typedef void NeonGenTwoOpWidenFn(TCGv_i64, TCGv_i32, TCGv_i32);
25
typedef void NeonGenTwoSingleOPFn(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
26
typedef void NeonGenTwoDoubleOPFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGv_ptr);
27
typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
28
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/neon-dp.decode
31
+++ b/target/arm/neon-dp.decode
32
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
33
VADDHN_3d 1111 001 0 1 . .. .... .... 0100 . 0 . 0 .... @3diff
34
VRADDHN_3d 1111 001 1 1 . .. .... .... 0100 . 0 . 0 .... @3diff
35
36
+ VABAL_S_3d 1111 001 0 1 . .. .... .... 0101 . 0 . 0 .... @3diff
37
+ VABAL_U_3d 1111 001 1 1 . .. .... .... 0101 . 0 . 0 .... @3diff
38
+
39
VSUBHN_3d 1111 001 0 1 . .. .... .... 0110 . 0 . 0 .... @3diff
40
VRSUBHN_3d 1111 001 1 1 . .. .... .... 0110 . 0 . 0 .... @3diff
41
+
42
+ VABDL_S_3d 1111 001 0 1 . .. .... .... 0111 . 0 . 0 .... @3diff
43
+ VABDL_U_3d 1111 001 1 1 . .. .... .... 0111 . 0 . 0 .... @3diff
44
]
45
}
18
}
46
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
19
47
index XXXXXXX..XXXXXXX 100644
20
+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
48
--- a/target/arm/translate-neon.inc.c
21
+ uint32_t addr, uint32_t *spdata)
49
+++ b/target/arm/translate-neon.inc.c
50
@@ -XXX,XX +XXX,XX @@ DO_NARROW_3D(VADDHN, add, narrow, tcg_gen_extrh_i64_i32)
51
DO_NARROW_3D(VSUBHN, sub, narrow, tcg_gen_extrh_i64_i32)
52
DO_NARROW_3D(VRADDHN, add, narrow_round, gen_narrow_round_high_u32)
53
DO_NARROW_3D(VRSUBHN, sub, narrow_round, gen_narrow_round_high_u32)
54
+
55
+static bool do_long_3d(DisasContext *s, arg_3diff *a,
56
+ NeonGenTwoOpWidenFn *opfn,
57
+ NeonGenTwo64OpFn *accfn)
58
+{
22
+{
59
+ /*
23
+ /*
60
+ * 3-regs different lengths, long operations.
24
+ * Read a word of data from the stack for the SG instruction,
61
+ * These perform an operation on two inputs that returns a double-width
25
+ * writing the value into *spdata. If the load succeeds, return
62
+ * result, and then possibly perform an accumulation operation of
26
+ * true; otherwise pend an appropriate exception and return false.
63
+ * that result into the double-width destination.
27
+ * (We can't use data load helpers here that throw an exception
28
+ * because of the context we're called in, which is halfway through
29
+ * arm_v7m_cpu_do_interrupt().)
64
+ */
30
+ */
65
+ TCGv_i64 rd0, rd1, tmp;
31
+ CPUState *cs = CPU(cpu);
66
+ TCGv_i32 rn, rm;
32
+ CPUARMState *env = &cpu->env;
33
+ MemTxAttrs attrs = {};
34
+ MemTxResult txres;
35
+ target_ulong page_size;
36
+ hwaddr physaddr;
37
+ int prot;
38
+ ARMMMUFaultInfo fi = {};
39
+ ARMCacheAttrs cacheattrs = {};
40
+ uint32_t value;
67
+
41
+
68
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
42
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
43
+ &attrs, &prot, &page_size, &fi, &cacheattrs)) {
44
+ /* MPU/SAU lookup failed */
45
+ if (fi.type == ARMFault_QEMU_SFault) {
46
+ qemu_log_mask(CPU_LOG_INT,
47
+ "...SecureFault during stack word read\n");
48
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
49
+ env->v7m.sfar = addr;
50
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
51
+ } else {
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "...MemManageFault during stack word read\n");
54
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
55
+ R_V7M_CFSR_MMARVALID_MASK;
56
+ env->v7m.mmfar[M_REG_S] = addr;
57
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
58
+ }
59
+ return false;
60
+ }
61
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
62
+ attrs, &txres);
63
+ if (txres != MEMTX_OK) {
64
+ /* BusFault trying to read the data */
65
+ qemu_log_mask(CPU_LOG_INT,
66
+ "...BusFault during stack word read\n");
67
+ env->v7m.cfsr[M_REG_NS] |=
68
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
69
+ env->v7m.bfar = addr;
70
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
69
+ return false;
71
+ return false;
70
+ }
72
+ }
71
+
73
+
72
+ /* UNDEF accesses to D16-D31 if they don't exist. */
74
+ *spdata = value;
73
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
74
+ ((a->vd | a->vn | a->vm) & 0x10)) {
75
+ return false;
76
+ }
77
+
78
+ if (!opfn) {
79
+ /* size == 3 case, which is an entirely different insn group */
80
+ return false;
81
+ }
82
+
83
+ if (a->vd & 1) {
84
+ return false;
85
+ }
86
+
87
+ if (!vfp_access_check(s)) {
88
+ return true;
89
+ }
90
+
91
+ rd0 = tcg_temp_new_i64();
92
+ rd1 = tcg_temp_new_i64();
93
+
94
+ rn = neon_load_reg(a->vn, 0);
95
+ rm = neon_load_reg(a->vm, 0);
96
+ opfn(rd0, rn, rm);
97
+ tcg_temp_free_i32(rn);
98
+ tcg_temp_free_i32(rm);
99
+
100
+ rn = neon_load_reg(a->vn, 1);
101
+ rm = neon_load_reg(a->vm, 1);
102
+ opfn(rd1, rn, rm);
103
+ tcg_temp_free_i32(rn);
104
+ tcg_temp_free_i32(rm);
105
+
106
+ /* Don't store results until after all loads: they might overlap */
107
+ if (accfn) {
108
+ tmp = tcg_temp_new_i64();
109
+ neon_load_reg64(tmp, a->vd);
110
+ accfn(tmp, tmp, rd0);
111
+ neon_store_reg64(tmp, a->vd);
112
+ neon_load_reg64(tmp, a->vd + 1);
113
+ accfn(tmp, tmp, rd1);
114
+ neon_store_reg64(tmp, a->vd + 1);
115
+ tcg_temp_free_i64(tmp);
116
+ } else {
117
+ neon_store_reg64(rd0, a->vd);
118
+ neon_store_reg64(rd1, a->vd + 1);
119
+ }
120
+
121
+ tcg_temp_free_i64(rd0);
122
+ tcg_temp_free_i64(rd1);
123
+
124
+ return true;
75
+ return true;
125
+}
76
+}
126
+
77
+
127
+static bool trans_VABDL_S_3d(DisasContext *s, arg_3diff *a)
78
static bool v7m_handle_execute_nsc(ARMCPU *cpu)
128
+{
79
{
129
+ static NeonGenTwoOpWidenFn * const opfn[] = {
80
/*
130
+ gen_helper_neon_abdl_s16,
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
131
+ gen_helper_neon_abdl_s32,
82
*/
132
+ gen_helper_neon_abdl_s64,
83
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
133
+ NULL,
84
", executing it\n", env->regs[15]);
134
+ };
135
+
85
+
136
+ return do_long_3d(s, a, opfn[a->size], NULL);
86
+ if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
137
+}
87
+ !arm_v7m_is_handler_mode(env)) {
88
+ /*
89
+ * v8.1M exception stack frame integrity check. Note that we
90
+ * must perform the memory access even if CCR_S.TRD is zero
91
+ * and we aren't going to check what the data loaded is.
92
+ */
93
+ uint32_t spdata, sp;
138
+
94
+
139
+static bool trans_VABDL_U_3d(DisasContext *s, arg_3diff *a)
95
+ /*
140
+{
96
+ * We know we are currently NS, so the S stack pointers must be
141
+ static NeonGenTwoOpWidenFn * const opfn[] = {
97
+ * in other_ss_{psp,msp}, not in regs[13]/other_sp.
142
+ gen_helper_neon_abdl_u16,
98
+ */
143
+ gen_helper_neon_abdl_u32,
99
+ sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
144
+ gen_helper_neon_abdl_u64,
100
+ if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
145
+ NULL,
101
+ /* Stack access failed and an exception has been pended */
146
+ };
102
+ return false;
103
+ }
147
+
104
+
148
+ return do_long_3d(s, a, opfn[a->size], NULL);
105
+ if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
149
+}
106
+ if (((spdata & ~1) == 0xfefa125a) ||
107
+ !(env->v7m.control[M_REG_S] & 1)) {
108
+ goto gen_invep;
109
+ }
110
+ }
111
+ }
150
+
112
+
151
+static bool trans_VABAL_S_3d(DisasContext *s, arg_3diff *a)
113
env->regs[14] &= ~1;
152
+{
114
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
153
+ static NeonGenTwoOpWidenFn * const opfn[] = {
115
switch_v7m_security_state(env, true);
154
+ gen_helper_neon_abdl_s16,
155
+ gen_helper_neon_abdl_s32,
156
+ gen_helper_neon_abdl_s64,
157
+ NULL,
158
+ };
159
+ static NeonGenTwo64OpFn * const addfn[] = {
160
+ gen_helper_neon_addl_u16,
161
+ gen_helper_neon_addl_u32,
162
+ tcg_gen_add_i64,
163
+ NULL,
164
+ };
165
+
166
+ return do_long_3d(s, a, opfn[a->size], addfn[a->size]);
167
+}
168
+
169
+static bool trans_VABAL_U_3d(DisasContext *s, arg_3diff *a)
170
+{
171
+ static NeonGenTwoOpWidenFn * const opfn[] = {
172
+ gen_helper_neon_abdl_u16,
173
+ gen_helper_neon_abdl_u32,
174
+ gen_helper_neon_abdl_u64,
175
+ NULL,
176
+ };
177
+ static NeonGenTwo64OpFn * const addfn[] = {
178
+ gen_helper_neon_addl_u16,
179
+ gen_helper_neon_addl_u32,
180
+ tcg_gen_add_i64,
181
+ NULL,
182
+ };
183
+
184
+ return do_long_3d(s, a, opfn[a->size], addfn[a->size]);
185
+}
186
diff --git a/target/arm/translate.c b/target/arm/translate.c
187
index XXXXXXX..XXXXXXX 100644
188
--- a/target/arm/translate.c
189
+++ b/target/arm/translate.c
190
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
191
{0, 0, 0, 7}, /* VSUBL: handled by decodetree */
192
{0, 0, 0, 7}, /* VSUBW: handled by decodetree */
193
{0, 0, 0, 7}, /* VADDHN: handled by decodetree */
194
- {0, 0, 0, 0}, /* VABAL */
195
+ {0, 0, 0, 7}, /* VABAL */
196
{0, 0, 0, 7}, /* VSUBHN: handled by decodetree */
197
- {0, 0, 0, 0}, /* VABDL */
198
+ {0, 0, 0, 7}, /* VABDL */
199
{0, 0, 0, 0}, /* VMLAL */
200
{0, 0, 0, 9}, /* VQDMLAL */
201
{0, 0, 0, 0}, /* VMLSL */
202
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
203
tmp2 = neon_load_reg(rm, pass);
204
}
205
switch (op) {
206
- case 5: case 7: /* VABAL, VABDL */
207
- switch ((size << 1) | u) {
208
- case 0:
209
- gen_helper_neon_abdl_s16(cpu_V0, tmp, tmp2);
210
- break;
211
- case 1:
212
- gen_helper_neon_abdl_u16(cpu_V0, tmp, tmp2);
213
- break;
214
- case 2:
215
- gen_helper_neon_abdl_s32(cpu_V0, tmp, tmp2);
216
- break;
217
- case 3:
218
- gen_helper_neon_abdl_u32(cpu_V0, tmp, tmp2);
219
- break;
220
- case 4:
221
- gen_helper_neon_abdl_s64(cpu_V0, tmp, tmp2);
222
- break;
223
- case 5:
224
- gen_helper_neon_abdl_u64(cpu_V0, tmp, tmp2);
225
- break;
226
- default: abort();
227
- }
228
- tcg_temp_free_i32(tmp2);
229
- tcg_temp_free_i32(tmp);
230
- break;
231
case 8: case 9: case 10: case 11: case 12: case 13:
232
/* VMLAL, VQDMLAL, VMLSL, VQDMLSL, VMULL, VQDMULL */
233
gen_neon_mull(cpu_V0, tmp, tmp2, size, u);
234
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
235
case 10: /* VMLSL */
236
gen_neon_negl(cpu_V0, size);
237
/* Fall through */
238
- case 5: case 8: /* VABAL, VMLAL */
239
+ case 8: /* VABAL, VMLAL */
240
gen_neon_addl(size);
241
break;
242
case 9: case 11: /* VQDMLAL, VQDMLSL */
243
--
116
--
244
2.20.1
117
2.20.1
245
118
246
119
diff view generated by jsdifflib
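
A minimal sketch of the signature test added for SG when CCR_S.TRD is set; the value 0xFEFA125A is the stack frame integrity signature used in the diff above, and the additional CONTROL_S condition from the patch is omitted here:

#include <stdint.h>
#include <stdbool.h>

/* Sketch: does the word read from the Secure stack pointer look like a
 * stack frame integrity signature? Bit 0 is ignored in the comparison. */
static bool looks_like_integrity_sig(uint32_t spdata)
{
    return (spdata & ~1u) == 0xfefa125au;
}
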
1
Convert the "pre-widening" insns VADDL, VSUBL, VADDW and VSUBW
1
In commit 077d7449100d824a4 we added code to handle the v8M
2
in the Neon 3-registers-different-lengths group to decodetree.
2
requirement that returns from NMI or HardFault forcibly deactivate
3
These insns work by widening one or both inputs to double their
3
those exceptions regardless of what interrupt the guest is trying to
4
size, performing an add or subtract at the doubled size and
4
deactivate. Unfortunately this broke the handling of the "illegal
5
then storing the double-size result.
5
exception return because the returning exception number is not
6
active" check for those cases. In the pseudocode this test is done
7
on the exception the guest asks to return from, but because our
8
implementation was doing this in armv7m_nvic_complete_irq() after the
9
new "deactivate NMI/HardFault regardless" code we ended up doing the
10
test on the VecInfo for that exception instead, which usually meant
11
failing to raise the illegal exception return fault.
6
12
7
As usual, rather than copying the loop of the original decoder
13
In the case for "configurable exception targeting the opposite
8
(which needs awkward code to avoid problems when source and
14
security state" we detected the illegal-return case but went ahead
9
destination registers overlap) we just unroll the two passes.
15
and deactivated the VecInfo anyway, which is wrong because that is
16
the VecInfo for the other security state.
17
18
Rearrange the code so that we first identify the illegal return
19
cases, then see if we really need to deactivate NMI or HardFault
20
instead, and finally do the deactivation.
10
21
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-25-peter.maydell@linaro.org
13
---
25
---
14
target/arm/neon-dp.decode | 43 +++++++++++++
26
hw/intc/armv7m_nvic.c | 59 +++++++++++++++++++++++--------------------
15
target/arm/translate-neon.inc.c | 104 ++++++++++++++++++++++++++++++++
27
1 file changed, 32 insertions(+), 27 deletions(-)
16
target/arm/translate.c | 16 ++---
17
3 files changed, 151 insertions(+), 12 deletions(-)
18
28
19
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
20
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/neon-dp.decode
31
--- a/hw/intc/armv7m_nvic.c
22
+++ b/target/arm/neon-dp.decode
32
+++ b/hw/intc/armv7m_nvic.c
23
@@ -XXX,XX +XXX,XX @@ VCVT_FU_2sh 1111 001 1 1 . ...... .... 1111 0 . . 1 .... @2reg_vcvt
33
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
24
# So we have a single decode line and check the cmode/op in the
34
{
25
# trans function.
35
NVICState *s = (NVICState *)opaque;
26
Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
36
VecInfo *vec = NULL;
37
- int ret;
38
+ int ret = 0;
39
40
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
41
42
+ trace_nvic_complete_irq(irq, secure);
27
+
43
+
28
+######################################################################
44
+ if (secure && exc_is_banked(irq)) {
29
+# Within the "two registers, or three registers of different lengths"
45
+ vec = &s->sec_vectors[irq];
30
+# grouping ([23,4]=0b10), bits [21:20] are either part of the opcode
46
+ } else {
31
+# decode: 0b11 for VEXT, two-reg-misc, VTBL, and duplicate-scalar;
47
+ vec = &s->vectors[irq];
32
+# or they are a size field for the three-reg-different-lengths and
33
+# two-reg-and-scalar insn groups (where size cannot be 0b11). This
34
+# is slightly awkward for decodetree: we handle it with this
35
+# non-exclusive group which contains within it two exclusive groups:
36
+# one for the size=0b11 patterns, and one for the size-not-0b11
37
+# patterns. This allows us to check that none of the insns within
38
+# each subgroup accidentally overlap each other. Note that all the
39
+# trans functions for the size-not-0b11 patterns must check and
40
+# return false for size==3.
41
+######################################################################
42
+{
43
+ # 0b11 subgroup will go here
44
+
45
+ # Subgroup for size != 0b11
46
+ [
47
+ ##################################################################
48
+ # 3-reg-different-length grouping:
49
+ # 1111 001 U 1 D sz!=11 Vn:4 Vd:4 opc:4 N 0 M 0 Vm:4
50
+ ##################################################################
51
+
52
+ &3diff vm vn vd size
53
+
54
+ @3diff .... ... . . . size:2 .... .... .... . . . . .... \
55
+ &3diff vm=%vm_dp vn=%vn_dp vd=%vd_dp
56
+
57
+ VADDL_S_3d 1111 001 0 1 . .. .... .... 0000 . 0 . 0 .... @3diff
58
+ VADDL_U_3d 1111 001 1 1 . .. .... .... 0000 . 0 . 0 .... @3diff
59
+
60
+ VADDW_S_3d 1111 001 0 1 . .. .... .... 0001 . 0 . 0 .... @3diff
61
+ VADDW_U_3d 1111 001 1 1 . .. .... .... 0001 . 0 . 0 .... @3diff
62
+
63
+ VSUBL_S_3d 1111 001 0 1 . .. .... .... 0010 . 0 . 0 .... @3diff
64
+ VSUBL_U_3d 1111 001 1 1 . .. .... .... 0010 . 0 . 0 .... @3diff
65
+
66
+ VSUBW_S_3d 1111 001 0 1 . .. .... .... 0011 . 0 . 0 .... @3diff
67
+ VSUBW_U_3d 1111 001 1 1 . .. .... .... 0011 . 0 . 0 .... @3diff
68
+ ]
69
+}
70
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/translate-neon.inc.c
73
+++ b/target/arm/translate-neon.inc.c
74
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
75
}
76
return do_1reg_imm(s, a, fn);
77
}
78
+
79
+static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
80
+ NeonGenWidenFn *widenfn,
81
+ NeonGenTwo64OpFn *opfn,
82
+ bool src1_wide)
83
+{
84
+ /* 3-regs different lengths, prewidening case (VADDL/VSUBL/VADDW/VSUBW) */
85
+ TCGv_i64 rn0_64, rn1_64, rm_64;
86
+ TCGv_i32 rm;
87
+
88
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
89
+ return false;
90
+ }
48
+ }
91
+
49
+
92
+ /* UNDEF accesses to D16-D31 if they don't exist. */
50
+ /*
93
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
51
+ * Identify illegal exception return cases. We can't immediately
94
+ ((a->vd | a->vn | a->vm) & 0x10)) {
52
+ * return at this point because we still need to deactivate
95
+ return false;
53
+ * (either this exception or NMI/HardFault) first.
54
+ */
55
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
56
+ /*
57
+ * Return from a configurable exception targeting the opposite
58
+ * security state from the one we're trying to complete it for.
59
+ * Clear vec because it's not really the VecInfo for this
60
+ * (irq, secstate) so we mustn't deactivate it.
61
+ */
62
+ ret = -1;
63
+ vec = NULL;
64
+ } else if (!vec->active) {
65
+ /* Return from an inactive interrupt */
66
+ ret = -1;
67
+ } else {
68
+ /* Legal return, we will return the RETTOBASE bit value to the caller */
69
+ ret = nvic_rettobase(s);
96
+ }
70
+ }
97
+
71
+
98
+ if (!widenfn || !opfn) {
72
/*
99
+ /* size == 3 case, which is an entirely different insn group */
73
* For negative priorities, v8M will forcibly deactivate the appropriate
100
+ return false;
74
* NMI or HardFault regardless of what interrupt we're being asked to
101
+ }
75
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
102
+
76
}
103
+ if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
77
104
+ return false;
78
if (!vec) {
105
+ }
79
- if (secure && exc_is_banked(irq)) {
106
+
80
- vec = &s->sec_vectors[irq];
107
+ if (!vfp_access_check(s)) {
81
- } else {
108
+ return true;
82
- vec = &s->vectors[irq];
109
+ }
83
- }
110
+
84
- }
111
+ rn0_64 = tcg_temp_new_i64();
85
-
112
+ rn1_64 = tcg_temp_new_i64();
86
- trace_nvic_complete_irq(irq, secure);
113
+ rm_64 = tcg_temp_new_i64();
87
-
114
+
88
- if (!vec->active) {
115
+ if (src1_wide) {
89
- /* Tell the caller this was an illegal exception return */
116
+ neon_load_reg64(rn0_64, a->vn);
90
- return -1;
117
+ } else {
91
- }
118
+ TCGv_i32 tmp = neon_load_reg(a->vn, 0);
92
-
119
+ widenfn(rn0_64, tmp);
93
- /*
120
+ tcg_temp_free_i32(tmp);
94
- * If this is a configurable exception and it is currently
121
+ }
95
- * targeting the opposite security state from the one we're trying
122
+ rm = neon_load_reg(a->vm, 0);
96
- * to complete it for, this counts as an illegal exception return.
123
+
97
- * We still need to deactivate whatever vector the logic above has
124
+ widenfn(rm_64, rm);
98
- * selected, though, as it might not be the same as the one for the
125
+ tcg_temp_free_i32(rm);
99
- * requested exception number.
126
+ opfn(rn0_64, rn0_64, rm_64);
100
- */
127
+
101
- if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
128
+ /*
102
- ret = -1;
129
+ * Load second pass inputs before storing the first pass result, to
103
- } else {
130
+ * avoid incorrect results if a narrow input overlaps with the result.
104
- ret = nvic_rettobase(s);
131
+ */
105
+ return ret;
132
+ if (src1_wide) {
106
}
133
+ neon_load_reg64(rn1_64, a->vn + 1);
107
134
+ } else {
108
vec->active = 0;
135
+ TCGv_i32 tmp = neon_load_reg(a->vn, 1);
136
+ widenfn(rn1_64, tmp);
137
+ tcg_temp_free_i32(tmp);
138
+ }
139
+ rm = neon_load_reg(a->vm, 1);
140
+
141
+ neon_store_reg64(rn0_64, a->vd);
142
+
143
+ widenfn(rm_64, rm);
144
+ tcg_temp_free_i32(rm);
145
+ opfn(rn1_64, rn1_64, rm_64);
146
+ neon_store_reg64(rn1_64, a->vd + 1);
147
+
148
+ tcg_temp_free_i64(rn0_64);
149
+ tcg_temp_free_i64(rn1_64);
150
+ tcg_temp_free_i64(rm_64);
151
+
152
+ return true;
153
+}
154
+
155
+#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
156
+ static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
157
+ { \
158
+ static NeonGenWidenFn * const widenfn[] = { \
159
+ gen_helper_neon_widen_##S##8, \
160
+ gen_helper_neon_widen_##S##16, \
161
+ tcg_gen_##EXT##_i32_i64, \
162
+ NULL, \
163
+ }; \
164
+ static NeonGenTwo64OpFn * const addfn[] = { \
165
+ gen_helper_neon_##OP##l_u16, \
166
+ gen_helper_neon_##OP##l_u32, \
167
+ tcg_gen_##OP##_i64, \
168
+ NULL, \
169
+ }; \
170
+ return do_prewiden_3d(s, a, widenfn[a->size], \
171
+ addfn[a->size], SRC1WIDE); \
172
+ }
173
+
174
+DO_PREWIDEN(VADDL_S, s, ext, add, false)
175
+DO_PREWIDEN(VADDL_U, u, extu, add, false)
176
+DO_PREWIDEN(VSUBL_S, s, ext, sub, false)
177
+DO_PREWIDEN(VSUBL_U, u, extu, sub, false)
178
+DO_PREWIDEN(VADDW_S, s, ext, add, true)
179
+DO_PREWIDEN(VADDW_U, u, extu, add, true)
180
+DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
181
+DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
182
diff --git a/target/arm/translate.c b/target/arm/translate.c
183
index XXXXXXX..XXXXXXX 100644
184
--- a/target/arm/translate.c
185
+++ b/target/arm/translate.c
186
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
187
/* Three registers of different lengths. */
188
int src1_wide;
189
int src2_wide;
190
- int prewiden;
191
/* undefreq: bit 0 : UNDEF if size == 0
192
* bit 1 : UNDEF if size == 1
193
* bit 2 : UNDEF if size == 2
194
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
195
int undefreq;
196
/* prewiden, src1_wide, src2_wide, undefreq */
197
static const int neon_3reg_wide[16][4] = {
198
- {1, 0, 0, 0}, /* VADDL */
199
- {1, 1, 0, 0}, /* VADDW */
200
- {1, 0, 0, 0}, /* VSUBL */
201
- {1, 1, 0, 0}, /* VSUBW */
202
+ {0, 0, 0, 7}, /* VADDL: handled by decodetree */
203
+ {0, 0, 0, 7}, /* VADDW: handled by decodetree */
204
+ {0, 0, 0, 7}, /* VSUBL: handled by decodetree */
205
+ {0, 0, 0, 7}, /* VSUBW: handled by decodetree */
206
{0, 1, 1, 0}, /* VADDHN */
207
{0, 0, 0, 0}, /* VABAL */
208
{0, 1, 1, 0}, /* VSUBHN */
209
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
210
{0, 0, 0, 7}, /* Reserved: always UNDEF */
211
};
212
213
- prewiden = neon_3reg_wide[op][0];
214
src1_wide = neon_3reg_wide[op][1];
215
src2_wide = neon_3reg_wide[op][2];
216
undefreq = neon_3reg_wide[op][3];
217
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
218
} else {
219
tmp = neon_load_reg(rn, pass);
220
}
221
- if (prewiden) {
222
- gen_neon_widen(cpu_V0, tmp, size, u);
223
- }
224
}
225
if (src2_wide) {
226
neon_load_reg64(cpu_V1, rm + pass);
227
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
228
} else {
229
tmp2 = neon_load_reg(rm, pass);
230
}
231
- if (prewiden) {
232
- gen_neon_widen(cpu_V1, tmp2, size, u);
233
- }
234
}
235
switch (op) {
236
case 0: case 1: case 4: /* VADDL, VADDW, VADDHN, VRADDHN */
237
--
109
--
238
2.20.1
110
2.20.1
239
111
240
112
diff view generated by jsdifflib
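
Scalar per-lane models of the prewidening operations converted above (VADDL/VSUBL widen both inputs, VADDW/VSUBW widen only the second operand); these helpers are illustrative only, showing one lane with signed 16-bit narrow inputs:

#include <stdint.h>

static int32_t vaddl_s16_lane(int16_t n, int16_t m)
{
    return (int32_t)n + (int32_t)m;   /* both operands widened */
}

static int32_t vaddw_s16_lane(int32_t n_wide, int16_t m)
{
    return n_wide + (int32_t)m;       /* only the second operand widened */
}
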
1
Convert the VQDMULH and VQRDMULH insns in the 2-reg-scalar group
1
For v8.1M the architecture mandates that CPUs must provide at
2
to decodetree.
2
least the "minimal RAS implementation" from the Reliability,
3
Availability and Serviceability extension. This consists of:
4
* an ESB instruction which is a NOP
5
-- since it is in the HINT space we need only add a comment
6
* an RFSR register which will RAZ/WI
7
* a RAZ/WI AIRCR.IESB bit
8
-- the code which handles writes to AIRCR does not allow setting
9
of RES0 bits, so we already treat this as RAZ/WI; add a comment
10
noting that this is deliberate
11
* minimal implementation of the RAS register block at 0xe0005000
12
-- this will be in a subsequent commit
13
* setting the ID_PFR0.RAS field to 0b0010
14
-- we will do this when we add the Cortex-M55 CPU model
3
15
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
6
---
19
---
7
target/arm/neon-dp.decode | 3 +++
20
target/arm/cpu.h | 14 ++++++++++++++
8
target/arm/translate-neon.inc.c | 29 +++++++++++++++++++++++
21
target/arm/t32.decode | 4 ++++
9
target/arm/translate.c | 42 ++-------------------------------
22
hw/intc/armv7m_nvic.c | 13 +++++++++++++
10
3 files changed, 34 insertions(+), 40 deletions(-)
23
3 files changed, 31 insertions(+)
11
24
12
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/neon-dp.decode
27
--- a/target/arm/cpu.h
15
+++ b/target/arm/neon-dp.decode
28
+++ b/target/arm/cpu.h
16
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
29
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
17
30
FIELD(ID_MMFR4, CCIDX, 24, 4)
18
VMUL_2sc 1111 001 . 1 . .. .... .... 1000 . 1 . 0 .... @2scalar
31
FIELD(ID_MMFR4, EVT, 28, 4)
19
VMUL_F_2sc 1111 001 . 1 . .. .... .... 1001 . 1 . 0 .... @2scalar
32
33
+FIELD(ID_PFR0, STATE0, 0, 4)
34
+FIELD(ID_PFR0, STATE1, 4, 4)
35
+FIELD(ID_PFR0, STATE2, 8, 4)
36
+FIELD(ID_PFR0, STATE3, 12, 4)
37
+FIELD(ID_PFR0, CSV2, 16, 4)
38
+FIELD(ID_PFR0, AMU, 20, 4)
39
+FIELD(ID_PFR0, DIT, 24, 4)
40
+FIELD(ID_PFR0, RAS, 28, 4)
20
+
41
+
21
+ VQDMULH_2sc 1111 001 . 1 . .. .... .... 1100 . 1 . 0 .... @2scalar
42
FIELD(ID_PFR1, PROGMOD, 0, 4)
22
+ VQRDMULH_2sc 1111 001 . 1 . .. .... .... 1101 . 1 . 0 .... @2scalar
43
FIELD(ID_PFR1, SECURITY, 4, 4)
23
]
44
FIELD(ID_PFR1, MPROGMOD, 8, 4)
45
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
46
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
24
}
47
}
25
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
48
26
index XXXXXXX..XXXXXXX 100644
49
+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
27
--- a/target/arm/translate-neon.inc.c
28
+++ b/target/arm/translate-neon.inc.c
29
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLS_F_2sc(DisasContext *s, arg_2scalar *a)
30
31
return do_2scalar(s, a, opfn[a->size], accfn[a->size]);
32
}
33
+
34
+WRAP_ENV_FN(gen_VQDMULH_16, gen_helper_neon_qdmulh_s16)
35
+WRAP_ENV_FN(gen_VQDMULH_32, gen_helper_neon_qdmulh_s32)
36
+WRAP_ENV_FN(gen_VQRDMULH_16, gen_helper_neon_qrdmulh_s16)
37
+WRAP_ENV_FN(gen_VQRDMULH_32, gen_helper_neon_qrdmulh_s32)
38
+
39
+static bool trans_VQDMULH_2sc(DisasContext *s, arg_2scalar *a)
40
+{
50
+{
41
+ static NeonGenTwoOpFn * const opfn[] = {
51
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
42
+ NULL,
43
+ gen_VQDMULH_16,
44
+ gen_VQDMULH_32,
45
+ NULL,
46
+ };
47
+
48
+ return do_2scalar(s, a, opfn[a->size], NULL);
49
+}
52
+}
50
+
53
+
51
+static bool trans_VQRDMULH_2sc(DisasContext *s, arg_2scalar *a)
54
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
52
+{
55
{
53
+ static NeonGenTwoOpFn * const opfn[] = {
56
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
54
+ NULL,
57
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
55
+ gen_VQRDMULH_16,
58
index XXXXXXX..XXXXXXX 100644
56
+ gen_VQRDMULH_32,
59
--- a/target/arm/t32.decode
57
+ NULL,
60
+++ b/target/arm/t32.decode
58
+ };
61
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
62
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
63
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101
64
65
+ # For M-profile minimal-RAS ESB can be a NOP, which is the
66
+ # default behaviour since it is in the hint space.
67
+ # ESB 1111 0011 1010 1111 1000 0000 0001 0000
59
+
68
+
60
+ return do_2scalar(s, a, opfn[a->size], NULL);
69
# The canonical nop ends in 0000 0000, but the whole rest
61
+}
70
# of the space is "reserved hint, behaves as nop".
62
diff --git a/target/arm/translate.c b/target/arm/translate.c
71
NOP 1111 0011 1010 1111 1000 0000 ---- ----
72
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
63
index XXXXXXX..XXXXXXX 100644
73
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/translate.c
74
--- a/hw/intc/armv7m_nvic.c
65
+++ b/target/arm/translate.c
75
+++ b/hw/intc/armv7m_nvic.c
66
@@ -XXX,XX +XXX,XX @@ static void gen_exception_return(DisasContext *s, TCGv_i32 pc)
76
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
67
77
return 0;
68
#define CPU_V001 cpu_V0, cpu_V0, cpu_V1
78
}
69
79
return cpu->env.v7m.sfar;
70
-static TCGv_i32 neon_load_scratch(int scratch)
80
+ case 0xf04: /* RFSR */
71
-{
81
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
72
- TCGv_i32 tmp = tcg_temp_new_i32();
82
+ goto bad_offset;
73
- tcg_gen_ld_i32(tmp, cpu_env, offsetof(CPUARMState, vfp.scratch[scratch]));
83
+ }
74
- return tmp;
84
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
75
-}
85
+ return 0;
76
-
86
case 0xf34: /* FPCCR */
77
-static void neon_store_scratch(int scratch, TCGv_i32 var)
87
if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
78
-{
88
return 0;
79
- tcg_gen_st_i32(var, cpu_env, offsetof(CPUARMState, vfp.scratch[scratch]));
89
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
80
- tcg_temp_free_i32(var);
90
R_V7M_AIRCR_PRIGROUP_SHIFT,
81
-}
91
R_V7M_AIRCR_PRIGROUP_LENGTH);
82
-
92
}
83
static int gen_neon_unzip(int rd, int rm, int size, int q)
93
+ /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
84
{
94
if (attrs.secure) {
85
TCGv_ptr pd, pm;
95
/* These bits are only writable by secure */
86
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
96
cpu->env.v7m.aircr = value &
87
case 1: /* Float VMLA scalar */
97
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
88
case 5: /* Floating point VMLS scalar */
98
}
89
case 9: /* Floating point VMUL scalar */
99
break;
90
- return 1; /* handled by decodetree */
100
}
91
-
101
+ case 0xf04: /* RFSR */
92
case 12: /* VQDMULH scalar */
102
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
93
case 13: /* VQRDMULH scalar */
103
+ goto bad_offset;
94
- if (u && ((rd | rn) & 1)) {
104
+ }
95
- return 1;
105
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
96
- }
106
+ break;
97
- tmp = neon_get_scalar(size, rm);
107
case 0xf34: /* FPCCR */
98
- neon_store_scratch(0, tmp);
108
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
99
- for (pass = 0; pass < (u ? 4 : 2); pass++) {
109
/* Not all bits here are banked. */
100
- tmp = neon_load_scratch(0);
101
- tmp2 = neon_load_reg(rn, pass);
102
- if (op == 12) {
103
- if (size == 1) {
104
- gen_helper_neon_qdmulh_s16(tmp, cpu_env, tmp, tmp2);
105
- } else {
106
- gen_helper_neon_qdmulh_s32(tmp, cpu_env, tmp, tmp2);
107
- }
108
- } else {
109
- if (size == 1) {
110
- gen_helper_neon_qrdmulh_s16(tmp, cpu_env, tmp, tmp2);
111
- } else {
112
- gen_helper_neon_qrdmulh_s32(tmp, cpu_env, tmp, tmp2);
113
- }
114
- }
115
- tcg_temp_free_i32(tmp2);
116
- neon_store_reg(rd, pass, tmp);
117
- }
118
- break;
119
+ return 1; /* handled by decodetree */
120
+
121
case 3: /* VQDMLAL scalar */
122
case 7: /* VQDMLSL scalar */
123
case 11: /* VQDMULL scalar */
124
--
110
--
125
2.20.1
111
2.20.1
126
112
127
113
diff view generated by jsdifflib
1
Convert the narrow-to-high-half insns VADDHN, VSUBHN, VRADDHN,
1
The RAS feature has a block of memory-mapped registers at offset
2
VRSUBHN in the Neon 3-registers-different-lengths group to
2
0x5000 within the PPB. For a "minimal RAS" implementation we provide
3
decodetree.
3
no error records and so the only registers that exist in the block
4
are ERRIIDR and ERRDEVID.
5
6
The "RAZ/WI for privileged, BusFault for nonprivileged" behaviour
7
of the "nvic-default" region is actually valid for minimal-RAS,
8
so the main benefit of providing an explicit implementation of
9
the register block is more accurate LOG_UNIMP messages, and a
10
framework for where we could add a real RAS implementation later
11
if necessary.
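For context, not part of the patch: with the PPB at its architected base of 0xE0000000 this block appears at 0xE0005000, so a guest probing it sees only the two registers named above. A bare-metal guest-side sketch, assuming the offsets used in this patch:

#include <stdint.h>

/* Hypothetical guest code probing the minimal-RAS register block */
#define RAS_BASE  0xE0005000UL
#define ERRIIDR   (*(volatile uint32_t *)(RAS_BASE + 0xE10))
#define ERRDEVID  (*(volatile uint32_t *)(RAS_BASE + 0xFC8))

static uint32_t ras_error_record_count(void)
{
    /* Minimal RAS: ERRDEVID reads as 0, i.e. no error record indexes */
    return ERRDEVID;
}

With this patch ERRDEVID reads as 0 and ERRIIDR as 0x43b; any other offset in the block logs an UNIMP message and reads as zero.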
4
12
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-27-peter.maydell@linaro.org
7
---
16
---
8
target/arm/neon-dp.decode | 6 +++
17
include/hw/intc/armv7m_nvic.h | 1 +
9
target/arm/translate-neon.inc.c | 87 +++++++++++++++++++++++++++++++
18
hw/intc/armv7m_nvic.c | 56 +++++++++++++++++++++++++++++++++++
10
target/arm/translate.c | 91 ++++-----------------------------
19
2 files changed, 57 insertions(+)
11
3 files changed, 104 insertions(+), 80 deletions(-)
12
20
13
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
21
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
14
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/neon-dp.decode
23
--- a/include/hw/intc/armv7m_nvic.h
16
+++ b/target/arm/neon-dp.decode
24
+++ b/include/hw/intc/armv7m_nvic.h
17
@@ -XXX,XX +XXX,XX @@ Vimm_1r 1111 001 . 1 . 000 ... .... cmode:4 0 . op:1 1 .... @1reg_imm
25
@@ -XXX,XX +XXX,XX @@ struct NVICState {
18
26
MemoryRegion sysreg_ns_mem;
19
VSUBW_S_3d 1111 001 0 1 . .. .... .... 0011 . 0 . 0 .... @3diff
27
MemoryRegion systickmem;
20
VSUBW_U_3d 1111 001 1 1 . .. .... .... 0011 . 0 . 0 .... @3diff
28
MemoryRegion systick_ns_mem;
29
+ MemoryRegion ras_mem;
30
MemoryRegion container;
31
MemoryRegion defaultmem;
32
33
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/armv7m_nvic.c
36
+++ b/hw/intc/armv7m_nvic.c
37
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
38
.endianness = DEVICE_NATIVE_ENDIAN,
39
};
40
21
+
41
+
22
+ VADDHN_3d 1111 001 0 1 . .. .... .... 0100 . 0 . 0 .... @3diff
42
+static MemTxResult ras_read(void *opaque, hwaddr addr,
23
+ VRADDHN_3d 1111 001 1 1 . .. .... .... 0100 . 0 . 0 .... @3diff
43
+ uint64_t *data, unsigned size,
24
+
44
+ MemTxAttrs attrs)
25
+ VSUBHN_3d 1111 001 0 1 . .. .... .... 0110 . 0 . 0 .... @3diff
26
+ VRSUBHN_3d 1111 001 1 1 . .. .... .... 0110 . 0 . 0 .... @3diff
27
]
28
}
29
diff --git a/target/arm/translate-neon.inc.c b/target/arm/translate-neon.inc.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate-neon.inc.c
32
+++ b/target/arm/translate-neon.inc.c
33
@@ -XXX,XX +XXX,XX @@ DO_PREWIDEN(VADDW_S, s, ext, add, true)
34
DO_PREWIDEN(VADDW_U, u, extu, add, true)
35
DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
36
DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
37
+
38
+static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
39
+ NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn)
40
+{
45
+{
41
+ /* 3-regs different lengths, narrowing (VADDHN/VSUBHN/VRADDHN/VRSUBHN) */
46
+ if (attrs.user) {
42
+ TCGv_i64 rn_64, rm_64;
47
+ return MEMTX_ERROR;
43
+ TCGv_i32 rd0, rd1;
44
+
45
+ if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
46
+ return false;
47
+ }
48
+ }
48
+
49
+
49
+ /* UNDEF accesses to D16-D31 if they don't exist. */
50
+ switch (addr) {
50
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
51
+ case 0xe10: /* ERRIIDR */
51
+ ((a->vd | a->vn | a->vm) & 0x10)) {
52
+ /* architect field = Arm; product/variant/revision 0 */
52
+ return false;
53
+ *data = 0x43b;
54
+ break;
55
+ case 0xfc8: /* ERRDEVID */
56
+ /* Minimal RAS: we implement 0 error record indexes */
57
+ *data = 0;
58
+ break;
59
+ default:
60
+ qemu_log_mask(LOG_UNIMP, "Read RAS register offset 0x%x\n",
61
+ (uint32_t)addr);
62
+ *data = 0;
63
+ break;
64
+ }
65
+ return MEMTX_OK;
66
+}
67
+
68
+static MemTxResult ras_write(void *opaque, hwaddr addr,
69
+ uint64_t value, unsigned size,
70
+ MemTxAttrs attrs)
71
+{
72
+ if (attrs.user) {
73
+ return MEMTX_ERROR;
53
+ }
74
+ }
54
+
75
+
55
+ if (!opfn || !narrowfn) {
76
+ switch (addr) {
56
+ /* size == 3 case, which is an entirely different insn group */
77
+ default:
57
+ return false;
78
+ qemu_log_mask(LOG_UNIMP, "Write to RAS register offset 0x%x\n",
79
+ (uint32_t)addr);
80
+ break;
81
+ }
82
+ return MEMTX_OK;
83
+}
84
+
85
+static const MemoryRegionOps ras_ops = {
86
+ .read_with_attrs = ras_read,
87
+ .write_with_attrs = ras_write,
88
+ .endianness = DEVICE_NATIVE_ENDIAN,
89
+};
90
+
91
/*
92
* Unassigned portions of the PPB space are RAZ/WI for privileged
93
* accesses, and fault for non-privileged accesses.
94
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
95
&s->systick_ns_mem, 1);
96
}
97
98
+ if (cpu_isar_feature(aa32_ras, s->cpu)) {
99
+ memory_region_init_io(&s->ras_mem, OBJECT(s),
100
+ &ras_ops, s, "nvic_ras", 0x1000);
101
+ memory_region_add_subregion(&s->container, 0x5000, &s->ras_mem);
58
+ }
102
+ }
59
+
103
+
60
+ if ((a->vn | a->vm) & 1) {
104
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
61
+ return false;
62
+ }
63
+
64
+ if (!vfp_access_check(s)) {
65
+ return true;
66
+ }
67
+
68
+ rn_64 = tcg_temp_new_i64();
69
+ rm_64 = tcg_temp_new_i64();
70
+ rd0 = tcg_temp_new_i32();
71
+ rd1 = tcg_temp_new_i32();
72
+
73
+ neon_load_reg64(rn_64, a->vn);
74
+ neon_load_reg64(rm_64, a->vm);
75
+
76
+ opfn(rn_64, rn_64, rm_64);
77
+
78
+ narrowfn(rd0, rn_64);
79
+
80
+ neon_load_reg64(rn_64, a->vn + 1);
81
+ neon_load_reg64(rm_64, a->vm + 1);
82
+
83
+ opfn(rn_64, rn_64, rm_64);
84
+
85
+ narrowfn(rd1, rn_64);
86
+
87
+ neon_store_reg(a->vd, 0, rd0);
88
+ neon_store_reg(a->vd, 1, rd1);
89
+
90
+ tcg_temp_free_i64(rn_64);
91
+ tcg_temp_free_i64(rm_64);
92
+
93
+ return true;
94
+}
95
+
96
+#define DO_NARROW_3D(INSN, OP, NARROWTYPE, EXTOP) \
97
+ static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
98
+ { \
99
+ static NeonGenTwo64OpFn * const addfn[] = { \
100
+ gen_helper_neon_##OP##l_u16, \
101
+ gen_helper_neon_##OP##l_u32, \
102
+ tcg_gen_##OP##_i64, \
103
+ NULL, \
104
+ }; \
105
+ static NeonGenNarrowFn * const narrowfn[] = { \
106
+ gen_helper_neon_##NARROWTYPE##_high_u8, \
107
+ gen_helper_neon_##NARROWTYPE##_high_u16, \
108
+ EXTOP, \
109
+ NULL, \
110
+ }; \
111
+ return do_narrow_3d(s, a, addfn[a->size], narrowfn[a->size]); \
112
+ }
113
+
114
+static void gen_narrow_round_high_u32(TCGv_i32 rd, TCGv_i64 rn)
115
+{
116
+ tcg_gen_addi_i64(rn, rn, 1u << 31);
117
+ tcg_gen_extrh_i64_i32(rd, rn);
118
+}
119
+
120
+DO_NARROW_3D(VADDHN, add, narrow, tcg_gen_extrh_i64_i32)
121
+DO_NARROW_3D(VSUBHN, sub, narrow, tcg_gen_extrh_i64_i32)
122
+DO_NARROW_3D(VRADDHN, add, narrow_round, gen_narrow_round_high_u32)
123
+DO_NARROW_3D(VRSUBHN, sub, narrow_round, gen_narrow_round_high_u32)
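As an aside, not part of the patch: the narrowfn table above encodes the truncating vs rounding forms of narrow-to-high-half. A plain C reference sketch of the size == 2 case, mirroring tcg_gen_extrh_i64_i32 and gen_narrow_round_high_u32:

#include <stdint.h>

/* Reference model only, not QEMU code: 64-bit to 32-bit narrow-to-high-half */
static uint32_t ref_narrow_high_u32(uint64_t x)
{
    return (uint32_t)(x >> 32);                 /* VADDHN/VSUBHN: truncate */
}

static uint32_t ref_narrow_round_high_u32(uint64_t x)
{
    return (uint32_t)((x + (1u << 31)) >> 32);  /* VRADDHN/VRSUBHN: round to nearest */
}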
124
diff --git a/target/arm/translate.c b/target/arm/translate.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/translate.c
127
+++ b/target/arm/translate.c
128
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_addl(int size)
129
}
130
}
105
}
131
106
132
-static inline void gen_neon_subl(int size)
133
-{
134
- switch (size) {
135
- case 0: gen_helper_neon_subl_u16(CPU_V001); break;
136
- case 1: gen_helper_neon_subl_u32(CPU_V001); break;
137
- case 2: tcg_gen_sub_i64(CPU_V001); break;
138
- default: abort();
139
- }
140
-}
141
-
142
static inline void gen_neon_negl(TCGv_i64 var, int size)
143
{
144
switch (size) {
145
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
146
op = (insn >> 8) & 0xf;
147
if ((insn & (1 << 6)) == 0) {
148
/* Three registers of different lengths. */
149
- int src1_wide;
150
- int src2_wide;
151
/* undefreq: bit 0 : UNDEF if size == 0
152
* bit 1 : UNDEF if size == 1
153
* bit 2 : UNDEF if size == 2
154
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
155
{0, 0, 0, 7}, /* VADDW: handled by decodetree */
156
{0, 0, 0, 7}, /* VSUBL: handled by decodetree */
157
{0, 0, 0, 7}, /* VSUBW: handled by decodetree */
158
- {0, 1, 1, 0}, /* VADDHN */
159
+ {0, 0, 0, 7}, /* VADDHN: handled by decodetree */
160
{0, 0, 0, 0}, /* VABAL */
161
- {0, 1, 1, 0}, /* VSUBHN */
162
+ {0, 0, 0, 7}, /* VSUBHN: handled by decodetree */
163
{0, 0, 0, 0}, /* VABDL */
164
{0, 0, 0, 0}, /* VMLAL */
165
{0, 0, 0, 9}, /* VQDMLAL */
166
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
167
{0, 0, 0, 7}, /* Reserved: always UNDEF */
168
};
169
170
- src1_wide = neon_3reg_wide[op][1];
171
- src2_wide = neon_3reg_wide[op][2];
172
undefreq = neon_3reg_wide[op][3];
173
174
if ((undefreq & (1 << size)) ||
175
((undefreq & 8) && u)) {
176
return 1;
177
}
178
- if ((src1_wide && (rn & 1)) ||
179
- (src2_wide && (rm & 1)) ||
180
- (!src2_wide && (rd & 1))) {
181
+ if (rd & 1) {
182
return 1;
183
}
184
185
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
186
/* Avoid overlapping operands. Wide source operands are
187
always aligned so will never overlap with wide
188
destinations in problematic ways. */
189
- if (rd == rm && !src2_wide) {
190
+ if (rd == rm) {
191
tmp = neon_load_reg(rm, 1);
192
neon_store_scratch(2, tmp);
193
- } else if (rd == rn && !src1_wide) {
194
+ } else if (rd == rn) {
195
tmp = neon_load_reg(rn, 1);
196
neon_store_scratch(2, tmp);
197
}
198
tmp3 = NULL;
199
for (pass = 0; pass < 2; pass++) {
200
- if (src1_wide) {
201
- neon_load_reg64(cpu_V0, rn + pass);
202
- tmp = NULL;
203
+ if (pass == 1 && rd == rn) {
204
+ tmp = neon_load_scratch(2);
205
} else {
206
- if (pass == 1 && rd == rn) {
207
- tmp = neon_load_scratch(2);
208
- } else {
209
- tmp = neon_load_reg(rn, pass);
210
- }
211
+ tmp = neon_load_reg(rn, pass);
212
}
213
- if (src2_wide) {
214
- neon_load_reg64(cpu_V1, rm + pass);
215
- tmp2 = NULL;
216
+ if (pass == 1 && rd == rm) {
217
+ tmp2 = neon_load_scratch(2);
218
} else {
219
- if (pass == 1 && rd == rm) {
220
- tmp2 = neon_load_scratch(2);
221
- } else {
222
- tmp2 = neon_load_reg(rm, pass);
223
- }
224
+ tmp2 = neon_load_reg(rm, pass);
225
}
226
switch (op) {
227
- case 0: case 1: case 4: /* VADDL, VADDW, VADDHN, VRADDHN */
228
- gen_neon_addl(size);
229
- break;
230
- case 2: case 3: case 6: /* VSUBL, VSUBW, VSUBHN, VRSUBHN */
231
- gen_neon_subl(size);
232
- break;
233
case 5: case 7: /* VABAL, VABDL */
234
switch ((size << 1) | u) {
235
case 0:
236
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
237
abort();
238
}
239
neon_store_reg64(cpu_V0, rd + pass);
240
- } else if (op == 4 || op == 6) {
241
- /* Narrowing operation. */
242
- tmp = tcg_temp_new_i32();
243
- if (!u) {
244
- switch (size) {
245
- case 0:
246
- gen_helper_neon_narrow_high_u8(tmp, cpu_V0);
247
- break;
248
- case 1:
249
- gen_helper_neon_narrow_high_u16(tmp, cpu_V0);
250
- break;
251
- case 2:
252
- tcg_gen_extrh_i64_i32(tmp, cpu_V0);
253
- break;
254
- default: abort();
255
- }
256
- } else {
257
- switch (size) {
258
- case 0:
259
- gen_helper_neon_narrow_round_high_u8(tmp, cpu_V0);
260
- break;
261
- case 1:
262
- gen_helper_neon_narrow_round_high_u16(tmp, cpu_V0);
263
- break;
264
- case 2:
265
- tcg_gen_addi_i64(cpu_V0, cpu_V0, 1u << 31);
266
- tcg_gen_extrh_i64_i32(tmp, cpu_V0);
267
- break;
268
- default: abort();
269
- }
270
- }
271
- if (pass == 0) {
272
- tmp3 = tmp;
273
- } else {
274
- neon_store_reg(rd, 0, tmp3);
275
- neon_store_reg(rd, 1, tmp);
276
- }
277
} else {
278
/* Write back the result. */
279
neon_store_reg64(cpu_V0, rd + pass);
280
--
107
--
281
2.20.1
108
2.20.1
282
109
283
110
New patch
1
Correct a typo in the name we give the NVIC object.
1
2
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
7
---
8
hw/arm/armv7m.c | 2 +-
9
1 file changed, 1 insertion(+), 1 deletion(-)
10
11
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/arm/armv7m.c
14
+++ b/hw/arm/armv7m.c
15
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)
16
17
memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);
18
19
- object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
20
+ object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
21
object_property_add_alias(obj, "num-irq",
22
OBJECT(&s->nvic), "num-irq");
23
24
--
25
2.20.1
26
27