First set of arm patches for 6.2. I have a lot more in my
to-review queue still...

thanks
-- PMM

The following changes since commit d42685765653ec155fdf60910662f8830bdb2cef:

  Open 6.2 development tree (2021-08-25 10:25:12 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210825

for you to fetch changes up to 24b1a6aa43615be22c7ee66bd68ec5675f6a6a9a:

  docs: Document how to use gdb with unix sockets (2021-08-25 10:48:51 +0100)

----------------------------------------------------------------
target-arm queue:
 * More MVE emulation work
 * Implement M-profile trapping on division by zero
 * kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()
 * hw/char/pl011: add support for sending break
 * fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices
 * hw/dma/pl330: Add memory region to replace default
 * sbsa-ref: Rename SBSA_GWDT enum value
 * fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices
 * docs: Document how to use gdb with unix sockets

----------------------------------------------------------------
Eduardo Habkost (1):
      sbsa-ref: Rename SBSA_GWDT enum value

Guenter Roeck (2):
      fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices
      fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices

Hamza Mahfooz (1):
      target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()

Jan Luebbe (1):
      hw/char/pl011: add support for sending break

Peter Maydell (37):
      target/arm: Note that we handle VMOVL as a special case of VSHLL
      target/arm: Print MVE VPR in CPU dumps
      target/arm: Fix MVE VSLI by 0 and VSRI by <dt>
      target/arm: Fix signed VADDV
      target/arm: Fix mask handling for MVE narrowing operations
      target/arm: Fix 48-bit saturating shifts
      target/arm: Fix MVE 48-bit SQRSHRL for small right shifts
      target/arm: Fix calculation of LTP mask when LR is 0
      target/arm: Factor out mve_eci_mask()
      target/arm: Fix VPT advance when ECI is non-zero
      target/arm: Fix VLDRB/H/W for predicated elements
      target/arm: Implement MVE VMULL (polynomial)
      target/arm: Implement MVE incrementing/decrementing dup insns
      target/arm: Factor out gen_vpst()
      target/arm: Implement MVE integer vector comparisons
      target/arm: Implement MVE integer vector-vs-scalar comparisons
      target/arm: Implement MVE VPSEL
      target/arm: Implement MVE VMLAS
      target/arm: Implement MVE shift-by-scalar
      target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats
      target/arm: Implement MVE integer min/max across vector
      target/arm: Implement MVE VABAV
      target/arm: Implement MVE narrowing moves
      target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn
      target/arm: Implement MVE VMLADAV and VMLSLDAV
      target/arm: Implement MVE VMLA
      target/arm: Implement MVE saturating doubling multiply accumulates
      target/arm: Implement MVE VQABS, VQNEG
      target/arm: Implement MVE VMAXA, VMINA
      target/arm: Implement MVE VMOV to/from 2 general-purpose registers
      target/arm: Implement MVE VPNOT
      target/arm: Implement MVE VCTP
      target/arm: Implement MVE scatter-gather insns
      target/arm: Implement MVE scatter-gather immediate forms
      target/arm: Implement MVE interleaving loads/stores
      target/arm: Re-indent sdiv and udiv helpers
      target/arm: Implement M-profile trapping on division by zero

Sebastian Meyer (1):
      docs: Document how to use gdb with unix sockets

Wen, Jianxian (1):
      hw/dma/pl330: Add memory region to replace default

 docs/system/gdb.rst        |   26 +-
 include/hw/arm/fsl-imx7.h  |    5 +
 target/arm/cpu.h           |    1 +
 target/arm/helper-mve.h    |  283 ++++++++++
 target/arm/helper.h        |    4 +-
 target/arm/translate-a32.h |    2 +
 target/arm/vec_internal.h  |   11 +
 target/arm/mve.decode      |  226 +++++++-
 target/arm/t32.decode      |    1 +
 hw/arm/exynos4210.c        |    3 +
 hw/arm/fsl-imx6ul.c        |   12 +
 hw/arm/fsl-imx7.c          |    7 +
 hw/arm/sbsa-ref.c          |    6 +-
 hw/arm/xilinx_zynq.c       |    3 +
 hw/char/pl011.c            |    6 +
 hw/dma/pl330.c             |   26 +-
 target/arm/cpu.c           |    3 +
 target/arm/helper.c        |   34 +-
 target/arm/kvm.c           |   17 +-
 target/arm/m_helper.c      |    4 +
 target/arm/mve_helper.c    | 1254 ++++++++++++++++++++++++++++++++++++++++++--
 target/arm/translate-mve.c |  877 ++++++++++++++++++++++++++++++-
 target/arm/translate-vfp.c |    2 +-
 target/arm/translate.c     |   37 +-
 target/arm/vec_helper.c    |   14 +-
 25 files changed, 2746 insertions(+), 118 deletions(-)

Although the architecture doesn't define it as an alias, VMOVL
(vector move long) is encoded as a VSHLL with a zero shift.
Add a comment in the decode file noting that we handle VMOVL
as part of VSHLL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve.decode | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
 VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
 
 # VSHLL T1 encoding; the T2 VSHLL encoding is elsewhere in this file
+# Note that VMOVL is encoded as "VSHLL with a zero shift count"; we
+# implement it that way rather than special-casing it in the decode.
 VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_b
 VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_h
 
-- 
2.20.1

Include the MVE VPR register value in the CPU dumps produced by
arm_cpu_dump_state() if we are printing FPU information. This
makes it easier to interpret debug logs when predication is
active.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
                              i, v);
             }
             qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env));
+            if (cpu_isar_feature(aa32_mve, cpu)) {
+                qemu_fprintf(f, "VPR: %08x\n", env->v7m.vpr);
+            }
         }
     }
 
-- 
2.20.1

In the MVE shift-and-insert insns, we special case VSLI by 0
and VSRI by <dt>. VSRI by <dt> means "don't update the destination",
which is what we've implemented. However VSLI by 0 is "set
destination to the input", so we don't want to use the same
special-casing that we do for VSRI by <dt>.

Since the generic logic gives the right answer for a shift
by 0, just use that.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
         uint16_t mask;                                          \
         uint64_t shiftmask;                                     \
         unsigned e;                                             \
-        if (shift == 0 || shift == ESIZE * 8) {                 \
+        if (shift == ESIZE * 8) {                               \
             /*                                                  \
-             * Only VSLI can shift by 0; only VSRI can shift by <dt>. \
-             * The generic logic would give the right answer for 0 but \
-             * fails for <dt>.                                  \
+             * Only VSRI can shift by <dt>; it should mean "don't     \
+             * update the destination". The generic logic can't handle \
+             * this because it would try to shift by an out-of-range  \
+             * amount, so special case it here.                 \
              */                                                 \
             goto done;                                          \
         }                                                       \
-- 
2.20.1

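As a standalone illustration of the shift-and-insert semantics above (a
minimal sketch in plain C, not QEMU code; vsli16 is an invented name),
the generic mask logic really does give "destination becomes the input"
for a shift of 0:

#include <stdint.h>
#include <stdio.h>

static uint16_t vsli16(uint16_t d, uint16_t m, unsigned shift)
{
    /* bits that VSLI writes; the others keep the old destination value */
    uint16_t shiftmask = (uint16_t)(0xffffu << shift);
    return (d & ~shiftmask) | ((uint16_t)(m << shift) & shiftmask);
}

int main(void)
{
    /* shift == 0: the mask is all-ones, so the destination becomes m */
    printf("%04x\n", vsli16(0x1234, 0xabcd, 0));   /* prints abcd */
    /* nonzero shift: the low bits of the old destination survive */
    printf("%04x\n", vsli16(0x1234, 0xabcd, 4));   /* prints bcd4 */
    return 0;
}
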
A cut-and-paste error meant we handled signed VADDV like
unsigned VADDV; fix the type used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAVH(vrmlsldavhxsw, int32_t, int64_t, true, true)
         return ra;                                      \
     }                                                   \
 
-DO_VADDV(vaddvsb, 1, uint8_t)
-DO_VADDV(vaddvsh, 2, uint16_t)
-DO_VADDV(vaddvsw, 4, uint32_t)
+DO_VADDV(vaddvsb, 1, int8_t)
+DO_VADDV(vaddvsh, 2, int16_t)
+DO_VADDV(vaddvsw, 4, int32_t)
 DO_VADDV(vaddvub, 1, uint8_t)
 DO_VADDV(vaddvuh, 2, uint16_t)
 DO_VADDV(vaddvuw, 4, uint32_t)
-- 
2.20.1

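To see why the element type matters here (a minimal standalone C sketch,
not QEMU code), note that a 0xff byte must contribute -1 to the
accumulator for signed VADDV but 255 for the unsigned variant:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t lanes[4] = { 0xff, 0x80, 0x01, 0x02 };
    int32_t ra_s = 0;
    uint32_t ra_u = 0;
    for (int i = 0; i < 4; i++) {
        ra_s += (int8_t)lanes[i];  /* sign-extend: -1 - 128 + 1 + 2 = -126 */
        ra_u += lanes[i];          /* zero-extend: 255 + 128 + 1 + 2 = 386 */
    }
    printf("signed: %d, unsigned: %u\n", (int)ra_s, (unsigned)ra_u);
    return 0;
}
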
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn. This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.

Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns. This allows us also to simplify the saturation bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSHLL_ALL(vshllt, true)
         TYPE *d = vd;                                           \
         uint16_t mask = mve_element_mask(env);                  \
         unsigned le;                                            \
+        mask >>= ESIZE * TOP;                                   \
         for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
             TYPE r = FN(m[H##LESIZE(le)], shift);               \
             mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask);     \
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max,
         uint16_t mask = mve_element_mask(env);                  \
         bool qc = false;                                        \
         unsigned le;                                            \
+        mask >>= ESIZE * TOP;                                   \
         for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
             bool sat = false;                                   \
             TYPE r = FN(m[H##LESIZE(le)], shift, &sat);         \
             mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask);     \
-            qc |= sat && (mask & 1 << (TOP * ESIZE));           \
+            qc |= sat & mask & 1;                               \
         }                                                       \
         if (qc) {                                               \
             env->vfp.qc[0] = qc;                                \
-- 
2.20.1

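The mask arithmetic can be checked in isolation (a minimal standalone C
sketch, not QEMU code; it assumes the MVE layout of one predicate bit per
byte of the 128-bit vector): shifting the whole mask right by ESIZE * TOP
up front means "mask & 1" always tests the byte the output element
actually lands in.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t mask = 0x0f0f;  /* predicate: bytes 0-3 and 8-11 active */
    const int esize = 2;     /* halfword outputs from word-sized inputs */
    for (int top = 0; top <= 1; top++) {
        uint16_t m = mask >> (esize * top);
        printf("TOP=%d:", top);
        for (int le = 0; le < 4; le++, m >>= 4) {  /* 4 word-sized lanes */
            /* low bit of m corresponds to output element le * 2 + top */
            printf(" out[%d]=%s", le * 2 + top, (m & 1) ? "write" : "skip");
        }
        printf("\n");
    }
    return 0;
}
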
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge
cases wrong and failed to saturate correctly:

(1) In do_sqrshl48_d() we used the same code that do_sqrshl_bhs()
does to obtain the saturated most-negative and most-positive 48-bit
signed values for the large-shift-left case. This gives (1 << 47)
for saturate-to-most-negative, but we weren't sign-extending this
value to the 64-bit output as the pseudocode requires.

(2) For left shifts by less than 48, we copied the "8/16 bit" code
from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right
thing because it assumes the C type we're working with is at least
twice the number of bits we're saturating to (so that a shift left by
bits-1 can't shift anything off the top of the value). This isn't
true for bits == 48, so we would incorrectly return 0 rather than the
most-positive value for situations like "shift (1 << 44) left by
20". Instead check for saturation by doing the shift and signextend
and then testing whether shifting back again gives the original
value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
         }
         return src >> -shift;
     } else if (shift < 48) {
-        int64_t val = src << shift;
-        int64_t extval = sextract64(val, 0, 48);
-        if (!sat || val == extval) {
+        int64_t extval = sextract64(src << shift, 0, 48);
+        if (!sat || src == (extval >> shift)) {
             return extval;
         }
     } else if (!sat || src == 0) {
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
     }
 
     *sat = 1;
-    return (1ULL << 47) - (src >= 0);
+    return src >= 0 ? MAKE_64BIT_MASK(0, 47) : MAKE_64BIT_MASK(47, 17);
 }
 
 /* Operate on 64-bit values, but saturate at 48 bits */
@@ -XXX,XX +XXX,XX @@ static inline uint64_t do_uqrshl48_d(uint64_t src, int64_t shift,
             return extval;
         }
     } else if (shift < 48) {
-        uint64_t val = src << shift;
-        uint64_t extval = extract64(val, 0, 48);
-        if (!sat || val == extval) {
+        uint64_t extval = extract64(src << shift, 0, 48);
+        if (!sat || src == (extval >> shift)) {
             return extval;
         }
     } else if (!sat || src == 0) {
-- 
2.20.1

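The shift-back check can be exercised in isolation (a minimal standalone
C sketch, not QEMU code; sextract64_48() stands in for QEMU's
sextract64(x, 0, 48)). With the old "val == extval" test, the wrapped
value 0 compared equal to its own sign-extension and was returned
unsaturated; comparing against the original src catches it:

#include <stdint.h>
#include <stdio.h>

static int64_t sextract64_48(int64_t x)  /* sign-extend the low 48 bits */
{
    return (int64_t)((uint64_t)x << 16) >> 16;
}

int main(void)
{
    int64_t src = 1LL << 44;
    int shift = 20;  /* src << shift wraps to 0 in 64 bits */
    int64_t extval = sextract64_48((int64_t)((uint64_t)src << shift));
    if (src == (extval >> shift)) {
        printf("no saturation: %lld\n", (long long)extval);
    } else {
        /* saturate to the most-positive 48-bit value */
        printf("saturated: 0x%llx\n", (1ULL << 47) - 1);
    }
    return 0;
}
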
We got an edge case wrong in the 48-bit SQRSHRL implementation: if
the shift is to the right, although it always makes the result
smaller than the input value, it might not be within the 48-bit range
the result is supposed to be in if the input had some bits in [63..48]
set and the shift didn't bring all of those within the [47..0] range.

Handle this similarly to the way we already do for this case in
do_uqrshl48_d(): extend the calculated result from 48 bits,
and return that if not saturating or if it doesn't change the
result; otherwise fall through to return a saturated value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqrshll)(CPUARMState *env, uint64_t n, uint32_t shift)
 static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
                                     bool round, uint32_t *sat)
 {
+    int64_t val, extval;
+
     if (shift <= -48) {
         /* Rounding the sign bit always produces 0. */
         if (round) {
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
     } else if (shift < 0) {
         if (round) {
             src >>= -shift - 1;
-            return (src >> 1) + (src & 1);
+            val = (src >> 1) + (src & 1);
+        } else {
+            val = src >> -shift;
+        }
+        extval = sextract64(val, 0, 48);
+        if (!sat || val == extval) {
+            return extval;
         }
-        return src >> -shift;
     } else if (shift < 48) {
         int64_t extval = sextract64(src << shift, 0, 48);
         if (!sat || src == (extval >> shift)) {
-- 
2.20.1

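Concretely (a minimal standalone C sketch, not QEMU code, reusing the
48-bit sign-extension helper idea from above): a right shift by 1 of an
input with bit 50 set still lands outside the 48-bit range, so the
result must saturate rather than be returned as-is:

#include <stdint.h>
#include <stdio.h>

static int64_t sextract64_48(int64_t x)  /* sign-extend the low 48 bits */
{
    return (int64_t)((uint64_t)x << 16) >> 16;
}

int main(void)
{
    int64_t src = 1LL << 50;  /* bits set above [47:0] */
    int64_t val = src >> 1;   /* smaller, but still >= 2^47 */
    int64_t extval = sextract64_48(val);
    printf("val=0x%llx extval=0x%llx -> %s\n",
           (unsigned long long)val, (unsigned long long)extval,
           val == extval ? "in range" : "saturate");
    return 0;
}
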
In mve_element_mask(), we calculate a mask for tail predication which
should have a number of 1 bits based on the value of LR. However,
our MAKE_64BIT_MASK() macro has undefined behaviour when passed a
zero length. Special case this to give the all-zeroes mask we
require.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint16_t mve_element_mask(CPUARMState *env)
          */
         int masklen = env->regs[14] << env->v7m.ltpsize;
         assert(masklen <= 16);
-        mask &= MAKE_64BIT_MASK(0, masklen);
+        uint16_t ltpmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
+        mask &= ltpmask;
     }
 
     if ((env->condexec_bits & 0xf) == 0) {
-- 
2.20.1

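The undefined behaviour being avoided is the zero-length case (a minimal
standalone C sketch, not QEMU code; MAKE_MASK mirrors the shape of QEMU's
MAKE_64BIT_MASK for a zero shift):

#include <stdint.h>
#include <stdio.h>

#define MAKE_MASK(len) ((~0ULL) >> (64 - (len)))

static uint64_t safe_mask(unsigned len)
{
    /* len == 0 would make the macro shift by 64, which is UB in C */
    return len ? MAKE_MASK(len) : 0;
}

int main(void)
{
    printf("len=16 -> 0x%llx\n", (unsigned long long)safe_mask(16));
    printf("len=0  -> 0x%llx\n", (unsigned long long)safe_mask(0));
    return 0;
}
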
In some situations we need a mask telling us which parts of the
vector correspond to beats that are not being executed because of
ECI, separately from the combined "which bytes are predicated away"
mask. Factor this mask calculation out of mve_element_mask() into
its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 58 ++++++++++++++++++++++++-----------------
 1 file changed, 34 insertions(+), 24 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/exec-all.h"
 #include "tcg/tcg.h"
 
+static uint16_t mve_eci_mask(CPUARMState *env)
+{
+    /*
+     * Return the mask of which elements in the MVE vector correspond
+     * to beats being executed. The mask has 1 bits for executed lanes
+     * and 0 bits where ECI says this beat was already executed.
+     */
+    int eci;
+
+    if ((env->condexec_bits & 0xf) != 0) {
+        return 0xffff;
+    }
+
+    eci = env->condexec_bits >> 4;
+    switch (eci) {
+    case ECI_NONE:
+        return 0xffff;
+    case ECI_A0:
+        return 0xfff0;
+    case ECI_A0A1:
+        return 0xff00;
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        return 0xf000;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static uint16_t mve_element_mask(CPUARMState *env)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static uint16_t mve_element_mask(CPUARMState *env)
         mask &= ltpmask;
     }
 
-    if ((env->condexec_bits & 0xf) == 0) {
-        /*
-         * ECI bits indicate which beats are already executed;
-         * we handle this by effectively predicating them out.
-         */
-        int eci = env->condexec_bits >> 4;
-        switch (eci) {
-        case ECI_NONE:
-            break;
-        case ECI_A0:
-            mask &= 0xfff0;
-            break;
-        case ECI_A0A1:
-            mask &= 0xff00;
-            break;
-        case ECI_A0A1A2:
-        case ECI_A0A1A2B0:
-            mask &= 0xf000;
-            break;
-        default:
-            g_assert_not_reached();
-        }
-    }
-
+    /*
+     * ECI bits indicate which beats are already executed;
+     * we handle this by effectively predicating them out.
+     */
+    mask &= mve_eci_mask(env);
     return mask;
 }
 
-- 
2.20.1

We were not paying attention to the ECI state when advancing the VPT
state. Architecturally, VPT state advance happens for every beat
(see the pseudocode VPTAdvance()), so on every beat the 4 bits of
VPR.P0 corresponding to the current beat are inverted if required,
and at the end of beats 1 and 3 the VPR MASK fields are updated.
This means that if the ECI state says we should not be executing all
4 beats then we need to skip some of the updating of the VPR that we
currently do in mve_advance_vpt().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
     /* Advance the VPT and ECI state if necessary */
     uint32_t vpr = env->v7m.vpr;
     unsigned mask01, mask23;
+    uint16_t inv_mask;
+    uint16_t eci_mask = mve_eci_mask(env);
 
     if ((env->condexec_bits & 0xf) == 0) {
         env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ?
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
         return;
     }
 
+    /* Invert P0 bits if needed, but only for beats we actually executed */
     mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01);
     mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23);
-    if (mask01 > 8) {
-        /* high bit set, but not 0b1000: invert the relevant half of P0 */
-        vpr ^= 0xff;
+    /* Start by assuming we invert all bits corresponding to executed beats */
+    inv_mask = eci_mask;
+    if (mask01 <= 8) {
+        /* MASK01 says don't invert low half of P0 */
+        inv_mask &= ~0xff;
     }
-    if (mask23 > 8) {
-        /* high bit set, but not 0b1000: invert the relevant half of P0 */
-        vpr ^= 0xff00;
+    if (mask23 <= 8) {
+        /* MASK23 says don't invert high half of P0 */
+        inv_mask &= ~0xff00;
     }
-    vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+    vpr ^= inv_mask;
+    /* Only update MASK01 if beat 1 executed */
+    if (eci_mask & 0xf0) {
+        vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+    }
+    /* Beat 3 always executes, so update MASK23 */
     vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1);
     env->v7m.vpr = vpr;
 }
-- 
2.20.1

For vector loads, predicated elements are zeroed, instead of
retaining their previous values (as happens for most data
processing operations). This means we need to distinguish
"beat not executed due to ECI" (don't touch destination
element) from "beat executed but predicated out" (zero
destination element).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
     env->v7m.vpr = vpr;
 }
 
-
+/* For loads, predicated lanes are zeroed instead of keeping their old values */
 #define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE)                 \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
     {                                                           \
         TYPE *d = vd;                                           \
         uint16_t mask = mve_element_mask(env);                  \
+        uint16_t eci_mask = mve_eci_mask(env);                  \
         unsigned b, e;                                          \
         /*                                                      \
          * R_SXTM allows the dest reg to become UNKNOWN for abandoned \
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
          * then take an exception.                              \
          */                                                     \
         for (b = 0, e = 0; b < 16; b += ESIZE, e++) {           \
-            if (mask & (1 << b)) {                              \
-                d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \
+            if (eci_mask & (1 << b)) {                          \
+                d[H##ESIZE(e)] = (mask & (1 << b)) ?            \
+                    cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
             }                                                   \
             addr += MSIZE;                                      \
         }                                                       \
-- 
2.20.1

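The three cases per lane can be tabulated directly (a minimal standalone
C sketch, not QEMU code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t dest = 0xaa, mem = 0x55;
    for (int eci_bit = 0; eci_bit <= 1; eci_bit++) {
        for (int pred_bit = 0; pred_bit <= 1; pred_bit++) {
            uint8_t d = dest;
            if (eci_bit) {               /* this beat is actually executed */
                d = pred_bit ? mem : 0;  /* predicated-out lanes are zeroed */
            }                            /* else: ECI skip, keep old value */
            printf("eci=%d pred=%d -> 0x%02x\n", eci_bit, pred_bit, d);
        }
    }
    return 0;
}
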
Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes
in two flavours: an 8x8->16 and a 16x16->32. Also unlike Neon, the
inputs are in either the low or the high half of each double-width
element.

The assembler for this insn indicates the size with "P8" or "P16",
encoded into bit 28 as size = 0 or 1. We choose to follow the
same encoding as VQDMULL and decode this into a->size as MO_16
or MO_32 indicating the size of the result elements. This then
carries through to the helper function names, where it matches up
with the existing pmull_h() which does an 8x8->16 operation and a
new pmull_w() which does the 16x16->32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  5 +++++
 target/arm/vec_internal.h  | 11 +++++++++++
 target/arm/mve.decode      | 14 ++++++++++----
 target/arm/mve_helper.c    | 16 ++++++++++++++++
 target/arm/translate-mve.c | 28 ++++++++++++++++++++++++++++
 target/arm/vec_helper.c    | 14 +++++++++++++-
 6 files changed, 83 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 
+DEF_HELPER_FLAGS_4(mve_vmullpbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullpth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullpbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullptw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -XXX,XX +XXX,XX @@ int16_t do_sqrdmlah_h(int16_t, int16_t, int16_t, bool, bool, uint32_t *);
 int32_t do_sqrdmlah_s(int32_t, int32_t, int32_t, bool, bool, uint32_t *);
 int64_t do_sqrdmlah_d(int64_t, int64_t, int64_t, bool, bool);
 
+/*
+ * 8 x 8 -> 16 vector polynomial multiply where the inputs are
+ * in the low 8 bits of each 16-bit element
+*/
+uint64_t pmull_h(uint64_t op1, uint64_t op2);
+/*
+ * 16 x 16 -> 32 vector polynomial multiply where the inputs are
+ * in the low 16 bits of each 32-bit element
+ */
+uint64_t pmull_w(uint64_t op1, uint64_t op2);
+
 #endif /* TARGET_ARM_VEC_INTERNALS_H */
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
 VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 
-VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
-VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
-VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
-VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+{
+  VMULLP_B 111 . 1110 0 . 11 ... 1 ... 0 1110 . 0 . 0 ... 0 @2op_sz28
+  VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+  VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+}
+{
+  VMULLP_T 111 . 1110 0 . 11 ... 1 ... 1 1110 . 0 . 0 ... 0 @2op_sz28
+  VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+  VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+}
 
 VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
 VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL)
 DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL)
 DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL)
 
+/*
+ * Polynomial multiply. We can always do this generating 64 bits
+ * of the result at a time, so we don't need to use DO_2OP_L.
+ */
+#define VMULLPH_MASK 0x00ff00ff00ff00ffULL
+#define VMULLPW_MASK 0x0000ffff0000ffffULL
+#define DO_VMULLPBH(N, M) pmull_h((N) & VMULLPH_MASK, (M) & VMULLPH_MASK)
+#define DO_VMULLPTH(N, M) DO_VMULLPBH((N) >> 8, (M) >> 8)
+#define DO_VMULLPBW(N, M) pmull_w((N) & VMULLPW_MASK, (M) & VMULLPW_MASK)
+#define DO_VMULLPTW(N, M) DO_VMULLPBW((N) >> 16, (M) >> 16)
+
+DO_2OP(vmullpbh, 8, uint64_t, DO_VMULLPBH)
+DO_2OP(vmullpth, 8, uint64_t, DO_VMULLPTH)
+DO_2OP(vmullpbw, 8, uint64_t, DO_VMULLPBW)
+DO_2OP(vmullptw, 8, uint64_t, DO_VMULLPTW)
+
 /*
  * Because the computation type is at least twice as large as required,
  * these work for both signed and unsigned source types.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
     return do_2op(s, a, fns[a->size]);
 }
 
+static bool trans_VMULLP_B(DisasContext *s, arg_2op *a)
+{
+    /*
+     * Note that a->size indicates the output size, ie VMULL.P8
+     * is the 8x8->16 operation and a->size is MO_16; VMULL.P16
+     * is the 16x16->32 operation and a->size is MO_32.
+     */
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vmullpbh,
+        gen_helper_mve_vmullpbw,
+        NULL,
+    };
+    return do_2op(s, a, fns[a->size]);
+}
+
+static bool trans_VMULLP_T(DisasContext *s, arg_2op *a)
+{
+    /* a->size is as for trans_VMULLP_B */
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vmullpth,
+        gen_helper_mve_vmullptw,
+        NULL,
+    };
+    return do_2op(s, a, fns[a->size]);
+}
+
 /*
  * VADC and VSBC: these perform an add-with-carry or subtract-with-carry
  * of the 32-bit elements in each lane of the input vectors, where the
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t expand_byte_to_half(uint64_t x)
          | ((x & 0xff000000) << 24);
 }
 
-static uint64_t pmull_h(uint64_t op1, uint64_t op2)
+uint64_t pmull_w(uint64_t op1, uint64_t op2)
 {
     uint64_t result = 0;
     int i;
+    for (i = 0; i < 16; ++i) {
+        uint64_t mask = (op1 & 0x0000000100000001ull) * 0xffffffff;
+        result ^= op2 & mask;
+        op1 >>= 1;
+        op2 <<= 1;
+    }
+    return result;
+}
+
+uint64_t pmull_h(uint64_t op1, uint64_t op2)
+{
+    uint64_t result = 0;
+    int i;
     for (i = 0; i < 8; ++i) {
         uint64_t mask = (op1 & 0x0001000100010001ull) * 0xffff;
         result ^= op2 & mask;
-- 
2.20.1

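The scalar core of the operation is a carry-less multiply (a minimal
standalone C sketch, not QEMU code; clmul8 is an invented name), in
which partial products are combined with XOR rather than addition:

#include <stdint.h>
#include <stdio.h>

static uint16_t clmul8(uint8_t a, uint8_t b)
{
    uint16_t r = 0;
    for (int i = 0; i < 8; i++) {
        if (b & (1 << i)) {
            r ^= (uint16_t)(a << i);  /* XOR in a shifted copy of a */
        }
    }
    return r;
}

int main(void)
{
    /* integer 3 * 3 = 9, but over GF(2), (x+1)*(x+1) = x^2 + 1 = 0x5 */
    printf("clmul8(0x03, 0x03) = 0x%04x\n", clmul8(0x03, 0x03));
    return 0;
}
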
Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP,
VIWDUP and VDWDUP. These fill the elements of a vector with
successively incrementing values, starting at the offset specified in
a general purpose register. The final value of the offset is written
back to this register. The wrapping variants take a second general
purpose register which specifies the point where the count should
wrap back to 0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  12 ++++
 target/arm/mve.decode      |  25 ++++++++
 target/arm/mve_helper.c    |  63 +++++++++++++++++++
 target/arm/translate-mve.c | 120 +++++++++++++++++++++++++++++++++++++
 4 files changed, 220 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
 
 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
 
+DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+DEF_HELPER_FLAGS_4(mve_viduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+DEF_HELPER_FLAGS_4(mve_vidupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_5(mve_viwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_viwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_viwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+
+DEF_HELPER_FLAGS_5(mve_vdwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_vdwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_vdwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+
 DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &2scalar qd qn rm size
 &1imm qd imm cmode op
 &2shift qd qm shift size
+&vidup qd rn size imm
+&viwdup qd rn rm size imm
 
 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
 
+# Incrementing and decrementing dup
+
+# VIDUP, VDDUP format immediate: 1 << (immh:imml)
+%imm_vidup 7:1 0:1 !function=vidup_imm
+
+# VIDUP, VDDUP registers: Rm bits [3:1] from insn, bit 0 is 1;
+# Rn bits [3:1] from insn, bit 0 is 0
+%vidup_rm 1:3 !function=times_2_plus_1
+%vidup_rn 17:3 !function=times_2
+
+@vidup .... .... . . size:2 .... .... .... .... .... \
+       qd=%qd imm=%imm_vidup rn=%vidup_rn &vidup
+@viwdup .... .... . . size:2 .... .... .... .... .... \
+        qd=%qd imm=%imm_vidup rm=%vidup_rm rn=%vidup_rn &viwdup
+{
+  VIDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 110 111 . @vidup
+  VIWDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 110 ... . @viwdup
+}
+{
+  VDDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 111 . @vidup
+  VDWDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 ... . @viwdup
+}
+
 # multiply-add long dual accumulate
 # rdahi: bits [3:1] from insn, bit 0 is 1
 # rdalo: bits [3:1] from insn, bit 0 is 0
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_sqrshr)(CPUARMState *env, uint32_t n, uint32_t shift)
 {
     return do_sqrshl_bhs(n, -(int8_t)shift, 32, true, &env->QF);
 }
+
+#define DO_VIDUP(OP, ESIZE, TYPE, FN)                           \
+    uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd,       \
+                              uint32_t offset, uint32_t imm)    \
+    {                                                           \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)], offset, mask);           \
+            offset = FN(offset, imm);                           \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return offset;                                          \
+    }
+
+#define DO_VIWDUP(OP, ESIZE, TYPE, FN)                          \
+    uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd,       \
+                              uint32_t offset, uint32_t wrap,   \
+                              uint32_t imm)                     \
+    {                                                           \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)], offset, mask);           \
+            offset = FN(offset, wrap, imm);                     \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return offset;                                          \
+    }
+
+#define DO_VIDUP_ALL(OP, FN)                    \
+    DO_VIDUP(OP##b, 1, int8_t, FN)              \
+    DO_VIDUP(OP##h, 2, int16_t, FN)             \
+    DO_VIDUP(OP##w, 4, int32_t, FN)
+
+#define DO_VIWDUP_ALL(OP, FN)                   \
+    DO_VIWDUP(OP##b, 1, int8_t, FN)             \
+    DO_VIWDUP(OP##h, 2, int16_t, FN)            \
+    DO_VIWDUP(OP##w, 4, int32_t, FN)
+
+static uint32_t do_add_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
+{
+    offset += imm;
+    if (offset == wrap) {
+        offset = 0;
+    }
+    return offset;
+}
+
+static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
+{
+    if (offset == 0) {
+        offset = wrap;
+    }
+    offset -= imm;
+    return offset;
+}
+
+DO_VIDUP_ALL(vidup, DO_ADD)
+DO_VIWDUP_ALL(viwdup, do_add_wrap)
+DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
156
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/m-nocp.decode
157
--- a/target/arm/translate-mve.c
43
+++ b/target/arm/m-nocp.decode
158
+++ b/target/arm/translate-mve.c
44
@@ -XXX,XX +XXX,XX @@
159
@@ -XXX,XX +XXX,XX @@
45
# If the coprocessor is not present or disabled then we will generate
160
#include "translate.h"
46
# the NOCP exception; otherwise we let the insn through to the main decode.
161
#include "translate-a32.h"
47
162
48
+%vd_dp 22:1 12:4
163
+static inline int vidup_imm(DisasContext *s, int x)
49
+%vd_sp 12:4 22:1
164
+{
50
+
165
+ return 1 << x;
51
&nocp cp
166
+}
52
167
+
53
{
168
/* Include the generated decoder */
54
# Special cases which do not take an early NOCP: VLLDM and VLSTM
169
#include "decode-mve.c.inc"
55
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
170
56
- # TODO: VSCCLRM (new in v8.1M) is similar:
171
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
57
- #VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
172
typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
58
+ # VSCCLRM (new in v8.1M) is similar:
173
typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
59
+ VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
174
typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
60
+ VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
175
+typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
61
176
+typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
62
NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
177
63
NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
178
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
64
diff --git a/target/arm/translate.c b/target/arm/translate.c
179
static inline long mve_qreg_offset(unsigned reg)
65
index XXXXXXX..XXXXXXX 100644
180
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLC(DisasContext *s, arg_VSHLC *a)
66
--- a/target/arm/translate.c
181
mve_update_eci(s);
67
+++ b/target/arm/translate.c
68
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
69
a64_translate_init();
70
}
71
72
+/* Generate a label used for skipping this instruction */
73
+static void arm_gen_condlabel(DisasContext *s)
74
+{
75
+ if (!s->condjmp) {
76
+ s->condlabel = gen_new_label();
77
+ s->condjmp = 1;
78
+ }
79
+}
80
+
81
/* Flags for the disas_set_da_iss info argument:
82
* lower bits hold the Rt register number, higher bits are flags.
83
*/
84
@@ -XXX,XX +XXX,XX @@ static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
85
long off = neon_element_offset(reg, ele, memop);
86
87
switch (memop) {
88
+ case MO_32:
89
+ tcg_gen_st32_i64(src, cpu_env, off);
90
+ break;
91
case MO_64:
92
tcg_gen_st_i64(src, cpu_env, off);
93
break;
94
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
95
s->base.is_jmp = DISAS_UPDATE_EXIT;
96
}
97
98
-/* Generate a label used for skipping this instruction */
99
-static void arm_gen_condlabel(DisasContext *s)
100
-{
101
- if (!s->condjmp) {
102
- s->condlabel = gen_new_label();
103
- s->condjmp = 1;
104
- }
105
-}
106
-
107
/* Skip this instruction if the ARM condition is false */
108
static void arm_skip_unless(DisasContext *s, uint32_t cond)
109
{
110
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/translate-vfp.c.inc
113
+++ b/target/arm/translate-vfp.c.inc
114
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
115
return true;
182
return true;
116
}
183
}
117
184
+
118
+static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
185
+static bool do_vidup(DisasContext *s, arg_vidup *a, MVEGenVIDUPFn *fn)
119
+{
186
+{
120
+ int btmreg, topreg;
187
+ TCGv_ptr qd;
121
+ TCGv_i64 zero;
188
+ TCGv_i32 rn;
122
+ TCGv_i32 aspen, sfpa;
189
+
123
+
190
+ /*
124
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
191
+ * Vector increment/decrement with wrap and duplicate (VIDUP, VDDUP).
125
+ /* Before v8.1M, fall through in decode to NOCP check */
192
+ * This fills the vector with elements of successively increasing
193
+ * or decreasing values, starting from Rn.
194
+ */
195
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
126
+ return false;
196
+ return false;
127
+ }
197
+ }
128
+
198
+ if (a->size == MO_64) {
129
+ /* Explicitly UNDEF because this takes precedence over NOCP */
199
+ /* size 0b11 is another encoding */
130
+ if (!arm_dc_feature(s, ARM_FEATURE_M_MAIN) || !s->v8m_secure) {
200
+ return false;
131
+ unallocated_encoding(s);
201
+ }
202
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
132
+ return true;
203
+ return true;
133
+ }
204
+ }
134
+
205
+
135
+ if (!dc_isar_feature(aa32_vfp_simd, s)) {
206
+ qd = mve_qreg_ptr(a->qd);
136
+ /* NOP if we have neither FP nor MVE */
207
+ rn = load_reg(s, a->rn);
208
+ fn(rn, cpu_env, qd, rn, tcg_constant_i32(a->imm));
209
+ store_reg(s, a->rn, rn);
210
+ tcg_temp_free_ptr(qd);
211
+ mve_update_eci(s);
212
+ return true;
213
+}
214
+
215
+static bool do_viwdup(DisasContext *s, arg_viwdup *a, MVEGenVIWDUPFn *fn)
216
+{
217
+ TCGv_ptr qd;
218
+ TCGv_i32 rn, rm;
219
+
220
+ /*
221
+ * Vector increment/decrement with wrap and duplicate (VIWDUp, VDWDUP)
222
+ * This fills the vector with elements of successively increasing
223
+ * or decreasing values, starting from Rn. Rm specifies a point where
224
+ * the count wraps back around to 0. The updated offset is written back
225
+ * to Rn.
226
+ */
227
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
228
+ return false;
229
+ }
230
+ if (!fn || a->rm == 13 || a->rm == 15) {
231
+ /*
232
+ * size 0b11 is another encoding; Rm == 13 is UNPREDICTABLE;
233
+ * Rm == 13 is VIWDUP, VDWDUP.
234
+ */
235
+ return false;
236
+ }
237
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
137
+ return true;
238
+ return true;
138
+ }
239
+ }
139
+
240
+
140
+ /*
241
+ qd = mve_qreg_ptr(a->qd);
141
+ * If FPCCR.ASPEN != 0 && CONTROL_S.SFPA == 0 then there is no
242
+ rn = load_reg(s, a->rn);
142
+ * active floating point context so we must NOP (without doing
243
+ rm = load_reg(s, a->rm);
143
+ * any lazy state preservation or the NOCP check).
244
+ fn(rn, cpu_env, qd, rn, rm, tcg_constant_i32(a->imm));
144
+ */
245
+ store_reg(s, a->rn, rn);
145
+ aspen = load_cpu_field(v7m.fpccr[M_REG_S]);
246
+ tcg_temp_free_ptr(qd);
146
+ sfpa = load_cpu_field(v7m.control[M_REG_S]);
247
+ tcg_temp_free_i32(rm);
147
+ tcg_gen_andi_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
248
+ mve_update_eci(s);
148
+ tcg_gen_xori_i32(aspen, aspen, R_V7M_FPCCR_ASPEN_MASK);
149
+ tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
150
+ tcg_gen_or_i32(sfpa, sfpa, aspen);
151
+ arm_gen_condlabel(s);
152
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
153
+
154
+ if (s->fp_excp_el != 0) {
155
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
156
+ syn_uncategorized(), s->fp_excp_el);
157
+ return true;
158
+ }
159
+
160
+ topreg = a->vd + a->imm - 1;
161
+ btmreg = a->vd;
162
+
163
+ /* Convert to Sreg numbers if the insn specified in Dregs */
164
+ if (a->size == 3) {
165
+ topreg = topreg * 2 + 1;
166
+ btmreg *= 2;
167
+ }
168
+
169
+ if (topreg > 63 || (topreg > 31 && !(topreg & 1))) {
170
+ /* UNPREDICTABLE: we choose to undef */
171
+ unallocated_encoding(s);
172
+ return true;
173
+ }
174
+
175
+ /* Silently ignore requests to clear D16-D31 if they don't exist */
176
+ if (topreg > 31 && !dc_isar_feature(aa32_simd_r32, s)) {
177
+ topreg = 31;
178
+ }
179
+
180
+ if (!vfp_access_check(s)) {
181
+ return true;
182
+ }
183
+
184
+ /* Zero the Sregs from btmreg to topreg inclusive. */
185
+ zero = tcg_const_i64(0);
186
+ if (btmreg & 1) {
187
+ write_neon_element64(zero, btmreg >> 1, 1, MO_32);
188
+ btmreg++;
189
+ }
190
+ for (; btmreg + 1 <= topreg; btmreg += 2) {
191
+ write_neon_element64(zero, btmreg >> 1, 0, MO_64);
192
+ }
193
+ if (btmreg == topreg) {
194
+ write_neon_element64(zero, btmreg >> 1, 0, MO_32);
195
+ btmreg++;
196
+ }
197
+ assert(btmreg == topreg + 1);
198
+ /* TODO: when MVE is implemented, zero VPR here */
199
+ return true;
249
+ return true;
200
+}
250
+}
201
+
251
+
202
static bool trans_NOCP(DisasContext *s, arg_nocp *a)
252
+static bool trans_VIDUP(DisasContext *s, arg_vidup *a)
203
{
253
+{
204
/*
254
+ static MVEGenVIDUPFn * const fns[] = {
255
+ gen_helper_mve_vidupb,
256
+ gen_helper_mve_viduph,
257
+ gen_helper_mve_vidupw,
258
+ NULL,
259
+ };
260
+ return do_vidup(s, a, fns[a->size]);
261
+}
262
+
263
+static bool trans_VDDUP(DisasContext *s, arg_vidup *a)
264
+{
265
+ static MVEGenVIDUPFn * const fns[] = {
266
+ gen_helper_mve_vidupb,
267
+ gen_helper_mve_viduph,
268
+ gen_helper_mve_vidupw,
269
+ NULL,
270
+ };
271
+ /* VDDUP is just like VIDUP but with a negative immediate */
272
+ a->imm = -a->imm;
273
+ return do_vidup(s, a, fns[a->size]);
274
+}
275
+
276
+static bool trans_VIWDUP(DisasContext *s, arg_viwdup *a)
277
+{
278
+ static MVEGenVIWDUPFn * const fns[] = {
279
+ gen_helper_mve_viwdupb,
280
+ gen_helper_mve_viwduph,
281
+ gen_helper_mve_viwdupw,
282
+ NULL,
283
+ };
284
+ return do_viwdup(s, a, fns[a->size]);
285
+}
286
+
287
+static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a)
288
+{
289
+ static MVEGenVIWDUPFn * const fns[] = {
290
+ gen_helper_mve_vdwdupb,
291
+ gen_helper_mve_vdwduph,
292
+ gen_helper_mve_vdwdupw,
293
+ NULL,
294
+ };
295
+ return do_viwdup(s, a, fns[a->size]);
296
+}
205
--
297
--
206
2.20.1
298
2.20.1
207
299
208
300
diff view generated by jsdifflib
1
Implement the new-in-v8.1M FPCXT_S floating point system register.
1
Factor out the "generate code to update VPR.MASK01/MASK23" part of
2
This is for saving and restoring the secure floating point context,
2
trans_VPST(); we are going to want to reuse it for the VPT insns.
3
and it reads and writes bits [27:0] from the FPSCR and the
4
CONTROL.SFPA bit in bit [31].
5
3
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-14-peter.maydell@linaro.org
9
---
6
---
10
target/arm/translate-vfp.c.inc | 58 ++++++++++++++++++++++++++++++++++
7
target/arm/translate-mve.c | 31 +++++++++++++++++--------------
11
1 file changed, 58 insertions(+)
8
1 file changed, 17 insertions(+), 14 deletions(-)
12
9
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
10
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
14
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
12
--- a/target/arm/translate-mve.c
16
+++ b/target/arm/translate-vfp.c.inc
13
+++ b/target/arm/translate-mve.c
17
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
14
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
18
return false;
15
return do_long_dual_acc(s, a, fns[a->x]);
19
}
16
}
17
18
-static bool trans_VPST(DisasContext *s, arg_VPST *a)
19
+static void gen_vpst(DisasContext *s, uint32_t mask)
20
{
21
- TCGv_i32 vpr;
22
-
23
- /* mask == 0 is a "related encoding" */
24
- if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
25
- return false;
26
- }
27
- if (!mve_eci_check(s) || !vfp_access_check(s)) {
28
- return true;
29
- }
30
/*
31
* Set the VPR mask fields. We take advantage of MASK01 and MASK23
32
* being adjacent fields in the register.
33
*
34
- * This insn is not predicated, but it is subject to beat-wise
35
+ * Updating the masks is not predicated, but it is subject to beat-wise
36
* execution, and the mask is updated on the odd-numbered beats.
37
* So if PSR.ECI says we should skip beat 1, we mustn't update the
38
* 01 mask field.
39
*/
40
- vpr = load_cpu_field(v7m.vpr);
41
+ TCGv_i32 vpr = load_cpu_field(v7m.vpr);
42
switch (s->eci) {
43
case ECI_NONE:
44
case ECI_A0:
45
/* Update both 01 and 23 fields */
46
tcg_gen_deposit_i32(vpr, vpr,
47
- tcg_constant_i32(a->mask | (a->mask << 4)),
48
+ tcg_constant_i32(mask | (mask << 4)),
49
R_V7M_VPR_MASK01_SHIFT,
50
R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH);
20
break;
51
break;
21
+ case ARM_VFP_FPCXT_S:
52
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
22
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
53
case ECI_A0A1A2B0:
23
+ return false;
54
/* Update only the 23 mask field */
24
+ }
55
tcg_gen_deposit_i32(vpr, vpr,
25
+ if (!s->v8m_secure) {
56
- tcg_constant_i32(a->mask),
26
+ return false;
57
+ tcg_constant_i32(mask),
27
+ }
58
R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH);
28
+ break;
29
default:
30
return FPSysRegCheckFailed;
31
}
32
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
33
tcg_temp_free_i32(tmp);
34
break;
59
break;
35
}
36
+ case ARM_VFP_FPCXT_S:
37
+ {
38
+ TCGv_i32 sfpa, control, fpscr;
39
+ /* Set FPSCR[27:0] and CONTROL.SFPA from value */
40
+ tmp = loadfn(s, opaque);
41
+ sfpa = tcg_temp_new_i32();
42
+ tcg_gen_shri_i32(sfpa, tmp, 31);
43
+ control = load_cpu_field(v7m.control[M_REG_S]);
44
+ tcg_gen_deposit_i32(control, control, sfpa,
45
+ R_V7M_CONTROL_SFPA_SHIFT, 1);
46
+ store_cpu_field(control, v7m.control[M_REG_S]);
47
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
48
+ tcg_gen_andi_i32(fpscr, fpscr, FPCR_NZCV_MASK);
49
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
50
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
51
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
52
+ tcg_temp_free_i32(tmp);
53
+ tcg_temp_free_i32(sfpa);
54
+ break;
55
+ }
56
default:
60
default:
57
g_assert_not_reached();
61
g_assert_not_reached();
58
}
62
}
59
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
63
store_cpu_field(vpr, v7m.vpr);
60
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
64
+}
61
storefn(s, opaque, tmp);
65
+
62
break;
66
+static bool trans_VPST(DisasContext *s, arg_VPST *a)
63
+ case ARM_VFP_FPCXT_S:
67
+{
64
+ {
68
+ /* mask == 0 is a "related encoding" */
65
+ TCGv_i32 control, sfpa, fpscr;
69
+ if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
66
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
70
+ return false;
67
+ tmp = tcg_temp_new_i32();
68
+ sfpa = tcg_temp_new_i32();
69
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
70
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
71
+ control = load_cpu_field(v7m.control[M_REG_S]);
72
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
73
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
74
+ tcg_gen_or_i32(tmp, tmp, sfpa);
75
+ tcg_temp_free_i32(sfpa);
76
+ /*
77
+ * Store result before updating FPSCR etc, in case
78
+ * it is a memory write which causes an exception.
79
+ */
80
+ storefn(s, opaque, tmp);
81
+ /*
82
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
83
+ * CONTROL.SFPA; so we'll end the TB here.
84
+ */
85
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
86
+ store_cpu_field(control, v7m.control[M_REG_S]);
87
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
88
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
89
+ tcg_temp_free_i32(fpscr);
90
+ gen_lookup_tb(s);
91
+ break;
92
+ }
71
+ }
93
default:
72
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
94
g_assert_not_reached();
73
+ return true;
95
}
74
+ }
75
+ gen_vpst(s, a->mask);
76
mve_update_and_store_eci(s);
77
return true;
78
}
96
--
79
--
97
2.20.1
80
2.20.1
98
81
99
82
diff view generated by jsdifflib
New patch
1
1
Implement the MVE integer vector comparison instructions. These are
2
"VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings
3
T1, T2 and T3.
4
5
These insns compare corresponding elements in each vector, and update
6
the VPR.P0 predicate bits with the results of the comparison. VPT
7
also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively
8
"VCMP then VPST".
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
---
13
target/arm/helper-mve.h | 32 ++++++++++++++++++++++
14
target/arm/mve.decode | 18 +++++++++++-
15
target/arm/mve_helper.c | 56 ++++++++++++++++++++++++++++++++++++++
16
target/arm/translate-mve.c | 47 ++++++++++++++++++++++++++++++++
17
4 files changed, 152 insertions(+), 1 deletion(-)
18
19
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper-mve.h
22
+++ b/target/arm/helper-mve.h
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
24
DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
25
DEF_HELPER_FLAGS_3(mve_uqrshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
26
DEF_HELPER_FLAGS_3(mve_sqrshr, TCG_CALL_NO_RWG, i32, env, i32, i32)
27
+
28
+DEF_HELPER_FLAGS_3(mve_vcmpeqb, TCG_CALL_NO_WG, void, env, ptr, ptr)
29
+DEF_HELPER_FLAGS_3(mve_vcmpeqh, TCG_CALL_NO_WG, void, env, ptr, ptr)
30
+DEF_HELPER_FLAGS_3(mve_vcmpeqw, TCG_CALL_NO_WG, void, env, ptr, ptr)
31
+
32
+DEF_HELPER_FLAGS_3(mve_vcmpneb, TCG_CALL_NO_WG, void, env, ptr, ptr)
33
+DEF_HELPER_FLAGS_3(mve_vcmpneh, TCG_CALL_NO_WG, void, env, ptr, ptr)
34
+DEF_HELPER_FLAGS_3(mve_vcmpnew, TCG_CALL_NO_WG, void, env, ptr, ptr)
35
+
36
+DEF_HELPER_FLAGS_3(mve_vcmpcsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
37
+DEF_HELPER_FLAGS_3(mve_vcmpcsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
38
+DEF_HELPER_FLAGS_3(mve_vcmpcsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
39
+
40
+DEF_HELPER_FLAGS_3(mve_vcmphib, TCG_CALL_NO_WG, void, env, ptr, ptr)
41
+DEF_HELPER_FLAGS_3(mve_vcmphih, TCG_CALL_NO_WG, void, env, ptr, ptr)
42
+DEF_HELPER_FLAGS_3(mve_vcmphiw, TCG_CALL_NO_WG, void, env, ptr, ptr)
43
+
44
+DEF_HELPER_FLAGS_3(mve_vcmpgeb, TCG_CALL_NO_WG, void, env, ptr, ptr)
45
+DEF_HELPER_FLAGS_3(mve_vcmpgeh, TCG_CALL_NO_WG, void, env, ptr, ptr)
46
+DEF_HELPER_FLAGS_3(mve_vcmpgew, TCG_CALL_NO_WG, void, env, ptr, ptr)
47
+
48
+DEF_HELPER_FLAGS_3(mve_vcmpltb, TCG_CALL_NO_WG, void, env, ptr, ptr)
49
+DEF_HELPER_FLAGS_3(mve_vcmplth, TCG_CALL_NO_WG, void, env, ptr, ptr)
50
+DEF_HELPER_FLAGS_3(mve_vcmpltw, TCG_CALL_NO_WG, void, env, ptr, ptr)
51
+
52
+DEF_HELPER_FLAGS_3(mve_vcmpgtb, TCG_CALL_NO_WG, void, env, ptr, ptr)
53
+DEF_HELPER_FLAGS_3(mve_vcmpgth, TCG_CALL_NO_WG, void, env, ptr, ptr)
54
+DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr)
55
+
56
+DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr)
57
+DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr)
58
+DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr)
59
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/mve.decode
62
+++ b/target/arm/mve.decode
63
@@ -XXX,XX +XXX,XX @@
64
&2shift qd qm shift size
65
&vidup qd rn size imm
66
&viwdup qd rn rm size imm
67
+&vcmp qm qn size mask
68
69
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
70
# Note that both Rn and Qd are 3 bits only (no D bit)
71
@@ -XXX,XX +XXX,XX @@
72
@2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
73
size=2 shift=%rshift_i5
74
75
+# Vector comparison; 4-bit Qm but 3-bit Qn
76
+%mask_22_13 22:1 13:3
77
+@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
78
+
79
# Vector loads and stores
80
81
# Widening loads and narrowing stores:
82
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
83
}
84
85
# Predicate operations
86
-%mask_22_13 22:1 13:3
87
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
88
89
# Logical immediate operations (1 reg and modified-immediate)
90
@@ -XXX,XX +XXX,XX @@ VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
91
VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
92
93
VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
94
+
95
+# Comparisons. We expand out the conditions which are split across
96
+# encodings T1, T2, T3 and the fc bits. These include VPT, which is
97
+# effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero.
98
+VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp
99
+VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp
100
+VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
101
+VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
102
+VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
103
+VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
104
+VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
105
+VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
106
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/mve_helper.c
109
+++ b/target/arm/mve_helper.c
110
@@ -XXX,XX +XXX,XX @@ static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
111
DO_VIDUP_ALL(vidup, DO_ADD)
112
DO_VIWDUP_ALL(viwdup, do_add_wrap)
113
DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
114
+
115
+/*
116
+ * Vector comparison.
117
+ * P0 bits for non-executed beats (where eci_mask is 0) are unchanged.
118
+ * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
119
+ * P0 bits otherwise are updated with the results of the comparisons.
120
+ * We must also keep unchanged the MASK fields at the top of v7m.vpr.
121
+ */
122
+#define DO_VCMP(OP, ESIZE, TYPE, FN) \
123
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, void *vm) \
124
+ { \
125
+ TYPE *n = vn, *m = vm; \
126
+ uint16_t mask = mve_element_mask(env); \
127
+ uint16_t eci_mask = mve_eci_mask(env); \
128
+ uint16_t beatpred = 0; \
129
+ uint16_t emask = MAKE_64BIT_MASK(0, ESIZE); \
130
+ unsigned e; \
131
+ for (e = 0; e < 16 / ESIZE; e++) { \
132
+ bool r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)]); \
133
+ /* Comparison sets 0/1 bits for each byte in the element */ \
134
+ beatpred |= r * emask; \
135
+ emask <<= ESIZE; \
136
+ } \
137
+ beatpred &= mask; \
138
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | \
139
+ (beatpred & eci_mask); \
140
+ mve_advance_vpt(env); \
141
+ }
142
+
143
+#define DO_VCMP_S(OP, FN) \
144
+ DO_VCMP(OP##b, 1, int8_t, FN) \
145
+ DO_VCMP(OP##h, 2, int16_t, FN) \
146
+ DO_VCMP(OP##w, 4, int32_t, FN)
147
+
148
+#define DO_VCMP_U(OP, FN) \
149
+ DO_VCMP(OP##b, 1, uint8_t, FN) \
150
+ DO_VCMP(OP##h, 2, uint16_t, FN) \
151
+ DO_VCMP(OP##w, 4, uint32_t, FN)
152
+
153
+#define DO_EQ(N, M) ((N) == (M))
154
+#define DO_NE(N, M) ((N) != (M))
155
+#define DO_EQ(N, M) ((N) == (M))
156
+#define DO_EQ(N, M) ((N) == (M))
157
+#define DO_GE(N, M) ((N) >= (M))
158
+#define DO_LT(N, M) ((N) < (M))
159
+#define DO_GT(N, M) ((N) > (M))
160
+#define DO_LE(N, M) ((N) <= (M))
161
+
162
+DO_VCMP_U(vcmpeq, DO_EQ)
163
+DO_VCMP_U(vcmpne, DO_NE)
164
+DO_VCMP_U(vcmpcs, DO_GE)
165
+DO_VCMP_U(vcmphi, DO_GT)
166
+DO_VCMP_S(vcmpge, DO_GE)
167
+DO_VCMP_S(vcmplt, DO_LT)
168
+DO_VCMP_S(vcmpgt, DO_GT)
169
+DO_VCMP_S(vcmple, DO_LE)
170
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
171
index XXXXXXX..XXXXXXX 100644
172
--- a/target/arm/translate-mve.c
173
+++ b/target/arm/translate-mve.c
174
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
175
typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
176
typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
177
typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
178
+typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
179
180
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
181
static inline long mve_qreg_offset(unsigned reg)
182
@@ -XXX,XX +XXX,XX @@ static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a)
183
};
184
return do_viwdup(s, a, fns[a->size]);
185
}
186
+
187
+static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
188
+{
189
+ TCGv_ptr qn, qm;
190
+
191
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) ||
192
+ !fn) {
193
+ return false;
194
+ }
195
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
196
+ return true;
197
+ }
198
+
199
+ qn = mve_qreg_ptr(a->qn);
200
+ qm = mve_qreg_ptr(a->qm);
201
+ fn(cpu_env, qn, qm);
202
+ tcg_temp_free_ptr(qn);
203
+ tcg_temp_free_ptr(qm);
204
+ if (a->mask) {
205
+ /* VPT */
206
+ gen_vpst(s, a->mask);
207
+ }
208
+ mve_update_eci(s);
209
+ return true;
210
+}
211
+
212
+#define DO_VCMP(INSN, FN) \
213
+ static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \
214
+ { \
215
+ static MVEGenCmpFn * const fns[] = { \
216
+ gen_helper_mve_##FN##b, \
217
+ gen_helper_mve_##FN##h, \
218
+ gen_helper_mve_##FN##w, \
219
+ NULL, \
220
+ }; \
221
+ return do_vcmp(s, a, fns[a->size]); \
222
+ }
223
+
224
+DO_VCMP(VCMPEQ, vcmpeq)
225
+DO_VCMP(VCMPNE, vcmpne)
226
+DO_VCMP(VCMPCS, vcmpcs)
227
+DO_VCMP(VCMPHI, vcmphi)
228
+DO_VCMP(VCMPGE, vcmpge)
229
+DO_VCMP(VCMPLT, vcmplt)
230
+DO_VCMP(VCMPGT, vcmpgt)
231
+DO_VCMP(VCMPLE, vcmple)
232
--
233
2.20.1
234
235
diff view generated by jsdifflib
1
From: Havard Skinnemoen <hskinnemoen@google.com>
1
Implement the MVE integer vector comparison instructions that compare
2
2
each element against a scalar from a general purpose register. These
3
Dump the collected random data after a randomness test failure.
3
are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)"
4
4
encodings T4, T5 and T6.
5
Note that this relies on the test having called
5
6
g_test_set_nonfatal_assertions() so we don't abort immediately on the
6
We have to move the decodetree pattern for VPST, because it
7
assertion failure.
7
overlaps with VCMP T4 with size = 0b11.
8
8
9
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
[PMM: minor commit message tweak]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
---
11
---
14
tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
12
target/arm/helper-mve.h | 32 +++++++++++++++++++++++++++
15
1 file changed, 12 insertions(+)
13
target/arm/mve.decode | 18 +++++++++++++---
16
14
target/arm/mve_helper.c | 44 +++++++++++++++++++++++++++++++-------
17
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
15
target/arm/translate-mve.c | 43 +++++++++++++++++++++++++++++++++++++
18
index XXXXXXX..XXXXXXX 100644
16
4 files changed, 126 insertions(+), 11 deletions(-)
19
--- a/tests/qtest/npcm7xx_rng-test.c
17
20
+++ b/tests/qtest/npcm7xx_rng-test.c
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-mve.h
21
+++ b/target/arm/helper-mve.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr)
26
+
27
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
28
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
29
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
30
+
31
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
32
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
33
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
34
+
35
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
36
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
37
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
38
+
39
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
40
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
41
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
42
+
43
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
44
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
45
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
46
+
47
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
48
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
49
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
50
+
51
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
52
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
53
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
54
+
55
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
56
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
57
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
58
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mve.decode
61
+++ b/target/arm/mve.decode
21
@@ -XXX,XX +XXX,XX @@
62
@@ -XXX,XX +XXX,XX @@
22
63
&vidup qd rn size imm
23
#include "libqtest-single.h"
64
&viwdup qd rn rm size imm
24
#include "qemu/bitops.h"
65
&vcmp qm qn size mask
25
+#include "qemu-common.h"
66
+&vcmp_scalar qn rm size mask
26
67
27
#define RNG_BASE_ADDR 0xf000b000
68
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
28
69
# Note that both Rn and Qd are 3 bits only (no D bit)
29
@@ -XXX,XX +XXX,XX @@
70
@@ -XXX,XX +XXX,XX @@
30
/* Number of bits to collect for randomness tests. */
71
# Vector comparison; 4-bit Qm but 3-bit Qn
31
#define TEST_INPUT_BITS (128)
72
%mask_22_13 22:1 13:3
32
73
@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
33
+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
74
+@vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \
75
+ mask=%mask_22_13
76
77
# Vector loads and stores
78
79
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
80
rdahi=%rdahi rdalo=%rdalo
81
}
82
83
-# Predicate operations
84
-VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
85
-
86
# Logical immediate operations (1 reg and modified-immediate)
87
88
# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
89
@@ -XXX,XX +XXX,XX @@ VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
90
VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
91
VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
92
VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
93
+
34
+{
94
+{
35
+ if (g_test_failed()) {
95
+ VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
36
+ qemu_hexdump(stderr, "", buf, size);
96
+ VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
37
+ }
38
+}
97
+}
39
+
98
+VCMPNE_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 0 0 .... @vcmp_scalar
40
static void rng_writeb(unsigned int offset, uint8_t value)
99
+VCMPCS_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 1 0 .... @vcmp_scalar
41
{
100
+VCMPHI_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 1 0 .... @vcmp_scalar
42
writeb(RNG_BASE_ADDR + offset, value);
101
+VCMPGE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 0 0 .... @vcmp_scalar
43
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
102
+VCMPLT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 0 0 .... @vcmp_scalar
103
+VCMPGT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 1 0 .... @vcmp_scalar
104
+VCMPLE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 1 0 .... @vcmp_scalar
105
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/mve_helper.c
108
+++ b/target/arm/mve_helper.c
109
@@ -XXX,XX +XXX,XX @@ DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
110
mve_advance_vpt(env); \
44
}
111
}
45
112
46
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
113
-#define DO_VCMP_S(OP, FN) \
47
+ dump_buf_if_failed(buf, sizeof(buf));
114
- DO_VCMP(OP##b, 1, int8_t, FN) \
115
- DO_VCMP(OP##h, 2, int16_t, FN) \
116
- DO_VCMP(OP##w, 4, int32_t, FN)
117
+#define DO_VCMP_SCALAR(OP, ESIZE, TYPE, FN) \
118
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
119
+ uint32_t rm) \
120
+ { \
121
+ TYPE *n = vn; \
122
+ uint16_t mask = mve_element_mask(env); \
123
+ uint16_t eci_mask = mve_eci_mask(env); \
124
+ uint16_t beatpred = 0; \
125
+ uint16_t emask = MAKE_64BIT_MASK(0, ESIZE); \
126
+ unsigned e; \
127
+ for (e = 0; e < 16 / ESIZE; e++) { \
128
+ bool r = FN(n[H##ESIZE(e)], (TYPE)rm); \
129
+ /* Comparison sets 0/1 bits for each byte in the element */ \
130
+ beatpred |= r * emask; \
131
+ emask <<= ESIZE; \
132
+ } \
133
+ beatpred &= mask; \
134
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | \
135
+ (beatpred & eci_mask); \
136
+ mve_advance_vpt(env); \
137
+ }
138
139
-#define DO_VCMP_U(OP, FN) \
140
- DO_VCMP(OP##b, 1, uint8_t, FN) \
141
- DO_VCMP(OP##h, 2, uint16_t, FN) \
142
- DO_VCMP(OP##w, 4, uint32_t, FN)
143
+#define DO_VCMP_S(OP, FN) \
144
+ DO_VCMP(OP##b, 1, int8_t, FN) \
145
+ DO_VCMP(OP##h, 2, int16_t, FN) \
146
+ DO_VCMP(OP##w, 4, int32_t, FN) \
147
+ DO_VCMP_SCALAR(OP##_scalarb, 1, int8_t, FN) \
148
+ DO_VCMP_SCALAR(OP##_scalarh, 2, int16_t, FN) \
149
+ DO_VCMP_SCALAR(OP##_scalarw, 4, int32_t, FN)
150
+
151
+#define DO_VCMP_U(OP, FN) \
152
+ DO_VCMP(OP##b, 1, uint8_t, FN) \
153
+ DO_VCMP(OP##h, 2, uint16_t, FN) \
154
+ DO_VCMP(OP##w, 4, uint32_t, FN) \
155
+ DO_VCMP_SCALAR(OP##_scalarb, 1, uint8_t, FN) \
156
+ DO_VCMP_SCALAR(OP##_scalarh, 2, uint16_t, FN) \
157
+ DO_VCMP_SCALAR(OP##_scalarw, 4, uint32_t, FN)
158
159
#define DO_EQ(N, M) ((N) == (M))
160
#define DO_NE(N, M) ((N) != (M))
161
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/arm/translate-mve.c
164
+++ b/target/arm/translate-mve.c
165
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
166
typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
167
typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
168
typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
169
+typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
170
171
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
172
static inline long mve_qreg_offset(unsigned reg)
173
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
174
return true;
48
}
175
}
49
176
50
/*
177
+static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
51
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
178
+ MVEGenScalarCmpFn *fn)
179
+{
180
+ TCGv_ptr qn;
181
+ TCGv_i32 rm;
182
+
183
+ if (!dc_isar_feature(aa32_mve, s) || !fn || a->rm == 13) {
184
+ return false;
185
+ }
186
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
187
+ return true;
188
+ }
189
+
190
+ qn = mve_qreg_ptr(a->qn);
191
+ if (a->rm == 15) {
192
+ /* Encoding Rm=0b1111 means "constant zero" */
193
+ rm = tcg_constant_i32(0);
194
+ } else {
195
+ rm = load_reg(s, a->rm);
196
+ }
197
+ fn(cpu_env, qn, rm);
198
+ tcg_temp_free_ptr(qn);
199
+ tcg_temp_free_i32(rm);
200
+ if (a->mask) {
201
+ /* VPT */
202
+ gen_vpst(s, a->mask);
203
+ }
204
+ mve_update_eci(s);
205
+ return true;
206
+}
207
+
208
#define DO_VCMP(INSN, FN) \
209
static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \
210
{ \
211
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
212
NULL, \
213
}; \
214
return do_vcmp(s, a, fns[a->size]); \
215
+ } \
216
+ static bool trans_##INSN##_scalar(DisasContext *s, \
217
+ arg_vcmp_scalar *a) \
218
+ { \
219
+ static MVEGenScalarCmpFn * const fns[] = { \
220
+ gen_helper_mve_##FN##_scalarb, \
221
+ gen_helper_mve_##FN##_scalarh, \
222
+ gen_helper_mve_##FN##_scalarw, \
223
+ NULL, \
224
+ }; \
225
+ return do_vcmp_scalar(s, a, fns[a->size]); \
52
}
226
}
53
227
54
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
228
DO_VCMP(VCMPEQ, vcmpeq)
55
+ dump_buf_if_failed(buf.c, sizeof(buf));
56
}
57
58
/*
59
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
60
}
61
62
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
63
+ dump_buf_if_failed(buf, sizeof(buf));
64
}
65
66
/*
67
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
68
}
69
70
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
71
+ dump_buf_if_failed(buf.c, sizeof(buf));
72
}
73
74
int main(int argc, char **argv)
75
--
229
--
76
2.20.1
230
2.20.1
77
231
78
232
diff view generated by jsdifflib
1
v8.1M introduces a new TRD flag in the CCR register, which enables
1
Implement the MVE VPSEL insn, which sets each byte of the destination
2
checking for stack frame integrity signatures on SG instructions.
2
vector Qd to the byte from either Qn or Qm depending on the value of
3
This bit is not banked, and is always RAZ/WI to Non-secure code.
3
the corresponding bit in VPR.P0.
4
Adjust the code for handling CCR reads and writes to handle this.
5
4
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-23-peter.maydell@linaro.org
9
---
7
---
10
target/arm/cpu.h | 2 ++
8
target/arm/helper-mve.h | 2 ++
11
hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++--------
9
target/arm/mve.decode | 7 +++++--
12
2 files changed, 20 insertions(+), 8 deletions(-)
10
target/arm/mve_helper.c | 19 +++++++++++++++++++
11
target/arm/translate-mve.c | 2 ++
12
4 files changed, 28 insertions(+), 2 deletions(-)
13
13
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
16
--- a/target/arm/helper-mve.h
17
+++ b/target/arm/cpu.h
17
+++ b/target/arm/helper-mve.h
18
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKOFHFNMIGN, 10, 1)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
19
FIELD(V7M_CCR, DC, 16, 1)
19
DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
20
FIELD(V7M_CCR, IC, 17, 1)
20
DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
FIELD(V7M_CCR, BP, 18, 1)
21
22
+FIELD(V7M_CCR, LOB, 19, 1)
22
+DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
23
+FIELD(V7M_CCR, TRD, 20, 1)
23
+
24
24
DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
25
/* V7M SCR bits */
25
DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
26
FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
26
DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
27
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
27
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
28
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/intc/armv7m_nvic.c
29
--- a/target/arm/mve.decode
30
+++ b/hw/intc/armv7m_nvic.c
30
+++ b/target/arm/mve.decode
31
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
31
@@ -XXX,XX +XXX,XX @@ VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
32
}
32
# effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero.
33
return cpu->env.v7m.scr[attrs.secure];
33
VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp
34
case 0xd14: /* Configuration Control. */
34
VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp
35
- /* The BFHFNMIGN bit is the only non-banked bit; we
35
-VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
36
- * keep it in the non-secure copy of the register.
36
-VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
37
+ /*
37
+{
38
+ * Non-banked bits: BFHFNMIGN (stored in the NS copy of the register)
38
+ VPSEL 1111 1110 0 . 11 ... 1 ... 0 1111 . 0 . 0 ... 1 @2op_nosz
39
+ * and TRD (stored in the S copy of the register)
39
+ VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
40
*/
40
+ VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
41
val = cpu->env.v7m.ccr[attrs.secure];
41
+}
42
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
42
VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
43
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
43
VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
44
cpu->env.v7m.scr[attrs.secure] = value;
44
VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
45
break;
45
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
46
case 0xd14: /* Configuration Control. */
46
index XXXXXXX..XXXXXXX 100644
47
+ {
47
--- a/target/arm/mve_helper.c
48
+ uint32_t mask;
48
+++ b/target/arm/mve_helper.c
49
@@ -XXX,XX +XXX,XX @@ DO_VCMP_S(vcmpge, DO_GE)
50
DO_VCMP_S(vcmplt, DO_LT)
51
DO_VCMP_S(vcmpgt, DO_GT)
52
DO_VCMP_S(vcmple, DO_LE)
49
+
53
+
50
if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
54
+void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
51
goto bad_offset;
55
+{
52
}
56
+ /*
53
57
+ * Qd[n] = VPR.P0[n] ? Qn[n] : Qm[n]
54
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
58
+ * but note that whether bytes are written to Qd is still subject
55
- value &= (R_V7M_CCR_STKALIGN_MASK |
59
+ * to (all forms of) predication in the usual way.
56
- R_V7M_CCR_BFHFNMIGN_MASK |
60
+ */
57
- R_V7M_CCR_DIV_0_TRP_MASK |
61
+ uint64_t *d = vd, *n = vn, *m = vm;
58
- R_V7M_CCR_UNALIGN_TRP_MASK |
62
+ uint16_t mask = mve_element_mask(env);
59
- R_V7M_CCR_USERSETMPEND_MASK |
63
+ uint16_t p0 = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);
60
- R_V7M_CCR_NONBASETHRDENA_MASK);
64
+ unsigned e;
61
+ mask = R_V7M_CCR_STKALIGN_MASK |
65
+ for (e = 0; e < 16 / 8; e++, mask >>= 8, p0 >>= 8) {
62
+ R_V7M_CCR_BFHFNMIGN_MASK |
66
+ uint64_t r = m[H8(e)];
63
+ R_V7M_CCR_DIV_0_TRP_MASK |
67
+ mergemask(&r, n[H8(e)], p0);
64
+ R_V7M_CCR_UNALIGN_TRP_MASK |
68
+ mergemask(&d[H8(e)], r, mask);
65
+ R_V7M_CCR_USERSETMPEND_MASK |
66
+ R_V7M_CCR_NONBASETHRDENA_MASK;
67
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8_1M) && attrs.secure) {
68
+ /* TRD is always RAZ/WI from NS */
69
+ mask |= R_V7M_CCR_TRD_MASK;
70
+ }
71
+ value &= mask;
72
73
if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
74
/* v8M makes NONBASETHRDENA and STKALIGN be RES1 */
75
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
76
77
cpu->env.v7m.ccr[attrs.secure] = value;
78
break;
79
+ }
69
+ }
80
case 0xd24: /* System Handler Control and State (SHCSR) */
70
+ mve_advance_vpt(env);
81
if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
71
+}
82
goto bad_offset;
72
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/translate-mve.c
75
+++ b/target/arm/translate-mve.c
76
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
77
DO_LOGIC(VORN, gen_helper_mve_vorn)
78
DO_LOGIC(VEOR, gen_helper_mve_veor)
79
80
+DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
81
+
82
#define DO_2OP(INSN, FN) \
83
static bool trans_##INSN(DisasContext *s, arg_2op *a) \
84
{ \
83
--
85
--
84
2.20.1
86
2.20.1
85
87
86
88
diff view generated by jsdifflib
1
In v8.1M a new exception return check is added which may cause a NOCP
1
Implement the MVE VMLAS insn, which multiplies a vector by a vector
2
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
2
and adds a scalar.
3
we must check whether access to CP10 from the Security state of the
4
returning exception is disabled; if it is then we must take a fault.
5
6
(Note that for our implementation CPPWR is always RAZ/WI and so can
7
never cause CP10 accesses to fail.)
8
9
The other v8.1M change to this register-clearing code is that if MVE
10
is implemented VPR must also be cleared, so add a TODO comment to
11
that effect.
12
3
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
16
---
6
---
17
target/arm/m_helper.c | 22 +++++++++++++++++++++-
7
target/arm/helper-mve.h | 4 ++++
18
1 file changed, 21 insertions(+), 1 deletion(-)
8
target/arm/mve.decode | 3 +++
9
target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++
10
target/arm/translate-mve.c | 1 +
11
4 files changed, 34 insertions(+)
19
12
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
13
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/m_helper.c
15
--- a/target/arm/helper-mve.h
23
+++ b/target/arm/m_helper.c
16
+++ b/target/arm/helper-mve.h
24
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i3
25
v7m_exception_taken(cpu, excret, true, false);
18
DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
return;
19
DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
} else {
20
28
- /* Clear s0..s15 and FPSCR */
21
+DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
22
+DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+ /* v8.1M adds this NOCP check */
23
+DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+ bool nsacr_pass = exc_secure ||
24
+
32
+ extract32(env->v7m.nsacr, 10, 1);
25
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
33
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
26
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
34
+ if (!nsacr_pass) {
27
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
35
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
36
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
29
index XXXXXXX..XXXXXXX 100644
37
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
30
--- a/target/arm/mve.decode
38
+ "stackframe: NSACR prevents clearing FPU registers\n");
31
+++ b/target/arm/mve.decode
39
+ v7m_exception_taken(cpu, excret, true, false);
32
@@ -XXX,XX +XXX,XX @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
40
+ } else if (!cpacr_pass) {
33
VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
41
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
34
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
42
+ exc_secure);
35
43
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
36
+# The U bit (28) is don't-care because it does not affect the result
44
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
37
+VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
45
+ "stackframe: CPACR prevents clearing FPU registers\n");
38
+
46
+ v7m_exception_taken(cpu, excret, true, false);
39
# Vector add across vector
47
+ }
40
{
48
+ }
41
VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
49
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
42
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
50
int i;
43
index XXXXXXX..XXXXXXX 100644
51
44
--- a/target/arm/mve_helper.c
52
for (i = 0; i < 16; i += 2) {
45
+++ b/target/arm/mve_helper.c
46
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
47
mve_advance_vpt(env); \
48
}
49
50
+/* "accumulating" version where FN takes d as well as n and m */
51
+#define DO_2OP_ACC_SCALAR(OP, ESIZE, TYPE, FN) \
52
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
53
+ uint32_t rm) \
54
+ { \
55
+ TYPE *d = vd, *n = vn; \
56
+ TYPE m = rm; \
57
+ uint16_t mask = mve_element_mask(env); \
58
+ unsigned e; \
59
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
60
+ mergemask(&d[H##ESIZE(e)], \
61
+ FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m), mask); \
62
+ } \
63
+ mve_advance_vpt(env); \
64
+ }
65
+
66
/* provide unsigned 2-op scalar helpers for all sizes */
67
#define DO_2OP_SCALAR_U(OP, FN) \
68
DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
69
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
70
DO_2OP_SCALAR(OP##h, 2, int16_t, FN) \
71
DO_2OP_SCALAR(OP##w, 4, int32_t, FN)
72
73
+#define DO_2OP_ACC_SCALAR_U(OP, FN) \
74
+ DO_2OP_ACC_SCALAR(OP##b, 1, uint8_t, FN) \
75
+ DO_2OP_ACC_SCALAR(OP##h, 2, uint16_t, FN) \
76
+ DO_2OP_ACC_SCALAR(OP##w, 4, uint32_t, FN)
77
+
78
DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
79
DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
80
DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
81
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
82
DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
83
DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)
84
85
+/* Vector by vector plus scalar */
86
+#define DO_VMLAS(D, N, M) ((N) * (D) + (M))
87
+
88
+DO_2OP_ACC_SCALAR_U(vmlas, DO_VMLAS)
89
+
90
/*
91
* Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the
92
* input (smaller) type and LESIZE, LTYPE, LH for the output (long) type.
93
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/translate-mve.c
96
+++ b/target/arm/translate-mve.c
97
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
98
DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
99
DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
100
DO_2OP_SCALAR(VBRSR, vbrsr)
101
+DO_2OP_SCALAR(VMLAS, vmlas)
102
103
static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
104
{
53
--
105
--
54
2.20.1
106
2.20.1
55
107
56
108
diff view generated by jsdifflib
Implement the MVE instructions which perform shifts by a scalar.
These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the
shift amount in a general purpose register and shift every element in
the vector by that amount.

Mostly we can reuse the helper functions for shift-by-immediate; we
do need two new helpers for VQRSHL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 8 +++++++
 target/arm/mve.decode | 23 ++++++++++++++++---
 target/arm/mve_helper.c | 2 ++
 target/arm/translate-mve.c | 46 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vqrshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_4(mve_vshllbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vshllbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vshllbub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
&viwdup qd rn rm size imm
&vcmp qm qn size mask
&vcmp_scalar qn rm size mask
+&shl_scalar qda rm size

@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
# Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
@2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
 size=2 shift=%rshift_i5

+@shl_scalar .... .... .... size:2 .. .... .... .... rm:4 &shl_scalar qda=%qd
+
# Vector comparison; 4-bit Qm but 3-bit Qn
%mask_22_13 22:1 13:3
@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no

VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
-VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+
+{
+ VSHL_S_scalar 1110 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar
+ VRSHL_S_scalar 1110 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar
+ VQSHL_S_scalar 1110 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar
+ VQRSHL_S_scalar 1110 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar
+ VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+}
+
+{
+ VSHL_U_scalar 1111 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar
+ VRSHL_U_scalar 1111 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar
+ VQSHL_U_scalar 1111 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar
+ VQRSHL_U_scalar 1111 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar
+ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+}
+
VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
@@ -XXX,XX +XXX,XX @@ VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
 size=%size_28
}

-VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
-
VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
+DO_2SHIFT_SAT_U(vqrshli_u, DO_UQRSHL_OP)
+DO_2SHIFT_SAT_S(vqrshli_s, DO_SQRSHL_OP)

/* Shift-and-insert; we always work with 64 bits at a time */
#define DO_2SHIFT_INSERT(OP, ESIZE, SHIFTFN, MASKFN) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VRSHRI_U, vrshli_u, true)
DO_2SHIFT(VSRI, vsri, false)
DO_2SHIFT(VSLI, vsli, false)

+static bool do_2shift_scalar(DisasContext *s, arg_shl_scalar *a,
+ MVEGenTwoOpShiftFn *fn)
+{
+ TCGv_ptr qda;
+ TCGv_i32 rm;
+
+ if (!dc_isar_feature(aa32_mve, s) ||
+ !mve_check_qreg_bank(s, a->qda) ||
+ a->rm == 13 || a->rm == 15 || !fn) {
+ /* Rm cases are UNPREDICTABLE */
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ qda = mve_qreg_ptr(a->qda);
+ rm = load_reg(s, a->rm);
+ fn(cpu_env, qda, qda, rm);
+ tcg_temp_free_ptr(qda);
+ tcg_temp_free_i32(rm);
+ mve_update_eci(s);
+ return true;
+}
+
+#define DO_2SHIFT_SCALAR(INSN, FN) \
+ static bool trans_##INSN(DisasContext *s, arg_shl_scalar *a) \
+ { \
+ static MVEGenTwoOpShiftFn * const fns[] = { \
+ gen_helper_mve_##FN##b, \
+ gen_helper_mve_##FN##h, \
+ gen_helper_mve_##FN##w, \
+ NULL, \
+ }; \
+ return do_2shift_scalar(s, a, fns[a->size]); \
+ }
+
+DO_2SHIFT_SCALAR(VSHL_S_scalar, vshli_s)
+DO_2SHIFT_SCALAR(VSHL_U_scalar, vshli_u)
+DO_2SHIFT_SCALAR(VRSHL_S_scalar, vrshli_s)
+DO_2SHIFT_SCALAR(VRSHL_U_scalar, vrshli_u)
+DO_2SHIFT_SCALAR(VQSHL_S_scalar, vqshli_s)
+DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
+DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
+DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
+
#define DO_VSHLL(INSN, FN) \
 static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
 { \
--
2.20.1
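A rough standalone model of what these shifts compute per element, for the unsigned word case (hypothetical names; the assumptions that only the bottom byte of Rm is significant and that a negative count shifts right are mine, carried over from how the Neon register-shift insns behave):

    #include <stdint.h>

    /* Plain shift: negative counts shift right, out-of-range counts give 0. */
    static uint32_t vshl_u32(uint32_t e, int8_t sh)
    {
        if (sh >= 0) {
            return sh >= 32 ? 0 : e << sh;
        }
        return -sh >= 32 ? 0 : e >> -sh;
    }

    /* Rounding shift: add half an output LSB before shifting right. */
    static uint32_t vrshl_u32(uint32_t e, int8_t sh)
    {
        if (sh >= 0) {
            return vshl_u32(e, sh);
        }
        int n = -sh;
        if (n > 32) {
            return 0;
        }
        uint64_t r = (uint64_t)e + (1ULL << (n - 1));
        return (uint32_t)(r >> n);
    }

    /* Whole-vector form: every element uses the same GP-register count. */
    static void vshl_scalar_u32(uint32_t q[4], int32_t rm)
    {
        int8_t sh = (int8_t)rm;
        for (int i = 0; i < 4; i++) {
            q[i] = vshl_u32(q[i], sh);
        }
    }

The VQSHL/VQRSHL forms additionally saturate and set FPSCR.QC, which is why the patch reuses the existing saturating-shift helper macros.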
All the users of the vmlaldav formats have an 'x' bit in bit 12 and an
'a' bit in bit 5; move these to the format rather than specifying them
in each insn pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve.decode | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2

&vmlaldav rdahi rdalo size qn qm x a

-@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
+@vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
 qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
-@vmlaldav_nosz .... .... . ... ... . ... . .... .... qm:3 . \
+@vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
 qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
-VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
-VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
+VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav

-VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
+VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav

-VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
-VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
+VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz

-VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
+VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz

# Scalar operations

--
2.20.1
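The practical effect is that decodetree now fills in both fields for every pattern that uses the format. In plain C terms (an illustrative hand-rolled extract, not the generated decoder):

    #include <stdint.h>
    #include <stdbool.h>

    /* Fields the @vmlaldav format now carries for all of its users. */
    struct arg_vmlaldav_sketch {
        bool x;   /* insn bit 12 */
        bool a;   /* insn bit 5 */
    };

    static void decode_vmlaldav_fields(uint32_t insn,
                                       struct arg_vmlaldav_sketch *arg)
    {
        arg->x = (insn >> 12) & 1;
        arg->a = (insn >> 5) & 1;
    }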
Implement the MVE integer min/max across vector insns VMAXV, VMINV,
VMAXAV and VMINAV, which find the maximum or minimum across the
vector elements and a general purpose register, and store the result
back into the general purpose register.

These insns overlap with VRMLALDAVH (they use what would
be RdaHi=0b110).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 20 ++++++++++++
 target/arm/mve.decode | 18 +++++++++--
 target/arm/mve_helper.c | 66 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 48 +++++++++++++++++++++++++++
 4 files changed, 150 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vmaxvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vminvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+
DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
&vcmp qm qn size mask
&vcmp_scalar qn rm size mask
&shl_scalar qda rm size
+&vmaxv qm rda size

@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
# Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
@vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \
 mask=%mask_22_13

+@vmaxv .... .... .... size:2 .. rda:4 .... .... .... &vmaxv qm=%qm
+
# Vector loads and stores

# Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav

VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav

-VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
-VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+{
+ VMAXV_S 1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
+ VMINV_S 1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+ VMAXAV 1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv
+ VMINAV 1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv
+ VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+}
+
+{
+ VMAXV_U 1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
+ VMINV_U 1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+ VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+}

VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvub, 1, uint8_t)
DO_VADDV(vaddvuh, 2, uint16_t)
DO_VADDV(vaddvuw, 4, uint32_t)

+/*
+ * Vector max/min across vector. Unlike VADDV, we must
+ * read ra as the element size, not its full width.
+ * We work with int64_t internally for simplicity.
+ */
+#define DO_VMAXMINV(OP, ESIZE, TYPE, RATYPE, FN) \
+ uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
+ uint32_t ra_in) \
+ { \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned e; \
+ TYPE *m = vm; \
+ int64_t ra = (RATYPE)ra_in; \
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+ if (mask & 1) { \
+ ra = FN(ra, m[H##ESIZE(e)]); \
+ } \
+ } \
+ mve_advance_vpt(env); \
+ return ra; \
+ } \
+
+#define DO_VMAXMINV_U(INSN, FN) \
+ DO_VMAXMINV(INSN##b, 1, uint8_t, uint8_t, FN) \
+ DO_VMAXMINV(INSN##h, 2, uint16_t, uint16_t, FN) \
+ DO_VMAXMINV(INSN##w, 4, uint32_t, uint32_t, FN)
+#define DO_VMAXMINV_S(INSN, FN) \
+ DO_VMAXMINV(INSN##b, 1, int8_t, int8_t, FN) \
+ DO_VMAXMINV(INSN##h, 2, int16_t, int16_t, FN) \
+ DO_VMAXMINV(INSN##w, 4, int32_t, int32_t, FN)
+
+/*
+ * Helpers for max and min of absolute values across vector:
+ * note that we only take the absolute value of 'm', not 'n'
+ */
+static int64_t do_maxa(int64_t n, int64_t m)
+{
+ if (m < 0) {
+ m = -m;
+ }
+ return MAX(n, m);
+}
+
+static int64_t do_mina(int64_t n, int64_t m)
+{
+ if (m < 0) {
+ m = -m;
+ }
+ return MIN(n, m);
+}
+
+DO_VMAXMINV_S(vmaxvs, DO_MAX)
+DO_VMAXMINV_U(vmaxvu, DO_MAX)
+DO_VMAXMINV_S(vminvs, DO_MIN)
+DO_VMAXMINV_U(vminvu, DO_MIN)
+/*
+ * VMAXAV, VMINAV treat the general purpose input as unsigned
+ * and the vector elements as signed.
+ */
+DO_VMAXMINV(vmaxavb, 1, int8_t, uint8_t, do_maxa)
+DO_VMAXMINV(vmaxavh, 2, int16_t, uint16_t, do_maxa)
+DO_VMAXMINV(vmaxavw, 4, int32_t, uint32_t, do_maxa)
+DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina)
+DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina)
+DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina)
+
#define DO_VADDLV(OP, TYPE, LTYPE) \
 uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
 uint64_t ra) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_VCMP(VCMPGE, vcmpge)
DO_VCMP(VCMPLT, vcmplt)
DO_VCMP(VCMPGT, vcmpgt)
DO_VCMP(VCMPLE, vcmple)
+
+static bool do_vmaxv(DisasContext *s, arg_vmaxv *a, MVEGenVADDVFn fn)
+{
+ /*
+ * MIN/MAX operations across a vector: compute the min or
+ * max of the initial value in a general purpose register
+ * and all the elements in the vector, and store it back
+ * into the general purpose register.
+ */
+ TCGv_ptr qm;
+ TCGv_i32 rda;
+
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) ||
+ !fn || a->rda == 13 || a->rda == 15) {
+ /* Rda cases are UNPREDICTABLE */
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ qm = mve_qreg_ptr(a->qm);
+ rda = load_reg(s, a->rda);
+ fn(rda, cpu_env, qm, rda);
+ store_reg(s, a->rda, rda);
+ tcg_temp_free_ptr(qm);
+ mve_update_eci(s);
+ return true;
+}
+
+#define DO_VMAXV(INSN, FN) \
+ static bool trans_##INSN(DisasContext *s, arg_vmaxv *a) \
+ { \
+ static MVEGenVADDVFn * const fns[] = { \
+ gen_helper_mve_##FN##b, \
+ gen_helper_mve_##FN##h, \
+ gen_helper_mve_##FN##w, \
+ NULL, \
+ }; \
+ return do_vmaxv(s, a, fns[a->size]); \
+ }
+
+DO_VMAXV(VMAXV_S, vmaxvs)
+DO_VMAXV(VMAXV_U, vmaxvu)
+DO_VMAXV(VMAXAV, vmaxav)
+DO_VMAXV(VMINV_S, vminvs)
+DO_VMAXV(VMINV_U, vminvu)
+DO_VMAXV(VMINAV, vminav)
--
2.20.1
Implement the MVE VABAV insn, which computes absolute differences
between elements of two vectors and accumulates the result into
a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 7 +++++++
 target/arm/mve.decode | 6 ++++++
 target/arm/mve_helper.c | 26 +++++++++++++++++++++++
 target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 82 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)

+DEF_HELPER_FLAGS_4(mve_vabavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
&vcmp_scalar qn rm size mask
&shl_scalar qda rm size
&vmaxv qm rda size
+&vabav qn qm rda size

@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
# Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@ VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
 rdahi=%rdahi rdalo=%rdalo
}

+@vabav .... .... .. size:2 .... rda:4 .... .... .... &vabav qn=%qn qm=%qm
+
+VABAV_S 111 0 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav
+VABAV_U 111 1 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav
+
# Logical immediate operations (1 reg and modified-immediate)

# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina)
DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina)
DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina)

+#define DO_VABAV(OP, ESIZE, TYPE) \
+ uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+ void *vm, uint32_t ra) \
+ { \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned e; \
+ TYPE *m = vm, *n = vn; \
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+ if (mask & 1) { \
+ int64_t n0 = n[H##ESIZE(e)]; \
+ int64_t m0 = m[H##ESIZE(e)]; \
+ uint32_t r = n0 >= m0 ? (n0 - m0) : (m0 - n0); \
+ ra += r; \
+ } \
+ } \
+ mve_advance_vpt(env); \
+ return ra; \
+ }
+
+DO_VABAV(vabavsb, 1, int8_t)
+DO_VABAV(vabavsh, 2, int16_t)
+DO_VABAV(vabavsw, 4, int32_t)
+DO_VABAV(vabavub, 1, uint8_t)
+DO_VABAV(vabavuh, 2, uint16_t)
+DO_VABAV(vabavuw, 4, uint32_t)
+
#define DO_VADDLV(OP, TYPE, LTYPE) \
 uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
 uint64_t ra) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);

/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ DO_VMAXV(VMAXAV, vmaxav)
DO_VMAXV(VMINV_S, vminvs)
DO_VMAXV(VMINV_U, vminvu)
DO_VMAXV(VMINAV, vminav)
+
+static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn)
+{
+ /* Absolute difference accumulated across vector */
+ TCGv_ptr qn, qm;
+ TCGv_i32 rda;
+
+ if (!dc_isar_feature(aa32_mve, s) ||
+ !mve_check_qreg_bank(s, a->qm | a->qn) ||
+ !fn || a->rda == 13 || a->rda == 15) {
+ /* Rda cases are UNPREDICTABLE */
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ qm = mve_qreg_ptr(a->qm);
+ qn = mve_qreg_ptr(a->qn);
+ rda = load_reg(s, a->rda);
+ fn(rda, cpu_env, qn, qm, rda);
+ store_reg(s, a->rda, rda);
+ tcg_temp_free_ptr(qm);
+ tcg_temp_free_ptr(qn);
+ mve_update_eci(s);
+ return true;
+}
+
+#define DO_VABAV(INSN, FN) \
+ static bool trans_##INSN(DisasContext *s, arg_vabav *a) \
+ { \
+ static MVEGenVABAVFn * const fns[] = { \
+ gen_helper_mve_##FN##b, \
+ gen_helper_mve_##FN##h, \
+ gen_helper_mve_##FN##w, \
+ NULL, \
+ }; \
+ return do_vabav(s, a, fns[a->size]); \
+ }
+
+DO_VABAV(VABAV_S, vabavs)
+DO_VABAV(VABAV_U, vabavu)
--
2.20.1
Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN.
These take a double-width input, narrow it (possibly saturating) and
store the result to either the top or bottom half of the output
element.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 20 ++++++++++
 target/arm/mve.decode | 12 ++++++
 target/arm/mve_helper.c | 78 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 22 +++++++++++
 4 files changed, 132 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)

+DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmovnth, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqmovunbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovunbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovuntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovunth, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqmovnbsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovnbsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqmovnbub, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovnbuh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntub, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntuh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
 VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
 VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h

+ VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
+ VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
+
 VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
}

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
 VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
 VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h

+ VMOVNB 111 1 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
+ VQMOVN_BU 111 1 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
+
 VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
}

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
 VSHLL_TS 111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b
 VSHLL_TS 111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h

+ VQMOVUNT 111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
+ VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
+
 VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
}

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
 VSHLL_TU 111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b
 VSHLL_TU 111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h

+ VMOVNT 111 1 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
+ VQMOVN_TU 111 1 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
+
 VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
}

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)

+#define DO_VMOVN(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE) \
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+ { \
+ LTYPE *m = vm; \
+ TYPE *d = vd; \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned le; \
+ mask >>= ESIZE * TOP; \
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+ mergemask(&d[H##ESIZE(le * 2 + TOP)], \
+ m[H##LESIZE(le)], mask); \
+ } \
+ mve_advance_vpt(env); \
+ }
+
+DO_VMOVN(vmovnbb, false, 1, uint8_t, 2, uint16_t)
+DO_VMOVN(vmovnbh, false, 2, uint16_t, 4, uint32_t)
+DO_VMOVN(vmovntb, true, 1, uint8_t, 2, uint16_t)
+DO_VMOVN(vmovnth, true, 2, uint16_t, 4, uint32_t)
+
+#define DO_VMOVN_SAT(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+ { \
+ LTYPE *m = vm; \
+ TYPE *d = vd; \
+ uint16_t mask = mve_element_mask(env); \
+ bool qc = false; \
+ unsigned le; \
+ mask >>= ESIZE * TOP; \
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+ bool sat = false; \
+ TYPE r = FN(m[H##LESIZE(le)], &sat); \
+ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
+ qc |= sat & mask & 1; \
+ } \
+ if (qc) { \
+ env->vfp.qc[0] = qc; \
+ } \
+ mve_advance_vpt(env); \
+ }
+
+#define DO_VMOVN_SAT_UB(BOP, TOP, FN) \
+ DO_VMOVN_SAT(BOP, false, 1, uint8_t, 2, uint16_t, FN) \
+ DO_VMOVN_SAT(TOP, true, 1, uint8_t, 2, uint16_t, FN)
+
+#define DO_VMOVN_SAT_UH(BOP, TOP, FN) \
+ DO_VMOVN_SAT(BOP, false, 2, uint16_t, 4, uint32_t, FN) \
+ DO_VMOVN_SAT(TOP, true, 2, uint16_t, 4, uint32_t, FN)
+
+#define DO_VMOVN_SAT_SB(BOP, TOP, FN) \
+ DO_VMOVN_SAT(BOP, false, 1, int8_t, 2, int16_t, FN) \
+ DO_VMOVN_SAT(TOP, true, 1, int8_t, 2, int16_t, FN)
+
+#define DO_VMOVN_SAT_SH(BOP, TOP, FN) \
+ DO_VMOVN_SAT(BOP, false, 2, int16_t, 4, int32_t, FN) \
+ DO_VMOVN_SAT(TOP, true, 2, int16_t, 4, int32_t, FN)
+
+#define DO_VQMOVN_SB(N, SATP) \
+ do_sat_bhs((int64_t)(N), INT8_MIN, INT8_MAX, SATP)
+#define DO_VQMOVN_UB(N, SATP) \
+ do_sat_bhs((uint64_t)(N), 0, UINT8_MAX, SATP)
+#define DO_VQMOVUN_B(N, SATP) \
+ do_sat_bhs((int64_t)(N), 0, UINT8_MAX, SATP)
+
+#define DO_VQMOVN_SH(N, SATP) \
+ do_sat_bhs((int64_t)(N), INT16_MIN, INT16_MAX, SATP)
+#define DO_VQMOVN_UH(N, SATP) \
+ do_sat_bhs((uint64_t)(N), 0, UINT16_MAX, SATP)
+#define DO_VQMOVUN_H(N, SATP) \
+ do_sat_bhs((int64_t)(N), 0, UINT16_MAX, SATP)
+
+DO_VMOVN_SAT_SB(vqmovnbsb, vqmovntsb, DO_VQMOVN_SB)
+DO_VMOVN_SAT_SH(vqmovnbsh, vqmovntsh, DO_VQMOVN_SH)
+DO_VMOVN_SAT_UB(vqmovnbub, vqmovntub, DO_VQMOVN_UB)
+DO_VMOVN_SAT_UH(vqmovnbuh, vqmovntuh, DO_VQMOVN_UH)
+DO_VMOVN_SAT_SB(vqmovunbb, vqmovuntb, DO_VQMOVUN_B)
+DO_VMOVN_SAT_SH(vqmovunbh, vqmovunth, DO_VQMOVUN_H)
+
uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
 uint32_t shift)
{
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VCLS, vcls)
DO_1OP(VABS, vabs)
DO_1OP(VNEG, vneg)

+/* Narrowing moves: only size 0 and 1 are valid */
+#define DO_VMOVN(INSN, FN) \
+ static bool trans_##INSN(DisasContext *s, arg_1op *a) \
+ { \
+ static MVEGenOneOpFn * const fns[] = { \
+ gen_helper_mve_##FN##b, \
+ gen_helper_mve_##FN##h, \
+ NULL, \
+ NULL, \
+ }; \
+ return do_1op(s, a, fns[a->size]); \
+ }
+
+DO_VMOVN(VMOVNB, vmovnb)
+DO_VMOVN(VMOVNT, vmovnt)
+DO_VMOVN(VQMOVUNB, vqmovunb)
+DO_VMOVN(VQMOVUNT, vqmovunt)
+DO_VMOVN(VQMOVN_BS, vqmovnbs)
+DO_VMOVN(VQMOVN_TS, vqmovnts)
+DO_VMOVN(VQMOVN_BU, vqmovnbu)
+DO_VMOVN(VQMOVN_TU, vqmovntu)
+
static bool trans_VREV16(DisasContext *s, arg_1op *a)
{
 static MVEGenOneOpFn * const fns[] = {
--
2.20.1
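A hypothetical scalar model of the "top" signed-to-signed case, VQMOVNT.S16, with the predication and mergemask machinery stripped out (the bottom-half variants write the even bytes instead):

    #include <stdint.h>
    #include <stdbool.h>

    /* Narrow eight 16-bit lanes to saturated 8-bit values and write them
     * to the odd (top) byte of each 16-bit lane, leaving the even bytes
     * of Qd untouched; saturation sets FPSCR.QC. */
    static void vqmovnt_s16_sketch(uint8_t d[16], const int16_t m[8],
                                   bool *qc)
    {
        for (int le = 0; le < 8; le++) {
            int16_t v = m[le];
            int8_t r;
            if (v > INT8_MAX) {
                r = INT8_MAX;
                *qc = true;
            } else if (v < INT8_MIN) {
                r = INT8_MIN;
                *qc = true;
            } else {
                r = (int8_t)v;
            }
            d[le * 2 + 1] = (uint8_t)r;   /* +1 picks the top half */
        }
    }

Plain VMOVN is the same loop with the saturation test replaced by simple truncation, which is why the two share the DO_VMOVN loop structure above.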
The MVEGenDualAccOpFn is a bit misnamed, since it is used for
the "long dual accumulate" operations that use a 64-bit
accumulator. Rename it to MVEGenLongDualAccOpFn so we can
use the former name for the 32-bit accumulator insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-mve.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
-typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a)
}

static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
- MVEGenDualAccOpFn *fn)
+ MVEGenLongDualAccOpFn *fn)
{
 TCGv_ptr qn, qm;
 TCGv_i64 rda;
@@ -XXX,XX +XXX,XX @@ static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,

static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
{
- static MVEGenDualAccOpFn * const fns[4][2] = {
+ static MVEGenLongDualAccOpFn * const fns[4][2] = {
 { NULL, NULL },
 { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh },
 { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)

static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
{
- static MVEGenDualAccOpFn * const fns[4][2] = {
+ static MVEGenLongDualAccOpFn * const fns[4][2] = {
 { NULL, NULL },
 { gen_helper_mve_vmlaldavuh, NULL },
 { gen_helper_mve_vmlaldavuw, NULL },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)

static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
{
- static MVEGenDualAccOpFn * const fns[4][2] = {
+ static MVEGenLongDualAccOpFn * const fns[4][2] = {
 { NULL, NULL },
 { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh },
 { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)

static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
{
- static MVEGenDualAccOpFn * const fns[] = {
+ static MVEGenLongDualAccOpFn * const fns[] = {
 gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw,
 };
 return do_long_dual_acc(s, a, fns[a->x]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)

static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
{
- static MVEGenDualAccOpFn * const fns[] = {
+ static MVEGenLongDualAccOpFn * const fns[] = {
 gen_helper_mve_vrmlaldavhuw, NULL,
 };
 return do_long_dual_acc(s, a, fns[a->x]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)

static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
{
- static MVEGenDualAccOpFn * const fns[] = {
+ static MVEGenLongDualAccOpFn * const fns[] = {
 gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw,
 };
 return do_long_dual_acc(s, a, fns[a->x]);
--
2.20.1
Implement the MVE VMLADAV and VMLSDAV insns. Like the VMLALDAV and
VMLSLDAV insns already implemented, these accumulate multiplied
vector elements; but they accumulate a 32-bit result rather than a
64-bit one.

Note that these encodings overlap with what would be RdaHi=0b111 for
VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 17 ++++++++++
 target/arm/mve.decode | 33 +++++++++++++++++---
 target/arm/mve_helper.c | 41 ++++++++++++++++++++++++
 target/arm/translate-mve.c | 64 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 150 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)

+DEF_HELPER_FLAGS_4(mve_vmladavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vmladavsxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
%size_16 16:1 !function=plus_1

&vmlaldav rdahi rdalo size qn qm x a
+&vmladav rda size qn qm x a

@vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
 qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
@vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
 qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
-VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
-VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+@vmladav .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \
+ qn=%qn rda=%rdalo size=%size_16 &vmladav
+@vmladav_nosz .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \
+ qn=%qn rda=%rdalo size=0 &vmladav

-VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
+{
+ VMLADAV_S 1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav
+ VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+}
+{
+ VMLADAV_U 1111 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav
+ VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+}
+
+{
+ VMLSDAV 1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 1 @vmladav
+ VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
+}
+
+{
+ VMLSDAV 1111 1110 1111 ... 0 ... . 1110 . 0 . 0 ... 1 @vmladav_nosz
+ VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
+}
+
+VMLADAV_S 1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz
+VMLADAV_U 1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz

{
 VMAXV_S 1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
 VMINV_S 1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
 VMAXAV 1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv
 VMINAV 1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv
+ VMLADAV_S 1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz
 VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
}

{
 VMAXV_U 1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
 VMINV_U 1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+ VMLADAV_U 1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz
 VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
}

-VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
-
# Scalar operations

 VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)

+/*
+ * Multiply add dual accumulate ops
+ */
+#define DO_DAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC) \
+ uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+ void *vm, uint32_t a) \
+ { \
+ uint16_t mask = mve_element_mask(env); \
+ unsigned e; \
+ TYPE *n = vn, *m = vm; \
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+ if (mask & 1) { \
+ if (e & 1) { \
+ a ODDACC \
+ n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \
+ } else { \
+ a EVENACC \
+ n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \
+ } \
+ } \
+ } \
+ mve_advance_vpt(env); \
+ return a; \
+ }
+
+#define DO_DAV_S(INSN, XCHG, EVENACC, ODDACC) \
+ DO_DAV(INSN##b, 1, int8_t, XCHG, EVENACC, ODDACC) \
+ DO_DAV(INSN##h, 2, int16_t, XCHG, EVENACC, ODDACC) \
+ DO_DAV(INSN##w, 4, int32_t, XCHG, EVENACC, ODDACC)
+
+#define DO_DAV_U(INSN, XCHG, EVENACC, ODDACC) \
+ DO_DAV(INSN##b, 1, uint8_t, XCHG, EVENACC, ODDACC) \
+ DO_DAV(INSN##h, 2, uint16_t, XCHG, EVENACC, ODDACC) \
+ DO_DAV(INSN##w, 4, uint32_t, XCHG, EVENACC, ODDACC)
+
+DO_DAV_S(vmladavs, false, +=, +=)
+DO_DAV_U(vmladavu, false, +=, +=)
+DO_DAV_S(vmlsdav, false, +=, -=)
+DO_DAV_S(vmladavsx, true, +=, +=)
+DO_DAV_S(vmlsdavx, true, +=, -=)
+
/*
 * Rounding multiply add long dual accumulate high. In the pseudocode
 * this is implemented with a 72-bit internal accumulator value of which
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TC
typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
59
- * Lazy state saving affects external memory and also the NVIC,
169
typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
60
- * so we must mark it as an IO operation for icount (and cause
170
typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
61
- * this to be the last insn in the TB).
171
+typedef void MVEGenDualAccOpFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
62
- */
172
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
173
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
174
static inline long mve_qreg_offset(unsigned reg)
65
- gen_io_start();
175
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
66
- }
176
return do_long_dual_acc(s, a, fns[a->x]);
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
177
}
68
- /*
178
69
- * If the preserve_fp_state helper doesn't throw an exception
179
+static bool do_dual_acc(DisasContext *s, arg_vmladav *a, MVEGenDualAccOpFn *fn)
70
- * then it will clear LSPACT; we don't need to repeat this for
180
+{
71
- * any further FP insns in this TB.
181
+ TCGv_ptr qn, qm;
72
- */
182
+ TCGv_i32 rda;
73
- s->v7m_lspact = false;
183
+
74
- }
184
+ if (!dc_isar_feature(aa32_mve, s) ||
75
+ gen_preserve_fp_state(s);
185
+ !mve_check_qreg_bank(s, a->qn) ||
76
186
+ !fn) {
77
/* Update ownership of FP context: set FPCCR.S to match current state */
187
+ return false;
78
if (s->v8m_fpccr_s_wrong) {
188
+ }
189
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
190
+ return true;
191
+ }
192
+
193
+ qn = mve_qreg_ptr(a->qn);
194
+ qm = mve_qreg_ptr(a->qm);
195
+
196
+ /*
197
+ * This insn is subject to beat-wise execution. Partial execution
198
+ * of an A=0 (no-accumulate) insn which does not execute the first
199
+ * beat must start with the current rda value, not 0.
200
+ */
201
+ if (a->a || mve_skip_first_beat(s)) {
202
+ rda = load_reg(s, a->rda);
203
+ } else {
204
+ rda = tcg_const_i32(0);
205
+ }
206
+
207
+ fn(rda, cpu_env, qn, qm, rda);
208
+ store_reg(s, a->rda, rda);
209
+ tcg_temp_free_ptr(qn);
210
+ tcg_temp_free_ptr(qm);
211
+
212
+ mve_update_eci(s);
213
+ return true;
214
+}
215
+
216
+#define DO_DUAL_ACC(INSN, FN) \
217
+ static bool trans_##INSN(DisasContext *s, arg_vmladav *a) \
218
+ { \
219
+ static MVEGenDualAccOpFn * const fns[4][2] = { \
220
+ { gen_helper_mve_##FN##b, gen_helper_mve_##FN##xb }, \
221
+ { gen_helper_mve_##FN##h, gen_helper_mve_##FN##xh }, \
222
+ { gen_helper_mve_##FN##w, gen_helper_mve_##FN##xw }, \
223
+ { NULL, NULL }, \
224
+ }; \
225
+ return do_dual_acc(s, a, fns[a->size][a->x]); \
226
+ }
227
+
228
+DO_DUAL_ACC(VMLADAV_S, vmladavs)
229
+DO_DUAL_ACC(VMLSDAV, vmlsdav)
230
+
231
+static bool trans_VMLADAV_U(DisasContext *s, arg_vmladav *a)
232
+{
233
+ static MVEGenDualAccOpFn * const fns[4][2] = {
234
+ { gen_helper_mve_vmladavub, NULL },
235
+ { gen_helper_mve_vmladavuh, NULL },
236
+ { gen_helper_mve_vmladavuw, NULL },
237
+ { NULL, NULL },
238
+ };
239
+ return do_dual_acc(s, a, fns[a->size][a->x]);
240
+}
241
+
242
static void gen_vpst(DisasContext *s, uint32_t mask)
243
{
244
/*
79
--
245
--
80
2.20.1
246
2.20.1
81
247
82
248
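The DO_DAV macro in the VMLADAV/VMLSDAV patch above is dense; ignoring predication and beat-wise execution, its per-element semantics reduce to the following standalone C sketch (illustrative only: the function name, parameters and plain-loop structure are inventions of this note, not part of the patch):

    #include <stdint.h>

    /*
     * Scalar model of 8-bit VMLADAV/VMLSDAV. xchg swaps each even/odd
     * pair of Qn elements before the multiply (the "X" variants);
     * VMLSDAV subtracts the odd-element products instead of adding.
     */
    static uint32_t vmladav_b(const int8_t *n, const int8_t *m,
                              uint32_t acc, int xchg, int subtract_odd)
    {
        for (unsigned e = 0; e < 16; e++) {
            int32_t prod;
            if (e & 1) {
                prod = n[e - (xchg ? 1 : 0)] * m[e];
                acc += subtract_odd ? -prod : prod;
            } else {
                prod = n[e + (xchg ? 1 : 0)] * m[e];
                acc += prod;
            }
        }
        return acc;
    }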
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
like the existing FPSCR, except that it reads and writes only bits
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
permitted.)

Implement the register. Since we don't yet implement MVE, we handle
the QC bit as RES0, with todo comments for where we will need to add
support later.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
---
 target/arm/cpu.h | 13 +++++++++++++
 target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
 #define FPCR_FZ (1 << 24)    /* Flush-to-zero enable bit */
 #define FPCR_DN (1 << 25)    /* Default NaN enable bit */
 #define FPCR_QC (1 << 27)    /* Cumulative saturation bit */
+#define FPCR_V (1 << 28)     /* FP overflow flag */
+#define FPCR_C (1 << 29)     /* FP carry flag */
+#define FPCR_Z (1 << 30)     /* FP zero flag */
+#define FPCR_N (1 << 31)     /* FP negative flag */
+
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)

 static inline uint32_t vfp_get_fpsr(CPUARMState *env)
 {
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
 #define ARM_VFP_FPEXC 8
 #define ARM_VFP_FPINST 9
 #define ARM_VFP_FPINST2 10
+/* These ones are M-profile only */
+#define ARM_VFP_FPSCR_NZCVQC 2
+#define ARM_VFP_VPR 12
+#define ARM_VFP_P0 13
+#define ARM_VFP_FPCXT_NS 14
+#define ARM_VFP_FPCXT_S 15

 /* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
 #define QEMU_VFP_FPSCR_NZCV 0xffff
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
     case ARM_VFP_FPSCR:
     case QEMU_VFP_FPSCR_NZCV:
         break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return false;
+        }
+        break;
     default:
         return FPSysRegCheckFailed;
     }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
         tcg_temp_free_i32(tmp);
         gen_lookup_tb(s);
         break;
+    case ARM_VFP_FPSCR_NZCVQC:
+    {
+        TCGv_i32 fpscr;
+        tmp = loadfn(s, opaque);
+        /*
+         * TODO: when we implement MVE, write the QC bit.
+         * For non-MVE, QC is RES0.
+         */
+        tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
+        fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
+        tcg_gen_or_i32(fpscr, fpscr, tmp);
+        store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
+        tcg_temp_free_i32(tmp);
+        break;
+    }
     default:
         g_assert_not_reached();
     }
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
         gen_helper_vfp_get_fpscr(tmp, cpu_env);
         storefn(s, opaque, tmp);
         break;
+    case ARM_VFP_FPSCR_NZCVQC:
+        /*
+         * TODO: MVE has a QC bit, which we probably won't store
+         * in the xregs[] field. For non-MVE, where QC is RES0,
+         * we can just fall through to the FPSCR_NZCV case.
+         */
     case QEMU_VFP_FPSCR_NZCV:
         /*
          * Read just NZCV; this is a special case to avoid the
--
2.20.1

Implement the MVE VMLA insn, which multiplies a vector by a scalar
and accumulates into another vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 4 ++++
 target/arm/mve.decode | 1 +
 target/arm/mve_helper.c | 5 +++++
 target/arm/translate-mve.c | 1 +
 4 files changed, 11 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i3
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vmlab, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlah, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlaw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

 # The U bit (28) is don't-care because it does not affect the result
+VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar
 VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar

 # Vector add across vector
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+/* Vector by scalar plus vector */
+#define DO_VMLA(D, N, M) ((N) * (M) + (D))
+
+DO_2OP_ACC_SCALAR_U(vmla, DO_VMLA)
+
 /* Vector by vector plus scalar */
 #define DO_VMLAS(D, N, M) ((N) * (D) + (M))

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
 DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
 DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
+DO_2OP_SCALAR(VMLA, vmla)
 DO_2OP_SCALAR(VMLAS, vmlas)

 static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
--
2.20.1
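As a reference for the FPSCR_nzcvqc handling above: on a core without MVE the QC bit is RES0, so a write to the new register only ever changes the four flag bits [31:28] of the stored FPSCR. A minimal C sketch of that masking (names invented here for illustration, not the QEMU API):

    #include <stdint.h>

    #define FPCR_NZCV_MASK (0xfu << 28)   /* N, Z, C, V */

    /* Model of an FPSCR_nzcvqc write when QC is RES0. */
    static uint32_t write_fpscr_nzcvqc(uint32_t fpscr, uint32_t val)
    {
        return (fpscr & ~FPCR_NZCV_MASK) | (val & FPCR_NZCV_MASK);
    }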
From: Vikram Garhwal <fnu.vikram@xilinx.com>

The QTests perform five tests on the Xilinx ZynqMP CAN controller:
they exercise the controller in loopback, sleep and snoop modes,
and they test filtering of incoming CAN messages.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++
 tests/qtest/meson.build | 1 +
 2 files changed, 361 insertions(+)
 create mode 100644 tests/qtest/xlnx-can-test.c

diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/qtest/xlnx-can-test.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QTests for the Xilinx ZynqMP CAN controller.
+ *
+ * Copyright (c) 2020 Xilinx Inc.
+ *
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "libqos/libqtest.h"
+
+/* Base address. */
+#define CAN0_BASE_ADDR 0xFF060000
+#define CAN1_BASE_ADDR 0xFF070000
+
+/* Register addresses. */
+#define R_SRR_OFFSET 0x00
+#define R_MSR_OFFSET 0x04
+#define R_SR_OFFSET 0x18
+#define R_ISR_OFFSET 0x1C
+#define R_ICR_OFFSET 0x24
+#define R_TXID_OFFSET 0x30
+#define R_TXDLC_OFFSET 0x34
+#define R_TXDATA1_OFFSET 0x38
+#define R_TXDATA2_OFFSET 0x3C
+#define R_RXID_OFFSET 0x50
+#define R_RXDLC_OFFSET 0x54
+#define R_RXDATA1_OFFSET 0x58
+#define R_RXDATA2_OFFSET 0x5C
+#define R_AFR 0x60
+#define R_AFMR1 0x64
+#define R_AFIR1 0x68
+#define R_AFMR2 0x6C
+#define R_AFIR2 0x70
+#define R_AFMR3 0x74
+#define R_AFIR3 0x78
+#define R_AFMR4 0x7C
+#define R_AFIR4 0x80
+
+/* CAN modes. */
+#define CONFIG_MODE 0x00
+#define NORMAL_MODE 0x00
+#define LOOPBACK_MODE 0x02
+#define SNOOP_MODE 0x04
+#define SLEEP_MODE 0x01
+#define ENABLE_CAN (1 << 1)
+#define STATUS_NORMAL_MODE (1 << 3)
+#define STATUS_LOOPBACK_MODE (1 << 1)
+#define STATUS_SNOOP_MODE (1 << 12)
+#define STATUS_SLEEP_MODE (1 << 2)
+#define ISR_TXOK (1 << 1)
+#define ISR_RXOK (1 << 4)
+
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
+                             uint8_t can_timestamp)
+{
+    uint16_t size = 0;
+    uint8_t len = 4;
+
+    while (size < len) {
+        if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
+            g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
+        } else {
+            g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
+        }
+
+        size++;
+    }
+}
+
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
+{
+    uint32_t int_status;
+
+    /* Read the interrupt on CAN rx. */
+    int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
+
+    g_assert_cmpint(int_status, ==, ISR_RXOK);
+
+    /* Read the RX register data for CAN. */
+    buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
+    buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
+    buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
+    buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
+
+    /* Clear the RX interrupt. */
+    qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
+}
+
+static void send_data(QTestState *qts, uint64_t can_base_addr,
+                      const uint32_t *buf_tx)
+{
+    uint32_t int_status;
+
+    /* Write the TX register data for CAN. */
+    qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
+    qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
+    qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
+    qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
+
+    /* Read the interrupt on CAN for tx. */
+    int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
+
+    g_assert_cmpint(int_status, ==, ISR_TXOK);
+
+    /* Clear the interrupt for tx. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
+}
+
+/*
+ * This test transfers data between CAN0 and CAN1 through the CAN bus.
+ * CAN0 initiates the transfer and CAN1 receives the data. The test
+ * compares the data sent from CAN0 with the data received on CAN1.
+ */
+static void test_can_bus(void)
+{
+    const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0 and CAN1. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 and CAN1 are in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    send_data(qts, CAN0_BASE_ADDR, buf_tx);
+
+    read_data(qts, CAN1_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+/*
+ * This test exercises loopback mode on CAN0 and CAN1. Data sent from
+ * the TX of each controller is compared with the RX register data of
+ * the same controller.
+ */
+static void test_can_loopback(void)
+{
+    uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0 in loopback mode. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    /* Check here if CAN0 is set in loopback mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+
+    g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
+
+    send_data(qts, CAN0_BASE_ADDR, buf_tx);
+    read_data(qts, CAN0_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, 0);
+
+    /* Configure the CAN1 in loopback mode. */
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    /* Check here if CAN1 is set in loopback mode. */
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+
+    g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
+
+    send_data(qts, CAN1_BASE_ADDR, buf_tx);
+    read_data(qts, CAN1_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, 0);
+
+    qtest_quit(qts);
+}
+
+/*
+ * Enable filters for CAN1. These filter incoming messages by ID. In
+ * this test the message passes through filter 2.
+ */
+static void test_can_filter(void)
+{
+    uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0 and CAN1. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 and CAN1 are in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    /* Set filter for CAN1 for incoming messages. */
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
+
+    qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
+
+    send_data(qts, CAN0_BASE_ADDR, buf_tx);
+
+    read_data(qts, CAN1_BASE_ADDR, buf_rx);
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
+static void test_can_sleepmode(void)
+{
+    uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    send_data(qts, CAN1_BASE_ADDR, buf_tx);
+
+    /*
+     * Once CAN1 sends data on the bus, CAN0 should exit sleep mode.
+     * Check the CAN0 status now: it should have left sleep mode and
+     * received the incoming data.
+     */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    read_data(qts, CAN0_BASE_ADDR, buf_rx);
+
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
+static void test_can_snoopmode(void)
+{
+    uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
+    uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
+    uint32_t status = 0;
+    uint8_t can_timestamp = 1;
+
+    QTestState *qts = qtest_init("-machine xlnx-zcu102"
+                                 " -object can-bus,id=canbus0"
+                                 " -machine xlnx-zcu102.canbus0=canbus0"
+                                 " -machine xlnx-zcu102.canbus1=canbus0"
+                                 );
+
+    /* Configure the CAN0. */
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
+    qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+
+    qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
+    qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
+
+    /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
+    status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
+
+    status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
+    g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
+
+    send_data(qts, CAN1_BASE_ADDR, buf_tx);
+
+    read_data(qts, CAN0_BASE_ADDR, buf_rx);
+
+    match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
+
+    qtest_quit(qts);
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    qtest_add_func("/net/can/can_bus", test_can_bus);
+    qtest_add_func("/net/can/can_loopback", test_can_loopback);
+    qtest_add_func("/net/can/can_filter", test_can_filter);
+    qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
+    qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
+
+    return g_test_run();
+}
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
   ['arm-cpu-features',
    'numa-test',
    'boot-serial-test',
+   'xlnx-can-test',
    'migration-test']

 qtests_s390x = \
--
2.20.1

Implement the MVE saturating doubling multiply accumulate insns
VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a multiply,
double, add the accumulator shifted by the element size, possibly
round, saturate to twice the element size, then take the high half of
the result. The *MLAH insns do vector * scalar + vector, and the
*MLASH insns do vector * vector + scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 16 +++++++
 target/arm/mve.decode | 5 ++
 target/arm/mve_helper.c | 95 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 4 ++
 4 files changed, 120 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vqdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar
 VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar

+VQRDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 100 .... @2scalar
+VQRDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 100 .... @2scalar
+VQDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 110 .... @2scalar
+VQDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 110 .... @2scalar
+
 # Vector add across vector
 {
   VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
         mve_advance_vpt(env); \
     }

+#define DO_2OP_SAT_ACC_SCALAR(OP, ESIZE, TYPE, FN) \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm) \
+    { \
+        TYPE *d = vd, *n = vn; \
+        TYPE m = rm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        bool qc = false; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            bool sat = false; \
+            mergemask(&d[H##ESIZE(e)], \
+                      FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m, &sat), \
+                      mask); \
+            qc |= sat & mask & 1; \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
 /* provide unsigned 2-op scalar helpers for all sizes */
 #define DO_2OP_SCALAR_U(OP, FN) \
     DO_2OP_SCALAR(OP##b, 1, uint8_t, FN) \
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+static int8_t do_vqdmlah_b(int8_t a, int8_t b, int8_t c, int round, bool *sat)
+{
+    int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 8) + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmlah_h(int16_t a, int16_t b, int16_t c,
+                            int round, bool *sat)
+{
+    int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 16) + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmlah_w(int32_t a, int32_t b, int32_t c,
+                            int round, bool *sat)
+{
+    /*
+     * Architecturally we should do the entire add, double, round
+     * and then check for saturation. We do three saturating adds,
+     * but we need to be careful about the order. If the first
+     * m1 + m2 saturates then it's impossible for the *2+rc to
+     * bring it back into the non-saturated range. However, if
+     * m1 + m2 is negative then it's possible that doing the doubling
+     * would take the intermediate result below INT64_MAX and the
+     * addition of the rounding constant then brings it back in range.
+     * So we add half the rounding constant and half the "c << esize"
+     * before doubling rather than adding the rounding constant after
+     * the doubling.
+     */
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c << 31;
+    int64_t r;
+    if (sadd64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
+/*
+ * The *MLAH insns are vector * scalar + vector;
+ * the *MLASH insns are vector * vector + scalar
+ */
+#define DO_VQDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 0, S)
+#define DO_VQDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 0, S)
+#define DO_VQDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 0, S)
+#define DO_VQRDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 1, S)
+#define DO_VQRDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 1, S)
+#define DO_VQRDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 1, S)
+
+#define DO_VQDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 0, S)
+#define DO_VQDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 0, S)
+#define DO_VQDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 0, S)
+#define DO_VQRDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 1, S)
+#define DO_VQRDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 1, S)
+#define DO_VQRDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 1, S)
+
+DO_2OP_SAT_ACC_SCALAR(vqdmlahb, 1, int8_t, DO_VQDMLAH_B)
+DO_2OP_SAT_ACC_SCALAR(vqdmlahh, 2, int16_t, DO_VQDMLAH_H)
+DO_2OP_SAT_ACC_SCALAR(vqdmlahw, 4, int32_t, DO_VQDMLAH_W)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahb, 1, int8_t, DO_VQRDMLAH_B)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahh, 2, int16_t, DO_VQRDMLAH_H)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahw, 4, int32_t, DO_VQRDMLAH_W)
+
+DO_2OP_SAT_ACC_SCALAR(vqdmlashb, 1, int8_t, DO_VQDMLASH_B)
+DO_2OP_SAT_ACC_SCALAR(vqdmlashh, 2, int16_t, DO_VQDMLASH_H)
+DO_2OP_SAT_ACC_SCALAR(vqdmlashw, 4, int32_t, DO_VQDMLASH_W)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashb, 1, int8_t, DO_VQRDMLASH_B)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashh, 2, int16_t, DO_VQRDMLASH_H)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashw, 4, int32_t, DO_VQRDMLASH_W)
+
 /* Vector by scalar plus vector */
 #define DO_VMLA(D, N, M) ((N) * (M) + (D))

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
 DO_2OP_SCALAR(VMLA, vmla)
 DO_2OP_SCALAR(VMLAS, vmlas)
+DO_2OP_SCALAR(VQDMLAH, vqdmlah)
+DO_2OP_SCALAR(VQRDMLAH, vqrdmlah)
+DO_2OP_SCALAR(VQDMLASH, vqdmlash)
+DO_2OP_SCALAR(VQRDMLASH, vqrdmlash)

 static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
 {
--
2.20.1
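The ordering subtlety in do_vqdmlah_w() above is worth spelling out: if the rounding constant were added after the doubling, an intermediate value that transiently overflowed during the doubling could be pulled back into range and the saturation missed. A self-contained C sketch of the same trick, using the GCC/clang __builtin_add_overflow() as a stand-in for QEMU's sadd64_overflow() (that substitution and the function name are assumptions of this note, not the patch):

    #include <stdint.h>
    #include <stdbool.h>

    static int32_t vqrdmlah_w_model(int32_t a, int32_t b, int32_t c, bool *sat)
    {
        int64_t m1 = (int64_t)a * b;        /* the product */
        int64_t m2 = (int64_t)c << 31;      /* half of c << 32 */
        int64_t r;
        if (__builtin_add_overflow(m1, m2, &r) ||
            __builtin_add_overflow(r, (int64_t)1 << 30, &r) || /* half rounding */
            __builtin_add_overflow(r, r, &r)) {                /* the doubling */
            *sat = true;
            return r < 0 ? INT32_MAX : INT32_MIN;
        }
        return r >> 32;
    }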
v8.1M adds new encodings of VLLDM and VLSTM (where bit 7 is set).
The only difference is that:
 * the old T1 encodings UNDEF if the implementation implements 32
   Dregs (this is currently architecturally impossible for M-profile)
 * the new T2 encodings have the implementation-defined option to
   read from memory (discarding the data) or write UNKNOWN values to
   memory for the stack slots that would be D16-D31

We choose not to make those accesses, so for us the two
instructions behave identically assuming they don't UNDEF.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-21-peter.maydell@linaro.org
---
 target/arm/m-nocp.decode | 2 +-
 target/arm/translate-vfp.c.inc | 25 +++++++++++++++++++++++++
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m-nocp.decode
+++ b/target/arm/m-nocp.decode
@@ -XXX,XX +XXX,XX @@

 {
   # Special cases which do not take an early NOCP: VLLDM and VLSTM
-  VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
+  VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
   # VSCCLRM (new in v8.1M) is similar:
   VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
   VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
         !arm_dc_feature(s, ARM_FEATURE_V8)) {
         return false;
     }
+
+    if (a->op) {
+        /*
+         * T2 encoding ({D0-D31} reglist): v8.1M and up. We choose not
+         * to take the IMPDEF option to make memory accesses to the stack
+         * slots that correspond to the D16-D31 registers (discarding
+         * read data and writing UNKNOWN values), so for us the T2
+         * encoding behaves identically to the T1 encoding.
+         */
+        if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+            return false;
+        }
+    } else {
+        /*
+         * T1 encoding ({D0-D15} reglist); undef if we have 32 Dregs.
+         * This is currently architecturally impossible, but we add the
+         * check to stay in line with the pseudocode. Note that we must
+         * emit code for the UNDEF so it takes precedence over the NOCP.
+         */
+        if (dc_isar_feature(aa32_simd_r32, s)) {
+            unallocated_encoding(s);
+            return true;
+        }
+    }
+
     /*
      * If not secure, UNDEF. We must emit code for this
      * rather than returning false so that this takes
--
2.20.1

Implement the MVE 1-operand saturating operations VQABS and VQNEG.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 50 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)

+DEF_HELPER_FLAGS_3(mve_vqabsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
 DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
 VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
 VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op

+VQABS 1111 1111 1 . 11 .. 00 ... 0 0111 01 . 0 ... 0 @1op
+VQNEG 1111 1111 1 . 11 .. 00 ... 0 0111 11 . 0 ... 0 @1op
+
 &vdup qd rt size
 # Qd is in the fields usually named Qn
 @vdup .... .... . . .. ... . rt:4 .... . . . . .... qd=%qn &vdup
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
     }
     mve_advance_vpt(env);
 }
+
+#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+    { \
+        TYPE *d = vd, *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        bool qc = false; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            bool sat = false; \
+            mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)], &sat), mask); \
+            qc |= sat & mask & 1; \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+#define DO_VQABS_B(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT8_MIN, INT8_MAX, SATP)
+#define DO_VQABS_H(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT16_MIN, INT16_MAX, SATP)
+#define DO_VQABS_W(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT32_MIN, INT32_MAX, SATP)
+
+#define DO_VQNEG_B(N, SATP) do_sat_bhs(-(int64_t)N, INT8_MIN, INT8_MAX, SATP)
+#define DO_VQNEG_H(N, SATP) do_sat_bhs(-(int64_t)N, INT16_MIN, INT16_MAX, SATP)
+#define DO_VQNEG_W(N, SATP) do_sat_bhs(-(int64_t)N, INT32_MIN, INT32_MAX, SATP)
+
+DO_1OP_SAT(vqabsb, 1, int8_t, DO_VQABS_B)
+DO_1OP_SAT(vqabsh, 2, int16_t, DO_VQABS_H)
+DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W)
+
+DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B)
+DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H)
+DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
 DO_1OP(VABS, vabs)
 DO_1OP(VNEG, vneg)
+DO_1OP(VQABS, vqabs)
+DO_1OP(VQNEG, vqneg)

 /* Narrowing moves: only size 0 and 1 are valid */
 #define DO_VMOVN(INSN, FN) \
--
2.20.1
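For the VQABS/VQNEG semantics above, the only interesting element value is the most negative one, where plain negation would overflow the element type. A per-element C sketch for the 8-bit case (the function name is invented for illustration; it is not the helper in the patch):

    #include <stdint.h>
    #include <stdbool.h>

    /* Saturating abs: INT8_MIN clamps to INT8_MAX and sets sticky QC. */
    static int8_t vqabs_b_model(int8_t n, bool *qc)
    {
        int64_t r = n < 0 ? -(int64_t)n : n;   /* widen before negating */
        if (r > INT8_MAX) {
            *qc = true;
            return INT8_MAX;
        }
        return r;
    }

VQNEG is the same with r = -(int64_t)n taken unconditionally.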
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
---
 target/arm/translate-vfp.c.inc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
          * Accesses to R15 are UNPREDICTABLE; we choose to undef.
          * (FPSCR -> r15 is a special case which writes to the PSR flags.)
          */
-        if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
+        if (a->reg != ARM_VFP_FPSCR) {
+            return false;
+        }
+        if (a->rt == 15 && !a->l) {
             return false;
         }
     }
--
2.20.1

Implement the MVE VMAXA and VMINA insns, which take the absolute
value of the signed elements in the input vector and then accumulate
the unsigned max or min into the destination vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 4 ++++
 target/arm/mve_helper.c | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 40 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)

+DEF_HELPER_FLAGS_3(mve_vmaxab, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmaxah, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmaxaw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vminab, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vminah, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vminaw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
 DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
   VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
   VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op

+  VMAXA 111 0 1110 0 . 11 .. 11 ... 0 1110 1 0 . 0 ... 1 @1op
+
   VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
 }

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
   VQMOVUNT 111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
   VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op

+  VMINA 111 0 1110 0 . 11 .. 11 ... 1 1110 1 0 . 0 ... 1 @1op
+
   VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
 }

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W)
 DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B)
 DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H)
 DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W)
+
+/*
+ * VMAXA, VMINA: vd is unsigned; vm is signed, and we take its
+ * absolute value; we then do an unsigned comparison.
+ */
+#define DO_VMAXMINA(OP, ESIZE, STYPE, UTYPE, FN) \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+    { \
+        UTYPE *d = vd; \
+        STYPE *m = vm; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            UTYPE r = DO_ABS(m[H##ESIZE(e)]); \
+            r = FN(d[H##ESIZE(e)], r); \
+            mergemask(&d[H##ESIZE(e)], r, mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+DO_VMAXMINA(vmaxab, 1, int8_t, uint8_t, DO_MAX)
+DO_VMAXMINA(vmaxah, 2, int16_t, uint16_t, DO_MAX)
+DO_VMAXMINA(vmaxaw, 4, int32_t, uint32_t, DO_MAX)
+DO_VMAXMINA(vminab, 1, int8_t, uint8_t, DO_MIN)
+DO_VMAXMINA(vminah, 2, int16_t, uint16_t, DO_MIN)
+DO_VMAXMINA(vminaw, 4, int32_t, uint32_t, DO_MIN)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VABS, vabs)
 DO_1OP(VNEG, vneg)
 DO_1OP(VQABS, vqabs)
 DO_1OP(VQNEG, vqneg)
+DO_1OP(VMAXA, vmaxa)
+DO_1OP(VMINA, vmina)

 /* Narrowing moves: only size 0 and 1 are valid */
 #define DO_VMOVN(INSN, FN) \
--
2.20.1
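The VMAXA/VMINA semantics above fit in a couple of lines of C per element; note that because the destination is unsigned, abs(INT8_MIN) == 128 still fits in the element and no saturation is needed. A sketch only, with an invented name (this is not the helper from the patch):

    #include <stdint.h>

    /* VMAXA on one 8-bit element: unsigned max of d and |m|. */
    static uint8_t vmaxa_b_model(uint8_t d, int8_t m)
    {
        uint8_t r = m < 0 ? (uint8_t)(-(int32_t)m) : (uint8_t)m;
        return d > r ? d : r;
    }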
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
Implement the MVE VMOV forms that move data between 2 general-purpose
2
registers and 2 32-bit lanes in a vector register.
2
3
3
The Xilinx ZynqMP CAN controller is developed based on SocketCAN, QEMU CAN bus
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
implementation. Bus connection and socketCAN connection for each CAN module
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
can be set through command lines.
6
---
7
target/arm/translate-a32.h | 1 +
8
target/arm/mve.decode | 4 ++
9
target/arm/translate-mve.c | 85 ++++++++++++++++++++++++++++++++++++++
10
target/arm/translate-vfp.c | 2 +-
11
4 files changed, 91 insertions(+), 1 deletion(-)
6
12
7
Example for using single CAN:
13
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
8
-object can-bus,id=canbus0 \
9
-machine xlnx-zcu102.canbus0=canbus0 \
10
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0
11
12
Example for connecting both CAN to same virtual CAN on host machine:
13
-object can-bus,id=canbus0 -object can-bus,id=canbus1 \
14
-machine xlnx-zcu102.canbus0=canbus0 \
15
-machine xlnx-zcu102.canbus1=canbus1 \
16
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
17
-object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1
18
19
To create virtual CAN on the host machine, please check the QEMU CAN docs:
20
https://github.com/qemu/qemu/blob/master/docs/can.txt
21
22
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
23
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
27
meson.build | 1 +
28
hw/net/can/trace.h | 1 +
29
include/hw/net/xlnx-zynqmp-can.h | 78 ++
30
hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++
31
hw/Kconfig | 1 +
32
hw/net/can/meson.build | 1 +
33
hw/net/can/trace-events | 9 +
34
7 files changed, 1252 insertions(+)
35
create mode 100644 hw/net/can/trace.h
36
create mode 100644 include/hw/net/xlnx-zynqmp-can.h
37
create mode 100644 hw/net/can/xlnx-zynqmp-can.c
38
create mode 100644 hw/net/can/trace-events
39
40
diff --git a/meson.build b/meson.build
41
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
42
--- a/meson.build
15
--- a/target/arm/translate-a32.h
43
+++ b/meson.build
16
+++ b/target/arm/translate-a32.h
44
@@ -XXX,XX +XXX,XX @@ if have_system
17
@@ -XXX,XX +XXX,XX @@ void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
45
'hw/misc',
18
void clear_eci_state(DisasContext *s);
46
'hw/misc/macio',
19
bool mve_eci_check(DisasContext *s);
47
'hw/net',
20
void mve_update_and_store_eci(DisasContext *s);
48
+ 'hw/net/can',
21
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size);
49
'hw/nvram',
22
50
'hw/pci',
23
static inline TCGv_i32 load_cpu_offset(int offset)
51
'hw/pci-host',
24
{
52
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
25
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
53
new file mode 100644
26
index XXXXXXX..XXXXXXX 100644
54
index XXXXXXX..XXXXXXX
27
--- a/target/arm/mve.decode
55
--- /dev/null
28
+++ b/target/arm/mve.decode
56
+++ b/hw/net/can/trace.h
29
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
57
@@ -0,0 +1 @@
30
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
58
+#include "trace/trace-hw_net_can.h"
31
size=2 p=1
59
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
32
60
new file mode 100644
33
+# Moves between 2 32-bit vector lanes and 2 general purpose registers
61
index XXXXXXX..XXXXXXX
34
+VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
62
--- /dev/null
35
+VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
63
+++ b/include/hw/net/xlnx-zynqmp-can.h
64
@@ -XXX,XX +XXX,XX @@
65
+/*
66
+ * QEMU model of the Xilinx ZynqMP CAN controller.
67
+ *
68
+ * Copyright (c) 2020 Xilinx Inc.
69
+ *
70
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
71
+ *
72
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
73
+ * Pavel Pisa.
74
+ *
75
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
76
+ * of this software and associated documentation files (the "Software"), to deal
77
+ * in the Software without restriction, including without limitation the rights
78
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
79
+ * copies of the Software, and to permit persons to whom the Software is
80
+ * furnished to do so, subject to the following conditions:
81
+ *
82
+ * The above copyright notice and this permission notice shall be included in
83
+ * all copies or substantial portions of the Software.
84
+ *
85
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
86
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
87
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
88
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
89
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
90
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
91
+ * THE SOFTWARE.
92
+ */
93
+
36
+
94
+#ifndef XLNX_ZYNQMP_CAN_H
37
# Vector 2-op
95
+#define XLNX_ZYNQMP_CAN_H
38
VAND 1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
39
VBIC 1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
40
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/translate-mve.c
43
+++ b/target/arm/translate-mve.c
44
@@ -XXX,XX +XXX,XX @@ static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn)
45
46
DO_VABAV(VABAV_S, vabavs)
47
DO_VABAV(VABAV_U, vabavu)
96
+
48
+
97
+#include "hw/register.h"
49
+static bool trans_VMOV_to_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
98
+#include "net/can_emu.h"
50
+{
99
+#include "net/can_host.h"
51
+ /*
100
+#include "qemu/fifo32.h"
52
+ * VMOV two 32-bit vector lanes to two general-purpose registers.
101
+#include "hw/ptimer.h"
53
+ * This insn is not predicated but it is subject to beat-wise
102
+#include "hw/qdev-clock.h"
54
+ * execution if it is not in an IT block. For us this means
55
+ * only that if PSR.ECI says we should not be executing the beat
56
+ * corresponding to the lane of the vector register being accessed
57
+ * then we should skip perfoming the move, and that we need to do
58
+ * the usual check for bad ECI state and advance of ECI state.
59
+ * (If PSR.ECI is non-zero then we cannot be in an IT block.)
60
+ */
61
+ TCGv_i32 tmp;
62
+ int vd;
103
+
63
+
104
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
64
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
105
+
65
+ a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15 ||
106
+#define XLNX_ZYNQMP_CAN(obj) \
66
+ a->rt == a->rt2) {
107
+ OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
67
+ /* Rt/Rt2 cases are UNPREDICTABLE */
108
+
68
+ return false;
109
+#define MAX_CAN_CTRLS 2
69
+ }
110
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
70
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
111
+#define MAILBOX_CAPACITY 64
71
+ return true;
112
+#define CAN_TIMER_MAX 0XFFFFUL
113
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
114
+
115
+/* Each CAN frame takes up 4 * 32-bit words. */
116
+#define CAN_FRAME_SIZE 4
117
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
118
+
119
+typedef struct XlnxZynqMPCANState {
120
+ SysBusDevice parent_obj;
121
+ MemoryRegion iomem;
122
+
123
+ qemu_irq irq;
124
+
125
+ CanBusClientState bus_client;
126
+ CanBusState *canbus;
127
+
128
+ struct {
129
+ uint32_t ext_clk_freq;
130
+ } cfg;
131
+
132
+ RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
133
+ uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
134
+
135
+ Fifo32 rx_fifo;
136
+ Fifo32 tx_fifo;
137
+ Fifo32 txhpb_fifo;
138
+
139
+ ptimer_state *can_timer;
140
+} XlnxZynqMPCANState;
141
+
142
+#endif
143
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
144
new file mode 100644
145
index XXXXXXX..XXXXXXX
146
--- /dev/null
147
+++ b/hw/net/can/xlnx-zynqmp-can.c
148
@@ -XXX,XX +XXX,XX @@
149
+/*
150
+ * QEMU model of the Xilinx ZynqMP CAN controller.
151
+ * This implementation is based on the following datasheet:
152
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
153
+ *
154
+ * Copyright (c) 2020 Xilinx Inc.
155
+ *
156
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
157
+ *
158
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
159
+ * Pavel Pisa
160
+ *
161
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
162
+ * of this software and associated documentation files (the "Software"), to deal
163
+ * in the Software without restriction, including without limitation the rights
164
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
165
+ * copies of the Software, and to permit persons to whom the Software is
166
+ * furnished to do so, subject to the following conditions:
167
+ *
168
+ * The above copyright notice and this permission notice shall be included in
169
+ * all copies or substantial portions of the Software.
170
+ *
171
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
172
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
173
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
174
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
175
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
176
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
177
+ * THE SOFTWARE.
178
+ */
179
+
180
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
192
+#include "net/can_host.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
197
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
200
+#endif
201
+
202
+#define MAX_DLC 8
203
+#undef ERROR
204
+
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
208
+REG32(MODE_SELECT_REGISTER, 0x4)
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
227
+REG32(STATUS_REGISTER, 0x18)
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
288
+REG32(TIMESTAMP_REGISTER, 0x28)
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
290
+REG32(WIR, 0x2c)
291
+ FIELD(WIR, EW, 8, 8)
292
+ FIELD(WIR, FW, 0, 8)
293
+REG32(TXFIFO_ID, 0x30)
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
299
+REG32(TXFIFO_DLC, 0x34)
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
301
+REG32(TXFIFO_DATA1, 0x38)
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
306
+REG32(TXFIFO_DATA2, 0x3c)
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
311
+REG32(TXHPB_ID, 0x40)
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
317
+REG32(TXHPB_DLC, 0x44)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
319
+REG32(TXHPB_DATA1, 0x48)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
324
+REG32(TXHPB_DATA2, 0x4c)
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
329
+REG32(RXFIFO_ID, 0x50)
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
335
+REG32(RXFIFO_DLC, 0x54)
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
338
+REG32(RXFIFO_DATA1, 0x58)
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
343
+REG32(RXFIFO_DATA2, 0x5c)
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
348
+REG32(AFR, 0x60)
349
+ FIELD(AFR, UAF4, 3, 1)
350
+ FIELD(AFR, UAF3, 2, 1)
351
+ FIELD(AFR, UAF2, 1, 1)
352
+ FIELD(AFR, UAF1, 0, 1)
353
+REG32(AFMR1, 0x64)
354
+ FIELD(AFMR1, AMIDH, 21, 11)
355
+ FIELD(AFMR1, AMSRR, 20, 1)
356
+ FIELD(AFMR1, AMIDE, 19, 1)
357
+ FIELD(AFMR1, AMIDL, 1, 18)
358
+ FIELD(AFMR1, AMRTR, 0, 1)
359
+REG32(AFIR1, 0x68)
360
+ FIELD(AFIR1, AIIDH, 21, 11)
361
+ FIELD(AFIR1, AISRR, 20, 1)
362
+ FIELD(AFIR1, AIIDE, 19, 1)
363
+ FIELD(AFIR1, AIIDL, 1, 18)
364
+ FIELD(AFIR1, AIRTR, 0, 1)
365
+REG32(AFMR2, 0x6c)
366
+ FIELD(AFMR2, AMIDH, 21, 11)
367
+ FIELD(AFMR2, AMSRR, 20, 1)
368
+ FIELD(AFMR2, AMIDE, 19, 1)
369
+ FIELD(AFMR2, AMIDL, 1, 18)
370
+ FIELD(AFMR2, AMRTR, 0, 1)
371
+REG32(AFIR2, 0x70)
372
+ FIELD(AFIR2, AIIDH, 21, 11)
373
+ FIELD(AFIR2, AISRR, 20, 1)
374
+ FIELD(AFIR2, AIIDE, 19, 1)
375
+ FIELD(AFIR2, AIIDL, 1, 18)
376
+ FIELD(AFIR2, AIRTR, 0, 1)
377
+REG32(AFMR3, 0x74)
378
+ FIELD(AFMR3, AMIDH, 21, 11)
379
+ FIELD(AFMR3, AMSRR, 20, 1)
380
+ FIELD(AFMR3, AMIDE, 19, 1)
381
+ FIELD(AFMR3, AMIDL, 1, 18)
382
+ FIELD(AFMR3, AMRTR, 0, 1)
383
+REG32(AFIR3, 0x78)
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
+ }
+
+ /* Convert Qreg index to Dreg for read_neon_element32() etc */
+ vd = a->qd * 2;
+
+ if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, vd, a->idx, MO_32);
+ store_reg(s, a->rt, tmp);
+ }
+ if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
+ tmp = tcg_temp_new_i32();
+ read_neon_element32(tmp, vd + 1, a->idx, MO_32);
+ store_reg(s, a->rt2, tmp);
+ }
+
+ mve_update_and_store_eci(s);
+
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
+ }
+
+ /* RX Interrupts. */
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
420
+ }
421
+
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* Set the current mode bit and generate IRQs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
559
+ " data while controller is in reset mode.\n",
560
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
+ return true;
+}
+
+static bool trans_VMOV_from_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
+{
+ /*
+ * VMOV two general-purpose registers to two 32-bit vector lanes.
+ * This insn is not predicated but it is subject to beat-wise
+ * execution if it is not in an IT block. For us this means
+ * only that if PSR.ECI says we should not be executing the beat
+ * corresponding to the lane of the vector register being accessed
+ * then we should skip performing the move, and that we need to do
+ * the usual check for bad ECI state and advance of ECI state.
+ * (If PSR.ECI is non-zero then we cannot be in an IT block.)
+ */
+ TCGv_i32 tmp;
+ int vd;
+
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
+ a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15) {
+ /* Rt/Rt2 cases are UNPREDICTABLE */
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ /* Convert Qreg idx to Dreg for read_neon_element32() etc */
+ vd = a->qd * 2;
+
+ if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
+ tmp = load_reg(s, a->rt);
+ write_neon_element32(tmp, vd, a->idx, MO_32);
+ tcg_temp_free_i32(tmp);
+ }
+ if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
+ tmp = load_reg(s, a->rt2);
+ write_neon_element32(tmp, vd + 1, a->idx, MO_32);
+ tcg_temp_free_i32(tmp);
+ }
+
+ mve_update_and_store_eci(s);
+}
+
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
+{
+ qemu_can_frame frame;
+ uint32_t data[CAN_FRAME_SIZE];
+ int i;
+ bool can_tx = tx_ready_check(s);
+
+ if (!can_tx) {
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
+ " transfer.\n", path);
+ can_update_irq(s);
+ return;
+ }
+
+ while (!fifo32_is_empty(fifo)) {
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
+ data[i] = fifo32_pop(fifo);
+ }
+
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
+ /*
+ * Controller is in loopback. In Loopback mode, the CAN core
+ * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
+ * Any message transmitted is looped back to the RX line and
+ * acknowledged. The XlnxZynqMPCAN core receives any message
+ * that it transmits.
+ */
+ if (fifo32_is_full(&s->rx_fifo)) {
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
+ } else {
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
+ fifo32_push(&s->rx_fifo, data[i]);
+ }
+
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
+ }
+ } else {
+ /* Normal mode Tx. */
+ generate_frame(&frame, data);
+
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
+ frame.data[0], frame.data[1],
+ frame.data[2], frame.data[3],
+ frame.data[4], frame.data[5],
+ frame.data[6], frame.data[7]);
+ can_bus_client_send(&s->bus_client, &frame, 1);
+ }
+ }
+
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, the core does a software reset, then enters config mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
686
+{
687
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
688
+ uint8_t multi_mode;
689
+
690
+ /*
691
+ * Multiple mode set check. This is done to make sure the user doesn't set
692
+ * multiple modes.
693
+ */
694
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
695
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
696
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
697
+
698
+ if (multi_mode > 1) {
699
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
700
+
701
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to configure"
702
+ " several modes simultaneously. One mode will be selected"
703
+ " according to their priority: LBACK > SLEEP > SNOOP.\n",
704
+ path);
705
+ }
706
+
707
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
708
+ /* We are in configuration mode, any mode can be selected. */
709
+ s->regs[R_MODE_SELECT_REGISTER] = val;
710
+ } else {
711
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
712
+
713
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
714
+
715
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
716
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
717
+
718
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
719
+ " LBACK mode without setting CEN bit as 0.\n",
720
+ path);
721
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
722
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
723
+
724
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
725
+ " SNOOP mode without setting CEN bit as 0.\n",
726
+ path);
727
+ }
728
+
729
+ update_status_register_mode_bits(s);
730
+ }
731
+
732
+ return s->regs[R_MODE_SELECT_REGISTER];
733
+}
734
+
735
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
736
+{
737
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
738
+
739
+ /* Only allow writes when in config mode. */
740
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
741
+ return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
742
+ }
743
+
744
+ return val;
745
+}
746
+
747
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
748
+{
749
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
750
+
751
+ /* Only allow writes when in config mode. */
752
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
753
+ return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
754
+ }
755
+
756
+ return val;
757
+}
758
+
759
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
760
+{
761
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
762
+
763
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
764
+ ptimer_transaction_begin(s->can_timer);
765
+ ptimer_set_count(s->can_timer, 0);
766
+ ptimer_transaction_commit(s->can_timer);
767
+ }
768
+
769
+ return 0;
770
+}
771
+
772
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
773
+{
774
+ bool filter_pass = false;
775
+ uint16_t timestamp = 0;
776
+
777
+ /* If no filter is enabled, the message will be stored in the RX FIFO. */
778
+ if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
779
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
780
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
781
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
782
+ filter_pass = true;
783
+ }
784
+
785
+ /*
786
+ * Messages that pass any of the acceptance filters will be stored in
787
+ * the RX FIFO.
788
+ */
789
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
790
+ uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
791
+ uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
792
+
793
+ if (filter_id_masked == id_masked) {
794
+ filter_pass = true;
795
+ }
796
+ }
797
+
798
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
799
+ uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
800
+ uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
801
+
802
+ if (filter_id_masked == id_masked) {
803
+ filter_pass = true;
804
+ }
805
+ }
806
+
807
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
808
+ uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
809
+ uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
810
+
811
+ if (filter_id_masked == id_masked) {
812
+ filter_pass = true;
813
+ }
814
+ }
815
+
816
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
817
+ uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
818
+ uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
819
+
820
+ if (filter_id_masked == id_masked) {
821
+ filter_pass = true;
822
+ }
823
+ }
824
+
825
+ if (!filter_pass) {
826
+ trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
827
+ return;
828
+ }
829
+
830
+ /* Store the message in fifo if it passed through any of the filters. */
831
+ if (filter_pass && frame->can_dlc <= MAX_DLC) {
832
+
833
+ if (fifo32_is_full(&s->rx_fifo)) {
834
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
835
+ } else {
836
+ timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
837
+
838
+ fifo32_push(&s->rx_fifo, frame->can_id);
839
+
840
+ fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
841
+ R_RXFIFO_DLC_DLC_LENGTH,
842
+ frame->can_dlc) |
843
+ deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
844
+ R_RXFIFO_DLC_RXT_LENGTH,
845
+ timestamp));
846
+
847
+ /* First 32 bit of the data. */
848
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
849
+ R_TXFIFO_DATA1_DB3_LENGTH,
850
+ frame->data[0]) |
851
+ deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
852
+ R_TXFIFO_DATA1_DB2_LENGTH,
853
+ frame->data[1]) |
854
+ deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
855
+ R_TXFIFO_DATA1_DB1_LENGTH,
856
+ frame->data[2]) |
857
+ deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
858
+ R_TXFIFO_DATA1_DB0_LENGTH,
859
+ frame->data[3]));
860
+ /* Last 32 bit of the data. */
861
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
862
+ R_TXFIFO_DATA2_DB7_LENGTH,
863
+ frame->data[4]) |
864
+ deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
865
+ R_TXFIFO_DATA2_DB6_LENGTH,
866
+ frame->data[5]) |
867
+ deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
868
+ R_TXFIFO_DATA2_DB5_LENGTH,
869
+ frame->data[6]) |
870
+ deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
871
+ R_TXFIFO_DATA2_DB4_LENGTH,
872
+ frame->data[7]));
873
+
874
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
875
+ trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
876
+ frame->data[0], frame->data[1],
877
+ frame->data[2], frame->data[3],
878
+ frame->data[4], frame->data[5],
879
+ frame->data[6], frame->data[7]);
880
+ }
881
+
882
+ can_update_irq(s);
883
+ }
884
+}
885
+
886
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
887
+{
888
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
889
+
890
+ if (!fifo32_is_empty(&s->rx_fifo)) {
891
+ val = fifo32_pop(&s->rx_fifo);
892
+ } else {
893
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
894
+ }
895
+
896
+ can_update_irq(s);
897
+ return val;
898
+}
899
+
900
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
901
+{
902
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
903
+
904
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
905
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
906
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
907
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
909
+ } else {
910
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
911
+ }
912
+}
913
+
914
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
915
+{
916
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
917
+ uint32_t reg_idx = (reg->access->addr) / 4;
918
+ uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
919
+
920
+ /* To modify an acceptance filter, the corresponding UAF bit must be '0'. */
921
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
922
+ s->regs[reg_idx] = val;
923
+
924
+ trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
925
+ } else {
926
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
927
+
928
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
929
+ " mask is not set as corresponding UAF bit is not 0.\n",
930
+ path, filter_number + 1);
931
+ }
932
+
933
+ return s->regs[reg_idx];
934
+}
935
+
936
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
937
+{
938
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
939
+ uint32_t reg_idx = (reg->access->addr) / 4;
940
+ uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
941
+
942
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
943
+ s->regs[reg_idx] = val;
944
+
945
+ trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
946
+ } else {
947
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
948
+
949
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
950
+ " id is not set as corresponding UAF bit is not 0.\n",
951
+ path, filter_number + 1);
952
+ }
953
+
954
+ return s->regs[reg_idx];
955
+}
956
+
957
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
958
+{
959
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
960
+
961
+ bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
962
+
963
+ bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
964
+ (reg->access->addr == A_TXHPB_DATA2);
965
+
966
+ Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
967
+
968
+ if (!fifo32_is_full(f)) {
969
+ fifo32_push(f, val);
970
+ } else {
971
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
972
+
973
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
974
+ }
975
+
976
+ /* Initiate the message send if TX register is written. */
977
+ if (initiate_transfer &&
978
+ ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
979
+ transfer_fifo(s, f);
980
+ }
981
+
982
+ can_update_irq(s);
983
+}
984
+
985
+static const RegisterAccessInfo can_regs_info[] = {
986
+ { .name = "SOFTWARE_RESET_REGISTER",
987
+ .addr = A_SOFTWARE_RESET_REGISTER,
988
+ .rsvd = 0xfffffffc,
989
+ .pre_write = can_srr_pre_write,
990
+ },{ .name = "MODE_SELECT_REGISTER",
991
+ .addr = A_MODE_SELECT_REGISTER,
992
+ .rsvd = 0xfffffff8,
993
+ .pre_write = can_msr_pre_write,
994
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
995
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
996
+ .rsvd = 0xffffff00,
997
+ .pre_write = can_brpr_pre_write,
998
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
999
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1000
+ .rsvd = 0xfffffe00,
1001
+ .pre_write = can_btr_pre_write,
1002
+ },{ .name = "ERROR_COUNTER_REGISTER",
1003
+ .addr = A_ERROR_COUNTER_REGISTER,
1004
+ .rsvd = 0xffff0000,
1005
+ .ro = 0xffffffff,
1006
+ },{ .name = "ERROR_STATUS_REGISTER",
1007
+ .addr = A_ERROR_STATUS_REGISTER,
1008
+ .rsvd = 0xffffffe0,
1009
+ .w1c = 0x1f,
1010
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1011
+ .reset = 0x1,
1012
+ .rsvd = 0xffffe000,
1013
+ .ro = 0x1fff,
1014
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1015
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1016
+ .reset = 0x6000,
1017
+ .rsvd = 0xffff8000,
1018
+ .ro = 0x7fff,
1019
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1020
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1021
+ .rsvd = 0xffff8000,
1022
+ .post_write = can_ier_post_write,
1023
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1024
+ .addr = A_INTERRUPT_CLEAR_REGISTER,
1025
+ .rsvd = 0xffff8000,
1026
+ .pre_write = can_icr_pre_write,
1027
+ },{ .name = "TIMESTAMP_REGISTER",
1028
+ .addr = A_TIMESTAMP_REGISTER,
1029
+ .rsvd = 0xfffffffe,
1030
+ .pre_write = can_tcr_pre_write,
1031
+ },{ .name = "WIR", .addr = A_WIR,
1032
+ .reset = 0x3f3f,
1033
+ .rsvd = 0xffff0000,
1034
+ },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
1035
+ .post_write = can_tx_post_write,
1036
+ },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
1037
+ .rsvd = 0xfffffff,
1038
+ .post_write = can_tx_post_write,
1039
+ },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
1040
+ .post_write = can_tx_post_write,
1041
+ },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
1042
+ .post_write = can_tx_post_write,
1043
+ },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
1044
+ .post_write = can_tx_post_write,
1045
+ },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
1046
+ .rsvd = 0xfffffff,
1047
+ .post_write = can_tx_post_write,
1048
+ },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
1049
+ .post_write = can_tx_post_write,
1050
+ },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
1051
+ .post_write = can_tx_post_write,
1052
+ },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
1053
+ .ro = 0xffffffff,
1054
+ .post_read = can_rxfifo_pre_read,
1055
+ },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
1056
+ .rsvd = 0xfff0000,
1057
+ .post_read = can_rxfifo_pre_read,
1058
+ },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
1059
+ .post_read = can_rxfifo_pre_read,
1060
+ },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
1061
+ .post_read = can_rxfifo_pre_read,
1062
+ },{ .name = "AFR", .addr = A_AFR,
1063
+ .rsvd = 0xfffffff0,
1064
+ .post_write = can_filter_enable_post_write,
1065
+ },{ .name = "AFMR1", .addr = A_AFMR1,
1066
+ .pre_write = can_filter_mask_pre_write,
1067
+ },{ .name = "AFIR1", .addr = A_AFIR1,
1068
+ .pre_write = can_filter_id_pre_write,
1069
+ },{ .name = "AFMR2", .addr = A_AFMR2,
1070
+ .pre_write = can_filter_mask_pre_write,
1071
+ },{ .name = "AFIR2", .addr = A_AFIR2,
1072
+ .pre_write = can_filter_id_pre_write,
1073
+ },{ .name = "AFMR3", .addr = A_AFMR3,
1074
+ .pre_write = can_filter_mask_pre_write,
1075
+ },{ .name = "AFIR3", .addr = A_AFIR3,
1076
+ .pre_write = can_filter_id_pre_write,
1077
+ },{ .name = "AFMR4", .addr = A_AFMR4,
1078
+ .pre_write = can_filter_mask_pre_write,
1079
+ },{ .name = "AFIR4", .addr = A_AFIR4,
1080
+ .pre_write = can_filter_id_pre_write,
1081
+ }
1082
+};
1083
+
1084
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
1085
+{
1086
+ /* No action required on the timer rollover. */
1087
+}
1088
+
1089
+static const MemoryRegionOps can_ops = {
1090
+ .read = register_read_memory,
1091
+ .write = register_write_memory,
1092
+ .endianness = DEVICE_LITTLE_ENDIAN,
1093
+ .valid = {
1094
+ .min_access_size = 4,
1095
+ .max_access_size = 4,
1096
+ },
1097
+};
1098
+
1099
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
1100
+{
1101
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1102
+ unsigned int i;
1103
+
1104
+ for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
1105
+ register_reset(&s->reg_info[i]);
1106
+ }
1107
+
1108
+ ptimer_transaction_begin(s->can_timer);
1109
+ ptimer_set_count(s->can_timer, 0);
1110
+ ptimer_transaction_commit(s->can_timer);
1111
+}
1112
+
1113
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
1114
+{
1115
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1116
+ unsigned int i;
1117
+
1118
+ for (i = 0; i < R_RXFIFO_ID; ++i) {
1119
+ register_reset(&s->reg_info[i]);
1120
+ }
1121
+
1122
+ /*
1123
+ * Reset FIFOs when the CAN model is reset. This clears the FIFO writes
1123
+ * done by post_write, which gets called from the register_reset function;
1124
+ * the post_write handler will not be able to trigger TX because the CAN
1125
+ * core is disabled when the software reset register is cleared first.
1127
+ */
1128
+ fifo32_reset(&s->rx_fifo);
1129
+ fifo32_reset(&s->tx_fifo);
1130
+ fifo32_reset(&s->txhpb_fifo);
1131
+}
1132
+
1133
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
1134
+{
1135
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1136
+ bus_client);
1137
+
1138
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1139
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1140
+
1141
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
1142
+ path);
1143
+ return false;
1144
+ }
1145
+
1146
+ if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
1147
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1148
+
1149
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
1150
+ " messages will be discarded.\n", path);
1151
+ return false;
1152
+ }
1153
+
1154
+ return true;
+}
+
+ return true;
+}
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
1157
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
1158
+ const qemu_can_frame *buf, size_t buf_size) {
1159
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1160
+ bus_client);
1161
+ const qemu_can_frame *frame = buf;
1162
+
1163
+ if (buf_size <= 0) {
1164
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1165
+
1166
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
1167
+ path);
1168
+ return 0;
1169
+ }
1170
+
1171
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1172
+ /* Snoop mode: just keep the data; no response is sent back. */
1173
+ update_rx_fifo(s, frame);
1174
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1175
+ /*
1176
+ * XlnxZynqMPCAN is in sleep mode. Any data on the bus will bring it to
1177
+ * the wake-up state.
1178
+ */
1179
+ can_exit_sleep_mode(s);
1180
+ update_rx_fifo(s, frame);
1181
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
1182
+ update_rx_fifo(s, frame);
1183
+ } else {
1184
+ /*
1185
+ * XlnxZynqMPCAN will not participate in normal bus communication
1186
+ * and will not receive any messages transmitted by other CAN nodes.
1187
+ */
1188
+ trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
1189
+ }
1190
+
1191
+ return 1;
1192
+}
1193
+
1194
+static CanBusClientInfo can_xilinx_bus_client_info = {
1195
+ .can_receive = xlnx_zynqmp_can_can_receive,
1196
+ .receive = xlnx_zynqmp_can_receive,
1197
+};
1198
+
1199
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
1200
+ CanBusState *bus)
1201
+{
1202
+ s->bus_client.info = &can_xilinx_bus_client_info;
1203
+
1204
+ if (can_bus_insert_client(bus, &s->bus_client) < 0) {
1205
+ return -1;
1206
+ }
1207
+ return 0;
1208
+}
1209
+
1210
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
1211
+{
1212
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
1213
+
1214
+ if (s->canbus) {
1215
+ if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
1216
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1217
+
1218
+ error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
1219
+ " failed.", path);
1220
+ return;
1221
+ }
1222
+ }
1223
+
1224
+ /* Create RX FIFO, TXFIFO, TXHPB storage. */
1225
+ fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
1226
+ fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
1227
+ fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
1228
+
1229
+ /* Allocate a new timer. */
1230
+ s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
1231
+ PTIMER_POLICY_DEFAULT);
1232
+
1233
+ ptimer_transaction_begin(s->can_timer);
1234
+
1235
+ ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
1236
+ ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
1237
+ ptimer_run(s->can_timer, 0);
1238
+ ptimer_transaction_commit(s->can_timer);
1239
+}
1240
+
1241
+static void xlnx_zynqmp_can_init(Object *obj)
1242
+{
1243
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1244
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1245
+
1246
+ RegisterInfoArray *reg_array;
1247
+
1248
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
1249
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1250
+ reg_array = register_init_block32(DEVICE(obj), can_regs_info,
1251
+ ARRAY_SIZE(can_regs_info),
1252
+ s->reg_info, s->regs,
1253
+ &can_ops,
1254
+ XLNX_ZYNQMP_CAN_ERR_DEBUG,
1255
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1256
+
1257
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
1258
+ sysbus_init_mmio(sbd, &s->iomem);
1259
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
1260
+}
1261
+
1262
+static const VMStateDescription vmstate_can = {
1263
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1264
+ .version_id = 1,
1265
+ .minimum_version_id = 1,
1266
+ .fields = (VMStateField[]) {
1267
+ VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
1268
+ VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
1269
+ VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
1270
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
1271
+ VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
1272
+ VMSTATE_END_OF_LIST(),
1273
+ }
1274
+};
1275
+
1276
+static Property xlnx_zynqmp_can_properties[] = {
1277
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
1278
+ CAN_DEFAULT_CLOCK),
1279
+ DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
1280
+ CanBusState *),
1281
+ DEFINE_PROP_END_OF_LIST(),
1282
+};
1283
+
1284
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
1285
+{
1286
+ DeviceClass *dc = DEVICE_CLASS(klass);
1287
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1288
+
1289
+ rc->phases.enter = xlnx_zynqmp_can_reset_init;
1290
+ rc->phases.hold = xlnx_zynqmp_can_reset_hold;
1291
+ dc->realize = xlnx_zynqmp_can_realize;
1292
+ device_class_set_props(dc, xlnx_zynqmp_can_properties);
1293
+ dc->vmsd = &vmstate_can;
1294
+}
1295
+
1296
+static const TypeInfo can_info = {
1297
+ .name = TYPE_XLNX_ZYNQMP_CAN,
1298
+ .parent = TYPE_SYS_BUS_DEVICE,
1299
+ .instance_size = sizeof(XlnxZynqMPCANState),
1300
+ .class_init = xlnx_zynqmp_can_class_init,
1301
+ .instance_init = xlnx_zynqmp_can_init,
1302
+};
1303
+
1304
+static void can_register_types(void)
1305
+{
1306
+ type_register_static(&can_info);
1307
+}
1308
+
1309
+type_init(can_register_types)
1310
diff --git a/hw/Kconfig b/hw/Kconfig
1311
index XXXXXXX..XXXXXXX 100644
--- a/hw/Kconfig
+++ b/hw/Kconfig
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
config XLNX_ZYNQMP
bool
select REGISTER
+ select CAN_BUS
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/can/meson.build
+++ b/hw/net/can/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/trace-events
@@ -XXX,XX +XXX,XX @@
+# xlnx-zynqmp-can.c
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
--
2.20.1

index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
return true;
}

-static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
{
/*
* In a CPU with MVE, the VMOV (vector lane to general-purpose register)
--
2.20.1

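For anyone who wants to exercise the new CAN controllers by hand: the model is driven through a can-bus object, optionally bridged to a host SocketCAN interface. A minimal sketch of an invocation (illustrative only; it assumes the xlnx-zcu102 machine exposes the canbus0/canbus1 link properties wired up elsewhere in this series, and that a vcan0 interface exists on the host):

    qemu-system-aarch64 -M xlnx-zcu102 \
        -object can-bus,id=canbus0 \
        -machine canbus0=canbus0 \
        -machine canbus1=canbus0 \
        -object can-host-socketcan,id=canhost0,if=vcan0,canbus=canbus0

With that, frames the guest writes into the TXFIFO registers above end up on vcan0, and frames injected on vcan0 land in the RX FIFO (subject to the acceptance filters).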
v8.1M introduces a new TRD flag in the CCR register, which enables
checking for stack frame integrity signatures on SG instructions.
Add the code in the SG insn implementation for the new behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
---
target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 86 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
return true;
}

+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+ uint32_t addr, uint32_t *spdata)
+{
+ /*
+ * Read a word of data from the stack for the SG instruction,
+ * writing the value into *spdata. If the load succeeds, return
+ * true; otherwise pend an appropriate exception and return false.
+ * (We can't use data load helpers here that throw an exception
+ * because of the context we're called in, which is halfway through
+ * arm_v7m_cpu_do_interrupt().)
+ */
+ CPUState *cs = CPU(cpu);
+ CPUARMState *env = &cpu->env;
+ MemTxAttrs attrs = {};
+ MemTxResult txres;
+ target_ulong page_size;
+ hwaddr physaddr;
+ int prot;
+ ARMMMUFaultInfo fi = {};
+ ARMCacheAttrs cacheattrs = {};
+ uint32_t value;
+
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
+ &attrs, &prot, &page_size, &fi, &cacheattrs)) {
+ /* MPU/SAU lookup failed */
+ if (fi.type == ARMFault_QEMU_SFault) {
+ qemu_log_mask(CPU_LOG_INT,
+ "...SecureFault during stack word read\n");
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+ env->v7m.sfar = addr;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+ } else {
+ qemu_log_mask(CPU_LOG_INT,
+ "...MemManageFault during stack word read\n");
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
+ R_V7M_CFSR_MMARVALID_MASK;
+ env->v7m.mmfar[M_REG_S] = addr;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
+ }
+ return false;
+ }
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
+ attrs, &txres);
+ if (txres != MEMTX_OK) {
+ /* BusFault trying to read the data */
+ qemu_log_mask(CPU_LOG_INT,
+ "...BusFault during stack word read\n");
+ env->v7m.cfsr[M_REG_NS] |=
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
+ env->v7m.bfar = addr;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
+ return false;
+ }
+
+ *spdata = value;
+ return true;
+}
+
static bool v7m_handle_execute_nsc(ARMCPU *cpu)
{
/*
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
*/
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
", executing it\n", env->regs[15]);
+
+ if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
+ !arm_v7m_is_handler_mode(env)) {
+ /*
+ * v8.1M exception stack frame integrity check. Note that we
+ * must perform the memory access even if CCR_S.TRD is zero
+ * and we aren't going to check what the data loaded is.
+ */
+ uint32_t spdata, sp;
+
+ /*
+ * We know we are currently NS, so the S stack pointers must be
+ * in other_ss_{psp,msp}, not in regs[13]/other_sp.
+ */
+ sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
+ if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
+ /* Stack access failed and an exception has been pended */
+ return false;
+ }
+
+ if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
+ if (((spdata & ~1) == 0xfefa125a) ||
+ !(env->v7m.control[M_REG_S] & 1)) {
+ goto gen_invep;
+ }
+ }
+ }
+
env->regs[14] &= ~1;
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
switch_v7m_security_state(env, true);
--
2.20.1


Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
(subject to both predication and to beatwise execution).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/helper-mve.h | 1 +
target/arm/mve.decode | 1 +
target/arm/mve_helper.c | 17 +++++++++++++++++
target/arm/translate-mve.c | 19 +++++++++++++++++++
4 files changed, 38 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)

DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp

{
+ VPNOT 1111 1110 0 0 11 000 1 000 0 1111 0100 1101
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
}
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
mve_advance_vpt(env);
}

+void HELPER(mve_vpnot)(CPUARMState *env)
+{
+ /*
+ * P0 bits for unexecuted beats (where eci_mask is 0) are unchanged.
+ * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
+ * P0 bits otherwise are inverted.
+ * (This is the same logic as VCMP.)
+ * This insn is itself subject to predication and to beat-wise execution,
+ * and after it executes VPT state advances in the usual way.
+ */
+ uint16_t mask = mve_element_mask(env);
+ uint16_t eci_mask = mve_eci_mask(env);
+ uint16_t beatpred = ~env->v7m.vpr & mask;
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (beatpred & eci_mask);
+ mve_advance_vpt(env);
+}
+
#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
{ \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
return true;
}

+static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
+{
+ /*
+ * Invert the predicate in VPR.P0. We have to call out to
+ * a helper because this insn itself is beatwise and can
+ * be predicated.
+ */
+ if (!dc_isar_feature(aa32_mve, s)) {
+ return false;
+ }
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
+ return true;
+ }
+
+ gen_helper_mve_vpnot(cpu_env);
+ mve_update_eci(s);
+ return true;
+}
+
static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
{
/* VADDV: vector add across vector */
--
2.20.1

In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
the general-purpose registers and APSR.  Implement this.

The encoding is a subset of the LDMIA T2 encoding, using what would
be Rn=0b1111 (which UNDEFs for LDMIA).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
---
 target/arm/t32.decode  |  6 +++++-
 target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot

 STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
 STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
+{
+  # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
+  CLRM 1110 1000 1001 1111 list:16
+  LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
+}
 LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1

 &rfe !extern rn w pu
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
     return do_ldm(s, a, 1);
 }

+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
+{
+    int i;
+    TCGv_i32 zero;
+
+    if (!dc_isar_feature(aa32_m_sec_state, s)) {
+        return false;
+    }
+
+    if (extract32(a->list, 13, 1)) {
+        return false;
+    }
+
+    if (!a->list) {
+        /* UNPREDICTABLE; we choose to UNDEF */
+        return false;
+    }
+
+    zero = tcg_const_i32(0);
+    for (i = 0; i < 15; i++) {
+        if (extract32(a->list, i, 1)) {
+            /* Clear R[i] */
+            tcg_gen_mov_i32(cpu_R[i], zero);
+        }
+    }
+    if (extract32(a->list, 15, 1)) {
+        /*
+         * Clear APSR (by calling the MSR helper with the same argument
+         * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
+         */
+        TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
+        gen_helper_v7m_msr(cpu_env, maskreg, zero);
+        tcg_temp_free_i32(maskreg);
+    }
+    tcg_temp_free_i32(zero);
+    return true;
+}
+
 /*
  * Branch, branch with link
  */
--
2.20.1

Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so
that any element at index Rn or greater is predicated.  As with
VPNOT, this insn itself is predicable and subject to beatwise
execution.

The calculation of the mask is the same as is used to determine
ltpmask in mve_element_mask(), but we precalculate masklen in
generated code to avoid having to have 4 helpers specialized by size.

We put the decode line in with the low-overhead-loop insns in
t32.decode because it's logically part of that collection of insn
patterns, even though it is an MVE-only insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  2 ++
 target/arm/translate-a32.h |  1 +
 target/arm/t32.decode      |  1 +
 target/arm/mve_helper.c    | 20 ++++++++++++++++++++
 target/arm/translate-mve.c |  2 +-
 target/arm/translate.c     | 33 +++++++++++++++++++++++++++++++++
 6 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)

+DEF_HELPER_FLAGS_2(mve_vctp, TCG_CALL_NO_WG, void, env, i32)
+
 DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ long neon_element_offset(int reg, int element, MemOp memop);
 void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
 void clear_eci_state(DisasContext *s);
 bool mve_eci_check(DisasContext *s);
+void mve_update_eci(DisasContext *s);
 void mve_update_and_store_eci(DisasContext *s);
 bool mve_skip_vmov(DisasContext *s, int vn, int index, int size);

diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ BL 1111 0. .......... 11.1 ............ @branch24
   # This is DLSTP
   DLS 1111 0 0000 0 size:2 rn:4 1110 0000 0000 0001
 }
+ VCTP 1111 0 0000 0 size:2 rn:4 1110 1000 0000 0001
 ]
}
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpnot)(CPUARMState *env)
     mve_advance_vpt(env);
 }

+/*
+ * VCTP: P0 unexecuted bits unchanged, predicated bits zeroed,
+ * otherwise set according to value of Rn.  The calculation of
+ * newmask here works in the same way as the calculation of the
+ * ltpmask in mve_element_mask(), but we have pre-calculated
+ * the masklen in the generated code.
+ */
+void HELPER(mve_vctp)(CPUARMState *env, uint32_t masklen)
+{
+    uint16_t mask = mve_element_mask(env);
+    uint16_t eci_mask = mve_eci_mask(env);
+    uint16_t newmask;
+
+    assert(masklen <= 16);
+    newmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
+    newmask &= mask;
+    env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (newmask & eci_mask);
+    mve_advance_vpt(env);
+}
+
 #define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
 void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
 { \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
     }
 }

-static void mve_update_eci(DisasContext *s)
+void mve_update_eci(DisasContext *s)
 {
     /*
      * The helper function will always update the CPUState field,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LCTP(DisasContext *s, arg_LCTP *a)
     return true;
 }

+static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
+{
+    /*
+     * M-profile Create Vector Tail Predicate.  This insn is itself
+     * predicated and is subject to beatwise execution.
+     */
+    TCGv_i32 rn_shifted, masklen;
+
+    if (!dc_isar_feature(aa32_mve, s) || a->rn == 13 || a->rn == 15) {
+        return false;
+    }
+
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * We pre-calculate the mask length here to avoid having
+     * to have multiple helpers specialized for size.
+     * We pass the helper "rn <= (1 << (4 - size)) ? (rn << size) : 16".
+     */
+    rn_shifted = tcg_temp_new_i32();
+    masklen = load_reg(s, a->rn);
+    tcg_gen_shli_i32(rn_shifted, masklen, a->size);
+    tcg_gen_movcond_i32(TCG_COND_LEU, masklen,
+                        masklen, tcg_constant_i32(1 << (4 - a->size)),
+                        rn_shifted, tcg_constant_i32(16));
+    gen_helper_mve_vctp(cpu_env, masklen);
+    tcg_temp_free_i32(masklen);
+    tcg_temp_free_i32(rn_shifted);
+    mve_update_eci(s);
+    return true;
+}
+
 static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
 {
--
2.20.1
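For readers following the VCTP patch above, here is a minimal standalone C sketch of the mask calculation. vctp_p0_mask() is a made-up name; the ternary mirrors the "rn <= (1 << (4 - size)) ? (rn << size) : 16" expression that the generated code passes to the helper, and the real helper additionally ANDs the result with the element and ECI masks.

#include <stdint.h>

static uint16_t vctp_p0_mask(uint32_t rn, unsigned size)
{
    /* size is the element size MO_8/MO_16/MO_32, i.e. 0, 1 or 2 */
    uint32_t masklen = (rn <= (1u << (4 - size))) ? (rn << size) : 16;
    /* stand-in for MAKE_64BIT_MASK(0, masklen), truncated to 16 bits */
    return masklen ? (uint16_t)((1u << masklen) - 1) : 0;
}

For example, with 32-bit elements (size 2) and rn = 2, masklen is 8 and the resulting P0 mask is 0x00ff: the first two elements are active and the rest are predicated.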
The constant-expander functions like negate, plus_2, etc, are
generally useful; move them up in translate.c so we can use them in
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
---
 target/arm/translate.c | 46 +++++++++++++++++++++++-------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

Implement the MVE gather-loads and scatter-stores which
form the address by adding a base value from a scalar
register to an offset in each element of a vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  32 +++++++++
 target/arm/mve.decode      |  12 ++++
 target/arm/mve_helper.c    | 129 +++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  97 ++++++++++++++++++++++++
 4 files changed, 270 insertions(+)
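A toy model of the address formation the gather/scatter message above describes, for the 32-bit element case. The names here are illustrative only; ADDR_ADD and ADDR_ADD_OSW are the macros defined in the patch body below.

#include <stdint.h>

static void mve_sg_addrs_w(uint32_t base, const uint32_t offs[4],
                           int scaled, uint32_t addrs[4])
{
    for (int e = 0; e < 4; e++) {
        /* plain ADDR_ADD form, or ADDR_ADD_OSW when the "_os_" forms
         * scale the offset by the element size (4 bytes here) */
        addrs[e] = base + (scaled ? (offs[e] << 2) : offs[e]);
    }
}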
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate.c
16
--- a/target/arm/helper-mve.h
15
+++ b/target/arm/translate.c
17
+++ b/target/arm/helper-mve.h
16
@@ -XXX,XX +XXX,XX @@ static void arm_gen_condlabel(DisasContext *s)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
17
}
19
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
18
}
20
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
21
22
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+
26
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
+
34
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
53
+
54
DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
55
56
DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
57
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/mve.decode
60
+++ b/target/arm/mve.decode
61
@@ -XXX,XX +XXX,XX @@
62
&shl_scalar qda rm size
63
&vmaxv qm rda size
64
&vabav qn qm rda size
65
+&vldst_sg qd qm rn size msize os
66
+
67
+# scatter-gather memory size is in bits 6:4
68
+%sg_msize 6:1 4:1
69
70
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
71
# Note that both Rn and Qd are 3 bits only (no D bit)
72
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
73
74
+@vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \
75
+ qd=%qd qm=%qm msize=%sg_msize
76
+
77
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
78
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
79
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
80
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
81
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
82
size=2 p=1
83
84
+# gather loads/scatter stores
85
+VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
86
+VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
87
+VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg
88
+
89
# Moves between 2 32-bit vector lanes and 2 general purpose registers
90
VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
91
VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
92
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/mve_helper.c
95
+++ b/target/arm/mve_helper.c
96
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
97
#undef DO_VLDR
98
#undef DO_VSTR
19
99
20
+/*
100
+/*
21
+ * Constant expanders for the decoders.
101
+ * Gather loads/scatter stores. Here each element of Qm specifies
102
+ * an offset to use from the base register Rm. In the _os_ versions
103
+ * that offset is scaled by the element size.
104
+ * For loads, predicated lanes are zeroed instead of retaining
105
+ * their previous values.
22
+ */
106
+ */
23
+
107
+#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \
24
+static int negate(DisasContext *s, int x)
108
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
109
+ uint32_t base) \
110
+ { \
111
+ TYPE *d = vd; \
112
+ OFFTYPE *m = vm; \
113
+ uint16_t mask = mve_element_mask(env); \
114
+ uint16_t eci_mask = mve_eci_mask(env); \
115
+ unsigned e; \
116
+ uint32_t addr; \
117
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \
118
+ if (!(eci_mask & 1)) { \
119
+ continue; \
120
+ } \
121
+ addr = ADDRFN(base, m[H##ESIZE(e)]); \
122
+ d[H##ESIZE(e)] = (mask & 1) ? \
123
+ cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
124
+ } \
125
+ mve_advance_vpt(env); \
126
+ }
127
+
128
+/* We know here TYPE is unsigned so always the same as the offset type */
129
+#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \
130
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
131
+ uint32_t base) \
132
+ { \
133
+ TYPE *d = vd; \
134
+ TYPE *m = vm; \
135
+ uint16_t mask = mve_element_mask(env); \
136
+ unsigned e; \
137
+ uint32_t addr; \
138
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
139
+ addr = ADDRFN(base, m[H##ESIZE(e)]); \
140
+ if (mask & 1) { \
141
+ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
142
+ } \
143
+ } \
144
+ mve_advance_vpt(env); \
145
+ }
146
+
147
+/*
148
+ * 64-bit accesses are slightly different: they are done as two 32-bit
149
+ * accesses, controlled by the predicate mask for the relevant beat,
150
+ * and with a single 32-bit offset in the first of the two Qm elements.
151
+ * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little).
152
+ */
153
+#define DO_VLDR64_SG(OP, ADDRFN) \
154
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
155
+ uint32_t base) \
156
+ { \
157
+ uint32_t *d = vd; \
158
+ uint32_t *m = vm; \
159
+ uint16_t mask = mve_element_mask(env); \
160
+ uint16_t eci_mask = mve_eci_mask(env); \
161
+ unsigned e; \
162
+ uint32_t addr; \
163
+ for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \
164
+ if (!(eci_mask & 1)) { \
165
+ continue; \
166
+ } \
167
+ addr = ADDRFN(base, m[H4(e & ~1)]); \
168
+ addr += 4 * (e & 1); \
169
+ d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \
170
+ } \
171
+ mve_advance_vpt(env); \
172
+ }
173
+
174
+#define DO_VSTR64_SG(OP, ADDRFN) \
175
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
176
+ uint32_t base) \
177
+ { \
178
+ uint32_t *d = vd; \
179
+ uint32_t *m = vm; \
180
+ uint16_t mask = mve_element_mask(env); \
181
+ unsigned e; \
182
+ uint32_t addr; \
183
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
184
+ addr = ADDRFN(base, m[H4(e & ~1)]); \
185
+ addr += 4 * (e & 1); \
186
+ if (mask & 1) { \
187
+ cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \
188
+ } \
189
+ } \
190
+ mve_advance_vpt(env); \
191
+ }
192
+
193
+#define ADDR_ADD(BASE, OFFSET) ((BASE) + (OFFSET))
194
+#define ADDR_ADD_OSH(BASE, OFFSET) ((BASE) + ((OFFSET) << 1))
195
+#define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2))
196
+#define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3))
197
+
198
+DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD)
199
+DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD)
200
+DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD)
201
+
202
+DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD)
203
+DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD)
204
+DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD)
205
+DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD)
206
+DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD)
207
+DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD)
208
+DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD)
209
+
210
+DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH)
211
+DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH)
212
+DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH)
213
+DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW)
214
+DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD)
215
+
216
+DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD)
217
+DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD)
218
+DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD)
219
+DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD)
220
+DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD)
221
+DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD)
222
+DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD)
223
+
224
+DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH)
225
+DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH)
226
+DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW)
227
+DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD)
228
+
229
/*
230
* The mergemask(D, R, M) macro performs the operation "*D = R" but
231
* storing only the bytes which correspond to 1 bits in M,
232
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
233
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/translate-mve.c
235
+++ b/target/arm/translate-mve.c
236
@@ -XXX,XX +XXX,XX @@ static inline int vidup_imm(DisasContext *s, int x)
237
#include "decode-mve.c.inc"
238
239
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
240
+typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
241
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
242
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
243
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
244
@@ -XXX,XX +XXX,XX @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h, MO_8)
245
DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w, MO_8)
246
DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w, MO_16)
247
248
+static bool do_ldst_sg(DisasContext *s, arg_vldst_sg *a, MVEGenLdStSGFn fn)
25
+{
249
+{
26
+ return -x;
250
+ TCGv_i32 addr;
251
+ TCGv_ptr qd, qm;
252
+
253
+ if (!dc_isar_feature(aa32_mve, s) ||
254
+ !mve_check_qreg_bank(s, a->qd | a->qm) ||
255
+ !fn || a->rn == 15) {
256
+ /* Rn case is UNPREDICTABLE */
257
+ return false;
258
+ }
259
+
260
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
261
+ return true;
262
+ }
263
+
264
+ addr = load_reg(s, a->rn);
265
+
266
+ qd = mve_qreg_ptr(a->qd);
267
+ qm = mve_qreg_ptr(a->qm);
268
+ fn(cpu_env, qd, qm, addr);
269
+ tcg_temp_free_ptr(qd);
270
+ tcg_temp_free_ptr(qm);
271
+ tcg_temp_free_i32(addr);
272
+ mve_update_eci(s);
273
+ return true;
27
+}
274
+}
28
+
275
+
29
+static int plus_2(DisasContext *s, int x)
276
+/*
277
+ * The naming scheme here is "vldrb_sg_sh == in-memory byte loads
278
+ * signextended to halfword elements in register". _os_ indicates that
279
+ * the offsets in Qm should be scaled by the element size.
280
+ */
281
+/* This macro is just to make the arrays more compact in these functions */
282
+#define F(N) gen_helper_mve_##N
283
+
284
+/* VLDRB/VSTRB (ie msize 1) with OS=1 is UNPREDICTABLE; we UNDEF */
285
+static bool trans_VLDR_S_sg(DisasContext *s, arg_vldst_sg *a)
30
+{
286
+{
31
+ return x + 2;
287
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
288
+ { NULL, F(vldrb_sg_sh), F(vldrb_sg_sw), NULL },
289
+ { NULL, NULL, F(vldrh_sg_sw), NULL },
290
+ { NULL, NULL, NULL, NULL },
291
+ { NULL, NULL, NULL, NULL }
292
+ }, {
293
+ { NULL, NULL, NULL, NULL },
294
+ { NULL, NULL, F(vldrh_sg_os_sw), NULL },
295
+ { NULL, NULL, NULL, NULL },
296
+ { NULL, NULL, NULL, NULL }
297
+ }
298
+ };
299
+ if (a->qd == a->qm) {
300
+ return false; /* UNPREDICTABLE */
301
+ }
302
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
32
+}
303
+}
33
+
304
+
34
+static int times_2(DisasContext *s, int x)
305
+static bool trans_VLDR_U_sg(DisasContext *s, arg_vldst_sg *a)
35
+{
306
+{
36
+ return x * 2;
307
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
308
+ { F(vldrb_sg_ub), F(vldrb_sg_uh), F(vldrb_sg_uw), NULL },
309
+ { NULL, F(vldrh_sg_uh), F(vldrh_sg_uw), NULL },
310
+ { NULL, NULL, F(vldrw_sg_uw), NULL },
311
+ { NULL, NULL, NULL, F(vldrd_sg_ud) }
312
+ }, {
313
+ { NULL, NULL, NULL, NULL },
314
+ { NULL, F(vldrh_sg_os_uh), F(vldrh_sg_os_uw), NULL },
315
+ { NULL, NULL, F(vldrw_sg_os_uw), NULL },
316
+ { NULL, NULL, NULL, F(vldrd_sg_os_ud) }
317
+ }
318
+ };
319
+ if (a->qd == a->qm) {
320
+ return false; /* UNPREDICTABLE */
321
+ }
322
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
37
+}
323
+}
38
+
324
+
39
+static int times_4(DisasContext *s, int x)
325
+static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)
40
+{
326
+{
41
+ return x * 4;
327
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
328
+ { F(vstrb_sg_ub), F(vstrb_sg_uh), F(vstrb_sg_uw), NULL },
329
+ { NULL, F(vstrh_sg_uh), F(vstrh_sg_uw), NULL },
330
+ { NULL, NULL, F(vstrw_sg_uw), NULL },
331
+ { NULL, NULL, NULL, F(vstrd_sg_ud) }
332
+ }, {
333
+ { NULL, NULL, NULL, NULL },
334
+ { NULL, F(vstrh_sg_os_uh), F(vstrh_sg_os_uw), NULL },
335
+ { NULL, NULL, F(vstrw_sg_os_uw), NULL },
336
+ { NULL, NULL, NULL, F(vstrd_sg_os_ud) }
337
+ }
338
+ };
339
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
42
+}
340
+}
43
+
341
+
44
/* Flags for the disas_set_da_iss info argument:
342
+#undef F
45
* lower bits hold the Rt register number, higher bits are flags.
343
+
46
*/
344
static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
47
@@ -XXX,XX +XXX,XX @@ static void arm_skip_unless(DisasContext *s, uint32_t cond)
48
49
50
/*
51
- * Constant expanders for the decoders.
52
+ * Constant expanders used by T16/T32 decode
53
*/
54
55
-static int negate(DisasContext *s, int x)
56
-{
57
- return -x;
58
-}
59
-
60
-static int plus_2(DisasContext *s, int x)
61
-{
62
- return x + 2;
63
-}
64
-
65
-static int times_2(DisasContext *s, int x)
66
-{
67
- return x * 2;
68
-}
69
-
70
-static int times_4(DisasContext *s, int x)
71
-{
72
- return x * 4;
73
-}
74
-
75
/* Return only the rotation part of T32ExpandImm. */
76
static int t32_expandimm_rot(DisasContext *s, int x)
77
{
345
{
346
TCGv_ptr qd;
78
--
347
--
79
2.20.1
348
2.20.1
80
349
81
350
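To summarize the gather-load semantics of the patch above in plain C: beats already executed under ECI are skipped entirely, predicated lanes are zeroed, and active lanes load from base + offset. load32() is a hypothetical accessor standing in for QEMU's cpu_ldl_data_ra(); everything else mirrors one DO_VLDR_SG expansion for 32-bit elements.

#include <stdint.h>

extern uint32_t load32(uint32_t addr);   /* assumed memory accessor */

static void vldrw_sg_model(uint32_t d[4], const uint32_t m[4],
                           uint32_t base, uint16_t mask, uint16_t eci_mask)
{
    for (int e = 0; e < 4; e++, mask >>= 4, eci_mask >>= 4) {
        if (!(eci_mask & 1)) {
            continue;                          /* ECI: beat already executed */
        }
        uint32_t addr = base + m[e];
        d[e] = (mask & 1) ? load32(addr) : 0;  /* predicated lanes read as 0 */
    }
}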
Implement the MVE VLDR/VSTR insns which do scatter-gather using base
addresses from Qm plus or minus an immediate offset (possibly with
writeback).  Note that writeback is not predicated but it does have
to honour ECI state, so we have to add an eci_mask check to the
VSTR_SG macros (the VLDR_SG macros already needed this to be able
to distinguish "skip beat" from "set predicated element to 0").

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  5 +++
 target/arm/mve.decode      | 10 +++++
 target/arm/mve_helper.c    | 91 ++++++++++++++++++++++++--------------
 target/arm/translate-mve.c | 72 ++++++++++++++++++++++++++++++
 4 files changed, 146 insertions(+), 32 deletions(-)

From: Alex Chen <alex.chen@huawei.com>

We should use the printf format specifier "%u" instead of "%d" for
arguments of type "unsigned int".

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Message-id: 20201126111109.112238-4-alex.chen@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx6_ccm.c | 20 ++++++++++----------
 hw/misc/imx6_src.c |  2 +-
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
diff --git a/hw/misc/imx6_ccm.c b/hw/misc/imx6_ccm.c
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx6_ccm.c
19
--- a/target/arm/helper-mve.h
19
+++ b/hw/misc/imx6_ccm.c
20
+++ b/target/arm/helper-mve.h
20
@@ -XXX,XX +XXX,XX @@ static const char *imx6_ccm_reg_name(uint32_t reg)
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
case CCM_CMEOR:
22
DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
return "CMEOR";
23
DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
default:
24
24
- sprintf(unknown, "%d ?", reg);
25
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+ sprintf(unknown, "%u ?", reg);
26
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
return unknown;
27
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+
30
DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
31
32
DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
37
@@ -XXX,XX +XXX,XX @@
38
&vmaxv qm rda size
39
&vabav qn qm rda size
40
&vldst_sg qd qm rn size msize os
41
+&vldst_sg_imm qd qm a w imm
42
43
# scatter-gather memory size is in bits 6:4
44
%sg_msize 6:1 4:1
45
@@ -XXX,XX +XXX,XX @@
46
@vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \
47
qd=%qd qm=%qm msize=%sg_msize
48
49
+# Qm is in the fields usually labeled Qn
50
+@vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \
51
+ qd=%qd qm=%qn
52
+
53
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
54
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
55
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
56
@@ -XXX,XX +XXX,XX @@ VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
57
VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
58
VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg
59
60
+VLDRW_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1110 .... .... @vldst_sg_imm
61
+VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm
62
+VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm
63
+VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm
64
+
65
# Moves between 2 32-bit vector lanes and 2 general purpose registers
66
VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
67
VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
68
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/target/arm/mve_helper.c
71
+++ b/target/arm/mve_helper.c
72
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
73
* For loads, predicated lanes are zeroed instead of retaining
74
* their previous values.
75
*/
76
-#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \
77
+#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN, WB) \
78
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
79
uint32_t base) \
80
{ \
81
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
82
addr = ADDRFN(base, m[H##ESIZE(e)]); \
83
d[H##ESIZE(e)] = (mask & 1) ? \
84
cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
85
+ if (WB) { \
86
+ m[H##ESIZE(e)] = addr; \
87
+ } \
88
} \
89
mve_advance_vpt(env); \
27
}
90
}
28
}
91
29
@@ -XXX,XX +XXX,XX @@ static const char *imx6_analog_reg_name(uint32_t reg)
92
/* We know here TYPE is unsigned so always the same as the offset type */
30
case USB_ANALOG_DIGPROG:
93
-#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \
31
return "USB_ANALOG_DIGPROG";
94
+#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN, WB) \
32
default:
95
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
33
- sprintf(unknown, "%d ?", reg);
96
uint32_t base) \
34
+ sprintf(unknown, "%u ?", reg);
97
{ \
35
return unknown;
98
TYPE *d = vd; \
99
TYPE *m = vm; \
100
uint16_t mask = mve_element_mask(env); \
101
+ uint16_t eci_mask = mve_eci_mask(env); \
102
unsigned e; \
103
uint32_t addr; \
104
- for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
105
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \
106
+ if (!(eci_mask & 1)) { \
107
+ continue; \
108
+ } \
109
addr = ADDRFN(base, m[H##ESIZE(e)]); \
110
if (mask & 1) { \
111
cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
112
} \
113
+ if (WB) { \
114
+ m[H##ESIZE(e)] = addr; \
115
+ } \
116
} \
117
mve_advance_vpt(env); \
36
}
118
}
37
}
119
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
38
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_clk(IMX6CCMState *dev)
120
* accesses, controlled by the predicate mask for the relevant beat,
39
freq *= 20;
121
* and with a single 32-bit offset in the first of the two Qm elements.
122
* Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little).
123
+ * Address writeback happens on the odd beats and updates the address
124
+ * stored in the even-beat element.
125
*/
126
-#define DO_VLDR64_SG(OP, ADDRFN) \
127
+#define DO_VLDR64_SG(OP, ADDRFN, WB) \
128
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
129
uint32_t base) \
130
{ \
131
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
132
addr = ADDRFN(base, m[H4(e & ~1)]); \
133
addr += 4 * (e & 1); \
134
d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \
135
+ if (WB && (e & 1)) { \
136
+ m[H4(e & ~1)] = addr - 4; \
137
+ } \
138
} \
139
mve_advance_vpt(env); \
40
}
140
}
41
141
42
- DPRINTF("freq = %d\n", (uint32_t)freq);
142
-#define DO_VSTR64_SG(OP, ADDRFN) \
43
+ DPRINTF("freq = %u\n", (uint32_t)freq);
143
+#define DO_VSTR64_SG(OP, ADDRFN, WB) \
44
144
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
45
return freq;
145
uint32_t base) \
46
}
146
{ \
47
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd0_clk(IMX6CCMState *dev)
147
uint32_t *d = vd; \
48
freq = imx6_analog_get_pll2_clk(dev) * 18
148
uint32_t *m = vm; \
49
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD0_FRAC);
149
uint16_t mask = mve_element_mask(env); \
50
150
+ uint16_t eci_mask = mve_eci_mask(env); \
51
- DPRINTF("freq = %d\n", (uint32_t)freq);
151
unsigned e; \
52
+ DPRINTF("freq = %u\n", (uint32_t)freq);
152
uint32_t addr; \
53
153
- for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
54
return freq;
154
+ for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \
55
}
155
+ if (!(eci_mask & 1)) { \
56
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_pll2_pfd2_clk(IMX6CCMState *dev)
156
+ continue; \
57
freq = imx6_analog_get_pll2_clk(dev) * 18
157
+ } \
58
/ EXTRACT(dev->analog[CCM_ANALOG_PFD_528], PFD2_FRAC);
158
addr = ADDRFN(base, m[H4(e & ~1)]); \
59
159
addr += 4 * (e & 1); \
60
- DPRINTF("freq = %d\n", (uint32_t)freq);
160
if (mask & 1) { \
61
+ DPRINTF("freq = %u\n", (uint32_t)freq);
161
cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \
62
162
} \
63
return freq;
163
+ if (WB && (e & 1)) { \
64
}
164
+ m[H4(e & ~1)] = addr - 4; \
65
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_analog_get_periph_clk(IMX6CCMState *dev)
165
+ } \
66
break;
166
} \
167
mve_advance_vpt(env); \
67
}
168
}
68
169
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
69
- DPRINTF("freq = %d\n", (uint32_t)freq);
170
#define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2))
70
+ DPRINTF("freq = %u\n", (uint32_t)freq);
171
#define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3))
71
172
72
return freq;
173
-DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD)
73
}
174
-DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD)
74
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ahb_clk(IMX6CCMState *dev)
175
-DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD)
75
freq = imx6_analog_get_periph_clk(dev)
176
+DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD, false)
76
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], AHB_PODF));
177
+DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD, false)
77
178
+DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD, false)
78
- DPRINTF("freq = %d\n", (uint32_t)freq);
179
79
+ DPRINTF("freq = %u\n", (uint32_t)freq);
180
-DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD)
80
181
-DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD)
81
return freq;
182
-DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD)
82
}
183
-DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD)
83
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_ipg_clk(IMX6CCMState *dev)
184
-DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD)
84
freq = imx6_ccm_get_ahb_clk(dev)
185
-DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD)
85
/ (1 + EXTRACT(dev->ccm[CCM_CBCDR], IPG_PODF));
186
-DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD)
86
187
+DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD, false)
87
- DPRINTF("freq = %d\n", (uint32_t)freq);
188
+DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD, false)
88
+ DPRINTF("freq = %u\n", (uint32_t)freq);
189
+DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD, false)
89
190
+DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD, false)
90
return freq;
191
+DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD, false)
91
}
192
+DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, false)
92
@@ -XXX,XX +XXX,XX @@ static uint64_t imx6_ccm_get_per_clk(IMX6CCMState *dev)
193
+DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD, false)
93
freq = imx6_ccm_get_ipg_clk(dev)
194
94
/ (1 + EXTRACT(dev->ccm[CCM_CSCMR1], PERCLK_PODF));
195
-DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH)
95
196
-DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH)
96
- DPRINTF("freq = %d\n", (uint32_t)freq);
197
-DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH)
97
+ DPRINTF("freq = %u\n", (uint32_t)freq);
198
-DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW)
98
199
-DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD)
99
return freq;
200
+DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH, false)
100
}
201
+DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH, false)
101
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
202
+DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH, false)
102
break;
203
+DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW, false)
103
}
204
+DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD, false)
104
205
105
- DPRINTF("Clock = %d) = %d\n", clock, freq);
206
-DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD)
106
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
207
-DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD)
107
208
-DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD)
108
return freq;
209
-DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD)
109
}
210
-DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD)
110
diff --git a/hw/misc/imx6_src.c b/hw/misc/imx6_src.c
211
-DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD)
212
-DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD)
213
+DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD, false)
214
+DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD, false)
215
+DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD, false)
216
+DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD, false)
217
+DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD, false)
218
+DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD, false)
219
+DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD, false)
220
221
-DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH)
222
-DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH)
223
-DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW)
224
-DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD)
225
+DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH, false)
226
+DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH, false)
227
+DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW, false)
228
+DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD, false)
229
+
230
+DO_VLDR_SG(vldrw_sg_wb_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, true)
231
+DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true)
232
+DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true)
233
+DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true)
234
235
/*
236
* The mergemask(D, R, M) macro performs the operation "*D = R" but
237
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
111
index XXXXXXX..XXXXXXX 100644
238
index XXXXXXX..XXXXXXX 100644
112
--- a/hw/misc/imx6_src.c
239
--- a/target/arm/translate-mve.c
113
+++ b/hw/misc/imx6_src.c
240
+++ b/target/arm/translate-mve.c
114
@@ -XXX,XX +XXX,XX @@ static const char *imx6_src_reg_name(uint32_t reg)
241
@@ -XXX,XX +XXX,XX @@ static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)
115
case SRC_GPR10:
242
116
return "SRC_GPR10";
243
#undef F
117
default:
244
118
- sprintf(unknown, "%d ?", reg);
245
+static bool do_ldst_sg_imm(DisasContext *s, arg_vldst_sg_imm *a,
119
+ sprintf(unknown, "%u ?", reg);
246
+ MVEGenLdStSGFn *fn, unsigned msize)
120
return unknown;
247
+{
121
}
248
+ uint32_t offset;
122
}
249
+ TCGv_ptr qd, qm;
250
+
251
+ if (!dc_isar_feature(aa32_mve, s) ||
252
+ !mve_check_qreg_bank(s, a->qd | a->qm) ||
253
+ !fn) {
254
+ return false;
255
+ }
256
+
257
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
258
+ return true;
259
+ }
260
+
261
+ offset = a->imm << msize;
262
+ if (!a->a) {
263
+ offset = -offset;
264
+ }
265
+
266
+ qd = mve_qreg_ptr(a->qd);
267
+ qm = mve_qreg_ptr(a->qm);
268
+ fn(cpu_env, qd, qm, tcg_constant_i32(offset));
269
+ tcg_temp_free_ptr(qd);
270
+ tcg_temp_free_ptr(qm);
271
+ mve_update_eci(s);
272
+ return true;
273
+}
274
+
275
+static bool trans_VLDRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
276
+{
277
+ static MVEGenLdStSGFn * const fns[] = {
278
+ gen_helper_mve_vldrw_sg_uw,
279
+ gen_helper_mve_vldrw_sg_wb_uw,
280
+ };
281
+ if (a->qd == a->qm) {
282
+ return false; /* UNPREDICTABLE */
283
+ }
284
+ return do_ldst_sg_imm(s, a, fns[a->w], MO_32);
285
+}
286
+
287
+static bool trans_VLDRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
288
+{
289
+ static MVEGenLdStSGFn * const fns[] = {
290
+ gen_helper_mve_vldrd_sg_ud,
291
+ gen_helper_mve_vldrd_sg_wb_ud,
292
+ };
293
+ if (a->qd == a->qm) {
294
+ return false; /* UNPREDICTABLE */
295
+ }
296
+ return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
297
+}
298
+
299
+static bool trans_VSTRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
300
+{
301
+ static MVEGenLdStSGFn * const fns[] = {
302
+ gen_helper_mve_vstrw_sg_uw,
303
+ gen_helper_mve_vstrw_sg_wb_uw,
304
+ };
305
+ return do_ldst_sg_imm(s, a, fns[a->w], MO_32);
306
+}
307
+
308
+static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
309
+{
310
+ static MVEGenLdStSGFn * const fns[] = {
311
+ gen_helper_mve_vstrd_sg_ud,
312
+ gen_helper_mve_vstrd_sg_wb_ud,
313
+ };
314
+ return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
315
+}
316
+
317
static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
318
{
319
TCGv_ptr qd;
123
--
320
--
124
2.20.1
321
2.20.1
125
322
126
323
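The writeback rule the scatter-gather patch above adds can be pictured with this standalone sketch of the 32-bit scatter-store case: the store itself is predicated, but the writeback of the updated address into Qm is not; it only has to skip beats already executed under ECI. store32() is a hypothetical stand-in for cpu_stl_data_ra().

#include <stdint.h>

extern void store32(uint32_t addr, uint32_t val);  /* assumed accessor */

static void vstrw_sg_wb_model(const uint32_t d[4], uint32_t m[4],
                              uint32_t base, uint16_t mask,
                              uint16_t eci_mask)
{
    for (int e = 0; e < 4; e++, mask >>= 4, eci_mask >>= 4) {
        if (!(eci_mask & 1)) {
            continue;             /* skipped beat: no store, no writeback */
        }
        uint32_t addr = base + m[e];
        if (mask & 1) {
            store32(addr, d[e]);  /* store only in unpredicated lanes */
        }
        m[e] = addr;              /* writeback regardless of predication */
    }
}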
Implement the new-in-v8.1M VLDR/VSTR variants which directly
read or write FP system registers to memory.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
---
 target/arm/vfp.decode          | 14 ++++
 target/arm/translate-vfp.c.inc | 91 ++++++++++++++++++++++++++++++
 2 files changed, 105 insertions(+)

Implement the MVE interleaving load/store functions VLD2, VLD4, VST2
and VST4.  VLD2 loads 16 bytes of data from memory and writes to 2
consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes
to 4 consecutive Qregs.  The 'pattern' field in the encoding
determines the offset into memory which is accessed and also which
elements in the Qregs are written to.  (The intention is that a
sequence of four consecutive VLD4 with different pattern values
performs a complete de-interleaving load of 64 bytes into all
elements of the 4 Qregs.)  VST2 and VST4 do the same, but for stores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  48 ++++++
 target/arm/mve.decode      |  11 ++
 target/arm/mve_helper.c    | 342 +++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  94 ++++++++++
 4 files changed, 495 insertions(+)
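A quick self-contained check of the VLD4 'pattern' claim in the message above, using the word-offset tables from the DO_VLD4B instantiations in the patch body (pattern 0 uses offsets {0,1,10,11}, pattern 1 {2,3,12,13}, pattern 2 {4,5,14,15}, pattern 3 {6,7,8,9}): running patterns 0..3 touches each of the 16 words of a 64-byte region exactly once.

#include <assert.h>

int main(void)
{
    static const int off[4][4] = {
        {0, 1, 10, 11}, {2, 3, 12, 13}, {4, 5, 14, 15}, {6, 7, 8, 9},
    };
    int seen[16] = {0};
    for (int pat = 0; pat < 4; pat++) {
        for (int beat = 0; beat < 4; beat++) {
            seen[off[pat][beat]]++;   /* one 32-bit access per beat */
        }
    }
    for (int w = 0; w < 16; w++) {
        assert(seen[w] == 1);         /* every word covered exactly once */
    }
    return 0;
}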
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
20
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
13
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/vfp.decode
22
--- a/target/arm/helper-mve.h
15
+++ b/target/arm/vfp.decode
23
+++ b/target/arm/helper-mve.h
16
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR_hp ---- 1101 u:1 .0 l:1 rn:4 .... 1001 imm:8 vd=%vd_sp
24
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
17
VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 vd=%vd_sp
25
DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
18
VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 vd=%vd_dp
26
DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
19
27
20
+# M-profile VLDR/VSTR to sysreg
28
+DEF_HELPER_FLAGS_3(mve_vld20b, TCG_CALL_NO_WG, void, env, i32, i32)
21
+%vldr_sysreg 22:1 13:3
29
+DEF_HELPER_FLAGS_3(mve_vld20h, TCG_CALL_NO_WG, void, env, i32, i32)
22
+%imm7_0x4 0:7 !function=times_4
30
+DEF_HELPER_FLAGS_3(mve_vld20w, TCG_CALL_NO_WG, void, env, i32, i32)
23
+
31
+
24
+&vldr_sysreg rn reg imm a w p
32
+DEF_HELPER_FLAGS_3(mve_vld21b, TCG_CALL_NO_WG, void, env, i32, i32)
25
+@vldr_sysreg .... ... . a:1 . . . rn:4 ... . ... .. ....... \
33
+DEF_HELPER_FLAGS_3(mve_vld21h, TCG_CALL_NO_WG, void, env, i32, i32)
26
+ reg=%vldr_sysreg imm=%imm7_0x4 &vldr_sysreg
34
+DEF_HELPER_FLAGS_3(mve_vld21w, TCG_CALL_NO_WG, void, env, i32, i32)
27
+
35
+
28
+# P=0 W=0 is SEE "Related encodings", so split into two patterns
36
+DEF_HELPER_FLAGS_3(mve_vld40b, TCG_CALL_NO_WG, void, env, i32, i32)
29
+VLDR_sysreg ---- 110 1 . . w:1 1 .... ... 0 111 11 ....... @vldr_sysreg p=1
37
+DEF_HELPER_FLAGS_3(mve_vld40h, TCG_CALL_NO_WG, void, env, i32, i32)
30
+VLDR_sysreg ---- 110 0 . . 1 1 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
38
+DEF_HELPER_FLAGS_3(mve_vld40w, TCG_CALL_NO_WG, void, env, i32, i32)
31
+VSTR_sysreg ---- 110 1 . . w:1 0 .... ... 0 111 11 ....... @vldr_sysreg p=1
39
+
32
+VSTR_sysreg ---- 110 0 . . 1 0 .... ... 0 111 11 ....... @vldr_sysreg p=0 w=1
40
+DEF_HELPER_FLAGS_3(mve_vld41b, TCG_CALL_NO_WG, void, env, i32, i32)
33
+
41
+DEF_HELPER_FLAGS_3(mve_vld41h, TCG_CALL_NO_WG, void, env, i32, i32)
34
# We split the load/store multiple up into two patterns to avoid
42
+DEF_HELPER_FLAGS_3(mve_vld41w, TCG_CALL_NO_WG, void, env, i32, i32)
35
# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
43
+
36
# grouping:
44
+DEF_HELPER_FLAGS_3(mve_vld42b, TCG_CALL_NO_WG, void, env, i32, i32)
37
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
45
+DEF_HELPER_FLAGS_3(mve_vld42h, TCG_CALL_NO_WG, void, env, i32, i32)
46
+DEF_HELPER_FLAGS_3(mve_vld42w, TCG_CALL_NO_WG, void, env, i32, i32)
47
+
48
+DEF_HELPER_FLAGS_3(mve_vld43b, TCG_CALL_NO_WG, void, env, i32, i32)
49
+DEF_HELPER_FLAGS_3(mve_vld43h, TCG_CALL_NO_WG, void, env, i32, i32)
50
+DEF_HELPER_FLAGS_3(mve_vld43w, TCG_CALL_NO_WG, void, env, i32, i32)
51
+
52
+DEF_HELPER_FLAGS_3(mve_vst20b, TCG_CALL_NO_WG, void, env, i32, i32)
53
+DEF_HELPER_FLAGS_3(mve_vst20h, TCG_CALL_NO_WG, void, env, i32, i32)
54
+DEF_HELPER_FLAGS_3(mve_vst20w, TCG_CALL_NO_WG, void, env, i32, i32)
55
+
56
+DEF_HELPER_FLAGS_3(mve_vst21b, TCG_CALL_NO_WG, void, env, i32, i32)
57
+DEF_HELPER_FLAGS_3(mve_vst21h, TCG_CALL_NO_WG, void, env, i32, i32)
58
+DEF_HELPER_FLAGS_3(mve_vst21w, TCG_CALL_NO_WG, void, env, i32, i32)
59
+
60
+DEF_HELPER_FLAGS_3(mve_vst40b, TCG_CALL_NO_WG, void, env, i32, i32)
61
+DEF_HELPER_FLAGS_3(mve_vst40h, TCG_CALL_NO_WG, void, env, i32, i32)
62
+DEF_HELPER_FLAGS_3(mve_vst40w, TCG_CALL_NO_WG, void, env, i32, i32)
63
+
64
+DEF_HELPER_FLAGS_3(mve_vst41b, TCG_CALL_NO_WG, void, env, i32, i32)
65
+DEF_HELPER_FLAGS_3(mve_vst41h, TCG_CALL_NO_WG, void, env, i32, i32)
66
+DEF_HELPER_FLAGS_3(mve_vst41w, TCG_CALL_NO_WG, void, env, i32, i32)
67
+
68
+DEF_HELPER_FLAGS_3(mve_vst42b, TCG_CALL_NO_WG, void, env, i32, i32)
69
+DEF_HELPER_FLAGS_3(mve_vst42h, TCG_CALL_NO_WG, void, env, i32, i32)
70
+DEF_HELPER_FLAGS_3(mve_vst42w, TCG_CALL_NO_WG, void, env, i32, i32)
71
+
72
+DEF_HELPER_FLAGS_3(mve_vst43b, TCG_CALL_NO_WG, void, env, i32, i32)
73
+DEF_HELPER_FLAGS_3(mve_vst43h, TCG_CALL_NO_WG, void, env, i32, i32)
74
+DEF_HELPER_FLAGS_3(mve_vst43w, TCG_CALL_NO_WG, void, env, i32, i32)
75
+
76
DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
77
78
DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
79
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
38
index XXXXXXX..XXXXXXX 100644
80
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate-vfp.c.inc
81
--- a/target/arm/mve.decode
40
+++ b/target/arm/translate-vfp.c.inc
82
+++ b/target/arm/mve.decode
41
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
83
@@ -XXX,XX +XXX,XX @@
42
return true;
84
&vabav qn qm rda size
85
&vldst_sg qd qm rn size msize os
86
&vldst_sg_imm qd qm a w imm
87
+&vldst_il qd rn size pat w
88
89
# scatter-gather memory size is in bits 6:4
90
%sg_msize 6:1 4:1
91
@@ -XXX,XX +XXX,XX @@
92
@vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \
93
qd=%qd qm=%qn
94
95
+# Deinterleaving load/interleaving store
96
+@vldst_il .... .... .. w:1 . rn:4 .... ... size:2 pat:2 ..... &vldst_il \
97
+ qd=%qd
98
+
99
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
100
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
101
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
102
@@ -XXX,XX +XXX,XX @@ VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm
103
VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm
104
VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm
105
106
+# deinterleaving loads/interleaving stores
107
+VLD2 1111 1100 1 .. 1 .... ... 1 111 .. .. 00000 @vldst_il
108
+VLD4 1111 1100 1 .. 1 .... ... 1 111 .. .. 00001 @vldst_il
109
+VST2 1111 1100 1 .. 0 .... ... 1 111 .. .. 00000 @vldst_il
110
+VST4 1111 1100 1 .. 0 .... ... 1 111 .. .. 00001 @vldst_il
111
+
112
# Moves between 2 32-bit vector lanes and 2 general purpose registers
113
VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
114
VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
115
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/mve_helper.c
118
+++ b/target/arm/mve_helper.c
119
@@ -XXX,XX +XXX,XX @@ DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true)
120
DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true)
121
DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true)
122
123
+/*
124
+ * Deinterleaving loads/interleaving stores.
125
+ *
126
+ * For these helpers we are passed the index of the first Qreg
127
+ * (VLD2/VST2 will also access Qn+1, VLD4/VST4 access Qn .. Qn+3)
128
+ * and the value of the base address register Rn.
129
+ * The helpers are specialized for pattern and element size, so
130
+ * for instance vld42h is VLD4 with pattern 2, element size MO_16.
131
+ *
132
+ * These insns are beatwise but not predicated, so we must honour ECI,
133
+ * but need not look at mve_element_mask().
134
+ *
135
+ * The pseudocode implements these insns with multiple memory accesses
136
+ * of the element size, but rules R_VVVG and R_FXDM permit us to make
137
+ * one 32-bit memory access per beat.
138
+ */
139
+#define DO_VLD4B(OP, O1, O2, O3, O4) \
140
+ void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
141
+ uint32_t base) \
142
+ { \
143
+ int beat, e; \
144
+ uint16_t mask = mve_eci_mask(env); \
145
+ static const uint8_t off[4] = { O1, O2, O3, O4 }; \
146
+ uint32_t addr, data; \
147
+ for (beat = 0; beat < 4; beat++, mask >>= 4) { \
148
+ if ((mask & 1) == 0) { \
149
+ /* ECI says skip this beat */ \
150
+ continue; \
151
+ } \
152
+ addr = base + off[beat] * 4; \
153
+ data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
154
+ for (e = 0; e < 4; e++, data >>= 8) { \
155
+ uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \
156
+ qd[H1(off[beat])] = data; \
157
+ } \
158
+ } \
159
+ }
160
+
161
+#define DO_VLD4H(OP, O1, O2) \
162
+ void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
163
+ uint32_t base) \
164
+ { \
165
+ int beat; \
166
+ uint16_t mask = mve_eci_mask(env); \
167
+ static const uint8_t off[4] = { O1, O1, O2, O2 }; \
168
+ uint32_t addr, data; \
169
+ int y; /* y counts 0 2 0 2 */ \
170
+ uint16_t *qd; \
171
+ for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \
172
+ if ((mask & 1) == 0) { \
173
+ /* ECI says skip this beat */ \
174
+ continue; \
175
+ } \
176
+ addr = base + off[beat] * 8 + (beat & 1) * 4; \
177
+ data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
178
+ qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \
179
+ qd[H2(off[beat])] = data; \
180
+ data >>= 16; \
181
+ qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \
182
+ qd[H2(off[beat])] = data; \
183
+ } \
184
+ }
185
+
186
+#define DO_VLD4W(OP, O1, O2, O3, O4) \
187
+ void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
188
+ uint32_t base) \
189
+ { \
190
+ int beat; \
191
+ uint16_t mask = mve_eci_mask(env); \
192
+ static const uint8_t off[4] = { O1, O2, O3, O4 }; \
193
+ uint32_t addr, data; \
194
+ uint32_t *qd; \
195
+ int y; \
196
+ for (beat = 0; beat < 4; beat++, mask >>= 4) { \
197
+ if ((mask & 1) == 0) { \
198
+ /* ECI says skip this beat */ \
199
+ continue; \
200
+ } \
201
+ addr = base + off[beat] * 4; \
202
+ data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
203
+ y = (beat + (O1 & 2)) & 3; \
204
+ qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \
205
+ qd[H4(off[beat] >> 2)] = data; \
206
+ } \
207
+ }
208
+
209
+DO_VLD4B(vld40b, 0, 1, 10, 11)
210
+DO_VLD4B(vld41b, 2, 3, 12, 13)
211
+DO_VLD4B(vld42b, 4, 5, 14, 15)
212
+DO_VLD4B(vld43b, 6, 7, 8, 9)
213
+
214
+DO_VLD4H(vld40h, 0, 5)
215
+DO_VLD4H(vld41h, 1, 6)
216
+DO_VLD4H(vld42h, 2, 7)
217
+DO_VLD4H(vld43h, 3, 4)
218
+
219
+DO_VLD4W(vld40w, 0, 1, 10, 11)
220
+DO_VLD4W(vld41w, 2, 3, 12, 13)
221
+DO_VLD4W(vld42w, 4, 5, 14, 15)
222
+DO_VLD4W(vld43w, 6, 7, 8, 9)
223
+
224
+#define DO_VLD2B(OP, O1, O2, O3, O4) \
225
+ void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
226
+ uint32_t base) \
227
+ { \
228
+ int beat, e; \
229
+ uint16_t mask = mve_eci_mask(env); \
230
+ static const uint8_t off[4] = { O1, O2, O3, O4 }; \
231
+ uint32_t addr, data; \
232
+ uint8_t *qd; \
233
+ for (beat = 0; beat < 4; beat++, mask >>= 4) { \
234
+ if ((mask & 1) == 0) { \
235
+ /* ECI says skip this beat */ \
236
+ continue; \
237
+ } \
238
+ addr = base + off[beat] * 2; \
239
+ data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
240
+ for (e = 0; e < 4; e++, data >>= 8) { \
241
+ qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \
242
+ qd[H1(off[beat] + (e >> 1))] = data; \
243
+ } \
244
+ } \
245
+ }
246
+
247
+#define DO_VLD2H(OP, O1, O2, O3, O4) \
248
+ void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
249
+ uint32_t base) \
250
+ { \
251
+ int beat; \
252
+ uint16_t mask = mve_eci_mask(env); \
253
+ static const uint8_t off[4] = { O1, O2, O3, O4 }; \
254
+ uint32_t addr, data; \
255
+ int e; \
256
+ uint16_t *qd; \
257
+ for (beat = 0; beat < 4; beat++, mask >>= 4) { \
258
+ if ((mask & 1) == 0) { \
259
+ /* ECI says skip this beat */ \
260
+ continue; \
261
+ } \
262
+ addr = base + off[beat] * 4; \
263
+ data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
264
+ for (e = 0; e < 2; e++, data >>= 16) { \
265
+ qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \
266
+ qd[H2(off[beat])] = data; \
267
+ } \
268
+ } \
269
+ }
270
+
271
+#define DO_VLD2W(OP, O1, O2, O3, O4) \
272
+ void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
273
+ uint32_t base) \
274
+ { \
275
+ int beat; \
276
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat]; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \
+            qd[H4(off[beat] >> 3)] = data; \
+        } \
+    }
+
+DO_VLD2B(vld20b, 0, 2, 12, 14)
+DO_VLD2B(vld21b, 4, 6, 8, 10)
+
+DO_VLD2H(vld20h, 0, 1, 6, 7)
+DO_VLD2H(vld21h, 2, 3, 4, 5)
+
+DO_VLD2W(vld20w, 0, 4, 24, 28)
+DO_VLD2W(vld21w, 8, 12, 16, 20)
+
+#define DO_VST4B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = 0; \
+            for (e = 3; e >= 0; e--) { \
+                uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \
+                data = (data << 8) | qd[H1(off[beat])]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST4H(OP, O1, O2) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O1, O2, O2 }; \
+        uint32_t addr, data; \
+        int y; /* y counts 0 2 0 2 */ \
+        uint16_t *qd; \
+        for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 8 + (beat & 1) * 4; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \
+            data = qd[H2(off[beat])]; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \
+            data |= qd[H2(off[beat])] << 16; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST4W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        int y; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            y = (beat + (O1 & 2)) & 3; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \
+            data = qd[H4(off[beat] >> 2)]; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+DO_VST4B(vst40b, 0, 1, 10, 11)
+DO_VST4B(vst41b, 2, 3, 12, 13)
+DO_VST4B(vst42b, 4, 5, 14, 15)
+DO_VST4B(vst43b, 6, 7, 8, 9)
+
+DO_VST4H(vst40h, 0, 5)
+DO_VST4H(vst41h, 1, 6)
+DO_VST4H(vst42h, 2, 7)
+DO_VST4H(vst43h, 3, 4)
+
+DO_VST4W(vst40w, 0, 1, 10, 11)
+DO_VST4W(vst41w, 2, 3, 12, 13)
+DO_VST4W(vst42w, 4, 5, 14, 15)
+DO_VST4W(vst43w, 6, 7, 8, 9)
+
+#define DO_VST2B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint8_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 2; \
+            data = 0; \
+            for (e = 3; e >= 0; e--) { \
+                qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \
+                data = (data << 8) | qd[H1(off[beat] + (e >> 1))]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST2H(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        int e; \
+        uint16_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = 0; \
+            for (e = 1; e >= 0; e--) { \
+                qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \
+                data = (data << 16) | qd[H2(off[beat])]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST2W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat]; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \
+            data = qd[H4(off[beat] >> 3)]; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+DO_VST2B(vst20b, 0, 2, 12, 14)
+DO_VST2B(vst21b, 4, 6, 8, 10)
+
+DO_VST2H(vst20h, 0, 1, 6, 7)
+DO_VST2H(vst21h, 2, 3, 4, 5)
+
+DO_VST2W(vst20w, 0, 4, 24, 28)
+DO_VST2W(vst21w, 8, 12, 16, 20)
+
 /*
  * The mergemask(D, R, M) macro performs the operation "*D = R" but
  * storing only the bytes which correspond to 1 bits in M,
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static inline int vidup_imm(DisasContext *s, int x)

 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenLdStIlFn(TCGv_ptr, TCGv_i32, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
     return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
 }

+static bool do_vldst_il(DisasContext *s, arg_vldst_il *a, MVEGenLdStIlFn *fn,
+                        int addrinc)
+{
+    TCGv_i32 rn;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd) ||
+        !fn || (a->rn == 13 && a->w) || a->rn == 15) {
+        /* Variously UNPREDICTABLE or UNDEF or related-encoding */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    rn = load_reg(s, a->rn);
+    /*
+     * We pass the index of Qd, not a pointer, because the helper must
+     * access multiple Q registers starting at Qd and working up.
+     */
+    fn(cpu_env, tcg_constant_i32(a->qd), rn);
+
+    if (a->w) {
+        tcg_gen_addi_i32(rn, rn, addrinc);
+        store_reg(s, a->rn, rn);
+    } else {
+        tcg_temp_free_i32(rn);
+    }
+    mve_update_and_store_eci(s);
+    return true;
+}
+
+/* This macro is just to make the arrays more compact in these functions */
+#define F(N) gen_helper_mve_##N
+
+static bool trans_VLD2(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vld20b), F(vld20h), F(vld20w), NULL, },
+        { F(vld21b), F(vld21h), F(vld21w), NULL, },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL },
+    };
+    if (a->qd > 6) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 32);
+}
+
+static bool trans_VLD4(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vld40b), F(vld40h), F(vld40w), NULL, },
+        { F(vld41b), F(vld41h), F(vld41w), NULL, },
+        { F(vld42b), F(vld42h), F(vld42w), NULL, },
+        { F(vld43b), F(vld43h), F(vld43w), NULL, },
+    };
+    if (a->qd > 4) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 64);
+}
+
+static bool trans_VST2(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vst20b), F(vst20h), F(vst20w), NULL, },
+        { F(vst21b), F(vst21h), F(vst21w), NULL, },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL },
+    };
+    if (a->qd > 6) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 32);
+}
+
+static bool trans_VST4(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vst40b), F(vst40h), F(vst40w), NULL, },
+        { F(vst41b), F(vst41h), F(vst41w), NULL, },
+        { F(vst42b), F(vst42h), F(vst42w), NULL, },
+        { F(vst43b), F(vst43h), F(vst43w), NULL, },
+    };
+    if (a->qd > 4) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 64);
+}
+
+#undef F
+
 static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 {
     TCGv_ptr qd;
--
2.20.1


 }

+static void fp_sysreg_to_memory(DisasContext *s, void *opaque, TCGv_i32 value)
+{
+    arg_vldr_sysreg *a = opaque;
+    uint32_t offset = a->imm;
+    TCGv_i32 addr;
+
+    if (!a->a) {
+        offset = -offset;
+    }
+
+    addr = load_reg(s, a->rn);
+    if (a->p) {
+        tcg_gen_addi_i32(addr, addr, offset);
+    }
+
+    if (s->v8m_stackcheck && a->rn == 13 && a->w) {
+        gen_helper_v8m_stackcheck(cpu_env, addr);
+    }
+
+    gen_aa32_st_i32(s, value, addr, get_mem_index(s),
+                    MO_UL | MO_ALIGN | s->be_data);
+    tcg_temp_free_i32(value);
+
+    if (a->w) {
+        /* writeback */
+        if (!a->p) {
+            tcg_gen_addi_i32(addr, addr, offset);
+        }
+        store_reg(s, a->rn, addr);
+    } else {
+        tcg_temp_free_i32(addr);
+    }
+}
+
+static TCGv_i32 memory_to_fp_sysreg(DisasContext *s, void *opaque)
+{
+    arg_vldr_sysreg *a = opaque;
+    uint32_t offset = a->imm;
+    TCGv_i32 addr;
+    TCGv_i32 value = tcg_temp_new_i32();
+
+    if (!a->a) {
+        offset = -offset;
+    }
+
+    addr = load_reg(s, a->rn);
+    if (a->p) {
+        tcg_gen_addi_i32(addr, addr, offset);
+    }
+
+    if (s->v8m_stackcheck && a->rn == 13 && a->w) {
+        gen_helper_v8m_stackcheck(cpu_env, addr);
+    }
+
+    gen_aa32_ld_i32(s, value, addr, get_mem_index(s),
+                    MO_UL | MO_ALIGN | s->be_data);
+
+    if (a->w) {
+        /* writeback */
+        if (!a->p) {
+            tcg_gen_addi_i32(addr, addr, offset);
+        }
+        store_reg(s, a->rn, addr);
+    } else {
+        tcg_temp_free_i32(addr);
+    }
+    return value;
+}
+
+static bool trans_VLDR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
+{
+    if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+        return false;
+    }
+    if (a->rn == 15) {
+        return false;
+    }
+    return gen_M_fp_sysreg_write(s, a->reg, memory_to_fp_sysreg, a);
+}
+
+static bool trans_VSTR_sysreg(DisasContext *s, arg_vldr_sysreg *a)
+{
+    if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
+        return false;
+    }
+    if (a->rn == 15) {
+        return false;
+    }
+    return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_memory, a);
+}
+
 static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
 {
     TCGv_i32 tmp;
--
2.20.1

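A note on the beat-mask idiom that every interleaving helper above shares: mve_eci_mask() returns a 16-bit mask with four bits per beat, and each helper consumes it four bits at a time so that beats already completed before an exception-continuable instruction was resumed get skipped. A minimal standalone sketch of just that consumption loop (plain C, not QEMU code; the 0xfff0 value is only an example meaning "beat 0 already done"):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t mask = 0xfff0; /* example ECI mask: skip beat 0, run beats 1-3 */
        for (int beat = 0; beat < 4; beat++, mask >>= 4) {
            if ((mask & 1) == 0) {
                printf("beat %d skipped\n", beat);
                continue;
            }
            printf("beat %d executed\n", beat);
        }
        return 0;
    }
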
In v8.1M the PXN architecture extension adds a new PXN bit to the
MPU_RLAR registers, which forbids execution of code in the region
from a privileged mode.

This is another feature which is just in the generic "in v8.1M" set
and has no ID register field indicating its presence.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
     } else {
         uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
         uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
+        bool pxn = false;
+
+        if (arm_feature(env, ARM_FEATURE_V8_1M)) {
+            pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
+        }

         if (m_is_system_region(env, address)) {
             /* System space is always execute never */
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
     }

     *prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
-    if (*prot && !xn) {
+    if (*prot && !xn && !(pxn && !is_user)) {
         *prot |= PAGE_EXEC;
     }
     /* We don't need to look the attribute up in the MAIR0/MAIR1
--
2.20.1


We're about to make a code change to the sdiv and udiv helper
functions, so first fix their indentation and coding style.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(uxtb16)(uint32_t x)

 int32_t HELPER(sdiv)(int32_t num, int32_t den)
 {
-    if (den == 0)
-        return 0;
-    if (num == INT_MIN && den == -1)
-        return INT_MIN;
+    if (den == 0) {
+        return 0;
+    }
+    if (num == INT_MIN && den == -1) {
+        return INT_MIN;
+    }
     return num / den;
 }

 uint32_t HELPER(udiv)(uint32_t num, uint32_t den)
 {
-    if (den == 0)
-        return 0;
+    if (den == 0) {
+        return 0;
+    }
     return num / den;
 }
--
2.20.1

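For readers unfamiliar with the guest-visible side of the PXN patch above: a v8.1M guest opts in per MPU region by setting bit 4 of MPU_RLAR, after which privileged execution from that region faults even though XN is clear. A rough bare-metal sketch follows; the register addresses are the standard v8-M SCB ones (MPU_RNR 0xE000ED98, MPU_RBAR 0xE000ED9C, MPU_RLAR 0xE000EDA0), but treat the exact field encoding as an illustrative assumption rather than a tested configuration:

    #include <stdint.h>

    #define MPU_RNR  (*(volatile uint32_t *)0xE000ED98)
    #define MPU_RBAR (*(volatile uint32_t *)0xE000ED9C)
    #define MPU_RLAR (*(volatile uint32_t *)0xE000EDA0)

    /* Mark MPU region 0 privileged-execute-never (hypothetical limits) */
    static void set_region0_pxn(uint32_t base, uint32_t limit)
    {
        MPU_RNR  = 0;
        MPU_RBAR = base & ~0x1fu;   /* XN clear: unprivileged code may run */
        MPU_RLAR = (limit & ~0x1fu)
                 | (1u << 4)        /* PXN: privileged execution forbidden */
                 | 1u;              /* EN: enable the region */
    }
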
For v8.1M the architecture mandates that CPUs must provide at
least the "minimal RAS implementation" from the Reliability,
Availability and Serviceability extension. This consists of:
 * an ESB instruction which is a NOP
   -- since it is in the HINT space we need only add a comment
 * an RFSR register which will RAZ/WI
 * a RAZ/WI AIRCR.IESB bit
   -- the code which handles writes to AIRCR does not allow setting
      of RES0 bits, so we already treat this as RAZ/WI; add a comment
      noting that this is deliberate
 * minimal implementation of the RAS register block at 0xe0005000
   -- this will be in a subsequent commit
 * setting the ID_PFR0.RAS field to 0b0010
   -- we will do this when we add the Cortex-M55 CPU model

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
---
 target/arm/cpu.h | 14 ++++++++++++++
 target/arm/t32.decode | 4 ++++
 hw/intc/armv7m_nvic.c | 13 +++++++++++++
 3 files changed, 31 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
 FIELD(ID_MMFR4, CCIDX, 24, 4)
 FIELD(ID_MMFR4, EVT, 28, 4)

+FIELD(ID_PFR0, STATE0, 0, 4)
+FIELD(ID_PFR0, STATE1, 4, 4)
+FIELD(ID_PFR0, STATE2, 8, 4)
+FIELD(ID_PFR0, STATE3, 12, 4)
+FIELD(ID_PFR0, CSV2, 16, 4)
+FIELD(ID_PFR0, AMU, 20, 4)
+FIELD(ID_PFR0, DIT, 24, 4)
+FIELD(ID_PFR0, RAS, 28, 4)
+
 FIELD(ID_PFR1, PROGMOD, 0, 4)
 FIELD(ID_PFR1, SECURITY, 4, 4)
 FIELD(ID_PFR1, MPROGMOD, 8, 4)
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
 }

+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
+}
+
 static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
 # SEV 1111 0011 1010 1111 1000 0000 0000 0100
 # SEVL 1111 0011 1010 1111 1000 0000 0000 0101

+# For M-profile minimal-RAS ESB can be a NOP, which is the
+# default behaviour since it is in the hint space.
+# ESB 1111 0011 1010 1111 1000 0000 0001 0000
+
 # The canonical nop ends in 0000 0000, but the whole rest
 # of the space is "reserved hint, behaves as nop".
 NOP 1111 0011 1010 1111 1000 0000 ---- ----
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             return 0;
         }
         return cpu->env.v7m.sfar;
+    case 0xf04: /* RFSR */
+        if (!cpu_isar_feature(aa32_ras, cpu)) {
+            goto bad_offset;
+        }
+        /* We provide minimal-RAS only: RFSR is RAZ/WI */
+        return 0;
     case 0xf34: /* FPCCR */
         if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
             return 0;
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
                         R_V7M_AIRCR_PRIGROUP_SHIFT,
                         R_V7M_AIRCR_PRIGROUP_LENGTH);
         }
+        /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
         if (attrs.secure) {
             /* These bits are only writable by secure */
             cpu->env.v7m.aircr = value &
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         }
         break;
     }
+    case 0xf04: /* RFSR */
+        if (!cpu_isar_feature(aa32_ras, cpu)) {
+            goto bad_offset;
+        }
+        /* We provide minimal-RAS only: RFSR is RAZ/WI */
+        break;
     case 0xf34: /* FPCCR */
         if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
             /* Not all bits here are banked. */
--
2.20.1


Unlike A-profile, for M-profile the UDIV and SDIV insns can be
configured to raise an exception on division by zero, using the CCR
DIV_0_TRP bit.

Implement support for setting this bit by making the helper functions
raise the appropriate exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
---
 target/arm/cpu.h | 1 +
 target/arm/helper.h | 4 ++--
 target/arm/helper.c | 19 +++++++++++++++++--
 target/arm/m_helper.c | 4 ++++
 target/arm/translate.c | 4 ++--
 5 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
 #define EXCP_LSERR 21 /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
+#define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */

 #define ARMV7M_EXCP_RESET 1
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(add_saturate, i32, env, i32, i32)
 DEF_HELPER_3(sub_saturate, i32, env, i32, i32)
 DEF_HELPER_3(add_usaturate, i32, env, i32, i32)
 DEF_HELPER_3(sub_usaturate, i32, env, i32, i32)
-DEF_HELPER_FLAGS_2(sdiv, TCG_CALL_NO_RWG_SE, s32, s32, s32)
-DEF_HELPER_FLAGS_2(udiv, TCG_CALL_NO_RWG_SE, i32, i32, i32)
+DEF_HELPER_FLAGS_3(sdiv, TCG_CALL_NO_RWG, s32, env, s32, s32)
+DEF_HELPER_FLAGS_3(udiv, TCG_CALL_NO_RWG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_1(rbit, TCG_CALL_NO_RWG_SE, i32, i32)

 #define PAS_OP(pfx) \
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sxtb16)(uint32_t x)
     return res;
 }

+static void handle_possible_div0_trap(CPUARMState *env, uintptr_t ra)
+{
+    /*
+     * Take a division-by-zero exception if necessary; otherwise return
+     * to get the usual non-trapping division behaviour (result of 0)
+     */
+    if (arm_feature(env, ARM_FEATURE_M)
+        && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
+        raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, ra);
+    }
+}
+
 uint32_t HELPER(uxtb16)(uint32_t x)
 {
     uint32_t res;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(uxtb16)(uint32_t x)
     return res;
 }

-int32_t HELPER(sdiv)(int32_t num, int32_t den)
+int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
 {
     if (den == 0) {
+        handle_possible_div0_trap(env, GETPC());
         return 0;
     }
     if (num == INT_MIN && den == -1) {
@@ -XXX,XX +XXX,XX @@ int32_t HELPER(sdiv)(int32_t num, int32_t den)
     return num / den;
 }

-uint32_t HELPER(udiv)(uint32_t num, uint32_t den)
+uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den)
 {
     if (den == 0) {
+        handle_possible_div0_trap(env, GETPC());
         return 0;
     }
     return num / den;
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(int idx)
         [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
         [EXCP_LSERR] = "v8M LSERR UsageFault",
         [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
+        [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
     };

     if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
         break;
+    case EXCP_DIVBYZERO:
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_DIVBYZERO_MASK;
+        break;
     case EXCP_SWI:
         /* The PC already points to the next instruction. */
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_div(DisasContext *s, arg_rrr *a, bool u)
     t1 = load_reg(s, a->rn);
     t2 = load_reg(s, a->rm);
     if (u) {
-        gen_helper_udiv(t1, t1, t2);
+        gen_helper_udiv(t1, cpu_env, t1, t2);
     } else {
-        gen_helper_sdiv(t1, t1, t2);
+        gen_helper_sdiv(t1, cpu_env, t1, t2);
     }
     tcg_temp_free_i32(t2);
     store_reg(s, a->rd, t1);
--
2.20.1

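To see the new trapping behaviour from the guest's point of view: once an M-profile guest sets CCR.DIV_0_TRP, a divide by zero takes a UsageFault with CFSR.DIVBYZERO set instead of quietly returning 0. A rough bare-metal sketch (CCR at its architectural address 0xE000ED14, DIV_0_TRP at bit 4; that the compiler emits the hardware UDIV for the division is an assumption about your toolchain):

    #include <stdint.h>

    #define SCB_CCR       (*(volatile uint32_t *)0xE000ED14)
    #define CCR_DIV_0_TRP (1u << 4)

    volatile uint32_t num = 7, den = 0;

    int main(void)
    {
        SCB_CCR |= CCR_DIV_0_TRP; /* divide-by-zero now raises UsageFault */
        return num / den;         /* compiled to UDIV: traps rather than yielding 0 */
    }
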
From: Kunkun Jiang <jiangkunkun@huawei.com>

According to the SMMUv3 spec, the SPAN field of a Level1 Stream Table
Descriptor is 5 bits ([4:0]).

Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
     return hi << 32 | lo;
 }

-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))

 #endif
--
2.20.1


From: Hamza Mahfooz <someguy@effective-light.com>

As per commit 5626f8c6d468 ("rcu: Add automatically released rcu_read_lock
variants"), RCU_READ_LOCK_GUARD() should be used instead of
rcu_read_{un}lock().

Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210727235201.11491-1-someguy@effective-light.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
     hwaddr xlat, len, doorbell_gpa;
     MemoryRegionSection mrs;
     MemoryRegion *mr;
-    int ret = 1;

     if (as == &address_space_memory) {
         return 0;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,

     /* MSI doorbell address is translated by an IOMMU */

-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
+
     mr = address_space_translate(as, address, &xlat, &len, true,
                                  MEMTXATTRS_UNSPECIFIED);
+
     if (!mr) {
-        goto unlock;
+        return 1;
     }
+
     mrs = memory_region_find(mr, xlat, 1);
+
     if (!mrs.mr) {
-        goto unlock;
+        return 1;
     }

     doorbell_gpa = mrs.offset_within_address_space;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,

     trace_kvm_arm_fixup_msi_route(address, doorbell_gpa);

-    ret = 0;
-
-unlock:
-    rcu_read_unlock();
-    return ret;
+    return 0;
 }

 int kvm_arch_add_msi_route_post(struct kvm_irq_routing_entry *route,
--
2.20.1

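The one-character SMMU fix above is easy to misread, so here is a standalone illustration of what actually changes: with a 4-bit extract, any SPAN value of 16 or more silently loses its top bit. The extract32() below is a local copy of the usual bitops helper, just for the demo:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    int main(void)
    {
        uint32_t word0 = 0x12; /* L1 stream table descriptor with SPAN = 18 */

        printf("4-bit SPAN: %u\n", extract32(word0, 0, 4)); /* 2: wrong */
        printf("5-bit SPAN: %u\n", extract32(word0, 0, 5)); /* 18: correct */
        return 0;
    }
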
From: Alex Chen <alex.chen@huawei.com>

We should use printf format specifier "%u" instead of "%d" for
argument of type "unsigned int".

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Message-id: 20201126111109.112238-5-alex.chen@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx6ul_ccm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx6ul_ccm.c
+++ b/hw/misc/imx6ul_ccm.c
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_ccm_reg_name(uint32_t reg)
     case CCM_CMEOR:
         return "CMEOR";
     default:
-        sprintf(unknown, "%d ?", reg);
+        sprintf(unknown, "%u ?", reg);
         return unknown;
     }
 }
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_analog_reg_name(uint32_t reg)
     case USB_ANALOG_DIGPROG:
         return "USB_ANALOG_DIGPROG";
     default:
-        sprintf(unknown, "%d ?", reg);
+        sprintf(unknown, "%u ?", reg);
         return unknown;
     }
 }
--
2.20.1


From: Jan Luebbe <jlu@pengutronix.de>

Break events are currently only handled by chardev/char-serial.c, so we
just ignore errors, which results in no behaviour change for other
chardevs.

Signed-off-by: Jan Luebbe <jlu@pengutronix.de>
Message-id: 20210806144700.3751979-1-jlu@pengutronix.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/qdev-properties-system.h"
 #include "migration/vmstate.h"
 #include "chardev/char-fe.h"
+#include "chardev/char-serial.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -XXX,XX +XXX,XX @@ static void pl011_write(void *opaque, hwaddr offset,
             s->read_count = 0;
             s->read_pos = 0;
         }
+        if ((s->lcr ^ value) & 0x1) {
+            int break_enable = value & 0x1;
+            qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_BREAK,
+                              &break_enable);
+        }
         s->lcr = value;
         pl011_set_read_trigger(s);
         break;
--
2.20.1

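On the guest side, the new pl011 code path is reached by toggling the BRK bit (bit 0) of the line control register, UARTLCR_H at offset 0x2c. A rough bare-metal sketch; the 0x09000000 base is the PL011 address on QEMU's virt board and is an assumption if you run this anywhere else:

    #include <stdint.h>

    #define PL011_BASE 0x09000000u /* virt board UART0 (assumed) */
    #define UARTLCR_H  (*(volatile uint32_t *)(PL011_BASE + 0x2c))

    static void uart_send_break(void)
    {
        UARTLCR_H |= 1u;  /* BRK set: backend gets SERIAL_SET_BREAK(1) */
        /* ... hold the break condition for at least one frame time ... */
        UARTLCR_H &= ~1u; /* BRK clear: break condition ends */
    }
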
From: Alex Chen <alex.chen@huawei.com>

We should use printf format specifier "%u" instead of "%d" for
argument of type "unsigned int".

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx31_ccm.c | 14 +++++++-------
 hw/misc/imx_ccm.c | 4 ++--
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx31_ccm.c
+++ b/hw/misc/imx31_ccm.c
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
     case IMX31_CCM_PDR2_REG:
         return "PDR2";
     default:
-        sprintf(unknown, "[%d ?]", reg);
+        sprintf(unknown, "[%u ?]", reg);
         return unknown;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
         freq = CKIH_FREQ;
     }

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
     freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
                             imx31_ccm_get_pll_ref_clk(dev));

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
         freq = imx31_ccm_get_mpll_clk(dev);
     }

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
     freq = imx31_ccm_get_mcu_main_clk(dev)
            / (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
     freq = imx31_ccm_get_hclk_clk(dev)
           / (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
         break;
     }

-    DPRINTF("Clock = %d) = %d\n", clock, freq);
+    DPRINTF("Clock = %d) = %u\n", clock, freq);

     return freq;
 }
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx_ccm.c
+++ b/hw/misc/imx_ccm.c
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
         freq = klass->get_clock_frequency(dev, clock);
     }

-    DPRINTF("(clock = %d) = %d\n", clock, freq);
+    DPRINTF("(clock = %d) = %u\n", clock, freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
     freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
             (mfd * pd)) << 10;

-    DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
+    DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
             freq);

     return freq;
--
2.20.1


From: Guenter Roeck <linux@roeck-us.net>

Instantiate SAI1/2/3 and ASRC as unimplemented devices to avoid random
Linux kernel crashes, such as

Unhandled fault: external abort on non-linefetch (0x808) at 0xd1580010
pgd = (ptrval)
[d1580010] *pgd=8231b811, *pte=02034653, *ppte=02034453
Internal error: : 808 [#1] SMP ARM
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c095837c>] (_regmap_update_bits+0xe4/0xec)
[<c095837c>] (_regmap_update_bits) from [<c09599b4>] (regmap_update_bits_base+0x50/0x74)
[<c09599b4>] (regmap_update_bits_base) from [<c0d3e9e4>] (fsl_asrc_runtime_resume+0x1e4/0x21c)
[<c0d3e9e4>] (fsl_asrc_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d3ecc4>] (fsl_asrc_probe+0x2a8/0x708)
[<c0d3ecc4>] (fsl_asrc_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

or

Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000
pgd = (ptrval)
[d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453
Internal error: : 808 [#1] SMP ARM
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c0959b28>] (regmap_write+0x3c/0x60)
[<c0959b28>] (regmap_write) from [<c0d41130>] (fsl_sai_runtime_resume+0x9c/0x1ec)
[<c0d41130>] (fsl_sai_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d4231c>] (fsl_sai_probe+0x2b8/0x65c)
[<c0d4231c>] (fsl_sai_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210810160318.87376-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/fsl-imx6ul.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx6ul.c
+++ b/hw/arm/fsl-imx6ul.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
      */
     create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR, 0x4000);

+    /*
+     * SAI (Audio SSI (Synchronous Serial Interface))
+     */
+    create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000);
+    create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000);
+    create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000);
+
     /*
      * PWM
      */
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
     create_unimplemented_device("pwm3", FSL_IMX6UL_PWM3_ADDR, 0x4000);
     create_unimplemented_device("pwm4", FSL_IMX6UL_PWM4_ADDR, 0x4000);

+    /*
+     * Audio ASRC (asynchronous sample rate converter)
+     */
+    create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000);
+
     /*
      * CAN
      */
--
2.20.1

diff view generated by jsdifflib
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: "Wen, Jianxian" <Jianxian.Wen@verisilicon.com>
2
2
3
Connect CAN0 and CAN1 on the ZynqMP.
3
Add property memory region which can connect with IOMMU region to support SMMU translate.
4
4
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
5
Signed-off-by: Jianxian Wen <jianxian.wen@verisilicon.com>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
7
Message-id: 4C23C17B8E87E74E906A25A3254A03F4FA1FEC31@SHASXM03.verisilicon.com
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
10
hw/arm/exynos4210.c | 3 +++
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
11
hw/arm/xilinx_zynq.c | 3 +++
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
12
hw/dma/pl330.c | 26 ++++++++++++++++++++++----
14
3 files changed, 62 insertions(+)
13
3 files changed, 28 insertions(+), 4 deletions(-)
15
14
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
15
diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/arm/xlnx-zynqmp.h
17
--- a/hw/arm/exynos4210.c
19
+++ b/include/hw/arm/xlnx-zynqmp.h
18
+++ b/hw/arm/exynos4210.c
20
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static DeviceState *pl330_create(uint32_t base, qemu_or_irq *orgate,
21
#include "hw/intc/arm_gic.h"
20
int i;
22
#include "hw/net/cadence_gem.h"
21
23
#include "hw/char/cadence_uart.h"
22
dev = qdev_new("pl330");
24
+#include "hw/net/xlnx-zynqmp-can.h"
23
+ object_property_set_link(OBJECT(dev), "memory",
25
#include "hw/ide/ahci.h"
24
+ OBJECT(get_system_memory()),
26
#include "hw/sd/sdhci.h"
25
+ &error_fatal);
27
#include "hw/ssi/xilinx_spips.h"
26
qdev_prop_set_uint8(dev, "num_events", nevents);
28
@@ -XXX,XX +XXX,XX @@
27
qdev_prop_set_uint8(dev, "num_chnls", 8);
29
#include "hw/cpu/cluster.h"
28
qdev_prop_set_uint8(dev, "num_periph_req", nreq);
30
#include "target/arm/cpu.h"
29
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
31
#include "qom/object.h"
30
index XXXXXXX..XXXXXXX 100644
32
+#include "net/can_emu.h"
31
--- a/hw/arm/xilinx_zynq.c
33
32
+++ b/hw/arm/xilinx_zynq.c
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
33
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
34
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[39-IRQ_OFFSET]);
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
35
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
36
dev = qdev_new("pl330");
38
#define XLNX_ZYNQMP_NUM_GEMS 4
37
+ object_property_set_link(OBJECT(dev), "memory",
39
#define XLNX_ZYNQMP_NUM_UARTS 2
38
+ OBJECT(address_space_mem),
40
+#define XLNX_ZYNQMP_NUM_CAN 2
39
+ &error_fatal);
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
40
qdev_prop_set_uint8(dev, "num_chnls", 8);
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
41
qdev_prop_set_uint8(dev, "num_periph_req", 4);
43
#define XLNX_ZYNQMP_NUM_SPIS 2
42
qdev_prop_set_uint8(dev, "num_events", 16);
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
43
diff --git a/hw/dma/pl330.c b/hw/dma/pl330.c
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
44
index XXXXXXX..XXXXXXX 100644
46
45
--- a/hw/dma/pl330.c
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
46
+++ b/hw/dma/pl330.c
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
47
@@ -XXX,XX +XXX,XX @@ struct PL330State {
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
48
uint8_t num_faulting;
50
SysbusAHCIState sata;
49
uint8_t periph_busy[PL330_PERIPH_NUM];
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
50
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
51
+ /* Memory region that DMA operation access */
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
52
+ MemoryRegion *mem_mr;
54
bool virt;
53
+ AddressSpace *mem_as;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
57
+
58
+ /* CAN bus. */
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
60
};
54
};
61
55
62
#endif
56
#define TYPE_PL330 "pl330"
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
57
@@ -XXX,XX +XXX,XX @@ static inline const PL330InsnDesc *pl330_fetch_insn(PL330Chan *ch)
64
index XXXXXXX..XXXXXXX 100644
58
uint8_t opcode;
65
--- a/hw/arm/xlnx-zcu102.c
59
int i;
66
+++ b/hw/arm/xlnx-zcu102.c
60
67
@@ -XXX,XX +XXX,XX @@
61
- dma_memory_read(&address_space_memory, ch->pc, &opcode, 1);
68
#include "sysemu/qtest.h"
62
+ dma_memory_read(ch->parent->mem_as, ch->pc, &opcode, 1);
69
#include "sysemu/device_tree.h"
63
for (i = 0; insn_desc[i].size; i++) {
70
#include "qom/object.h"
64
if ((opcode & insn_desc[i].opmask) == insn_desc[i].opcode) {
71
+#include "net/can_emu.h"
65
return &insn_desc[i];
72
66
@@ -XXX,XX +XXX,XX @@ static inline void pl330_exec_insn(PL330Chan *ch, const PL330InsnDesc *insn)
73
struct XlnxZCU102 {
67
uint8_t buf[PL330_INSN_MAXSIZE];
74
MachineState parent_obj;
68
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
69
assert(insn->size <= PL330_INSN_MAXSIZE);
76
bool secure;
70
- dma_memory_read(&address_space_memory, ch->pc, buf, insn->size);
77
bool virt;
71
+ dma_memory_read(ch->parent->mem_as, ch->pc, buf, insn->size);
78
72
insn->exec(ch, buf[0], &buf[1], insn->size - 1);
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
73
}
80
+
74
81
struct arm_boot_info binfo;
75
@@ -XXX,XX +XXX,XX @@ static int pl330_exec_cycle(PL330Chan *channel)
82
};
76
if (q != NULL && q->len <= pl330_fifo_num_free(&s->fifo)) {
83
77
int len = q->len - (q->addr & (q->len - 1));
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
78
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
79
- dma_memory_read(&address_space_memory, q->addr, buf, len);
86
&error_fatal);
80
+ dma_memory_read(s->mem_as, q->addr, buf, len);
87
81
trace_pl330_exec_cycle(q->addr, len);
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
82
if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
83
pl330_hexdump(buf, len);
90
+
84
@@ -XXX,XX +XXX,XX @@ static int pl330_exec_cycle(PL330Chan *channel)
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
85
fifo_res = pl330_fifo_get(&s->fifo, buf, len, q->tag);
92
+ OBJECT(s->canbus[i]), &error_fatal);
86
}
93
+ g_free(bus_name);
87
if (fifo_res == PL330_FIFO_OK || q->z) {
88
- dma_memory_write(&address_space_memory, q->addr, buf, len);
89
+ dma_memory_write(s->mem_as, q->addr, buf, len);
90
trace_pl330_exec_cycle(q->addr, len);
91
if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
92
pl330_hexdump(buf, len);
93
@@ -XXX,XX +XXX,XX @@ static void pl330_realize(DeviceState *dev, Error **errp)
94
"dma", PL330_IOMEM_SIZE);
95
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
96
97
+ if (!s->mem_mr) {
98
+ error_setg(errp, "'memory' link is not set");
99
+ return;
100
+ } else if (s->mem_mr == get_system_memory()) {
101
+ /* Avoid creating new AS for system memory. */
102
+ s->mem_as = &address_space_memory;
103
+ } else {
104
+ s->mem_as = g_new0(AddressSpace, 1);
105
+ address_space_init(s->mem_as, s->mem_mr,
106
+ memory_region_name(s->mem_mr));
94
+ }
107
+ }
95
+
108
+
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
109
s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pl330_exec_cycle_timer, s);
97
110
98
/* Create and plug in the SD cards */
111
s->cfg[0] = (s->mgr_ns_at_rst ? 0x4 : 0) |
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
112
@@ -XXX,XX +XXX,XX @@ static Property pl330_properties[] = {
100
s->secure = false;
113
DEFINE_PROP_UINT8("rd_q_dep", PL330State, rd_q_dep, 16),
101
/* Default to virt (EL2) being disabled */
114
DEFINE_PROP_UINT16("data_buffer_dep", PL330State, data_buffer_dep, 256),
102
s->virt = false;
115
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
116
+ DEFINE_PROP_LINK("memory", PL330State, mem_mr,
104
+ (Object **)&s->canbus[0],
117
+ TYPE_MEMORY_REGION, MemoryRegion *),
105
+ object_property_allow_set_link,
106
+ 0);
107
+
118
+
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
119
DEFINE_PROP_END_OF_LIST(),
109
+ (Object **)&s->canbus[1],
110
+ object_property_allow_set_link,
111
+ 0);
112
}
113
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/hw/arm/xlnx-zynqmp.c
118
+++ b/hw/arm/xlnx-zynqmp.c
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
120
21, 22,
121
};
120
};
122
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
124
+ 0xFF060000, 0xFF070000,
125
+};
126
+
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
128
+ 23, 24,
129
+};
130
+
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
132
0xFF160000, 0xFF170000,
133
};
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
135
TYPE_CADENCE_UART);
136
}
137
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
140
+ TYPE_XLNX_ZYNQMP_CAN);
141
+ }
142
+
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
144
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
147
gic_spi[uart_intr[i]]);
148
}
149
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
153
+
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
155
+ OBJECT(s->canbus[i]), &error_fatal);
156
+
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
158
+ if (err) {
159
+ error_propagate(errp, err);
160
+ return;
161
+ }
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
164
+ gic_spi[can_intr[i]]);
165
+ }
166
+
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
168
&error_abort);
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
173
MemoryRegion *),
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
175
+ CanBusState *),
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
177
+ CanBusState *),
178
DEFINE_PROP_END_OF_LIST()
179
};
180
121
181
--
122
--
182
2.20.1
123
2.20.1
183
124
184
125
diff view generated by jsdifflib
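The design choice in the pl330 change above generalizes: a DMA master that may sit behind an IOMMU should not bake in address_space_memory, but should accept a MemoryRegion link property and build its own AddressSpace at realize time. A condensed sketch of that idiom; the device and field names are invented for illustration, and unlike the pl330 patch it unconditionally builds a private address space:

    #include "qemu/osdep.h"
    #include "exec/address-spaces.h"
    #include "qapi/error.h"

    typedef struct ExampleDMAState {
        MemoryRegion *dma_mr; /* set by the board via a "memory" link */
        AddressSpace dma_as;  /* what the device actually accesses */
    } ExampleDMAState;

    static void example_dma_realize(ExampleDMAState *s, Error **errp)
    {
        if (!s->dma_mr) {
            error_setg(errp, "'memory' link is not set");
            return;
        }
        address_space_init(&s->dma_as, s->dma_mr, "example-dma");
    }
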
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
it for QEMU as well. A53 was already enabled there.

1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
     [SBSA_GWDT] = 16,
 };

+static const char * const valid_cpus[] = {
+    ARM_CPU_TYPE_NAME("cortex-a53"),
+    ARM_CPU_TYPE_NAME("cortex-a57"),
+    ARM_CPU_TYPE_NAME("cortex-a72"),
+};
+
+static bool cpu_type_valid(const char *cpu)
+{
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
+        if (strcmp(cpu, valid_cpus[i]) == 0) {
+            return true;
+        }
+    }
+    return false;
+}
+
 static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
 {
     uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
     const CPUArchIdList *possible_cpus;
     int n, sbsa_max_cpus;

-    if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
-        error_report("sbsa-ref: CPU type other than the built-in "
-                     "cortex-a57 not supported");
+    if (!cpu_type_valid(machine->cpu_type)) {
+        error_report("sbsa-ref: CPU type %s not supported", machine->cpu_type);
         exit(1);
     }

--
2.20.1


From: Eduardo Habkost <ehabkost@redhat.com>

The SBSA_GWDT enum value conflicts with the SBSA_GWDT() QOM type
checking helper, preventing us from using a OBJECT_DEFINE* or
DEFINE_INSTANCE_CHECKER macro for the SBSA_GWDT() wrapper.

If I understand the SBSA 6.0 specification correctly, the signal
being connected to IRQ 16 is the WS0 output signal from the
Generic Watchdog. Rename the enum value to SBSA_GWDT_WS0 to be
more explicit and avoid the name conflict.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 20210806023119.431680-1-ehabkost@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ enum {
     SBSA_GIC_DIST,
     SBSA_GIC_REDIST,
     SBSA_SECURE_EC,
-    SBSA_GWDT,
+    SBSA_GWDT_WS0,
     SBSA_GWDT_REFRESH,
     SBSA_GWDT_CONTROL,
     SBSA_SMMU,
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
     [SBSA_AHCI] = 10,
     [SBSA_EHCI] = 11,
     [SBSA_SMMU] = 12, /* ... to 15 */
-    [SBSA_GWDT] = 16,
+    [SBSA_GWDT_WS0] = 16,
 };

 static const char * const valid_cpus[] = {
@@ -XXX,XX +XXX,XX @@ static void create_wdt(const SBSAMachineState *sms)
     hwaddr cbase = sbsa_ref_memmap[SBSA_GWDT_CONTROL].base;
     DeviceState *dev = qdev_new(TYPE_WDT_SBSA);
     SysBusDevice *s = SYS_BUS_DEVICE(dev);
-    int irq = sbsa_ref_irqmap[SBSA_GWDT];
+    int irq = sbsa_ref_irqmap[SBSA_GWDT_WS0];

     sysbus_realize_and_unref(s, &error_fatal);
     sysbus_mmio_map(s, 0, rbase);
--
2.20.1

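The conflict the rename above resolves is purely a C namespace problem: QOM's DEFINE_INSTANCE_CHECKER expands to a function named SBSA_GWDT(), and a function and an enumerator of the same name cannot coexist in one translation unit. A distilled, compilable illustration (not the real QOM code):

    /* A function-like checker named SBSA_GWDT, as the QOM macro produces: */
    struct SBSAGWDTState { int dummy; };

    static inline struct SBSAGWDTState *SBSA_GWDT(void *obj)
    {
        return (struct SBSAGWDTState *)obj;
    }

    /*
     * The enumerator therefore needs a different name; using 'SBSA_GWDT'
     * here would be a "redeclared as different kind of symbol" error.
     */
    enum {
        SBSA_GWDT_WS0 = 16,
    };

    int main(void)
    {
        struct SBSAGWDTState s;
        return SBSA_GWDT(&s) == &s ? 0 : SBSA_GWDT_WS0;
    }
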
From: Alex Chen <alex.chen@huawei.com>

We should use printf format specifier "%u" instead of "%d" for
argument of type "unsigned int".

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Alex Chen <alex.chen@huawei.com>
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/imx25_ccm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/imx25_ccm.c
+++ b/hw/misc/imx25_ccm.c
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
     case IMX25_CCM_LPIMR1_REG:
         return "lpimr1";
     default:
-        sprintf(unknown, "[%d ?]", reg);
+        sprintf(unknown, "[%u ?]", reg);
         return unknown;
     }
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
         freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
     }

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)

     freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
     freq = imx25_ccm_get_mcu_clk(dev)
            / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)

     freq = imx25_ccm_get_ahb_clk(dev) / 2;

-    DPRINTF("freq = %d\n", freq);
+    DPRINTF("freq = %u\n", freq);

     return freq;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
         break;
     }

-    DPRINTF("Clock = %d) = %d\n", clock, freq);
+    DPRINTF("Clock = %d) = %u\n", clock, freq);

     return freq;
 }
--
2.20.1


From: Guenter Roeck <linux@roeck-us.net>

Instantiate SAI1/2/3 as unimplemented devices to avoid Linux kernel crashes
such as the following.

Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000
pgd = (ptrval)
[d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453
Internal error: : 808 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-rc5 #1
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c0959b28>] (regmap_write+0x3c/0x60)
[<c0959b28>] (regmap_write) from [<c0d41130>] (fsl_sai_runtime_resume+0x9c/0x1ec)
[<c0d41130>] (fsl_sai_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d4231c>] (fsl_sai_probe+0x2b8/0x65c)
[<c0d4231c>] (fsl_sai_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210810175607.538090-1-linux@roeck-us.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/fsl-imx7.h | 5 +++++
 hw/arm/fsl-imx7.c | 7 +++++++
 2 files changed, 12 insertions(+)

diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/fsl-imx7.h
+++ b/include/hw/arm/fsl-imx7.h
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
     FSL_IMX7_UART6_ADDR = 0x30A80000,
     FSL_IMX7_UART7_ADDR = 0x30A90000,

+    FSL_IMX7_SAI1_ADDR = 0x308A0000,
+    FSL_IMX7_SAI2_ADDR = 0x308B0000,
+    FSL_IMX7_SAI3_ADDR = 0x308C0000,
+    FSL_IMX7_SAIn_SIZE = 0x10000,
+
     FSL_IMX7_ENET1_ADDR = 0x30BE0000,
     FSL_IMX7_ENET2_ADDR = 0x30BF0000,

diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx7.c
+++ b/hw/arm/fsl-imx7.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
     create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE);
     create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE);

+    /*
+     * SAI (Audio SSI (Synchronous Serial Interface))
+     */
+    create_unimplemented_device("sai1", FSL_IMX7_SAI1_ADDR, FSL_IMX7_SAIn_SIZE);
+    create_unimplemented_device("sai2", FSL_IMX7_SAI2_ADDR, FSL_IMX7_SAIn_SIZE);
+    create_unimplemented_device("sai3", FSL_IMX7_SAI3_ADDR, FSL_IMX7_SAIn_SIZE);
+
     /*
      * OCOTP
      */
--
2.20.1

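Why the %d to %u changes in these CCM patches matter in practice: passing an unsigned int to a %d conversion is undefined behaviour, and for values above INT_MAX it typically prints a negative number, which is exactly wrong for clock frequencies. A two-line demonstration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int freq = 3000000000u;  /* e.g. a 3 GHz clock */
        printf("%d\n", freq);             /* typically -1294967296: wrong */
        printf("%u\n", freq);             /* 3000000000: correct */
        return 0;
    }
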
From: Vikram Garhwal <fnu.vikram@xilinx.com>

Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c

 Devices
 -------
+Xilinx CAN
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
+S: Maintained
+F: hw/net/can/xlnx-*
+F: include/hw/net/xlnx-*
+F: tests/qtest/xlnx-can-test*
+
 EDU
 M: Jiri Slaby <jslaby@suse.cz>
 S: Maintained
--
2.20.1


From: Sebastian Meyer <meyer@absint.com>

With gdb 9.0 and better it is possible to connect to a gdbstub
over unix sockets, which is better than a TCP socket connection
in some situations. The QEMU command line to set this up is
non-obvious; document it.

Signed-off-by: Sebastian Meyer <meyer@absint.com>
Message-id: 162867284829.27377.4784930719350564918-0@git.sr.ht
[PMM: Tweaked commit message; adjusted wording in a couple of
 places; fixed rST formatting issue; moved section up out of
 the 'advanced debugging options' subsection]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/gdb.rst | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/docs/system/gdb.rst b/docs/system/gdb.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/gdb.rst
+++ b/docs/system/gdb.rst
@@ -XXX,XX +XXX,XX @@ The ``-s`` option will make QEMU listen for an incoming connection
 from gdb on TCP port 1234, and ``-S`` will make QEMU not start the
 guest until you tell it to from gdb. (If you want to specify which
 TCP port to use or to use something other than TCP for the gdbstub
-connection, use the ``-gdb dev`` option instead of ``-s``.)
+connection, use the ``-gdb dev`` option instead of ``-s``. See
+`Using unix sockets`_ for an example.)

 .. parsed-literal::

@@ -XXX,XX +XXX,XX @@ not just those in the cluster you are currently working on::

   (gdb) set schedule-multiple on

+Using unix sockets
+==================
+
+An alternate method for connecting gdb to the QEMU gdbstub is to use
+a unix socket (if supported by your operating system). This is useful when
+running several tests in parallel, or if you do not have a known free TCP
+port (e.g. when running automated tests).
+
+First create a chardev with the appropriate options, then
+instruct the gdbserver to use that device:
+
+.. parsed-literal::
+
+   |qemu_system| -chardev socket,path=/tmp/gdb-socket,server=on,wait=off,id=gdb0 -gdb chardev:gdb0 -S ...
+
+Start gdb as before, but this time connect using the path to
+the socket::
+
+  (gdb) target remote /tmp/gdb-socket
+
+Note that to use a unix socket for the connection you will need
+gdb version 9.0 or newer.
+
 Advanced debugging options
 ==========================
--
2.20.1
diff view generated by jsdifflib