First set of arm patches for 6.2. I have a lot more in my
to-review queue still...

-- PMM

The following changes since commit d42685765653ec155fdf60910662f8830bdb2cef:

  Open 6.2 development tree (2021-08-25 10:25:12 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210825

for you to fetch changes up to 24b1a6aa43615be22c7ee66bd68ec5675f6a6a9a:

  docs: Document how to use gdb with unix sockets (2021-08-25 10:48:51 +0100)

----------------------------------------------------------------
target-arm queue:
 * More MVE emulation work
 * Implement M-profile trapping on division by zero
 * kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()
 * hw/char/pl011: add support for sending break
 * fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices
 * hw/dma/pl330: Add memory region to replace default
 * sbsa-ref: Rename SBSA_GWDT enum value
 * fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices
 * docs: Document how to use gdb with unix sockets

----------------------------------------------------------------
Eduardo Habkost (1):
      sbsa-ref: Rename SBSA_GWDT enum value

Guenter Roeck (2):
      fsl-imx6ul: Instantiate SAI1/2/3 and ASRC as unimplemented devices
      fsl-imx7: Instantiate SAI1/2/3 as unimplemented devices

Hamza Mahfooz (1):
      target/arm: kvm: use RCU_READ_LOCK_GUARD() in kvm_arch_fixup_msi_route()

Jan Luebbe (1):
      hw/char/pl011: add support for sending break

Peter Maydell (37):
      target/arm: Note that we handle VMOVL as a special case of VSHLL
      target/arm: Print MVE VPR in CPU dumps
      target/arm: Fix MVE VSLI by 0 and VSRI by <dt>
      target/arm: Fix signed VADDV
      target/arm: Fix mask handling for MVE narrowing operations
      target/arm: Fix 48-bit saturating shifts
      target/arm: Fix MVE 48-bit SQRSHRL for small right shifts
      target/arm: Fix calculation of LTP mask when LR is 0
      target/arm: Factor out mve_eci_mask()
      target/arm: Fix VPT advance when ECI is non-zero
      target/arm: Fix VLDRB/H/W for predicated elements
      target/arm: Implement MVE VMULL (polynomial)
      target/arm: Implement MVE incrementing/decrementing dup insns
      target/arm: Factor out gen_vpst()
      target/arm: Implement MVE integer vector comparisons
      target/arm: Implement MVE integer vector-vs-scalar comparisons
      target/arm: Implement MVE VPSEL
      target/arm: Implement MVE VMLAS
      target/arm: Implement MVE shift-by-scalar
      target/arm: Move 'x' and 'a' bit definitions into vmlaldav formats
      target/arm: Implement MVE integer min/max across vector
      target/arm: Implement MVE VABAV
      target/arm: Implement MVE narrowing moves
      target/arm: Rename MVEGenDualAccOpFn to MVEGenLongDualAccOpFn
      target/arm: Implement MVE VMLADAV and VMLSLDAV
      target/arm: Implement MVE VMLA
      target/arm: Implement MVE saturating doubling multiply accumulates
      target/arm: Implement MVE VQABS, VQNEG
      target/arm: Implement MVE VMAXA, VMINA
      target/arm: Implement MVE VMOV to/from 2 general-purpose registers
      target/arm: Implement MVE VPNOT
      target/arm: Implement MVE VCTP
      target/arm: Implement MVE scatter-gather insns
      target/arm: Implement MVE scatter-gather immediate forms
      target/arm: Implement MVE interleaving loads/stores
      target/arm: Re-indent sdiv and udiv helpers
      target/arm: Implement M-profile trapping on division by zero

Sebastian Meyer (1):
      docs: Document how to use gdb with unix sockets

Wen, Jianxian (1):
      hw/dma/pl330: Add memory region to replace default

 docs/system/gdb.rst        |   26 +-
 include/hw/arm/fsl-imx7.h  |    5 +
 target/arm/cpu.h           |    1 +
 target/arm/helper-mve.h    |  283 ++++++++++
 target/arm/helper.h        |    4 +-
 target/arm/translate-a32.h |    2 +
 target/arm/vec_internal.h  |   11 +
 target/arm/mve.decode      |  226 +++++++-
 target/arm/t32.decode      |    1 +
 hw/arm/exynos4210.c        |    3 +
 hw/arm/fsl-imx6ul.c        |   12 +
 hw/arm/fsl-imx7.c          |    7 +
 hw/arm/sbsa-ref.c          |    6 +-
 hw/arm/xilinx_zynq.c       |    3 +
 hw/char/pl011.c            |    6 +
 hw/dma/pl330.c             |   26 +-
 target/arm/cpu.c           |    3 +
 target/arm/helper.c        |   34 +-
 target/arm/kvm.c           |   17 +-
 target/arm/m_helper.c      |    4 +
 target/arm/mve_helper.c    | 1254 ++++++++++++++++++++++++++++++++++++++++++--
 target/arm/translate-mve.c |  877 ++++++++++++++++++++++++++++++-
 target/arm/translate-vfp.c |    2 +-
 target/arm/translate.c     |   37 +-
 target/arm/vec_helper.c    |   14 +-
 25 files changed, 2746 insertions(+), 118 deletions(-)


The following changes since commit 003ba52a8b327180e284630b289c6ece5a3e08b9:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2023-02-16 11:16:39 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230216

for you to fetch changes up to caf01d6a435d9f4a95aeae2f9fc6cb8b889b1fb8:

  tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG (2023-02-16 16:28:53 +0000)

----------------------------------------------------------------
target-arm queue:
 * Some mostly M-profile-related code cleanups
 * avocado: Retire the boot_linux.py AArch64 TCG tests
 * hw/arm/smmuv3: Add GBPA register
 * arm/virt: don't try to spell out the accelerator
 * hw/arm: Attach PSPI module to NPCM7XX SoC
 * Some cleanup/refactoring patches aiming towards
   allowing building Arm targets without CONFIG_TCG

----------------------------------------------------------------
Alex Bennée (1):
      tests/avocado: retire the Aarch64 TCG tests from boot_linux.py

Claudio Fontana (3):
      target/arm: rename handle_semihosting to tcg_handle_semihosting
      target/arm: wrap psci call with tcg_enabled
      target/arm: wrap call to aarch64_sve_change_el in tcg_enabled()

Cornelia Huck (1):
      arm/virt: don't try to spell out the accelerator

Fabiano Rosas (7):
      target/arm: Move PC alignment check
      target/arm: Move cpregs code out of cpu.h
      tests/avocado: Skip tests that require a missing accelerator
      tests/avocado: Tag TCG tests with accel:tcg
      target/arm: Use "max" as default cpu for the virt machine with KVM
      tests/qtest: arm-cpu-features: Match tests to required accelerators
      tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG

Hao Wu (3):
      MAINTAINERS: Add myself to maintainers and remove Havard
      hw/ssi: Add Nuvoton PSPI Module
      hw/arm: Attach PSPI module to NPCM7XX SoC

Jean-Philippe Brucker (2):
      hw/arm/smmu-common: Support 64-bit addresses
      hw/arm/smmu-common: Fix TTB1 handling

Mostafa Saleh (1):
      hw/arm/smmuv3: Add GBPA register

Philippe Mathieu-Daudé (12):
      hw/intc/armv7m_nvic: Use OBJECT_DECLARE_SIMPLE_TYPE() macro
      target/arm: Simplify arm_v7m_mmu_idx_for_secstate() for user emulation
      target/arm: Reduce arm_v7m_mmu_idx_[all/for_secstate_and_priv]() scope
      target/arm: Constify ID_PFR1 on user emulation
      target/arm: Convert CPUARMState::eabi to boolean
      target/arm: Avoid resetting CPUARMState::eabi field
      target/arm: Restrict CPUARMState::gicv3state to sysemu
      target/arm: Restrict CPUARMState::arm_boot_info to sysemu
      target/arm: Restrict CPUARMState::nvic to sysemu
      target/arm: Store CPUARMState::nvic as NVICState*
      target/arm: Declare CPU <-> NVIC helpers in 'hw/intc/armv7m_nvic.h'
      hw/arm: Add missing XLNX_ZYNQMP_ARM -> USB_DWC3 Kconfig dependency

 MAINTAINERS                            |   8 +-
 docs/system/arm/nuvoton.rst            |   2 +-
 hw/arm/smmuv3-internal.h               |   7 +
 include/hw/arm/npcm7xx.h               |   2 +
 include/hw/arm/smmu-common.h           |   2 -
 include/hw/arm/smmuv3.h                |   1 +
 include/hw/intc/armv7m_nvic.h          | 128 +++++++++++++++++-
 include/hw/ssi/npcm_pspi.h             |  53 ++++++++
 linux-user/user-internals.h            |   2 +-
 target/arm/cpregs.h                    |  98 ++++++++++++++
 target/arm/cpu.h                       | 228 ++-------------------------------
 target/arm/internals.h                 |  14 --
 hw/arm/npcm7xx.c                       |  25 +++-
 hw/arm/smmu-common.c                   |   4 +-
 hw/arm/smmuv3.c                        |  43 ++++++-
 hw/arm/virt.c                          |  10 +-
 hw/intc/armv7m_nvic.c                  |  38 ++----
 hw/ssi/npcm_pspi.c                     | 221 ++++++++++++++++++++++++++++++++
 linux-user/arm/cpu_loop.c              |   4 +-
 target/arm/cpu.c                       |   5 +-
 target/arm/cpu_tcg.c                   |   3 +
 target/arm/helper.c                    |  31 +++--
 target/arm/m_helper.c                  |  86 +++++++------
 target/arm/machine.c                   |  18 +--
 tests/qtest/arm-cpu-features.c         |  28 ++--
 hw/arm/Kconfig                         |   1 +
 hw/ssi/meson.build                     |   2 +-
 hw/ssi/trace-events                    |   5 +
 tests/avocado/avocado_qemu/__init__.py |   4 +
 tests/avocado/boot_linux.py            |  48 ++-----
 tests/avocado/boot_linux_console.py    |   1 +
 tests/avocado/machine_aarch64_virt.py  |  63 ++++++++-
 tests/avocado/reverse_debugging.py     |   8 ++
 tests/qtest/meson.build                |   4 +-
 34 files changed, 798 insertions(+), 399 deletions(-)
 create mode 100644 include/hw/ssi/npcm_pspi.h
 create mode 100644 hw/ssi/npcm_pspi.c
Deleted patch

Although the architecture doesn't define it as an alias, VMOVL
(vector move long) is encoded as a VSHLL with a zero shift.
Add a comment in the decode file noting that we handle VMOVL
as part of VSHLL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve.decode | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
 VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w

 # VSHLL T1 encoding; the T2 VSHLL encoding is elsewhere in this file
+# Note that VMOVL is encoded as "VSHLL with a zero shift count"; we
+# implement it that way rather than special-casing it in the decode.
 VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_b
 VSHLL_BS 111 0 1110 1 . 1 .. ... ... 0 1111 0 1 . 0 ... 0 @2_shll_h

--
2.20.1
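Aside, not part of the patch above: the equivalence the comment records is
easy to see in a minimal C model of one VSHLL element operation (hypothetical
function name, not code from this series) — with a zero shift count only the
widening step remains, which is exactly what VMOVL does:

    #include <stdint.h>

    /* Reference model of one element of VSHLL.S16: widen the 16-bit
     * input to 32 bits, then shift left. With shift == 0 this is
     * precisely VMOVL's sign-extending widening move. */
    static int32_t vshll_s16_model(int16_t x, unsigned shift)
    {
        return (int32_t)x << shift;
    }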
Implement the MVE interleaving load/store functions VLD2, VLD4, VST2
and VST4. VLD2 loads 16 bytes of data from memory and writes to 2
consecutive Qregs; VLD4 loads 16 bytes of data from memory and writes
to 4 consecutive Qregs. The 'pattern' field in the encoding
determines the offset into memory which is accessed and also which
elements in the Qregs are written to. (The intention is that a
sequence of four consecutive VLD4 with different pattern values
performs a complete de-interleaving load of 64 bytes into all
elements of the 4 Qregs.) VST2 and VST4 do the same, but for stores.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  48 ++++++
 target/arm/mve.decode      |  11 ++
 target/arm/mve_helper.c    | 342 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  94 ++++++++++
 4 files changed, 495 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vld20b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld20h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld20w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld21b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld21h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld21w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld40b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld40h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld40w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld41b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld41h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld41w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld42b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld42h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld42w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vld43b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld43h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vld43w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst20b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst20h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst20w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst21b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst21h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst21w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst40b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst40h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst40w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst41b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst41h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst41w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst42b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst42h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst42w, TCG_CALL_NO_WG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vst43b, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst43h, TCG_CALL_NO_WG, void, env, i32, i32)
+DEF_HELPER_FLAGS_3(mve_vst43w, TCG_CALL_NO_WG, void, env, i32, i32)
+
 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)

 DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vabav qn qm rda size
 &vldst_sg qd qm rn size msize os
 &vldst_sg_imm qd qm a w imm
+&vldst_il qd rn size pat w

 # scatter-gather memory size is in bits 6:4
 %sg_msize 6:1 4:1
@@ -XXX,XX +XXX,XX @@
 @vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \
               qd=%qd qm=%qn

+# Deinterleaving load/interleaving store
+@vldst_il .... .... .. w:1 . rn:4 .... ... size:2 pat:2 ..... &vldst_il \
+          qd=%qd
+
 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
@@ -XXX,XX +XXX,XX @@ VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm
 VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm
 VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm

+# deinterleaving loads/interleaving stores
+VLD2 1111 1100 1 .. 1 .... ... 1 111 .. .. 00000 @vldst_il
+VLD4 1111 1100 1 .. 1 .... ... 1 111 .. .. 00001 @vldst_il
+VST2 1111 1100 1 .. 0 .... ... 1 111 .. .. 00000 @vldst_il
+VST4 1111 1100 1 .. 0 .... ... 1 111 .. .. 00001 @vldst_il
+
 # Moves between 2 32-bit vector lanes and 2 general purpose registers
 VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
 VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true)
 DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true)
 DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true)

+/*
+ * Deinterleaving loads/interleaving stores.
+ *
+ * For these helpers we are passed the index of the first Qreg
+ * (VLD2/VST2 will also access Qn+1, VLD4/VST4 access Qn .. Qn+3)
+ * and the value of the base address register Rn.
+ * The helpers are specialized for pattern and element size, so
+ * for instance vld42h is VLD4 with pattern 2, element size MO_16.
+ *
+ * These insns are beatwise but not predicated, so we must honour ECI,
+ * but need not look at mve_element_mask().
+ *
+ * The pseudocode implements these insns with multiple memory accesses
+ * of the element size, but rules R_VVVG and R_FXDM permit us to make
+ * one 32-bit memory access per beat.
+ */
+#define DO_VLD4B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            for (e = 0; e < 4; e++, data >>= 8) { \
+                uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \
+                qd[H1(off[beat])] = data; \
+            } \
+        } \
+    }
+
+#define DO_VLD4H(OP, O1, O2) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O1, O2, O2 }; \
+        uint32_t addr, data; \
+        int y; /* y counts 0 2 0 2 */ \
+        uint16_t *qd; \
+        for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 8 + (beat & 1) * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \
+            qd[H2(off[beat])] = data; \
+            data >>= 16; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \
+            qd[H2(off[beat])] = data; \
+        } \
+    }
+
+#define DO_VLD4W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        int y; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            y = (beat + (O1 & 2)) & 3; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \
+            qd[H4(off[beat] >> 2)] = data; \
+        } \
+    }
+
+DO_VLD4B(vld40b, 0, 1, 10, 11)
+DO_VLD4B(vld41b, 2, 3, 12, 13)
+DO_VLD4B(vld42b, 4, 5, 14, 15)
+DO_VLD4B(vld43b, 6, 7, 8, 9)
+
+DO_VLD4H(vld40h, 0, 5)
+DO_VLD4H(vld41h, 1, 6)
+DO_VLD4H(vld42h, 2, 7)
+DO_VLD4H(vld43h, 3, 4)
+
+DO_VLD4W(vld40w, 0, 1, 10, 11)
+DO_VLD4W(vld41w, 2, 3, 12, 13)
+DO_VLD4W(vld42w, 4, 5, 14, 15)
+DO_VLD4W(vld43w, 6, 7, 8, 9)
+
+#define DO_VLD2B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint8_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 2; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            for (e = 0; e < 4; e++, data >>= 8) { \
+                qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \
+                qd[H1(off[beat] + (e >> 1))] = data; \
+            } \
+        } \
+    }
+
+#define DO_VLD2H(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        int e; \
+        uint16_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            for (e = 0; e < 2; e++, data >>= 16) { \
+                qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \
+                qd[H2(off[beat])] = data; \
+            } \
+        } \
+    }
+
+#define DO_VLD2W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat]; \
+            data = cpu_ldl_le_data_ra(env, addr, GETPC()); \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \
+            qd[H4(off[beat] >> 3)] = data; \
+        } \
+    }
+
+DO_VLD2B(vld20b, 0, 2, 12, 14)
+DO_VLD2B(vld21b, 4, 6, 8, 10)
+
+DO_VLD2H(vld20h, 0, 1, 6, 7)
+DO_VLD2H(vld21h, 2, 3, 4, 5)
+
+DO_VLD2W(vld20w, 0, 4, 24, 28)
+DO_VLD2W(vld21w, 8, 12, 16, 20)
+
+#define DO_VST4B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = 0; \
+            for (e = 3; e >= 0; e--) { \
+                uint8_t *qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + e); \
+                data = (data << 8) | qd[H1(off[beat])]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST4H(OP, O1, O2) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O1, O2, O2 }; \
+        uint32_t addr, data; \
+        int y; /* y counts 0 2 0 2 */ \
+        uint16_t *qd; \
+        for (beat = 0, y = 0; beat < 4; beat++, mask >>= 4, y ^= 2) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 8 + (beat & 1) * 4; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y); \
+            data = qd[H2(off[beat])]; \
+            qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + y + 1); \
+            data |= qd[H2(off[beat])] << 16; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST4W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        int y; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            y = (beat + (O1 & 2)) & 3; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + y); \
+            data = qd[H4(off[beat] >> 2)]; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+DO_VST4B(vst40b, 0, 1, 10, 11)
+DO_VST4B(vst41b, 2, 3, 12, 13)
+DO_VST4B(vst42b, 4, 5, 14, 15)
+DO_VST4B(vst43b, 6, 7, 8, 9)
+
+DO_VST4H(vst40h, 0, 5)
+DO_VST4H(vst41h, 1, 6)
+DO_VST4H(vst42h, 2, 7)
+DO_VST4H(vst43h, 3, 4)
+
+DO_VST4W(vst40w, 0, 1, 10, 11)
+DO_VST4W(vst41w, 2, 3, 12, 13)
+DO_VST4W(vst42w, 4, 5, 14, 15)
+DO_VST4W(vst43w, 6, 7, 8, 9)
+
+#define DO_VST2B(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat, e; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint8_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 2; \
+            data = 0; \
+            for (e = 3; e >= 0; e--) { \
+                qd = (uint8_t *)aa32_vfp_qreg(env, qnidx + (e & 1)); \
+                data = (data << 8) | qd[H1(off[beat] + (e >> 1))]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST2H(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        int e; \
+        uint16_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat] * 4; \
+            data = 0; \
+            for (e = 1; e >= 0; e--) { \
+                qd = (uint16_t *)aa32_vfp_qreg(env, qnidx + e); \
+                data = (data << 16) | qd[H2(off[beat])]; \
+            } \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+#define DO_VST2W(OP, O1, O2, O3, O4) \
+    void HELPER(mve_##OP)(CPUARMState *env, uint32_t qnidx, \
+                          uint32_t base) \
+    { \
+        int beat; \
+        uint16_t mask = mve_eci_mask(env); \
+        static const uint8_t off[4] = { O1, O2, O3, O4 }; \
+        uint32_t addr, data; \
+        uint32_t *qd; \
+        for (beat = 0; beat < 4; beat++, mask >>= 4) { \
+            if ((mask & 1) == 0) { \
+                /* ECI says skip this beat */ \
+                continue; \
+            } \
+            addr = base + off[beat]; \
+            qd = (uint32_t *)aa32_vfp_qreg(env, qnidx + (beat & 1)); \
+            data = qd[H4(off[beat] >> 3)]; \
+            cpu_stl_le_data_ra(env, addr, data, GETPC()); \
+        } \
+    }
+
+DO_VST2B(vst20b, 0, 2, 12, 14)
+DO_VST2B(vst21b, 4, 6, 8, 10)
+
+DO_VST2H(vst20h, 0, 1, 6, 7)
+DO_VST2H(vst21h, 2, 3, 4, 5)
+
+DO_VST2W(vst20w, 0, 4, 24, 28)
+DO_VST2W(vst21w, 8, 12, 16, 20)
+
 /*
  * The mergemask(D, R, M) macro performs the operation "*D = R" but
  * storing only the bytes which correspond to 1 bits in M,
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static inline int vidup_imm(DisasContext *s, int x)

 typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenLdStIlFn(TCGv_ptr, TCGv_i32, TCGv_i32);
 typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
 typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
     return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
 }

+static bool do_vldst_il(DisasContext *s, arg_vldst_il *a, MVEGenLdStIlFn *fn,
+                        int addrinc)
+{
+    TCGv_i32 rn;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd) ||
+        !fn || (a->rn == 13 && a->w) || a->rn == 15) {
+        /* Variously UNPREDICTABLE or UNDEF or related-encoding */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    rn = load_reg(s, a->rn);
+    /*
+     * We pass the index of Qd, not a pointer, because the helper must
+     * access multiple Q registers starting at Qd and working up.
+     */
+    fn(cpu_env, tcg_constant_i32(a->qd), rn);
+
+    if (a->w) {
+        tcg_gen_addi_i32(rn, rn, addrinc);
+        store_reg(s, a->rn, rn);
+    } else {
+        tcg_temp_free_i32(rn);
+    }
+    mve_update_and_store_eci(s);
+    return true;
+}
+
+/* This macro is just to make the arrays more compact in these functions */
+#define F(N) gen_helper_mve_##N
+
+static bool trans_VLD2(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vld20b), F(vld20h), F(vld20w), NULL, },
+        { F(vld21b), F(vld21h), F(vld21w), NULL, },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL },
+    };
+    if (a->qd > 6) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 32);
+}
+
+static bool trans_VLD4(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vld40b), F(vld40h), F(vld40w), NULL, },
+        { F(vld41b), F(vld41h), F(vld41w), NULL, },
+        { F(vld42b), F(vld42h), F(vld42w), NULL, },
+        { F(vld43b), F(vld43h), F(vld43w), NULL, },
+    };
+    if (a->qd > 4) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 64);
+}
+
+static bool trans_VST2(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vst20b), F(vst20h), F(vst20w), NULL, },
+        { F(vst21b), F(vst21h), F(vst21w), NULL, },
+        { NULL, NULL, NULL, NULL },
+        { NULL, NULL, NULL, NULL },
+    };
+    if (a->qd > 6) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 32);
+}
+
+static bool trans_VST4(DisasContext *s, arg_vldst_il *a)
+{
+    static MVEGenLdStIlFn * const fns[4][4] = {
+        { F(vst40b), F(vst40h), F(vst40w), NULL, },
+        { F(vst41b), F(vst41h), F(vst41w), NULL, },
+        { F(vst42b), F(vst42h), F(vst42w), NULL, },
+        { F(vst43b), F(vst43h), F(vst43w), NULL, },
+    };
+    if (a->qd > 4) {
+        return false;
+    }
+    return do_vldst_il(s, a, fns[a->pat][a->size], 64);
+}
+
+#undef F
+
 static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 {
     TCGv_ptr qd;
--
2.20.1
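Aside, not part of the patch above: the net effect of the 'pattern' scheme the
commit message describes — four consecutive VLD4 insns with pattern values
0..3 de-interleaving 64 bytes into all elements of 4 Qregs — can be modelled
with this minimal C sketch (hypothetical names; little-endian flat memory
assumed, no ECI or beatwise behaviour):

    #include <stdint.h>

    /* Reference model: after all four VLD4.8 pattern values have run,
     * element i of destination register r holds mem[4 * i + r], i.e.
     * a complete de-interleave of 64 bytes into 4 Qregs. */
    static void vld4_8_model(const uint8_t mem[64], uint8_t qreg[4][16])
    {
        for (int i = 0; i < 16; i++) {      /* element index */
            for (int r = 0; r < 4; r++) {   /* destination Qreg */
                qreg[r][i] = mem[i * 4 + r];
            }
        }
    }

The per-pattern offset tables in the helpers (e.g. DO_VLD4B's {0, 1, 10, 11})
are just this mapping re-sliced into one 32-bit memory access per beat.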
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Manually convert to OBJECT_DECLARE_SIMPLE_TYPE() macro,
similarly to automatic conversion from commit 8063396bf3
("Use OBJECT_DECLARE_SIMPLE_TYPE when possible").

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@
 #include "qom/object.h"

 #define TYPE_NVIC "armv7m_nvic"
-
-typedef struct NVICState NVICState;
-DECLARE_INSTANCE_CHECKER(NVICState, NVIC,
-                         TYPE_NVIC)
+OBJECT_DECLARE_SIMPLE_TYPE(NVICState, NVIC)

 /* Highest permitted number of exceptions (architectural limit) */
 #define NVIC_MAX_VECTORS 512
--
2.34.1
Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so
that any element at index Rn or greater is predicated.  As with
VPNOT, this insn itself is predicable and subject to beatwise
execution.

The calculation of the mask is the same as is used to determine
ltpmask in mve_element_mask(), but we precalculate masklen in
generated code to avoid having to have 4 helpers specialized by size.

We put the decode line in with the low-overhead-loop insns in
t32.decode because it's logically part of that collection of insn
patterns, even though it is an MVE only insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  2 ++
 target/arm/translate-a32.h |  1 +
 target/arm/t32.decode      |  1 +
 target/arm/mve_helper.c    | 20 ++++++++++++++++++++
 target/arm/translate-mve.c |  2 +-
 target/arm/translate.c     | 33 +++++++++++++++++++++++++++++++++
 6 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)

+DEF_HELPER_FLAGS_2(mve_vctp, TCG_CALL_NO_WG, void, env, i32)
+
 DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ long neon_element_offset(int reg, int element, MemOp memop);
 void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
 void clear_eci_state(DisasContext *s);
 bool mve_eci_check(DisasContext *s);
+void mve_update_eci(DisasContext *s);
 void mve_update_and_store_eci(DisasContext *s);
 bool mve_skip_vmov(DisasContext *s, int vn, int index, int size);

diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ BL 1111 0. .......... 11.1 ............ @branch24
       # This is DLSTP
       DLS 1111 0 0000 0 size:2 rn:4 1110 0000 0000 0001
     }
+    VCTP 1111 0 0000 0 size:2 rn:4 1110 1000 0000 0001
   ]
 }
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpnot)(CPUARMState *env)
     mve_advance_vpt(env);
 }

+/*
+ * VCTP: P0 unexecuted bits unchanged, predicated bits zeroed,
+ * otherwise set according to value of Rn. The calculation of
+ * newmask here works in the same way as the calculation of the
+ * ltpmask in mve_element_mask(), but we have pre-calculated
+ * the masklen in the generated code.
+ */
+void HELPER(mve_vctp)(CPUARMState *env, uint32_t masklen)
+{
+    uint16_t mask = mve_element_mask(env);
+    uint16_t eci_mask = mve_eci_mask(env);
+    uint16_t newmask;
+
+    assert(masklen <= 16);
+    newmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
+    newmask &= mask;
+    env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (newmask & eci_mask);
+    mve_advance_vpt(env);
+}
+
 #define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
     { \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
     }
 }

-static void mve_update_eci(DisasContext *s)
+void mve_update_eci(DisasContext *s)
 {
     /*
      * The helper function will always update the CPUState field,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LCTP(DisasContext *s, arg_LCTP *a)
     return true;
 }

+static bool trans_VCTP(DisasContext *s, arg_VCTP *a)
+{
+    /*
+     * M-profile Create Vector Tail Predicate. This insn is itself
+     * predicated and is subject to beatwise execution.
+     */
+    TCGv_i32 rn_shifted, masklen;
+
+    if (!dc_isar_feature(aa32_mve, s) || a->rn == 13 || a->rn == 15) {
+        return false;
+    }
+
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /*
+     * We pre-calculate the mask length here to avoid having
+     * to have multiple helpers specialized for size.
+     * We pass the helper "rn <= (1 << (4 - size)) ? (rn << size) : 16".
+     */
+    rn_shifted = tcg_temp_new_i32();
+    masklen = load_reg(s, a->rn);
+    tcg_gen_shli_i32(rn_shifted, masklen, a->size);
+    tcg_gen_movcond_i32(TCG_COND_LEU, masklen,
+                        masklen, tcg_constant_i32(1 << (4 - a->size)),
+                        rn_shifted, tcg_constant_i32(16));
+    gen_helper_mve_vctp(cpu_env, masklen);
+    tcg_temp_free_i32(masklen);
+    tcg_temp_free_i32(rn_shifted);
+    mve_update_eci(s);
+    return true;
+}

 static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
 {
--
2.20.1
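Aside, not part of the patch above: the masklen precalculation and the
resulting predicate value can be modelled in plain C as follows (hypothetical
function name; esize is the element size in bytes, i.e. 1 << size):

    #include <stdint.h>

    /* Reference model of the VCTP predicate: elements with index < Rn
     * become active, all higher elements are predicated. masklen
     * mirrors "rn <= (1 << (4 - size)) ? (rn << size) : 16" from the
     * generated code above. */
    static uint16_t vctp_mask_model(uint32_t rn, unsigned esize)
    {
        unsigned masklen = rn <= (16 / esize) ? rn * esize : 16;
        return masklen ? (uint16_t)((1u << masklen) - 1) : 0;
    }

For example, vctp_mask_model(3, 4) (VCTP.32 with Rn = 3) yields 0x0FFF:
three active 32-bit elements, one predicated.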
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-3-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return 0;
 }

-#else
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
+{
+    return ARMMMUIdx_MUser;
+}
+
+#else /* !CONFIG_USER_ONLY */

 /*
  * What kind of stack write are we doing? This affects how exceptions
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return tt_resp;
 }

-#endif /* !CONFIG_USER_ONLY */
-
 ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
                               bool secstate, bool priv, bool negpri)
 {
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)

     return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
 }
+
+#endif /* !CONFIG_USER_ONLY */
--
2.34.1
Implement the MVE VLDR/VSTR insns which do scatter-gather using base
addresses from Qm plus or minus an immediate offset (possibly with
writeback). Note that writeback is not predicated but it does have
to honour ECI state, so we have to add an eci_mask check to the
VSTR_SG macros (the VLDR_SG macros already needed this to be able
to distinguish "skip beat" from "set predicated element to 0").

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  5 +++
 target/arm/mve.decode      | 10 +++++
 target/arm/mve_helper.c    | 91 ++++++++++++++++++++++++--------------
 target/arm/translate-mve.c | 72 ++++++++++++++++++++++++++++++
 4 files changed, 146 insertions(+), 32 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vldrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_wb_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_wb_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)

 DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vmaxv qm rda size
 &vabav qn qm rda size
 &vldst_sg qd qm rn size msize os
+&vldst_sg_imm qd qm a w imm

 # scatter-gather memory size is in bits 6:4
 %sg_msize 6:1 4:1
@@ -XXX,XX +XXX,XX @@
 @vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \
           qd=%qd qm=%qm msize=%sg_msize

+# Qm is in the fields usually labeled Qn
+@vldst_sg_imm .... .... a:1 . w:1 . .... .... .... . imm:7 &vldst_sg_imm \
+              qd=%qd qm=%qn
+
 @1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
 @1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
 @2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
@@ -XXX,XX +XXX,XX @@ VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
 VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
 VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg

+VLDRW_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1110 .... .... @vldst_sg_imm
+VLDRD_sg_imm 111 1 1101 ... 1 ... 0 ... 1 1111 .... .... @vldst_sg_imm
+VSTRW_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1110 .... .... @vldst_sg_imm
+VSTRD_sg_imm 111 1 1101 ... 0 ... 0 ... 1 1111 .... .... @vldst_sg_imm
+
 # Moves between 2 32-bit vector lanes and 2 general purpose registers
 VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
 VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
  * For loads, predicated lanes are zeroed instead of retaining
  * their previous values.
  */
-#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \
+#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN, WB) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                           uint32_t base) \
     { \
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
         addr = ADDRFN(base, m[H##ESIZE(e)]); \
         d[H##ESIZE(e)] = (mask & 1) ? \
             cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
+        if (WB) { \
+            m[H##ESIZE(e)] = addr; \
+        } \
     } \
     mve_advance_vpt(env); \
 }

 /* We know here TYPE is unsigned so always the same as the offset type */
-#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \
+#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN, WB) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                           uint32_t base) \
     { \
         TYPE *d = vd; \
         TYPE *m = vm; \
         uint16_t mask = mve_element_mask(env); \
+        uint16_t eci_mask = mve_eci_mask(env); \
         unsigned e; \
         uint32_t addr; \
-        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \
+            if (!(eci_mask & 1)) { \
+                continue; \
+            } \
             addr = ADDRFN(base, m[H##ESIZE(e)]); \
             if (mask & 1) { \
                 cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
             } \
+            if (WB) { \
+                m[H##ESIZE(e)] = addr; \
+            } \
         } \
         mve_advance_vpt(env); \
     }
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
  * accesses, controlled by the predicate mask for the relevant beat,
  * and with a single 32-bit offset in the first of the two Qm elements.
  * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little).
+ * Address writeback happens on the odd beats and updates the address
+ * stored in the even-beat element.
  */
-#define DO_VLDR64_SG(OP, ADDRFN) \
+#define DO_VLDR64_SG(OP, ADDRFN, WB) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                           uint32_t base) \
     { \
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
         addr = ADDRFN(base, m[H4(e & ~1)]); \
         addr += 4 * (e & 1); \
         d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \
+        if (WB && (e & 1)) { \
+            m[H4(e & ~1)] = addr - 4; \
+        } \
     } \
     mve_advance_vpt(env); \
 }

-#define DO_VSTR64_SG(OP, ADDRFN) \
+#define DO_VSTR64_SG(OP, ADDRFN, WB) \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
                           uint32_t base) \
     { \
         uint32_t *d = vd; \
         uint32_t *m = vm; \
         uint16_t mask = mve_element_mask(env); \
+        uint16_t eci_mask = mve_eci_mask(env); \
         unsigned e; \
         uint32_t addr; \
-        for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
+        for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \
+            if (!(eci_mask & 1)) { \
+                continue; \
+            } \
             addr = ADDRFN(base, m[H4(e & ~1)]); \
             addr += 4 * (e & 1); \
             if (mask & 1) { \
                 cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \
             } \
+            if (WB && (e & 1)) { \
+                m[H4(e & ~1)] = addr - 4; \
+            } \
         } \
         mve_advance_vpt(env); \
     }
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
 #define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2))
 #define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3))

-DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD)
-DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD)
-DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD)
+DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD, false)

-DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD)
-DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD)
-DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD)
-DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD)
-DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD)
-DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD)
-DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD)
+DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, false)
+DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD, false)

-DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH)
-DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH)
-DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH)
-DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW)
-DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD)
+DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH, false)
+DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH, false)
+DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH, false)
+DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW, false)
+DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD, false)

-DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD)
-DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD)
-DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD)
-DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD)
-DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD)
-DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD)
-DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD)
+DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD, false)
+DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD, false)
+DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD, false)

-DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH)
-DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH)
-DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW)
-DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD)
+DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH, false)
+DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH, false)
+DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW, false)
+DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD, false)
+
+DO_VLDR_SG(vldrw_sg_wb_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD, true)
+DO_VLDR64_SG(vldrd_sg_wb_ud, ADDR_ADD, true)
+DO_VSTR_SG(vstrw_sg_wb_uw, stl, 4, uint32_t, ADDR_ADD, true)
+DO_VSTR64_SG(vstrd_sg_wb_ud, ADDR_ADD, true)

 /*
  * The mergemask(D, R, M) macro performs the operation "*D = R" but
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)

 #undef F

+static bool do_ldst_sg_imm(DisasContext *s, arg_vldst_sg_imm *a,
+                           MVEGenLdStSGFn *fn, unsigned msize)
+{
+    uint32_t offset;
+    TCGv_ptr qd, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qd | a->qm) ||
+        !fn) {
+        return false;
+    }
+
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    offset = a->imm << msize;
+    if (!a->a) {
+        offset = -offset;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qd, qm, tcg_constant_i32(offset));
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool trans_VLDRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vldrw_sg_uw,
+        gen_helper_mve_vldrw_sg_wb_uw,
+    };
+    if (a->qd == a->qm) {
+        return false; /* UNPREDICTABLE */
+    }
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_32);
+}
+
+static bool trans_VLDRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vldrd_sg_ud,
+        gen_helper_mve_vldrd_sg_wb_ud,
+    };
+    if (a->qd == a->qm) {
+        return false; /* UNPREDICTABLE */
+    }
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
+}
+
+static bool trans_VSTRW_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vstrw_sg_uw,
+        gen_helper_mve_vstrw_sg_wb_uw,
+    };
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_32);
+}
+
+static bool trans_VSTRD_sg_imm(DisasContext *s, arg_vldst_sg_imm *a)
+{
+    static MVEGenLdStSGFn * const fns[] = {
+        gen_helper_mve_vstrd_sg_ud,
+        gen_helper_mve_vstrd_sg_wb_ud,
+    };
+    return do_ldst_sg_imm(s, a, fns[a->w], MO_64);
+}
+
 static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
 {
     TCGv_ptr qd;
--
2.20.1
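Aside, not part of the patch above: the distinction between the predicate
mask (which gates the store) and the ECI mask (which gates the whole beat,
including writeback) can be modelled by this small C sketch of one
VSTRW.32 scatter-gather store with writeback (hypothetical names; one
32-bit element per beat):

    #include <stdint.h>

    /* Reference model showing why the VSTR_SG macros gain an eci_mask
     * check: a predicated-out element must not store, but its address
     * writeback still happens; a beat that ECI says was already
     * executed is skipped entirely. */
    static void vstrw_sg_wb_model(uint32_t base, uint32_t *q_offsets,
                                  const uint32_t *q_data, uint16_t mask,
                                  uint16_t eci_mask,
                                  void (*store32)(uint32_t addr, uint32_t val))
    {
        for (unsigned e = 0; e < 4; e++, mask >>= 4, eci_mask >>= 4) {
            if (!(eci_mask & 1)) {
                continue;                 /* beat already executed */
            }
            uint32_t addr = base + q_offsets[e];
            if (mask & 1) {
                store32(addr, q_data[e]); /* store only if predicated-in */
            }
            q_offsets[e] = addr;          /* writeback is not predicated */
        }
    }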
From: Philippe Mathieu-Daudé <philmd@linaro.org>

arm_v7m_mmu_idx_all() and arm_v7m_mmu_idx_for_secstate_and_priv()
are only used for system emulation in m_helper.c.
Move the definitions to avoid prototype forward declarations.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-4-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 14 --------
 target/arm/m_helper.c  | 74 +++++++++++++++++++++---------------------
 2 files changed, 37 insertions(+), 51 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)

 int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);

-/*
- * Return the MMU index for a v7M CPU with all relevant information
- * manually specified.
- */
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
-                              bool secstate, bool priv, bool negpri);
-
-/*
- * Return the MMU index for a v7M CPU in the specified security and
- * privilege state.
- */
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
-                                                bool secstate, bool priv);
-
 /* Return the MMU index for a v7M CPU in the specified security state */
 ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)

 #else /* !CONFIG_USER_ONLY */

+static ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
+                                     bool secstate, bool priv, bool negpri)
+{
+    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
+
+    if (priv) {
+        mmu_idx |= ARM_MMU_IDX_M_PRIV;
+    }
+
+    if (negpri) {
+        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
+    }
+
+    if (secstate) {
+        mmu_idx |= ARM_MMU_IDX_M_S;
+    }
+
+    return mmu_idx;
+}
+
+static ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
+                                                       bool secstate, bool priv)
+{
+    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
+
+    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
+}
+
+/* Return the MMU index for a v7M CPU in the specified security state */
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
+{
+    bool priv = arm_v7m_is_handler_mode(env) ||
+        !(env->v7m.control[secstate] & 1);
+
+    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
+}
+
 /*
  * What kind of stack write are we doing? This affects how exceptions
  * generated during the stacking are treated.
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return tt_resp;
 }

-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
-                              bool secstate, bool priv, bool negpri)
-{
-    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
-
-    if (priv) {
-        mmu_idx |= ARM_MMU_IDX_M_PRIV;
-    }
-
-    if (negpri) {
-        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
-    }
-
-    if (secstate) {
-        mmu_idx |= ARM_MMU_IDX_M_S;
-    }
-
-    return mmu_idx;
-}
-
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
-                                                bool secstate, bool priv)
-{
-    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
-
-    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
-}
-
-/* Return the MMU index for a v7M CPU in the specified security state */
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
-{
-    bool priv = arm_v7m_is_handler_mode(env) ||
-        !(env->v7m.control[secstate] & 1);
-
-    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
-}
-
 #endif /* !CONFIG_USER_ONLY */
--
2.34.1
Implement the MVE VMAXA and VMINA insns, which take the absolute
value of the signed elements in the input vector and then accumulate
the unsigned max or min into the destination vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  8 ++++++++
 target/arm/mve.decode      |  4 ++++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 40 insertions(+)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-5-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
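[Ed: as a reader aid, a minimal scalar model of one VMAXA.S8 element
operation as described above; this is an illustrative sketch, not part
of the patch, the function name is invented, and the real helper below
additionally applies the MVE predicate mask.]

#include <stdint.h>

/* vd is unsigned, vm is signed: take the absolute value of the
 * signed input, then do an unsigned max against the destination. */
static inline uint8_t vmaxa_s8_element(uint8_t d, int8_t m)
{
    uint8_t absm = (m < 0) ? (uint8_t)-(uint32_t)m : (uint8_t)m;
    return d > absm ? d : absm;
}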
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-mve.h
13
--- a/target/arm/helper.c
17
+++ b/target/arm/helper-mve.h
14
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
15
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
19
DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
16
}
20
DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
21
22
+DEF_HELPER_FLAGS_3(mve_vmaxab, TCG_CALL_NO_WG, void, env, ptr, ptr)
23
+DEF_HELPER_FLAGS_3(mve_vmaxah, TCG_CALL_NO_WG, void, env, ptr, ptr)
24
+DEF_HELPER_FLAGS_3(mve_vmaxaw, TCG_CALL_NO_WG, void, env, ptr, ptr)
25
+
26
+DEF_HELPER_FLAGS_3(mve_vminab, TCG_CALL_NO_WG, void, env, ptr, ptr)
27
+DEF_HELPER_FLAGS_3(mve_vminah, TCG_CALL_NO_WG, void, env, ptr, ptr)
28
+DEF_HELPER_FLAGS_3(mve_vminaw, TCG_CALL_NO_WG, void, env, ptr, ptr)
29
+
30
DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
31
DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
32
DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
36
+++ b/target/arm/mve.decode
37
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
38
VQMOVUNB 111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
39
VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
40
41
+ VMAXA 111 0 1110 0 . 11 .. 11 ... 0 1110 1 0 . 0 ... 1 @1op
42
+
43
VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
44
}
17
}
45
18
46
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
19
+#ifndef CONFIG_USER_ONLY
47
VQMOVUNT 111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
20
/*
48
VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
21
* We don't know until after realize whether there's a GICv3
49
22
* attached, and that is what registers the gicv3 sysregs.
50
+ VMINA 111 0 1110 0 . 11 .. 11 ... 1 1110 1 0 . 0 ... 1 @1op
23
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
51
+
24
return pfr1;
52
VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
53
}
25
}
54
26
55
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
27
-#ifndef CONFIG_USER_ONLY
56
index XXXXXXX..XXXXXXX 100644
28
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
--- a/target/arm/mve_helper.c
29
{
58
+++ b/target/arm/mve_helper.c
30
ARMCPU *cpu = env_archcpu(env);
59
@@ -XXX,XX +XXX,XX @@ DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W)
31
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
60
DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B)
32
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1,
61
DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H)
33
.access = PL1_R, .type = ARM_CP_NO_RAW,
62
DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W)
34
.accessfn = access_aa32_tid3,
63
+
35
+#ifdef CONFIG_USER_ONLY
64
+/*
36
+ .type = ARM_CP_CONST,
65
+ * VMAXA, VMINA: vd is unsigned; vm is signed, and we take its
37
+ .resetvalue = cpu->isar.id_pfr1,
66
+ * absolute value; we then do an unsigned comparison.
38
+#else
67
+ */
39
+ .type = ARM_CP_NO_RAW,
68
+#define DO_VMAXMINA(OP, ESIZE, STYPE, UTYPE, FN) \
40
+ .accessfn = access_aa32_tid3,
69
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
41
.readfn = id_pfr1_read,
70
+ { \
42
- .writefn = arm_cp_write_ignore },
71
+ UTYPE *d = vd; \
43
+ .writefn = arm_cp_write_ignore
72
+ STYPE *m = vm; \
44
+#endif
73
+ uint16_t mask = mve_element_mask(env); \
45
+ },
74
+ unsigned e; \
46
{ .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH,
75
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
47
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
76
+ UTYPE r = DO_ABS(m[H##ESIZE(e)]); \
48
.access = PL1_R, .type = ARM_CP_CONST,
77
+ r = FN(d[H##ESIZE(e)], r); \
78
+ mergemask(&d[H##ESIZE(e)], r, mask); \
79
+ } \
80
+ mve_advance_vpt(env); \
81
+ }
82
+
83
+DO_VMAXMINA(vmaxab, 1, int8_t, uint8_t, DO_MAX)
84
+DO_VMAXMINA(vmaxah, 2, int16_t, uint16_t, DO_MAX)
85
+DO_VMAXMINA(vmaxaw, 4, int32_t, uint32_t, DO_MAX)
86
+DO_VMAXMINA(vminab, 1, int8_t, uint8_t, DO_MIN)
87
+DO_VMAXMINA(vminah, 2, int16_t, uint16_t, DO_MIN)
88
+DO_VMAXMINA(vminaw, 4, int32_t, uint32_t, DO_MIN)
89
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/translate-mve.c
92
+++ b/target/arm/translate-mve.c
93
@@ -XXX,XX +XXX,XX @@ DO_1OP(VABS, vabs)
94
DO_1OP(VNEG, vneg)
95
DO_1OP(VQABS, vqabs)
96
DO_1OP(VQNEG, vqneg)
97
+DO_1OP(VMAXA, vmaxa)
98
+DO_1OP(VMINA, vmina)
99
100
/* Narrowing moves: only size 0 and 1 are valid */
101
#define DO_VMOVN(INSN, FN) \
102
--
2.20.1

--
2.34.1
All the users of the vmlaldav formats have an 'x' bit in bit 12 and an
'a' bit in bit 5; move these to the format rather than specifying them
in each insn pattern.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve.decode | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-6-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/user-internals.h | 2 +-
 target/arm/cpu.h            | 2 +-
 linux-user/arm/cpu_loop.c   | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)
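[Ed: illustrative sketch of the two fields the shared @vmlaldav format
now extracts for every user; extract32() is reimplemented locally here
so the fragment is self-contained, mirroring QEMU's bitops helper.]

#include <stdint.h>

/* Simplified stand-in for QEMU's extract32() bitfield helper. */
static inline uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (~0U >> (32 - length));
}

/* The bits the @vmlaldav format now decodes once for all users. */
static void vmlaldav_common_fields(uint32_t insn, int *x, int *a)
{
    *x = (int)extract32(insn, 12, 1);  /* 'x' bit: insn bit 12 */
    *a = (int)extract32(insn, 5, 1);   /* 'a' bit: insn bit 5 */
}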
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
14
diff --git a/linux-user/user-internals.h b/linux-user/user-internals.h
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/mve.decode
16
--- a/linux-user/user-internals.h
14
+++ b/target/arm/mve.decode
17
+++ b/linux-user/user-internals.h
15
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
18
@@ -XXX,XX +XXX,XX @@ void print_termios(void *arg);
16
19
#ifdef TARGET_ARM
17
&vmlaldav rdahi rdalo size qn qm x a
20
static inline int regpairs_aligned(CPUArchState *cpu_env, int num)
18
21
{
19
-@vmlaldav .... .... . ... ... . ... . .... .... qm:3 . \
22
- return cpu_env->eabi == 1;
20
+@vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
23
+ return cpu_env->eabi;
21
qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
24
}
22
-@vmlaldav_nosz .... .... . ... ... . ... . .... .... qm:3 . \
25
#elif defined(TARGET_MIPS) && defined(TARGET_ABI_MIPSO32)
23
+@vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
26
static inline int regpairs_aligned(CPUArchState *cpu_env, int num) { return 1; }
24
qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
-VMLALDAV_S 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
28
index XXXXXXX..XXXXXXX 100644
26
-VMLALDAV_U 1111 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 0 @vmlaldav
29
--- a/target/arm/cpu.h
27
+VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
30
+++ b/target/arm/cpu.h
28
+VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
29
32
30
-VMLSLDAV 1110 1110 1 ... ... . ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav
33
#if defined(CONFIG_USER_ONLY)
31
+VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
34
/* For usermode syscall translation. */
32
35
- int eabi;
33
-VRMLALDAVH_S 1110 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
36
+ bool eabi;
34
-VRMLALDAVH_U 1111 1110 1 ... ... 0 ... x:1 1111 . 0 a:1 0 ... 0 @vmlaldav_nosz
37
#endif
35
+VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
38
36
+VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
39
struct CPUBreakpoint *cpu_breakpoint[16];
37
40
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
38
-VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_nosz
41
index XXXXXXX..XXXXXXX 100644
39
+VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
42
--- a/linux-user/arm/cpu_loop.c
40
43
+++ b/linux-user/arm/cpu_loop.c
41
# Scalar operations
44
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
45
break;
46
case EXCP_SWI:
47
{
48
- env->eabi = 1;
49
+ env->eabi = true;
50
/* system call */
51
if (env->thumb) {
52
/* Thumb is always EABI style with syscall number in r7 */
53
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
54
* > 0xfffff and are handled below as out-of-range.
55
*/
56
n ^= ARM_SYSCALL_BASE;
57
- env->eabi = 0;
58
+ env->eabi = false;
59
}
60
}
42
61
43
--
2.20.1

--
2.34.1
Implement the MVE gather-loads and scatter-stores which
form the address by adding a base value from a scalar
register to an offset in each element of a vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  32 +++++++++
 target/arm/mve.decode      |  12 ++++
 target/arm/mve_helper.c    | 129 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c |  97 ++++++++++++++++++++++++++++
 4 files changed, 270 insertions(+)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Although the 'eabi' field is only used in user emulation where
CPU reset doesn't occur, it doesn't belong to the area to reset.
Move it after the 'end_reset_fields' for consistency.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-7-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
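[Ed: a minimal scalar model of the addressing described above, for one
unscaled VLDRW.U32 gather; load_u32() is a hypothetical memory accessor,
and the real helpers below additionally apply the predicate and ECI
masks, zeroing predicated lanes on loads.]

#include <stdint.h>

/* Gather load, 32-bit elements, unscaled offsets: each element
 * of Qm supplies an offset from the base address in Rn. */
static void vldrw_sg_uw_model(uint32_t qd[4], const uint32_t qm[4],
                              uint32_t base,
                              uint32_t (*load_u32)(uint32_t addr))
{
    for (int e = 0; e < 4; e++) {
        qd[e] = load_u32(base + qm[e]);   /* addr = Rn + Qm[e] */
    }
}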
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper-mve.h
17
--- a/target/arm/cpu.h
17
+++ b/target/arm/helper-mve.h
18
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrb_h, TCG_CALL_NO_WG, void, env, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
19
DEF_HELPER_FLAGS_3(mve_vstrb_w, TCG_CALL_NO_WG, void, env, ptr, i32)
20
ARMVectorReg zarray[ARM_MAX_VQ * 16];
20
DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)
21
#endif
21
22
22
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
-#if defined(CONFIG_USER_ONLY)
23
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
- /* For usermode syscall translation. */
24
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
- bool eabi;
25
+
26
-#endif
26
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
-
27
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
struct CPUBreakpoint *cpu_breakpoint[16];
28
+DEF_HELPER_FLAGS_4(mve_vldrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
struct CPUWatchpoint *cpu_watchpoint[16];
29
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
30
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
31
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
const struct arm_boot_info *boot_info;
32
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
/* Store GICv3CPUState to access from this struct */
33
+
34
void *gicv3state;
34
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
+#if defined(CONFIG_USER_ONLY)
35
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+ /* For usermode syscall translation. */
36
+DEF_HELPER_FLAGS_4(mve_vstrb_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+ bool eabi;
37
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+#endif /* CONFIG_USER_ONLY */
38
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
39
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
#ifdef TARGET_TAGGED_ADDRESSES
40
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
/* Linux syscall tagged address support */
41
+
42
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
+
44
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_4(mve_vldrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
46
+DEF_HELPER_FLAGS_4(mve_vldrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_4(mve_vldrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
48
+
49
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
50
+DEF_HELPER_FLAGS_4(mve_vstrh_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_4(mve_vstrw_sg_os_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
52
+DEF_HELPER_FLAGS_4(mve_vstrd_sg_os_ud, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
53
+
54
DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)
55
56
DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
57
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/mve.decode
60
+++ b/target/arm/mve.decode
61
@@ -XXX,XX +XXX,XX @@
62
&shl_scalar qda rm size
63
&vmaxv qm rda size
64
&vabav qn qm rda size
65
+&vldst_sg qd qm rn size msize os
66
+
67
+# scatter-gather memory size is in bits 6:4
68
+%sg_msize 6:1 4:1
69
70
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
71
# Note that both Rn and Qd are 3 bits only (no D bit)
72
@vldst_wn ... u:1 ... . . . . l:1 . rn:3 qd:3 . ... .. imm:7 &vldr_vstr
73
74
+@vldst_sg .... .... .... rn:4 .... ... size:2 ... ... os:1 &vldst_sg \
75
+ qd=%qd qm=%qm msize=%sg_msize
76
+
77
@1op .... .... .... size:2 .. .... .... .... .... &1op qd=%qd qm=%qm
78
@1op_nosz .... .... .... .... .... .... .... .... &1op qd=%qd qm=%qm size=0
79
@2op .... .... .. size:2 .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn
80
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
81
VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111110 ....... @vldr_vstr \
82
size=2 p=1
83
84
+# gather loads/scatter stores
85
+VLDR_S_sg 111 0 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
86
+VLDR_U_sg 111 1 1100 1 . 01 .... ... 0 111 . .... .... @vldst_sg
87
+VSTR_sg 111 0 1100 1 . 00 .... ... 0 111 . .... .... @vldst_sg
88
+
89
# Moves between 2 32-bit vector lanes and 2 general purpose registers
90
VMOV_to_2gp 1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
91
VMOV_from_2gp 1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
92
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/mve_helper.c
95
+++ b/target/arm/mve_helper.c
96
@@ -XXX,XX +XXX,XX @@ DO_VSTR(vstrh_w, 2, stw, 4, int32_t)
97
#undef DO_VLDR
98
#undef DO_VSTR
99
100
+/*
101
+ * Gather loads/scatter stores. Here each element of Qm specifies
102
+ * an offset to use from the base register Rm. In the _os_ versions
103
+ * that offset is scaled by the element size.
104
+ * For loads, predicated lanes are zeroed instead of retaining
105
+ * their previous values.
106
+ */
107
+#define DO_VLDR_SG(OP, LDTYPE, ESIZE, TYPE, OFFTYPE, ADDRFN) \
108
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
109
+ uint32_t base) \
110
+ { \
111
+ TYPE *d = vd; \
112
+ OFFTYPE *m = vm; \
113
+ uint16_t mask = mve_element_mask(env); \
114
+ uint16_t eci_mask = mve_eci_mask(env); \
115
+ unsigned e; \
116
+ uint32_t addr; \
117
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE, eci_mask >>= ESIZE) { \
118
+ if (!(eci_mask & 1)) { \
119
+ continue; \
120
+ } \
121
+ addr = ADDRFN(base, m[H##ESIZE(e)]); \
122
+ d[H##ESIZE(e)] = (mask & 1) ? \
123
+ cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0; \
124
+ } \
125
+ mve_advance_vpt(env); \
126
+ }
127
+
128
+/* We know here TYPE is unsigned so always the same as the offset type */
129
+#define DO_VSTR_SG(OP, STTYPE, ESIZE, TYPE, ADDRFN) \
130
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
131
+ uint32_t base) \
132
+ { \
133
+ TYPE *d = vd; \
134
+ TYPE *m = vm; \
135
+ uint16_t mask = mve_element_mask(env); \
136
+ unsigned e; \
137
+ uint32_t addr; \
138
+ for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
139
+ addr = ADDRFN(base, m[H##ESIZE(e)]); \
140
+ if (mask & 1) { \
141
+ cpu_##STTYPE##_data_ra(env, addr, d[H##ESIZE(e)], GETPC()); \
142
+ } \
143
+ } \
144
+ mve_advance_vpt(env); \
145
+ }
146
+
147
+/*
148
+ * 64-bit accesses are slightly different: they are done as two 32-bit
149
+ * accesses, controlled by the predicate mask for the relevant beat,
150
+ * and with a single 32-bit offset in the first of the two Qm elements.
151
+ * Note that for QEMU our IMPDEF AIRCR.ENDIANNESS is always 0 (little).
152
+ */
153
+#define DO_VLDR64_SG(OP, ADDRFN) \
154
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
155
+ uint32_t base) \
156
+ { \
157
+ uint32_t *d = vd; \
158
+ uint32_t *m = vm; \
159
+ uint16_t mask = mve_element_mask(env); \
160
+ uint16_t eci_mask = mve_eci_mask(env); \
161
+ unsigned e; \
162
+ uint32_t addr; \
163
+ for (e = 0; e < 16 / 4; e++, mask >>= 4, eci_mask >>= 4) { \
164
+ if (!(eci_mask & 1)) { \
165
+ continue; \
166
+ } \
167
+ addr = ADDRFN(base, m[H4(e & ~1)]); \
168
+ addr += 4 * (e & 1); \
169
+ d[H4(e)] = (mask & 1) ? cpu_ldl_data_ra(env, addr, GETPC()) : 0; \
170
+ } \
171
+ mve_advance_vpt(env); \
172
+ }
173
+
174
+#define DO_VSTR64_SG(OP, ADDRFN) \
175
+ void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm, \
176
+ uint32_t base) \
177
+ { \
178
+ uint32_t *d = vd; \
179
+ uint32_t *m = vm; \
180
+ uint16_t mask = mve_element_mask(env); \
181
+ unsigned e; \
182
+ uint32_t addr; \
183
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
184
+ addr = ADDRFN(base, m[H4(e & ~1)]); \
185
+ addr += 4 * (e & 1); \
186
+ if (mask & 1) { \
187
+ cpu_stl_data_ra(env, addr, d[H4(e)], GETPC()); \
188
+ } \
189
+ } \
190
+ mve_advance_vpt(env); \
191
+ }
192
+
193
+#define ADDR_ADD(BASE, OFFSET) ((BASE) + (OFFSET))
194
+#define ADDR_ADD_OSH(BASE, OFFSET) ((BASE) + ((OFFSET) << 1))
195
+#define ADDR_ADD_OSW(BASE, OFFSET) ((BASE) + ((OFFSET) << 2))
196
+#define ADDR_ADD_OSD(BASE, OFFSET) ((BASE) + ((OFFSET) << 3))
197
+
198
+DO_VLDR_SG(vldrb_sg_sh, ldsb, 2, int16_t, uint16_t, ADDR_ADD)
199
+DO_VLDR_SG(vldrb_sg_sw, ldsb, 4, int32_t, uint32_t, ADDR_ADD)
200
+DO_VLDR_SG(vldrh_sg_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD)
201
+
202
+DO_VLDR_SG(vldrb_sg_ub, ldub, 1, uint8_t, uint8_t, ADDR_ADD)
203
+DO_VLDR_SG(vldrb_sg_uh, ldub, 2, uint16_t, uint16_t, ADDR_ADD)
204
+DO_VLDR_SG(vldrb_sg_uw, ldub, 4, uint32_t, uint32_t, ADDR_ADD)
205
+DO_VLDR_SG(vldrh_sg_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD)
206
+DO_VLDR_SG(vldrh_sg_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD)
207
+DO_VLDR_SG(vldrw_sg_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD)
208
+DO_VLDR64_SG(vldrd_sg_ud, ADDR_ADD)
209
+
210
+DO_VLDR_SG(vldrh_sg_os_sw, ldsw, 4, int32_t, uint32_t, ADDR_ADD_OSH)
211
+DO_VLDR_SG(vldrh_sg_os_uh, lduw, 2, uint16_t, uint16_t, ADDR_ADD_OSH)
212
+DO_VLDR_SG(vldrh_sg_os_uw, lduw, 4, uint32_t, uint32_t, ADDR_ADD_OSH)
213
+DO_VLDR_SG(vldrw_sg_os_uw, ldl, 4, uint32_t, uint32_t, ADDR_ADD_OSW)
214
+DO_VLDR64_SG(vldrd_sg_os_ud, ADDR_ADD_OSD)
215
+
216
+DO_VSTR_SG(vstrb_sg_ub, stb, 1, uint8_t, ADDR_ADD)
217
+DO_VSTR_SG(vstrb_sg_uh, stb, 2, uint16_t, ADDR_ADD)
218
+DO_VSTR_SG(vstrb_sg_uw, stb, 4, uint32_t, ADDR_ADD)
219
+DO_VSTR_SG(vstrh_sg_uh, stw, 2, uint16_t, ADDR_ADD)
220
+DO_VSTR_SG(vstrh_sg_uw, stw, 4, uint32_t, ADDR_ADD)
221
+DO_VSTR_SG(vstrw_sg_uw, stl, 4, uint32_t, ADDR_ADD)
222
+DO_VSTR64_SG(vstrd_sg_ud, ADDR_ADD)
223
+
224
+DO_VSTR_SG(vstrh_sg_os_uh, stw, 2, uint16_t, ADDR_ADD_OSH)
225
+DO_VSTR_SG(vstrh_sg_os_uw, stw, 4, uint32_t, ADDR_ADD_OSH)
226
+DO_VSTR_SG(vstrw_sg_os_uw, stl, 4, uint32_t, ADDR_ADD_OSW)
227
+DO_VSTR64_SG(vstrd_sg_os_ud, ADDR_ADD_OSD)
228
+
229
/*
230
* The mergemask(D, R, M) macro performs the operation "*D = R" but
231
* storing only the bytes which correspond to 1 bits in M,
232
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
233
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/translate-mve.c
235
+++ b/target/arm/translate-mve.c
236
@@ -XXX,XX +XXX,XX @@ static inline int vidup_imm(DisasContext *s, int x)
237
#include "decode-mve.c.inc"
238
239
typedef void MVEGenLdStFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
240
+typedef void MVEGenLdStSGFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
241
typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
242
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
243
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
244
@@ -XXX,XX +XXX,XX @@ DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h, MO_8)
245
DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w, MO_8)
246
DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w, MO_16)
247
248
+static bool do_ldst_sg(DisasContext *s, arg_vldst_sg *a, MVEGenLdStSGFn fn)
249
+{
250
+ TCGv_i32 addr;
251
+ TCGv_ptr qd, qm;
252
+
253
+ if (!dc_isar_feature(aa32_mve, s) ||
254
+ !mve_check_qreg_bank(s, a->qd | a->qm) ||
255
+ !fn || a->rn == 15) {
256
+ /* Rn case is UNPREDICTABLE */
257
+ return false;
258
+ }
259
+
260
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
261
+ return true;
262
+ }
263
+
264
+ addr = load_reg(s, a->rn);
265
+
266
+ qd = mve_qreg_ptr(a->qd);
267
+ qm = mve_qreg_ptr(a->qm);
268
+ fn(cpu_env, qd, qm, addr);
269
+ tcg_temp_free_ptr(qd);
270
+ tcg_temp_free_ptr(qm);
271
+ tcg_temp_free_i32(addr);
272
+ mve_update_eci(s);
273
+ return true;
274
+}
275
+
276
+/*
277
+ * The naming scheme here is "vldrb_sg_sh == in-memory byte loads
278
+ * signextended to halfword elements in register". _os_ indicates that
279
+ * the offsets in Qm should be scaled by the element size.
280
+ */
281
+/* This macro is just to make the arrays more compact in these functions */
282
+#define F(N) gen_helper_mve_##N
283
+
284
+/* VLDRB/VSTRB (ie msize 1) with OS=1 is UNPREDICTABLE; we UNDEF */
285
+static bool trans_VLDR_S_sg(DisasContext *s, arg_vldst_sg *a)
286
+{
287
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
288
+ { NULL, F(vldrb_sg_sh), F(vldrb_sg_sw), NULL },
289
+ { NULL, NULL, F(vldrh_sg_sw), NULL },
290
+ { NULL, NULL, NULL, NULL },
291
+ { NULL, NULL, NULL, NULL }
292
+ }, {
293
+ { NULL, NULL, NULL, NULL },
294
+ { NULL, NULL, F(vldrh_sg_os_sw), NULL },
295
+ { NULL, NULL, NULL, NULL },
296
+ { NULL, NULL, NULL, NULL }
297
+ }
298
+ };
299
+ if (a->qd == a->qm) {
300
+ return false; /* UNPREDICTABLE */
301
+ }
302
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
303
+}
304
+
305
+static bool trans_VLDR_U_sg(DisasContext *s, arg_vldst_sg *a)
306
+{
307
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
308
+ { F(vldrb_sg_ub), F(vldrb_sg_uh), F(vldrb_sg_uw), NULL },
309
+ { NULL, F(vldrh_sg_uh), F(vldrh_sg_uw), NULL },
310
+ { NULL, NULL, F(vldrw_sg_uw), NULL },
311
+ { NULL, NULL, NULL, F(vldrd_sg_ud) }
312
+ }, {
313
+ { NULL, NULL, NULL, NULL },
314
+ { NULL, F(vldrh_sg_os_uh), F(vldrh_sg_os_uw), NULL },
315
+ { NULL, NULL, F(vldrw_sg_os_uw), NULL },
316
+ { NULL, NULL, NULL, F(vldrd_sg_os_ud) }
317
+ }
318
+ };
319
+ if (a->qd == a->qm) {
320
+ return false; /* UNPREDICTABLE */
321
+ }
322
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
323
+}
324
+
325
+static bool trans_VSTR_sg(DisasContext *s, arg_vldst_sg *a)
326
+{
327
+ static MVEGenLdStSGFn * const fns[2][4][4] = { {
328
+ { F(vstrb_sg_ub), F(vstrb_sg_uh), F(vstrb_sg_uw), NULL },
329
+ { NULL, F(vstrh_sg_uh), F(vstrh_sg_uw), NULL },
330
+ { NULL, NULL, F(vstrw_sg_uw), NULL },
331
+ { NULL, NULL, NULL, F(vstrd_sg_ud) }
332
+ }, {
333
+ { NULL, NULL, NULL, NULL },
334
+ { NULL, F(vstrh_sg_os_uh), F(vstrh_sg_os_uw), NULL },
335
+ { NULL, NULL, F(vstrw_sg_os_uw), NULL },
336
+ { NULL, NULL, NULL, F(vstrd_sg_os_ud) }
337
+ }
338
+ };
339
+ return do_ldst_sg(s, a, fns[a->os][a->msize][a->size]);
340
+}
341
+
342
+#undef F
343
+
344
static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
345
{
346
TCGv_ptr qd;
347
--
2.20.1

--
2.34.1
From: Sebastian Meyer <meyer@absint.com>

With gdb 9.0 and newer it is possible to connect to a gdbstub
over unix sockets, which is better than a TCP socket connection
in some situations. The QEMU command line to set this up is
non-obvious; document it.

Signed-off-by: Sebastian Meyer <meyer@absint.com>
Message-id: 162867284829.27377.4784930719350564918-0@git.sr.ht
[PMM: Tweaked commit message; adjusted wording in a couple of
 places; fixed rST formatting issue; moved section up out of
 the 'advanced debugging options' subsection]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/gdb.rst | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-8-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/docs/system/gdb.rst b/docs/system/gdb.rst
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/docs/system/gdb.rst
13
--- a/target/arm/cpu.h
23
+++ b/docs/system/gdb.rst
14
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ The ``-s`` option will make QEMU listen for an incoming connection
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
25
from gdb on TCP port 1234, and ``-S`` will make QEMU not start the
16
26
guest until you tell it to from gdb. (If you want to specify which
17
void *nvic;
27
TCP port to use or to use something other than TCP for the gdbstub
18
const struct arm_boot_info *boot_info;
28
-connection, use the ``-gdb dev`` option instead of ``-s``.)
19
+#if !defined(CONFIG_USER_ONLY)
29
+connection, use the ``-gdb dev`` option instead of ``-s``. See
20
/* Store GICv3CPUState to access from this struct */
30
+`Using unix sockets`_ for an example.)
21
void *gicv3state;
31
22
-#if defined(CONFIG_USER_ONLY)
32
.. parsed-literal::
23
+#else /* CONFIG_USER_ONLY */
33
24
/* For usermode syscall translation. */
34
@@ -XXX,XX +XXX,XX @@ not just those in the cluster you are currently working on::
25
bool eabi;
35
26
#endif /* CONFIG_USER_ONLY */
36
(gdb) set schedule-multiple on
37
38
+Using unix sockets
39
+==================
40
+
41
+An alternate method for connecting gdb to the QEMU gdbstub is to use
42
+a unix socket (if supported by your operating system). This is useful when
43
+running several tests in parallel, or if you do not have a known free TCP
44
+port (e.g. when running automated tests).
45
+
46
+First create a chardev with the appropriate options, then
47
+instruct the gdbserver to use that device:
48
+
49
+.. parsed-literal::
50
+
51
+ |qemu_system| -chardev socket,path=/tmp/gdb-socket,server=on,wait=off,id=gdb0 -gdb chardev:gdb0 -S ...
52
+
53
+Start gdb as before, but this time connect using the path to
54
+the socket::
55
+
56
+ (gdb) target remote /tmp/gdb-socket
57
+
58
+Note that to use a unix socket for the connection you will need
59
+gdb version 9.0 or newer.
60
+
61
Advanced debugging options
62
==========================
63
64
--
2.20.1

--
2.34.1
From: Guenter Roeck <linux@roeck-us.net>

Instantiate SAI1/2/3 as unimplemented devices to avoid Linux kernel crashes
such as the following.

Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000
pgd = (ptrval)
[d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453
Internal error: : 808 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-rc5 #1
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c0959b28>] (regmap_write+0x3c/0x60)
[<c0959b28>] (regmap_write) from [<c0d41130>] (fsl_sai_runtime_resume+0x9c/0x1ec)
[<c0d41130>] (fsl_sai_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d4231c>] (fsl_sai_probe+0x2b8/0x65c)
[<c0d4231c>] (fsl_sai_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210810175607.538090-1-linux@roeck-us.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/fsl-imx7.h | 5 +++++
 hw/arm/fsl-imx7.c         | 7 +++++++
 2 files changed, 12 insertions(+)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-9-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
45
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
46
--- a/include/hw/arm/fsl-imx7.h
13
--- a/target/arm/cpu.h
47
+++ b/include/hw/arm/fsl-imx7.h
14
+++ b/target/arm/cpu.h
48
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
49
FSL_IMX7_UART6_ADDR = 0x30A80000,
16
} sau;
50
FSL_IMX7_UART7_ADDR = 0x30A90000,
17
51
18
void *nvic;
52
+ FSL_IMX7_SAI1_ADDR = 0x308A0000,
19
- const struct arm_boot_info *boot_info;
53
+ FSL_IMX7_SAI2_ADDR = 0x308B0000,
20
#if !defined(CONFIG_USER_ONLY)
54
+ FSL_IMX7_SAI3_ADDR = 0x308C0000,
21
+ const struct arm_boot_info *boot_info;
55
+ FSL_IMX7_SAIn_SIZE = 0x10000,
22
/* Store GICv3CPUState to access from this struct */
56
+
23
void *gicv3state;
57
FSL_IMX7_ENET1_ADDR = 0x30BE0000,
24
#else /* CONFIG_USER_ONLY */
58
FSL_IMX7_ENET2_ADDR = 0x30BF0000,
59
60
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/hw/arm/fsl-imx7.c
63
+++ b/hw/arm/fsl-imx7.c
64
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
65
create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE);
66
create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE);
67
68
+ /*
69
+ * SAI (Audio SSI (Synchronous Serial Interface))
70
+ */
71
+ create_unimplemented_device("sai1", FSL_IMX7_SAI1_ADDR, FSL_IMX7_SAIn_SIZE);
72
+ create_unimplemented_device("sai2", FSL_IMX7_SAI2_ADDR, FSL_IMX7_SAIn_SIZE);
73
+ create_unimplemented_device("sai2", FSL_IMX7_SAI3_ADDR, FSL_IMX7_SAIn_SIZE);
74
+
75
/*
76
* OCOTP
77
*/
78
--
2.20.1

--
2.34.1
Implement the MVE VPNOT insn, which inverts the bits in VPR.P0
(subject to both predication and to beatwise execution).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  1 +
 target/arm/mve.decode      |  1 +
 target/arm/mve_helper.c    | 17 +++++++++++++++++
 target/arm/translate-mve.c | 19 +++++++++++++++++++
 4 files changed, 38 insertions(+)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
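[Ed: the P0 update done by the new mve_vpnot helper below can be
restated as a pure function; this sketch mirrors the helper's own
logic and is illustrative only.]

#include <stdint.h>

/* Unexecuted beats (eci_mask 0) keep their P0 bits, predicated lanes
 * (mask 0) are written as 0, and all other P0 bits are inverted. */
static uint16_t vpnot_p0_model(uint16_t p0, uint16_t mask, uint16_t eci_mask)
{
    uint16_t beatpred = (uint16_t)(~p0 & mask);
    return (uint16_t)((p0 & (uint16_t)~eci_mask) | (beatpred & eci_mask));
}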
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper-mve.h
13
--- a/target/arm/cpu.h
16
+++ b/target/arm/helper-mve.h
14
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
18
DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
16
uint32_t ctrl;
19
17
} sau;
20
DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
18
21
+DEF_HELPER_FLAGS_1(mve_vpnot, TCG_CALL_NO_WG, void, env)
19
- void *nvic;
22
20
#if !defined(CONFIG_USER_ONLY)
23
DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
21
+ void *nvic;
24
DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
22
const struct arm_boot_info *boot_info;
25
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
23
/* Store GICv3CPUState to access from this struct */
26
index XXXXXXX..XXXXXXX 100644
24
void *gicv3state;
27
--- a/target/arm/mve.decode
28
+++ b/target/arm/mve.decode
29
@@ -XXX,XX +XXX,XX @@ VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
30
VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
31
32
{
33
+ VPNOT 1111 1110 0 0 11 000 1 000 0 1111 0100 1101
34
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
35
VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
36
}
37
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/mve_helper.c
40
+++ b/target/arm/mve_helper.c
41
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
42
mve_advance_vpt(env);
43
}
44
45
+void HELPER(mve_vpnot)(CPUARMState *env)
46
+{
47
+ /*
48
+ * P0 bits for unexecuted beats (where eci_mask is 0) are unchanged.
49
+ * P0 bits for predicated lanes in executed bits (where mask is 0) are 0.
50
+ * P0 bits otherwise are inverted.
51
+ * (This is the same logic as VCMP.)
52
+ * This insn is itself subject to predication and to beat-wise execution,
53
+ * and after it executes VPT state advances in the usual way.
54
+ */
55
+ uint16_t mask = mve_element_mask(env);
56
+ uint16_t eci_mask = mve_eci_mask(env);
57
+ uint16_t beatpred = ~env->v7m.vpr & mask;
58
+ env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) | (beatpred & eci_mask);
59
+ mve_advance_vpt(env);
60
+}
61
+
62
#define DO_1OP_SAT(OP, ESIZE, TYPE, FN) \
63
void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
64
{ \
65
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/translate-mve.c
68
+++ b/target/arm/translate-mve.c
69
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
70
return true;
71
}
72
73
+static bool trans_VPNOT(DisasContext *s, arg_VPNOT *a)
74
+{
75
+ /*
76
+ * Invert the predicate in VPR.P0. We have call out to
77
+ * a helper because this insn itself is beatwise and can
78
+ * be predicated.
79
+ */
80
+ if (!dc_isar_feature(aa32_mve, s)) {
81
+ return false;
82
+ }
83
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
84
+ return true;
85
+ }
86
+
87
+ gen_helper_mve_vpnot(cpu_env);
88
+ mve_update_eci(s);
89
+ return true;
90
+}
91
+
92
static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
93
{
94
/* VADDV: vector add across vector */
95
--
2.20.1

--
2.34.1
Unlike A-profile, for M-profile the UDIV and SDIV insns can be
configured to raise an exception on division by zero, using the CCR
DIV_0_TRP bit.

Implement support for setting this bit by making the helper functions
raise the appropriate exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
---
 target/arm/cpu.h       |  1 +
 target/arm/helper.h    |  4 ++--
 target/arm/helper.c    | 19 +++++++++++++++++--
 target/arm/m_helper.c  |  4 ++++
 target/arm/translate.c |  4 ++--
 5 files changed, 26 insertions(+), 6 deletions(-)

From: Philippe Mathieu-Daudé <philmd@linaro.org>

There is no point in using a void pointer to access the NVIC.
Use the real type to avoid casting it while debugging.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      | 46 ++++++++++++++++++++++---------------------
 hw/intc/armv7m_nvic.c | 38 ++++++++++++-----------------------
 target/arm/cpu.c      |  1 +
 target/arm/m_helper.c |  2 +-
 4 files changed, 39 insertions(+), 48 deletions(-)
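[Ed: sketch of the guest-visible semantics the first patch implements;
raise_divbyzero_usagefault() is a stand-in name for the real
raise_exception_ra(env, EXCP_DIVBYZERO, ...) path used below.]

#include <stdbool.h>
#include <stdint.h>

extern void raise_divbyzero_usagefault(void);  /* stand-in, see note */

static uint32_t udiv_model(uint32_t num, uint32_t den, bool ccr_div_0_trp)
{
    if (den == 0) {
        if (ccr_div_0_trp) {
            raise_divbyzero_usagefault();  /* CCR.DIV_0_TRP set: trap */
        }
        return 0;  /* untrapped division by zero returns 0 */
    }
    return num / den;
}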
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
22
23
typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
24
25
+typedef struct NVICState NVICState;
26
+
27
typedef struct CPUArchState {
28
/* Regs for current mode. */
29
uint32_t regs[16];
30
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
31
} sau;
32
33
#if !defined(CONFIG_USER_ONLY)
34
- void *nvic;
35
+ NVICState *nvic;
36
const struct arm_boot_info *boot_info;
37
/* Store GICv3CPUState to access from this struct */
38
void *gicv3state;
39
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
40
41
/* Interface between CPU and Interrupt controller. */
42
#ifndef CONFIG_USER_ONLY
43
-bool armv7m_nvic_can_take_pending_exception(void *opaque);
44
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
45
#else
46
-static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
47
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
48
{
49
return true;
50
}
51
#endif
52
/**
53
* armv7m_nvic_set_pending: mark the specified exception as pending
54
- * @opaque: the NVIC
55
+ * @s: the NVIC
56
* @irq: the exception number to mark pending
57
* @secure: false for non-banked exceptions or for the nonsecure
58
* version of a banked exception, true for the secure version of a banked
59
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
60
* if @secure is true and @irq does not specify one of the fixed set
61
* of architecturally banked exceptions.
62
*/
63
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
64
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
65
/**
66
* armv7m_nvic_set_pending_derived: mark this derived exception as pending
67
- * @opaque: the NVIC
68
+ * @s: the NVIC
69
* @irq: the exception number to mark pending
70
* @secure: false for non-banked exceptions or for the nonsecure
71
* version of a banked exception, true for the secure version of a banked
72
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
73
* exceptions (exceptions generated in the course of trying to take
74
* a different exception).
75
*/
76
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
77
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
78
/**
79
* armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
80
- * @opaque: the NVIC
81
+ * @s: the NVIC
82
* @irq: the exception number to mark pending
83
* @secure: false for non-banked exceptions or for the nonsecure
84
* version of a banked exception, true for the secure version of a banked
85
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
86
* Similar to armv7m_nvic_set_pending(), but specifically for exceptions
87
* generated in the course of lazy stacking of FP registers.
88
*/
89
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
90
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
91
/**
92
* armv7m_nvic_get_pending_irq_info: return highest priority pending
93
* exception, and whether it targets Secure state
94
- * @opaque: the NVIC
95
+ * @s: the NVIC
96
* @pirq: set to pending exception number
97
* @ptargets_secure: set to whether pending exception targets Secure
98
*
99
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
100
* to true if the current highest priority pending exception should
101
* be taken to Secure state, false for NS.
102
*/
103
-void armv7m_nvic_get_pending_irq_info(void *opaque, int *pirq,
104
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
105
bool *ptargets_secure);
106
/**
107
* armv7m_nvic_acknowledge_irq: make highest priority pending exception active
108
- * @opaque: the NVIC
109
+ * @s: the NVIC
110
*
111
* Move the current highest priority pending exception from the pending
112
* state to the active state, and update v7m.exception to indicate that
113
* it is the exception currently being handled.
114
*/
115
-void armv7m_nvic_acknowledge_irq(void *opaque);
116
+void armv7m_nvic_acknowledge_irq(NVICState *s);
117
/**
118
* armv7m_nvic_complete_irq: complete specified interrupt or exception
119
- * @opaque: the NVIC
120
+ * @s: the NVIC
121
* @irq: the exception number to complete
122
* @secure: true if this exception was secure
123
*
124
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
125
* 0 if there is still an irq active after this one was completed
126
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
127
*/
128
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
129
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
130
/**
131
* armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
132
- * @opaque: the NVIC
133
+ * @s: the NVIC
134
* @irq: the exception number to mark pending
135
* @secure: false for non-banked exceptions or for the nonsecure
136
* version of a banked exception, true for the secure version of a banked
137
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
138
* interrupt the current execution priority. This controls whether the
139
* RDY bit for it in the FPCCR is set.
140
*/
141
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
142
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
143
/**
144
* armv7m_nvic_raw_execution_priority: return the raw execution priority
145
- * @opaque: the NVIC
146
+ * @s: the NVIC
147
*
148
* Returns: the raw execution priority as defined by the v8M architecture.
149
* This is the execution priority minus the effects of AIRCR.PRIS,
150
* and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
151
* (v8M ARM ARM I_PKLD.)
152
*/
153
-int armv7m_nvic_raw_execution_priority(void *opaque);
154
+int armv7m_nvic_raw_execution_priority(NVICState *s);
155
/**
156
* armv7m_nvic_neg_prio_requested: return true if the requested execution
157
* priority is negative for the specified security state.
158
- * @opaque: the NVIC
159
+ * @s: the NVIC
160
* @secure: the security state to test
161
* This corresponds to the pseudocode IsReqExecPriNeg().
162
*/
163
#ifndef CONFIG_USER_ONLY
164
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
165
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
166
#else
167
-static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
168
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
169
{
170
return false;
171
}
172
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
173
index XXXXXXX..XXXXXXX 100644
174
--- a/hw/intc/armv7m_nvic.c
175
+++ b/hw/intc/armv7m_nvic.c
176
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
177
return MIN(running, s->exception_prio);
178
}
179
180
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
181
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
182
{
183
/* Return true if the requested execution priority is negative
184
* for the specified security state, ie that security state
185
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
186
* mean we don't allow FAULTMASK_NS to actually make the execution
187
* priority negative). Compare pseudocode IsReqExcPriNeg().
188
*/
189
- NVICState *s = opaque;
190
-
191
if (s->cpu->env.v7m.faultmask[secure]) {
192
return true;
193
}
194
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
195
return false;
196
}
197
198
-bool armv7m_nvic_can_take_pending_exception(void *opaque)
199
+bool armv7m_nvic_can_take_pending_exception(NVICState *s)
200
{
201
- NVICState *s = opaque;
202
-
203
return nvic_exec_prio(s) > nvic_pending_prio(s);
204
}
205
206
-int armv7m_nvic_raw_execution_priority(void *opaque)
207
+int armv7m_nvic_raw_execution_priority(NVICState *s)
208
{
209
- NVICState *s = opaque;
210
-
211
return s->exception_prio;
212
}
213
214
@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
215
* if @secure is true and @irq does not specify one of the fixed set
216
* of architecturally banked exceptions.
217
*/
218
-static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
219
+static void armv7m_nvic_clear_pending(NVICState *s, int irq, bool secure)
220
{
221
- NVICState *s = (NVICState *)opaque;
222
VecInfo *vec;
223
224
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
225
@@ -XXX,XX +XXX,XX @@ static void do_armv7m_nvic_set_pending(void *opaque, int irq, bool secure,
226
}
227
}
228
229
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
230
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure)
231
{
232
- do_armv7m_nvic_set_pending(opaque, irq, secure, false);
233
+ do_armv7m_nvic_set_pending(s, irq, secure, false);
234
}
235
236
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
237
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure)
238
{
239
- do_armv7m_nvic_set_pending(opaque, irq, secure, true);
240
+ do_armv7m_nvic_set_pending(s, irq, secure, true);
241
}
242
243
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
244
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure)
245
{
246
/*
247
* Pend an exception during lazy FP stacking. This differs
248
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
249
* whether we should escalate depends on the saved context
250
* in the FPCCR register, not on the current state of the CPU/NVIC.
251
*/
252
- NVICState *s = (NVICState *)opaque;
253
bool banked = exc_is_banked(irq);
254
VecInfo *vec;
255
bool targets_secure;
256
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
257
}
258
259
/* Make pending IRQ active. */
260
-void armv7m_nvic_acknowledge_irq(void *opaque)
261
+void armv7m_nvic_acknowledge_irq(NVICState *s)
262
{
263
- NVICState *s = (NVICState *)opaque;
264
CPUARMState *env = &s->cpu->env;
265
const int pending = s->vectpending;
266
const int running = nvic_exec_prio(s);
267
@@ -XXX,XX +XXX,XX @@ static bool vectpending_targets_secure(NVICState *s)
268
exc_targets_secure(s, s->vectpending);
269
}
270
271
-void armv7m_nvic_get_pending_irq_info(void *opaque,
272
+void armv7m_nvic_get_pending_irq_info(NVICState *s,
273
int *pirq, bool *ptargets_secure)
274
{
275
- NVICState *s = (NVICState *)opaque;
276
const int pending = s->vectpending;
277
bool targets_secure;
278
279
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
280
*pirq = pending;
281
}
282
283
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
284
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure)
285
{
286
- NVICState *s = (NVICState *)opaque;
287
VecInfo *vec = NULL;
288
int ret = 0;
289
290
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
291
return ret;
292
}
293
294
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
295
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure)
296
{
297
/*
298
* Return whether an exception is "ready", i.e. it is enabled and is
299
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
300
* for non-banked exceptions secure is always false; for banked exceptions
301
* it indicates which of the exceptions is required.
302
*/
303
- NVICState *s = (NVICState *)opaque;
304
bool banked = exc_is_banked(irq);
305
VecInfo *vec;
306
int running = nvic_exec_prio(s);
307
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
308
index XXXXXXX..XXXXXXX 100644
309
--- a/target/arm/cpu.c
310
+++ b/target/arm/cpu.c
23
@@ -XXX,XX +XXX,XX @@
311
@@ -XXX,XX +XXX,XX @@
24
#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
312
#if !defined(CONFIG_USER_ONLY)
25
#define EXCP_LSERR 21 /* v8M LSERR SecureFault */
313
#include "hw/loader.h"
26
#define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
314
#include "hw/boards.h"
27
+#define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
315
+#include "hw/intc/armv7m_nvic.h"
28
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
316
#endif
29
317
#include "sysemu/tcg.h"
30
#define ARMV7M_EXCP_RESET 1
318
#include "sysemu/qtest.h"
31
diff --git a/target/arm/helper.h b/target/arm/helper.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/helper.h
34
+++ b/target/arm/helper.h
35
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(add_saturate, i32, env, i32, i32)
36
DEF_HELPER_3(sub_saturate, i32, env, i32, i32)
37
DEF_HELPER_3(add_usaturate, i32, env, i32, i32)
38
DEF_HELPER_3(sub_usaturate, i32, env, i32, i32)
39
-DEF_HELPER_FLAGS_2(sdiv, TCG_CALL_NO_RWG_SE, s32, s32, s32)
40
-DEF_HELPER_FLAGS_2(udiv, TCG_CALL_NO_RWG_SE, i32, i32, i32)
41
+DEF_HELPER_FLAGS_3(sdiv, TCG_CALL_NO_RWG, s32, env, s32, s32)
42
+DEF_HELPER_FLAGS_3(udiv, TCG_CALL_NO_RWG, i32, env, i32, i32)
43
DEF_HELPER_FLAGS_1(rbit, TCG_CALL_NO_RWG_SE, i32, i32)
44
45
#define PAS_OP(pfx) \
46
diff --git a/target/arm/helper.c b/target/arm/helper.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/helper.c
49
+++ b/target/arm/helper.c
50
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sxtb16)(uint32_t x)
51
return res;
52
}
53
54
+static void handle_possible_div0_trap(CPUARMState *env, uintptr_t ra)
55
+{
56
+ /*
57
+ * Take a division-by-zero exception if necessary; otherwise return
58
+ * to get the usual non-trapping division behaviour (result of 0)
59
+ */
60
+ if (arm_feature(env, ARM_FEATURE_M)
61
+ && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
62
+ raise_exception_ra(env, EXCP_DIVBYZERO, 0, 1, ra);
63
+ }
64
+}
65
+
66
uint32_t HELPER(uxtb16)(uint32_t x)
67
{
68
uint32_t res;
69
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(uxtb16)(uint32_t x)
70
return res;
71
}
72
73
-int32_t HELPER(sdiv)(int32_t num, int32_t den)
74
+int32_t HELPER(sdiv)(CPUARMState *env, int32_t num, int32_t den)
75
{
76
if (den == 0) {
77
+ handle_possible_div0_trap(env, GETPC());
78
return 0;
79
}
80
if (num == INT_MIN && den == -1) {
81
@@ -XXX,XX +XXX,XX @@ int32_t HELPER(sdiv)(int32_t num, int32_t den)
82
return num / den;
83
}
84
85
-uint32_t HELPER(udiv)(uint32_t num, uint32_t den)
86
+uint32_t HELPER(udiv)(CPUARMState *env, uint32_t num, uint32_t den)
87
{
88
if (den == 0) {
89
+ handle_possible_div0_trap(env, GETPC());
90
return 0;
91
}
92
return num / den;
93
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(int idx)
94
[EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
95
[EXCP_LSERR] = "v8M LSERR UsageFault",
96
[EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
97
+ [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
98
};
99
100
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
101
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
319
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
102
index XXXXXXX..XXXXXXX 100644
320
index XXXXXXX..XXXXXXX 100644
103
--- a/target/arm/m_helper.c
321
--- a/target/arm/m_helper.c
104
+++ b/target/arm/m_helper.c
322
+++ b/target/arm/m_helper.c
105
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
323
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
106
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
324
* that we will need later in order to do lazy FP reg stacking.
107
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
325
*/
108
break;
326
bool is_secure = env->v7m.secure;
109
+ case EXCP_DIVBYZERO:
327
- void *nvic = env->nvic;
110
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
328
+ NVICState *nvic = env->nvic;
111
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_DIVBYZERO_MASK;
329
/*
112
+ break;
330
* Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
113
case EXCP_SWI:
331
* are banked and we want to update the bit in the bank for the
114
/* The PC already points to the next instruction. */
115
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
116
diff --git a/target/arm/translate.c b/target/arm/translate.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate.c
119
+++ b/target/arm/translate.c
120
@@ -XXX,XX +XXX,XX @@ static bool op_div(DisasContext *s, arg_rrr *a, bool u)
121
t1 = load_reg(s, a->rn);
122
t2 = load_reg(s, a->rm);
123
if (u) {
124
- gen_helper_udiv(t1, t1, t2);
125
+ gen_helper_udiv(t1, cpu_env, t1, t2);
126
} else {
127
- gen_helper_sdiv(t1, t1, t2);
128
+ gen_helper_sdiv(t1, cpu_env, t1, t2);
129
}
130
tcg_temp_free_i32(t2);
131
store_reg(s, a->rd, t1);
132
--
2.20.1

--
2.34.1
diff view generated by jsdifflib
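As a side note on the division-trap patch above: taken together, the new
sdiv/udiv helpers give AArch32 division the following observable semantics.
A minimal standalone C model (an illustrative sketch, not QEMU code; the
trap comment stands in for raise_exception_ra()):

    #include <limits.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch of SDIV as the helper implements it. */
    static int32_t model_sdiv(int32_t num, int32_t den, bool div0_trap)
    {
        if (den == 0) {
            if (div0_trap) {
                /* v7M with CCR.DIV_0_TRP set: raise DIVBYZERO UsageFault */
            }
            return 0;          /* non-trapping divide-by-zero yields 0 */
        }
        if (num == INT_MIN && den == -1) {
            return INT_MIN;    /* the one overflow case: result wraps */
        }
        return num / den;
    }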
Include the MVE VPR register value in the CPU dumps produced by
arm_cpu_dump_state() if we are printing FPU information. This
makes it easier to interpret debug logs when predication is
active.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
                         i, v);
        }
        qemu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env));
+        if (cpu_isar_feature(aa32_mve, cpu)) {
+            qemu_fprintf(f, "VPR: %08x\n", env->v7m.vpr);
+        }
    }
}
--
2.20.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

While dozens of files include "cpu.h", only 3 files require
these NVIC helper declarations.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-12-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h | 123 ++++++++++++++++++++++++++++++++++
 target/arm/cpu.h              | 123 ----------------------------------
 target/arm/cpu.c              |   4 +-
 target/arm/cpu_tcg.c          |   3 +
 target/arm/m_helper.c         |   3 +
 5 files changed, 132 insertions(+), 124 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ struct NVICState {
    qemu_irq sysresetreq;
};

+/* Interface between CPU and Interrupt controller. */
+/**
+ * armv7m_nvic_set_pending: mark the specified exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Marks the specified exception as pending. Note that we will assert()
+ * if @secure is true and @irq does not specify one of the fixed set
+ * of architecturally banked exceptions.
+ */
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_derived: mark this derived exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for derived
+ * exceptions (exceptions generated in the course of trying to take
+ * a different exception).
+ */
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
+ * generated in the course of lazy stacking of FP registers.
+ */
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_get_pending_irq_info: return highest priority pending
+ *    exception, and whether it targets Secure state
+ * @s: the NVIC
+ * @pirq: set to pending exception number
+ * @ptargets_secure: set to whether pending exception targets Secure
+ *
+ * This function writes the number of the highest priority pending
+ * exception (the one which would be made active by
+ * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
+ * to true if the current highest priority pending exception should
+ * be taken to Secure state, false for NS.
+ */
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
+                                      bool *ptargets_secure);
+/**
+ * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
+ * @s: the NVIC
+ *
+ * Move the current highest priority pending exception from the pending
+ * state to the active state, and update v7m.exception to indicate that
+ * it is the exception currently being handled.
+ */
+void armv7m_nvic_acknowledge_irq(NVICState *s);
+/**
+ * armv7m_nvic_complete_irq: complete specified interrupt or exception
+ * @s: the NVIC
+ * @irq: the exception number to complete
+ * @secure: true if this exception was secure
+ *
+ * Returns: -1 if the irq was not active
+ *          1 if completing this irq brought us back to base (no active irqs)
+ *          0 if there is still an irq active after this one was completed
+ * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
+ */
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Return whether an exception is "ready", i.e. whether the exception is
+ * enabled and is configured at a priority which would allow it to
+ * interrupt the current execution priority. This controls whether the
+ * RDY bit for it in the FPCCR is set.
+ */
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_raw_execution_priority: return the raw execution priority
+ * @s: the NVIC
+ *
+ * Returns: the raw execution priority as defined by the v8M architecture.
+ * This is the execution priority minus the effects of AIRCR.PRIS,
+ * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
+ * (v8M ARM ARM I_PKLD.)
+ */
+int armv7m_nvic_raw_execution_priority(NVICState *s);
+/**
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
+ * priority is negative for the specified security state.
+ * @s: the NVIC
+ * @secure: the security state to test
+ * This corresponds to the pseudocode IsReqExecPriNeg().
+ */
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
+#else
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
+{
+    return false;
+}
+#endif
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
+#else
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
+{
+    return true;
+}
+#endif
+
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                 uint32_t cur_el, bool secure);

-/* Interface between CPU and Interrupt controller. */
-#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_can_take_pending_exception(NVICState *s);
-#else
-static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
-{
-    return true;
-}
-#endif
-/**
- * armv7m_nvic_set_pending: mark the specified exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Marks the specified exception as pending. Note that we will assert()
- * if @secure is true and @irq does not specify one of the fixed set
- * of architecturally banked exceptions.
- */
-void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_set_pending_derived: mark this derived exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Similar to armv7m_nvic_set_pending(), but specifically for derived
- * exceptions (exceptions generated in the course of trying to take
- * a different exception).
- */
-void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
- * generated in the course of lazy stacking of FP registers.
- */
-void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_get_pending_irq_info: return highest priority pending
- *    exception, and whether it targets Secure state
- * @s: the NVIC
- * @pirq: set to pending exception number
- * @ptargets_secure: set to whether pending exception targets Secure
- *
- * This function writes the number of the highest priority pending
- * exception (the one which would be made active by
- * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
- * to true if the current highest priority pending exception should
- * be taken to Secure state, false for NS.
- */
-void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
-                                      bool *ptargets_secure);
-/**
- * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
- * @s: the NVIC
- *
- * Move the current highest priority pending exception from the pending
- * state to the active state, and update v7m.exception to indicate that
- * it is the exception currently being handled.
- */
-void armv7m_nvic_acknowledge_irq(NVICState *s);
-/**
- * armv7m_nvic_complete_irq: complete specified interrupt or exception
- * @s: the NVIC
- * @irq: the exception number to complete
- * @secure: true if this exception was secure
- *
- * Returns: -1 if the irq was not active
- *          1 if completing this irq brought us back to base (no active irqs)
- *          0 if there is still an irq active after this one was completed
- * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
- */
-int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Return whether an exception is "ready", i.e. whether the exception is
- * enabled and is configured at a priority which would allow it to
- * interrupt the current execution priority. This controls whether the
- * RDY bit for it in the FPCCR is set.
- */
-bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_raw_execution_priority: return the raw execution priority
- * @s: the NVIC
- *
- * Returns: the raw execution priority as defined by the v8M architecture.
- * This is the execution priority minus the effects of AIRCR.PRIS,
- * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
- * (v8M ARM ARM I_PKLD.)
- */
-int armv7m_nvic_raw_execution_priority(NVICState *s);
-/**
- * armv7m_nvic_neg_prio_requested: return true if the requested execution
- * priority is negative for the specified security state.
- * @s: the NVIC
- * @secure: the security state to test
- * This corresponds to the pseudocode IsReqExecPriNeg().
- */
-#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
-#else
-static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
-{
-    return false;
-}
-#endif
-
 /* Interface for defining coprocessor registers.
  * Registers are defined in tables of arm_cp_reginfo structs
  * which are passed to define_arm_cp_regs().
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/loader.h"
#include "hw/boards.h"
+#ifdef CONFIG_TCG
#include "hw/intc/armv7m_nvic.h"
-#endif
+#endif /* CONFIG_TCG */
+#endif /* !CONFIG_USER_ONLY */
#include "sysemu/tcg.h"
#include "sysemu/qtest.h"
#include "sysemu/hw_accel.h"
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
#include "hw/boards.h"
#endif
#include "cpregs.h"
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG)
+#include "hw/intc/armv7m_nvic.h"
+#endif


/* Share AArch32 -cpu max features with AArch64. */
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@
#include "exec/cpu_ldst.h"
#include "semihosting/common-semi.h"
#endif
+#if !defined(CONFIG_USER_ONLY)
+#include "hw/intc/armv7m_nvic.h"
+#endif

static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask,
                         uint32_t reg, uint32_t val)
--
2.34.1
Deleted patch
In the MVE shift-and-insert insns, we special case VSLI by 0
and VSRI by <dt>. VSRI by <dt> means "don't update the destination",
which is what we've implemented. However VSLI by 0 is "set
destination to the input", so we don't want to use the same
special-casing that we do for VSRI by <dt>.

Since the generic logic gives the right answer for a shift
by 0, just use that.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
        uint16_t mask;                                          \
        uint64_t shiftmask;                                     \
        unsigned e;                                             \
-        if (shift == 0 || shift == ESIZE * 8) {                 \
+        if (shift == ESIZE * 8) {                               \
            /*                                                  \
-             * Only VSLI can shift by 0; only VSRI can shift by <dt>. \
-             * The generic logic would give the right answer for 0 but \
-             * fails for <dt>.                                  \
+             * Only VSRI can shift by <dt>; it should mean "don't \
+             * update the destination". The generic logic can't handle \
+             * this because it would try to shift by an out-of-range \
+             * amount, so special case it here.                 \
             */                                                  \
            goto done;                                           \
        }                                                        \
--
2.20.1
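The underlying shift-and-insert operation only writes the destination bits
covered by the shifted mask, which is why the two by-amount edge cases behave
so differently. A scalar sketch of VSRI on one byte lane (an illustration
with invented names, not QEMU code):

    #include <stdint.h>

    /* Model of VSRI.8: insert (m >> shift) into d, keeping the d bits
     * that the shifted all-ones mask no longer covers. At shift == 8
     * the mask is empty, i.e. "don't update d"; the VSLI analogue at
     * shift == 0 has a full mask and simply copies m into d. */
    static uint8_t model_vsri8(uint8_t d, uint8_t m, unsigned shift)
    {
        uint8_t shiftmask = (shift < 8) ? (uint8_t)(0xff >> shift) : 0;
        return (d & ~shiftmask) | ((m >> shift) & shiftmask);
    }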
Deleted patch
A cut-and-paste error meant we handled signed VADDV like
unsigned VADDV; fix the type used.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAVH(vrmlsldavhxsw, int32_t, int64_t, true, true)
        return ra;                                              \
    }                                                           \

-DO_VADDV(vaddvsb, 1, uint8_t)
-DO_VADDV(vaddvsh, 2, uint16_t)
-DO_VADDV(vaddvsw, 4, uint32_t)
+DO_VADDV(vaddvsb, 1, int8_t)
+DO_VADDV(vaddvsh, 2, int16_t)
+DO_VADDV(vaddvsw, 4, int32_t)
DO_VADDV(vaddvub, 1, uint8_t)
DO_VADDV(vaddvuh, 2, uint16_t)
DO_VADDV(vaddvuw, 4, uint32_t)
--
2.20.1
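The element type matters because each lane is widened before it is added to
the accumulator: the same 0x80 lane contributes -128 or +128 depending on
signedness. For example (a sketch, not QEMU code):

    #include <stdint.h>

    uint32_t widen_signed(void)
    {
        int8_t e = (int8_t)0x80;       /* lane value 0x80 */
        return (uint32_t)(int32_t)e;   /* accumulates -128: 0xffffff80 */
    }

    uint32_t widen_unsigned(void)
    {
        uint8_t e = 0x80;
        return (uint32_t)e;            /* accumulates +128: 0x00000080 */
    }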
Deleted patch
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn. This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.

Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns. This allows us also to simplify the saturation bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSHLL_ALL(vshllt, true)
        TYPE *d = vd;                                           \
        uint16_t mask = mve_element_mask(env);                  \
        unsigned le;                                            \
+        mask >>= ESIZE * TOP;                                   \
        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
            TYPE r = FN(m[H##LESIZE(le)], shift);               \
            mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask);     \
@@ -XXX,XX +XXX,XX @@ static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max,
        uint16_t mask = mve_element_mask(env);                  \
        bool qc = false;                                        \
        unsigned le;                                            \
+        mask >>= ESIZE * TOP;                                   \
        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
            bool sat = false;                                   \
            TYPE r = FN(m[H##LESIZE(le)], shift, &sat);         \
            mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask);     \
-            qc |= sat && (mask & 1 << (TOP * ESIZE));           \
+            qc |= sat & mask & 1;                               \
        }                                                       \
        if (qc) {                                               \
            env->vfp.qc[0] = qc;                                \
--
2.20.1
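To make the indexing concrete: the 16-bit predicate mask carries one bit per
byte of the 16-byte vector, and the 'top' output half-element for input
element le lands at byte (le * 2 + TOP) * ESIZE. A small sketch (hypothetical
helper name, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Is the lane that starts at byte offset 'byte' predicated on,
     * given a per-byte 16-bit mask like mve_element_mask() returns? */
    static bool lane_active(uint16_t mask, unsigned byte)
    {
        return mask & (1u << byte);
    }

    /* E.g. for ESIZE = 1, TOP = 1, le = 0 the output is byte 1, so mask
     * bit 1 applies, not bit 0: hence pre-shifting the whole mask right
     * by ESIZE * TOP before the loop. */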
Deleted patch
In do_sqrshl48_d() and do_uqrshl48_d() we got some of the edge
cases wrong and failed to saturate correctly:

(1) In do_sqrshl48_d() we used the same code that do_shrshl_bhs()
does to obtain the saturated most-negative and most-positive 48-bit
signed values for the large-shift-left case. This gives (1 << 47)
for saturate-to-most-negative, but we weren't sign-extending this
value to the 64-bit output as the pseudocode requires.

(2) For left shifts by less than 48, we copied the "8/16 bit" code
from do_sqrshl_bhs() and do_uqrshl_bhs(). This doesn't do the right
thing because it assumes the C type we're working with is at least
twice the number of bits we're saturating to (so that a shift left by
bits-1 can't shift anything off the top of the value). This isn't
true for bits == 48, so we would incorrectly return 0 rather than the
most-positive value for situations like "shift (1 << 44) right by
20". Instead check for saturation by doing the shift and sign-extend
and then testing whether shifting back left again gives the original
value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
        }
        return src >> -shift;
    } else if (shift < 48) {
-        int64_t val = src << shift;
-        int64_t extval = sextract64(val, 0, 48);
-        if (!sat || val == extval) {
+        int64_t extval = sextract64(src << shift, 0, 48);
+        if (!sat || src == (extval >> shift)) {
            return extval;
        }
    } else if (!sat || src == 0) {
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
    }

    *sat = 1;
-    return (1ULL << 47) - (src >= 0);
+    return src >= 0 ? MAKE_64BIT_MASK(0, 47) : MAKE_64BIT_MASK(47, 17);
}

/* Operate on 64-bit values, but saturate at 48 bits */
@@ -XXX,XX +XXX,XX @@ static inline uint64_t do_uqrshl48_d(uint64_t src, int64_t shift,
        return extval;
    }
} else if (shift < 48) {
-        uint64_t val = src << shift;
-        uint64_t extval = extract64(val, 0, 48);
-        if (!sat || val == extval) {
+        uint64_t extval = extract64(src << shift, 0, 48);
+        if (!sat || src == (extval >> shift)) {
            return extval;
        }
    } else if (!sat || src == 0) {
--
2.20.1
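The "shift back and compare" idiom used here generalizes: a left shift
saturates exactly when undoing the shift fails to recover the source.
A standalone sketch of the signed 48-bit case (illustrative only; it assumes
shift < 48 and arithmetic right shift on signed types, as QEMU does):

    #include <stdbool.h>
    #include <stdint.h>

    static int64_t sat_shl48(int64_t src, unsigned shift, bool *sat)
    {
        /* sign-extend the low 48 bits, like sextract64(src << shift, 0, 48) */
        int64_t extval = ((src << shift) << 16) >> 16;
        if ((extval >> shift) == src) {
            return extval;               /* shifting back recovers src */
        }
        *sat = true;                     /* saturate to the 48-bit extremes */
        return src >= 0 ? (INT64_C(1) << 47) - 1 : -(INT64_C(1) << 47);
    }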
Deleted patch
We got an edge case wrong in the 48-bit SQRSHRL implementation: if
the shift is to the right, although it always makes the result
smaller than the input value it might not be within the 48-bit range
the result is supposed to be if the input had some bits in [63..48]
set and the shift didn't bring all of those within the [47..0] range.

Handle this similarly to the way we already do for this case in
do_uqrshl48_d(): extend the calculated result from 48 bits,
and return that if not saturating or if it doesn't change the
result; otherwise fall through to return a saturated value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqrshll)(CPUARMState *env, uint64_t n, uint32_t shift)
static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
                                    bool round, uint32_t *sat)
{
+    int64_t val, extval;
+
    if (shift <= -48) {
        /* Rounding the sign bit always produces 0. */
        if (round) {
@@ -XXX,XX +XXX,XX @@ static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
    } else if (shift < 0) {
        if (round) {
            src >>= -shift - 1;
-            return (src >> 1) + (src & 1);
+            val = (src >> 1) + (src & 1);
+        } else {
+            val = src >> -shift;
+        }
+        extval = sextract64(val, 0, 48);
+        if (!sat || val == extval) {
+            return extval;
        }
-        return src >> -shift;
    } else if (shift < 48) {
        int64_t extval = sextract64(src << shift, 0, 48);
        if (!sat || src == (extval >> shift)) {
--
2.20.1
Implement the MVE VMOV forms that move data between 2 general-purpose
registers and 2 32-bit lanes in a vector register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-a32.h |  1 +
 target/arm/mve.decode      |  4 ++
 target/arm/translate-mve.c | 85 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-vfp.c |  2 +-
 4 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ void gen_rev16(TCGv_i32 dest, TCGv_i32 var);
void clear_eci_state(DisasContext *s);
bool mve_eci_check(DisasContext *s);
void mve_update_and_store_eci(DisasContext *s);
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size);

static inline TCGv_i32 load_cpu_offset(int offset)
{
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VLDR_VSTR 1110110 1 a:1 . w:1 . .... ... 111101 ....... @vldr_vstr \
VLDR_VSTR        1110110 1 a:1 . w:1 . .... ... 111110 .......   @vldr_vstr \
                 size=2 p=1

+# Moves between 2 32-bit vector lanes and 2 general purpose registers
+VMOV_to_2gp      1110 1100 0 . 00 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
+VMOV_from_2gp    1110 1100 0 . 01 rt2:4 ... 0 1111 000 idx:1 rt:4 qd=%qd
+
# Vector 2-op
VAND             1110 1111 0 . 00 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
VBIC             1110 1111 0 . 01 ... 0 ... 0 0001 . 1 . 1 ... 0 @2op_nosz
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn)

DO_VABAV(VABAV_S, vabavs)
DO_VABAV(VABAV_U, vabavu)

+static bool trans_VMOV_to_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
+{
+    /*
+     * VMOV two 32-bit vector lanes to two general-purpose registers.
+     * This insn is not predicated but it is subject to beat-wise
+     * execution if it is not in an IT block. For us this means
+     * only that if PSR.ECI says we should not be executing the beat
+     * corresponding to the lane of the vector register being accessed
+     * then we should skip performing the move, and that we need to do
+     * the usual check for bad ECI state and advance of ECI state.
+     * (If PSR.ECI is non-zero then we cannot be in an IT block.)
+     */
+    TCGv_i32 tmp;
+    int vd;
+
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
+        a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15 ||
+        a->rt == a->rt2) {
+        /* Rt/Rt2 cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /* Convert Qreg index to Dreg for read_neon_element32() etc */
+    vd = a->qd * 2;
+
+    if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
+        tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, vd, a->idx, MO_32);
+        store_reg(s, a->rt, tmp);
+    }
+    if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
+        tmp = tcg_temp_new_i32();
+        read_neon_element32(tmp, vd + 1, a->idx, MO_32);
+        store_reg(s, a->rt2, tmp);
+    }
+
+    mve_update_and_store_eci(s);
+    return true;
+}
+
+static bool trans_VMOV_from_2gp(DisasContext *s, arg_VMOV_to_2gp *a)
+{
+    /*
+     * VMOV two general-purpose registers to two 32-bit vector lanes.
+     * This insn is not predicated but it is subject to beat-wise
+     * execution if it is not in an IT block. For us this means
+     * only that if PSR.ECI says we should not be executing the beat
+     * corresponding to the lane of the vector register being accessed
+     * then we should skip performing the move, and that we need to do
+     * the usual check for bad ECI state and advance of ECI state.
+     * (If PSR.ECI is non-zero then we cannot be in an IT block.)
+     */
+    TCGv_i32 tmp;
+    int vd;
+
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd) ||
+        a->rt == 13 || a->rt == 15 || a->rt2 == 13 || a->rt2 == 15) {
+        /* Rt/Rt2 cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    /* Convert Qreg idx to Dreg for read_neon_element32() etc */
+    vd = a->qd * 2;
+
+    if (!mve_skip_vmov(s, vd, a->idx, MO_32)) {
+        tmp = load_reg(s, a->rt);
+        write_neon_element32(tmp, vd, a->idx, MO_32);
+        tcg_temp_free_i32(tmp);
+    }
+    if (!mve_skip_vmov(s, vd + 1, a->idx, MO_32)) {
+        tmp = load_reg(s, a->rt2);
+        write_neon_element32(tmp, vd + 1, a->idx, MO_32);
+        tcg_temp_free_i32(tmp);
+    }
+
+    mve_update_and_store_eci(s);
+    return true;
+}
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
    return true;
}

-static bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
+bool mve_skip_vmov(DisasContext *s, int vn, int index, int size)
{
    /*
     * In a CPU with MVE, the VMOV (vector lane to general-purpose register)
--
2.20.1

From: Alex Bennée <alex.bennee@linaro.org>

The two TCG tests for GICv2 and GICv3 are very heavyweight distros
that take a long time to boot up, especially for an --enable-debug
build. The total code coverage they give is:

Overall coverage rate:
  lines......: 11.2% (59584 of 530123 lines)
  functions..: 15.0% (7436 of 49443 functions)
  branches...: 6.3% (19273 of 303933 branches)

We already get pretty close to that with the machine_aarch64_virt
tests, which only do one full boot (~120s vs ~600s) of Alpine. We
expand the kernel+initrd boot (~8s) to test both GICs and also add an
RNG device and a block device to generate a few IRQs and exercise the
storage layer. With that we get to a coverage of:

Overall coverage rate:
  lines......: 11.0% (58121 of 530123 lines)
  functions..: 14.9% (7343 of 49443 functions)
  branches...: 6.0% (18269 of 303933 branches)

which I feel is close enough given the massive time saving. If we want
to target any more sub-systems we can use lighter-weight, more directed
tests.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230203181632.2919715-1-alex.bennee@linaro.org
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/avocado/boot_linux.py           | 48 ++++----------------
 tests/avocado/machine_aarch64_virt.py | 63 ++++++++++++++++++++++++---
 2 files changed, 65 insertions(+), 46 deletions(-)

diff --git a/tests/avocado/boot_linux.py b/tests/avocado/boot_linux.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux.py
+++ b/tests/avocado/boot_linux.py
@@ -XXX,XX +XXX,XX @@ def test_pc_q35_kvm(self):
        self.launch_and_wait(set_up_ssh_connection=False)


-# For Aarch64 we only boot KVM tests in CI as the TCG tests are very
-# heavyweight. There are lighter weight distros which we use in the
-# machine_aarch64_virt.py tests.
+# For Aarch64 we only boot KVM tests in CI as booting the current
+# Fedora OS in TCG tests is very heavyweight. There are lighter weight
+# distros which we use in the machine_aarch64_virt.py tests.
class BootLinuxAarch64(LinuxTest):
    """
    :avocado: tags=arch:aarch64
    :avocado: tags=machine:virt
-    :avocado: tags=machine:gic-version=2
    """
    timeout = 720

-    def add_common_args(self):
-        self.vm.add_args('-bios',
-                         os.path.join(BUILD_DIR, 'pc-bios',
-                                      'edk2-aarch64-code.fd'))
-        self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
-        self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
-
-    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
-    def test_fedora_cloud_tcg_gicv2(self):
-        """
-        :avocado: tags=accel:tcg
-        :avocado: tags=cpu:max
-        :avocado: tags=device:gicv2
-        """
-        self.require_accelerator("tcg")
-        self.vm.add_args("-accel", "tcg")
-        self.vm.add_args("-cpu", "max,lpa2=off")
-        self.vm.add_args("-machine", "virt,gic-version=2")
-        self.add_common_args()
-        self.launch_and_wait(set_up_ssh_connection=False)
-
-    @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
-    def test_fedora_cloud_tcg_gicv3(self):
-        """
-        :avocado: tags=accel:tcg
-        :avocado: tags=cpu:max
-        :avocado: tags=device:gicv3
-        """
-        self.require_accelerator("tcg")
-        self.vm.add_args("-accel", "tcg")
-        self.vm.add_args("-cpu", "max,lpa2=off")
-        self.vm.add_args("-machine", "virt,gic-version=3")
-        self.add_common_args()
-        self.launch_and_wait(set_up_ssh_connection=False)
-
    def test_virt_kvm(self):
        """
        :avocado: tags=accel:kvm
@@ -XXX,XX +XXX,XX @@ def test_virt_kvm(self):
        self.require_accelerator("kvm")
        self.vm.add_args("-accel", "kvm")
        self.vm.add_args("-machine", "virt,gic-version=host")
-        self.add_common_args()
+        self.vm.add_args('-bios',
+                         os.path.join(BUILD_DIR, 'pc-bios',
+                                      'edk2-aarch64-code.fd'))
+        self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
+        self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
        self.launch_and_wait(set_up_ssh_connection=False)


diff --git a/tests/avocado/machine_aarch64_virt.py b/tests/avocado/machine_aarch64_virt.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/machine_aarch64_virt.py
+++ b/tests/avocado/machine_aarch64_virt.py
@@ -XXX,XX +XXX,XX @@

import time
import os
+import logging

from avocado_qemu import QemuSystemTest
from avocado_qemu import wait_for_console_pattern
from avocado_qemu import exec_command
from avocado_qemu import BUILD_DIR
+from avocado.utils import process
+from avocado.utils.path import find_command

class Aarch64VirtMachine(QemuSystemTest):
    KERNEL_COMMON_COMMAND_LINE = 'printk.time=0 '
@@ -XXX,XX +XXX,XX @@ def test_alpine_virt_tcg_gic_max(self):
        self.wait_for_console_pattern('Welcome to Alpine Linux 3.16')


-    def test_aarch64_virt(self):
+    def common_aarch64_virt(self, machine):
        """
-        :avocado: tags=arch:aarch64
-        :avocado: tags=machine:virt
-        :avocado: tags=accel:tcg
-        :avocado: tags=cpu:max
+        Common code to launch basic virt machine with kernel+initrd
+        and a scratch disk.
        """
+        logger = logging.getLogger('aarch64_virt')
+
        kernel_url = ('https://fileserver.linaro.org/s/'
                      'z6B2ARM7DQT3HWN/download')
-
        kernel_hash = 'ed11daab50c151dde0e1e9c9cb8b2d9bd3215347'
        kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)

@@ -XXX,XX +XXX,XX @@ def test_aarch64_virt(self):
                               'console=ttyAMA0')
        self.require_accelerator("tcg")
        self.vm.add_args('-cpu', 'max,pauth-impdef=on',
+                         '-machine', machine,
                         '-accel', 'tcg',
                         '-kernel', kernel_path,
                         '-append', kernel_command_line)
+
+        # A RNG offers an easy way to generate a few IRQs
+        self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
+        self.vm.add_args('-object',
+                         'rng-random,id=rng0,filename=/dev/urandom')
+
+        # Also add a scratch block device
+        logger.info('creating scratch qcow2 image')
+        image_path = os.path.join(self.workdir, 'scratch.qcow2')
+        qemu_img = os.path.join(BUILD_DIR, 'qemu-img')
+        if not os.path.exists(qemu_img):
+            qemu_img = find_command('qemu-img', False)
+        if qemu_img is False:
+            self.cancel('Could not find "qemu-img", which is required to '
+                        'create the temporary qcow2 image')
+        cmd = '%s create -f qcow2 %s 8M' % (qemu_img, image_path)
+        process.run(cmd)
+
+        # Add the device
+        self.vm.add_args('-blockdev',
+                         f"driver=qcow2,file.driver=file,file.filename={image_path},node-name=scratch")
+        self.vm.add_args('-device',
+                         'virtio-blk-device,drive=scratch')
+
        self.vm.launch()
        self.wait_for_console_pattern('Welcome to Buildroot')
        time.sleep(0.1)
        exec_command(self, 'root')
        time.sleep(0.1)
+        exec_command(self, 'dd if=/dev/hwrng of=/dev/vda bs=512 count=4')
+        time.sleep(0.1)
+        exec_command(self, 'md5sum /dev/vda')
+        time.sleep(0.1)
+        exec_command(self, 'cat /proc/interrupts')
+        time.sleep(0.1)
        exec_command(self, 'cat /proc/self/maps')
        time.sleep(0.1)
+
+    def test_aarch64_virt_gicv3(self):
+        """
+        :avocado: tags=arch:aarch64
+        :avocado: tags=machine:virt
+        :avocado: tags=accel:tcg
+        :avocado: tags=cpu:max
+        """
+        self.common_aarch64_virt("virt,gic_version=3")
+
+    def test_aarch64_virt_gicv2(self):
+        """
+        :avocado: tags=arch:aarch64
+        :avocado: tags=machine:virt
+        :avocado: tags=accel:tcg
+        :avocado: tags=cpu:max
+        """
+        self.common_aarch64_virt("virt,gic-version=2")
--
2.34.1
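For anyone wanting to poke at the same setup by hand, the scratch-disk
plumbing the avocado test performs boils down to something like the following
(a sketch; the kernel image path is a placeholder, since the test fetches its
own kernel from the Linaro fileserver):

    $ qemu-img create -f qcow2 scratch.qcow2 8M
    $ qemu-system-aarch64 -M virt,gic-version=3 -cpu max,pauth-impdef=on \
        -accel tcg -kernel Image -append console=ttyAMA0 \
        -device virtio-rng-pci,rng=rng0 \
        -object rng-random,id=rng0,filename=/dev/urandom \
        -blockdev driver=qcow2,file.driver=file,file.filename=scratch.qcow2,node-name=scratch \
        -device virtio-blk-device,drive=scratch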
Implement the MVE VABAV insn, which computes absolute differences
between elements of two vectors and accumulates the result into
a general purpose register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  7 +++++++
 target/arm/mve.decode      |  6 ++++++
 target/arm/mve_helper.c    | 26 +++++++++++++++++++++++
 target/arm/translate-mve.c | 43 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 82 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)

+DEF_HELPER_FLAGS_4(mve_vabavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vabavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
&vcmp_scalar qn rm size mask
&shl_scalar qda rm size
&vmaxv qm rda size
+&vabav qn qm rda size

@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
# Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@ VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
  rdahi=%rdahi rdalo=%rdalo
}

+@vabav .... .... .. size:2 .... rda:4 .... .... .... &vabav qn=%qn qm=%qm
+
+VABAV_S          111 0 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav
+VABAV_U          111 1 1110 10 .. ... 0 .... 1111 . 0 . 0 ... 1 @vabav
+
# Logical immediate operations (1 reg and modified-immediate)

# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina)
DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina)
DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina)

+#define DO_VABAV(OP, ESIZE, TYPE)                               \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+                                    void *vm, uint32_t ra)      \
+    {                                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        TYPE *m = vm, *n = vn;                                  \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            if (mask & 1) {                                     \
+                int64_t n0 = n[H##ESIZE(e)];                    \
+                int64_t m0 = m[H##ESIZE(e)];                    \
+                uint32_t r = n0 >= m0 ? (n0 - m0) : (m0 - n0);  \
+                ra += r;                                        \
+            }                                                   \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return ra;                                              \
+    }
+
+DO_VABAV(vabavsb, 1, int8_t)
+DO_VABAV(vabavsh, 2, int16_t)
+DO_VABAV(vabavsw, 4, int32_t)
+DO_VABAV(vabavub, 1, uint8_t)
+DO_VABAV(vabavuh, 2, uint16_t)
+DO_VABAV(vabavuw, 4, uint32_t)
+
#define DO_VADDLV(OP, TYPE, LTYPE)                              \
    uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
                                    uint64_t ra)                \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);

/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ DO_VMAXV(VMAXAV, vmaxav)
DO_VMAXV(VMINV_S, vminvs)
DO_VMAXV(VMINV_U, vminvu)
DO_VMAXV(VMINAV, vminav)
+
+static bool do_vabav(DisasContext *s, arg_vabav *a, MVEGenVABAVFn *fn)
+{
+    /* Absolute difference accumulated across vector */
+    TCGv_ptr qn, qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qm | a->qn) ||
+        !fn || a->rda == 13 || a->rda == 15) {
+        /* Rda cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    qn = mve_qreg_ptr(a->qn);
+    rda = load_reg(s, a->rda);
+    fn(rda, cpu_env, qn, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+    tcg_temp_free_ptr(qn);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_VABAV(INSN, FN)                                      \
+    static bool trans_##INSN(DisasContext *s, arg_vabav *a)     \
+    {                                                           \
+        static MVEGenVABAVFn * const fns[] = {                  \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_vabav(s, a, fns[a->size]);                    \
+    }
+
+DO_VABAV(VABAV_S, vabavs)
+DO_VABAV(VABAV_U, vabavu)
--
2.20.1

From: Mostafa Saleh <smostafa@google.com>

The GBPA register can be used to globally abort all
transactions.

It is described in the SMMU manual in "6.3.14 SMMU_GBPA".
The ABORT reset value is IMPLEMENTATION DEFINED; it is chosen to
be zero (do not abort incoming transactions).

Other fields have default values of Use Incoming.

If UPDATE is not set, the write is ignored. This is the only permitted
behavior in SMMUv3.2 and later (see 6.3.14.1, Update procedure).

As this patch adds new state to the SMMU (GBPA), it is added
in a new subsection for forward migration compatibility.
GBPA is only migrated if its value is different from the reset value.
It does this to be backward migration compatible if SW didn't write
the register.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230214094009.2445653-1-smostafa@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h |  7 +++++++
 include/hw/arm/smmuv3.h  |  1 +
 hw/arm/smmuv3.c          | 43 +++++++++++++++++++++++++++++++++++++++-
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ REG32(CR0ACK, 0x24)
REG32(CR1, 0x28)
REG32(CR2, 0x2c)
REG32(STATUSR, 0x40)
+REG32(GBPA, 0x44)
+    FIELD(GBPA, ABORT, 20, 1)
+    FIELD(GBPA, UPDATE, 31, 1)
+
+/* Use incoming. */
+#define SMMU_GBPA_RESET_VAL 0x1000
+
REG32(IRQ_CTRL, 0x50)
    FIELD(IRQ_CTRL, GERROR_IRQEN, 0, 1)
    FIELD(IRQ_CTRL, PRI_IRQEN, 1, 1)
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -XXX,XX +XXX,XX @@ struct SMMUv3State {
    uint32_t cr[3];
    uint32_t cr0ack;
    uint32_t statusr;
+    uint32_t gbpa;
    uint32_t irq_ctrl;
    uint32_t gerror;
    uint32_t gerrorn;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
    s->gerror = 0;
    s->gerrorn = 0;
    s->statusr = 0;
+    s->gbpa = SMMU_GBPA_RESET_VAL;
}

static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
    qemu_mutex_lock(&s->mutex);

    if (!smmu_enabled(s)) {
-        status = SMMU_TRANS_DISABLE;
+        if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
+            status = SMMU_TRANS_ABORT;
+        } else {
+            status = SMMU_TRANS_DISABLE;
+        }
        goto epilogue;
    }

@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
    case A_GERROR_IRQ_CFG2:
        s->gerror_irq_cfg2 = data;
        return MEMTX_OK;
+    case A_GBPA:
+        /*
+         * If UPDATE is not set, the write is ignored. This is the only
+         * permitted behavior in SMMUv3.2 and later.
+         */
+        if (data & R_GBPA_UPDATE_MASK) {
+            /* Ignore update bit as write is synchronous. */
+            s->gbpa = data & ~R_GBPA_UPDATE_MASK;
+        }
+        return MEMTX_OK;
    case A_STRTAB_BASE: /* 64b */
        s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
        return MEMTX_OK;
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
    case A_STATUSR:
        *data = s->statusr;
        return MEMTX_OK;
+    case A_GBPA:
+        *data = s->gbpa;
+        return MEMTX_OK;
    case A_IRQ_CTRL:
    case A_IRQ_CTRL_ACK:
        *data = s->irq_ctrl;
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3_queue = {
    },
};

+static bool smmuv3_gbpa_needed(void *opaque)
+{
+    SMMUv3State *s = opaque;
+
+    /* Only migrate GBPA if it has different reset value. */
+    return s->gbpa != SMMU_GBPA_RESET_VAL;
+}
+
+static const VMStateDescription vmstate_gbpa = {
+    .name = "smmuv3/gbpa",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = smmuv3_gbpa_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(gbpa, SMMUv3State),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
static const VMStateDescription vmstate_smmuv3 = {
    .name = "smmuv3",
    .version_id = 1,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3 = {

        VMSTATE_END_OF_LIST(),
    },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_gbpa,
+        NULL
+    }
};

static void smmuv3_instance_init(Object *obj)
--
2.34.1
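The GBPA write path above is a nice example of an UPDATE-gated register:
the gate bit permits the write but is never stored. Distilled to a
standalone sketch (illustrative only, not the QEMU code):

    #include <stdint.h>

    /* Model of a GBPA-style write: bit 31 (UPDATE) gates the write and
     * is masked out of the stored value, since the write completes
     * synchronously in this model. */
    static void gbpa_write(uint32_t *gbpa, uint32_t data)
    {
        if (data & (1u << 31)) {
            *gbpa = data & ~(1u << 31);
        }
        /* otherwise the write is ignored, per SMMUv3.2 and later */
    }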
From: Eduardo Habkost <ehabkost@redhat.com>

The SBSA_GWDT enum value conflicts with the SBSA_GWDT() QOM type
checking helper, preventing us from using an OBJECT_DEFINE* or
DEFINE_INSTANCE_CHECKER macro for the SBSA_GWDT() wrapper.

If I understand the SBSA 6.0 specification correctly, the signal
being connected to IRQ 16 is the WS0 output signal from the
Generic Watchdog. Rename the enum value to SBSA_GWDT_WS0 to be
more explicit and avoid the name conflict.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 20210806023119.431680-1-ehabkost@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ enum {
    SBSA_GIC_DIST,
    SBSA_GIC_REDIST,
    SBSA_SECURE_EC,
-    SBSA_GWDT,
+    SBSA_GWDT_WS0,
    SBSA_GWDT_REFRESH,
    SBSA_GWDT_CONTROL,
    SBSA_SMMU,
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
    [SBSA_AHCI] = 10,
    [SBSA_EHCI] = 11,
    [SBSA_SMMU] = 12, /* ... to 15 */
-    [SBSA_GWDT] = 16,
+    [SBSA_GWDT_WS0] = 16,
};

static const char * const valid_cpus[] = {
@@ -XXX,XX +XXX,XX @@ static void create_wdt(const SBSAMachineState *sms)
    hwaddr cbase = sbsa_ref_memmap[SBSA_GWDT_CONTROL].base;
    DeviceState *dev = qdev_new(TYPE_WDT_SBSA);
    SysBusDevice *s = SYS_BUS_DEVICE(dev);
-    int irq = sbsa_ref_irqmap[SBSA_GWDT];
+    int irq = sbsa_ref_irqmap[SBSA_GWDT_WS0];

    sysbus_realize_and_unref(s, &error_fatal);
    sysbus_mmio_map(s, 0, rbase);
--
2.20.1

From: Philippe Mathieu-Daudé <philmd@linaro.org>

Since commit acc0b8b05a, when running the ZynqMP ZCU102 board with
a QEMU configured using --without-default-devices, we get:

  $ qemu-system-aarch64 -M xlnx-zcu102
  qemu-system-aarch64: missing object type 'usb_dwc3'
  Abort trap: 6

Fix by adding the missing Kconfig dependency.

Fixes: acc0b8b05a ("hw/arm/xlnx-zynqmp: Connect ZynqMP's USB controllers")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230216092327.2203-1-philmd@linaro.org
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config XLNX_ZYNQMP_ARM
    select XLNX_CSU_DMA
    select XLNX_ZYNQMP
    select XLNX_ZDMA
+    select USB_DWC3

config XLNX_VERSAL
    bool
--
2.34.1
From: Cornelia Huck <cohuck@redhat.com>

Just use current_accel_name() directly.

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
    if (vms->secure && (kvm_enabled() || hvf_enabled())) {
        error_report("mach-virt: %s does not support providing "
                     "Security extensions (TrustZone) to the guest CPU",
-                     kvm_enabled() ? "KVM" : "HVF");
+                     current_accel_name());
        exit(1);
    }

    if (vms->virt && (kvm_enabled() || hvf_enabled())) {
        error_report("mach-virt: %s does not support providing "
                     "Virtualization extensions to the guest CPU",
-                     kvm_enabled() ? "KVM" : "HVF");
+                     current_accel_name());
        exit(1);
    }

    if (vms->mte && (kvm_enabled() || hvf_enabled())) {
        error_report("mach-virt: %s does not support providing "
                     "MTE to the guest CPU",
-                     kvm_enabled() ? "KVM" : "HVF");
+                     current_accel_name());
        exit(1);
    }

--
2.34.1

Implement the MVE VMLAS insn, which multiplies a vector by a vector
and adds a scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  4 ++++
 target/arm/mve.decode      |  3 +++
 target/arm/mve_helper.c    | 26 ++++++++++++++++++++++++++
 target/arm/translate-mve.c |  1 +
 4 files changed, 34 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
VQDMULH_scalar   1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
VQRDMULH_scalar  1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

+# The U bit (28) is don't-care because it does not affect the result
+VMLAS            111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar
+
# Vector add across vector
{
  VADDV          111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
        mve_advance_vpt(env);                                   \
    }

+/* "accumulating" version where FN takes d as well as n and m */
+#define DO_2OP_ACC_SCALAR(OP, ESIZE, TYPE, FN)                  \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm)                    \
+    {                                                           \
+        TYPE *d = vd, *n = vn;                                  \
+        TYPE m = rm;                                            \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)],                          \
+                      FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m), mask); \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+    }
+
/* provide unsigned 2-op scalar helpers for all sizes */
#define DO_2OP_SCALAR_U(OP, FN)                 \
    DO_2OP_SCALAR(OP##b, 1, uint8_t, FN)        \
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
    DO_2OP_SCALAR(OP##h, 2, int16_t, FN)        \
    DO_2OP_SCALAR(OP##w, 4, int32_t, FN)

+#define DO_2OP_ACC_SCALAR_U(OP, FN)             \
+    DO_2OP_ACC_SCALAR(OP##b, 1, uint8_t, FN)    \
+    DO_2OP_ACC_SCALAR(OP##h, 2, uint16_t, FN)   \
+    DO_2OP_ACC_SCALAR(OP##w, 4, uint32_t, FN)
+
DO_2OP_SCALAR_U(vadd_scalar, DO_ADD)
DO_2OP_SCALAR_U(vsub_scalar, DO_SUB)
DO_2OP_SCALAR_U(vmul_scalar, DO_MUL)
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+/* Vector by vector plus scalar */
+#define DO_VMLAS(D, N, M) ((N) * (D) + (M))
+
+DO_2OP_ACC_SCALAR_U(vmlas, DO_VMLAS)
+
/*
 * Long saturating scalar ops. As with DO_2OP_L, TYPE and H are for the
 * input (smaller) type and LESIZE, LTYPE, LH for the output (long) type.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
DO_2OP_SCALAR(VBRSR, vbrsr)
+DO_2OP_SCALAR(VMLAS, vmlas)

static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
{
--
2.20.1
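Note the operand roles in the VMLAS patch above: the destination vector is
the multiplicand and the scalar is the addend, i.e. per element Qd[i] =
Qd[i] * Qn[i] + Rm, with the usual modulo wrap-around of the element type.
A per-element sketch for the byte case (illustrative, not QEMU code):

    #include <stdint.h>

    /* Model of one VMLAS.8 lane, matching DO_VMLAS(D, N, M) = N * D + M. */
    static uint8_t model_vmlas8(uint8_t d, uint8_t n, uint8_t rm)
    {
        return (uint8_t)(n * d + rm);   /* truncates to 8 bits, no saturation */
    }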
From: Guenter Roeck <linux@roeck-us.net>

Instantiate SAI1/2/3 and ASRC as unimplemented devices to avoid random
Linux kernel crashes, such as

Unhandled fault: external abort on non-linefetch (0x808) at 0xd1580010
pgd = (ptrval)
[d1580010] *pgd=8231b811, *pte=02034653, *ppte=02034453
Internal error: : 808 [#1] SMP ARM
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c095837c>] (_regmap_update_bits+0xe4/0xec)
[<c095837c>] (_regmap_update_bits) from [<c09599b4>] (regmap_update_bits_base+0x50/0x74)
[<c09599b4>] (regmap_update_bits_base) from [<c0d3e9e4>] (fsl_asrc_runtime_resume+0x1e4/0x21c)
[<c0d3e9e4>] (fsl_asrc_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d3ecc4>] (fsl_asrc_probe+0x2a8/0x708)
[<c0d3ecc4>] (fsl_asrc_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

or

Unhandled fault: external abort on non-linefetch (0x808) at 0xd19b0000
pgd = (ptrval)
[d19b0000] *pgd=82711811, *pte=308a0653, *ppte=308a0453
Internal error: : 808 [#1] SMP ARM
...
[<c095e974>] (regmap_mmio_write32le) from [<c095eb48>] (regmap_mmio_write+0x3c/0x54)
[<c095eb48>] (regmap_mmio_write) from [<c09580f4>] (_regmap_write+0x4c/0x1f0)
[<c09580f4>] (_regmap_write) from [<c0959b28>] (regmap_write+0x3c/0x60)
[<c0959b28>] (regmap_write) from [<c0d41130>] (fsl_sai_runtime_resume+0x9c/0x1ec)
[<c0d41130>] (fsl_sai_runtime_resume) from [<c0942464>] (__rpm_callback+0x3c/0x108)
[<c0942464>] (__rpm_callback) from [<c0942590>] (rpm_callback+0x60/0x64)
[<c0942590>] (rpm_callback) from [<c0942b60>] (rpm_resume+0x5cc/0x808)
[<c0942b60>] (rpm_resume) from [<c0942dfc>] (__pm_runtime_resume+0x60/0xa0)
[<c0942dfc>] (__pm_runtime_resume) from [<c0d4231c>] (fsl_sai_probe+0x2b8/0x65c)
[<c0d4231c>] (fsl_sai_probe) from [<c0935b08>] (platform_probe+0x58/0xb8)
[<c0935b08>] (platform_probe) from [<c0933264>] (really_probe.part.0+0x9c/0x334)
[<c0933264>] (really_probe.part.0) from [<c093359c>] (__driver_probe_device+0xa0/0x138)
[<c093359c>] (__driver_probe_device) from [<c0933664>] (driver_probe_device+0x30/0xc8)
[<c0933664>] (driver_probe_device) from [<c0933c88>] (__driver_attach+0x90/0x130)
[<c0933c88>] (__driver_attach) from [<c0931060>] (bus_for_each_dev+0x78/0xb8)
[<c0931060>] (bus_for_each_dev) from [<c093254c>] (bus_add_driver+0xf0/0x1d8)
[<c093254c>] (bus_add_driver) from [<c0934a30>] (driver_register+0x88/0x118)
[<c0934a30>] (driver_register) from [<c01022c0>] (do_one_initcall+0x7c/0x3a4)
[<c01022c0>] (do_one_initcall) from [<c1601204>] (kernel_init_freeable+0x198/0x22c)
[<c1601204>] (kernel_init_freeable) from [<c0f5ff2c>] (kernel_init+0x10/0x128)
[<c0f5ff2c>] (kernel_init) from [<c010013c>] (ret_from_fork+0x14/0x38)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20210810160318.87376-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/fsl-imx6ul.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/fsl-imx6ul.c
+++ b/hw/arm/fsl-imx6ul.c
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
      */
     create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR, 0x4000);

+    /*
+     * SAI (Audio SSI (Synchronous Serial Interface))
+     */
+    create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000);
+    create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000);
+    create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000);
+
     /*
      * PWM
      */
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
     create_unimplemented_device("pwm3", FSL_IMX6UL_PWM3_ADDR, 0x4000);
     create_unimplemented_device("pwm4", FSL_IMX6UL_PWM4_ADDR, 0x4000);

+    /*
+     * Audio ASRC (asynchronous sample rate converter)
+     */
+    create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000);
+
     /*
      * CAN
      */
--
2.20.1


From: Hao Wu <wuhaotsh@google.com>

Havard has not been working on the Nuvoton systems for a while
and won't be able to do any work on them in the future, so I'll
take over maintaining the Nuvoton system from him.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Acked-by: Havard Skinnemoen <hskinnemoen@google.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
Message-id: 20230208235433.3989937-2-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/net/mv88w8618_eth.h
 F: docs/system/arm/musicpal.rst

 Nuvoton NPCM7xx
-M: Havard Skinnemoen <hskinnemoen@google.com>
 M: Tyrone Ting <kfting@nuvoton.com>
+M: Hao Wu <wuhaotsh@google.com>
 L: qemu-arm@nongnu.org
 S: Supported
 F: hw/*/npcm7xx*
--
2.34.1
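For readers unfamiliar with the stub-device API used in the fsl-imx6ul patch
above: create_unimplemented_device() (declared in include/hw/misc/unimp.h)
maps a background MMIO region whose reads return zero and whose accesses are
logged under the "unimp" debug category, which is what turns the guest's
external abort into a harmless logged access. A minimal usage sketch; the
base address below is illustrative, not the real FSL_IMX6UL_SAI1_ADDR value:

    #include "hw/misc/unimp.h"   /* create_unimplemented_device() */

    static void map_sai_stub_sketch(void)
    {
        /*
         * 16KiB "black hole" at the SAI1 base: reads return 0, writes
         * are ignored, and each access is logged with -d unimp.
         */
        create_unimplemented_device("sai1", 0x02028000, 0x4000);
    }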
Implement the MVE 1-operand saturating operations VQABS and VQNEG.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 8 ++++++++
 target/arm/mve.decode | 3 +++
 target/arm/mve_helper.c | 37 +++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 2 ++
 4 files changed, 50 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)

+DEF_HELPER_FLAGS_3(mve_vqabsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqabsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqabsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqnegb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
 DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VABS_fp 1111 1111 1 . 11 .. 01 ... 0 0111 01 . 0 ... 0 @1op
 VNEG 1111 1111 1 . 11 .. 01 ... 0 0011 11 . 0 ... 0 @1op
 VNEG_fp 1111 1111 1 . 11 .. 01 ... 0 0111 11 . 0 ... 0 @1op

+VQABS 1111 1111 1 . 11 .. 00 ... 0 0111 01 . 0 ... 0 @1op
+VQNEG 1111 1111 1 . 11 .. 00 ... 0 0111 11 . 0 ... 0 @1op
+
 &vdup qd rt size
 # Qd is in the fields usually named Qn
 @vdup .... .... . . .. ... . rt:4 .... . . . . .... qd=%qn &vdup
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
     }
     mve_advance_vpt(env);
 }
+
+#define DO_1OP_SAT(OP, ESIZE, TYPE, FN)                                 \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm)         \
+    {                                                                   \
+        TYPE *d = vd, *m = vm;                                          \
+        uint16_t mask = mve_element_mask(env);                          \
+        unsigned e;                                                     \
+        bool qc = false;                                                \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {              \
+            bool sat = false;                                           \
+            mergemask(&d[H##ESIZE(e)], FN(m[H##ESIZE(e)], &sat), mask); \
+            qc |= sat & mask & 1;                                       \
+        }                                                               \
+        if (qc) {                                                       \
+            env->vfp.qc[0] = qc;                                        \
+        }                                                               \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_VQABS_B(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT8_MIN, INT8_MAX, SATP)
+#define DO_VQABS_H(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT16_MIN, INT16_MAX, SATP)
+#define DO_VQABS_W(N, SATP) \
+    do_sat_bhs(DO_ABS((int64_t)N), INT32_MIN, INT32_MAX, SATP)
+
+#define DO_VQNEG_B(N, SATP) do_sat_bhs(-(int64_t)N, INT8_MIN, INT8_MAX, SATP)
+#define DO_VQNEG_H(N, SATP) do_sat_bhs(-(int64_t)N, INT16_MIN, INT16_MAX, SATP)
+#define DO_VQNEG_W(N, SATP) do_sat_bhs(-(int64_t)N, INT32_MIN, INT32_MAX, SATP)
+
+DO_1OP_SAT(vqabsb, 1, int8_t, DO_VQABS_B)
+DO_1OP_SAT(vqabsh, 2, int16_t, DO_VQABS_H)
+DO_1OP_SAT(vqabsw, 4, int32_t, DO_VQABS_W)
+
+DO_1OP_SAT(vqnegb, 1, int8_t, DO_VQNEG_B)
+DO_1OP_SAT(vqnegh, 2, int16_t, DO_VQNEG_H)
+DO_1OP_SAT(vqnegw, 4, int32_t, DO_VQNEG_W)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VCLZ, vclz)
 DO_1OP(VCLS, vcls)
 DO_1OP(VABS, vabs)
 DO_1OP(VNEG, vneg)
+DO_1OP(VQABS, vqabs)
+DO_1OP(VQNEG, vqneg)

 /* Narrowing moves: only size 0 and 1 are valid */
 #define DO_VMOVN(INSN, FN) \
--
2.20.1


From: Hao Wu <wuhaotsh@google.com>

Nuvoton's PSPI is a general purpose SPI module which enables
connections to SPI-based peripheral devices.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Chris Rauer <crauer@google.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
Message-id: 20230208235433.3989937-3-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 MAINTAINERS | 6 +-
 include/hw/ssi/npcm_pspi.h | 53 +++++++++
 hw/ssi/npcm_pspi.c | 221 +++++++++++++++++++++++++++++++++++++
 hw/ssi/meson.build | 2 +-
 hw/ssi/trace-events | 5 +
 5 files changed, 283 insertions(+), 4 deletions(-)
 create mode 100644 include/hw/ssi/npcm_pspi.h
 create mode 100644 hw/ssi/npcm_pspi.c

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: Tyrone Ting <kfting@nuvoton.com>
 M: Hao Wu <wuhaotsh@google.com>
 L: qemu-arm@nongnu.org
 S: Supported
-F: hw/*/npcm7xx*
-F: include/hw/*/npcm7xx*
-F: tests/qtest/npcm7xx*
+F: hw/*/npcm*
+F: include/hw/*/npcm*
+F: tests/qtest/npcm*
 F: pc-bios/npcm7xx_bootrom.bin
 F: roms/vbootrom
 F: docs/system/arm/nuvoton.rst
diff --git a/include/hw/ssi/npcm_pspi.h b/include/hw/ssi/npcm_pspi.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/ssi/npcm_pspi.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Nuvoton Peripheral SPI Module
+ *
+ * Copyright 2023 Google LLC
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+#ifndef NPCM_PSPI_H
+#define NPCM_PSPI_H
+
+#include "hw/ssi/ssi.h"
+#include "hw/sysbus.h"
+
+/*
+ * Number of registers in our device state structure. Don't change this without
+ * incrementing the version_id in the vmstate.
+ */
+#define NPCM_PSPI_NR_REGS 3
+
+/**
+ * NPCMPSPIState - Device state for one Flash Interface Unit.
+ * @parent: System bus device.
+ * @mmio: Memory region for register access.
+ * @spi: The SPI bus mastered by this controller.
+ * @regs: Register contents.
+ * @irq: The interrupt request queue for this module.
+ *
+ * Each PSPI has a shared bank of registers, and controls up to four chip
+ * selects. Each chip select has a dedicated memory region which may be used to
+ * read and write the flash connected to that chip select as if it were memory.
+ */
+typedef struct NPCMPSPIState {
+    SysBusDevice parent;
+
+    MemoryRegion mmio;
+
+    SSIBus *spi;
+    uint16_t regs[NPCM_PSPI_NR_REGS];
+    qemu_irq irq;
+} NPCMPSPIState;
+
+#define TYPE_NPCM_PSPI "npcm-pspi"
+OBJECT_DECLARE_SIMPLE_TYPE(NPCMPSPIState, NPCM_PSPI)
+
+#endif /* NPCM_PSPI_H */
diff --git a/hw/ssi/npcm_pspi.c b/hw/ssi/npcm_pspi.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/ssi/npcm_pspi.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Nuvoton NPCM Peripheral SPI Module (PSPI)
+ *
+ * Copyright 2023 Google LLC
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+
+#include "qemu/osdep.h"
+
+#include "hw/irq.h"
+#include "hw/registerfields.h"
+#include "hw/ssi/npcm_pspi.h"
+#include "migration/vmstate.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "qemu/units.h"
+
+#include "trace.h"
+
+REG16(PSPI_DATA, 0x0)
+REG16(PSPI_CTL1, 0x2)
+    FIELD(PSPI_CTL1, SPIEN, 0, 1)
+    FIELD(PSPI_CTL1, MOD, 2, 1)
+    FIELD(PSPI_CTL1, EIR, 5, 1)
+    FIELD(PSPI_CTL1, EIW, 6, 1)
+    FIELD(PSPI_CTL1, SCM, 7, 1)
+    FIELD(PSPI_CTL1, SCIDL, 8, 1)
+    FIELD(PSPI_CTL1, SCDV, 9, 7)
+REG16(PSPI_STAT, 0x4)
+    FIELD(PSPI_STAT, BSY, 0, 1)
+    FIELD(PSPI_STAT, RBF, 1, 1)
+
+static void npcm_pspi_update_irq(NPCMPSPIState *s)
+{
+    int level = 0;
+
+    /* Only fire IRQ when the module is enabled. */
+    if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, SPIEN)) {
+        /* Update interrupt as BSY is cleared. */
+        if ((!FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, BSY)) &&
+            FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIW)) {
+            level = 1;
+        }
+
+        /* Update interrupt as RBF is set. */
+        if (FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, RBF) &&
+            FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIR)) {
+            level = 1;
+        }
+    }
+    qemu_set_irq(s->irq, level);
+}
+
+static uint16_t npcm_pspi_read_data(NPCMPSPIState *s)
+{
+    uint16_t value = s->regs[R_PSPI_DATA];
+
+    /* Clear stat bits as the value are read out. */
+    s->regs[R_PSPI_STAT] = 0;
+
+    return value;
+}
+
+static void npcm_pspi_write_data(NPCMPSPIState *s, uint16_t data)
+{
+    uint16_t value = 0;
+
+    if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, MOD)) {
+        value = ssi_transfer(s->spi, extract16(data, 8, 8)) << 8;
+    }
+    value |= ssi_transfer(s->spi, extract16(data, 0, 8));
+    s->regs[R_PSPI_DATA] = value;
+
+    /* Mark data as available */
+    s->regs[R_PSPI_STAT] = R_PSPI_STAT_BSY_MASK | R_PSPI_STAT_RBF_MASK;
+}
+
+/* Control register read handler. */
+static uint64_t npcm_pspi_ctrl_read(void *opaque, hwaddr addr,
+                                    unsigned int size)
+{
+    NPCMPSPIState *s = opaque;
+    uint16_t value;
+
+    switch (addr) {
+    case A_PSPI_DATA:
+        value = npcm_pspi_read_data(s);
+        break;
+
+    case A_PSPI_CTL1:
+        value = s->regs[R_PSPI_CTL1];
+        break;
+
+    case A_PSPI_STAT:
+        value = s->regs[R_PSPI_STAT];
+        break;
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: write to invalid offset 0x%" PRIx64 "\n",
+                      DEVICE(s)->canonical_path, addr);
+        return 0;
+    }
+    trace_npcm_pspi_ctrl_read(DEVICE(s)->canonical_path, addr, value);
+    npcm_pspi_update_irq(s);
+
+    return value;
+}
+
+/* Control register write handler. */
+static void npcm_pspi_ctrl_write(void *opaque, hwaddr addr, uint64_t v,
+                                 unsigned int size)
+{
+    NPCMPSPIState *s = opaque;
+    uint16_t value = v;
+
+    trace_npcm_pspi_ctrl_write(DEVICE(s)->canonical_path, addr, value);
+
+    switch (addr) {
+    case A_PSPI_DATA:
+        npcm_pspi_write_data(s, value);
+        break;
+
+    case A_PSPI_CTL1:
+        s->regs[R_PSPI_CTL1] = value;
+        break;
+
+    case A_PSPI_STAT:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: write to read-only register PSPI_STAT: 0x%08"
+                      PRIx64 "\n", DEVICE(s)->canonical_path, v);
+        break;
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: write to invalid offset 0x%" PRIx64 "\n",
+                      DEVICE(s)->canonical_path, addr);
+        return;
+    }
+    npcm_pspi_update_irq(s);
+}
+
+static const MemoryRegionOps npcm_pspi_ctrl_ops = {
+    .read = npcm_pspi_ctrl_read,
+    .write = npcm_pspi_ctrl_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 2,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 2,
+        .max_access_size = 2,
+        .unaligned = false,
+    },
+};
+
+static void npcm_pspi_enter_reset(Object *obj, ResetType type)
+{
+    NPCMPSPIState *s = NPCM_PSPI(obj);
+
+    trace_npcm_pspi_enter_reset(DEVICE(obj)->canonical_path, type);
+    memset(s->regs, 0, sizeof(s->regs));
+}
+
+static void npcm_pspi_realize(DeviceState *dev, Error **errp)
+{
+    NPCMPSPIState *s = NPCM_PSPI(dev);
+    SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+    Object *obj = OBJECT(dev);
+
+    s->spi = ssi_create_bus(dev, "pspi");
+    memory_region_init_io(&s->mmio, obj, &npcm_pspi_ctrl_ops, s,
+                          "mmio", 4 * KiB);
+    sysbus_init_mmio(sbd, &s->mmio);
+    sysbus_init_irq(sbd, &s->irq);
+}
+
+static const VMStateDescription vmstate_npcm_pspi = {
+    .name = "npcm-pspi",
+    .version_id = 0,
+    .minimum_version_id = 0,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT16_ARRAY(regs, NPCMPSPIState, NPCM_PSPI_NR_REGS),
+        VMSTATE_END_OF_LIST(),
+    },
+};
+
+
+static void npcm_pspi_class_init(ObjectClass *klass, void *data)
+{
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->desc = "NPCM Peripheral SPI Module";
+    dc->realize = npcm_pspi_realize;
+    dc->vmsd = &vmstate_npcm_pspi;
+    rc->phases.enter = npcm_pspi_enter_reset;
+}
+
+static const TypeInfo npcm_pspi_types[] = {
+    {
+        .name = TYPE_NPCM_PSPI,
+        .parent = TYPE_SYS_BUS_DEVICE,
+        .instance_size = sizeof(NPCMPSPIState),
+        .class_init = npcm_pspi_class_init,
+    },
+};
+DEFINE_TYPES(npcm_pspi_types);
diff --git a/hw/ssi/meson.build b/hw/ssi/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/meson.build
+++ b/hw/ssi/meson.build
@@ -XXX,XX +XXX,XX @@
 softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_smc.c'))
 softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('mss-spi.c'))
-softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c'))
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c', 'npcm_pspi.c'))
 softmmu_ss.add(when: 'CONFIG_PL022', if_true: files('pl022.c'))
 softmmu_ss.add(when: 'CONFIG_SIFIVE_SPI', if_true: files('sifive_spi.c'))
 softmmu_ss.add(when: 'CONFIG_SSI', if_true: files('ssi.c'))
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/trace-events
+++ b/hw/ssi/trace-events
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset:
 npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
 npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64

+# npcm_pspi.c
+npcm_pspi_enter_reset(const char *id, int reset_type) "%s reset type: %d"
+npcm_pspi_ctrl_read(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
+npcm_pspi_ctrl_write(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
+
 # ibex_spi_host.c

 ibex_spi_host_reset(const char *msg) "%s"
--
2.34.1
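The do_sat_bhs() calls in the VQABS/VQNEG patch above all follow the same
pattern: compute in a wider type, clamp to the element range, and record
saturation so the helper can set FPSCR.QC. A self-contained sketch for the
byte-sized negate; for int8_t the only input that can saturate is INT8_MIN,
whose negation exceeds INT8_MAX:

    #include <stdbool.h>
    #include <stdint.h>

    /* Saturating negate of one int8_t lane; mirrors do_sat_bhs() usage. */
    static int8_t qneg8_sketch(int8_t n, bool *satp)
    {
        int64_t r = -(int64_t)n;   /* widen first: -INT8_MIN overflows int8_t */
        if (r > INT8_MAX) {
            *satp = true;          /* the real helper folds this into QC */
            return INT8_MAX;
        }
        return (int8_t)r;
    }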
Implement the MVE saturating doubling multiply accumulate insns
VQDMLAH, VQRDMLAH, VQDMLASH and VQRDMLASH. These perform a multiply,
double, add the accumulator shifted by the element size, possibly
round, saturate to twice the element size, then take the high half of
the result. The *MLAH insns do vector * scalar + vector, and the
*MLASH insns do vector * vector + scalar.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 16 +++++++
 target/arm/mve.decode | 5 ++
 target/arm/mve_helper.c | 95 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 4 ++
 4 files changed, 120 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vqdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlahb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlahh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlahw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vqrdmlashb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlashh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vqrdmlashw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlaldavsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
 DEF_HELPER_FLAGS_4(mve_vmlaldavxsh, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar
 VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar

+VQRDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 100 .... @2scalar
+VQRDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 100 .... @2scalar
+VQDMLAH 1110 1110 0 . .. ... 0 ... 0 1110 . 110 .... @2scalar
+VQDMLASH 1110 1110 0 . .. ... 0 ... 1 1110 . 110 .... @2scalar
+
 # Vector add across vector
 {
   VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VQDMLADH_OP(vqrdmlsdhxw, 4, int32_t, 1, 1, do_vqdmlsdh_w)
         mve_advance_vpt(env);                                   \
     }

+#define DO_2OP_SAT_ACC_SCALAR(OP, ESIZE, TYPE, FN)              \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, void *vn, \
+                                uint32_t rm)                    \
+    {                                                           \
+        TYPE *d = vd, *n = vn;                                  \
+        TYPE m = rm;                                            \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        bool qc = false;                                        \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            bool sat = false;                                   \
+            mergemask(&d[H##ESIZE(e)],                          \
+                      FN(d[H##ESIZE(e)], n[H##ESIZE(e)], m, &sat), \
+                      mask);                                    \
+            qc |= sat & mask & 1;                               \
+        }                                                       \
+        if (qc) {                                               \
+            env->vfp.qc[0] = qc;                                \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+    }
+
 /* provide unsigned 2-op scalar helpers for all sizes */
 #define DO_2OP_SCALAR_U(OP, FN)                 \
     DO_2OP_SCALAR(OP##b, 1, uint8_t, FN)        \
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+static int8_t do_vqdmlah_b(int8_t a, int8_t b, int8_t c, int round, bool *sat)
+{
+    int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 8) + (round << 7);
+    return do_sat_bhw(r, INT16_MIN, INT16_MAX, sat) >> 8;
+}
+
+static int16_t do_vqdmlah_h(int16_t a, int16_t b, int16_t c,
+                            int round, bool *sat)
+{
+    int64_t r = (int64_t)a * b * 2 + ((int64_t)c << 16) + (round << 15);
+    return do_sat_bhw(r, INT32_MIN, INT32_MAX, sat) >> 16;
+}
+
+static int32_t do_vqdmlah_w(int32_t a, int32_t b, int32_t c,
+                            int round, bool *sat)
+{
+    /*
+     * Architecturally we should do the entire add, double, round
+     * and then check for saturation. We do three saturating adds,
+     * but we need to be careful about the order. If the first
+     * m1 + m2 saturates then it's impossible for the *2+rc to
+     * bring it back into the non-saturated range. However, if
+     * m1 + m2 is negative then it's possible that doing the doubling
+     * would take the intermediate result below INT64_MAX and the
+     * addition of the rounding constant then brings it back in range.
+     * So we add half the rounding constant and half the "c << esize"
+     * before doubling rather than adding the rounding constant after
+     * the doubling.
+     */
+    int64_t m1 = (int64_t)a * b;
+    int64_t m2 = (int64_t)c << 31;
+    int64_t r;
+    if (sadd64_overflow(m1, m2, &r) ||
+        sadd64_overflow(r, (round << 30), &r) ||
+        sadd64_overflow(r, r, &r)) {
+        *sat = true;
+        return r < 0 ? INT32_MAX : INT32_MIN;
+    }
+    return r >> 32;
+}
+
+/*
+ * The *MLAH insns are vector * scalar + vector;
+ * the *MLASH insns are vector * vector + scalar
+ */
+#define DO_VQDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 0, S)
+#define DO_VQDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 0, S)
+#define DO_VQDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 0, S)
+#define DO_VQRDMLAH_B(D, N, M, S) do_vqdmlah_b(N, M, D, 1, S)
+#define DO_VQRDMLAH_H(D, N, M, S) do_vqdmlah_h(N, M, D, 1, S)
+#define DO_VQRDMLAH_W(D, N, M, S) do_vqdmlah_w(N, M, D, 1, S)
+
+#define DO_VQDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 0, S)
+#define DO_VQDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 0, S)
+#define DO_VQDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 0, S)
+#define DO_VQRDMLASH_B(D, N, M, S) do_vqdmlah_b(N, D, M, 1, S)
+#define DO_VQRDMLASH_H(D, N, M, S) do_vqdmlah_h(N, D, M, 1, S)
+#define DO_VQRDMLASH_W(D, N, M, S) do_vqdmlah_w(N, D, M, 1, S)
+
+DO_2OP_SAT_ACC_SCALAR(vqdmlahb, 1, int8_t, DO_VQDMLAH_B)
+DO_2OP_SAT_ACC_SCALAR(vqdmlahh, 2, int16_t, DO_VQDMLAH_H)
+DO_2OP_SAT_ACC_SCALAR(vqdmlahw, 4, int32_t, DO_VQDMLAH_W)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahb, 1, int8_t, DO_VQRDMLAH_B)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahh, 2, int16_t, DO_VQRDMLAH_H)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlahw, 4, int32_t, DO_VQRDMLAH_W)
+
+DO_2OP_SAT_ACC_SCALAR(vqdmlashb, 1, int8_t, DO_VQDMLASH_B)
+DO_2OP_SAT_ACC_SCALAR(vqdmlashh, 2, int16_t, DO_VQDMLASH_H)
+DO_2OP_SAT_ACC_SCALAR(vqdmlashw, 4, int32_t, DO_VQDMLASH_W)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashb, 1, int8_t, DO_VQRDMLASH_B)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashh, 2, int16_t, DO_VQRDMLASH_H)
+DO_2OP_SAT_ACC_SCALAR(vqrdmlashw, 4, int32_t, DO_VQRDMLASH_W)
+
 /* Vector by scalar plus vector */
 #define DO_VMLA(D, N, M) ((N) * (M) + (D))

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
 DO_2OP_SCALAR(VMLA, vmla)
 DO_2OP_SCALAR(VMLAS, vmlas)
+DO_2OP_SCALAR(VQDMLAH, vqdmlah)
+DO_2OP_SCALAR(VQRDMLAH, vqrdmlah)
+DO_2OP_SCALAR(VQDMLASH, vqdmlash)
+DO_2OP_SCALAR(VQRDMLASH, vqrdmlash)

 static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
 {
--
2.20.1


From: Hao Wu <wuhaotsh@google.com>

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Titus Rwantare <titusr@google.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
Message-id: 20230208235433.3989937-4-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 2 +-
 include/hw/arm/npcm7xx.h | 2 ++
 hw/arm/npcm7xx.c | 25 +++++++++++++++++++++++--
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Supported devices
  * SMBus controller (SMBF)
  * Ethernet controller (EMC)
  * Tachometer
+ * Peripheral SPI controller (PSPI)

 Missing devices
 ---------------
@@ -XXX,XX +XXX,XX @@ Missing devices

  * Ethernet controller (GMAC)
  * USB device (USBD)
- * Peripheral SPI controller (PSPI)
  * SD/MMC host
  * PECI interface
  * PCI and PCIe root complex and bridges
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/npcm7xx.h
+++ b/include/hw/arm/npcm7xx.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/nvram/npcm7xx_otp.h"
 #include "hw/timer/npcm7xx_timer.h"
 #include "hw/ssi/npcm7xx_fiu.h"
+#include "hw/ssi/npcm_pspi.h"
 #include "hw/usb/hcd-ehci.h"
 #include "hw/usb/hcd-ohci.h"
 #include "target/arm/cpu.h"
@@ -XXX,XX +XXX,XX @@ struct NPCM7xxState {
     NPCM7xxFIUState fiu[2];
     NPCM7xxEMCState emc[2];
     NPCM7xxSDHCIState mmc;
+    NPCMPSPIState pspi[2];
 };

 #define TYPE_NPCM7XX "npcm7xx"
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
     NPCM7XX_EMC1RX_IRQ = 15,
     NPCM7XX_EMC1TX_IRQ,
     NPCM7XX_MMC_IRQ = 26,
+    NPCM7XX_PSPI2_IRQ = 28,
+    NPCM7XX_PSPI1_IRQ = 31,
     NPCM7XX_TIMER0_IRQ = 32, /* Timer Module 0 */
     NPCM7XX_TIMER1_IRQ,
     NPCM7XX_TIMER2_IRQ,
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_emc_addr[] = {
     0xf0826000,
 };

+/* Register base address for each PSPI Module */
+static const hwaddr npcm7xx_pspi_addr[] = {
+    0xf0200000,
+    0xf0201000,
+};
+
 static const struct {
     hwaddr regs_addr;
     uint32_t unconnected_pins;
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
         object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
     }

+    for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
+        object_initialize_child(obj, "pspi[*]", &s->pspi[i], TYPE_NPCM_PSPI);
+    }
+
     object_initialize_child(obj, "mmc", &s->mmc, TYPE_NPCM7XX_SDHCI);
 }

@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
     sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc), 0,
                        npcm7xx_irq(s, NPCM7XX_MMC_IRQ));

+    /* PSPI */
+    QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_pspi_addr) != ARRAY_SIZE(s->pspi));
+    for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
+        SysBusDevice *sbd = SYS_BUS_DEVICE(&s->pspi[i]);
+        int irq = (i == 0) ? NPCM7XX_PSPI1_IRQ : NPCM7XX_PSPI2_IRQ;
+
+        sysbus_realize(sbd, &error_abort);
+        sysbus_mmio_map(sbd, 0, npcm7xx_pspi_addr[i]);
+        sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, irq));
+    }
+
     create_unimplemented_device("npcm7xx.shm", 0xc0001000, 4 * KiB);
     create_unimplemented_device("npcm7xx.vdmx", 0xe0800000, 4 * KiB);
     create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
     create_unimplemented_device("npcm7xx.peci", 0xf0100000, 4 * KiB);
     create_unimplemented_device("npcm7xx.siox[1]", 0xf0101000, 4 * KiB);
     create_unimplemented_device("npcm7xx.siox[2]", 0xf0102000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.pspi1", 0xf0200000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.pspi2", 0xf0201000, 4 * KiB);
     create_unimplemented_device("npcm7xx.ahbpci", 0xf0400000, 1 * MiB);
     create_unimplemented_device("npcm7xx.mcphy", 0xf05f0000, 64 * KiB);
     create_unimplemented_device("npcm7xx.gmac1", 0xf0802000, 8 * KiB);
--
2.34.1
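The long comment in do_vqdmlah_w() above repays a concrete reading. Below is
a self-contained sketch of the same computation for the rounding variant,
with __builtin_add_overflow() standing in for QEMU's sadd64_overflow() (both
return true on signed 64-bit overflow). Given the operand ranges, only the
final doubling can actually overflow, but checking all three adds keeps the
code uniform:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of VQRDMLAH on 32-bit elements: (2*a*b + (c << 32) + 2^31) >> 32,
     * saturated. Half the accumulator and half the rounding constant are
     * added before the doubling, per the ordering argument in the patch. */
    static int32_t vqrdmlah_w_sketch(int32_t a, int32_t b, int32_t c, bool *satp)
    {
        int64_t m1 = (int64_t)a * b;          /* |m1| <= 2^62, always fits */
        int64_t m2 = (int64_t)c << 31;        /* half of "c << 32" */
        int64_t r;

        if (__builtin_add_overflow(m1, m2, &r) ||
            __builtin_add_overflow(r, (int64_t)1 << 30, &r) ||  /* half round */
            __builtin_add_overflow(r, r, &r)) {                 /* the doubling */
            *satp = true;
            return r < 0 ? INT32_MAX : INT32_MIN;
        }
        return (int32_t)(r >> 32);
    }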
Implement the MVE VMLA insn, which multiplies a vector by a scalar
and accumulates into another vector.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 4 ++++
 target/arm/mve.decode | 1 +
 target/arm/mve_helper.c | 5 +++++
 target/arm/translate-mve.c | 1 +
 4 files changed, 11 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqdmullb_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vqdmullt_scalarw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vmlab, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlah, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlaw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(mve_vmlasb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlash, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(mve_vmlasw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

 # The U bit (28) is don't-care because it does not affect the result
+VMLA 111- 1110 0 . .. ... 1 ... 0 1110 . 100 .... @2scalar
 VMLAS 111- 1110 0 . .. ... 1 ... 1 1110 . 100 .... @2scalar

 # Vector add across vector
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SAT_SCALAR(vqrdmulh_scalarb, 1, int8_t, DO_QRDMULH_B)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarh, 2, int16_t, DO_QRDMULH_H)
 DO_2OP_SAT_SCALAR(vqrdmulh_scalarw, 4, int32_t, DO_QRDMULH_W)

+/* Vector by scalar plus vector */
+#define DO_VMLA(D, N, M) ((N) * (M) + (D))
+
+DO_2OP_ACC_SCALAR_U(vmla, DO_VMLA)
+
 /* Vector by vector plus scalar */
 #define DO_VMLAS(D, N, M) ((N) * (D) + (M))

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_SCALAR(VQSUB_U_scalar, vqsubu_scalar)
 DO_2OP_SCALAR(VQDMULH_scalar, vqdmulh_scalar)
 DO_2OP_SCALAR(VQRDMULH_scalar, vqrdmulh_scalar)
 DO_2OP_SCALAR(VBRSR, vbrsr)
+DO_2OP_SCALAR(VMLA, vmla)
 DO_2OP_SCALAR(VMLAS, vmlas)

 static bool trans_VQDMULLB_scalar(DisasContext *s, arg_2scalar *a)
--
2.20.1


From: Jean-Philippe Brucker <jean-philippe@linaro.org>

Addresses targeting the second translation table (TTB1) in the SMMU have
all upper bits set. Ensure the IOMMU region covers all 64 bits.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230214171921.1917916-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h | 2 --
 hw/arm/smmu-common.c | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -XXX,XX +XXX,XX @@
 #define SMMU_PCI_DEVFN_MAX 256
 #define SMMU_PCI_DEVFN(sid) (sid & 0xFF)

-#define SMMU_MAX_VA_BITS 48
-
 /*
  * Page table walk error types
  */
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)

     memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
                              s->mrtypename,
-                             OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
+                             OBJECT(s), name, UINT64_MAX);
     address_space_init(&sdev->as,
                        MEMORY_REGION(&sdev->iommu), name);
     trace_smmu_add_mr(name);
--
2.34.1
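Side by side, the per-element kernels of VMLA (this patch) and VMLAS (the
earlier patch in this series) differ only in which operand the destination
vector plays; a sketch taken directly from the DO_VMLA/DO_VMLAS macros, with
d the destination element, n the source element and m the scalar:

    #include <stdint.h>

    static inline int32_t vmla_elem(int32_t d, int32_t n, int32_t m)
    {
        return n * m + d;    /* VMLA: vector * scalar + vector */
    }

    static inline int32_t vmlas_elem(int32_t d, int32_t n, int32_t m)
    {
        return n * d + m;    /* VMLAS: vector * vector + scalar */
    }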
From: Jan Luebbe <jlu@pengutronix.de>

Break events are currently only handled by chardev/char-serial.c, so we
just ignore errors, which results in no behaviour change for other
chardevs.

Signed-off-by: Jan Luebbe <jlu@pengutronix.de>
Message-id: 20210806144700.3751979-1-jlu@pengutronix.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/qdev-properties-system.h"
 #include "migration/vmstate.h"
 #include "chardev/char-fe.h"
+#include "chardev/char-serial.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -XXX,XX +XXX,XX @@ static void pl011_write(void *opaque, hwaddr offset,
         s->read_count = 0;
         s->read_pos = 0;
     }
+    if ((s->lcr ^ value) & 0x1) {
+        int break_enable = value & 0x1;
+        qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_BREAK,
+                          &break_enable);
+    }
     s->lcr = value;
     pl011_set_read_trigger(s);
     break;
--
2.20.1


From: Jean-Philippe Brucker <jean-philippe@linaro.org>

Addresses targeting the second translation table (TTB1) in the SMMU have
all upper bits set (except for the top byte when TBI is enabled). Fix
the TTB1 check.

Reported-by: Ola Hugosson <ola.hugosson@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230214171921.1917916-3-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
         /* there is a ttbr0 region and we are in it (high bits all zero) */
         return &cfg->tt[0];
     } else if (cfg->tt[1].tsz &&
-               !extract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte)) {
+               sextract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte) == -1) {
         /* there is a ttbr1 region and we are in it (high bits all one) */
         return &cfg->tt[1];
     } else if (!cfg->tt[0].tsz) {
--
2.34.1
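To see what the select_tt() change buys: with tsz = 16 (48-bit VAs) and TBI
enabled (tbi_byte = 8), the check examines bits [55:48]. A sketch using
minimal stand-ins for QEMU's bitfield helpers; the iova value and the
stand-in implementations are illustrative (the real helpers live in
include/qemu/bitops.h):

    #include <stdint.h>

    static uint64_t extract64_sketch(uint64_t v, int start, int len)
    {
        return (v >> start) & (~0ULL >> (64 - len));
    }

    static int64_t sextract64_sketch(uint64_t v, int start, int len)
    {
        return ((int64_t)(v << (64 - len - start))) >> (64 - len);
    }

    /*
     * iova = 0x00ffffc000000000: a TTB1 address whose top byte was
     * zeroed by TBI. Bits [55:48] are 0xff, so:
     *   extract64_sketch(iova, 48, 8)  == 0xff  -> nonzero, old check fails
     *   sextract64_sketch(iova, 48, 8) == -1    -> new check matches TTB1
     */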
We were not paying attention to the ECI state when advancing the VPT
state.  Architecturally, VPT state advance happens for every beat
(see the pseudocode VPTAdvance()), so on every beat the 4 bits of
VPR.P0 corresponding to the current beat are inverted if required,
and at the end of beats 1 and 3 the VPR MASK fields are updated.
This means that if the ECI state says we should not be executing all
4 beats then we need to skip some of the updating of the VPR that we
currently do in mve_advance_vpt().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
     /* Advance the VPT and ECI state if necessary */
     uint32_t vpr = env->v7m.vpr;
     unsigned mask01, mask23;
+    uint16_t inv_mask;
+    uint16_t eci_mask = mve_eci_mask(env);

     if ((env->condexec_bits & 0xf) == 0) {
         env->condexec_bits = (env->condexec_bits == (ECI_A0A1A2B0 << 4)) ?
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
         return;
     }

+    /* Invert P0 bits if needed, but only for beats we actually executed */
     mask01 = FIELD_EX32(vpr, V7M_VPR, MASK01);
     mask23 = FIELD_EX32(vpr, V7M_VPR, MASK23);
-    if (mask01 > 8) {
-        /* high bit set, but not 0b1000: invert the relevant half of P0 */
-        vpr ^= 0xff;
+    /* Start by assuming we invert all bits corresponding to executed beats */
+    inv_mask = eci_mask;
+    if (mask01 <= 8) {
+        /* MASK01 says don't invert low half of P0 */
+        inv_mask &= ~0xff;
     }
-    if (mask23 > 8) {
-        /* high bit set, but not 0b1000: invert the relevant half of P0 */
-        vpr ^= 0xff00;
+    if (mask23 <= 8) {
+        /* MASK23 says don't invert high half of P0 */
+        inv_mask &= ~0xff00;
     }
-    vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+    vpr ^= inv_mask;
+    /* Only update MASK01 if beat 1 executed */
+    if (eci_mask & 0xf0) {
+        vpr = FIELD_DP32(vpr, V7M_VPR, MASK01, mask01 << 1);
+    }
+    /* Beat 3 always executes, so update MASK23 */
     vpr = FIELD_DP32(vpr, V7M_VPR, MASK23, mask23 << 1);
     env->v7m.vpr = vpr;
 }
--
2.20.1


From: Claudio Fontana <cfontana@suse.de>

make it clearer from the name that this is a tcg-only function.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
  * trapped to the hypervisor in KVM.
  */
 #ifdef CONFIG_TCG
-static void handle_semihosting(CPUState *cs)
+static void tcg_handle_semihosting(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
      */
 #ifdef CONFIG_TCG
     if (cs->exception_index == EXCP_SEMIHOST) {
-        handle_semihosting(cs);
+        tcg_handle_semihosting(cs);
         return;
     }
 #endif
--
2.34.1
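The inversion-mask logic added to mve_advance_vpt() can be read in
isolation. A self-contained sketch of just that computation, assuming (as
the patch does) that eci_mask carries four bits per executed beat, the way
mve_eci_mask() returns it:

    #include <stdint.h>

    /*
     * mask01/mask23 are the VPR.MASK01/MASK23 fields; a value > 8 means
     * "invert that half of P0 this beat". The caller does vpr ^= result.
     */
    static uint16_t vpt_inv_mask_sketch(uint16_t eci_mask,
                                        unsigned mask01, unsigned mask23)
    {
        uint16_t inv_mask = eci_mask;   /* assume all executed beats invert */

        if (mask01 <= 8) {
            inv_mask &= ~0xff;          /* MASK01: leave low half of P0 alone */
        }
        if (mask23 <= 8) {
            inv_mask &= ~0xff00;        /* MASK23: leave high half of P0 alone */
        }
        return inv_mask;
    }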
In mve_element_mask(), we calculate a mask for tail predication which
should have a number of 1 bits based on the value of LR.  However,
our MAKE_64BIT_MASK() macro has undefined behaviour when passed a
zero length.  Special case this to give the all-zeroes mask we
require.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint16_t mve_element_mask(CPUARMState *env)
          */
         int masklen = env->regs[14] << env->v7m.ltpsize;
         assert(masklen <= 16);
-        mask &= MAKE_64BIT_MASK(0, masklen);
+        uint16_t ltpmask = masklen ? MAKE_64BIT_MASK(0, masklen) : 0;
+        mask &= ltpmask;
     }

     if ((env->condexec_bits & 0xf) == 0) {
--
2.20.1


From: Claudio Fontana <cfontana@suse.de>

for "all" builds (tcg + kvm), we want to avoid doing
the psci check if tcg is built-in, but not enabled.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "sysemu/cpu-timers.h"
 #include "sysemu/kvm.h"
+#include "sysemu/tcg.h"
 #include "qapi/qapi-commands-machine-target.h"
 #include "qapi/error.h"
 #include "qemu/guest-random.h"
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
                       env->exception.syndrome);
     }

-    if (arm_is_psci_call(cpu, cs->exception_index)) {
+    if (tcg_enabled() && arm_is_psci_call(cpu, cs->exception_index)) {
         arm_handle_psci_call(cpu);
         qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
         return;
--
2.34.1
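The undefined behaviour being fixed above is the classic shift-by-width
case: MAKE_64BIT_MASK(0, len) boils down to a right shift by (64 - len),
which is a shift by 64 when len is 0. A sketch of the guarded form (not the
QEMU macro itself, which also takes a shift argument):

    #include <stdint.h>

    /* A mask of 'len' low one-bits, with the len == 0 case made explicit;
     * ~0ULL >> 64 would be undefined behaviour in C. */
    static uint64_t make_mask_sketch(unsigned len)
    {
        return len ? (~0ULL >> (64 - len)) : 0;
    }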
We're about to make a code change to the sdiv and udiv helper
functions, so first fix their indentation and coding style.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(uxtb16)(uint32_t x)

 int32_t HELPER(sdiv)(int32_t num, int32_t den)
 {
-    if (den == 0)
-        return 0;
-    if (num == INT_MIN && den == -1)
-        return INT_MIN;
+    if (den == 0) {
+        return 0;
+    }
+    if (num == INT_MIN && den == -1) {
+        return INT_MIN;
+    }
     return num / den;
 }

 uint32_t HELPER(udiv)(uint32_t num, uint32_t den)
 {
-    if (den == 0)
-        return 0;
+    if (den == 0) {
+        return 0;
+    }
     return num / den;
 }

--
2.20.1


From: Claudio Fontana <cfontana@suse.de>

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     unsigned int cur_el = arm_current_el(env);
     int rt;

-    /*
-     * Note that new_el can never be 0.  If cur_el is 0, then
-     * el0_a64 is is_a64(), else el0_a64 is ignored.
-     */
-    aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    if (tcg_enabled()) {
+        /*
+         * Note that new_el can never be 0.  If cur_el is 0, then
+         * el0_a64 is is_a64(), else el0_a64 is ignored.
+         */
+        aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    }

     if (cur_el < new_el) {
         /*
--
2.34.1
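The brace style is the only change in the sdiv/udiv patch, but the helper
bodies are worth a gloss: C leaves division by zero and INT_MIN / -1
undefined, while Arm's SDIV defines them to return 0 and INT_MIN. A
self-contained sketch of the semantics the helper implements:

    #include <limits.h>
    #include <stdint.h>

    static int32_t arm_sdiv_sketch(int32_t num, int32_t den)
    {
        if (den == 0) {
            return 0;           /* Arm: divide by zero yields 0 */
        }
        if (num == INT32_MIN && den == -1) {
            return INT32_MIN;   /* Arm: the overflow case wraps to INT_MIN */
        }
        return num / den;       /* safe: no UB left after the two checks */
    }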
Implement the MVE integer vector comparison instructions that compare
each element against a scalar from a general purpose register. These
are "VCMP (vector)" encodings T4, T5 and T6 and "VPT (vector)"
encodings T4, T5 and T6.

We have to move the decodetree pattern for VPST, because it
overlaps with VCMP T4 with size = 0b11.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h | 32 +++++++++++++++++++++++++++
 target/arm/mve.decode | 18 +++++++++++++---
 target/arm/mve_helper.c | 44 +++++++++++++++++++++++++++++++-------
 target/arm/translate-mve.c | 43 +++++++++++++++++++++++++++++++++++++
 4 files changed, 126 insertions(+), 11 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpeq_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpne_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpcs_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmphi_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpge_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmplt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmpgt_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarb, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarh, TCG_CALL_NO_WG, void, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vcmple_scalarw, TCG_CALL_NO_WG, void, env, ptr, i32)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &vidup qd rn size imm
 &viwdup qd rn rm size imm
 &vcmp qm qn size mask
+&vcmp_scalar qn rm size mask

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
 # Vector comparison; 4-bit Qm but 3-bit Qn
 %mask_22_13 22:1 13:3
 @vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
+@vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \
+             mask=%mask_22_13

 # Vector loads and stores

@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
     rdahi=%rdahi rdalo=%rdalo
 }

-# Predicate operations
-VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
-
 # Logical immediate operations (1 reg and modified-immediate)

 # The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
@@ -XXX,XX +XXX,XX @@ VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
 VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
 VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
 VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
+
+{
+  VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
+  VCMPEQ_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 0 0 .... @vcmp_scalar
+}
+VCMPNE_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 0 0 .... @vcmp_scalar
+VCMPCS_scalar 1111 1110 0 . .. ... 1 ... 0 1111 0 1 1 0 .... @vcmp_scalar
+VCMPHI_scalar 1111 1110 0 . .. ... 1 ... 0 1111 1 1 1 0 .... @vcmp_scalar
+VCMPGE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 0 0 .... @vcmp_scalar
+VCMPLT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 0 0 .... @vcmp_scalar
+VCMPGT_scalar 1111 1110 0 . .. ... 1 ... 1 1111 0 1 1 0 .... @vcmp_scalar
+VCMPLE_scalar 1111 1110 0 . .. ... 1 ... 1 1111 1 1 1 0 .... @vcmp_scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
         mve_advance_vpt(env);                                   \
     }

-#define DO_VCMP_S(OP, FN) \
-    DO_VCMP(OP##b, 1, int8_t, FN) \
-    DO_VCMP(OP##h, 2, int16_t, FN) \
-    DO_VCMP(OP##w, 4, int32_t, FN)
+#define DO_VCMP_SCALAR(OP, ESIZE, TYPE, FN)                     \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn,     \
+                                uint32_t rm)                    \
+    {                                                           \
+        TYPE *n = vn;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        uint16_t eci_mask = mve_eci_mask(env);                  \
+        uint16_t beatpred = 0;                                  \
+        uint16_t emask = MAKE_64BIT_MASK(0, ESIZE);             \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++) {                      \
+            bool r = FN(n[H##ESIZE(e)], (TYPE)rm);              \
+            /* Comparison sets 0/1 bits for each byte in the element */ \
+            beatpred |= r * emask;                              \
+            emask <<= ESIZE;                                    \
+        }                                                       \
+        beatpred &= mask;                                       \
+        env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) |   \
+                       (beatpred & eci_mask);                   \
+        mve_advance_vpt(env);                                   \
+    }

-#define DO_VCMP_U(OP, FN) \
-    DO_VCMP(OP##b, 1, uint8_t, FN) \
-    DO_VCMP(OP##h, 2, uint16_t, FN) \
-    DO_VCMP(OP##w, 4, uint32_t, FN)
+#define DO_VCMP_S(OP, FN)                           \
+    DO_VCMP(OP##b, 1, int8_t, FN)                   \
+    DO_VCMP(OP##h, 2, int16_t, FN)                  \
+    DO_VCMP(OP##w, 4, int32_t, FN)                  \
+    DO_VCMP_SCALAR(OP##_scalarb, 1, int8_t, FN)     \
+    DO_VCMP_SCALAR(OP##_scalarh, 2, int16_t, FN)    \
+    DO_VCMP_SCALAR(OP##_scalarw, 4, int32_t, FN)
+
+#define DO_VCMP_U(OP, FN)                           \
+    DO_VCMP(OP##b, 1, uint8_t, FN)                  \
+    DO_VCMP(OP##h, 2, uint16_t, FN)                 \
+    DO_VCMP(OP##w, 4, uint32_t, FN)                 \
+    DO_VCMP_SCALAR(OP##_scalarb, 1, uint8_t, FN)    \
+    DO_VCMP_SCALAR(OP##_scalarh, 2, uint16_t, FN)   \
+    DO_VCMP_SCALAR(OP##_scalarw, 4, uint32_t, FN)

 #define DO_EQ(N, M) ((N) == (M))
 #define DO_NE(N, M) ((N) != (M))
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
 typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
 typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
 typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
+typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
     return true;
 }

+static bool do_vcmp_scalar(DisasContext *s, arg_vcmp_scalar *a,
+                           MVEGenScalarCmpFn *fn)
+{
+    TCGv_ptr qn;
+    TCGv_i32 rm;
+
+    if (!dc_isar_feature(aa32_mve, s) || !fn || a->rm == 13) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qn = mve_qreg_ptr(a->qn);
+    if (a->rm == 15) {
+        /* Encoding Rm=0b1111 means "constant zero" */
+        rm = tcg_constant_i32(0);
+    } else {
+        rm = load_reg(s, a->rm);
+    }
+    fn(cpu_env, qn, rm);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_i32(rm);
+    if (a->mask) {
+        /* VPT */
+        gen_vpst(s, a->mask);
+    }
+    mve_update_eci(s);
+    return true;
+}

From: Fabiano Rosas <farosas@suse.de>

Move this earlier to make the next patch diff cleaner. While here
update the comment slightly to not give the impression that the
misalignment affects only TCG.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/machine.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
         }
     }

+    /*
+     * Misaligned thumb pc is architecturally impossible. Fail the
+     * incoming migration. For TCG it would trigger the assert in
+     * thumb_tr_translate_insn().
+     */
+    if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
+        return -1;
+    }
+
     hw_breakpoint_update_all(cpu);
     hw_watchpoint_update_all(cpu);

@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
     }
+
208
#define DO_VCMP(INSN, FN) \
209
static bool trans_##INSN(DisasContext *s, arg_vcmp *a) \
210
{ \
211
@@ -XXX,XX +XXX,XX @@ static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
212
NULL, \
213
}; \
214
return do_vcmp(s, a, fns[a->size]); \
215
+ } \
216
+ static bool trans_##INSN##_scalar(DisasContext *s, \
217
+ arg_vcmp_scalar *a) \
218
+ { \
219
+ static MVEGenScalarCmpFn * const fns[] = { \
220
+ gen_helper_mve_##FN##_scalarb, \
221
+ gen_helper_mve_##FN##_scalarh, \
222
+ gen_helper_mve_##FN##_scalarw, \
223
+ NULL, \
224
+ }; \
225
+ return do_vcmp_scalar(s, a, fns[a->size]); \
226
}
38
}
227
39
228
DO_VCMP(VCMPEQ, vcmpeq)
40
- /*
41
- * Misaligned thumb pc is architecturally impossible.
42
- * We have an assert in thumb_tr_translate_insn to verify this.
43
- * Fail an incoming migrate to avoid this assert.
44
- */
45
- if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
46
- return -1;
47
- }
48
-
49
if (!kvm_enabled()) {
50
pmu_op_finish(&cpu->env);
51
}
229
--
52
--
230
2.20.1
53
2.34.1
231
54
232
55
diff view generated by jsdifflib
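
An aside on the DO_VCMP_SCALAR macro above, for readers not used to the
MVE predicate layout: the sketch below is standalone C, not part of the
patch, and the test values are invented. It shows how one boolean result
per element is expanded into per-byte predicate bits, which is why emask
starts as MAKE_64BIT_MASK(0, ESIZE) and shifts left by ESIZE each lane.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t n[4] = { 5, -1, 7, 0 };   /* vector lanes (32-bit elements) */
    int32_t rm = 3;                   /* scalar operand */
    uint16_t beatpred = 0;
    uint16_t emask = 0xf;             /* MAKE_64BIT_MASK(0, 4) */

    for (unsigned e = 0; e < 4; e++) {
        bool r = n[e] > rm;           /* the DO_GT comparison */
        beatpred |= r * emask;        /* four adjacent 1 bits per true lane */
        emask <<= 4;
    }
    printf("beatpred = 0x%04x\n", beatpred); /* 0x0f0f: lanes 0 and 2 true */
    return 0;
}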
In some situations we need a mask telling us which parts of the
vector correspond to beats that are not being executed because of
ECI, separately from the combined "which bytes are predicated away"
mask. Factor this mask calculation out of mve_element_mask() into
its own function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 58 ++++++++++++++++++++++++-----------------
 1 file changed, 34 insertions(+), 24 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/exec-all.h"
 #include "tcg/tcg.h"

+static uint16_t mve_eci_mask(CPUARMState *env)
+{
+    /*
+     * Return the mask of which elements in the MVE vector correspond
+     * to beats being executed. The mask has 1 bits for executed lanes
+     * and 0 bits where ECI says this beat was already executed.
+     */
+    int eci;
+
+    if ((env->condexec_bits & 0xf) != 0) {
+        return 0xffff;
+    }
+
+    eci = env->condexec_bits >> 4;
+    switch (eci) {
+    case ECI_NONE:
+        return 0xffff;
+    case ECI_A0:
+        return 0xfff0;
+    case ECI_A0A1:
+        return 0xff00;
+    case ECI_A0A1A2:
+    case ECI_A0A1A2B0:
+        return 0xf000;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static uint16_t mve_element_mask(CPUARMState *env)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static uint16_t mve_element_mask(CPUARMState *env)
         mask &= ltpmask;
     }

-    if ((env->condexec_bits & 0xf) == 0) {
-        /*
-         * ECI bits indicate which beats are already executed;
-         * we handle this by effectively predicating them out.
-         */
-        int eci = env->condexec_bits >> 4;
-        switch (eci) {
-        case ECI_NONE:
-            break;
-        case ECI_A0:
-            mask &= 0xfff0;
-            break;
-        case ECI_A0A1:
-            mask &= 0xff00;
-            break;
-        case ECI_A0A1A2:
-        case ECI_A0A1A2B0:
-            mask &= 0xf000;
-            break;
-        default:
-            g_assert_not_reached();
-        }
-    }
-
+    /*
+     * ECI bits indicate which beats are already executed;
+     * we handle this by effectively predicating them out.
+     */
+    mask &= mve_eci_mask(env);
     return mask;
 }

--
2.20.1

From: Fabiano Rosas <farosas@suse.de>

Since commit cf7c6d1004 ("target/arm: Split out cpregs.h") we now have
a cpregs.h header which is more suitable for this code.

Code moved verbatim.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 98 +++++++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h    | 91 -----------------------------------------
 2 files changed, 98 insertions(+), 91 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_CP_SME = 1 << 19,
 };

+/*
+ * Interface for defining coprocessor registers.
+ * Registers are defined in tables of arm_cp_reginfo structs
+ * which are passed to define_arm_cp_regs().
+ */
+
+/*
+ * When looking up a coprocessor register we look for it
+ * via an integer which encodes all of:
+ *  coprocessor number
+ *  Crn, Crm, opc1, opc2 fields
+ *  32 or 64 bit register (ie is it accessed via MRC/MCR
+ *    or via MRRC/MCRR?)
+ *  non-secure/secure bank (AArch32 only)
+ * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
+ * (In this case crn and opc2 should be zero.)
+ * For AArch64, there is no 32/64 bit size distinction;
+ * instead all registers have a 2 bit op0, 3 bit op1 and op2,
+ * and 4 bit CRn and CRm. The encoding patterns are chosen
+ * to be easy to convert to and from the KVM encodings, and also
+ * so that the hashtable can contain both AArch32 and AArch64
+ * registers (to allow for interprocessing where we might run
+ * 32 bit code on a 64 bit core).
+ */
+/*
+ * This bit is private to our hashtable cpreg; in KVM register
+ * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
+ * in the upper bits of the 64 bit ID.
+ */
+#define CP_REG_AA64_SHIFT 28
+#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
+
+/*
+ * To enable banking of coprocessor registers depending on ns-bit we
+ * add a bit to distinguish between secure and non-secure cpregs in the
+ * hashtable.
+ */
+#define CP_REG_NS_SHIFT 29
+#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
+
+#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2)   \
+    ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) |   \
+     ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
+
+#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
+    (CP_REG_AA64_MASK |                                 \
+     ((cp) << CP_REG_ARM_COPROC_SHIFT) |                \
+     ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) |         \
+     ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) |         \
+     ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) |         \
+     ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) |         \
+     ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
+
+/*
+ * Convert a full 64 bit KVM register ID to the truncated 32 bit
+ * version used as a key for the coprocessor register hashtable
+ */
+static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
+{
+    uint32_t cpregid = kvmid;
+    if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
+        cpregid |= CP_REG_AA64_MASK;
+    } else {
+        if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
+            cpregid |= (1 << 15);
+        }
+
+        /*
+         * KVM is always non-secure so add the NS flag on AArch32 register
+         * entries.
+         */
+        cpregid |= 1 << CP_REG_NS_SHIFT;
+    }
+    return cpregid;
+}
+
+/*
+ * Convert a truncated 32 bit hashtable key into the full
+ * 64 bit KVM register ID.
+ */
+static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
+{
+    uint64_t kvmid;
+
+    if (cpregid & CP_REG_AA64_MASK) {
+        kvmid = cpregid & ~CP_REG_AA64_MASK;
+        kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
+    } else {
+        kvmid = cpregid & ~(1 << 15);
+        if (cpregid & (1 << 15)) {
+            kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
+        } else {
+            kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
+        }
+    }
+    return kvmid;
+}
+
 /*
  * Valid values for ARMCPRegInfo state field, indicating which of
  * the AArch32 and AArch64 execution states this register is visible in.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);

-/* Interface for defining coprocessor registers.
- * Registers are defined in tables of arm_cp_reginfo structs
- * which are passed to define_arm_cp_regs().
- */
-
-/* When looking up a coprocessor register we look for it
- * via an integer which encodes all of:
- *  coprocessor number
- *  Crn, Crm, opc1, opc2 fields
- *  32 or 64 bit register (ie is it accessed via MRC/MCR
- *    or via MRRC/MCRR?)
- *  non-secure/secure bank (AArch32 only)
- * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
- * (In this case crn and opc2 should be zero.)
- * For AArch64, there is no 32/64 bit size distinction;
- * instead all registers have a 2 bit op0, 3 bit op1 and op2,
- * and 4 bit CRn and CRm. The encoding patterns are chosen
- * to be easy to convert to and from the KVM encodings, and also
- * so that the hashtable can contain both AArch32 and AArch64
- * registers (to allow for interprocessing where we might run
- * 32 bit code on a 64 bit core).
- */
-/* This bit is private to our hashtable cpreg; in KVM register
- * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
- * in the upper bits of the 64 bit ID.
- */
-#define CP_REG_AA64_SHIFT 28
-#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
-
-/* To enable banking of coprocessor registers depending on ns-bit we
- * add a bit to distinguish between secure and non-secure cpregs in the
- * hashtable.
- */
-#define CP_REG_NS_SHIFT 29
-#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
-
-#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2)   \
-    ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) |   \
-     ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
-
-#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
-    (CP_REG_AA64_MASK |                                 \
-     ((cp) << CP_REG_ARM_COPROC_SHIFT) |                \
-     ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) |         \
-     ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) |         \
-     ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) |         \
-     ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) |         \
-     ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
-
-/* Convert a full 64 bit KVM register ID to the truncated 32 bit
- * version used as a key for the coprocessor register hashtable
- */
-static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
-{
-    uint32_t cpregid = kvmid;
-    if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
-        cpregid |= CP_REG_AA64_MASK;
-    } else {
-        if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
-            cpregid |= (1 << 15);
-        }
-
-        /* KVM is always non-secure so add the NS flag on AArch32 register
-         * entries.
-         */
-        cpregid |= 1 << CP_REG_NS_SHIFT;
-    }
-    return cpregid;
-}
-
-/* Convert a truncated 32 bit hashtable key into the full
- * 64 bit KVM register ID.
- */
-static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
-{
-    uint64_t kvmid;
-
-    if (cpregid & CP_REG_AA64_MASK) {
-        kvmid = cpregid & ~CP_REG_AA64_MASK;
-        kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
-    } else {
-        kvmid = cpregid & ~(1 << 15);
-        if (cpregid & (1 << 15)) {
-            kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
-        } else {
-            kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
-        }
-    }
-    return kvmid;
-}
-
 /* Return the highest implemented Exception Level */
 static inline int arm_highest_el(CPUARMState *env)
 {
--
2.34.1
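
For anyone puzzling over the packed hashtable key in the code moved
above, here is a standalone sketch built only from the ENCODE_CP_REG
macro and CP_REG_NS_SHIFT as defined in the patch itself. The example
register is MIDR, whose AArch32 encoding is cp15 with
CRn=CRm=opc1=opc2=0; everything else below is illustrative.

#include <stdint.h>
#include <stdio.h>

#define CP_REG_NS_SHIFT 29

#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2)   \
    ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) |   \
     ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))

int main(void)
{
    /* 32-bit (is64=0), non-secure bank (ns=1), cp15, c0, c0, 0, 0 */
    uint32_t key = ENCODE_CP_REG(15, 0, 1, 0, 0, 0, 0);
    printf("MIDR key = 0x%08x\n", key);   /* 0x200f0000 */
    return 0;
}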
Deleted patch

For vector loads, predicated elements are zeroed, instead of
retaining their previous values (as happens for most data
processing operations). This means we need to distinguish
"beat not executed due to ECI" (don't touch destination
element) from "beat executed but predicated out" (zero
destination element).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mve_helper.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
     env->v7m.vpr = vpr;
 }

-
+/* For loads, predicated lanes are zeroed instead of keeping their old values */
 #define DO_VLDR(OP, MSIZE, LDTYPE, ESIZE, TYPE)                 \
     void HELPER(mve_##OP)(CPUARMState *env, void *vd, uint32_t addr) \
     {                                                           \
         TYPE *d = vd;                                           \
         uint16_t mask = mve_element_mask(env);                  \
+        uint16_t eci_mask = mve_eci_mask(env);                  \
         unsigned b, e;                                          \
         /*                                                      \
          * R_SXTM allows the dest reg to become UNKNOWN for abandoned \
@@ -XXX,XX +XXX,XX @@ static void mve_advance_vpt(CPUARMState *env)
          * then take an exception.                              \
          */                                                     \
         for (b = 0, e = 0; b < 16; b += ESIZE, e++) {           \
-            if (mask & (1 << b)) {                              \
-                d[H##ESIZE(e)] = cpu_##LDTYPE##_data_ra(env, addr, GETPC()); \
+            if (eci_mask & (1 << b)) {                          \
+                d[H##ESIZE(e)] = (mask & (1 << b)) ?            \
+                    cpu_##LDTYPE##_data_ra(env, addr, GETPC()) : 0, \
             }                                                   \
             addr += MSIZE;                                      \
         }                                                       \
--
2.20.1
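
A small illustration of the two-mask rule this patch introduces; this
is standalone C, not the patch itself, and the mask values are invented.
A lane is written only if its beat is executing (eci_mask bit set); it
is loaded if also unpredicated (mask bit set), otherwise zeroed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint16_t mask = 0x00f0;      /* predication enables only bytes 4-7 */
    uint16_t eci_mask = 0xff00;  /* beats covering bytes 0-7 already ran */
    uint8_t mem[16], d[16];

    memset(d, 0xaa, sizeof(d));  /* previous destination contents */
    for (int i = 0; i < 16; i++) {
        mem[i] = (uint8_t)i;
    }
    for (int b = 0; b < 16; b++) {          /* ESIZE == 1 for simplicity */
        if (eci_mask & (1 << b)) {
            d[b] = (mask & (1 << b)) ? mem[b] : 0;
        }                                    /* dead beat: keep old value */
    }
    for (int b = 0; b < 16; b++) {
        printf("%02x ", d[b]);
    }
    printf("\n");  /* aa x8 then 00 x8: bytes 4-7 are enabled but their
                    * beat already ran, so the old values survive; bytes
                    * 8-15 execute but are predicated away, so they zero */
    return 0;
}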
Deleted patch

Implement the MVE VMULL (polynomial) insn. Unlike Neon, this comes
in two flavours: an 8x8->16 and a 16x16->32. Also unlike Neon, the
inputs are in either the low or the high half of each double-width
element.

The assembler for this insn indicates the size with "P8" or "P16",
encoded into bit 28 as size = 0 or 1. We choose to follow the
same encoding as VQDMULL and decode this into a->size as MO_16
or MO_32 indicating the size of the result elements. This then
carries through to the helper function names, where it matches up
with the existing pmull_h() which does an 8x8->16 operation and a
new pmull_w() which does the 16x16->32.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  5 +++++
 target/arm/vec_internal.h  | 11 +++++++++++
 target/arm/mve.decode      | 14 ++++++++++----
 target/arm/mve_helper.c    | 16 ++++++++++++++++
 target/arm/translate-mve.c | 28 ++++++++++++++++++++++++++++
 target/arm/vec_helper.c    | 14 +++++++++++++-
 6 files changed, 83 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vmulltub, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vmulltuw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vmullpbh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullpth, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullpbw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+DEF_HELPER_FLAGS_4(mve_vmullptw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vqdmulhb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqdmulhh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vqdmulhw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -XXX,XX +XXX,XX @@ int16_t do_sqrdmlah_h(int16_t, int16_t, int16_t, bool, bool, uint32_t *);
 int32_t do_sqrdmlah_s(int32_t, int32_t, int32_t, bool, bool, uint32_t *);
 int64_t do_sqrdmlah_d(int64_t, int64_t, int64_t, bool, bool);

+/*
+ * 8 x 8 -> 16 vector polynomial multiply where the inputs are
+ * in the low 8 bits of each 16-bit element
+ */
+uint64_t pmull_h(uint64_t op1, uint64_t op2);
+/*
+ * 16 x 16 -> 32 vector polynomial multiply where the inputs are
+ * in the low 16 bits of each 32-bit element
+ */
+uint64_t pmull_w(uint64_t op1, uint64_t op2);
+
 #endif /* TARGET_ARM_VEC_INTERNALS_H */
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VHADD_U 111 1 1111 0 . .. ... 0 ... 0 0000 . 1 . 0 ... 0 @2op
 VHSUB_S 111 0 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op
 VHSUB_U 111 1 1111 0 . .. ... 0 ... 0 0010 . 1 . 0 ... 0 @2op

-VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
-VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
-VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
-VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+{
+  VMULLP_B 111 . 1110 0 . 11 ... 1 ... 0 1110 . 0 . 0 ... 0 @2op_sz28
+  VMULL_BS 111 0 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+  VMULL_BU 111 1 1110 0 . .. ... 1 ... 0 1110 . 0 . 0 ... 0 @2op
+}
+{
+  VMULLP_T 111 . 1110 0 . 11 ... 1 ... 1 1110 . 0 . 0 ... 0 @2op_sz28
+  VMULL_TS 111 0 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+  VMULL_TU 111 1 1110 0 . .. ... 1 ... 1 1110 . 0 . 0 ... 0 @2op
+}

 VQDMULH 1110 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
 VQRDMULH 1111 1111 0 . .. ... 0 ... 0 1011 . 1 . 0 ... 0 @2op
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2OP_L(vmulltub, 1, 1, uint8_t, 2, uint16_t, DO_MUL)
 DO_2OP_L(vmulltuh, 1, 2, uint16_t, 4, uint32_t, DO_MUL)
 DO_2OP_L(vmulltuw, 1, 4, uint32_t, 8, uint64_t, DO_MUL)

+/*
+ * Polynomial multiply. We can always do this generating 64 bits
+ * of the result at a time, so we don't need to use DO_2OP_L.
+ */
+#define VMULLPH_MASK 0x00ff00ff00ff00ffULL
+#define VMULLPW_MASK 0x0000ffff0000ffffULL
+#define DO_VMULLPBH(N, M) pmull_h((N) & VMULLPH_MASK, (M) & VMULLPH_MASK)
+#define DO_VMULLPTH(N, M) DO_VMULLPBH((N) >> 8, (M) >> 8)
+#define DO_VMULLPBW(N, M) pmull_w((N) & VMULLPW_MASK, (M) & VMULLPW_MASK)
+#define DO_VMULLPTW(N, M) DO_VMULLPBW((N) >> 16, (M) >> 16)
+
+DO_2OP(vmullpbh, 8, uint64_t, DO_VMULLPBH)
+DO_2OP(vmullpth, 8, uint64_t, DO_VMULLPTH)
+DO_2OP(vmullpbw, 8, uint64_t, DO_VMULLPBW)
+DO_2OP(vmullptw, 8, uint64_t, DO_VMULLPTW)
+
 /*
  * Because the computation type is at least twice as large as required,
  * these work for both signed and unsigned source types.
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT(DisasContext *s, arg_2op *a)
     return do_2op(s, a, fns[a->size]);
 }

+static bool trans_VMULLP_B(DisasContext *s, arg_2op *a)
+{
+    /*
+     * Note that a->size indicates the output size, ie VMULL.P8
+     * is the 8x8->16 operation and a->size is MO_16; VMULL.P16
+     * is the 16x16->32 operation and a->size is MO_32.
+     */
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vmullpbh,
+        gen_helper_mve_vmullpbw,
+        NULL,
+    };
+    return do_2op(s, a, fns[a->size]);
+}
+
+static bool trans_VMULLP_T(DisasContext *s, arg_2op *a)
+{
+    /* a->size is as for trans_VMULLP_B */
+    static MVEGenTwoOpFn * const fns[] = {
+        NULL,
+        gen_helper_mve_vmullpth,
+        gen_helper_mve_vmullptw,
+        NULL,
+    };
+    return do_2op(s, a, fns[a->size]);
+}
+
 /*
  * VADC and VSBC: these perform an add-with-carry or subtract-with-carry
  * of the 32-bit elements in each lane of the input vectors, where the
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t expand_byte_to_half(uint64_t x)
          | ((x & 0xff000000) << 24);
 }

-static uint64_t pmull_h(uint64_t op1, uint64_t op2)
+uint64_t pmull_w(uint64_t op1, uint64_t op2)
 {
     uint64_t result = 0;
     int i;
+    for (i = 0; i < 16; ++i) {
+        uint64_t mask = (op1 & 0x0000000100000001ull) * 0xffffffff;
+        result ^= op2 & mask;
+        op1 >>= 1;
+        op2 <<= 1;
+    }
+    return result;
+}

+uint64_t pmull_h(uint64_t op1, uint64_t op2)
+{
+    uint64_t result = 0;
+    int i;
     for (i = 0; i < 8; ++i) {
         uint64_t mask = (op1 & 0x0001000100010001ull) * 0xffff;
         result ^= op2 & mask;
--
2.20.1
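
For readers unfamiliar with polynomial (carry-less) multiplication, the
operation pmull_h()/pmull_w() perform lane-wise is the one sketched
below: partial products are combined with XOR instead of ADD. This is a
standalone single-lane illustration, not the patch's SIMD code.

#include <stdint.h>
#include <stdio.h>

/* Carry-less 16x16->32 multiply over GF(2). */
static uint32_t clmul16(uint16_t a, uint16_t b)
{
    uint32_t r = 0;
    for (int i = 0; i < 16; i++) {
        if (a & (1u << i)) {
            r ^= (uint32_t)b << i;   /* XOR, not +: no carries propagate */
        }
    }
    return r;
}

int main(void)
{
    /* (x+1) * (x+1) = x^2 + 1 over GF(2), i.e. 0b101 */
    printf("0x%x\n", clmul16(0x3, 0x3));   /* prints 0x5, not 9 */
    return 0;
}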
Deleted patch

Implement the MVE incrementing/decrementing dup insns VIDUP, VDDUP,
VIWDUP and VDWDUP. These fill the elements of a vector with
successively incrementing values, starting at the offset specified in
a general purpose register. The final value of the offset is written
back to this register. The wrapping variants take a second general
purpose register which specifies the point where the count should
wrap back to 0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  12 ++++
 target/arm/mve.decode      |  25 ++++++++
 target/arm/mve_helper.c    |  63 +++++++++++++++++++
 target/arm/translate-mve.c | 120 +++++++++++++++++++++++++++++++++++++
 4 files changed, 220 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vstrh_w, TCG_CALL_NO_WG, void, env, ptr, i32)

 DEF_HELPER_FLAGS_3(mve_vdup, TCG_CALL_NO_WG, void, env, ptr, i32)

+DEF_HELPER_FLAGS_4(mve_vidupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+DEF_HELPER_FLAGS_4(mve_viduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+DEF_HELPER_FLAGS_4(mve_vidupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
+
+DEF_HELPER_FLAGS_5(mve_viwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_viwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_viwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+
+DEF_HELPER_FLAGS_5(mve_vdwdupb, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_vdwduph, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+DEF_HELPER_FLAGS_5(mve_vdwdupw, TCG_CALL_NO_WG, i32, env, ptr, i32, i32, i32)
+
 DEF_HELPER_FLAGS_3(mve_vclsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
 DEF_HELPER_FLAGS_3(mve_vclsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &2scalar qd qn rm size
 &1imm qd imm cmode op
 &2shift qd qm shift size
+&vidup qd rn size imm
+&viwdup qd rn rm size imm

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 1 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=0
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 1 1 0000 @vdup size=1
 VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2

+# Incrementing and decrementing dup
+
+# VIDUP, VDDUP format immediate: 1 << (immh:imml)
+%imm_vidup 7:1 0:1 !function=vidup_imm
+
+# VIDUP, VDDUP registers: Rm bits [3:1] from insn, bit 0 is 1;
+# Rn bits [3:1] from insn, bit 0 is 0
+%vidup_rm 1:3 !function=times_2_plus_1
+%vidup_rn 17:3 !function=times_2
+
+@vidup .... .... . . size:2 .... .... .... .... .... \
+       qd=%qd imm=%imm_vidup rn=%vidup_rn &vidup
+@viwdup .... .... . . size:2 .... .... .... .... .... \
+        qd=%qd imm=%imm_vidup rm=%vidup_rm rn=%vidup_rn &viwdup
+{
+  VIDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 110 111 . @vidup
+  VIWDUP 1110 1110 0 . .. ... 1 ... 0 1111 . 110 ... . @viwdup
+}
+{
+  VDDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 111 . @vidup
+  VDWDUP 1110 1110 0 . .. ... 1 ... 1 1111 . 110 ... . @viwdup
+}
+
 # multiply-add long dual accumulate
 # rdahi: bits [3:1] from insn, bit 0 is 1
 # rdalo: bits [3:1] from insn, bit 0 is 0
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_sqrshr)(CPUARMState *env, uint32_t n, uint32_t shift)
 {
     return do_sqrshl_bhs(n, -(int8_t)shift, 32, true, &env->QF);
 }
+
+#define DO_VIDUP(OP, ESIZE, TYPE, FN)                           \
+    uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd,       \
+                              uint32_t offset, uint32_t imm)    \
+    {                                                           \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)], offset, mask);           \
+            offset = FN(offset, imm);                           \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return offset;                                          \
+    }
+
+#define DO_VIWDUP(OP, ESIZE, TYPE, FN)                          \
+    uint32_t HELPER(mve_##OP)(CPUARMState *env, void *vd,       \
+                              uint32_t offset, uint32_t wrap,   \
+                              uint32_t imm)                     \
+    {                                                           \
+        TYPE *d = vd;                                           \
+        uint16_t mask = mve_element_mask(env);                  \
+        unsigned e;                                             \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) {      \
+            mergemask(&d[H##ESIZE(e)], offset, mask);           \
+            offset = FN(offset, wrap, imm);                     \
+        }                                                       \
+        mve_advance_vpt(env);                                   \
+        return offset;                                          \
+    }
+
+#define DO_VIDUP_ALL(OP, FN)                    \
+    DO_VIDUP(OP##b, 1, int8_t, FN)              \
+    DO_VIDUP(OP##h, 2, int16_t, FN)             \
+    DO_VIDUP(OP##w, 4, int32_t, FN)
+
+#define DO_VIWDUP_ALL(OP, FN)                   \
+    DO_VIWDUP(OP##b, 1, int8_t, FN)             \
+    DO_VIWDUP(OP##h, 2, int16_t, FN)            \
+    DO_VIWDUP(OP##w, 4, int32_t, FN)
+
+static uint32_t do_add_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
+{
+    offset += imm;
+    if (offset == wrap) {
+        offset = 0;
+    }
+    return offset;
+}
+
+static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
+{
+    if (offset == 0) {
+        offset = wrap;
+    }
+    offset -= imm;
+    return offset;
+}
+
+DO_VIDUP_ALL(vidup, DO_ADD)
+DO_VIWDUP_ALL(viwdup, do_add_wrap)
+DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@
 #include "translate.h"
 #include "translate-a32.h"

+static inline int vidup_imm(DisasContext *s, int x)
+{
+    return 1 << x;
+}
+
 /* Include the generated decoder */
 #include "decode-mve.c.inc"

@@ -XXX,XX +XXX,XX @@ typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
 typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
+typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLC(DisasContext *s, arg_VSHLC *a)
     mve_update_eci(s);
     return true;
 }
+
+static bool do_vidup(DisasContext *s, arg_vidup *a, MVEGenVIDUPFn *fn)
+{
+    TCGv_ptr qd;
+    TCGv_i32 rn;
+
+    /*
+     * Vector increment/decrement dup (VIDUP, VDDUP).
+     * This fills the vector with elements of successively increasing
+     * or decreasing values, starting from Rn.
+     */
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
+        return false;
+    }
+    if (a->size == MO_64) {
+        /* size 0b11 is another encoding */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    rn = load_reg(s, a->rn);
+    fn(rn, cpu_env, qd, rn, tcg_constant_i32(a->imm));
+    store_reg(s, a->rn, rn);
+    tcg_temp_free_ptr(qd);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool do_viwdup(DisasContext *s, arg_viwdup *a, MVEGenVIWDUPFn *fn)
+{
+    TCGv_ptr qd;
+    TCGv_i32 rn, rm;
+
+    /*
+     * Vector increment/decrement with wrap and duplicate (VIWDUP, VDWDUP)
+     * This fills the vector with elements of successively increasing
+     * or decreasing values, starting from Rn. Rm specifies a point where
+     * the count wraps back around to 0. The updated offset is written back
+     * to Rn.
+     */
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
+        return false;
+    }
+    if (!fn || a->rm == 13 || a->rm == 15) {
+        /*
+         * size 0b11 is another encoding; Rm == 13 is UNPREDICTABLE;
+         * Rm == 15 is the VIDUP, VDDUP encoding.
+         */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qd = mve_qreg_ptr(a->qd);
+    rn = load_reg(s, a->rn);
+    rm = load_reg(s, a->rm);
+    fn(rn, cpu_env, qd, rn, rm, tcg_constant_i32(a->imm));
+    store_reg(s, a->rn, rn);
+    tcg_temp_free_ptr(qd);
+    tcg_temp_free_i32(rm);
+    mve_update_eci(s);
+    return true;
+}
+
+static bool trans_VIDUP(DisasContext *s, arg_vidup *a)
+{
+    static MVEGenVIDUPFn * const fns[] = {
+        gen_helper_mve_vidupb,
+        gen_helper_mve_viduph,
+        gen_helper_mve_vidupw,
+        NULL,
+    };
+    return do_vidup(s, a, fns[a->size]);
+}
+
+static bool trans_VDDUP(DisasContext *s, arg_vidup *a)
+{
+    static MVEGenVIDUPFn * const fns[] = {
+        gen_helper_mve_vidupb,
+        gen_helper_mve_viduph,
+        gen_helper_mve_vidupw,
+        NULL,
+    };
+    /* VDDUP is just like VIDUP but with a negative immediate */
+    a->imm = -a->imm;
+    return do_vidup(s, a, fns[a->size]);
+}
+
+static bool trans_VIWDUP(DisasContext *s, arg_viwdup *a)
+{
+    static MVEGenVIWDUPFn * const fns[] = {
+        gen_helper_mve_viwdupb,
+        gen_helper_mve_viwduph,
+        gen_helper_mve_viwdupw,
+        NULL,
+    };
+    return do_viwdup(s, a, fns[a->size]);
+}
+
+static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a)
+{
+    static MVEGenVIWDUPFn * const fns[] = {
+        gen_helper_mve_vdwdupb,
+        gen_helper_mve_vdwduph,
+        gen_helper_mve_vdwdupw,
+        NULL,
+    };
+    return do_viwdup(s, a, fns[a->size]);
+}
--
2.20.1
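
To make the wrap semantics concrete, here is a standalone sketch (not
part of the patch; the register values are invented) of the sequence
do_add_wrap() produces for a VIWDUP with Rn=0, wrap value 6 and
immediate 2:

#include <stdint.h>
#include <stdio.h>

static uint32_t do_add_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
{
    offset += imm;
    if (offset == wrap) {       /* count resets to 0 on hitting Rm */
        offset = 0;
    }
    return offset;
}

int main(void)
{
    uint32_t offset = 0;
    for (int e = 0; e < 4; e++) {           /* four 32-bit lanes */
        printf("%u ", offset);              /* value stored in lane e */
        offset = do_add_wrap(offset, 6, 2);
    }
    printf("| final Rn = %u\n", offset);    /* prints: 0 2 4 0 | final Rn = 2 */
    return 0;
}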
Deleted patch

Factor out the "generate code to update VPR.MASK01/MASK23" part of
trans_VPST(); we are going to want to reuse it for the VPT insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-mve.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
     return do_long_dual_acc(s, a, fns[a->x]);
 }

-static bool trans_VPST(DisasContext *s, arg_VPST *a)
+static void gen_vpst(DisasContext *s, uint32_t mask)
 {
-    TCGv_i32 vpr;
-
-    /* mask == 0 is a "related encoding" */
-    if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
-        return false;
-    }
-    if (!mve_eci_check(s) || !vfp_access_check(s)) {
-        return true;
-    }
     /*
      * Set the VPR mask fields. We take advantage of MASK01 and MASK23
      * being adjacent fields in the register.
      *
-     * This insn is not predicated, but it is subject to beat-wise
+     * Updating the masks is not predicated, but it is subject to beat-wise
      * execution, and the mask is updated on the odd-numbered beats.
      * So if PSR.ECI says we should skip beat 1, we mustn't update the
      * 01 mask field.
      */
-    vpr = load_cpu_field(v7m.vpr);
+    TCGv_i32 vpr = load_cpu_field(v7m.vpr);
     switch (s->eci) {
     case ECI_NONE:
     case ECI_A0:
         /* Update both 01 and 23 fields */
         tcg_gen_deposit_i32(vpr, vpr,
-                            tcg_constant_i32(a->mask | (a->mask << 4)),
+                            tcg_constant_i32(mask | (mask << 4)),
                             R_V7M_VPR_MASK01_SHIFT,
                             R_V7M_VPR_MASK01_LENGTH + R_V7M_VPR_MASK23_LENGTH);
         break;
@@ -XXX,XX +XXX,XX @@ static bool trans_VPST(DisasContext *s, arg_VPST *a)
     case ECI_A0A1A2B0:
         /* Update only the 23 mask field */
         tcg_gen_deposit_i32(vpr, vpr,
-                            tcg_constant_i32(a->mask),
+                            tcg_constant_i32(mask),
                             R_V7M_VPR_MASK23_SHIFT, R_V7M_VPR_MASK23_LENGTH);
         break;
     default:
         g_assert_not_reached();
     }
     store_cpu_field(vpr, v7m.vpr);
+}
+
+static bool trans_VPST(DisasContext *s, arg_VPST *a)
+{
+    /* mask == 0 is a "related encoding" */
+    if (!dc_isar_feature(aa32_mve, s) || !a->mask) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+    gen_vpst(s, a->mask);
     mve_update_and_store_eci(s);
     return true;
 }
--
2.20.1
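
A sketch of the deposit gen_vpst() emits: because MASK01 and MASK23 are
adjacent 4-bit fields, writing both is a single 8-bit deposit of
(mask | mask << 4). This is standalone C, not TCG; the field position
used here is an assumption for illustration, not QEMU's definition.

#include <stdint.h>
#include <stdio.h>

#define MASK01_SHIFT 16   /* illustrative field position, see lead-in */
#define MASK01_LEN   4
#define MASK23_LEN   4

/* Same semantics as a deposit: replace len bits of dst at pos with val. */
static uint32_t deposit32(uint32_t dst, int pos, int len, uint32_t val)
{
    uint32_t fieldmask = ((1u << len) - 1) << pos;
    return (dst & ~fieldmask) | ((val << pos) & fieldmask);
}

int main(void)
{
    uint32_t vpr = 0;
    uint32_t mask = 0x4;                 /* the VPST mask operand */
    vpr = deposit32(vpr, MASK01_SHIFT, MASK01_LEN + MASK23_LEN,
                    mask | (mask << 4));
    printf("vpr = 0x%08x\n", vpr);       /* 0x00440000 */
    return 0;
}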
Deleted patch

Implement the MVE integer vector comparison instructions. These are
"VCMP (vector)" encodings T1, T2 and T3, and "VPT (vector)" encodings
T1, T2 and T3.

These insns compare corresponding elements in each vector, and update
the VPR.P0 predicate bits with the results of the comparison. VPT
also sets the VPR.MASK01 and VPR.MASK23 fields -- it is effectively
"VCMP then VPST".

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 32 ++++++++++++++++++++
 target/arm/mve.decode      | 18 +++++++++++-
 target/arm/mve_helper.c    | 56 ++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 47 ++++++++++++++++++++++++++++++++
 4 files changed, 152 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_3(mve_uqrshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_3(mve_sqrshr, TCG_CALL_NO_RWG, i32, env, i32, i32)
+
+DEF_HELPER_FLAGS_3(mve_vcmpeqb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpeqh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpeqw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpneb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpneh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpnew, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpcsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpcsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpcsw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmphib, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmphih, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmphiw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpgeb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpgeh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpgew, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpltb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmplth, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpltw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpgtb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpgth, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpgtw, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vcmpleb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmpleh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vcmplew, TCG_CALL_NO_WG, void, env, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
 &2shift qd qm shift size
 &vidup qd rn size imm
 &viwdup qd rn rm size imm
+&vcmp qm qn size mask

 @vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
 # Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
 @2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
          size=2 shift=%rshift_i5

+# Vector comparison; 4-bit Qm but 3-bit Qn
+%mask_22_13 22:1 13:3
+@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13
+
 # Vector loads and stores

 # Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
 }

 # Predicate operations
-%mask_22_13 22:1 13:3
 VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13

 # Logical immediate operations (1 reg and modified-immediate)
@@ -XXX,XX +XXX,XX @@ VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
 VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h

 VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
+
+# Comparisons. We expand out the conditions which are split across
+# encodings T1, T2, T3 and the fc bits. These include VPT, which is
+# effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero.
+VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp
+VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp
+VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
+VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
+VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
+VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
+VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
+VCMPLE 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 1 @vcmp
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t do_sub_wrap(uint32_t offset, uint32_t wrap, uint32_t imm)
 DO_VIDUP_ALL(vidup, DO_ADD)
 DO_VIWDUP_ALL(viwdup, do_add_wrap)
 DO_VIWDUP_ALL(vdwdup, do_sub_wrap)
+
+/*
+ * Vector comparison.
+ * P0 bits for non-executed beats (where eci_mask is 0) are unchanged.
+ * P0 bits for predicated lanes in executed beats (where mask is 0) are 0.
+ * P0 bits otherwise are updated with the results of the comparisons.
+ * We must also keep unchanged the MASK fields at the top of v7m.vpr.
+ */
+#define DO_VCMP(OP, ESIZE, TYPE, FN)                                    \
+    void HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, void *vm)   \
+    {                                                                   \
+        TYPE *n = vn, *m = vm;                                          \
+        uint16_t mask = mve_element_mask(env);                          \
+        uint16_t eci_mask = mve_eci_mask(env);                          \
+        uint16_t beatpred = 0;                                          \
+        uint16_t emask = MAKE_64BIT_MASK(0, ESIZE);                     \
+        unsigned e;                                                     \
+        for (e = 0; e < 16 / ESIZE; e++) {                              \
+            bool r = FN(n[H##ESIZE(e)], m[H##ESIZE(e)]);                \
+            /* Comparison sets 0/1 bits for each byte in the element */ \
+            beatpred |= r * emask;                                      \
+            emask <<= ESIZE;                                            \
+        }                                                               \
+        beatpred &= mask;                                               \
+        env->v7m.vpr = (env->v7m.vpr & ~(uint32_t)eci_mask) |           \
+                       (beatpred & eci_mask);                           \
+        mve_advance_vpt(env);                                           \
+    }
+
+#define DO_VCMP_S(OP, FN)                       \
+    DO_VCMP(OP##b, 1, int8_t, FN)               \
+    DO_VCMP(OP##h, 2, int16_t, FN)              \
+    DO_VCMP(OP##w, 4, int32_t, FN)
+
+#define DO_VCMP_U(OP, FN)                       \
+    DO_VCMP(OP##b, 1, uint8_t, FN)              \
+    DO_VCMP(OP##h, 2, uint16_t, FN)             \
+    DO_VCMP(OP##w, 4, uint32_t, FN)
+
+#define DO_EQ(N, M) ((N) == (M))
+#define DO_NE(N, M) ((N) != (M))
+#define DO_GE(N, M) ((N) >= (M))
+#define DO_LT(N, M) ((N) < (M))
+#define DO_GT(N, M) ((N) > (M))
+#define DO_LE(N, M) ((N) <= (M))
+
+DO_VCMP_U(vcmpeq, DO_EQ)
+DO_VCMP_U(vcmpne, DO_NE)
+DO_VCMP_U(vcmpcs, DO_GE)
+DO_VCMP_U(vcmphi, DO_GT)
+DO_VCMP_S(vcmpge, DO_GE)
+DO_VCMP_S(vcmplt, DO_LT)
+DO_VCMP_S(vcmpgt, DO_GT)
+DO_VCMP_S(vcmple, DO_LE)
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
 typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
 typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TCGv_i32);
+typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);

 /* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
 static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VDWDUP(DisasContext *s, arg_viwdup *a)
     };
     return do_viwdup(s, a, fns[a->size]);
 }
+
+static bool do_vcmp(DisasContext *s, arg_vcmp *a, MVEGenCmpFn *fn)
+{
+    TCGv_ptr qn, qm;
+
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) ||
+        !fn) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qn = mve_qreg_ptr(a->qn);
+    qm = mve_qreg_ptr(a->qm);
+    fn(cpu_env, qn, qm);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_ptr(qm);
+    if (a->mask) {
+        /* VPT */
+        gen_vpst(s, a->mask);
+    }
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_VCMP(INSN, FN)                                       \
+    static bool trans_##INSN(DisasContext *s, arg_vcmp *a)      \
+    {                                                           \
+        static MVEGenCmpFn * const fns[] = {                    \
+            gen_helper_mve_##FN##b,                             \
+            gen_helper_mve_##FN##h,                             \
+            gen_helper_mve_##FN##w,                             \
+            NULL,                                               \
+        };                                                      \
+        return do_vcmp(s, a, fns[a->size]);                     \
+    }
+
+DO_VCMP(VCMPEQ, vcmpeq)
+DO_VCMP(VCMPNE, vcmpne)
+DO_VCMP(VCMPCS, vcmpcs)
+DO_VCMP(VCMPHI, vcmphi)
+DO_VCMP(VCMPGE, vcmpge)
+DO_VCMP(VCMPLT, vcmplt)
+DO_VCMP(VCMPGT, vcmpgt)
+DO_VCMP(VCMPLE, vcmple)
--
2.20.1
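
The P0 update rule in DO_VCMP is worth seeing with concrete numbers.
Below is a standalone sketch (the values are invented, not from the
patch): bits of completed beats (eci_mask 0) keep their old value, and
everything else takes the new beat predicate, already ANDed with the
predication mask.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t old_p0   = 0x0000;  /* previous predicate: all false */
    uint16_t beatpred = 0x0f0f;  /* comparison true in lanes 0 and 2 */
    uint16_t mask     = 0xffff;  /* no predication in force */
    uint16_t eci_mask = 0xfff0;  /* beat A0 had already executed */

    uint16_t p0 = (old_p0 & ~eci_mask) | ((beatpred & mask) & eci_mask);
    printf("P0 = 0x%04x\n", p0); /* 0x0f00: lane 0 compared true, but its
                                  * beat was already complete, so bits
                                  * [3:0] keep their old value */
    return 0;
}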
Deleted patch

Implement the MVE VPSEL insn, which sets each byte of the destination
vector Qd to the byte from either Qn or Qm depending on the value of
the corresponding bit in VPR.P0.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    |  2 ++
 target/arm/mve.decode      |  7 +++++--
 target/arm/mve_helper.c    | 19 +++++++++++++++++++
 target/arm/translate-mve.c |  2 ++
 4 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vorn, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_veor, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)

+DEF_HELPER_FLAGS_4(mve_vpsel, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
+
 DEF_HELPER_FLAGS_4(mve_vaddb, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddh, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
 DEF_HELPER_FLAGS_4(mve_vaddw, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
 # effectively "VCMP then VPST". A plain "VCMP" has a mask field of zero.
 VCMPEQ 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 0 @vcmp
 VCMPNE 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 0 @vcmp
-VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
-VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
+{
+  VPSEL 1111 1110 0 . 11 ... 1 ... 0 1111 . 0 . 0 ... 1 @2op_nosz
+  VCMPCS 1111 1110 0 . .. ... 1 ... 0 1111 0 0 . 0 ... 1 @vcmp
+  VCMPHI 1111 1110 0 . .. ... 1 ... 0 1111 1 0 . 0 ... 1 @vcmp
+}
 VCMPGE 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 0 @vcmp
 VCMPLT 1111 1110 0 . .. ... 1 ... 1 1111 1 0 . 0 ... 0 @vcmp
 VCMPGT 1111 1110 0 . .. ... 1 ... 1 1111 0 0 . 0 ... 1 @vcmp
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VCMP_S(vcmpge, DO_GE)
 DO_VCMP_S(vcmplt, DO_LT)
 DO_VCMP_S(vcmpgt, DO_GT)
 DO_VCMP_S(vcmple, DO_LE)
+
+void HELPER(mve_vpsel)(CPUARMState *env, void *vd, void *vn, void *vm)
+{
+    /*
+     * Qd[n] = VPR.P0[n] ? Qn[n] : Qm[n]
+     * but note that whether bytes are written to Qd is still subject
+     * to (all forms of) predication in the usual way.
+     */
+    uint64_t *d = vd, *n = vn, *m = vm;
+    uint16_t mask = mve_element_mask(env);
+    uint16_t p0 = FIELD_EX32(env->v7m.vpr, V7M_VPR, P0);
+    unsigned e;
+    for (e = 0; e < 16 / 8; e++, mask >>= 8, p0 >>= 8) {
+        uint64_t r = m[H8(e)];
+        mergemask(&r, n[H8(e)], p0);
+        mergemask(&d[H8(e)], r, mask);
+    }
+    mve_advance_vpt(env);
+}
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_LOGIC(VORR, gen_helper_mve_vorr)
 DO_LOGIC(VORN, gen_helper_mve_vorn)
 DO_LOGIC(VEOR, gen_helper_mve_veor)

+DO_LOGIC(VPSEL, gen_helper_mve_vpsel)
+
 #define DO_2OP(INSN, FN)                                        \
     static bool trans_##INSN(DisasContext *s, arg_2op *a)       \
     {                                                           \
--
2.20.1
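
For clarity, the byte-select rule VPSEL implements boils down to the
following standalone sketch (invented data, only four bytes shown, and
without the write-back predication the helper additionally applies):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t qn[4] = { 0x11, 0x22, 0x33, 0x44 };
    uint8_t qm[4] = { 0xaa, 0xbb, 0xcc, 0xdd };
    uint8_t qd[4];
    uint16_t p0 = 0x5;                   /* P0 bits 0 and 2 set */

    for (int b = 0; b < 4; b++) {
        /* each byte comes from Qn where the P0 bit is set, else Qm */
        qd[b] = (p0 & (1 << b)) ? qn[b] : qm[b];
    }
    printf("%02x %02x %02x %02x\n", qd[0], qd[1], qd[2], qd[3]);
    /* prints: 11 bb 33 dd */
    return 0;
}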
Deleted patch
1
Implement the MVE instructions which perform shifts by a scalar.
2
These are VSHL T2, VRSHL T2, VQSHL T1 and VQRSHL T2. They take the
3
shift amount in a general purpose register and shift every element in
4
the vector by that amount.
5
1
6
Mostly we can reuse the helper functions for shift-by-immediate; we
7
do need two new helpers for VQRSHL.
8
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
---
12
target/arm/helper-mve.h | 8 +++++++
13
target/arm/mve.decode | 23 ++++++++++++++++---
14
target/arm/mve_helper.c | 2 ++
15
target/arm/translate-mve.c | 46 ++++++++++++++++++++++++++++++++++++++
16
4 files changed, 76 insertions(+), 3 deletions(-)
17
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-mve.h
21
+++ b/target/arm/helper-mve.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
26
+DEF_HELPER_FLAGS_4(mve_vqrshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vqrshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vqrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+
30
+DEF_HELPER_FLAGS_4(mve_vqrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vqrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(mve_vqrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
+
34
DEF_HELPER_FLAGS_4(mve_vshllbsb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_4(mve_vshllbsh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(mve_vshllbub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/mve.decode
40
+++ b/target/arm/mve.decode
41
@@ -XXX,XX +XXX,XX @@
42
&viwdup qd rn rm size imm
43
&vcmp qm qn size mask
44
&vcmp_scalar qn rm size mask
45
+&shl_scalar qda rm size
46
47
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
48
# Note that both Rn and Qd are 3 bits only (no D bit)
49
@@ -XXX,XX +XXX,XX @@
50
@2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
size=2 shift=%rshift_i5

+@shl_scalar .... .... .... size:2 .. .... .... .... rm:4 &shl_scalar qda=%qd
+
# Vector comparison; 4-bit Qm but 3-bit Qn
%mask_22_13 22:1 13:3
@vcmp .... .... .. size:2 qn:3 . .... .... .... .... &vcmp qm=%qm mask=%mask_22_13

@@ -XXX,XX +XXX,XX @@ VRMLSLDAVH 1111 1110 1 ... ... 0 ... x:1 1110 . 0 a:1 0 ... 1 @vmlaldav_no

VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
VSUB_scalar 1110 1110 0 . .. ... 1 ... 1 1111 . 100 .... @2scalar
-VMUL_scalar 1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+
+{
+  VSHL_S_scalar   1110 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar
+  VRSHL_S_scalar  1110 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar
+  VQSHL_S_scalar  1110 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar
+  VQRSHL_S_scalar 1110 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar
+  VMUL_scalar     1110 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+}
+
+{
+  VSHL_U_scalar   1111 1110 0 . 11 .. 01 ... 1 1110 0110 .... @shl_scalar
+  VRSHL_U_scalar  1111 1110 0 . 11 .. 11 ... 1 1110 0110 .... @shl_scalar
+  VQSHL_U_scalar  1111 1110 0 . 11 .. 01 ... 1 1110 1110 .... @shl_scalar
+  VQRSHL_U_scalar 1111 1110 0 . 11 .. 11 ... 1 1110 1110 .... @shl_scalar
+  VBRSR           1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
+}
+
VHADD_S_scalar 1110 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
VHADD_U_scalar 1111 1110 0 . .. ... 0 ... 0 1111 . 100 .... @2scalar
VHSUB_S_scalar 1110 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
@@ -XXX,XX +XXX,XX @@ VHSUB_U_scalar 1111 1110 0 . .. ... 0 ... 1 1111 . 100 .... @2scalar
    size=%size_28
}

-VBRSR 1111 1110 0 . .. ... 1 ... 1 1110 . 110 .... @2scalar
-
VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
+DO_2SHIFT_SAT_U(vqrshli_u, DO_UQRSHL_OP)
+DO_2SHIFT_SAT_S(vqrshli_s, DO_SQRSHL_OP)

/* Shift-and-insert; we always work with 64 bits at a time */
#define DO_2SHIFT_INSERT(OP, ESIZE, SHIFTFN, MASKFN) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VRSHRI_U, vrshli_u, true)
DO_2SHIFT(VSRI, vsri, false)
DO_2SHIFT(VSLI, vsli, false)

+static bool do_2shift_scalar(DisasContext *s, arg_shl_scalar *a,
+                             MVEGenTwoOpShiftFn *fn)
+{
+    TCGv_ptr qda;
+    TCGv_i32 rm;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qda) ||
+        a->rm == 13 || a->rm == 15 || !fn) {
+        /* Rm cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qda = mve_qreg_ptr(a->qda);
+    rm = load_reg(s, a->rm);
+    fn(cpu_env, qda, qda, rm);
+    tcg_temp_free_ptr(qda);
+    tcg_temp_free_i32(rm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_2SHIFT_SCALAR(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_shl_scalar *a) \
+    { \
+        static MVEGenTwoOpShiftFn * const fns[] = { \
+            gen_helper_mve_##FN##b, \
+            gen_helper_mve_##FN##h, \
+            gen_helper_mve_##FN##w, \
+            NULL, \
+        }; \
+        return do_2shift_scalar(s, a, fns[a->size]); \
+    }
+
+DO_2SHIFT_SCALAR(VSHL_S_scalar, vshli_s)
+DO_2SHIFT_SCALAR(VSHL_U_scalar, vshli_u)
+DO_2SHIFT_SCALAR(VRSHL_S_scalar, vrshli_s)
+DO_2SHIFT_SCALAR(VRSHL_U_scalar, vrshli_u)
+DO_2SHIFT_SCALAR(VQSHL_S_scalar, vqshli_s)
+DO_2SHIFT_SCALAR(VQSHL_U_scalar, vqshli_u)
+DO_2SHIFT_SCALAR(VQRSHL_S_scalar, vqrshli_s)
+DO_2SHIFT_SCALAR(VQRSHL_U_scalar, vqrshli_u)
+
#define DO_VSHLL(INSN, FN) \
    static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
    { \
--
2.20.1
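To make concrete what the shift-by-scalar helpers wired up above compute, here is a minimal standalone model of one 32-bit lane of VSHL (by scalar). The function name is hypothetical and this is only a sketch of the architectural semantics, not QEMU code; the real helpers in mve_helper.c additionally handle predication, rounding and saturation flags. The key point is that the shift count from Rm is treated as signed, so a negative count shifts right:

    #include <stdint.h>

    /* Illustrative model of one signed 32-bit lane of VSHL_S_scalar:
     * the low byte of Rm is a signed shift count, negative = right shift.
     */
    static int32_t model_vshl_s_scalar_lane(int32_t lane, uint32_t rm)
    {
        int8_t sh = (int8_t)rm;           /* only the low byte matters */

        if (sh >= 32) {
            return 0;                     /* everything shifted out */
        } else if (sh >= 0) {
            return (int32_t)((uint32_t)lane << sh);
        } else if (sh > -32) {
            return lane >> -sh;           /* arithmetic right shift */
        } else {
            return lane < 0 ? -1 : 0;     /* only sign bits remain */
        }
    }

The VRSHL variants add a rounding constant before the right shift, and the VQSHL/VQRSHL variants saturate instead of discarding overflow, setting QC when they do.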
Implement the MVE integer min/max across vector insns
VMAXV, VMINV, VMAXAV and VMINAV, which find the maximum
or minimum of the vector elements combined with an initial
value in a general purpose register, and store the result
back into that general purpose register.

These insns overlap with VRMLALDAVH (they use what would
be RdaHi=0b110).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
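As a rough reference model of the across-vector semantics described above (a hypothetical standalone function, not part of the patch, shown with all lanes enabled; the real helpers below also honour the VPT predicate mask):

    #include <stdint.h>

    /* VMAXV.S32-style reduction: fold the vector into Rda. */
    static int32_t model_vmaxv_s32(int32_t rda, const int32_t q[4])
    {
        for (int e = 0; e < 4; e++) {
            if (q[e] > rda) {
                rda = q[e];
            }
        }
        return rda;   /* written back to the general-purpose register */
    }

VMINV is the same with the comparison reversed; VMAXAV/VMINAV first take the absolute value of each vector element (but not of Rda, which they treat as unsigned), as the do_maxa()/do_mina() helpers in the diff make explicit.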
 target/arm/helper-mve.h    | 20 ++++++++++
 target/arm/mve.decode      | 18 +++++++++--
 target/arm/mve_helper.c    | 66 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 48 +++++++++++++++++++++++++++
 4 files changed, 150 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)

+DEF_HELPER_FLAGS_3(mve_vmaxvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vmaxavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+
+DEF_HELPER_FLAGS_3(mve_vminvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavb, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavh, TCG_CALL_NO_WG, i32, env, ptr, i32)
+DEF_HELPER_FLAGS_3(mve_vminavw, TCG_CALL_NO_WG, i32, env, ptr, i32)
+
DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@
&vcmp qm qn size mask
&vcmp_scalar qn rm size mask
&shl_scalar qda rm size
+&vmaxv qm rda size

@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
# Note that both Rn and Qd are 3 bits only (no D bit)
@@ -XXX,XX +XXX,XX @@
@vcmp_scalar .... .... .. size:2 qn:3 . .... .... .... rm:4 &vcmp_scalar \
             mask=%mask_22_13

+@vmaxv .... .... .... size:2 .. rda:4 .... .... .... &vmaxv qm=%qm
+
# Vector loads and stores

# Widening loads and narrowing stores:
@@ -XXX,XX +XXX,XX @@ VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav

VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav

-VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
-VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+{
+  VMAXV_S      1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
+  VMINV_S      1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VMAXAV       1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv
+  VMINAV       1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+}
+
+{
+  VMAXV_U      1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
+  VMINV_U      1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
+}

VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvub, 1, uint8_t)
DO_VADDV(vaddvuh, 2, uint16_t)
DO_VADDV(vaddvuw, 4, uint32_t)

+/*
+ * Vector max/min across vector. Unlike VADDV, we must
+ * read ra as the element size, not its full width.
+ * We work with int64_t internally for simplicity.
+ */
+#define DO_VMAXMINV(OP, ESIZE, TYPE, RATYPE, FN) \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
+                                    uint32_t ra_in) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *m = vm; \
+        int64_t ra = (RATYPE)ra_in; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                ra = FN(ra, m[H##ESIZE(e)]); \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return ra; \
+    } \
+
+#define DO_VMAXMINV_U(INSN, FN) \
+    DO_VMAXMINV(INSN##b, 1, uint8_t, uint8_t, FN) \
+    DO_VMAXMINV(INSN##h, 2, uint16_t, uint16_t, FN) \
+    DO_VMAXMINV(INSN##w, 4, uint32_t, uint32_t, FN)
+#define DO_VMAXMINV_S(INSN, FN) \
+    DO_VMAXMINV(INSN##b, 1, int8_t, int8_t, FN) \
+    DO_VMAXMINV(INSN##h, 2, int16_t, int16_t, FN) \
+    DO_VMAXMINV(INSN##w, 4, int32_t, int32_t, FN)
+
+/*
+ * Helpers for max and min of absolute values across vector:
+ * note that we only take the absolute value of 'm', not 'n'
+ */
+static int64_t do_maxa(int64_t n, int64_t m)
+{
+    if (m < 0) {
+        m = -m;
+    }
+    return MAX(n, m);
+}
+
+static int64_t do_mina(int64_t n, int64_t m)
+{
+    if (m < 0) {
+        m = -m;
+    }
+    return MIN(n, m);
+}
+
+DO_VMAXMINV_S(vmaxvs, DO_MAX)
+DO_VMAXMINV_U(vmaxvu, DO_MAX)
+DO_VMAXMINV_S(vminvs, DO_MIN)
+DO_VMAXMINV_U(vminvu, DO_MIN)
+/*
+ * VMAXAV, VMINAV treat the general purpose input as unsigned
+ * and the vector elements as signed.
+ */
+DO_VMAXMINV(vmaxavb, 1, int8_t, uint8_t, do_maxa)
+DO_VMAXMINV(vmaxavh, 2, int16_t, uint16_t, do_maxa)
+DO_VMAXMINV(vmaxavw, 4, int32_t, uint32_t, do_maxa)
+DO_VMAXMINV(vminavb, 1, int8_t, uint8_t, do_mina)
+DO_VMAXMINV(vminavh, 2, int16_t, uint16_t, do_mina)
+DO_VMAXMINV(vminavw, 4, int32_t, uint32_t, do_mina)
+
#define DO_VADDLV(OP, TYPE, LTYPE) \
    uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
                                    uint64_t ra) \
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_VCMP(VCMPGE, vcmpge)
DO_VCMP(VCMPLT, vcmplt)
DO_VCMP(VCMPGT, vcmpgt)
DO_VCMP(VCMPLE, vcmple)
+
+static bool do_vmaxv(DisasContext *s, arg_vmaxv *a, MVEGenVADDVFn fn)
+{
+    /*
+     * MIN/MAX operations across a vector: compute the min or
+     * max of the initial value in a general purpose register
+     * and all the elements in the vector, and store it back
+     * into the general purpose register.
+     */
+    TCGv_ptr qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qm) ||
+        !fn || a->rda == 13 || a->rda == 15) {
+        /* Rda cases are UNPREDICTABLE */
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qm = mve_qreg_ptr(a->qm);
+    rda = load_reg(s, a->rda);
+    fn(rda, cpu_env, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qm);
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_VMAXV(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_vmaxv *a) \
+    { \
+        static MVEGenVADDVFn * const fns[] = { \
+            gen_helper_mve_##FN##b, \
+            gen_helper_mve_##FN##h, \
+            gen_helper_mve_##FN##w, \
+            NULL, \
+        }; \
+        return do_vmaxv(s, a, fns[a->size]); \
+    }
+
+DO_VMAXV(VMAXV_S, vmaxvs)
+DO_VMAXV(VMAXV_U, vmaxvu)
+DO_VMAXV(VMAXAV, vmaxav)
+DO_VMAXV(VMINV_S, vminvs)
+DO_VMAXV(VMINV_U, vminvu)
+DO_VMAXV(VMINAV, vminav)
--
2.20.1
From: "Wen, Jianxian" <Jianxian.Wen@verisilicon.com>

Add a "memory" link property which can be connected to an IOMMU region
to support SMMU translation.

Signed-off-by: Jianxian Wen <jianxian.wen@verisilicon.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 4C23C17B8E87E74E906A25A3254A03F4FA1FEC31@SHASXM03.verisilicon.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/exynos4210.c  |  3 +++
 hw/arm/xilinx_zynq.c |  3 +++
 hw/dma/pl330.c       | 26 ++++++++++++++++++++++----
 3 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/exynos4210.c
+++ b/hw/arm/exynos4210.c
@@ -XXX,XX +XXX,XX @@ static DeviceState *pl330_create(uint32_t base, qemu_or_irq *orgate,
    int i;

    dev = qdev_new("pl330");
+    object_property_set_link(OBJECT(dev), "memory",
+                             OBJECT(get_system_memory()),
+                             &error_fatal);
    qdev_prop_set_uint8(dev, "num_events", nevents);
    qdev_prop_set_uint8(dev, "num_chnls", 8);
    qdev_prop_set_uint8(dev, "num_periph_req", nreq);
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -XXX,XX +XXX,XX @@ static void zynq_init(MachineState *machine)
    sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[39-IRQ_OFFSET]);

    dev = qdev_new("pl330");
+    object_property_set_link(OBJECT(dev), "memory",
+                             OBJECT(address_space_mem),
+                             &error_fatal);
    qdev_prop_set_uint8(dev, "num_chnls", 8);
    qdev_prop_set_uint8(dev, "num_periph_req", 4);
    qdev_prop_set_uint8(dev, "num_events", 16);
diff --git a/hw/dma/pl330.c b/hw/dma/pl330.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/pl330.c
+++ b/hw/dma/pl330.c
@@ -XXX,XX +XXX,XX @@ struct PL330State {
    uint8_t num_faulting;
    uint8_t periph_busy[PL330_PERIPH_NUM];

+    /* Memory region that DMA operation access */
+    MemoryRegion *mem_mr;
+    AddressSpace *mem_as;
};

#define TYPE_PL330 "pl330"
@@ -XXX,XX +XXX,XX @@ static inline const PL330InsnDesc *pl330_fetch_insn(PL330Chan *ch)
    uint8_t opcode;
    int i;

-    dma_memory_read(&address_space_memory, ch->pc, &opcode, 1);
+    dma_memory_read(ch->parent->mem_as, ch->pc, &opcode, 1);
    for (i = 0; insn_desc[i].size; i++) {
        if ((opcode & insn_desc[i].opmask) == insn_desc[i].opcode) {
            return &insn_desc[i];
@@ -XXX,XX +XXX,XX @@ static inline void pl330_exec_insn(PL330Chan *ch, const PL330InsnDesc *insn)
    uint8_t buf[PL330_INSN_MAXSIZE];

    assert(insn->size <= PL330_INSN_MAXSIZE);
-    dma_memory_read(&address_space_memory, ch->pc, buf, insn->size);
+    dma_memory_read(ch->parent->mem_as, ch->pc, buf, insn->size);
    insn->exec(ch, buf[0], &buf[1], insn->size - 1);
}

@@ -XXX,XX +XXX,XX @@ static int pl330_exec_cycle(PL330Chan *channel)
    if (q != NULL && q->len <= pl330_fifo_num_free(&s->fifo)) {
        int len = q->len - (q->addr & (q->len - 1));

-        dma_memory_read(&address_space_memory, q->addr, buf, len);
+        dma_memory_read(s->mem_as, q->addr, buf, len);
        trace_pl330_exec_cycle(q->addr, len);
        if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
            pl330_hexdump(buf, len);
@@ -XXX,XX +XXX,XX @@ static int pl330_exec_cycle(PL330Chan *channel)
        fifo_res = pl330_fifo_get(&s->fifo, buf, len, q->tag);
    }
    if (fifo_res == PL330_FIFO_OK || q->z) {
-        dma_memory_write(&address_space_memory, q->addr, buf, len);
+        dma_memory_write(s->mem_as, q->addr, buf, len);
        trace_pl330_exec_cycle(q->addr, len);
        if (trace_event_get_state_backends(TRACE_PL330_HEXDUMP)) {
            pl330_hexdump(buf, len);
@@ -XXX,XX +XXX,XX @@ static void pl330_realize(DeviceState *dev, Error **errp)
                          "dma", PL330_IOMEM_SIZE);
    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);

+    if (!s->mem_mr) {
+        error_setg(errp, "'memory' link is not set");
+        return;
+    } else if (s->mem_mr == get_system_memory()) {
+        /* Avoid creating new AS for system memory. */
+        s->mem_as = &address_space_memory;
+    } else {
+        s->mem_as = g_new0(AddressSpace, 1);
+        address_space_init(s->mem_as, s->mem_mr,
+                           memory_region_name(s->mem_mr));
+    }
+
    s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pl330_exec_cycle_timer, s);

    s->cfg[0] = (s->mgr_ns_at_rst ? 0x4 : 0) |
@@ -XXX,XX +XXX,XX @@ static Property pl330_properties[] = {
    DEFINE_PROP_UINT8("rd_q_dep", PL330State, rd_q_dep, 16),
    DEFINE_PROP_UINT16("data_buffer_dep", PL330State, data_buffer_dep, 256),

+    DEFINE_PROP_LINK("memory", PL330State, mem_mr,
+                     TYPE_MEMORY_REGION, MemoryRegion *),
+
    DEFINE_PROP_END_OF_LIST(),
};

--
2.20.1

From: Fabiano Rosas <farosas@suse.de>

If a test was tagged with the "accel" tag and the specified
accelerator is not present in the QEMU binary, cancel the test.

We can now write tests without explicit calls to require_accelerator;
just the tag is enough.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/avocado/avocado_qemu/__init__.py | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tests/avocado/avocado_qemu/__init__.py b/tests/avocado/avocado_qemu/__init__.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/avocado_qemu/__init__.py
+++ b/tests/avocado/avocado_qemu/__init__.py
@@ -XXX,XX +XXX,XX @@ def setUp(self):

        super().setUp('qemu-system-')

+        accel_required = self._get_unique_tag_val('accel')
+        if accel_required:
+            self.require_accelerator(accel_required)
+
        self.machine = self.params.get('machine',
                                       default=self._get_unique_tag_val('machine'))

--
2.34.1
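For context on the "memory" link added by the pl330 patch above: board code is expected to set the link before realizing the device, along the lines of the exynos4210/xilinx_zynq hunks. A sketch (not complete board code; "dma_mr" is a placeholder for whatever MemoryRegion the board wants DMA to go through, for example an IOMMU region):

    DeviceState *dev = qdev_new("pl330");

    /* Must be set, or realize fails with "'memory' link is not set" */
    object_property_set_link(OBJECT(dev), "memory", OBJECT(dma_mr),
                             &error_fatal);
    qdev_prop_set_uint8(dev, "num_chnls", 8);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

Realize then either reuses the global address_space_memory (when the link points at system memory) or creates a private AddressSpace on top of the linked region, which is what lets the DMA accesses go through an SMMU.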
Implement the MVE VMLADAV and VMLSDAV insns. Like the VMLALDAV and
VMLSLDAV insns already implemented, these accumulate multiplied
vector elements, but they accumulate a 32-bit result rather than a
64-bit one.

Note that these encodings overlap with what would be RdaHi=0b111 for
VMLALDAV, VMLSLDAV, VRMLALDAVH and VRMLSLDAVH.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 17 ++++++++++
 target/arm/mve.decode      | 33 +++++++++++++++++---
 target/arm/mve_helper.c    | 41 ++++++++++++++++++++++++
 target/arm/translate-mve.c | 64 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 150 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrmlaldavhuw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vrmlsldavhsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)
DEF_HELPER_FLAGS_4(mve_vrmlsldavhxsw, TCG_CALL_NO_WG, i64, env, ptr, ptr, i64)

+DEF_HELPER_FLAGS_4(mve_vmladavsb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavub, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavuh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavuw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(mve_vmladavsxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmladavsxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxb, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxh, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(mve_vmlsdavxw, TCG_CALL_NO_WG, i32, env, ptr, ptr, i32)
+
DEF_HELPER_FLAGS_3(mve_vaddvsb, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvub, TCG_CALL_NO_WG, i32, env, ptr, i32)
DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)

diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VDUP 1110 1110 1 0 10 ... 0 .... 1011 . 0 0 1 0000 @vdup size=2
%size_16 16:1 !function=plus_1

&vmlaldav rdahi rdalo size qn qm x a
+&vmladav rda size qn qm x a

@vmlaldav .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
          qn=%qn rdahi=%rdahi rdalo=%rdalo size=%size_16 &vmlaldav
@vmlaldav_nosz .... .... . ... ... . ... x:1 .... .. a:1 . qm:3 . \
               qn=%qn rdahi=%rdahi rdalo=%rdalo size=0 &vmlaldav
-VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
-VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+@vmladav .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \
+         qn=%qn rda=%rdalo size=%size_16 &vmladav
+@vmladav_nosz .... .... .... ... . ... x:1 .... . . a:1 . qm:3 . \
+              qn=%qn rda=%rdalo size=0 &vmladav

-VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
+{
+  VMLADAV_S  1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav
+  VMLALDAV_S 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+}
+{
+  VMLADAV_U  1111 1110 1111 ... . ... . 1110 . 0 . 0 ... 0 @vmladav
+  VMLALDAV_U 1111 1110 1 ... ... . ... . 1110 . 0 . 0 ... 0 @vmlaldav
+}
+
+{
+  VMLSDAV  1110 1110 1111 ... . ... . 1110 . 0 . 0 ... 1 @vmladav
+  VMLSLDAV 1110 1110 1 ... ... . ... . 1110 . 0 . 0 ... 1 @vmlaldav
+}
+
+{
+  VMLSDAV    1111 1110 1111 ... 0 ... . 1110 . 0 . 0 ... 1 @vmladav_nosz
+  VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
+}
+
+VMLADAV_S 1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz
+VMLADAV_U 1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 1 @vmladav_nosz

{
  VMAXV_S      1110 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
  VMINV_S      1110 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
  VMAXAV       1110 1110 1110 .. 00 .... 1111 0 0 . 0 ... 0 @vmaxv
  VMINAV       1110 1110 1110 .. 00 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VMLADAV_S    1110 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz
  VRMLALDAVH_S 1110 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
}

{
  VMAXV_U      1111 1110 1110 .. 10 .... 1111 0 0 . 0 ... 0 @vmaxv
  VMINV_U      1111 1110 1110 .. 10 .... 1111 1 0 . 0 ... 0 @vmaxv
+  VMLADAV_U    1111 1110 1111 ... 0 ... . 1111 . 0 . 0 ... 0 @vmladav_nosz
  VRMLALDAVH_U 1111 1110 1 ... ... 0 ... . 1111 . 0 . 0 ... 0 @vmlaldav_nosz
}

-VRMLSLDAVH 1111 1110 1 ... ... 0 ... . 1110 . 0 . 0 ... 1 @vmlaldav_nosz
-
# Scalar operations

VADD_scalar 1110 1110 0 . .. ... 1 ... 0 1111 . 100 .... @2scalar
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavxsh, 2, int16_t, true, +=, -=)
DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)

+/*
+ * Multiply add dual accumulate ops
+ */
+#define DO_DAV(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC) \
+    uint32_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
+                                    void *vm, uint32_t a) \
+    { \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned e; \
+        TYPE *n = vn, *m = vm; \
+        for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
+            if (mask & 1) { \
+                if (e & 1) { \
+                    a ODDACC \
+                        n[H##ESIZE(e - 1 * XCHG)] * m[H##ESIZE(e)]; \
+                } else { \
+                    a EVENACC \
+                        n[H##ESIZE(e + 1 * XCHG)] * m[H##ESIZE(e)]; \
+                } \
+            } \
+        } \
+        mve_advance_vpt(env); \
+        return a; \
+    }
+
+#define DO_DAV_S(INSN, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##b, 1, int8_t, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##h, 2, int16_t, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##w, 4, int32_t, XCHG, EVENACC, ODDACC)
+
+#define DO_DAV_U(INSN, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##b, 1, uint8_t, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##h, 2, uint16_t, XCHG, EVENACC, ODDACC) \
+    DO_DAV(INSN##w, 4, uint32_t, XCHG, EVENACC, ODDACC)
+
+DO_DAV_S(vmladavs, false, +=, +=)
+DO_DAV_U(vmladavu, false, +=, +=)
+DO_DAV_S(vmlsdav, false, +=, -=)
+DO_DAV_S(vmladavsx, true, +=, +=)
+DO_DAV_S(vmlsdavx, true, +=, -=)
+
/*
 * Rounding multiply add long dual accumulate high. In the pseudocode
 * this is implemented with a 72-bit internal accumulator value of which
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenVIWDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32, TC
typedef void MVEGenCmpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenScalarCmpFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void MVEGenVABAVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
+typedef void MVEGenDualAccOpFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);

/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
static inline long mve_qreg_offset(unsigned reg)
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
    return do_long_dual_acc(s, a, fns[a->x]);
}

+static bool do_dual_acc(DisasContext *s, arg_vmladav *a, MVEGenDualAccOpFn *fn)
+{
+    TCGv_ptr qn, qm;
+    TCGv_i32 rda;
+
+    if (!dc_isar_feature(aa32_mve, s) ||
+        !mve_check_qreg_bank(s, a->qn) ||
+        !fn) {
+        return false;
+    }
+    if (!mve_eci_check(s) || !vfp_access_check(s)) {
+        return true;
+    }
+
+    qn = mve_qreg_ptr(a->qn);
+    qm = mve_qreg_ptr(a->qm);
+
+    /*
+     * This insn is subject to beat-wise execution. Partial execution
+     * of an A=0 (no-accumulate) insn which does not execute the first
+     * beat must start with the current rda value, not 0.
+     */
+    if (a->a || mve_skip_first_beat(s)) {
+        rda = load_reg(s, a->rda);
+    } else {
+        rda = tcg_const_i32(0);
+    }
+
+    fn(rda, cpu_env, qn, qm, rda);
+    store_reg(s, a->rda, rda);
+    tcg_temp_free_ptr(qn);
+    tcg_temp_free_ptr(qm);
+
+    mve_update_eci(s);
+    return true;
+}
+
+#define DO_DUAL_ACC(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_vmladav *a) \
+    { \
+        static MVEGenDualAccOpFn * const fns[4][2] = { \
+            { gen_helper_mve_##FN##b, gen_helper_mve_##FN##xb }, \
+            { gen_helper_mve_##FN##h, gen_helper_mve_##FN##xh }, \
+            { gen_helper_mve_##FN##w, gen_helper_mve_##FN##xw }, \
+            { NULL, NULL }, \
+        }; \
+        return do_dual_acc(s, a, fns[a->size][a->x]); \
+    }
+
+DO_DUAL_ACC(VMLADAV_S, vmladavs)
+DO_DUAL_ACC(VMLSDAV, vmlsdav)
+
+static bool trans_VMLADAV_U(DisasContext *s, arg_vmladav *a)
+{
+    static MVEGenDualAccOpFn * const fns[4][2] = {
+        { gen_helper_mve_vmladavub, NULL },
+        { gen_helper_mve_vmladavuh, NULL },
+        { gen_helper_mve_vmladavuw, NULL },
+        { NULL, NULL },
+    };
+    return do_dual_acc(s, a, fns[a->size][a->x]);
+}
+
static void gen_vpst(DisasContext *s, uint32_t mask)
{
    /*
--
2.20.1

From: Fabiano Rosas <farosas@suse.de>

This allows the test to be skipped when TCG is not present in the QEMU
binary.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/avocado/boot_linux_console.py | 1 +
 tests/avocado/reverse_debugging.py  | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux_console.py
+++ b/tests/avocado/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_uboot_netbsd9(self):

    def test_aarch64_raspi3_atf(self):
        """
+        :avocado: tags=accel:tcg
        :avocado: tags=arch:aarch64
        :avocado: tags=machine:raspi3b
        :avocado: tags=cpu:cortex-a53
diff --git a/tests/avocado/reverse_debugging.py b/tests/avocado/reverse_debugging.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/reverse_debugging.py
+++ b/tests/avocado/reverse_debugging.py
@@ -XXX,XX +XXX,XX @@ def reverse_debugging(self, shift=7, args=None):
        vm.shutdown()

class ReverseDebugging_X86_64(ReverseDebugging):
+    """
+    :avocado: tags=accel:tcg
+    """
+
    REG_PC = 0x10
    REG_CS = 0x12
    def get_pc(self, g):
@@ -XXX,XX +XXX,XX @@ def test_x86_64_pc(self):
        self.reverse_debugging()

class ReverseDebugging_AArch64(ReverseDebugging):
+    """
+    :avocado: tags=accel:tcg
+    """
+
    REG_PC = 32

    # unidentified gitlab timeout problem
--
2.34.1
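A standalone model of the 32-bit dual-accumulate pattern implemented by the VMLADAV patch above may help (a hypothetical function, 16-bit lanes, all beats executed; the real helpers also handle the 'x' exchange form and the VPT predicate mask):

    #include <stdint.h>

    static uint32_t model_vmladav_s16(const int16_t qn[8], const int16_t qm[8],
                                      uint32_t a)
    {
        for (int e = 0; e < 8; e++) {
            a += (int32_t)qn[e] * qm[e];   /* 32-bit accumulator, wraps */
        }
        return a;
    }

VMLSDAV follows the same shape but subtracts the odd-numbered products instead of adding them, matching the EVENACC/ODDACC operator parameters of the DO_DAV macro.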
The MVEGenDualAccOpFn is a bit misnamed, since it is used for
the "long dual accumulate" operations that use a 64-bit
accumulator. Rename it to MVEGenLongDualAccOpFn so we can
use the former name for the 32-bit accumulator insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/translate-mve.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenOneOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void MVEGenTwoOpShiftFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
-typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
+typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
typedef void MVEGenVIDUPFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32, TCGv_i32);
@@ -XXX,XX +XXX,XX @@ static bool trans_VQDMULLT_scalar(DisasContext *s, arg_2scalar *a)
}

static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,
-                             MVEGenDualAccOpFn *fn)
+                             MVEGenLongDualAccOpFn *fn)
{
    TCGv_ptr qn, qm;
    TCGv_i64 rda;
@@ -XXX,XX +XXX,XX @@ static bool do_long_dual_acc(DisasContext *s, arg_vmlaldav *a,

static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)
{
-    static MVEGenDualAccOpFn * const fns[4][2] = {
+    static MVEGenLongDualAccOpFn * const fns[4][2] = {
        { NULL, NULL },
        { gen_helper_mve_vmlaldavsh, gen_helper_mve_vmlaldavxsh },
        { gen_helper_mve_vmlaldavsw, gen_helper_mve_vmlaldavxsw },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_S(DisasContext *s, arg_vmlaldav *a)

static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)
{
-    static MVEGenDualAccOpFn * const fns[4][2] = {
+    static MVEGenLongDualAccOpFn * const fns[4][2] = {
        { NULL, NULL },
        { gen_helper_mve_vmlaldavuh, NULL },
        { gen_helper_mve_vmlaldavuw, NULL },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLALDAV_U(DisasContext *s, arg_vmlaldav *a)

static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)
{
-    static MVEGenDualAccOpFn * const fns[4][2] = {
+    static MVEGenLongDualAccOpFn * const fns[4][2] = {
        { NULL, NULL },
        { gen_helper_mve_vmlsldavsh, gen_helper_mve_vmlsldavxsh },
        { gen_helper_mve_vmlsldavsw, gen_helper_mve_vmlsldavxsw },
@@ -XXX,XX +XXX,XX @@ static bool trans_VMLSLDAV(DisasContext *s, arg_vmlaldav *a)

static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)
{
-    static MVEGenDualAccOpFn * const fns[] = {
+    static MVEGenLongDualAccOpFn * const fns[] = {
        gen_helper_mve_vrmlaldavhsw, gen_helper_mve_vrmlaldavhxsw,
    };
    return do_long_dual_acc(s, a, fns[a->x]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLALDAVH_S(DisasContext *s, arg_vmlaldav *a)

static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)
{
-    static MVEGenDualAccOpFn * const fns[] = {
+    static MVEGenLongDualAccOpFn * const fns[] = {
        gen_helper_mve_vrmlaldavhuw, NULL,
    };
    return do_long_dual_acc(s, a, fns[a->x]);
@@ -XXX,XX +XXX,XX @@ static bool trans_VRMLALDAVH_U(DisasContext *s, arg_vmlaldav *a)

static bool trans_VRMLSLDAVH(DisasContext *s, arg_vmlaldav *a)
{
-    static MVEGenDualAccOpFn * const fns[] = {
+    static MVEGenLongDualAccOpFn * const fns[] = {
        gen_helper_mve_vrmlsldavhsw, gen_helper_mve_vrmlsldavhxsw,
    };
    return do_long_dual_acc(s, a, fns[a->x]);
--
2.20.1

From: Fabiano Rosas <farosas@suse.de>

Now that the cortex-a15 is under CONFIG_TCG, use the 'max' CPU as the
default for a KVM-only build.

Note that we cannot use 'host' here because the qtests can run without
any accelerator other than qtest, and 'host' depends on KVM being
enabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
    mc->minimum_page_bits = 12;
    mc->possible_cpu_arch_ids = virt_possible_cpu_arch_ids;
    mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
+#ifdef CONFIG_TCG
    mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
+#else
+    mc->default_cpu_type = ARM_CPU_TYPE_NAME("max");
+#endif
    mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
    mc->kvm_type = virt_kvm_type;
    assert(!mc->get_hotplug_handler);
--
2.34.1
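The point of the rename patch above is easiest to see with the two function-pointer flavours side by side (both signatures appear in translate-mve.c once the 32-bit VMLADAV/VMLSDAV support is in; the i64 versus i32 accumulator is the entire difference):

    typedef void MVEGenLongDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr,
                                       TCGv_ptr, TCGv_i64);
    typedef void MVEGenDualAccOpFn(TCGv_i32, TCGv_ptr, TCGv_ptr,
                                   TCGv_ptr, TCGv_i32);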
Implement the MVE narrowing move insns VMOVN, VQMOVN and VQMOVUN.
These take a double-width input, narrow it (possibly saturating) and
store the result to either the top or bottom half of the output
element.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-mve.h    | 20 ++++++++++
 target/arm/mve.decode      | 12 ++++++
 target/arm/mve_helper.c    | 78 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-mve.c | 22 +++++++++++
 4 files changed, 132 insertions(+)

diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-mve.h
+++ b/target/arm/helper-mve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vnegw, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vfnegh, TCG_CALL_NO_WG, void, env, ptr, ptr)
DEF_HELPER_FLAGS_3(mve_vfnegs, TCG_CALL_NO_WG, void, env, ptr, ptr)

+DEF_HELPER_FLAGS_3(mve_vmovnbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmovnbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmovntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vmovnth, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqmovunbb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovunbh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovuntb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovunth, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqmovnbsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovnbsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntsb, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntsh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
+DEF_HELPER_FLAGS_3(mve_vqmovnbub, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovnbuh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntub, TCG_CALL_NO_WG, void, env, ptr, ptr)
+DEF_HELPER_FLAGS_3(mve_vqmovntuh, TCG_CALL_NO_WG, void, env, ptr, ptr)
+
DEF_HELPER_FLAGS_4(mve_vand, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vbic, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
DEF_HELPER_FLAGS_4(mve_vorr, TCG_CALL_NO_WG, void, env, ptr, ptr, ptr)
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve.decode
+++ b/target/arm/mve.decode
@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
  VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
  VSHLL_BS 111 0 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h

+  VQMOVUNB  111 0 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_BS 111 0 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
+
  VMULH_S 111 0 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
}

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
  VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_b
  VSHLL_BU 111 1 1110 0 . 11 .. 01 ... 0 1110 0 0 . 0 ... 1 @2_shll_esize_h

+  VMOVNB    111 1 1110 0 . 11 .. 01 ... 0 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_BU 111 1 1110 0 . 11 .. 11 ... 0 1110 0 0 . 0 ... 1 @1op
+
  VMULH_U 111 1 1110 0 . .. ...1 ... 0 1110 . 0 . 0 ... 1 @2op
}

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
  VSHLL_TS 111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b
  VSHLL_TS 111 0 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h

+  VQMOVUNT  111 0 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_TS 111 0 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
+
  VRMULH_S 111 0 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
}

@@ -XXX,XX +XXX,XX @@ VMUL 1110 1111 0 . .. ... 0 ... 0 1001 . 1 . 1 ... 0 @2op
  VSHLL_TU 111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_b
  VSHLL_TU 111 1 1110 0 . 11 .. 01 ... 1 1110 0 0 . 0 ... 1 @2_shll_esize_h

+  VMOVNT    111 1 1110 0 . 11 .. 01 ... 1 1110 1 0 . 0 ... 1 @1op
+  VQMOVN_TU 111 1 1110 0 . 11 .. 11 ... 1 1110 0 0 . 0 ... 1 @1op
+
  VRMULH_U 111 1 1110 0 . .. ...1 ... 1 1110 . 0 . 0 ... 1 @2op
}

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -XXX,XX +XXX,XX @@ DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)

+#define DO_VMOVN(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE) \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+    { \
+        LTYPE *m = vm; \
+        TYPE *d = vd; \
+        uint16_t mask = mve_element_mask(env); \
+        unsigned le; \
+        mask >>= ESIZE * TOP; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            mergemask(&d[H##ESIZE(le * 2 + TOP)], \
+                      m[H##LESIZE(le)], mask); \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+DO_VMOVN(vmovnbb, false, 1, uint8_t, 2, uint16_t)
+DO_VMOVN(vmovnbh, false, 2, uint16_t, 4, uint32_t)
+DO_VMOVN(vmovntb, true, 1, uint8_t, 2, uint16_t)
+DO_VMOVN(vmovnth, true, 2, uint16_t, 4, uint32_t)
+
+#define DO_VMOVN_SAT(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
+    void HELPER(mve_##OP)(CPUARMState *env, void *vd, void *vm) \
+    { \
+        LTYPE *m = vm; \
+        TYPE *d = vd; \
+        uint16_t mask = mve_element_mask(env); \
+        bool qc = false; \
+        unsigned le; \
+        mask >>= ESIZE * TOP; \
+        for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
+            bool sat = false; \
+            TYPE r = FN(m[H##LESIZE(le)], &sat); \
+            mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
+            qc |= sat & mask & 1; \
+        } \
+        if (qc) { \
+            env->vfp.qc[0] = qc; \
+        } \
+        mve_advance_vpt(env); \
+    }
+
+#define DO_VMOVN_SAT_UB(BOP, TOP, FN) \
+    DO_VMOVN_SAT(BOP, false, 1, uint8_t, 2, uint16_t, FN) \
+    DO_VMOVN_SAT(TOP, true, 1, uint8_t, 2, uint16_t, FN)
+
+#define DO_VMOVN_SAT_UH(BOP, TOP, FN) \
+    DO_VMOVN_SAT(BOP, false, 2, uint16_t, 4, uint32_t, FN) \
+    DO_VMOVN_SAT(TOP, true, 2, uint16_t, 4, uint32_t, FN)
+
+#define DO_VMOVN_SAT_SB(BOP, TOP, FN) \
+    DO_VMOVN_SAT(BOP, false, 1, int8_t, 2, int16_t, FN) \
+    DO_VMOVN_SAT(TOP, true, 1, int8_t, 2, int16_t, FN)
+
+#define DO_VMOVN_SAT_SH(BOP, TOP, FN) \
+    DO_VMOVN_SAT(BOP, false, 2, int16_t, 4, int32_t, FN) \
+    DO_VMOVN_SAT(TOP, true, 2, int16_t, 4, int32_t, FN)
+
+#define DO_VQMOVN_SB(N, SATP) \
+    do_sat_bhs((int64_t)(N), INT8_MIN, INT8_MAX, SATP)
+#define DO_VQMOVN_UB(N, SATP) \
+    do_sat_bhs((uint64_t)(N), 0, UINT8_MAX, SATP)
+#define DO_VQMOVUN_B(N, SATP) \
+    do_sat_bhs((int64_t)(N), 0, UINT8_MAX, SATP)
+
+#define DO_VQMOVN_SH(N, SATP) \
+    do_sat_bhs((int64_t)(N), INT16_MIN, INT16_MAX, SATP)
+#define DO_VQMOVN_UH(N, SATP) \
+    do_sat_bhs((uint64_t)(N), 0, UINT16_MAX, SATP)
+#define DO_VQMOVUN_H(N, SATP) \
+    do_sat_bhs((int64_t)(N), 0, UINT16_MAX, SATP)
+
+DO_VMOVN_SAT_SB(vqmovnbsb, vqmovntsb, DO_VQMOVN_SB)
+DO_VMOVN_SAT_SH(vqmovnbsh, vqmovntsh, DO_VQMOVN_SH)
+DO_VMOVN_SAT_UB(vqmovnbub, vqmovntub, DO_VQMOVN_UB)
+DO_VMOVN_SAT_UH(vqmovnbuh, vqmovntuh, DO_VQMOVN_UH)
+DO_VMOVN_SAT_SB(vqmovunbb, vqmovuntb, DO_VQMOVUN_B)
+DO_VMOVN_SAT_SH(vqmovunbh, vqmovunth, DO_VQMOVUN_H)
+
uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
                           uint32_t shift)
{
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ DO_1OP(VCLS, vcls)
DO_1OP(VABS, vabs)
DO_1OP(VNEG, vneg)

+/* Narrowing moves: only size 0 and 1 are valid */
+#define DO_VMOVN(INSN, FN) \
+    static bool trans_##INSN(DisasContext *s, arg_1op *a) \
+    { \
+        static MVEGenOneOpFn * const fns[] = { \
+            gen_helper_mve_##FN##b, \
+            gen_helper_mve_##FN##h, \
+            NULL, \
+            NULL, \
+        }; \
+        return do_1op(s, a, fns[a->size]); \
+    }
+
+DO_VMOVN(VMOVNB, vmovnb)
+DO_VMOVN(VMOVNT, vmovnt)
+DO_VMOVN(VQMOVUNB, vqmovunb)
+DO_VMOVN(VQMOVUNT, vqmovunt)
+DO_VMOVN(VQMOVN_BS, vqmovnbs)
+DO_VMOVN(VQMOVN_TS, vqmovnts)
+DO_VMOVN(VQMOVN_BU, vqmovnbu)
+DO_VMOVN(VQMOVN_TU, vqmovntu)
+
static bool trans_VREV16(DisasContext *s, arg_1op *a)
{
    static MVEGenOneOpFn * const fns[] = {
--
2.20.1

From: Fabiano Rosas <farosas@suse.de>

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/arm-cpu-features.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -XXX,XX +XXX,XX @@
#define SVE_MAX_VQ 16

#define MACHINE "-machine virt,gic-version=max -accel tcg "
-#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm -accel tcg "
+#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
                   " 'arguments': { 'type': 'full', "
#define QUERY_TAIL "}}"
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
{
    g_test_init(&argc, &argv, NULL);

-    qtest_add_data_func("/arm/query-cpu-model-expansion",
-                        NULL, test_query_cpu_model_expansion);
+    if (qtest_has_accel("tcg")) {
+        qtest_add_data_func("/arm/query-cpu-model-expansion",
+                            NULL, test_query_cpu_model_expansion);
+    }
+
+    if (!g_str_equal(qtest_get_arch(), "aarch64")) {
+        goto out;
+    }

    /*
     * For now we only run KVM specific tests with AArch64 QEMU in
     * order avoid attempting to run an AArch32 QEMU with KVM on
     * AArch64 hosts. That won't work and isn't easy to detect.
     */
-    if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
+    if (qtest_has_accel("kvm")) {
        /*
         * This tests target the 'host' CPU type, so register it only if
         * KVM is available.
         */
        qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
                            NULL, test_query_cpu_model_expansion_kvm);
-    }

-    if (g_str_equal(qtest_get_arch(), "aarch64")) {
-        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
-                            NULL, sve_tests_sve_max_vq_8);
-        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
-                            NULL, sve_tests_sve_off);
        qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
                            NULL, sve_tests_sve_off_kvm);
    }

+    if (qtest_has_accel("tcg")) {
+        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
+                            NULL, sve_tests_sve_max_vq_8);
+        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
+                            NULL, sve_tests_sve_off);
+    }
+
+out:
    return g_test_run();
}
--
2.34.1
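A standalone model of the bottom-half saturating narrow implemented by the VMOVN/VQMOVN patch above (a hypothetical function, 32-bit to 16-bit signed, all lanes enabled; the real helpers also fold per-lane saturation into FPSCR.QC and honour the predicate mask):

    #include <stdint.h>

    static void model_vqmovnb_s32(int16_t d[8], const int32_t m[4])
    {
        for (int le = 0; le < 4; le++) {
            int32_t v = m[le];
            if (v > INT16_MAX) {
                v = INT16_MAX;      /* saturate high */
            } else if (v < INT16_MIN) {
                v = INT16_MIN;      /* saturate low */
            }
            d[le * 2] = (int16_t)v; /* bottom half; the T form writes le * 2 + 1 */
        }
    }

The plain VMOVN forms skip the saturation and simply truncate; the VQMOVUN forms narrow a signed input to an unsigned range.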
From: Hamza Mahfooz <someguy@effective-light.com>

As per commit 5626f8c6d468 ("rcu: Add automatically released rcu_read_lock
variants"), RCU_READ_LOCK_GUARD() should be used instead of
rcu_read_{un}lock().

Signed-off-by: Hamza Mahfooz <someguy@effective-light.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210727235201.11491-1-someguy@effective-light.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
    hwaddr xlat, len, doorbell_gpa;
    MemoryRegionSection mrs;
    MemoryRegion *mr;
-    int ret = 1;

    if (as == &address_space_memory) {
        return 0;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,

    /* MSI doorbell address is translated by an IOMMU */

-    rcu_read_lock();
+    RCU_READ_LOCK_GUARD();
+
    mr = address_space_translate(as, address, &xlat, &len, true,
                                 MEMTXATTRS_UNSPECIFIED);
+
    if (!mr) {
-        goto unlock;
+        return 1;
    }
+
    mrs = memory_region_find(mr, xlat, 1);
+
    if (!mrs.mr) {
-        goto unlock;
+        return 1;
    }

    doorbell_gpa = mrs.offset_within_address_space;
@@ -XXX,XX +XXX,XX @@ int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,

    trace_kvm_arm_fixup_msi_route(address, doorbell_gpa);

-    ret = 0;
-
-unlock:
-    rcu_read_unlock();
-    return ret;
+    return 0;
}

int kvm_arch_add_msi_route_post(struct kvm_irq_routing_entry *route,
--
2.20.1

From: Fabiano Rosas <farosas@suse.de>

These tests set -accel tcg, so restrict them to when TCG is present.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/meson.build | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional
qtests_aarch64 = \
  (cpu != 'arm' and unpack_edk2_blobs ? ['bios-tables-test'] : []) + \
-  (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
-  (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
+  (config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
+    ['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
  (config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
  (config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
  ['arm-cpu-features',
--
2.34.1
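The shape of the transformation in the kvm.c patch above, in miniature (a sketch under the same pattern, not the actual code; RCU_READ_LOCK_GUARD() is implemented with a cleanup attribute, so rcu_read_unlock() runs automatically when the guard variable goes out of scope, covering every early return without a goto/unlock label):

    static int probe(AddressSpace *as, hwaddr address)
    {
        hwaddr xlat, len;
        MemoryRegion *mr;

        RCU_READ_LOCK_GUARD();
        mr = address_space_translate(as, address, &xlat, &len, true,
                                     MEMTXATTRS_UNSPECIFIED);
        if (!mr) {
            return 1;    /* read lock dropped automatically here */
        }
        return 0;        /* ...and here */
    }

This is why the patch can delete the "ret" variable and the "unlock:" label outright: the guard makes the unlock path structural rather than manual.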