Hi; this pullreq includes FEAT_LSE2 support, the new
bpim2u board, and some other smaller patchsets.

thanks
-- PMM

The following changes since commit 369081c4558e7e940fa36ce59bf17b2e390f55d3:

  Merge tag 'pull-tcg-20230605' of https://gitlab.com/rth7680/qemu into staging (2023-06-05 13:16:56 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230606

for you to fetch changes up to f9ac778898cb28307e0f91421aba34d43c34b679:

  target/arm: trap DCC access in user mode emulation (2023-06-06 10:19:40 +0100)

----------------------------------------------------------------
target-arm queue:
 * Support gdbstub (guest debug) in HVF
 * xlnx-versal: Support CANFD controller
 * bpim2u: New board model: Banana Pi BPI-M2 Ultra
 * Emulate FEAT_LSE2
 * allow DC CVA[D]P in user mode emulation
 * trap DCC access in user mode emulation

----------------------------------------------------------------
Francesco Cagnin (4):
      arm: move KVM breakpoints helpers
      hvf: handle access for more registers
      hvf: add breakpoint handlers
      hvf: add guest debugging handlers for Apple Silicon hosts

Richard Henderson (20):
      target/arm: Add commentary for CPUARMState.exclusive_high
      target/arm: Add feature test for FEAT_LSE2
      target/arm: Introduce finalize_memop_{atom,pair}
      target/arm: Use tcg_gen_qemu_ld_i128 for LDXP
      target/arm: Use tcg_gen_qemu_{st, ld}_i128 for do_fp_{st, ld}
      target/arm: Use tcg_gen_qemu_st_i128 for STZG, STZ2G
      target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r
      target/arm: Sink gen_mte_check1 into load/store_exclusive
      target/arm: Load/store integer pair with one tcg operation
      target/arm: Hoist finalize_memop out of do_gpr_{ld, st}
      target/arm: Hoist finalize_memop out of do_fp_{ld, st}
      target/arm: Pass memop to gen_mte_check1*
      target/arm: Pass single_memop to gen_mte_checkN
      target/arm: Check alignment in helper_mte_check
      target/arm: Add SCTLR.nAA to TBFLAG_A64
      target/arm: Relax ordered/atomic alignment checks for LSE2
      target/arm: Move mte check for store-exclusive
      tests/tcg/aarch64: Use stz2g in mte-7.c
      tests/tcg/multiarch: Adjust sigbus.c
      target/arm: Enable FEAT_LSE2 for -cpu max

Vikram Garhwal (4):
      hw/net/can: Introduce Xilinx Versal CANFD controller
      xlnx-versal: Connect Xilinx VERSAL CANFD controllers
      MAINTAINERS: Include canfd tests under Xilinx CAN
      tests/qtest: Introduce tests for Xilinx VERSAL CANFD controller

Zhuojia Shen (3):
      target/arm: allow DC CVA[D]P in user mode emulation
      tests/tcg/aarch64: add DC CVA[D]P tests
      target/arm: trap DCC access in user mode emulation

qianfan Zhao (11):
      hw: arm: Add bananapi M2-Ultra and allwinner-r40 support
      hw/arm/allwinner-r40: add Clock Control Unit
      hw: allwinner-r40: Complete uart devices
      hw: arm: allwinner-r40: Add i2c0 device
      hw/misc: Rename axp209 to axp22x and add support AXP221 PMU
      hw/arm/allwinner-r40: add SDRAM controller device
      hw: sd: allwinner-sdhost: Add sun50i-a64 SoC support
      hw: arm: allwinner-r40: Add emac and gmac support
      hw: arm: allwinner-sramc: Add SRAM Controller support for R40
      tests: avocado: boot_linux_console: Add test case for bpim2u
      docs: system: arm: Introduce bananapi_m2u

 MAINTAINERS | 2 +-
 docs/system/arm/bananapi_m2u.rst | 139 +++
 docs/system/arm/emulation.rst | 1 +
 docs/system/arm/xlnx-versal-virt.rst | 31 +
 docs/system/target-arm.rst | 1 +
 include/hw/arm/allwinner-r40.h | 143 +++
 include/hw/arm/xlnx-versal.h | 12 +
 include/hw/misc/allwinner-r40-ccu.h | 65 +
 include/hw/misc/allwinner-r40-dramc.h | 108 ++
 include/hw/misc/allwinner-sramc.h | 69 ++
 include/hw/net/xlnx-versal-canfd.h | 87 ++
 include/hw/sd/allwinner-sdhost.h | 9 +
 include/sysemu/hvf.h | 37 +
 include/sysemu/hvf_int.h | 2 +
 target/arm/cpu.h | 16 +-
 target/arm/hvf_arm.h | 7 +
 target/arm/internals.h | 53 +-
 target/arm/tcg/helper-a64.h | 3 +
 target/arm/tcg/translate-a64.h | 4 +-
 target/arm/tcg/translate.h | 65 +-
 accel/hvf/hvf-accel-ops.c | 119 ++
 accel/hvf/hvf-all.c | 23 +
 hw/arm/allwinner-r40.c | 526 ++++++++
 hw/arm/bananapi_m2u.c | 145 +++
 hw/arm/xlnx-versal-virt.c | 53 +
 hw/arm/xlnx-versal.c | 37 +
 hw/misc/allwinner-r40-ccu.c | 209 ++++
 hw/misc/allwinner-r40-dramc.c | 513 ++++++++
 hw/misc/allwinner-sramc.c | 184 +++
 hw/misc/axp209.c | 238 ----
 hw/misc/axp2xx.c | 283 +++++
 hw/net/can/xlnx-versal-canfd.c | 2107 +++++++++++++++++++++++++++++++++
 hw/sd/allwinner-sdhost.c | 72 +-
 target/arm/cpu.c | 2 +
 target/arm/debug_helper.c | 5 +
 target/arm/helper.c | 6 +-
 target/arm/hvf/hvf.c | 750 +++++++++++-
 target/arm/hyp_gdbstub.c | 253 ++++
 target/arm/kvm64.c | 276 -----
 target/arm/tcg/cpu64.c | 1 +
 target/arm/tcg/helper-a64.c | 7 +
 target/arm/tcg/hflags.c | 6 +
 target/arm/tcg/mte_helper.c | 18 +
 target/arm/tcg/translate-a64.c | 477 +++++---
 target/arm/tcg/translate-sve.c | 106 +-
 target/arm/tcg/translate.c | 1 +
 target/i386/hvf/hvf.c | 33 +
 tests/qtest/xlnx-canfd-test.c | 423 +++++++
 tests/tcg/aarch64/dcpodp.c | 63 +
 tests/tcg/aarch64/dcpop.c | 63 +
 tests/tcg/aarch64/mte-7.c | 3 +-
 tests/tcg/multiarch/sigbus.c | 13 +-
 hw/arm/Kconfig | 14 +-
 hw/arm/meson.build | 1 +
 hw/misc/Kconfig | 5 +-
 hw/misc/meson.build | 5 +-
 hw/misc/trace-events | 26 +-
 hw/net/can/meson.build | 1 +
 hw/net/can/trace-events | 7 +
 target/arm/meson.build | 3 +-
 tests/avocado/boot_linux_console.py | 176 +++
 tests/qtest/meson.build | 1 +
 tests/tcg/aarch64/Makefile.target | 11 +
 63 files changed, 7386 insertions(+), 733 deletions(-)
 create mode 100644 docs/system/arm/bananapi_m2u.rst
 create mode 100644 include/hw/arm/allwinner-r40.h
 create mode 100644 include/hw/misc/allwinner-r40-ccu.h
 create mode 100644 include/hw/misc/allwinner-r40-dramc.h
 create mode 100644 include/hw/misc/allwinner-sramc.h
 create mode 100644 include/hw/net/xlnx-versal-canfd.h
 create mode 100644 hw/arm/allwinner-r40.c
 create mode 100644 hw/arm/bananapi_m2u.c
 create mode 100644 hw/misc/allwinner-r40-ccu.c
 create mode 100644 hw/misc/allwinner-r40-dramc.c
 create mode 100644 hw/misc/allwinner-sramc.c
 delete mode 100644 hw/misc/axp209.c
 create mode 100644 hw/misc/axp2xx.c
 create mode 100644 hw/net/can/xlnx-versal-canfd.c
 create mode 100644 target/arm/hyp_gdbstub.c
 create mode 100644 tests/qtest/xlnx-canfd-test.c
 create mode 100644 tests/tcg/aarch64/dcpodp.c
 create mode 100644 tests/tcg/aarch64/dcpop.c

From: Francesco Cagnin <fcagnin@quarkslab.com>

These helpers will also be used for HVF. Aside from reformatting a
couple of comments for 'checkpatch.pl' and updating meson to compile
'hyp_gdbstub.c', this is just code motion.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230601153107.81955-2-fcagnin@quarkslab.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h   |  50 +++++++
 target/arm/hyp_gdbstub.c | 253 +++++++++++++++++++++++++++++++++++
 target/arm/kvm64.c       | 276 ---------------------------------------
 target/arm/meson.build   |   3 +-
 4 files changed, 305 insertions(+), 277 deletions(-)
 create mode 100644 target/arm/hyp_gdbstub.c

diff --git a/target/arm/internals.h b/target/arm/internals.h
24
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/helper-mve.h
22
--- a/target/arm/internals.h
26
+++ b/target/arm/helper-mve.h
23
+++ b/target/arm/internals.h
27
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
@@ -XXX,XX +XXX,XX @@ static inline bool arm_fgt_active(CPUARMState *env, int el)
28
25
}
29
DEF_HELPER_FLAGS_4(mve_vshlc, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
26
30
27
void assert_hflags_rebuild_correctly(CPUARMState *env);
31
+DEF_HELPER_FLAGS_3(mve_sshrl, TCG_CALL_NO_RWG, i64, env, i64, i32)
28
+
32
+DEF_HELPER_FLAGS_3(mve_ushll, TCG_CALL_NO_RWG, i64, env, i64, i32)
29
+/*
33
DEF_HELPER_FLAGS_3(mve_sqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
30
+ * Although the ARM implementation of hardware assisted debugging
34
DEF_HELPER_FLAGS_3(mve_uqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
31
+ * allows for different breakpoints per-core, the current GDB
35
+DEF_HELPER_FLAGS_3(mve_sqrshrl, TCG_CALL_NO_RWG, i64, env, i64, i32)
32
+ * interface treats them as a global pool of registers (which seems to
36
+DEF_HELPER_FLAGS_3(mve_uqrshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
33
+ * be the case for x86, ppc and s390). As a result we store one copy
37
+DEF_HELPER_FLAGS_3(mve_sqrshrl48, TCG_CALL_NO_RWG, i64, env, i64, i32)
34
+ * of registers which is used for all active cores.
38
+DEF_HELPER_FLAGS_3(mve_uqrshll48, TCG_CALL_NO_RWG, i64, env, i64, i32)
35
+ *
39
diff --git a/target/arm/translate.h b/target/arm/translate.h
36
+ * Write access is serialised by virtue of the GDB protocol which
40
index XXXXXXX..XXXXXXX 100644
37
+ * updates things. Read access (i.e. when the values are copied to the
41
--- a/target/arm/translate.h
38
+ * vCPU) is also gated by GDB's run control.
42
+++ b/target/arm/translate.h
39
+ *
43
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
40
+ * This is not unreasonable as most of the time debugging kernels you
44
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
41
+ * never know which core will eventually execute your function.
45
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
42
+ */
46
typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
43
+
47
+typedef void WideShiftFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i32);
44
+typedef struct {
48
45
+ uint64_t bcr;
49
/**
46
+ uint64_t bvr;
50
* arm_tbflags_from_tb:
47
+} HWBreakpoint;
51
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
48
+
52
index XXXXXXX..XXXXXXX 100644
49
+/*
53
--- a/target/arm/t32.decode
50
+ * The watchpoint registers can cover more area than the requested
54
+++ b/target/arm/t32.decode
51
+ * watchpoint so we need to store the additional information
52
+ * somewhere. We also need to supply a CPUWatchpoint to the GDB stub
53
+ * when the watchpoint is hit.
54
+ */
55
+typedef struct {
56
+ uint64_t wcr;
57
+ uint64_t wvr;
58
+ CPUWatchpoint details;
59
+} HWWatchpoint;
60
+
61
+/* Maximum and current break/watch point counts */
62
+extern int max_hw_bps, max_hw_wps;
63
+extern GArray *hw_breakpoints, *hw_watchpoints;
64
+
65
+#define cur_hw_wps (hw_watchpoints->len)
66
+#define cur_hw_bps (hw_breakpoints->len)
67
+#define get_hw_bp(i) (&g_array_index(hw_breakpoints, HWBreakpoint, i))
68
+#define get_hw_wp(i) (&g_array_index(hw_watchpoints, HWWatchpoint, i))
69
+
70
+bool find_hw_breakpoint(CPUState *cpu, target_ulong pc);
71
+int insert_hw_breakpoint(target_ulong pc);
72
+int delete_hw_breakpoint(target_ulong pc);
73
+
74
+bool check_watchpoint_in_range(int i, target_ulong addr);
75
+CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr);
76
+int insert_hw_watchpoint(target_ulong addr, target_ulong len, int type);
77
+int delete_hw_watchpoint(target_ulong addr, target_ulong len, int type);
78
#endif
79
diff --git a/target/arm/hyp_gdbstub.c b/target/arm/hyp_gdbstub.c
80
new file mode 100644
81
index XXXXXXX..XXXXXXX
82
--- /dev/null
83
+++ b/target/arm/hyp_gdbstub.c
55
@@ -XXX,XX +XXX,XX @@
84
@@ -XXX,XX +XXX,XX @@
56
&mcrr !extern cp opc1 crm rt rt2
85
+/*
57
86
+ * ARM implementation of KVM and HVF hooks, 64 bit specific code
58
&mve_shl_ri rdalo rdahi shim
87
+ *
59
+&mve_shl_rr rdalo rdahi rm
88
+ * Copyright Mian-M. Hamayun 2013, Virtual Open Systems
60
89
+ * Copyright Alex Bennée 2014, Linaro
61
# rdahi: bits [3:1] from insn, bit 0 is 1
90
+ *
62
# rdalo: bits [3:1] from insn, bit 0 is 0
91
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
63
@@ -XXX,XX +XXX,XX @@
92
+ * See the COPYING file in the top-level directory.
64
93
+ *
65
@mve_shl_ri ....... .... . ... . . ... ... . .. .. .... \
94
+ */
66
&mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
95
+
67
+@mve_shl_rr ....... .... . ... . rm:4 ... . .. .. .... \
96
+#include "qemu/osdep.h"
68
+ &mve_shl_rr rdalo=%rdalo_17 rdahi=%rdahi_9
97
+#include "cpu.h"
69
98
+#include "internals.h"
70
{
99
+#include "exec/gdbstub.h"
71
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
100
+
72
@@ -XXX,XX +XXX,XX @@ BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
101
+/* Maximum and current break/watch point counts */
73
URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
102
+int max_hw_bps, max_hw_wps;
74
SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
103
+GArray *hw_breakpoints, *hw_watchpoints;
75
SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
104
+
76
+
105
+/**
77
+ LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
106
+ * insert_hw_breakpoint()
78
+ ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
107
+ * @addr: address of breakpoint
79
+ UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101 @mve_shl_rr
108
+ *
80
+ SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101 @mve_shl_rr
109
+ * See ARM ARM D2.9.1 for details but here we are only going to create
81
+ UQRSHLL48_rr 1110101 0010 1 ... 1 .... ... 1 1000 1101 @mve_shl_rr
110
+ * simple un-linked breakpoints (i.e. we don't chain breakpoints
82
+ SQRSHRL48_rr 1110101 0010 1 ... 1 .... ... 1 1010 1101 @mve_shl_rr
111
+ * together to match address and context or vmid). The hardware is
83
]
112
+ * capable of fancier matching but that will require exposing that
84
113
+ * fanciness to GDB's interface
85
MOV_rxri 1110101 0010 . 1111 0 ... .... .... .... @s_rxr_shi
114
+ *
86
ORR_rrri 1110101 0010 . .... 0 ... .... .... .... @s_rrr_shi
115
+ * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
87
+
116
+ *
88
+ # v8.1M CSEL and friends
117
+ * 31 24 23 20 19 16 15 14 13 12 9 8 5 4 3 2 1 0
89
+ CSEL 1110101 0010 1 rn:4 10 op:2 rd:4 fcond:4 rm:4
118
+ * +------+------+-------+-----+----+------+-----+------+-----+---+
90
}
119
+ * | RES0 | BT | LBN | SSC | HMC| RES0 | BAS | RES0 | PMC | E |
91
{
120
+ * +------+------+-------+-----+----+------+-----+------+-----+---+
92
MVN_rxri 1110101 0011 . 1111 0 ... .... .... .... @s_rxr_shi
121
+ *
93
@@ -XXX,XX +XXX,XX @@ SBC_rrri 1110101 1011 . .... 0 ... .... .... .... @s_rrr_shi
122
+ * BT: Breakpoint type (0 = unlinked address match)
94
}
123
+ * LBN: Linked BP number (0 = unused)
95
RSB_rrri 1110101 1110 . .... 0 ... .... .... .... @s_rrr_shi
124
+ * SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
96
125
+ * BAS: Byte Address Select (RES1 for AArch64)
97
-# v8.1M CSEL and friends
126
+ * E: Enable bit
98
-CSEL 1110101 0010 1 rn:4 10 op:2 rd:4 fcond:4 rm:4
127
+ *
99
-
128
+ * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
100
# Data-processing (register-shifted register)
129
+ *
101
130
+ * 63 53 52 49 48 2 1 0
102
MOV_rxrr 1111 1010 0 shty:2 s:1 rm:4 1111 rd:4 0000 rs:4 \
131
+ * +------+-----------+----------+-----+
103
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
132
+ * | RESS | VA[52:49] | VA[48:2] | 0 0 |
104
index XXXXXXX..XXXXXXX 100644
133
+ * +------+-----------+----------+-----+
105
--- a/target/arm/mve_helper.c
134
+ *
106
+++ b/target/arm/mve_helper.c
135
+ * Depending on the addressing mode bits the top bits of the register
107
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
136
+ * are a sign extension of the highest applicable VA bit. Some
108
return rdm;
137
+ * versions of GDB don't do it correctly so we ensure they are correct
109
}
138
+ * here so future PC comparisons will work properly.
110
139
+ */
111
+uint64_t HELPER(mve_sshrl)(CPUARMState *env, uint64_t n, uint32_t shift)
140
+
141
+int insert_hw_breakpoint(target_ulong addr)
112
+{
142
+{
113
+ return do_sqrshl_d(n, -(int8_t)shift, false, NULL);
143
+ HWBreakpoint brk = {
144
+ .bcr = 0x1, /* BCR E=1, enable */
145
+ .bvr = sextract64(addr, 0, 53)
146
+ };
147
+
148
+ if (cur_hw_bps >= max_hw_bps) {
149
+ return -ENOBUFS;
150
+ }
151
+
152
+ brk.bcr = deposit32(brk.bcr, 1, 2, 0x3); /* PMC = 11 */
153
+ brk.bcr = deposit32(brk.bcr, 5, 4, 0xf); /* BAS = RES1 */
154
+
155
+ g_array_append_val(hw_breakpoints, brk);
156
+
157
+ return 0;
114
+}
158
+}
115
+
159
+
116
+uint64_t HELPER(mve_ushll)(CPUARMState *env, uint64_t n, uint32_t shift)
160
+/**
161
+ * delete_hw_breakpoint()
162
+ * @pc: address of breakpoint
163
+ *
164
+ * Delete a breakpoint and shuffle any above down
165
+ */
166
+
167
+int delete_hw_breakpoint(target_ulong pc)
117
+{
168
+{
118
+ return do_uqrshl_d(n, (int8_t)shift, false, NULL);
169
+ int i;
119
+}
170
+ for (i = 0; i < hw_breakpoints->len; i++) {
120
+
171
+ HWBreakpoint *brk = get_hw_bp(i);
121
uint64_t HELPER(mve_sqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
172
+ if (brk->bvr == pc) {
122
{
173
+ g_array_remove_index(hw_breakpoints, i);
123
return do_sqrshl_d(n, (int8_t)shift, false, &env->QF);
124
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
125
{
126
return do_uqrshl_d(n, (int8_t)shift, false, &env->QF);
127
}
128
+
129
+uint64_t HELPER(mve_sqrshrl)(CPUARMState *env, uint64_t n, uint32_t shift)
130
+{
131
+ return do_sqrshl_d(n, -(int8_t)shift, true, &env->QF);
132
+}
133
+
134
+uint64_t HELPER(mve_uqrshll)(CPUARMState *env, uint64_t n, uint32_t shift)
135
+{
136
+ return do_uqrshl_d(n, (int8_t)shift, true, &env->QF);
137
+}
138
+
139
+/* Operate on 64-bit values, but saturate at 48 bits */
140
+static inline int64_t do_sqrshl48_d(int64_t src, int64_t shift,
141
+ bool round, uint32_t *sat)
142
+{
143
+ if (shift <= -48) {
144
+ /* Rounding the sign bit always produces 0. */
145
+ if (round) {
146
+ return 0;
174
+ return 0;
147
+ }
175
+ }
148
+ return src >> 63;
176
+ }
149
+ } else if (shift < 0) {
177
+ return -ENOENT;
150
+ if (round) {
178
+}
151
+ src >>= -shift - 1;
179
+
152
+ return (src >> 1) + (src & 1);
180
+/**
181
+ * insert_hw_watchpoint()
182
+ * @addr: address of watch point
183
+ * @len: size of area
184
+ * @type: type of watch point
185
+ *
186
+ * See ARM ARM D2.10. As with the breakpoints we can do some advanced
187
+ * stuff if we want to. The watch points can be linked with the break
188
+ * points above to make them context aware. However for simplicity
189
+ * currently we only deal with simple read/write watch points.
190
+ *
191
+ * D7.3.11 DBGWCR<n>_EL1, Debug Watchpoint Control Registers
192
+ *
193
+ * 31 29 28 24 23 21 20 19 16 15 14 13 12 5 4 3 2 1 0
194
+ * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
195
+ * | RES0 | MASK | RES0 | WT | LBN | SSC | HMC | BAS | LSC | PAC | E |
196
+ * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
197
+ *
198
+ * MASK: num bits addr mask (0=none,01/10=res,11=3 bits (8 bytes))
199
+ * WT: 0 - unlinked, 1 - linked (not currently used)
200
+ * LBN: Linked BP number (not currently used)
201
+ * SSC/HMC/PAC: Security, Higher and Priv access control (Table D2-11)
202
+ * BAS: Byte Address Select
203
+ * LSC: Load/Store control (01: load, 10: store, 11: both)
204
+ * E: Enable
205
+ *
206
+ * The bottom 2 bits of the value register are masked. Therefore to
207
+ * break on any sizes smaller than an unaligned word you need to set
208
+ * MASK=0, BAS=bit per byte in question. For larger regions (^2) you
209
+ * need to ensure you mask the address as required and set BAS=0xff
210
+ */
211
+
212
+int insert_hw_watchpoint(target_ulong addr, target_ulong len, int type)
213
+{
214
+ HWWatchpoint wp = {
215
+ .wcr = R_DBGWCR_E_MASK, /* E=1, enable */
216
+ .wvr = addr & (~0x7ULL),
217
+ .details = { .vaddr = addr, .len = len }
218
+ };
219
+
220
+ if (cur_hw_wps >= max_hw_wps) {
221
+ return -ENOBUFS;
222
+ }
223
+
224
+ /*
225
+ * HMC=0 SSC=0 PAC=3 will hit EL0 or EL1, any security state,
226
+ * valid whether EL3 is implemented or not
227
+ */
228
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, PAC, 3);
229
+
230
+ switch (type) {
231
+ case GDB_WATCHPOINT_READ:
232
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 1);
233
+ wp.details.flags = BP_MEM_READ;
234
+ break;
235
+ case GDB_WATCHPOINT_WRITE:
236
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 2);
237
+ wp.details.flags = BP_MEM_WRITE;
238
+ break;
239
+ case GDB_WATCHPOINT_ACCESS:
240
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 3);
241
+ wp.details.flags = BP_MEM_ACCESS;
242
+ break;
243
+ default:
244
+ g_assert_not_reached();
245
+ break;
246
+ }
247
+ if (len <= 8) {
248
+ /* we align the address and set the bits in BAS */
249
+ int off = addr & 0x7;
250
+ int bas = (1 << len) - 1;
251
+
252
+ wp.wcr = deposit32(wp.wcr, 5 + off, 8 - off, bas);
253
+ } else {
254
+ /* For ranges above 8 bytes we need to be a power of 2 */
255
+ if (is_power_of_2(len)) {
256
+ int bits = ctz64(len);
257
+
258
+ wp.wvr &= ~((1 << bits) - 1);
259
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, MASK, bits);
260
+ wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, BAS, 0xff);
261
+ } else {
262
+ return -ENOBUFS;
153
+ }
263
+ }
154
+ return src >> -shift;
264
+ }
155
+ } else if (shift < 48) {
265
+
156
+ int64_t val = src << shift;
266
+ g_array_append_val(hw_watchpoints, wp);
157
+ int64_t extval = sextract64(val, 0, 48);
267
+ return 0;
158
+ if (!sat || val == extval) {
268
+}
159
+ return extval;
269
+
270
+bool check_watchpoint_in_range(int i, target_ulong addr)
271
+{
272
+ HWWatchpoint *wp = get_hw_wp(i);
273
+ uint64_t addr_top, addr_bottom = wp->wvr;
274
+ int bas = extract32(wp->wcr, 5, 8);
275
+ int mask = extract32(wp->wcr, 24, 4);
276
+
277
+ if (mask) {
278
+ addr_top = addr_bottom + (1 << mask);
279
+ } else {
280
+ /*
281
+ * BAS must be contiguous but can offset against the base
282
+ * address in DBGWVR
283
+ */
284
+ addr_bottom = addr_bottom + ctz32(bas);
285
+ addr_top = addr_bottom + clo32(bas);
286
+ }
287
+
288
+ if (addr >= addr_bottom && addr <= addr_top) {
289
+ return true;
290
+ }
291
+
292
+ return false;
293
+}
294
+
295
+/**
296
+ * delete_hw_watchpoint()
297
+ * @addr: address of breakpoint
298
+ *
299
+ * Delete a breakpoint and shuffle any above down
300
+ */
301
+
302
+int delete_hw_watchpoint(target_ulong addr, target_ulong len, int type)
303
+{
304
+ int i;
305
+ for (i = 0; i < cur_hw_wps; i++) {
306
+ if (check_watchpoint_in_range(i, addr)) {
307
+ g_array_remove_index(hw_watchpoints, i);
308
+ return 0;
160
+ }
309
+ }
161
+ } else if (!sat || src == 0) {
310
+ }
162
+ return 0;
311
+ return -ENOENT;
163
+ }
164
+
165
+ *sat = 1;
166
+ return (1ULL << 47) - (src >= 0);
167
+}
312
+}
168
+
313
+
169
+/* Operate on 64-bit values, but saturate at 48 bits */
314
+bool find_hw_breakpoint(CPUState *cpu, target_ulong pc)
170
+static inline uint64_t do_uqrshl48_d(uint64_t src, int64_t shift,
171
+ bool round, uint32_t *sat)
172
+{
315
+{
173
+ uint64_t val, extval;
316
+ int i;
174
+
317
+
175
+ if (shift <= -(48 + round)) {
318
+ for (i = 0; i < cur_hw_bps; i++) {
176
+ return 0;
319
+ HWBreakpoint *bp = get_hw_bp(i);
177
+ } else if (shift < 0) {
320
+ if (bp->bvr == pc) {
178
+ if (round) {
321
+ return true;
179
+ val = src >> (-shift - 1);
180
+ val = (val >> 1) + (val & 1);
181
+ } else {
182
+ val = src >> -shift;
183
+ }
322
+ }
184
+ extval = extract64(val, 0, 48);
323
+ }
185
+ if (!sat || val == extval) {
324
+ return false;
186
+ return extval;
325
+}
326
+
327
+CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr)
328
+{
329
+ int i;
330
+
331
+ for (i = 0; i < cur_hw_wps; i++) {
332
+ if (check_watchpoint_in_range(i, addr)) {
333
+ return &get_hw_wp(i)->details;
187
+ }
334
+ }
188
+ } else if (shift < 48) {
335
+ }
189
+ uint64_t val = src << shift;
336
+ return NULL;
190
+ uint64_t extval = extract64(val, 0, 48);
191
+ if (!sat || val == extval) {
192
+ return extval;
193
+ }
194
+ } else if (!sat || src == 0) {
195
+ return 0;
196
+ }
197
+
198
+ *sat = 1;
199
+ return MAKE_64BIT_MASK(0, 48);
200
+}
337
+}
201
+
338
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
202
+uint64_t HELPER(mve_sqrshrl48)(CPUARMState *env, uint64_t n, uint32_t shift)
203
+{
204
+ return do_sqrshl48_d(n, -(int8_t)shift, true, &env->QF);
205
+}
206
+
207
+uint64_t HELPER(mve_uqrshll48)(CPUARMState *env, uint64_t n, uint32_t shift)
208
+{
209
+ return do_uqrshl48_d(n, (int8_t)shift, true, &env->QF);
210
+}
211
diff --git a/target/arm/translate.c b/target/arm/translate.c
212
index XXXXXXX..XXXXXXX 100644
339
index XXXXXXX..XXXXXXX 100644
213
--- a/target/arm/translate.c
340
--- a/target/arm/kvm64.c
214
+++ b/target/arm/translate.c
341
+++ b/target/arm/kvm64.c
215
@@ -XXX,XX +XXX,XX @@ static bool trans_URSHRL_ri(DisasContext *s, arg_mve_shl_ri *a)
342
@@ -XXX,XX +XXX,XX @@
216
return do_mve_shl_ri(s, a, gen_urshr64_i64);
343
344
static bool have_guest_debug;
345
346
-/*
347
- * Although the ARM implementation of hardware assisted debugging
348
- * allows for different breakpoints per-core, the current GDB
349
- * interface treats them as a global pool of registers (which seems to
350
- * be the case for x86, ppc and s390). As a result we store one copy
351
- * of registers which is used for all active cores.
352
- *
353
- * Write access is serialised by virtue of the GDB protocol which
354
- * updates things. Read access (i.e. when the values are copied to the
355
- * vCPU) is also gated by GDB's run control.
356
- *
357
- * This is not unreasonable as most of the time debugging kernels you
358
- * never know which core will eventually execute your function.
359
- */
360
-
361
-typedef struct {
362
- uint64_t bcr;
363
- uint64_t bvr;
364
-} HWBreakpoint;
365
-
366
-/* The watchpoint registers can cover more area than the requested
367
- * watchpoint so we need to store the additional information
368
- * somewhere. We also need to supply a CPUWatchpoint to the GDB stub
369
- * when the watchpoint is hit.
370
- */
371
-typedef struct {
372
- uint64_t wcr;
373
- uint64_t wvr;
374
- CPUWatchpoint details;
375
-} HWWatchpoint;
376
-
377
-/* Maximum and current break/watch point counts */
378
-int max_hw_bps, max_hw_wps;
379
-GArray *hw_breakpoints, *hw_watchpoints;
380
-
381
-#define cur_hw_wps (hw_watchpoints->len)
382
-#define cur_hw_bps (hw_breakpoints->len)
383
-#define get_hw_bp(i) (&g_array_index(hw_breakpoints, HWBreakpoint, i))
384
-#define get_hw_wp(i) (&g_array_index(hw_watchpoints, HWWatchpoint, i))
385
-
386
void kvm_arm_init_debug(KVMState *s)
387
{
388
have_guest_debug = kvm_check_extension(s,
389
@@ -XXX,XX +XXX,XX @@ void kvm_arm_init_debug(KVMState *s)
390
return;
217
}
391
}
218
392
219
+static bool do_mve_shl_rr(DisasContext *s, arg_mve_shl_rr *a, WideShiftFn *fn)
393
-/**
220
+{
394
- * insert_hw_breakpoint()
221
+ TCGv_i64 rda;
395
- * @addr: address of breakpoint
222
+ TCGv_i32 rdalo, rdahi;
396
- *
223
+
397
- * See ARM ARM D2.9.1 for details but here we are only going to create
224
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
398
- * simple un-linked breakpoints (i.e. we don't chain breakpoints
225
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
399
- * together to match address and context or vmid). The hardware is
226
+ return false;
400
- * capable of fancier matching but that will require exposing that
227
+ }
401
- * fanciness to GDB's interface
228
+ if (a->rdahi == 15) {
402
- *
229
+ /* These are a different encoding (SQSHL/SRSHR/UQSHL/URSHR) */
403
- * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
230
+ return false;
404
- *
231
+ }
405
- * 31 24 23 20 19 16 15 14 13 12 9 8 5 4 3 2 1 0
232
+ if (!dc_isar_feature(aa32_mve, s) ||
406
- * +------+------+-------+-----+----+------+-----+------+-----+---+
233
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
407
- * | RES0 | BT | LBN | SSC | HMC| RES0 | BAS | RES0 | PMC | E |
234
+ a->rdahi == 13 || a->rm == 13 || a->rm == 15 ||
408
- * +------+------+-------+-----+----+------+-----+------+-----+---+
235
+ a->rm == a->rdahi || a->rm == a->rdalo) {
409
- *
236
+ /* These rdahi/rdalo/rm cases are UNPREDICTABLE; we choose to UNDEF */
410
- * BT: Breakpoint type (0 = unlinked address match)
237
+ unallocated_encoding(s);
411
- * LBN: Linked BP number (0 = unused)
238
+ return true;
412
- * SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
239
+ }
413
- * BAS: Byte Address Select (RES1 for AArch64)
240
+
414
- * E: Enable bit
241
+ rda = tcg_temp_new_i64();
415
- *
242
+ rdalo = load_reg(s, a->rdalo);
416
- * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
243
+ rdahi = load_reg(s, a->rdahi);
417
- *
244
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
418
- * 63 53 52 49 48 2 1 0
245
+
419
- * +------+-----------+----------+-----+
246
+ /* The helper takes care of the sign-extension of the low 8 bits of Rm */
420
- * | RESS | VA[52:49] | VA[48:2] | 0 0 |
247
+ fn(rda, cpu_env, rda, cpu_R[a->rm]);
421
- * +------+-----------+----------+-----+
248
+
422
- *
249
+ tcg_gen_extrl_i64_i32(rdalo, rda);
423
- * Depending on the addressing mode bits the top bits of the register
250
+ tcg_gen_extrh_i64_i32(rdahi, rda);
424
- * are a sign extension of the highest applicable VA bit. Some
251
+ store_reg(s, a->rdalo, rdalo);
425
- * versions of GDB don't do it correctly so we ensure they are correct
252
+ store_reg(s, a->rdahi, rdahi);
426
- * here so future PC comparisons will work properly.
253
+ tcg_temp_free_i64(rda);
427
- */
254
+
428
-
255
+ return true;
429
-static int insert_hw_breakpoint(target_ulong addr)
256
+}
430
-{
257
+
431
- HWBreakpoint brk = {
258
+static bool trans_LSLL_rr(DisasContext *s, arg_mve_shl_rr *a)
432
- .bcr = 0x1, /* BCR E=1, enable */
259
+{
433
- .bvr = sextract64(addr, 0, 53)
260
+ return do_mve_shl_rr(s, a, gen_helper_mve_ushll);
434
- };
261
+}
435
-
262
+
436
- if (cur_hw_bps >= max_hw_bps) {
263
+static bool trans_ASRL_rr(DisasContext *s, arg_mve_shl_rr *a)
437
- return -ENOBUFS;
264
+{
438
- }
265
+ return do_mve_shl_rr(s, a, gen_helper_mve_sshrl);
439
-
266
+}
440
- brk.bcr = deposit32(brk.bcr, 1, 2, 0x3); /* PMC = 11 */
267
+
441
- brk.bcr = deposit32(brk.bcr, 5, 4, 0xf); /* BAS = RES1 */
268
+static bool trans_UQRSHLL64_rr(DisasContext *s, arg_mve_shl_rr *a)
442
-
269
+{
443
- g_array_append_val(hw_breakpoints, brk);
270
+ return do_mve_shl_rr(s, a, gen_helper_mve_uqrshll);
444
-
271
+}
445
- return 0;
272
+
446
-}
273
+static bool trans_SQRSHRL64_rr(DisasContext *s, arg_mve_shl_rr *a)
447
-
274
+{
448
-/**
275
+ return do_mve_shl_rr(s, a, gen_helper_mve_sqrshrl);
449
- * delete_hw_breakpoint()
276
+}
450
- * @pc: address of breakpoint
277
+
451
- *
278
+static bool trans_UQRSHLL48_rr(DisasContext *s, arg_mve_shl_rr *a)
452
- * Delete a breakpoint and shuffle any above down
279
+{
453
- */
280
+ return do_mve_shl_rr(s, a, gen_helper_mve_uqrshll48);
454
-
281
+}
455
-static int delete_hw_breakpoint(target_ulong pc)
282
+
456
-{
283
+static bool trans_SQRSHRL48_rr(DisasContext *s, arg_mve_shl_rr *a)
457
- int i;
284
+{
458
- for (i = 0; i < hw_breakpoints->len; i++) {
285
+ return do_mve_shl_rr(s, a, gen_helper_mve_sqrshrl48);
459
- HWBreakpoint *brk = get_hw_bp(i);
286
+}
460
- if (brk->bvr == pc) {
287
+
461
- g_array_remove_index(hw_breakpoints, i);
288
/*
462
- return 0;
289
* Multiply and multiply accumulate
463
- }
290
*/
464
- }
465
- return -ENOENT;
466
-}
467
-
468
-/**
469
- * insert_hw_watchpoint()
470
- * @addr: address of watch point
471
- * @len: size of area
472
- * @type: type of watch point
473
- *
474
- * See ARM ARM D2.10. As with the breakpoints we can do some advanced
475
- * stuff if we want to. The watch points can be linked with the break
476
- * points above to make them context aware. However for simplicity
477
- * currently we only deal with simple read/write watch points.
478
- *
479
- * D7.3.11 DBGWCR<n>_EL1, Debug Watchpoint Control Registers
480
- *
481
- * 31 29 28 24 23 21 20 19 16 15 14 13 12 5 4 3 2 1 0
482
- * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
483
- * | RES0 | MASK | RES0 | WT | LBN | SSC | HMC | BAS | LSC | PAC | E |
484
- * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
485
- *
486
- * MASK: num bits addr mask (0=none,01/10=res,11=3 bits (8 bytes))
487
- * WT: 0 - unlinked, 1 - linked (not currently used)
488
- * LBN: Linked BP number (not currently used)
489
- * SSC/HMC/PAC: Security, Higher and Priv access control (Table D2-11)
490
- * BAS: Byte Address Select
491
- * LSC: Load/Store control (01: load, 10: store, 11: both)
492
- * E: Enable
493
- *
494
- * The bottom 2 bits of the value register are masked. Therefore to
495
- * break on any sizes smaller than an unaligned word you need to set
496
- * MASK=0, BAS=bit per byte in question. For larger regions (^2) you
497
- * need to ensure you mask the address as required and set BAS=0xff
498
- */
499
-
500
-static int insert_hw_watchpoint(target_ulong addr,
501
- target_ulong len, int type)
502
-{
503
- HWWatchpoint wp = {
504
- .wcr = R_DBGWCR_E_MASK, /* E=1, enable */
505
- .wvr = addr & (~0x7ULL),
506
- .details = { .vaddr = addr, .len = len }
507
- };
508
-
509
- if (cur_hw_wps >= max_hw_wps) {
510
- return -ENOBUFS;
511
- }
512
-
513
- /*
514
- * HMC=0 SSC=0 PAC=3 will hit EL0 or EL1, any security state,
515
- * valid whether EL3 is implemented or not
516
- */
517
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, PAC, 3);
518
-
519
- switch (type) {
520
- case GDB_WATCHPOINT_READ:
521
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 1);
522
- wp.details.flags = BP_MEM_READ;
523
- break;
524
- case GDB_WATCHPOINT_WRITE:
525
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 2);
526
- wp.details.flags = BP_MEM_WRITE;
527
- break;
528
- case GDB_WATCHPOINT_ACCESS:
529
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 3);
530
- wp.details.flags = BP_MEM_ACCESS;
531
- break;
532
- default:
533
- g_assert_not_reached();
534
- break;
535
- }
536
- if (len <= 8) {
537
- /* we align the address and set the bits in BAS */
538
- int off = addr & 0x7;
539
- int bas = (1 << len) - 1;
540
-
541
- wp.wcr = deposit32(wp.wcr, 5 + off, 8 - off, bas);
542
- } else {
543
- /* For ranges above 8 bytes we need to be a power of 2 */
544
- if (is_power_of_2(len)) {
545
- int bits = ctz64(len);
546
-
547
- wp.wvr &= ~((1 << bits) - 1);
548
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, MASK, bits);
549
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, BAS, 0xff);
550
- } else {
551
- return -ENOBUFS;
552
- }
553
- }
554
-
555
- g_array_append_val(hw_watchpoints, wp);
556
- return 0;
557
-}
558
-
559
-
560
-static bool check_watchpoint_in_range(int i, target_ulong addr)
561
-{
562
- HWWatchpoint *wp = get_hw_wp(i);
563
- uint64_t addr_top, addr_bottom = wp->wvr;
564
- int bas = extract32(wp->wcr, 5, 8);
565
- int mask = extract32(wp->wcr, 24, 4);
566
-
567
- if (mask) {
568
- addr_top = addr_bottom + (1 << mask);
569
- } else {
570
- /* BAS must be contiguous but can offset against the base
571
- * address in DBGWVR */
572
- addr_bottom = addr_bottom + ctz32(bas);
573
- addr_top = addr_bottom + clo32(bas);
574
- }
575
-
576
- if (addr >= addr_bottom && addr <= addr_top) {
577
- return true;
578
- }
579
-
580
- return false;
581
-}
582
-
583
-/**
584
- * delete_hw_watchpoint()
585
- * @addr: address of breakpoint
586
- *
587
- * Delete a breakpoint and shuffle any above down
588
- */
589
-
590
-static int delete_hw_watchpoint(target_ulong addr,
591
- target_ulong len, int type)
592
-{
593
- int i;
594
- for (i = 0; i < cur_hw_wps; i++) {
595
- if (check_watchpoint_in_range(i, addr)) {
596
- g_array_remove_index(hw_watchpoints, i);
597
- return 0;
598
- }
599
- }
600
- return -ENOENT;
601
-}
602
-
603
-
604
int kvm_arch_insert_hw_breakpoint(target_ulong addr,
605
target_ulong len, int type)
606
{
607
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_hw_debug_active(CPUState *cs)
608
return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
609
}
610
611
-static bool find_hw_breakpoint(CPUState *cpu, target_ulong pc)
612
-{
613
- int i;
614
-
615
- for (i = 0; i < cur_hw_bps; i++) {
616
- HWBreakpoint *bp = get_hw_bp(i);
617
- if (bp->bvr == pc) {
618
- return true;
619
- }
620
- }
621
- return false;
622
-}
623
-
624
-static CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr)
625
-{
626
- int i;
627
-
628
- for (i = 0; i < cur_hw_wps; i++) {
629
- if (check_watchpoint_in_range(i, addr)) {
630
- return &get_hw_wp(i)->details;
631
- }
632
- }
633
- return NULL;
634
-}
635
-
636
static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
637
const char *name)
638
{
639
diff --git a/target/arm/meson.build b/target/arm/meson.build
640
index XXXXXXX..XXXXXXX 100644
641
--- a/target/arm/meson.build
642
+++ b/target/arm/meson.build
643
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
644
))
645
arm_ss.add(zlib)
646
647
-arm_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c', 'kvm64.c'), if_false: files('kvm-stub.c'))
648
+arm_ss.add(when: 'CONFIG_KVM', if_true: files('hyp_gdbstub.c', 'kvm.c', 'kvm64.c'), if_false: files('kvm-stub.c'))
649
+arm_ss.add(when: 'CONFIG_HVF', if_true: files('hyp_gdbstub.c'))
650
651
arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
652
'cpu64.c',
--
2.34.1
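
The byte-address-select arithmetic in insert_hw_watchpoint() above (BAS bits
[12:5] of DBGWCR, one bit per watched byte within the aligned doubleword) is
easy to misread, so here is a minimal, self-contained sketch of the same
computation. It is illustrative only and not part of the series: the helper
names are made up, and QEMU's deposit32()/R_DBGWCR_E_MASK are replaced by
plain bit operations.

```c
#include <stdint.h>
#include <stdio.h>

/* Deposit 'len' bits of 'val' into 'dst' starting at bit 'start'. */
static uint32_t set_bits(uint32_t dst, int start, int len, uint32_t val)
{
    uint32_t mask = ((1u << len) - 1) << start;
    return (dst & ~mask) | ((val << start) & mask);
}

/* DBGWCR-style encoding for a watchpoint of len <= 8 bytes at 'addr'. */
static uint32_t wcr_for_small_watchpoint(uint64_t addr, int len)
{
    uint32_t wcr = 1;                    /* E=1, enable */
    int off = addr & 0x7;                /* byte offset within the doubleword */
    uint32_t bas = (1u << len) - 1;      /* one BAS bit per watched byte */

    /* Shift the BAS pattern up by 'off', exactly as deposit32() does above. */
    return set_bits(wcr, 5 + off, 8 - off, bas);
}

int main(void)
{
    /* Watch 2 bytes at offset 3: prints 0x301, i.e. BAS[12:5] = 0b00011000. */
    printf("wcr = 0x%x\n", wcr_for_small_watchpoint(0x1003, 2));
    return 0;
}
```

Ranges larger than 8 bytes take the other branch in the patch (power-of-two
length, MASK field set, BAS forced to 0xff), which this sketch does not cover.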
New patch
From: Francesco Cagnin <fcagnin@quarkslab.com>

Required for guest debugging.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-3-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/hvf/hvf.c | 213 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 213 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/hvf/hvf.c
16
+++ b/target/arm/hvf/hvf.c
17
@@ -XXX,XX +XXX,XX @@
18
#define SYSREG_ICC_SGI1R_EL1 SYSREG(3, 0, 12, 11, 5)
19
#define SYSREG_ICC_SRE_EL1 SYSREG(3, 0, 12, 12, 5)
20
21
+#define SYSREG_MDSCR_EL1 SYSREG(2, 0, 0, 2, 2)
22
+#define SYSREG_DBGBVR0_EL1 SYSREG(2, 0, 0, 0, 4)
23
+#define SYSREG_DBGBCR0_EL1 SYSREG(2, 0, 0, 0, 5)
24
+#define SYSREG_DBGWVR0_EL1 SYSREG(2, 0, 0, 0, 6)
25
+#define SYSREG_DBGWCR0_EL1 SYSREG(2, 0, 0, 0, 7)
26
+#define SYSREG_DBGBVR1_EL1 SYSREG(2, 0, 0, 1, 4)
27
+#define SYSREG_DBGBCR1_EL1 SYSREG(2, 0, 0, 1, 5)
28
+#define SYSREG_DBGWVR1_EL1 SYSREG(2, 0, 0, 1, 6)
29
+#define SYSREG_DBGWCR1_EL1 SYSREG(2, 0, 0, 1, 7)
30
+#define SYSREG_DBGBVR2_EL1 SYSREG(2, 0, 0, 2, 4)
31
+#define SYSREG_DBGBCR2_EL1 SYSREG(2, 0, 0, 2, 5)
32
+#define SYSREG_DBGWVR2_EL1 SYSREG(2, 0, 0, 2, 6)
33
+#define SYSREG_DBGWCR2_EL1 SYSREG(2, 0, 0, 2, 7)
34
+#define SYSREG_DBGBVR3_EL1 SYSREG(2, 0, 0, 3, 4)
35
+#define SYSREG_DBGBCR3_EL1 SYSREG(2, 0, 0, 3, 5)
36
+#define SYSREG_DBGWVR3_EL1 SYSREG(2, 0, 0, 3, 6)
37
+#define SYSREG_DBGWCR3_EL1 SYSREG(2, 0, 0, 3, 7)
38
+#define SYSREG_DBGBVR4_EL1 SYSREG(2, 0, 0, 4, 4)
39
+#define SYSREG_DBGBCR4_EL1 SYSREG(2, 0, 0, 4, 5)
40
+#define SYSREG_DBGWVR4_EL1 SYSREG(2, 0, 0, 4, 6)
41
+#define SYSREG_DBGWCR4_EL1 SYSREG(2, 0, 0, 4, 7)
42
+#define SYSREG_DBGBVR5_EL1 SYSREG(2, 0, 0, 5, 4)
43
+#define SYSREG_DBGBCR5_EL1 SYSREG(2, 0, 0, 5, 5)
44
+#define SYSREG_DBGWVR5_EL1 SYSREG(2, 0, 0, 5, 6)
45
+#define SYSREG_DBGWCR5_EL1 SYSREG(2, 0, 0, 5, 7)
46
+#define SYSREG_DBGBVR6_EL1 SYSREG(2, 0, 0, 6, 4)
47
+#define SYSREG_DBGBCR6_EL1 SYSREG(2, 0, 0, 6, 5)
48
+#define SYSREG_DBGWVR6_EL1 SYSREG(2, 0, 0, 6, 6)
49
+#define SYSREG_DBGWCR6_EL1 SYSREG(2, 0, 0, 6, 7)
50
+#define SYSREG_DBGBVR7_EL1 SYSREG(2, 0, 0, 7, 4)
51
+#define SYSREG_DBGBCR7_EL1 SYSREG(2, 0, 0, 7, 5)
52
+#define SYSREG_DBGWVR7_EL1 SYSREG(2, 0, 0, 7, 6)
53
+#define SYSREG_DBGWCR7_EL1 SYSREG(2, 0, 0, 7, 7)
54
+#define SYSREG_DBGBVR8_EL1 SYSREG(2, 0, 0, 8, 4)
55
+#define SYSREG_DBGBCR8_EL1 SYSREG(2, 0, 0, 8, 5)
56
+#define SYSREG_DBGWVR8_EL1 SYSREG(2, 0, 0, 8, 6)
57
+#define SYSREG_DBGWCR8_EL1 SYSREG(2, 0, 0, 8, 7)
58
+#define SYSREG_DBGBVR9_EL1 SYSREG(2, 0, 0, 9, 4)
59
+#define SYSREG_DBGBCR9_EL1 SYSREG(2, 0, 0, 9, 5)
60
+#define SYSREG_DBGWVR9_EL1 SYSREG(2, 0, 0, 9, 6)
61
+#define SYSREG_DBGWCR9_EL1 SYSREG(2, 0, 0, 9, 7)
62
+#define SYSREG_DBGBVR10_EL1 SYSREG(2, 0, 0, 10, 4)
63
+#define SYSREG_DBGBCR10_EL1 SYSREG(2, 0, 0, 10, 5)
64
+#define SYSREG_DBGWVR10_EL1 SYSREG(2, 0, 0, 10, 6)
65
+#define SYSREG_DBGWCR10_EL1 SYSREG(2, 0, 0, 10, 7)
66
+#define SYSREG_DBGBVR11_EL1 SYSREG(2, 0, 0, 11, 4)
67
+#define SYSREG_DBGBCR11_EL1 SYSREG(2, 0, 0, 11, 5)
68
+#define SYSREG_DBGWVR11_EL1 SYSREG(2, 0, 0, 11, 6)
69
+#define SYSREG_DBGWCR11_EL1 SYSREG(2, 0, 0, 11, 7)
70
+#define SYSREG_DBGBVR12_EL1 SYSREG(2, 0, 0, 12, 4)
71
+#define SYSREG_DBGBCR12_EL1 SYSREG(2, 0, 0, 12, 5)
72
+#define SYSREG_DBGWVR12_EL1 SYSREG(2, 0, 0, 12, 6)
73
+#define SYSREG_DBGWCR12_EL1 SYSREG(2, 0, 0, 12, 7)
74
+#define SYSREG_DBGBVR13_EL1 SYSREG(2, 0, 0, 13, 4)
75
+#define SYSREG_DBGBCR13_EL1 SYSREG(2, 0, 0, 13, 5)
76
+#define SYSREG_DBGWVR13_EL1 SYSREG(2, 0, 0, 13, 6)
77
+#define SYSREG_DBGWCR13_EL1 SYSREG(2, 0, 0, 13, 7)
78
+#define SYSREG_DBGBVR14_EL1 SYSREG(2, 0, 0, 14, 4)
79
+#define SYSREG_DBGBCR14_EL1 SYSREG(2, 0, 0, 14, 5)
80
+#define SYSREG_DBGWVR14_EL1 SYSREG(2, 0, 0, 14, 6)
81
+#define SYSREG_DBGWCR14_EL1 SYSREG(2, 0, 0, 14, 7)
82
+#define SYSREG_DBGBVR15_EL1 SYSREG(2, 0, 0, 15, 4)
83
+#define SYSREG_DBGBCR15_EL1 SYSREG(2, 0, 0, 15, 5)
84
+#define SYSREG_DBGWVR15_EL1 SYSREG(2, 0, 0, 15, 6)
85
+#define SYSREG_DBGWCR15_EL1 SYSREG(2, 0, 0, 15, 7)
86
+
87
#define WFX_IS_WFE (1 << 0)
88
89
#define TMR_CTL_ENABLE (1 << 0)
90
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
91
hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
92
}
93
break;
94
+ case SYSREG_DBGBVR0_EL1:
95
+ case SYSREG_DBGBVR1_EL1:
96
+ case SYSREG_DBGBVR2_EL1:
97
+ case SYSREG_DBGBVR3_EL1:
98
+ case SYSREG_DBGBVR4_EL1:
99
+ case SYSREG_DBGBVR5_EL1:
100
+ case SYSREG_DBGBVR6_EL1:
101
+ case SYSREG_DBGBVR7_EL1:
102
+ case SYSREG_DBGBVR8_EL1:
103
+ case SYSREG_DBGBVR9_EL1:
104
+ case SYSREG_DBGBVR10_EL1:
105
+ case SYSREG_DBGBVR11_EL1:
106
+ case SYSREG_DBGBVR12_EL1:
107
+ case SYSREG_DBGBVR13_EL1:
108
+ case SYSREG_DBGBVR14_EL1:
109
+ case SYSREG_DBGBVR15_EL1:
110
+ val = env->cp15.dbgbvr[SYSREG_CRM(reg)];
111
+ break;
112
+ case SYSREG_DBGBCR0_EL1:
113
+ case SYSREG_DBGBCR1_EL1:
114
+ case SYSREG_DBGBCR2_EL1:
115
+ case SYSREG_DBGBCR3_EL1:
116
+ case SYSREG_DBGBCR4_EL1:
117
+ case SYSREG_DBGBCR5_EL1:
118
+ case SYSREG_DBGBCR6_EL1:
119
+ case SYSREG_DBGBCR7_EL1:
120
+ case SYSREG_DBGBCR8_EL1:
121
+ case SYSREG_DBGBCR9_EL1:
122
+ case SYSREG_DBGBCR10_EL1:
123
+ case SYSREG_DBGBCR11_EL1:
124
+ case SYSREG_DBGBCR12_EL1:
125
+ case SYSREG_DBGBCR13_EL1:
126
+ case SYSREG_DBGBCR14_EL1:
127
+ case SYSREG_DBGBCR15_EL1:
128
+ val = env->cp15.dbgbcr[SYSREG_CRM(reg)];
129
+ break;
130
+ case SYSREG_DBGWVR0_EL1:
131
+ case SYSREG_DBGWVR1_EL1:
132
+ case SYSREG_DBGWVR2_EL1:
133
+ case SYSREG_DBGWVR3_EL1:
134
+ case SYSREG_DBGWVR4_EL1:
135
+ case SYSREG_DBGWVR5_EL1:
136
+ case SYSREG_DBGWVR6_EL1:
137
+ case SYSREG_DBGWVR7_EL1:
138
+ case SYSREG_DBGWVR8_EL1:
139
+ case SYSREG_DBGWVR9_EL1:
140
+ case SYSREG_DBGWVR10_EL1:
141
+ case SYSREG_DBGWVR11_EL1:
142
+ case SYSREG_DBGWVR12_EL1:
143
+ case SYSREG_DBGWVR13_EL1:
144
+ case SYSREG_DBGWVR14_EL1:
145
+ case SYSREG_DBGWVR15_EL1:
146
+ val = env->cp15.dbgwvr[SYSREG_CRM(reg)];
147
+ break;
148
+ case SYSREG_DBGWCR0_EL1:
149
+ case SYSREG_DBGWCR1_EL1:
150
+ case SYSREG_DBGWCR2_EL1:
151
+ case SYSREG_DBGWCR3_EL1:
152
+ case SYSREG_DBGWCR4_EL1:
153
+ case SYSREG_DBGWCR5_EL1:
154
+ case SYSREG_DBGWCR6_EL1:
155
+ case SYSREG_DBGWCR7_EL1:
156
+ case SYSREG_DBGWCR8_EL1:
157
+ case SYSREG_DBGWCR9_EL1:
158
+ case SYSREG_DBGWCR10_EL1:
159
+ case SYSREG_DBGWCR11_EL1:
160
+ case SYSREG_DBGWCR12_EL1:
161
+ case SYSREG_DBGWCR13_EL1:
162
+ case SYSREG_DBGWCR14_EL1:
163
+ case SYSREG_DBGWCR15_EL1:
164
+ val = env->cp15.dbgwcr[SYSREG_CRM(reg)];
165
+ break;
166
default:
167
if (is_id_sysreg(reg)) {
168
/* ID system registers read as RES0 */
169
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
170
hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
171
}
172
break;
173
+ case SYSREG_MDSCR_EL1:
174
+ env->cp15.mdscr_el1 = val;
175
+ break;
176
+ case SYSREG_DBGBVR0_EL1:
177
+ case SYSREG_DBGBVR1_EL1:
178
+ case SYSREG_DBGBVR2_EL1:
179
+ case SYSREG_DBGBVR3_EL1:
180
+ case SYSREG_DBGBVR4_EL1:
181
+ case SYSREG_DBGBVR5_EL1:
182
+ case SYSREG_DBGBVR6_EL1:
183
+ case SYSREG_DBGBVR7_EL1:
184
+ case SYSREG_DBGBVR8_EL1:
185
+ case SYSREG_DBGBVR9_EL1:
186
+ case SYSREG_DBGBVR10_EL1:
187
+ case SYSREG_DBGBVR11_EL1:
188
+ case SYSREG_DBGBVR12_EL1:
189
+ case SYSREG_DBGBVR13_EL1:
190
+ case SYSREG_DBGBVR14_EL1:
191
+ case SYSREG_DBGBVR15_EL1:
192
+ env->cp15.dbgbvr[SYSREG_CRM(reg)] = val;
193
+ break;
194
+ case SYSREG_DBGBCR0_EL1:
195
+ case SYSREG_DBGBCR1_EL1:
196
+ case SYSREG_DBGBCR2_EL1:
197
+ case SYSREG_DBGBCR3_EL1:
198
+ case SYSREG_DBGBCR4_EL1:
199
+ case SYSREG_DBGBCR5_EL1:
200
+ case SYSREG_DBGBCR6_EL1:
201
+ case SYSREG_DBGBCR7_EL1:
202
+ case SYSREG_DBGBCR8_EL1:
203
+ case SYSREG_DBGBCR9_EL1:
204
+ case SYSREG_DBGBCR10_EL1:
205
+ case SYSREG_DBGBCR11_EL1:
206
+ case SYSREG_DBGBCR12_EL1:
207
+ case SYSREG_DBGBCR13_EL1:
208
+ case SYSREG_DBGBCR14_EL1:
209
+ case SYSREG_DBGBCR15_EL1:
210
+ env->cp15.dbgbcr[SYSREG_CRM(reg)] = val;
211
+ break;
212
+ case SYSREG_DBGWVR0_EL1:
213
+ case SYSREG_DBGWVR1_EL1:
214
+ case SYSREG_DBGWVR2_EL1:
215
+ case SYSREG_DBGWVR3_EL1:
216
+ case SYSREG_DBGWVR4_EL1:
217
+ case SYSREG_DBGWVR5_EL1:
218
+ case SYSREG_DBGWVR6_EL1:
219
+ case SYSREG_DBGWVR7_EL1:
220
+ case SYSREG_DBGWVR8_EL1:
221
+ case SYSREG_DBGWVR9_EL1:
222
+ case SYSREG_DBGWVR10_EL1:
223
+ case SYSREG_DBGWVR11_EL1:
224
+ case SYSREG_DBGWVR12_EL1:
225
+ case SYSREG_DBGWVR13_EL1:
226
+ case SYSREG_DBGWVR14_EL1:
227
+ case SYSREG_DBGWVR15_EL1:
228
+ env->cp15.dbgwvr[SYSREG_CRM(reg)] = val;
229
+ break;
230
+ case SYSREG_DBGWCR0_EL1:
231
+ case SYSREG_DBGWCR1_EL1:
232
+ case SYSREG_DBGWCR2_EL1:
233
+ case SYSREG_DBGWCR3_EL1:
234
+ case SYSREG_DBGWCR4_EL1:
235
+ case SYSREG_DBGWCR5_EL1:
236
+ case SYSREG_DBGWCR6_EL1:
237
+ case SYSREG_DBGWCR7_EL1:
238
+ case SYSREG_DBGWCR8_EL1:
239
+ case SYSREG_DBGWCR9_EL1:
240
+ case SYSREG_DBGWCR10_EL1:
241
+ case SYSREG_DBGWCR11_EL1:
242
+ case SYSREG_DBGWCR12_EL1:
243
+ case SYSREG_DBGWCR13_EL1:
244
+ case SYSREG_DBGWCR14_EL1:
245
+ case SYSREG_DBGWCR15_EL1:
246
+ env->cp15.dbgwcr[SYSREG_CRM(reg)] = val;
247
+ break;
248
default:
249
cpu_synchronize_state(cpu);
250
trace_hvf_unhandled_sysreg_write(env->pc, reg,
--
2.34.1
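
The long case lists in this patch all collapse onto
env->cp15.dbg{b,w}{v,c}r[SYSREG_CRM(reg)], which works because for
DBGBVR<n>_EL1, DBGBCR<n>_EL1, DBGWVR<n>_EL1 and DBGWCR<n>_EL1 the index <n>
is simply the CRm field of the (op0, op1, CRn, CRm, op2) system-register
name. The stand-alone sketch below illustrates that; the bit packing used
here is a placeholder of my own, not QEMU's actual SYSREG()/SYSREG_CRM()
macros.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a system-register name into one word (illustrative layout only). */
static uint32_t sysreg(int op0, int op1, int crn, int crm, int op2)
{
    return (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1);
}

/* Recover the CRm field from the packed value. */
static int sysreg_crm(uint32_t reg)
{
    return (reg >> 1) & 0xf;
}

int main(void)
{
    /* DBGBVR<n>_EL1 is op0=2, op1=0, CRn=0, CRm=n, op2=4. */
    for (int n = 0; n < 16; n++) {
        uint32_t reg = sysreg(2, 0, 0, n, 4);
        /* CRm is the breakpoint number, so one array index serves all
         * sixteen DBGBVR<n>_EL1 cases in hvf_sysreg_read()/write(). */
        printf("DBGBVR%d_EL1 -> CRm = %d\n", n, sysreg_crm(reg));
    }
    return 0;
}
```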
New patch
From: Francesco Cagnin <fcagnin@quarkslab.com>

Required for guest debugging. The code has been structured like the KVM
counterpart.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-4-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf.h      |  22 ++++++++
 include/sysemu/hvf_int.h  |   1 +
 accel/hvf/hvf-accel-ops.c | 109 ++++++++++++++++++++++++++++++++++++++
 accel/hvf/hvf-all.c       |  17 ++++++
 target/arm/hvf/hvf.c      |  63 ++++++++++++++++++++++
 target/i386/hvf/hvf.c     |  24 +++++++++
 6 files changed, 236 insertions(+)

diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/sysemu/hvf.h
22
+++ b/include/sysemu/hvf.h
23
@@ -XXX,XX +XXX,XX @@
24
#include "qom/object.h"
25
26
#ifdef NEED_CPU_H
27
+#include "cpu.h"
28
29
#ifdef CONFIG_HVF
30
uint32_t hvf_get_supported_cpuid(uint32_t func, uint32_t idx,
31
@@ -XXX,XX +XXX,XX @@ typedef struct HVFState HVFState;
32
DECLARE_INSTANCE_CHECKER(HVFState, HVF_STATE,
33
TYPE_HVF_ACCEL)
34
35
+#ifdef NEED_CPU_H
36
+struct hvf_sw_breakpoint {
37
+ target_ulong pc;
38
+ target_ulong saved_insn;
39
+ int use_count;
40
+ QTAILQ_ENTRY(hvf_sw_breakpoint) entry;
41
+};
42
+
43
+struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu,
44
+ target_ulong pc);
45
+int hvf_sw_breakpoints_active(CPUState *cpu);
46
+
47
+int hvf_arch_insert_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp);
48
+int hvf_arch_remove_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp);
49
+int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len,
50
+ int type);
51
+int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len,
52
+ int type);
53
+void hvf_arch_remove_all_hw_breakpoints(void);
54
+#endif /* NEED_CPU_H */
55
+
56
#endif
57
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
58
index XXXXXXX..XXXXXXX 100644
59
--- a/include/sysemu/hvf_int.h
60
+++ b/include/sysemu/hvf_int.h
61
@@ -XXX,XX +XXX,XX @@ struct HVFState {
62
63
hvf_vcpu_caps *hvf_caps;
64
uint64_t vtimer_offset;
65
+ QTAILQ_HEAD(, hvf_sw_breakpoint) hvf_sw_breakpoints;
66
};
67
extern HVFState *hvf_state;
68
69
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/accel/hvf/hvf-accel-ops.c
72
+++ b/accel/hvf/hvf-accel-ops.c
73
@@ -XXX,XX +XXX,XX @@
74
#include "qemu/main-loop.h"
75
#include "exec/address-spaces.h"
76
#include "exec/exec-all.h"
77
+#include "exec/gdbstub.h"
78
#include "sysemu/cpus.h"
79
#include "sysemu/hvf.h"
80
#include "sysemu/hvf_int.h"
81
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
82
s->slots[x].slot_id = x;
83
}
84
85
+ QTAILQ_INIT(&s->hvf_sw_breakpoints);
86
+
87
hvf_state = s;
88
memory_listener_register(&hvf_memory_listener, &address_space_memory);
89
90
@@ -XXX,XX +XXX,XX @@ static void hvf_start_vcpu_thread(CPUState *cpu)
91
cpu, QEMU_THREAD_JOINABLE);
92
}
93
94
+static int hvf_insert_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
95
+{
96
+ struct hvf_sw_breakpoint *bp;
97
+ int err;
98
+
99
+ if (type == GDB_BREAKPOINT_SW) {
100
+ bp = hvf_find_sw_breakpoint(cpu, addr);
101
+ if (bp) {
102
+ bp->use_count++;
103
+ return 0;
104
+ }
105
+
106
+ bp = g_new(struct hvf_sw_breakpoint, 1);
107
+ bp->pc = addr;
108
+ bp->use_count = 1;
109
+ err = hvf_arch_insert_sw_breakpoint(cpu, bp);
110
+ if (err) {
111
+ g_free(bp);
112
+ return err;
113
+ }
114
+
115
+ QTAILQ_INSERT_HEAD(&hvf_state->hvf_sw_breakpoints, bp, entry);
116
+ } else {
117
+ err = hvf_arch_insert_hw_breakpoint(addr, len, type);
118
+ if (err) {
119
+ return err;
120
+ }
121
+ }
122
+
123
+ CPU_FOREACH(cpu) {
124
+ err = hvf_update_guest_debug(cpu);
125
+ if (err) {
126
+ return err;
127
+ }
128
+ }
129
+ return 0;
130
+}
131
+
132
+static int hvf_remove_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
133
+{
134
+ struct hvf_sw_breakpoint *bp;
135
+ int err;
136
+
137
+ if (type == GDB_BREAKPOINT_SW) {
138
+ bp = hvf_find_sw_breakpoint(cpu, addr);
139
+ if (!bp) {
140
+ return -ENOENT;
141
+ }
142
+
143
+ if (bp->use_count > 1) {
144
+ bp->use_count--;
145
+ return 0;
146
+ }
147
+
148
+ err = hvf_arch_remove_sw_breakpoint(cpu, bp);
149
+ if (err) {
150
+ return err;
151
+ }
152
+
153
+ QTAILQ_REMOVE(&hvf_state->hvf_sw_breakpoints, bp, entry);
154
+ g_free(bp);
155
+ } else {
156
+ err = hvf_arch_remove_hw_breakpoint(addr, len, type);
157
+ if (err) {
158
+ return err;
159
+ }
160
+ }
161
+
162
+ CPU_FOREACH(cpu) {
163
+ err = hvf_update_guest_debug(cpu);
164
+ if (err) {
165
+ return err;
166
+ }
167
+ }
168
+ return 0;
169
+}
170
+
171
+static void hvf_remove_all_breakpoints(CPUState *cpu)
172
+{
173
+ struct hvf_sw_breakpoint *bp, *next;
174
+ CPUState *tmpcpu;
175
+
176
+ QTAILQ_FOREACH_SAFE(bp, &hvf_state->hvf_sw_breakpoints, entry, next) {
177
+ if (hvf_arch_remove_sw_breakpoint(cpu, bp) != 0) {
178
+ /* Try harder to find a CPU that currently sees the breakpoint. */
179
+ CPU_FOREACH(tmpcpu)
180
+ {
181
+ if (hvf_arch_remove_sw_breakpoint(tmpcpu, bp) == 0) {
182
+ break;
183
+ }
184
+ }
185
+ }
186
+ QTAILQ_REMOVE(&hvf_state->hvf_sw_breakpoints, bp, entry);
187
+ g_free(bp);
188
+ }
189
+ hvf_arch_remove_all_hw_breakpoints();
190
+
191
+ CPU_FOREACH(cpu) {
192
+ hvf_update_guest_debug(cpu);
193
+ }
194
+}
195
+
196
static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
197
{
198
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
199
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
200
ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
201
ops->synchronize_state = hvf_cpu_synchronize_state;
202
ops->synchronize_pre_loadvm = hvf_cpu_synchronize_pre_loadvm;
203
+
204
+ ops->insert_breakpoint = hvf_insert_breakpoint;
205
+ ops->remove_breakpoint = hvf_remove_breakpoint;
206
+ ops->remove_all_breakpoints = hvf_remove_all_breakpoints;
207
};
208
static const TypeInfo hvf_accel_ops_type = {
209
.name = ACCEL_OPS_NAME("hvf"),
210
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
211
index XXXXXXX..XXXXXXX 100644
212
--- a/accel/hvf/hvf-all.c
213
+++ b/accel/hvf/hvf-all.c
214
@@ -XXX,XX +XXX,XX @@ void assert_hvf_ok(hv_return_t ret)
215
216
abort();
217
}
218
+
219
+struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, target_ulong pc)
220
+{
221
+ struct hvf_sw_breakpoint *bp;
222
+
223
+ QTAILQ_FOREACH(bp, &hvf_state->hvf_sw_breakpoints, entry) {
224
+ if (bp->pc == pc) {
225
+ return bp;
226
+ }
227
+ }
228
+ return NULL;
229
+}
230
+
231
+int hvf_sw_breakpoints_active(CPUState *cpu)
232
+{
233
+ return !QTAILQ_EMPTY(&hvf_state->hvf_sw_breakpoints);
234
+}
235
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
236
index XXXXXXX..XXXXXXX 100644
237
--- a/target/arm/hvf/hvf.c
238
+++ b/target/arm/hvf/hvf.c
239
@@ -XXX,XX +XXX,XX @@
240
#include "trace/trace-target_arm_hvf.h"
241
#include "migration/vmstate.h"
242
243
+#include "exec/gdbstub.h"
244
+
245
#define HVF_SYSREG(crn, crm, op0, op1, op2) \
246
ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
247
#define PL1_WRITE_MASK 0x4
248
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init(void)
249
qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
250
return 0;
251
}
252
+
253
+static const uint32_t brk_insn = 0xd4200000;
254
+
255
+int hvf_arch_insert_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
256
+{
257
+ if (cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&bp->saved_insn, 4, 0) ||
258
+ cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
259
+ return -EINVAL;
260
+ }
261
+ return 0;
262
+}
263
+
264
+int hvf_arch_remove_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
265
+{
266
+ static uint32_t brk;
267
+
268
+ if (cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&brk, 4, 0) ||
269
+ brk != brk_insn ||
270
+ cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&bp->saved_insn, 4, 1)) {
271
+ return -EINVAL;
272
+ }
273
+ return 0;
274
+}
275
+
276
+int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len, int type)
277
+{
278
+ switch (type) {
279
+ case GDB_BREAKPOINT_HW:
280
+ return insert_hw_breakpoint(addr);
281
+ case GDB_WATCHPOINT_READ:
282
+ case GDB_WATCHPOINT_WRITE:
283
+ case GDB_WATCHPOINT_ACCESS:
284
+ return insert_hw_watchpoint(addr, len, type);
285
+ default:
286
+ return -ENOSYS;
287
+ }
288
+}
289
+
290
+int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len, int type)
291
+{
292
+ switch (type) {
293
+ case GDB_BREAKPOINT_HW:
294
+ return delete_hw_breakpoint(addr);
295
+ case GDB_WATCHPOINT_READ:
296
+ case GDB_WATCHPOINT_WRITE:
297
+ case GDB_WATCHPOINT_ACCESS:
298
+ return delete_hw_watchpoint(addr, len, type);
299
+ default:
300
+ return -ENOSYS;
301
+ }
302
+}
303
+
304
+void hvf_arch_remove_all_hw_breakpoints(void)
305
+{
306
+ if (cur_hw_wps > 0) {
307
+ g_array_remove_range(hw_watchpoints, 0, cur_hw_wps);
308
+ }
309
+ if (cur_hw_bps > 0) {
310
+ g_array_remove_range(hw_breakpoints, 0, cur_hw_bps);
311
+ }
312
+}
313
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
314
index XXXXXXX..XXXXXXX 100644
315
--- a/target/i386/hvf/hvf.c
316
+++ b/target/i386/hvf/hvf.c
317
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
318
319
return ret;
320
}
321
+
322
+int hvf_arch_insert_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
323
+{
324
+ return -ENOSYS;
325
+}
326
+
327
+int hvf_arch_remove_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
328
+{
329
+ return -ENOSYS;
330
+}
331
+
332
+int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len, int type)
333
+{
334
+ return -ENOSYS;
335
+}
336
+
337
+int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len, int type)
338
+{
339
+ return -ENOSYS;
340
+}
341
+
342
+void hvf_arch_remove_all_hw_breakpoints(void)
343
+{
344
+}
345
--
346
2.34.1
New patch
1
From: Francesco Cagnin <fcagnin@quarkslab.com>
1
2
3
Guests can now be debugged through the gdbstub. Support is added for
4
single-stepping, software breakpoints, hardware breakpoints and
5
watchpoints. The code has been structured like the KVM counterpart.
6
7
While guest debugging is enabled, the guest can still read and write the
DBG*_EL1 registers, but those accesses have no effect.
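For illustration only (not part of this patch), a guest-side check of the
re-inject path could look like the sketch below: a BRK executed by the guest
itself, with no gdb breakpoint registered at that PC, should still be
delivered to the guest as SIGTRAP rather than being consumed by the gdbstub.
The test program is hypothetical and assumes an AArch64 Linux guest.

    /* Hypothetical guest-side test; not part of this series. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void sigtrap_handler(int sig)
    {
        /* Reached only if the hypervisor re-injected the BRK exception. */
        static const char msg[] = "guest saw SIGTRAP for its own brk\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
        _exit(0);
    }

    int main(void)
    {
        signal(SIGTRAP, sigtrap_handler);
        __asm__ volatile("brk #0");    /* AArch64 breakpoint instruction */
        printf("brk did not trap\n");  /* not reached on the expected path */
        return 1;
    }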
9
10
Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
11
Message-id: 20230601153107.81955-5-fcagnin@quarkslab.com
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
include/sysemu/hvf.h | 15 ++
16
include/sysemu/hvf_int.h | 1 +
17
target/arm/hvf_arm.h | 7 +
18
accel/hvf/hvf-accel-ops.c | 10 +
19
accel/hvf/hvf-all.c | 6 +
20
target/arm/hvf/hvf.c | 474 +++++++++++++++++++++++++++++++++++++-
21
target/i386/hvf/hvf.c | 9 +
22
7 files changed, 520 insertions(+), 2 deletions(-)
23
24
diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/sysemu/hvf.h
27
+++ b/include/sysemu/hvf.h
28
@@ -XXX,XX +XXX,XX @@ int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len,
29
int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len,
30
int type);
31
void hvf_arch_remove_all_hw_breakpoints(void);
32
+
33
+/*
34
+ * hvf_update_guest_debug:
35
+ * @cs: CPUState for the CPU to update
36
+ *
37
+ * Update guest to enable or disable debugging. Per-arch specifics will be
38
+ * handled by calling down to hvf_arch_update_guest_debug.
39
+ */
40
+int hvf_update_guest_debug(CPUState *cpu);
41
+void hvf_arch_update_guest_debug(CPUState *cpu);
42
+
43
+/*
44
+ * Return whether the guest supports debugging.
45
+ */
46
+bool hvf_arch_supports_guest_debug(void);
47
#endif /* NEED_CPU_H */
48
49
#endif
50
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
51
index XXXXXXX..XXXXXXX 100644
52
--- a/include/sysemu/hvf_int.h
53
+++ b/include/sysemu/hvf_int.h
54
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
55
void *exit;
56
bool vtimer_masked;
57
sigset_t unblock_ipi_mask;
58
+ bool guest_debug_enabled;
59
};
60
61
void assert_hvf_ok(hv_return_t ret);
62
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
63
index XXXXXXX..XXXXXXX 100644
64
--- a/target/arm/hvf_arm.h
65
+++ b/target/arm/hvf_arm.h
66
@@ -XXX,XX +XXX,XX @@
67
68
#include "cpu.h"
69
70
+/**
71
+ * hvf_arm_init_debug() - initialize guest debug capabilities
72
+ *
73
+ * Should be called only once before using guest debug capabilities.
74
+ */
75
+void hvf_arm_init_debug(void);
76
+
77
void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu);
78
79
#endif
80
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
81
index XXXXXXX..XXXXXXX 100644
82
--- a/accel/hvf/hvf-accel-ops.c
83
+++ b/accel/hvf/hvf-accel-ops.c
84
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
85
return hvf_arch_init();
86
}
87
88
+static inline int hvf_gdbstub_sstep_flags(void)
89
+{
90
+ return SSTEP_ENABLE | SSTEP_NOIRQ;
91
+}
92
+
93
static void hvf_accel_class_init(ObjectClass *oc, void *data)
94
{
95
AccelClass *ac = ACCEL_CLASS(oc);
96
ac->name = "HVF";
97
ac->init_machine = hvf_accel_init;
98
ac->allowed = &hvf_allowed;
99
+ ac->gdbstub_supported_sstep_flags = hvf_gdbstub_sstep_flags;
100
}
101
102
static const TypeInfo hvf_accel_type = {
103
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
104
cpu->vcpu_dirty = 1;
105
assert_hvf_ok(r);
106
107
+ cpu->hvf->guest_debug_enabled = false;
108
+
109
return hvf_arch_init_vcpu(cpu);
110
}
111
112
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
113
ops->insert_breakpoint = hvf_insert_breakpoint;
114
ops->remove_breakpoint = hvf_remove_breakpoint;
115
ops->remove_all_breakpoints = hvf_remove_all_breakpoints;
116
+ ops->update_guest_debug = hvf_update_guest_debug;
117
+ ops->supports_guest_debug = hvf_arch_supports_guest_debug;
118
};
119
static const TypeInfo hvf_accel_ops_type = {
120
.name = ACCEL_OPS_NAME("hvf"),
121
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/accel/hvf/hvf-all.c
124
+++ b/accel/hvf/hvf-all.c
125
@@ -XXX,XX +XXX,XX @@ int hvf_sw_breakpoints_active(CPUState *cpu)
126
{
127
return !QTAILQ_EMPTY(&hvf_state->hvf_sw_breakpoints);
128
}
129
+
130
+int hvf_update_guest_debug(CPUState *cpu)
131
+{
132
+ hvf_arch_update_guest_debug(cpu);
133
+ return 0;
134
+}
135
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
136
index XXXXXXX..XXXXXXX 100644
137
--- a/target/arm/hvf/hvf.c
138
+++ b/target/arm/hvf/hvf.c
139
@@ -XXX,XX +XXX,XX @@
140
141
#include "exec/gdbstub.h"
142
143
+#define MDSCR_EL1_SS_SHIFT 0
144
+#define MDSCR_EL1_MDE_SHIFT 15
145
+
146
+static uint16_t dbgbcr_regs[] = {
147
+ HV_SYS_REG_DBGBCR0_EL1,
148
+ HV_SYS_REG_DBGBCR1_EL1,
149
+ HV_SYS_REG_DBGBCR2_EL1,
150
+ HV_SYS_REG_DBGBCR3_EL1,
151
+ HV_SYS_REG_DBGBCR4_EL1,
152
+ HV_SYS_REG_DBGBCR5_EL1,
153
+ HV_SYS_REG_DBGBCR6_EL1,
154
+ HV_SYS_REG_DBGBCR7_EL1,
155
+ HV_SYS_REG_DBGBCR8_EL1,
156
+ HV_SYS_REG_DBGBCR9_EL1,
157
+ HV_SYS_REG_DBGBCR10_EL1,
158
+ HV_SYS_REG_DBGBCR11_EL1,
159
+ HV_SYS_REG_DBGBCR12_EL1,
160
+ HV_SYS_REG_DBGBCR13_EL1,
161
+ HV_SYS_REG_DBGBCR14_EL1,
162
+ HV_SYS_REG_DBGBCR15_EL1,
163
+};
164
+static uint16_t dbgbvr_regs[] = {
165
+ HV_SYS_REG_DBGBVR0_EL1,
166
+ HV_SYS_REG_DBGBVR1_EL1,
167
+ HV_SYS_REG_DBGBVR2_EL1,
168
+ HV_SYS_REG_DBGBVR3_EL1,
169
+ HV_SYS_REG_DBGBVR4_EL1,
170
+ HV_SYS_REG_DBGBVR5_EL1,
171
+ HV_SYS_REG_DBGBVR6_EL1,
172
+ HV_SYS_REG_DBGBVR7_EL1,
173
+ HV_SYS_REG_DBGBVR8_EL1,
174
+ HV_SYS_REG_DBGBVR9_EL1,
175
+ HV_SYS_REG_DBGBVR10_EL1,
176
+ HV_SYS_REG_DBGBVR11_EL1,
177
+ HV_SYS_REG_DBGBVR12_EL1,
178
+ HV_SYS_REG_DBGBVR13_EL1,
179
+ HV_SYS_REG_DBGBVR14_EL1,
180
+ HV_SYS_REG_DBGBVR15_EL1,
181
+};
182
+static uint16_t dbgwcr_regs[] = {
183
+ HV_SYS_REG_DBGWCR0_EL1,
184
+ HV_SYS_REG_DBGWCR1_EL1,
185
+ HV_SYS_REG_DBGWCR2_EL1,
186
+ HV_SYS_REG_DBGWCR3_EL1,
187
+ HV_SYS_REG_DBGWCR4_EL1,
188
+ HV_SYS_REG_DBGWCR5_EL1,
189
+ HV_SYS_REG_DBGWCR6_EL1,
190
+ HV_SYS_REG_DBGWCR7_EL1,
191
+ HV_SYS_REG_DBGWCR8_EL1,
192
+ HV_SYS_REG_DBGWCR9_EL1,
193
+ HV_SYS_REG_DBGWCR10_EL1,
194
+ HV_SYS_REG_DBGWCR11_EL1,
195
+ HV_SYS_REG_DBGWCR12_EL1,
196
+ HV_SYS_REG_DBGWCR13_EL1,
197
+ HV_SYS_REG_DBGWCR14_EL1,
198
+ HV_SYS_REG_DBGWCR15_EL1,
199
+};
200
+static uint16_t dbgwvr_regs[] = {
201
+ HV_SYS_REG_DBGWVR0_EL1,
202
+ HV_SYS_REG_DBGWVR1_EL1,
203
+ HV_SYS_REG_DBGWVR2_EL1,
204
+ HV_SYS_REG_DBGWVR3_EL1,
205
+ HV_SYS_REG_DBGWVR4_EL1,
206
+ HV_SYS_REG_DBGWVR5_EL1,
207
+ HV_SYS_REG_DBGWVR6_EL1,
208
+ HV_SYS_REG_DBGWVR7_EL1,
209
+ HV_SYS_REG_DBGWVR8_EL1,
210
+ HV_SYS_REG_DBGWVR9_EL1,
211
+ HV_SYS_REG_DBGWVR10_EL1,
212
+ HV_SYS_REG_DBGWVR11_EL1,
213
+ HV_SYS_REG_DBGWVR12_EL1,
214
+ HV_SYS_REG_DBGWVR13_EL1,
215
+ HV_SYS_REG_DBGWVR14_EL1,
216
+ HV_SYS_REG_DBGWVR15_EL1,
217
+};
218
+
219
+static inline int hvf_arm_num_brps(hv_vcpu_config_t config)
220
+{
221
+ uint64_t val;
222
+ hv_return_t ret;
223
+ ret = hv_vcpu_config_get_feature_reg(config, HV_FEATURE_REG_ID_AA64DFR0_EL1,
224
+ &val);
225
+ assert_hvf_ok(ret);
226
+ return FIELD_EX64(val, ID_AA64DFR0, BRPS) + 1;
227
+}
228
+
229
+static inline int hvf_arm_num_wrps(hv_vcpu_config_t config)
230
+{
231
+ uint64_t val;
232
+ hv_return_t ret;
233
+ ret = hv_vcpu_config_get_feature_reg(config, HV_FEATURE_REG_ID_AA64DFR0_EL1,
234
+ &val);
235
+ assert_hvf_ok(ret);
236
+ return FIELD_EX64(val, ID_AA64DFR0, WRPS) + 1;
237
+}
238
+
239
+void hvf_arm_init_debug(void)
240
+{
241
+ hv_vcpu_config_t config;
242
+ config = hv_vcpu_config_create();
243
+
244
+ max_hw_bps = hvf_arm_num_brps(config);
245
+ hw_breakpoints =
246
+ g_array_sized_new(true, true, sizeof(HWBreakpoint), max_hw_bps);
247
+
248
+ max_hw_wps = hvf_arm_num_wrps(config);
249
+ hw_watchpoints =
250
+ g_array_sized_new(true, true, sizeof(HWWatchpoint), max_hw_wps);
251
+}
252
+
253
#define HVF_SYSREG(crn, crm, op0, op1, op2) \
254
ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
255
#define PL1_WRITE_MASK 0x4
256
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu)
257
continue;
258
}
259
260
+ if (cpu->hvf->guest_debug_enabled) {
261
+ /* Handle debug registers */
262
+ switch (hvf_sreg_match[i].reg) {
263
+ case HV_SYS_REG_DBGBVR0_EL1:
264
+ case HV_SYS_REG_DBGBCR0_EL1:
265
+ case HV_SYS_REG_DBGWVR0_EL1:
266
+ case HV_SYS_REG_DBGWCR0_EL1:
267
+ case HV_SYS_REG_DBGBVR1_EL1:
268
+ case HV_SYS_REG_DBGBCR1_EL1:
269
+ case HV_SYS_REG_DBGWVR1_EL1:
270
+ case HV_SYS_REG_DBGWCR1_EL1:
271
+ case HV_SYS_REG_DBGBVR2_EL1:
272
+ case HV_SYS_REG_DBGBCR2_EL1:
273
+ case HV_SYS_REG_DBGWVR2_EL1:
274
+ case HV_SYS_REG_DBGWCR2_EL1:
275
+ case HV_SYS_REG_DBGBVR3_EL1:
276
+ case HV_SYS_REG_DBGBCR3_EL1:
277
+ case HV_SYS_REG_DBGWVR3_EL1:
278
+ case HV_SYS_REG_DBGWCR3_EL1:
279
+ case HV_SYS_REG_DBGBVR4_EL1:
280
+ case HV_SYS_REG_DBGBCR4_EL1:
281
+ case HV_SYS_REG_DBGWVR4_EL1:
282
+ case HV_SYS_REG_DBGWCR4_EL1:
283
+ case HV_SYS_REG_DBGBVR5_EL1:
284
+ case HV_SYS_REG_DBGBCR5_EL1:
285
+ case HV_SYS_REG_DBGWVR5_EL1:
286
+ case HV_SYS_REG_DBGWCR5_EL1:
287
+ case HV_SYS_REG_DBGBVR6_EL1:
288
+ case HV_SYS_REG_DBGBCR6_EL1:
289
+ case HV_SYS_REG_DBGWVR6_EL1:
290
+ case HV_SYS_REG_DBGWCR6_EL1:
291
+ case HV_SYS_REG_DBGBVR7_EL1:
292
+ case HV_SYS_REG_DBGBCR7_EL1:
293
+ case HV_SYS_REG_DBGWVR7_EL1:
294
+ case HV_SYS_REG_DBGWCR7_EL1:
295
+ case HV_SYS_REG_DBGBVR8_EL1:
296
+ case HV_SYS_REG_DBGBCR8_EL1:
297
+ case HV_SYS_REG_DBGWVR8_EL1:
298
+ case HV_SYS_REG_DBGWCR8_EL1:
299
+ case HV_SYS_REG_DBGBVR9_EL1:
300
+ case HV_SYS_REG_DBGBCR9_EL1:
301
+ case HV_SYS_REG_DBGWVR9_EL1:
302
+ case HV_SYS_REG_DBGWCR9_EL1:
303
+ case HV_SYS_REG_DBGBVR10_EL1:
304
+ case HV_SYS_REG_DBGBCR10_EL1:
305
+ case HV_SYS_REG_DBGWVR10_EL1:
306
+ case HV_SYS_REG_DBGWCR10_EL1:
307
+ case HV_SYS_REG_DBGBVR11_EL1:
308
+ case HV_SYS_REG_DBGBCR11_EL1:
309
+ case HV_SYS_REG_DBGWVR11_EL1:
310
+ case HV_SYS_REG_DBGWCR11_EL1:
311
+ case HV_SYS_REG_DBGBVR12_EL1:
312
+ case HV_SYS_REG_DBGBCR12_EL1:
313
+ case HV_SYS_REG_DBGWVR12_EL1:
314
+ case HV_SYS_REG_DBGWCR12_EL1:
315
+ case HV_SYS_REG_DBGBVR13_EL1:
316
+ case HV_SYS_REG_DBGBCR13_EL1:
317
+ case HV_SYS_REG_DBGWVR13_EL1:
318
+ case HV_SYS_REG_DBGWCR13_EL1:
319
+ case HV_SYS_REG_DBGBVR14_EL1:
320
+ case HV_SYS_REG_DBGBCR14_EL1:
321
+ case HV_SYS_REG_DBGWVR14_EL1:
322
+ case HV_SYS_REG_DBGWCR14_EL1:
323
+ case HV_SYS_REG_DBGBVR15_EL1:
324
+ case HV_SYS_REG_DBGBCR15_EL1:
325
+ case HV_SYS_REG_DBGWVR15_EL1:
326
+ case HV_SYS_REG_DBGWCR15_EL1: {
327
+ /*
328
+ * If the guest is being debugged, the vCPU's debug registers
329
+ * are holding the gdbstub's view of the registers (set in
330
+ * hvf_arch_update_guest_debug()).
331
+ * Since the environment is used to store only the guest's view
332
+ * of the registers, don't update it with the values from the
333
+ * vCPU but simply keep the values from the previous
334
+ * environment.
335
+ */
336
+ const ARMCPRegInfo *ri;
337
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_sreg_match[i].key);
338
+ val = read_raw_cp_reg(env, ri);
339
+
340
+ arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
341
+ continue;
342
+ }
343
+ }
344
+ }
345
+
346
ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
347
assert_hvf_ok(ret);
348
349
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu)
350
continue;
351
}
352
353
+ if (cpu->hvf->guest_debug_enabled) {
354
+ /* Handle debug registers */
355
+ switch (hvf_sreg_match[i].reg) {
356
+ case HV_SYS_REG_DBGBVR0_EL1:
357
+ case HV_SYS_REG_DBGBCR0_EL1:
358
+ case HV_SYS_REG_DBGWVR0_EL1:
359
+ case HV_SYS_REG_DBGWCR0_EL1:
360
+ case HV_SYS_REG_DBGBVR1_EL1:
361
+ case HV_SYS_REG_DBGBCR1_EL1:
362
+ case HV_SYS_REG_DBGWVR1_EL1:
363
+ case HV_SYS_REG_DBGWCR1_EL1:
364
+ case HV_SYS_REG_DBGBVR2_EL1:
365
+ case HV_SYS_REG_DBGBCR2_EL1:
366
+ case HV_SYS_REG_DBGWVR2_EL1:
367
+ case HV_SYS_REG_DBGWCR2_EL1:
368
+ case HV_SYS_REG_DBGBVR3_EL1:
369
+ case HV_SYS_REG_DBGBCR3_EL1:
370
+ case HV_SYS_REG_DBGWVR3_EL1:
371
+ case HV_SYS_REG_DBGWCR3_EL1:
372
+ case HV_SYS_REG_DBGBVR4_EL1:
373
+ case HV_SYS_REG_DBGBCR4_EL1:
374
+ case HV_SYS_REG_DBGWVR4_EL1:
375
+ case HV_SYS_REG_DBGWCR4_EL1:
376
+ case HV_SYS_REG_DBGBVR5_EL1:
377
+ case HV_SYS_REG_DBGBCR5_EL1:
378
+ case HV_SYS_REG_DBGWVR5_EL1:
379
+ case HV_SYS_REG_DBGWCR5_EL1:
380
+ case HV_SYS_REG_DBGBVR6_EL1:
381
+ case HV_SYS_REG_DBGBCR6_EL1:
382
+ case HV_SYS_REG_DBGWVR6_EL1:
383
+ case HV_SYS_REG_DBGWCR6_EL1:
384
+ case HV_SYS_REG_DBGBVR7_EL1:
385
+ case HV_SYS_REG_DBGBCR7_EL1:
386
+ case HV_SYS_REG_DBGWVR7_EL1:
387
+ case HV_SYS_REG_DBGWCR7_EL1:
388
+ case HV_SYS_REG_DBGBVR8_EL1:
389
+ case HV_SYS_REG_DBGBCR8_EL1:
390
+ case HV_SYS_REG_DBGWVR8_EL1:
391
+ case HV_SYS_REG_DBGWCR8_EL1:
392
+ case HV_SYS_REG_DBGBVR9_EL1:
393
+ case HV_SYS_REG_DBGBCR9_EL1:
394
+ case HV_SYS_REG_DBGWVR9_EL1:
395
+ case HV_SYS_REG_DBGWCR9_EL1:
396
+ case HV_SYS_REG_DBGBVR10_EL1:
397
+ case HV_SYS_REG_DBGBCR10_EL1:
398
+ case HV_SYS_REG_DBGWVR10_EL1:
399
+ case HV_SYS_REG_DBGWCR10_EL1:
400
+ case HV_SYS_REG_DBGBVR11_EL1:
401
+ case HV_SYS_REG_DBGBCR11_EL1:
402
+ case HV_SYS_REG_DBGWVR11_EL1:
403
+ case HV_SYS_REG_DBGWCR11_EL1:
404
+ case HV_SYS_REG_DBGBVR12_EL1:
405
+ case HV_SYS_REG_DBGBCR12_EL1:
406
+ case HV_SYS_REG_DBGWVR12_EL1:
407
+ case HV_SYS_REG_DBGWCR12_EL1:
408
+ case HV_SYS_REG_DBGBVR13_EL1:
409
+ case HV_SYS_REG_DBGBCR13_EL1:
410
+ case HV_SYS_REG_DBGWVR13_EL1:
411
+ case HV_SYS_REG_DBGWCR13_EL1:
412
+ case HV_SYS_REG_DBGBVR14_EL1:
413
+ case HV_SYS_REG_DBGBCR14_EL1:
414
+ case HV_SYS_REG_DBGWVR14_EL1:
415
+ case HV_SYS_REG_DBGWCR14_EL1:
416
+ case HV_SYS_REG_DBGBVR15_EL1:
417
+ case HV_SYS_REG_DBGBCR15_EL1:
418
+ case HV_SYS_REG_DBGWVR15_EL1:
419
+ case HV_SYS_REG_DBGWCR15_EL1:
420
+ /*
421
+ * If the guest is being debugged, the vCPU's debug registers
422
+ * are already holding the gdbstub's view of the registers (set
423
+ * in hvf_arch_update_guest_debug()).
424
+ */
425
+ continue;
426
+ }
427
+ }
428
+
429
val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
430
ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
431
assert_hvf_ok(ret);
432
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
433
{
434
ARMCPU *arm_cpu = ARM_CPU(cpu);
435
CPUARMState *env = &arm_cpu->env;
436
+ int ret;
437
hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
438
hv_return_t r;
439
bool advance_pc = false;
440
441
- if (hvf_inject_interrupts(cpu)) {
442
+ if (!(cpu->singlestep_enabled & SSTEP_NOIRQ) &&
443
+ hvf_inject_interrupts(cpu)) {
444
return EXCP_INTERRUPT;
445
}
446
447
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
448
uint64_t syndrome = hvf_exit->exception.syndrome;
449
uint32_t ec = syn_get_ec(syndrome);
450
451
+ ret = 0;
452
qemu_mutex_lock_iothread();
453
switch (exit_reason) {
454
case HV_EXIT_REASON_EXCEPTION:
455
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
456
hvf_sync_vtimer(cpu);
457
458
switch (ec) {
459
+ case EC_SOFTWARESTEP: {
460
+ ret = EXCP_DEBUG;
461
+
462
+ if (!cpu->singlestep_enabled) {
463
+ error_report("EC_SOFTWARESTEP but single-stepping not enabled");
464
+ }
465
+ break;
466
+ }
467
+ case EC_AA64_BKPT: {
468
+ ret = EXCP_DEBUG;
469
+
470
+ cpu_synchronize_state(cpu);
471
+
472
+ if (!hvf_find_sw_breakpoint(cpu, env->pc)) {
473
+ /* Re-inject into the guest */
474
+ ret = 0;
475
+ hvf_raise_exception(cpu, EXCP_BKPT, syn_aa64_bkpt(0));
476
+ }
477
+ break;
478
+ }
479
+ case EC_BREAKPOINT: {
480
+ ret = EXCP_DEBUG;
481
+
482
+ cpu_synchronize_state(cpu);
483
+
484
+ if (!find_hw_breakpoint(cpu, env->pc)) {
485
+ error_report("EC_BREAKPOINT but unknown hw breakpoint");
486
+ }
487
+ break;
488
+ }
489
+ case EC_WATCHPOINT: {
490
+ ret = EXCP_DEBUG;
491
+
492
+ cpu_synchronize_state(cpu);
493
+
494
+ CPUWatchpoint *wp =
495
+ find_hw_watchpoint(cpu, hvf_exit->exception.virtual_address);
496
+ if (!wp) {
497
+ error_report("EXCP_DEBUG but unknown hw watchpoint");
498
+ }
499
+ cpu->watchpoint_hit = wp;
500
+ break;
501
+ }
502
case EC_DATAABORT: {
503
bool isv = syndrome & ARM_EL_ISV;
504
bool iswrite = (syndrome >> 6) & 1;
505
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
506
pc += 4;
507
r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
508
assert_hvf_ok(r);
509
+
510
+ /* Handle single-stepping over instructions which trigger a VM exit */
511
+ if (cpu->singlestep_enabled) {
512
+ ret = EXCP_DEBUG;
513
+ }
514
}
515
516
- return 0;
517
+ return ret;
518
}
519
520
static const VMStateDescription vmstate_hvf_vtimer = {
521
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init(void)
522
hvf_state->vtimer_offset = mach_absolute_time();
523
vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
524
qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
525
+
526
+ hvf_arm_init_debug();
527
+
528
return 0;
529
}
530
531
@@ -XXX,XX +XXX,XX @@ void hvf_arch_remove_all_hw_breakpoints(void)
532
g_array_remove_range(hw_breakpoints, 0, cur_hw_bps);
533
}
534
}
535
+
536
+/*
537
+ * Update the vCPU with the gdbstub's view of debug registers. This view
538
+ * consists of all hardware breakpoints and watchpoints inserted so far while
539
+ * debugging the guest.
540
+ */
541
+static void hvf_put_gdbstub_debug_registers(CPUState *cpu)
542
+{
543
+ hv_return_t r = HV_SUCCESS;
544
+ int i;
545
+
546
+ for (i = 0; i < cur_hw_bps; i++) {
547
+ HWBreakpoint *bp = get_hw_bp(i);
548
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbcr_regs[i], bp->bcr);
549
+ assert_hvf_ok(r);
550
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbvr_regs[i], bp->bvr);
551
+ assert_hvf_ok(r);
552
+ }
553
+ for (i = cur_hw_bps; i < max_hw_bps; i++) {
554
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbcr_regs[i], 0);
555
+ assert_hvf_ok(r);
556
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbvr_regs[i], 0);
557
+ assert_hvf_ok(r);
558
+ }
559
+
560
+ for (i = 0; i < cur_hw_wps; i++) {
561
+ HWWatchpoint *wp = get_hw_wp(i);
562
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwcr_regs[i], wp->wcr);
563
+ assert_hvf_ok(r);
564
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwvr_regs[i], wp->wvr);
565
+ assert_hvf_ok(r);
566
+ }
567
+ for (i = cur_hw_wps; i < max_hw_wps; i++) {
568
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwcr_regs[i], 0);
569
+ assert_hvf_ok(r);
570
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwvr_regs[i], 0);
571
+ assert_hvf_ok(r);
572
+ }
573
+}
574
+
575
+/*
576
+ * Update the vCPU with the guest's view of debug registers. This view is kept
577
+ * in the environment at all times.
578
+ */
579
+static void hvf_put_guest_debug_registers(CPUState *cpu)
580
+{
581
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
582
+ CPUARMState *env = &arm_cpu->env;
583
+ hv_return_t r = HV_SUCCESS;
584
+ int i;
585
+
586
+ for (i = 0; i < max_hw_bps; i++) {
587
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbcr_regs[i],
588
+ env->cp15.dbgbcr[i]);
589
+ assert_hvf_ok(r);
590
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbvr_regs[i],
591
+ env->cp15.dbgbvr[i]);
592
+ assert_hvf_ok(r);
593
+ }
594
+
595
+ for (i = 0; i < max_hw_wps; i++) {
596
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwcr_regs[i],
597
+ env->cp15.dbgwcr[i]);
598
+ assert_hvf_ok(r);
599
+ r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwvr_regs[i],
600
+ env->cp15.dbgwvr[i]);
601
+ assert_hvf_ok(r);
602
+ }
603
+}
604
+
605
+static inline bool hvf_arm_hw_debug_active(CPUState *cpu)
606
+{
607
+ return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
608
+}
609
+
610
+static void hvf_arch_set_traps(void)
611
+{
612
+ CPUState *cpu;
613
+ bool should_enable_traps = false;
614
+ hv_return_t r = HV_SUCCESS;
615
+
616
+ /* Check whether guest debugging is enabled for at least one vCPU; if it
617
+ * is, enable exiting the guest on all vCPUs */
618
+ CPU_FOREACH(cpu) {
619
+ should_enable_traps |= cpu->hvf->guest_debug_enabled;
620
+ }
621
+ CPU_FOREACH(cpu) {
622
+ /* Set whether debug exceptions exit the guest */
623
+ r = hv_vcpu_set_trap_debug_exceptions(cpu->hvf->fd,
624
+ should_enable_traps);
625
+ assert_hvf_ok(r);
626
+
627
+ /* Set whether accesses to debug registers exit the guest */
628
+ r = hv_vcpu_set_trap_debug_reg_accesses(cpu->hvf->fd,
629
+ should_enable_traps);
630
+ assert_hvf_ok(r);
631
+ }
632
+}
633
+
634
+void hvf_arch_update_guest_debug(CPUState *cpu)
635
+{
636
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
637
+ CPUARMState *env = &arm_cpu->env;
638
+
639
+ /* Check whether guest debugging is enabled */
640
+ cpu->hvf->guest_debug_enabled = cpu->singlestep_enabled ||
641
+ hvf_sw_breakpoints_active(cpu) ||
642
+ hvf_arm_hw_debug_active(cpu);
643
+
644
+ /* Update debug registers */
645
+ if (cpu->hvf->guest_debug_enabled) {
646
+ hvf_put_gdbstub_debug_registers(cpu);
647
+ } else {
648
+ hvf_put_guest_debug_registers(cpu);
649
+ }
650
+
651
+ cpu_synchronize_state(cpu);
652
+
653
+ /* Enable/disable single-stepping */
654
+ if (cpu->singlestep_enabled) {
655
+ env->cp15.mdscr_el1 =
656
+ deposit64(env->cp15.mdscr_el1, MDSCR_EL1_SS_SHIFT, 1, 1);
657
+ pstate_write(env, pstate_read(env) | PSTATE_SS);
658
+ } else {
659
+ env->cp15.mdscr_el1 =
660
+ deposit64(env->cp15.mdscr_el1, MDSCR_EL1_SS_SHIFT, 1, 0);
661
+ }
662
+
663
+ /* Enable/disable Breakpoint exceptions */
664
+ if (hvf_arm_hw_debug_active(cpu)) {
665
+ env->cp15.mdscr_el1 =
666
+ deposit64(env->cp15.mdscr_el1, MDSCR_EL1_MDE_SHIFT, 1, 1);
667
+ } else {
668
+ env->cp15.mdscr_el1 =
669
+ deposit64(env->cp15.mdscr_el1, MDSCR_EL1_MDE_SHIFT, 1, 0);
670
+ }
671
+
672
+ hvf_arch_set_traps();
673
+}
674
+
675
+inline bool hvf_arch_supports_guest_debug(void)
676
+{
677
+ return true;
678
+}
679
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
680
index XXXXXXX..XXXXXXX 100644
681
--- a/target/i386/hvf/hvf.c
682
+++ b/target/i386/hvf/hvf.c
683
@@ -XXX,XX +XXX,XX @@ int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len, int type)
684
void hvf_arch_remove_all_hw_breakpoints(void)
685
{
686
}
687
+
688
+void hvf_arch_update_guest_debug(CPUState *cpu)
689
+{
690
+}
691
+
692
+inline bool hvf_arch_supports_guest_debug(void)
693
+{
694
+ return false;
695
+}
696
--
697
2.34.1
New patch
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
1
2
3
The Xilinx Versal CANFD controller model builds on SocketCAN and the QEMU CAN
bus implementation. The bus connection and SocketCAN connection for each CAN
module can be configured on the command line.
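As an illustration only (not part of this patch), a machine model would wire
the controller to a CAN bus roughly as in the sketch below. The board_wire_canfd
helper, the "canfd0" child name and the "canfdbus" link name are assumptions
(the link name is taken from the state field declared in the header); the
actual machine integration is outside this patch.

    /* Hypothetical wiring sketch; child/link names are assumptions. */
    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "hw/sysbus.h"
    #include "hw/net/xlnx-versal-canfd.h"

    static void board_wire_canfd(Object *parent, XlnxVersalCANFDState *canfd,
                                 CanBusState *canbus, hwaddr base, qemu_irq irq)
    {
        object_initialize_child(parent, "canfd0", canfd, TYPE_XILINX_CANFD);
        /* Attach the controller to a CanBusState created by the machine. */
        object_property_set_link(OBJECT(canfd), "canfdbus", OBJECT(canbus),
                                 &error_abort);
        sysbus_realize(SYS_BUS_DEVICE(canfd), &error_fatal);
        sysbus_mmio_map(SYS_BUS_DEVICE(canfd), 0, base);
        sysbus_connect_irq(SYS_BUS_DEVICE(canfd), 0, irq);
    }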
6
7
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
8
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/net/xlnx-versal-canfd.h | 87 ++
12
hw/net/can/xlnx-versal-canfd.c | 2107 ++++++++++++++++++++++++++++
13
hw/net/can/meson.build | 1 +
14
hw/net/can/trace-events | 7 +
15
4 files changed, 2202 insertions(+)
16
create mode 100644 include/hw/net/xlnx-versal-canfd.h
17
create mode 100644 hw/net/can/xlnx-versal-canfd.c
18
19
diff --git a/include/hw/net/xlnx-versal-canfd.h b/include/hw/net/xlnx-versal-canfd.h
20
new file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- /dev/null
23
+++ b/include/hw/net/xlnx-versal-canfd.h
24
@@ -XXX,XX +XXX,XX @@
25
+/*
26
+ * QEMU model of the Xilinx Versal CANFD Controller.
27
+ *
28
+ * Copyright (c) 2023 Advanced Micro Devices, Inc.
29
+ *
30
+ * Written-by: Vikram Garhwal<vikram.garhwal@amd.com>
31
+ * Based on QEMU CANFD Device emulation implemented by Jin Yang, Deniz Eren and
32
+ * Pavel Pisa.
33
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
34
+ * of this software and associated documentation files (the "Software"), to deal
35
+ * in the Software without restriction, including without limitation the rights
36
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
37
+ * copies of the Software, and to permit persons to whom the Software is
38
+ * furnished to do so, subject to the following conditions:
39
+ *
40
+ * The above copyright notice and this permission notice shall be included in
41
+ * all copies or substantial portions of the Software.
42
+ *
43
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
44
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
45
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
46
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
47
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
48
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
49
+ * THE SOFTWARE.
50
+ */
51
+
52
+#ifndef HW_CANFD_XILINX_H
53
+#define HW_CANFD_XILINX_H
54
+
55
+#include "hw/register.h"
56
+#include "hw/ptimer.h"
57
+#include "net/can_emu.h"
58
+#include "hw/qdev-clock.h"
59
+
60
+#define TYPE_XILINX_CANFD "xlnx.versal-canfd"
61
+
62
+OBJECT_DECLARE_SIMPLE_TYPE(XlnxVersalCANFDState, XILINX_CANFD)
63
+
64
+#define NUM_REGS_PER_MSG_SPACE 18 /* 1 ID + 1 DLC + 16 Data(DW0 - DW15) regs. */
65
+#define MAX_NUM_RX 64
66
+#define OFFSET_RX1_DW15 (0x4144 / 4)
67
+#define CANFD_TIMER_MAX 0xFFFFUL
68
+#define CANFD_DEFAULT_CLOCK (25 * 1000 * 1000)
69
+
70
+#define XLNX_VERSAL_CANFD_R_MAX (OFFSET_RX1_DW15 + \
71
+ ((MAX_NUM_RX - 1) * NUM_REGS_PER_MSG_SPACE) + 1)
72
+
73
+typedef struct XlnxVersalCANFDState {
74
+ SysBusDevice parent_obj;
75
+ MemoryRegion iomem;
76
+
77
+ qemu_irq irq_canfd_int;
78
+ qemu_irq irq_addr_err;
79
+
80
+ RegisterInfo reg_info[XLNX_VERSAL_CANFD_R_MAX];
81
+ RegisterAccessInfo *tx_regs;
82
+ RegisterAccessInfo *rx0_regs;
83
+ RegisterAccessInfo *rx1_regs;
84
+ RegisterAccessInfo *af_regs;
85
+ RegisterAccessInfo *txe_regs;
86
+ RegisterAccessInfo *rx_mailbox_regs;
87
+ RegisterAccessInfo *af_mask_regs_mailbox;
88
+
89
+ uint32_t regs[XLNX_VERSAL_CANFD_R_MAX];
90
+
91
+ ptimer_state *canfd_timer;
92
+
93
+ CanBusClientState bus_client;
94
+ CanBusState *canfdbus;
95
+
96
+ struct {
97
+ uint8_t rx0_fifo;
98
+ uint8_t rx1_fifo;
99
+ uint8_t tx_fifo;
100
+ bool enable_rx_fifo1;
101
+ uint32_t ext_clk_freq;
102
+ } cfg;
103
+
104
+} XlnxVersalCANFDState;
105
+
106
+typedef struct tx_ready_reg_info {
107
+ uint32_t can_id;
108
+ uint32_t reg_num;
109
+} tx_ready_reg_info;
110
+
111
+#endif
112
diff --git a/hw/net/can/xlnx-versal-canfd.c b/hw/net/can/xlnx-versal-canfd.c
113
new file mode 100644
114
index XXXXXXX..XXXXXXX
115
--- /dev/null
116
+++ b/hw/net/can/xlnx-versal-canfd.c
117
@@ -XXX,XX +XXX,XX @@
118
+/*
119
+ * QEMU model of the Xilinx Versal CANFD device.
120
+ *
121
+ * This implementation is based on the following datasheet:
122
+ * https://docs.xilinx.com/v/u/2.0-English/pg223-canfd
123
+ *
124
+ * Copyright (c) 2023 Advanced Micro Devices, Inc.
125
+ *
126
+ * Written-by: Vikram Garhwal <vikram.garhwal@amd.com>
127
+ *
128
+ * Based on QEMU CANFD Device emulation implemented by Jin Yang, Deniz Eren and
129
+ * Pavel Pisa
130
+ *
131
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
132
+ * of this software and associated documentation files (the "Software"), to deal
133
+ * in the Software without restriction, including without limitation the rights
134
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
135
+ * copies of the Software, and to permit persons to whom the Software is
136
+ * furnished to do so, subject to the following conditions:
137
+ *
138
+ * The above copyright notice and this permission notice shall be included in
139
+ * all copies or substantial portions of the Software.
140
+ *
141
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
142
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
143
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
144
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
145
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
146
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
147
+ * THE SOFTWARE.
148
+ */
149
+
150
+#include "qemu/osdep.h"
151
+#include "hw/sysbus.h"
152
+#include "hw/irq.h"
153
+#include "hw/register.h"
154
+#include "qapi/error.h"
155
+#include "qemu/bitops.h"
156
+#include "qemu/log.h"
157
+#include "qemu/cutils.h"
158
+#include "qemu/event_notifier.h"
159
+#include "hw/qdev-properties.h"
160
+#include "qom/object_interfaces.h"
161
+#include "migration/vmstate.h"
162
+#include "hw/net/xlnx-versal-canfd.h"
163
+#include "trace.h"
164
+
165
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
166
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
167
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
168
+REG32(MODE_SELECT_REGISTER, 0x4)
169
+ FIELD(MODE_SELECT_REGISTER, ITO, 8, 8)
170
+ FIELD(MODE_SELECT_REGISTER, ABR, 7, 1)
171
+ FIELD(MODE_SELECT_REGISTER, SBR, 6, 1)
172
+ FIELD(MODE_SELECT_REGISTER, DPEE, 5, 1)
173
+ FIELD(MODE_SELECT_REGISTER, DAR, 4, 1)
174
+ FIELD(MODE_SELECT_REGISTER, BRSD, 3, 1)
175
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
176
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
177
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
178
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
179
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
180
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
181
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 16, 7)
182
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 8, 7)
183
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 8)
184
+REG32(ERROR_COUNTER_REGISTER, 0x10)
185
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
186
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
187
+REG32(ERROR_STATUS_REGISTER, 0x14)
188
+ FIELD(ERROR_STATUS_REGISTER, F_BERR, 11, 1)
189
+ FIELD(ERROR_STATUS_REGISTER, F_STER, 10, 1)
190
+ FIELD(ERROR_STATUS_REGISTER, F_FMER, 9, 1)
191
+ FIELD(ERROR_STATUS_REGISTER, F_CRCER, 8, 1)
192
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
193
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
194
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
195
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
196
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
197
+REG32(STATUS_REGISTER, 0x18)
198
+ FIELD(STATUS_REGISTER, TDCV, 16, 7)
199
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
200
+ FIELD(STATUS_REGISTER, BSFR_CONFIG, 10, 1)
201
+ FIELD(STATUS_REGISTER, PEE_CONFIG, 9, 1)
202
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
203
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
204
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
205
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
206
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
207
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
208
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
209
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
210
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
211
+ FIELD(INTERRUPT_STATUS_REGISTER, TXEWMFLL, 31, 1)
212
+ FIELD(INTERRUPT_STATUS_REGISTER, TXEOFLW, 30, 1)
213
+ FIELD(INTERRUPT_STATUS_REGISTER, RXBOFLW_BI, 24, 6)
214
+ FIELD(INTERRUPT_STATUS_REGISTER, RXLRM_BI, 18, 6)
215
+ FIELD(INTERRUPT_STATUS_REGISTER, RXMNF, 17, 1)
216
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL_1, 16, 1)
217
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFOFLW_1, 15, 1)
218
+ FIELD(INTERRUPT_STATUS_REGISTER, TXCRS, 14, 1)
219
+ FIELD(INTERRUPT_STATUS_REGISTER, TXRRS, 13, 1)
220
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
221
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
222
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
223
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
224
+ /*
225
+ * In the original HW description the bit below is named ERROR, but that
+ * field name collides with a macro in the Windows build. To avoid Windows
+ * build failures, the bit is renamed to ERROR_BIT.
228
+ */
229
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR_BIT, 8, 1)
230
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFOFLW, 6, 1)
231
+ FIELD(INTERRUPT_STATUS_REGISTER, TSCNT_OFLW, 5, 1)
232
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
233
+ FIELD(INTERRUPT_STATUS_REGISTER, BSFRD, 3, 1)
234
+ FIELD(INTERRUPT_STATUS_REGISTER, PEE, 2, 1)
235
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
236
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
237
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
238
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXEWMFLL, 31, 1)
239
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXEOFLW, 30, 1)
240
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXMNF, 17, 1)
241
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL_1, 16, 1)
242
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFOFLW_1, 15, 1)
243
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXCRS, 14, 1)
244
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXRRS, 13, 1)
245
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
246
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
247
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
248
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
249
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
250
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERFXOFLW, 6, 1)
251
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETSCNT_OFLW, 5, 1)
252
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
253
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSFRD, 3, 1)
254
+ FIELD(INTERRUPT_ENABLE_REGISTER, EPEE, 2, 1)
255
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
256
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLOST, 0, 1)
257
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
258
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXEWMFLL, 31, 1)
259
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXEOFLW, 30, 1)
260
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXMNF, 17, 1)
261
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL_1, 16, 1)
262
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFOFLW_1, 15, 1)
263
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXCRS, 14, 1)
264
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXRRS, 13, 1)
265
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
266
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
267
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
268
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
269
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
270
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRFXOFLW, 6, 1)
271
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTSCNT_OFLW, 5, 1)
272
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSFRD, 3, 1)
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CPEE, 2, 1)
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLOST, 0, 1)
277
+REG32(TIMESTAMP_REGISTER, 0x28)
278
+ FIELD(TIMESTAMP_REGISTER, TIMESTAMP_CNT, 16, 16)
279
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
280
+REG32(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x88)
281
+ FIELD(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, TDC, 16, 1)
282
+ FIELD(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, TDCOFF, 8, 6)
283
+ FIELD(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, DP_BRP, 0, 8)
284
+REG32(DATA_PHASE_BIT_TIMING_REGISTER, 0x8c)
285
+ FIELD(DATA_PHASE_BIT_TIMING_REGISTER, DP_SJW, 16, 4)
286
+ FIELD(DATA_PHASE_BIT_TIMING_REGISTER, DP_TS2, 8, 4)
287
+ FIELD(DATA_PHASE_BIT_TIMING_REGISTER, DP_TS1, 0, 5)
288
+REG32(TX_BUFFER_READY_REQUEST_REGISTER, 0x90)
289
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR31, 31, 1)
290
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR30, 30, 1)
291
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR29, 29, 1)
292
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR28, 28, 1)
293
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR27, 27, 1)
294
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR26, 26, 1)
295
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR25, 25, 1)
296
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR24, 24, 1)
297
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR23, 23, 1)
298
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR22, 22, 1)
299
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR21, 21, 1)
300
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR20, 20, 1)
301
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR19, 19, 1)
302
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR18, 18, 1)
303
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR17, 17, 1)
304
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR16, 16, 1)
305
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR15, 15, 1)
306
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR14, 14, 1)
307
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR13, 13, 1)
308
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR12, 12, 1)
309
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR11, 11, 1)
310
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR10, 10, 1)
311
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR9, 9, 1)
312
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR8, 8, 1)
313
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR7, 7, 1)
314
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR6, 6, 1)
315
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR5, 5, 1)
316
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR4, 4, 1)
317
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR3, 3, 1)
318
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR2, 2, 1)
319
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR1, 1, 1)
320
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR0, 0, 1)
321
+REG32(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, 0x94)
322
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS31, 31, 1)
323
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS30, 30, 1)
324
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS29, 29, 1)
325
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS28, 28, 1)
326
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS27, 27, 1)
327
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS26, 26, 1)
328
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS25, 25, 1)
329
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS24, 24, 1)
330
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS23, 23, 1)
331
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS22, 22, 1)
332
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS21, 21, 1)
333
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS20, 20, 1)
334
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS19, 19, 1)
335
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS18, 18, 1)
336
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS17, 17, 1)
337
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS16, 16, 1)
338
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS15, 15, 1)
339
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS14, 14, 1)
340
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS13, 13, 1)
341
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS12, 12, 1)
342
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS11, 11, 1)
343
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS10, 10, 1)
344
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS9, 9, 1)
345
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS8, 8, 1)
346
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS7, 7, 1)
347
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS6, 6, 1)
348
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS5, 5, 1)
349
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS4, 4, 1)
350
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS3, 3, 1)
351
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS2, 2, 1)
352
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS1, 1, 1)
353
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS0, 0, 1)
354
+REG32(TX_BUFFER_CANCEL_REQUEST_REGISTER, 0x98)
355
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR31, 31, 1)
356
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR30, 30, 1)
357
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR29, 29, 1)
358
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR28, 28, 1)
359
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR27, 27, 1)
360
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR26, 26, 1)
361
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR25, 25, 1)
362
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR24, 24, 1)
363
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR23, 23, 1)
364
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR22, 22, 1)
365
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR21, 21, 1)
366
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR20, 20, 1)
367
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR19, 19, 1)
368
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR18, 18, 1)
369
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR17, 17, 1)
370
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR16, 16, 1)
371
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR15, 15, 1)
372
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR14, 14, 1)
373
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR13, 13, 1)
374
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR12, 12, 1)
375
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR11, 11, 1)
376
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR10, 10, 1)
377
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR9, 9, 1)
378
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR8, 8, 1)
379
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR7, 7, 1)
380
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR6, 6, 1)
381
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR5, 5, 1)
382
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR4, 4, 1)
383
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR3, 3, 1)
384
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR2, 2, 1)
385
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR1, 1, 1)
386
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR0, 0, 1)
387
+REG32(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, 0x9c)
388
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS31, 31,
389
+ 1)
390
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS30, 30,
391
+ 1)
392
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS29, 29,
393
+ 1)
394
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS28, 28,
395
+ 1)
396
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS27, 27,
397
+ 1)
398
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS26, 26,
399
+ 1)
400
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS25, 25,
401
+ 1)
402
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS24, 24,
403
+ 1)
404
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS23, 23,
405
+ 1)
406
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS22, 22,
407
+ 1)
408
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS21, 21,
409
+ 1)
410
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS20, 20,
411
+ 1)
412
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS19, 19,
413
+ 1)
414
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS18, 18,
415
+ 1)
416
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS17, 17,
417
+ 1)
418
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS16, 16,
419
+ 1)
420
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS15, 15,
421
+ 1)
422
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS14, 14,
423
+ 1)
424
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS13, 13,
425
+ 1)
426
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS12, 12,
427
+ 1)
428
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS11, 11,
429
+ 1)
430
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS10, 10,
431
+ 1)
432
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS9, 9, 1)
433
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS8, 8, 1)
434
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS7, 7, 1)
435
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS6, 6, 1)
436
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS5, 5, 1)
437
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS4, 4, 1)
438
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS3, 3, 1)
439
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS2, 2, 1)
440
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS1, 1, 1)
441
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS0, 0, 1)
442
+REG32(TX_EVENT_FIFO_STATUS_REGISTER, 0xa0)
443
+ FIELD(TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL, 8, 6)
444
+ FIELD(TX_EVENT_FIFO_STATUS_REGISTER, TXE_IRI, 7, 1)
445
+ FIELD(TX_EVENT_FIFO_STATUS_REGISTER, TXE_RI, 0, 5)
446
+REG32(TX_EVENT_FIFO_WATERMARK_REGISTER, 0xa4)
447
+ FIELD(TX_EVENT_FIFO_WATERMARK_REGISTER, TXE_FWM, 0, 5)
448
+REG32(ACCEPTANCE_FILTER_CONTROL_REGISTER, 0xe0)
449
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF31, 31, 1)
450
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF30, 30, 1)
451
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF29, 29, 1)
452
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF28, 28, 1)
453
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF27, 27, 1)
454
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF26, 26, 1)
455
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF25, 25, 1)
456
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF24, 24, 1)
457
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF23, 23, 1)
458
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF22, 22, 1)
459
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF21, 21, 1)
460
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF20, 20, 1)
461
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF19, 19, 1)
462
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF18, 18, 1)
463
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF17, 17, 1)
464
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF16, 16, 1)
465
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF15, 15, 1)
466
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF14, 14, 1)
467
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF13, 13, 1)
468
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF12, 12, 1)
469
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF11, 11, 1)
470
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF10, 10, 1)
471
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF9, 9, 1)
472
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF8, 8, 1)
473
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF7, 7, 1)
474
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF6, 6, 1)
475
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF5, 5, 1)
476
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF4, 4, 1)
477
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF3, 3, 1)
478
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF2, 2, 1)
479
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF1, 1, 1)
480
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF0, 0, 1)
481
+REG32(RX_FIFO_STATUS_REGISTER, 0xe8)
482
+ FIELD(RX_FIFO_STATUS_REGISTER, FL_1, 24, 7)
483
+ FIELD(RX_FIFO_STATUS_REGISTER, IRI_1, 23, 1)
484
+ FIELD(RX_FIFO_STATUS_REGISTER, RI_1, 16, 6)
485
+ FIELD(RX_FIFO_STATUS_REGISTER, FL, 8, 7)
486
+ FIELD(RX_FIFO_STATUS_REGISTER, IRI, 7, 1)
487
+ FIELD(RX_FIFO_STATUS_REGISTER, RI, 0, 6)
488
+REG32(RX_FIFO_WATERMARK_REGISTER, 0xec)
489
+ FIELD(RX_FIFO_WATERMARK_REGISTER, RXFP, 16, 5)
490
+ FIELD(RX_FIFO_WATERMARK_REGISTER, RXFWM_1, 8, 6)
491
+ FIELD(RX_FIFO_WATERMARK_REGISTER, RXFWM, 0, 6)
492
+REG32(TB_ID_REGISTER, 0x100)
493
+ FIELD(TB_ID_REGISTER, ID, 21, 11)
494
+ FIELD(TB_ID_REGISTER, SRR_RTR_RRS, 20, 1)
495
+ FIELD(TB_ID_REGISTER, IDE, 19, 1)
496
+ FIELD(TB_ID_REGISTER, ID_EXT, 1, 18)
497
+ FIELD(TB_ID_REGISTER, RTR_RRS, 0, 1)
498
+REG32(TB0_DLC_REGISTER, 0x104)
499
+ FIELD(TB0_DLC_REGISTER, DLC, 28, 4)
500
+ FIELD(TB0_DLC_REGISTER, FDF, 27, 1)
501
+ FIELD(TB0_DLC_REGISTER, BRS, 26, 1)
502
+ FIELD(TB0_DLC_REGISTER, RSVD2, 25, 1)
503
+ FIELD(TB0_DLC_REGISTER, EFC, 24, 1)
504
+ FIELD(TB0_DLC_REGISTER, MM, 16, 8)
505
+ FIELD(TB0_DLC_REGISTER, RSVD1, 0, 16)
506
+REG32(TB_DW0_REGISTER, 0x108)
507
+ FIELD(TB_DW0_REGISTER, DATA_BYTES0, 24, 8)
508
+ FIELD(TB_DW0_REGISTER, DATA_BYTES1, 16, 8)
509
+ FIELD(TB_DW0_REGISTER, DATA_BYTES2, 8, 8)
510
+ FIELD(TB_DW0_REGISTER, DATA_BYTES3, 0, 8)
511
+REG32(TB_DW1_REGISTER, 0x10c)
512
+ FIELD(TB_DW1_REGISTER, DATA_BYTES4, 24, 8)
513
+ FIELD(TB_DW1_REGISTER, DATA_BYTES5, 16, 8)
514
+ FIELD(TB_DW1_REGISTER, DATA_BYTES6, 8, 8)
515
+ FIELD(TB_DW1_REGISTER, DATA_BYTES7, 0, 8)
516
+REG32(TB_DW2_REGISTER, 0x110)
517
+ FIELD(TB_DW2_REGISTER, DATA_BYTES8, 24, 8)
518
+ FIELD(TB_DW2_REGISTER, DATA_BYTES9, 16, 8)
519
+ FIELD(TB_DW2_REGISTER, DATA_BYTES10, 8, 8)
520
+ FIELD(TB_DW2_REGISTER, DATA_BYTES11, 0, 8)
521
+REG32(TB_DW3_REGISTER, 0x114)
522
+ FIELD(TB_DW3_REGISTER, DATA_BYTES12, 24, 8)
523
+ FIELD(TB_DW3_REGISTER, DATA_BYTES13, 16, 8)
524
+ FIELD(TB_DW3_REGISTER, DATA_BYTES14, 8, 8)
525
+ FIELD(TB_DW3_REGISTER, DATA_BYTES15, 0, 8)
526
+REG32(TB_DW4_REGISTER, 0x118)
527
+ FIELD(TB_DW4_REGISTER, DATA_BYTES16, 24, 8)
528
+ FIELD(TB_DW4_REGISTER, DATA_BYTES17, 16, 8)
529
+ FIELD(TB_DW4_REGISTER, DATA_BYTES18, 8, 8)
530
+ FIELD(TB_DW4_REGISTER, DATA_BYTES19, 0, 8)
531
+REG32(TB_DW5_REGISTER, 0x11c)
532
+ FIELD(TB_DW5_REGISTER, DATA_BYTES20, 24, 8)
533
+ FIELD(TB_DW5_REGISTER, DATA_BYTES21, 16, 8)
534
+ FIELD(TB_DW5_REGISTER, DATA_BYTES22, 8, 8)
535
+ FIELD(TB_DW5_REGISTER, DATA_BYTES23, 0, 8)
536
+REG32(TB_DW6_REGISTER, 0x120)
537
+ FIELD(TB_DW6_REGISTER, DATA_BYTES24, 24, 8)
538
+ FIELD(TB_DW6_REGISTER, DATA_BYTES25, 16, 8)
539
+ FIELD(TB_DW6_REGISTER, DATA_BYTES26, 8, 8)
540
+ FIELD(TB_DW6_REGISTER, DATA_BYTES27, 0, 8)
541
+REG32(TB_DW7_REGISTER, 0x124)
542
+ FIELD(TB_DW7_REGISTER, DATA_BYTES28, 24, 8)
543
+ FIELD(TB_DW7_REGISTER, DATA_BYTES29, 16, 8)
544
+ FIELD(TB_DW7_REGISTER, DATA_BYTES30, 8, 8)
545
+ FIELD(TB_DW7_REGISTER, DATA_BYTES31, 0, 8)
546
+REG32(TB_DW8_REGISTER, 0x128)
547
+ FIELD(TB_DW8_REGISTER, DATA_BYTES32, 24, 8)
548
+ FIELD(TB_DW8_REGISTER, DATA_BYTES33, 16, 8)
549
+ FIELD(TB_DW8_REGISTER, DATA_BYTES34, 8, 8)
550
+ FIELD(TB_DW8_REGISTER, DATA_BYTES35, 0, 8)
551
+REG32(TB_DW9_REGISTER, 0x12c)
552
+ FIELD(TB_DW9_REGISTER, DATA_BYTES36, 24, 8)
553
+ FIELD(TB_DW9_REGISTER, DATA_BYTES37, 16, 8)
554
+ FIELD(TB_DW9_REGISTER, DATA_BYTES38, 8, 8)
555
+ FIELD(TB_DW9_REGISTER, DATA_BYTES39, 0, 8)
556
+REG32(TB_DW10_REGISTER, 0x130)
557
+ FIELD(TB_DW10_REGISTER, DATA_BYTES40, 24, 8)
558
+ FIELD(TB_DW10_REGISTER, DATA_BYTES41, 16, 8)
559
+ FIELD(TB_DW10_REGISTER, DATA_BYTES42, 8, 8)
560
+ FIELD(TB_DW10_REGISTER, DATA_BYTES43, 0, 8)
561
+REG32(TB_DW11_REGISTER, 0x134)
562
+ FIELD(TB_DW11_REGISTER, DATA_BYTES44, 24, 8)
563
+ FIELD(TB_DW11_REGISTER, DATA_BYTES45, 16, 8)
564
+ FIELD(TB_DW11_REGISTER, DATA_BYTES46, 8, 8)
565
+ FIELD(TB_DW11_REGISTER, DATA_BYTES47, 0, 8)
566
+REG32(TB_DW12_REGISTER, 0x138)
567
+ FIELD(TB_DW12_REGISTER, DATA_BYTES48, 24, 8)
568
+ FIELD(TB_DW12_REGISTER, DATA_BYTES49, 16, 8)
569
+ FIELD(TB_DW12_REGISTER, DATA_BYTES50, 8, 8)
570
+ FIELD(TB_DW12_REGISTER, DATA_BYTES51, 0, 8)
571
+REG32(TB_DW13_REGISTER, 0x13c)
572
+ FIELD(TB_DW13_REGISTER, DATA_BYTES52, 24, 8)
573
+ FIELD(TB_DW13_REGISTER, DATA_BYTES53, 16, 8)
574
+ FIELD(TB_DW13_REGISTER, DATA_BYTES54, 8, 8)
575
+ FIELD(TB_DW13_REGISTER, DATA_BYTES55, 0, 8)
576
+REG32(TB_DW14_REGISTER, 0x140)
577
+ FIELD(TB_DW14_REGISTER, DATA_BYTES56, 24, 8)
578
+ FIELD(TB_DW14_REGISTER, DATA_BYTES57, 16, 8)
579
+ FIELD(TB_DW14_REGISTER, DATA_BYTES58, 8, 8)
580
+ FIELD(TB_DW14_REGISTER, DATA_BYTES59, 0, 8)
581
+REG32(TB_DW15_REGISTER, 0x144)
582
+ FIELD(TB_DW15_REGISTER, DATA_BYTES60, 24, 8)
583
+ FIELD(TB_DW15_REGISTER, DATA_BYTES61, 16, 8)
584
+ FIELD(TB_DW15_REGISTER, DATA_BYTES62, 8, 8)
585
+ FIELD(TB_DW15_REGISTER, DATA_BYTES63, 0, 8)
586
+REG32(AFMR_REGISTER, 0xa00)
587
+ FIELD(AFMR_REGISTER, AMID, 21, 11)
588
+ FIELD(AFMR_REGISTER, AMSRR, 20, 1)
589
+ FIELD(AFMR_REGISTER, AMIDE, 19, 1)
590
+ FIELD(AFMR_REGISTER, AMID_EXT, 1, 18)
591
+ FIELD(AFMR_REGISTER, AMRTR, 0, 1)
592
+REG32(AFIR_REGISTER, 0xa04)
593
+ FIELD(AFIR_REGISTER, AIID, 21, 11)
594
+ FIELD(AFIR_REGISTER, AISRR, 20, 1)
595
+ FIELD(AFIR_REGISTER, AIIDE, 19, 1)
596
+ FIELD(AFIR_REGISTER, AIID_EXT, 1, 18)
597
+ FIELD(AFIR_REGISTER, AIRTR, 0, 1)
598
+REG32(TXE_FIFO_TB_ID_REGISTER, 0x2000)
599
+ FIELD(TXE_FIFO_TB_ID_REGISTER, ID, 21, 11)
600
+ FIELD(TXE_FIFO_TB_ID_REGISTER, SRR_RTR_RRS, 20, 1)
601
+ FIELD(TXE_FIFO_TB_ID_REGISTER, IDE, 19, 1)
602
+ FIELD(TXE_FIFO_TB_ID_REGISTER, ID_EXT, 1, 18)
603
+ FIELD(TXE_FIFO_TB_ID_REGISTER, RTR_RRS, 0, 1)
604
+REG32(TXE_FIFO_TB_DLC_REGISTER, 0x2004)
605
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, DLC, 28, 4)
606
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, FDF, 27, 1)
607
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, BRS, 26, 1)
608
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, ET, 24, 2)
609
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, MM, 16, 8)
610
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, TIMESTAMP, 0, 16)
611
+REG32(RB_ID_REGISTER, 0x2100)
612
+ FIELD(RB_ID_REGISTER, ID, 21, 11)
613
+ FIELD(RB_ID_REGISTER, SRR_RTR_RRS, 20, 1)
614
+ FIELD(RB_ID_REGISTER, IDE, 19, 1)
615
+ FIELD(RB_ID_REGISTER, ID_EXT, 1, 18)
616
+ FIELD(RB_ID_REGISTER, RTR_RRS, 0, 1)
617
+REG32(RB_DLC_REGISTER, 0x2104)
618
+ FIELD(RB_DLC_REGISTER, DLC, 28, 4)
619
+ FIELD(RB_DLC_REGISTER, FDF, 27, 1)
620
+ FIELD(RB_DLC_REGISTER, BRS, 26, 1)
621
+ FIELD(RB_DLC_REGISTER, ESI, 25, 1)
622
+ FIELD(RB_DLC_REGISTER, MATCHED_FILTER_INDEX, 16, 5)
623
+ FIELD(RB_DLC_REGISTER, TIMESTAMP, 0, 16)
624
+REG32(RB_DW0_REGISTER, 0x2108)
625
+ FIELD(RB_DW0_REGISTER, DATA_BYTES0, 24, 8)
626
+ FIELD(RB_DW0_REGISTER, DATA_BYTES1, 16, 8)
627
+ FIELD(RB_DW0_REGISTER, DATA_BYTES2, 8, 8)
628
+ FIELD(RB_DW0_REGISTER, DATA_BYTES3, 0, 8)
629
+REG32(RB_DW1_REGISTER, 0x210c)
630
+ FIELD(RB_DW1_REGISTER, DATA_BYTES4, 24, 8)
631
+ FIELD(RB_DW1_REGISTER, DATA_BYTES5, 16, 8)
632
+ FIELD(RB_DW1_REGISTER, DATA_BYTES6, 8, 8)
633
+ FIELD(RB_DW1_REGISTER, DATA_BYTES7, 0, 8)
634
+REG32(RB_DW2_REGISTER, 0x2110)
635
+ FIELD(RB_DW2_REGISTER, DATA_BYTES8, 24, 8)
636
+ FIELD(RB_DW2_REGISTER, DATA_BYTES9, 16, 8)
637
+ FIELD(RB_DW2_REGISTER, DATA_BYTES10, 8, 8)
638
+ FIELD(RB_DW2_REGISTER, DATA_BYTES11, 0, 8)
639
+REG32(RB_DW3_REGISTER, 0x2114)
640
+ FIELD(RB_DW3_REGISTER, DATA_BYTES12, 24, 8)
641
+ FIELD(RB_DW3_REGISTER, DATA_BYTES13, 16, 8)
642
+ FIELD(RB_DW3_REGISTER, DATA_BYTES14, 8, 8)
643
+ FIELD(RB_DW3_REGISTER, DATA_BYTES15, 0, 8)
644
+REG32(RB_DW4_REGISTER, 0x2118)
645
+ FIELD(RB_DW4_REGISTER, DATA_BYTES16, 24, 8)
646
+ FIELD(RB_DW4_REGISTER, DATA_BYTES17, 16, 8)
647
+ FIELD(RB_DW4_REGISTER, DATA_BYTES18, 8, 8)
648
+ FIELD(RB_DW4_REGISTER, DATA_BYTES19, 0, 8)
649
+REG32(RB_DW5_REGISTER, 0x211c)
650
+ FIELD(RB_DW5_REGISTER, DATA_BYTES20, 24, 8)
651
+ FIELD(RB_DW5_REGISTER, DATA_BYTES21, 16, 8)
652
+ FIELD(RB_DW5_REGISTER, DATA_BYTES22, 8, 8)
653
+ FIELD(RB_DW5_REGISTER, DATA_BYTES23, 0, 8)
654
+REG32(RB_DW6_REGISTER, 0x2120)
655
+ FIELD(RB_DW6_REGISTER, DATA_BYTES24, 24, 8)
656
+ FIELD(RB_DW6_REGISTER, DATA_BYTES25, 16, 8)
657
+ FIELD(RB_DW6_REGISTER, DATA_BYTES26, 8, 8)
658
+ FIELD(RB_DW6_REGISTER, DATA_BYTES27, 0, 8)
659
+REG32(RB_DW7_REGISTER, 0x2124)
660
+ FIELD(RB_DW7_REGISTER, DATA_BYTES28, 24, 8)
661
+ FIELD(RB_DW7_REGISTER, DATA_BYTES29, 16, 8)
662
+ FIELD(RB_DW7_REGISTER, DATA_BYTES30, 8, 8)
663
+ FIELD(RB_DW7_REGISTER, DATA_BYTES31, 0, 8)
664
+REG32(RB_DW8_REGISTER, 0x2128)
665
+ FIELD(RB_DW8_REGISTER, DATA_BYTES32, 24, 8)
666
+ FIELD(RB_DW8_REGISTER, DATA_BYTES33, 16, 8)
667
+ FIELD(RB_DW8_REGISTER, DATA_BYTES34, 8, 8)
668
+ FIELD(RB_DW8_REGISTER, DATA_BYTES35, 0, 8)
669
+REG32(RB_DW9_REGISTER, 0x212c)
670
+ FIELD(RB_DW9_REGISTER, DATA_BYTES36, 24, 8)
671
+ FIELD(RB_DW9_REGISTER, DATA_BYTES37, 16, 8)
672
+ FIELD(RB_DW9_REGISTER, DATA_BYTES38, 8, 8)
673
+ FIELD(RB_DW9_REGISTER, DATA_BYTES39, 0, 8)
674
+REG32(RB_DW10_REGISTER, 0x2130)
675
+ FIELD(RB_DW10_REGISTER, DATA_BYTES40, 24, 8)
676
+ FIELD(RB_DW10_REGISTER, DATA_BYTES41, 16, 8)
677
+ FIELD(RB_DW10_REGISTER, DATA_BYTES42, 8, 8)
678
+ FIELD(RB_DW10_REGISTER, DATA_BYTES43, 0, 8)
679
+REG32(RB_DW11_REGISTER, 0x2134)
680
+ FIELD(RB_DW11_REGISTER, DATA_BYTES44, 24, 8)
681
+ FIELD(RB_DW11_REGISTER, DATA_BYTES45, 16, 8)
682
+ FIELD(RB_DW11_REGISTER, DATA_BYTES46, 8, 8)
683
+ FIELD(RB_DW11_REGISTER, DATA_BYTES47, 0, 8)
684
+REG32(RB_DW12_REGISTER, 0x2138)
685
+ FIELD(RB_DW12_REGISTER, DATA_BYTES48, 24, 8)
686
+ FIELD(RB_DW12_REGISTER, DATA_BYTES49, 16, 8)
687
+ FIELD(RB_DW12_REGISTER, DATA_BYTES50, 8, 8)
688
+ FIELD(RB_DW12_REGISTER, DATA_BYTES51, 0, 8)
689
+REG32(RB_DW13_REGISTER, 0x213c)
690
+ FIELD(RB_DW13_REGISTER, DATA_BYTES52, 24, 8)
691
+ FIELD(RB_DW13_REGISTER, DATA_BYTES53, 16, 8)
692
+ FIELD(RB_DW13_REGISTER, DATA_BYTES54, 8, 8)
693
+ FIELD(RB_DW13_REGISTER, DATA_BYTES55, 0, 8)
694
+REG32(RB_DW14_REGISTER, 0x2140)
695
+ FIELD(RB_DW14_REGISTER, DATA_BYTES56, 24, 8)
696
+ FIELD(RB_DW14_REGISTER, DATA_BYTES57, 16, 8)
697
+ FIELD(RB_DW14_REGISTER, DATA_BYTES58, 8, 8)
698
+ FIELD(RB_DW14_REGISTER, DATA_BYTES59, 0, 8)
699
+REG32(RB_DW15_REGISTER, 0x2144)
700
+ FIELD(RB_DW15_REGISTER, DATA_BYTES60, 24, 8)
701
+ FIELD(RB_DW15_REGISTER, DATA_BYTES61, 16, 8)
702
+ FIELD(RB_DW15_REGISTER, DATA_BYTES62, 8, 8)
703
+ FIELD(RB_DW15_REGISTER, DATA_BYTES63, 0, 8)
704
+REG32(RB_ID_REGISTER_1, 0x4100)
705
+ FIELD(RB_ID_REGISTER_1, ID, 21, 11)
706
+ FIELD(RB_ID_REGISTER_1, SRR_RTR_RRS, 20, 1)
707
+ FIELD(RB_ID_REGISTER_1, IDE, 19, 1)
708
+ FIELD(RB_ID_REGISTER_1, ID_EXT, 1, 18)
709
+ FIELD(RB_ID_REGISTER_1, RTR_RRS, 0, 1)
710
+REG32(RB_DLC_REGISTER_1, 0x4104)
711
+ FIELD(RB_DLC_REGISTER_1, DLC, 28, 4)
712
+ FIELD(RB_DLC_REGISTER_1, FDF, 27, 1)
713
+ FIELD(RB_DLC_REGISTER_1, BRS, 26, 1)
714
+ FIELD(RB_DLC_REGISTER_1, ESI, 25, 1)
715
+ FIELD(RB_DLC_REGISTER_1, MATCHED_FILTER_INDEX, 16, 5)
716
+ FIELD(RB_DLC_REGISTER_1, TIMESTAMP, 0, 16)
717
+REG32(RB0_DW0_REGISTER_1, 0x4108)
718
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES0, 24, 8)
719
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES1, 16, 8)
720
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES2, 8, 8)
721
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES3, 0, 8)
722
+REG32(RB_DW1_REGISTER_1, 0x410c)
723
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES4, 24, 8)
724
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES5, 16, 8)
725
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES6, 8, 8)
726
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES7, 0, 8)
727
+REG32(RB_DW2_REGISTER_1, 0x4110)
728
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES8, 24, 8)
729
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES9, 16, 8)
730
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES10, 8, 8)
731
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES11, 0, 8)
732
+REG32(RB_DW3_REGISTER_1, 0x4114)
733
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES12, 24, 8)
734
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES13, 16, 8)
735
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES14, 8, 8)
736
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES15, 0, 8)
737
+REG32(RB_DW4_REGISTER_1, 0x4118)
738
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES16, 24, 8)
739
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES17, 16, 8)
740
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES18, 8, 8)
741
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES19, 0, 8)
742
+REG32(RB_DW5_REGISTER_1, 0x411c)
743
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES20, 24, 8)
744
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES21, 16, 8)
745
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES22, 8, 8)
746
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES23, 0, 8)
747
+REG32(RB_DW6_REGISTER_1, 0x4120)
748
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES24, 24, 8)
749
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES25, 16, 8)
750
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES26, 8, 8)
751
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES27, 0, 8)
752
+REG32(RB_DW7_REGISTER_1, 0x4124)
753
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES28, 24, 8)
754
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES29, 16, 8)
755
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES30, 8, 8)
756
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES31, 0, 8)
757
+REG32(RB_DW8_REGISTER_1, 0x4128)
758
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES32, 24, 8)
759
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES33, 16, 8)
760
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES34, 8, 8)
761
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES35, 0, 8)
762
+REG32(RB_DW9_REGISTER_1, 0x412c)
763
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES36, 24, 8)
764
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES37, 16, 8)
765
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES38, 8, 8)
766
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES39, 0, 8)
767
+REG32(RB_DW10_REGISTER_1, 0x4130)
768
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES40, 24, 8)
769
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES41, 16, 8)
770
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES42, 8, 8)
771
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES43, 0, 8)
772
+REG32(RB_DW11_REGISTER_1, 0x4134)
773
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES44, 24, 8)
774
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES45, 16, 8)
775
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES46, 8, 8)
776
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES47, 0, 8)
777
+REG32(RB_DW12_REGISTER_1, 0x4138)
778
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES48, 24, 8)
779
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES49, 16, 8)
780
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES50, 8, 8)
781
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES51, 0, 8)
782
+REG32(RB_DW13_REGISTER_1, 0x413c)
783
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES52, 24, 8)
784
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES53, 16, 8)
785
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES54, 8, 8)
786
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES55, 0, 8)
787
+REG32(RB_DW14_REGISTER_1, 0x4140)
788
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES56, 24, 8)
789
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES57, 16, 8)
790
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES58, 8, 8)
791
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES59, 0, 8)
792
+REG32(RB_DW15_REGISTER_1, 0x4144)
793
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES60, 24, 8)
794
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES61, 16, 8)
795
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES62, 8, 8)
796
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES63, 0, 8)
797
+
798
+static uint8_t canfd_dlc_array[8] = {8, 12, 16, 20, 24, 32, 48, 64};
799
+
800
+static void canfd_update_irq(XlnxVersalCANFDState *s)
801
+{
802
+ unsigned int irq = s->regs[R_INTERRUPT_STATUS_REGISTER] &
803
+ s->regs[R_INTERRUPT_ENABLE_REGISTER];
804
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
805
+
806
+ /* RX watermark interrupts. */
807
+ if (ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, FL) >
808
+ ARRAY_FIELD_EX32(s->regs, RX_FIFO_WATERMARK_REGISTER, RXFWM)) {
809
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
810
+ }
811
+
812
+ if (ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, FL_1) >
813
+ ARRAY_FIELD_EX32(s->regs, RX_FIFO_WATERMARK_REGISTER, RXFWM_1)) {
814
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL_1, 1);
815
+ }
816
+
817
+ /* TX watermark interrupt. */
818
+ if (ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL) >
819
+ ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_WATERMARK_REGISTER, TXE_FWM)) {
820
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXEWMFLL, 1);
821
+ }
822
+
823
+ trace_xlnx_canfd_update_irq(path, s->regs[R_INTERRUPT_STATUS_REGISTER],
824
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
825
+
826
+ qemu_set_irq(s->irq_canfd_int, irq);
827
+}
828
+
829
+static void canfd_ier_post_write(RegisterInfo *reg, uint64_t val64)
830
+{
831
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
832
+
833
+ canfd_update_irq(s);
834
+}
835
+
836
+static uint64_t canfd_icr_pre_write(RegisterInfo *reg, uint64_t val64)
837
+{
838
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
839
+ uint32_t val = val64;
840
+
841
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
842
+
843
+ /*
844
+ * RXBOFLW_BI field is automatically cleared to default if RXBOFLW bit is
845
+ * cleared in ISR.
846
+ */
847
+ if (ARRAY_FIELD_EX32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL_1)) {
848
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXBOFLW_BI, 0);
849
+ }
850
+
851
+ canfd_update_irq(s);
852
+
853
+ return 0;
854
+}
855
+
856
+static void canfd_config_reset(XlnxVersalCANFDState *s)
857
+{
858
+
859
+ unsigned int i;
860
+
861
+ /* Reset all the configuration registers. */
862
+ for (i = 0; i < R_RX_FIFO_WATERMARK_REGISTER; ++i) {
863
+ register_reset(&s->reg_info[i]);
864
+ }
865
+
866
+ canfd_update_irq(s);
867
+}
868
+
869
+static void canfd_config_mode(XlnxVersalCANFDState *s)
870
+{
871
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
872
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
873
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
874
+
875
+ /* Put XlnxVersalCANFDState in configuration mode. */
876
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
877
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
878
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
879
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
880
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR_BIT, 0);
881
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW, 0);
882
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW_1, 0);
883
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
884
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
885
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
886
+
887
+ /* Clear the time stamp. */
888
+ ptimer_transaction_begin(s->canfd_timer);
889
+ ptimer_set_count(s->canfd_timer, 0);
890
+ ptimer_transaction_commit(s->canfd_timer);
891
+
892
+ canfd_update_irq(s);
893
+}
894
+
895
+static void update_status_register_mode_bits(XlnxVersalCANFDState *s)
896
+{
897
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
898
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
899
+ /* Wake up interrupt bit. */
900
+ bool wakeup_irq_val = !sleep_mode && sleep_status;
901
+ /* Sleep interrupt bit. */
902
+ bool sleep_irq_val = sleep_mode && !sleep_status;
903
+
904
+ /* Clear previous core mode status bits. */
905
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
906
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
907
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
909
+
910
+ /* set current mode bit and generate irqs accordingly. */
911
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
912
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
913
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
914
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
915
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
916
+ sleep_irq_val);
917
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
918
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
919
+ } else {
920
+ /* If all bits are zero, XlnxVersalCANFDState is set in normal mode. */
921
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
922
+ /* Set wakeup interrupt bit. */
923
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
924
+ wakeup_irq_val);
925
+ }
926
+
927
+ /* Put the CANFD in error active state. */
928
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ESTAT, 1);
929
+
930
+ canfd_update_irq(s);
931
+}
932
+
933
+static uint64_t canfd_msr_pre_write(RegisterInfo *reg, uint64_t val64)
934
+{
935
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
936
+ uint32_t val = val64;
937
+ uint8_t multi_mode = 0;
938
+
939
+ /*
940
+ * Multiple-mode check: make sure the guest does not enable more than
+ * one mode at a time.
942
+ */
943
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
944
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
945
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
946
+
947
+ if (multi_mode > 1) {
948
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempting to configure several modes"
949
+ " simultaneously. One mode will be selected according to"
950
+ " their priority: LBACK > SLEEP > SNOOP.\n");
951
+ }
952
+
953
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
954
+ /* In configuration mode, any mode can be selected. */
955
+ s->regs[R_MODE_SELECT_REGISTER] = val;
956
+ } else {
957
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
958
+
959
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
960
+
961
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
962
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempting to set LBACK mode"
963
+ " without setting CEN bit as 0\n");
964
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
965
+ qemu_log_mask(LOG_GUEST_ERROR, "Attempting to set SNOOP mode"
966
+ " without setting CEN bit as 0\n");
967
+ }
968
+
969
+ update_status_register_mode_bits(s);
970
+ }
971
+
972
+ return s->regs[R_MODE_SELECT_REGISTER];
973
+}
974
+
975
+static void canfd_exit_sleep_mode(XlnxVersalCANFDState *s)
976
+{
977
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
978
+ update_status_register_mode_bits(s);
979
+}
980
+
981
+static void regs2frame(XlnxVersalCANFDState *s, qemu_can_frame *frame,
982
+ uint32_t reg_num)
983
+{
984
+ uint32_t i = 0;
985
+ uint32_t j = 0;
986
+ uint32_t val = 0;
987
+ uint32_t dlc_reg_val = 0;
988
+ uint32_t dlc_value = 0;
989
+
990
+ /* Check that reg_num is within the TX register space. */
991
+ assert(reg_num <= R_TB_ID_REGISTER + (NUM_REGS_PER_MSG_SPACE *
992
+ s->cfg.tx_fifo));
993
+
994
+ dlc_reg_val = s->regs[reg_num + 1];
995
+ dlc_value = FIELD_EX32(dlc_reg_val, TB0_DLC_REGISTER, DLC);
996
+
997
+ frame->can_id = s->regs[reg_num];
998
+
999
+ if (FIELD_EX32(dlc_reg_val, TB0_DLC_REGISTER, FDF)) {
1000
+ /*
1001
+ * CANFD frame.
1002
+ * Convert the 4-bit DLC value (0 to 15) into a plain data length in
+ * bytes (0 to 64), as expected by SocketCAN. On a real CAN FD frame
+ * the DLC field can never exceed 0xF.
1005
+ * Conversion table for DLC to plain length:
1006
+ *
1007
+ * DLC Plain Length
1008
+ * 0 - 8 0 - 8
1009
+ * 9 9 - 12
1010
+ * 10 13 - 16
1011
+ * 11 17 - 20
1012
+ * 12 21 - 24
1013
+ * 13 25 - 32
1014
+ * 14 33 - 48
1015
+ * 15 49 - 64
1016
+ */
1017
+
1018
+ frame->flags = QEMU_CAN_FRMF_TYPE_FD;
1019
+
1020
+ if (dlc_value < 8) {
1021
+ frame->can_dlc = dlc_value;
1022
+ } else {
1023
+ assert((dlc_value - 8) < ARRAY_SIZE(canfd_dlc_array));
1024
+ frame->can_dlc = canfd_dlc_array[dlc_value - 8];
1025
+ }
1026
+ } else {
1027
+ /*
1028
+ * The FD Format bit is not set, so this is a classic CAN frame.
1029
+ * Conversion table for classic CAN:
1030
+ *
1031
+ * DLC Plain Length
1032
+ * 0 - 7 0 - 7
1033
+ * 8 - 15 8
1034
+ */
1035
+
1036
+ if (dlc_value > 8) {
1037
+ frame->can_dlc = 8;
1038
+ qemu_log_mask(LOG_GUEST_ERROR, "Maximum DLC value for Classic CAN"
1039
+ " frame is 8. Only 8 byte data will be sent.\n");
1040
+ } else {
1041
+ frame->can_dlc = dlc_value;
1042
+ }
1043
+ }
1044
+
1045
+ for (j = 0; j < frame->can_dlc; j++) {
1046
+ val = 8 * i;
1047
+
1048
+ frame->data[j] = extract32(s->regs[reg_num + 2 + (j / 4)], val, 8);
1049
+ i++;
1050
+
1051
+ if (i % 4 == 0) {
1052
+ i = 0;
1053
+ }
1054
+ }
1055
+}
1056
+
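For reference, the DLC-to-length mapping used by regs2frame() can be written as a small helper. This is only an illustrative sketch (the helper name is hypothetical, not part of the patch); it mirrors canfd_dlc_array above:

    /* Map a CAN FD DLC value (0..15) to the payload length in bytes (0..64). */
    static inline uint8_t canfd_dlc_to_len(uint8_t dlc)
    {
        static const uint8_t len[8] = {8, 12, 16, 20, 24, 32, 48, 64};

        return dlc <= 8 ? dlc : len[dlc - 8];
    }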
1057
+static void process_cancellation_requests(XlnxVersalCANFDState *s)
1058
+{
1059
+ uint32_t clear_mask = s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER] &
1060
+ s->regs[R_TX_BUFFER_CANCEL_REQUEST_REGISTER];
1061
+
1062
+ s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER] &= ~clear_mask;
1063
+ s->regs[R_TX_BUFFER_CANCEL_REQUEST_REGISTER] &= ~clear_mask;
1064
+
1065
+ canfd_update_irq(s);
1066
+}
1067
+
1068
+static void store_rx_sequential(XlnxVersalCANFDState *s,
1069
+ const qemu_can_frame *frame,
1070
+ uint32_t fill_level, uint32_t read_index,
1071
+ uint32_t store_location, uint8_t rx_fifo,
1072
+ bool rx_fifo_id, uint8_t filter_index)
1073
+{
1074
+ int i;
1075
+ bool is_canfd_frame;
1076
+ uint8_t dlc = frame->can_dlc;
1077
+ uint8_t rx_reg_num = 0;
1078
+ uint32_t dlc_reg_val = 0;
1079
+ uint32_t data_reg_val = 0;
1080
+
1081
+ /* Getting RX0/1 fill level */
1082
+ if ((fill_level) > rx_fifo - 1) {
1083
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1084
+
1085
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: RX%d Buffer is full. Discarding the"
1086
+ " message\n", path, rx_fifo_id);
1087
+
1088
+ /* Set the corresponding RF buffer overflow interrupt. */
1089
+ if (rx_fifo_id == 0) {
1090
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW, 1);
1091
+ } else {
1092
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW_1, 1);
1093
+ }
1094
+ } else {
1095
+ uint16_t rx_timestamp = CANFD_TIMER_MAX -
1096
+ ptimer_get_count(s->canfd_timer);
1097
+
1098
+ if (rx_timestamp == 0xFFFF) {
1099
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TSCNT_OFLW, 1);
1100
+ } else {
1101
+ ARRAY_FIELD_DP32(s->regs, TIMESTAMP_REGISTER, TIMESTAMP_CNT,
1102
+ rx_timestamp);
1103
+ }
1104
+
1105
+ if (rx_fifo_id == 0) {
1106
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL,
1107
+ fill_level + 1);
1108
+ assert(store_location <=
1109
+ R_RB_ID_REGISTER + (s->cfg.rx0_fifo *
1110
+ NUM_REGS_PER_MSG_SPACE));
1111
+ } else {
1112
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL_1,
1113
+ fill_level + 1);
1114
+ assert(store_location <=
1115
+ R_RB_ID_REGISTER_1 + (s->cfg.rx1_fifo *
1116
+ NUM_REGS_PER_MSG_SPACE));
1117
+ }
1118
+
1119
+ s->regs[store_location] = frame->can_id;
1120
+
1121
+ dlc = frame->can_dlc;
1122
+
1123
+ if (frame->flags == QEMU_CAN_FRMF_TYPE_FD) {
1124
+ is_canfd_frame = true;
1125
+
1126
+ /* Store dlc value in Xilinx specific format. */
1127
+ for (i = 0; i < ARRAY_SIZE(canfd_dlc_array); i++) {
1128
+ if (canfd_dlc_array[i] == frame->can_dlc) {
1129
+ dlc_reg_val = FIELD_DP32(0, RB_DLC_REGISTER, DLC, 8 + i);
1130
+ }
1131
+ }
1132
+ } else {
1133
+ is_canfd_frame = false;
1134
+
1135
+ if (frame->can_dlc > 8) {
1136
+ dlc = 8;
1137
+ }
1138
+
1139
+ dlc_reg_val = FIELD_DP32(0, RB_DLC_REGISTER, DLC, dlc);
1140
+ }
1141
+
1142
+ dlc_reg_val |= FIELD_DP32(0, RB_DLC_REGISTER, FDF, is_canfd_frame);
1143
+ dlc_reg_val |= FIELD_DP32(0, RB_DLC_REGISTER, TIMESTAMP, rx_timestamp);
1144
+ dlc_reg_val |= FIELD_DP32(0, RB_DLC_REGISTER, MATCHED_FILTER_INDEX,
1145
+ filter_index);
1146
+ s->regs[store_location + 1] = dlc_reg_val;
1147
+
1148
+ for (i = 0; i < dlc; i++) {
1149
+ /* Register size is 4 byte but frame->data each is 1 byte. */
1150
+ switch (i % 4) {
1151
+ case 0:
1152
+ rx_reg_num = i / 4;
1153
+
1154
+ data_reg_val = FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES3,
1155
+ frame->data[i]);
1156
+ break;
1157
+ case 1:
1158
+ data_reg_val |= FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES2,
1159
+ frame->data[i]);
1160
+ break;
1161
+ case 2:
1162
+ data_reg_val |= FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES1,
1163
+ frame->data[i]);
1164
+ break;
1165
+ case 3:
1166
+ data_reg_val |= FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES0,
1167
+ frame->data[i]);
1168
+ /*
1169
+ * Last byte of this word: all 4 bytes are now ready to be
+ * stored in one RX register.
1171
+ */
1172
+ s->regs[store_location + rx_reg_num + 2] = data_reg_val;
1173
+ break;
1174
+ }
1175
+ }
1176
+
1177
+ if (i % 4) {
1178
+ /*
1179
+ * If the DLC is not a multiple of 4, the last partial word is not
+ * stored by the switch above. Store the remaining bytes here.
1181
+ */
1182
+ s->regs[store_location + rx_reg_num + 2] = data_reg_val;
1183
+ }
1184
+
1185
+ /* set the interrupt as RXOK. */
1186
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
1187
+ }
1188
+}
1189
+
1190
+static void update_rx_sequential(XlnxVersalCANFDState *s,
1191
+ const qemu_can_frame *frame)
1192
+{
1193
+ bool filter_pass = false;
1194
+ uint8_t filter_index = 0;
1195
+ int i;
1196
+ int filter_partition = ARRAY_FIELD_EX32(s->regs,
1197
+ RX_FIFO_WATERMARK_REGISTER, RXFP);
1198
+ uint32_t store_location;
1199
+ uint32_t fill_level;
1200
+ uint32_t read_index;
1201
+ uint8_t store_index = 0;
1202
+ g_autofree char *path = NULL;
1203
+ /*
1204
+ * If all UAF bits are set to 0, then received messages are not stored
1205
+ * in the RX buffers.
1206
+ */
1207
+ if (s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER]) {
1208
+ uint32_t acceptance_filter_status =
1209
+ s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER];
1210
+
1211
+ for (i = 0; i < 32; i++) {
1212
+ if (acceptance_filter_status & 0x1) {
1213
+ uint32_t msg_id_masked = s->regs[R_AFMR_REGISTER + 2 * i] &
1214
+ frame->can_id;
1215
+ uint32_t afir_id_masked = s->regs[R_AFIR_REGISTER + 2 * i] &
1216
+ s->regs[R_AFMR_REGISTER + 2 * i];
1217
+ uint16_t std_msg_id_masked = FIELD_EX32(msg_id_masked,
1218
+ AFIR_REGISTER, AIID);
1219
+ uint16_t std_afir_id_masked = FIELD_EX32(afir_id_masked,
1220
+ AFIR_REGISTER, AIID);
1221
+ uint32_t ext_msg_id_masked = FIELD_EX32(msg_id_masked,
1222
+ AFIR_REGISTER,
1223
+ AIID_EXT);
1224
+ uint32_t ext_afir_id_masked = FIELD_EX32(afir_id_masked,
1225
+ AFIR_REGISTER,
1226
+ AIID_EXT);
1227
+ bool ext_ide = FIELD_EX32(s->regs[R_AFMR_REGISTER + 2 * i],
1228
+ AFMR_REGISTER, AMIDE);
1229
+
1230
+ if (std_msg_id_masked == std_afir_id_masked) {
1231
+ if (ext_ide) {
1232
+ /* Extended message ID message. */
1233
+ if (ext_msg_id_masked == ext_afir_id_masked) {
1234
+ filter_pass = true;
1235
+ filter_index = i;
1236
+
1237
+ break;
1238
+ }
1239
+ } else {
1240
+ /* Standard message ID. */
1241
+ filter_pass = true;
1242
+ filter_index = i;
1243
+
1244
+ break;
1245
+ }
1246
+ }
1247
+ }
1248
+ acceptance_filter_status >>= 1;
1249
+ }
1250
+ }
1251
+
1252
+ if (!filter_pass) {
1253
+ path = object_get_canonical_path(OBJECT(s));
1254
+
1255
+ trace_xlnx_canfd_rx_fifo_filter_reject(path, frame->can_id,
1256
+ frame->can_dlc);
1257
+ } else {
1258
+ if (filter_index <= filter_partition) {
1259
+ fill_level = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, FL);
1260
+ read_index = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, RI);
1261
+ store_index = read_index + fill_level;
1262
+
1263
+ if (read_index == s->cfg.rx0_fifo - 1) {
1264
+ /*
1265
+ * When ri reaches its maximum, s->cfg.rx0_fifo - 1, the FIFO
+ * wraps around, so reset ri to 0.
1267
+ */
1268
+ read_index = 0;
1269
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI,
1270
+ read_index);
1271
+ }
1272
+
1273
+ if (store_index > s->cfg.rx0_fifo - 1) {
1274
+ store_index -= s->cfg.rx0_fifo - 1;
1275
+ }
1276
+
1277
+ store_location = R_RB_ID_REGISTER +
1278
+ (store_index * NUM_REGS_PER_MSG_SPACE);
1279
+
1280
+ store_rx_sequential(s, frame, fill_level, read_index,
1281
+ store_location, s->cfg.rx0_fifo, 0,
1282
+ filter_index);
1283
+ } else {
1284
+ /* RX 1 fill level message */
1285
+ fill_level = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER,
1286
+ FL_1);
1287
+ read_index = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER,
1288
+ RI_1);
1289
+ store_index = read_index + fill_level;
1290
+
1291
+ if (read_index == s->cfg.rx1_fifo - 1) {
1292
+ /*
1293
+ * When ri reaches its maximum, s->cfg.rx1_fifo - 1, the FIFO
+ * wraps around, so reset ri to 0.
1295
+ */
1296
+ read_index = 0;
1297
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI_1,
1298
+ read_index);
1299
+ }
1300
+
1301
+ if (store_index > s->cfg.rx1_fifo - 1) {
1302
+ store_index -= s->cfg.rx1_fifo - 1;
1303
+ }
1304
+
1305
+ store_location = R_RB_ID_REGISTER_1 +
1306
+ (store_index * NUM_REGS_PER_MSG_SPACE);
1307
+
1308
+ store_rx_sequential(s, frame, fill_level, read_index,
1309
+ store_location, s->cfg.rx1_fifo, 1,
1310
+ filter_index);
1311
+ }
1312
+
1313
+ path = object_get_canonical_path(OBJECT(s));
1314
+
1315
+ trace_xlnx_canfd_rx_data(path, frame->can_id, frame->can_dlc,
1316
+ frame->flags);
1317
+ canfd_update_irq(s);
1318
+ }
1319
+}
1320
+
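The acceptance-filter check in update_rx_sequential() boils down to comparing the masked bits of the received ID with the masked bits of the programmed filter ID; the real code additionally splits the comparison into the standard and extended ID fields. A minimal sketch (hypothetical helper, not part of the patch):

    /* A frame passes acceptance filter i when the bits selected by the AFMR
     * mask agree between the received CAN ID and the programmed AFIR ID. */
    static inline bool canfd_af_match(uint32_t can_id, uint32_t afmr,
                                      uint32_t afir)
    {
        return (can_id & afmr) == (afir & afmr);
    }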
1321
+static bool tx_ready_check(XlnxVersalCANFDState *s)
1322
+{
1323
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1324
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1325
+
1326
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
1327
+ " XlnxVersalCANFDState is in reset mode\n", path);
1328
+
1329
+ return false;
1330
+ }
1331
+
1332
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
1333
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1334
+
1335
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
1336
+ " XlnxVersalCANFDState is in configuration mode."
1337
+ " Reset the core so operations can start fresh\n",
1338
+ path);
1339
+ return false;
1340
+ }
1341
+
1342
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
1343
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1344
+
1345
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
1346
+ " XlnxVersalCANFDState is in SNOOP MODE\n",
1347
+ path);
1348
+ return false;
1349
+ }
1350
+
1351
+ return true;
1352
+}
1353
+
1354
+static void tx_fifo_stamp(XlnxVersalCANFDState *s, uint32_t tb0_regid)
1355
+{
1356
+ /*
1357
+ * If the EFC bit in the DLC register is set, store a TX event for
+ * this transmitted message together with its timestamp.
1359
+ */
1360
+ uint32_t dlc_reg_val = 0;
1361
+
1362
+ if (FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER, EFC)) {
1363
+ uint8_t dlc_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
1364
+ DLC);
1365
+ bool fdf_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
1366
+ FDF);
1367
+ bool brs_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
1368
+ BRS);
1369
+ uint8_t mm_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
1370
+ MM);
1371
+ uint8_t fill_level = ARRAY_FIELD_EX32(s->regs,
1372
+ TX_EVENT_FIFO_STATUS_REGISTER,
1373
+ TXE_FL);
1374
+ uint8_t read_index = ARRAY_FIELD_EX32(s->regs,
1375
+ TX_EVENT_FIFO_STATUS_REGISTER,
1376
+ TXE_RI);
1377
+ uint8_t store_index = fill_level + read_index;
1378
+
1379
+ if ((fill_level) > s->cfg.tx_fifo - 1) {
1380
+ qemu_log_mask(LOG_GUEST_ERROR, "TX Event Buffer is full."
1381
+ " Discarding the message\n");
1382
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXEOFLW, 1);
1383
+ } else {
1384
+ if (read_index == s->cfg.tx_fifo - 1) {
1385
+ /*
1386
+ * When ri reaches its maximum, s->cfg.tx_fifo - 1, the FIFO
+ * wraps around, so reset ri to 0.
1388
+ */
1389
+ read_index = 0;
1390
+ ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_RI,
1391
+ read_index);
1392
+ }
1393
+
1394
+ if (store_index > s->cfg.tx_fifo - 1) {
1395
+ store_index -= s->cfg.tx_fifo - 1;
1396
+ }
1397
+
1398
+ assert(store_index < s->cfg.tx_fifo);
1399
+
1400
+ uint32_t tx_event_reg0_id = R_TXE_FIFO_TB_ID_REGISTER +
1401
+ (store_index * 2);
1402
+
1403
+ /* Store message ID in TX event register. */
1404
+ s->regs[tx_event_reg0_id] = s->regs[tb0_regid];
1405
+
1406
+ uint16_t tx_timestamp = CANFD_TIMER_MAX -
1407
+ ptimer_get_count(s->canfd_timer);
1408
+
1409
+ /* Store DLC with time stamp in DLC regs. */
1410
+ dlc_reg_val = FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, DLC, dlc_val);
1411
+ dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, FDF,
1412
+ fdf_val);
1413
+ dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, BRS,
1414
+ brs_val);
1415
+ dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, ET, 0x3);
1416
+ dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, MM, mm_val);
1417
+ dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, TIMESTAMP,
1418
+ tx_timestamp);
1419
+ s->regs[tx_event_reg0_id + 1] = dlc_reg_val;
1420
+
1421
+ ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL,
1422
+ fill_level + 1);
1423
+ }
1424
+ }
1425
+}
1426
+
1427
+static gint g_cmp_ids(gconstpointer data1, gconstpointer data2)
1428
+{
1429
+ tx_ready_reg_info *tx_reg_1 = (tx_ready_reg_info *) data1;
1430
+ tx_ready_reg_info *tx_reg_2 = (tx_ready_reg_info *) data2;
1431
+
1432
+ return tx_reg_1->can_id - tx_reg_2->can_id;
1433
+}
1434
+
1435
+static void free_list(GSList *list)
1436
+{
1437
+ GSList *iterator = NULL;
1438
+
1439
+ for (iterator = list; iterator != NULL; iterator = iterator->next) {
1440
+ g_free((tx_ready_reg_info *)iterator->data);
1441
+ }
1442
+
1443
+ g_slist_free(list);
1444
+
1445
+ return;
1446
+}
1447
+
1448
+static GSList *prepare_tx_data(XlnxVersalCANFDState *s)
1449
+{
1450
+ uint8_t i = 0;
1451
+ GSList *list = NULL;
1452
+ uint32_t reg_num = 0;
1453
+ uint32_t reg_ready = s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER];
1454
+
1455
+ /* First find the messages which are ready for transmission. */
1456
+ for (i = 0; i < s->cfg.tx_fifo; i++) {
1457
+ if (reg_ready & 1) {
1458
+ reg_num = R_TB_ID_REGISTER + (NUM_REGS_PER_MSG_SPACE * i);
1459
+ tx_ready_reg_info *temp = g_new(tx_ready_reg_info, 1);
1460
+
1461
+ temp->can_id = s->regs[reg_num];
1462
+ temp->reg_num = reg_num;
1463
+ list = g_slist_prepend(list, temp);
1464
+ list = g_slist_sort(list, g_cmp_ids);
1465
+ }
1466
+
1467
+ reg_ready >>= 1;
1468
+ }
1469
+
1470
+ s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER] = 0;
1471
+ s->regs[R_TX_BUFFER_CANCEL_REQUEST_REGISTER] = 0;
1472
+
1473
+ return list;
1474
+}
1475
+
1476
+static void transfer_data(XlnxVersalCANFDState *s)
1477
+{
1478
+ bool canfd_tx = tx_ready_check(s);
1479
+ GSList *list, *iterator = NULL;
1480
+ qemu_can_frame frame;
1481
+
1482
+ if (!canfd_tx) {
1483
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1484
+
1485
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller not enabled for data"
1486
+ " transfer\n", path);
1487
+ return;
1488
+ }
1489
+
1490
+ list = prepare_tx_data(s);
1491
+ if (list == NULL) {
1492
+ return;
1493
+ }
1494
+
1495
+ for (iterator = list; iterator != NULL; iterator = iterator->next) {
1496
+ regs2frame(s, &frame,
1497
+ ((tx_ready_reg_info *)iterator->data)->reg_num);
1498
+
1499
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
1500
+ update_rx_sequential(s, &frame);
1501
+ tx_fifo_stamp(s, ((tx_ready_reg_info *)iterator->data)->reg_num);
1502
+
1503
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
1504
+ } else {
1505
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1506
+
1507
+ trace_xlnx_canfd_tx_data(path, frame.can_id, frame.can_dlc,
1508
+ frame.flags);
1509
+ can_bus_client_send(&s->bus_client, &frame, 1);
1510
+ tx_fifo_stamp(s,
1511
+ ((tx_ready_reg_info *)iterator->data)->reg_num);
1512
+
1513
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXRRS, 1);
1514
+
1515
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
1516
+ canfd_exit_sleep_mode(s);
1517
+ }
1518
+ }
1519
+ }
1520
+
1521
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
1522
+ free_list(list);
1523
+
1524
+ canfd_update_irq(s);
1525
+}
1526
+
1527
+static uint64_t canfd_srr_pre_write(RegisterInfo *reg, uint64_t val64)
1528
+{
1529
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1530
+ uint32_t val = val64;
1531
+
1532
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
1533
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
1534
+
1535
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
1536
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1537
+
1538
+ trace_xlnx_canfd_reset(path, val64);
1539
+
1540
+ /* The core first performs a software reset and then enters config mode. */
1541
+ canfd_config_reset(s);
1542
+ } else if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
1543
+ canfd_config_mode(s);
1544
+ } else {
1545
+ /*
1546
+ * Leave configuration mode. The XlnxVersalCANFD core now enters Normal,
+ * Sleep, Snoop or Loopback mode depending on the LBACK, SLEEP and SNOOP
+ * bits in the mode select register.
1549
+ */
1550
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
1551
+
1552
+ ptimer_transaction_begin(s->canfd_timer);
1553
+ ptimer_set_count(s->canfd_timer, 0);
1554
+ ptimer_transaction_commit(s->canfd_timer);
1555
+ update_status_register_mode_bits(s);
1556
+ transfer_data(s);
1557
+ }
1558
+
1559
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
1560
+}
1561
+
1562
+static uint64_t filter_mask(RegisterInfo *reg, uint64_t val64)
1563
+{
1564
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1565
+ uint32_t reg_idx = (reg->access->addr) / 4;
1566
+ uint32_t val = val64;
1567
+ uint32_t filter_offset = (reg_idx - R_AFMR_REGISTER) / 2;
1568
+
1569
+ if (!(s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER] &
1570
+ (1 << filter_offset))) {
1571
+ s->regs[reg_idx] = val;
1572
+ } else {
1573
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1574
+
1575
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d not enabled\n",
1576
+ path, filter_offset + 1);
1577
+ }
1578
+
1579
+ return s->regs[reg_idx];
1580
+}
1581
+
1582
+static uint64_t filter_id(RegisterInfo *reg, uint64_t val64)
1583
+{
1584
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1585
+ hwaddr reg_idx = (reg->access->addr) / 4;
1586
+ uint32_t val = val64;
1587
+ uint32_t filter_offset = (reg_idx - R_AFIR_REGISTER) / 2;
1588
+
1589
+ if (!(s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER] &
1590
+ (1 << filter_offset))) {
1591
+ s->regs[reg_idx] = val;
1592
+ } else {
1593
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1594
+
1595
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d not enabled\n",
1596
+ path, filter_offset + 1);
1597
+ }
1598
+
1599
+ return s->regs[reg_idx];
1600
+}
1601
+
1602
+static uint64_t canfd_tx_fifo_status_prew(RegisterInfo *reg, uint64_t val64)
1603
+{
1604
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1605
+ uint32_t val = val64;
1606
+ uint8_t read_ind = 0;
1607
+ uint8_t fill_ind = ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER,
1608
+ TXE_FL);
1609
+
1610
+ if (FIELD_EX32(val, TX_EVENT_FIFO_STATUS_REGISTER, TXE_IRI) && fill_ind) {
1611
+ read_ind = ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER,
1612
+ TXE_RI) + 1;
1613
+
1614
+ if (read_ind > s->cfg.tx_fifo - 1) {
1615
+ read_ind = 0;
1616
+ }
1617
+
1618
+ /*
1619
+ * Increase the read index by 1 and decrease the fill level by 1.
1620
+ */
1621
+ ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_RI,
1622
+ read_ind);
1623
+ ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL,
1624
+ fill_ind - 1);
1625
+ }
1626
+
1627
+ return s->regs[R_TX_EVENT_FIFO_STATUS_REGISTER];
1628
+}
1629
+
1630
+static uint64_t canfd_rx_fifo_status_prew(RegisterInfo *reg, uint64_t val64)
1631
+{
1632
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1633
+ uint32_t val = val64;
1634
+ uint8_t read_ind = 0;
1635
+ uint8_t fill_ind = 0;
1636
+
1637
+ if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, IRI)) {
1638
+ /* If the fill level (FL) is zero, setting the IRI bit has no effect. */
1639
+ if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL) != 0) {
1640
+ read_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, RI) + 1;
1641
+
1642
+ if (read_ind > s->cfg.rx0_fifo - 1) {
1643
+ read_ind = 0;
1644
+ }
1645
+
1646
+ fill_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL) - 1;
1647
+
1648
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI, read_ind);
1649
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL, fill_ind);
1650
+ }
1651
+ }
1652
+
1653
+ if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, IRI_1)) {
1654
+ /* If the fill level (FL_1) is zero, setting the IRI_1 bit has no effect. */
1655
+ if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL_1) != 0) {
1656
+ read_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, RI_1) + 1;
1657
+
1658
+ if (read_ind > s->cfg.rx1_fifo - 1) {
1659
+ read_ind = 0;
1660
+ }
1661
+
1662
+ fill_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL_1) - 1;
1663
+
1664
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI_1, read_ind);
1665
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL_1, fill_ind);
1666
+ }
1667
+ }
1668
+
1669
+ return s->regs[R_RX_FIFO_STATUS_REGISTER];
1670
+}
1671
+
1672
+static uint64_t canfd_tsr_pre_write(RegisterInfo *reg, uint64_t val64)
1673
+{
1674
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1675
+ uint32_t val = val64;
1676
+
1677
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
1678
+ ARRAY_FIELD_DP32(s->regs, TIMESTAMP_REGISTER, TIMESTAMP_CNT, 0);
1679
+ ptimer_transaction_begin(s->canfd_timer);
1680
+ ptimer_set_count(s->canfd_timer, 0);
1681
+ ptimer_transaction_commit(s->canfd_timer);
1682
+ }
1683
+
1684
+ return 0;
1685
+}
1686
+
1687
+static uint64_t canfd_trr_reg_prew(RegisterInfo *reg, uint64_t val64)
1688
+{
1689
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1690
+
1691
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
1692
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1693
+
1694
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in SNOOP mode."
1695
+ " tx_ready_register will stay in reset mode\n", path);
1696
+ return 0;
1697
+ } else {
1698
+ return val64;
1699
+ }
1700
+}
1701
+
1702
+static void canfd_trr_reg_postw(RegisterInfo *reg, uint64_t val64)
1703
+{
1704
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1705
+
1706
+ transfer_data(s);
1707
+}
1708
+
1709
+static void canfd_cancel_reg_postw(RegisterInfo *reg, uint64_t val64)
1710
+{
1711
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1712
+
1713
+ process_cancellation_requests(s);
1714
+}
1715
+
1716
+static uint64_t canfd_write_check_prew(RegisterInfo *reg, uint64_t val64)
1717
+{
1718
+ XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
1719
+ uint32_t val = val64;
1720
+
1721
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
1722
+ return val;
1723
+ }
1724
+ return 0;
1725
+}
1726
+
1727
+static const RegisterAccessInfo canfd_tx_regs[] = {
1728
+ { .name = "TB_ID_REGISTER", .addr = A_TB_ID_REGISTER,
1729
+ },{ .name = "TB0_DLC_REGISTER", .addr = A_TB0_DLC_REGISTER,
1730
+ },{ .name = "TB_DW0_REGISTER", .addr = A_TB_DW0_REGISTER,
1731
+ },{ .name = "TB_DW1_REGISTER", .addr = A_TB_DW1_REGISTER,
1732
+ },{ .name = "TB_DW2_REGISTER", .addr = A_TB_DW2_REGISTER,
1733
+ },{ .name = "TB_DW3_REGISTER", .addr = A_TB_DW3_REGISTER,
1734
+ },{ .name = "TB_DW4_REGISTER", .addr = A_TB_DW4_REGISTER,
1735
+ },{ .name = "TB_DW5_REGISTER", .addr = A_TB_DW5_REGISTER,
1736
+ },{ .name = "TB_DW6_REGISTER", .addr = A_TB_DW6_REGISTER,
1737
+ },{ .name = "TB_DW7_REGISTER", .addr = A_TB_DW7_REGISTER,
1738
+ },{ .name = "TB_DW8_REGISTER", .addr = A_TB_DW8_REGISTER,
1739
+ },{ .name = "TB_DW9_REGISTER", .addr = A_TB_DW9_REGISTER,
1740
+ },{ .name = "TB_DW10_REGISTER", .addr = A_TB_DW10_REGISTER,
1741
+ },{ .name = "TB_DW11_REGISTER", .addr = A_TB_DW11_REGISTER,
1742
+ },{ .name = "TB_DW12_REGISTER", .addr = A_TB_DW12_REGISTER,
1743
+ },{ .name = "TB_DW13_REGISTER", .addr = A_TB_DW13_REGISTER,
1744
+ },{ .name = "TB_DW14_REGISTER", .addr = A_TB_DW14_REGISTER,
1745
+ },{ .name = "TB_DW15_REGISTER", .addr = A_TB_DW15_REGISTER,
1746
+ }
1747
+};
1748
+
1749
+static const RegisterAccessInfo canfd_rx0_regs[] = {
1750
+ { .name = "RB_ID_REGISTER", .addr = A_RB_ID_REGISTER,
1751
+ .ro = 0xffffffff,
1752
+ },{ .name = "RB_DLC_REGISTER", .addr = A_RB_DLC_REGISTER,
1753
+ .ro = 0xfe1fffff,
1754
+ },{ .name = "RB_DW0_REGISTER", .addr = A_RB_DW0_REGISTER,
1755
+ .ro = 0xffffffff,
1756
+ },{ .name = "RB_DW1_REGISTER", .addr = A_RB_DW1_REGISTER,
1757
+ .ro = 0xffffffff,
1758
+ },{ .name = "RB_DW2_REGISTER", .addr = A_RB_DW2_REGISTER,
1759
+ .ro = 0xffffffff,
1760
+ },{ .name = "RB_DW3_REGISTER", .addr = A_RB_DW3_REGISTER,
1761
+ .ro = 0xffffffff,
1762
+ },{ .name = "RB_DW4_REGISTER", .addr = A_RB_DW4_REGISTER,
1763
+ .ro = 0xffffffff,
1764
+ },{ .name = "RB_DW5_REGISTER", .addr = A_RB_DW5_REGISTER,
1765
+ .ro = 0xffffffff,
1766
+ },{ .name = "RB_DW6_REGISTER", .addr = A_RB_DW6_REGISTER,
1767
+ .ro = 0xffffffff,
1768
+ },{ .name = "RB_DW7_REGISTER", .addr = A_RB_DW7_REGISTER,
1769
+ .ro = 0xffffffff,
1770
+ },{ .name = "RB_DW8_REGISTER", .addr = A_RB_DW8_REGISTER,
1771
+ .ro = 0xffffffff,
1772
+ },{ .name = "RB_DW9_REGISTER", .addr = A_RB_DW9_REGISTER,
1773
+ .ro = 0xffffffff,
1774
+ },{ .name = "RB_DW10_REGISTER", .addr = A_RB_DW10_REGISTER,
1775
+ .ro = 0xffffffff,
1776
+ },{ .name = "RB_DW11_REGISTER", .addr = A_RB_DW11_REGISTER,
1777
+ .ro = 0xffffffff,
1778
+ },{ .name = "RB_DW12_REGISTER", .addr = A_RB_DW12_REGISTER,
1779
+ .ro = 0xffffffff,
1780
+ },{ .name = "RB_DW13_REGISTER", .addr = A_RB_DW13_REGISTER,
1781
+ .ro = 0xffffffff,
1782
+ },{ .name = "RB_DW14_REGISTER", .addr = A_RB_DW14_REGISTER,
1783
+ .ro = 0xffffffff,
1784
+ },{ .name = "RB_DW15_REGISTER", .addr = A_RB_DW15_REGISTER,
1785
+ .ro = 0xffffffff,
1786
+ }
1787
+};
1788
+
1789
+static const RegisterAccessInfo canfd_rx1_regs[] = {
1790
+ { .name = "RB_ID_REGISTER_1", .addr = A_RB_ID_REGISTER_1,
1791
+ .ro = 0xffffffff,
1792
+ },{ .name = "RB_DLC_REGISTER_1", .addr = A_RB_DLC_REGISTER_1,
1793
+ .ro = 0xfe1fffff,
1794
+ },{ .name = "RB0_DW0_REGISTER_1", .addr = A_RB0_DW0_REGISTER_1,
1795
+ .ro = 0xffffffff,
1796
+ },{ .name = "RB_DW1_REGISTER_1", .addr = A_RB_DW1_REGISTER_1,
1797
+ .ro = 0xffffffff,
1798
+ },{ .name = "RB_DW2_REGISTER_1", .addr = A_RB_DW2_REGISTER_1,
1799
+ .ro = 0xffffffff,
1800
+ },{ .name = "RB_DW3_REGISTER_1", .addr = A_RB_DW3_REGISTER_1,
1801
+ .ro = 0xffffffff,
1802
+ },{ .name = "RB_DW4_REGISTER_1", .addr = A_RB_DW4_REGISTER_1,
1803
+ .ro = 0xffffffff,
1804
+ },{ .name = "RB_DW5_REGISTER_1", .addr = A_RB_DW5_REGISTER_1,
1805
+ .ro = 0xffffffff,
1806
+ },{ .name = "RB_DW6_REGISTER_1", .addr = A_RB_DW6_REGISTER_1,
1807
+ .ro = 0xffffffff,
1808
+ },{ .name = "RB_DW7_REGISTER_1", .addr = A_RB_DW7_REGISTER_1,
1809
+ .ro = 0xffffffff,
1810
+ },{ .name = "RB_DW8_REGISTER_1", .addr = A_RB_DW8_REGISTER_1,
1811
+ .ro = 0xffffffff,
1812
+ },{ .name = "RB_DW9_REGISTER_1", .addr = A_RB_DW9_REGISTER_1,
1813
+ .ro = 0xffffffff,
1814
+ },{ .name = "RB_DW10_REGISTER_1", .addr = A_RB_DW10_REGISTER_1,
1815
+ .ro = 0xffffffff,
1816
+ },{ .name = "RB_DW11_REGISTER_1", .addr = A_RB_DW11_REGISTER_1,
1817
+ .ro = 0xffffffff,
1818
+ },{ .name = "RB_DW12_REGISTER_1", .addr = A_RB_DW12_REGISTER_1,
1819
+ .ro = 0xffffffff,
1820
+ },{ .name = "RB_DW13_REGISTER_1", .addr = A_RB_DW13_REGISTER_1,
1821
+ .ro = 0xffffffff,
1822
+ },{ .name = "RB_DW14_REGISTER_1", .addr = A_RB_DW14_REGISTER_1,
1823
+ .ro = 0xffffffff,
1824
+ },{ .name = "RB_DW15_REGISTER_1", .addr = A_RB_DW15_REGISTER_1,
1825
+ .ro = 0xffffffff,
1826
+ }
1827
+};
1828
+
1829
+/* Acceptance filter registers. */
1830
+static const RegisterAccessInfo canfd_af_regs[] = {
1831
+ { .name = "AFMR_REGISTER", .addr = A_AFMR_REGISTER,
1832
+ .pre_write = filter_mask,
1833
+ },{ .name = "AFIR_REGISTER", .addr = A_AFIR_REGISTER,
1834
+ .pre_write = filter_id,
1835
+ }
1836
+};
1837
+
1838
+static const RegisterAccessInfo canfd_txe_regs[] = {
1839
+ { .name = "TXE_FIFO_TB_ID_REGISTER", .addr = A_TXE_FIFO_TB_ID_REGISTER,
1840
+ .ro = 0xffffffff,
1841
+ },{ .name = "TXE_FIFO_TB_DLC_REGISTER", .addr = A_TXE_FIFO_TB_DLC_REGISTER,
1842
+ .ro = 0xffffffff,
1843
+ }
1844
+};
1845
+
1846
+static const RegisterAccessInfo canfd_regs_info[] = {
1847
+ { .name = "SOFTWARE_RESET_REGISTER", .addr = A_SOFTWARE_RESET_REGISTER,
1848
+ .pre_write = canfd_srr_pre_write,
1849
+ },{ .name = "MODE_SELECT_REGISTER", .addr = A_MODE_SELECT_REGISTER,
1850
+ .pre_write = canfd_msr_pre_write,
1851
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
1852
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
1853
+ .pre_write = canfd_write_check_prew,
1854
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
1855
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1856
+ .pre_write = canfd_write_check_prew,
1857
+ },{ .name = "ERROR_COUNTER_REGISTER", .addr = A_ERROR_COUNTER_REGISTER,
1858
+ .ro = 0xffff,
1859
+ },{ .name = "ERROR_STATUS_REGISTER", .addr = A_ERROR_STATUS_REGISTER,
1860
+ .w1c = 0xf1f,
1861
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1862
+ .reset = 0x1,
1863
+ .ro = 0x7f17ff,
1864
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1865
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1866
+ .ro = 0xffffff7f,
1867
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1868
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1869
+ .post_write = canfd_ier_post_write,
1870
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1871
+ .addr = A_INTERRUPT_CLEAR_REGISTER, .pre_write = canfd_icr_pre_write,
1872
+ },{ .name = "TIMESTAMP_REGISTER", .addr = A_TIMESTAMP_REGISTER,
1873
+ .ro = 0xffff0000,
1874
+ .pre_write = canfd_tsr_pre_write,
1875
+ },{ .name = "DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER",
1876
+ .addr = A_DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER,
1877
+ .pre_write = canfd_write_check_prew,
1878
+ },{ .name = "DATA_PHASE_BIT_TIMING_REGISTER",
1879
+ .addr = A_DATA_PHASE_BIT_TIMING_REGISTER,
1880
+ .pre_write = canfd_write_check_prew,
1881
+ },{ .name = "TX_BUFFER_READY_REQUEST_REGISTER",
1882
+ .addr = A_TX_BUFFER_READY_REQUEST_REGISTER,
1883
+ .pre_write = canfd_trr_reg_prew,
1884
+ .post_write = canfd_trr_reg_postw,
1885
+ },{ .name = "INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER",
1886
+ .addr = A_INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER,
1887
+ },{ .name = "TX_BUFFER_CANCEL_REQUEST_REGISTER",
1888
+ .addr = A_TX_BUFFER_CANCEL_REQUEST_REGISTER,
1889
+ .post_write = canfd_cancel_reg_postw,
1890
+ },{ .name = "INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER",
1891
+ .addr = A_INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER,
1892
+ },{ .name = "TX_EVENT_FIFO_STATUS_REGISTER",
1893
+ .addr = A_TX_EVENT_FIFO_STATUS_REGISTER,
1894
+ .ro = 0x3f1f, .pre_write = canfd_tx_fifo_status_prew,
1895
+ },{ .name = "TX_EVENT_FIFO_WATERMARK_REGISTER",
1896
+ .addr = A_TX_EVENT_FIFO_WATERMARK_REGISTER,
1897
+ .reset = 0xf,
1898
+ .pre_write = canfd_write_check_prew,
1899
+ },{ .name = "ACCEPTANCE_FILTER_CONTROL_REGISTER",
1900
+ .addr = A_ACCEPTANCE_FILTER_CONTROL_REGISTER,
1901
+ },{ .name = "RX_FIFO_STATUS_REGISTER", .addr = A_RX_FIFO_STATUS_REGISTER,
1902
+ .ro = 0x7f3f7f3f, .pre_write = canfd_rx_fifo_status_prew,
1903
+ },{ .name = "RX_FIFO_WATERMARK_REGISTER",
1904
+ .addr = A_RX_FIFO_WATERMARK_REGISTER,
1905
+ .reset = 0x1f0f0f,
1906
+ .pre_write = canfd_write_check_prew,
1907
+ }
1908
+};
1909
+
1910
+static void xlnx_versal_canfd_ptimer_cb(void *opaque)
1911
+{
1912
+ /* No action required on the timer rollover. */
1913
+}
1914
+
1915
+static const MemoryRegionOps canfd_ops = {
1916
+ .read = register_read_memory,
1917
+ .write = register_write_memory,
1918
+ .endianness = DEVICE_LITTLE_ENDIAN,
1919
+ .valid = {
1920
+ .min_access_size = 4,
1921
+ .max_access_size = 4,
1922
+ },
1923
+};
1924
+
1925
+static void canfd_reset(DeviceState *dev)
1926
+{
1927
+ XlnxVersalCANFDState *s = XILINX_CANFD(dev);
1928
+ unsigned int i;
1929
+
1930
+ for (i = 0; i < ARRAY_SIZE(s->reg_info); ++i) {
1931
+ register_reset(&s->reg_info[i]);
1932
+ }
1933
+
1934
+ ptimer_transaction_begin(s->canfd_timer);
1935
+ ptimer_set_count(s->canfd_timer, 0);
1936
+ ptimer_transaction_commit(s->canfd_timer);
1937
+}
1938
+
1939
+static bool can_xilinx_canfd_receive(CanBusClientState *client)
1940
+{
1941
+ XlnxVersalCANFDState *s = container_of(client, XlnxVersalCANFDState,
1942
+ bus_client);
1943
+
1944
+ bool reset_state = ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST);
1945
+ bool can_enabled = ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN);
1946
+
1947
+ return !reset_state && can_enabled;
1948
+}
1949
+
1950
+static ssize_t canfd_xilinx_receive(CanBusClientState *client,
1951
+ const qemu_can_frame *buf,
1952
+ size_t buf_size)
1953
+{
1954
+ XlnxVersalCANFDState *s = container_of(client, XlnxVersalCANFDState,
1955
+ bus_client);
1956
+ const qemu_can_frame *frame = buf;
1957
+
1958
+ assert(buf_size > 0);
1959
+
1960
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
1961
+ /*
1962
+ * XlnxVersalCANFDState will not participate in normal bus communication
1963
+ * and does not receive any messages transmitted by other CAN nodes.
1964
+ */
1965
+ return 1;
1966
+ }
1967
+
1968
+ /* Update the status register that we are receiving message. */
1969
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, BBSY, 1);
1970
+
1971
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1972
+ /* Snoop mode: just store the data; no response is sent back. */
1973
+ update_rx_sequential(s, frame);
1974
+ } else {
1975
+ if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1976
+ /*
1977
+ * XlnxVersalCANFDState is in sleep mode. Any data on the bus brings
+ * it back to the wake-up state.
1979
+ */
1980
+ canfd_exit_sleep_mode(s);
1981
+ }
1982
+
1983
+ update_rx_sequential(s, frame);
1984
+ }
1985
+
1986
+ /* Message processing done. Clear the bus-busy status. */
1987
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, BBSY, 0);
1988
+ return 1;
1989
+}
1990
+
1991
+static CanBusClientInfo canfd_xilinx_bus_client_info = {
1992
+ .can_receive = can_xilinx_canfd_receive,
1993
+ .receive = canfd_xilinx_receive,
1994
+};
1995
+
1996
+static int xlnx_canfd_connect_to_bus(XlnxVersalCANFDState *s,
1997
+ CanBusState *bus)
1998
+{
1999
+ s->bus_client.info = &canfd_xilinx_bus_client_info;
2000
+
2001
+ return can_bus_insert_client(bus, &s->bus_client);
2002
+}
2003
+
2004
+#define NUM_REG_PER_AF ARRAY_SIZE(canfd_af_regs)
2005
+#define NUM_AF 32
2006
+#define NUM_REG_PER_TXE ARRAY_SIZE(canfd_txe_regs)
2007
+#define NUM_TXE 32
2008
+
2009
+static int canfd_populate_regarray(XlnxVersalCANFDState *s,
2010
+ RegisterInfoArray *r_array, int pos,
2011
+ const RegisterAccessInfo *rae,
2012
+ int num_rae)
2013
+{
2014
+ int i;
2015
+
2016
+ for (i = 0; i < num_rae; i++) {
2017
+ int index = rae[i].addr / 4;
2018
+ RegisterInfo *r = &s->reg_info[index];
2019
+
2020
+ object_initialize(r, sizeof(*r), TYPE_REGISTER);
2021
+
2022
+ *r = (RegisterInfo) {
2023
+ .data = &s->regs[index],
2024
+ .data_size = sizeof(uint32_t),
2025
+ .access = &rae[i],
2026
+ .opaque = OBJECT(s),
2027
+ };
2028
+
2029
+ r_array->r[i + pos] = r;
2030
+ }
2031
+ return i + pos;
2032
+}
2033
+
2034
+static void canfd_create_rai(RegisterAccessInfo *rai_array,
2035
+ const RegisterAccessInfo *canfd_regs,
2036
+ int template_rai_array_sz,
2037
+ int num_template_to_copy)
2038
+{
2039
+ int i;
2040
+ int reg_num;
2041
+
2042
+ for (reg_num = 0; reg_num < num_template_to_copy; reg_num++) {
2043
+ int pos = reg_num * template_rai_array_sz;
2044
+
2045
+ memcpy(rai_array + pos, canfd_regs,
2046
+ template_rai_array_sz * sizeof(RegisterAccessInfo));
2047
+
2048
+ for (i = 0; i < template_rai_array_sz; i++) {
2049
+ const char *name = canfd_regs[i].name;
2050
+ uint64_t addr = canfd_regs[i].addr;
2051
+ rai_array[i + pos].name = g_strdup_printf("%s%d", name, reg_num);
2052
+ rai_array[i + pos].addr = addr + pos * 4;
2053
+ }
2054
+ }
2055
+}
2056
+
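canfd_create_rai() replicates a register template by stepping each register's offset by the byte size of one template block. A minimal sketch of that address calculation (hypothetical helper, not part of the patch):

    /* Byte offset of template register 'i' in the 'n'-th replicated block,
     * matching rai_array[i + pos].addr = addr + pos * 4 above. */
    static inline uint64_t canfd_replicated_addr(const RegisterAccessInfo *tmpl,
                                                 int tmpl_sz, int n, int i)
    {
        return tmpl[i].addr + (uint64_t)n * tmpl_sz * 4;
    }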
2057
+static RegisterInfoArray *canfd_create_regarray(XlnxVersalCANFDState *s)
2058
+{
2059
+ const char *device_prefix = object_get_typename(OBJECT(s));
2060
+ uint64_t memory_size = XLNX_VERSAL_CANFD_R_MAX * 4;
2061
+ int num_regs;
2062
+ int pos = 0;
2063
+ RegisterInfoArray *r_array;
2064
+
2065
+ num_regs = ARRAY_SIZE(canfd_regs_info) +
2066
+ s->cfg.tx_fifo * NUM_REGS_PER_MSG_SPACE +
2067
+ s->cfg.rx0_fifo * NUM_REGS_PER_MSG_SPACE +
2068
+ NUM_AF * NUM_REG_PER_AF +
2069
+ NUM_TXE * NUM_REG_PER_TXE;
2070
+
2071
+ s->tx_regs = g_new0(RegisterAccessInfo,
2072
+ s->cfg.tx_fifo * ARRAY_SIZE(canfd_tx_regs));
2073
+
2074
+ canfd_create_rai(s->tx_regs, canfd_tx_regs,
2075
+ ARRAY_SIZE(canfd_tx_regs), s->cfg.tx_fifo);
2076
+
2077
+ s->rx0_regs = g_new0(RegisterAccessInfo,
2078
+ s->cfg.rx0_fifo * ARRAY_SIZE(canfd_rx0_regs));
2079
+
2080
+ canfd_create_rai(s->rx0_regs, canfd_rx0_regs,
2081
+ ARRAY_SIZE(canfd_rx0_regs), s->cfg.rx0_fifo);
2082
+
2083
+ s->af_regs = g_new0(RegisterAccessInfo,
2084
+ NUM_AF * ARRAY_SIZE(canfd_af_regs));
2085
+
2086
+ canfd_create_rai(s->af_regs, canfd_af_regs,
2087
+ ARRAY_SIZE(canfd_af_regs), NUM_AF);
2088
+
2089
+ s->txe_regs = g_new0(RegisterAccessInfo,
2090
+ NUM_TXE * ARRAY_SIZE(canfd_txe_regs));
2091
+
2092
+ canfd_create_rai(s->txe_regs, canfd_txe_regs,
2093
+ ARRAY_SIZE(canfd_txe_regs), NUM_TXE);
2094
+
2095
+ if (s->cfg.enable_rx_fifo1) {
2096
+ num_regs += s->cfg.rx1_fifo * NUM_REGS_PER_MSG_SPACE;
2097
+
2098
+ s->rx1_regs = g_new0(RegisterAccessInfo,
2099
+ s->cfg.rx1_fifo * ARRAY_SIZE(canfd_rx1_regs));
2100
+
2101
+ canfd_create_rai(s->rx1_regs, canfd_rx1_regs,
2102
+ ARRAY_SIZE(canfd_rx1_regs), s->cfg.rx1_fifo);
2103
+ }
2104
+
2105
+ r_array = g_new0(RegisterInfoArray, 1);
2106
+ r_array->r = g_new0(RegisterInfo * , num_regs);
2107
+ r_array->num_elements = num_regs;
2108
+ r_array->prefix = device_prefix;
2109
+
2110
+ pos = canfd_populate_regarray(s, r_array, pos,
2111
+ canfd_regs_info,
2112
+ ARRAY_SIZE(canfd_regs_info));
2113
+ pos = canfd_populate_regarray(s, r_array, pos,
2114
+ s->tx_regs, s->cfg.tx_fifo *
2115
+ NUM_REGS_PER_MSG_SPACE);
2116
+ pos = canfd_populate_regarray(s, r_array, pos,
2117
+ s->rx0_regs, s->cfg.rx0_fifo *
2118
+ NUM_REGS_PER_MSG_SPACE);
2119
+ if (s->cfg.enable_rx_fifo1) {
2120
+ pos = canfd_populate_regarray(s, r_array, pos,
2121
+ s->rx1_regs, s->cfg.rx1_fifo *
2122
+ NUM_REGS_PER_MSG_SPACE);
2123
+ }
2124
+ pos = canfd_populate_regarray(s, r_array, pos,
2125
+ s->af_regs, NUM_AF * NUM_REG_PER_AF);
2126
+ pos = canfd_populate_regarray(s, r_array, pos,
2127
+ s->txe_regs, NUM_TXE * NUM_REG_PER_TXE);
2128
+
2129
+ memory_region_init_io(&r_array->mem, OBJECT(s), &canfd_ops, r_array,
2130
+ device_prefix, memory_size);
2131
+ return r_array;
2132
+}
2133
+
2134
+static void canfd_realize(DeviceState *dev, Error **errp)
2135
+{
2136
+ XlnxVersalCANFDState *s = XILINX_CANFD(dev);
2137
+ RegisterInfoArray *reg_array;
2138
+
2139
+ reg_array = canfd_create_regarray(s);
2140
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
2141
+ sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
2142
+ sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq_canfd_int);
2143
+
2144
+ if (s->canfdbus) {
2145
+ if (xlnx_canfd_connect_to_bus(s, s->canfdbus) < 0) {
2146
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
2147
+
2148
+ error_setg(errp, "%s: xlnx_canfd_connect_to_bus failed", path);
2149
+ return;
2150
+ }
2151
+
2152
+ }
2153
+
2154
+ /* Allocate a new timer. */
2155
+ s->canfd_timer = ptimer_init(xlnx_versal_canfd_ptimer_cb, s,
2156
+ PTIMER_POLICY_WRAP_AFTER_ONE_PERIOD |
2157
+ PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT |
2158
+ PTIMER_POLICY_NO_IMMEDIATE_RELOAD);
2159
+
2160
+ ptimer_transaction_begin(s->canfd_timer);
2161
+
2162
+ ptimer_set_freq(s->canfd_timer, s->cfg.ext_clk_freq);
2163
+ ptimer_set_limit(s->canfd_timer, CANFD_TIMER_MAX, 1);
2164
+ ptimer_run(s->canfd_timer, 0);
2165
+ ptimer_transaction_commit(s->canfd_timer);
2166
+}
2167
+
2168
+static void canfd_init(Object *obj)
2169
+{
2170
+ XlnxVersalCANFDState *s = XILINX_CANFD(obj);
2171
+
2172
+ memory_region_init(&s->iomem, obj, TYPE_XILINX_CANFD,
2173
+ XLNX_VERSAL_CANFD_R_MAX * 4);
2174
+}
2175
+
2176
+static const VMStateDescription vmstate_canfd = {
2177
+ .name = TYPE_XILINX_CANFD,
2178
+ .version_id = 1,
2179
+ .minimum_version_id = 1,
2180
+ .fields = (VMStateField[]) {
2181
+ VMSTATE_UINT32_ARRAY(regs, XlnxVersalCANFDState,
2182
+ XLNX_VERSAL_CANFD_R_MAX),
2183
+ VMSTATE_PTIMER(canfd_timer, XlnxVersalCANFDState),
2184
+ VMSTATE_END_OF_LIST(),
2185
+ }
2186
+};
2187
+
2188
+static Property canfd_core_properties[] = {
2189
+ DEFINE_PROP_UINT8("rx-fifo0", XlnxVersalCANFDState, cfg.rx0_fifo, 0x40),
2190
+ DEFINE_PROP_UINT8("rx-fifo1", XlnxVersalCANFDState, cfg.rx1_fifo, 0x40),
2191
+ DEFINE_PROP_UINT8("tx-fifo", XlnxVersalCANFDState, cfg.tx_fifo, 0x20),
2192
+ DEFINE_PROP_BOOL("enable-rx-fifo1", XlnxVersalCANFDState,
2193
+ cfg.enable_rx_fifo1, true),
2194
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxVersalCANFDState, cfg.ext_clk_freq,
2195
+ CANFD_DEFAULT_CLOCK),
2196
+ DEFINE_PROP_LINK("canfdbus", XlnxVersalCANFDState, canfdbus, TYPE_CAN_BUS,
2197
+ CanBusState *),
2198
+ DEFINE_PROP_END_OF_LIST(),
2199
+};
2200
+
2201
+static void canfd_class_init(ObjectClass *klass, void *data)
2202
+{
2203
+ DeviceClass *dc = DEVICE_CLASS(klass);
2204
+
2205
+ dc->reset = canfd_reset;
2206
+ dc->realize = canfd_realize;
2207
+ device_class_set_props(dc, canfd_core_properties);
2208
+ dc->vmsd = &vmstate_canfd;
2209
+}
2210
+
2211
+static const TypeInfo canfd_info = {
2212
+ .name = TYPE_XILINX_CANFD,
2213
+ .parent = TYPE_SYS_BUS_DEVICE,
2214
+ .instance_size = sizeof(XlnxVersalCANFDState),
2215
+ .class_init = canfd_class_init,
2216
+ .instance_init = canfd_init,
2217
+};
2218
+
2219
+static void canfd_register_types(void)
2220
+{
2221
+ type_register_static(&canfd_info);
2222
+}
2223
+
2224
+type_init(canfd_register_types)
2225
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
2226
index XXXXXXX..XXXXXXX 100644
2227
--- a/hw/net/can/meson.build
2228
+++ b/hw/net/can/meson.build
2229
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
2230
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
2231
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
2232
softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
2233
+softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-canfd.c'))
2234
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
2235
index XXXXXXX..XXXXXXX 100644
2236
--- a/hw/net/can/trace-events
2237
+++ b/hw/net/can/trace-events
2238
@@ -XXX,XX +XXX,XX @@ xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MAS
2239
xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
2240
xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
2241
xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
2242
+
2243
+# xlnx-versal-canfd.c
2244
+xlnx_canfd_update_irq(char *path, uint32_t isr, uint32_t ier, uint32_t irq) "%s: ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
2245
+xlnx_canfd_rx_fifo_filter_reject(char *path, uint32_t id, uint8_t dlc) "%s: Frame: ID: 0x%08x DLC: 0x%02x"
2246
+xlnx_canfd_rx_data(char *path, uint32_t id, uint8_t dlc, uint8_t flags) "%s: Frame: ID: 0x%08x DLC: 0x%02x CANFD Flag: 0x%02x"
2247
+xlnx_canfd_tx_data(char *path, uint32_t id, uint8_t dlc, uint8_t flags) "%s: Frame: ID: 0x%08x DLC: 0x%02x CANFD Flag: 0x%02x"
2248
+xlnx_canfd_reset(char *path, uint32_t val) "%s: Resetting controller with value = 0x%08x"
2249
--
2250
2.34.1
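A quick usage note (not part of the patch itself): the new trace points above can be enabled at run time with QEMU's -trace option, for example

    qemu-system-aarch64 -M xlnx-versal-virt -trace 'xlnx_canfd_*' ...

which matches all of the xlnx_canfd_* events added in this hunk.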
1
Implement the MVE saturating shift-right-and-narrow insns
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
2
VQSHRN, VQSHRUN, VQRSHRN and VQRSHRUN.
3
2
4
do_srshr() is borrowed from sve_helper.c.
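For reference, the rounding it implements is "add back the last bit shifted out"; a small worked example of that formula (illustration only, not part of the patch):

    /* do_srshr(x, sh) == (x >> sh) + ((x >> (sh - 1)) & 1) */
    do_srshr(7, 2);    /* (7 >> 2) + ((7 >> 1) & 1)  ==  1 + 1 ==  2 */
    do_srshr(-7, 2);   /* (-7 >> 2) + ((-7 >> 1) & 1) == -2 + 0 == -2 */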
3
Connect CANFD0 and CANFD1 on the Versal-virt machine and update xlnx-versal-virt
4
documentation with CANFD command-line examples.
5
5
6
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-13-peter.maydell@linaro.org
9
---
10
---
10
target/arm/helper-mve.h | 30 +++++++++++
11
docs/system/arm/xlnx-versal-virt.rst | 31 ++++++++++++++++
11
target/arm/mve.decode | 28 ++++++++++
12
include/hw/arm/xlnx-versal.h | 12 +++++++
12
target/arm/mve_helper.c | 104 +++++++++++++++++++++++++++++++++++++
13
hw/arm/xlnx-versal-virt.c | 53 ++++++++++++++++++++++++++++
13
target/arm/translate-mve.c | 12 +++++
14
hw/arm/xlnx-versal.c | 37 +++++++++++++++++++
14
4 files changed, 174 insertions(+)
15
4 files changed, 133 insertions(+)
15
16
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
17
diff --git a/docs/system/arm/xlnx-versal-virt.rst b/docs/system/arm/xlnx-versal-virt.rst
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper-mve.h
19
--- a/docs/system/arm/xlnx-versal-virt.rst
19
+++ b/target/arm/helper-mve.h
20
+++ b/docs/system/arm/xlnx-versal-virt.rst
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vrshrnbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
@@ -XXX,XX +XXX,XX @@ Implemented devices:
21
DEF_HELPER_FLAGS_4(mve_vrshrnbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
- DDR memory
22
DEF_HELPER_FLAGS_4(mve_vrshrntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
- BBRAM (36 bytes of Battery-backed RAM)
23
DEF_HELPER_FLAGS_4(mve_vrshrnth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
24
- eFUSE (3072 bytes of one-time field-programmable bit array)
24
+
25
+- 2 CANFDs
25
+DEF_HELPER_FLAGS_4(mve_vqshrnb_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
26
+DEF_HELPER_FLAGS_4(mve_vqshrnb_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
QEMU does not yet model any other devices, including the PL and the AI Engine.
27
+DEF_HELPER_FLAGS_4(mve_vqshrnt_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
28
+DEF_HELPER_FLAGS_4(mve_vqshrnt_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
@@ -XXX,XX +XXX,XX @@ To use a different index value, N, from default of 1, add:
29
+
30
30
+DEF_HELPER_FLAGS_4(mve_vqshrnb_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
Better yet, do not use actual product data when running guest image
31
+DEF_HELPER_FLAGS_4(mve_vqshrnb_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
on this Xilinx Versal Virt board.
32
+DEF_HELPER_FLAGS_4(mve_vqshrnt_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
+
33
+DEF_HELPER_FLAGS_4(mve_vqshrnt_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
+Using CANFDs for Versal Virt
34
+
35
+""""""""""""""""""""""""""""
35
+DEF_HELPER_FLAGS_4(mve_vqshrunbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
+The Versal CANFD controller is developed based on SocketCAN and the QEMU CAN bus
36
+DEF_HELPER_FLAGS_4(mve_vqshrunbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+implementation. The bus connection and SocketCAN connection for each CAN module
37
+DEF_HELPER_FLAGS_4(mve_vqshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
38
+can be set through the command line.
38
+DEF_HELPER_FLAGS_4(mve_vqshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+
39
+
40
+To connect both CANFD0 and CANFD1 on the same bus:
40
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
+
41
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
42
+.. code-block:: bash
42
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
+
43
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
44
+ -object can-bus,id=canbus -machine canbus0=canbus -machine canbus1=canbus
44
+
45
+
45
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
46
+To connect CANFD0 and CANFD1 to separate buses:
46
+DEF_HELPER_FLAGS_4(mve_vqrshrnb_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
47
+
47
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
48
+.. code-block:: bash
48
+DEF_HELPER_FLAGS_4(mve_vqrshrnt_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
49
+
49
+
50
+ -object can-bus,id=canbus0 -object can-bus,id=canbus1 \
50
+DEF_HELPER_FLAGS_4(mve_vqrshrunbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
51
+ -machine canbus0=canbus0 -machine canbus1=canbus1
51
+DEF_HELPER_FLAGS_4(mve_vqrshrunbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
52
+
52
+DEF_HELPER_FLAGS_4(mve_vqrshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
53
+The SocketCAN interface can connect to a physical or a virtual CAN interface on
53
+DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
54
+the host machine. Please check this document to learn about the CAN interface on
54
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
55
+Linux: docs/system/devices/can.rst
55
index XXXXXXX..XXXXXXX 100644
56
+
56
--- a/target/arm/mve.decode
57
+To connect CANFD0 and CANFD1 to the host machine's CAN interface can0:
57
+++ b/target/arm/mve.decode
58
+
58
@@ -XXX,XX +XXX,XX @@ VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_b
59
+.. code-block:: bash
59
VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_h
60
+
60
VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_b
61
+ -object can-bus,id=canbus -machine canbus0=canbus -machine canbus1=canbus
61
VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_h
62
+ -object can-host-socketcan,id=canhost0,if=can0,canbus=canbus
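If the host has no physical CAN adapter, a virtual SocketCAN interface can be used instead; assuming the Linux host has vcan support, something along these lines creates one (see docs/system/devices/can.rst for details):

    sudo modprobe vcan
    sudo ip link add dev can0 type vcan
    sudo ip link set up can0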
62
+
63
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
63
+VQSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_b
64
index XXXXXXX..XXXXXXX 100644
64
+VQSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_h
65
--- a/include/hw/arm/xlnx-versal.h
65
+VQSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_b
66
+++ b/include/hw/arm/xlnx-versal.h
66
+VQSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_h
67
@@ -XXX,XX +XXX,XX @@
67
+VQSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_b
68
#include "hw/dma/xlnx_csu_dma.h"
68
+VQSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 0 @2_shr_h
69
#include "hw/misc/xlnx-versal-crl.h"
69
+VQSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_b
70
#include "hw/misc/xlnx-versal-pmc-iou-slcr.h"
70
+VQSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 0 @2_shr_h
71
+#include "hw/net/xlnx-versal-canfd.h"
71
+
72
72
+VQSHRUNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_b
73
#define TYPE_XLNX_VERSAL "xlnx-versal"
73
+VQSHRUNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_h
74
OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
74
+VQSHRUNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
75
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
75
+VQSHRUNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
76
#define XLNX_VERSAL_NR_SDS 2
76
+
77
#define XLNX_VERSAL_NR_XRAM 4
77
+VQRSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_b
78
#define XLNX_VERSAL_NR_IRQS 192
78
+VQRSHRNB_S 111 0 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_h
79
+#define XLNX_VERSAL_NR_CANFD 2
79
+VQRSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_b
80
+#define XLNX_VERSAL_CANFD_REF_CLK (24 * 1000 * 1000)
80
+VQRSHRNT_S 111 0 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_h
81
81
+VQRSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_b
82
struct Versal {
82
+VQRSHRNB_U 111 1 1110 1 . ... ... ... 0 1111 0 1 . 0 ... 1 @2_shr_h
83
/*< private >*/
83
+VQRSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_b
84
@@ -XXX,XX +XXX,XX @@ struct Versal {
84
+VQRSHRNT_U 111 1 1110 1 . ... ... ... 1 1111 0 1 . 0 ... 1 @2_shr_h
85
CadenceGEMState gem[XLNX_VERSAL_NR_GEMS];
85
+
86
XlnxZDMA adma[XLNX_VERSAL_NR_ADMAS];
86
+VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_b
87
VersalUsb2 usb;
87
+VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_h
88
+ CanBusState *canbus[XLNX_VERSAL_NR_CANFD];
88
+VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
89
+ XlnxVersalCANFDState canfd[XLNX_VERSAL_NR_CANFD];
89
+VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
90
} iou;
90
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
91
91
index XXXXXXX..XXXXXXX 100644
92
/* Real-time Processing Unit. */
92
--- a/target/arm/mve_helper.c
93
@@ -XXX,XX +XXX,XX @@ struct Versal {
93
+++ b/target/arm/mve_helper.c
94
#define VERSAL_CRL_IRQ 10
94
@@ -XXX,XX +XXX,XX @@ static inline uint64_t do_urshr(uint64_t x, unsigned sh)
95
#define VERSAL_UART0_IRQ_0 18
96
#define VERSAL_UART1_IRQ_0 19
97
+#define VERSAL_CANFD0_IRQ_0 20
98
+#define VERSAL_CANFD1_IRQ_0 21
99
#define VERSAL_USB0_IRQ_0 22
100
#define VERSAL_GEM0_IRQ_0 56
101
#define VERSAL_GEM0_WAKE_IRQ_0 57
102
@@ -XXX,XX +XXX,XX @@ struct Versal {
103
#define MM_UART1 0xff010000U
104
#define MM_UART1_SIZE 0x10000
105
106
+#define MM_CANFD0 0xff060000U
107
+#define MM_CANFD0_SIZE 0x10000
108
+#define MM_CANFD1 0xff070000U
109
+#define MM_CANFD1_SIZE 0x10000
110
+
111
#define MM_GEM0 0xff0c0000U
112
#define MM_GEM0_SIZE 0x10000
113
#define MM_GEM1 0xff0d0000U
114
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/hw/arm/xlnx-versal-virt.c
117
+++ b/hw/arm/xlnx-versal-virt.c
118
@@ -XXX,XX +XXX,XX @@ struct VersalVirt {
119
uint32_t clk_25Mhz;
120
uint32_t usb;
121
uint32_t dwc;
122
+ uint32_t canfd[2];
123
} phandle;
124
struct arm_boot_info binfo;
125
126
+ CanBusState *canbus[XLNX_VERSAL_NR_CANFD];
127
struct {
128
bool secure;
129
} cfg;
130
@@ -XXX,XX +XXX,XX @@ static void fdt_add_uart_nodes(VersalVirt *s)
95
}
131
}
96
}
132
}
97
133
98
+static inline int64_t do_srshr(int64_t x, unsigned sh)
134
+static void fdt_add_canfd_nodes(VersalVirt *s)
99
+{
135
+{
100
+ if (likely(sh < 64)) {
136
+ uint64_t addrs[] = { MM_CANFD1, MM_CANFD0 };
101
+ return (x >> sh) + ((x >> (sh - 1)) & 1);
137
+ uint32_t size[] = { MM_CANFD1_SIZE, MM_CANFD0_SIZE };
102
+ } else {
138
+ unsigned int irqs[] = { VERSAL_CANFD1_IRQ_0, VERSAL_CANFD0_IRQ_0 };
103
+ /* Rounding the sign bit always produces 0. */
139
+ const char clocknames[] = "can_clk\0s_axi_aclk";
104
+ return 0;
140
+ int i;
141
+
142
+ /* Create and connect CANFD0 and CANFD1 nodes to canbus0. */
143
+ for (i = 0; i < ARRAY_SIZE(addrs); i++) {
144
+ char *name = g_strdup_printf("/canfd@%" PRIx64, addrs[i]);
145
+ qemu_fdt_add_subnode(s->fdt, name);
146
+
147
+ qemu_fdt_setprop_cell(s->fdt, name, "rx-fifo-depth", 0x40);
148
+ qemu_fdt_setprop_cell(s->fdt, name, "tx-mailbox-count", 0x20);
149
+
150
+ qemu_fdt_setprop_cells(s->fdt, name, "clocks",
151
+ s->phandle.clk_25Mhz, s->phandle.clk_25Mhz);
152
+ qemu_fdt_setprop(s->fdt, name, "clock-names",
153
+ clocknames, sizeof(clocknames));
154
+ qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
155
+ GIC_FDT_IRQ_TYPE_SPI, irqs[i],
156
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
157
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
158
+ 2, addrs[i], 2, size[i]);
159
+ qemu_fdt_setprop_string(s->fdt, name, "compatible",
160
+ "xlnx,canfd-2.0");
161
+
162
+ g_free(name);
105
+ }
163
+ }
106
+}
164
+}
107
+
165
+
108
DO_VSHRN_ALL(vshrn, DO_SHR)
166
static void fdt_add_fixed_link_nodes(VersalVirt *s, char *gemname,
109
DO_VSHRN_ALL(vrshrn, do_urshr)
167
uint32_t phandle)
110
+
168
{
111
+static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max,
169
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
112
+ bool *satp)
170
TYPE_XLNX_VERSAL);
171
object_property_set_link(OBJECT(&s->soc), "ddr", OBJECT(machine->ram),
172
&error_abort);
173
+ object_property_set_link(OBJECT(&s->soc), "canbus0", OBJECT(s->canbus[0]),
174
+ &error_abort);
175
+ object_property_set_link(OBJECT(&s->soc), "canbus1", OBJECT(s->canbus[1]),
176
+ &error_abort);
177
sysbus_realize(SYS_BUS_DEVICE(&s->soc), &error_fatal);
178
179
fdt_create(s);
180
create_virtio_regions(s);
181
fdt_add_gem_nodes(s);
182
fdt_add_uart_nodes(s);
183
+ fdt_add_canfd_nodes(s);
184
fdt_add_gic_nodes(s);
185
fdt_add_timer_nodes(s);
186
fdt_add_zdma_nodes(s);
187
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
188
189
static void versal_virt_machine_instance_init(Object *obj)
190
{
191
+ VersalVirt *s = XLNX_VERSAL_VIRT_MACHINE(obj);
192
+
193
+ /*
194
+ * User can set canbus0 and canbus1 properties to can-bus object and connect
195
+ * to socketcan(optional) interface via command line.
196
+ */
197
+ object_property_add_link(obj, "canbus0", TYPE_CAN_BUS,
198
+ (Object **)&s->canbus[0],
199
+ object_property_allow_set_link,
200
+ 0);
201
+ object_property_add_link(obj, "canbus1", TYPE_CAN_BUS,
202
+ (Object **)&s->canbus[1],
203
+ object_property_allow_set_link,
204
+ 0);
205
}
206
207
static void versal_virt_machine_class_init(ObjectClass *oc, void *data)
208
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
209
index XXXXXXX..XXXXXXX 100644
210
--- a/hw/arm/xlnx-versal.c
211
+++ b/hw/arm/xlnx-versal.c
212
@@ -XXX,XX +XXX,XX @@ static void versal_create_uarts(Versal *s, qemu_irq *pic)
213
}
214
}
215
216
+static void versal_create_canfds(Versal *s, qemu_irq *pic)
113
+{
217
+{
114
+ if (val > max) {
218
+ int i;
115
+ *satp = true;
219
+ uint32_t irqs[] = { VERSAL_CANFD0_IRQ_0, VERSAL_CANFD1_IRQ_0};
116
+ return max;
220
+ uint64_t addrs[] = { MM_CANFD0, MM_CANFD1 };
117
+ } else if (val < min) {
221
+
118
+ *satp = true;
222
+ for (i = 0; i < ARRAY_SIZE(s->lpd.iou.canfd); i++) {
119
+ return min;
223
+ char *name = g_strdup_printf("canfd%d", i);
120
+ } else {
224
+ SysBusDevice *sbd;
121
+ return val;
225
+ MemoryRegion *mr;
226
+
227
+ object_initialize_child(OBJECT(s), name, &s->lpd.iou.canfd[i],
228
+ TYPE_XILINX_CANFD);
229
+ sbd = SYS_BUS_DEVICE(&s->lpd.iou.canfd[i]);
230
+
231
+ object_property_set_int(OBJECT(&s->lpd.iou.canfd[i]), "ext_clk_freq",
232
+ XLNX_VERSAL_CANFD_REF_CLK , &error_abort);
233
+
234
+ object_property_set_link(OBJECT(&s->lpd.iou.canfd[i]), "canfdbus",
235
+ OBJECT(s->lpd.iou.canbus[i]),
236
+ &error_abort);
237
+
238
+ sysbus_realize(sbd, &error_fatal);
239
+
240
+ mr = sysbus_mmio_get_region(sbd, 0);
241
+ memory_region_add_subregion(&s->mr_ps, addrs[i], mr);
242
+
243
+ sysbus_connect_irq(sbd, 0, pic[irqs[i]]);
244
+ g_free(name);
122
+ }
245
+ }
123
+}
246
+}
124
+
247
+
125
+/* Saturating narrowing right shifts */
248
static void versal_create_usbs(Versal *s, qemu_irq *pic)
126
+#define DO_VSHRN_SAT(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
249
{
127
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
250
DeviceState *dev;
128
+ void *vm, uint32_t shift) \
251
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
129
+ { \
252
versal_create_apu_gic(s, pic);
130
+ LTYPE *m = vm; \
253
versal_create_rpu_cpus(s);
131
+ TYPE *d = vd; \
254
versal_create_uarts(s, pic);
132
+ uint16_t mask = mve_element_mask(env); \
255
+ versal_create_canfds(s, pic);
133
+ bool qc = false; \
256
versal_create_usbs(s, pic);
134
+ unsigned le; \
257
versal_create_gems(s, pic);
135
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
258
versal_create_admas(s, pic);
136
+ bool sat = false; \
259
@@ -XXX,XX +XXX,XX @@ static void versal_init(Object *obj)
137
+ TYPE r = FN(m[H##LESIZE(le)], shift, &sat); \
260
static Property versal_properties[] = {
138
+ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
261
DEFINE_PROP_LINK("ddr", Versal, cfg.mr_ddr, TYPE_MEMORY_REGION,
139
+ qc |= sat && (mask & 1 << (TOP * ESIZE)); \
262
MemoryRegion *),
140
+ } \
263
+ DEFINE_PROP_LINK("canbus0", Versal, lpd.iou.canbus[0],
141
+ if (qc) { \
264
+ TYPE_CAN_BUS, CanBusState *),
142
+ env->vfp.qc[0] = qc; \
265
+ DEFINE_PROP_LINK("canbus1", Versal, lpd.iou.canbus[1],
143
+ } \
266
+ TYPE_CAN_BUS, CanBusState *),
144
+ mve_advance_vpt(env); \
267
DEFINE_PROP_END_OF_LIST()
145
+ }
268
};
146
+
269
147
+#define DO_VSHRN_SAT_UB(BOP, TOP, FN) \
148
+ DO_VSHRN_SAT(BOP, false, 1, uint8_t, 2, uint16_t, FN) \
149
+ DO_VSHRN_SAT(TOP, true, 1, uint8_t, 2, uint16_t, FN)
150
+
151
+#define DO_VSHRN_SAT_UH(BOP, TOP, FN) \
152
+ DO_VSHRN_SAT(BOP, false, 2, uint16_t, 4, uint32_t, FN) \
153
+ DO_VSHRN_SAT(TOP, true, 2, uint16_t, 4, uint32_t, FN)
154
+
155
+#define DO_VSHRN_SAT_SB(BOP, TOP, FN) \
156
+ DO_VSHRN_SAT(BOP, false, 1, int8_t, 2, int16_t, FN) \
157
+ DO_VSHRN_SAT(TOP, true, 1, int8_t, 2, int16_t, FN)
158
+
159
+#define DO_VSHRN_SAT_SH(BOP, TOP, FN) \
160
+ DO_VSHRN_SAT(BOP, false, 2, int16_t, 4, int32_t, FN) \
161
+ DO_VSHRN_SAT(TOP, true, 2, int16_t, 4, int32_t, FN)
162
+
163
+#define DO_SHRN_SB(N, M, SATP) \
164
+ do_sat_bhs((int64_t)(N) >> (M), INT8_MIN, INT8_MAX, SATP)
165
+#define DO_SHRN_UB(N, M, SATP) \
166
+ do_sat_bhs((uint64_t)(N) >> (M), 0, UINT8_MAX, SATP)
167
+#define DO_SHRUN_B(N, M, SATP) \
168
+ do_sat_bhs((int64_t)(N) >> (M), 0, UINT8_MAX, SATP)
169
+
170
+#define DO_SHRN_SH(N, M, SATP) \
171
+ do_sat_bhs((int64_t)(N) >> (M), INT16_MIN, INT16_MAX, SATP)
172
+#define DO_SHRN_UH(N, M, SATP) \
173
+ do_sat_bhs((uint64_t)(N) >> (M), 0, UINT16_MAX, SATP)
174
+#define DO_SHRUN_H(N, M, SATP) \
175
+ do_sat_bhs((int64_t)(N) >> (M), 0, UINT16_MAX, SATP)
176
+
177
+#define DO_RSHRN_SB(N, M, SATP) \
178
+ do_sat_bhs(do_srshr(N, M), INT8_MIN, INT8_MAX, SATP)
179
+#define DO_RSHRN_UB(N, M, SATP) \
180
+ do_sat_bhs(do_urshr(N, M), 0, UINT8_MAX, SATP)
181
+#define DO_RSHRUN_B(N, M, SATP) \
182
+ do_sat_bhs(do_srshr(N, M), 0, UINT8_MAX, SATP)
183
+
184
+#define DO_RSHRN_SH(N, M, SATP) \
185
+ do_sat_bhs(do_srshr(N, M), INT16_MIN, INT16_MAX, SATP)
186
+#define DO_RSHRN_UH(N, M, SATP) \
187
+ do_sat_bhs(do_urshr(N, M), 0, UINT16_MAX, SATP)
188
+#define DO_RSHRUN_H(N, M, SATP) \
189
+ do_sat_bhs(do_srshr(N, M), 0, UINT16_MAX, SATP)
190
+
191
+DO_VSHRN_SAT_SB(vqshrnb_sb, vqshrnt_sb, DO_SHRN_SB)
192
+DO_VSHRN_SAT_SH(vqshrnb_sh, vqshrnt_sh, DO_SHRN_SH)
193
+DO_VSHRN_SAT_UB(vqshrnb_ub, vqshrnt_ub, DO_SHRN_UB)
194
+DO_VSHRN_SAT_UH(vqshrnb_uh, vqshrnt_uh, DO_SHRN_UH)
195
+DO_VSHRN_SAT_SB(vqshrunbb, vqshruntb, DO_SHRUN_B)
196
+DO_VSHRN_SAT_SH(vqshrunbh, vqshrunth, DO_SHRUN_H)
197
+
198
+DO_VSHRN_SAT_SB(vqrshrnb_sb, vqrshrnt_sb, DO_RSHRN_SB)
199
+DO_VSHRN_SAT_SH(vqrshrnb_sh, vqrshrnt_sh, DO_RSHRN_SH)
200
+DO_VSHRN_SAT_UB(vqrshrnb_ub, vqrshrnt_ub, DO_RSHRN_UB)
201
+DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
202
+DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
203
+DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)
204
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
205
index XXXXXXX..XXXXXXX 100644
206
--- a/target/arm/translate-mve.c
207
+++ b/target/arm/translate-mve.c
208
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_N(VSHRNB, vshrnb)
209
DO_2SHIFT_N(VSHRNT, vshrnt)
210
DO_2SHIFT_N(VRSHRNB, vrshrnb)
211
DO_2SHIFT_N(VRSHRNT, vrshrnt)
212
+DO_2SHIFT_N(VQSHRNB_S, vqshrnb_s)
213
+DO_2SHIFT_N(VQSHRNT_S, vqshrnt_s)
214
+DO_2SHIFT_N(VQSHRNB_U, vqshrnb_u)
215
+DO_2SHIFT_N(VQSHRNT_U, vqshrnt_u)
216
+DO_2SHIFT_N(VQSHRUNB, vqshrunb)
217
+DO_2SHIFT_N(VQSHRUNT, vqshrunt)
218
+DO_2SHIFT_N(VQRSHRNB_S, vqrshrnb_s)
219
+DO_2SHIFT_N(VQRSHRNT_S, vqrshrnt_s)
220
+DO_2SHIFT_N(VQRSHRNB_U, vqrshrnb_u)
221
+DO_2SHIFT_N(VQRSHRNT_U, vqrshrnt_u)
222
+DO_2SHIFT_N(VQRSHRUNB, vqrshrunb)
223
+DO_2SHIFT_N(VQRSHRUNT, vqrshrunt)
224
--
270
--
225
2.20.1
271
2.34.1
226
227
From: Vikram Garhwal <vikram.garhwal@amd.com>

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: Francisco Iglesias <francisco.iglesias@amd.com>
S: Maintained
F: hw/net/can/xlnx-*
F: include/hw/net/xlnx-*
-F: tests/qtest/xlnx-can-test*
+F: tests/qtest/xlnx-can*-test*

EDU
M: Jiri Slaby <jslaby@suse.cz>
--
2.34.1
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
1
2
3
The QTests perform three tests on the Xilinx VERSAL CANFD controller:
4
Tests the CANFD controllers in loopback.
5
Tests the CANFD controllers in normal mode with CAN frame.
6
Tests the CANFD controllers in normal mode with CANFD frame.
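As a rough guide, the new test runs as part of make check-qtest-aarch64, and (assuming a default build tree) it can also be run stand-alone in the usual qtest way:

    cd build
    QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/xlnx-canfd-test

The binary name here follows the meson target added below.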
7
8
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
9
Acked-by: Thomas Huth <thuth@redhat.com>
10
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
tests/qtest/xlnx-canfd-test.c | 423 ++++++++++++++++++++++++++++++++++
15
tests/qtest/meson.build | 1 +
16
2 files changed, 424 insertions(+)
17
create mode 100644 tests/qtest/xlnx-canfd-test.c
18
19
diff --git a/tests/qtest/xlnx-canfd-test.c b/tests/qtest/xlnx-canfd-test.c
20
new file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- /dev/null
23
+++ b/tests/qtest/xlnx-canfd-test.c
24
@@ -XXX,XX +XXX,XX @@
25
+/*
26
+ * SPDX-License-Identifier: MIT
27
+ *
28
+ * QTests for the Xilinx Versal CANFD controller.
29
+ *
30
+ * Copyright (c) 2022 AMD Inc.
31
+ *
32
+ * Written-by: Vikram Garhwal<vikram.garhwal@amd.com>
33
+ *
34
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
35
+ * of this software and associated documentation files (the "Software"), to deal
36
+ * in the Software without restriction, including without limitation the rights
37
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
38
+ * copies of the Software, and to permit persons to whom the Software is
39
+ * furnished to do so, subject to the following conditions:
40
+ *
41
+ * The above copyright notice and this permission notice shall be included in
42
+ * all copies or substantial portions of the Software.
43
+ *
44
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
45
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
46
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
47
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
48
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
49
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
50
+ * THE SOFTWARE.
51
+ */
52
+
53
+#include "qemu/osdep.h"
54
+#include "libqtest.h"
55
+
56
+/* Base address. */
57
+#define CANFD0_BASE_ADDR 0xff060000
58
+#define CANFD1_BASE_ADDR 0xff070000
59
+
60
+/* Register addresses. */
61
+#define R_SRR_OFFSET 0x00
62
+#define R_MSR_OFFSET 0x04
63
+#define R_FILTER_CONTROL_REGISTER 0xe0
64
+#define R_SR_OFFSET 0x18
65
+#define R_ISR_OFFSET 0x1c
66
+#define R_IER_OFFSET 0x20
67
+#define R_ICR_OFFSET 0x24
68
+#define R_TX_READY_REQ_REGISTER 0x90
69
+#define RX_FIFO_STATUS_REGISTER 0xe8
70
+#define R_TXID_OFFSET 0x100
71
+#define R_TXDLC_OFFSET 0x104
72
+#define R_TXDATA1_OFFSET 0x108
73
+#define R_TXDATA2_OFFSET 0x10c
74
+#define R_AFMR_REGISTER0 0xa00
75
+#define R_AFIR_REGISTER0 0xa04
76
+#define R_RX0_ID_OFFSET 0x2100
77
+#define R_RX0_DLC_OFFSET 0x2104
78
+#define R_RX0_DATA1_OFFSET 0x2108
79
+#define R_RX0_DATA2_OFFSET 0x210c
80
+
81
+/* CANFD modes. */
82
+#define SRR_CONFIG_MODE 0x00
83
+#define MSR_NORMAL_MODE 0x00
84
+#define MSR_LOOPBACK_MODE (1 << 1)
85
+#define ENABLE_CANFD (1 << 1)
86
+
87
+/* CANFD status. */
88
+#define STATUS_CONFIG_MODE (1 << 0)
89
+#define STATUS_NORMAL_MODE (1 << 3)
90
+#define STATUS_LOOPBACK_MODE (1 << 1)
91
+#define ISR_TXOK (1 << 1)
92
+#define ISR_RXOK (1 << 4)
93
+
94
+#define ENABLE_ALL_FILTERS 0xffffffff
95
+#define ENABLE_ALL_INTERRUPTS 0xffffffff
96
+
97
+/* We are sending one canfd message. */
98
+#define TX_READY_REG_VAL 0x1
99
+
100
+#define FIRST_RX_STORE_INDEX 0x1
101
+#define STATUS_REG_MASK 0xf
102
+#define DLC_FD_BIT_SHIFT 0x1b
103
+#define DLC_FD_BIT_MASK 0xf8000000
104
+#define FIFO_STATUS_READ_INDEX_MASK 0x3f
105
+#define FIFO_STATUS_FILL_LEVEL_MASK 0x7f00
106
+#define FILL_LEVEL_SHIFT 0x8
107
+
108
+/* CANFD frame size ID, DLC and 16 DATA word. */
109
+#define CANFD_FRAME_SIZE 18
110
+/* CAN frame size ID, DLC and 2 DATA word. */
111
+#define CAN_FRAME_SIZE 4
112
+
113
+/* Set the filters for CANFD controller. */
114
+static void enable_filters(QTestState *qts)
115
+{
116
+ const uint32_t arr_afmr[32] = { 0xb423deaa, 0xa2a40bdc, 0x1b64f486,
117
+ 0x95c0d4ee, 0xe0c44528, 0x4b407904,
118
+ 0xd2673f46, 0x9fc638d6, 0x8844f3d8,
119
+ 0xa607d1e8, 0x67871bf4, 0xc2557dc,
120
+ 0x9ea5b53e, 0x3643c0cc, 0x5a05ea8e,
121
+ 0x83a46d84, 0x4a25c2b8, 0x93a66008,
122
+ 0x2e467470, 0xedc66118, 0x9086f9f2,
123
+ 0xfa23dd36, 0xb6654b90, 0xb221b8ca,
124
+ 0x3467d1e2, 0xa3a55542, 0x5b26a012,
125
+ 0x2281ea7e, 0xcea0ece8, 0xdc61e588,
126
+ 0x2e5676a, 0x16821320 };
127
+
128
+ const uint32_t arr_afir[32] = { 0xa833dfa1, 0x255a477e, 0x3a4bb1c5,
129
+ 0x8f560a6c, 0x27f38903, 0x2fecec4d,
130
+ 0xa014c66d, 0xec289b8, 0x7e52dead,
131
+ 0x82e94f3c, 0xcf3e3c5c, 0x66059871,
132
+ 0x3f213df4, 0x25ac3959, 0xa12e9bef,
133
+ 0xa3ad3af, 0xbafd7fe, 0xb3cb40fd,
134
+ 0x5d9caa81, 0x2ed61902, 0x7cd64a0,
135
+ 0x4b1fa538, 0x9b5ced8c, 0x150de059,
136
+ 0xd2794227, 0x635e820a, 0xbb6b02cf,
137
+ 0xbb58176, 0x570025bb, 0xa78d9658,
138
+ 0x49d735df, 0xe5399d2f };
139
+
140
+ /* Passing the respective array values to all the AFMR and AFIR pairs. */
141
+ for (int i = 0; i < 32; i++) {
142
+ /* For CANFD0. */
143
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_AFMR_REGISTER0 + 8 * i,
144
+ arr_afmr[i]);
145
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_AFIR_REGISTER0 + 8 * i,
146
+ arr_afir[i]);
147
+
148
+ /* For CANFD1. */
149
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_AFMR_REGISTER0 + 8 * i,
150
+ arr_afmr[i]);
151
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_AFIR_REGISTER0 + 8 * i,
152
+ arr_afir[i]);
153
+ }
154
+
155
+ /* Enable all the pairs from AFR register. */
156
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_FILTER_CONTROL_REGISTER,
157
+ ENABLE_ALL_FILTERS);
158
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_FILTER_CONTROL_REGISTER,
159
+ ENABLE_ALL_FILTERS);
160
+}
161
+
162
+static void configure_canfd(QTestState *qts, uint8_t mode)
163
+{
164
+ uint32_t status = 0;
165
+
166
+ /* Put CANFD0 and CANFD1 in config mode. */
167
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_SRR_OFFSET, SRR_CONFIG_MODE);
168
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_SRR_OFFSET, SRR_CONFIG_MODE);
169
+
170
+ /* Write mode of operation in Mode select register. */
171
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_MSR_OFFSET, mode);
172
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_MSR_OFFSET, mode);
173
+
174
+ enable_filters(qts);
175
+
176
+ /* Check here if CANFD0 and CANFD1 are in config mode. */
177
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
178
+ status = status & STATUS_REG_MASK;
179
+ g_assert_cmpint(status, ==, STATUS_CONFIG_MODE);
180
+
181
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
182
+ status = status & STATUS_REG_MASK;
183
+ g_assert_cmpint(status, ==, STATUS_CONFIG_MODE);
184
+
185
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_IER_OFFSET, ENABLE_ALL_INTERRUPTS);
186
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_IER_OFFSET, ENABLE_ALL_INTERRUPTS);
187
+
188
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CANFD);
189
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CANFD);
190
+}
191
+
192
+static void generate_random_data(uint32_t *buf_tx, bool is_canfd_frame)
193
+{
194
+ /* Generate random TX data for CANFD frame. */
195
+ if (is_canfd_frame) {
196
+ for (int i = 0; i < CANFD_FRAME_SIZE - 2; i++) {
197
+ buf_tx[2 + i] = rand();
198
+ }
199
+ } else {
200
+ /* Generate random TX data for CAN frame. */
201
+ for (int i = 0; i < CAN_FRAME_SIZE - 2; i++) {
202
+ buf_tx[2 + i] = rand();
203
+ }
204
+ }
205
+}
206
+
207
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
208
+{
209
+ uint32_t int_status;
210
+ uint32_t fifo_status_reg_value;
211
+ /* At which RX FIFO the received data is stored. */
212
+ uint8_t store_ind = 0;
213
+ bool is_canfd_frame = false;
214
+
215
+ /* Read the interrupt on CANFD rx. */
216
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
217
+
218
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
219
+
220
+ /* Find the fill level and read index. */
221
+ fifo_status_reg_value = qtest_readl(qts, can_base_addr +
222
+ RX_FIFO_STATUS_REGISTER);
223
+
224
+ store_ind = (fifo_status_reg_value & FIFO_STATUS_READ_INDEX_MASK) +
225
+ ((fifo_status_reg_value & FIFO_STATUS_FILL_LEVEL_MASK) >>
226
+ FILL_LEVEL_SHIFT);
227
+
228
+ g_assert_cmpint(store_ind, ==, FIRST_RX_STORE_INDEX);
229
+
230
+ /* Read the RX register data for CANFD. */
231
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RX0_ID_OFFSET);
232
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RX0_DLC_OFFSET);
233
+
234
+ is_canfd_frame = (buf_rx[1] >> DLC_FD_BIT_SHIFT) & 1;
235
+
236
+ if (is_canfd_frame) {
237
+ for (int i = 0; i < CANFD_FRAME_SIZE - 2; i++) {
238
+ buf_rx[i + 2] = qtest_readl(qts,
239
+ can_base_addr + R_RX0_DATA1_OFFSET + 4 * i);
240
+ }
241
+ } else {
242
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RX0_DATA1_OFFSET);
243
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RX0_DATA2_OFFSET);
244
+ }
245
+
246
+ /* Clear the RX interrupt. */
247
+ qtest_writel(qts, can_base_addr + R_ICR_OFFSET, ISR_RXOK);
248
+}
249
+
250
+static void write_data(QTestState *qts, uint64_t can_base_addr,
251
+ const uint32_t *buf_tx, bool is_canfd_frame)
252
+{
253
+ /* Write the TX register data for CANFD. */
254
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
255
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
256
+
257
+ if (is_canfd_frame) {
258
+ for (int i = 0; i < CANFD_FRAME_SIZE - 2; i++) {
259
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET + 4 * i,
260
+ buf_tx[2 + i]);
261
+ }
262
+ } else {
263
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
264
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
265
+ }
266
+}
267
+
268
+static void send_data(QTestState *qts, uint64_t can_base_addr)
269
+{
270
+ uint32_t int_status;
271
+
272
+ qtest_writel(qts, can_base_addr + R_TX_READY_REQ_REGISTER,
273
+ TX_READY_REG_VAL);
274
+
275
+ /* Read the interrupt on CANFD for tx. */
276
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
277
+
278
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
279
+
280
+ /* Clear the interrupt for tx. */
281
+ qtest_writel(qts, can_base_addr + R_ICR_OFFSET, ISR_TXOK);
282
+}
283
+
284
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
285
+ bool is_canfd_frame)
286
+{
287
+ uint16_t size = 0;
288
+ uint8_t len = CAN_FRAME_SIZE;
289
+
290
+ if (is_canfd_frame) {
291
+ len = CANFD_FRAME_SIZE;
292
+ }
293
+
294
+ while (size < len) {
295
+ if (R_RX0_ID_OFFSET + 4 * size == R_RX0_DLC_OFFSET) {
296
+ g_assert_cmpint((buf_rx[size] & DLC_FD_BIT_MASK), ==,
297
+ (buf_tx[size] & DLC_FD_BIT_MASK));
298
+ } else {
299
+ if (!is_canfd_frame && size == 4) {
300
+ break;
301
+ }
302
+
303
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
304
+ }
305
+
306
+ size++;
307
+ }
308
+}
309
+/*
+ * The Xilinx CANFD controller supports both CAN and CANFD frames. This test
+ * transfers a CAN frame, i.e. 8 bytes of data, from CANFD0 to CANFD1 over the
+ * CAN bus: CANFD0 initiates the transfer and CANFD1 receives the data. The
+ * test then compares the CAN frame data sent from CANFD0 with the data
+ * received on CANFD1.
+ */
316
+static void test_can_data_transfer(void)
317
+{
318
+ uint32_t buf_tx[CAN_FRAME_SIZE] = { 0x5a5bb9a4, 0x80000000,
319
+ 0x12345678, 0x87654321 };
320
+ uint32_t buf_rx[CAN_FRAME_SIZE] = { 0x00, 0x00, 0x00, 0x00 };
321
+ uint32_t status = 0;
322
+
323
+ generate_random_data(buf_tx, false);
324
+
325
+ QTestState *qts = qtest_init("-machine xlnx-versal-virt"
326
+ " -object can-bus,id=canbus"
327
+ " -machine canbus0=canbus"
328
+ " -machine canbus1=canbus"
329
+ );
330
+
331
+ configure_canfd(qts, MSR_NORMAL_MODE);
332
+
333
+ /* Check if CANFD0 and CANFD1 are in Normal mode. */
334
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
335
+ status = status & STATUS_REG_MASK;
336
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
337
+
338
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
339
+ status = status & STATUS_REG_MASK;
340
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
341
+
342
+ write_data(qts, CANFD0_BASE_ADDR, buf_tx, false);
343
+
344
+ send_data(qts, CANFD0_BASE_ADDR);
345
+ read_data(qts, CANFD1_BASE_ADDR, buf_rx);
346
+ match_rx_tx_data(buf_tx, buf_rx, false);
347
+
348
+ qtest_quit(qts);
349
+}
350
+
351
+/*
+ * This test transfers a CANFD frame, i.e. 64 bytes of data, from CANFD0 to
+ * CANFD1 over the CAN bus: CANFD0 initiates the transfer and CANFD1 receives
+ * the data. The test then compares the CANFD frame data sent from CANFD0 with
+ * the data received on CANFD1.
+ */
357
+static void test_canfd_data_transfer(void)
358
+{
359
+ uint32_t buf_tx[CANFD_FRAME_SIZE] = { 0x5a5bb9a4, 0xf8000000 };
360
+ uint32_t buf_rx[CANFD_FRAME_SIZE] = { 0x00, 0x00, 0x00, 0x00 };
361
+ uint32_t status = 0;
362
+
363
+ generate_random_data(buf_tx, true);
364
+
365
+ QTestState *qts = qtest_init("-machine xlnx-versal-virt"
366
+ " -object can-bus,id=canbus"
367
+ " -machine canbus0=canbus"
368
+ " -machine canbus1=canbus"
369
+ );
370
+
371
+ configure_canfd(qts, MSR_NORMAL_MODE);
372
+
373
+ /* Check if CANFD0 and CANFD1 are in Normal mode. */
374
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
375
+ status = status & STATUS_REG_MASK;
376
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
377
+
378
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
379
+ status = status & STATUS_REG_MASK;
380
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
381
+
382
+ write_data(qts, CANFD0_BASE_ADDR, buf_tx, true);
383
+
384
+ send_data(qts, CANFD0_BASE_ADDR);
385
+ read_data(qts, CANFD1_BASE_ADDR, buf_rx);
386
+ match_rx_tx_data(buf_tx, buf_rx, true);
387
+
388
+ qtest_quit(qts);
389
+}
390
+
391
+/*
+ * This test exercises loopback mode on CANFD0 and CANFD1. The data sent from
+ * the TX side of each controller is compared with the RX register data of the
+ * same controller.
+ */
396
+static void test_can_loopback(void)
397
+{
398
+ uint32_t buf_tx[CANFD_FRAME_SIZE] = { 0x5a5bb9a4, 0xf8000000 };
399
+ uint32_t buf_rx[CANFD_FRAME_SIZE] = { 0x00, 0x00, 0x00, 0x00 };
400
+ uint32_t status = 0;
401
+
402
+ generate_random_data(buf_tx, true);
403
+
404
+ QTestState *qts = qtest_init("-machine xlnx-versal-virt"
405
+ " -object can-bus,id=canbus"
406
+ " -machine canbus0=canbus"
407
+ " -machine canbus1=canbus"
408
+ );
409
+
410
+ configure_canfd(qts, MSR_LOOPBACK_MODE);
411
+
412
+ /* Check if CANFD0 and CANFD1 are set in correct loopback mode. */
413
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
414
+ status = status & STATUS_REG_MASK;
415
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
416
+
417
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
418
+ status = status & STATUS_REG_MASK;
419
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
420
+
421
+ write_data(qts, CANFD0_BASE_ADDR, buf_tx, true);
422
+
423
+ send_data(qts, CANFD0_BASE_ADDR);
424
+ read_data(qts, CANFD0_BASE_ADDR, buf_rx);
425
+ match_rx_tx_data(buf_tx, buf_rx, true);
426
+
427
+ generate_random_data(buf_tx, true);
428
+
429
+ write_data(qts, CANFD1_BASE_ADDR, buf_tx, true);
430
+
431
+ send_data(qts, CANFD1_BASE_ADDR);
432
+ read_data(qts, CANFD1_BASE_ADDR, buf_rx);
433
+ match_rx_tx_data(buf_tx, buf_rx, true);
434
+
435
+ qtest_quit(qts);
436
+}
437
+
438
+int main(int argc, char **argv)
439
+{
440
+ g_test_init(&argc, &argv, NULL);
441
+
442
+ qtest_add_func("/net/canfd/can_data_transfer", test_can_data_transfer);
443
+ qtest_add_func("/net/canfd/canfd_data_transfer", test_canfd_data_transfer);
444
+ qtest_add_func("/net/canfd/can_loopback", test_can_loopback);
445
+
446
+ return g_test_run();
447
+}
448
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
449
index XXXXXXX..XXXXXXX 100644
450
--- a/tests/qtest/meson.build
451
+++ b/tests/qtest/meson.build
452
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
453
(config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
454
['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
455
(config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
456
+ (config_all_devices.has_key('CONFIG_XLNX_VERSAL') ? ['xlnx-canfd-test'] : []) + \
457
(config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
458
(config_all.has_key('CONFIG_TCG') and \
459
config_all_devices.has_key('CONFIG_TPM_TIS_I2C') ? ['tpm-tis-i2c-test'] : []) + \
460
--
461
2.34.1
1
The MVE extension to v8.1M includes some new shift instructions which
1
From: qianfan Zhao <qianfanguijin@163.com>
2
sit entirely within the non-coprocessor part of the encoding space
3
and which operate only on general-purpose registers. They take up
4
the space which was previously UNPREDICTABLE MOVS and ORRS encodings
5
with Rm == 13 or 15.
6
2
7
Implement the long shifts by immediate, which perform shifts on a
3
The Allwinner R40 (sun8i) SoC features a quad-core Cortex-A7 ARM CPU,
8
pair of general-purpose registers treated as a 64-bit quantity, with
4
and a Mali400 MP2 GPU from ARM. It's also known as the Allwinner T3
9
an immediate shift count between 1 and 32.
5
for in-car entertainment usage; the A40i and A40pro are variants that
6
differ in their applicable temperature range (industrial and military).
10
7
11
Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for
8
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
12
the Rm==13,15 case, we need to explicitly emit code to UNDEF for the
9
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
13
cases where v8.1M now requires that. (Trying to change MOVS and ORRS
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
is too difficult, because the functions that generate the code are
11
---
15
shared between a dozen different kinds of arithmetic or logical
12
include/hw/arm/allwinner-r40.h | 110 +++++++++
16
instruction for all A32, T16 and T32 encodings, and for some insns
13
hw/arm/allwinner-r40.c | 415 +++++++++++++++++++++++++++++++++
17
and some encodings Rm==13,15 are valid.)
14
hw/arm/bananapi_m2u.c | 129 ++++++++++
15
hw/arm/Kconfig | 10 +
16
hw/arm/meson.build | 1 +
17
5 files changed, 665 insertions(+)
18
create mode 100644 include/hw/arm/allwinner-r40.h
19
create mode 100644 hw/arm/allwinner-r40.c
20
create mode 100644 hw/arm/bananapi_m2u.c
18
21
19
We make the helper functions we need for UQSHLL and SQSHLL take
22
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
20
a 32-bit value which the helper casts to int8_t because we'll need
23
new file mode 100644
21
these helpers also for the shift-by-register insns, where the shift
24
index XXXXXXX..XXXXXXX
22
count might be < 0 or > 32.
25
--- /dev/null
23
26
+++ b/include/hw/arm/allwinner-r40.h
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
26
Message-id: 20210628135835.6690-16-peter.maydell@linaro.org
27
---
28
target/arm/helper-mve.h | 3 ++
29
target/arm/translate.h | 1 +
30
target/arm/t32.decode | 28 +++++++++++++
31
target/arm/mve_helper.c | 10 +++++
32
target/arm/translate.c | 90 +++++++++++++++++++++++++++++++++++++++++
33
5 files changed, 132 insertions(+)
34
35
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
36
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/helper-mve.h
38
+++ b/target/arm/helper-mve.h
39
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
42
DEF_HELPER_FLAGS_4(mve_vshlc, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
43
+
44
+DEF_HELPER_FLAGS_3(mve_sqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
45
+DEF_HELPER_FLAGS_3(mve_uqshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
46
diff --git a/target/arm/translate.h b/target/arm/translate.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate.h
49
+++ b/target/arm/translate.h
50
@@ -XXX,XX +XXX,XX @@ typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
51
typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
52
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
53
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
54
+typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
55
56
/**
57
* arm_tbflags_from_tb:
58
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/t32.decode
61
+++ b/target/arm/t32.decode
62
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@
63
&mcr !extern cp opc1 crn crm opc2 rt
28
+/*
64
&mcrr !extern cp opc1 crm rt rt2
29
+ * Allwinner R40/A40i/T3 System on Chip emulation
65
30
+ *
66
+&mve_shl_ri rdalo rdahi shim
31
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
67
+
32
+ *
68
+# rdahi: bits [3:1] from insn, bit 0 is 1
33
+ * This program is free software: you can redistribute it and/or modify
69
+# rdalo: bits [3:1] from insn, bit 0 is 0
34
+ * it under the terms of the GNU General Public License as published by
70
+%rdahi_9 9:3 !function=times_2_plus_1
35
+ * the Free Software Foundation, either version 2 of the License, or
71
+%rdalo_17 17:3 !function=times_2
36
+ * (at your option) any later version.
72
+
37
+ *
73
# Data-processing (register)
38
+ * This program is distributed in the hope that it will be useful,
74
39
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
75
%imm5_12_6 12:3 6:2
40
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
41
+ * GNU General Public License for more details.
42
+ *
43
+ * You should have received a copy of the GNU General Public License
44
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
45
+ */
46
+
47
+#ifndef HW_ARM_ALLWINNER_R40_H
48
+#define HW_ARM_ALLWINNER_R40_H
49
+
50
+#include "qom/object.h"
51
+#include "hw/arm/boot.h"
52
+#include "hw/timer/allwinner-a10-pit.h"
53
+#include "hw/intc/arm_gic.h"
54
+#include "hw/sd/allwinner-sdhost.h"
55
+#include "target/arm/cpu.h"
56
+#include "sysemu/block-backend.h"
57
+
58
+enum {
59
+ AW_R40_DEV_SRAM_A1,
60
+ AW_R40_DEV_SRAM_A2,
61
+ AW_R40_DEV_SRAM_A3,
62
+ AW_R40_DEV_SRAM_A4,
63
+ AW_R40_DEV_MMC0,
64
+ AW_R40_DEV_MMC1,
65
+ AW_R40_DEV_MMC2,
66
+ AW_R40_DEV_MMC3,
67
+ AW_R40_DEV_CCU,
68
+ AW_R40_DEV_PIT,
69
+ AW_R40_DEV_UART0,
70
+ AW_R40_DEV_GIC_DIST,
71
+ AW_R40_DEV_GIC_CPU,
72
+ AW_R40_DEV_GIC_HYP,
73
+ AW_R40_DEV_GIC_VCPU,
74
+ AW_R40_DEV_SDRAM
75
+};
76
+
77
+#define AW_R40_NUM_CPUS (4)
78
+
79
+/**
80
+ * Allwinner R40 object model
81
+ * @{
82
+ */
83
+
84
+/** Object type for the Allwinner R40 SoC */
85
+#define TYPE_AW_R40 "allwinner-r40"
86
+
87
+/** Convert input object to Allwinner R40 state object */
88
+OBJECT_DECLARE_SIMPLE_TYPE(AwR40State, AW_R40)
89
+
90
+/** @} */
91
+
92
+/**
93
+ * Allwinner R40 object
94
+ *
95
+ * This struct contains the state of all the devices
96
+ * which are currently emulated by the R40 SoC code.
97
+ */
98
+#define AW_R40_NUM_MMCS 4
99
+
100
+struct AwR40State {
101
+ /*< private >*/
102
+ DeviceState parent_obj;
103
+ /*< public >*/
104
+
105
+ ARMCPU cpus[AW_R40_NUM_CPUS];
106
+ const hwaddr *memmap;
107
+ AwA10PITState timer;
108
+ AwSdHostState mmc[AW_R40_NUM_MMCS];
109
+ GICState gic;
110
+ MemoryRegion sram_a1;
111
+ MemoryRegion sram_a2;
112
+ MemoryRegion sram_a3;
113
+ MemoryRegion sram_a4;
114
+};
115
+
116
+/**
117
+ * Emulate Boot ROM firmware setup functionality.
118
+ *
119
+ * A real Allwinner R40 SoC contains a Boot ROM
120
+ * which is the first code that runs right after
121
+ * the SoC is powered on. The Boot ROM is responsible
122
+ * for loading user code (e.g. a bootloader) from any
123
+ * of the supported external devices and writing the
124
+ * downloaded code to internal SRAM. After loading the SoC
125
+ * begins executing the code written to SRAM.
126
+ *
127
+ * This function emulates the Boot ROM by copying 32 KiB
128
+ * of data from the given block device and writes it to
129
+ * the start of the first internal SRAM memory.
130
+ *
131
+ * @s: Allwinner R40 state object pointer
132
+ * @blk: Block backend device object pointer
133
+ * @unit: the mmc control's unit
134
+ */
135
+bool allwinner_r40_bootrom_setup(AwR40State *s, BlockBackend *blk, int unit);
136
+
137
+#endif /* HW_ARM_ALLWINNER_R40_H */
138
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
139
new file mode 100644
140
index XXXXXXX..XXXXXXX
141
--- /dev/null
142
+++ b/hw/arm/allwinner-r40.c
76
@@ -XXX,XX +XXX,XX @@
143
@@ -XXX,XX +XXX,XX @@
77
@S_xrr_shi ....... .... . rn:4 .... .... .. shty:2 rm:4 \
144
+/*
78
&s_rrr_shi shim=%imm5_12_6 s=1 rd=0
145
+ * Allwinner R40/A40i/T3 System on Chip emulation
79
146
+ *
80
+@mve_shl_ri ....... .... . ... . . ... ... . .. .. .... \
147
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
81
+ &mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
148
+ *
82
+
149
+ * This program is free software: you can redistribute it and/or modify
83
{
150
+ * it under the terms of the GNU General Public License as published by
84
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
151
+ * the Free Software Foundation, either version 2 of the License, or
85
AND_rrri 1110101 0000 . .... 0 ... .... .... .... @s_rrr_shi
152
+ * (at your option) any later version.
86
}
153
+ *
87
BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
154
+ * This program is distributed in the hope that it will be useful,
88
{
155
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
89
+ # The v8.1M MVE shift insns overlap in encoding with MOVS/ORRS
156
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
90
+ # and are distinguished by having Rm==13 or 15. Those are UNPREDICTABLE
157
+ * GNU General Public License for more details.
91
+ # cases for MOVS/ORRS. We decode the MVE cases first, ensuring that
158
+ *
92
+ # they explicitly call unallocated_encoding() for cases that must UNDEF
159
+ * You should have received a copy of the GNU General Public License
93
+ # (eg "using a new shift insn on a v8.1M CPU without MVE"), and letting
160
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
94
+ # the rest fall through (where ORR_rrri and MOV_rxri will end up
161
+ */
95
+ # handling them as r13 and r15 accesses with the same semantics as A32).
162
+
96
+ [
163
+#include "qemu/osdep.h"
97
+ LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
164
+#include "qapi/error.h"
98
+ LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
165
+#include "qemu/error-report.h"
99
+ ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
166
+#include "qemu/bswap.h"
100
+
167
+#include "qemu/module.h"
101
+ UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
168
+#include "qemu/units.h"
102
+ URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
169
+#include "hw/qdev-core.h"
103
+ SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
170
+#include "hw/sysbus.h"
104
+ SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
171
+#include "hw/char/serial.h"
105
+ ]
172
+#include "hw/misc/unimp.h"
106
+
173
+#include "hw/usb/hcd-ehci.h"
107
MOV_rxri 1110101 0010 . 1111 0 ... .... .... .... @s_rxr_shi
174
+#include "hw/loader.h"
108
ORR_rrri 1110101 0010 . .... 0 ... .... .... .... @s_rrr_shi
175
+#include "sysemu/sysemu.h"
109
}
176
+#include "hw/arm/allwinner-r40.h"
110
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
177
+
111
index XXXXXXX..XXXXXXX 100644
178
+/* Memory map */
112
--- a/target/arm/mve_helper.c
179
+const hwaddr allwinner_r40_memmap[] = {
113
+++ b/target/arm/mve_helper.c
180
+ [AW_R40_DEV_SRAM_A1] = 0x00000000,
114
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
181
+ [AW_R40_DEV_SRAM_A2] = 0x00004000,
115
mve_advance_vpt(env);
182
+ [AW_R40_DEV_SRAM_A3] = 0x00008000,
116
return rdm;
183
+ [AW_R40_DEV_SRAM_A4] = 0x0000b400,
117
}
184
+ [AW_R40_DEV_MMC0] = 0x01c0f000,
118
+
185
+ [AW_R40_DEV_MMC1] = 0x01c10000,
119
+uint64_t HELPER(mve_sqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
186
+ [AW_R40_DEV_MMC2] = 0x01c11000,
187
+ [AW_R40_DEV_MMC3] = 0x01c12000,
188
+ [AW_R40_DEV_PIT] = 0x01c20c00,
189
+ [AW_R40_DEV_UART0] = 0x01c28000,
190
+ [AW_R40_DEV_GIC_DIST] = 0x01c81000,
191
+ [AW_R40_DEV_GIC_CPU] = 0x01c82000,
192
+ [AW_R40_DEV_GIC_HYP] = 0x01c84000,
193
+ [AW_R40_DEV_GIC_VCPU] = 0x01c86000,
194
+ [AW_R40_DEV_SDRAM] = 0x40000000
195
+};
196
+
197
+/* List of unimplemented devices */
198
+struct AwR40Unimplemented {
199
+ const char *device_name;
200
+ hwaddr base;
201
+ hwaddr size;
202
+};
203
+
204
+static struct AwR40Unimplemented r40_unimplemented[] = {
205
+ { "d-engine", 0x01000000, 4 * MiB },
206
+ { "d-inter", 0x01400000, 128 * KiB },
207
+ { "sram-c", 0x01c00000, 4 * KiB },
208
+ { "dma", 0x01c02000, 4 * KiB },
209
+ { "nfdc", 0x01c03000, 4 * KiB },
210
+ { "ts", 0x01c04000, 4 * KiB },
211
+ { "spi0", 0x01c05000, 4 * KiB },
212
+ { "spi1", 0x01c06000, 4 * KiB },
213
+ { "cs0", 0x01c09000, 4 * KiB },
214
+ { "keymem", 0x01c0a000, 4 * KiB },
215
+ { "emac", 0x01c0b000, 4 * KiB },
216
+ { "usb0-otg", 0x01c13000, 4 * KiB },
217
+ { "usb0-host", 0x01c14000, 4 * KiB },
218
+ { "crypto", 0x01c15000, 4 * KiB },
219
+ { "spi2", 0x01c17000, 4 * KiB },
220
+ { "sata", 0x01c18000, 4 * KiB },
221
+ { "usb1-host", 0x01c19000, 4 * KiB },
222
+ { "sid", 0x01c1b000, 4 * KiB },
223
+ { "usb2-host", 0x01c1c000, 4 * KiB },
224
+ { "cs1", 0x01c1d000, 4 * KiB },
225
+ { "spi3", 0x01c1f000, 4 * KiB },
226
+ { "ccu", 0x01c20000, 1 * KiB },
227
+ { "rtc", 0x01c20400, 1 * KiB },
228
+ { "pio", 0x01c20800, 1 * KiB },
229
+ { "owa", 0x01c21000, 1 * KiB },
230
+ { "ac97", 0x01c21400, 1 * KiB },
231
+ { "cir0", 0x01c21800, 1 * KiB },
232
+ { "cir1", 0x01c21c00, 1 * KiB },
233
+ { "pcm0", 0x01c22000, 1 * KiB },
234
+ { "pcm1", 0x01c22400, 1 * KiB },
235
+ { "pcm2", 0x01c22800, 1 * KiB },
236
+ { "audio", 0x01c22c00, 1 * KiB },
237
+ { "keypad", 0x01c23000, 1 * KiB },
238
+ { "pwm", 0x01c23400, 1 * KiB },
239
+ { "keyadc", 0x01c24400, 1 * KiB },
240
+ { "ths", 0x01c24c00, 1 * KiB },
241
+ { "rtp", 0x01c25000, 1 * KiB },
242
+ { "pmu", 0x01c25400, 1 * KiB },
243
+ { "cpu-cfg", 0x01c25c00, 1 * KiB },
244
+ { "uart0", 0x01c28000, 1 * KiB },
245
+ { "uart1", 0x01c28400, 1 * KiB },
246
+ { "uart2", 0x01c28800, 1 * KiB },
247
+ { "uart3", 0x01c28c00, 1 * KiB },
248
+ { "uart4", 0x01c29000, 1 * KiB },
249
+ { "uart5", 0x01c29400, 1 * KiB },
250
+ { "uart6", 0x01c29800, 1 * KiB },
251
+ { "uart7", 0x01c29c00, 1 * KiB },
252
+ { "ps20", 0x01c2a000, 1 * KiB },
253
+ { "ps21", 0x01c2a400, 1 * KiB },
254
+ { "twi0", 0x01c2ac00, 1 * KiB },
255
+ { "twi1", 0x01c2b000, 1 * KiB },
256
+ { "twi2", 0x01c2b400, 1 * KiB },
257
+ { "twi3", 0x01c2b800, 1 * KiB },
258
+ { "twi4", 0x01c2c000, 1 * KiB },
259
+ { "scr", 0x01c2c400, 1 * KiB },
260
+ { "tvd-top", 0x01c30000, 4 * KiB },
261
+ { "tvd0", 0x01c31000, 4 * KiB },
262
+ { "tvd1", 0x01c32000, 4 * KiB },
263
+ { "tvd2", 0x01c33000, 4 * KiB },
264
+ { "tvd3", 0x01c34000, 4 * KiB },
265
+ { "gpu", 0x01c40000, 64 * KiB },
266
+ { "gmac", 0x01c50000, 64 * KiB },
267
+ { "hstmr", 0x01c60000, 4 * KiB },
268
+ { "dram-com", 0x01c62000, 4 * KiB },
269
+ { "dram-ctl", 0x01c63000, 4 * KiB },
270
+ { "tcon-top", 0x01c70000, 4 * KiB },
271
+ { "lcd0", 0x01c71000, 4 * KiB },
272
+ { "lcd1", 0x01c72000, 4 * KiB },
273
+ { "tv0", 0x01c73000, 4 * KiB },
274
+ { "tv1", 0x01c74000, 4 * KiB },
275
+ { "tve-top", 0x01c90000, 16 * KiB },
276
+ { "tve0", 0x01c94000, 16 * KiB },
277
+ { "tve1", 0x01c98000, 16 * KiB },
278
+ { "mipi_dsi", 0x01ca0000, 4 * KiB },
279
+ { "mipi_dphy", 0x01ca1000, 4 * KiB },
280
+ { "ve", 0x01d00000, 1024 * KiB },
281
+ { "mp", 0x01e80000, 128 * KiB },
282
+ { "hdmi", 0x01ee0000, 128 * KiB },
283
+ { "prcm", 0x01f01400, 1 * KiB },
284
+ { "debug", 0x3f500000, 64 * KiB },
285
+ { "cpubist", 0x3f501000, 4 * KiB },
286
+ { "dcu", 0x3fff0000, 64 * KiB },
287
+ { "hstmr", 0x01c60000, 4 * KiB },
288
+ { "brom", 0xffff0000, 36 * KiB }
289
+};
290
+
291
+/* Per Processor Interrupts */
292
+enum {
293
+ AW_R40_GIC_PPI_MAINT = 9,
294
+ AW_R40_GIC_PPI_HYPTIMER = 10,
295
+ AW_R40_GIC_PPI_VIRTTIMER = 11,
296
+ AW_R40_GIC_PPI_SECTIMER = 13,
297
+ AW_R40_GIC_PPI_PHYSTIMER = 14
298
+};
299
+
300
+/* Shared Processor Interrupts */
301
+enum {
302
+ AW_R40_GIC_SPI_UART0 = 1,
303
+ AW_R40_GIC_SPI_TIMER0 = 22,
304
+ AW_R40_GIC_SPI_TIMER1 = 23,
305
+ AW_R40_GIC_SPI_MMC0 = 32,
306
+ AW_R40_GIC_SPI_MMC1 = 33,
307
+ AW_R40_GIC_SPI_MMC2 = 34,
308
+ AW_R40_GIC_SPI_MMC3 = 35,
309
+};
310
+
311
+/* Allwinner R40 general constants */
312
+enum {
313
+ AW_R40_GIC_NUM_SPI = 128
314
+};
315
+
316
+#define BOOT0_MAGIC "eGON.BT0"
317
+
318
+/* The low 8-bits of the 'boot_media' field in the SPL header */
319
+#define SUNXI_BOOTED_FROM_MMC0 0
320
+#define SUNXI_BOOTED_FROM_NAND 1
321
+#define SUNXI_BOOTED_FROM_MMC2 2
322
+#define SUNXI_BOOTED_FROM_SPI 3
323
+
324
+struct boot_file_head {
325
+ uint32_t b_instruction;
326
+ uint8_t magic[8];
327
+ uint32_t check_sum;
328
+ uint32_t length;
329
+ uint32_t pub_head_size;
330
+ uint32_t fel_script_address;
331
+ uint32_t fel_uEnv_length;
332
+ uint32_t dt_name_offset;
333
+ uint32_t dram_size;
334
+ uint32_t boot_media;
335
+ uint32_t string_pool[13];
336
+};
337
+
338
+bool allwinner_r40_bootrom_setup(AwR40State *s, BlockBackend *blk, int unit)
120
+{
339
+{
121
+ return do_sqrshl_d(n, (int8_t)shift, false, &env->QF);
340
+ const int64_t rom_size = 32 * KiB;
122
+}
341
+ g_autofree uint8_t *buffer = g_new0(uint8_t, rom_size);
123
+
342
+ struct boot_file_head *head = (struct boot_file_head *)buffer;
124
+uint64_t HELPER(mve_uqshll)(CPUARMState *env, uint64_t n, uint32_t shift)
343
+
125
+{
344
+ if (blk_pread(blk, 8 * KiB, rom_size, buffer, 0) < 0) {
126
+ return do_uqrshl_d(n, (int8_t)shift, false, &env->QF);
345
+ error_setg(&error_fatal, "%s: failed to read BlockBackend data",
127
+}
346
+ __func__);
128
diff --git a/target/arm/translate.c b/target/arm/translate.c
129
index XXXXXXX..XXXXXXX 100644
130
--- a/target/arm/translate.c
131
+++ b/target/arm/translate.c
132
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVT(DisasContext *s, arg_MOVW *a)
133
return true;
134
}
135
136
+/*
137
+ * v8.1M MVE wide-shifts
138
+ */
139
+static bool do_mve_shl_ri(DisasContext *s, arg_mve_shl_ri *a,
140
+ WideShiftImmFn *fn)
141
+{
142
+ TCGv_i64 rda;
143
+ TCGv_i32 rdalo, rdahi;
144
+
145
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
146
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
147
+ return false;
347
+ return false;
148
+ }
348
+ }
149
+ if (a->rdahi == 15) {
349
+
150
+ /* These are a different encoding (SQSHL/SRSHR/UQSHL/URSHR) */
350
+ /* we only check the magic string here. */
351
+ if (memcmp(head->magic, BOOT0_MAGIC, sizeof(head->magic))) {
151
+ return false;
352
+ return false;
152
+ }
353
+ }
153
+ if (!dc_isar_feature(aa32_mve, s) ||
354
+
154
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
355
+ /*
155
+ a->rdahi == 13) {
356
+ * Simulate the behavior of the bootROM, it will change the boot_media
156
+ /* RdaHi == 13 is UNPREDICTABLE; we choose to UNDEF */
357
+ * flag to indicate where the chip is booting from. R40 can boot from
157
+ unallocated_encoding(s);
358
+ * mmc0 or mmc2, the default value of boot_media is zero
158
+ return true;
359
+ * (SUNXI_BOOTED_FROM_MMC0), let's fix this flag when it is booting from
159
+ }
360
+ * the others.
160
+
361
+ */
161
+ if (a->shim == 0) {
362
+ if (unit == 2) {
162
+ a->shim = 32;
363
+ head->boot_media = cpu_to_le32(SUNXI_BOOTED_FROM_MMC2);
163
+ }
364
+ } else {
164
+
365
+ head->boot_media = cpu_to_le32(SUNXI_BOOTED_FROM_MMC0);
165
+ rda = tcg_temp_new_i64();
366
+ }
166
+ rdalo = load_reg(s, a->rdalo);
367
+
167
+ rdahi = load_reg(s, a->rdahi);
368
+ rom_add_blob("allwinner-r40.bootrom", buffer, rom_size,
168
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
369
+ rom_size, s->memmap[AW_R40_DEV_SRAM_A1],
169
+
370
+ NULL, NULL, NULL, NULL, false);
170
+ fn(rda, rda, a->shim);
171
+
172
+ tcg_gen_extrl_i64_i32(rdalo, rda);
173
+ tcg_gen_extrh_i64_i32(rdahi, rda);
174
+ store_reg(s, a->rdalo, rdalo);
175
+ store_reg(s, a->rdahi, rdahi);
176
+ tcg_temp_free_i64(rda);
177
+
178
+ return true;
371
+ return true;
179
+}
372
+}
180
+
373
+
181
+static bool trans_ASRL_ri(DisasContext *s, arg_mve_shl_ri *a)
374
+static void allwinner_r40_init(Object *obj)
182
+{
375
+{
183
+ return do_mve_shl_ri(s, a, tcg_gen_sari_i64);
376
+ static const char *mmc_names[AW_R40_NUM_MMCS] = {
377
+ "mmc0", "mmc1", "mmc2", "mmc3"
378
+ };
379
+ AwR40State *s = AW_R40(obj);
380
+
381
+ s->memmap = allwinner_r40_memmap;
382
+
383
+ for (int i = 0; i < AW_R40_NUM_CPUS; i++) {
384
+ object_initialize_child(obj, "cpu[*]", &s->cpus[i],
385
+ ARM_CPU_TYPE_NAME("cortex-a7"));
386
+ }
387
+
388
+ object_initialize_child(obj, "gic", &s->gic, TYPE_ARM_GIC);
389
+
390
+ object_initialize_child(obj, "timer", &s->timer, TYPE_AW_A10_PIT);
391
+ object_property_add_alias(obj, "clk0-freq", OBJECT(&s->timer),
392
+ "clk0-freq");
393
+ object_property_add_alias(obj, "clk1-freq", OBJECT(&s->timer),
394
+ "clk1-freq");
395
+
396
+ for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
397
+ object_initialize_child(obj, mmc_names[i], &s->mmc[i],
398
+ TYPE_AW_SDHOST_SUN5I);
399
+ }
184
+}
400
+}
185
+
401
+
186
+static bool trans_LSLL_ri(DisasContext *s, arg_mve_shl_ri *a)
402
+static void allwinner_r40_realize(DeviceState *dev, Error **errp)
187
+{
403
+{
188
+ return do_mve_shl_ri(s, a, tcg_gen_shli_i64);
404
+ AwR40State *s = AW_R40(dev);
405
+ unsigned i;
406
+
407
+ /* CPUs */
408
+ for (i = 0; i < AW_R40_NUM_CPUS; i++) {
409
+
410
+ /*
411
+ * Disable secondary CPUs. Guest EL3 firmware will start
412
+ * them via CPU reset control registers.
413
+ */
414
+ qdev_prop_set_bit(DEVICE(&s->cpus[i]), "start-powered-off",
415
+ i > 0);
416
+
417
+ /* All exception levels required */
418
+ qdev_prop_set_bit(DEVICE(&s->cpus[i]), "has_el3", true);
419
+ qdev_prop_set_bit(DEVICE(&s->cpus[i]), "has_el2", true);
420
+
421
+ /* Mark realized */
422
+ qdev_realize(DEVICE(&s->cpus[i]), NULL, &error_fatal);
423
+ }
424
+
425
+ /* Generic Interrupt Controller */
426
+ qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", AW_R40_GIC_NUM_SPI +
427
+ GIC_INTERNAL);
428
+ qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
429
+ qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", AW_R40_NUM_CPUS);
430
+ qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", false);
431
+ qdev_prop_set_bit(DEVICE(&s->gic), "has-virtualization-extensions", true);
432
+ sysbus_realize(SYS_BUS_DEVICE(&s->gic), &error_fatal);
433
+
434
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 0, s->memmap[AW_R40_DEV_GIC_DIST]);
435
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 1, s->memmap[AW_R40_DEV_GIC_CPU]);
436
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 2, s->memmap[AW_R40_DEV_GIC_HYP]);
437
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 3, s->memmap[AW_R40_DEV_GIC_VCPU]);
438
+
439
+ /*
440
+ * Wire the outputs from each CPU's generic timer and the GICv2
441
+ * maintenance interrupt signal to the appropriate GIC PPI inputs,
442
+ * and the GIC's IRQ/FIQ/VIRQ/VFIQ interrupt outputs to the CPU's inputs.
443
+ */
444
+ for (i = 0; i < AW_R40_NUM_CPUS; i++) {
445
+ DeviceState *cpudev = DEVICE(&s->cpus[i]);
446
+ int ppibase = AW_R40_GIC_NUM_SPI + i * GIC_INTERNAL + GIC_NR_SGIS;
447
+ int irq;
448
+ /*
449
+ * Mapping from the output timer irq lines from the CPU to the
450
+ * GIC PPI inputs used for this board.
451
+ */
452
+ const int timer_irq[] = {
453
+ [GTIMER_PHYS] = AW_R40_GIC_PPI_PHYSTIMER,
454
+ [GTIMER_VIRT] = AW_R40_GIC_PPI_VIRTTIMER,
455
+ [GTIMER_HYP] = AW_R40_GIC_PPI_HYPTIMER,
456
+ [GTIMER_SEC] = AW_R40_GIC_PPI_SECTIMER,
457
+ };
458
+
459
+ /* Connect CPU timer outputs to GIC PPI inputs */
460
+ for (irq = 0; irq < ARRAY_SIZE(timer_irq); irq++) {
461
+ qdev_connect_gpio_out(cpudev, irq,
462
+ qdev_get_gpio_in(DEVICE(&s->gic),
463
+ ppibase + timer_irq[irq]));
464
+ }
465
+
466
+ /* Connect GIC outputs to CPU interrupt inputs */
467
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
468
+ qdev_get_gpio_in(cpudev, ARM_CPU_IRQ));
469
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + AW_R40_NUM_CPUS,
470
+ qdev_get_gpio_in(cpudev, ARM_CPU_FIQ));
471
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + (2 * AW_R40_NUM_CPUS),
472
+ qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
473
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + (3 * AW_R40_NUM_CPUS),
474
+ qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
475
+
476
+ /* GIC maintenance signal */
477
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + (4 * AW_R40_NUM_CPUS),
478
+ qdev_get_gpio_in(DEVICE(&s->gic),
479
+ ppibase + AW_R40_GIC_PPI_MAINT));
480
+ }
481
+
482
+ /* Timer */
483
+ sysbus_realize(SYS_BUS_DEVICE(&s->timer), &error_fatal);
484
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->timer), 0, s->memmap[AW_R40_DEV_PIT]);
485
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer), 0,
486
+ qdev_get_gpio_in(DEVICE(&s->gic),
487
+ AW_R40_GIC_SPI_TIMER0));
488
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer), 1,
489
+ qdev_get_gpio_in(DEVICE(&s->gic),
490
+ AW_R40_GIC_SPI_TIMER1));
491
+
492
+ /* SRAM */
493
+ memory_region_init_ram(&s->sram_a1, OBJECT(dev), "sram A1",
494
+ 16 * KiB, &error_abort);
495
+ memory_region_init_ram(&s->sram_a2, OBJECT(dev), "sram A2",
496
+ 16 * KiB, &error_abort);
497
+ memory_region_init_ram(&s->sram_a3, OBJECT(dev), "sram A3",
498
+ 13 * KiB, &error_abort);
499
+ memory_region_init_ram(&s->sram_a4, OBJECT(dev), "sram A4",
500
+ 3 * KiB, &error_abort);
501
+ memory_region_add_subregion(get_system_memory(),
502
+ s->memmap[AW_R40_DEV_SRAM_A1], &s->sram_a1);
503
+ memory_region_add_subregion(get_system_memory(),
504
+ s->memmap[AW_R40_DEV_SRAM_A2], &s->sram_a2);
505
+ memory_region_add_subregion(get_system_memory(),
506
+ s->memmap[AW_R40_DEV_SRAM_A3], &s->sram_a3);
507
+ memory_region_add_subregion(get_system_memory(),
508
+ s->memmap[AW_R40_DEV_SRAM_A4], &s->sram_a4);
509
+
510
+ /* SD/MMC */
511
+ for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
512
+ qemu_irq irq = qdev_get_gpio_in(DEVICE(&s->gic),
513
+ AW_R40_GIC_SPI_MMC0 + i);
514
+ const hwaddr addr = s->memmap[AW_R40_DEV_MMC0 + i];
515
+
516
+ object_property_set_link(OBJECT(&s->mmc[i]), "dma-memory",
517
+ OBJECT(get_system_memory()), &error_fatal);
518
+ sysbus_realize(SYS_BUS_DEVICE(&s->mmc[i]), &error_fatal);
519
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->mmc[i]), 0, addr);
520
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc[i]), 0, irq);
521
+ }
522
+
523
+ /* UART0. For future clocktree API: All UARTS are connected to APB2_CLK. */
524
+ serial_mm_init(get_system_memory(), s->memmap[AW_R40_DEV_UART0], 2,
525
+ qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_UART0),
526
+ 115200, serial_hd(0), DEVICE_NATIVE_ENDIAN);
527
+
528
+ /* Unimplemented devices */
529
+ for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
530
+ create_unimplemented_device(r40_unimplemented[i].device_name,
531
+ r40_unimplemented[i].base,
532
+ r40_unimplemented[i].size);
533
+ }
189
+}
534
+}
190
+
535
+
191
+static bool trans_LSRL_ri(DisasContext *s, arg_mve_shl_ri *a)
536
+static void allwinner_r40_class_init(ObjectClass *oc, void *data)
192
+{
537
+{
193
+ return do_mve_shl_ri(s, a, tcg_gen_shri_i64);
538
+ DeviceClass *dc = DEVICE_CLASS(oc);
539
+
540
+ dc->realize = allwinner_r40_realize;
541
+ /* Reason: uses serial_hd() in realize function */
542
+ dc->user_creatable = false;
194
+}
543
+}
195
+
544
+
196
+static void gen_mve_sqshll(TCGv_i64 r, TCGv_i64 n, int64_t shift)
545
+static const TypeInfo allwinner_r40_type_info = {
546
+ .name = TYPE_AW_R40,
547
+ .parent = TYPE_DEVICE,
548
+ .instance_size = sizeof(AwR40State),
549
+ .instance_init = allwinner_r40_init,
550
+ .class_init = allwinner_r40_class_init,
551
+};
552
+
553
+static void allwinner_r40_register_types(void)
197
+{
554
+{
198
+ gen_helper_mve_sqshll(r, cpu_env, n, tcg_constant_i32(shift));
555
+ type_register_static(&allwinner_r40_type_info);
199
+}
556
+}
200
+
557
+
201
+static bool trans_SQSHLL_ri(DisasContext *s, arg_mve_shl_ri *a)
558
+type_init(allwinner_r40_register_types)
559
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
560
new file mode 100644
561
index XXXXXXX..XXXXXXX
562
--- /dev/null
563
+++ b/hw/arm/bananapi_m2u.c
564
@@ -XXX,XX +XXX,XX @@
565
+/*
566
+ * Bananapi M2U emulation
567
+ *
568
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
569
+ *
570
+ * This program is free software: you can redistribute it and/or modify
571
+ * it under the terms of the GNU General Public License as published by
572
+ * the Free Software Foundation, either version 2 of the License, or
573
+ * (at your option) any later version.
574
+ *
575
+ * This program is distributed in the hope that it will be useful,
576
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
577
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
578
+ * GNU General Public License for more details.
579
+ *
580
+ * You should have received a copy of the GNU General Public License
581
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
582
+ */
583
+
584
+#include "qemu/osdep.h"
585
+#include "qemu/units.h"
586
+#include "exec/address-spaces.h"
587
+#include "qapi/error.h"
588
+#include "qemu/error-report.h"
589
+#include "hw/boards.h"
590
+#include "hw/qdev-properties.h"
591
+#include "hw/arm/allwinner-r40.h"
592
+
593
+static struct arm_boot_info bpim2u_binfo;
594
+
595
+/*
596
+ * R40 can boot from mmc0 and mmc2, and bpim2u has two mmc interface, one is
597
+ * connected to sdcard and another mount an emmc media.
598
+ * Attach the mmc driver and try loading bootloader.
599
+ */
600
+static void mmc_attach_drive(AwR40State *s, AwSdHostState *mmc, int unit,
601
+ bool load_bootroom, bool *bootroom_loaded)
202
+{
602
+{
203
+ return do_mve_shl_ri(s, a, gen_mve_sqshll);
603
+ DriveInfo *di = drive_get(IF_SD, 0, unit);
604
+ BlockBackend *blk = di ? blk_by_legacy_dinfo(di) : NULL;
605
+ BusState *bus;
606
+ DeviceState *carddev;
607
+
608
+ bus = qdev_get_child_bus(DEVICE(mmc), "sd-bus");
609
+ if (bus == NULL) {
610
+ error_report("No SD bus found in SOC object");
611
+ exit(1);
612
+ }
613
+
614
+ carddev = qdev_new(TYPE_SD_CARD);
615
+ qdev_prop_set_drive_err(carddev, "drive", blk, &error_fatal);
616
+ qdev_realize_and_unref(carddev, bus, &error_fatal);
617
+
618
+ if (load_bootroom && blk && blk_is_available(blk)) {
619
+ /* Use Boot ROM to copy data from SD card to SRAM */
620
+ *bootroom_loaded = allwinner_r40_bootrom_setup(s, blk, unit);
621
+ }
204
+}
622
+}
205
+
623
+
206
+static void gen_mve_uqshll(TCGv_i64 r, TCGv_i64 n, int64_t shift)
624
+static void bpim2u_init(MachineState *machine)
207
+{
625
+{
208
+ gen_helper_mve_uqshll(r, cpu_env, n, tcg_constant_i32(shift));
626
+ bool bootroom_loaded = false;
627
+ AwR40State *r40;
628
+
629
+ /* BIOS is not supported by this board */
630
+ if (machine->firmware) {
631
+ error_report("BIOS not supported for this machine");
632
+ exit(1);
633
+ }
634
+
635
+ /* Only allow Cortex-A7 for this board */
636
+ if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a7")) != 0) {
637
+ error_report("This board can only be used with cortex-a7 CPU");
638
+ exit(1);
639
+ }
640
+
641
+ r40 = AW_R40(object_new(TYPE_AW_R40));
642
+ object_property_add_child(OBJECT(machine), "soc", OBJECT(r40));
643
+ object_unref(OBJECT(r40));
644
+
645
+ /* Setup timer properties */
646
+ object_property_set_int(OBJECT(r40), "clk0-freq", 32768, &error_abort);
647
+ object_property_set_int(OBJECT(r40), "clk1-freq", 24 * 1000 * 1000,
648
+ &error_abort);
649
+
650
+ /* Mark R40 object realized */
651
+ qdev_realize(DEVICE(r40), NULL, &error_abort);
652
+
653
+ /*
654
+ * Plug in SD card and try load bootrom, R40 has 4 mmc controllers but can
655
+ * only booting from mmc0 and mmc2.
656
+ */
657
+ for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
658
+ switch (i) {
659
+ case 0:
660
+ case 2:
661
+ mmc_attach_drive(r40, &r40->mmc[i], i,
662
+ !machine->kernel_filename && !bootroom_loaded,
663
+ &bootroom_loaded);
664
+ break;
665
+ default:
666
+ mmc_attach_drive(r40, &r40->mmc[i], i, false, NULL);
667
+ break;
668
+ }
669
+ }
670
+
671
+ /* SDRAM */
672
+ memory_region_add_subregion(get_system_memory(),
673
+ r40->memmap[AW_R40_DEV_SDRAM], machine->ram);
674
+
675
+ bpim2u_binfo.loader_start = r40->memmap[AW_R40_DEV_SDRAM];
676
+ bpim2u_binfo.ram_size = machine->ram_size;
677
+ bpim2u_binfo.psci_conduit = QEMU_PSCI_CONDUIT_SMC;
678
+ arm_load_kernel(ARM_CPU(first_cpu), machine, &bpim2u_binfo);
209
+}
679
+}
210
+
680
+
211
+static bool trans_UQSHLL_ri(DisasContext *s, arg_mve_shl_ri *a)
681
+static void bpim2u_machine_init(MachineClass *mc)
212
+{
682
+{
213
+ return do_mve_shl_ri(s, a, gen_mve_uqshll);
683
+ mc->desc = "Bananapi M2U (Cortex-A7)";
684
+ mc->init = bpim2u_init;
685
+ mc->min_cpus = AW_R40_NUM_CPUS;
686
+ mc->max_cpus = AW_R40_NUM_CPUS;
687
+ mc->default_cpus = AW_R40_NUM_CPUS;
688
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a7");
689
+ mc->default_ram_size = 1 * GiB;
690
+ mc->default_ram_id = "bpim2u.ram";
214
+}
691
+}
215
+
692
+
216
+static bool trans_SRSHRL_ri(DisasContext *s, arg_mve_shl_ri *a)
693
+DEFINE_MACHINE("bpim2u", bpim2u_machine_init)
217
+{
694
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
218
+ return do_mve_shl_ri(s, a, gen_srshr64_i64);
695
index XXXXXXX..XXXXXXX 100644
219
+}
696
--- a/hw/arm/Kconfig
220
+
697
+++ b/hw/arm/Kconfig
221
+static bool trans_URSHRL_ri(DisasContext *s, arg_mve_shl_ri *a)
698
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_H3
222
+{
699
select USB_EHCI_SYSBUS
223
+ return do_mve_shl_ri(s, a, gen_urshr64_i64);
700
select SD
224
+}
701
225
+
702
+config ALLWINNER_R40
226
/*
703
+ bool
227
* Multiply and multiply accumulate
704
+ default y if TCG && ARM
228
*/
705
+ select ALLWINNER_A10_PIT
706
+ select SERIAL
707
+ select ARM_TIMER
708
+ select ARM_GIC
709
+ select UNIMP
710
+ select SD
711
+
712
config RASPI
713
bool
714
default y
715
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
716
index XXXXXXX..XXXXXXX 100644
717
--- a/hw/arm/meson.build
718
+++ b/hw/arm/meson.build
719
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'CONFIG_OMAP', if_true: files('omap1.c', 'omap2.c'))
720
arm_ss.add(when: 'CONFIG_STRONGARM', if_true: files('strongarm.c'))
721
arm_ss.add(when: 'CONFIG_ALLWINNER_A10', if_true: files('allwinner-a10.c', 'cubieboard.c'))
722
arm_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3.c', 'orangepi.c'))
723
+arm_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40.c', 'bananapi_m2u.c'))
724
arm_ss.add(when: 'CONFIG_RASPI', if_true: files('bcm2836.c', 'raspi.c'))
725
arm_ss.add(when: 'CONFIG_STM32F100_SOC', if_true: files('stm32f100_soc.c'))
726
arm_ss.add(when: 'CONFIG_STM32F205_SOC', if_true: files('stm32f205_soc.c'))
--
2.20.1

--
2.34.1
New patch

From: qianfan Zhao <qianfanguijin@163.com>

The CCU provides the registers to program the PLLs and controls most of
the clock generation, division, distribution, synchronization and
gating.

This commit adds support for the Clock Control Unit, which emulates a
simple read/write register interface.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h      |   2 +
 include/hw/misc/allwinner-r40-ccu.h |  65 +++++++++
 hw/arm/allwinner-r40.c              |   8 +-
 hw/misc/allwinner-r40-ccu.c         | 209 ++++++++++++++++++++++++++++
 hw/misc/meson.build                 |   1 +
 5 files changed, 284 insertions(+), 1 deletion(-)
 create mode 100644 include/hw/misc/allwinner-r40-ccu.h
 create mode 100644 hw/misc/allwinner-r40-ccu.c
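A note on guest-visible behaviour: the write handler in this patch models PLL
lock as immediate, setting REG_PLL_LOCK in the same write that sets
REG_PLL_ENABLE, so a guest-side enable-and-poll routine returns on its first
read. The sketch below is illustrative only, built from the offsets and bit
definitions added by this patch; readl()/writel() are hypothetical guest MMIO
accessors, not something provided by this patch or by QEMU.

#include <stdint.h>

/* Hypothetical guest MMIO accessors. */
extern uint32_t readl(uint32_t addr);
extern void writel(uint32_t val, uint32_t addr);

#define R40_CCU_BASE      0x01c20000u              /* AW_R40_DEV_CCU */
#define PLL_PERIPH0_CTRL  (R40_CCU_BASE + 0x0028)  /* REG_PLL_PERIPH0_CTRL */
#define PLL_ENABLE        (1u << 31)               /* REG_PLL_ENABLE */
#define PLL_LOCK          (1u << 28)               /* REG_PLL_LOCK */

static void pll_periph0_enable(void)
{
    /* Set the enable bit in the PLL control register. */
    writel(readl(PLL_PERIPH0_CTRL) | PLL_ENABLE, PLL_PERIPH0_CTRL);

    /*
     * Real hardware spins here until the PLL locks; the QEMU model
     * reports the lock bit as soon as the enable bit has been written.
     */
    while (!(readl(PLL_PERIPH0_CTRL) & PLL_LOCK)) {
        /* busy-wait */
    }
}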
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
24
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/allwinner-r40.h
26
+++ b/include/hw/arm/allwinner-r40.h
27
@@ -XXX,XX +XXX,XX @@
28
#include "hw/timer/allwinner-a10-pit.h"
29
#include "hw/intc/arm_gic.h"
30
#include "hw/sd/allwinner-sdhost.h"
31
+#include "hw/misc/allwinner-r40-ccu.h"
32
#include "target/arm/cpu.h"
33
#include "sysemu/block-backend.h"
34
35
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
36
const hwaddr *memmap;
37
AwA10PITState timer;
38
AwSdHostState mmc[AW_R40_NUM_MMCS];
39
+ AwR40ClockCtlState ccu;
40
GICState gic;
41
MemoryRegion sram_a1;
42
MemoryRegion sram_a2;
43
diff --git a/include/hw/misc/allwinner-r40-ccu.h b/include/hw/misc/allwinner-r40-ccu.h
44
new file mode 100644
45
index XXXXXXX..XXXXXXX
46
--- /dev/null
47
+++ b/include/hw/misc/allwinner-r40-ccu.h
48
@@ -XXX,XX +XXX,XX @@
49
+/*
50
+ * Allwinner R40 Clock Control Unit emulation
51
+ *
52
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
53
+ *
54
+ * This program is free software: you can redistribute it and/or modify
55
+ * it under the terms of the GNU General Public License as published by
56
+ * the Free Software Foundation, either version 2 of the License, or
57
+ * (at your option) any later version.
58
+ *
59
+ * This program is distributed in the hope that it will be useful,
60
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
61
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
62
+ * GNU General Public License for more details.
63
+ *
64
+ * You should have received a copy of the GNU General Public License
65
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
66
+ */
67
+
68
+#ifndef HW_MISC_ALLWINNER_R40_CCU_H
69
+#define HW_MISC_ALLWINNER_R40_CCU_H
70
+
71
+#include "qom/object.h"
72
+#include "hw/sysbus.h"
73
+
74
+/**
75
+ * @name Constants
76
+ * @{
77
+ */
78
+
79
+/** Size of register I/O address space used by CCU device */
80
+#define AW_R40_CCU_IOSIZE (0x400)
81
+
82
+/** Total number of known registers */
83
+#define AW_R40_CCU_REGS_NUM (AW_R40_CCU_IOSIZE / sizeof(uint32_t))
84
+
85
+/** @} */
86
+
87
+/**
88
+ * @name Object model
89
+ * @{
90
+ */
91
+
92
+#define TYPE_AW_R40_CCU "allwinner-r40-ccu"
93
+OBJECT_DECLARE_SIMPLE_TYPE(AwR40ClockCtlState, AW_R40_CCU)
94
+
95
+/** @} */
96
+
97
+/**
98
+ * Allwinner R40 CCU object instance state.
99
+ */
100
+struct AwR40ClockCtlState {
101
+ /*< private >*/
102
+ SysBusDevice parent_obj;
103
+ /*< public >*/
104
+
105
+ /** Maps I/O registers in physical memory */
106
+ MemoryRegion iomem;
107
+
108
+ /** Array of hardware registers */
109
+ uint32_t regs[AW_R40_CCU_REGS_NUM];
110
+
111
+};
112
+
113
+#endif /* HW_MISC_ALLWINNER_R40_CCU_H */
114
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/hw/arm/allwinner-r40.c
117
+++ b/hw/arm/allwinner-r40.c
118
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
119
[AW_R40_DEV_MMC1] = 0x01c10000,
120
[AW_R40_DEV_MMC2] = 0x01c11000,
121
[AW_R40_DEV_MMC3] = 0x01c12000,
122
+ [AW_R40_DEV_CCU] = 0x01c20000,
123
[AW_R40_DEV_PIT] = 0x01c20c00,
124
[AW_R40_DEV_UART0] = 0x01c28000,
125
[AW_R40_DEV_GIC_DIST] = 0x01c81000,
126
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
127
{ "usb2-host", 0x01c1c000, 4 * KiB },
128
{ "cs1", 0x01c1d000, 4 * KiB },
129
{ "spi3", 0x01c1f000, 4 * KiB },
130
- { "ccu", 0x01c20000, 1 * KiB },
131
{ "rtc", 0x01c20400, 1 * KiB },
132
{ "pio", 0x01c20800, 1 * KiB },
133
{ "owa", 0x01c21000, 1 * KiB },
134
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
135
object_property_add_alias(obj, "clk1-freq", OBJECT(&s->timer),
136
"clk1-freq");
137
138
+ object_initialize_child(obj, "ccu", &s->ccu, TYPE_AW_R40_CCU);
139
+
140
for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
141
object_initialize_child(obj, mmc_names[i], &s->mmc[i],
142
TYPE_AW_SDHOST_SUN5I);
143
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
144
memory_region_add_subregion(get_system_memory(),
145
s->memmap[AW_R40_DEV_SRAM_A4], &s->sram_a4);
146
147
+ /* Clock Control Unit */
148
+ sysbus_realize(SYS_BUS_DEVICE(&s->ccu), &error_fatal);
149
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->ccu), 0, s->memmap[AW_R40_DEV_CCU]);
150
+
151
/* SD/MMC */
152
for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
153
qemu_irq irq = qdev_get_gpio_in(DEVICE(&s->gic),
154
diff --git a/hw/misc/allwinner-r40-ccu.c b/hw/misc/allwinner-r40-ccu.c
155
new file mode 100644
156
index XXXXXXX..XXXXXXX
157
--- /dev/null
158
+++ b/hw/misc/allwinner-r40-ccu.c
159
@@ -XXX,XX +XXX,XX @@
160
+/*
161
+ * Allwinner R40 Clock Control Unit emulation
162
+ *
163
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
164
+ *
165
+ * This program is free software: you can redistribute it and/or modify
166
+ * it under the terms of the GNU General Public License as published by
167
+ * the Free Software Foundation, either version 2 of the License, or
168
+ * (at your option) any later version.
169
+ *
170
+ * This program is distributed in the hope that it will be useful,
171
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
172
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
173
+ * GNU General Public License for more details.
174
+ *
175
+ * You should have received a copy of the GNU General Public License
176
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
177
+ */
178
+
179
+#include "qemu/osdep.h"
180
+#include "qemu/units.h"
181
+#include "hw/sysbus.h"
182
+#include "migration/vmstate.h"
183
+#include "qemu/log.h"
184
+#include "qemu/module.h"
185
+#include "hw/misc/allwinner-r40-ccu.h"
186
+
187
+/* CCU register offsets */
188
+enum {
189
+ REG_PLL_CPUX_CTRL = 0x0000,
190
+ REG_PLL_AUDIO_CTRL = 0x0008,
191
+ REG_PLL_VIDEO0_CTRL = 0x0010,
192
+ REG_PLL_VE_CTRL = 0x0018,
193
+ REG_PLL_DDR0_CTRL = 0x0020,
194
+ REG_PLL_PERIPH0_CTRL = 0x0028,
195
+ REG_PLL_PERIPH1_CTRL = 0x002c,
196
+ REG_PLL_VIDEO1_CTRL = 0x0030,
197
+ REG_PLL_SATA_CTRL = 0x0034,
198
+ REG_PLL_GPU_CTRL = 0x0038,
199
+ REG_PLL_MIPI_CTRL = 0x0040,
200
+ REG_PLL_DE_CTRL = 0x0048,
201
+ REG_PLL_DDR1_CTRL = 0x004c,
202
+ REG_AHB1_APB1_CFG = 0x0054,
203
+ REG_APB2_CFG = 0x0058,
204
+ REG_MMC0_CLK = 0x0088,
205
+ REG_MMC1_CLK = 0x008c,
206
+ REG_MMC2_CLK = 0x0090,
207
+ REG_MMC3_CLK = 0x0094,
208
+ REG_USBPHY_CFG = 0x00cc,
209
+ REG_PLL_DDR_AUX = 0x00f0,
210
+ REG_DRAM_CFG = 0x00f4,
211
+ REG_PLL_DDR1_CFG = 0x00f8,
212
+ REG_DRAM_CLK_GATING = 0x0100,
213
+ REG_GMAC_CLK = 0x0164,
214
+ REG_SYS_32K_CLK = 0x0310,
215
+ REG_PLL_LOCK_CTRL = 0x0320,
216
+};
217
+
218
+#define REG_INDEX(offset) (offset / sizeof(uint32_t))
219
+
220
+/* CCU register flags */
221
+enum {
222
+ REG_PLL_ENABLE = (1 << 31),
223
+ REG_PLL_LOCK = (1 << 28),
224
+};
225
+
226
+static uint64_t allwinner_r40_ccu_read(void *opaque, hwaddr offset,
227
+ unsigned size)
228
+{
229
+ const AwR40ClockCtlState *s = AW_R40_CCU(opaque);
230
+ const uint32_t idx = REG_INDEX(offset);
231
+
232
+ switch (offset) {
233
+ case 0x324 ... AW_R40_CCU_IOSIZE:
234
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
235
+ __func__, (uint32_t)offset);
236
+ return 0;
237
+ }
238
+
239
+ return s->regs[idx];
240
+}
241
+
242
+static void allwinner_r40_ccu_write(void *opaque, hwaddr offset,
243
+ uint64_t val, unsigned size)
244
+{
245
+ AwR40ClockCtlState *s = AW_R40_CCU(opaque);
246
+
247
+ switch (offset) {
248
+ case REG_DRAM_CFG: /* DRAM Configuration(for DDR0) */
249
+ /* bit16: SDRCLK_UPD (SDRCLK configuration 0 update) */
250
+ val &= ~(1 << 16);
251
+ break;
252
+ case REG_PLL_DDR1_CTRL: /* DDR1 Control register */
253
+ /* bit30: SDRPLL_UPD */
254
+ val &= ~(1 << 30);
255
+ if (val & REG_PLL_ENABLE) {
256
+ val |= REG_PLL_LOCK;
257
+ }
258
+ break;
259
+ case REG_PLL_CPUX_CTRL:
260
+ case REG_PLL_AUDIO_CTRL:
261
+ case REG_PLL_VE_CTRL:
262
+ case REG_PLL_VIDEO0_CTRL:
263
+ case REG_PLL_DDR0_CTRL:
264
+ case REG_PLL_PERIPH0_CTRL:
265
+ case REG_PLL_PERIPH1_CTRL:
266
+ case REG_PLL_VIDEO1_CTRL:
267
+ case REG_PLL_SATA_CTRL:
268
+ case REG_PLL_GPU_CTRL:
269
+ case REG_PLL_MIPI_CTRL:
270
+ case REG_PLL_DE_CTRL:
271
+ if (val & REG_PLL_ENABLE) {
272
+ val |= REG_PLL_LOCK;
273
+ }
274
+ break;
275
+ case 0x324 ... AW_R40_CCU_IOSIZE:
276
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
277
+ __func__, (uint32_t)offset);
278
+ break;
279
+ default:
280
+ qemu_log_mask(LOG_UNIMP, "%s: unimplemented write offset 0x%04x\n",
281
+ __func__, (uint32_t)offset);
282
+ break;
283
+ }
284
+
285
+ s->regs[REG_INDEX(offset)] = (uint32_t) val;
286
+}
287
+
288
+static const MemoryRegionOps allwinner_r40_ccu_ops = {
289
+ .read = allwinner_r40_ccu_read,
290
+ .write = allwinner_r40_ccu_write,
291
+ .endianness = DEVICE_NATIVE_ENDIAN,
292
+ .valid = {
293
+ .min_access_size = 4,
294
+ .max_access_size = 4,
295
+ },
296
+ .impl.min_access_size = 4,
297
+};
298
+
299
+static void allwinner_r40_ccu_reset(DeviceState *dev)
300
+{
301
+ AwR40ClockCtlState *s = AW_R40_CCU(dev);
302
+
303
+ memset(s->regs, 0, sizeof(s->regs));
304
+
305
+ /* Set default values for registers */
306
+ s->regs[REG_INDEX(REG_PLL_CPUX_CTRL)] = 0x00001000;
307
+ s->regs[REG_INDEX(REG_PLL_AUDIO_CTRL)] = 0x00035514;
308
+ s->regs[REG_INDEX(REG_PLL_VIDEO0_CTRL)] = 0x03006207;
309
+ s->regs[REG_INDEX(REG_PLL_VE_CTRL)] = 0x03006207;
310
+ s->regs[REG_INDEX(REG_PLL_DDR0_CTRL)] = 0x00001000,
311
+ s->regs[REG_INDEX(REG_PLL_PERIPH0_CTRL)] = 0x00041811;
312
+ s->regs[REG_INDEX(REG_PLL_PERIPH1_CTRL)] = 0x00041811;
313
+ s->regs[REG_INDEX(REG_PLL_VIDEO1_CTRL)] = 0x03006207;
314
+ s->regs[REG_INDEX(REG_PLL_SATA_CTRL)] = 0x00001811;
315
+ s->regs[REG_INDEX(REG_PLL_GPU_CTRL)] = 0x03006207;
316
+ s->regs[REG_INDEX(REG_PLL_MIPI_CTRL)] = 0x00000515;
317
+ s->regs[REG_INDEX(REG_PLL_DE_CTRL)] = 0x03006207;
318
+ s->regs[REG_INDEX(REG_PLL_DDR1_CTRL)] = 0x00001800;
319
+ s->regs[REG_INDEX(REG_AHB1_APB1_CFG)] = 0x00001010;
320
+ s->regs[REG_INDEX(REG_APB2_CFG)] = 0x01000000;
321
+ s->regs[REG_INDEX(REG_PLL_DDR_AUX)] = 0x00000001;
322
+ s->regs[REG_INDEX(REG_PLL_DDR1_CFG)] = 0x0ccca000;
323
+ s->regs[REG_INDEX(REG_SYS_32K_CLK)] = 0x0000000f;
324
+}
325
+
326
+static void allwinner_r40_ccu_init(Object *obj)
327
+{
328
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
329
+ AwR40ClockCtlState *s = AW_R40_CCU(obj);
330
+
331
+ /* Memory mapping */
332
+ memory_region_init_io(&s->iomem, OBJECT(s), &allwinner_r40_ccu_ops, s,
333
+ TYPE_AW_R40_CCU, AW_R40_CCU_IOSIZE);
334
+ sysbus_init_mmio(sbd, &s->iomem);
335
+}
336
+
337
+static const VMStateDescription allwinner_r40_ccu_vmstate = {
338
+ .name = "allwinner-r40-ccu",
339
+ .version_id = 1,
340
+ .minimum_version_id = 1,
341
+ .fields = (VMStateField[]) {
342
+ VMSTATE_UINT32_ARRAY(regs, AwR40ClockCtlState, AW_R40_CCU_REGS_NUM),
343
+ VMSTATE_END_OF_LIST()
344
+ }
345
+};
346
+
347
+static void allwinner_r40_ccu_class_init(ObjectClass *klass, void *data)
348
+{
349
+ DeviceClass *dc = DEVICE_CLASS(klass);
350
+
351
+ dc->reset = allwinner_r40_ccu_reset;
352
+ dc->vmsd = &allwinner_r40_ccu_vmstate;
353
+}
354
+
355
+static const TypeInfo allwinner_r40_ccu_info = {
356
+ .name = TYPE_AW_R40_CCU,
357
+ .parent = TYPE_SYS_BUS_DEVICE,
358
+ .instance_init = allwinner_r40_ccu_init,
359
+ .instance_size = sizeof(AwR40ClockCtlState),
360
+ .class_init = allwinner_r40_ccu_class_init,
361
+};
362
+
363
+static void allwinner_r40_ccu_register(void)
364
+{
365
+ type_register_static(&allwinner_r40_ccu_info);
366
+}
367
+
368
+type_init(allwinner_r40_ccu_register)
369
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
370
index XXXXXXX..XXXXXXX 100644
371
--- a/hw/misc/meson.build
372
+++ b/hw/misc/meson.build
373
@@ -XXX,XX +XXX,XX @@ specific_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-cpucfg.c'
374
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-dramc.c'))
375
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-sysctrl.c'))
376
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-sid.c'))
377
+softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-ccu.c'))
378
softmmu_ss.add(when: 'CONFIG_AXP209_PMU', if_true: files('axp209.c'))
379
softmmu_ss.add(when: 'CONFIG_REALVIEW', if_true: files('arm_sysctl.c'))
380
softmmu_ss.add(when: 'CONFIG_NSERIES', if_true: files('cbus.c'))
--
2.34.1
New patch

From: qianfan Zhao <qianfanguijin@163.com>

The R40 has eight UARTs, supporting both 16450- and 16550-compatible
modes.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h |  8 ++++++++
 hw/arm/allwinner-r40.c         | 34 +++++++++++++++++++++++++++++++---
 2 files changed, 39 insertions(+), 3 deletions(-)
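Because the model registers each UART with serial_mm_init(..., addr, 2, ...),
the 16550 registers are spaced four bytes apart in the guest address map. For
illustration only, a bare-metal guest could drive UART0 roughly as in the
sketch below; readl()/writel() are hypothetical accessors, and the register
offsets are the standard 16550 ones rather than anything defined by this patch.

#include <stdint.h>

/* Hypothetical guest MMIO accessors. */
extern uint32_t readl(uint32_t addr);
extern void writel(uint32_t val, uint32_t addr);

#define R40_UART0_BASE  0x01c28000u                    /* AW_R40_DEV_UART0 */
#define UART0_REG(n)    (R40_UART0_BASE + ((n) << 2))  /* regshift == 2 */
#define UART0_THR       UART0_REG(0)  /* transmit holding register */
#define UART0_LSR       UART0_REG(5)  /* line status register */
#define UART_LSR_THRE   0x20u         /* transmitter holding register empty */

static void uart0_putc(char c)
{
    while (!(readl(UART0_LSR) & UART_LSR_THRE)) {
        /* wait until the previous character has been accepted */
    }
    writel((uint8_t)c, UART0_THR);
}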
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_DEV_CCU,
     AW_R40_DEV_PIT,
     AW_R40_DEV_UART0,
+    AW_R40_DEV_UART1,
+    AW_R40_DEV_UART2,
+    AW_R40_DEV_UART3,
+    AW_R40_DEV_UART4,
+    AW_R40_DEV_UART5,
+    AW_R40_DEV_UART6,
+    AW_R40_DEV_UART7,
     AW_R40_DEV_GIC_DIST,
     AW_R40_DEV_GIC_CPU,
     AW_R40_DEV_GIC_HYP,
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(AwR40State, AW_R40)
  * which are currently emulated by the R40 SoC code.
  */
 #define AW_R40_NUM_MMCS 4
+#define AW_R40_NUM_UARTS 8

 struct AwR40State {
     /*< private >*/
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
     [AW_R40_DEV_CCU] = 0x01c20000,
     [AW_R40_DEV_PIT] = 0x01c20c00,
     [AW_R40_DEV_UART0] = 0x01c28000,
+    [AW_R40_DEV_UART1] = 0x01c28400,
+    [AW_R40_DEV_UART2] = 0x01c28800,
+    [AW_R40_DEV_UART3] = 0x01c28c00,
+    [AW_R40_DEV_UART4] = 0x01c29000,
+    [AW_R40_DEV_UART5] = 0x01c29400,
+    [AW_R40_DEV_UART6] = 0x01c29800,
+    [AW_R40_DEV_UART7] = 0x01c29c00,
     [AW_R40_DEV_GIC_DIST] = 0x01c81000,
     [AW_R40_DEV_GIC_CPU] = 0x01c82000,
     [AW_R40_DEV_GIC_HYP] = 0x01c84000,
@@ -XXX,XX +XXX,XX @@ enum {
 /* Shared Processor Interrupts */
 enum {
     AW_R40_GIC_SPI_UART0 = 1,
+    AW_R40_GIC_SPI_UART1 = 2,
+    AW_R40_GIC_SPI_UART2 = 3,
+    AW_R40_GIC_SPI_UART3 = 4,
+    AW_R40_GIC_SPI_UART4 = 17,
+    AW_R40_GIC_SPI_UART5 = 18,
+    AW_R40_GIC_SPI_UART6 = 19,
+    AW_R40_GIC_SPI_UART7 = 20,
     AW_R40_GIC_SPI_TIMER0 = 22,
     AW_R40_GIC_SPI_TIMER1 = 23,
     AW_R40_GIC_SPI_MMC0 = 32,
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
     }

     /* UART0. For future clocktree API: All UARTS are connected to APB2_CLK. */
-    serial_mm_init(get_system_memory(), s->memmap[AW_R40_DEV_UART0], 2,
-                   qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_UART0),
-                   115200, serial_hd(0), DEVICE_NATIVE_ENDIAN);
+    for (int i = 0; i < AW_R40_NUM_UARTS; i++) {
+        static const int uart_irqs[AW_R40_NUM_UARTS] = {
+            AW_R40_GIC_SPI_UART0,
+            AW_R40_GIC_SPI_UART1,
+            AW_R40_GIC_SPI_UART2,
+            AW_R40_GIC_SPI_UART3,
+            AW_R40_GIC_SPI_UART4,
+            AW_R40_GIC_SPI_UART5,
+            AW_R40_GIC_SPI_UART6,
+            AW_R40_GIC_SPI_UART7,
+        };
+        const hwaddr addr = s->memmap[AW_R40_DEV_UART0 + i];
+
+        serial_mm_init(get_system_memory(), addr, 2,
+                       qdev_get_gpio_in(DEVICE(&s->gic), uart_irqs[i]),
+                       115200, serial_hd(i), DEVICE_NATIVE_ENDIAN);
+    }

     /* Unimplemented devices */
     for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
--
2.34.1
New patch

From: qianfan Zhao <qianfanguijin@163.com>

TWI (I2C) is designed to be used as an interface between the CPU host
and the serial two-wire bus. It supports all standard two-wire
transfers and can be operated in standard mode (100 kbit/s) or fast
mode, with data rates of up to 400 kbit/s.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h |  3 +++
 hw/arm/allwinner-r40.c         | 11 ++++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)
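Once twi0 is wired up, board code can attach further I2C slaves through the
"i2c" child bus that the allwinner-i2c model exposes, in the same way the
Bananapi M2U board attaches its PMU later in this series. The fragment below
is only an illustrative sketch, not part of this patch; the TMP105 sensor at
address 0x48 is an arbitrary example device.

/* Illustrative board-code sketch; not part of this patch. */
#include "qemu/osdep.h"
#include "hw/i2c/i2c.h"
#include "hw/arm/allwinner-r40.h"

static void example_attach_i2c_slave(AwR40State *r40)
{
    /* The allwinner-i2c controller exposes its bus as the "i2c" child bus. */
    I2CBus *i2c = I2C_BUS(qdev_get_child_bus(DEVICE(&r40->i2c0), "i2c"));

    /* Attach an arbitrary extra slave, e.g. a TMP105 temperature sensor. */
    i2c_slave_create_simple(i2c, "tmp105", 0x48);
}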
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/intc/arm_gic.h"
 #include "hw/sd/allwinner-sdhost.h"
 #include "hw/misc/allwinner-r40-ccu.h"
+#include "hw/i2c/allwinner-i2c.h"
 #include "target/arm/cpu.h"
 #include "sysemu/block-backend.h"

@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_DEV_UART5,
     AW_R40_DEV_UART6,
     AW_R40_DEV_UART7,
+    AW_R40_DEV_TWI0,
     AW_R40_DEV_GIC_DIST,
     AW_R40_DEV_GIC_CPU,
     AW_R40_DEV_GIC_HYP,
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
     AwA10PITState timer;
     AwSdHostState mmc[AW_R40_NUM_MMCS];
     AwR40ClockCtlState ccu;
+    AWI2CState i2c0;
     GICState gic;
     MemoryRegion sram_a1;
     MemoryRegion sram_a2;
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
     [AW_R40_DEV_UART5] = 0x01c29400,
     [AW_R40_DEV_UART6] = 0x01c29800,
     [AW_R40_DEV_UART7] = 0x01c29c00,
+    [AW_R40_DEV_TWI0] = 0x01c2ac00,
     [AW_R40_DEV_GIC_DIST] = 0x01c81000,
     [AW_R40_DEV_GIC_CPU] = 0x01c82000,
     [AW_R40_DEV_GIC_HYP] = 0x01c84000,
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
     { "uart7", 0x01c29c00, 1 * KiB },
     { "ps20", 0x01c2a000, 1 * KiB },
     { "ps21", 0x01c2a400, 1 * KiB },
-    { "twi0", 0x01c2ac00, 1 * KiB },
     { "twi1", 0x01c2b000, 1 * KiB },
     { "twi2", 0x01c2b400, 1 * KiB },
     { "twi3", 0x01c2b800, 1 * KiB },
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_GIC_SPI_UART1 = 2,
     AW_R40_GIC_SPI_UART2 = 3,
     AW_R40_GIC_SPI_UART3 = 4,
+    AW_R40_GIC_SPI_TWI0 = 7,
     AW_R40_GIC_SPI_UART4 = 17,
     AW_R40_GIC_SPI_UART5 = 18,
     AW_R40_GIC_SPI_UART6 = 19,
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
         object_initialize_child(obj, mmc_names[i], &s->mmc[i],
                                 TYPE_AW_SDHOST_SUN5I);
     }
+
+    object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
 }

 static void allwinner_r40_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
                        115200, serial_hd(i), DEVICE_NATIVE_ENDIAN);
     }

+    /* I2C */
+    sysbus_realize(SYS_BUS_DEVICE(&s->i2c0), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(&s->i2c0), 0, s->memmap[AW_R40_DEV_TWI0]);
+    sysbus_connect_irq(SYS_BUS_DEVICE(&s->i2c0), 0,
+                       qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_TWI0));
+
     /* Unimplemented devices */
     for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
         create_unimplemented_device(r40_unimplemented[i].device_name,
--
2.34.1
New patch

From: qianfan Zhao <qianfanguijin@163.com>

This patch adds minimal support for the AXP-221 PMU and connects it to
the Bananapi M2U board.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/bananapi_m2u.c |   6 +
 hw/misc/axp209.c      | 238 -----------------------------------
 hw/misc/axp2xx.c      | 283 ++++++++++++++++++++++++++++++++++++++++++
 hw/arm/Kconfig        |   3 +-
 hw/misc/Kconfig       |   2 +-
 hw/misc/meson.build   |   2 +-
 hw/misc/trace-events  |   8 +-
 7 files changed, 297 insertions(+), 245 deletions(-)
 delete mode 100644 hw/misc/axp209.c
 create mode 100644 hw/misc/axp2xx.c
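The interesting design point below is that axp2xx becomes an abstract base
type whose only per-variant hook is reset_enter(), which loads the chip's
power-on register defaults, so adding a further family member is just a new
leaf type. As a purely illustrative sketch (an imaginary "axp223_pmu" with
made-up reset values, not part of this patch), such an addition alongside the
other variants in hw/misc/axp2xx.c would look roughly like this:

/* Hypothetical extra variant, for illustration only. */
#define TYPE_AXP223_PMU "axp223_pmu"

static void axp223_reset_enter(AXP2xxI2CState *s, ResetType type)
{
    memset(s->regs, 0, NR_REGS);
    s->ptr = 0;
    s->count = 0;

    s->regs[0x03] = 0x07;   /* made-up chip ID for the example */
}

static void axp223_class_init(ObjectClass *oc, void *data)
{
    AXP2xxClass *sc = AXP2XX_CLASS(oc);

    sc->reset_enter = axp223_reset_enter;
}

static const TypeInfo axp223_info = {
    .name = TYPE_AXP223_PMU,
    .parent = TYPE_AXP2XX,
    .class_init = axp223_class_init,
};

/*
 * The new type would also need a type_register_static(&axp223_info) call
 * in axp2xx_register_devices().
 */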
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/bananapi_m2u.c
23
+++ b/hw/arm/bananapi_m2u.c
24
@@ -XXX,XX +XXX,XX @@
25
#include "qapi/error.h"
26
#include "qemu/error-report.h"
27
#include "hw/boards.h"
28
+#include "hw/i2c/i2c.h"
29
#include "hw/qdev-properties.h"
30
#include "hw/arm/allwinner-r40.h"
31
32
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
33
{
34
bool bootroom_loaded = false;
35
AwR40State *r40;
36
+ I2CBus *i2c;
37
38
/* BIOS is not supported by this board */
39
if (machine->firmware) {
40
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
41
}
42
}
43
44
+ /* Connect AXP221 */
45
+ i2c = I2C_BUS(qdev_get_child_bus(DEVICE(&r40->i2c0), "i2c"));
46
+ i2c_slave_create_simple(i2c, "axp221_pmu", 0x34);
47
+
48
/* SDRAM */
49
memory_region_add_subregion(get_system_memory(),
50
r40->memmap[AW_R40_DEV_SDRAM], machine->ram);
51
diff --git a/hw/misc/axp209.c b/hw/misc/axp209.c
52
deleted file mode 100644
53
index XXXXXXX..XXXXXXX
54
--- a/hw/misc/axp209.c
55
+++ /dev/null
56
@@ -XXX,XX +XXX,XX @@
57
-/*
58
- * AXP-209 PMU Emulation
59
- *
60
- * Copyright (C) 2022 Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
61
- *
62
- * Permission is hereby granted, free of charge, to any person obtaining a
63
- * copy of this software and associated documentation files (the "Software"),
64
- * to deal in the Software without restriction, including without limitation
65
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
66
- * and/or sell copies of the Software, and to permit persons to whom the
67
- * Software is furnished to do so, subject to the following conditions:
68
- *
69
- * The above copyright notice and this permission notice shall be included in
70
- * all copies or substantial portions of the Software.
71
- *
72
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
73
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
74
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
75
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
76
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
77
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
78
- * DEALINGS IN THE SOFTWARE.
79
- *
80
- * SPDX-License-Identifier: MIT
81
- */
82
-
83
-#include "qemu/osdep.h"
84
-#include "qemu/log.h"
85
-#include "trace.h"
86
-#include "hw/i2c/i2c.h"
87
-#include "migration/vmstate.h"
88
-
89
-#define TYPE_AXP209_PMU "axp209_pmu"
90
-
91
-#define AXP209(obj) \
92
- OBJECT_CHECK(AXP209I2CState, (obj), TYPE_AXP209_PMU)
93
-
94
-/* registers */
95
-enum {
96
- REG_POWER_STATUS = 0x0u,
97
- REG_OPERATING_MODE,
98
- REG_OTG_VBUS_STATUS,
99
- REG_CHIP_VERSION,
100
- REG_DATA_CACHE_0,
101
- REG_DATA_CACHE_1,
102
- REG_DATA_CACHE_2,
103
- REG_DATA_CACHE_3,
104
- REG_DATA_CACHE_4,
105
- REG_DATA_CACHE_5,
106
- REG_DATA_CACHE_6,
107
- REG_DATA_CACHE_7,
108
- REG_DATA_CACHE_8,
109
- REG_DATA_CACHE_9,
110
- REG_DATA_CACHE_A,
111
- REG_DATA_CACHE_B,
112
- REG_POWER_OUTPUT_CTRL = 0x12u,
113
- REG_DC_DC2_OUT_V_CTRL = 0x23u,
114
- REG_DC_DC2_DVS_CTRL = 0x25u,
115
- REG_DC_DC3_OUT_V_CTRL = 0x27u,
116
- REG_LDO2_4_OUT_V_CTRL,
117
- REG_LDO3_OUT_V_CTRL,
118
- REG_VBUS_CH_MGMT = 0x30u,
119
- REG_SHUTDOWN_V_CTRL,
120
- REG_SHUTDOWN_CTRL,
121
- REG_CHARGE_CTRL_1,
122
- REG_CHARGE_CTRL_2,
123
- REG_SPARE_CHARGE_CTRL,
124
- REG_PEK_KEY_CTRL,
125
- REG_DC_DC_FREQ_SET,
126
- REG_CHR_TEMP_TH_SET,
127
- REG_CHR_HIGH_TEMP_TH_CTRL,
128
- REG_IPSOUT_WARN_L1,
129
- REG_IPSOUT_WARN_L2,
130
- REG_DISCHR_TEMP_TH_SET,
131
- REG_DISCHR_HIGH_TEMP_TH_CTRL,
132
- REG_IRQ_BANK_1_CTRL = 0x40u,
133
- REG_IRQ_BANK_2_CTRL,
134
- REG_IRQ_BANK_3_CTRL,
135
- REG_IRQ_BANK_4_CTRL,
136
- REG_IRQ_BANK_5_CTRL,
137
- REG_IRQ_BANK_1_STAT = 0x48u,
138
- REG_IRQ_BANK_2_STAT,
139
- REG_IRQ_BANK_3_STAT,
140
- REG_IRQ_BANK_4_STAT,
141
- REG_IRQ_BANK_5_STAT,
142
- REG_ADC_ACIN_V_H = 0x56u,
143
- REG_ADC_ACIN_V_L,
144
- REG_ADC_ACIN_CURR_H,
145
- REG_ADC_ACIN_CURR_L,
146
- REG_ADC_VBUS_V_H,
147
- REG_ADC_VBUS_V_L,
148
- REG_ADC_VBUS_CURR_H,
149
- REG_ADC_VBUS_CURR_L,
150
- REG_ADC_INT_TEMP_H,
151
- REG_ADC_INT_TEMP_L,
152
- REG_ADC_TEMP_SENS_V_H = 0x62u,
153
- REG_ADC_TEMP_SENS_V_L,
154
- REG_ADC_BAT_V_H = 0x78u,
155
- REG_ADC_BAT_V_L,
156
- REG_ADC_BAT_DISCHR_CURR_H,
157
- REG_ADC_BAT_DISCHR_CURR_L,
158
- REG_ADC_BAT_CHR_CURR_H,
159
- REG_ADC_BAT_CHR_CURR_L,
160
- REG_ADC_IPSOUT_V_H,
161
- REG_ADC_IPSOUT_V_L,
162
- REG_DC_DC_MOD_SEL = 0x80u,
163
- REG_ADC_EN_1,
164
- REG_ADC_EN_2,
165
- REG_ADC_SR_CTRL,
166
- REG_ADC_IN_RANGE,
167
- REG_GPIO1_ADC_IRQ_RISING_TH,
168
- REG_GPIO1_ADC_IRQ_FALLING_TH,
169
- REG_TIMER_CTRL = 0x8au,
170
- REG_VBUS_CTRL_MON_SRP,
171
- REG_OVER_TEMP_SHUTDOWN = 0x8fu,
172
- REG_GPIO0_FEAT_SET,
173
- REG_GPIO_OUT_HIGH_SET,
174
- REG_GPIO1_FEAT_SET,
175
- REG_GPIO2_FEAT_SET,
176
- REG_GPIO_SIG_STATE_SET_MON,
177
- REG_GPIO3_SET,
178
- REG_COULOMB_CNTR_CTRL = 0xb8u,
179
- REG_POWER_MEAS_RES,
180
- NR_REGS
181
-};
182
-
183
-#define AXP209_CHIP_VERSION_ID (0x01)
184
-#define AXP209_DC_DC2_OUT_V_CTRL_RESET (0x16)
185
-#define AXP209_IRQ_BANK_1_CTRL_RESET (0xd8)
186
-
187
-/* A simple I2C slave which returns values of ID or CNT register. */
188
-typedef struct AXP209I2CState {
189
- /*< private >*/
190
- I2CSlave i2c;
191
- /*< public >*/
192
- uint8_t regs[NR_REGS]; /* peripheral registers */
193
- uint8_t ptr; /* current register index */
194
- uint8_t count; /* counter used for tx/rx */
195
-} AXP209I2CState;
196
-
197
-/* Reset all counters and load ID register */
198
-static void axp209_reset_enter(Object *obj, ResetType type)
199
-{
200
- AXP209I2CState *s = AXP209(obj);
201
-
202
- memset(s->regs, 0, NR_REGS);
203
- s->ptr = 0;
204
- s->count = 0;
205
- s->regs[REG_CHIP_VERSION] = AXP209_CHIP_VERSION_ID;
206
- s->regs[REG_DC_DC2_OUT_V_CTRL] = AXP209_DC_DC2_OUT_V_CTRL_RESET;
207
- s->regs[REG_IRQ_BANK_1_CTRL] = AXP209_IRQ_BANK_1_CTRL_RESET;
208
-}
209
-
210
-/* Handle events from master. */
211
-static int axp209_event(I2CSlave *i2c, enum i2c_event event)
212
-{
213
- AXP209I2CState *s = AXP209(i2c);
214
-
215
- s->count = 0;
216
-
217
- return 0;
218
-}
219
-
220
-/* Called when master requests read */
221
-static uint8_t axp209_rx(I2CSlave *i2c)
222
-{
223
- AXP209I2CState *s = AXP209(i2c);
224
- uint8_t ret = 0xff;
225
-
226
- if (s->ptr < NR_REGS) {
227
- ret = s->regs[s->ptr++];
228
- }
229
-
230
- trace_axp209_rx(s->ptr - 1, ret);
231
-
232
- return ret;
233
-}
234
-
235
-/*
236
- * Called when master sends write.
237
- * Update ptr with byte 0, then perform write with second byte.
238
- */
239
-static int axp209_tx(I2CSlave *i2c, uint8_t data)
240
-{
241
- AXP209I2CState *s = AXP209(i2c);
242
-
243
- if (s->count == 0) {
244
- /* Store register address */
245
- s->ptr = data;
246
- s->count++;
247
- trace_axp209_select(data);
248
- } else {
249
- trace_axp209_tx(s->ptr, data);
250
- if (s->ptr == REG_DC_DC2_OUT_V_CTRL) {
251
- s->regs[s->ptr++] = data;
252
- }
253
- }
254
-
255
- return 0;
256
-}
257
-
258
-static const VMStateDescription vmstate_axp209 = {
259
- .name = TYPE_AXP209_PMU,
260
- .version_id = 1,
261
- .fields = (VMStateField[]) {
262
- VMSTATE_UINT8_ARRAY(regs, AXP209I2CState, NR_REGS),
263
- VMSTATE_UINT8(count, AXP209I2CState),
264
- VMSTATE_UINT8(ptr, AXP209I2CState),
265
- VMSTATE_END_OF_LIST()
266
- }
267
-};
268
-
269
-static void axp209_class_init(ObjectClass *oc, void *data)
270
-{
271
- DeviceClass *dc = DEVICE_CLASS(oc);
272
- I2CSlaveClass *isc = I2C_SLAVE_CLASS(oc);
273
- ResettableClass *rc = RESETTABLE_CLASS(oc);
274
-
275
- rc->phases.enter = axp209_reset_enter;
276
- dc->vmsd = &vmstate_axp209;
277
- isc->event = axp209_event;
278
- isc->recv = axp209_rx;
279
- isc->send = axp209_tx;
280
-}
281
-
282
-static const TypeInfo axp209_info = {
283
- .name = TYPE_AXP209_PMU,
284
- .parent = TYPE_I2C_SLAVE,
285
- .instance_size = sizeof(AXP209I2CState),
286
- .class_init = axp209_class_init
287
-};
288
-
289
-static void axp209_register_devices(void)
290
-{
291
- type_register_static(&axp209_info);
292
-}
293
-
294
-type_init(axp209_register_devices);
295
diff --git a/hw/misc/axp2xx.c b/hw/misc/axp2xx.c
296
new file mode 100644
297
index XXXXXXX..XXXXXXX
298
--- /dev/null
299
+++ b/hw/misc/axp2xx.c
300
@@ -XXX,XX +XXX,XX @@
301
+/*
302
+ * AXP-2XX PMU Emulation, supported lists:
303
+ * AXP209
304
+ * AXP221
305
+ *
306
+ * Copyright (C) 2022 Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
307
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
308
+ *
309
+ * Permission is hereby granted, free of charge, to any person obtaining a
310
+ * copy of this software and associated documentation files (the "Software"),
311
+ * to deal in the Software without restriction, including without limitation
312
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
313
+ * and/or sell copies of the Software, and to permit persons to whom the
314
+ * Software is furnished to do so, subject to the following conditions:
315
+ *
316
+ * The above copyright notice and this permission notice shall be included in
317
+ * all copies or substantial portions of the Software.
318
+ *
319
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
320
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
321
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
322
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
323
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
324
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
325
+ * DEALINGS IN THE SOFTWARE.
326
+ *
327
+ * SPDX-License-Identifier: MIT
328
+ */
329
+
330
+#include "qemu/osdep.h"
331
+#include "qemu/log.h"
332
+#include "qom/object.h"
333
+#include "trace.h"
334
+#include "hw/i2c/i2c.h"
335
+#include "migration/vmstate.h"
336
+
337
+#define TYPE_AXP2XX "axp2xx_pmu"
338
+#define TYPE_AXP209_PMU "axp209_pmu"
339
+#define TYPE_AXP221_PMU "axp221_pmu"
340
+
341
+OBJECT_DECLARE_TYPE(AXP2xxI2CState, AXP2xxClass, AXP2XX)
342
+
343
+#define NR_REGS (0xff)
344
+
345
+/* A simple I2C slave which returns values of ID or CNT register. */
346
+typedef struct AXP2xxI2CState {
347
+ /*< private >*/
348
+ I2CSlave i2c;
349
+ /*< public >*/
350
+ uint8_t regs[NR_REGS]; /* peripheral registers */
351
+ uint8_t ptr; /* current register index */
352
+ uint8_t count; /* counter used for tx/rx */
353
+} AXP2xxI2CState;
354
+
355
+typedef struct AXP2xxClass {
356
+ /*< private >*/
357
+ I2CSlaveClass parent_class;
358
+ /*< public >*/
359
+ void (*reset_enter)(AXP2xxI2CState *s, ResetType type);
360
+} AXP2xxClass;
361
+
362
+#define AXP209_CHIP_VERSION_ID (0x01)
363
+#define AXP209_DC_DC2_OUT_V_CTRL_RESET (0x16)
364
+
365
+/* Reset all counters and load ID register */
366
+static void axp209_reset_enter(AXP2xxI2CState *s, ResetType type)
367
+{
368
+ memset(s->regs, 0, NR_REGS);
369
+ s->ptr = 0;
370
+ s->count = 0;
371
+
372
+ s->regs[0x03] = AXP209_CHIP_VERSION_ID;
373
+ s->regs[0x23] = AXP209_DC_DC2_OUT_V_CTRL_RESET;
374
+
375
+ s->regs[0x30] = 0x60;
376
+ s->regs[0x32] = 0x46;
377
+ s->regs[0x34] = 0x41;
378
+ s->regs[0x35] = 0x22;
379
+ s->regs[0x36] = 0x5d;
380
+ s->regs[0x37] = 0x08;
381
+ s->regs[0x38] = 0xa5;
382
+ s->regs[0x39] = 0x1f;
383
+ s->regs[0x3a] = 0x68;
384
+ s->regs[0x3b] = 0x5f;
385
+ s->regs[0x3c] = 0xfc;
386
+ s->regs[0x3d] = 0x16;
387
+ s->regs[0x40] = 0xd8;
388
+ s->regs[0x42] = 0xff;
389
+ s->regs[0x43] = 0x3b;
390
+ s->regs[0x80] = 0xe0;
391
+ s->regs[0x82] = 0x83;
392
+ s->regs[0x83] = 0x80;
393
+ s->regs[0x84] = 0x32;
394
+ s->regs[0x86] = 0xff;
395
+ s->regs[0x90] = 0x07;
396
+ s->regs[0x91] = 0xa0;
397
+ s->regs[0x92] = 0x07;
398
+ s->regs[0x93] = 0x07;
399
+}
400
+
401
+#define AXP221_PWR_STATUS_ACIN_PRESENT BIT(7)
402
+#define AXP221_PWR_STATUS_ACIN_AVAIL BIT(6)
403
+#define AXP221_PWR_STATUS_VBUS_PRESENT BIT(5)
404
+#define AXP221_PWR_STATUS_VBUS_USED BIT(4)
405
+#define AXP221_PWR_STATUS_BAT_CHARGING BIT(2)
406
+#define AXP221_PWR_STATUS_ACIN_VBUS_POWERED BIT(1)
407
+
408
+/* Reset all counters and load ID register */
409
+static void axp221_reset_enter(AXP2xxI2CState *s, ResetType type)
410
+{
411
+ memset(s->regs, 0, NR_REGS);
412
+ s->ptr = 0;
413
+ s->count = 0;
414
+
415
+ /* input power status register */
416
+ s->regs[0x00] = AXP221_PWR_STATUS_ACIN_PRESENT
417
+ | AXP221_PWR_STATUS_ACIN_AVAIL
418
+ | AXP221_PWR_STATUS_ACIN_VBUS_POWERED;
419
+
420
+ s->regs[0x01] = 0x00; /* no battery is connected */
421
+
422
+ /*
423
+ * CHIPID register, no documented on datasheet, but it is checked in
424
+ * u-boot spl. I had read it from AXP221s and got 0x06 value.
425
+ * So leave 06h here.
426
+ */
427
+ s->regs[0x03] = 0x06;
428
+
429
+ s->regs[0x10] = 0xbf;
430
+ s->regs[0x13] = 0x01;
431
+ s->regs[0x30] = 0x60;
432
+ s->regs[0x31] = 0x03;
433
+ s->regs[0x32] = 0x43;
434
+ s->regs[0x33] = 0xc6;
435
+ s->regs[0x34] = 0x45;
436
+ s->regs[0x35] = 0x0e;
437
+ s->regs[0x36] = 0x5d;
438
+ s->regs[0x37] = 0x08;
439
+ s->regs[0x38] = 0xa5;
440
+ s->regs[0x39] = 0x1f;
441
+ s->regs[0x3c] = 0xfc;
442
+ s->regs[0x3d] = 0x16;
443
+ s->regs[0x80] = 0x80;
444
+ s->regs[0x82] = 0xe0;
445
+ s->regs[0x84] = 0x32;
446
+ s->regs[0x8f] = 0x01;
447
+
448
+ s->regs[0x90] = 0x07;
449
+ s->regs[0x91] = 0x1f;
450
+ s->regs[0x92] = 0x07;
451
+ s->regs[0x93] = 0x1f;
452
+
453
+ s->regs[0x40] = 0xd8;
454
+ s->regs[0x41] = 0xff;
455
+ s->regs[0x42] = 0x03;
456
+ s->regs[0x43] = 0x03;
457
+
458
+ s->regs[0xb8] = 0xc0;
459
+ s->regs[0xb9] = 0x64;
460
+ s->regs[0xe6] = 0xa0;
461
+}
462
+
463
+static void axp2xx_reset_enter(Object *obj, ResetType type)
464
+{
465
+ AXP2xxI2CState *s = AXP2XX(obj);
466
+ AXP2xxClass *sc = AXP2XX_GET_CLASS(s);
467
+
468
+ sc->reset_enter(s, type);
469
+}
470
+
471
+/* Handle events from master. */
472
+static int axp2xx_event(I2CSlave *i2c, enum i2c_event event)
473
+{
474
+ AXP2xxI2CState *s = AXP2XX(i2c);
475
+
476
+ s->count = 0;
477
+
478
+ return 0;
479
+}
480
+
481
+/* Called when master requests read */
482
+static uint8_t axp2xx_rx(I2CSlave *i2c)
483
+{
484
+ AXP2xxI2CState *s = AXP2XX(i2c);
485
+ uint8_t ret = 0xff;
486
+
487
+ if (s->ptr < NR_REGS) {
488
+ ret = s->regs[s->ptr++];
489
+ }
490
+
491
+ trace_axp2xx_rx(s->ptr - 1, ret);
492
+
493
+ return ret;
494
+}
495
+
496
+/*
497
+ * Called when master sends write.
498
+ * Update ptr with byte 0, then perform write with second byte.
499
+ */
500
+static int axp2xx_tx(I2CSlave *i2c, uint8_t data)
501
+{
502
+ AXP2xxI2CState *s = AXP2XX(i2c);
503
+
504
+ if (s->count == 0) {
505
+ /* Store register address */
506
+ s->ptr = data;
507
+ s->count++;
508
+ trace_axp2xx_select(data);
509
+ } else {
510
+ trace_axp2xx_tx(s->ptr, data);
511
+ s->regs[s->ptr++] = data;
512
+ }
513
+
514
+ return 0;
515
+}
516
+
517
+static const VMStateDescription vmstate_axp2xx = {
518
+ .name = TYPE_AXP2XX,
519
+ .version_id = 1,
520
+ .fields = (VMStateField[]) {
521
+ VMSTATE_UINT8_ARRAY(regs, AXP2xxI2CState, NR_REGS),
522
+ VMSTATE_UINT8(ptr, AXP2xxI2CState),
523
+ VMSTATE_UINT8(count, AXP2xxI2CState),
524
+ VMSTATE_END_OF_LIST()
525
+ }
526
+};
527
+
528
+static void axp2xx_class_init(ObjectClass *oc, void *data)
529
+{
530
+ DeviceClass *dc = DEVICE_CLASS(oc);
531
+ I2CSlaveClass *isc = I2C_SLAVE_CLASS(oc);
532
+ ResettableClass *rc = RESETTABLE_CLASS(oc);
533
+
534
+ rc->phases.enter = axp2xx_reset_enter;
535
+ dc->vmsd = &vmstate_axp2xx;
536
+ isc->event = axp2xx_event;
537
+ isc->recv = axp2xx_rx;
538
+ isc->send = axp2xx_tx;
539
+}
540
+
541
+static const TypeInfo axp2xx_info = {
542
+ .name = TYPE_AXP2XX,
543
+ .parent = TYPE_I2C_SLAVE,
544
+ .instance_size = sizeof(AXP2xxI2CState),
545
+ .class_size = sizeof(AXP2xxClass),
546
+ .class_init = axp2xx_class_init,
547
+ .abstract = true,
548
+};
549
+
550
+static void axp209_class_init(ObjectClass *oc, void *data)
551
+{
552
+ AXP2xxClass *sc = AXP2XX_CLASS(oc);
553
+
554
+ sc->reset_enter = axp209_reset_enter;
555
+}
556
+
557
+static const TypeInfo axp209_info = {
558
+ .name = TYPE_AXP209_PMU,
559
+ .parent = TYPE_AXP2XX,
560
+ .class_init = axp209_class_init
561
+};
562
+
563
+static void axp221_class_init(ObjectClass *oc, void *data)
564
+{
565
+ AXP2xxClass *sc = AXP2XX_CLASS(oc);
566
+
567
+ sc->reset_enter = axp221_reset_enter;
568
+}
569
+
570
+static const TypeInfo axp221_info = {
571
+ .name = TYPE_AXP221_PMU,
572
+ .parent = TYPE_AXP2XX,
573
+ .class_init = axp221_class_init,
574
+};
575
+
576
+static void axp2xx_register_devices(void)
577
+{
578
+ type_register_static(&axp2xx_info);
579
+ type_register_static(&axp209_info);
580
+ type_register_static(&axp221_info);
581
+}
582
+
583
+type_init(axp2xx_register_devices);
584
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
585
index XXXXXXX..XXXXXXX 100644
586
--- a/hw/arm/Kconfig
587
+++ b/hw/arm/Kconfig
588
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_A10
589
select ALLWINNER_WDT
590
select ALLWINNER_EMAC
591
select ALLWINNER_I2C
592
- select AXP209_PMU
593
+ select AXP2XX_PMU
594
select SERIAL
595
select UNIMP
596
597
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_R40
598
bool
599
default y if TCG && ARM
600
select ALLWINNER_A10_PIT
601
+ select AXP2XX_PMU
602
select SERIAL
603
select ARM_TIMER
604
select ARM_GIC
605
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
606
index XXXXXXX..XXXXXXX 100644
607
--- a/hw/misc/Kconfig
608
+++ b/hw/misc/Kconfig
609
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_A10_CCM
610
config ALLWINNER_A10_DRAMC
611
bool
612
613
-config AXP209_PMU
614
+config AXP2XX_PMU
615
bool
616
depends on I2C
617
618
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
619
index XXXXXXX..XXXXXXX 100644
620
--- a/hw/misc/meson.build
621
+++ b/hw/misc/meson.build
622
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-dramc.c
623
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-sysctrl.c'))
624
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-sid.c'))
625
softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-ccu.c'))
626
-softmmu_ss.add(when: 'CONFIG_AXP209_PMU', if_true: files('axp209.c'))
627
+softmmu_ss.add(when: 'CONFIG_AXP2XX_PMU', if_true: files('axp2xx.c'))
628
softmmu_ss.add(when: 'CONFIG_REALVIEW', if_true: files('arm_sysctl.c'))
629
softmmu_ss.add(when: 'CONFIG_NSERIES', if_true: files('cbus.c'))
630
softmmu_ss.add(when: 'CONFIG_ECCMEMCTL', if_true: files('eccmemctl.c'))
631
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
632
index XXXXXXX..XXXXXXX 100644
633
--- a/hw/misc/trace-events
634
+++ b/hw/misc/trace-events
635
@@ -XXX,XX +XXX,XX @@ allwinner_sid_write(uint64_t offset, uint64_t data, unsigned size) "offset 0x%"
636
avr_power_read(uint8_t value) "power_reduc read value:%u"
637
avr_power_write(uint8_t value) "power_reduc write value:%u"
638
639
-# axp209.c
640
-axp209_rx(uint8_t reg, uint8_t data) "Read reg 0x%" PRIx8 " : 0x%" PRIx8
641
-axp209_select(uint8_t reg) "Accessing reg 0x%" PRIx8
642
-axp209_tx(uint8_t reg, uint8_t data) "Write reg 0x%" PRIx8 " : 0x%" PRIx8
643
+# axp2xx
644
+axp2xx_rx(uint8_t reg, uint8_t data) "Read reg 0x%" PRIx8 " : 0x%" PRIx8
645
+axp2xx_select(uint8_t reg) "Accessing reg 0x%" PRIx8
646
+axp2xx_tx(uint8_t reg, uint8_t data) "Write reg 0x%" PRIx8 " : 0x%" PRIx8
647
648
# eccmemctl.c
649
ecc_mem_writel_mer(uint32_t val) "Write memory enable 0x%08x"
650
--
651
2.34.1
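
For readers following the AXP2xx model above: the axp2xx_tx()/axp2xx_rx() pair implements the usual PMIC access pattern, where the first written byte selects a register and later transfers read or write it with auto-increment. Below is a minimal standalone sketch of that protocol; the struct, helper names and the NR_REGS value are illustrative assumptions, not code taken from the patch.

    #include <stdint.h>

    #define NR_REGS 0x100   /* assumed register-file size; axp2xx.c defines its own */

    typedef struct {
        uint8_t regs[NR_REGS];
        uint8_t ptr;    /* currently selected register */
        uint8_t count;  /* bytes seen since the I2C (re)start */
    } AxpModel;

    /* Mirrors axp2xx_event(): any start condition resets the byte counter. */
    static void axp_start(AxpModel *s)
    {
        s->count = 0;
    }

    /* Mirrors axp2xx_tx(): byte 0 selects the register, later bytes write it. */
    static void axp_write(AxpModel *s, uint8_t data)
    {
        if (s->count == 0) {
            s->ptr = data;
            s->count++;
        } else {
            s->regs[s->ptr++] = data;
        }
    }

    /* Mirrors axp2xx_rx(): reads return regs[ptr] and post-increment ptr. */
    static uint8_t axp_read(AxpModel *s)
    {
        return s->ptr < NR_REGS ? s->regs[s->ptr++] : 0xff;
    }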
1
From: Nolan Leake <nolan@sigbus.net>
1
From: qianfan Zhao <qianfanguijin@163.com>
2
2
3
This is just enough to make reboot and poweroff work. Works for
3
The SDRAM controller supports DDR2 and DDR3 memory types
4
linux, u-boot, and the arm trusted firmware. Not tested, but should
4
with capacities of up to 2 GiB. This commit adds emulation support
5
work for plan9, and bare-metal/hobby OSes, since they seem to generally
5
for the Allwinner R40 SDRAM controller.
6
do what linux does for reset.
7
6
8
The watchdog timer functionality is not yet implemented.
7
This model currently supports only 256 MiB, 512 MiB and 1024 MiB memory sizes.
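
As a sanity check on those sizes: in this model a DDR geometry of (row, bank, column) address bits spans 2^(row+bank+col) bytes, which is how the dummy_ddr_chips table further down arrives at 256, 512 and 1024 MiB. A small self-contained illustration (my own sketch, not code from the patch):

    #include <stdio.h>
    #include <stdint.h>

    /* Capacity in MiB implied by a DDR geometry in this model: the byte
     * offset decomposes into column, bank and row bits, so the chip spans
     * 2^(row + bank + col) bytes. */
    static uint32_t ddr_size_mib(unsigned row_bits, unsigned bank_bits,
                                 unsigned col_bits)
    {
        return 1u << (row_bits + bank_bits + col_bits - 20);
    }

    int main(void)
    {
        /* The three geometries from the dummy_ddr_chips table. */
        printf("%u MiB\n", ddr_size_mib(12, 3, 13));   /* 256  */
        printf("%u MiB\n", ddr_size_mib(13, 3, 13));   /* 512  */
        printf("%u MiB\n", ddr_size_mib(14, 3, 13));   /* 1024 */
        return 0;
    }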
9
8
10
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/64
9
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
11
Signed-off-by: Nolan Leake <nolan@sigbus.net>
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Message-id: 20210625210209.1870217-1-nolan@sigbus.net
15
[PMM: tweaked commit title; fixed region size to 0x200;
16
moved header file to include/]
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
11
---
19
include/hw/arm/bcm2835_peripherals.h | 3 +-
12
include/hw/arm/allwinner-r40.h | 13 +-
20
include/hw/misc/bcm2835_powermgt.h | 29 +++++
13
include/hw/misc/allwinner-r40-dramc.h | 108 ++++++
21
hw/arm/bcm2835_peripherals.c | 13 ++-
14
hw/arm/allwinner-r40.c | 21 +-
22
hw/misc/bcm2835_powermgt.c | 160 +++++++++++++++++++++++++++
15
hw/arm/bananapi_m2u.c | 7 +
23
hw/misc/meson.build | 1 +
16
hw/misc/allwinner-r40-dramc.c | 513 ++++++++++++++++++++++++++
24
5 files changed, 204 insertions(+), 2 deletions(-)
17
hw/misc/meson.build | 1 +
25
create mode 100644 include/hw/misc/bcm2835_powermgt.h
18
hw/misc/trace-events | 14 +
26
create mode 100644 hw/misc/bcm2835_powermgt.c
19
7 files changed, 674 insertions(+), 3 deletions(-)
20
create mode 100644 include/hw/misc/allwinner-r40-dramc.h
21
create mode 100644 hw/misc/allwinner-r40-dramc.c
27
22
28
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
23
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
29
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
30
--- a/include/hw/arm/bcm2835_peripherals.h
25
--- a/include/hw/arm/allwinner-r40.h
31
+++ b/include/hw/arm/bcm2835_peripherals.h
26
+++ b/include/hw/arm/allwinner-r40.h
32
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@
33
#include "hw/misc/bcm2835_mphi.h"
28
#include "hw/intc/arm_gic.h"
34
#include "hw/misc/bcm2835_thermal.h"
29
#include "hw/sd/allwinner-sdhost.h"
35
#include "hw/misc/bcm2835_cprman.h"
30
#include "hw/misc/allwinner-r40-ccu.h"
36
+#include "hw/misc/bcm2835_powermgt.h"
31
+#include "hw/misc/allwinner-r40-dramc.h"
37
#include "hw/sd/sdhci.h"
32
#include "hw/i2c/allwinner-i2c.h"
38
#include "hw/sd/bcm2835_sdhost.h"
33
#include "target/arm/cpu.h"
39
#include "hw/gpio/bcm2835_gpio.h"
34
#include "sysemu/block-backend.h"
40
@@ -XXX,XX +XXX,XX @@ struct BCM2835PeripheralState {
35
@@ -XXX,XX +XXX,XX @@ enum {
41
BCM2835MphiState mphi;
36
AW_R40_DEV_GIC_CPU,
42
UnimplementedDeviceState txp;
37
AW_R40_DEV_GIC_HYP,
43
UnimplementedDeviceState armtmr;
38
AW_R40_DEV_GIC_VCPU,
44
- UnimplementedDeviceState powermgt;
39
- AW_R40_DEV_SDRAM
45
+ BCM2835PowerMgtState powermgt;
40
+ AW_R40_DEV_SDRAM,
46
BCM2835CprmanState cprman;
41
+ AW_R40_DEV_DRAMCOM,
47
PL011State uart0;
42
+ AW_R40_DEV_DRAMCTL,
48
BCM2835AuxState aux;
43
+ AW_R40_DEV_DRAMPHY,
49
diff --git a/include/hw/misc/bcm2835_powermgt.h b/include/hw/misc/bcm2835_powermgt.h
44
};
45
46
#define AW_R40_NUM_CPUS (4)
47
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
48
DeviceState parent_obj;
49
/*< public >*/
50
51
+ /** Physical base address for start of RAM */
52
+ hwaddr ram_addr;
53
+
54
+ /** Total RAM size in megabytes */
55
+ uint32_t ram_size;
56
+
57
ARMCPU cpus[AW_R40_NUM_CPUS];
58
const hwaddr *memmap;
59
AwA10PITState timer;
60
AwSdHostState mmc[AW_R40_NUM_MMCS];
61
AwR40ClockCtlState ccu;
62
+ AwR40DramCtlState dramc;
63
AWI2CState i2c0;
64
GICState gic;
65
MemoryRegion sram_a1;
66
diff --git a/include/hw/misc/allwinner-r40-dramc.h b/include/hw/misc/allwinner-r40-dramc.h
50
new file mode 100644
67
new file mode 100644
51
index XXXXXXX..XXXXXXX
68
index XXXXXXX..XXXXXXX
52
--- /dev/null
69
--- /dev/null
53
+++ b/include/hw/misc/bcm2835_powermgt.h
70
+++ b/include/hw/misc/allwinner-r40-dramc.h
54
@@ -XXX,XX +XXX,XX @@
71
@@ -XXX,XX +XXX,XX @@
55
+/*
72
+/*
56
+ * BCM2835 Power Management emulation
73
+ * Allwinner R40 SDRAM Controller emulation
57
+ *
74
+ *
58
+ * Copyright (C) 2017 Marcin Chojnacki <marcinch7@gmail.com>
75
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
59
+ * Copyright (C) 2021 Nolan Leake <nolan@sigbus.net>
60
+ *
76
+ *
61
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
77
+ * This program is free software: you can redistribute it and/or modify
62
+ * See the COPYING file in the top-level directory.
78
+ * it under the terms of the GNU General Public License as published by
79
+ * the Free Software Foundation, either version 2 of the License, or
80
+ * (at your option) any later version.
81
+ *
82
+ * This program is distributed in the hope that it will be useful,
83
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
84
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
85
+ * GNU General Public License for more details.
86
+ *
87
+ * You should have received a copy of the GNU General Public License
88
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
63
+ */
89
+ */
64
+
90
+
65
+#ifndef BCM2835_POWERMGT_H
91
+#ifndef HW_MISC_ALLWINNER_R40_DRAMC_H
66
+#define BCM2835_POWERMGT_H
92
+#define HW_MISC_ALLWINNER_R40_DRAMC_H
67
+
93
+
94
+#include "qom/object.h"
68
+#include "hw/sysbus.h"
95
+#include "hw/sysbus.h"
69
+#include "qom/object.h"
96
+#include "exec/hwaddr.h"
70
+
97
+
71
+#define TYPE_BCM2835_POWERMGT "bcm2835-powermgt"
98
+/**
72
+OBJECT_DECLARE_SIMPLE_TYPE(BCM2835PowerMgtState, BCM2835_POWERMGT)
99
+ * Constants
73
+
100
+ * @{
74
+struct BCM2835PowerMgtState {
101
+ */
75
+ SysBusDevice busdev;
102
+
76
+ MemoryRegion iomem;
103
+/** Highest register address used by DRAMCOM module */
77
+
104
+#define AW_R40_DRAMCOM_REGS_MAXADDR (0x804)
78
+ uint32_t rstc;
105
+
79
+ uint32_t rsts;
106
+/** Total number of known DRAMCOM registers */
80
+ uint32_t wdog;
107
+#define AW_R40_DRAMCOM_REGS_NUM (AW_R40_DRAMCOM_REGS_MAXADDR / \
81
+};
108
+ sizeof(uint32_t))
82
+
109
+
83
+#endif
110
+/** Highest register address used by DRAMCTL module */
84
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
111
+#define AW_R40_DRAMCTL_REGS_MAXADDR (0x88c)
112
+
113
+/** Total number of known DRAMCTL registers */
114
+#define AW_R40_DRAMCTL_REGS_NUM (AW_R40_DRAMCTL_REGS_MAXADDR / \
115
+ sizeof(uint32_t))
116
+
117
+/** Highest register address used by DRAMPHY module */
118
+#define AW_R40_DRAMPHY_REGS_MAXADDR (0x4)
119
+
120
+/** Total number of known DRAMPHY registers */
121
+#define AW_R40_DRAMPHY_REGS_NUM (AW_R40_DRAMPHY_REGS_MAXADDR / \
122
+ sizeof(uint32_t))
123
+
124
+/** @} */
125
+
126
+/**
127
+ * Object model
128
+ * @{
129
+ */
130
+
131
+#define TYPE_AW_R40_DRAMC "allwinner-r40-dramc"
132
+OBJECT_DECLARE_SIMPLE_TYPE(AwR40DramCtlState, AW_R40_DRAMC)
133
+
134
+/** @} */
135
+
136
+/**
137
+ * Allwinner R40 SDRAM Controller object instance state.
138
+ */
139
+struct AwR40DramCtlState {
140
+ /*< private >*/
141
+ SysBusDevice parent_obj;
142
+ /*< public >*/
143
+
144
+ /** Physical base address for start of RAM */
145
+ hwaddr ram_addr;
146
+
147
+ /** Total RAM size in megabytes */
148
+ uint32_t ram_size;
149
+
150
+ uint8_t set_row_bits;
151
+ uint8_t set_bank_bits;
152
+ uint8_t set_col_bits;
153
+
154
+ /**
155
+ * @name Memory Regions
156
+ * @{
157
+ */
158
+ MemoryRegion dramcom_iomem; /**< DRAMCOM module I/O registers */
159
+ MemoryRegion dramctl_iomem; /**< DRAMCTL module I/O registers */
160
+ MemoryRegion dramphy_iomem; /**< DRAMPHY module I/O registers */
161
+ MemoryRegion dram_high; /**< The high 1G dram for dualrank detect */
162
+ MemoryRegion detect_cells; /**< DRAM memory cells for auto detect */
163
+
164
+ /** @} */
165
+
166
+ /**
167
+ * @name Hardware Registers
168
+ * @{
169
+ */
170
+
171
+ uint32_t dramcom[AW_R40_DRAMCOM_REGS_NUM]; /**< DRAMCOM registers */
172
+ uint32_t dramctl[AW_R40_DRAMCTL_REGS_NUM]; /**< DRAMCTL registers */
173
+ uint32_t dramphy[AW_R40_DRAMPHY_REGS_NUM] ;/**< DRAMPHY registers */
174
+
175
+ /** @} */
176
+
177
+};
178
+
179
+#endif /* HW_MISC_ALLWINNER_R40_DRAMC_H */
180
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
85
index XXXXXXX..XXXXXXX 100644
181
index XXXXXXX..XXXXXXX 100644
86
--- a/hw/arm/bcm2835_peripherals.c
182
--- a/hw/arm/allwinner-r40.c
87
+++ b/hw/arm/bcm2835_peripherals.c
183
+++ b/hw/arm/allwinner-r40.c
88
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
184
@@ -XXX,XX +XXX,XX @@
89
185
#include "hw/loader.h"
90
object_property_add_const_link(OBJECT(&s->dwc2), "dma-mr",
186
#include "sysemu/sysemu.h"
91
OBJECT(&s->gpu_bus_mr));
187
#include "hw/arm/allwinner-r40.h"
92
+
188
+#include "hw/misc/allwinner-r40-dramc.h"
93
+ /* Power Management */
189
94
+ object_initialize_child(obj, "powermgt", &s->powermgt,
190
/* Memory map */
95
+ TYPE_BCM2835_POWERMGT);
191
const hwaddr allwinner_r40_memmap[] = {
192
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
193
[AW_R40_DEV_UART6] = 0x01c29800,
194
[AW_R40_DEV_UART7] = 0x01c29c00,
195
[AW_R40_DEV_TWI0] = 0x01c2ac00,
196
+ [AW_R40_DEV_DRAMCOM] = 0x01c62000,
197
+ [AW_R40_DEV_DRAMCTL] = 0x01c63000,
198
+ [AW_R40_DEV_DRAMPHY] = 0x01c65000,
199
[AW_R40_DEV_GIC_DIST] = 0x01c81000,
200
[AW_R40_DEV_GIC_CPU] = 0x01c82000,
201
[AW_R40_DEV_GIC_HYP] = 0x01c84000,
202
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
203
{ "gpu", 0x01c40000, 64 * KiB },
204
{ "gmac", 0x01c50000, 64 * KiB },
205
{ "hstmr", 0x01c60000, 4 * KiB },
206
- { "dram-com", 0x01c62000, 4 * KiB },
207
- { "dram-ctl", 0x01c63000, 4 * KiB },
208
{ "tcon-top", 0x01c70000, 4 * KiB },
209
{ "lcd0", 0x01c71000, 4 * KiB },
210
{ "lcd1", 0x01c72000, 4 * KiB },
211
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
212
}
213
214
object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
215
+
216
+ object_initialize_child(obj, "dramc", &s->dramc, TYPE_AW_R40_DRAMC);
217
+ object_property_add_alias(obj, "ram-addr", OBJECT(&s->dramc),
218
+ "ram-addr");
219
+ object_property_add_alias(obj, "ram-size", OBJECT(&s->dramc),
220
+ "ram-size");
96
}
221
}
97
222
98
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
223
static void allwinner_r40_realize(DeviceState *dev, Error **errp)
99
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
224
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
100
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
225
sysbus_connect_irq(SYS_BUS_DEVICE(&s->i2c0), 0,
101
INTERRUPT_USB));
226
qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_TWI0));
102
227
103
+ /* Power Management */
228
+ /* DRAMC */
104
+ if (!sysbus_realize(SYS_BUS_DEVICE(&s->powermgt), errp)) {
229
+ sysbus_realize(SYS_BUS_DEVICE(&s->dramc), &error_fatal);
105
+ return;
230
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 0,
106
+ }
231
+ s->memmap[AW_R40_DEV_DRAMCOM]);
107
+
232
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 1,
108
+ memory_region_add_subregion(&s->peri_mr, PM_OFFSET,
233
+ s->memmap[AW_R40_DEV_DRAMCTL]);
109
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->powermgt), 0));
234
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 2,
110
+
235
+ s->memmap[AW_R40_DEV_DRAMPHY]);
111
create_unimp(s, &s->txp, "bcm2835-txp", TXP_OFFSET, 0x1000);
236
+
112
create_unimp(s, &s->armtmr, "bcm2835-sp804", ARMCTRL_TIMER0_1_OFFSET, 0x40);
237
/* Unimplemented devices */
113
- create_unimp(s, &s->powermgt, "bcm2835-powermgt", PM_OFFSET, 0x114);
238
for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
114
create_unimp(s, &s->i2s, "bcm2835-i2s", I2S_OFFSET, 0x100);
239
create_unimplemented_device(r40_unimplemented[i].device_name,
115
create_unimp(s, &s->smi, "bcm2835-smi", SMI_OFFSET, 0x100);
240
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
116
create_unimp(s, &s->spi[0], "bcm2835-spi0", SPI0_OFFSET, 0x20);
241
index XXXXXXX..XXXXXXX 100644
117
diff --git a/hw/misc/bcm2835_powermgt.c b/hw/misc/bcm2835_powermgt.c
242
--- a/hw/arm/bananapi_m2u.c
243
+++ b/hw/arm/bananapi_m2u.c
244
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
245
object_property_set_int(OBJECT(r40), "clk1-freq", 24 * 1000 * 1000,
246
&error_abort);
247
248
+ /* DRAMC */
249
+ r40->ram_size = machine->ram_size / MiB;
250
+ object_property_set_uint(OBJECT(r40), "ram-addr",
251
+ r40->memmap[AW_R40_DEV_SDRAM], &error_abort);
252
+ object_property_set_int(OBJECT(r40), "ram-size",
253
+ r40->ram_size, &error_abort);
254
+
255
/* Mark R40 object realized */
256
qdev_realize(DEVICE(r40), NULL, &error_abort);
257
258
diff --git a/hw/misc/allwinner-r40-dramc.c b/hw/misc/allwinner-r40-dramc.c
118
new file mode 100644
259
new file mode 100644
119
index XXXXXXX..XXXXXXX
260
index XXXXXXX..XXXXXXX
120
--- /dev/null
261
--- /dev/null
121
+++ b/hw/misc/bcm2835_powermgt.c
262
+++ b/hw/misc/allwinner-r40-dramc.c
122
@@ -XXX,XX +XXX,XX @@
263
@@ -XXX,XX +XXX,XX @@
123
+/*
264
+/*
124
+ * BCM2835 Power Management emulation
265
+ * Allwinner R40 SDRAM Controller emulation
125
+ *
266
+ *
126
+ * Copyright (C) 2017 Marcin Chojnacki <marcinch7@gmail.com>
267
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
127
+ * Copyright (C) 2021 Nolan Leake <nolan@sigbus.net>
128
+ *
268
+ *
129
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
269
+ * This program is free software: you can redistribute it and/or modify
130
+ * See the COPYING file in the top-level directory.
270
+ * it under the terms of the GNU General Public License as published by
271
+ * the Free Software Foundation, either version 2 of the License, or
272
+ * (at your option) any later version.
273
+ *
274
+ * This program is distributed in the hope that it will be useful,
275
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
276
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
277
+ * GNU General Public License for more details.
278
+ *
279
+ * You should have received a copy of the GNU General Public License
280
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
131
+ */
281
+ */
132
+
282
+
133
+#include "qemu/osdep.h"
283
+#include "qemu/osdep.h"
284
+#include "qemu/units.h"
285
+#include "qemu/error-report.h"
286
+#include "hw/sysbus.h"
287
+#include "migration/vmstate.h"
134
+#include "qemu/log.h"
288
+#include "qemu/log.h"
135
+#include "qemu/module.h"
289
+#include "qemu/module.h"
136
+#include "hw/misc/bcm2835_powermgt.h"
290
+#include "exec/address-spaces.h"
137
+#include "migration/vmstate.h"
291
+#include "hw/qdev-properties.h"
138
+#include "sysemu/runstate.h"
292
+#include "qapi/error.h"
139
+
293
+#include "qemu/bitops.h"
140
+#define PASSWORD 0x5a000000
294
+#include "hw/misc/allwinner-r40-dramc.h"
141
+#define PASSWORD_MASK 0xff000000
295
+#include "trace.h"
142
+
296
+
143
+#define R_RSTC 0x1c
297
+#define REG_INDEX(offset) (offset / sizeof(uint32_t))
144
+#define V_RSTC_RESET 0x20
298
+
145
+#define R_RSTS 0x20
299
+/* DRAMCOM register offsets */
146
+#define V_RSTS_POWEROFF 0x555 /* Linux uses partition 63 to indicate halt. */
300
+enum {
147
+#define R_WDOG 0x24
301
+ REG_DRAMCOM_CR = 0x0000, /* Control Register */
148
+
302
+};
149
+static uint64_t bcm2835_powermgt_read(void *opaque, hwaddr offset,
303
+
150
+ unsigned size)
304
+/* DRAMCOMM register flags */
151
+{
305
+enum {
152
+ BCM2835PowerMgtState *s = (BCM2835PowerMgtState *)opaque;
306
+ REG_DRAMCOM_CR_DUAL_RANK = (1 << 0),
153
+ uint32_t res = 0;
307
+};
308
+
309
+/* DRAMCTL register offsets */
310
+enum {
311
+ REG_DRAMCTL_PIR = 0x0000, /* PHY Initialization Register */
312
+ REG_DRAMCTL_PGSR = 0x0010, /* PHY General Status Register */
313
+ REG_DRAMCTL_STATR = 0x0018, /* Status Register */
314
+ REG_DRAMCTL_PGCR = 0x0100, /* PHY general configuration registers */
315
+};
316
+
317
+/* DRAMCTL register flags */
318
+enum {
319
+ REG_DRAMCTL_PGSR_INITDONE = (1 << 0),
320
+ REG_DRAMCTL_PGSR_READ_TIMEOUT = (1 << 13),
321
+ REG_DRAMCTL_PGCR_ENABLE_READ_TIMEOUT = (1 << 25),
322
+};
323
+
324
+enum {
325
+ REG_DRAMCTL_STATR_ACTIVE = (1 << 0),
326
+};
327
+
328
+#define DRAM_MAX_ROW_BITS 16
329
+#define DRAM_MAX_COL_BITS 13 /* 8192 */
330
+#define DRAM_MAX_BANK 3
331
+
332
+static uint64_t dram_autodetect_cells[DRAM_MAX_ROW_BITS]
333
+ [DRAM_MAX_BANK]
334
+ [DRAM_MAX_COL_BITS];
335
+struct VirtualDDRChip {
336
+ uint32_t ram_size;
337
+ uint8_t bank_bits;
338
+ uint8_t row_bits;
339
+ uint8_t col_bits;
340
+};
341
+
342
+/*
343
+ * Only power-of-two RAM sizes from 256 MiB up to 1 GiB are supported;
344
+ * 2 GiB memory is not supported because of the dual-rank feature.
345
+ */
346
+static const struct VirtualDDRChip dummy_ddr_chips[] = {
347
+ {
348
+ .ram_size = 256,
349
+ .bank_bits = 3,
350
+ .row_bits = 12,
351
+ .col_bits = 13,
352
+ }, {
353
+ .ram_size = 512,
354
+ .bank_bits = 3,
355
+ .row_bits = 13,
356
+ .col_bits = 13,
357
+ }, {
358
+ .ram_size = 1024,
359
+ .bank_bits = 3,
360
+ .row_bits = 14,
361
+ .col_bits = 13,
362
+ }, {
363
+ 0
364
+ }
365
+};
366
+
367
+static const struct VirtualDDRChip *get_match_ddr(uint32_t ram_size)
368
+{
369
+ const struct VirtualDDRChip *ddr;
370
+
371
+ for (ddr = &dummy_ddr_chips[0]; ddr->ram_size; ddr++) {
372
+ if (ddr->ram_size == ram_size) {
373
+ return ddr;
374
+ }
375
+ }
376
+
377
+ return NULL;
378
+}
379
+
380
+static uint64_t *address_to_autodetect_cells(AwR40DramCtlState *s,
381
+ const struct VirtualDDRChip *ddr,
382
+ uint32_t offset)
383
+{
384
+ int row_index = 0, bank_index = 0, col_index = 0;
385
+ uint32_t row_addr, bank_addr, col_addr;
386
+
387
+ row_addr = extract32(offset, s->set_col_bits + s->set_bank_bits,
388
+ s->set_row_bits);
389
+ bank_addr = extract32(offset, s->set_col_bits, s->set_bank_bits);
390
+ col_addr = extract32(offset, 0, s->set_col_bits);
391
+
392
+ for (int i = 0; i < ddr->row_bits; i++) {
393
+ if (row_addr & BIT(i)) {
394
+ row_index = i;
395
+ }
396
+ }
397
+
398
+ for (int i = 0; i < ddr->bank_bits; i++) {
399
+ if (bank_addr & BIT(i)) {
400
+ bank_index = i;
401
+ }
402
+ }
403
+
404
+ for (int i = 0; i < ddr->col_bits; i++) {
405
+ if (col_addr & BIT(i)) {
406
+ col_index = i;
407
+ }
408
+ }
409
+
410
+ trace_allwinner_r40_dramc_offset_to_cell(offset, row_index, bank_index,
411
+ col_index);
412
+ return &dram_autodetect_cells[row_index][bank_index][col_index];
413
+}
414
+
415
+static void allwinner_r40_dramc_map_rows(AwR40DramCtlState *s, uint8_t row_bits,
416
+ uint8_t bank_bits, uint8_t col_bits)
417
+{
418
+ const struct VirtualDDRChip *ddr = get_match_ddr(s->ram_size);
419
+ bool enable_detect_cells;
420
+
421
+ trace_allwinner_r40_dramc_map_rows(row_bits, bank_bits, col_bits);
422
+
423
+ if (!ddr) {
424
+ return;
425
+ }
426
+
427
+ s->set_row_bits = row_bits;
428
+ s->set_bank_bits = bank_bits;
429
+ s->set_col_bits = col_bits;
430
+
431
+ enable_detect_cells = ddr->bank_bits != bank_bits
432
+ || ddr->row_bits != row_bits
433
+ || ddr->col_bits != col_bits;
434
+
435
+ if (enable_detect_cells) {
436
+ trace_allwinner_r40_dramc_detect_cells_enable();
437
+ } else {
438
+ trace_allwinner_r40_dramc_detect_cells_disable();
439
+ }
440
+
441
+ memory_region_set_enabled(&s->detect_cells, enable_detect_cells);
442
+}
443
+
444
+static uint64_t allwinner_r40_dramcom_read(void *opaque, hwaddr offset,
445
+ unsigned size)
446
+{
447
+ const AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
448
+ const uint32_t idx = REG_INDEX(offset);
449
+
450
+ if (idx >= AW_R40_DRAMCOM_REGS_NUM) {
451
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
452
+ __func__, (uint32_t)offset);
453
+ return 0;
454
+ }
455
+
456
+ trace_allwinner_r40_dramcom_read(offset, s->dramcom[idx], size);
457
+ return s->dramcom[idx];
458
+}
459
+
460
+static void allwinner_r40_dramcom_write(void *opaque, hwaddr offset,
461
+ uint64_t val, unsigned size)
462
+{
463
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
464
+ const uint32_t idx = REG_INDEX(offset);
465
+
466
+ trace_allwinner_r40_dramcom_write(offset, val, size);
467
+
468
+ if (idx >= AW_R40_DRAMCOM_REGS_NUM) {
469
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
470
+ __func__, (uint32_t)offset);
471
+ return;
472
+ }
154
+
473
+
155
+ switch (offset) {
474
+ switch (offset) {
156
+ case R_RSTC:
475
+ case REG_DRAMCOM_CR: /* Control Register */
157
+ res = s->rstc;
476
+ if (!(val & REG_DRAMCOM_CR_DUAL_RANK)) {
158
+ break;
477
+ allwinner_r40_dramc_map_rows(s, ((val >> 4) & 0xf) + 1,
159
+ case R_RSTS:
478
+ ((val >> 2) & 0x1) + 2,
160
+ res = s->rsts;
479
+ (((val >> 8) & 0xf) + 3));
161
+ break;
162
+ case R_WDOG:
163
+ res = s->wdog;
164
+ break;
165
+
166
+ default:
167
+ qemu_log_mask(LOG_UNIMP,
168
+ "bcm2835_powermgt_read: Unknown offset 0x%08"HWADDR_PRIx
169
+ "\n", offset);
170
+ res = 0;
171
+ break;
172
+ }
173
+
174
+ return res;
175
+}
176
+
177
+static void bcm2835_powermgt_write(void *opaque, hwaddr offset,
178
+ uint64_t value, unsigned size)
179
+{
180
+ BCM2835PowerMgtState *s = (BCM2835PowerMgtState *)opaque;
181
+
182
+ if ((value & PASSWORD_MASK) != PASSWORD) {
183
+ qemu_log_mask(LOG_GUEST_ERROR,
184
+ "bcm2835_powermgt_write: Bad password 0x%"PRIx64
185
+ " at offset 0x%08"HWADDR_PRIx"\n",
186
+ value, offset);
187
+ return;
188
+ }
189
+
190
+ value = value & ~PASSWORD_MASK;
191
+
192
+ switch (offset) {
193
+ case R_RSTC:
194
+ s->rstc = value;
195
+ if (value & V_RSTC_RESET) {
196
+ if ((s->rsts & 0xfff) == V_RSTS_POWEROFF) {
197
+ qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
198
+ } else {
199
+ qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
200
+ }
201
+ }
480
+ }
202
+ break;
481
+ break;
203
+ case R_RSTS:
482
+ };
204
+ qemu_log_mask(LOG_UNIMP,
483
+
205
+ "bcm2835_powermgt_write: RSTS\n");
484
+ s->dramcom[idx] = (uint32_t) val;
206
+ s->rsts = value;
485
+}
486
+
487
+static uint64_t allwinner_r40_dramctl_read(void *opaque, hwaddr offset,
488
+ unsigned size)
489
+{
490
+ const AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
491
+ const uint32_t idx = REG_INDEX(offset);
492
+
493
+ if (idx >= AW_R40_DRAMCTL_REGS_NUM) {
494
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
495
+ __func__, (uint32_t)offset);
496
+ return 0;
497
+ }
498
+
499
+ trace_allwinner_r40_dramctl_read(offset, s->dramctl[idx], size);
500
+ return s->dramctl[idx];
501
+}
502
+
503
+static void allwinner_r40_dramctl_write(void *opaque, hwaddr offset,
504
+ uint64_t val, unsigned size)
505
+{
506
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
507
+ const uint32_t idx = REG_INDEX(offset);
508
+
509
+ trace_allwinner_r40_dramctl_write(offset, val, size);
510
+
511
+ if (idx >= AW_R40_DRAMCTL_REGS_NUM) {
512
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
513
+ __func__, (uint32_t)offset);
514
+ return;
515
+ }
516
+
517
+ switch (offset) {
518
+ case REG_DRAMCTL_PIR: /* PHY Initialization Register */
519
+ s->dramctl[REG_INDEX(REG_DRAMCTL_PGSR)] |= REG_DRAMCTL_PGSR_INITDONE;
520
+ s->dramctl[REG_INDEX(REG_DRAMCTL_STATR)] |= REG_DRAMCTL_STATR_ACTIVE;
207
+ break;
521
+ break;
208
+ case R_WDOG:
522
+ }
209
+ qemu_log_mask(LOG_UNIMP,
523
+
210
+ "bcm2835_powermgt_write: WDOG\n");
524
+ s->dramctl[idx] = (uint32_t) val;
211
+ s->wdog = value;
525
+}
212
+ break;
526
+
213
+
527
+static uint64_t allwinner_r40_dramphy_read(void *opaque, hwaddr offset,
214
+ default:
528
+ unsigned size)
215
+ qemu_log_mask(LOG_UNIMP,
529
+{
216
+ "bcm2835_powermgt_write: Unknown offset 0x%08"HWADDR_PRIx
530
+ const AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
217
+ "\n", offset);
531
+ const uint32_t idx = REG_INDEX(offset);
218
+ break;
532
+
219
+ }
533
+ if (idx >= AW_R40_DRAMPHY_REGS_NUM) {
220
+}
534
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
221
+
535
+ __func__, (uint32_t)offset);
222
+static const MemoryRegionOps bcm2835_powermgt_ops = {
536
+ return 0;
223
+ .read = bcm2835_powermgt_read,
537
+ }
224
+ .write = bcm2835_powermgt_write,
538
+
539
+ trace_allwinner_r40_dramphy_read(offset, s->dramphy[idx], size);
540
+ return s->dramphy[idx];
541
+}
542
+
543
+static void allwinner_r40_dramphy_write(void *opaque, hwaddr offset,
544
+ uint64_t val, unsigned size)
545
+{
546
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
547
+ const uint32_t idx = REG_INDEX(offset);
548
+
549
+ trace_allwinner_r40_dramphy_write(offset, val, size);
550
+
551
+ if (idx >= AW_R40_DRAMPHY_REGS_NUM) {
552
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
553
+ __func__, (uint32_t)offset);
554
+ return;
555
+ }
556
+
557
+ s->dramphy[idx] = (uint32_t) val;
558
+}
559
+
560
+static const MemoryRegionOps allwinner_r40_dramcom_ops = {
561
+ .read = allwinner_r40_dramcom_read,
562
+ .write = allwinner_r40_dramcom_write,
225
+ .endianness = DEVICE_NATIVE_ENDIAN,
563
+ .endianness = DEVICE_NATIVE_ENDIAN,
564
+ .valid = {
565
+ .min_access_size = 4,
566
+ .max_access_size = 4,
567
+ },
226
+ .impl.min_access_size = 4,
568
+ .impl.min_access_size = 4,
227
+ .impl.max_access_size = 4,
569
+};
228
+};
570
+
229
+
571
+static const MemoryRegionOps allwinner_r40_dramctl_ops = {
230
+static const VMStateDescription vmstate_bcm2835_powermgt = {
572
+ .read = allwinner_r40_dramctl_read,
231
+ .name = TYPE_BCM2835_POWERMGT,
573
+ .write = allwinner_r40_dramctl_write,
574
+ .endianness = DEVICE_NATIVE_ENDIAN,
575
+ .valid = {
576
+ .min_access_size = 4,
577
+ .max_access_size = 4,
578
+ },
579
+ .impl.min_access_size = 4,
580
+};
581
+
582
+static const MemoryRegionOps allwinner_r40_dramphy_ops = {
583
+ .read = allwinner_r40_dramphy_read,
584
+ .write = allwinner_r40_dramphy_write,
585
+ .endianness = DEVICE_NATIVE_ENDIAN,
586
+ .valid = {
587
+ .min_access_size = 4,
588
+ .max_access_size = 4,
589
+ },
590
+ .impl.min_access_size = 4,
591
+};
592
+
593
+static uint64_t allwinner_r40_detect_read(void *opaque, hwaddr offset,
594
+ unsigned size)
595
+{
596
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
597
+ const struct VirtualDDRChip *ddr = get_match_ddr(s->ram_size);
598
+ uint64_t data = 0;
599
+
600
+ if (ddr) {
601
+ data = *address_to_autodetect_cells(s, ddr, (uint32_t)offset);
602
+ }
603
+
604
+ trace_allwinner_r40_dramc_detect_cell_read(offset, data);
605
+ return data;
606
+}
607
+
608
+static void allwinner_r40_detect_write(void *opaque, hwaddr offset,
609
+ uint64_t data, unsigned size)
610
+{
611
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
612
+ const struct VirtualDDRChip *ddr = get_match_ddr(s->ram_size);
613
+
614
+ if (ddr) {
615
+ uint64_t *cell = address_to_autodetect_cells(s, ddr, (uint32_t)offset);
616
+ trace_allwinner_r40_dramc_detect_cell_write(offset, data);
617
+ *cell = data;
618
+ }
619
+}
620
+
621
+static const MemoryRegionOps allwinner_r40_detect_ops = {
622
+ .read = allwinner_r40_detect_read,
623
+ .write = allwinner_r40_detect_write,
624
+ .endianness = DEVICE_NATIVE_ENDIAN,
625
+ .valid = {
626
+ .min_access_size = 4,
627
+ .max_access_size = 4,
628
+ },
629
+ .impl.min_access_size = 4,
630
+};
631
+
632
+/*
633
+ * mctl_r40_detect_rank_count in u-boot will write the high 1G of DDR
634
+ * to detect wether the board support dual_rank or not. Create a virtual memory
635
+ * if the board's ram_size less or equal than 1G, and set read time out flag of
636
+ * REG_DRAMCTL_PGSR when the user touch this high dram.
637
+ */
638
+static uint64_t allwinner_r40_dualrank_detect_read(void *opaque, hwaddr offset,
639
+ unsigned size)
640
+{
641
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
642
+ uint32_t reg;
643
+
644
+ reg = s->dramctl[REG_INDEX(REG_DRAMCTL_PGCR)];
645
+ if (reg & REG_DRAMCTL_PGCR_ENABLE_READ_TIMEOUT) { /* Enable read time out */
646
+ /*
647
+ * this driver only support one rank, mark READ_TIMEOUT when try
648
+ * read the second rank.
649
+ */
650
+ s->dramctl[REG_INDEX(REG_DRAMCTL_PGSR)]
651
+ |= REG_DRAMCTL_PGSR_READ_TIMEOUT;
652
+ }
653
+
654
+ return 0;
655
+}
656
+
657
+static const MemoryRegionOps allwinner_r40_dualrank_detect_ops = {
658
+ .read = allwinner_r40_dualrank_detect_read,
659
+ .endianness = DEVICE_NATIVE_ENDIAN,
660
+ .valid = {
661
+ .min_access_size = 4,
662
+ .max_access_size = 4,
663
+ },
664
+ .impl.min_access_size = 4,
665
+};
666
+
667
+static void allwinner_r40_dramc_reset(DeviceState *dev)
668
+{
669
+ AwR40DramCtlState *s = AW_R40_DRAMC(dev);
670
+
671
+ /* Set default values for registers */
672
+ memset(&s->dramcom, 0, sizeof(s->dramcom));
673
+ memset(&s->dramctl, 0, sizeof(s->dramctl));
674
+ memset(&s->dramphy, 0, sizeof(s->dramphy));
675
+}
676
+
677
+static void allwinner_r40_dramc_realize(DeviceState *dev, Error **errp)
678
+{
679
+ AwR40DramCtlState *s = AW_R40_DRAMC(dev);
680
+
681
+ if (!get_match_ddr(s->ram_size)) {
682
+ error_report("%s: ram-size %u MiB is not supported",
683
+ __func__, s->ram_size);
684
+ exit(1);
685
+ }
686
+
687
+ /* detect_cells */
688
+ sysbus_mmio_map_overlap(SYS_BUS_DEVICE(s), 3, s->ram_addr, 10);
689
+ memory_region_set_enabled(&s->detect_cells, false);
690
+
691
+ /*
692
+ * We only support DRAM size up to 1G now, so prepare a high memory page
693
+ * after 1G for dualrank detect. index = 4
694
+ */
695
+ memory_region_init_io(&s->dram_high, OBJECT(s),
696
+ &allwinner_r40_dualrank_detect_ops, s,
697
+ "DRAMHIGH", KiB);
698
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->dram_high);
699
+ sysbus_mmio_map(SYS_BUS_DEVICE(s), 4, s->ram_addr + GiB);
700
+}
701
+
702
+static void allwinner_r40_dramc_init(Object *obj)
703
+{
704
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
705
+ AwR40DramCtlState *s = AW_R40_DRAMC(obj);
706
+
707
+ /* DRAMCOM registers, index 0 */
708
+ memory_region_init_io(&s->dramcom_iomem, OBJECT(s),
709
+ &allwinner_r40_dramcom_ops, s,
710
+ "DRAMCOM", 4 * KiB);
711
+ sysbus_init_mmio(sbd, &s->dramcom_iomem);
712
+
713
+ /* DRAMCTL registers, index 1 */
714
+ memory_region_init_io(&s->dramctl_iomem, OBJECT(s),
715
+ &allwinner_r40_dramctl_ops, s,
716
+ "DRAMCTL", 4 * KiB);
717
+ sysbus_init_mmio(sbd, &s->dramctl_iomem);
718
+
719
+ /* DRAMPHY registers. index 2 */
720
+ memory_region_init_io(&s->dramphy_iomem, OBJECT(s),
721
+ &allwinner_r40_dramphy_ops, s,
722
+ "DRAMPHY", 4 * KiB);
723
+ sysbus_init_mmio(sbd, &s->dramphy_iomem);
724
+
725
+ /* R40 supports up to 2 GiB of memory, but we only model 1 GiB now. index 3 */
726
+ memory_region_init_io(&s->detect_cells, OBJECT(s),
727
+ &allwinner_r40_detect_ops, s,
728
+ "DRAMCELLS", 1 * GiB);
729
+ sysbus_init_mmio(sbd, &s->detect_cells);
730
+}
731
+
732
+static Property allwinner_r40_dramc_properties[] = {
733
+ DEFINE_PROP_UINT64("ram-addr", AwR40DramCtlState, ram_addr, 0x0),
734
+ DEFINE_PROP_UINT32("ram-size", AwR40DramCtlState, ram_size, 256), /* MiB */
735
+ DEFINE_PROP_END_OF_LIST()
736
+};
737
+
738
+static const VMStateDescription allwinner_r40_dramc_vmstate = {
739
+ .name = "allwinner-r40-dramc",
232
+ .version_id = 1,
740
+ .version_id = 1,
233
+ .minimum_version_id = 1,
741
+ .minimum_version_id = 1,
234
+ .fields = (VMStateField[]) {
742
+ .fields = (VMStateField[]) {
235
+ VMSTATE_UINT32(rstc, BCM2835PowerMgtState),
743
+ VMSTATE_UINT32_ARRAY(dramcom, AwR40DramCtlState,
236
+ VMSTATE_UINT32(rsts, BCM2835PowerMgtState),
744
+ AW_R40_DRAMCOM_REGS_NUM),
237
+ VMSTATE_UINT32(wdog, BCM2835PowerMgtState),
745
+ VMSTATE_UINT32_ARRAY(dramctl, AwR40DramCtlState,
746
+ AW_R40_DRAMCTL_REGS_NUM),
747
+ VMSTATE_UINT32_ARRAY(dramphy, AwR40DramCtlState,
748
+ AW_R40_DRAMPHY_REGS_NUM),
238
+ VMSTATE_END_OF_LIST()
749
+ VMSTATE_END_OF_LIST()
239
+ }
750
+ }
240
+};
751
+};
241
+
752
+
242
+static void bcm2835_powermgt_init(Object *obj)
753
+static void allwinner_r40_dramc_class_init(ObjectClass *klass, void *data)
243
+{
244
+ BCM2835PowerMgtState *s = BCM2835_POWERMGT(obj);
245
+
246
+ memory_region_init_io(&s->iomem, obj, &bcm2835_powermgt_ops, s,
247
+ TYPE_BCM2835_POWERMGT, 0x200);
248
+ sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem);
249
+}
250
+
251
+static void bcm2835_powermgt_reset(DeviceState *dev)
252
+{
253
+ BCM2835PowerMgtState *s = BCM2835_POWERMGT(dev);
254
+
255
+ /* https://elinux.org/BCM2835_registers#PM */
256
+ s->rstc = 0x00000102;
257
+ s->rsts = 0x00001000;
258
+ s->wdog = 0x00000000;
259
+}
260
+
261
+static void bcm2835_powermgt_class_init(ObjectClass *klass, void *data)
262
+{
754
+{
263
+ DeviceClass *dc = DEVICE_CLASS(klass);
755
+ DeviceClass *dc = DEVICE_CLASS(klass);
264
+
756
+
265
+ dc->reset = bcm2835_powermgt_reset;
757
+ dc->reset = allwinner_r40_dramc_reset;
266
+ dc->vmsd = &vmstate_bcm2835_powermgt;
758
+ dc->vmsd = &allwinner_r40_dramc_vmstate;
267
+}
759
+ dc->realize = allwinner_r40_dramc_realize;
268
+
760
+ device_class_set_props(dc, allwinner_r40_dramc_properties);
269
+static TypeInfo bcm2835_powermgt_info = {
761
+}
270
+ .name = TYPE_BCM2835_POWERMGT,
762
+
763
+static const TypeInfo allwinner_r40_dramc_info = {
764
+ .name = TYPE_AW_R40_DRAMC,
271
+ .parent = TYPE_SYS_BUS_DEVICE,
765
+ .parent = TYPE_SYS_BUS_DEVICE,
272
+ .instance_size = sizeof(BCM2835PowerMgtState),
766
+ .instance_init = allwinner_r40_dramc_init,
273
+ .class_init = bcm2835_powermgt_class_init,
767
+ .instance_size = sizeof(AwR40DramCtlState),
274
+ .instance_init = bcm2835_powermgt_init,
768
+ .class_init = allwinner_r40_dramc_class_init,
275
+};
769
+};
276
+
770
+
277
+static void bcm2835_powermgt_register_types(void)
771
+static void allwinner_r40_dramc_register(void)
278
+{
772
+{
279
+ type_register_static(&bcm2835_powermgt_info);
773
+ type_register_static(&allwinner_r40_dramc_info);
280
+}
774
+}
281
+
775
+
282
+type_init(bcm2835_powermgt_register_types)
776
+type_init(allwinner_r40_dramc_register)
283
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
777
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
284
index XXXXXXX..XXXXXXX 100644
778
index XXXXXXX..XXXXXXX 100644
285
--- a/hw/misc/meson.build
779
--- a/hw/misc/meson.build
286
+++ b/hw/misc/meson.build
780
+++ b/hw/misc/meson.build
287
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_RASPI', if_true: files(
781
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-dramc.c
288
'bcm2835_rng.c',
782
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-sysctrl.c'))
289
'bcm2835_thermal.c',
783
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-sid.c'))
290
'bcm2835_cprman.c',
784
softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-ccu.c'))
291
+ 'bcm2835_powermgt.c',
785
+softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-dramc.c'))
292
))
786
softmmu_ss.add(when: 'CONFIG_AXP2XX_PMU', if_true: files('axp2xx.c'))
293
softmmu_ss.add(when: 'CONFIG_SLAVIO', if_true: files('slavio_misc.c'))
787
softmmu_ss.add(when: 'CONFIG_REALVIEW', if_true: files('arm_sysctl.c'))
294
softmmu_ss.add(when: 'CONFIG_ZYNQ', if_true: files('zynq_slcr.c', 'zynq-xadc.c'))
788
softmmu_ss.add(when: 'CONFIG_NSERIES', if_true: files('cbus.c'))
789
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
790
index XXXXXXX..XXXXXXX 100644
791
--- a/hw/misc/trace-events
792
+++ b/hw/misc/trace-events
793
@@ -XXX,XX +XXX,XX @@ allwinner_h3_dramctl_write(uint64_t offset, uint64_t data, unsigned size) "Write
794
allwinner_h3_dramphy_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
795
allwinner_h3_dramphy_write(uint64_t offset, uint64_t data, unsigned size) "write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
796
797
+# allwinner-r40-dramc.c
798
+allwinner_r40_dramc_detect_cells_disable(void) "Disable detect cells"
799
+allwinner_r40_dramc_detect_cells_enable(void) "Enable detect cells"
800
+allwinner_r40_dramc_map_rows(uint8_t row_bits, uint8_t bank_bits, uint8_t col_bits) "DRAM layout: row_bits %d, bank_bits %d, col_bits %d"
801
+allwinner_r40_dramc_offset_to_cell(uint64_t offset, int row, int bank, int col) "offset 0x%" PRIx64 " row %d bank %d col %d"
802
+allwinner_r40_dramc_detect_cell_write(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64 ""
803
+allwinner_r40_dramc_detect_cell_read(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64 ""
804
+allwinner_r40_dramcom_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
805
+allwinner_r40_dramcom_write(uint64_t offset, uint64_t data, unsigned size) "Write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
806
+allwinner_r40_dramctl_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
807
+allwinner_r40_dramctl_write(uint64_t offset, uint64_t data, unsigned size) "Write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
808
+allwinner_r40_dramphy_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
809
+allwinner_r40_dramphy_write(uint64_t offset, uint64_t data, unsigned size) "write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
810
+
811
# allwinner-sid.c
812
allwinner_sid_read(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
813
allwinner_sid_write(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
295
--
814
--
296
2.20.1
815
2.34.1
297
298
1
Implement the MVE shifts by register, which perform
1
From: qianfan Zhao <qianfanguijin@163.com>
2
shifts on a single general-purpose register.
2
3
3
The A64 SD host register layout is similar to the H3's, but it introduces a new
4
register named SAMP_DL_REG located at offset 0x144. The DMA descriptor buffer
5
size for mmc2 is only 8 KiB, while the other MMC controllers have 64 KiB.
6
7
Also fix the allwinner-r40 MMC controller type.
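
Purely as an illustration of the guest-visible behaviour added here, a hypothetical qtest-style check could look like the sketch below. It assumes the board is started as "-machine bpim2u" and uses the MMC2 base address 0x01c11000 from the R40 memory map in this series; treat both as assumptions to verify against your tree rather than something this patch provides.

    #include "qemu/osdep.h"
    #include "libqtest.h"

    #define MMC2_BASE   0x01c11000   /* AW_R40_DEV_MMC2 in allwinner-r40.c */
    #define SD_SAMP_DL  0x144        /* REG_SD_SAMP_DL, sun50i-a64 variants only */

    static void test_samp_dl(void)
    {
        QTestState *qts = qtest_init("-machine bpim2u");

        /* The patch documents 0x00002000 as the reset value. */
        g_assert_cmphex(qtest_readl(qts, MMC2_BASE + SD_SAMP_DL), ==, 0x2000);

        /* On calibrating variants the register is plain read/write storage. */
        qtest_writel(qts, MMC2_BASE + SD_SAMP_DL, 0xa5);
        g_assert_cmphex(qtest_readl(qts, MMC2_BASE + SD_SAMP_DL), ==, 0xa5);

        qtest_quit(qts);
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        qtest_add_func("/allwinner-sdhost/samp-dl", test_samp_dl);
        return g_test_run();
    }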
8
9
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20210628135835.6690-19-peter.maydell@linaro.org
7
---
11
---
8
target/arm/helper-mve.h | 2 ++
12
include/hw/sd/allwinner-sdhost.h | 9 ++++
9
target/arm/translate.h | 1 +
13
hw/arm/allwinner-r40.c | 2 +-
10
target/arm/t32.decode | 18 ++++++++++++++----
14
hw/sd/allwinner-sdhost.c | 72 ++++++++++++++++++++++++++++++--
11
target/arm/mve_helper.c | 10 ++++++++++
15
3 files changed, 79 insertions(+), 4 deletions(-)
12
target/arm/translate.c | 30 ++++++++++++++++++++++++++++++
16
13
5 files changed, 57 insertions(+), 4 deletions(-)
17
diff --git a/include/hw/sd/allwinner-sdhost.h b/include/hw/sd/allwinner-sdhost.h
14
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-mve.h
19
--- a/include/hw/sd/allwinner-sdhost.h
18
+++ b/target/arm/helper-mve.h
20
+++ b/include/hw/sd/allwinner-sdhost.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_uqrshll48, TCG_CALL_NO_RWG, i64, env, i64, i32)
21
@@ -XXX,XX +XXX,XX @@
20
22
/** Allwinner sun5i family and newer (A13, H2+, H3, etc) */
21
DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
23
#define TYPE_AW_SDHOST_SUN5I TYPE_AW_SDHOST "-sun5i"
22
DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
24
23
+DEF_HELPER_FLAGS_3(mve_uqrshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
25
+/** Allwinner sun50i-a64 */
24
+DEF_HELPER_FLAGS_3(mve_sqrshr, TCG_CALL_NO_RWG, i32, env, i32, i32)
26
+#define TYPE_AW_SDHOST_SUN50I_A64 TYPE_AW_SDHOST "-sun50i-a64"
25
diff --git a/target/arm/translate.h b/target/arm/translate.h
27
+
28
+/** Allwinner sun50i-a64 emmc */
29
+#define TYPE_AW_SDHOST_SUN50I_A64_EMMC TYPE_AW_SDHOST "-sun50i-a64-emmc"
30
+
31
/** @} */
32
33
/**
34
@@ -XXX,XX +XXX,XX @@ struct AwSdHostState {
35
uint32_t startbit_detect; /**< eMMC DDR Start Bit Detection Control */
36
uint32_t response_crc; /**< Response CRC */
37
uint32_t data_crc[8]; /**< Data CRC */
38
+ uint32_t sample_delay; /**< Sample delay control */
39
uint32_t status_crc; /**< Status CRC */
40
41
/** @} */
42
@@ -XXX,XX +XXX,XX @@ struct AwSdHostClass {
43
size_t max_desc_size;
44
bool is_sun4i;
45
46
+ /** does the IP block support autocalibration? */
47
+ bool can_calibrate;
48
};
49
50
#endif /* HW_SD_ALLWINNER_SDHOST_H */
51
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
26
index XXXXXXX..XXXXXXX 100644
52
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/translate.h
53
--- a/hw/arm/allwinner-r40.c
28
+++ b/target/arm/translate.h
54
+++ b/hw/arm/allwinner-r40.c
29
@@ -XXX,XX +XXX,XX @@ typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
55
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
30
typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
56
31
typedef void WideShiftFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i32);
57
for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
32
typedef void ShiftImmFn(TCGv_i32, TCGv_i32, int32_t shift);
58
object_initialize_child(obj, mmc_names[i], &s->mmc[i],
33
+typedef void ShiftFn(TCGv_i32, TCGv_ptr, TCGv_i32, TCGv_i32);
59
- TYPE_AW_SDHOST_SUN5I);
34
60
+ TYPE_AW_SDHOST_SUN50I_A64);
35
/**
61
}
36
* arm_tbflags_from_tb:
62
37
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
63
object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
64
diff --git a/hw/sd/allwinner-sdhost.c b/hw/sd/allwinner-sdhost.c
38
index XXXXXXX..XXXXXXX 100644
65
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/t32.decode
66
--- a/hw/sd/allwinner-sdhost.c
40
+++ b/target/arm/t32.decode
67
+++ b/hw/sd/allwinner-sdhost.c
41
@@ -XXX,XX +XXX,XX @@
68
@@ -XXX,XX +XXX,XX @@ enum {
42
&mve_shl_ri rdalo rdahi shim
69
REG_SD_DATA1_CRC = 0x12C, /* CRC Data 1 from card/eMMC */
43
&mve_shl_rr rdalo rdahi rm
70
REG_SD_DATA0_CRC = 0x130, /* CRC Data 0 from card/eMMC */
44
&mve_sh_ri rda shim
71
REG_SD_CRC_STA = 0x134, /* CRC status from card/eMMC during write */
45
+&mve_sh_rr rda rm
72
+ REG_SD_SAMP_DL = 0x144, /* Sample Delay Control (sun50i-a64) */
46
73
REG_SD_FIFO = 0x200, /* Read/Write FIFO */
47
# rdahi: bits [3:1] from insn, bit 0 is 1
74
};
48
# rdalo: bits [3:1] from insn, bit 0 is 0
75
49
@@ -XXX,XX +XXX,XX @@
76
@@ -XXX,XX +XXX,XX @@ enum {
50
&mve_shl_rr rdalo=%rdalo_17 rdahi=%rdahi_9
77
REG_SD_RES_CRC_RST = 0x0,
51
@mve_sh_ri ....... .... . rda:4 . ... ... . .. .. .... \
78
REG_SD_DATA_CRC_RST = 0x0,
52
&mve_sh_ri shim=%imm5_12_6
79
REG_SD_CRC_STA_RST = 0x0,
53
+@mve_sh_rr ....... .... . rda:4 rm:4 .... .... .... &mve_sh_rr
80
+ REG_SD_SAMPLE_DL_RST = 0x00002000,
54
81
REG_SD_FIFO_RST = 0x0,
82
};
83
84
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sdhost_read(void *opaque, hwaddr offset,
55
{
85
{
56
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
86
AwSdHostState *s = AW_SDHOST(opaque);
57
@@ -XXX,XX +XXX,XX @@ BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
87
AwSdHostClass *sc = AW_SDHOST_GET_CLASS(s);
58
SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
88
+ bool out_of_bounds = false;
59
}
89
uint32_t res = 0;
60
90
61
- LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
91
switch (offset) {
62
- ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
92
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sdhost_read(void *opaque, hwaddr offset,
63
- UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101 @mve_shl_rr
93
case REG_SD_FIFO: /* Read/Write FIFO */
64
- SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101 @mve_shl_rr
94
res = allwinner_sdhost_fifo_read(s);
65
+ {
95
break;
66
+ UQRSHL_rr 1110101 0010 1 .... .... 1111 0000 1101 @mve_sh_rr
96
+ case REG_SD_SAMP_DL: /* Sample Delay */
67
+ LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
97
+ if (sc->can_calibrate) {
68
+ UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101 @mve_shl_rr
98
+ res = s->sample_delay;
99
+ } else {
100
+ out_of_bounds = true;
101
+ }
102
+ break;
103
default:
104
- qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset %"
105
- HWADDR_PRIx"\n", __func__, offset);
106
+ out_of_bounds = true;
107
res = 0;
108
break;
109
}
110
111
+ if (out_of_bounds) {
112
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset %"
113
+ HWADDR_PRIx"\n", __func__, offset);
69
+ }
114
+ }
70
+
115
+
71
+ {
116
trace_allwinner_sdhost_read(offset, res, size);
72
+ SQRSHR_rr 1110101 0010 1 .... .... 1111 0010 1101 @mve_sh_rr
117
return res;
73
+ ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
118
}
74
+ SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101 @mve_shl_rr
119
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_write(void *opaque, hwaddr offset,
120
{
121
AwSdHostState *s = AW_SDHOST(opaque);
122
AwSdHostClass *sc = AW_SDHOST_GET_CLASS(s);
123
+ bool out_of_bounds = false;
124
125
trace_allwinner_sdhost_write(offset, value, size);
126
127
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_write(void *opaque, hwaddr offset,
128
case REG_SD_DATA0_CRC: /* CRC Data 0 from card/eMMC */
129
case REG_SD_CRC_STA: /* CRC status from card/eMMC in write operation */
130
break;
131
+ case REG_SD_SAMP_DL: /* Sample delay control */
132
+ if (sc->can_calibrate) {
133
+ s->sample_delay = value;
134
+ } else {
135
+ out_of_bounds = true;
136
+ }
137
+ break;
138
default:
139
+ out_of_bounds = true;
140
+ break;
75
+ }
141
+ }
76
+
142
+
77
UQRSHLL48_rr 1110101 0010 1 ... 1 .... ... 1 1000 1101 @mve_shl_rr
143
+ if (out_of_bounds) {
78
SQRSHRL48_rr 1110101 0010 1 ... 1 .... ... 1 1010 1101 @mve_shl_rr
144
qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset %"
79
]
145
HWADDR_PRIx"\n", __func__, offset);
80
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
146
- break;
81
index XXXXXXX..XXXXXXX 100644
147
}
82
--- a/target/arm/mve_helper.c
148
}
83
+++ b/target/arm/mve_helper.c
149
84
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mve_sqshl)(CPUARMState *env, uint32_t n, uint32_t shift)
150
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_allwinner_sdhost = {
151
VMSTATE_UINT32(response_crc, AwSdHostState),
152
VMSTATE_UINT32_ARRAY(data_crc, AwSdHostState, 8),
153
VMSTATE_UINT32(status_crc, AwSdHostState),
154
+ VMSTATE_UINT32(sample_delay, AwSdHostState),
155
VMSTATE_END_OF_LIST()
156
}
157
};
158
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_realize(DeviceState *dev, Error **errp)
159
static void allwinner_sdhost_reset(DeviceState *dev)
85
{
160
{
86
return do_sqrshl_bhs(n, (int8_t)shift, 32, false, &env->QF);
161
AwSdHostState *s = AW_SDHOST(dev);
87
}
162
+ AwSdHostClass *sc = AW_SDHOST_GET_CLASS(s);
88
+
163
89
+uint32_t HELPER(mve_uqrshl)(CPUARMState *env, uint32_t n, uint32_t shift)
164
s->global_ctl = REG_SD_GCTL_RST;
165
s->clock_ctl = REG_SD_CKCR_RST;
166
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_reset(DeviceState *dev)
167
}
168
169
s->status_crc = REG_SD_CRC_STA_RST;
170
+
171
+ if (sc->can_calibrate) {
172
+ s->sample_delay = REG_SD_SAMPLE_DL_RST;
173
+ }
174
}
175
176
static void allwinner_sdhost_bus_class_init(ObjectClass *klass, void *data)
177
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_sun4i_class_init(ObjectClass *klass, void *data)
178
AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
179
sc->max_desc_size = 8 * KiB;
180
sc->is_sun4i = true;
181
+ sc->can_calibrate = false;
182
}
183
184
static void allwinner_sdhost_sun5i_class_init(ObjectClass *klass, void *data)
185
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_sun5i_class_init(ObjectClass *klass, void *data)
186
AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
187
sc->max_desc_size = 64 * KiB;
188
sc->is_sun4i = false;
189
+ sc->can_calibrate = false;
190
+}
191
+
192
+static void allwinner_sdhost_sun50i_a64_class_init(ObjectClass *klass,
193
+ void *data)
90
+{
194
+{
91
+ return do_uqrshl_bhs(n, (int8_t)shift, 32, true, &env->QF);
195
+ AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
196
+ sc->max_desc_size = 64 * KiB;
197
+ sc->is_sun4i = false;
198
+ sc->can_calibrate = true;
92
+}
199
+}
93
+
200
+
94
+uint32_t HELPER(mve_sqrshr)(CPUARMState *env, uint32_t n, uint32_t shift)
201
+static void allwinner_sdhost_sun50i_a64_emmc_class_init(ObjectClass *klass,
202
+ void *data)
95
+{
203
+{
96
+ return do_sqrshl_bhs(n, -(int8_t)shift, 32, true, &env->QF);
204
+ AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
97
+}
205
+ sc->max_desc_size = 8 * KiB;
98
diff --git a/target/arm/translate.c b/target/arm/translate.c
206
+ sc->is_sun4i = false;
99
index XXXXXXX..XXXXXXX 100644
207
+ sc->can_calibrate = true;
100
--- a/target/arm/translate.c
208
}
101
+++ b/target/arm/translate.c
209
102
@@ -XXX,XX +XXX,XX @@ static bool trans_UQSHL_ri(DisasContext *s, arg_mve_sh_ri *a)
210
static const TypeInfo allwinner_sdhost_info = {
103
return do_mve_sh_ri(s, a, gen_mve_uqshl);
211
@@ -XXX,XX +XXX,XX @@ static const TypeInfo allwinner_sdhost_sun5i_info = {
104
}
212
.class_init = allwinner_sdhost_sun5i_class_init,
105
213
};
106
+static bool do_mve_sh_rr(DisasContext *s, arg_mve_sh_rr *a, ShiftFn *fn)
214
107
+{
215
+static const TypeInfo allwinner_sdhost_sun50i_a64_info = {
108
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
216
+ .name = TYPE_AW_SDHOST_SUN50I_A64,
109
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
217
+ .parent = TYPE_AW_SDHOST,
110
+ return false;
218
+ .class_init = allwinner_sdhost_sun50i_a64_class_init,
111
+ }
219
+};
112
+ if (!dc_isar_feature(aa32_mve, s) ||
220
+
113
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
221
+static const TypeInfo allwinner_sdhost_sun50i_a64_emmc_info = {
114
+ a->rda == 13 || a->rda == 15 || a->rm == 13 || a->rm == 15 ||
222
+ .name = TYPE_AW_SDHOST_SUN50I_A64_EMMC,
115
+ a->rm == a->rda) {
223
+ .parent = TYPE_AW_SDHOST,
116
+ /* These rda/rm cases are UNPREDICTABLE; we choose to UNDEF */
224
+ .class_init = allwinner_sdhost_sun50i_a64_emmc_class_init,
117
+ unallocated_encoding(s);
225
+};
118
+ return true;
226
+
119
+ }
227
static const TypeInfo allwinner_sdhost_bus_info = {
120
+
228
.name = TYPE_AW_SDHOST_BUS,
121
+ /* The helper takes care of the sign-extension of the low 8 bits of Rm */
229
.parent = TYPE_SD_BUS,
122
+ fn(cpu_R[a->rda], cpu_env, cpu_R[a->rda], cpu_R[a->rm]);
230
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_register_types(void)
123
+ return true;
231
type_register_static(&allwinner_sdhost_info);
124
+}
232
type_register_static(&allwinner_sdhost_sun4i_info);
125
+
233
type_register_static(&allwinner_sdhost_sun5i_info);
126
+static bool trans_SQRSHR_rr(DisasContext *s, arg_mve_sh_rr *a)
234
+ type_register_static(&allwinner_sdhost_sun50i_a64_info);
127
+{
235
+ type_register_static(&allwinner_sdhost_sun50i_a64_emmc_info);
128
+ return do_mve_sh_rr(s, a, gen_helper_mve_sqrshr);
236
type_register_static(&allwinner_sdhost_bus_info);
129
+}
237
}
130
+
238
131
+static bool trans_UQRSHL_rr(DisasContext *s, arg_mve_sh_rr *a)
132
+{
133
+ return do_mve_sh_rr(s, a, gen_helper_mve_uqrshl);
134
+}
135
+
136
/*
137
* Multiply and multiply accumulate
138
*/
139
--
239
--
140
2.20.1
240
2.34.1
141
142
1
Implement the MVE logical-immediate insns (VMOV, VMVN,
1
From: qianfan Zhao <qianfanguijin@163.com>
2
VORR and VBIC). These have essentially the same encoding
3
as their Neon equivalents, and we implement the decode
4
in the same way.
5
2
3
The R40 has two ethernet controllers, named EMAC and GMAC. The EMAC is
4
compatible with the A10's controller, and the GMAC is compatible with the H3's.
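
For quick reference, the two controllers end up wired as summarised in the sketch below; the base addresses and GIC SPI numbers are copied from the memmap and IRQ tables in this patch, but the table itself is only an illustration, not code from the series.

    #include <stdint.h>

    /* Where the two R40 network controllers live and which GIC SPI they use. */
    static const struct {
        const char *name;   /* child name created in allwinner_r40_init() */
        uint64_t    base;   /* MMIO base address */
        int         spi;    /* GIC shared peripheral interrupt */
    } r40_nics[] = {
        { "emac", 0x01c0b000, 55 },   /* TYPE_AW_EMAC, A10-compatible */
        { "gmac", 0x01c50000, 85 },   /* TYPE_AW_SUN8I_EMAC, H3-compatible */
    };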
5
6
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-7-peter.maydell@linaro.org
9
---
8
---
10
target/arm/helper-mve.h | 4 +++
9
include/hw/arm/allwinner-r40.h | 6 ++++
11
target/arm/mve.decode | 17 +++++++++++++
10
hw/arm/allwinner-r40.c | 50 ++++++++++++++++++++++++++++++++--
12
target/arm/mve_helper.c | 24 ++++++++++++++++++
11
hw/arm/bananapi_m2u.c | 3 ++
13
target/arm/translate-mve.c | 50 ++++++++++++++++++++++++++++++++++++++
12
3 files changed, 57 insertions(+), 2 deletions(-)
14
4 files changed, 95 insertions(+)
15
13
16
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
14
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper-mve.h
16
--- a/include/hw/arm/allwinner-r40.h
19
+++ b/target/arm/helper-mve.h
17
+++ b/include/hw/arm/allwinner-r40.h
20
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvsh, TCG_CALL_NO_WG, i32, env, ptr, i32)
18
@@ -XXX,XX +XXX,XX @@
21
DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
19
#include "hw/misc/allwinner-r40-ccu.h"
22
DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
20
#include "hw/misc/allwinner-r40-dramc.h"
23
DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
21
#include "hw/i2c/allwinner-i2c.h"
22
+#include "hw/net/allwinner_emac.h"
23
+#include "hw/net/allwinner-sun8i-emac.h"
24
#include "target/arm/cpu.h"
25
#include "sysemu/block-backend.h"
26
27
@@ -XXX,XX +XXX,XX @@ enum {
28
AW_R40_DEV_SRAM_A2,
29
AW_R40_DEV_SRAM_A3,
30
AW_R40_DEV_SRAM_A4,
31
+ AW_R40_DEV_EMAC,
32
AW_R40_DEV_MMC0,
33
AW_R40_DEV_MMC1,
34
AW_R40_DEV_MMC2,
35
@@ -XXX,XX +XXX,XX @@ enum {
36
AW_R40_DEV_UART6,
37
AW_R40_DEV_UART7,
38
AW_R40_DEV_TWI0,
39
+ AW_R40_DEV_GMAC,
40
AW_R40_DEV_GIC_DIST,
41
AW_R40_DEV_GIC_CPU,
42
AW_R40_DEV_GIC_HYP,
43
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
44
AwR40ClockCtlState ccu;
45
AwR40DramCtlState dramc;
46
AWI2CState i2c0;
47
+ AwEmacState emac;
48
+ AwSun8iEmacState gmac;
49
GICState gic;
50
MemoryRegion sram_a1;
51
MemoryRegion sram_a2;
52
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/arm/allwinner-r40.c
55
+++ b/hw/arm/allwinner-r40.c
56
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
57
[AW_R40_DEV_SRAM_A2] = 0x00004000,
58
[AW_R40_DEV_SRAM_A3] = 0x00008000,
59
[AW_R40_DEV_SRAM_A4] = 0x0000b400,
60
+ [AW_R40_DEV_EMAC] = 0x01c0b000,
61
[AW_R40_DEV_MMC0] = 0x01c0f000,
62
[AW_R40_DEV_MMC1] = 0x01c10000,
63
[AW_R40_DEV_MMC2] = 0x01c11000,
64
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
65
[AW_R40_DEV_UART6] = 0x01c29800,
66
[AW_R40_DEV_UART7] = 0x01c29c00,
67
[AW_R40_DEV_TWI0] = 0x01c2ac00,
68
+ [AW_R40_DEV_GMAC] = 0x01c50000,
69
[AW_R40_DEV_DRAMCOM] = 0x01c62000,
70
[AW_R40_DEV_DRAMCTL] = 0x01c63000,
71
[AW_R40_DEV_DRAMPHY] = 0x01c65000,
72
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
73
{ "spi1", 0x01c06000, 4 * KiB },
74
{ "cs0", 0x01c09000, 4 * KiB },
75
{ "keymem", 0x01c0a000, 4 * KiB },
76
- { "emac", 0x01c0b000, 4 * KiB },
77
{ "usb0-otg", 0x01c13000, 4 * KiB },
78
{ "usb0-host", 0x01c14000, 4 * KiB },
79
{ "crypto", 0x01c15000, 4 * KiB },
80
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
81
{ "tvd2", 0x01c33000, 4 * KiB },
82
{ "tvd3", 0x01c34000, 4 * KiB },
83
{ "gpu", 0x01c40000, 64 * KiB },
84
- { "gmac", 0x01c50000, 64 * KiB },
85
{ "hstmr", 0x01c60000, 4 * KiB },
86
{ "tcon-top", 0x01c70000, 4 * KiB },
87
{ "lcd0", 0x01c71000, 4 * KiB },
88
@@ -XXX,XX +XXX,XX @@ enum {
89
AW_R40_GIC_SPI_MMC1 = 33,
90
AW_R40_GIC_SPI_MMC2 = 34,
91
AW_R40_GIC_SPI_MMC3 = 35,
92
+ AW_R40_GIC_SPI_EMAC = 55,
93
+ AW_R40_GIC_SPI_GMAC = 85,
94
};
95
96
/* Allwinner R40 general constants */
97
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
98
99
object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
100
101
+ object_initialize_child(obj, "emac", &s->emac, TYPE_AW_EMAC);
102
+ object_initialize_child(obj, "gmac", &s->gmac, TYPE_AW_SUN8I_EMAC);
103
+ object_property_add_alias(obj, "gmac-phy-addr",
104
+ OBJECT(&s->gmac), "phy-addr");
24
+
105
+
25
+DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
106
object_initialize_child(obj, "dramc", &s->dramc, TYPE_AW_R40_DRAMC);
26
+DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
107
object_property_add_alias(obj, "ram-addr", OBJECT(&s->dramc),
27
+DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
108
"ram-addr");
28
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
109
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
29
index XXXXXXX..XXXXXXX 100644
110
30
--- a/target/arm/mve.decode
111
static void allwinner_r40_realize(DeviceState *dev, Error **errp)
31
+++ b/target/arm/mve.decode
112
{
32
@@ -XXX,XX +XXX,XX @@
113
+ const char *r40_nic_models[] = { "gmac", "emac", NULL };
33
# VQDMULL has size in bit 28: 0 for 16 bit, 1 for 32 bit
114
AwR40State *s = AW_R40(dev);
34
%size_28 28:1 !function=plus_1
115
unsigned i;
35
116
36
+# 1imm format immediate
117
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
37
+%imm_28_16_0 28:1 16:3 0:4
118
sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 2,
119
s->memmap[AW_R40_DEV_DRAMPHY]);
120
121
+ /* nic support gmac and emac */
122
+ for (int i = 0; i < ARRAY_SIZE(r40_nic_models) - 1; i++) {
123
+ NICInfo *nic = &nd_table[i];
38
+
124
+
39
&vldr_vstr rn qd imm p a w size l u
125
+ if (!nic->used) {
40
&1op qd qm size
126
+ continue;
41
&2op qd qm qn size
127
+ }
42
&2scalar qd qn rm size
128
+ if (qemu_show_nic_models(nic->model, r40_nic_models)) {
43
+&1imm qd imm cmode op
129
+ exit(0);
44
130
+ }
45
@vldr_vstr ....... . . . . l:1 rn:4 ... ...... imm:7 &vldr_vstr qd=%qd u=0
46
# Note that both Rn and Qd are 3 bits only (no D bit)
47
@@ -XXX,XX +XXX,XX @@
48
@2op_nosz .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn size=0
49
@2op_sz28 .... .... .... .... .... .... .... .... &2op qd=%qd qm=%qm qn=%qn \
50
size=%size_28
51
+@1imm .... .... .... .... .... cmode:4 .. op:1 . .... &1imm qd=%qd imm=%imm_28_16_0
52
53
# The _rev suffix indicates that Vn and Vm are reversed. This is
54
# the case for shifts. In the Arm ARM these insns are documented
55
@@ -XXX,XX +XXX,XX @@ VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rd
56
# Predicate operations
57
%mask_22_13 22:1 13:3
58
VPST 1111 1110 0 . 11 000 1 ... 0 1111 0100 1101 mask=%mask_22_13
59
+
131
+
60
+# Logical immediate operations (1 reg and modified-immediate)
132
+ switch (qemu_find_nic_model(nic, r40_nic_models, r40_nic_models[0])) {
61
+
133
+ case 0: /* gmac */
62
+# The cmode/op bits here decode VORR/VBIC/VMOV/VMVN, but
134
+ qdev_set_nic_properties(DEVICE(&s->gmac), nic);
63
+# not in a way we can conveniently represent in decodetree without
135
+ break;
64
+# a lot of repetition:
136
+ case 1: /* emac */
65
+# VORR: op=0, (cmode & 1) && cmode < 12
137
+ qdev_set_nic_properties(DEVICE(&s->emac), nic);
66
+# VBIC: op=1, (cmode & 1) && cmode < 12
138
+ break;
67
+# VMOV: everything else
139
+ default:
68
+# So we have a single decode line and check the cmode/op in the
140
+ exit(1);
69
+# trans function.
141
+ break;
70
+Vimm_1r 111 . 1111 1 . 00 0 ... ... 0 .... 0 1 . 1 .... @1imm
142
+ }
71
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/mve_helper.c
74
+++ b/target/arm/mve_helper.c
75
@@ -XXX,XX +XXX,XX @@ DO_1OP(vnegw, 4, int32_t, DO_NEG)
76
DO_1OP(vfnegh, 8, uint64_t, DO_FNEGH)
77
DO_1OP(vfnegs, 8, uint64_t, DO_FNEGS)
78
79
+/*
80
+ * 1 operand immediates: Vda is destination and possibly also one source.
81
+ * All these insns work at 64-bit widths.
82
+ */
83
+#define DO_1OP_IMM(OP, FN) \
84
+ void HELPER(mve_##OP)(CPUARMState *env, void *vda, uint64_t imm) \
85
+ { \
86
+ uint64_t *da = vda; \
87
+ uint16_t mask = mve_element_mask(env); \
88
+ unsigned e; \
89
+ for (e = 0; e < 16 / 8; e++, mask >>= 8) { \
90
+ mergemask(&da[H8(e)], FN(da[H8(e)], imm), mask); \
91
+ } \
92
+ mve_advance_vpt(env); \
93
+ }
143
+ }
94
+
144
+
95
+#define DO_MOVI(N, I) (I)
145
+ /* GMAC */
96
+#define DO_ANDI(N, I) ((N) & (I))
146
+ object_property_set_link(OBJECT(&s->gmac), "dma-memory",
97
+#define DO_ORRI(N, I) ((N) | (I))
147
+ OBJECT(get_system_memory()), &error_fatal);
148
+ sysbus_realize(SYS_BUS_DEVICE(&s->gmac), &error_fatal);
149
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gmac), 0, s->memmap[AW_R40_DEV_GMAC]);
150
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gmac), 0,
151
+ qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_GMAC));
98
+
152
+
99
+DO_1OP_IMM(vmovi, DO_MOVI)
153
+ /* EMAC */
100
+DO_1OP_IMM(vandi, DO_ANDI)
154
+ sysbus_realize(SYS_BUS_DEVICE(&s->emac), &error_fatal);
101
+DO_1OP_IMM(vorri, DO_ORRI)
155
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->emac), 0, s->memmap[AW_R40_DEV_EMAC]);
156
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->emac), 0,
157
+ qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_EMAC));
102
+
158
+
103
#define DO_2OP(OP, ESIZE, TYPE, FN) \
159
/* Unimplemented devices */
104
void HELPER(glue(mve_, OP))(CPUARMState *env, \
160
for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
105
void *vd, void *vn, void *vm) \
161
create_unimplemented_device(r40_unimplemented[i].device_name,
106
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
162
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
107
index XXXXXXX..XXXXXXX 100644
163
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-mve.c
164
--- a/hw/arm/bananapi_m2u.c
109
+++ b/target/arm/translate-mve.c
165
+++ b/hw/arm/bananapi_m2u.c
110
@@ -XXX,XX +XXX,XX @@ typedef void MVEGenTwoOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_ptr);
166
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
111
typedef void MVEGenTwoOpScalarFn(TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i32);
167
object_property_set_int(OBJECT(r40), "ram-size",
112
typedef void MVEGenDualAccOpFn(TCGv_i64, TCGv_ptr, TCGv_ptr, TCGv_ptr, TCGv_i64);
168
r40->ram_size, &error_abort);
113
typedef void MVEGenVADDVFn(TCGv_i32, TCGv_ptr, TCGv_ptr, TCGv_i32);
169
114
+typedef void MVEGenOneOpImmFn(TCGv_ptr, TCGv_ptr, TCGv_i64);
170
+ /* GMAC PHY */
115
171
+ object_property_set_uint(OBJECT(r40), "gmac-phy-addr", 1, &error_abort);
116
/* Return the offset of a Qn register (same semantics as aa32_vfp_qreg()) */
117
static inline long mve_qreg_offset(unsigned reg)
118
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
119
mve_update_eci(s);
120
return true;
121
}
122
+
172
+
123
+static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
173
/* Mark R40 object realized */
124
+{
174
qdev_realize(DEVICE(r40), NULL, &error_abort);
125
+ TCGv_ptr qd;
175
126
+ uint64_t imm;
127
+
128
+ if (!dc_isar_feature(aa32_mve, s) ||
129
+ !mve_check_qreg_bank(s, a->qd) ||
130
+ !fn) {
131
+ return false;
132
+ }
133
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
134
+ return true;
135
+ }
136
+
137
+ imm = asimd_imm_const(a->imm, a->cmode, a->op);
138
+
139
+ qd = mve_qreg_ptr(a->qd);
140
+ fn(cpu_env, qd, tcg_constant_i64(imm));
141
+ tcg_temp_free_ptr(qd);
142
+ mve_update_eci(s);
143
+ return true;
144
+}
145
+
146
+static bool trans_Vimm_1r(DisasContext *s, arg_1imm *a)
147
+{
148
+ /* Handle decode of cmode/op here between VORR/VBIC/VMOV */
149
+ MVEGenOneOpImmFn *fn;
150
+
151
+ if ((a->cmode & 1) && a->cmode < 12) {
152
+ if (a->op) {
153
+ /*
154
+ * For op=1, the immediate will be inverted by asimd_imm_const(),
155
+ * so the VBIC becomes a logical AND operation.
156
+ */
157
+ fn = gen_helper_mve_vandi;
158
+ } else {
159
+ fn = gen_helper_mve_vorri;
160
+ }
161
+ } else {
162
+ /* There is one unallocated cmode/op combination in this space */
163
+ if (a->cmode == 15 && a->op == 1) {
164
+ return false;
165
+ }
166
+ /* asimd_imm_const() sorts out VMVNI vs VMOVI for us */
167
+ fn = gen_helper_mve_vmovi;
168
+ }
169
+ return do_1imm(s, a, fn);
170
+}
171
--
176
--
172
2.20.1
177
2.34.1
173
174
From: qianfan Zhao <qianfanguijin@163.com>

Only a few important registers are added, especially the SRAM_VER
register.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/allwinner-r40.h | 3 +
include/hw/misc/allwinner-sramc.h | 69 +++++++++++
hw/arm/allwinner-r40.c | 7 +-
hw/misc/allwinner-sramc.c | 184 ++++++++++++++++++++++++++++++
hw/arm/Kconfig | 1 +
hw/misc/Kconfig | 3 +
hw/misc/meson.build | 1 +
hw/misc/trace-events | 4 +
8 files changed, 271 insertions(+), 1 deletion(-)
create mode 100644 include/hw/misc/allwinner-sramc.h
create mode 100644 hw/misc/allwinner-sramc.c

diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@
#include "hw/sd/allwinner-sdhost.h"
#include "hw/misc/allwinner-r40-ccu.h"
#include "hw/misc/allwinner-r40-dramc.h"
+#include "hw/misc/allwinner-sramc.h"
#include "hw/i2c/allwinner-i2c.h"
#include "hw/net/allwinner_emac.h"
#include "hw/net/allwinner-sun8i-emac.h"
@@ -XXX,XX +XXX,XX @@ enum {
AW_R40_DEV_SRAM_A2,
AW_R40_DEV_SRAM_A3,
AW_R40_DEV_SRAM_A4,
+ AW_R40_DEV_SRAMC,
AW_R40_DEV_EMAC,
AW_R40_DEV_MMC0,
AW_R40_DEV_MMC1,
@@ -XXX,XX +XXX,XX @@ struct AwR40State {

ARMCPU cpus[AW_R40_NUM_CPUS];
const hwaddr *memmap;
+ AwSRAMCState sramc;
AwA10PITState timer;
AwSdHostState mmc[AW_R40_NUM_MMCS];
AwR40ClockCtlState ccu;
diff --git a/include/hw/misc/allwinner-sramc.h b/include/hw/misc/allwinner-sramc.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/allwinner-sramc.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Allwinner SRAM controller emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_MISC_ALLWINNER_SRAMC_H
+#define HW_MISC_ALLWINNER_SRAMC_H
+
+#include "qom/object.h"
+#include "hw/sysbus.h"
+#include "qemu/uuid.h"
+
+/**
+ * Object model
+ * @{
+ */
+#define TYPE_AW_SRAMC "allwinner-sramc"
+#define TYPE_AW_SRAMC_SUN8I_R40 TYPE_AW_SRAMC "-sun8i-r40"
+OBJECT_DECLARE_TYPE(AwSRAMCState, AwSRAMCClass, AW_SRAMC)
+
+/** @} */
+
+/**
+ * Allwinner SRAMC object instance state
+ */
+struct AwSRAMCState {
+ /*< private >*/
+ SysBusDevice parent_obj;
+ /*< public >*/
+
+ /** Maps I/O registers in physical memory */
+ MemoryRegion iomem;
+
+ /* registers */
+ uint32_t sram_ctl1;
+ uint32_t sram_ver;
+ uint32_t sram_soft_entry_reg0;
+};
+
+/**
+ * Allwinner SRAM Controller class-level struct.
+ *
+ * This struct is filled by each sunxi device specific code
+ * such that the generic code can use this struct to support
+ * all devices.
+ */
+struct AwSRAMCClass {
+ /*< private >*/
+ SysBusDeviceClass parent_class;
+ /*< public >*/
+
+ uint32_t sram_version_code;
+};
+
+#endif /* HW_MISC_ALLWINNER_SRAMC_H */
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
[AW_R40_DEV_SRAM_A2] = 0x00004000,
[AW_R40_DEV_SRAM_A3] = 0x00008000,
[AW_R40_DEV_SRAM_A4] = 0x0000b400,
+ [AW_R40_DEV_SRAMC] = 0x01c00000,
[AW_R40_DEV_EMAC] = 0x01c0b000,
[AW_R40_DEV_MMC0] = 0x01c0f000,
[AW_R40_DEV_MMC1] = 0x01c10000,
@@ -XXX,XX +XXX,XX @@ struct AwR40Unimplemented {
static struct AwR40Unimplemented r40_unimplemented[] = {
{ "d-engine", 0x01000000, 4 * MiB },
{ "d-inter", 0x01400000, 128 * KiB },
- { "sram-c", 0x01c00000, 4 * KiB },
{ "dma", 0x01c02000, 4 * KiB },
{ "nfdc", 0x01c03000, 4 * KiB },
{ "ts", 0x01c04000, 4 * KiB },
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
"ram-addr");
object_property_add_alias(obj, "ram-size", OBJECT(&s->dramc),
"ram-size");
+
+ object_initialize_child(obj, "sramc", &s->sramc, TYPE_AW_SRAMC_SUN8I_R40);
}

static void allwinner_r40_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
AW_R40_GIC_SPI_TIMER1));

/* SRAM */
+ sysbus_realize(SYS_BUS_DEVICE(&s->sramc), &error_fatal);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->sramc), 0, s->memmap[AW_R40_DEV_SRAMC]);
+
memory_region_init_ram(&s->sram_a1, OBJECT(dev), "sram A1",
16 * KiB, &error_abort);
memory_region_init_ram(&s->sram_a2, OBJECT(dev), "sram A2",
diff --git a/hw/misc/allwinner-sramc.c b/hw/misc/allwinner-sramc.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/misc/allwinner-sramc.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Allwinner R40 SRAM controller emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "hw/sysbus.h"
+#include "migration/vmstate.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "qapi/error.h"
+#include "hw/qdev-properties.h"
+#include "hw/qdev-properties-system.h"
+#include "hw/misc/allwinner-sramc.h"
+#include "trace.h"
+
+/*
+ * register offsets
+ * https://linux-sunxi.org/SRAM_Controller_Register_Guide
+ */
+enum {
+ REG_SRAM_CTL1_CFG = 0x04, /* SRAM Control register 1 */
+ REG_SRAM_VER = 0x24, /* SRAM Version register */
+ REG_SRAM_R40_SOFT_ENTRY_REG0 = 0xbc,
+};
+
+/* REG_SRAMC_VERSION bit defines */
+#define SRAM_VER_READ_ENABLE (1 << 15)
+#define SRAM_VER_VERSION_SHIFT 16
+#define SRAM_VERSION_SUN8I_R40 0x1701
+
+static uint64_t allwinner_sramc_read(void *opaque, hwaddr offset,
+ unsigned size)
+{
+ AwSRAMCState *s = AW_SRAMC(opaque);
+ AwSRAMCClass *sc = AW_SRAMC_GET_CLASS(s);
+ uint64_t val = 0;
+
+ switch (offset) {
+ case REG_SRAM_CTL1_CFG:
+ val = s->sram_ctl1;
+ break;
+ case REG_SRAM_VER:
+ /* bit15: lock bit, set this bit before reading this register */
+ if (s->sram_ver & SRAM_VER_READ_ENABLE) {
+ val = SRAM_VER_READ_ENABLE |
+ (sc->sram_version_code << SRAM_VER_VERSION_SHIFT);
+ }
+ break;
+ case REG_SRAM_R40_SOFT_ENTRY_REG0:
+ val = s->sram_soft_entry_reg0;
+ break;
+ default:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ return 0;
+ }
+
+ trace_allwinner_sramc_read(offset, val);
+
+ return val;
+}
+
+static void allwinner_sramc_write(void *opaque, hwaddr offset,
+ uint64_t val, unsigned size)
+{
+ AwSRAMCState *s = AW_SRAMC(opaque);
+
+ trace_allwinner_sramc_write(offset, val);
+
+ switch (offset) {
+ case REG_SRAM_CTL1_CFG:
+ s->sram_ctl1 = val;
+ break;
+ case REG_SRAM_VER:
+ /* Only the READ_ENABLE bit is writeable */
+ s->sram_ver = val & SRAM_VER_READ_ENABLE;
+ break;
+ case REG_SRAM_R40_SOFT_ENTRY_REG0:
+ s->sram_soft_entry_reg0 = val;
+ break;
+ default:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ break;
+ }
+}
+
+static const MemoryRegionOps allwinner_sramc_ops = {
+ .read = allwinner_sramc_read,
+ .write = allwinner_sramc_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {
+ .min_access_size = 4,
+ .max_access_size = 4,
+ },
+ .impl.min_access_size = 4,
+};
+
+static const VMStateDescription allwinner_sramc_vmstate = {
+ .name = "allwinner-sramc",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT32(sram_ver, AwSRAMCState),
+ VMSTATE_UINT32(sram_soft_entry_reg0, AwSRAMCState),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static void allwinner_sramc_reset(DeviceState *dev)
+{
+ AwSRAMCState *s = AW_SRAMC(dev);
+ AwSRAMCClass *sc = AW_SRAMC_GET_CLASS(s);
+
+ switch (sc->sram_version_code) {
+ case SRAM_VERSION_SUN8I_R40:
+ s->sram_ctl1 = 0x1300;
+ break;
+ }
+}
+
+static void allwinner_sramc_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+
+ dc->reset = allwinner_sramc_reset;
+ dc->vmsd = &allwinner_sramc_vmstate;
+}
+
+static void allwinner_sramc_init(Object *obj)
+{
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+ AwSRAMCState *s = AW_SRAMC(obj);
+
+ /* Memory mapping */
+ memory_region_init_io(&s->iomem, OBJECT(s), &allwinner_sramc_ops, s,
+ TYPE_AW_SRAMC, 1 * KiB);
+ sysbus_init_mmio(sbd, &s->iomem);
+}
+
+static const TypeInfo allwinner_sramc_info = {
+ .name = TYPE_AW_SRAMC,
+ .parent = TYPE_SYS_BUS_DEVICE,
+ .instance_init = allwinner_sramc_init,
+ .instance_size = sizeof(AwSRAMCState),
+ .class_init = allwinner_sramc_class_init,
+};
+
+static void allwinner_r40_sramc_class_init(ObjectClass *klass, void *data)
+{
+ AwSRAMCClass *sc = AW_SRAMC_CLASS(klass);
+
+ sc->sram_version_code = SRAM_VERSION_SUN8I_R40;
+}
+
+static const TypeInfo allwinner_r40_sramc_info = {
+ .name = TYPE_AW_SRAMC_SUN8I_R40,
+ .parent = TYPE_AW_SRAMC,
+ .class_init = allwinner_r40_sramc_class_init,
+};
+
+static void allwinner_sramc_register(void)
+{
+ type_register_static(&allwinner_sramc_info);
+ type_register_static(&allwinner_r40_sramc_info);
+}
+
+type_init(allwinner_sramc_register)
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_H3
config ALLWINNER_R40
bool
default y if TCG && ARM
+ select ALLWINNER_SRAMC
select ALLWINNER_A10_PIT
select AXP2XX_PMU
select SERIAL
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/Kconfig
+++ b/hw/misc/Kconfig
@@ -XXX,XX +XXX,XX @@ config VIRT_CTRL
config LASI
bool

+config ALLWINNER_SRAMC
+ bool
+
config ALLWINNER_A10_CCM
bool

diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ subdir('macio')

softmmu_ss.add(when: 'CONFIG_IVSHMEM_DEVICE', if_true: files('ivshmem.c'))

+softmmu_ss.add(when: 'CONFIG_ALLWINNER_SRAMC', if_true: files('allwinner-sramc.c'))
softmmu_ss.add(when: 'CONFIG_ALLWINNER_A10_CCM', if_true: files('allwinner-a10-ccm.c'))
softmmu_ss.add(when: 'CONFIG_ALLWINNER_A10_DRAMC', if_true: files('allwinner-a10-dramc.c'))
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-ccu.c'))
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/trace-events
+++ b/hw/misc/trace-events
@@ -XXX,XX +XXX,XX @@ allwinner_r40_dramphy_write(uint64_t offset, uint64_t data, unsigned size) "writ
allwinner_sid_read(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
allwinner_sid_write(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32

+# allwinner-sramc.c
+allwinner_sramc_read(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64
+allwinner_sramc_write(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64
+
# avr_power.c
avr_power_read(uint8_t value) "power_reduc read value:%u"
avr_power_write(uint8_t value) "power_reduc write value:%u"
--
2.34.1

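To make the SRAM_VER handshake above concrete, here is a bare-metal, guest-side sketch of how firmware could read the version code the model reports. The register offset and bit layout come from allwinner-sramc.c above; the helper itself is illustrative and not part of the patch.

    #include <stdint.h>

    #define R40_SRAMC_BASE        0x01c00000u   /* AW_R40_DEV_SRAMC */
    #define REG_SRAM_VER          0x24u
    #define SRAM_VER_READ_ENABLE  (1u << 15)

    static inline uint32_t mmio_read32(uintptr_t a)          { return *(volatile uint32_t *)a; }
    static inline void mmio_write32(uintptr_t a, uint32_t v) { *(volatile uint32_t *)a = v; }

    /* Returns 0x1701 on the emulated sun8i-R40 controller. */
    static uint32_t r40_sram_version(void)
    {
        /* The version field only reads back while the read-enable bit is set. */
        mmio_write32(R40_SRAMC_BASE + REG_SRAM_VER, SRAM_VER_READ_ENABLE);
        return mmio_read32(R40_SRAMC_BASE + REG_SRAM_VER) >> 16;
    }
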
From: qianfan Zhao <qianfanguijin@163.com>

Add test cases for booting from initrd and SD card.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/avocado/boot_linux_console.py | 176 ++++++++++++++++++++++++++++
1 file changed, 176 insertions(+)

diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux_console.py
+++ b/tests/avocado/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ def test_arm_quanta_gsj_initrd(self):
self.wait_for_console_pattern(
'Give root password for system maintenance')

+ def test_arm_bpim2u(self):
+ """
+ :avocado: tags=arch:arm
+ :avocado: tags=machine:bpim2u
+ :avocado: tags=accel:tcg
+ """
+ deb_url = ('https://apt.armbian.com/pool/main/l/linux-5.10.16-sunxi/'
+ 'linux-image-current-sunxi_21.02.2_armhf.deb')
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
+ kernel_path = self.extract_from_deb(deb_path,
+ '/boot/vmlinuz-5.10.16-sunxi')
+ dtb_path = ('/usr/lib/linux-image-current-sunxi/'
+ 'sun8i-r40-bananapi-m2-ultra.dtb')
+ dtb_path = self.extract_from_deb(deb_path, dtb_path)
+
+ self.vm.set_console()
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
+ 'console=ttyS0,115200n8 '
+ 'earlycon=uart,mmio32,0x1c28000')
+ self.vm.add_args('-kernel', kernel_path,
+ '-dtb', dtb_path,
+ '-append', kernel_command_line)
+ self.vm.launch()
+ console_pattern = 'Kernel command line: %s' % kernel_command_line
+ self.wait_for_console_pattern(console_pattern)
+
+ def test_arm_bpim2u_initrd(self):
+ """
+ :avocado: tags=arch:arm
+ :avocado: tags=accel:tcg
+ :avocado: tags=machine:bpim2u
+ """
+ deb_url = ('https://apt.armbian.com/pool/main/l/linux-5.10.16-sunxi/'
+ 'linux-image-current-sunxi_21.02.2_armhf.deb')
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
+ kernel_path = self.extract_from_deb(deb_path,
+ '/boot/vmlinuz-5.10.16-sunxi')
+ dtb_path = ('/usr/lib/linux-image-current-sunxi/'
+ 'sun8i-r40-bananapi-m2-ultra.dtb')
+ dtb_path = self.extract_from_deb(deb_path, dtb_path)
+ initrd_url = ('https://github.com/groeck/linux-build-test/raw/'
+ '2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
+ 'arm/rootfs-armv7a.cpio.gz')
+ initrd_hash = '604b2e45cdf35045846b8bbfbf2129b1891bdc9c'
+ initrd_path_gz = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
+ initrd_path = os.path.join(self.workdir, 'rootfs.cpio')
+ archive.gzip_uncompress(initrd_path_gz, initrd_path)
+
+ self.vm.set_console()
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
+ 'console=ttyS0,115200 '
+ 'panic=-1 noreboot')
+ self.vm.add_args('-kernel', kernel_path,
+ '-dtb', dtb_path,
+ '-initrd', initrd_path,
+ '-append', kernel_command_line,
+ '-no-reboot')
+ self.vm.launch()
+ self.wait_for_console_pattern('Boot successful.')
+
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
+ 'Allwinner sun8i Family')
+ exec_command_and_wait_for_pattern(self, 'cat /proc/iomem',
+ 'system-control@1c00000')
+ exec_command_and_wait_for_pattern(self, 'reboot',
+ 'reboot: Restarting system')
+ # Wait for VM to shut down gracefully
+ self.vm.wait()
+
+ def test_arm_bpim2u_gmac(self):
+ """
+ :avocado: tags=arch:arm
+ :avocado: tags=accel:tcg
+ :avocado: tags=machine:bpim2u
+ :avocado: tags=device:sd
+ """
+ self.require_netdev('user')
+
+ deb_url = ('https://apt.armbian.com/pool/main/l/linux-5.10.16-sunxi/'
+ 'linux-image-current-sunxi_21.02.2_armhf.deb')
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
+ kernel_path = self.extract_from_deb(deb_path,
+ '/boot/vmlinuz-5.10.16-sunxi')
+ dtb_path = ('/usr/lib/linux-image-current-sunxi/'
+ 'sun8i-r40-bananapi-m2-ultra.dtb')
+ dtb_path = self.extract_from_deb(deb_path, dtb_path)
+ rootfs_url = ('http://storage.kernelci.org/images/rootfs/buildroot/'
+ 'buildroot-baseline/20221116.0/armel/rootfs.ext2.xz')
+ rootfs_hash = 'fae32f337c7b87547b10f42599acf109da8b6d9a'
+ rootfs_path_xz = self.fetch_asset(rootfs_url, asset_hash=rootfs_hash)
+ rootfs_path = os.path.join(self.workdir, 'rootfs.cpio')
+ archive.lzma_uncompress(rootfs_path_xz, rootfs_path)
+ image_pow2ceil_expand(rootfs_path)
+
+ self.vm.set_console()
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
+ 'console=ttyS0,115200 '
+ 'root=/dev/mmcblk0 rootwait rw '
+ 'panic=-1 noreboot')
+ self.vm.add_args('-kernel', kernel_path,
+ '-dtb', dtb_path,
+ '-drive', 'file=' + rootfs_path + ',if=sd,format=raw',
+ '-net', 'nic,model=gmac,netdev=host_gmac',
+ '-netdev', 'user,id=host_gmac',
+ '-append', kernel_command_line,
+ '-no-reboot')
+ self.vm.launch()
+ shell_ready = "/bin/sh: can't access tty; job control turned off"
+ self.wait_for_console_pattern(shell_ready)
+
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
+ 'Allwinner sun8i Family')
+ exec_command_and_wait_for_pattern(self, 'cat /proc/partitions',
+ 'mmcblk0')
+ exec_command_and_wait_for_pattern(self, 'ifconfig eth0 up',
+ 'eth0: Link is Up')
+ exec_command_and_wait_for_pattern(self, 'udhcpc eth0',
+ 'udhcpc: lease of 10.0.2.15 obtained')
+ exec_command_and_wait_for_pattern(self, 'ping -c 3 10.0.2.2',
+ '3 packets transmitted, 3 packets received, 0% packet loss')
+ exec_command_and_wait_for_pattern(self, 'reboot',
+ 'reboot: Restarting system')
+ # Wait for VM to shut down gracefully
+ self.vm.wait()
+
+ @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
+ def test_arm_bpim2u_openwrt_22_03_3(self):
+ """
+ :avocado: tags=arch:arm
+ :avocado: tags=machine:bpim2u
+ :avocado: tags=device:sd
+ """
+
+ # This test downloads an 8.9 MiB compressed image and expands it
+ # to 127 MiB.
+ image_url = ('https://downloads.openwrt.org/releases/22.03.3/targets/'
+ 'sunxi/cortexa7/openwrt-22.03.3-sunxi-cortexa7-'
+ 'sinovoip_bananapi-m2-ultra-ext4-sdcard.img.gz')
+ image_hash = ('5b41b4e11423e562c6011640f9a7cd3b'
+ 'dd0a3d42b83430f7caa70a432e6cd82c')
+ image_path_gz = self.fetch_asset(image_url, asset_hash=image_hash,
+ algorithm='sha256')
+ image_path = archive.extract(image_path_gz, self.workdir)
+ image_pow2ceil_expand(image_path)
+
+ self.vm.set_console()
+ self.vm.add_args('-drive', 'file=' + image_path + ',if=sd,format=raw',
+ '-nic', 'user',
+ '-no-reboot')
+ self.vm.launch()
+
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
+ 'usbcore.nousb '
+ 'noreboot')
+
+ self.wait_for_console_pattern('U-Boot SPL')
+
+ interrupt_interactive_console_until_pattern(
+ self, 'Hit any key to stop autoboot:', '=>')
+ exec_command_and_wait_for_pattern(self, "setenv extraargs '" +
+ kernel_command_line + "'", '=>')
+ exec_command_and_wait_for_pattern(self, 'boot', 'Starting kernel ...');
+
+ self.wait_for_console_pattern(
+ 'Please press Enter to activate this console.')
+
+ exec_command_and_wait_for_pattern(self, ' ', 'root@')
+
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
+ 'Allwinner sun8i Family')
+ exec_command_and_wait_for_pattern(self, 'cat /proc/iomem',
+ 'system-control@1c00000')
+
def test_arm_orangepi(self):
"""
:avocado: tags=arch:arm
--
2.34.1

From: qianfan Zhao <qianfanguijin@163.com>

Add documentation for the Banana Pi M2U.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
[PMM: Minor format fixes to correct sphinx errors]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/bananapi_m2u.rst | 139 +++++++++++++++++++++++++++++++
docs/system/target-arm.rst | 1 +
2 files changed, 140 insertions(+)
create mode 100644 docs/system/arm/bananapi_m2u.rst

diff --git a/docs/system/arm/bananapi_m2u.rst b/docs/system/arm/bananapi_m2u.rst
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/docs/system/arm/bananapi_m2u.rst
@@ -XXX,XX +XXX,XX @@
+Banana Pi BPI-M2U (``bpim2u``)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Banana Pi BPI-M2 Ultra is a quad-core mini single board computer built with
+the Allwinner A40i/R40/V40 SoC. It features 2GB of RAM and 8GB eMMC. It also
+has onboard WiFi and BT. On the ports side, the BPI-M2 Ultra has 2 USB A
+2.0 ports, 1 USB OTG port, 1 HDMI port, 1 audio jack, a DC power port,
+and last but not least, a SATA port.
+
+Supported devices
+"""""""""""""""""
+
+The Banana Pi M2U machine supports the following devices:
+
+ * SMP (Quad Core Cortex-A7)
+ * Generic Interrupt Controller configuration
+ * SRAM mappings
+ * SDRAM controller
+ * Timer device (re-used from Allwinner A10)
+ * UART
+ * SD/MMC storage controller
+ * EMAC ethernet
+ * GMAC ethernet
+ * Clock Control Unit
+ * TWI (I2C)
+
+Limitations
+"""""""""""
+
+Currently, Banana Pi M2U does *not* support the following features:
+
+- Graphical output via HDMI, GPU and/or the Display Engine
+- Audio output
+- Hardware Watchdog
+- Real Time Clock
+- USB 2.0 interfaces
+
+Also see the 'unimplemented' array in the Allwinner R40 SoC module
+for a complete list of unimplemented I/O devices: ``./hw/arm/allwinner-r40.c``
+
+Boot options
+""""""""""""
+
+The Banana Pi M2U machine can start using the standard -kernel functionality
+for loading a Linux kernel or ELF executable. Additionally, the Banana Pi M2U
+machine can also emulate the BootROM which is present on an actual Allwinner R40
+based SoC, which loads the bootloader from an SD card, specified via the -sd
+argument to qemu-system-arm.
+
+Running mainline Linux
+""""""""""""""""""""""
+
+To build a Linux mainline kernel that can be booted by the Banana Pi M2U machine,
+simply configure the kernel using the sunxi_defconfig configuration:
+
+.. code-block:: bash
+
+ $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make mrproper
+ $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make sunxi_defconfig
+
+To boot the newly built Linux kernel in QEMU with the Banana Pi M2U machine, use:
+
+.. code-block:: bash
+
+ $ qemu-system-arm -M bpim2u -nographic \
+ -kernel /path/to/linux/arch/arm/boot/zImage \
+ -append 'console=ttyS0,115200' \
+ -dtb /path/to/linux/arch/arm/boot/dts/sun8i-r40-bananapi-m2-ultra.dtb
+
+Banana Pi M2U images
+""""""""""""""""""""
+
+Note that the mainline kernel does not have a root filesystem. You can choose
+to build your own image with buildroot using the bananapi_m2_ultra_defconfig.
+Also see https://buildroot.org for more information.
+
+Another possibility is to run an OpenWrt image for Banana Pi M2U which
+can be downloaded from:
+
+ https://downloads.openwrt.org/releases/22.03.3/targets/sunxi/cortexa7/
+
+When using an image as an SD card, it must be resized to a power of two. This can be
+done with the ``qemu-img`` command. It is recommended to only increase the image size
+instead of shrinking it to a power of two, to avoid loss of data. For example,
+to prepare a downloaded Armbian image, first extract it and then increase
+its size to one gigabyte as follows:
+
+.. code-block:: bash
+
+ $ qemu-img resize \
+ openwrt-22.03.3-sunxi-cortexa7-sinovoip_bananapi-m2-ultra-ext4-sdcard.img \
+ 1G
+
+Instead of providing a custom Linux kernel via the -kernel command you may also
+choose to let the Banana Pi M2U machine load the bootloader from SD card, just like
+a real board would do using the BootROM. Simply pass the selected image via the -sd
+argument and remove the -kernel, -append, -dtb and -initrd arguments:
+
+.. code-block:: bash
+
+ $ qemu-system-arm -M bpim2u -nic user -nographic \
+ -sd openwrt-22.03.3-sunxi-cortexa7-sinovoip_bananapi-m2-ultra-ext4-sdcard.img
+
+Running U-Boot
+""""""""""""""
+
+U-Boot mainline can be built and configured using the Bananapi_M2_Ultra_defconfig
+using similar commands as described above for Linux. Note that it is recommended
+for development/testing to select the following configuration setting in U-Boot:
+
+ Device Tree Control > Provider for DTB for DT Control > Embedded DTB
+
+The BootROM of the Allwinner R40 loads U-Boot from an 8 KiB offset on the SD card.
+Let's create a bootable disk image:
+
+.. code-block:: bash
+
+ $ dd if=/dev/zero of=sd.img bs=32M count=1
+ $ dd if=u-boot-sunxi-with-spl.bin of=sd.img bs=1k seek=8 conv=notrunc
+
+And then boot it.
+
+.. code-block:: bash
+
+ $ qemu-system-arm -M bpim2u -nographic -sd sd.img
+
+Banana Pi M2U integration tests
+"""""""""""""""""""""""""""""""
+
+The Banana Pi M2U machine has several integration tests included.
+To run the whole set of tests, build QEMU from source and simply
+provide the following command:
+
+.. code-block:: bash
+
+ $ cd qemu-build-dir
+ $ AVOCADO_ALLOW_LARGE_STORAGE=yes tests/venv/bin/avocado \
+ --verbose --show=app,console run -t machine:bpim2u \
+ ../tests/avocado/boot_linux_console.py
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/target-arm.rst
+++ b/docs/system/target-arm.rst
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
arm/versatile
arm/vexpress
arm/aspeed
+ arm/bananapi_m2u.rst
arm/sabrelite
arm/digic
arm/cubieboard
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Document the meaning of exclusive_high in a big-endian context,
and why we can't change it now.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
uint64_t zcr_el[4]; /* ZCR_EL[1-3] */
uint64_t smcr_el[4]; /* SMCR_EL[1-3] */
} vfp;
+
uint64_t exclusive_addr;
uint64_t exclusive_val;
+ /*
+ * Contains the 'val' for the second 64-bit register of LDXP, which comes
+ * from the higher address, not the high part of a complete 128-bit value.
+ * In some ways it might be more convenient to record the exclusive value
+ * as the low and high halves of a 128 bit data value, but the current
+ * semantics of these fields are baked into the migration format.
+ */
uint64_t exclusive_high;

/* iwMMXt coprocessor state. */
--
2.34.1

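A small illustration of the point the comment above is making: LDXP's second destination register is simply the doubleword at the higher address, independent of how one might view the pair as a 128-bit quantity. The following is an AArch64-only sketch using inline asm, not code from the patch:

    #include <stdint.h>

    /* Load an exclusive pair; out[0] gets the doubleword at p, out[1] the one at p + 8. */
    static inline void ldxp_pair(const uint64_t *p, uint64_t out[2])
    {
        uint64_t lo_addr, hi_addr;
        asm volatile("ldxp %0, %1, [%2]\n\tclrex"
                     : "=&r"(lo_addr), "=&r"(hi_addr)
                     : "r"(p)
                     : "memory");
        out[0] = lo_addr;   /* value from the lower address  */
        out[1] = hi_addr;   /* value from the higher address */
    }
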
1
Implement the MVE vector shift right by immediate insns VSHRI and
1
From: Richard Henderson <richard.henderson@linaro.org>
2
VRSHRI. As with Neon, we implement these by using helper functions
3
which perform left shifts but allow negative shift counts to indicate
4
right shifts.
5
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230530191438.411344-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-9-peter.maydell@linaro.org
9
---
8
---
10
target/arm/helper-mve.h | 12 ++++++++++++
9
target/arm/cpu.h | 5 +++++
11
target/arm/translate.h | 20 ++++++++++++++++++++
10
1 file changed, 5 insertions(+)
12
target/arm/mve.decode | 28 ++++++++++++++++++++++++++++
13
target/arm/mve_helper.c | 7 +++++++
14
target/arm/translate-mve.c | 5 +++++
15
target/arm/translate-neon.c | 18 ------------------
16
6 files changed, 72 insertions(+), 18 deletions(-)
17
11
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-mve.h
14
--- a/target/arm/cpu.h
21
+++ b/target/arm/helper-mve.h
15
+++ b/target/arm/cpu.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
16
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_st(const ARMISARegisters *id)
23
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
17
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, ST) != 0;
24
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
25
26
+DEF_HELPER_FLAGS_4(mve_vshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_4(mve_vshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
+DEF_HELPER_FLAGS_4(mve_vshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
29
+
30
DEF_HELPER_FLAGS_4(mve_vshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
DEF_HELPER_FLAGS_4(mve_vshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
DEF_HELPER_FLAGS_4(mve_vshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
34
DEF_HELPER_FLAGS_4(mve_vqshlui_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
35
DEF_HELPER_FLAGS_4(mve_vqshlui_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
36
DEF_HELPER_FLAGS_4(mve_vqshlui_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
37
+
38
+DEF_HELPER_FLAGS_4(mve_vrshli_sb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
39
+DEF_HELPER_FLAGS_4(mve_vrshli_sh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
40
+DEF_HELPER_FLAGS_4(mve_vrshli_sw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
41
+
42
+DEF_HELPER_FLAGS_4(mve_vrshli_ub, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_4(mve_vrshli_uh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
44
+DEF_HELPER_FLAGS_4(mve_vrshli_uw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
45
diff --git a/target/arm/translate.h b/target/arm/translate.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate.h
48
+++ b/target/arm/translate.h
49
@@ -XXX,XX +XXX,XX @@ static inline int times_2_plus_1(DisasContext *s, int x)
50
return x * 2 + 1;
51
}
18
}
52
19
53
+static inline int rsub_64(DisasContext *s, int x)
20
+static inline bool isar_feature_aa64_lse2(const ARMISARegisters *id)
54
+{
21
+{
55
+ return 64 - x;
22
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, AT) != 0;
56
+}
23
+}
57
+
24
+
58
+static inline int rsub_32(DisasContext *s, int x)
25
static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
59
+{
60
+ return 32 - x;
61
+}
62
+
63
+static inline int rsub_16(DisasContext *s, int x)
64
+{
65
+ return 16 - x;
66
+}
67
+
68
+static inline int rsub_8(DisasContext *s, int x)
69
+{
70
+ return 8 - x;
71
+}
72
+
73
static inline int arm_dc_feature(DisasContext *dc, int feature)
74
{
26
{
75
return (dc->features & (1ULL << feature)) != 0;
27
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
76
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/mve.decode
79
+++ b/target/arm/mve.decode
80
@@ -XXX,XX +XXX,XX @@
81
@2_shl_h .... .... .. 01 shift:4 .... .... .... .... &2shift qd=%qd qm=%qm size=1
82
@2_shl_w .... .... .. 1 shift:5 .... .... .... .... &2shift qd=%qd qm=%qm size=2
83
84
+# Right shifts are encoded as N - shift, where N is the element size in bits.
85
+%rshift_i5 16:5 !function=rsub_32
86
+%rshift_i4 16:4 !function=rsub_16
87
+%rshift_i3 16:3 !function=rsub_8
88
+
89
+@2_shr_b .... .... .. 001 ... .... .... .... .... &2shift qd=%qd qm=%qm \
90
+ size=0 shift=%rshift_i3
91
+@2_shr_h .... .... .. 01 .... .... .... .... .... &2shift qd=%qd qm=%qm \
92
+ size=1 shift=%rshift_i4
93
+@2_shr_w .... .... .. 1 ..... .... .... .... .... &2shift qd=%qd qm=%qm \
94
+ size=2 shift=%rshift_i5
95
+
96
# Vector loads and stores
97
98
# Widening loads and narrowing stores:
99
@@ -XXX,XX +XXX,XX @@ VQSHLI_U 111 1 1111 1 . ... ... ... 0 0111 0 1 . 1 ... 0 @2_shl_w
100
VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_b
101
VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_h
102
VQSHLUI 111 1 1111 1 . ... ... ... 0 0110 0 1 . 1 ... 0 @2_shl_w
103
+
104
+VSHRI_S 111 0 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_b
105
+VSHRI_S 111 0 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_h
106
+VSHRI_S 111 0 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_w
107
+
108
+VSHRI_U 111 1 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_b
109
+VSHRI_U 111 1 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_h
110
+VSHRI_U 111 1 1111 1 . ... ... ... 0 0000 0 1 . 1 ... 0 @2_shr_w
111
+
112
+VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_b
113
+VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
114
+VRSHRI_S 111 0 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
115
+
116
+VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_b
117
+VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_h
118
+VRSHRI_U 111 1 1111 1 . ... ... ... 0 0010 0 1 . 1 ... 0 @2_shr_w
119
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/mve_helper.c
122
+++ b/target/arm/mve_helper.c
123
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvuw, 4, uint32_t)
124
DO_2SHIFT(OP##b, 1, uint8_t, FN) \
125
DO_2SHIFT(OP##h, 2, uint16_t, FN) \
126
DO_2SHIFT(OP##w, 4, uint32_t, FN)
127
+#define DO_2SHIFT_S(OP, FN) \
128
+ DO_2SHIFT(OP##b, 1, int8_t, FN) \
129
+ DO_2SHIFT(OP##h, 2, int16_t, FN) \
130
+ DO_2SHIFT(OP##w, 4, int32_t, FN)
131
132
#define DO_2SHIFT_SAT_U(OP, FN) \
133
DO_2SHIFT_SAT(OP##b, 1, uint8_t, FN) \
134
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvuw, 4, uint32_t)
135
DO_2SHIFT_SAT(OP##w, 4, int32_t, FN)
136
137
DO_2SHIFT_U(vshli_u, DO_VSHLU)
138
+DO_2SHIFT_S(vshli_s, DO_VSHLS)
139
DO_2SHIFT_SAT_U(vqshli_u, DO_UQSHL_OP)
140
DO_2SHIFT_SAT_S(vqshli_s, DO_SQSHL_OP)
141
DO_2SHIFT_SAT_S(vqshlui_s, DO_SUQSHL_OP)
142
+DO_2SHIFT_U(vrshli_u, DO_VRSHLU)
143
+DO_2SHIFT_S(vrshli_s, DO_VRSHLS)
144
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
145
index XXXXXXX..XXXXXXX 100644
146
--- a/target/arm/translate-mve.c
147
+++ b/target/arm/translate-mve.c
148
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT(VSHLI, vshli_u, false)
149
DO_2SHIFT(VQSHLI_S, vqshli_s, false)
150
DO_2SHIFT(VQSHLI_U, vqshli_u, false)
151
DO_2SHIFT(VQSHLUI, vqshlui_s, false)
152
+/* These right shifts use a left-shift helper with negated shift count */
153
+DO_2SHIFT(VSHRI_S, vshli_s, true)
154
+DO_2SHIFT(VSHRI_U, vshli_u, true)
155
+DO_2SHIFT(VRSHRI_S, vrshli_s, true)
156
+DO_2SHIFT(VRSHRI_U, vrshli_u, true)
157
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/target/arm/translate-neon.c
160
+++ b/target/arm/translate-neon.c
161
@@ -XXX,XX +XXX,XX @@ static inline int plus1(DisasContext *s, int x)
162
return x + 1;
163
}
164
165
-static inline int rsub_64(DisasContext *s, int x)
166
-{
167
- return 64 - x;
168
-}
169
-
170
-static inline int rsub_32(DisasContext *s, int x)
171
-{
172
- return 32 - x;
173
-}
174
-static inline int rsub_16(DisasContext *s, int x)
175
-{
176
- return 16 - x;
177
-}
178
-static inline int rsub_8(DisasContext *s, int x)
179
-{
180
- return 8 - x;
181
-}
182
-
183
static inline int neon_3same_fp_size(DisasContext *s, int x)
184
{
185
/* Convert 0==fp32, 1==fp16 into a MO_* value */
186
--
2.20.1
--
2.34.1
1
Implement the MVE shifts by immediate, which perform shifts
1
From: Richard Henderson <richard.henderson@linaro.org>
2
on a single general-purpose register.
3
2
4
These patterns overlap with the long-shift-by-immediates,
3
Let finalize_memop_atom be the new basic function, with
5
so we have to rearrange the grouping a little here.
4
finalize_memop and finalize_memop_pair testing FEAT_LSE2
5
to apply the appropriate atomicity.
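
In outline, the two wrappers differ only in the atomicity they request; a condensed sketch of the definitions added below (alignment and endianness handling live in finalize_memop_atom):

    static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
    {
        /* Single access: within-16-byte atomicity once FEAT_LSE2 is present. */
        return finalize_memop_atom(s, opc,
                                   s->lse2 ? MO_ATOM_WITHIN16 : MO_ATOM_IFALIGN);
    }

    static inline MemOp finalize_memop_pair(DisasContext *s, MemOp opc)
    {
        /* Paired access: per-element atomicity, pair-wise under FEAT_LSE2. */
        return finalize_memop_atom(s, opc,
                                   s->lse2 ? MO_ATOM_WITHIN16_PAIR
                                           : MO_ATOM_IFALIGN_PAIR);
    }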
6
6
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20230530191438.411344-4-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210628135835.6690-18-peter.maydell@linaro.org
10
---
12
---
11
target/arm/helper-mve.h | 3 ++
13
target/arm/tcg/translate.h | 39 +++++++++++++++++++++++++++++-----
12
target/arm/translate.h | 1 +
14
target/arm/tcg/translate-a64.c | 2 ++
13
target/arm/t32.decode | 31 ++++++++++++++-----
15
target/arm/tcg/translate.c | 1 +
14
target/arm/mve_helper.c | 10 ++++++
16
3 files changed, 37 insertions(+), 5 deletions(-)
15
target/arm/translate.c | 68 +++++++++++++++++++++++++++++++++++++++--
16
5 files changed, 104 insertions(+), 9 deletions(-)
17
17
18
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
18
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
19
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper-mve.h
20
--- a/target/arm/tcg/translate.h
21
+++ b/target/arm/helper-mve.h
21
+++ b/target/arm/tcg/translate.h
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_sqrshrl, TCG_CALL_NO_RWG, i64, env, i64, i32)
22
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
23
DEF_HELPER_FLAGS_3(mve_uqrshll, TCG_CALL_NO_RWG, i64, env, i64, i32)
23
uint64_t features; /* CPU features bits */
24
DEF_HELPER_FLAGS_3(mve_sqrshrl48, TCG_CALL_NO_RWG, i64, env, i64, i32)
24
bool aarch64;
25
DEF_HELPER_FLAGS_3(mve_uqrshll48, TCG_CALL_NO_RWG, i64, env, i64, i32)
25
bool thumb;
26
+
26
+ bool lse2;
27
+DEF_HELPER_FLAGS_3(mve_uqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
27
/* Because unallocated encodings generate different exception syndrome
28
+DEF_HELPER_FLAGS_3(mve_sqshl, TCG_CALL_NO_RWG, i32, env, i32, i32)
28
* information from traps due to FP being disabled, we can't do a single
29
diff --git a/target/arm/translate.h b/target/arm/translate.h
29
* "is fp access disabled" check at a high level in the decode tree.
30
index XXXXXXX..XXXXXXX 100644
30
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
31
--- a/target/arm/translate.h
31
}
32
+++ b/target/arm/translate.h
33
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
34
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
35
typedef void WideShiftImmFn(TCGv_i64, TCGv_i64, int64_t shift);
36
typedef void WideShiftFn(TCGv_i64, TCGv_ptr, TCGv_i64, TCGv_i32);
37
+typedef void ShiftImmFn(TCGv_i32, TCGv_i32, int32_t shift);
38
32
39
/**
33
/**
40
* arm_tbflags_from_tb:
34
- * finalize_memop:
41
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
35
+ * finalize_memop_atom:
42
index XXXXXXX..XXXXXXX 100644
36
* @s: DisasContext
43
--- a/target/arm/t32.decode
37
* @opc: size+sign+align of the memory operation
44
+++ b/target/arm/t32.decode
38
+ * @atom: atomicity of the memory operation
45
@@ -XXX,XX +XXX,XX @@
39
*
46
40
- * Build the complete MemOp for a memory operation, including alignment
47
&mve_shl_ri rdalo rdahi shim
41
- * and endianness.
48
&mve_shl_rr rdalo rdahi rm
42
+ * Build the complete MemOp for a memory operation, including alignment,
49
+&mve_sh_ri rda shim
43
+ * endianness, and atomicity.
50
44
*
51
# rdahi: bits [3:1] from insn, bit 0 is 1
45
* If (op & MO_AMASK) then the operation already contains the required
52
# rdalo: bits [3:1] from insn, bit 0 is 0
46
* alignment, e.g. for AccType_ATOMIC. Otherwise, this an optionally
53
@@ -XXX,XX +XXX,XX @@
47
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
54
&mve_shl_ri shim=%imm5_12_6 rdalo=%rdalo_17 rdahi=%rdahi_9
48
* and this is applied here. Note that there is no way to indicate that
55
@mve_shl_rr ....... .... . ... . rm:4 ... . .. .. .... \
49
* no alignment should ever be enforced; this must be handled manually.
56
&mve_shl_rr rdalo=%rdalo_17 rdahi=%rdahi_9
50
*/
57
+@mve_sh_ri ....... .... . rda:4 . ... ... . .. .. .... \
51
-static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
58
+ &mve_sh_ri shim=%imm5_12_6
52
+static inline MemOp finalize_memop_atom(DisasContext *s, MemOp opc, MemOp atom)
59
60
{
53
{
61
TST_xrri 1110101 0000 1 .... 0 ... 1111 .... .... @S_xrr_shi
54
if (s->align_mem && !(opc & MO_AMASK)) {
62
@@ -XXX,XX +XXX,XX @@ BIC_rrri 1110101 0001 . .... 0 ... .... .... .... @s_rrr_shi
55
opc |= MO_ALIGN;
63
# the rest fall through (where ORR_rrri and MOV_rxri will end up
56
}
64
# handling them as r13 and r15 accesses with the same semantics as A32).
57
- return opc | s->be_data;
65
[
58
+ return opc | atom | s->be_data;
66
- LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
67
- LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
68
- ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
69
+ {
70
+ UQSHL_ri 1110101 0010 1 .... 0 ... 1111 .. 00 1111 @mve_sh_ri
71
+ LSLL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111 @mve_shl_ri
72
+ UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
73
+ }
74
75
- UQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111 @mve_shl_ri
76
- URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
77
- SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
78
- SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
79
+ {
80
+ URSHR_ri 1110101 0010 1 .... 0 ... 1111 .. 01 1111 @mve_sh_ri
81
+ LSRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111 @mve_shl_ri
82
+ URSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111 @mve_shl_ri
83
+ }
84
+
85
+ {
86
+ SRSHR_ri 1110101 0010 1 .... 0 ... 1111 .. 10 1111 @mve_sh_ri
87
+ ASRL_ri 1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111 @mve_shl_ri
88
+ SRSHRL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111 @mve_shl_ri
89
+ }
90
+
91
+ {
92
+ SQSHL_ri 1110101 0010 1 .... 0 ... 1111 .. 11 1111 @mve_sh_ri
93
+ SQSHLL_ri 1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111 @mve_shl_ri
94
+ }
95
96
LSLL_rr 1110101 0010 1 ... 0 .... ... 1 0000 1101 @mve_shl_rr
97
ASRL_rr 1110101 0010 1 ... 0 .... ... 1 0010 1101 @mve_shl_rr
98
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/mve_helper.c
101
+++ b/target/arm/mve_helper.c
102
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mve_uqrshll48)(CPUARMState *env, uint64_t n, uint32_t shift)
103
{
104
return do_uqrshl48_d(n, (int8_t)shift, true, &env->QF);
105
}
106
+
107
+uint32_t HELPER(mve_uqshl)(CPUARMState *env, uint32_t n, uint32_t shift)
108
+{
109
+ return do_uqrshl_bhs(n, (int8_t)shift, 32, false, &env->QF);
110
+}
59
+}
111
+
60
+
112
+uint32_t HELPER(mve_sqshl)(CPUARMState *env, uint32_t n, uint32_t shift)
61
+/**
62
+ * finalize_memop:
63
+ * @s: DisasContext
64
+ * @opc: size+sign+align of the memory operation
65
+ *
66
+ * Like finalize_memop_atom, but with default atomicity.
67
+ */
68
+static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
113
+{
69
+{
114
+ return do_sqrshl_bhs(n, (int8_t)shift, 32, false, &env->QF);
70
+ MemOp atom = s->lse2 ? MO_ATOM_WITHIN16 : MO_ATOM_IFALIGN;
115
+}
71
+ return finalize_memop_atom(s, opc, atom);
116
diff --git a/target/arm/translate.c b/target/arm/translate.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/arm/translate.c
119
+++ b/target/arm/translate.c
120
@@ -XXX,XX +XXX,XX @@ static void gen_srshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
121
122
static void gen_srshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
123
{
124
- TCGv_i32 t = tcg_temp_new_i32();
125
+ TCGv_i32 t;
126
127
+ /* Handle shift by the input size for the benefit of trans_SRSHR_ri */
128
+ if (sh == 32) {
129
+ tcg_gen_movi_i32(d, 0);
130
+ return;
131
+ }
132
+ t = tcg_temp_new_i32();
133
tcg_gen_extract_i32(t, a, sh - 1, 1);
134
tcg_gen_sari_i32(d, a, sh);
135
tcg_gen_add_i32(d, d, t);
136
@@ -XXX,XX +XXX,XX @@ static void gen_urshr16_i64(TCGv_i64 d, TCGv_i64 a, int64_t sh)
137
138
static void gen_urshr32_i32(TCGv_i32 d, TCGv_i32 a, int32_t sh)
139
{
140
- TCGv_i32 t = tcg_temp_new_i32();
141
+ TCGv_i32 t;
142
143
+ /* Handle shift by the input size for the benefit of trans_URSHR_ri */
144
+ if (sh == 32) {
145
+ tcg_gen_extract_i32(d, a, sh - 1, 1);
146
+ return;
147
+ }
148
+ t = tcg_temp_new_i32();
149
tcg_gen_extract_i32(t, a, sh - 1, 1);
150
tcg_gen_shri_i32(d, a, sh);
151
tcg_gen_add_i32(d, d, t);
152
@@ -XXX,XX +XXX,XX @@ static bool trans_SQRSHRL48_rr(DisasContext *s, arg_mve_shl_rr *a)
153
return do_mve_shl_rr(s, a, gen_helper_mve_sqrshrl48);
154
}
155
156
+static bool do_mve_sh_ri(DisasContext *s, arg_mve_sh_ri *a, ShiftImmFn *fn)
157
+{
158
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
159
+ /* Decode falls through to ORR/MOV UNPREDICTABLE handling */
160
+ return false;
161
+ }
162
+ if (!dc_isar_feature(aa32_mve, s) ||
163
+ !arm_dc_feature(s, ARM_FEATURE_M_MAIN) ||
164
+ a->rda == 13 || a->rda == 15) {
165
+ /* These rda cases are UNPREDICTABLE; we choose to UNDEF */
166
+ unallocated_encoding(s);
167
+ return true;
168
+ }
169
+
170
+ if (a->shim == 0) {
171
+ a->shim = 32;
172
+ }
173
+ fn(cpu_R[a->rda], cpu_R[a->rda], a->shim);
174
+
175
+ return true;
176
+}
72
+}
177
+
73
+
178
+static bool trans_URSHR_ri(DisasContext *s, arg_mve_sh_ri *a)
74
+/**
75
+ * finalize_memop_pair:
76
+ * @s: DisasContext
77
+ * @opc: size+sign+align of the memory operation
78
+ *
79
+ * Like finalize_memop_atom, but with atomicity for a pair.
80
+ * C.f. Pseudocode for Mem[], operand ispair.
81
+ */
82
+static inline MemOp finalize_memop_pair(DisasContext *s, MemOp opc)
179
+{
83
+{
180
+ return do_mve_sh_ri(s, a, gen_urshr32_i32);
84
+ MemOp atom = s->lse2 ? MO_ATOM_WITHIN16_PAIR : MO_ATOM_IFALIGN_PAIR;
181
+}
85
+ return finalize_memop_atom(s, opc, atom);
86
}
87
88
/**
89
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/tcg/translate-a64.c
92
+++ b/target/arm/tcg/translate-a64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
94
tcg_debug_assert(dc->tbid & 1);
95
#endif
96
97
+ dc->lse2 = dc_isar_feature(aa64_lse2, dc);
182
+
98
+
183
+static bool trans_SRSHR_ri(DisasContext *s, arg_mve_sh_ri *a)
99
/* Single step state. The code-generation logic here is:
184
+{
100
* SS_ACTIVE == 0:
185
+ return do_mve_sh_ri(s, a, gen_srshr32_i32);
101
* generate code with no special handling for single-stepping (except
186
+}
102
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
187
+
103
index XXXXXXX..XXXXXXX 100644
188
+static void gen_mve_sqshl(TCGv_i32 r, TCGv_i32 n, int32_t shift)
104
--- a/target/arm/tcg/translate.c
189
+{
105
+++ b/target/arm/tcg/translate.c
190
+ gen_helper_mve_sqshl(r, cpu_env, n, tcg_constant_i32(shift));
106
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
191
+}
107
dc->sme_trap_nonstreaming =
192
+
108
EX_TBFLAG_A32(tb_flags, SME_TRAP_NONSTREAMING);
193
+static bool trans_SQSHL_ri(DisasContext *s, arg_mve_sh_ri *a)
109
}
194
+{
110
+ dc->lse2 = false; /* applies only to aarch64 */
195
+ return do_mve_sh_ri(s, a, gen_mve_sqshl);
111
dc->cp_regs = cpu->cp_regs;
196
+}
112
dc->features = env->features;
197
+
113
198
+static void gen_mve_uqshl(TCGv_i32 r, TCGv_i32 n, int32_t shift)
199
+{
200
+ gen_helper_mve_uqshl(r, cpu_env, n, tcg_constant_i32(shift));
201
+}
202
+
203
+static bool trans_UQSHL_ri(DisasContext *s, arg_mve_sh_ri *a)
204
+{
205
+ return do_mve_sh_ri(s, a, gen_mve_uqshl);
206
+}
207
+
208
/*
209
* Multiply and multiply accumulate
210
*/
211
--
114
--
212
2.20.1
115
2.34.1
213
116
214
117
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
While we don't require 16-byte atomicity here, using a single larger
4
load simplifies the code, and makes it a closer match to STXP.
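
In outline, the pair load becomes a single 128-bit TCG access that is then split according to endianness; abridged from the hunk below:

    TCGv_i128 t16 = tcg_temp_new_i128();

    /* Whole-quadword load, but only 8-byte atomicity is required. */
    memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16, MO_ATOM_IFALIGN_PAIR);
    tcg_gen_qemu_ld_i128(t16, addr, idx, memop);

    if (s->be_data == MO_LE) {
        tcg_gen_extr_i128_i64(cpu_exclusive_val, cpu_exclusive_high, t16);
    } else {
        tcg_gen_extr_i128_i64(cpu_exclusive_high, cpu_exclusive_val, t16);
    }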
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-5-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/translate-a64.c | 31 ++++++++++++++++++++-----------
12
1 file changed, 20 insertions(+), 11 deletions(-)
13
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
19
TCGv_i64 addr, int size, bool is_pair)
20
{
21
int idx = get_mem_index(s);
22
- MemOp memop = s->be_data;
23
+ MemOp memop;
24
25
g_assert(size <= 3);
26
if (is_pair) {
27
g_assert(size >= 2);
28
if (size == 2) {
29
/* The pair must be single-copy atomic for the doubleword. */
30
- memop |= MO_64 | MO_ALIGN;
31
+ memop = finalize_memop(s, MO_64 | MO_ALIGN);
32
tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
33
if (s->be_data == MO_LE) {
34
tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
35
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
36
tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
37
}
38
} else {
39
- /* The pair must be single-copy atomic for *each* doubleword, not
40
- the entire quadword, however it must be quadword aligned. */
41
- memop |= MO_64;
42
- tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
43
- memop | MO_ALIGN_16);
44
+ /*
45
+ * The pair must be single-copy atomic for *each* doubleword, not
46
+ * the entire quadword, however it must be quadword aligned.
47
+ * Expose the complete load to tcg, for ease of tlb lookup,
48
+ * but indicate that only 8-byte atomicity is required.
49
+ */
50
+ TCGv_i128 t16 = tcg_temp_new_i128();
51
52
- TCGv_i64 addr2 = tcg_temp_new_i64();
53
- tcg_gen_addi_i64(addr2, addr, 8);
54
- tcg_gen_qemu_ld_i64(cpu_exclusive_high, addr2, idx, memop);
55
+ memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16,
56
+ MO_ATOM_IFALIGN_PAIR);
57
+ tcg_gen_qemu_ld_i128(t16, addr, idx, memop);
58
59
+ if (s->be_data == MO_LE) {
60
+ tcg_gen_extr_i128_i64(cpu_exclusive_val,
61
+ cpu_exclusive_high, t16);
62
+ } else {
63
+ tcg_gen_extr_i128_i64(cpu_exclusive_high,
64
+ cpu_exclusive_val, t16);
65
+ }
66
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
67
tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
68
}
69
} else {
70
- memop |= size | MO_ALIGN;
71
+ memop = finalize_memop(s, size | MO_ALIGN);
72
tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
73
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
74
}
75
--
76
2.34.1
1
The function asimd_imm_const() in translate-neon.c is an
1
From: Richard Henderson <richard.henderson@linaro.org>
2
implementation of the pseudocode AdvSIMDExpandImm(), which we will
3
also want for MVE. Move the implementation to translate.c, with a
4
prototype in translate.h.
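
Two worked examples of the expansion, for reference: imm=0x55 with cmode=14, op=0
replicates the byte into every byte lane, giving 0x5555555555555555; imm=0x55 with
cmode=12, op=0 takes the "shifted ones" form (imm << 8) | 0xff, giving
0x000055ff000055ff after replication across the 32-bit lanes.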
5
2
3
While we don't require 16-byte atomicity here, using a single larger
4
operation simplifies the code. Introduce finalize_memop_asimd for this.
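
The resulting shape of do_fp_st(), abridged from the hunk below:

    MemOp mop = finalize_memop_asimd(s, size);  /* MO_ATOM_IFALIGN_PAIR at MO_128 */

    if (size < MO_128) {
        tcg_gen_qemu_st_i64(tmplo, tcg_addr, get_mem_index(s), mop);
    } else {
        TCGv_i128 t16 = tcg_temp_new_i128();

        tcg_gen_concat_i64_i128(t16, tmplo, tmphi);
        tcg_gen_qemu_st_i128(t16, tcg_addr, get_mem_index(s), mop);
    }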
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-6-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
9
---
10
---
10
target/arm/translate.h | 16 ++++++++++
11
target/arm/tcg/translate.h | 24 +++++++++++++++++++++++
11
target/arm/translate-neon.c | 63 -------------------------------------
12
target/arm/tcg/translate-a64.c | 35 +++++++++++-----------------------
12
target/arm/translate.c | 57 +++++++++++++++++++++++++++++++++
13
2 files changed, 35 insertions(+), 24 deletions(-)
13
3 files changed, 73 insertions(+), 63 deletions(-)
14
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
15
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
17
--- a/target/arm/tcg/translate.h
18
+++ b/target/arm/translate.h
18
+++ b/target/arm/tcg/translate.h
19
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
19
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop_pair(DisasContext *s, MemOp opc)
20
return opc | s->be_data;
20
return finalize_memop_atom(s, opc, atom);
21
}
21
}
22
22
23
+/**
23
+/**
24
+ * asimd_imm_const: Expand an encoded SIMD constant value
24
+ * finalize_memop_asimd:
25
+ * @s: DisasContext
26
+ * @opc: size+sign+align of the memory operation
25
+ *
27
+ *
26
+ * Expand a SIMD constant value. This is essentially the pseudocode
28
+ * Like finalize_memop_atom, but with atomicity of AccessType_ASIMD.
27
+ * AdvSIMDExpandImm, except that we also perform the boolean NOT needed for
28
+ * VMVN and VBIC (when cmode < 14 && op == 1).
29
+ *
30
+ * The combination cmode == 15 op == 1 is a reserved encoding for AArch32;
31
+ * callers must catch this.
32
+ *
33
+ * cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 was UNPREDICTABLE in v7A but
34
+ * is either not unpredictable or merely CONSTRAINED UNPREDICTABLE in v8A;
35
+ * we produce an immediate constant value of 0 in these cases.
36
+ */
29
+ */
37
+uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);
30
+static inline MemOp finalize_memop_asimd(DisasContext *s, MemOp opc)
38
+
39
#endif /* TARGET_ARM_TRANSLATE_H */
40
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/translate-neon.c
43
+++ b/target/arm/translate-neon.c
44
@@ -XXX,XX +XXX,XX @@ DO_FP_2SH(VCVT_UH, gen_helper_gvec_vcvt_uh)
45
DO_FP_2SH(VCVT_HS, gen_helper_gvec_vcvt_hs)
46
DO_FP_2SH(VCVT_HU, gen_helper_gvec_vcvt_hu)
47
48
-static uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
49
-{
50
- /*
51
- * Expand the encoded constant.
52
- * Note that cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 is UNPREDICTABLE.
53
- * We choose to not special-case this and will behave as if a
54
- * valid constant encoding of 0 had been given.
55
- * cmode = 15 op = 1 must UNDEF; we assume decode has handled that.
56
- */
57
- switch (cmode) {
58
- case 0: case 1:
59
- /* no-op */
60
- break;
61
- case 2: case 3:
62
- imm <<= 8;
63
- break;
64
- case 4: case 5:
65
- imm <<= 16;
66
- break;
67
- case 6: case 7:
68
- imm <<= 24;
69
- break;
70
- case 8: case 9:
71
- imm |= imm << 16;
72
- break;
73
- case 10: case 11:
74
- imm = (imm << 8) | (imm << 24);
75
- break;
76
- case 12:
77
- imm = (imm << 8) | 0xff;
78
- break;
79
- case 13:
80
- imm = (imm << 16) | 0xffff;
81
- break;
82
- case 14:
83
- if (op) {
84
- /*
85
- * This is the only case where the top and bottom 32 bits
86
- * of the encoded constant differ.
87
- */
88
- uint64_t imm64 = 0;
89
- int n;
90
-
91
- for (n = 0; n < 8; n++) {
92
- if (imm & (1 << n)) {
93
- imm64 |= (0xffULL << (n * 8));
94
- }
95
- }
96
- return imm64;
97
- }
98
- imm |= (imm << 8) | (imm << 16) | (imm << 24);
99
- break;
100
- case 15:
101
- imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
102
- | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
103
- break;
104
- }
105
- if (op) {
106
- imm = ~imm;
107
- }
108
- return dup_const(MO_32, imm);
109
-}
110
-
111
static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
112
GVecGen2iFn *fn)
113
{
114
diff --git a/target/arm/translate.c b/target/arm/translate.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/target/arm/translate.c
117
+++ b/target/arm/translate.c
118
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void)
119
a64_translate_init();
120
}
121
122
+uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
123
+{
31
+{
124
+ /* Expand the encoded constant as per AdvSIMDExpandImm pseudocode */
32
+ /*
125
+ switch (cmode) {
33
+ * In the pseudocode for Mem[], with AccessType_ASIMD, size == 16,
126
+ case 0: case 1:
34
+ * if IsAligned(8), the first case provides separate atomicity for
127
+ /* no-op */
35
+ * the pair of 64-bit accesses. If !IsAligned(8), the middle cases
128
+ break;
36
+ * do not apply, and we're left with the final case of no atomicity.
129
+ case 2: case 3:
37
+ * Thus MO_ATOM_IFALIGN_PAIR.
130
+ imm <<= 8;
38
+ *
131
+ break;
39
+ * For other sizes, normal LSE2 rules apply.
132
+ case 4: case 5:
40
+ */
133
+ imm <<= 16;
41
+ if ((opc & MO_SIZE) == MO_128) {
134
+ break;
42
+ return finalize_memop_atom(s, opc, MO_ATOM_IFALIGN_PAIR);
135
+ case 6: case 7:
136
+ imm <<= 24;
137
+ break;
138
+ case 8: case 9:
139
+ imm |= imm << 16;
140
+ break;
141
+ case 10: case 11:
142
+ imm = (imm << 8) | (imm << 24);
143
+ break;
144
+ case 12:
145
+ imm = (imm << 8) | 0xff;
146
+ break;
147
+ case 13:
148
+ imm = (imm << 16) | 0xffff;
149
+ break;
150
+ case 14:
151
+ if (op) {
152
+ /*
153
+ * This is the only case where the top and bottom 32 bits
154
+ * of the encoded constant differ.
155
+ */
156
+ uint64_t imm64 = 0;
157
+ int n;
158
+
159
+ for (n = 0; n < 8; n++) {
160
+ if (imm & (1 << n)) {
161
+ imm64 |= (0xffULL << (n * 8));
162
+ }
163
+ }
164
+ return imm64;
165
+ }
166
+ imm |= (imm << 8) | (imm << 16) | (imm << 24);
167
+ break;
168
+ case 15:
169
+ imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
170
+ | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
171
+ break;
172
+ }
43
+ }
173
+ if (op) {
44
+ return finalize_memop(s, opc);
174
+ imm = ~imm;
175
+ }
176
+ return dup_const(MO_32, imm);
177
+}
45
+}
178
+
46
+
179
/* Generate a label used for skipping this instruction */
47
/**
180
void arm_gen_condlabel(DisasContext *s)
48
* asimd_imm_const: Expand an encoded SIMD constant value
49
*
50
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/translate-a64.c
53
+++ b/target/arm/tcg/translate-a64.c
54
@@ -XXX,XX +XXX,XX @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
181
{
55
{
56
/* This writes the bottom N bits of a 128 bit wide vector to memory */
57
TCGv_i64 tmplo = tcg_temp_new_i64();
58
- MemOp mop;
59
+ MemOp mop = finalize_memop_asimd(s, size);
60
61
tcg_gen_ld_i64(tmplo, cpu_env, fp_reg_offset(s, srcidx, MO_64));
62
63
- if (size < 4) {
64
- mop = finalize_memop(s, size);
65
+ if (size < MO_128) {
66
tcg_gen_qemu_st_i64(tmplo, tcg_addr, get_mem_index(s), mop);
67
} else {
68
- bool be = s->be_data == MO_BE;
69
- TCGv_i64 tcg_hiaddr = tcg_temp_new_i64();
70
TCGv_i64 tmphi = tcg_temp_new_i64();
71
+ TCGv_i128 t16 = tcg_temp_new_i128();
72
73
tcg_gen_ld_i64(tmphi, cpu_env, fp_reg_hi_offset(s, srcidx));
74
+ tcg_gen_concat_i64_i128(t16, tmplo, tmphi);
75
76
- mop = s->be_data | MO_UQ;
77
- tcg_gen_qemu_st_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
78
- mop | (s->align_mem ? MO_ALIGN_16 : 0));
79
- tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
80
- tcg_gen_qemu_st_i64(be ? tmplo : tmphi, tcg_hiaddr,
81
- get_mem_index(s), mop);
82
+ tcg_gen_qemu_st_i128(t16, tcg_addr, get_mem_index(s), mop);
83
}
84
}
85
86
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
87
/* This always zero-extends and writes to a full 128 bit wide vector */
88
TCGv_i64 tmplo = tcg_temp_new_i64();
89
TCGv_i64 tmphi = NULL;
90
- MemOp mop;
91
+ MemOp mop = finalize_memop_asimd(s, size);
92
93
- if (size < 4) {
94
- mop = finalize_memop(s, size);
95
+ if (size < MO_128) {
96
tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), mop);
97
} else {
98
- bool be = s->be_data == MO_BE;
99
- TCGv_i64 tcg_hiaddr;
100
+ TCGv_i128 t16 = tcg_temp_new_i128();
101
+
102
+ tcg_gen_qemu_ld_i128(t16, tcg_addr, get_mem_index(s), mop);
103
104
tmphi = tcg_temp_new_i64();
105
- tcg_hiaddr = tcg_temp_new_i64();
106
-
107
- mop = s->be_data | MO_UQ;
108
- tcg_gen_qemu_ld_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
109
- mop | (s->align_mem ? MO_ALIGN_16 : 0));
110
- tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
111
- tcg_gen_qemu_ld_i64(be ? tmplo : tmphi, tcg_hiaddr,
112
- get_mem_index(s), mop);
113
+ tcg_gen_extr_i128_i64(tmplo, tmphi, t16);
114
}
115
116
tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64));
182
--
117
--
183
2.20.1
118
2.34.1
184
185
1
From: Patrick Venture <venture@google.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Adds a line-item reference to the supported quanta-q71l-bmc aspeed
3
This fixes a bug in that these two insns should have been using atomic
4
entry.
4
16-byte stores, since MTE is ARMv8.5 and LSE2 is mandatory from ARMv8.4.
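
Abridged from the hunk below: the zeroing is now one atomic 16-byte store, issued twice for STZ2G:

    MemOp mop = finalize_memop(s, MO_128 | MO_ALIGN);

    tcg_gen_concat_i64_i128(zero128, zero64, zero64);
    tcg_gen_qemu_st_i128(zero128, clean_addr, mem_index, mop);
    if (is_pair) {
        /* STZ2G zeroes two tag granules, i.e. 32 bytes. */
        tcg_gen_addi_i64(clean_addr, clean_addr, 16);
        tcg_gen_qemu_st_i128(zero128, clean_addr, mem_index, mop);
    }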
5
5
6
Signed-off-by: Patrick Venture <venture@google.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Cédric Le Goater <clg@kaod.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210615192848.1065297-2-venture@google.com
8
Message-id: 20230530191438.411344-7-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
docs/system/arm/aspeed.rst | 1 +
11
target/arm/tcg/translate-a64.c | 17 ++++++++++-------
12
1 file changed, 1 insertion(+)
12
1 file changed, 10 insertions(+), 7 deletions(-)
13
13
14
diff --git a/docs/system/arm/aspeed.rst b/docs/system/arm/aspeed.rst
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/docs/system/arm/aspeed.rst
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/docs/system/arm/aspeed.rst
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ etc.
18
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
19
AST2400 SoC based machines :
19
20
20
if (is_zero) {
21
- ``palmetto-bmc`` OpenPOWER Palmetto POWER8 BMC
21
TCGv_i64 clean_addr = clean_data_tbi(s, addr);
22
+- ``quanta-q71l-bmc`` OpenBMC Quanta BMC
22
- TCGv_i64 tcg_zero = tcg_constant_i64(0);
23
23
+ TCGv_i64 zero64 = tcg_constant_i64(0);
24
AST2500 SoC based machines :
24
+ TCGv_i128 zero128 = tcg_temp_new_i128();
25
int mem_index = get_mem_index(s);
26
- int i, n = (1 + is_pair) << LOG2_TAG_GRANULE;
27
+ MemOp mop = finalize_memop(s, MO_128 | MO_ALIGN);
28
29
- tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index,
30
- MO_UQ | MO_ALIGN_16);
31
- for (i = 8; i < n; i += 8) {
32
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
33
- tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index, MO_UQ);
34
+ tcg_gen_concat_i64_i128(zero128, zero64, zero64);
35
+
36
+ /* This is 1 or 2 atomic 16-byte operations. */
37
+ tcg_gen_qemu_st_i128(zero128, clean_addr, mem_index, mop);
38
+ if (is_pair) {
39
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
40
+ tcg_gen_qemu_st_i128(zero128, clean_addr, mem_index, mop);
41
}
42
}
25
43
26
--
44
--
27
2.20.1
45
2.34.1
28
29
1
Implement the MVE VSHLC insn, which performs a shift left of the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
entire vector with carry in bits provided from a general purpose
3
register and carry out bits written back to that register.
4
2
3
Round len_align to 16 instead of 8, handling an odd 8-byte as part
4
of the tail. Use MO_ATOM_NONE to indicate that all of these memory
5
ops have only byte atomicity.
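
Condensed from the hunks below, the bookkeeping change is:

    int len_align  = QEMU_ALIGN_DOWN(len, 16);
    int len_remain = len % 16;
    int nparts     = len / 16 + ctpop8(len_remain);

    /* The main loop now moves 16 bytes per step, via
     * tcg_gen_qemu_{ld,st}_i128 with MO_LE | MO_128 | MO_ATOM_NONE.
     */
    if (len_remain >= 8) {
        /* An odd doubleword joins the tail, leaving len_remain < 8. */
    }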
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230530191438.411344-8-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210628135835.6690-14-peter.maydell@linaro.org
8
---
11
---
9
target/arm/helper-mve.h | 2 ++
12
target/arm/tcg/translate-sve.c | 95 +++++++++++++++++++++++++---------
10
target/arm/mve.decode | 2 ++
13
1 file changed, 70 insertions(+), 25 deletions(-)
11
target/arm/mve_helper.c | 38 ++++++++++++++++++++++++++++++++++++++
12
target/arm/translate-mve.c | 30 ++++++++++++++++++++++++++++++
13
4 files changed, 72 insertions(+)
14
14
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
15
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-mve.h
17
--- a/target/arm/tcg/translate-sve.c
18
+++ b/target/arm/helper-mve.h
18
+++ b/target/arm/tcg/translate-sve.c
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vqrshrunbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
19
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(UCVTF_dd, aa64_sve, gen_gvec_fpst_arg_zpz,
20
DEF_HELPER_FLAGS_4(mve_vqrshrunbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
20
void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
21
DEF_HELPER_FLAGS_4(mve_vqrshruntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
int len, int rn, int imm)
22
DEF_HELPER_FLAGS_4(mve_vqrshrunth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
{
23
+
23
- int len_align = QEMU_ALIGN_DOWN(len, 8);
24
+DEF_HELPER_FLAGS_4(mve_vshlc, TCG_CALL_NO_WG, i32, env, ptr, i32, i32)
24
- int len_remain = len % 8;
25
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
25
- int nparts = len / 8 + ctpop8(len_remain);
26
index XXXXXXX..XXXXXXX 100644
26
+ int len_align = QEMU_ALIGN_DOWN(len, 16);
27
--- a/target/arm/mve.decode
27
+ int len_remain = len % 16;
28
+++ b/target/arm/mve.decode
28
+ int nparts = len / 16 + ctpop8(len_remain);
29
@@ -XXX,XX +XXX,XX @@ VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_b
29
int midx = get_mem_index(s);
30
VQRSHRUNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 0 @2_shr_h
30
TCGv_i64 dirty_addr, clean_addr, t0, t1;
31
VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_b
31
+ TCGv_i128 t16;
32
VQRSHRUNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 0 @2_shr_h
32
33
+
33
dirty_addr = tcg_temp_new_i64();
34
+VSHLC 111 0 1110 1 . 1 imm:5 ... 0 1111 1100 rdm:4 qd=%qd
34
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
35
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
35
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
36
index XXXXXXX..XXXXXXX 100644
36
int i;
37
--- a/target/arm/mve_helper.c
37
38
+++ b/target/arm/mve_helper.c
38
t0 = tcg_temp_new_i64();
39
@@ -XXX,XX +XXX,XX @@ DO_VSHRN_SAT_UB(vqrshrnb_ub, vqrshrnt_ub, DO_RSHRN_UB)
39
- for (i = 0; i < len_align; i += 8) {
40
DO_VSHRN_SAT_UH(vqrshrnb_uh, vqrshrnt_uh, DO_RSHRN_UH)
40
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
41
DO_VSHRN_SAT_SB(vqrshrunbb, vqrshruntb, DO_RSHRUN_B)
41
+ t1 = tcg_temp_new_i64();
42
DO_VSHRN_SAT_SH(vqrshrunbh, vqrshrunth, DO_RSHRUN_H)
42
+ t16 = tcg_temp_new_i128();
43
+
43
+
44
+uint32_t HELPER(mve_vshlc)(CPUARMState *env, void *vd, uint32_t rdm,
44
+ for (i = 0; i < len_align; i += 16) {
45
+ uint32_t shift)
45
+ tcg_gen_qemu_ld_i128(t16, clean_addr, midx,
46
+{
46
+ MO_LE | MO_128 | MO_ATOM_NONE);
47
+ uint32_t *d = vd;
47
+ tcg_gen_extr_i128_i64(t0, t1, t16);
48
+ uint16_t mask = mve_element_mask(env);
48
tcg_gen_st_i64(t0, base, vofs + i);
49
+ unsigned e;
49
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
50
+ uint32_t r;
50
+ tcg_gen_st_i64(t1, base, vofs + i + 8);
51
+
51
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
52
+ /*
52
}
53
+ * For each 32-bit element, we shift it left, bringing in the
53
} else {
54
+ * low 'shift' bits of rdm at the bottom. Bits shifted out at
54
TCGLabel *loop = gen_new_label();
55
+ * the top become the new rdm, if the predicate mask permits.
55
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
56
+ * The final rdm value is returned to update the register.
56
tcg_gen_movi_ptr(i, 0);
57
+ * shift == 0 here means "shift by 32 bits".
57
gen_set_label(loop);
58
+ */
58
59
+ if (shift == 0) {
59
- t0 = tcg_temp_new_i64();
60
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) {
60
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
61
+ r = rdm;
61
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
62
+ if (mask & 1) {
62
+ t16 = tcg_temp_new_i128();
63
+ rdm = d[H4(e)];
63
+ tcg_gen_qemu_ld_i128(t16, clean_addr, midx,
64
+ }
64
+ MO_LE | MO_128 | MO_ATOM_NONE);
65
+ mergemask(&d[H4(e)], r, mask);
65
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
66
+ }
66
67
+ } else {
67
tp = tcg_temp_new_ptr();
68
+ uint32_t shiftmask = MAKE_64BIT_MASK(0, shift);
68
tcg_gen_add_ptr(tp, base, i);
69
+
69
- tcg_gen_addi_ptr(i, i, 8);
70
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) {
70
+ tcg_gen_addi_ptr(i, i, 16);
71
+ r = (d[H4(e)] << shift) | (rdm & shiftmask);
71
+
72
+ if (mask & 1) {
72
+ t0 = tcg_temp_new_i64();
73
+ rdm = d[H4(e)] >> (32 - shift);
73
+ t1 = tcg_temp_new_i64();
74
+ }
74
+ tcg_gen_extr_i128_i64(t0, t1, t16);
75
+ mergemask(&d[H4(e)], r, mask);
75
+
76
tcg_gen_st_i64(t0, tp, vofs);
77
+ tcg_gen_st_i64(t1, tp, vofs + 8);
78
79
tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
80
}
81
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
82
* Predicate register loads can be any multiple of 2.
83
* Note that we still store the entire 64-bit unit into cpu_env.
84
*/
85
+ if (len_remain >= 8) {
86
+ t0 = tcg_temp_new_i64();
87
+ tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ | MO_ATOM_NONE);
88
+ tcg_gen_st_i64(t0, base, vofs + len_align);
89
+ len_remain -= 8;
90
+ len_align += 8;
91
+ if (len_remain) {
92
+ tcg_gen_addi_i64(clean_addr, clean_addr, 8);
76
+ }
93
+ }
77
+ }
94
+ }
78
+ mve_advance_vpt(env);
95
if (len_remain) {
79
+ return rdm;
96
t0 = tcg_temp_new_i64();
80
+}
97
switch (len_remain) {
81
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
98
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
82
index XXXXXXX..XXXXXXX 100644
99
case 4:
83
--- a/target/arm/translate-mve.c
100
case 8:
84
+++ b/target/arm/translate-mve.c
101
tcg_gen_qemu_ld_i64(t0, clean_addr, midx,
85
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_N(VQRSHRNB_U, vqrshrnb_u)
102
- MO_LE | ctz32(len_remain));
86
DO_2SHIFT_N(VQRSHRNT_U, vqrshrnt_u)
103
+ MO_LE | ctz32(len_remain) | MO_ATOM_NONE);
87
DO_2SHIFT_N(VQRSHRUNB, vqrshrunb)
104
break;
88
DO_2SHIFT_N(VQRSHRUNT, vqrshrunt)
105
89
+
106
case 6:
90
+static bool trans_VSHLC(DisasContext *s, arg_VSHLC *a)
107
t1 = tcg_temp_new_i64();
91
+{
108
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUL);
92
+ /*
109
+ tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUL | MO_ATOM_NONE);
93
+ * Whole Vector Left Shift with Carry. The carry is taken
110
tcg_gen_addi_i64(clean_addr, clean_addr, 4);
94
+ * from a general purpose register and written back there.
111
- tcg_gen_qemu_ld_i64(t1, clean_addr, midx, MO_LEUW);
95
+ * An imm of 0 means "shift by 32".
112
+ tcg_gen_qemu_ld_i64(t1, clean_addr, midx, MO_LEUW | MO_ATOM_NONE);
96
+ */
113
tcg_gen_deposit_i64(t0, t0, t1, 32, 32);
97
+ TCGv_ptr qd;
114
break;
98
+ TCGv_i32 rdm;
115
99
+
116
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
100
+ if (!dc_isar_feature(aa32_mve, s) || !mve_check_qreg_bank(s, a->qd)) {
117
void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
101
+ return false;
118
int len, int rn, int imm)
119
{
120
- int len_align = QEMU_ALIGN_DOWN(len, 8);
121
- int len_remain = len % 8;
122
- int nparts = len / 8 + ctpop8(len_remain);
123
+ int len_align = QEMU_ALIGN_DOWN(len, 16);
124
+ int len_remain = len % 16;
125
+ int nparts = len / 16 + ctpop8(len_remain);
126
int midx = get_mem_index(s);
127
- TCGv_i64 dirty_addr, clean_addr, t0;
128
+ TCGv_i64 dirty_addr, clean_addr, t0, t1;
129
+ TCGv_i128 t16;
130
131
dirty_addr = tcg_temp_new_i64();
132
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
133
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
134
int i;
135
136
t0 = tcg_temp_new_i64();
137
+ t1 = tcg_temp_new_i64();
138
+ t16 = tcg_temp_new_i128();
139
for (i = 0; i < len_align; i += 8) {
140
tcg_gen_ld_i64(t0, base, vofs + i);
141
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
142
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
143
+ tcg_gen_ld_i64(t1, base, vofs + i + 8);
144
+ tcg_gen_concat_i64_i128(t16, t0, t1);
145
+ tcg_gen_qemu_st_i128(t16, clean_addr, midx,
146
+ MO_LE | MO_128 | MO_ATOM_NONE);
147
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
148
}
149
} else {
150
TCGLabel *loop = gen_new_label();
151
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
152
gen_set_label(loop);
153
154
t0 = tcg_temp_new_i64();
155
+ t1 = tcg_temp_new_i64();
156
tp = tcg_temp_new_ptr();
157
tcg_gen_add_ptr(tp, base, i);
158
tcg_gen_ld_i64(t0, tp, vofs);
159
- tcg_gen_addi_ptr(i, i, 8);
160
+ tcg_gen_ld_i64(t1, tp, vofs + 8);
161
+ tcg_gen_addi_ptr(i, i, 16);
162
163
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
164
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
165
+ t16 = tcg_temp_new_i128();
166
+ tcg_gen_concat_i64_i128(t16, t0, t1);
167
+
168
+ tcg_gen_qemu_st_i128(t16, clean_addr, midx, MO_LEUQ);
169
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
170
171
tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
172
}
173
174
/* Predicate register stores can be any multiple of 2. */
175
+ if (len_remain >= 8) {
176
+ t0 = tcg_temp_new_i64();
177
+ tcg_gen_st_i64(t0, base, vofs + len_align);
178
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ | MO_ATOM_NONE);
179
+ len_remain -= 8;
180
+ len_align += 8;
181
+ if (len_remain) {
182
+ tcg_gen_addi_i64(clean_addr, clean_addr, 8);
183
+ }
102
+ }
184
+ }
103
+ if (a->rdm == 13 || a->rdm == 15) {
185
if (len_remain) {
104
+ /* CONSTRAINED UNPREDICTABLE: we UNDEF */
186
t0 = tcg_temp_new_i64();
105
+ return false;
187
tcg_gen_ld_i64(t0, base, vofs + len_align);
106
+ }
188
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
107
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
189
case 4:
108
+ return true;
190
case 8:
109
+ }
191
tcg_gen_qemu_st_i64(t0, clean_addr, midx,
110
+
192
- MO_LE | ctz32(len_remain));
111
+ qd = mve_qreg_ptr(a->qd);
193
+ MO_LE | ctz32(len_remain) | MO_ATOM_NONE);
112
+ rdm = load_reg(s, a->rdm);
194
break;
113
+ gen_helper_mve_vshlc(rdm, cpu_env, qd, rdm, tcg_constant_i32(a->imm));
195
114
+ store_reg(s, a->rdm, rdm);
196
case 6:
115
+ tcg_temp_free_ptr(qd);
197
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUL);
116
+ mve_update_eci(s);
198
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUL | MO_ATOM_NONE);
117
+ return true;
199
tcg_gen_addi_i64(clean_addr, clean_addr, 4);
118
+}
200
tcg_gen_shri_i64(t0, t0, 32);
201
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUW);
202
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUW | MO_ATOM_NONE);
203
break;
204
205
default:
119
--
206
--
120
2.20.1
207
2.34.1
121
122
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
No need to duplicate this check across multiple call sites.
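
Sketch of the effect at a call site; the MTE check moves into the helper:

    /* Before, repeated at each call site: */
    clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, size);
    gen_load_exclusive(s, rt, rt2, clean_addr, size, false);

    /* After: the helper takes rn and performs the check itself. */
    gen_load_exclusive(s, rt, rt2, rn, size, false);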
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20230530191438.411344-9-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/tcg/translate-a64.c | 44 ++++++++++++++++------------------
11
1 file changed, 21 insertions(+), 23 deletions(-)
12
13
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/tcg/translate-a64.c
16
+++ b/target/arm/tcg/translate-a64.c
17
@@ -XXX,XX +XXX,XX @@ static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
18
* races in multi-threaded linux-user and when MTTCG softmmu is
19
* enabled.
20
*/
21
-static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
22
- TCGv_i64 addr, int size, bool is_pair)
23
+static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
24
+ int size, bool is_pair)
25
{
26
int idx = get_mem_index(s);
27
MemOp memop;
28
+ TCGv_i64 dirty_addr, clean_addr;
29
+
30
+ s->is_ldex = true;
31
+ dirty_addr = cpu_reg_sp(s, rn);
32
+ clean_addr = gen_mte_check1(s, dirty_addr, false, rn != 31, size);
33
34
g_assert(size <= 3);
35
if (is_pair) {
36
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
37
if (size == 2) {
38
/* The pair must be single-copy atomic for the doubleword. */
39
memop = finalize_memop(s, MO_64 | MO_ALIGN);
40
- tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
41
+ tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
42
if (s->be_data == MO_LE) {
43
tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
44
tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 32, 32);
45
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
46
47
memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16,
48
MO_ATOM_IFALIGN_PAIR);
49
- tcg_gen_qemu_ld_i128(t16, addr, idx, memop);
50
+ tcg_gen_qemu_ld_i128(t16, clean_addr, idx, memop);
51
52
if (s->be_data == MO_LE) {
53
tcg_gen_extr_i128_i64(cpu_exclusive_val,
54
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
55
}
56
} else {
57
memop = finalize_memop(s, size | MO_ALIGN);
58
- tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
59
+ tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
60
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
61
}
62
- tcg_gen_mov_i64(cpu_exclusive_addr, addr);
63
+ tcg_gen_mov_i64(cpu_exclusive_addr, clean_addr);
64
}
65
66
static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
67
- TCGv_i64 addr, int size, int is_pair)
68
+ int rn, int size, int is_pair)
69
{
70
/* if (env->exclusive_addr == addr && env->exclusive_val == [addr]
71
* && (!is_pair || env->exclusive_high == [addr + datasize])) {
72
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
73
*/
74
TCGLabel *fail_label = gen_new_label();
75
TCGLabel *done_label = gen_new_label();
76
- TCGv_i64 tmp;
77
+ TCGv_i64 tmp, dirty_addr, clean_addr;
78
79
- tcg_gen_brcond_i64(TCG_COND_NE, addr, cpu_exclusive_addr, fail_label);
80
+ dirty_addr = cpu_reg_sp(s, rn);
81
+ clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, size);
82
+
83
+ tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
84
85
tmp = tcg_temp_new_i64();
86
if (is_pair) {
87
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
88
if (is_lasr) {
89
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
90
}
91
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
92
- true, rn != 31, size);
93
- gen_store_exclusive(s, rs, rt, rt2, clean_addr, size, false);
94
+ gen_store_exclusive(s, rs, rt, rt2, rn, size, false);
95
return;
96
97
case 0x4: /* LDXR */
98
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
99
if (rn == 31) {
100
gen_check_sp_alignment(s);
101
}
102
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
103
- false, rn != 31, size);
104
- s->is_ldex = true;
105
- gen_load_exclusive(s, rt, rt2, clean_addr, size, false);
106
+ gen_load_exclusive(s, rt, rt2, rn, size, false);
107
if (is_lasr) {
108
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
109
}
110
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
111
if (is_lasr) {
112
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
113
}
114
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
115
- true, rn != 31, size);
116
- gen_store_exclusive(s, rs, rt, rt2, clean_addr, size, true);
117
+ gen_store_exclusive(s, rs, rt, rt2, rn, size, true);
118
return;
119
}
120
if (rt2 == 31
121
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
122
if (rn == 31) {
123
gen_check_sp_alignment(s);
124
}
125
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
126
- false, rn != 31, size);
127
- s->is_ldex = true;
128
- gen_load_exclusive(s, rt, rt2, clean_addr, size, true);
129
+ gen_load_exclusive(s, rt, rt2, rn, size, true);
130
if (is_lasr) {
131
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
132
}
133
--
134
2.34.1
1
From: Joe Komlodi <joe.komlodi@xilinx.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
If the CPU is running in default NaN mode (FPCR.DN == 1) and we execute
3
This is required for LSE2, where the pair must be treated atomically if
4
FRSQRTE, FRECPE, or FRECPX with a signaling NaN, parts_silence_nan_frac() will
4
it does not cross a 16-byte boundary. But it simplifies the code to do
5
assert due to fpst->default_nan_mode being set.
5
this always.
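
Abridged from the hunk below (load direction, alignment handling omitted):

    MemOp mop = finalize_memop_pair(s, size + 1);   /* one op for the whole pair */

    if (size == 2) {
        /* 32-bit pair: a single 64-bit load, then split into halves.
         * o1/o2 select the low/high word according to endianness. */
        tcg_gen_qemu_ld_i64(tcg_rt, clean_addr, get_mem_index(s), mop);
        tcg_gen_extract_i64(tcg_rt2, tcg_rt, o2, 32);
        tcg_gen_extract_i64(tcg_rt, tcg_rt, o1, 32);
    } else {
        /* 64-bit pair: a single 128-bit load, then split (LE shown). */
        TCGv_i128 tmp = tcg_temp_new_i128();

        tcg_gen_qemu_ld_i128(tmp, clean_addr, get_mem_index(s), mop);
        tcg_gen_extr_i128_i64(tcg_rt, tcg_rt2, tmp);
    }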
6
6
7
To avoid this, we check to see what NaN mode we're running in before we call
8
floatxx_silence_nan().
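
The pattern applied at each affected helper, shown here for the float32 case:

    if (float32_is_signaling_nan(f32, fpst)) {
        float_raise(float_flag_invalid, fpst);
        if (!fpst->default_nan_mode) {
            nan = float32_silence_nan(f32, fpst);
        }
    }
    if (fpst->default_nan_mode) {
        nan = float32_default_nan(fpst);
    }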
9
10
Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1624662174-175828-2-git-send-email-joe.komlodi@xilinx.com
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230530191438.411344-10-richard.henderson@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
11
---
16
target/arm/helper-a64.c | 12 +++++++++---
12
target/arm/tcg/translate-a64.c | 70 ++++++++++++++++++++++++++--------
17
target/arm/vfp_helper.c | 24 ++++++++++++++++++------
13
1 file changed, 55 insertions(+), 15 deletions(-)
18
2 files changed, 27 insertions(+), 9 deletions(-)
19
14
20
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
15
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper-a64.c
17
--- a/target/arm/tcg/translate-a64.c
23
+++ b/target/arm/helper-a64.c
18
+++ b/target/arm/tcg/translate-a64.c
24
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
19
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
25
float16 nan = a;
20
} else {
26
if (float16_is_signaling_nan(a, fpst)) {
21
TCGv_i64 tcg_rt = cpu_reg(s, rt);
27
float_raise(float_flag_invalid, fpst);
22
TCGv_i64 tcg_rt2 = cpu_reg(s, rt2);
28
- nan = float16_silence_nan(a, fpst);
23
+ MemOp mop = size + 1;
29
+ if (!fpst->default_nan_mode) {
24
+
30
+ nan = float16_silence_nan(a, fpst);
25
+ /*
26
+ * With LSE2, non-sign-extending pairs are treated atomically if
27
+ * aligned, and if unaligned one of the pair will be completely
28
+ * within a 16-byte block and that element will be atomic.
29
+ * Otherwise each element is separately atomic.
30
+ * In all cases, issue one operation with the correct atomicity.
31
+ *
32
+ * This treats sign-extending loads like zero-extending loads,
33
+ * since that reuses the most code below.
34
+ */
35
+ if (s->align_mem) {
36
+ mop |= (size == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
37
+ }
38
+ mop = finalize_memop_pair(s, mop);
39
40
if (is_load) {
41
- TCGv_i64 tmp = tcg_temp_new_i64();
42
+ if (size == 2) {
43
+ int o2 = s->be_data == MO_LE ? 32 : 0;
44
+ int o1 = o2 ^ 32;
45
46
- /* Do not modify tcg_rt before recognizing any exception
47
- * from the second load.
48
- */
49
- do_gpr_ld(s, tmp, clean_addr, size + is_signed * MO_SIGN,
50
- false, false, 0, false, false);
51
- tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
52
- do_gpr_ld(s, tcg_rt2, clean_addr, size + is_signed * MO_SIGN,
53
- false, false, 0, false, false);
54
+ tcg_gen_qemu_ld_i64(tcg_rt, clean_addr, get_mem_index(s), mop);
55
+ if (is_signed) {
56
+ tcg_gen_sextract_i64(tcg_rt2, tcg_rt, o2, 32);
57
+ tcg_gen_sextract_i64(tcg_rt, tcg_rt, o1, 32);
58
+ } else {
59
+ tcg_gen_extract_i64(tcg_rt2, tcg_rt, o2, 32);
60
+ tcg_gen_extract_i64(tcg_rt, tcg_rt, o1, 32);
61
+ }
62
+ } else {
63
+ TCGv_i128 tmp = tcg_temp_new_i128();
64
65
- tcg_gen_mov_i64(tcg_rt, tmp);
66
+ tcg_gen_qemu_ld_i128(tmp, clean_addr, get_mem_index(s), mop);
67
+ if (s->be_data == MO_LE) {
68
+ tcg_gen_extr_i128_i64(tcg_rt, tcg_rt2, tmp);
69
+ } else {
70
+ tcg_gen_extr_i128_i64(tcg_rt2, tcg_rt, tmp);
71
+ }
72
+ }
73
} else {
74
- do_gpr_st(s, tcg_rt, clean_addr, size,
75
- false, 0, false, false);
76
- tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
77
- do_gpr_st(s, tcg_rt2, clean_addr, size,
78
- false, 0, false, false);
79
+ if (size == 2) {
80
+ TCGv_i64 tmp = tcg_temp_new_i64();
81
+
82
+ if (s->be_data == MO_LE) {
83
+ tcg_gen_concat32_i64(tmp, tcg_rt, tcg_rt2);
84
+ } else {
85
+ tcg_gen_concat32_i64(tmp, tcg_rt2, tcg_rt);
86
+ }
87
+ tcg_gen_qemu_st_i64(tmp, clean_addr, get_mem_index(s), mop);
88
+ } else {
89
+ TCGv_i128 tmp = tcg_temp_new_i128();
90
+
91
+ if (s->be_data == MO_LE) {
92
+ tcg_gen_concat_i64_i128(tmp, tcg_rt, tcg_rt2);
93
+ } else {
94
+ tcg_gen_concat_i64_i128(tmp, tcg_rt2, tcg_rt);
95
+ }
96
+ tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);
31
+ }
97
+ }
32
}
98
}
33
if (fpst->default_nan_mode) {
99
}
34
nan = float16_default_nan(fpst);
100
35
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
36
float32 nan = a;
37
if (float32_is_signaling_nan(a, fpst)) {
38
float_raise(float_flag_invalid, fpst);
39
- nan = float32_silence_nan(a, fpst);
40
+ if (!fpst->default_nan_mode) {
41
+ nan = float32_silence_nan(a, fpst);
42
+ }
43
}
44
if (fpst->default_nan_mode) {
45
nan = float32_default_nan(fpst);
46
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
47
float64 nan = a;
48
if (float64_is_signaling_nan(a, fpst)) {
49
float_raise(float_flag_invalid, fpst);
50
- nan = float64_silence_nan(a, fpst);
51
+ if (!fpst->default_nan_mode) {
52
+ nan = float64_silence_nan(a, fpst);
53
+ }
54
}
55
if (fpst->default_nan_mode) {
56
nan = float64_default_nan(fpst);
57
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/target/arm/vfp_helper.c
60
+++ b/target/arm/vfp_helper.c
61
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
62
float16 nan = f16;
63
if (float16_is_signaling_nan(f16, fpst)) {
64
float_raise(float_flag_invalid, fpst);
65
- nan = float16_silence_nan(f16, fpst);
66
+ if (!fpst->default_nan_mode) {
67
+ nan = float16_silence_nan(f16, fpst);
68
+ }
69
}
70
if (fpst->default_nan_mode) {
71
nan = float16_default_nan(fpst);
72
@@ -XXX,XX +XXX,XX @@ float32 HELPER(recpe_f32)(float32 input, void *fpstp)
73
float32 nan = f32;
74
if (float32_is_signaling_nan(f32, fpst)) {
75
float_raise(float_flag_invalid, fpst);
76
- nan = float32_silence_nan(f32, fpst);
77
+ if (!fpst->default_nan_mode) {
78
+ nan = float32_silence_nan(f32, fpst);
79
+ }
80
}
81
if (fpst->default_nan_mode) {
82
nan = float32_default_nan(fpst);
83
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpe_f64)(float64 input, void *fpstp)
84
float64 nan = f64;
85
if (float64_is_signaling_nan(f64, fpst)) {
86
float_raise(float_flag_invalid, fpst);
87
- nan = float64_silence_nan(f64, fpst);
88
+ if (!fpst->default_nan_mode) {
89
+ nan = float64_silence_nan(f64, fpst);
90
+ }
91
}
92
if (fpst->default_nan_mode) {
93
nan = float64_default_nan(fpst);
94
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
95
float16 nan = f16;
96
if (float16_is_signaling_nan(f16, s)) {
97
float_raise(float_flag_invalid, s);
98
- nan = float16_silence_nan(f16, s);
99
+ if (!s->default_nan_mode) {
100
+ nan = float16_silence_nan(f16, fpstp);
101
+ }
102
}
103
if (s->default_nan_mode) {
104
nan = float16_default_nan(s);
105
@@ -XXX,XX +XXX,XX @@ float32 HELPER(rsqrte_f32)(float32 input, void *fpstp)
106
float32 nan = f32;
107
if (float32_is_signaling_nan(f32, s)) {
108
float_raise(float_flag_invalid, s);
109
- nan = float32_silence_nan(f32, s);
110
+ if (!s->default_nan_mode) {
111
+ nan = float32_silence_nan(f32, fpstp);
112
+ }
113
}
114
if (s->default_nan_mode) {
115
nan = float32_default_nan(s);
116
@@ -XXX,XX +XXX,XX @@ float64 HELPER(rsqrte_f64)(float64 input, void *fpstp)
117
float64 nan = f64;
118
if (float64_is_signaling_nan(f64, s)) {
119
float_raise(float_flag_invalid, s);
120
- nan = float64_silence_nan(f64, s);
121
+ if (!s->default_nan_mode) {
122
+ nan = float64_silence_nan(f64, fpstp);
123
+ }
124
}
125
if (s->default_nan_mode) {
126
nan = float64_default_nan(s);
127
--
101
--
128
2.20.1
102
2.34.1
129
130
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
We are going to need the complete memop beforehand,
4
so let's not compute it twice.
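
Sketch of the new division of labour; the caller builds the complete MemOp once and the do_gpr_* helpers just consume it:

    /* Caller (LDAR case shown, abridged from the hunks below): */
    memop = finalize_memop(s, size | MO_ALIGN);
    clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, size);
    do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
              rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);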
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-11-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/tcg/translate-a64.c | 61 +++++++++++++++++++---------------
12
1 file changed, 35 insertions(+), 26 deletions(-)
13
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate-a64.c
17
+++ b/target/arm/tcg/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static void do_gpr_st_memidx(DisasContext *s, TCGv_i64 source,
19
unsigned int iss_srt,
20
bool iss_sf, bool iss_ar)
21
{
22
- memop = finalize_memop(s, memop);
23
tcg_gen_qemu_st_i64(source, tcg_addr, memidx, memop);
24
25
if (iss_valid) {
26
@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld_memidx(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
27
bool iss_valid, unsigned int iss_srt,
28
bool iss_sf, bool iss_ar)
29
{
30
- memop = finalize_memop(s, memop);
31
tcg_gen_qemu_ld_i64(dest, tcg_addr, memidx, memop);
32
33
if (extend && (memop & MO_SIGN)) {
34
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
35
int o2_L_o1_o0 = extract32(insn, 21, 3) * 2 | is_lasr;
36
int size = extract32(insn, 30, 2);
37
TCGv_i64 clean_addr;
38
+ MemOp memop;
39
40
switch (o2_L_o1_o0) {
41
case 0x0: /* STXR */
42
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
43
gen_check_sp_alignment(s);
44
}
45
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
46
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
47
+ memop = finalize_memop(s, size | MO_ALIGN);
48
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
49
true, rn != 31, size);
50
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
51
- do_gpr_st(s, cpu_reg(s, rt), clean_addr, size | MO_ALIGN, true, rt,
52
+ do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
53
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
54
return;
55
56
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
57
if (rn == 31) {
58
gen_check_sp_alignment(s);
59
}
60
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
61
+ memop = finalize_memop(s, size | MO_ALIGN);
62
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
63
false, rn != 31, size);
64
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
65
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size | MO_ALIGN, false, true,
66
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
67
rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
68
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
69
return;
70
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
71
} else {
72
/* Only unsigned 32bit loads target 32bit registers. */
73
bool iss_sf = opc != 0;
74
+ MemOp memop = finalize_memop(s, size + is_signed * MO_SIGN);
75
76
- do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
77
- false, true, rt, iss_sf, false);
78
+ do_gpr_ld(s, tcg_rt, clean_addr, memop, false, true, rt, iss_sf, false);
79
}
80
}
81
82
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
83
bool post_index;
84
bool writeback;
85
int memidx;
86
-
87
+ MemOp memop;
88
TCGv_i64 clean_addr, dirty_addr;
89
90
if (is_vector) {
91
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
92
return;
93
}
94
is_store = (opc == 0);
95
- is_signed = extract32(opc, 1, 1);
96
+ is_signed = !is_store && extract32(opc, 1, 1);
97
is_extended = (size < 3) && extract32(opc, 0, 1);
98
}
99
100
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
101
}
102
103
memidx = is_unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
104
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
105
+
106
clean_addr = gen_mte_check1_mmuidx(s, dirty_addr, is_store,
107
writeback || rn != 31,
108
size, is_unpriv, memidx);
109
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
110
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
111
112
if (is_store) {
113
- do_gpr_st_memidx(s, tcg_rt, clean_addr, size, memidx,
114
+ do_gpr_st_memidx(s, tcg_rt, clean_addr, memop, memidx,
115
iss_valid, rt, iss_sf, false);
116
} else {
117
- do_gpr_ld_memidx(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
118
+ do_gpr_ld_memidx(s, tcg_rt, clean_addr, memop,
119
is_extended, memidx,
120
iss_valid, rt, iss_sf, false);
121
}
122
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
123
bool is_signed = false;
124
bool is_store = false;
125
bool is_extended = false;
126
-
127
TCGv_i64 tcg_rm, clean_addr, dirty_addr;
128
+ MemOp memop;
129
130
if (extract32(opt, 1, 1) == 0) {
131
unallocated_encoding(s);
132
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
133
return;
134
}
135
is_store = (opc == 0);
136
- is_signed = extract32(opc, 1, 1);
137
+ is_signed = !is_store && extract32(opc, 1, 1);
138
is_extended = (size < 3) && extract32(opc, 0, 1);
139
}
140
141
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
142
ext_and_shift_reg(tcg_rm, tcg_rm, opt, shift ? size : 0);
143
144
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
145
+
146
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
147
clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, size);
148
149
if (is_vector) {
150
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
151
} else {
152
TCGv_i64 tcg_rt = cpu_reg(s, rt);
153
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
154
+
155
if (is_store) {
156
- do_gpr_st(s, tcg_rt, clean_addr, size,
157
+ do_gpr_st(s, tcg_rt, clean_addr, memop,
158
true, rt, iss_sf, false);
159
} else {
160
- do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
161
+ do_gpr_ld(s, tcg_rt, clean_addr, memop,
162
is_extended, true, rt, iss_sf, false);
163
}
164
}
165
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
166
int rn = extract32(insn, 5, 5);
167
unsigned int imm12 = extract32(insn, 10, 12);
168
unsigned int offset;
169
-
170
TCGv_i64 clean_addr, dirty_addr;
171
-
172
bool is_store;
173
bool is_signed = false;
174
bool is_extended = false;
175
+ MemOp memop;
176
177
if (is_vector) {
178
size |= (opc & 2) << 1;
179
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
180
return;
181
}
182
is_store = (opc == 0);
183
- is_signed = extract32(opc, 1, 1);
184
+ is_signed = !is_store && extract32(opc, 1, 1);
185
is_extended = (size < 3) && extract32(opc, 0, 1);
186
}
187
188
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
189
dirty_addr = read_cpu_reg_sp(s, rn, 1);
190
offset = imm12 << size;
191
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
192
+
193
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
194
clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, size);
195
196
if (is_vector) {
197
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
198
TCGv_i64 tcg_rt = cpu_reg(s, rt);
199
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
200
if (is_store) {
201
- do_gpr_st(s, tcg_rt, clean_addr, size,
202
- true, rt, iss_sf, false);
203
+ do_gpr_st(s, tcg_rt, clean_addr, memop, true, rt, iss_sf, false);
204
} else {
205
- do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
206
+ do_gpr_ld(s, tcg_rt, clean_addr, memop,
207
is_extended, true, rt, iss_sf, false);
208
}
209
}
210
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
211
bool a = extract32(insn, 23, 1);
212
TCGv_i64 tcg_rs, tcg_rt, clean_addr;
213
AtomicThreeOpFn *fn = NULL;
214
- MemOp mop = s->be_data | size | MO_ALIGN;
215
+ MemOp mop = finalize_memop(s, size | MO_ALIGN);
216
217
if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
218
unallocated_encoding(s);
219
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
220
* full load-acquire (we only need "load-acquire processor consistent"),
221
* but we choose to implement them as full LDAQ.
222
*/
223
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false,
224
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, mop, false,
225
true, rt, disas_ldst_compute_iss_sf(size, false, 0), true);
226
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
227
return;
228
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
229
bool use_key_a = !extract32(insn, 23, 1);
230
int offset;
231
TCGv_i64 clean_addr, dirty_addr, tcg_rt;
232
+ MemOp memop;
233
234
if (size != 3 || is_vector || !dc_isar_feature(aa64_pauth, s)) {
235
unallocated_encoding(s);
236
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
237
offset = sextract32(offset << size, 0, 10 + size);
238
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
239
240
+ memop = finalize_memop(s, size);
241
+
242
/* Note that "clean" and "dirty" here refer to TBI not PAC. */
243
clean_addr = gen_mte_check1(s, dirty_addr, false,
244
is_wback || rn != 31, size);
245
246
tcg_rt = cpu_reg(s, rt);
247
- do_gpr_ld(s, tcg_rt, clean_addr, size,
248
+ do_gpr_ld(s, tcg_rt, clean_addr, memop,
249
/* extend */ false, /* iss_valid */ !is_wback,
250
/* iss_srt */ rt, /* iss_sf */ true, /* iss_ar */ false);
251
252
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
253
}
254
255
/* TODO: ARMv8.4-LSE SCTLR.nAA */
256
- mop = size | MO_ALIGN;
257
+ mop = finalize_memop(s, size | MO_ALIGN);
258
259
switch (opc) {
260
case 0: /* STLURB */
--
2.34.1
1
In do_ldst(), the calculation of the offset needs to be based on the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
size of the memory access, not the size of the elements in the
3
vector. This meant we were getting it wrong for the widening and
4
narrowing variants of the various VLDR and VSTR insns.
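(Illustrative example, not part of the original commit message: for a
widening load such as VLDRB.U16 the vector elements are halfwords but
each beat reads a single byte of memory, so the offset must scale with
the memory access size rather than the element size:

    offset = a->imm << msize;    /* msize is MO_8 for VLDRB.U16 */

which is the rule the change below adopts.)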
5
2
3
We are going to need the complete memop beforehand,
4
so let's not compute it twice.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230530191438.411344-12-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-2-peter.maydell@linaro.org
9
---
11
---
10
target/arm/translate-mve.c | 17 +++++++++--------
12
target/arm/tcg/translate-a64.c | 43 ++++++++++++++++++----------------
11
1 file changed, 9 insertions(+), 8 deletions(-)
13
1 file changed, 23 insertions(+), 20 deletions(-)
12
14
13
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
15
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-mve.c
17
--- a/target/arm/tcg/translate-a64.c
16
+++ b/target/arm/translate-mve.c
18
+++ b/target/arm/tcg/translate-a64.c
17
@@ -XXX,XX +XXX,XX @@ static bool mve_skip_first_beat(DisasContext *s)
19
@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
20
/*
21
* Store from FP register to memory
22
*/
23
-static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
24
+static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, MemOp mop)
25
{
26
/* This writes the bottom N bits of a 128 bit wide vector to memory */
27
TCGv_i64 tmplo = tcg_temp_new_i64();
28
- MemOp mop = finalize_memop_asimd(s, size);
29
30
tcg_gen_ld_i64(tmplo, cpu_env, fp_reg_offset(s, srcidx, MO_64));
31
32
- if (size < MO_128) {
33
+ if ((mop & MO_SIZE) < MO_128) {
34
tcg_gen_qemu_st_i64(tmplo, tcg_addr, get_mem_index(s), mop);
35
} else {
36
TCGv_i64 tmphi = tcg_temp_new_i64();
37
@@ -XXX,XX +XXX,XX @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
38
/*
39
* Load from memory to FP register
40
*/
41
-static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
42
+static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, MemOp mop)
43
{
44
/* This always zero-extends and writes to a full 128 bit wide vector */
45
TCGv_i64 tmplo = tcg_temp_new_i64();
46
TCGv_i64 tmphi = NULL;
47
- MemOp mop = finalize_memop_asimd(s, size);
48
49
- if (size < MO_128) {
50
+ if ((mop & MO_SIZE) < MO_128) {
51
tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), mop);
52
} else {
53
TCGv_i128 t16 = tcg_temp_new_i128();
54
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
55
bool is_signed = false;
56
int size = 2;
57
TCGv_i64 tcg_rt, clean_addr;
58
+ MemOp memop;
59
60
if (is_vector) {
61
if (opc == 3) {
62
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
63
if (!fp_access_check(s)) {
64
return;
65
}
66
+ memop = finalize_memop_asimd(s, size);
67
} else {
68
if (opc == 3) {
69
/* PRFM (literal) : prefetch */
70
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
71
}
72
size = 2 + extract32(opc, 0, 1);
73
is_signed = extract32(opc, 1, 1);
74
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
75
}
76
77
tcg_rt = cpu_reg(s, rt);
78
79
clean_addr = tcg_temp_new_i64();
80
gen_pc_plus_diff(s, clean_addr, imm);
81
+
82
if (is_vector) {
83
- do_fp_ld(s, rt, clean_addr, size);
84
+ do_fp_ld(s, rt, clean_addr, memop);
85
} else {
86
/* Only unsigned 32bit loads target 32bit registers. */
87
bool iss_sf = opc != 0;
88
- MemOp memop = finalize_memop(s, size + is_signed * MO_SIGN);
89
-
90
do_gpr_ld(s, tcg_rt, clean_addr, memop, false, true, rt, iss_sf, false);
18
}
91
}
19
}
92
}
20
93
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
21
-static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
94
(wback || rn != 31) && !set_tag, 2 << size);
22
+static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn,
95
23
+ unsigned msize)
96
if (is_vector) {
24
{
97
+ MemOp mop = finalize_memop_asimd(s, size);
25
TCGv_i32 addr;
98
+
26
uint32_t offset;
99
if (is_load) {
27
@@ -XXX,XX +XXX,XX @@ static bool do_ldst(DisasContext *s, arg_VLDR_VSTR *a, MVEGenLdStFn *fn)
100
- do_fp_ld(s, rt, clean_addr, size);
28
return true;
101
+ do_fp_ld(s, rt, clean_addr, mop);
102
} else {
103
- do_fp_st(s, rt, clean_addr, size);
104
+ do_fp_st(s, rt, clean_addr, mop);
105
}
106
tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
107
if (is_load) {
108
- do_fp_ld(s, rt2, clean_addr, size);
109
+ do_fp_ld(s, rt2, clean_addr, mop);
110
} else {
111
- do_fp_st(s, rt2, clean_addr, size);
112
+ do_fp_st(s, rt2, clean_addr, mop);
113
}
114
} else {
115
TCGv_i64 tcg_rt = cpu_reg(s, rt);
116
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
117
if (!fp_access_check(s)) {
118
return;
119
}
120
+ memop = finalize_memop_asimd(s, size);
121
} else {
122
if (size == 3 && opc == 2) {
123
/* PRFM - prefetch */
124
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
125
is_store = (opc == 0);
126
is_signed = !is_store && extract32(opc, 1, 1);
127
is_extended = (size < 3) && extract32(opc, 0, 1);
128
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
29
}
129
}
30
130
31
- offset = a->imm << a->size;
131
switch (idx) {
32
+ offset = a->imm << msize;
132
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
33
if (!a->a) {
34
offset = -offset;
35
}
133
}
36
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR(DisasContext *s, arg_VLDR_VSTR *a)
134
37
{ gen_helper_mve_vstrw, gen_helper_mve_vldrw },
135
memidx = is_unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
38
{ NULL, NULL }
136
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
39
};
137
40
- return do_ldst(s, a, ldstfns[a->size][a->l]);
138
clean_addr = gen_mte_check1_mmuidx(s, dirty_addr, is_store,
41
+ return do_ldst(s, a, ldstfns[a->size][a->l], a->size);
139
writeback || rn != 31,
42
}
140
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
43
141
44
-#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST) \
142
if (is_vector) {
45
+#define DO_VLDST_WIDE_NARROW(OP, SLD, ULD, ST, MSIZE) \
143
if (is_store) {
46
static bool trans_##OP(DisasContext *s, arg_VLDR_VSTR *a) \
144
- do_fp_st(s, rt, clean_addr, size);
47
{ \
145
+ do_fp_st(s, rt, clean_addr, memop);
48
static MVEGenLdStFn * const ldstfns[2][2] = { \
146
} else {
49
{ gen_helper_mve_##ST, gen_helper_mve_##SLD }, \
147
- do_fp_ld(s, rt, clean_addr, size);
50
{ NULL, gen_helper_mve_##ULD }, \
148
+ do_fp_ld(s, rt, clean_addr, memop);
51
}; \
149
}
52
- return do_ldst(s, a, ldstfns[a->u][a->l]); \
150
} else {
53
+ return do_ldst(s, a, ldstfns[a->u][a->l], MSIZE); \
151
TCGv_i64 tcg_rt = cpu_reg(s, rt);
54
}
152
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
55
153
56
-DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h)
154
if (is_vector) {
57
-DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w)
155
if (is_store) {
58
-DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w)
156
- do_fp_st(s, rt, clean_addr, size);
59
+DO_VLDST_WIDE_NARROW(VLDSTB_H, vldrb_sh, vldrb_uh, vstrb_h, MO_8)
157
+ do_fp_st(s, rt, clean_addr, memop);
60
+DO_VLDST_WIDE_NARROW(VLDSTB_W, vldrb_sw, vldrb_uw, vstrb_w, MO_8)
158
} else {
61
+DO_VLDST_WIDE_NARROW(VLDSTH_W, vldrh_sw, vldrh_uw, vstrh_w, MO_16)
159
- do_fp_ld(s, rt, clean_addr, size);
62
160
+ do_fp_ld(s, rt, clean_addr, memop);
63
static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
161
}
64
{
162
} else {
163
TCGv_i64 tcg_rt = cpu_reg(s, rt);
164
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
165
166
if (is_vector) {
167
if (is_store) {
168
- do_fp_st(s, rt, clean_addr, size);
169
+ do_fp_st(s, rt, clean_addr, memop);
170
} else {
171
- do_fp_ld(s, rt, clean_addr, size);
172
+ do_fp_ld(s, rt, clean_addr, memop);
173
}
174
} else {
175
TCGv_i64 tcg_rt = cpu_reg(s, rt);
--
2.20.1
--
2.34.1
1
Implement the MVE VADDLV insn; this is similar to VADDV, except
1
From: Richard Henderson <richard.henderson@linaro.org>
2
that it accumulates 32-bit elements into a 64-bit accumulator
3
stored in a pair of general-purpose registers.
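As a rough model of the operation (illustration only, not the QEMU
helper code; the names here are made up):

    #include <stdint.h>
    #include <stdbool.h>

    /* Signed VADDLV: add the four 32-bit lanes of Qm into a 64-bit
     * accumulator; the A bit selects starting from RdaHi:RdaLo or zero. */
    static uint64_t vaddlv_s_model(const int32_t qm[4], uint64_t rda,
                                   bool accumulate, uint16_t beat_mask)
    {
        int64_t acc = accumulate ? (int64_t)rda : 0;
        for (int e = 0; e < 4; e++) {
            if (beat_mask & (1u << (e * 4))) {  /* one mask bit per byte lane */
                acc += qm[e];
            }
        }
        return (uint64_t)acc;  /* RdaLo = acc[31:0], RdaHi = acc[63:32] */
    }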
4
2
3
Pass the completed memop to gen_mte_check1_mmuidx.
4
For the moment, do nothing more than extract the size.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-13-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210628135835.6690-15-peter.maydell@linaro.org
8
---
10
---
9
target/arm/helper-mve.h | 3 ++
11
target/arm/tcg/translate-a64.h | 2 +-
10
target/arm/mve.decode | 6 +++-
12
target/arm/tcg/translate-a64.c | 82 ++++++++++++++++++----------------
11
target/arm/mve_helper.c | 19 ++++++++++++
13
target/arm/tcg/translate-sve.c | 7 +--
12
target/arm/translate-mve.c | 63 ++++++++++++++++++++++++++++++++++++++
14
3 files changed, 49 insertions(+), 42 deletions(-)
13
4 files changed, 90 insertions(+), 1 deletion(-)
14
15
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
16
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-mve.h
18
--- a/target/arm/tcg/translate-a64.h
18
+++ b/target/arm/helper-mve.h
19
+++ b/target/arm/tcg/translate-a64.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(mve_vaddvuh, TCG_CALL_NO_WG, i32, env, ptr, i32)
20
@@ -XXX,XX +XXX,XX @@ static inline bool sme_smza_enabled_check(DisasContext *s)
20
DEF_HELPER_FLAGS_3(mve_vaddvsw, TCG_CALL_NO_WG, i32, env, ptr, i32)
21
21
DEF_HELPER_FLAGS_3(mve_vaddvuw, TCG_CALL_NO_WG, i32, env, ptr, i32)
22
TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
22
23
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
23
+DEF_HELPER_FLAGS_3(mve_vaddlv_s, TCG_CALL_NO_WG, i64, env, ptr, i64)
24
- bool tag_checked, int log2_size);
24
+DEF_HELPER_FLAGS_3(mve_vaddlv_u, TCG_CALL_NO_WG, i64, env, ptr, i64)
25
+ bool tag_checked, MemOp memop);
25
+
26
TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
26
DEF_HELPER_FLAGS_3(mve_vmovi, TCG_CALL_NO_WG, void, env, ptr, i64)
27
bool tag_checked, int size);
27
DEF_HELPER_FLAGS_3(mve_vandi, TCG_CALL_NO_WG, void, env, ptr, i64)
28
28
DEF_HELPER_FLAGS_3(mve_vorri, TCG_CALL_NO_WG, void, env, ptr, i64)
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
30
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/mve.decode
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/mve.decode
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ VQDMULH_scalar 1110 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
33
@@ -XXX,XX +XXX,XX @@ static void gen_probe_access(DisasContext *s, TCGv_i64 ptr,
34
VQRDMULH_scalar 1111 1110 0 . .. ... 1 ... 0 1110 . 110 .... @2scalar
34
*/
35
35
static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
36
# Vector add across vector
36
bool is_write, bool tag_checked,
37
-VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
37
- int log2_size, bool is_unpriv,
38
+{
38
+ MemOp memop, bool is_unpriv,
39
+ VADDV 111 u:1 1110 1111 size:2 01 ... 0 1111 0 0 a:1 0 qm:3 0 rda=%rdalo
39
int core_idx)
40
+ VADDLV 111 u:1 1110 1 ... 1001 ... 0 1111 00 a:1 0 qm:3 0 \
40
{
41
+ rdahi=%rdahi rdalo=%rdalo
41
if (tag_checked && s->mte_active[is_unpriv]) {
42
+}
42
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
43
43
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
44
# Predicate operations
44
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
45
%mask_22_13 22:1 13:3
45
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
46
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
46
- desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << log2_size) - 1);
47
index XXXXXXX..XXXXXXX 100644
47
+ desc = FIELD_DP32(desc, MTEDESC, SIZEM1, memop_size(memop) - 1);
48
--- a/target/arm/mve_helper.c
48
49
+++ b/target/arm/mve_helper.c
49
ret = tcg_temp_new_i64();
50
@@ -XXX,XX +XXX,XX @@ DO_VADDV(vaddvub, 1, uint8_t)
50
gen_helper_mte_check(ret, cpu_env, tcg_constant_i32(desc), addr);
51
DO_VADDV(vaddvuh, 2, uint16_t)
51
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
52
DO_VADDV(vaddvuw, 4, uint32_t)
53
54
+#define DO_VADDLV(OP, TYPE, LTYPE) \
55
+ uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vm, \
56
+ uint64_t ra) \
57
+ { \
58
+ uint16_t mask = mve_element_mask(env); \
59
+ unsigned e; \
60
+ TYPE *m = vm; \
61
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
62
+ if (mask & 1) { \
63
+ ra += (LTYPE)m[H4(e)]; \
64
+ } \
65
+ } \
66
+ mve_advance_vpt(env); \
67
+ return ra; \
68
+ } \
69
+
70
+DO_VADDLV(vaddlv_s, int32_t, int64_t)
71
+DO_VADDLV(vaddlv_u, uint32_t, uint64_t)
72
+
73
/* Shifts by immediate */
74
#define DO_2SHIFT(OP, ESIZE, TYPE, FN) \
75
void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
76
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/translate-mve.c
79
+++ b/target/arm/translate-mve.c
80
@@ -XXX,XX +XXX,XX @@ static bool trans_VADDV(DisasContext *s, arg_VADDV *a)
81
return true;
82
}
52
}
83
53
84
+static bool trans_VADDLV(DisasContext *s, arg_VADDLV *a)
54
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
85
+{
55
- bool tag_checked, int log2_size)
86
+ /*
56
+ bool tag_checked, MemOp memop)
87
+ * Vector Add Long Across Vector: accumulate the 32-bit
57
{
88
+ * elements of the vector into a 64-bit result stored in
58
- return gen_mte_check1_mmuidx(s, addr, is_write, tag_checked, log2_size,
89
+ * a pair of general-purpose registers.
59
+ return gen_mte_check1_mmuidx(s, addr, is_write, tag_checked, memop,
90
+ * No need to check Qm's bank: it is only 3 bits in decode.
60
false, get_mem_index(s));
91
+ */
61
}
92
+ TCGv_ptr qm;
62
93
+ TCGv_i64 rda;
63
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
94
+ TCGv_i32 rdalo, rdahi;
64
int size, bool is_pair)
95
+
65
{
96
+ if (!dc_isar_feature(aa32_mve, s)) {
66
int idx = get_mem_index(s);
97
+ return false;
67
- MemOp memop;
98
+ }
68
TCGv_i64 dirty_addr, clean_addr;
99
+ /*
69
+ MemOp memop;
100
+ * rdahi == 13 is UNPREDICTABLE; rdahi == 15 is a related
101
+ * encoding; rdalo always has bit 0 clear so cannot be 13 or 15.
102
+ */
103
+ if (a->rdahi == 13 || a->rdahi == 15) {
104
+ return false;
105
+ }
106
+ if (!mve_eci_check(s) || !vfp_access_check(s)) {
107
+ return true;
108
+ }
109
+
70
+
110
+ /*
71
+ /*
111
+ * This insn is subject to beat-wise execution. Partial execution
72
+ * For pairs:
112
+ * of an A=0 (no-accumulate) insn which does not execute the first
73
+ * if size == 2, the operation is single-copy atomic for the doubleword.
113
+ * beat must start with the current value of RdaHi:RdaLo, not zero.
74
+ * if size == 3, the operation is single-copy atomic for *each* doubleword,
75
+ * not the entire quadword, however it must be quadword aligned.
114
+ */
76
+ */
115
+ if (a->a || mve_skip_first_beat(s)) {
77
+ memop = size + is_pair;
116
+ /* Accumulate input from RdaHi:RdaLo */
78
+ if (memop == MO_128) {
117
+ rda = tcg_temp_new_i64();
79
+ memop = finalize_memop_atom(s, MO_128 | MO_ALIGN,
118
+ rdalo = load_reg(s, a->rdalo);
80
+ MO_ATOM_IFALIGN_PAIR);
119
+ rdahi = load_reg(s, a->rdahi);
120
+ tcg_gen_concat_i32_i64(rda, rdalo, rdahi);
121
+ tcg_temp_free_i32(rdalo);
122
+ tcg_temp_free_i32(rdahi);
123
+ } else {
81
+ } else {
124
+ /* Accumulate starting at zero */
82
+ memop = finalize_memop(s, memop | MO_ALIGN);
125
+ rda = tcg_const_i64(0);
126
+ }
83
+ }
84
85
s->is_ldex = true;
86
dirty_addr = cpu_reg_sp(s, rn);
87
- clean_addr = gen_mte_check1(s, dirty_addr, false, rn != 31, size);
88
+ clean_addr = gen_mte_check1(s, dirty_addr, false, rn != 31, memop);
89
90
g_assert(size <= 3);
91
if (is_pair) {
92
g_assert(size >= 2);
93
if (size == 2) {
94
- /* The pair must be single-copy atomic for the doubleword. */
95
- memop = finalize_memop(s, MO_64 | MO_ALIGN);
96
tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
97
if (s->be_data == MO_LE) {
98
tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
99
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
100
tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
101
}
102
} else {
103
- /*
104
- * The pair must be single-copy atomic for *each* doubleword, not
105
- * the entire quadword, however it must be quadword aligned.
106
- * Expose the complete load to tcg, for ease of tlb lookup,
107
- * but indicate that only 8-byte atomicity is required.
108
- */
109
TCGv_i128 t16 = tcg_temp_new_i128();
110
111
- memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16,
112
- MO_ATOM_IFALIGN_PAIR);
113
tcg_gen_qemu_ld_i128(t16, clean_addr, idx, memop);
114
115
if (s->be_data == MO_LE) {
116
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
117
tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
118
}
119
} else {
120
- memop = finalize_memop(s, size | MO_ALIGN);
121
tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
122
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
123
}
124
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
125
TCGLabel *fail_label = gen_new_label();
126
TCGLabel *done_label = gen_new_label();
127
TCGv_i64 tmp, dirty_addr, clean_addr;
128
+ MemOp memop;
127
+
129
+
128
+ qm = mve_qreg_ptr(a->qm);
130
+ memop = (size + is_pair) | MO_ALIGN;
129
+ if (a->u) {
131
+ memop = finalize_memop(s, memop);
130
+ gen_helper_mve_vaddlv_u(rda, cpu_env, qm, rda);
132
131
+ } else {
133
dirty_addr = cpu_reg_sp(s, rn);
132
+ gen_helper_mve_vaddlv_s(rda, cpu_env, qm, rda);
134
- clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, size);
133
+ }
135
+ clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
134
+ tcg_temp_free_ptr(qm);
136
135
+
137
tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
136
+ rdalo = tcg_temp_new_i32();
138
137
+ rdahi = tcg_temp_new_i32();
139
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
138
+ tcg_gen_extrl_i64_i32(rdalo, rda);
140
}
139
+ tcg_gen_extrh_i64_i32(rdahi, rda);
141
tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr,
140
+ store_reg(s, a->rdalo, rdalo);
142
cpu_exclusive_val, tmp,
141
+ store_reg(s, a->rdahi, rdahi);
143
- get_mem_index(s),
142
+ tcg_temp_free_i64(rda);
144
- MO_64 | MO_ALIGN | s->be_data);
143
+ mve_update_eci(s);
145
+ get_mem_index(s), memop);
144
+ return true;
146
tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
145
+}
147
} else {
146
+
148
TCGv_i128 t16 = tcg_temp_new_i128();
147
static bool do_1imm(DisasContext *s, arg_1imm *a, MVEGenOneOpImmFn *fn)
149
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
148
{
150
}
149
TCGv_ptr qd;
151
152
tcg_gen_atomic_cmpxchg_i128(t16, cpu_exclusive_addr, c16, t16,
153
- get_mem_index(s),
154
- MO_128 | MO_ALIGN | s->be_data);
155
+ get_mem_index(s), memop);
156
157
a = tcg_temp_new_i64();
158
b = tcg_temp_new_i64();
159
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
160
}
161
} else {
162
tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr, cpu_exclusive_val,
163
- cpu_reg(s, rt), get_mem_index(s),
164
- size | MO_ALIGN | s->be_data);
165
+ cpu_reg(s, rt), get_mem_index(s), memop);
166
tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
167
}
168
tcg_gen_mov_i64(cpu_reg(s, rd), tmp);
169
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap(DisasContext *s, int rs, int rt,
170
TCGv_i64 tcg_rt = cpu_reg(s, rt);
171
int memidx = get_mem_index(s);
172
TCGv_i64 clean_addr;
173
+ MemOp memop;
174
175
if (rn == 31) {
176
gen_check_sp_alignment(s);
177
}
178
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, size);
179
- tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt, memidx,
180
- size | MO_ALIGN | s->be_data);
181
+ memop = finalize_memop(s, size | MO_ALIGN);
182
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
183
+ tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt,
184
+ memidx, memop);
185
}
186
187
static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
188
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
189
TCGv_i64 t2 = cpu_reg(s, rt + 1);
190
TCGv_i64 clean_addr;
191
int memidx = get_mem_index(s);
192
+ MemOp memop;
193
194
if (rn == 31) {
195
gen_check_sp_alignment(s);
196
}
197
198
/* This is a single atomic access, despite the "pair". */
199
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, size + 1);
200
+ memop = finalize_memop(s, (size + 1) | MO_ALIGN);
201
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
202
203
if (size == 2) {
204
TCGv_i64 cmp = tcg_temp_new_i64();
205
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
206
tcg_gen_concat32_i64(cmp, s2, s1);
207
}
208
209
- tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx,
210
- MO_64 | MO_ALIGN | s->be_data);
211
+ tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx, memop);
212
213
if (s->be_data == MO_LE) {
214
tcg_gen_extr32_i64(s1, s2, cmp);
215
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
216
tcg_gen_concat_i64_i128(cmp, s2, s1);
217
}
218
219
- tcg_gen_atomic_cmpxchg_i128(cmp, clean_addr, cmp, val, memidx,
220
- MO_128 | MO_ALIGN | s->be_data);
221
+ tcg_gen_atomic_cmpxchg_i128(cmp, clean_addr, cmp, val, memidx, memop);
222
223
if (s->be_data == MO_LE) {
224
tcg_gen_extr_i128_i64(s1, s2, cmp);
225
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
226
/* TODO: ARMv8.4-LSE SCTLR.nAA */
227
memop = finalize_memop(s, size | MO_ALIGN);
228
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
229
- true, rn != 31, size);
230
+ true, rn != 31, memop);
231
do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
232
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
233
return;
234
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
235
/* TODO: ARMv8.4-LSE SCTLR.nAA */
236
memop = finalize_memop(s, size | MO_ALIGN);
237
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
238
- false, rn != 31, size);
239
+ false, rn != 31, memop);
240
do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
241
rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
242
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
243
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
244
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
245
246
memop = finalize_memop(s, size + is_signed * MO_SIGN);
247
- clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, size);
248
+ clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, memop);
249
250
if (is_vector) {
251
if (is_store) {
252
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
253
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
254
255
memop = finalize_memop(s, size + is_signed * MO_SIGN);
256
- clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, size);
257
+ clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, memop);
258
259
if (is_vector) {
260
if (is_store) {
261
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
262
if (rn == 31) {
263
gen_check_sp_alignment(s);
264
}
265
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, size);
266
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, mop);
267
268
if (o3_opc == 014) {
269
/*
270
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
271
272
/* Note that "clean" and "dirty" here refer to TBI not PAC. */
273
clean_addr = gen_mte_check1(s, dirty_addr, false,
274
- is_wback || rn != 31, size);
275
+ is_wback || rn != 31, memop);
276
277
tcg_rt = cpu_reg(s, rt);
278
do_gpr_ld(s, tcg_rt, clean_addr, memop,
279
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
280
index XXXXXXX..XXXXXXX 100644
281
--- a/target/arm/tcg/translate-sve.c
282
+++ b/target/arm/tcg/translate-sve.c
283
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
284
unsigned msz = dtype_msz(a->dtype);
285
TCGLabel *over;
286
TCGv_i64 temp, clean_addr;
287
+ MemOp memop;
288
289
if (!dc_isar_feature(aa64_sve, s)) {
290
return false;
291
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
292
/* Load the data. */
293
temp = tcg_temp_new_i64();
294
tcg_gen_addi_i64(temp, cpu_reg_sp(s, a->rn), a->imm << msz);
295
- clean_addr = gen_mte_check1(s, temp, false, true, msz);
296
297
- tcg_gen_qemu_ld_i64(temp, clean_addr, get_mem_index(s),
298
- finalize_memop(s, dtype_mop[a->dtype]));
299
+ memop = finalize_memop(s, dtype_mop[a->dtype]);
300
+ clean_addr = gen_mte_check1(s, temp, false, true, memop);
301
+ tcg_gen_qemu_ld_i64(temp, clean_addr, get_mem_index(s), memop);
302
303
/* Broadcast to *all* elements. */
304
tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd),
150
--
305
--
151
2.20.1
306
2.34.1
152
153
1
The A64 AdvSIMD modified-immediate grouping uses almost the same
1
From: Richard Henderson <richard.henderson@linaro.org>
2
constant encoding that A32 Neon does; reuse asimd_imm_const() (to
3
which we add the AArch64-specific case for cmode 15 op 1) instead of
4
reimplementing it all.
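A worked example (illustration only, not from the original commit
message): with cmode 15, op 1 and abcdefgh = 0x70, the AArch64-only
expansion added here gives

    imm64 = (0x30ULL << 48) | 0x3fc0000000000000ULL
          = 0x3ff0000000000000    /* the double-precision constant 1.0 */

which is what FMOV (vector, immediate) encodes with that imm8.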
5
2
3
Pass the individual memop to gen_mte_checkN.
4
For the moment, do nothing with it.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-14-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210628135835.6690-5-peter.maydell@linaro.org
9
---
10
---
10
target/arm/translate.h | 3 +-
11
target/arm/tcg/translate-a64.h | 2 +-
11
target/arm/translate-a64.c | 86 ++++----------------------------------
12
target/arm/tcg/translate-a64.c | 31 +++++++++++++++++++------------
12
target/arm/translate.c | 17 +++++++-
13
target/arm/tcg/translate-sve.c | 4 ++--
13
3 files changed, 24 insertions(+), 82 deletions(-)
14
3 files changed, 22 insertions(+), 15 deletions(-)
14
15
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
--- a/target/arm/tcg/translate-a64.h
18
+++ b/target/arm/translate.h
19
+++ b/target/arm/tcg/translate-a64.h
19
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
20
@@ -XXX,XX +XXX,XX @@ TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
20
* VMVN and VBIC (when cmode < 14 && op == 1).
21
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
21
*
22
bool tag_checked, MemOp memop);
22
* The combination cmode == 15 op == 1 is a reserved encoding for AArch32;
23
TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
23
- * callers must catch this.
24
- bool tag_checked, int size);
24
+ * callers must catch this; we return the 64-bit constant value defined
25
+ bool tag_checked, int total_size, MemOp memop);
25
+ * for AArch64.
26
26
*
27
/* We should have at some point before trying to access an FP register
27
* cmode = 2,3,4,5,6,7,10,11,12,13 imm=0 was UNPREDICTABLE in v7A but
28
* done the necessary access check, so assert that
28
* is either not unpredictable or merely CONSTRAINED UNPREDICTABLE in v8A;
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate-a64.c
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
33
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
34
* For MTE, check multiple logical sequential accesses.
35
*/
36
TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
37
- bool tag_checked, int size)
38
+ bool tag_checked, int total_size, MemOp single_mop)
34
{
39
{
35
int rd = extract32(insn, 0, 5);
40
if (tag_checked && s->mte_active[0]) {
36
int cmode = extract32(insn, 12, 4);
41
TCGv_i64 ret;
37
- int cmode_3_1 = extract32(cmode, 1, 3);
42
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
38
- int cmode_0 = extract32(cmode, 0, 1);
43
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
39
int o2 = extract32(insn, 11, 1);
44
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
40
uint64_t abcdefgh = extract32(insn, 5, 5) | (extract32(insn, 16, 3) << 5);
45
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
41
bool is_neg = extract32(insn, 29, 1);
46
- desc = FIELD_DP32(desc, MTEDESC, SIZEM1, size - 1);
42
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
47
+ desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
43
return;
48
49
ret = tcg_temp_new_i64();
50
gen_helper_mte_check(ret, cpu_env, tcg_constant_i32(desc), addr);
51
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
52
bool is_vector = extract32(insn, 26, 1);
53
bool is_load = extract32(insn, 22, 1);
54
int opc = extract32(insn, 30, 2);
55
-
56
bool is_signed = false;
57
bool postindex = false;
58
bool wback = false;
59
bool set_tag = false;
60
-
61
TCGv_i64 clean_addr, dirty_addr;
62
-
63
+ MemOp mop;
64
int size;
65
66
if (opc == 3) {
67
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
68
}
44
}
69
}
45
70
46
- /* See AdvSIMDExpandImm() in ARM ARM */
71
+ if (is_vector) {
47
- switch (cmode_3_1) {
72
+ mop = finalize_memop_asimd(s, size);
48
- case 0: /* Replicate(Zeros(24):imm8, 2) */
73
+ } else {
49
- case 1: /* Replicate(Zeros(16):imm8:Zeros(8), 2) */
74
+ mop = finalize_memop(s, size);
50
- case 2: /* Replicate(Zeros(8):imm8:Zeros(16), 2) */
75
+ }
51
- case 3: /* Replicate(imm8:Zeros(24), 2) */
76
clean_addr = gen_mte_checkN(s, dirty_addr, !is_load,
52
- {
77
- (wback || rn != 31) && !set_tag, 2 << size);
53
- int shift = cmode_3_1 * 8;
78
+ (wback || rn != 31) && !set_tag,
54
- imm = bitfield_replicate(abcdefgh << shift, 32);
79
+ 2 << size, mop);
55
- break;
80
56
- }
81
if (is_vector) {
57
- case 4: /* Replicate(Zeros(8):imm8, 4) */
82
- MemOp mop = finalize_memop_asimd(s, size);
58
- case 5: /* Replicate(imm8:Zeros(8), 4) */
59
- {
60
- int shift = (cmode_3_1 & 0x1) * 8;
61
- imm = bitfield_replicate(abcdefgh << shift, 16);
62
- break;
63
- }
64
- case 6:
65
- if (cmode_0) {
66
- /* Replicate(Zeros(8):imm8:Ones(16), 2) */
67
- imm = (abcdefgh << 16) | 0xffff;
68
- } else {
69
- /* Replicate(Zeros(16):imm8:Ones(8), 2) */
70
- imm = (abcdefgh << 8) | 0xff;
71
- }
72
- imm = bitfield_replicate(imm, 32);
73
- break;
74
- case 7:
75
- if (!cmode_0 && !is_neg) {
76
- imm = bitfield_replicate(abcdefgh, 8);
77
- } else if (!cmode_0 && is_neg) {
78
- int i;
79
- imm = 0;
80
- for (i = 0; i < 8; i++) {
81
- if ((abcdefgh) & (1 << i)) {
82
- imm |= 0xffULL << (i * 8);
83
- }
84
- }
85
- } else if (cmode_0) {
86
- if (is_neg) {
87
- imm = (abcdefgh & 0x3f) << 48;
88
- if (abcdefgh & 0x80) {
89
- imm |= 0x8000000000000000ULL;
90
- }
91
- if (abcdefgh & 0x40) {
92
- imm |= 0x3fc0000000000000ULL;
93
- } else {
94
- imm |= 0x4000000000000000ULL;
95
- }
96
- } else {
97
- if (o2) {
98
- /* FMOV (vector, immediate) - half-precision */
99
- imm = vfp_expand_imm(MO_16, abcdefgh);
100
- /* now duplicate across the lanes */
101
- imm = bitfield_replicate(imm, 16);
102
- } else {
103
- imm = (abcdefgh & 0x3f) << 19;
104
- if (abcdefgh & 0x80) {
105
- imm |= 0x80000000;
106
- }
107
- if (abcdefgh & 0x40) {
108
- imm |= 0x3e000000;
109
- } else {
110
- imm |= 0x40000000;
111
- }
112
- imm |= (imm << 32);
113
- }
114
- }
115
- }
116
- break;
117
- default:
118
- g_assert_not_reached();
119
- }
120
-
83
-
121
- if (cmode_3_1 != 7 && is_neg) {
84
+ /* LSE2 does not merge FP pairs; leave these as separate operations. */
122
- imm = ~imm;
85
if (is_load) {
123
+ if (cmode == 15 && o2 && !is_neg) {
86
do_fp_ld(s, rt, clean_addr, mop);
124
+ /* FMOV (vector, immediate) - half-precision */
87
} else {
125
+ imm = vfp_expand_imm(MO_16, abcdefgh);
88
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
126
+ /* now duplicate across the lanes */
89
} else {
127
+ imm = bitfield_replicate(imm, 16);
90
TCGv_i64 tcg_rt = cpu_reg(s, rt);
128
+ } else {
91
TCGv_i64 tcg_rt2 = cpu_reg(s, rt2);
129
+ imm = asimd_imm_const(abcdefgh, cmode, is_neg);
92
- MemOp mop = size + 1;
130
}
93
131
94
/*
132
if (!((cmode & 0x9) == 0x1 || (cmode & 0xd) == 0x9)) {
95
+ * We built mop above for the single logical access -- rebuild it
133
diff --git a/target/arm/translate.c b/target/arm/translate.c
96
+ * now for the paired operation.
97
+ *
98
* With LSE2, non-sign-extending pairs are treated atomically if
99
* aligned, and if unaligned one of the pair will be completely
100
* within a 16-byte block and that element will be atomic.
101
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
102
* This treats sign-extending loads like zero-extending loads,
103
* since that reuses the most code below.
104
*/
105
+ mop = size + 1;
106
if (s->align_mem) {
107
mop |= (size == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
108
}
109
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
110
* promote consecutive little-endian elements below.
111
*/
112
clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
113
- total);
114
+ total, finalize_memop(s, size));
115
116
/*
117
* Consecutive little-endian elements from a single register
118
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
119
total = selem << scale;
120
tcg_rn = cpu_reg_sp(s, rn);
121
122
- clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
123
- total);
124
mop = finalize_memop(s, scale);
125
126
+ clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
127
+ total, mop);
128
+
129
tcg_ebytes = tcg_constant_i64(1 << scale);
130
for (xs = 0; xs < selem; xs++) {
131
if (replicate) {
132
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
134
index XXXXXXX..XXXXXXX 100644
133
index XXXXXXX..XXXXXXX 100644
135
--- a/target/arm/translate.c
134
--- a/target/arm/tcg/translate-sve.c
136
+++ b/target/arm/translate.c
135
+++ b/target/arm/tcg/translate-sve.c
137
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
136
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
138
case 14:
137
139
if (op) {
138
dirty_addr = tcg_temp_new_i64();
140
/*
139
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
141
- * This is the only case where the top and bottom 32 bits
140
- clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
142
- * of the encoded constant differ.
141
+ clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
143
+ * This and cmode == 15 op == 1 are the only cases where
142
144
+ * the top and bottom 32 bits of the encoded constant differ.
143
/*
145
*/
144
* Note that unpredicated load/store of vector/predicate registers
146
uint64_t imm64 = 0;
145
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
147
int n;
146
148
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
147
dirty_addr = tcg_temp_new_i64();
149
imm |= (imm << 8) | (imm << 16) | (imm << 24);
148
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
150
break;
149
- clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
151
case 15:
150
+ clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
152
+ if (op) {
151
153
+ /* Reserved encoding for AArch32; valid for AArch64 */
152
/* Note that unpredicated load/store of vector/predicate registers
154
+ uint64_t imm64 = (uint64_t)(imm & 0x3f) << 48;
153
* are defined as a stream of bytes, which equates to little-endian
155
+ if (imm & 0x80) {
156
+ imm64 |= 0x8000000000000000ULL;
157
+ }
158
+ if (imm & 0x40) {
159
+ imm64 |= 0x3fc0000000000000ULL;
160
+ } else {
161
+ imm64 |= 0x4000000000000000ULL;
162
+ }
163
+ return imm64;
164
+ }
165
imm = ((imm & 0x80) << 24) | ((imm & 0x3f) << 19)
166
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
167
break;
168
--
154
--
169
2.20.1
155
2.34.1
170
171
1
Implement the MVE shift-right-and-narrow insn VSHRN and VRSHRN.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
do_urshr() is borrowed from sve_helper.c.
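For illustration (not part of the original message): do_urshr() is an
unsigned rounding right shift which adds back the last bit shifted out,
e.g.

    do_urshr(7, 2) == (7 >> 2) + ((7 >> 1) & 1) == 1 + 1 == 2

i.e. 7/4 rounded to the nearest integer, which is what VRSHRN needs.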
3
This fixes a bug: with SCTLR.A set, we should raise any
4
alignment fault before raising any MTE check fault.
4
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-15-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210628135835.6690-12-peter.maydell@linaro.org
8
---
10
---
9
target/arm/helper-mve.h | 10 ++++++++++
11
target/arm/internals.h | 3 ++-
10
target/arm/mve.decode | 11 +++++++++++
12
target/arm/tcg/mte_helper.c | 18 ++++++++++++++++++
11
target/arm/mve_helper.c | 40 ++++++++++++++++++++++++++++++++++++++
13
target/arm/tcg/translate-a64.c | 2 ++
12
target/arm/translate-mve.c | 15 ++++++++++++++
14
3 files changed, 22 insertions(+), 1 deletion(-)
13
4 files changed, 76 insertions(+)
14
15
15
diff --git a/target/arm/helper-mve.h b/target/arm/helper-mve.h
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-mve.h
18
--- a/target/arm/internals.h
18
+++ b/target/arm/helper-mve.h
19
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(mve_vsriw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
20
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, MIDX, 0, 4)
20
DEF_HELPER_FLAGS_4(mve_vslib, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
21
FIELD(MTEDESC, TBI, 4, 2)
21
DEF_HELPER_FLAGS_4(mve_vslih, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
22
FIELD(MTEDESC, TCMA, 6, 2)
22
DEF_HELPER_FLAGS_4(mve_vsliw, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
23
FIELD(MTEDESC, WRITE, 8, 1)
23
+
24
-FIELD(MTEDESC, SIZEM1, 9, SIMD_DATA_BITS - 9) /* size - 1 */
24
+DEF_HELPER_FLAGS_4(mve_vshrnbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
25
+FIELD(MTEDESC, ALIGN, 9, 3)
25
+DEF_HELPER_FLAGS_4(mve_vshrnbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
26
+FIELD(MTEDESC, SIZEM1, 12, SIMD_DATA_BITS - 12) /* size - 1 */
26
+DEF_HELPER_FLAGS_4(mve_vshrntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
27
27
+DEF_HELPER_FLAGS_4(mve_vshrnth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
28
bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
28
+
29
uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
29
+DEF_HELPER_FLAGS_4(mve_vrshrnbb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
30
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
30
+DEF_HELPER_FLAGS_4(mve_vrshrnbh, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
31
+DEF_HELPER_FLAGS_4(mve_vrshrntb, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
32
+DEF_HELPER_FLAGS_4(mve_vrshrnth, TCG_CALL_NO_WG, void, env, ptr, ptr, i32)
33
diff --git a/target/arm/mve.decode b/target/arm/mve.decode
34
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/mve.decode
32
--- a/target/arm/tcg/mte_helper.c
36
+++ b/target/arm/mve.decode
33
+++ b/target/arm/tcg/mte_helper.c
37
@@ -XXX,XX +XXX,XX @@ VSRI 111 1 1111 1 . ... ... ... 0 0100 0 1 . 1 ... 0 @2_shr_w
34
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra)
38
VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_b
35
39
VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_h
36
uint64_t HELPER(mte_check)(CPUARMState *env, uint32_t desc, uint64_t ptr)
40
VSLI 111 1 1111 1 . ... ... ... 0 0101 0 1 . 1 ... 0 @2_shl_w
37
{
41
+
38
+ /*
42
+# Narrowing shifts (which only support b and h sizes)
39
+ * R_XCHFJ: Alignment check not caused by memory type is priority 1,
43
+VSHRNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_b
40
+ * higher than any translation fault. When MTE is disabled, tcg
44
+VSHRNB 111 0 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_h
41
+ * performs the alignment check during the code generated for the
45
+VSHRNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_b
42
+ * memory access. With MTE enabled, we must check this here before
46
+VSHRNT 111 0 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_h
43
+ * raising any translation fault in allocation_tag_mem.
47
+
44
+ */
48
+VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_b
45
+ unsigned align = FIELD_EX32(desc, MTEDESC, ALIGN);
49
+VRSHRNB 111 1 1110 1 . ... ... ... 0 1111 1 1 . 0 ... 1 @2_shr_h
46
+ if (unlikely(align)) {
50
+VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_b
47
+ align = (1u << align) - 1;
51
+VRSHRNT 111 1 1110 1 . ... ... ... 1 1111 1 1 . 0 ... 1 @2_shr_h
48
+ if (unlikely(ptr & align)) {
52
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
49
+ int idx = FIELD_EX32(desc, MTEDESC, MIDX);
53
index XXXXXXX..XXXXXXX 100644
50
+ bool w = FIELD_EX32(desc, MTEDESC, WRITE);
54
--- a/target/arm/mve_helper.c
51
+ MMUAccessType type = w ? MMU_DATA_STORE : MMU_DATA_LOAD;
55
+++ b/target/arm/mve_helper.c
52
+ arm_cpu_do_unaligned_access(env_cpu(env), ptr, type, idx, GETPC());
56
@@ -XXX,XX +XXX,XX @@ DO_2SHIFT_INSERT(vsliw, 4, DO_SHL, SHL_MASK)
53
+ }
57
58
DO_VSHLL_ALL(vshllb, false)
59
DO_VSHLL_ALL(vshllt, true)
60
+
61
+/*
62
+ * Narrowing right shifts, taking a double sized input, shifting it
63
+ * and putting the result in either the top or bottom half of the output.
64
+ * ESIZE, TYPE are the output, and LESIZE, LTYPE the input.
65
+ */
66
+#define DO_VSHRN(OP, TOP, ESIZE, TYPE, LESIZE, LTYPE, FN) \
67
+ void HELPER(glue(mve_, OP))(CPUARMState *env, void *vd, \
68
+ void *vm, uint32_t shift) \
69
+ { \
70
+ LTYPE *m = vm; \
71
+ TYPE *d = vd; \
72
+ uint16_t mask = mve_element_mask(env); \
73
+ unsigned le; \
74
+ for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
75
+ TYPE r = FN(m[H##LESIZE(le)], shift); \
76
+ mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
77
+ } \
78
+ mve_advance_vpt(env); \
79
+ }
54
+ }
80
+
55
+
81
+#define DO_VSHRN_ALL(OP, FN) \
56
return mte_check(env, desc, ptr, GETPC());
82
+ DO_VSHRN(OP##bb, false, 1, uint8_t, 2, uint16_t, FN) \
57
}
83
+ DO_VSHRN(OP##bh, false, 2, uint16_t, 4, uint32_t, FN) \
58
84
+ DO_VSHRN(OP##tb, true, 1, uint8_t, 2, uint16_t, FN) \
59
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
85
+ DO_VSHRN(OP##th, true, 2, uint16_t, 4, uint32_t, FN)
86
+
87
+static inline uint64_t do_urshr(uint64_t x, unsigned sh)
88
+{
89
+ if (likely(sh < 64)) {
90
+ return (x >> sh) + ((x >> (sh - 1)) & 1);
91
+ } else if (sh == 64) {
92
+ return x >> 63;
93
+ } else {
94
+ return 0;
95
+ }
96
+}
97
+
98
+DO_VSHRN_ALL(vshrn, DO_SHR)
99
+DO_VSHRN_ALL(vrshrn, do_urshr)
100
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
101
index XXXXXXX..XXXXXXX 100644
60
index XXXXXXX..XXXXXXX 100644
102
--- a/target/arm/translate-mve.c
61
--- a/target/arm/tcg/translate-a64.c
103
+++ b/target/arm/translate-mve.c
62
+++ b/target/arm/tcg/translate-a64.c
104
@@ -XXX,XX +XXX,XX @@ DO_VSHLL(VSHLL_BS, vshllbs)
63
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
105
DO_VSHLL(VSHLL_BU, vshllbu)
64
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
106
DO_VSHLL(VSHLL_TS, vshllts)
65
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
107
DO_VSHLL(VSHLL_TU, vshlltu)
66
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
108
+
67
+ desc = FIELD_DP32(desc, MTEDESC, ALIGN, get_alignment_bits(memop));
109
+#define DO_2SHIFT_N(INSN, FN) \
68
desc = FIELD_DP32(desc, MTEDESC, SIZEM1, memop_size(memop) - 1);
110
+ static bool trans_##INSN(DisasContext *s, arg_2shift *a) \
69
111
+ { \
70
ret = tcg_temp_new_i64();
112
+ static MVEGenTwoOpShiftFn * const fns[] = { \
71
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
113
+ gen_helper_mve_##FN##b, \
72
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
114
+ gen_helper_mve_##FN##h, \
73
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
115
+ }; \
74
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
116
+ return do_2shift(s, a, fns[a->size], false); \
75
+ desc = FIELD_DP32(desc, MTEDESC, ALIGN, get_alignment_bits(single_mop));
117
+ }
76
desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
118
+
77
119
+DO_2SHIFT_N(VSHRNB, vshrnb)
78
ret = tcg_temp_new_i64();
120
+DO_2SHIFT_N(VSHRNT, vshrnt)
121
+DO_2SHIFT_N(VRSHRNB, vrshrnb)
122
+DO_2SHIFT_N(VRSHRNT, vrshrnt)
123
--
79
--
124
2.20.1
80
2.34.1
125
126
1
The initial implementation of the MVE VRMLALDAVH and VRMLSLDAVH
1
From: Richard Henderson <richard.henderson@linaro.org>
2
insns had some bugs:
3
* the 32x32 multiply of elements was being done as 32x32->32,
4
not 32x32->64
5
* we were incorrectly maintaining the accumulator in its full
6
72-bit form across all 4 beats of the insn; in the pseudocode
7
it is squashed back into the 64 bits of the RdaHi:RdaLo
8
registers after each beat
9
2
10
In particular, fixing the second of these allows us to recast
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
the implementation to avoid 128-bit arithmetic entirely.
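(Hedged sketch of the idea, not the patch text itself: once the
accumulator is squashed back to 64 bits after every beat, each beat can
round its own 64-bit product down by 8 bits before adding it, roughly

    mul = (mul >> 8) + ((mul >> 7) & 1);   /* round at bit 7 */
    a += mul;

so no 128-bit intermediate value is needed.)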
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20230530191438.411344-16-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/cpu.h | 3 ++-
9
target/arm/tcg/translate.h | 2 ++
10
target/arm/tcg/hflags.c | 6 ++++++
11
target/arm/tcg/translate-a64.c | 1 +
12
4 files changed, 11 insertions(+), 1 deletion(-)
12
13
13
Since the element size here is always 4, we can also drop the
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
parameterization of ESIZE to make the code a little more readable.
15
16
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20210628135835.6690-3-peter.maydell@linaro.org
20
---
21
target/arm/mve_helper.c | 38 +++++++++++++++++++++-----------------
22
1 file changed, 21 insertions(+), 17 deletions(-)
23
24
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/mve_helper.c
16
--- a/target/arm/cpu.h
27
+++ b/target/arm/mve_helper.c
17
+++ b/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
29
*/
19
#define SCTLR_D (1U << 5) /* up to v5; RAO in v6 */
30
20
#define SCTLR_CP15BEN (1U << 5) /* v7 onward */
31
#include "qemu/osdep.h"
21
#define SCTLR_L (1U << 6) /* up to v5; RAO in v6 and v7; RAZ in v8 */
32
-#include "qemu/int128.h"
22
-#define SCTLR_nAA (1U << 6) /* when v8.4-LSE is implemented */
33
#include "cpu.h"
23
+#define SCTLR_nAA (1U << 6) /* when FEAT_LSE2 is implemented */
34
#include "internals.h"
24
#define SCTLR_B (1U << 7) /* up to v6; RAZ in v7 */
35
#include "vec_internal.h"
25
#define SCTLR_ITD (1U << 7) /* v8 onward */
36
@@ -XXX,XX +XXX,XX @@ DO_LDAV(vmlsldavsw, 4, int32_t, false, +=, -=)
26
#define SCTLR_S (1U << 8) /* up to v6; RAZ in v7 */
37
DO_LDAV(vmlsldavxsw, 4, int32_t, true, +=, -=)
27
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, SVL, 24, 4)
28
/* Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not. */
29
FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
30
FIELD(TBFLAG_A64, FGT_ERET, 29, 1)
31
+FIELD(TBFLAG_A64, NAA, 30, 1)
38
32
39
/*
33
/*
40
- * Rounding multiply add long dual accumulate high: we must keep
34
* Helpers for using the above.
41
- * a 72-bit internal accumulator value and return the top 64 bits.
35
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
42
+ * Rounding multiply add long dual accumulate high. In the pseudocode
36
index XXXXXXX..XXXXXXX 100644
43
+ * this is implemented with a 72-bit internal accumulator value of which
37
--- a/target/arm/tcg/translate.h
44
+ * the top 64 bits are returned. We optimize this to avoid having to
38
+++ b/target/arm/tcg/translate.h
45
+ * use 128-bit arithmetic -- we can do this because the 74-bit accumulator
39
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
46
+ * is squashed back into 64-bits after each beat.
40
bool fgt_eret;
47
*/
41
/* True if fine-grained trap on SVC is enabled */
48
-#define DO_LDAVH(OP, ESIZE, TYPE, XCHG, EVENACC, ODDACC, TO128) \
42
bool fgt_svc;
49
+#define DO_LDAVH(OP, TYPE, LTYPE, XCHG, SUB) \
43
+ /* True if FEAT_LSE2 SCTLR_ELx.nAA is set */
50
uint64_t HELPER(glue(mve_, OP))(CPUARMState *env, void *vn, \
44
+ bool naa;
51
void *vm, uint64_t a) \
45
/*
52
{ \
46
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
53
uint16_t mask = mve_element_mask(env); \
47
* < 0, set by the current instruction.
54
unsigned e; \
48
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
55
TYPE *n = vn, *m = vm; \
49
index XXXXXXX..XXXXXXX 100644
56
- Int128 acc = int128_lshift(TO128(a), 8); \
50
--- a/target/arm/tcg/hflags.c
57
- for (e = 0; e < 16 / ESIZE; e++, mask >>= ESIZE) { \
51
+++ b/target/arm/tcg/hflags.c
58
+ for (e = 0; e < 16 / 4; e++, mask >>= 4) { \
52
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
59
if (mask & 1) { \
53
}
60
+ LTYPE mul; \
61
if (e & 1) { \
62
- acc = ODDACC(acc, TO128(n[H##ESIZE(e - 1 * XCHG)] * \
63
- m[H##ESIZE(e)])); \
64
+ mul = (LTYPE)n[H4(e - 1 * XCHG)] * m[H4(e)]; \
65
+ if (SUB) { \
66
+ mul = -mul; \
67
+ } \
68
} else { \
69
- acc = EVENACC(acc, TO128(n[H##ESIZE(e + 1 * XCHG)] * \
70
- m[H##ESIZE(e)])); \
71
+ mul = (LTYPE)n[H4(e + 1 * XCHG)] * m[H4(e)]; \
72
} \
73
- acc = int128_add(acc, int128_make64(1 << 7)); \
74
+ mul = (mul >> 8) + ((mul >> 7) & 1); \
75
+ a += mul; \
76
} \
77
} \
78
mve_advance_vpt(env); \
79
- return int128_getlo(int128_rshift(acc, 8)); \
80
+ return a; \
81
}
54
}
82
55
83
-DO_LDAVH(vrmlaldavhsw, 4, int32_t, false, int128_add, int128_add, int128_makes64)
56
+ if (cpu_isar_feature(aa64_lse2, env_archcpu(env))) {
84
-DO_LDAVH(vrmlaldavhxsw, 4, int32_t, true, int128_add, int128_add, int128_makes64)
57
+ if (sctlr & SCTLR_nAA) {
85
+DO_LDAVH(vrmlaldavhsw, int32_t, int64_t, false, false)
58
+ DP_TBFLAG_A64(flags, NAA, 1);
86
+DO_LDAVH(vrmlaldavhxsw, int32_t, int64_t, true, false)
59
+ }
87
60
+ }
88
-DO_LDAVH(vrmlaldavhuw, 4, uint32_t, false, int128_add, int128_add, int128_make64)
61
+
89
+DO_LDAVH(vrmlaldavhuw, uint32_t, uint64_t, false, false)
62
/* Compute the condition for using AccType_UNPRIV for LDTR et al. */
90
63
if (!(env->pstate & PSTATE_UAO)) {
91
-DO_LDAVH(vrmlsldavhsw, 4, int32_t, false, int128_add, int128_sub, int128_makes64)
64
switch (mmu_idx) {
92
-DO_LDAVH(vrmlsldavhxsw, 4, int32_t, true, int128_add, int128_sub, int128_makes64)
65
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
93
+DO_LDAVH(vrmlsldavhsw, int32_t, int64_t, false, true)
66
index XXXXXXX..XXXXXXX 100644
94
+DO_LDAVH(vrmlsldavhxsw, int32_t, int64_t, true, true)
67
--- a/target/arm/tcg/translate-a64.c
95
68
+++ b/target/arm/tcg/translate-a64.c
96
/* Vector add across vector */
69
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
97
#define DO_VADDV(OP, ESIZE, TYPE) \
70
dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
71
dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
72
dc->sme_trap_nonstreaming = EX_TBFLAG_A64(tb_flags, SME_TRAP_NONSTREAMING);
73
+ dc->naa = EX_TBFLAG_A64(tb_flags, NAA);
74
dc->vec_len = 0;
75
dc->vec_stride = 0;
76
dc->cp_regs = arm_cpu->cp_regs;
98
--
77
--
99
2.20.1
78
2.34.1
100
101
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

FEAT_LSE2 only requires that atomic operations not cross a
16-byte boundary.  Ordered operations may be completely
unaligned if SCTLR.nAA is set.

Because this alignment check is so special, do it by hand.
Make sure not to keep TCG temps live across the branch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/helper-a64.h    |   3 +
 target/arm/tcg/helper-a64.c    |   7 ++
 target/arm/tcg/translate-a64.c | 120 ++++++++++++++++++++++++++-------
 3 files changed, 104 insertions(+), 26 deletions(-)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(st2g_stub, TCG_CALL_NO_WG, void, env, i64)
 DEF_HELPER_FLAGS_2(ldgm, TCG_CALL_NO_WG, i64, env, i64)
 DEF_HELPER_FLAGS_3(stgm, TCG_CALL_NO_WG, void, env, i64, i64)
 DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64)
+
+DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
+                   noreturn, env, i64, i32, i32)
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)

     memset(mem, 0, blocklen);
 }
+
+void HELPER(unaligned_access)(CPUARMState *env, uint64_t addr,
+                              uint32_t access_type, uint32_t mmu_idx)
+{
+    arm_cpu_do_unaligned_access(env_cpu(env), addr, access_type,
+                                mmu_idx, GETPC());
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
     return clean_data_tbi(s, addr);
 }

+/*
+ * Generate the special alignment check that applies to AccType_ATOMIC
+ * and AccType_ORDERED insns under FEAT_LSE2: the access need not be
+ * naturally aligned, but it must not cross a 16-byte boundary.
+ * See AArch64.CheckAlignment().
+ */
+static void check_lse2_align(DisasContext *s, int rn, int imm,
+                             bool is_write, MemOp mop)
+{
+    TCGv_i32 tmp;
+    TCGv_i64 addr;
+    TCGLabel *over_label;
+    MMUAccessType type;
+    int mmu_idx;
+
+    tmp = tcg_temp_new_i32();
+    tcg_gen_extrl_i64_i32(tmp, cpu_reg_sp(s, rn));
+    tcg_gen_addi_i32(tmp, tmp, imm & 15);
+    tcg_gen_andi_i32(tmp, tmp, 15);
+    tcg_gen_addi_i32(tmp, tmp, memop_size(mop));
+
+    over_label = gen_new_label();
+    tcg_gen_brcondi_i32(TCG_COND_LEU, tmp, 16, over_label);
+
+    addr = tcg_temp_new_i64();
+    tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm);
+
+    type = is_write ? MMU_DATA_STORE : MMU_DATA_LOAD,
+    mmu_idx = get_mem_index(s);
+    gen_helper_unaligned_access(cpu_env, addr, tcg_constant_i32(type),
+                                tcg_constant_i32(mmu_idx));
+
+    gen_set_label(over_label);
+
+}
+
+/* Handle the alignment check for AccType_ATOMIC instructions. */
+static MemOp check_atomic_align(DisasContext *s, int rn, MemOp mop)
+{
+    MemOp size = mop & MO_SIZE;
+
+    if (size == MO_8) {
+        return mop;
+    }
+
+    /*
+     * If size == MO_128, this is a LDXP, and the operation is single-copy
+     * atomic for each doubleword, not the entire quadword; it still must
+     * be quadword aligned.
+     */
+    if (size == MO_128) {
+        return finalize_memop_atom(s, MO_128 | MO_ALIGN,
+                                   MO_ATOM_IFALIGN_PAIR);
+    }
+    if (dc_isar_feature(aa64_lse2, s)) {
+        check_lse2_align(s, rn, 0, true, mop);
+    } else {
+        mop |= MO_ALIGN;
+    }
+    return finalize_memop(s, mop);
+}
+
+/* Handle the alignment check for AccType_ORDERED instructions. */
+static MemOp check_ordered_align(DisasContext *s, int rn, int imm,
+                                 bool is_write, MemOp mop)
+{
+    MemOp size = mop & MO_SIZE;
+
+    if (size == MO_8) {
+        return mop;
+    }
+    if (size == MO_128) {
+        return finalize_memop_atom(s, MO_128 | MO_ALIGN,
+                                   MO_ATOM_IFALIGN_PAIR);
+    }
+    if (!dc_isar_feature(aa64_lse2, s)) {
+        mop |= MO_ALIGN;
+    } else if (!s->naa) {
+        check_lse2_align(s, rn, imm, is_write, mop);
+    }
+    return finalize_memop(s, mop);
+}
+
 typedef struct DisasCompare64 {
     TCGCond cond;
     TCGv_i64 value;
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
 {
     int idx = get_mem_index(s);
     TCGv_i64 dirty_addr, clean_addr;
-    MemOp memop;
-
-    /*
-     * For pairs:
-     * if size == 2, the operation is single-copy atomic for the doubleword.
-     * if size == 3, the operation is single-copy atomic for *each* doubleword,
-     * not the entire quadword, however it must be quadword aligned.
-     */
-    memop = size + is_pair;
-    if (memop == MO_128) {
-        memop = finalize_memop_atom(s, MO_128 | MO_ALIGN,
-                                    MO_ATOM_IFALIGN_PAIR);
-    } else {
-        memop = finalize_memop(s, memop | MO_ALIGN);
-    }
+    MemOp memop = check_atomic_align(s, rn, size + is_pair);

     s->is_ldex = true;
     dirty_addr = cpu_reg_sp(s, rn);
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap(DisasContext *s, int rs, int rt,
     if (rn == 31) {
         gen_check_sp_alignment(s);
     }
-    memop = finalize_memop(s, size | MO_ALIGN);
+    memop = check_atomic_align(s, rn, size);
     clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
     tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt,
                                memidx, memop);
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
     }

     /* This is a single atomic access, despite the "pair". */
-    memop = finalize_memop(s, (size + 1) | MO_ALIGN);
+    memop = check_atomic_align(s, rn, size + 1);
     clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);

     if (size == 2) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
             gen_check_sp_alignment(s);
         }
         tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
-        /* TODO: ARMv8.4-LSE SCTLR.nAA */
-        memop = finalize_memop(s, size | MO_ALIGN);
+        memop = check_ordered_align(s, rn, 0, true, size);
         clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
                                     true, rn != 31, memop);
         do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         if (rn == 31) {
             gen_check_sp_alignment(s);
         }
-        /* TODO: ARMv8.4-LSE SCTLR.nAA */
-        memop = finalize_memop(s, size | MO_ALIGN);
+        memop = check_ordered_align(s, rn, 0, false, size);
         clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
                                     false, rn != 31, memop);
         do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
     bool a = extract32(insn, 23, 1);
     TCGv_i64 tcg_rs, tcg_rt, clean_addr;
     AtomicThreeOpFn *fn = NULL;
-    MemOp mop = finalize_memop(s, size | MO_ALIGN);
+    MemOp mop = size;

     if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
     if (rn == 31) {
         gen_check_sp_alignment(s);
     }
+
+    mop = check_atomic_align(s, rn, mop);
     clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, mop);

     if (o3_opc == 014) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
     bool is_store = false;
     bool extend = false;
     bool iss_sf;
-    MemOp mop;
+    MemOp mop = size;

     if (!dc_isar_feature(aa64_rcpc_8_4, s)) {
         unallocated_encoding(s);
         return;
     }

-    /* TODO: ARMv8.4-LSE SCTLR.nAA */
-    mop = finalize_memop(s, size | MO_ALIGN);
-
     switch (opc) {
     case 0: /* STLURB */
         is_store = true;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
         gen_check_sp_alignment(s);
     }

+    mop = check_ordered_align(s, rn, offset, is_store, mop);
+
     dirty_addr = read_cpu_reg_sp(s, rn, 1);
     tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
     clean_addr = clean_data_tbi(s, dirty_addr);
--
2.34.1
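For readers following the logic: the TCG sequence emitted by check_lse2_align() reduces to a simple boundary predicate. Below is a stand-alone C sketch of that predicate (illustration only, not part of the patch; the function name is mine):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if an access of 'size' bytes at 'addr' spans a 16-byte boundary. */
static bool crosses_16_byte_boundary(uint64_t addr, unsigned size)
{
    return ((addr & 15) + size) > 16;
}

int main(void)
{
    assert(!crosses_16_byte_boundary(0x1000, 8));   /* naturally aligned */
    assert(!crosses_16_byte_boundary(0x100c, 4));   /* ends exactly on the boundary */
    assert(crosses_16_byte_boundary(0x100d, 4));    /* bytes 13..16: faults even under LSE2 */
    assert(crosses_16_byte_boundary(0x1008, 16));   /* a 16-byte access must be aligned */
    return 0;
}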
From: Richard Henderson <richard.henderson@linaro.org>

Push the mte check behind the exclusive_addr check.
Document the several ways that we are still out of spec
with this implementation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 42 +++++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
      */
     TCGLabel *fail_label = gen_new_label();
     TCGLabel *done_label = gen_new_label();
-    TCGv_i64 tmp, dirty_addr, clean_addr;
+    TCGv_i64 tmp, clean_addr;
     MemOp memop;

-    memop = (size + is_pair) | MO_ALIGN;
-    memop = finalize_memop(s, memop);
-
-    dirty_addr = cpu_reg_sp(s, rn);
-    clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
+    /*
+     * FIXME: We are out of spec here.  We have recorded only the address
+     * from load_exclusive, not the entire range, and we assume that the
+     * size of the access on both sides match.  The architecture allows the
+     * store to be smaller than the load, so long as the stored bytes are
+     * within the range recorded by the load.
+     */

+    /* See AArch64.ExclusiveMonitorsPass() and AArch64.IsExclusiveVA(). */
+    clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
     tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);

+    /*
+     * The write, and any associated faults, only happen if the virtual
+     * and physical addresses pass the exclusive monitor check.  These
+     * faults are exceedingly unlikely, because normally the guest uses
+     * the exact same address register for the load_exclusive, and we
+     * would have recognized these faults there.
+     *
+     * It is possible to trigger an alignment fault pre-LSE2, e.g. with an
+     * unaligned 4-byte write within the range of an aligned 8-byte load.
+     * With LSE2, the store would need to cross a 16-byte boundary when the
+     * load did not, which would mean the store is outside the range
+     * recorded for the monitor, which would have failed a corrected monitor
+     * check above.  For now, we assume no size change and retain the
+     * MO_ALIGN to let tcg know what we checked in the load_exclusive.
+     *
+     * It is possible to trigger an MTE fault, by performing the load with
+     * a virtual address with a valid tag and performing the store with the
+     * same virtual address and a different invalid tag.
+     */
+    memop = size + is_pair;
+    if (memop == MO_128 || !dc_isar_feature(aa64_lse2, s)) {
+        memop |= MO_ALIGN;
+    }
+    memop = finalize_memop(s, memop);
+    gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
+
     tmp = tcg_temp_new_i64();
     if (is_pair) {
         if (size == 2) {
--
2.34.1
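The FIXME comment above refers to the architectural rule that a store-exclusive may be narrower than the load-exclusive as long as every stored byte lies inside the recorded range. A minimal sketch of that containment rule, purely for illustration (the patch deliberately does not implement it and assumes matching sizes; the helper name is mine):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Every stored byte must lie inside [mon_addr, mon_addr + mon_size). */
static bool store_within_monitor(uint64_t mon_addr, unsigned mon_size,
                                 uint64_t st_addr, unsigned st_size)
{
    return st_addr >= mon_addr &&
           st_addr + st_size <= mon_addr + mon_size;
}

int main(void)
{
    /* Unaligned 4-byte store inside an aligned 8-byte monitored range. */
    assert(store_within_monitor(0x1000, 8, 0x1002, 4));
    /* A store running past the monitored range does not qualify. */
    assert(!store_within_monitor(0x1000, 8, 0x1006, 4));
    return 0;
}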
From: Richard Henderson <richard.henderson@linaro.org>

We have many other instances of stg in the testsuite;
change these to provide an instance of stz2g.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/mte-7.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tests/tcg/aarch64/mte-7.c b/tests/tcg/aarch64/mte-7.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/mte-7.c
+++ b/tests/tcg/aarch64/mte-7.c
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
     p = (void *)((unsigned long)p | (1ul << 56));

     /* Store tag in sequential granules. */
-    asm("stg %0, [%0]" : : "r"(p + 0x0ff0));
-    asm("stg %0, [%0]" : : "r"(p + 0x1000));
+    asm("stz2g %0, [%0]" : : "r"(p + 0x0ff0));

     /*
      * Perform an unaligned store with tag 1 crossing the pages.
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

With -cpu max and FEAT_LSE2, the __aarch64__ section will only raise
an alignment exception when the load crosses a 16-byte boundary.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/multiarch/sigbus.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tests/tcg/multiarch/sigbus.c b/tests/tcg/multiarch/sigbus.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/multiarch/sigbus.c
+++ b/tests/tcg/multiarch/sigbus.c
@@ -XXX,XX +XXX,XX @@
 #include <endian.h>


-unsigned long long x = 0x8877665544332211ull;
-void * volatile p = (void *)&x + 1;
+char x[32] __attribute__((aligned(16))) = {
+    0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+    0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+    0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
+    0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
+};
+void * volatile p = (void *)&x + 15;

 void sigbus(int sig, siginfo_t *info, void *uc)
 {
@@ -XXX,XX +XXX,XX @@ int main()
      * We might as well validate the unaligned load worked.
      */
     if (BYTE_ORDER == LITTLE_ENDIAN) {
-        assert(tmp == 0x55443322);
+        assert(tmp == 0x13121110);
     } else {
-        assert(tmp == 0x77665544);
+        assert(tmp == 0x10111213);
     }
     return EXIT_SUCCESS;
 }
--
2.34.1
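The new expected values follow from the initializer: x[i] == i + 1, so a 4-byte load at &x + 15 covers bytes 0x10..0x13. A quick stand-alone check of that arithmetic (illustration only, not part of the test):

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    char x[32];
    uint32_t tmp;

    for (int i = 0; i < 32; i++) {
        x[i] = i + 1;                   /* same pattern as the test initializer */
    }

    memcpy(&tmp, &x[15], sizeof(tmp));  /* a 4-byte load at offset 15 */

    /* Little-endian hosts assemble 0x13121110; big-endian 0x10111213. */
    assert(tmp == 0x13121110 || tmp == 0x10111213);
    return 0;
}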
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/tcg/cpu64.c        | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_LRCPC (Load-acquire RCpc instructions)
 - FEAT_LRCPC2 (Load-acquire RCpc instructions v2)
 - FEAT_LSE (Large System Extensions)
+- FEAT_LSE2 (Large System Extensions v2)
 - FEAT_LVA (Large Virtual Address space)
 - FEAT_MTE (Memory Tagging Extension)
 - FEAT_MTE2 (Memory Tagging Extension)
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1);     /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1);  /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1);       /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, AT, 1);       /* FEAT_LSE2 */
    t = FIELD_DP64(t, ID_AA64MMFR2, IDS, 1);      /* FEAT_IDST */
     t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1);      /* FEAT_S2FWB */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1);      /* FEAT_TTL */
--
2.34.1
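As a usage sketch (mine, not part of the patch): a Linux guest booted with -cpu max can observe the new feature through the arm64 hwcap HWCAP_USCAT, which the kernel reports when FEAT_LSE2 (unaligned single-copy atomicity) is present. The fallback define mirrors the style of the dcpop/dcpodp tests later in this series and assumes the usual bit position.

#include <asm/hwcap.h>
#include <sys/auxv.h>
#include <stdio.h>

#ifndef HWCAP_USCAT
#define HWCAP_USCAT (1 << 25)   /* assumed value, matching current Linux uapi headers */
#endif

int main(void)
{
    if (getauxval(AT_HWCAP) & HWCAP_USCAT) {
        printf("FEAT_LSE2 reported by the kernel\n");
    } else {
        printf("FEAT_LSE2 not reported by the kernel\n");
    }
    return 0;
}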
From: Zhuojia Shen <chaosdefinition@hotmail.com>

DC CVAP and DC CVADP instructions can be executed in EL0 on Linux,
either directly when SCTLR_EL1.UCI == 1 or emulated by the kernel (see
user_cache_maint_handler() in arch/arm64/kernel/traps.c).

This patch enables execution of the two instructions in user mode
emulation.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
       .access = PL0_R, .readfn = rndr_readfn },
 };

-#ifndef CONFIG_USER_ONLY
 static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
                            uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
     /* This won't be crossing page boundaries */
     haddr = probe_read(env, vaddr, dline_size, mem_idx, GETPC());
     if (haddr) {
+#ifndef CONFIG_USER_ONLY

         ram_addr_t offset;
         MemoryRegion *mr;
@@ -XXX,XX +XXX,XX @@ static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
         if (mr) {
             memory_region_writeback(mr, offset, dline_size);
         }
+#endif /*CONFIG_USER_ONLY*/
     }
 }

@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
       .fgt = FGT_DCCVADP,
       .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
 };
-#endif /*CONFIG_USER_ONLY*/

 static CPAccessResult access_aa64_tid5(CPUARMState *env, const ARMCPRegInfo *ri,
                                        bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_tlbios, cpu)) {
         define_arm_cp_regs(cpu, tlbios_reginfo);
     }
-#ifndef CONFIG_USER_ONLY
     /* Data Cache clean instructions up to PoP */
     if (cpu_isar_feature(aa64_dcpop, cpu)) {
         define_one_arm_cp_reg(cpu, dcpop_reg);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_one_arm_cp_reg(cpu, dcpodp_reg);
     }
 }
-#endif /*CONFIG_USER_ONLY*/

 /*
  * If full MTE is enabled, add all of the system registers.
--
2.34.1
From: Zhuojia Shen <chaosdefinition@hotmail.com>

Test execution of DC CVAP and DC CVADP instructions under user mode
emulation.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/dcpodp.c        | 63 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/dcpop.c         | 63 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target | 11 ++++++
 3 files changed, 137 insertions(+)
 create mode 100644 tests/tcg/aarch64/dcpodp.c
 create mode 100644 tests/tcg/aarch64/dcpop.c

diff --git a/tests/tcg/aarch64/dcpodp.c b/tests/tcg/aarch64/dcpodp.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/dcpodp.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Test execution of DC CVADP instruction.
+ *
+ * Copyright (c) 2023 Zhuojia Shen <chaosdefinition@hotmail.com>
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <asm/hwcap.h>
+#include <sys/auxv.h>
+
+#include <signal.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifndef HWCAP2_DCPODP
+#define HWCAP2_DCPODP (1 << 0)
+#endif
+
+bool should_fail = false;
+
+static void signal_handler(int sig, siginfo_t *si, void *data)
+{
+    ucontext_t *uc = (ucontext_t *)data;
+
+    if (should_fail) {
+        uc->uc_mcontext.pc += 4;
+    } else {
+        exit(EXIT_FAILURE);
+    }
+}
+
+static int do_dc_cvadp(void)
+{
+    struct sigaction sa = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = signal_handler,
+    };
+
+    sigemptyset(&sa.sa_mask);
+    if (sigaction(SIGSEGV, &sa, NULL) < 0) {
+        perror("sigaction");
+        return EXIT_FAILURE;
+    }
+
+    asm volatile("dc cvadp, %0\n\t" :: "r"(&sa));
+
+    should_fail = true;
+    asm volatile("dc cvadp, %0\n\t" :: "r"(NULL));
+    should_fail = false;
+
+    return EXIT_SUCCESS;
+}
+
+int main(void)
+{
+    if (getauxval(AT_HWCAP2) & HWCAP2_DCPODP) {
+        return do_dc_cvadp();
+    } else {
+        printf("SKIP: no HWCAP2_DCPODP on this system\n");
+        return EXIT_SUCCESS;
+    }
+}
diff --git a/tests/tcg/aarch64/dcpop.c b/tests/tcg/aarch64/dcpop.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/dcpop.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Test execution of DC CVAP instruction.
+ *
+ * Copyright (c) 2023 Zhuojia Shen <chaosdefinition@hotmail.com>
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <asm/hwcap.h>
+#include <sys/auxv.h>
+
+#include <signal.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifndef HWCAP_DCPOP
+#define HWCAP_DCPOP (1 << 16)
+#endif
+
+bool should_fail = false;
+
+static void signal_handler(int sig, siginfo_t *si, void *data)
+{
+    ucontext_t *uc = (ucontext_t *)data;
+
+    if (should_fail) {
+        uc->uc_mcontext.pc += 4;
+    } else {
+        exit(EXIT_FAILURE);
+    }
+}
+
+static int do_dc_cvap(void)
+{
+    struct sigaction sa = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = signal_handler,
+    };
+
+    sigemptyset(&sa.sa_mask);
+    if (sigaction(SIGSEGV, &sa, NULL) < 0) {
+        perror("sigaction");
+        return EXIT_FAILURE;
+    }
+
+    asm volatile("dc cvap, %0\n\t" :: "r"(&sa));
+
+    should_fail = true;
+    asm volatile("dc cvap, %0\n\t" :: "r"(NULL));
+    should_fail = false;
+
+    return EXIT_SUCCESS;
+}
+
+int main(void)
+{
+    if (getauxval(AT_HWCAP) & HWCAP_DCPOP) {
+        return do_dc_cvap();
+    } else {
+        printf("SKIP: no HWCAP_DCPOP on this system\n");
+        return EXIT_SUCCESS;
+    }
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ config-cc.mak: Makefile
 	$(quiet-@)( \
 	    $(call cc-option,-march=armv8.1-a+sve,  CROSS_CC_HAS_SVE); \
 	    $(call cc-option,-march=armv8.1-a+sve2, CROSS_CC_HAS_SVE2); \
+	    $(call cc-option,-march=armv8.2-a,      CROSS_CC_HAS_ARMV8_2); \
 	    $(call cc-option,-march=armv8.3-a,      CROSS_CC_HAS_ARMV8_3); \
+	    $(call cc-option,-march=armv8.5-a,      CROSS_CC_HAS_ARMV8_5); \
 	    $(call cc-option,-mbranch-protection=standard, CROSS_CC_HAS_ARMV8_BTI); \
 	    $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE); \
 	    $(call cc-option,-march=armv9-a+sme,      CROSS_CC_HAS_ARMV9_SME)) 3> config-cc.mak
 -include config-cc.mak

+ifneq ($(CROSS_CC_HAS_ARMV8_2),)
+AARCH64_TESTS += dcpop
+dcpop: CFLAGS += -march=armv8.2-a
+endif
+ifneq ($(CROSS_CC_HAS_ARMV8_5),)
+AARCH64_TESTS += dcpodp
+dcpodp: CFLAGS += -march=armv8.5-a
+endif
+
 # Pauth Tests
 ifneq ($(CROSS_CC_HAS_ARMV8_3),)
 AARCH64_TESTS += pauth-1 pauth-2 pauth-4 pauth-5
--
2.34.1
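For reference on running these: since the new programs are appended to AARCH64_TESTS, they should be built and executed by the usual make check-tcg flow when a suitable aarch64 cross compiler is detected, and each test skips itself cleanly when the corresponding hwcap bit is absent on the system it runs on.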
From: Zhuojia Shen <chaosdefinition@hotmail.com>

Accessing EL0-accessible Debug Communication Channel (DCC) registers in
user mode emulation is currently enabled. However, it does not match
Linux behavior as Linux sets MDSCR_EL1.TDCC on startup to disable EL0
access to DCC (see __cpu_setup() in arch/arm64/mm/proc.S).

This patch fixes access_tdcc() to check MDSCR_EL1.TDCC for EL0 and sets
MDSCR_EL1.TDCC for user mode emulation to match Linux.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: DS7PR12MB630905198DD8E69F6817544CAC4EA@DS7PR12MB6309.namprd12.prod.outlook.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c          | 2 ++
 target/arm/debug_helper.c | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
      * This is not yet exposed from the Linux kernel in any way.
      */
     env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
+    /* Disable access to Debug Communication Channel (DCC). */
+    env->cp15.mdscr_el1 |= 1 << 12;
 #else
     /* Reset into the highest available EL */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
  * is implemented then these are controlled by MDCR_EL2.TDCC for
  * EL2 and MDCR_EL3.TDCC for EL3. They are also controlled by
  * the general debug access trap bits MDCR_EL2.TDA and MDCR_EL3.TDA.
+ * For EL0, they are also controlled by MDSCR_EL1.TDCC.
  */
 static CPAccessResult access_tdcc(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
     int el = arm_current_el(env);
     uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+    bool mdscr_el1_tdcc = extract32(env->cp15.mdscr_el1, 12, 1);
     bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
                         (arm_hcr_el2_eff(env) & HCR_TGE);
     bool mdcr_el2_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdcc(CPUARMState *env, const ARMCPRegInfo *ri,
     bool mdcr_el3_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
                          (env->cp15.mdcr_el3 & MDCR_TDCC);

+    if (el < 1 && mdscr_el1_tdcc) {
+        return CP_ACCESS_TRAP;
+    }
     if (el < 2 && (mdcr_el2_tda || mdcr_el2_tdcc)) {
         return CP_ACCESS_TRAP_EL2;
     }
--
2.34.1
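To illustrate the intended guest-visible effect (a sketch of mine, not part of the patch, and the exact signal delivery is an assumption): after this change an EL0 read of a DCC register such as MDCCSR_EL0 under qemu-aarch64 user mode emulation is expected to trap, which the linux-user code should surface to the process as SIGILL. The probe below reuses the signal-handler pattern from the dcpop test above.

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static volatile sig_atomic_t trapped;

static void sigill_handler(int sig, siginfo_t *si, void *data)
{
    ucontext_t *uc = (ucontext_t *)data;

    trapped = 1;
    uc->uc_mcontext.pc += 4;    /* skip the trapped 4-byte instruction */
}

int main(void)
{
    struct sigaction sa = {
        .sa_flags = SA_SIGINFO,
        .sa_sigaction = sigill_handler,
    };
    uint64_t val = 0;

    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGILL, &sa, NULL) < 0) {
        perror("sigaction");
        return EXIT_FAILURE;
    }

    asm volatile("mrs %0, mdccsr_el0" : "=r"(val));

    if (trapped) {
        printf("DCC access trapped\n");
    } else {
        printf("MDCCSR_EL0 read succeeded: %#llx\n", (unsigned long long)val);
    }
    return EXIT_SUCCESS;
}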