First arm pullreq for the 2.12 cycle, with all the
things that queued up during the release phase.
2.11 isn't quite released yet, but might as well put
the pullreq on the mailing list :-)

thanks
-- PMM

The following changes since commit 0a0dc59d27527b78a195c2d838d28b7b49e5a639:

  Update version for v2.11.0 release (2017-12-13 14:31:09 +0000)

are available in the git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20171213

for you to fetch changes up to d3c348b6e3af3598bfcb755d59f8f4de80a2228a:

  xilinx_spips: Use memset instead of a for loop to zero registers (2017-12-13 17:59:26 +0000)

----------------------------------------------------------------
target-arm queue:
 * xilinx_spips: set reset values correctly
 * MAINTAINERS: fix an email address
 * hw/display/tc6393xb: limit irq handler index to TC6393XB_GPIOS
 * nvic: Make systick banked for v8M
 * refactor get_phys_addr() so we can return the right format PAR
   for ATS operations
 * implement v8M TT instruction
 * fix some minor v8M bugs
 * Implement reset for GICv3 ITS
 * xlnx-zcu102: Add support for the ZynqMP QSPI

----------------------------------------------------------------
Alistair Francis (3):
      xilinx_spips: Update the QSPI Mod ID reset value
      xilinx_spips: Set all of the reset values
      xilinx_spips: Use memset instead of a for loop to zero registers

Edgar E. Iglesias (1):
      target/arm: Extend PAR format determination

Eric Auger (4):
      hw/intc/arm_gicv3_its: Don't call post_load on reset
      hw/intc/arm_gicv3_its: Implement a minimalist reset
      linux-headers: update to 4.15-rc1
      hw/intc/arm_gicv3_its: Implement full reset

Francisco Iglesias (13):
      m25p80: Add support for continuous read out of RDSR and READ_FSR
      m25p80: Add support for SST READ ID 0x90/0xAB commands
      m25p80: Add support for BRRD/BRWR and BULK_ERASE (0x60)
      m25p80: Add support for n25q512a11 and n25q512a13
      xilinx_spips: Move FlashCMD, XilinxQSPIPS and XilinxSPIPSClass
      xilinx_spips: Update striping to be big-endian bit order
      xilinx_spips: Add support for RX discard and RX drain
      xilinx_spips: Make tx/rx_data_bytes more generic and reusable
      xilinx_spips: Add support for zero pumping
      xilinx_spips: Add support for 4 byte addresses in the LQSPI
      xilinx_spips: Don't set TX FIFO UNDERFLOW at cmd done
      xilinx_spips: Add support for the ZynqMP Generic QSPI
      xlnx-zcu102: Add support for the ZynqMP QSPI

Peter Maydell (20):
      target/arm: Handle SPSEL and current stack being out of sync in MSP/PSP reads
      target/arm: Allow explicit writes to CONTROL.SPSEL in Handler mode
      target/arm: Add missing M profile case to regime_is_user()
      target/arm: Split M profile MNegPri mmu index into user and priv
      target/arm: Create new arm_v7m_mmu_idx_for_secstate_and_priv()
      target/arm: Factor MPU lookup code out of get_phys_addr_pmsav8()
      target/arm: Implement TT instruction
      target/arm: Provide fault type enum and FSR conversion functions
      target/arm: Remove fsr argument from arm_ld*_ptw()
      target/arm: Convert get_phys_addr_v5() to not return FSC values
      target/arm: Convert get_phys_addr_v6() to not return FSC values
      target/arm: Convert get_phys_addr_lpae() to not return FSC values
      target/arm: Convert get_phys_addr_pmsav5() to not return FSC values
      target/arm: Convert get_phys_addr_pmsav7() to not return FSC values
      target/arm: Convert get_phys_addr_pmsav8() to not return FSC values
      target/arm: Use ARMMMUFaultInfo in deliver_fault()
      target/arm: Ignore fsr from get_phys_addr() in do_ats_write()
      target/arm: Remove fsr argument from get_phys_addr() and arm_tlb_fill()
      nvic: Make nvic_sysreg_ns_ops work with any MemoryRegion
      nvic: Make systick banked

Prasad J Pandit (1):
      hw/display/tc6393xb: limit irq handler index to TC6393XB_GPIOS

Zhaoshenglong (1):
      MAINTAINERS: replace the unavailable email address

 include/hw/arm/xlnx-zynqmp.h | 5 +
 include/hw/intc/armv7m_nvic.h | 4 +-
 include/hw/ssi/xilinx_spips.h | 74 +-
 include/standard-headers/asm-s390/virtio-ccw.h | 1 +
 include/standard-headers/asm-x86/hyperv.h | 394 +--------
 include/standard-headers/linux/input-event-codes.h | 2 +
 include/standard-headers/linux/input.h | 1 +
 include/standard-headers/linux/pci_regs.h | 45 +-
 linux-headers/asm-arm/kvm.h | 8 +
 linux-headers/asm-arm/kvm_para.h | 1 +
 linux-headers/asm-arm/unistd.h | 2 +
 linux-headers/asm-arm64/kvm.h | 8 +
 linux-headers/asm-arm64/unistd.h | 1 +
 linux-headers/asm-powerpc/epapr_hcalls.h | 1 +
 linux-headers/asm-powerpc/kvm.h | 1 +
 linux-headers/asm-powerpc/kvm_para.h | 1 +
 linux-headers/asm-powerpc/unistd.h | 1 +
 linux-headers/asm-s390/kvm.h | 1 +
 linux-headers/asm-s390/kvm_para.h | 1 +
 linux-headers/asm-s390/unistd.h | 4 +-
 linux-headers/asm-x86/kvm.h | 1 +
 linux-headers/asm-x86/kvm_para.h | 2 +-
 linux-headers/asm-x86/unistd.h | 1 +
 linux-headers/linux/kvm.h | 2 +
 linux-headers/linux/kvm_para.h | 1 +
 linux-headers/linux/psci.h | 1 +
 linux-headers/linux/userfaultfd.h | 1 +
 linux-headers/linux/vfio.h | 1 +
 linux-headers/linux/vfio_ccw.h | 1 +
 linux-headers/linux/vhost.h | 1 +
 target/arm/cpu.h | 73 +-
 target/arm/helper.h | 2 +
 target/arm/internals.h | 193 ++++-
 hw/arm/xlnx-zcu102.c | 23 +
 hw/arm/xlnx-zynqmp.c | 26 +
 hw/block/m25p80.c | 80 +-
 hw/display/tc6393xb.c | 1 +
 hw/intc/arm_gicv3_its_common.c | 2 -
 hw/intc/arm_gicv3_its_kvm.c | 53 +-
 hw/intc/armv7m_nvic.c | 100 ++-
 hw/ssi/xilinx_spips.c | 928 +++++++++++++++----
 target/arm/helper.c | 489 +++++----
 target/arm/op_helper.c | 82 +-
 target/arm/translate.c | 37 +-
 MAINTAINERS | 2 +-
 default-configs/arm-softmmu.mak | 2 +-
 46 files changed, 1833 insertions(+), 828 deletions(-)


Hi; this pullreq includes FEAT_LSE2 support, the new
bpim2u board, and some other smaller patchsets.

thanks
-- PMM

The following changes since commit 369081c4558e7e940fa36ce59bf17b2e390f55d3:

  Merge tag 'pull-tcg-20230605' of https://gitlab.com/rth7680/qemu into staging (2023-06-05 13:16:56 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230606

for you to fetch changes up to f9ac778898cb28307e0f91421aba34d43c34b679:

  target/arm: trap DCC access in user mode emulation (2023-06-06 10:19:40 +0100)

----------------------------------------------------------------
target-arm queue:
 * Support gdbstub (guest debug) in HVF
 * xlnx-versal: Support CANFD controller
 * bpim2u: New board model: Banana Pi BPI-M2 Ultra
 * Emulate FEAT_LSE2
 * allow DC CVA[D]P in user mode emulation
 * trap DCC access in user mode emulation

----------------------------------------------------------------
Francesco Cagnin (4):
      arm: move KVM breakpoints helpers
      hvf: handle access for more registers
      hvf: add breakpoint handlers
      hvf: add guest debugging handlers for Apple Silicon hosts

Richard Henderson (20):
      target/arm: Add commentary for CPUARMState.exclusive_high
      target/arm: Add feature test for FEAT_LSE2
      target/arm: Introduce finalize_memop_{atom,pair}
      target/arm: Use tcg_gen_qemu_ld_i128 for LDXP
      target/arm: Use tcg_gen_qemu_{st, ld}_i128 for do_fp_{st, ld}
      target/arm: Use tcg_gen_qemu_st_i128 for STZG, STZ2G
      target/arm: Use tcg_gen_qemu_{ld, st}_i128 in gen_sve_{ld, st}r
      target/arm: Sink gen_mte_check1 into load/store_exclusive
      target/arm: Load/store integer pair with one tcg operation
      target/arm: Hoist finalize_memop out of do_gpr_{ld, st}
      target/arm: Hoist finalize_memop out of do_fp_{ld, st}
      target/arm: Pass memop to gen_mte_check1*
      target/arm: Pass single_memop to gen_mte_checkN
      target/arm: Check alignment in helper_mte_check
      target/arm: Add SCTLR.nAA to TBFLAG_A64
      target/arm: Relax ordered/atomic alignment checks for LSE2
      target/arm: Move mte check for store-exclusive
      tests/tcg/aarch64: Use stz2g in mte-7.c
      tests/tcg/multiarch: Adjust sigbus.c
      target/arm: Enable FEAT_LSE2 for -cpu max

Vikram Garhwal (4):
      hw/net/can: Introduce Xilinx Versal CANFD controller
      xlnx-versal: Connect Xilinx VERSAL CANFD controllers
      MAINTAINERS: Include canfd tests under Xilinx CAN
      tests/qtest: Introduce tests for Xilinx VERSAL CANFD controller

Zhuojia Shen (3):
      target/arm: allow DC CVA[D]P in user mode emulation
      tests/tcg/aarch64: add DC CVA[D]P tests
      target/arm: trap DCC access in user mode emulation

qianfan Zhao (11):
      hw: arm: Add bananapi M2-Ultra and allwinner-r40 support
      hw/arm/allwinner-r40: add Clock Control Unit
      hw: allwinner-r40: Complete uart devices
      hw: arm: allwinner-r40: Add i2c0 device
      hw/misc: Rename axp209 to axp22x and add support AXP221 PMU
      hw/arm/allwinner-r40: add SDRAM controller device
      hw: sd: allwinner-sdhost: Add sun50i-a64 SoC support
      hw: arm: allwinner-r40: Add emac and gmac support
      hw: arm: allwinner-sramc: Add SRAM Controller support for R40
      tests: avocado: boot_linux_console: Add test case for bpim2u
      docs: system: arm: Introduce bananapi_m2u

 MAINTAINERS | 2 +-
 docs/system/arm/bananapi_m2u.rst | 139 +++
 docs/system/arm/emulation.rst | 1 +
 docs/system/arm/xlnx-versal-virt.rst | 31 +
 docs/system/target-arm.rst | 1 +
 include/hw/arm/allwinner-r40.h | 143 +++
 include/hw/arm/xlnx-versal.h | 12 +
 include/hw/misc/allwinner-r40-ccu.h | 65 +
 include/hw/misc/allwinner-r40-dramc.h | 108 ++
 include/hw/misc/allwinner-sramc.h | 69 ++
 include/hw/net/xlnx-versal-canfd.h | 87 ++
 include/hw/sd/allwinner-sdhost.h | 9 +
 include/sysemu/hvf.h | 37 +
 include/sysemu/hvf_int.h | 2 +
 target/arm/cpu.h | 16 +-
 target/arm/hvf_arm.h | 7 +
 target/arm/internals.h | 53 +-
 target/arm/tcg/helper-a64.h | 3 +
 target/arm/tcg/translate-a64.h | 4 +-
 target/arm/tcg/translate.h | 65 +-
 accel/hvf/hvf-accel-ops.c | 119 ++
 accel/hvf/hvf-all.c | 23 +
 hw/arm/allwinner-r40.c | 526 ++++++++
 hw/arm/bananapi_m2u.c | 145 +++
 hw/arm/xlnx-versal-virt.c | 53 +
 hw/arm/xlnx-versal.c | 37 +
 hw/misc/allwinner-r40-ccu.c | 209 ++++
 hw/misc/allwinner-r40-dramc.c | 513 ++++++++
 hw/misc/allwinner-sramc.c | 184 +++
 hw/misc/axp209.c | 238 ----
 hw/misc/axp2xx.c | 283 +++++
 hw/net/can/xlnx-versal-canfd.c | 2107 +++++++++++++++++++++++++++++++++
 hw/sd/allwinner-sdhost.c | 72 +-
 target/arm/cpu.c | 2 +
 target/arm/debug_helper.c | 5 +
 target/arm/helper.c | 6 +-
 target/arm/hvf/hvf.c | 750 +++++++++++-
 target/arm/hyp_gdbstub.c | 253 ++++
 target/arm/kvm64.c | 276 -----
 target/arm/tcg/cpu64.c | 1 +
 target/arm/tcg/helper-a64.c | 7 +
 target/arm/tcg/hflags.c | 6 +
 target/arm/tcg/mte_helper.c | 18 +
 target/arm/tcg/translate-a64.c | 477 +++++---
 target/arm/tcg/translate-sve.c | 106 +-
 target/arm/tcg/translate.c | 1 +
 target/i386/hvf/hvf.c | 33 +
 tests/qtest/xlnx-canfd-test.c | 423 +++++++
 tests/tcg/aarch64/dcpodp.c | 63 +
 tests/tcg/aarch64/dcpop.c | 63 +
 tests/tcg/aarch64/mte-7.c | 3 +-
 tests/tcg/multiarch/sigbus.c | 13 +-
 hw/arm/Kconfig | 14 +-
 hw/arm/meson.build | 1 +
 hw/misc/Kconfig | 5 +-
 hw/misc/meson.build | 5 +-
 hw/misc/trace-events | 26 +-
 hw/net/can/meson.build | 1 +
 hw/net/can/trace-events | 7 +
 target/arm/meson.build | 3 +-
 tests/avocado/boot_linux_console.py | 176 +++
 tests/qtest/meson.build | 1 +
 tests/tcg/aarch64/Makefile.target | 11 +
 63 files changed, 7386 insertions(+), 733 deletions(-)
 create mode 100644 docs/system/arm/bananapi_m2u.rst
 create mode 100644 include/hw/arm/allwinner-r40.h
 create mode 100644 include/hw/misc/allwinner-r40-ccu.h
 create mode 100644 include/hw/misc/allwinner-r40-dramc.h
 create mode 100644 include/hw/misc/allwinner-sramc.h
 create mode 100644 include/hw/net/xlnx-versal-canfd.h
 create mode 100644 hw/arm/allwinner-r40.c
 create mode 100644 hw/arm/bananapi_m2u.c
 create mode 100644 hw/misc/allwinner-r40-ccu.c
 create mode 100644 hw/misc/allwinner-r40-dramc.c
 create mode 100644 hw/misc/allwinner-sramc.c
 delete mode 100644 hw/misc/axp209.c
 create mode 100644 hw/misc/axp2xx.c
 create mode 100644 hw/net/can/xlnx-versal-canfd.c
 create mode 100644 target/arm/hyp_gdbstub.c
 create mode 100644 tests/qtest/xlnx-canfd-test.c
 create mode 100644 tests/tcg/aarch64/dcpodp.c
 create mode 100644 tests/tcg/aarch64/dcpop.c
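The first patch of the 2017 series, below, introduces arm_fi_to_sfsc() and arm_fi_to_lfsc(), which build short- and long-descriptor fault status codes from an ARMMMUFaultInfo. As a quick illustration of the two encodings for one fault type, a translation fault, here is a standalone sketch; the function names sfsc_translation()/lfsc_translation() are hypothetical stand-ins for illustration, not QEMU API:

```c
#include <stdint.h>

/* Short-descriptor FSR: FSC is 0x5 (level 1) or 0x7 (level 2),
 * with the faulting address's domain placed in bits [7:4],
 * mirroring the ARMFault_Translation arm of arm_fi_to_sfsc(). */
uint32_t sfsc_translation(int level, int domain)
{
    uint32_t fsc = (level == 1) ? 0x5 : 0x7;
    return fsc | ((uint32_t)domain << 4);
}

/* Long-descriptor FSR: FSC is 0b0001xx with xx = walk level,
 * plus the LPAE format bit (bit 9) of the DFSR,
 * mirroring the ARMFault_Translation arm of arm_fi_to_lfsc(). */
uint32_t lfsc_translation(int level)
{
    uint32_t fsc = ((uint32_t)level & 3) | (0x1 << 2);
    return fsc | (1u << 9);
}
```

For example, a level-2 translation fault in domain 0 encodes as 0x7 in short format and as 0x206 (FSC 0x6 plus the LPAE bit) in long format, which is what the patch's switch statements produce for that case.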
Currently get_phys_addr() and its various subfunctions return
a hard-coded fault status register value for translation
failures. This is awkward because FSR values these days may
be either long-descriptor format or short-descriptor format.
Worse, the right FSR type to use doesn't depend only on the
translation table being walked -- some cases, like fault
info reported to AArch32 EL2 for some kinds of ATS operation,
must be in long-descriptor format even if the translation
table being walked was short format. We can't get those cases
right with our current approach.

Provide fields in the ARMMMUFaultInfo struct which allow
get_phys_addr() to provide sufficient information for a caller to
construct an FSR value themselves, and utility functions which do
this for both long and short format FSR values, as a first step in
switching get_phys_addr() and its children to only returning the
failure cause in the ARMMMUFaultInfo struct.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-2-git-send-email-peter.maydell@linaro.org
---
 target/arm/internals.h | 185 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 185 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline void arm_clear_exclusive(CPUARMState *env)
 }
 
 /**
+ * ARMFaultType: type of an ARM MMU fault
+ * This corresponds to the v8A pseudocode's Fault enumeration,
+ * with extensions for QEMU internal conditions.
+ */
+typedef enum ARMFaultType {
+    ARMFault_None,
+    ARMFault_AccessFlag,
+    ARMFault_Alignment,
+    ARMFault_Background,
+    ARMFault_Domain,
+    ARMFault_Permission,
+    ARMFault_Translation,
+    ARMFault_AddressSize,
+    ARMFault_SyncExternal,
+    ARMFault_SyncExternalOnWalk,
+    ARMFault_SyncParity,
+    ARMFault_SyncParityOnWalk,
+    ARMFault_AsyncParity,
+    ARMFault_AsyncExternal,
+    ARMFault_Debug,
+    ARMFault_TLBConflict,
+    ARMFault_Lockdown,
+    ARMFault_Exclusive,
+    ARMFault_ICacheMaint,
+    ARMFault_QEMU_NSCExec, /* v8M: NS executing in S&NSC memory */
+    ARMFault_QEMU_SFault, /* v8M: SecureFault INVTRAN, INVEP or AUVIOL */
+} ARMFaultType;
+
+/**
  * ARMMMUFaultInfo: Information describing an ARM MMU Fault
+ * @type: Type of fault
+ * @level: Table walk level (for translation, access flag and permission faults)
+ * @domain: Domain of the fault address (for non-LPAE CPUs only)
  * @s2addr: Address that caused a fault at stage 2
  * @stage2: True if we faulted at stage 2
  * @s1ptw: True if we faulted at stage 2 while doing a stage 1 page-table walk
@@ -XXX,XX +XXX,XX @@ static inline void arm_clear_exclusive(CPUARMState *env)
  */
 typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
 struct ARMMMUFaultInfo {
+    ARMFaultType type;
     target_ulong s2addr;
+    int level;
+    int domain;
     bool stage2;
     bool s1ptw;
     bool ea;
 };
 
+/**
+ * arm_fi_to_sfsc: Convert fault info struct to short-format FSC
+ * Compare pseudocode EncodeSDFSC(), though unlike that function
+ * we set up a whole FSR-format code including domain field and
+ * putting the high bit of the FSC into bit 10.
+ */
+static inline uint32_t arm_fi_to_sfsc(ARMMMUFaultInfo *fi)
+{
+    uint32_t fsc;
+
+    switch (fi->type) {
+    case ARMFault_None:
+        return 0;
+    case ARMFault_AccessFlag:
+        fsc = fi->level == 1 ? 0x3 : 0x6;
+        break;
+    case ARMFault_Alignment:
+        fsc = 0x1;
+        break;
+    case ARMFault_Permission:
+        fsc = fi->level == 1 ? 0xd : 0xf;
+        break;
+    case ARMFault_Domain:
+        fsc = fi->level == 1 ? 0x9 : 0xb;
+        break;
+    case ARMFault_Translation:
+        fsc = fi->level == 1 ? 0x5 : 0x7;
+        break;
+    case ARMFault_SyncExternal:
+        fsc = 0x8 | (fi->ea << 12);
+        break;
+    case ARMFault_SyncExternalOnWalk:
+        fsc = fi->level == 1 ? 0xc : 0xe;
+        fsc |= (fi->ea << 12);
+        break;
+    case ARMFault_SyncParity:
+        fsc = 0x409;
+        break;
+    case ARMFault_SyncParityOnWalk:
+        fsc = fi->level == 1 ? 0x40c : 0x40e;
+        break;
+    case ARMFault_AsyncParity:
+        fsc = 0x408;
+        break;
+    case ARMFault_AsyncExternal:
+        fsc = 0x406 | (fi->ea << 12);
+        break;
+    case ARMFault_Debug:
+        fsc = 0x2;
+        break;
+    case ARMFault_TLBConflict:
+        fsc = 0x400;
+        break;
+    case ARMFault_Lockdown:
+        fsc = 0x404;
+        break;
+    case ARMFault_Exclusive:
+        fsc = 0x405;
+        break;
+    case ARMFault_ICacheMaint:
+        fsc = 0x4;
+        break;
+    case ARMFault_Background:
+        fsc = 0x0;
+        break;
+    case ARMFault_QEMU_NSCExec:
+        fsc = M_FAKE_FSR_NSC_EXEC;
+        break;
+    case ARMFault_QEMU_SFault:
+        fsc = M_FAKE_FSR_SFAULT;
+        break;
+    default:
+        /* Other faults can't occur in a context that requires a
+         * short-format status code.
+         */
+        g_assert_not_reached();
+    }
+
+    fsc |= (fi->domain << 4);
+    return fsc;
+}
+
+/**
+ * arm_fi_to_lfsc: Convert fault info struct to long-format FSC
+ * Compare pseudocode EncodeLDFSC(), though unlike that function
+ * we fill in also the LPAE bit 9 of a DFSR format.
+ */
+static inline uint32_t arm_fi_to_lfsc(ARMMMUFaultInfo *fi)
+{
+    uint32_t fsc;
+
+    switch (fi->type) {
+    case ARMFault_None:
+        return 0;
+    case ARMFault_AddressSize:
+        fsc = fi->level & 3;
+        break;
+    case ARMFault_AccessFlag:
+        fsc = (fi->level & 3) | (0x2 << 2);
+        break;
+    case ARMFault_Permission:
+        fsc = (fi->level & 3) | (0x3 << 2);
+        break;
+    case ARMFault_Translation:
+        fsc = (fi->level & 3) | (0x1 << 2);
+        break;
+    case ARMFault_SyncExternal:
+        fsc = 0x10 | (fi->ea << 12);
+        break;
+    case ARMFault_SyncExternalOnWalk:
+        fsc = (fi->level & 3) | (0x5 << 2) | (fi->ea << 12);
+        break;
+    case ARMFault_SyncParity:
+        fsc = 0x18;
+        break;
+    case ARMFault_SyncParityOnWalk:
+        fsc = (fi->level & 3) | (0x7 << 2);
+        break;
+    case ARMFault_AsyncParity:
+        fsc = 0x19;
+        break;
+    case ARMFault_AsyncExternal:
+        fsc = 0x11 | (fi->ea << 12);
+        break;
+    case ARMFault_Alignment:
+        fsc = 0x21;
+        break;
+    case ARMFault_Debug:
+        fsc = 0x22;
+        break;
+    case ARMFault_TLBConflict:
+        fsc = 0x30;
+        break;
+    case ARMFault_Lockdown:
+        fsc = 0x34;
+        break;
+    case ARMFault_Exclusive:
+        fsc = 0x35;
+        break;
+    default:
+        /* Other faults can't occur in a context that requires a
+         * long-format status code.
+         */
+        g_assert_not_reached();
+    }
+
+    fsc |= 1 << 9;
+    return fsc;
+}
+
 /* Do a page table walk and add page to TLB if possible */
 bool arm_tlb_fill(CPUState *cpu, vaddr address,
                   MMUAccessType access_type, int mmu_idx,


From: Francesco Cagnin <fcagnin@quarkslab.com>

These helpers will be also used for HVF. Aside from reformatting a
couple of comments for 'checkpatch.pl' and updating meson to compile
'hyp_gdbstub.c', this is just code motion.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230601153107.81955-2-fcagnin@quarkslab.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 50 +++++++
 target/arm/hyp_gdbstub.c | 253 ++++++++++++++++++++++++++++++
 target/arm/kvm64.c | 276 ---------------------------------
 target/arm/meson.build | 3 +-
 4 files changed, 305 insertions(+), 277 deletions(-)
 create mode 100644 target/arm/hyp_gdbstub.c

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_fgt_active(CPUARMState *env, int el)
 }
 
 void assert_hflags_rebuild_correctly(CPUARMState *env);
+
+/*
+ * Although the ARM implementation of hardware assisted debugging
+ * allows for different breakpoints per-core, the current GDB
+ * interface treats them as a global pool of registers (which seems to
+ * be the case for x86, ppc and s390). As a result we store one copy
+ * of registers which is used for all active cores.
+ *
+ * Write access is serialised by virtue of the GDB protocol which
+ * updates things. Read access (i.e. when the values are copied to the
+ * vCPU) is also gated by GDB's run control.
+ *
+ * This is not unreasonable as most of the time debugging kernels you
+ * never know which core will eventually execute your function.
+ */
+
+typedef struct {
+    uint64_t bcr;
+    uint64_t bvr;
+} HWBreakpoint;
+
+/*
+ * The watchpoint registers can cover more area than the requested
+ * watchpoint so we need to store the additional information
+ * somewhere. We also need to supply a CPUWatchpoint to the GDB stub
+ * when the watchpoint is hit.
+ */
+typedef struct {
+    uint64_t wcr;
+    uint64_t wvr;
+    CPUWatchpoint details;
+} HWWatchpoint;
+
+/* Maximum and current break/watch point counts */
+extern int max_hw_bps, max_hw_wps;
+extern GArray *hw_breakpoints, *hw_watchpoints;
+
+#define cur_hw_wps (hw_watchpoints->len)
+#define cur_hw_bps (hw_breakpoints->len)
+#define get_hw_bp(i) (&g_array_index(hw_breakpoints, HWBreakpoint, i))
+#define get_hw_wp(i) (&g_array_index(hw_watchpoints, HWWatchpoint, i))
+
+bool find_hw_breakpoint(CPUState *cpu, target_ulong pc);
+int insert_hw_breakpoint(target_ulong pc);
+int delete_hw_breakpoint(target_ulong pc);
+
+bool check_watchpoint_in_range(int i, target_ulong addr);
+CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr);
+int insert_hw_watchpoint(target_ulong addr, target_ulong len, int type);
+int delete_hw_watchpoint(target_ulong addr, target_ulong len, int type);
 #endif
diff --git a/target/arm/hyp_gdbstub.c b/target/arm/hyp_gdbstub.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/hyp_gdbstub.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * ARM implementation of KVM and HVF hooks, 64 bit specific code
+ *
+ * Copyright Mian-M. Hamayun 2013, Virtual Open Systems
+ * Copyright Alex Bennée 2014, Linaro
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/gdbstub.h"
+
+/* Maximum and current break/watch point counts */
+int max_hw_bps, max_hw_wps;
+GArray *hw_breakpoints, *hw_watchpoints;
+
+/**
+ * insert_hw_breakpoint()
+ * @addr: address of breakpoint
+ *
+ * See ARM ARM D2.9.1 for details but here we are only going to create
+ * simple un-linked breakpoints (i.e. we don't chain breakpoints
+ * together to match address and context or vmid). The hardware is
+ * capable of fancier matching but that will require exposing that
+ * fanciness to GDB's interface
+ *
+ * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
+ *
+ *  31  24 23  20 19   16 15 14  13  12   9 8   5 4    3 2   1  0
+ * +------+------+-------+-----+----+------+-----+------+-----+---+
+ * | RES0 |  BT  |  LBN  | SSC | HMC| RES0 | BAS | RES0 | PMC | E |
+ * +------+------+-------+-----+----+------+-----+------+-----+---+
+ *
+ * BT: Breakpoint type (0 = unlinked address match)
+ * LBN: Linked BP number (0 = unused)
+ * SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
+ * BAS: Byte Address Select (RES1 for AArch64)
+ * E: Enable bit
+ *
+ * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
+ *
+ *  63  53 52       49 48       2  1 0
+ * +------+-----------+----------+-----+
+ * | RESS | VA[52:49] | VA[48:2] | 0 0 |
+ * +------+-----------+----------+-----+
+ *
+ * Depending on the addressing mode bits the top bits of the register
+ * are a sign extension of the highest applicable VA bit. Some
+ * versions of GDB don't do it correctly so we ensure they are correct
+ * here so future PC comparisons will work properly.
+ */
+
+int insert_hw_breakpoint(target_ulong addr)
+{
+    HWBreakpoint brk = {
+        .bcr = 0x1, /* BCR E=1, enable */
+        .bvr = sextract64(addr, 0, 53)
+    };
+
+    if (cur_hw_bps >= max_hw_bps) {
+        return -ENOBUFS;
+    }
+
+    brk.bcr = deposit32(brk.bcr, 1, 2, 0x3); /* PMC = 11 */
+    brk.bcr = deposit32(brk.bcr, 5, 4, 0xf); /* BAS = RES1 */
+
+    g_array_append_val(hw_breakpoints, brk);
+
+    return 0;
+}
+
+/**
+ * delete_hw_breakpoint()
+ * @pc: address of breakpoint
+ *
+ * Delete a breakpoint and shuffle any above down
+ */
+
+int delete_hw_breakpoint(target_ulong pc)
+{
+    int i;
+    for (i = 0; i < hw_breakpoints->len; i++) {
+        HWBreakpoint *brk = get_hw_bp(i);
+        if (brk->bvr == pc) {
+            g_array_remove_index(hw_breakpoints, i);
+            return 0;
+        }
+    }
+    return -ENOENT;
+}
+
+/**
+ * insert_hw_watchpoint()
+ * @addr: address of watch point
+ * @len: size of area
+ * @type: type of watch point
+ *
+ * See ARM ARM D2.10. As with the breakpoints we can do some advanced
+ * stuff if we want to. The watch points can be linked with the break
+ * points above to make them context aware. However for simplicity
+ * currently we only deal with simple read/write watch points.
+ *
+ * D7.3.11 DBGWCR<n>_EL1, Debug Watchpoint Control Registers
+ *
+ *  31  29 28   24 23  21  20  19 16 15 14  13   12  5 4   3 2   1  0
+ * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
+ * | RES0 |  MASK | RES0 | WT | LBN | SSC | HMC | BAS | LSC | PAC | E |
+ * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
+ *
+ * MASK: num bits addr mask (0=none,01/10=res,11=3 bits (8 bytes))
+ * WT: 0 - unlinked, 1 - linked (not currently used)
+ * LBN: Linked BP number (not currently used)
+ * SSC/HMC/PAC: Security, Higher and Priv access control (Table D2-11)
+ * BAS: Byte Address Select
+ * LSC: Load/Store control (01: load, 10: store, 11: both)
+ * E: Enable
+ *
+ * The bottom 2 bits of the value register are masked. Therefore to
+ * break on any sizes smaller than an unaligned word you need to set
+ * MASK=0, BAS=bit per byte in question. For larger regions (^2) you
+ * need to ensure you mask the address as required and set BAS=0xff
+ */
+
+int insert_hw_watchpoint(target_ulong addr, target_ulong len, int type)
+{
+    HWWatchpoint wp = {
+        .wcr = R_DBGWCR_E_MASK, /* E=1, enable */
+        .wvr = addr & (~0x7ULL),
+        .details = { .vaddr = addr, .len = len }
+    };
+
+    if (cur_hw_wps >= max_hw_wps) {
+        return -ENOBUFS;
+    }
+
+    /*
+     * HMC=0 SSC=0 PAC=3 will hit EL0 or EL1, any security state,
+     * valid whether EL3 is implemented or not
+     */
+    wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, PAC, 3);
+
+    switch (type) {
+    case GDB_WATCHPOINT_READ:
+        wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 1);
+        wp.details.flags = BP_MEM_READ;
+        break;
+    case GDB_WATCHPOINT_WRITE:
+        wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 2);
+        wp.details.flags = BP_MEM_WRITE;
+        break;
+    case GDB_WATCHPOINT_ACCESS:
+        wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 3);
+        wp.details.flags = BP_MEM_ACCESS;
+        break;
+    default:
+        g_assert_not_reached();
+        break;
+    }
+    if (len <= 8) {
+        /* we align the address and set the bits in BAS */
+        int off = addr & 0x7;
+        int bas = (1 << len) - 1;
+
+        wp.wcr = deposit32(wp.wcr, 5 + off, 8 - off, bas);
+    } else {
+        /* For ranges above 8 bytes we need to be a power of 2 */
+        if (is_power_of_2(len)) {
+            int bits = ctz64(len);
+
+            wp.wvr &= ~((1 << bits) - 1);
+            wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, MASK, bits);
+            wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, BAS, 0xff);
+        } else {
+            return -ENOBUFS;
+        }
+    }
+
+    g_array_append_val(hw_watchpoints, wp);
+    return 0;
+}
+
+bool check_watchpoint_in_range(int i, target_ulong addr)
+{
+    HWWatchpoint *wp = get_hw_wp(i);
+    uint64_t addr_top, addr_bottom = wp->wvr;
+    int bas = extract32(wp->wcr, 5, 8);
+    int mask = extract32(wp->wcr, 24, 4);
+
+    if (mask) {
+        addr_top = addr_bottom + (1 << mask);
+    } else {
+        /*
+         * BAS must be contiguous but can offset against the base
+         * address in DBGWVR
+         */
+        addr_bottom = addr_bottom + ctz32(bas);
+        addr_top = addr_bottom + clo32(bas);
+    }
+
+    if (addr >= addr_bottom && addr <= addr_top) {
+        return true;
+    }
+
+    return false;
+}
+
+/**
+ * delete_hw_watchpoint()
+ * @addr: address of breakpoint
+ *
+ * Delete a breakpoint and shuffle any above down
+ */
+
+int delete_hw_watchpoint(target_ulong addr, target_ulong len, int type)
+{
+    int i;
+    for (i = 0; i < cur_hw_wps; i++) {
+        if (check_watchpoint_in_range(i, addr)) {
+            g_array_remove_index(hw_watchpoints, i);
+            return 0;
+        }
+    }
+    return -ENOENT;
+}
+
+bool find_hw_breakpoint(CPUState *cpu, target_ulong pc)
+{
+    int i;
+
+    for (i = 0; i < cur_hw_bps; i++) {
+        HWBreakpoint *bp = get_hw_bp(i);
+        if (bp->bvr == pc) {
+            return true;
+        }
+    }
+    return false;
+}
+
+CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr)
+{
+    int i;
+
+    for (i = 0; i < cur_hw_wps; i++) {
+        if (check_watchpoint_in_range(i, addr)) {
+            return &get_hw_wp(i)->details;
+        }
+    }
+    return NULL;
+}
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@
 
 static bool have_guest_debug;
 
-/*
- * Although the ARM implementation of hardware assisted debugging
- * allows for different breakpoints per-core, the current GDB
- * interface treats them as a global pool of registers (which seems to
- * be the case for x86, ppc and s390). As a result we store one copy
- * of registers which is used for all active cores.
- *
- * Write access is serialised by virtue of the GDB protocol which
- * updates things. Read access (i.e. when the values are copied to the
- * vCPU) is also gated by GDB's run control.
- *
- * This is not unreasonable as most of the time debugging kernels you
- * never know which core will eventually execute your function.
- */
-
-typedef struct {
-    uint64_t bcr;
-    uint64_t bvr;
-} HWBreakpoint;
-
-/* The watchpoint registers can cover more area than the requested
- * watchpoint so we need to store the additional information
- * somewhere. We also need to supply a CPUWatchpoint to the GDB stub
- * when the watchpoint is hit.
- */
-typedef struct {
-    uint64_t wcr;
-    uint64_t wvr;
-    CPUWatchpoint details;
-} HWWatchpoint;
-
-/* Maximum and current break/watch point counts */
-int max_hw_bps, max_hw_wps;
-GArray *hw_breakpoints, *hw_watchpoints;
-
-#define cur_hw_wps (hw_watchpoints->len)
-#define cur_hw_bps (hw_breakpoints->len)
-#define get_hw_bp(i) (&g_array_index(hw_breakpoints, HWBreakpoint, i))
-#define get_hw_wp(i) (&g_array_index(hw_watchpoints, HWWatchpoint, i))
-
 void kvm_arm_init_debug(KVMState *s)
 {
     have_guest_debug = kvm_check_extension(s,
@@ -XXX,XX +XXX,XX @@ void kvm_arm_init_debug(KVMState *s)
     return;
 }
 
-/**
- * insert_hw_breakpoint()
- * @addr: address of breakpoint
- *
- * See ARM ARM D2.9.1 for details but here we are only going to create
- * simple un-linked breakpoints (i.e. we don't chain breakpoints
- * together to match address and context or vmid). The hardware is
- * capable of fancier matching but that will require exposing that
- * fanciness to GDB's interface
- *
- * DBGBCR<n>_EL1, Debug Breakpoint Control Registers
- *
- *  31  24 23  20 19   16 15 14  13  12   9 8   5 4    3 2   1  0
- * +------+------+-------+-----+----+------+-----+------+-----+---+
- * | RES0 |  BT  |  LBN  | SSC | HMC| RES0 | BAS | RES0 | PMC | E |
- * +------+------+-------+-----+----+------+-----+------+-----+---+
- *
- * BT: Breakpoint type (0 = unlinked address match)
411
- * LBN: Linked BP number (0 = unused)
412
- * SSC/HMC/PMC: Security, Higher and Priv access control (Table D-12)
413
- * BAS: Byte Address Select (RES1 for AArch64)
414
- * E: Enable bit
415
- *
416
- * DBGBVR<n>_EL1, Debug Breakpoint Value Registers
417
- *
418
- * 63 53 52 49 48 2 1 0
419
- * +------+-----------+----------+-----+
420
- * | RESS | VA[52:49] | VA[48:2] | 0 0 |
421
- * +------+-----------+----------+-----+
422
- *
423
- * Depending on the addressing mode bits the top bits of the register
424
- * are a sign extension of the highest applicable VA bit. Some
425
- * versions of GDB don't do it correctly so we ensure they are correct
426
- * here so future PC comparisons will work properly.
427
- */
428
-
429
-static int insert_hw_breakpoint(target_ulong addr)
430
-{
431
- HWBreakpoint brk = {
432
- .bcr = 0x1, /* BCR E=1, enable */
433
- .bvr = sextract64(addr, 0, 53)
434
- };
435
-
436
- if (cur_hw_bps >= max_hw_bps) {
437
- return -ENOBUFS;
438
- }
439
-
440
- brk.bcr = deposit32(brk.bcr, 1, 2, 0x3); /* PMC = 11 */
441
- brk.bcr = deposit32(brk.bcr, 5, 4, 0xf); /* BAS = RES1 */
442
-
443
- g_array_append_val(hw_breakpoints, brk);
444
-
445
- return 0;
446
-}
447
-
448
-/**
449
- * delete_hw_breakpoint()
450
- * @pc: address of breakpoint
451
- *
452
- * Delete a breakpoint and shuffle any above down
453
- */
454
-
455
-static int delete_hw_breakpoint(target_ulong pc)
456
-{
457
- int i;
458
- for (i = 0; i < hw_breakpoints->len; i++) {
459
- HWBreakpoint *brk = get_hw_bp(i);
460
- if (brk->bvr == pc) {
461
- g_array_remove_index(hw_breakpoints, i);
462
- return 0;
463
- }
464
- }
465
- return -ENOENT;
466
-}
467
-
468
-/**
469
- * insert_hw_watchpoint()
470
- * @addr: address of watch point
471
- * @len: size of area
472
- * @type: type of watch point
473
- *
474
- * See ARM ARM D2.10. As with the breakpoints we can do some advanced
475
- * stuff if we want to. The watch points can be linked with the break
476
- * points above to make them context aware. However for simplicity
477
- * currently we only deal with simple read/write watch points.
478
- *
479
- * D7.3.11 DBGWCR<n>_EL1, Debug Watchpoint Control Registers
480
- *
481
- * 31 29 28 24 23 21 20 19 16 15 14 13 12 5 4 3 2 1 0
482
- * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
483
- * | RES0 | MASK | RES0 | WT | LBN | SSC | HMC | BAS | LSC | PAC | E |
484
- * +------+-------+------+----+-----+-----+-----+-----+-----+-----+---+
485
- *
486
- * MASK: num bits addr mask (0=none,01/10=res,11=3 bits (8 bytes))
487
- * WT: 0 - unlinked, 1 - linked (not currently used)
488
- * LBN: Linked BP number (not currently used)
489
- * SSC/HMC/PAC: Security, Higher and Priv access control (Table D2-11)
490
- * BAS: Byte Address Select
491
- * LSC: Load/Store control (01: load, 10: store, 11: both)
492
- * E: Enable
493
- *
494
- * The bottom 2 bits of the value register are masked. Therefore to
495
- * break on any sizes smaller than an unaligned word you need to set
496
- * MASK=0, BAS=bit per byte in question. For larger regions (^2) you
497
- * need to ensure you mask the address as required and set BAS=0xff
498
- */
499
-
500
-static int insert_hw_watchpoint(target_ulong addr,
501
- target_ulong len, int type)
502
-{
503
- HWWatchpoint wp = {
504
- .wcr = R_DBGWCR_E_MASK, /* E=1, enable */
505
- .wvr = addr & (~0x7ULL),
506
- .details = { .vaddr = addr, .len = len }
507
- };
508
-
509
- if (cur_hw_wps >= max_hw_wps) {
510
- return -ENOBUFS;
511
- }
512
-
513
- /*
514
- * HMC=0 SSC=0 PAC=3 will hit EL0 or EL1, any security state,
515
- * valid whether EL3 is implemented or not
516
- */
517
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, PAC, 3);
518
-
519
- switch (type) {
520
- case GDB_WATCHPOINT_READ:
521
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 1);
522
- wp.details.flags = BP_MEM_READ;
523
- break;
524
- case GDB_WATCHPOINT_WRITE:
525
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 2);
526
- wp.details.flags = BP_MEM_WRITE;
527
- break;
528
- case GDB_WATCHPOINT_ACCESS:
529
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, LSC, 3);
530
- wp.details.flags = BP_MEM_ACCESS;
531
- break;
532
- default:
533
- g_assert_not_reached();
534
- break;
535
- }
536
- if (len <= 8) {
537
- /* we align the address and set the bits in BAS */
538
- int off = addr & 0x7;
539
- int bas = (1 << len) - 1;
540
-
541
- wp.wcr = deposit32(wp.wcr, 5 + off, 8 - off, bas);
542
- } else {
543
- /* For ranges above 8 bytes we need to be a power of 2 */
544
- if (is_power_of_2(len)) {
545
- int bits = ctz64(len);
546
-
547
- wp.wvr &= ~((1 << bits) - 1);
548
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, MASK, bits);
549
- wp.wcr = FIELD_DP64(wp.wcr, DBGWCR, BAS, 0xff);
550
- } else {
551
- return -ENOBUFS;
552
- }
553
- }
554
-
555
- g_array_append_val(hw_watchpoints, wp);
556
- return 0;
557
-}
558
-
559
-
560
-static bool check_watchpoint_in_range(int i, target_ulong addr)
561
-{
562
- HWWatchpoint *wp = get_hw_wp(i);
563
- uint64_t addr_top, addr_bottom = wp->wvr;
564
- int bas = extract32(wp->wcr, 5, 8);
565
- int mask = extract32(wp->wcr, 24, 4);
566
-
567
- if (mask) {
568
- addr_top = addr_bottom + (1 << mask);
569
- } else {
570
- /* BAS must be contiguous but can offset against the base
571
- * address in DBGWVR */
572
- addr_bottom = addr_bottom + ctz32(bas);
573
- addr_top = addr_bottom + clo32(bas);
574
- }
575
-
576
- if (addr >= addr_bottom && addr <= addr_top) {
577
- return true;
578
- }
579
-
580
- return false;
581
-}
582
-
583
-/**
584
- * delete_hw_watchpoint()
585
- * @addr: address of breakpoint
586
- *
587
- * Delete a breakpoint and shuffle any above down
588
- */
589
-
590
-static int delete_hw_watchpoint(target_ulong addr,
591
- target_ulong len, int type)
592
-{
593
- int i;
594
- for (i = 0; i < cur_hw_wps; i++) {
595
- if (check_watchpoint_in_range(i, addr)) {
596
- g_array_remove_index(hw_watchpoints, i);
597
- return 0;
598
- }
599
- }
600
- return -ENOENT;
601
-}
602
-
603
-
604
int kvm_arch_insert_hw_breakpoint(target_ulong addr,
605
target_ulong len, int type)
606
{
607
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_hw_debug_active(CPUState *cs)
608
return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
609
}
610
611
-static bool find_hw_breakpoint(CPUState *cpu, target_ulong pc)
612
-{
613
- int i;
614
-
615
- for (i = 0; i < cur_hw_bps; i++) {
616
- HWBreakpoint *bp = get_hw_bp(i);
617
- if (bp->bvr == pc) {
618
- return true;
619
- }
620
- }
621
- return false;
622
-}
623
-
624
-static CPUWatchpoint *find_hw_watchpoint(CPUState *cpu, target_ulong addr)
625
-{
626
- int i;
627
-
628
- for (i = 0; i < cur_hw_wps; i++) {
629
- if (check_watchpoint_in_range(i, addr)) {
630
- return &get_hw_wp(i)->details;
631
- }
632
- }
633
- return NULL;
634
-}
635
-
636
static bool kvm_arm_set_device_attr(CPUState *cs, struct kvm_device_attr *attr,
637
const char *name)
638
{
639
diff --git a/target/arm/meson.build b/target/arm/meson.build
640
index XXXXXXX..XXXXXXX 100644
641
--- a/target/arm/meson.build
642
+++ b/target/arm/meson.build
643
@@ -XXX,XX +XXX,XX @@ arm_ss.add(files(
644
))
645
arm_ss.add(zlib)
646
647
-arm_ss.add(when: 'CONFIG_KVM', if_true: files('kvm.c', 'kvm64.c'), if_false: files('kvm-stub.c'))
648
+arm_ss.add(when: 'CONFIG_KVM', if_true: files('hyp_gdbstub.c', 'kvm.c', 'kvm64.c'), if_false: files('kvm-stub.c'))
649
+arm_ss.add(when: 'CONFIG_HVF', if_true: files('hyp_gdbstub.c'))
650
651
arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
652
'cpu64.c',
238
--
653
--
239
2.7.4
654
2.34.1
240
655
241
656
diff view generated by jsdifflib
For v8M it is possible for the CONTROL.SPSEL bit value and the
current stack to be out of sync. This means we need to update
the checks used in reads and writes of the PSP and MSP special
registers to use v7m_using_psp() rather than directly checking
the SPSEL bit in the control register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-2-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/arm/helper.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)

     switch (reg) {
     case 8: /* MSP */
-        return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
-            env->v7m.other_sp : env->regs[13];
+        return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
     case 9: /* PSP */
-        return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
-            env->regs[13] : env->v7m.other_sp;
+        return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
     case 16: /* PRIMASK */
         return env->v7m.primask[env->v7m.secure];
     case 17: /* BASEPRI */
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         }
         break;
     case 8: /* MSP */
-        if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
+        if (v7m_using_psp(env)) {
             env->v7m.other_sp = val;
         } else {
             env->regs[13] = val;
         }
         break;
     case 9: /* PSP */
-        if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
+        if (v7m_using_psp(env)) {
             env->regs[13] = val;
         } else {
             env->v7m.other_sp = val;
--
2.7.4

From: Francesco Cagnin <fcagnin@quarkslab.com>

Required for guest debugging.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-3-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/hvf/hvf.c | 213 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 213 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 #define SYSREG_ICC_SGI1R_EL1 SYSREG(3, 0, 12, 11, 5)
 #define SYSREG_ICC_SRE_EL1 SYSREG(3, 0, 12, 12, 5)

+#define SYSREG_MDSCR_EL1 SYSREG(2, 0, 0, 2, 2)
+#define SYSREG_DBGBVR0_EL1 SYSREG(2, 0, 0, 0, 4)
+#define SYSREG_DBGBCR0_EL1 SYSREG(2, 0, 0, 0, 5)
+#define SYSREG_DBGWVR0_EL1 SYSREG(2, 0, 0, 0, 6)
+#define SYSREG_DBGWCR0_EL1 SYSREG(2, 0, 0, 0, 7)
+#define SYSREG_DBGBVR1_EL1 SYSREG(2, 0, 0, 1, 4)
+#define SYSREG_DBGBCR1_EL1 SYSREG(2, 0, 0, 1, 5)
+#define SYSREG_DBGWVR1_EL1 SYSREG(2, 0, 0, 1, 6)
+#define SYSREG_DBGWCR1_EL1 SYSREG(2, 0, 0, 1, 7)
+#define SYSREG_DBGBVR2_EL1 SYSREG(2, 0, 0, 2, 4)
+#define SYSREG_DBGBCR2_EL1 SYSREG(2, 0, 0, 2, 5)
+#define SYSREG_DBGWVR2_EL1 SYSREG(2, 0, 0, 2, 6)
+#define SYSREG_DBGWCR2_EL1 SYSREG(2, 0, 0, 2, 7)
+#define SYSREG_DBGBVR3_EL1 SYSREG(2, 0, 0, 3, 4)
+#define SYSREG_DBGBCR3_EL1 SYSREG(2, 0, 0, 3, 5)
+#define SYSREG_DBGWVR3_EL1 SYSREG(2, 0, 0, 3, 6)
+#define SYSREG_DBGWCR3_EL1 SYSREG(2, 0, 0, 3, 7)
+#define SYSREG_DBGBVR4_EL1 SYSREG(2, 0, 0, 4, 4)
+#define SYSREG_DBGBCR4_EL1 SYSREG(2, 0, 0, 4, 5)
+#define SYSREG_DBGWVR4_EL1 SYSREG(2, 0, 0, 4, 6)
+#define SYSREG_DBGWCR4_EL1 SYSREG(2, 0, 0, 4, 7)
+#define SYSREG_DBGBVR5_EL1 SYSREG(2, 0, 0, 5, 4)
+#define SYSREG_DBGBCR5_EL1 SYSREG(2, 0, 0, 5, 5)
+#define SYSREG_DBGWVR5_EL1 SYSREG(2, 0, 0, 5, 6)
+#define SYSREG_DBGWCR5_EL1 SYSREG(2, 0, 0, 5, 7)
+#define SYSREG_DBGBVR6_EL1 SYSREG(2, 0, 0, 6, 4)
+#define SYSREG_DBGBCR6_EL1 SYSREG(2, 0, 0, 6, 5)
+#define SYSREG_DBGWVR6_EL1 SYSREG(2, 0, 0, 6, 6)
+#define SYSREG_DBGWCR6_EL1 SYSREG(2, 0, 0, 6, 7)
+#define SYSREG_DBGBVR7_EL1 SYSREG(2, 0, 0, 7, 4)
+#define SYSREG_DBGBCR7_EL1 SYSREG(2, 0, 0, 7, 5)
+#define SYSREG_DBGWVR7_EL1 SYSREG(2, 0, 0, 7, 6)
+#define SYSREG_DBGWCR7_EL1 SYSREG(2, 0, 0, 7, 7)
+#define SYSREG_DBGBVR8_EL1 SYSREG(2, 0, 0, 8, 4)
+#define SYSREG_DBGBCR8_EL1 SYSREG(2, 0, 0, 8, 5)
+#define SYSREG_DBGWVR8_EL1 SYSREG(2, 0, 0, 8, 6)
+#define SYSREG_DBGWCR8_EL1 SYSREG(2, 0, 0, 8, 7)
+#define SYSREG_DBGBVR9_EL1 SYSREG(2, 0, 0, 9, 4)
+#define SYSREG_DBGBCR9_EL1 SYSREG(2, 0, 0, 9, 5)
+#define SYSREG_DBGWVR9_EL1 SYSREG(2, 0, 0, 9, 6)
+#define SYSREG_DBGWCR9_EL1 SYSREG(2, 0, 0, 9, 7)
+#define SYSREG_DBGBVR10_EL1 SYSREG(2, 0, 0, 10, 4)
+#define SYSREG_DBGBCR10_EL1 SYSREG(2, 0, 0, 10, 5)
+#define SYSREG_DBGWVR10_EL1 SYSREG(2, 0, 0, 10, 6)
+#define SYSREG_DBGWCR10_EL1 SYSREG(2, 0, 0, 10, 7)
+#define SYSREG_DBGBVR11_EL1 SYSREG(2, 0, 0, 11, 4)
+#define SYSREG_DBGBCR11_EL1 SYSREG(2, 0, 0, 11, 5)
+#define SYSREG_DBGWVR11_EL1 SYSREG(2, 0, 0, 11, 6)
+#define SYSREG_DBGWCR11_EL1 SYSREG(2, 0, 0, 11, 7)
+#define SYSREG_DBGBVR12_EL1 SYSREG(2, 0, 0, 12, 4)
+#define SYSREG_DBGBCR12_EL1 SYSREG(2, 0, 0, 12, 5)
+#define SYSREG_DBGWVR12_EL1 SYSREG(2, 0, 0, 12, 6)
+#define SYSREG_DBGWCR12_EL1 SYSREG(2, 0, 0, 12, 7)
+#define SYSREG_DBGBVR13_EL1 SYSREG(2, 0, 0, 13, 4)
+#define SYSREG_DBGBCR13_EL1 SYSREG(2, 0, 0, 13, 5)
+#define SYSREG_DBGWVR13_EL1 SYSREG(2, 0, 0, 13, 6)
+#define SYSREG_DBGWCR13_EL1 SYSREG(2, 0, 0, 13, 7)
+#define SYSREG_DBGBVR14_EL1 SYSREG(2, 0, 0, 14, 4)
+#define SYSREG_DBGBCR14_EL1 SYSREG(2, 0, 0, 14, 5)
+#define SYSREG_DBGWVR14_EL1 SYSREG(2, 0, 0, 14, 6)
+#define SYSREG_DBGWCR14_EL1 SYSREG(2, 0, 0, 14, 7)
+#define SYSREG_DBGBVR15_EL1 SYSREG(2, 0, 0, 15, 4)
+#define SYSREG_DBGBCR15_EL1 SYSREG(2, 0, 0, 15, 5)
+#define SYSREG_DBGWVR15_EL1 SYSREG(2, 0, 0, 15, 6)
+#define SYSREG_DBGWCR15_EL1 SYSREG(2, 0, 0, 15, 7)
+
 #define WFX_IS_WFE (1 << 0)

 #define TMR_CTL_ENABLE (1 << 0)
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
             hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
         }
         break;
+    case SYSREG_DBGBVR0_EL1:
+    case SYSREG_DBGBVR1_EL1:
+    case SYSREG_DBGBVR2_EL1:
+    case SYSREG_DBGBVR3_EL1:
+    case SYSREG_DBGBVR4_EL1:
+    case SYSREG_DBGBVR5_EL1:
+    case SYSREG_DBGBVR6_EL1:
+    case SYSREG_DBGBVR7_EL1:
+    case SYSREG_DBGBVR8_EL1:
+    case SYSREG_DBGBVR9_EL1:
+    case SYSREG_DBGBVR10_EL1:
+    case SYSREG_DBGBVR11_EL1:
+    case SYSREG_DBGBVR12_EL1:
+    case SYSREG_DBGBVR13_EL1:
+    case SYSREG_DBGBVR14_EL1:
+    case SYSREG_DBGBVR15_EL1:
+        val = env->cp15.dbgbvr[SYSREG_CRM(reg)];
+        break;
+    case SYSREG_DBGBCR0_EL1:
+    case SYSREG_DBGBCR1_EL1:
+    case SYSREG_DBGBCR2_EL1:
+    case SYSREG_DBGBCR3_EL1:
+    case SYSREG_DBGBCR4_EL1:
+    case SYSREG_DBGBCR5_EL1:
+    case SYSREG_DBGBCR6_EL1:
+    case SYSREG_DBGBCR7_EL1:
+    case SYSREG_DBGBCR8_EL1:
+    case SYSREG_DBGBCR9_EL1:
+    case SYSREG_DBGBCR10_EL1:
+    case SYSREG_DBGBCR11_EL1:
+    case SYSREG_DBGBCR12_EL1:
+    case SYSREG_DBGBCR13_EL1:
+    case SYSREG_DBGBCR14_EL1:
+    case SYSREG_DBGBCR15_EL1:
+        val = env->cp15.dbgbcr[SYSREG_CRM(reg)];
+        break;
+    case SYSREG_DBGWVR0_EL1:
+    case SYSREG_DBGWVR1_EL1:
+    case SYSREG_DBGWVR2_EL1:
+    case SYSREG_DBGWVR3_EL1:
+    case SYSREG_DBGWVR4_EL1:
+    case SYSREG_DBGWVR5_EL1:
+    case SYSREG_DBGWVR6_EL1:
+    case SYSREG_DBGWVR7_EL1:
+    case SYSREG_DBGWVR8_EL1:
+    case SYSREG_DBGWVR9_EL1:
+    case SYSREG_DBGWVR10_EL1:
+    case SYSREG_DBGWVR11_EL1:
+    case SYSREG_DBGWVR12_EL1:
+    case SYSREG_DBGWVR13_EL1:
+    case SYSREG_DBGWVR14_EL1:
+    case SYSREG_DBGWVR15_EL1:
+        val = env->cp15.dbgwvr[SYSREG_CRM(reg)];
+        break;
+    case SYSREG_DBGWCR0_EL1:
+    case SYSREG_DBGWCR1_EL1:
+    case SYSREG_DBGWCR2_EL1:
+    case SYSREG_DBGWCR3_EL1:
+    case SYSREG_DBGWCR4_EL1:
+    case SYSREG_DBGWCR5_EL1:
+    case SYSREG_DBGWCR6_EL1:
+    case SYSREG_DBGWCR7_EL1:
+    case SYSREG_DBGWCR8_EL1:
+    case SYSREG_DBGWCR9_EL1:
+    case SYSREG_DBGWCR10_EL1:
+    case SYSREG_DBGWCR11_EL1:
+    case SYSREG_DBGWCR12_EL1:
+    case SYSREG_DBGWCR13_EL1:
+    case SYSREG_DBGWCR14_EL1:
+    case SYSREG_DBGWCR15_EL1:
+        val = env->cp15.dbgwcr[SYSREG_CRM(reg)];
+        break;
     default:
         if (is_id_sysreg(reg)) {
             /* ID system registers read as RES0 */
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
             hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
         }
         break;
+    case SYSREG_MDSCR_EL1:
+        env->cp15.mdscr_el1 = val;
+        break;
+    case SYSREG_DBGBVR0_EL1:
+    case SYSREG_DBGBVR1_EL1:
+    case SYSREG_DBGBVR2_EL1:
+    case SYSREG_DBGBVR3_EL1:
+    case SYSREG_DBGBVR4_EL1:
+    case SYSREG_DBGBVR5_EL1:
+    case SYSREG_DBGBVR6_EL1:
+    case SYSREG_DBGBVR7_EL1:
+    case SYSREG_DBGBVR8_EL1:
+    case SYSREG_DBGBVR9_EL1:
+    case SYSREG_DBGBVR10_EL1:
+    case SYSREG_DBGBVR11_EL1:
+    case SYSREG_DBGBVR12_EL1:
+    case SYSREG_DBGBVR13_EL1:
+    case SYSREG_DBGBVR14_EL1:
+    case SYSREG_DBGBVR15_EL1:
+        env->cp15.dbgbvr[SYSREG_CRM(reg)] = val;
+        break;
+    case SYSREG_DBGBCR0_EL1:
+    case SYSREG_DBGBCR1_EL1:
+    case SYSREG_DBGBCR2_EL1:
+    case SYSREG_DBGBCR3_EL1:
+    case SYSREG_DBGBCR4_EL1:
+    case SYSREG_DBGBCR5_EL1:
+    case SYSREG_DBGBCR6_EL1:
+    case SYSREG_DBGBCR7_EL1:
+    case SYSREG_DBGBCR8_EL1:
+    case SYSREG_DBGBCR9_EL1:
+    case SYSREG_DBGBCR10_EL1:
+    case SYSREG_DBGBCR11_EL1:
+    case SYSREG_DBGBCR12_EL1:
+    case SYSREG_DBGBCR13_EL1:
+    case SYSREG_DBGBCR14_EL1:
+    case SYSREG_DBGBCR15_EL1:
+        env->cp15.dbgbcr[SYSREG_CRM(reg)] = val;
+        break;
+    case SYSREG_DBGWVR0_EL1:
+    case SYSREG_DBGWVR1_EL1:
+    case SYSREG_DBGWVR2_EL1:
+    case SYSREG_DBGWVR3_EL1:
+    case SYSREG_DBGWVR4_EL1:
+    case SYSREG_DBGWVR5_EL1:
+    case SYSREG_DBGWVR6_EL1:
+    case SYSREG_DBGWVR7_EL1:
+    case SYSREG_DBGWVR8_EL1:
+    case SYSREG_DBGWVR9_EL1:
+    case SYSREG_DBGWVR10_EL1:
+    case SYSREG_DBGWVR11_EL1:
+    case SYSREG_DBGWVR12_EL1:
+    case SYSREG_DBGWVR13_EL1:
+    case SYSREG_DBGWVR14_EL1:
+    case SYSREG_DBGWVR15_EL1:
+        env->cp15.dbgwvr[SYSREG_CRM(reg)] = val;
+        break;
+    case SYSREG_DBGWCR0_EL1:
+    case SYSREG_DBGWCR1_EL1:
+    case SYSREG_DBGWCR2_EL1:
+    case SYSREG_DBGWCR3_EL1:
+    case SYSREG_DBGWCR4_EL1:
+    case SYSREG_DBGWCR5_EL1:
+    case SYSREG_DBGWCR6_EL1:
+    case SYSREG_DBGWCR7_EL1:
+    case SYSREG_DBGWCR8_EL1:
+    case SYSREG_DBGWCR9_EL1:
+    case SYSREG_DBGWCR10_EL1:
+    case SYSREG_DBGWCR11_EL1:
+    case SYSREG_DBGWCR12_EL1:
+    case SYSREG_DBGWCR13_EL1:
+    case SYSREG_DBGWCR14_EL1:
+    case SYSREG_DBGWCR15_EL1:
+        env->cp15.dbgwcr[SYSREG_CRM(reg)] = val;
+        break;
     default:
         cpu_synchronize_state(cpu);
         trace_hvf_unhandled_sysreg_write(env->pc, reg,
--
2.34.1
When we added the ARMMMUIdx_MSUser MMU index we forgot to
add it to the case statement in regime_is_user(), so we
weren't treating it as unprivileged when doing MPU lookups.
Correct the omission.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-4-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/arm/helper.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
     case ARMMMUIdx_S1SE0:
     case ARMMMUIdx_S1NSE0:
     case ARMMMUIdx_MUser:
+    case ARMMMUIdx_MSUser:
         return true;
     default:
         return false;
--
2.7.4

From: Francesco Cagnin <fcagnin@quarkslab.com>

Required for guest debugging. The code has been structured like the KVM
counterpart.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-4-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf.h      |  22 ++++++++
 include/sysemu/hvf_int.h  |   1 +
 accel/hvf/hvf-accel-ops.c | 109 ++++++++++++++++++++++++++++++++++++++
 accel/hvf/hvf-all.c       |  17 ++++++
 target/arm/hvf/hvf.c      |  63 ++++++++++++++++++++++
 target/i386/hvf/hvf.c     |  24 +++++++++
 6 files changed, 236 insertions(+)

diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf.h
+++ b/include/sysemu/hvf.h
@@ -XXX,XX +XXX,XX @@
 #include "qom/object.h"

 #ifdef NEED_CPU_H
+#include "cpu.h"

 #ifdef CONFIG_HVF
 uint32_t hvf_get_supported_cpuid(uint32_t func, uint32_t idx,
@@ -XXX,XX +XXX,XX @@ typedef struct HVFState HVFState;
 DECLARE_INSTANCE_CHECKER(HVFState, HVF_STATE,
                          TYPE_HVF_ACCEL)

+#ifdef NEED_CPU_H
+struct hvf_sw_breakpoint {
+    target_ulong pc;
+    target_ulong saved_insn;
+    int use_count;
+    QTAILQ_ENTRY(hvf_sw_breakpoint) entry;
+};
+
+struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu,
+                                                 target_ulong pc);
+int hvf_sw_breakpoints_active(CPUState *cpu);
+
+int hvf_arch_insert_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp);
+int hvf_arch_remove_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp);
+int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len,
+                                  int type);
+int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len,
+                                  int type);
+void hvf_arch_remove_all_hw_breakpoints(void);
+#endif /* NEED_CPU_H */
+
 #endif
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@ struct HVFState {

     hvf_vcpu_caps *hvf_caps;
     uint64_t vtimer_offset;
+    QTAILQ_HEAD(, hvf_sw_breakpoint) hvf_sw_breakpoints;
 };
 extern HVFState *hvf_state;

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/main-loop.h"
 #include "exec/address-spaces.h"
 #include "exec/exec-all.h"
+#include "exec/gdbstub.h"
 #include "sysemu/cpus.h"
 #include "sysemu/hvf.h"
 #include "sysemu/hvf_int.h"
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
         s->slots[x].slot_id = x;
     }

+    QTAILQ_INIT(&s->hvf_sw_breakpoints);
+
     hvf_state = s;
     memory_listener_register(&hvf_memory_listener, &address_space_memory);

@@ -XXX,XX +XXX,XX @@ static void hvf_start_vcpu_thread(CPUState *cpu)
                        cpu, QEMU_THREAD_JOINABLE);
 }

+static int hvf_insert_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
+{
+    struct hvf_sw_breakpoint *bp;
+    int err;
+
+    if (type == GDB_BREAKPOINT_SW) {
+        bp = hvf_find_sw_breakpoint(cpu, addr);
+        if (bp) {
+            bp->use_count++;
+            return 0;
+        }
+
+        bp = g_new(struct hvf_sw_breakpoint, 1);
+        bp->pc = addr;
+        bp->use_count = 1;
+        err = hvf_arch_insert_sw_breakpoint(cpu, bp);
+        if (err) {
+            g_free(bp);
+            return err;
+        }
+
+        QTAILQ_INSERT_HEAD(&hvf_state->hvf_sw_breakpoints, bp, entry);
+    } else {
+        err = hvf_arch_insert_hw_breakpoint(addr, len, type);
+        if (err) {
+            return err;
+        }
+    }
+
+    CPU_FOREACH(cpu) {
+        err = hvf_update_guest_debug(cpu);
+        if (err) {
+            return err;
+        }
+    }
+    return 0;
+}
+
+static int hvf_remove_breakpoint(CPUState *cpu, int type, hwaddr addr, hwaddr len)
+{
+    struct hvf_sw_breakpoint *bp;
+    int err;
+
+    if (type == GDB_BREAKPOINT_SW) {
+        bp = hvf_find_sw_breakpoint(cpu, addr);
+        if (!bp) {
+            return -ENOENT;
+        }
+
+        if (bp->use_count > 1) {
+            bp->use_count--;
+            return 0;
+        }
+
+        err = hvf_arch_remove_sw_breakpoint(cpu, bp);
+        if (err) {
+            return err;
+        }
+
+        QTAILQ_REMOVE(&hvf_state->hvf_sw_breakpoints, bp, entry);
+        g_free(bp);
+    } else {
+        err = hvf_arch_remove_hw_breakpoint(addr, len, type);
+        if (err) {
+            return err;
+        }
+    }
+
+    CPU_FOREACH(cpu) {
+        err = hvf_update_guest_debug(cpu);
+        if (err) {
+            return err;
+        }
+    }
+    return 0;
+}
+
+static void hvf_remove_all_breakpoints(CPUState *cpu)
+{
+    struct hvf_sw_breakpoint *bp, *next;
+    CPUState *tmpcpu;
+
+    QTAILQ_FOREACH_SAFE(bp, &hvf_state->hvf_sw_breakpoints, entry, next) {
+        if (hvf_arch_remove_sw_breakpoint(cpu, bp) != 0) {
+            /* Try harder to find a CPU that currently sees the breakpoint. */
+            CPU_FOREACH(tmpcpu)
+            {
+                if (hvf_arch_remove_sw_breakpoint(tmpcpu, bp) == 0) {
+                    break;
+                }
+            }
+        }
+        QTAILQ_REMOVE(&hvf_state->hvf_sw_breakpoints, bp, entry);
+        g_free(bp);
+    }
+    hvf_arch_remove_all_hw_breakpoints();
+
+    CPU_FOREACH(cpu) {
+        hvf_update_guest_debug(cpu);
+    }
+}
+
 static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
 {
     AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
     ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
     ops->synchronize_state = hvf_cpu_synchronize_state;
     ops->synchronize_pre_loadvm = hvf_cpu_synchronize_pre_loadvm;
+
+    ops->insert_breakpoint = hvf_insert_breakpoint;
+    ops->remove_breakpoint = hvf_remove_breakpoint;
+    ops->remove_all_breakpoints = hvf_remove_all_breakpoints;
 };
 static const TypeInfo hvf_accel_ops_type = {
     .name = ACCEL_OPS_NAME("hvf"),
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-all.c
+++ b/accel/hvf/hvf-all.c
@@ -XXX,XX +XXX,XX @@ void assert_hvf_ok(hv_return_t ret)

     abort();
 }
+
+struct hvf_sw_breakpoint *hvf_find_sw_breakpoint(CPUState *cpu, target_ulong pc)
+{
+    struct hvf_sw_breakpoint *bp;
+
+    QTAILQ_FOREACH(bp, &hvf_state->hvf_sw_breakpoints, entry) {
+        if (bp->pc == pc) {
+            return bp;
+        }
+    }
+    return NULL;
+}
+
+int hvf_sw_breakpoints_active(CPUState *cpu)
+{
+    return !QTAILQ_EMPTY(&hvf_state->hvf_sw_breakpoints);
+}
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 #include "trace/trace-target_arm_hvf.h"
 #include "migration/vmstate.h"

+#include "exec/gdbstub.h"
+
 #define HVF_SYSREG(crn, crm, op0, op1, op2) \
     ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
 #define PL1_WRITE_MASK 0x4
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init(void)
     qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
     return 0;
 }
+
+static const uint32_t brk_insn = 0xd4200000;
+
+int hvf_arch_insert_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
+{
+    if (cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&bp->saved_insn, 4, 0) ||
+        cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&brk_insn, 4, 1)) {
+        return -EINVAL;
+    }
+    return 0;
+}
+
+int hvf_arch_remove_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
+{
+    static uint32_t brk;
+
+    if (cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&brk, 4, 0) ||
+        brk != brk_insn ||
+        cpu_memory_rw_debug(cpu, bp->pc, (uint8_t *)&bp->saved_insn, 4, 1)) {
+        return -EINVAL;
+    }
+    return 0;
+}
+
+int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len, int type)
+{
+    switch (type) {
+    case GDB_BREAKPOINT_HW:
+        return insert_hw_breakpoint(addr);
+    case GDB_WATCHPOINT_READ:
+    case GDB_WATCHPOINT_WRITE:
+    case GDB_WATCHPOINT_ACCESS:
+        return insert_hw_watchpoint(addr, len, type);
+    default:
+        return -ENOSYS;
+    }
+}
+
+int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len, int type)
+{
+    switch (type) {
+    case GDB_BREAKPOINT_HW:
+        return delete_hw_breakpoint(addr);
+    case GDB_WATCHPOINT_READ:
+    case GDB_WATCHPOINT_WRITE:
+    case GDB_WATCHPOINT_ACCESS:
+        return delete_hw_watchpoint(addr, len, type);
+    default:
+        return -ENOSYS;
+    }
+}
+
+void hvf_arch_remove_all_hw_breakpoints(void)
+{
+    if (cur_hw_wps > 0) {
+        g_array_remove_range(hw_watchpoints, 0, cur_hw_wps);
+    }
+    if (cur_hw_bps > 0) {
+        g_array_remove_range(hw_breakpoints, 0, cur_hw_bps);
+    }
+}
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)

     return ret;
 }
+
+int hvf_arch_insert_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
+{
+    return -ENOSYS;
+}
+
+int hvf_arch_remove_sw_breakpoint(CPUState *cpu, struct hvf_sw_breakpoint *bp)
+{
+    return -ENOSYS;
+}
+
+int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len, int type)
+{
+    return -ENOSYS;
+}
+
+int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len, int type)
+{
+    return -ENOSYS;
+}
+
+void hvf_arch_remove_all_hw_breakpoints(void)
+{
+}
--
2.34.1
From: Francesco Cagnin <fcagnin@quarkslab.com>

Guests can now be debugged through the gdbstub. Support is added for
single-stepping, software breakpoints, hardware breakpoints and
watchpoints. The code has been structured like the KVM counterpart.

While guest debugging is enabled, the guest can still read and write the
DBG*_EL1 registers but they don't have any effect.

Signed-off-by: Francesco Cagnin <fcagnin@quarkslab.com>
Message-id: 20230601153107.81955-5-fcagnin@quarkslab.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/sysemu/hvf.h      |  15 ++
 include/sysemu/hvf_int.h  |   1 +
 target/arm/hvf_arm.h      |   7 +
 accel/hvf/hvf-accel-ops.c |  10 +
 accel/hvf/hvf-all.c       |   6 +
 target/arm/hvf/hvf.c      | 474 +++++++++++++++++++++++++++++++++++++-
 target/i386/hvf/hvf.c     |   9 +
 7 files changed, 520 insertions(+), 2 deletions(-)

diff --git a/include/sysemu/hvf.h b/include/sysemu/hvf.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf.h
+++ b/include/sysemu/hvf.h
@@ -XXX,XX +XXX,XX @@ int hvf_arch_insert_hw_breakpoint(target_ulong addr, target_ulong len,
 int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len,
                                   int type);
 void hvf_arch_remove_all_hw_breakpoints(void);
+
+/*
+ * hvf_update_guest_debug:
+ * @cs: CPUState for the CPU to update
+ *
+ * Update guest to enable or disable debugging. Per-arch specifics will be
+ * handled by calling down to hvf_arch_update_guest_debug.
+ */
+int hvf_update_guest_debug(CPUState *cpu);
+void hvf_arch_update_guest_debug(CPUState *cpu);
+
+/*
+ * Return whether the guest supports debugging.
+ */
+bool hvf_arch_supports_guest_debug(void);
 #endif /* NEED_CPU_H */
 
 #endif
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -XXX,XX +XXX,XX @@ struct hvf_vcpu_state {
     void *exit;
     bool vtimer_masked;
     sigset_t unblock_ipi_mask;
+    bool guest_debug_enabled;
 };
 
 void assert_hvf_ok(hv_return_t ret);
diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf_arm.h
+++ b/target/arm/hvf_arm.h
@@ -XXX,XX +XXX,XX @@
 
 #include "cpu.h"
 
+/**
+ * hvf_arm_init_debug() - initialize guest debug capabilities
+ *
+ * Should be called only once before using guest debug capabilities.
+ */
+void hvf_arm_init_debug(void);
+
 void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu);
 
 #endif
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static int hvf_accel_init(MachineState *ms)
     return hvf_arch_init();
 }
 
+static inline int hvf_gdbstub_sstep_flags(void)
+{
+    return SSTEP_ENABLE | SSTEP_NOIRQ;
+}
+
 static void hvf_accel_class_init(ObjectClass *oc, void *data)
 {
     AccelClass *ac = ACCEL_CLASS(oc);
     ac->name = "HVF";
     ac->init_machine = hvf_accel_init;
     ac->allowed = &hvf_allowed;
+    ac->gdbstub_supported_sstep_flags = hvf_gdbstub_sstep_flags;
 }
 
 static const TypeInfo hvf_accel_type = {
@@ -XXX,XX +XXX,XX @@ static int hvf_init_vcpu(CPUState *cpu)
     cpu->vcpu_dirty = 1;
     assert_hvf_ok(r);
 
+    cpu->hvf->guest_debug_enabled = false;
+
     return hvf_arch_init_vcpu(cpu);
 }
 
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
     ops->insert_breakpoint = hvf_insert_breakpoint;
     ops->remove_breakpoint = hvf_remove_breakpoint;
     ops->remove_all_breakpoints = hvf_remove_all_breakpoints;
+    ops->update_guest_debug = hvf_update_guest_debug;
+    ops->supports_guest_debug = hvf_arch_supports_guest_debug;
 };
 static const TypeInfo hvf_accel_ops_type = {
     .name = ACCEL_OPS_NAME("hvf"),
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-all.c
+++ b/accel/hvf/hvf-all.c
@@ -XXX,XX +XXX,XX @@ int hvf_sw_breakpoints_active(CPUState *cpu)
 {
     return !QTAILQ_EMPTY(&hvf_state->hvf_sw_breakpoints);
 }
+
+int hvf_update_guest_debug(CPUState *cpu)
+{
+    hvf_arch_update_guest_debug(cpu);
+    return 0;
+}
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
 
 #include "exec/gdbstub.h"
 
+#define MDSCR_EL1_SS_SHIFT  0
+#define MDSCR_EL1_MDE_SHIFT 15
+
+static uint16_t dbgbcr_regs[] = {
+    HV_SYS_REG_DBGBCR0_EL1,
+    HV_SYS_REG_DBGBCR1_EL1,
+    HV_SYS_REG_DBGBCR2_EL1,
+    HV_SYS_REG_DBGBCR3_EL1,
+    HV_SYS_REG_DBGBCR4_EL1,
+    HV_SYS_REG_DBGBCR5_EL1,
+    HV_SYS_REG_DBGBCR6_EL1,
+    HV_SYS_REG_DBGBCR7_EL1,
+    HV_SYS_REG_DBGBCR8_EL1,
+    HV_SYS_REG_DBGBCR9_EL1,
+    HV_SYS_REG_DBGBCR10_EL1,
+    HV_SYS_REG_DBGBCR11_EL1,
+    HV_SYS_REG_DBGBCR12_EL1,
+    HV_SYS_REG_DBGBCR13_EL1,
+    HV_SYS_REG_DBGBCR14_EL1,
+    HV_SYS_REG_DBGBCR15_EL1,
+};
+static uint16_t dbgbvr_regs[] = {
+    HV_SYS_REG_DBGBVR0_EL1,
+    HV_SYS_REG_DBGBVR1_EL1,
+    HV_SYS_REG_DBGBVR2_EL1,
+    HV_SYS_REG_DBGBVR3_EL1,
+    HV_SYS_REG_DBGBVR4_EL1,
+    HV_SYS_REG_DBGBVR5_EL1,
+    HV_SYS_REG_DBGBVR6_EL1,
+    HV_SYS_REG_DBGBVR7_EL1,
+    HV_SYS_REG_DBGBVR8_EL1,
+    HV_SYS_REG_DBGBVR9_EL1,
+    HV_SYS_REG_DBGBVR10_EL1,
+    HV_SYS_REG_DBGBVR11_EL1,
+    HV_SYS_REG_DBGBVR12_EL1,
+    HV_SYS_REG_DBGBVR13_EL1,
+    HV_SYS_REG_DBGBVR14_EL1,
+    HV_SYS_REG_DBGBVR15_EL1,
+};
+static uint16_t dbgwcr_regs[] = {
+    HV_SYS_REG_DBGWCR0_EL1,
+    HV_SYS_REG_DBGWCR1_EL1,
+    HV_SYS_REG_DBGWCR2_EL1,
+    HV_SYS_REG_DBGWCR3_EL1,
+    HV_SYS_REG_DBGWCR4_EL1,
+    HV_SYS_REG_DBGWCR5_EL1,
+    HV_SYS_REG_DBGWCR6_EL1,
+    HV_SYS_REG_DBGWCR7_EL1,
+    HV_SYS_REG_DBGWCR8_EL1,
+    HV_SYS_REG_DBGWCR9_EL1,
+    HV_SYS_REG_DBGWCR10_EL1,
+    HV_SYS_REG_DBGWCR11_EL1,
+    HV_SYS_REG_DBGWCR12_EL1,
+    HV_SYS_REG_DBGWCR13_EL1,
+    HV_SYS_REG_DBGWCR14_EL1,
+    HV_SYS_REG_DBGWCR15_EL1,
+};
+static uint16_t dbgwvr_regs[] = {
+    HV_SYS_REG_DBGWVR0_EL1,
+    HV_SYS_REG_DBGWVR1_EL1,
+    HV_SYS_REG_DBGWVR2_EL1,
+    HV_SYS_REG_DBGWVR3_EL1,
+    HV_SYS_REG_DBGWVR4_EL1,
+    HV_SYS_REG_DBGWVR5_EL1,
+    HV_SYS_REG_DBGWVR6_EL1,
+    HV_SYS_REG_DBGWVR7_EL1,
+    HV_SYS_REG_DBGWVR8_EL1,
+    HV_SYS_REG_DBGWVR9_EL1,
+    HV_SYS_REG_DBGWVR10_EL1,
+    HV_SYS_REG_DBGWVR11_EL1,
+    HV_SYS_REG_DBGWVR12_EL1,
+    HV_SYS_REG_DBGWVR13_EL1,
+    HV_SYS_REG_DBGWVR14_EL1,
+    HV_SYS_REG_DBGWVR15_EL1,
+};
+
+static inline int hvf_arm_num_brps(hv_vcpu_config_t config)
+{
+    uint64_t val;
+    hv_return_t ret;
+    ret = hv_vcpu_config_get_feature_reg(config, HV_FEATURE_REG_ID_AA64DFR0_EL1,
+                                         &val);
+    assert_hvf_ok(ret);
+    return FIELD_EX64(val, ID_AA64DFR0, BRPS) + 1;
+}
+
+static inline int hvf_arm_num_wrps(hv_vcpu_config_t config)
+{
+    uint64_t val;
+    hv_return_t ret;
+    ret = hv_vcpu_config_get_feature_reg(config, HV_FEATURE_REG_ID_AA64DFR0_EL1,
+                                         &val);
+    assert_hvf_ok(ret);
+    return FIELD_EX64(val, ID_AA64DFR0, WRPS) + 1;
+}
+
+void hvf_arm_init_debug(void)
+{
+    hv_vcpu_config_t config;
+    config = hv_vcpu_config_create();
+
+    max_hw_bps = hvf_arm_num_brps(config);
+    hw_breakpoints =
+        g_array_sized_new(true, true, sizeof(HWBreakpoint), max_hw_bps);
+
+    max_hw_wps = hvf_arm_num_wrps(config);
+    hw_watchpoints =
+        g_array_sized_new(true, true, sizeof(HWWatchpoint), max_hw_wps);
+}
+
 #define HVF_SYSREG(crn, crm, op0, op1, op2) \
     ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
 #define PL1_WRITE_MASK 0x4
@@ -XXX,XX +XXX,XX @@ int hvf_get_registers(CPUState *cpu)
             continue;
         }
 
+        if (cpu->hvf->guest_debug_enabled) {
+            /* Handle debug registers */
+            switch (hvf_sreg_match[i].reg) {
+            case HV_SYS_REG_DBGBVR0_EL1:
+            case HV_SYS_REG_DBGBCR0_EL1:
+            case HV_SYS_REG_DBGWVR0_EL1:
+            case HV_SYS_REG_DBGWCR0_EL1:
+            case HV_SYS_REG_DBGBVR1_EL1:
+            case HV_SYS_REG_DBGBCR1_EL1:
+            case HV_SYS_REG_DBGWVR1_EL1:
+            case HV_SYS_REG_DBGWCR1_EL1:
+            case HV_SYS_REG_DBGBVR2_EL1:
+            case HV_SYS_REG_DBGBCR2_EL1:
+            case HV_SYS_REG_DBGWVR2_EL1:
+            case HV_SYS_REG_DBGWCR2_EL1:
+            case HV_SYS_REG_DBGBVR3_EL1:
+            case HV_SYS_REG_DBGBCR3_EL1:
+            case HV_SYS_REG_DBGWVR3_EL1:
+            case HV_SYS_REG_DBGWCR3_EL1:
+            case HV_SYS_REG_DBGBVR4_EL1:
+            case HV_SYS_REG_DBGBCR4_EL1:
+            case HV_SYS_REG_DBGWVR4_EL1:
+            case HV_SYS_REG_DBGWCR4_EL1:
+            case HV_SYS_REG_DBGBVR5_EL1:
+            case HV_SYS_REG_DBGBCR5_EL1:
+            case HV_SYS_REG_DBGWVR5_EL1:
+            case HV_SYS_REG_DBGWCR5_EL1:
+            case HV_SYS_REG_DBGBVR6_EL1:
+            case HV_SYS_REG_DBGBCR6_EL1:
+            case HV_SYS_REG_DBGWVR6_EL1:
+            case HV_SYS_REG_DBGWCR6_EL1:
+            case HV_SYS_REG_DBGBVR7_EL1:
+            case HV_SYS_REG_DBGBCR7_EL1:
+            case HV_SYS_REG_DBGWVR7_EL1:
+            case HV_SYS_REG_DBGWCR7_EL1:
+            case HV_SYS_REG_DBGBVR8_EL1:
+            case HV_SYS_REG_DBGBCR8_EL1:
+            case HV_SYS_REG_DBGWVR8_EL1:
+            case HV_SYS_REG_DBGWCR8_EL1:
+            case HV_SYS_REG_DBGBVR9_EL1:
+            case HV_SYS_REG_DBGBCR9_EL1:
+            case HV_SYS_REG_DBGWVR9_EL1:
+            case HV_SYS_REG_DBGWCR9_EL1:
+            case HV_SYS_REG_DBGBVR10_EL1:
+            case HV_SYS_REG_DBGBCR10_EL1:
+            case HV_SYS_REG_DBGWVR10_EL1:
+            case HV_SYS_REG_DBGWCR10_EL1:
+            case HV_SYS_REG_DBGBVR11_EL1:
+            case HV_SYS_REG_DBGBCR11_EL1:
+            case HV_SYS_REG_DBGWVR11_EL1:
+            case HV_SYS_REG_DBGWCR11_EL1:
+            case HV_SYS_REG_DBGBVR12_EL1:
+            case HV_SYS_REG_DBGBCR12_EL1:
+            case HV_SYS_REG_DBGWVR12_EL1:
+            case HV_SYS_REG_DBGWCR12_EL1:
+            case HV_SYS_REG_DBGBVR13_EL1:
+            case HV_SYS_REG_DBGBCR13_EL1:
+            case HV_SYS_REG_DBGWVR13_EL1:
+            case HV_SYS_REG_DBGWCR13_EL1:
+            case HV_SYS_REG_DBGBVR14_EL1:
+            case HV_SYS_REG_DBGBCR14_EL1:
+            case HV_SYS_REG_DBGWVR14_EL1:
+            case HV_SYS_REG_DBGWCR14_EL1:
+            case HV_SYS_REG_DBGBVR15_EL1:
+            case HV_SYS_REG_DBGBCR15_EL1:
+            case HV_SYS_REG_DBGWVR15_EL1:
+            case HV_SYS_REG_DBGWCR15_EL1: {
+                /*
+                 * If the guest is being debugged, the vCPU's debug registers
+                 * are holding the gdbstub's view of the registers (set in
+                 * hvf_arch_update_guest_debug()).
+                 * Since the environment is used to store only the guest's view
+                 * of the registers, don't update it with the values from the
+                 * vCPU but simply keep the values from the previous
+                 * environment.
+                 */
+                const ARMCPRegInfo *ri;
+                ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_sreg_match[i].key);
+                val = read_raw_cp_reg(env, ri);
+
+                arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx] = val;
+                continue;
+            }
+            }
+        }
+
         ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val);
         assert_hvf_ok(ret);
 
@@ -XXX,XX +XXX,XX @@ int hvf_put_registers(CPUState *cpu)
             continue;
         }
 
+        if (cpu->hvf->guest_debug_enabled) {
+            /* Handle debug registers */
+            switch (hvf_sreg_match[i].reg) {
+            case HV_SYS_REG_DBGBVR0_EL1:
+            case HV_SYS_REG_DBGBCR0_EL1:
+            case HV_SYS_REG_DBGWVR0_EL1:
+            case HV_SYS_REG_DBGWCR0_EL1:
+            case HV_SYS_REG_DBGBVR1_EL1:
+            case HV_SYS_REG_DBGBCR1_EL1:
+            case HV_SYS_REG_DBGWVR1_EL1:
+            case HV_SYS_REG_DBGWCR1_EL1:
+            case HV_SYS_REG_DBGBVR2_EL1:
+            case HV_SYS_REG_DBGBCR2_EL1:
+            case HV_SYS_REG_DBGWVR2_EL1:
+            case HV_SYS_REG_DBGWCR2_EL1:
+            case HV_SYS_REG_DBGBVR3_EL1:
+            case HV_SYS_REG_DBGBCR3_EL1:
+            case HV_SYS_REG_DBGWVR3_EL1:
+            case HV_SYS_REG_DBGWCR3_EL1:
+            case HV_SYS_REG_DBGBVR4_EL1:
+            case HV_SYS_REG_DBGBCR4_EL1:
+            case HV_SYS_REG_DBGWVR4_EL1:
+            case HV_SYS_REG_DBGWCR4_EL1:
+            case HV_SYS_REG_DBGBVR5_EL1:
+            case HV_SYS_REG_DBGBCR5_EL1:
+            case HV_SYS_REG_DBGWVR5_EL1:
+            case HV_SYS_REG_DBGWCR5_EL1:
+            case HV_SYS_REG_DBGBVR6_EL1:
+            case HV_SYS_REG_DBGBCR6_EL1:
+            case HV_SYS_REG_DBGWVR6_EL1:
+            case HV_SYS_REG_DBGWCR6_EL1:
+            case HV_SYS_REG_DBGBVR7_EL1:
+            case HV_SYS_REG_DBGBCR7_EL1:
+            case HV_SYS_REG_DBGWVR7_EL1:
+            case HV_SYS_REG_DBGWCR7_EL1:
+            case HV_SYS_REG_DBGBVR8_EL1:
+            case HV_SYS_REG_DBGBCR8_EL1:
+            case HV_SYS_REG_DBGWVR8_EL1:
+            case HV_SYS_REG_DBGWCR8_EL1:
+            case HV_SYS_REG_DBGBVR9_EL1:
+            case HV_SYS_REG_DBGBCR9_EL1:
+            case HV_SYS_REG_DBGWVR9_EL1:
+            case HV_SYS_REG_DBGWCR9_EL1:
+            case HV_SYS_REG_DBGBVR10_EL1:
+            case HV_SYS_REG_DBGBCR10_EL1:
+            case HV_SYS_REG_DBGWVR10_EL1:
+            case HV_SYS_REG_DBGWCR10_EL1:
+            case HV_SYS_REG_DBGBVR11_EL1:
+            case HV_SYS_REG_DBGBCR11_EL1:
+            case HV_SYS_REG_DBGWVR11_EL1:
+            case HV_SYS_REG_DBGWCR11_EL1:
+            case HV_SYS_REG_DBGBVR12_EL1:
+            case HV_SYS_REG_DBGBCR12_EL1:
+            case HV_SYS_REG_DBGWVR12_EL1:
+            case HV_SYS_REG_DBGWCR12_EL1:
+            case HV_SYS_REG_DBGBVR13_EL1:
+            case HV_SYS_REG_DBGBCR13_EL1:
+            case HV_SYS_REG_DBGWVR13_EL1:
+            case HV_SYS_REG_DBGWCR13_EL1:
+            case HV_SYS_REG_DBGBVR14_EL1:
+            case HV_SYS_REG_DBGBCR14_EL1:
+            case HV_SYS_REG_DBGWVR14_EL1:
+            case HV_SYS_REG_DBGWCR14_EL1:
+            case HV_SYS_REG_DBGBVR15_EL1:
+            case HV_SYS_REG_DBGBCR15_EL1:
+            case HV_SYS_REG_DBGWVR15_EL1:
+            case HV_SYS_REG_DBGWCR15_EL1:
+                /*
+                 * If the guest is being debugged, the vCPU's debug registers
+                 * are already holding the gdbstub's view of the registers (set
+                 * in hvf_arch_update_guest_debug()).
+                 */
+                continue;
+            }
+        }
+
         val = arm_cpu->cpreg_values[hvf_sreg_match[i].cp_idx];
         ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val);
         assert_hvf_ok(ret);
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
     CPUARMState *env = &arm_cpu->env;
+    int ret;
     hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit;
     hv_return_t r;
     bool advance_pc = false;
 
-    if (hvf_inject_interrupts(cpu)) {
+    if (!(cpu->singlestep_enabled & SSTEP_NOIRQ) &&
+        hvf_inject_interrupts(cpu)) {
         return EXCP_INTERRUPT;
     }
 
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
     uint64_t syndrome = hvf_exit->exception.syndrome;
     uint32_t ec = syn_get_ec(syndrome);
 
+    ret = 0;
     qemu_mutex_lock_iothread();
     switch (exit_reason) {
     case HV_EXIT_REASON_EXCEPTION:
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
     hvf_sync_vtimer(cpu);
 
     switch (ec) {
+    case EC_SOFTWARESTEP: {
+        ret = EXCP_DEBUG;
+
+        if (!cpu->singlestep_enabled) {
+            error_report("EC_SOFTWARESTEP but single-stepping not enabled");
+        }
+        break;
+    }
+    case EC_AA64_BKPT: {
+        ret = EXCP_DEBUG;
+
+        cpu_synchronize_state(cpu);
+
+        if (!hvf_find_sw_breakpoint(cpu, env->pc)) {
+            /* Re-inject into the guest */
+            ret = 0;
+            hvf_raise_exception(cpu, EXCP_BKPT, syn_aa64_bkpt(0));
+        }
+        break;
+    }
+    case EC_BREAKPOINT: {
+        ret = EXCP_DEBUG;
+
+        cpu_synchronize_state(cpu);
+
+        if (!find_hw_breakpoint(cpu, env->pc)) {
+            error_report("EC_BREAKPOINT but unknown hw breakpoint");
+        }
+        break;
+    }
+    case EC_WATCHPOINT: {
+        ret = EXCP_DEBUG;
+
+        cpu_synchronize_state(cpu);
+
+        CPUWatchpoint *wp =
+            find_hw_watchpoint(cpu, hvf_exit->exception.virtual_address);
+        if (!wp) {
+            error_report("EXCP_DEBUG but unknown hw watchpoint");
+        }
+        cpu->watchpoint_hit = wp;
+        break;
+    }
     case EC_DATAABORT: {
         bool isv = syndrome & ARM_EL_ISV;
         bool iswrite = (syndrome >> 6) & 1;
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
         pc += 4;
         r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc);
         assert_hvf_ok(r);
+
+        /* Handle single-stepping over instructions which trigger a VM exit */
+        if (cpu->singlestep_enabled) {
+            ret = EXCP_DEBUG;
+        }
     }
 
-    return 0;
+    return ret;
 }
 
 static const VMStateDescription vmstate_hvf_vtimer = {
@@ -XXX,XX +XXX,XX @@ int hvf_arch_init(void)
     hvf_state->vtimer_offset = mach_absolute_time();
     vmstate_register(NULL, 0, &vmstate_hvf_vtimer, &vtimer);
     qemu_add_vm_change_state_handler(hvf_vm_state_change, &vtimer);
+
+    hvf_arm_init_debug();
+
     return 0;
 }
 
@@ -XXX,XX +XXX,XX @@ void hvf_arch_remove_all_hw_breakpoints(void)
         g_array_remove_range(hw_breakpoints, 0, cur_hw_bps);
     }
 }
+
+/*
+ * Update the vCPU with the gdbstub's view of debug registers. This view
+ * consists of all hardware breakpoints and watchpoints inserted so far while
+ * debugging the guest.
+ */
+static void hvf_put_gdbstub_debug_registers(CPUState *cpu)
+{
+    hv_return_t r = HV_SUCCESS;
+    int i;
+
+    for (i = 0; i < cur_hw_bps; i++) {
+        HWBreakpoint *bp = get_hw_bp(i);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbcr_regs[i], bp->bcr);
+        assert_hvf_ok(r);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbvr_regs[i], bp->bvr);
+        assert_hvf_ok(r);
+    }
+    for (i = cur_hw_bps; i < max_hw_bps; i++) {
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbcr_regs[i], 0);
+        assert_hvf_ok(r);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbvr_regs[i], 0);
+        assert_hvf_ok(r);
+    }
+
+    for (i = 0; i < cur_hw_wps; i++) {
+        HWWatchpoint *wp = get_hw_wp(i);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwcr_regs[i], wp->wcr);
+        assert_hvf_ok(r);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwvr_regs[i], wp->wvr);
+        assert_hvf_ok(r);
+    }
+    for (i = cur_hw_wps; i < max_hw_wps; i++) {
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwcr_regs[i], 0);
+        assert_hvf_ok(r);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwvr_regs[i], 0);
+        assert_hvf_ok(r);
+    }
+}
+
+/*
+ * Update the vCPU with the guest's view of debug registers. This view is kept
+ * in the environment at all times.
+ */
+static void hvf_put_guest_debug_registers(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    hv_return_t r = HV_SUCCESS;
+    int i;
+
+    for (i = 0; i < max_hw_bps; i++) {
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbcr_regs[i],
+                                env->cp15.dbgbcr[i]);
+        assert_hvf_ok(r);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgbvr_regs[i],
+                                env->cp15.dbgbvr[i]);
+        assert_hvf_ok(r);
+    }
+
+    for (i = 0; i < max_hw_wps; i++) {
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwcr_regs[i],
+                                env->cp15.dbgwcr[i]);
+        assert_hvf_ok(r);
+        r = hv_vcpu_set_sys_reg(cpu->hvf->fd, dbgwvr_regs[i],
+                                env->cp15.dbgwvr[i]);
+        assert_hvf_ok(r);
+    }
+}
+
+static inline bool hvf_arm_hw_debug_active(CPUState *cpu)
+{
+    return ((cur_hw_wps > 0) || (cur_hw_bps > 0));
+}
+
+static void hvf_arch_set_traps(void)
+{
+    CPUState *cpu;
+    bool should_enable_traps = false;
+    hv_return_t r = HV_SUCCESS;
+
+    /* Check whether guest debugging is enabled for at least one vCPU; if it
+     * is, enable exiting the guest on all vCPUs */
+    CPU_FOREACH(cpu) {
+        should_enable_traps |= cpu->hvf->guest_debug_enabled;
+    }
+    CPU_FOREACH(cpu) {
+        /* Set whether debug exceptions exit the guest */
+        r = hv_vcpu_set_trap_debug_exceptions(cpu->hvf->fd,
+                                              should_enable_traps);
+        assert_hvf_ok(r);
+
+        /* Set whether accesses to debug registers exit the guest */
+        r = hv_vcpu_set_trap_debug_reg_accesses(cpu->hvf->fd,
+                                                should_enable_traps);
+        assert_hvf_ok(r);
+    }
+}
+
+void hvf_arch_update_guest_debug(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+
+    /* Check whether guest debugging is enabled */
+    cpu->hvf->guest_debug_enabled = cpu->singlestep_enabled ||
+                                    hvf_sw_breakpoints_active(cpu) ||
+                                    hvf_arm_hw_debug_active(cpu);
+
+    /* Update debug registers */
+    if (cpu->hvf->guest_debug_enabled) {
+        hvf_put_gdbstub_debug_registers(cpu);
+    } else {
+        hvf_put_guest_debug_registers(cpu);
+    }
+
+    cpu_synchronize_state(cpu);
+
+    /* Enable/disable single-stepping */
+    if (cpu->singlestep_enabled) {
+        env->cp15.mdscr_el1 =
+            deposit64(env->cp15.mdscr_el1, MDSCR_EL1_SS_SHIFT, 1, 1);
+        pstate_write(env, pstate_read(env) | PSTATE_SS);
+    } else {
+        env->cp15.mdscr_el1 =
+            deposit64(env->cp15.mdscr_el1, MDSCR_EL1_SS_SHIFT, 1, 0);
+    }
+
+    /* Enable/disable Breakpoint exceptions */
+    if (hvf_arm_hw_debug_active(cpu)) {
+        env->cp15.mdscr_el1 =
+            deposit64(env->cp15.mdscr_el1, MDSCR_EL1_MDE_SHIFT, 1, 1);
+    } else {
+        env->cp15.mdscr_el1 =
+            deposit64(env->cp15.mdscr_el1, MDSCR_EL1_MDE_SHIFT, 1, 0);
+    }
+
+    hvf_arch_set_traps();
+}
+
+inline bool hvf_arch_supports_guest_debug(void)
+{
+    return true;
+}
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_arch_remove_hw_breakpoint(target_ulong addr, target_ulong len, int type)
 void hvf_arch_remove_all_hw_breakpoints(void)
 {
 }
+
+void hvf_arch_update_guest_debug(CPUState *cpu)
+{
+}
+
+inline bool hvf_arch_supports_guest_debug(void)
+{
+    return false;
+}
-- 
2.34.1
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
2
2
3
Add support for the Zynq Ultrascale MPSoc Generic QSPI.
3
The Xilinx Versal CANFD controller is developed based on SocketCAN, QEMU CAN bus
4
implementation. Bus connection and socketCAN connection for each CAN module
5
can be set through command lines.
4
6
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20171126231634.9531-13-frasse.iglesias@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
include/hw/ssi/xilinx_spips.h | 32 ++-
11
include/hw/net/xlnx-versal-canfd.h | 87 ++
12
hw/ssi/xilinx_spips.c | 579 ++++++++++++++++++++++++++++++++++++----
12
hw/net/can/xlnx-versal-canfd.c | 2107 ++++++++++++++++++++++++++++
13
default-configs/arm-softmmu.mak | 2 +-
13
hw/net/can/meson.build | 1 +
14
3 files changed, 564 insertions(+), 49 deletions(-)
14
hw/net/can/trace-events | 7 +
15
4 files changed, 2202 insertions(+)
16
create mode 100644 include/hw/net/xlnx-versal-canfd.h
17
create mode 100644 hw/net/can/xlnx-versal-canfd.c
15
18
16
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
19
diff --git a/include/hw/net/xlnx-versal-canfd.h b/include/hw/net/xlnx-versal-canfd.h
17
index XXXXXXX..XXXXXXX 100644
20
new file mode 100644
18
--- a/include/hw/ssi/xilinx_spips.h
21
index XXXXXXX..XXXXXXX
19
+++ b/include/hw/ssi/xilinx_spips.h
22
--- /dev/null
23
+++ b/include/hw/net/xlnx-versal-canfd.h
20
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@
21
#define XILINX_SPIPS_H
25
+/*
22
26
+ * QEMU model of the Xilinx Versal CANFD Controller.
23
#include "hw/ssi/ssi.h"
27
+ *
24
-#include "qemu/fifo8.h"
28
+ * Copyright (c) 2023 Advanced Micro Devices, Inc.
25
+#include "qemu/fifo32.h"
29
+ *
26
+#include "hw/stream.h"
30
+ * Written-by: Vikram Garhwal<vikram.garhwal@amd.com>
27
31
+ * Based on QEMU CANFD Device emulation implemented by Jin Yang, Deniz Eren and
28
typedef struct XilinxSPIPS XilinxSPIPS;
32
+ * Pavel Pisa.
29
33
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
30
#define XLNX_SPIPS_R_MAX (0x100 / 4)
34
+ * of this software and associated documentation files (the "Software"), to deal
31
+#define XLNX_ZYNQMP_SPIPS_R_MAX (0x200 / 4)
35
+ * in the Software without restriction, including without limitation the rights
32
36
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
33
/* Bite off 4k chunks at a time */
37
+ * copies of the Software, and to permit persons to whom the Software is
34
#define LQSPI_CACHE_SIZE 1024
38
+ * furnished to do so, subject to the following conditions:
35
@@ -XXX,XX +XXX,XX @@ typedef struct {
39
+ *
36
bool mmio_execution_enabled;
40
+ * The above copyright notice and this permission notice shall be included in
37
} XilinxQSPIPS;
41
+ * all copies or substantial portions of the Software.
38
42
+ *
39
+typedef struct {
43
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
40
+ XilinxQSPIPS parent_obj;
44
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
41
+
45
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
42
+ StreamSlave *dma;
46
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
43
+ uint8_t dma_buf[4];
47
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
44
+ int gqspi_irqline;
48
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
45
+
49
+ * THE SOFTWARE.
46
+ uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
50
+ */
47
+
51
+
48
+ /* GQSPI has seperate tx/rx fifos */
52
+#ifndef HW_CANFD_XILINX_H
49
+ Fifo8 rx_fifo_g;
53
+#define HW_CANFD_XILINX_H
50
+ Fifo8 tx_fifo_g;
54
+
51
+ Fifo32 fifo_g;
55
+#include "hw/register.h"
56
+#include "hw/ptimer.h"
57
+#include "net/can_emu.h"
58
+#include "hw/qdev-clock.h"
59
+
60
+#define TYPE_XILINX_CANFD "xlnx.versal-canfd"
61
+
62
+OBJECT_DECLARE_SIMPLE_TYPE(XlnxVersalCANFDState, XILINX_CANFD)
63
+
64
+#define NUM_REGS_PER_MSG_SPACE 18 /* 1 ID + 1 DLC + 16 Data(DW0 - DW15) regs. */
65
+#define MAX_NUM_RX 64
66
+#define OFFSET_RX1_DW15 (0x4144 / 4)
67
+#define CANFD_TIMER_MAX 0xFFFFUL
68
+#define CANFD_DEFAULT_CLOCK (25 * 1000 * 1000)
69
+
70
+#define XLNX_VERSAL_CANFD_R_MAX (OFFSET_RX1_DW15 + \
71
+ ((MAX_NUM_RX - 1) * NUM_REGS_PER_MSG_SPACE) + 1)
72
+
73
+typedef struct XlnxVersalCANFDState {
74
+ SysBusDevice parent_obj;
75
+ MemoryRegion iomem;
76
+
77
+ qemu_irq irq_canfd_int;
78
+ qemu_irq irq_addr_err;
79
+
80
+ RegisterInfo reg_info[XLNX_VERSAL_CANFD_R_MAX];
81
+ RegisterAccessInfo *tx_regs;
82
+ RegisterAccessInfo *rx0_regs;
83
+ RegisterAccessInfo *rx1_regs;
84
+ RegisterAccessInfo *af_regs;
85
+ RegisterAccessInfo *txe_regs;
86
+ RegisterAccessInfo *rx_mailbox_regs;
87
+ RegisterAccessInfo *af_mask_regs_mailbox;
88
+
89
+ uint32_t regs[XLNX_VERSAL_CANFD_R_MAX];
90
+
91
+ ptimer_state *canfd_timer;
92
+
93
+ CanBusClientState bus_client;
94
+ CanBusState *canfdbus;
95
+
96
+ struct {
97
+ uint8_t rx0_fifo;
98
+ uint8_t rx1_fifo;
99
+ uint8_t tx_fifo;
100
+ bool enable_rx_fifo1;
101
+ uint32_t ext_clk_freq;
102
+ } cfg;
103
+
104
+} XlnxVersalCANFDState;
105
+
106
+typedef struct tx_ready_reg_info {
107
+ uint32_t can_id;
108
+ uint32_t reg_num;
109
+} tx_ready_reg_info;
110
+
111
+#endif
112
diff --git a/hw/net/can/xlnx-versal-canfd.c b/hw/net/can/xlnx-versal-canfd.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/net/can/xlnx-versal-canfd.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the Xilinx Versal CANFD device.
+ *
+ * This implementation is based on the following datasheet:
+ * https://docs.xilinx.com/v/u/2.0-English/pg223-canfd
+ *
+ * Copyright (c) 2023 Advanced Micro Devices, Inc.
+ *
+ * Written-by: Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ * Based on QEMU CANFD Device emulation implemented by Jin Yang, Deniz Eren and
+ * Pavel Pisa
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/sysbus.h"
+#include "hw/irq.h"
+#include "hw/register.h"
+#include "qapi/error.h"
+#include "qemu/bitops.h"
+#include "qemu/log.h"
+#include "qemu/cutils.h"
+#include "qemu/event_notifier.h"
+#include "hw/qdev-properties.h"
+#include "qom/object_interfaces.h"
+#include "migration/vmstate.h"
+#include "hw/net/xlnx-versal-canfd.h"
+#include "trace.h"
+
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
+REG32(MODE_SELECT_REGISTER, 0x4)
+ FIELD(MODE_SELECT_REGISTER, ITO, 8, 8)
+ FIELD(MODE_SELECT_REGISTER, ABR, 7, 1)
+ FIELD(MODE_SELECT_REGISTER, SBR, 6, 1)
+ FIELD(MODE_SELECT_REGISTER, DPEE, 5, 1)
+ FIELD(MODE_SELECT_REGISTER, DAR, 4, 1)
+ FIELD(MODE_SELECT_REGISTER, BRSD, 3, 1)
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 16, 7)
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 8, 7)
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 8)
+REG32(ERROR_COUNTER_REGISTER, 0x10)
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
+REG32(ERROR_STATUS_REGISTER, 0x14)
+ FIELD(ERROR_STATUS_REGISTER, F_BERR, 11, 1)
+ FIELD(ERROR_STATUS_REGISTER, F_STER, 10, 1)
+ FIELD(ERROR_STATUS_REGISTER, F_FMER, 9, 1)
+ FIELD(ERROR_STATUS_REGISTER, F_CRCER, 8, 1)
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
+REG32(STATUS_REGISTER, 0x18)
+ FIELD(STATUS_REGISTER, TDCV, 16, 7)
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
+ FIELD(STATUS_REGISTER, BSFR_CONFIG, 10, 1)
+ FIELD(STATUS_REGISTER, PEE_CONFIG, 9, 1)
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
+ FIELD(INTERRUPT_STATUS_REGISTER, TXEWMFLL, 31, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, TXEOFLW, 30, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXBOFLW_BI, 24, 6)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXLRM_BI, 18, 6)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXMNF, 17, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL_1, 16, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFOFLW_1, 15, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, TXCRS, 14, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, TXRRS, 13, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
+ /*
+ * In the original HW description the bit below is named ERROR, but that
+ * field name collides with a macro in the Windows build. To avoid Windows
+ * build failures, the bit is renamed to ERROR_BIT.
+ */
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR_BIT, 8, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFOFLW, 6, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, TSCNT_OFLW, 5, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, BSFRD, 3, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, PEE, 2, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXEWMFLL, 31, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXEOFLW, 30, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXMNF, 17, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL_1, 16, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFOFLW_1, 15, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXCRS, 14, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXRRS, 13, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERFXOFLW, 6, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETSCNT_OFLW, 5, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSFRD, 3, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, EPEE, 2, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLOST, 0, 1)
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXEWMFLL, 31, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXEOFLW, 30, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXMNF, 17, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL_1, 16, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFOFLW_1, 15, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXCRS, 14, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXRRS, 13, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRFXOFLW, 6, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTSCNT_OFLW, 5, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSFRD, 3, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CPEE, 2, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLOST, 0, 1)
+REG32(TIMESTAMP_REGISTER, 0x28)
+ FIELD(TIMESTAMP_REGISTER, TIMESTAMP_CNT, 16, 16)
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
+REG32(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x88)
+ FIELD(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, TDC, 16, 1)
+ FIELD(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, TDCOFF, 8, 6)
+ FIELD(DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER, DP_BRP, 0, 8)
+REG32(DATA_PHASE_BIT_TIMING_REGISTER, 0x8c)
+ FIELD(DATA_PHASE_BIT_TIMING_REGISTER, DP_SJW, 16, 4)
+ FIELD(DATA_PHASE_BIT_TIMING_REGISTER, DP_TS2, 8, 4)
+ FIELD(DATA_PHASE_BIT_TIMING_REGISTER, DP_TS1, 0, 5)
+REG32(TX_BUFFER_READY_REQUEST_REGISTER, 0x90)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR31, 31, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR30, 30, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR29, 29, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR28, 28, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR27, 27, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR26, 26, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR25, 25, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR24, 24, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR23, 23, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR22, 22, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR21, 21, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR20, 20, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR19, 19, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR18, 18, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR17, 17, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR16, 16, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR15, 15, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR14, 14, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR13, 13, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR12, 12, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR11, 11, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR10, 10, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR9, 9, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR8, 8, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR7, 7, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR6, 6, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR5, 5, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR4, 4, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR3, 3, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR2, 2, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR1, 1, 1)
+ FIELD(TX_BUFFER_READY_REQUEST_REGISTER, RR0, 0, 1)
+REG32(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, 0x94)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS31, 31, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS30, 30, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS29, 29, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS28, 28, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS27, 27, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS26, 26, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS25, 25, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS24, 24, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS23, 23, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS22, 22, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS21, 21, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS20, 20, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS19, 19, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS18, 18, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS17, 17, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS16, 16, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS15, 15, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS14, 14, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS13, 13, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS12, 12, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS11, 11, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS10, 10, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS9, 9, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS8, 8, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS7, 7, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS6, 6, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS5, 5, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS4, 4, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS3, 3, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS2, 2, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS1, 1, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER, ERRS0, 0, 1)
+REG32(TX_BUFFER_CANCEL_REQUEST_REGISTER, 0x98)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR31, 31, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR30, 30, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR29, 29, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR28, 28, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR27, 27, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR26, 26, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR25, 25, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR24, 24, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR23, 23, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR22, 22, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR21, 21, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR20, 20, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR19, 19, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR18, 18, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR17, 17, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR16, 16, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR15, 15, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR14, 14, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR13, 13, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR12, 12, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR11, 11, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR10, 10, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR9, 9, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR8, 8, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR7, 7, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR6, 6, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR5, 5, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR4, 4, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR3, 3, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR2, 2, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR1, 1, 1)
+ FIELD(TX_BUFFER_CANCEL_REQUEST_REGISTER, CR0, 0, 1)
+REG32(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, 0x9c)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS31, 31,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS30, 30,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS29, 29,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS28, 28,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS27, 27,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS26, 26,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS25, 25,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS24, 24,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS23, 23,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS22, 22,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS21, 21,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS20, 20,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS19, 19,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS18, 18,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS17, 17,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS16, 16,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS15, 15,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS14, 14,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS13, 13,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS12, 12,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS11, 11,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS10, 10,
+ 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS9, 9, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS8, 8, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS7, 7, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS6, 6, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS5, 5, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS4, 4, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS3, 3, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS2, 2, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS1, 1, 1)
+ FIELD(INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER, ECRS0, 0, 1)
+REG32(TX_EVENT_FIFO_STATUS_REGISTER, 0xa0)
+ FIELD(TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL, 8, 6)
+ FIELD(TX_EVENT_FIFO_STATUS_REGISTER, TXE_IRI, 7, 1)
+ FIELD(TX_EVENT_FIFO_STATUS_REGISTER, TXE_RI, 0, 5)
+REG32(TX_EVENT_FIFO_WATERMARK_REGISTER, 0xa4)
+ FIELD(TX_EVENT_FIFO_WATERMARK_REGISTER, TXE_FWM, 0, 5)
+REG32(ACCEPTANCE_FILTER_CONTROL_REGISTER, 0xe0)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF31, 31, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF30, 30, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF29, 29, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF28, 28, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF27, 27, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF26, 26, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF25, 25, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF24, 24, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF23, 23, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF22, 22, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF21, 21, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF20, 20, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF19, 19, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF18, 18, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF17, 17, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF16, 16, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF15, 15, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF14, 14, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF13, 13, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF12, 12, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF11, 11, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF10, 10, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF9, 9, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF8, 8, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF7, 7, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF6, 6, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF5, 5, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF4, 4, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF3, 3, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF2, 2, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF1, 1, 1)
+ FIELD(ACCEPTANCE_FILTER_CONTROL_REGISTER, UAF0, 0, 1)
+REG32(RX_FIFO_STATUS_REGISTER, 0xe8)
+ FIELD(RX_FIFO_STATUS_REGISTER, FL_1, 24, 7)
+ FIELD(RX_FIFO_STATUS_REGISTER, IRI_1, 23, 1)
+ FIELD(RX_FIFO_STATUS_REGISTER, RI_1, 16, 6)
+ FIELD(RX_FIFO_STATUS_REGISTER, FL, 8, 7)
+ FIELD(RX_FIFO_STATUS_REGISTER, IRI, 7, 1)
+ FIELD(RX_FIFO_STATUS_REGISTER, RI, 0, 6)
+REG32(RX_FIFO_WATERMARK_REGISTER, 0xec)
+ FIELD(RX_FIFO_WATERMARK_REGISTER, RXFP, 16, 5)
+ FIELD(RX_FIFO_WATERMARK_REGISTER, RXFWM_1, 8, 6)
+ FIELD(RX_FIFO_WATERMARK_REGISTER, RXFWM, 0, 6)
+REG32(TB_ID_REGISTER, 0x100)
+ FIELD(TB_ID_REGISTER, ID, 21, 11)
+ FIELD(TB_ID_REGISTER, SRR_RTR_RRS, 20, 1)
+ FIELD(TB_ID_REGISTER, IDE, 19, 1)
+ FIELD(TB_ID_REGISTER, ID_EXT, 1, 18)
+ FIELD(TB_ID_REGISTER, RTR_RRS, 0, 1)
+REG32(TB0_DLC_REGISTER, 0x104)
+ FIELD(TB0_DLC_REGISTER, DLC, 28, 4)
+ FIELD(TB0_DLC_REGISTER, FDF, 27, 1)
+ FIELD(TB0_DLC_REGISTER, BRS, 26, 1)
+ FIELD(TB0_DLC_REGISTER, RSVD2, 25, 1)
+ FIELD(TB0_DLC_REGISTER, EFC, 24, 1)
+ FIELD(TB0_DLC_REGISTER, MM, 16, 8)
+ FIELD(TB0_DLC_REGISTER, RSVD1, 0, 16)
+REG32(TB_DW0_REGISTER, 0x108)
+ FIELD(TB_DW0_REGISTER, DATA_BYTES0, 24, 8)
+ FIELD(TB_DW0_REGISTER, DATA_BYTES1, 16, 8)
+ FIELD(TB_DW0_REGISTER, DATA_BYTES2, 8, 8)
+ FIELD(TB_DW0_REGISTER, DATA_BYTES3, 0, 8)
+REG32(TB_DW1_REGISTER, 0x10c)
+ FIELD(TB_DW1_REGISTER, DATA_BYTES4, 24, 8)
+ FIELD(TB_DW1_REGISTER, DATA_BYTES5, 16, 8)
+ FIELD(TB_DW1_REGISTER, DATA_BYTES6, 8, 8)
+ FIELD(TB_DW1_REGISTER, DATA_BYTES7, 0, 8)
+REG32(TB_DW2_REGISTER, 0x110)
+ FIELD(TB_DW2_REGISTER, DATA_BYTES8, 24, 8)
+ FIELD(TB_DW2_REGISTER, DATA_BYTES9, 16, 8)
+ FIELD(TB_DW2_REGISTER, DATA_BYTES10, 8, 8)
+ FIELD(TB_DW2_REGISTER, DATA_BYTES11, 0, 8)
+REG32(TB_DW3_REGISTER, 0x114)
+ FIELD(TB_DW3_REGISTER, DATA_BYTES12, 24, 8)
+ FIELD(TB_DW3_REGISTER, DATA_BYTES13, 16, 8)
+ FIELD(TB_DW3_REGISTER, DATA_BYTES14, 8, 8)
+ FIELD(TB_DW3_REGISTER, DATA_BYTES15, 0, 8)
+REG32(TB_DW4_REGISTER, 0x118)
+ FIELD(TB_DW4_REGISTER, DATA_BYTES16, 24, 8)
+ FIELD(TB_DW4_REGISTER, DATA_BYTES17, 16, 8)
+ FIELD(TB_DW4_REGISTER, DATA_BYTES18, 8, 8)
+ FIELD(TB_DW4_REGISTER, DATA_BYTES19, 0, 8)
+REG32(TB_DW5_REGISTER, 0x11c)
+ FIELD(TB_DW5_REGISTER, DATA_BYTES20, 24, 8)
+ FIELD(TB_DW5_REGISTER, DATA_BYTES21, 16, 8)
+ FIELD(TB_DW5_REGISTER, DATA_BYTES22, 8, 8)
+ FIELD(TB_DW5_REGISTER, DATA_BYTES23, 0, 8)
+REG32(TB_DW6_REGISTER, 0x120)
+ FIELD(TB_DW6_REGISTER, DATA_BYTES24, 24, 8)
+ FIELD(TB_DW6_REGISTER, DATA_BYTES25, 16, 8)
+ FIELD(TB_DW6_REGISTER, DATA_BYTES26, 8, 8)
+ FIELD(TB_DW6_REGISTER, DATA_BYTES27, 0, 8)
+REG32(TB_DW7_REGISTER, 0x124)
+ FIELD(TB_DW7_REGISTER, DATA_BYTES28, 24, 8)
+ FIELD(TB_DW7_REGISTER, DATA_BYTES29, 16, 8)
+ FIELD(TB_DW7_REGISTER, DATA_BYTES30, 8, 8)
+ FIELD(TB_DW7_REGISTER, DATA_BYTES31, 0, 8)
+REG32(TB_DW8_REGISTER, 0x128)
+ FIELD(TB_DW8_REGISTER, DATA_BYTES32, 24, 8)
+ FIELD(TB_DW8_REGISTER, DATA_BYTES33, 16, 8)
+ FIELD(TB_DW8_REGISTER, DATA_BYTES34, 8, 8)
+ FIELD(TB_DW8_REGISTER, DATA_BYTES35, 0, 8)
+REG32(TB_DW9_REGISTER, 0x12c)
+ FIELD(TB_DW9_REGISTER, DATA_BYTES36, 24, 8)
+ FIELD(TB_DW9_REGISTER, DATA_BYTES37, 16, 8)
+ FIELD(TB_DW9_REGISTER, DATA_BYTES38, 8, 8)
+ FIELD(TB_DW9_REGISTER, DATA_BYTES39, 0, 8)
+REG32(TB_DW10_REGISTER, 0x130)
+ FIELD(TB_DW10_REGISTER, DATA_BYTES40, 24, 8)
+ FIELD(TB_DW10_REGISTER, DATA_BYTES41, 16, 8)
+ FIELD(TB_DW10_REGISTER, DATA_BYTES42, 8, 8)
+ FIELD(TB_DW10_REGISTER, DATA_BYTES43, 0, 8)
+REG32(TB_DW11_REGISTER, 0x134)
+ FIELD(TB_DW11_REGISTER, DATA_BYTES44, 24, 8)
+ FIELD(TB_DW11_REGISTER, DATA_BYTES45, 16, 8)
+ FIELD(TB_DW11_REGISTER, DATA_BYTES46, 8, 8)
+ FIELD(TB_DW11_REGISTER, DATA_BYTES47, 0, 8)
+REG32(TB_DW12_REGISTER, 0x138)
+ FIELD(TB_DW12_REGISTER, DATA_BYTES48, 24, 8)
+ FIELD(TB_DW12_REGISTER, DATA_BYTES49, 16, 8)
+ FIELD(TB_DW12_REGISTER, DATA_BYTES50, 8, 8)
+ FIELD(TB_DW12_REGISTER, DATA_BYTES51, 0, 8)
+REG32(TB_DW13_REGISTER, 0x13c)
+ FIELD(TB_DW13_REGISTER, DATA_BYTES52, 24, 8)
+ FIELD(TB_DW13_REGISTER, DATA_BYTES53, 16, 8)
+ FIELD(TB_DW13_REGISTER, DATA_BYTES54, 8, 8)
+ FIELD(TB_DW13_REGISTER, DATA_BYTES55, 0, 8)
+REG32(TB_DW14_REGISTER, 0x140)
+ FIELD(TB_DW14_REGISTER, DATA_BYTES56, 24, 8)
+ FIELD(TB_DW14_REGISTER, DATA_BYTES57, 16, 8)
+ FIELD(TB_DW14_REGISTER, DATA_BYTES58, 8, 8)
+ FIELD(TB_DW14_REGISTER, DATA_BYTES59, 0, 8)
+REG32(TB_DW15_REGISTER, 0x144)
+ FIELD(TB_DW15_REGISTER, DATA_BYTES60, 24, 8)
+ FIELD(TB_DW15_REGISTER, DATA_BYTES61, 16, 8)
+ FIELD(TB_DW15_REGISTER, DATA_BYTES62, 8, 8)
+ FIELD(TB_DW15_REGISTER, DATA_BYTES63, 0, 8)
+REG32(AFMR_REGISTER, 0xa00)
+ FIELD(AFMR_REGISTER, AMID, 21, 11)
+ FIELD(AFMR_REGISTER, AMSRR, 20, 1)
+ FIELD(AFMR_REGISTER, AMIDE, 19, 1)
+ FIELD(AFMR_REGISTER, AMID_EXT, 1, 18)
+ FIELD(AFMR_REGISTER, AMRTR, 0, 1)
+REG32(AFIR_REGISTER, 0xa04)
+ FIELD(AFIR_REGISTER, AIID, 21, 11)
+ FIELD(AFIR_REGISTER, AISRR, 20, 1)
+ FIELD(AFIR_REGISTER, AIIDE, 19, 1)
+ FIELD(AFIR_REGISTER, AIID_EXT, 1, 18)
+ FIELD(AFIR_REGISTER, AIRTR, 0, 1)
+REG32(TXE_FIFO_TB_ID_REGISTER, 0x2000)
+ FIELD(TXE_FIFO_TB_ID_REGISTER, ID, 21, 11)
+ FIELD(TXE_FIFO_TB_ID_REGISTER, SRR_RTR_RRS, 20, 1)
+ FIELD(TXE_FIFO_TB_ID_REGISTER, IDE, 19, 1)
+ FIELD(TXE_FIFO_TB_ID_REGISTER, ID_EXT, 1, 18)
+ FIELD(TXE_FIFO_TB_ID_REGISTER, RTR_RRS, 0, 1)
+REG32(TXE_FIFO_TB_DLC_REGISTER, 0x2004)
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, DLC, 28, 4)
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, FDF, 27, 1)
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, BRS, 26, 1)
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, ET, 24, 2)
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, MM, 16, 8)
+ FIELD(TXE_FIFO_TB_DLC_REGISTER, TIMESTAMP, 0, 16)
+REG32(RB_ID_REGISTER, 0x2100)
+ FIELD(RB_ID_REGISTER, ID, 21, 11)
+ FIELD(RB_ID_REGISTER, SRR_RTR_RRS, 20, 1)
+ FIELD(RB_ID_REGISTER, IDE, 19, 1)
+ FIELD(RB_ID_REGISTER, ID_EXT, 1, 18)
+ FIELD(RB_ID_REGISTER, RTR_RRS, 0, 1)
+REG32(RB_DLC_REGISTER, 0x2104)
+ FIELD(RB_DLC_REGISTER, DLC, 28, 4)
+ FIELD(RB_DLC_REGISTER, FDF, 27, 1)
+ FIELD(RB_DLC_REGISTER, BRS, 26, 1)
+ FIELD(RB_DLC_REGISTER, ESI, 25, 1)
+ FIELD(RB_DLC_REGISTER, MATCHED_FILTER_INDEX, 16, 5)
+ FIELD(RB_DLC_REGISTER, TIMESTAMP, 0, 16)
+REG32(RB_DW0_REGISTER, 0x2108)
+ FIELD(RB_DW0_REGISTER, DATA_BYTES0, 24, 8)
+ FIELD(RB_DW0_REGISTER, DATA_BYTES1, 16, 8)
+ FIELD(RB_DW0_REGISTER, DATA_BYTES2, 8, 8)
+ FIELD(RB_DW0_REGISTER, DATA_BYTES3, 0, 8)
+REG32(RB_DW1_REGISTER, 0x210c)
+ FIELD(RB_DW1_REGISTER, DATA_BYTES4, 24, 8)
+ FIELD(RB_DW1_REGISTER, DATA_BYTES5, 16, 8)
+ FIELD(RB_DW1_REGISTER, DATA_BYTES6, 8, 8)
+ FIELD(RB_DW1_REGISTER, DATA_BYTES7, 0, 8)
+REG32(RB_DW2_REGISTER, 0x2110)
+ FIELD(RB_DW2_REGISTER, DATA_BYTES8, 24, 8)
+ FIELD(RB_DW2_REGISTER, DATA_BYTES9, 16, 8)
+ FIELD(RB_DW2_REGISTER, DATA_BYTES10, 8, 8)
+ FIELD(RB_DW2_REGISTER, DATA_BYTES11, 0, 8)
+REG32(RB_DW3_REGISTER, 0x2114)
+ FIELD(RB_DW3_REGISTER, DATA_BYTES12, 24, 8)
+ FIELD(RB_DW3_REGISTER, DATA_BYTES13, 16, 8)
+ FIELD(RB_DW3_REGISTER, DATA_BYTES14, 8, 8)
+ FIELD(RB_DW3_REGISTER, DATA_BYTES15, 0, 8)
+REG32(RB_DW4_REGISTER, 0x2118)
+ FIELD(RB_DW4_REGISTER, DATA_BYTES16, 24, 8)
+ FIELD(RB_DW4_REGISTER, DATA_BYTES17, 16, 8)
+ FIELD(RB_DW4_REGISTER, DATA_BYTES18, 8, 8)
+ FIELD(RB_DW4_REGISTER, DATA_BYTES19, 0, 8)
+REG32(RB_DW5_REGISTER, 0x211c)
+ FIELD(RB_DW5_REGISTER, DATA_BYTES20, 24, 8)
+ FIELD(RB_DW5_REGISTER, DATA_BYTES21, 16, 8)
+ FIELD(RB_DW5_REGISTER, DATA_BYTES22, 8, 8)
+ FIELD(RB_DW5_REGISTER, DATA_BYTES23, 0, 8)
+REG32(RB_DW6_REGISTER, 0x2120)
+ FIELD(RB_DW6_REGISTER, DATA_BYTES24, 24, 8)
+ FIELD(RB_DW6_REGISTER, DATA_BYTES25, 16, 8)
+ FIELD(RB_DW6_REGISTER, DATA_BYTES26, 8, 8)
+ FIELD(RB_DW6_REGISTER, DATA_BYTES27, 0, 8)
+REG32(RB_DW7_REGISTER, 0x2124)
+ FIELD(RB_DW7_REGISTER, DATA_BYTES28, 24, 8)
+ FIELD(RB_DW7_REGISTER, DATA_BYTES29, 16, 8)
+ FIELD(RB_DW7_REGISTER, DATA_BYTES30, 8, 8)
+ FIELD(RB_DW7_REGISTER, DATA_BYTES31, 0, 8)
+REG32(RB_DW8_REGISTER, 0x2128)
+ FIELD(RB_DW8_REGISTER, DATA_BYTES32, 24, 8)
+ FIELD(RB_DW8_REGISTER, DATA_BYTES33, 16, 8)
+ FIELD(RB_DW8_REGISTER, DATA_BYTES34, 8, 8)
+ FIELD(RB_DW8_REGISTER, DATA_BYTES35, 0, 8)
+REG32(RB_DW9_REGISTER, 0x212c)
+ FIELD(RB_DW9_REGISTER, DATA_BYTES36, 24, 8)
+ FIELD(RB_DW9_REGISTER, DATA_BYTES37, 16, 8)
+ FIELD(RB_DW9_REGISTER, DATA_BYTES38, 8, 8)
+ FIELD(RB_DW9_REGISTER, DATA_BYTES39, 0, 8)
+REG32(RB_DW10_REGISTER, 0x2130)
+ FIELD(RB_DW10_REGISTER, DATA_BYTES40, 24, 8)
+ FIELD(RB_DW10_REGISTER, DATA_BYTES41, 16, 8)
677
+ FIELD(RB_DW10_REGISTER, DATA_BYTES42, 8, 8)
678
+ FIELD(RB_DW10_REGISTER, DATA_BYTES43, 0, 8)
679
+REG32(RB_DW11_REGISTER, 0x2134)
680
+ FIELD(RB_DW11_REGISTER, DATA_BYTES44, 24, 8)
681
+ FIELD(RB_DW11_REGISTER, DATA_BYTES45, 16, 8)
682
+ FIELD(RB_DW11_REGISTER, DATA_BYTES46, 8, 8)
683
+ FIELD(RB_DW11_REGISTER, DATA_BYTES47, 0, 8)
684
+REG32(RB_DW12_REGISTER, 0x2138)
685
+ FIELD(RB_DW12_REGISTER, DATA_BYTES48, 24, 8)
686
+ FIELD(RB_DW12_REGISTER, DATA_BYTES49, 16, 8)
687
+ FIELD(RB_DW12_REGISTER, DATA_BYTES50, 8, 8)
688
+ FIELD(RB_DW12_REGISTER, DATA_BYTES51, 0, 8)
689
+REG32(RB_DW13_REGISTER, 0x213c)
690
+ FIELD(RB_DW13_REGISTER, DATA_BYTES52, 24, 8)
691
+ FIELD(RB_DW13_REGISTER, DATA_BYTES53, 16, 8)
692
+ FIELD(RB_DW13_REGISTER, DATA_BYTES54, 8, 8)
693
+ FIELD(RB_DW13_REGISTER, DATA_BYTES55, 0, 8)
694
+REG32(RB_DW14_REGISTER, 0x2140)
695
+ FIELD(RB_DW14_REGISTER, DATA_BYTES56, 24, 8)
696
+ FIELD(RB_DW14_REGISTER, DATA_BYTES57, 16, 8)
697
+ FIELD(RB_DW14_REGISTER, DATA_BYTES58, 8, 8)
698
+ FIELD(RB_DW14_REGISTER, DATA_BYTES59, 0, 8)
699
+REG32(RB_DW15_REGISTER, 0x2144)
700
+ FIELD(RB_DW15_REGISTER, DATA_BYTES60, 24, 8)
701
+ FIELD(RB_DW15_REGISTER, DATA_BYTES61, 16, 8)
702
+ FIELD(RB_DW15_REGISTER, DATA_BYTES62, 8, 8)
703
+ FIELD(RB_DW15_REGISTER, DATA_BYTES63, 0, 8)
704
+REG32(RB_ID_REGISTER_1, 0x4100)
705
+ FIELD(RB_ID_REGISTER_1, ID, 21, 11)
706
+ FIELD(RB_ID_REGISTER_1, SRR_RTR_RRS, 20, 1)
707
+ FIELD(RB_ID_REGISTER_1, IDE, 19, 1)
708
+ FIELD(RB_ID_REGISTER_1, ID_EXT, 1, 18)
709
+ FIELD(RB_ID_REGISTER_1, RTR_RRS, 0, 1)
710
+REG32(RB_DLC_REGISTER_1, 0x4104)
711
+ FIELD(RB_DLC_REGISTER_1, DLC, 28, 4)
712
+ FIELD(RB_DLC_REGISTER_1, FDF, 27, 1)
713
+ FIELD(RB_DLC_REGISTER_1, BRS, 26, 1)
714
+ FIELD(RB_DLC_REGISTER_1, ESI, 25, 1)
715
+ FIELD(RB_DLC_REGISTER_1, MATCHED_FILTER_INDEX, 16, 5)
716
+ FIELD(RB_DLC_REGISTER_1, TIMESTAMP, 0, 16)
717
+REG32(RB0_DW0_REGISTER_1, 0x4108)
718
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES0, 24, 8)
719
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES1, 16, 8)
720
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES2, 8, 8)
721
+ FIELD(RB0_DW0_REGISTER_1, DATA_BYTES3, 0, 8)
722
+REG32(RB_DW1_REGISTER_1, 0x410c)
723
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES4, 24, 8)
724
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES5, 16, 8)
725
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES6, 8, 8)
726
+ FIELD(RB_DW1_REGISTER_1, DATA_BYTES7, 0, 8)
727
+REG32(RB_DW2_REGISTER_1, 0x4110)
728
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES8, 24, 8)
729
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES9, 16, 8)
730
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES10, 8, 8)
731
+ FIELD(RB_DW2_REGISTER_1, DATA_BYTES11, 0, 8)
732
+REG32(RB_DW3_REGISTER_1, 0x4114)
733
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES12, 24, 8)
734
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES13, 16, 8)
735
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES14, 8, 8)
736
+ FIELD(RB_DW3_REGISTER_1, DATA_BYTES15, 0, 8)
737
+REG32(RB_DW4_REGISTER_1, 0x4118)
738
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES16, 24, 8)
739
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES17, 16, 8)
740
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES18, 8, 8)
741
+ FIELD(RB_DW4_REGISTER_1, DATA_BYTES19, 0, 8)
742
+REG32(RB_DW5_REGISTER_1, 0x411c)
743
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES20, 24, 8)
744
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES21, 16, 8)
745
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES22, 8, 8)
746
+ FIELD(RB_DW5_REGISTER_1, DATA_BYTES23, 0, 8)
747
+REG32(RB_DW6_REGISTER_1, 0x4120)
748
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES24, 24, 8)
749
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES25, 16, 8)
750
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES26, 8, 8)
751
+ FIELD(RB_DW6_REGISTER_1, DATA_BYTES27, 0, 8)
752
+REG32(RB_DW7_REGISTER_1, 0x4124)
753
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES28, 24, 8)
754
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES29, 16, 8)
755
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES30, 8, 8)
756
+ FIELD(RB_DW7_REGISTER_1, DATA_BYTES31, 0, 8)
757
+REG32(RB_DW8_REGISTER_1, 0x4128)
758
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES32, 24, 8)
759
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES33, 16, 8)
760
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES34, 8, 8)
761
+ FIELD(RB_DW8_REGISTER_1, DATA_BYTES35, 0, 8)
762
+REG32(RB_DW9_REGISTER_1, 0x412c)
763
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES36, 24, 8)
764
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES37, 16, 8)
765
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES38, 8, 8)
766
+ FIELD(RB_DW9_REGISTER_1, DATA_BYTES39, 0, 8)
767
+REG32(RB_DW10_REGISTER_1, 0x4130)
768
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES40, 24, 8)
769
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES41, 16, 8)
770
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES42, 8, 8)
771
+ FIELD(RB_DW10_REGISTER_1, DATA_BYTES43, 0, 8)
772
+REG32(RB_DW11_REGISTER_1, 0x4134)
773
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES44, 24, 8)
774
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES45, 16, 8)
775
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES46, 8, 8)
776
+ FIELD(RB_DW11_REGISTER_1, DATA_BYTES47, 0, 8)
777
+REG32(RB_DW12_REGISTER_1, 0x4138)
778
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES48, 24, 8)
779
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES49, 16, 8)
780
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES50, 8, 8)
781
+ FIELD(RB_DW12_REGISTER_1, DATA_BYTES51, 0, 8)
782
+REG32(RB_DW13_REGISTER_1, 0x413c)
783
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES52, 24, 8)
784
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES53, 16, 8)
785
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES54, 8, 8)
786
+ FIELD(RB_DW13_REGISTER_1, DATA_BYTES55, 0, 8)
787
+REG32(RB_DW14_REGISTER_1, 0x4140)
788
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES56, 24, 8)
789
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES57, 16, 8)
790
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES58, 8, 8)
791
+ FIELD(RB_DW14_REGISTER_1, DATA_BYTES59, 0, 8)
792
+REG32(RB_DW15_REGISTER_1, 0x4144)
793
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES60, 24, 8)
794
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES61, 16, 8)
795
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES62, 8, 8)
796
+ FIELD(RB_DW15_REGISTER_1, DATA_BYTES63, 0, 8)
797
+
+static uint8_t canfd_dlc_array[8] = {8, 12, 16, 20, 24, 32, 48, 64};
+
+static void canfd_update_irq(XlnxVersalCANFDState *s)
+{
+    unsigned int irq = s->regs[R_INTERRUPT_STATUS_REGISTER] &
+                       s->regs[R_INTERRUPT_ENABLE_REGISTER];
+    g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+    /* RX watermark interrupts. */
+    if (ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, FL) >
+        ARRAY_FIELD_EX32(s->regs, RX_FIFO_WATERMARK_REGISTER, RXFWM)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, FL_1) >
+        ARRAY_FIELD_EX32(s->regs, RX_FIFO_WATERMARK_REGISTER, RXFWM_1)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL_1, 1);
+    }
+
+    /* TX watermark interrupt. */
+    if (ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL) >
+        ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_WATERMARK_REGISTER, TXE_FWM)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXEWMFLL, 1);
+    }
+
+    trace_xlnx_canfd_update_irq(path, s->regs[R_INTERRUPT_STATUS_REGISTER],
+                                s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
+
+    qemu_set_irq(s->irq_canfd_int, irq);
+}
+
+static void canfd_ier_post_write(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+
+    canfd_update_irq(s);
+}
+
+static uint64_t canfd_icr_pre_write(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+
+    s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
+
+    /*
+     * RXBOFLW_BI field is automatically cleared to default if RXBOFLW bit is
+     * cleared in ISR.
+     */
+    if (ARRAY_FIELD_EX32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL_1)) {
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXBOFLW_BI, 0);
+    }
+
+    canfd_update_irq(s);
+
+    return 0;
+}
+
+static void canfd_config_reset(XlnxVersalCANFDState *s)
+{
+
+    unsigned int i;
+
+    /* Reset all the configuration registers. */
+    for (i = 0; i < R_RX_FIFO_WATERMARK_REGISTER; ++i) {
+        register_reset(&s->reg_info[i]);
+    }
+
+    canfd_update_irq(s);
+}
+
+static void canfd_config_mode(XlnxVersalCANFDState *s)
+{
+    register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
+    register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
+    register_reset(&s->reg_info[R_STATUS_REGISTER]);
+
+    /* Put XlnxVersalCANFDState in configuration mode. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR_BIT, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW_1, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
+
+    /* Clear the time stamp. */
+    ptimer_transaction_begin(s->canfd_timer);
+    ptimer_set_count(s->canfd_timer, 0);
+    ptimer_transaction_commit(s->canfd_timer);
+
+    canfd_update_irq(s);
+}
+
+static void update_status_register_mode_bits(XlnxVersalCANFDState *s)
+{
+    bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
+    bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
+    /* Wake up interrupt bit. */
+    bool wakeup_irq_val = !sleep_mode && sleep_status;
+    /* Sleep interrupt bit. */
+    bool sleep_irq_val = sleep_mode && !sleep_status;
+
+    /* Clear previous core mode status bits. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
+
+    /* set current mode bit and generate irqs accordingly. */
+    if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
+    } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
+                         sleep_irq_val);
+    } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
+    } else {
+        /* If all bits are zero, XlnxVersalCANFDState is set in normal mode. */
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
+        /* Set wakeup interrupt bit. */
+        ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
+                         wakeup_irq_val);
+    }
+
+    /* Put the CANFD in error active state. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ESTAT, 1);
+
+    canfd_update_irq(s);
+}
+
+static uint64_t canfd_msr_pre_write(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+    uint8_t multi_mode = 0;
+
+    /*
+     * Multiple mode set check. This is done to make sure user doesn't set
+     * multiple modes.
+     */
+    multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
+                 FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
+                 FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
+
+    if (multi_mode > 1) {
+        qemu_log_mask(LOG_GUEST_ERROR, "Attempting to configure several modes"
+                      " simultaneously. One mode will be selected according to"
+                      " their priority: LBACK > SLEEP > SNOOP.\n");
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        /* In configuration mode, any mode can be selected. */
+        s->regs[R_MODE_SELECT_REGISTER] = val;
+    } else {
+        bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
+
+        ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
+
+        if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
+            qemu_log_mask(LOG_GUEST_ERROR, "Attempting to set LBACK mode"
+                          " without setting CEN bit as 0\n");
+        } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
+            qemu_log_mask(LOG_GUEST_ERROR, "Attempting to set SNOOP mode"
+                          " without setting CEN bit as 0\n");
+        }
+
+        update_status_register_mode_bits(s);
+    }
+
+    return s->regs[R_MODE_SELECT_REGISTER];
+}
+
+static void canfd_exit_sleep_mode(XlnxVersalCANFDState *s)
+{
+    ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
+    update_status_register_mode_bits(s);
+}
+
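The interrupt plumbing in canfd_update_irq() above is easy to state outside the device model: each FIFO latches its watermark bit into the interrupt status register once the fill level exceeds the programmed watermark, and the wired interrupt level is the AND of status and enable. A minimal Python sketch of that behaviour (illustrative only, not part of the patch; the bit position used is invented for the example):

```python
RXFWMFLL = 1 << 0  # bit position chosen for illustration only

def update_irq(isr, ier, rx_fill, rx_watermark):
    """Recompute ISR and the irq line the way canfd_update_irq() does:
    latch the RX watermark bit, then AND status with the enable mask."""
    if rx_fill > rx_watermark:
        isr |= RXFWMFLL  # watermark crossed: set the sticky status bit
    return isr, bool(isr & ier)
```

Note the status bit is sticky: it is only set here and cleared through the ICR path (canfd_icr_pre_write() below), never cleared by the fill level dropping.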
+static void regs2frame(XlnxVersalCANFDState *s, qemu_can_frame *frame,
+                       uint32_t reg_num)
+{
+    uint32_t i = 0;
+    uint32_t j = 0;
+    uint32_t val = 0;
+    uint32_t dlc_reg_val = 0;
+    uint32_t dlc_value = 0;
+
+    /* Check that reg_num should be within TX register space. */
+    assert(reg_num <= R_TB_ID_REGISTER + (NUM_REGS_PER_MSG_SPACE *
+                                          s->cfg.tx_fifo));
+
+    dlc_reg_val = s->regs[reg_num + 1];
+    dlc_value = FIELD_EX32(dlc_reg_val, TB0_DLC_REGISTER, DLC);
+
+    frame->can_id = s->regs[reg_num];
+
+    if (FIELD_EX32(dlc_reg_val, TB0_DLC_REGISTER, FDF)) {
+        /*
+         * CANFD frame.
+         * Converting dlc(0 to 15) 4 Byte data to plain length(i.e. 0 to 64)
+         * 1 Byte data. This is done to make it work with SocketCAN.
+         * On actual CANFD frame, this value can't be more than 0xF.
+         * Conversion table for DLC to plain length:
+         *
+         *  DLC                        Plain Length
+         *  0 - 8                      0 - 8
+         *  9                          9 - 12
+         *  10                         13 - 16
+         *  11                         17 - 20
+         *  12                         21 - 24
+         *  13                         25 - 32
+         *  14                         33 - 48
+         *  15                         49 - 64
+         */
+
+        frame->flags = QEMU_CAN_FRMF_TYPE_FD;
+
+        if (dlc_value < 8) {
+            frame->can_dlc = dlc_value;
+        } else {
+            assert((dlc_value - 8) < ARRAY_SIZE(canfd_dlc_array));
+            frame->can_dlc = canfd_dlc_array[dlc_value - 8];
+        }
+    } else {
+        /*
+         * FD Format bit not set that means it is a CAN Frame.
+         * Conversion table for classic CAN:
+         *
+         *  DLC                        Plain Length
+         *  0 - 7                      0 - 7
+         *  8 - 15                     8
+         */
+
+        if (dlc_value > 8) {
+            frame->can_dlc = 8;
+            qemu_log_mask(LOG_GUEST_ERROR, "Maximum DLC value for Classic CAN"
+                          " frame is 8. Only 8 byte data will be sent.\n");
+        } else {
+            frame->can_dlc = dlc_value;
+        }
+    }
+
+    for (j = 0; j < frame->can_dlc; j++) {
+        val = 8 * i;
+
+        frame->data[j] = extract32(s->regs[reg_num + 2 + (j / 4)], val, 8);
+        i++;
+
+        if (i % 4 == 0) {
+            i = 0;
+        }
+    }
+}
+
+static void process_cancellation_requests(XlnxVersalCANFDState *s)
+{
+    uint32_t clear_mask = s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER] &
+                          s->regs[R_TX_BUFFER_CANCEL_REQUEST_REGISTER];
+
+    s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER] &= ~clear_mask;
+    s->regs[R_TX_BUFFER_CANCEL_REQUEST_REGISTER] &= ~clear_mask;
+
+    canfd_update_irq(s);
+}
+
+static void store_rx_sequential(XlnxVersalCANFDState *s,
+                                const qemu_can_frame *frame,
+                                uint32_t fill_level, uint32_t read_index,
+                                uint32_t store_location, uint8_t rx_fifo,
+                                bool rx_fifo_id, uint8_t filter_index)
+{
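The two conversion tables in regs2frame() above can be sketched as standalone helpers (Python, illustrative only, not part of the patch; the function names are mine, and the lookup table mirrors canfd_dlc_array):

```python
# Payload lengths for CAN FD DLC codes 8..15, same values as canfd_dlc_array.
CANFD_DLC_ARRAY = [8, 12, 16, 20, 24, 32, 48, 64]

def canfd_dlc_to_len(dlc):
    """Map a 4-bit CAN FD DLC code (0..15) to a payload length in bytes."""
    assert 0 <= dlc <= 0xF
    if dlc < 8:
        return dlc  # DLC 0-7 encode the byte length directly
    return CANFD_DLC_ARRAY[dlc - 8]

def classic_can_dlc_to_len(dlc):
    """Classic CAN caps the payload at 8 bytes for DLC codes 8-15."""
    return min(dlc, 8)
```

This is the SocketCAN-facing direction (register DLC to plain length); store_rx_sequential() below performs the inverse lookup when it re-encodes an incoming frame's length into the Xilinx DLC field.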
199
+ int i;
1074
+ int i;
200
1075
+ bool is_canfd_frame;
201
for (i = 0; i < s->num_cs; i++) {
1076
+ uint8_t dlc = frame->can_dlc;
202
- for (j = 0; j < num_effective_busses(s); j++) {
1077
+ uint8_t rx_reg_num = 0;
203
- int upage = !!(s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE);
1078
+ uint32_t dlc_reg_val = 0;
204
- int cs_to_set = (j * s->num_cs + i + upage) %
1079
+ uint32_t data_reg_val = 0;
205
- (s->num_cs * s->num_busses);
1080
+
206
-
1081
+ /* Getting RX0/1 fill level */
207
- if (xilinx_spips_cs_is_set(s, i, field) && !found) {
1082
+ if ((fill_level) > rx_fifo - 1) {
208
- DB_PRINT_L(0, "selecting slave %d\n", i);
1083
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
209
- qemu_set_irq(s->cs_lines[cs_to_set], 0);
1084
+
210
- if (s->cs_lines_state[cs_to_set]) {
1085
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: RX%d Buffer is full. Discarding the"
211
- s->cs_lines_state[cs_to_set] = false;
1086
+ " message\n", path, rx_fifo_id);
212
- s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
1087
+
213
- }
1088
+ /* Set the corresponding RF buffer overflow interrupt. */
214
- } else {
1089
+ if (rx_fifo_id == 0) {
215
- DB_PRINT_L(0, "deselecting slave %d\n", i);
1090
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW, 1);
216
- qemu_set_irq(s->cs_lines[cs_to_set], 1);
1091
+ } else {
217
- s->cs_lines_state[cs_to_set] = true;
1092
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFOFLW_1, 1);
218
- }
1093
+ }
219
- }
1094
+ } else {
220
- if (xilinx_spips_cs_is_set(s, i, field)) {
1095
+ uint16_t rx_timestamp = CANFD_TIMER_MAX -
221
- found = true;
1096
+ ptimer_get_count(s->canfd_timer);
222
+ bool old_state = s->cs_lines_state[i];
1097
+
223
+ bool new_state = field & (1 << i);
1098
+ if (rx_timestamp == 0xFFFF) {
224
+
1099
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TSCNT_OFLW, 1);
225
+ if (old_state != new_state) {
1100
+ } else {
226
+ s->cs_lines_state[i] = new_state;
1101
+ ARRAY_FIELD_DP32(s->regs, TIMESTAMP_REGISTER, TIMESTAMP_CNT,
227
+ s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
1102
+ rx_timestamp);
228
+ DB_PRINT_L(1, "%sselecting slave %d\n", new_state ? "" : "de", i);
1103
+ }
229
}
1104
+
230
+ qemu_set_irq(s->cs_lines[i], !new_state);
1105
+ if (rx_fifo_id == 0) {
231
}
1106
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL,
232
- if (!found) {
1107
+ fill_level + 1);
233
+ if (!(field & ((1 << s->num_cs) - 1))) {
1108
+ assert(store_location <=
234
s->snoop_state = SNOOP_CHECKING;
1109
+ R_RB_ID_REGISTER + (s->cfg.rx0_fifo *
235
s->cmd_dummies = 0;
1110
+ NUM_REGS_PER_MSG_SPACE));
236
s->link_state = 1;
1111
+ } else {
237
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
1112
+ ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL_1,
238
}
1113
+ fill_level + 1);
239
}
1114
+ assert(store_location <=
240
1115
+ R_RB_ID_REGISTER_1 + (s->cfg.rx1_fifo *
241
+static void xlnx_zynqmp_qspips_update_cs_lines(XlnxZynqMPQSPIPS *s)
1116
+ NUM_REGS_PER_MSG_SPACE));
242
+{
1117
+ }
243
+ if (s->regs[R_GQSPI_GF_SNAPSHOT]) {
1118
+
244
+ int field = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, CHIP_SELECT);
1119
+ s->regs[store_location] = frame->can_id;
245
+ xilinx_spips_update_cs(XILINX_SPIPS(s), field);
1120
+
246
+ }
1121
+ dlc = frame->can_dlc;
247
+}
1122
+
248
+
1123
+ if (frame->flags == QEMU_CAN_FRMF_TYPE_FD) {
249
+static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
1124
+ is_canfd_frame = true;
250
+{
1125
+
251
+ int field = ~((s->regs[R_CONFIG] & CS) >> CS_SHIFT);
1126
+ /* Store dlc value in Xilinx specific format. */
252
+
1127
+ for (i = 0; i < ARRAY_SIZE(canfd_dlc_array); i++) {
253
+ /* In dual parallel, mirror low CS to both */
1128
+ if (canfd_dlc_array[i] == frame->can_dlc) {
254
+ if (num_effective_busses(s) == 2) {
1129
+ dlc_reg_val = FIELD_DP32(0, RB_DLC_REGISTER, DLC, 8 + i);
255
+ /* Single bit chip-select for qspi */
1130
+ }
256
+ field &= 0x1;
257
+ field |= field << 1;
258
+ /* Dual stack U-Page */
259
+ } else if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_TWO_MEM &&
260
+ s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE) {
261
+ /* Single bit chip-select for qspi */
262
+ field &= 0x1;
263
+ /* change from CS0 to CS1 */
264
+ field <<= 1;
265
+ }
266
+ /* Auto CS */
267
+ if (!(s->regs[R_CONFIG] & MANUAL_CS) &&
268
+ fifo8_is_empty(&s->tx_fifo)) {
269
+ field = 0;
270
+ }
271
+ xilinx_spips_update_cs(s, field);
272
+}
273
+
274
static void xilinx_spips_update_ixr(XilinxSPIPS *s)
275
{
276
- if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE) {
277
- return;
278
+ if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
279
+ s->regs[R_INTR_STATUS] &= ~IXR_SELF_CLEAR;
280
+ s->regs[R_INTR_STATUS] |=
281
+ (fifo8_is_full(&s->rx_fifo) ? IXR_RX_FIFO_FULL : 0) |
282
+ (s->rx_fifo.num >= s->regs[R_RX_THRES] ?
283
+ IXR_RX_FIFO_NOT_EMPTY : 0) |
284
+ (fifo8_is_full(&s->tx_fifo) ? IXR_TX_FIFO_FULL : 0) |
285
+ (fifo8_is_empty(&s->tx_fifo) ? IXR_TX_FIFO_EMPTY : 0) |
286
+ (s->tx_fifo.num < s->regs[R_TX_THRES] ? IXR_TX_FIFO_NOT_FULL : 0);
287
}
288
- /* These are set/cleared as they occur */
289
- s->regs[R_INTR_STATUS] &= (IXR_TX_FIFO_UNDERFLOW | IXR_RX_FIFO_OVERFLOW |
290
- IXR_TX_FIFO_MODE_FAIL);
291
- /* these are pure functions of fifo state, set them here */
292
- s->regs[R_INTR_STATUS] |=
293
- (fifo8_is_full(&s->rx_fifo) ? IXR_RX_FIFO_FULL : 0) |
294
- (s->rx_fifo.num >= s->regs[R_RX_THRES] ? IXR_RX_FIFO_NOT_EMPTY : 0) |
295
- (fifo8_is_full(&s->tx_fifo) ? IXR_TX_FIFO_FULL : 0) |
296
- (s->tx_fifo.num < s->regs[R_TX_THRES] ? IXR_TX_FIFO_NOT_FULL : 0);
297
- /* drive external interrupt pin */
298
int new_irqline = !!(s->regs[R_INTR_MASK] & s->regs[R_INTR_STATUS] &
299
IXR_ALL);
300
if (new_irqline != s->irqline) {
301
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_ixr(XilinxSPIPS *s)
302
}
303
}
304
305
+static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
306
+{
307
+ uint32_t gqspi_int;
308
+ int new_irqline;
309
+
310
+ s->regs[R_GQSPI_ISR] &= ~IXR_SELF_CLEAR;
311
+ s->regs[R_GQSPI_ISR] |=
312
+ (fifo32_is_empty(&s->fifo_g) ? IXR_GENERIC_FIFO_EMPTY : 0) |
313
+ (fifo32_is_full(&s->fifo_g) ? IXR_GENERIC_FIFO_FULL : 0) |
314
+ (s->fifo_g.fifo.num < s->regs[R_GQSPI_GFIFO_THRESH] ?
315
+ IXR_GENERIC_FIFO_NOT_FULL : 0) |
316
+ (fifo8_is_empty(&s->rx_fifo_g) ? IXR_RX_FIFO_EMPTY : 0) |
317
+ (fifo8_is_full(&s->rx_fifo_g) ? IXR_RX_FIFO_FULL : 0) |
318
+ (s->rx_fifo_g.num >= s->regs[R_GQSPI_RX_THRESH] ?
319
+ IXR_RX_FIFO_NOT_EMPTY : 0) |
320
+ (fifo8_is_empty(&s->tx_fifo_g) ? IXR_TX_FIFO_EMPTY : 0) |
321
+ (fifo8_is_full(&s->tx_fifo_g) ? IXR_TX_FIFO_FULL : 0) |
322
+ (s->tx_fifo_g.num < s->regs[R_GQSPI_TX_THRESH] ?
323
+ IXR_TX_FIFO_NOT_FULL : 0);
324
+
325
+ /* GQSPI Interrupt Trigger Status */
326
+ gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] & GQSPI_IXR_MASK;
327
+ new_irqline = !!(gqspi_int & IXR_ALL);
328
+
329
+ /* drive external interrupt pin */
330
+ if (new_irqline != s->gqspi_irqline) {
331
+ s->gqspi_irqline = new_irqline;
332
+ qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
333
+ }
334
+}
335
+
336
static void xilinx_spips_reset(DeviceState *d)
337
{
338
XilinxSPIPS *s = XILINX_SPIPS(d);
339
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
340
xilinx_spips_update_cs_lines(s);
341
}
342
343
+static void xlnx_zynqmp_qspips_reset(DeviceState *d)
344
+{
345
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(d);
346
+ int i;
347
+
348
+ xilinx_spips_reset(d);
349
+
350
+ for (i = 0; i < XLNX_ZYNQMP_SPIPS_R_MAX; i++) {
351
+ s->regs[i] = 0;
352
+ }
353
+ fifo8_reset(&s->rx_fifo_g);
354
+ fifo8_reset(&s->rx_fifo_g);
355
+ fifo32_reset(&s->fifo_g);
356
+ s->regs[R_GQSPI_TX_THRESH] = 1;
357
+ s->regs[R_GQSPI_RX_THRESH] = 1;
358
+ s->regs[R_GQSPI_GFIFO_THRESH] = 1;
359
+ s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
360
+ s->man_start_com_g = false;
361
+ s->gqspi_irqline = 0;
362
+ xlnx_zynqmp_qspips_update_ixr(s);
363
+}
364
+
365
/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
366
* column wise (from element 0 to N-1). num is the length of x, and dir
367
* reverses the direction of the transform. Best illustrated by example:
368
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
369
memcpy(x, r, sizeof(uint8_t) * num);
370
}
371
372
+static void xlnx_zynqmp_qspips_flush_fifo_g(XlnxZynqMPQSPIPS *s)
373
+{
374
+ while (s->regs[R_GQSPI_DATA_STS] || !fifo32_is_empty(&s->fifo_g)) {
375
+ uint8_t tx_rx[2] = { 0 };
376
+ int num_stripes = 1;
377
+ uint8_t busses;
378
+ int i;
379
+
380
+ if (!s->regs[R_GQSPI_DATA_STS]) {
381
+ uint8_t imm;
382
+
383
+ s->regs[R_GQSPI_GF_SNAPSHOT] = fifo32_pop(&s->fifo_g);
384
+ DB_PRINT_L(0, "GQSPI command: %x\n", s->regs[R_GQSPI_GF_SNAPSHOT]);
385
+ if (!s->regs[R_GQSPI_GF_SNAPSHOT]) {
386
+ DB_PRINT_L(0, "Dummy GQSPI Delay Command Entry, Do nothing");
387
+ continue;
388
+ }
1131
+ }
389
+ xlnx_zynqmp_qspips_update_cs_lines(s);
1132
+ } else {
390
+
1133
+ is_canfd_frame = false;
391
+ imm = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA);
1134
+
392
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_XFER)) {
1135
+ if (frame->can_dlc > 8) {
393
+ /* immedate transfer */
1136
+ dlc = 8;
394
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT) ||
1137
+ }
395
+ ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE)) {
1138
+
396
+ s->regs[R_GQSPI_DATA_STS] = 1;
1139
+ dlc_reg_val = FIELD_DP32(0, RB_DLC_REGISTER, DLC, dlc);
397
+ /* CS setup/hold - do nothing */
1140
+ }
398
+ } else {
1141
+
399
+ s->regs[R_GQSPI_DATA_STS] = 0;
1142
+ dlc_reg_val |= FIELD_DP32(0, RB_DLC_REGISTER, FDF, is_canfd_frame);
400
+ }
1143
+ dlc_reg_val |= FIELD_DP32(0, RB_DLC_REGISTER, TIMESTAMP, rx_timestamp);
401
+ } else if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, EXPONENT)) {
1144
+ dlc_reg_val |= FIELD_DP32(0, RB_DLC_REGISTER, MATCHED_FILTER_INDEX,
402
+ if (imm > 31) {
1145
+ filter_index);
403
+ qemu_log_mask(LOG_UNIMP, "QSPI exponential transfer too"
1146
+ s->regs[store_location + 1] = dlc_reg_val;
404
+ " long - 2 ^ %" PRId8 " requested\n", imm);
1147
+
405
+ }
1148
+ for (i = 0; i < dlc; i++) {
406
+ s->regs[R_GQSPI_DATA_STS] = 1ul << imm;
1149
+ /* Register size is 4 byte but frame->data each is 1 byte. */
407
+ } else {
1150
+ switch (i % 4) {
408
+ s->regs[R_GQSPI_DATA_STS] = imm;
1151
+ case 0:
1152
+ rx_reg_num = i / 4;
1153
+
1154
+ data_reg_val = FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES3,
1155
+ frame->data[i]);
1156
+ break;
1157
+ case 1:
1158
+ data_reg_val |= FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES2,
1159
+ frame->data[i]);
1160
+ break;
1161
+ case 2:
1162
+ data_reg_val |= FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES1,
1163
+ frame->data[i]);
1164
+ break;
1165
+ case 3:
1166
+ data_reg_val |= FIELD_DP32(0, RB_DW0_REGISTER, DATA_BYTES0,
1167
+ frame->data[i]);
1168
+ /*
1169
+ * Last Bytes data which means we have all 4 bytes ready to
1170
+ * store in one rx regs.
1171
+ */
1172
+ s->regs[store_location + rx_reg_num + 2] = data_reg_val;
1173
+ break;
409
+ }
1174
+ }
410
+ }
1175
+ }
411
+ /* Zero length transfer check */
1176
+
412
+ if (!s->regs[R_GQSPI_DATA_STS]) {
1177
+ if (i % 4) {
413
+ continue;
1178
+ /*
1179
+ * In case DLC is not multiplier of 4, data is not saved to RX FIFO
1180
+ * in above switch case. Store the remaining bytes here.
1181
+ */
1182
+ s->regs[store_location + rx_reg_num + 2] = data_reg_val;
414
+ }
1183
+ }
415
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE) &&
1184
+
416
+ fifo8_is_full(&s->rx_fifo_g)) {
1185
+ /* set the interrupt as RXOK. */
417
+ /* No space in RX fifo for transfer - try again later */
1186
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
1187
+ }
1188
+}
1189
+
1190
+static void update_rx_sequential(XlnxVersalCANFDState *s,
1191
+ const qemu_can_frame *frame)
1192
+{
1193
+ bool filter_pass = false;
1194
+ uint8_t filter_index = 0;
1195
+ int i;
1196
+ int filter_partition = ARRAY_FIELD_EX32(s->regs,
1197
+ RX_FIFO_WATERMARK_REGISTER, RXFP);
1198
+ uint32_t store_location;
1199
+ uint32_t fill_level;
1200
+ uint32_t read_index;
1201
+ uint8_t store_index = 0;
1202
+ g_autofree char *path = NULL;
1203
+ /*
1204
+ * If all UAF bits are set to 0, then received messages are not stored
1205
+ * in the RX buffers.
1206
+ */
1207
+    if (s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER]) {
+        uint32_t acceptance_filter_status =
+                        s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER];
+
+        for (i = 0; i < 32; i++) {
+            if (acceptance_filter_status & 0x1) {
+                uint32_t msg_id_masked = s->regs[R_AFMR_REGISTER + 2 * i] &
+                                         frame->can_id;
+                uint32_t afir_id_masked = s->regs[R_AFIR_REGISTER + 2 * i] &
+                                          s->regs[R_AFMR_REGISTER + 2 * i];
+                uint16_t std_msg_id_masked = FIELD_EX32(msg_id_masked,
+                                                        AFIR_REGISTER, AIID);
+                uint16_t std_afir_id_masked = FIELD_EX32(afir_id_masked,
+                                                         AFIR_REGISTER, AIID);
+                uint32_t ext_msg_id_masked = FIELD_EX32(msg_id_masked,
+                                                        AFIR_REGISTER,
+                                                        AIID_EXT);
+                uint32_t ext_afir_id_masked = FIELD_EX32(afir_id_masked,
+                                                         AFIR_REGISTER,
+                                                         AIID_EXT);
+                bool ext_ide = FIELD_EX32(s->regs[R_AFMR_REGISTER + 2 * i],
+                                          AFMR_REGISTER, AMIDE);
+
+                if (std_msg_id_masked == std_afir_id_masked) {
+                    if (ext_ide) {
+                        /* Extended message ID message. */
+                        if (ext_msg_id_masked == ext_afir_id_masked) {
+                            filter_pass = true;
+                            filter_index = i;
+
+                            break;
+                        }
+                    } else {
+                        /* Standard message ID. */
+                        filter_pass = true;
+                        filter_index = i;
+
+                        break;
+                    }
+                }
+            }
+            acceptance_filter_status >>= 1;
+        }
+    }
+
+    if (!filter_pass) {
+        path = object_get_canonical_path(OBJECT(s));
+
+        trace_xlnx_canfd_rx_fifo_filter_reject(path, frame->can_id,
+                                               frame->can_dlc);
+    } else {
+        if (filter_index <= filter_partition) {
+            fill_level = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, FL);
+            read_index = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER, RI);
+            store_index = read_index + fill_level;
+
+            if (read_index == s->cfg.rx0_fifo - 1) {
+                /*
+                 * When read_index reaches its maximum (s->cfg.rx0_fifo - 1)
+                 * the FIFO wraps around, so reset read_index to 0.
+                 */
+                read_index = 0;
+                ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI,
+                                 read_index);
+            }
+
+            if (store_index > s->cfg.rx0_fifo - 1) {
+                store_index -= s->cfg.rx0_fifo - 1;
+            }
+
+            store_location = R_RB_ID_REGISTER +
+                             (store_index * NUM_REGS_PER_MSG_SPACE);
+
+            store_rx_sequential(s, frame, fill_level, read_index,
+                                store_location, s->cfg.rx0_fifo, 0,
+                                filter_index);
+        } else {
+            /* RX 1 fill level message */
+            fill_level = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER,
+                                          FL_1);
+            read_index = ARRAY_FIELD_EX32(s->regs, RX_FIFO_STATUS_REGISTER,
+                                          RI_1);
+            store_index = read_index + fill_level;
+
+            if (read_index == s->cfg.rx1_fifo - 1) {
+                /*
+                 * When read_index reaches its maximum (s->cfg.rx1_fifo - 1)
+                 * the FIFO wraps around, so reset read_index to 0.
+                 */
+                read_index = 0;
+                ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI_1,
+                                 read_index);
+            }
+
+            if (store_index > s->cfg.rx1_fifo - 1) {
+                store_index -= s->cfg.rx1_fifo - 1;
+            }
+
+            store_location = R_RB_ID_REGISTER_1 +
+                             (store_index * NUM_REGS_PER_MSG_SPACE);
+
+            store_rx_sequential(s, frame, fill_level, read_index,
+                                store_location, s->cfg.rx1_fifo, 1,
+                                filter_index);
+        }
+
+        path = object_get_canonical_path(OBJECT(s));
+
+        trace_xlnx_canfd_rx_data(path, frame->can_id, frame->can_dlc,
+                                 frame->flags);
+        canfd_update_irq(s);
+    }
+}
+
+static bool tx_ready_check(XlnxVersalCANFDState *s)
+{
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
+                      " XlnxVersalCANFDState is in reset mode\n", path);
+
+        return false;
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
+                      " XlnxVersalCANFDState is in configuration mode."
+                      " Reset the core so operations can start fresh\n",
+                      path);
+        return false;
+    }
+
+    if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer data while"
+                      " XlnxVersalCANFDState is in SNOOP MODE\n",
+                      path);
+        return false;
+    }
+
+    return true;
+}
+
+static void tx_fifo_stamp(XlnxVersalCANFDState *s, uint32_t tb0_regid)
+{
+    /*
+     * If the EFC bit in the DLC register is set, store an event for this
+     * transmitted message, together with a time stamp, in the TX event FIFO.
+     */
+    uint32_t dlc_reg_val = 0;
+
+    if (FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER, EFC)) {
+        uint8_t dlc_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
+                                     DLC);
+        bool fdf_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
+                                  FDF);
+        bool brs_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
+                                  BRS);
+        uint8_t mm_val = FIELD_EX32(s->regs[tb0_regid + 1], TB0_DLC_REGISTER,
+                                    MM);
+        uint8_t fill_level = ARRAY_FIELD_EX32(s->regs,
+                                              TX_EVENT_FIFO_STATUS_REGISTER,
+                                              TXE_FL);
+        uint8_t read_index = ARRAY_FIELD_EX32(s->regs,
+                                              TX_EVENT_FIFO_STATUS_REGISTER,
+                                              TXE_RI);
+        uint8_t store_index = fill_level + read_index;
+
+        if (fill_level > s->cfg.tx_fifo - 1) {
+            qemu_log_mask(LOG_GUEST_ERROR, "TX Event Buffer is full."
+                          " Discarding the message\n");
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXEOFLW, 1);
+        } else {
+            if (read_index == s->cfg.tx_fifo - 1) {
+                /*
+                 * When read_index reaches its maximum (s->cfg.tx_fifo - 1)
+                 * the FIFO wraps around, so reset read_index to 0.
+                 */
+                read_index = 0;
+                ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_RI,
+                                 read_index);
+            }
+
+            if (store_index > s->cfg.tx_fifo - 1) {
+                store_index -= s->cfg.tx_fifo - 1;
+            }
+
+            assert(store_index < s->cfg.tx_fifo);
+
+            uint32_t tx_event_reg0_id = R_TXE_FIFO_TB_ID_REGISTER +
+                                        (store_index * 2);
+
+            /* Store message ID in TX event register. */
+            s->regs[tx_event_reg0_id] = s->regs[tb0_regid];
+
+            uint16_t tx_timestamp = CANFD_TIMER_MAX -
+                                    ptimer_get_count(s->canfd_timer);
+
+            /* Store DLC with time stamp in DLC regs. */
+            dlc_reg_val = FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, DLC, dlc_val);
+            dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, FDF,
+                                      fdf_val);
+            dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, BRS,
+                                      brs_val);
+            dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, ET, 0x3);
+            dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, MM, mm_val);
+            dlc_reg_val |= FIELD_DP32(0, TXE_FIFO_TB_DLC_REGISTER, TIMESTAMP,
+                                      tx_timestamp);
+            s->regs[tx_event_reg0_id + 1] = dlc_reg_val;
+
+            ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL,
+                             fill_level + 1);
+        }
+    }
+}
+
+static gint g_cmp_ids(gconstpointer data1, gconstpointer data2)
+{
+    tx_ready_reg_info *tx_reg_1 = (tx_ready_reg_info *) data1;
+    tx_ready_reg_info *tx_reg_2 = (tx_ready_reg_info *) data2;
+
+    return tx_reg_1->can_id - tx_reg_2->can_id;
+}
+
+static void free_list(GSList *list)
+{
+    GSList *iterator = NULL;
+
+    for (iterator = list; iterator != NULL; iterator = iterator->next) {
+        g_free((tx_ready_reg_info *)iterator->data);
+    }
+
+    g_slist_free(list);
+}
+
+static GSList *prepare_tx_data(XlnxVersalCANFDState *s)
+{
+    uint8_t i = 0;
+    GSList *list = NULL;
+    uint32_t reg_num = 0;
+    uint32_t reg_ready = s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER];
+
+    /* First find the messages which are ready for transmission. */
+    for (i = 0; i < s->cfg.tx_fifo; i++) {
+        if (reg_ready & 1) {
+            reg_num = R_TB_ID_REGISTER + (NUM_REGS_PER_MSG_SPACE * i);
+            tx_ready_reg_info *temp = g_new(tx_ready_reg_info, 1);
+
+            temp->can_id = s->regs[reg_num];
+            temp->reg_num = reg_num;
+            list = g_slist_prepend(list, temp);
+            list = g_slist_sort(list, g_cmp_ids);
+        }
+
+        reg_ready >>= 1;
+    }
+
+    s->regs[R_TX_BUFFER_READY_REQUEST_REGISTER] = 0;
+    s->regs[R_TX_BUFFER_CANCEL_REQUEST_REGISTER] = 0;
+
+    return list;
+}
+
+static void transfer_data(XlnxVersalCANFDState *s)
+{
+    bool canfd_tx = tx_ready_check(s);
+    GSList *list, *iterator = NULL;
+    qemu_can_frame frame;
+
+    if (!canfd_tx) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller not enabled for data"
+                      " transfer\n", path);
+        return;
+    }
+
+    list = prepare_tx_data(s);
+    if (list == NULL) {
+        return;
+    }
+
+    for (iterator = list; iterator != NULL; iterator = iterator->next) {
+        regs2frame(s, &frame,
+                   ((tx_ready_reg_info *)iterator->data)->reg_num);
+
+        if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
+            update_rx_sequential(s, &frame);
+            tx_fifo_stamp(s, ((tx_ready_reg_info *)iterator->data)->reg_num);
+
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
+        } else {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            trace_xlnx_canfd_tx_data(path, frame.can_id, frame.can_dlc,
+                                     frame.flags);
+            can_bus_client_send(&s->bus_client, &frame, 1);
+            tx_fifo_stamp(s,
+                          ((tx_ready_reg_info *)iterator->data)->reg_num);
+
+            ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXRRS, 1);
+
+            if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
+                canfd_exit_sleep_mode(s);
+            }
+        }
+    }
+
+    ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
+    free_list(list);
+
+    canfd_update_irq(s);
+}
+
+static uint64_t canfd_srr_pre_write(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+
+    ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
+                     FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
+
+    if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        trace_xlnx_canfd_reset(path, val64);
+
+        /* The core first does a software reset, then enters config mode. */
+        canfd_config_reset(s);
+    } else if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        canfd_config_mode(s);
+    } else {
+        /*
+         * Leave config mode. Now the XlnxVersalCANFD core will enter Normal,
+         * Sleep, Snoop or Loopback mode depending upon the LBACK, SLEEP and
+         * SNOOP register states.
+         */
+        ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
+
+        ptimer_transaction_begin(s->canfd_timer);
+        ptimer_set_count(s->canfd_timer, 0);
+        ptimer_transaction_commit(s->canfd_timer);
+        update_status_register_mode_bits(s);
+        transfer_data(s);
+    }
+
+    return s->regs[R_SOFTWARE_RESET_REGISTER];
+}
+
+static uint64_t filter_mask(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t reg_idx = (reg->access->addr) / 4;
+    uint32_t val = val64;
+    uint32_t filter_offset = (reg_idx - R_AFMR_REGISTER) / 2;
+
+    if (!(s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER] &
+          (1 << filter_offset))) {
+        s->regs[reg_idx] = val;
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d not enabled\n",
+                      path, filter_offset + 1);
+    }
+
+    return s->regs[reg_idx];
+}
+
+static uint64_t filter_id(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    hwaddr reg_idx = (reg->access->addr) / 4;
+    uint32_t val = val64;
+    uint32_t filter_offset = (reg_idx - R_AFIR_REGISTER) / 2;
+
+    if (!(s->regs[R_ACCEPTANCE_FILTER_CONTROL_REGISTER] &
+          (1 << filter_offset))) {
+        s->regs[reg_idx] = val;
+    } else {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d not enabled\n",
+                      path, filter_offset + 1);
+    }
+
+    return s->regs[reg_idx];
+}
+
+static uint64_t canfd_tx_fifo_status_prew(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+    uint8_t read_ind = 0;
+    uint8_t fill_ind = ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER,
+                                        TXE_FL);
+
+    if (FIELD_EX32(val, TX_EVENT_FIFO_STATUS_REGISTER, TXE_IRI) && fill_ind) {
+        read_ind = ARRAY_FIELD_EX32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER,
+                                    TXE_RI) + 1;
+
+        if (read_ind > s->cfg.tx_fifo - 1) {
+            read_ind = 0;
+        }
+
+        /*
+         * Increase the read index by 1 and decrease the fill level by 1.
+         */
+        ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_RI,
+                         read_ind);
+        ARRAY_FIELD_DP32(s->regs, TX_EVENT_FIFO_STATUS_REGISTER, TXE_FL,
+                         fill_ind - 1);
+    }
+
+    return s->regs[R_TX_EVENT_FIFO_STATUS_REGISTER];
+}
+
+static uint64_t canfd_rx_fifo_status_prew(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+    uint8_t read_ind = 0;
+    uint8_t fill_ind = 0;
+
+    if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, IRI)) {
+        /* If the FL index is zero, setting the IRI bit has no effect. */
+        if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL) != 0) {
+            read_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, RI) + 1;
+
+            if (read_ind > s->cfg.rx0_fifo - 1) {
+                read_ind = 0;
+            }
+
+            fill_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL) - 1;
+
+            ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI, read_ind);
+            ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL, fill_ind);
+        }
+    }
+
+    if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, IRI_1)) {
+        /* If the FL_1 index is zero, setting the IRI_1 bit has no effect. */
+        if (FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL_1) != 0) {
+            read_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, RI_1) + 1;
+
+            if (read_ind > s->cfg.rx1_fifo - 1) {
+                read_ind = 0;
+            }
+
+            fill_ind = FIELD_EX32(val, RX_FIFO_STATUS_REGISTER, FL_1) - 1;
+
+            ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, RI_1, read_ind);
+            ARRAY_FIELD_DP32(s->regs, RX_FIFO_STATUS_REGISTER, FL_1, fill_ind);
+        }
+    }
+
+    return s->regs[R_RX_FIFO_STATUS_REGISTER];
+}
+
+static uint64_t canfd_tsr_pre_write(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+
+    if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
+        ARRAY_FIELD_DP32(s->regs, TIMESTAMP_REGISTER, TIMESTAMP_CNT, 0);
+        ptimer_transaction_begin(s->canfd_timer);
+        ptimer_set_count(s->canfd_timer, 0);
+        ptimer_transaction_commit(s->canfd_timer);
+    }
+
+    return 0;
+}
+
+static uint64_t canfd_trr_reg_prew(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+
+    if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
+        g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in SNOOP mode."
+                      " tx_ready_register will stay in reset mode\n", path);
+        return 0;
+    } else {
+        return val64;
+    }
+}
+
+static void canfd_trr_reg_postw(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+
+    transfer_data(s);
+}
+
+static void canfd_cancel_reg_postw(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+
+    process_cancellation_requests(s);
+}
+
+static uint64_t canfd_write_check_prew(RegisterInfo *reg, uint64_t val64)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(reg->opaque);
+    uint32_t val = val64;
+
+    if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
+        return val;
+    }
+    return 0;
+}
+
+static const RegisterAccessInfo canfd_tx_regs[] = {
+    { .name = "TB_ID_REGISTER", .addr = A_TB_ID_REGISTER,
+    },{ .name = "TB0_DLC_REGISTER", .addr = A_TB0_DLC_REGISTER,
+    },{ .name = "TB_DW0_REGISTER", .addr = A_TB_DW0_REGISTER,
+    },{ .name = "TB_DW1_REGISTER", .addr = A_TB_DW1_REGISTER,
+    },{ .name = "TB_DW2_REGISTER", .addr = A_TB_DW2_REGISTER,
+    },{ .name = "TB_DW3_REGISTER", .addr = A_TB_DW3_REGISTER,
+    },{ .name = "TB_DW4_REGISTER", .addr = A_TB_DW4_REGISTER,
+    },{ .name = "TB_DW5_REGISTER", .addr = A_TB_DW5_REGISTER,
+    },{ .name = "TB_DW6_REGISTER", .addr = A_TB_DW6_REGISTER,
+    },{ .name = "TB_DW7_REGISTER", .addr = A_TB_DW7_REGISTER,
+    },{ .name = "TB_DW8_REGISTER", .addr = A_TB_DW8_REGISTER,
+    },{ .name = "TB_DW9_REGISTER", .addr = A_TB_DW9_REGISTER,
+    },{ .name = "TB_DW10_REGISTER", .addr = A_TB_DW10_REGISTER,
+    },{ .name = "TB_DW11_REGISTER", .addr = A_TB_DW11_REGISTER,
+    },{ .name = "TB_DW12_REGISTER", .addr = A_TB_DW12_REGISTER,
+    },{ .name = "TB_DW13_REGISTER", .addr = A_TB_DW13_REGISTER,
+    },{ .name = "TB_DW14_REGISTER", .addr = A_TB_DW14_REGISTER,
+    },{ .name = "TB_DW15_REGISTER", .addr = A_TB_DW15_REGISTER,
+    }
+};
+
+static const RegisterAccessInfo canfd_rx0_regs[] = {
+    { .name = "RB_ID_REGISTER", .addr = A_RB_ID_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DLC_REGISTER", .addr = A_RB_DLC_REGISTER,
+        .ro = 0xfe1fffff,
+    },{ .name = "RB_DW0_REGISTER", .addr = A_RB_DW0_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW1_REGISTER", .addr = A_RB_DW1_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW2_REGISTER", .addr = A_RB_DW2_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW3_REGISTER", .addr = A_RB_DW3_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW4_REGISTER", .addr = A_RB_DW4_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW5_REGISTER", .addr = A_RB_DW5_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW6_REGISTER", .addr = A_RB_DW6_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW7_REGISTER", .addr = A_RB_DW7_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW8_REGISTER", .addr = A_RB_DW8_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW9_REGISTER", .addr = A_RB_DW9_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW10_REGISTER", .addr = A_RB_DW10_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW11_REGISTER", .addr = A_RB_DW11_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW12_REGISTER", .addr = A_RB_DW12_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW13_REGISTER", .addr = A_RB_DW13_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW14_REGISTER", .addr = A_RB_DW14_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW15_REGISTER", .addr = A_RB_DW15_REGISTER,
+        .ro = 0xffffffff,
+    }
+};
+
+static const RegisterAccessInfo canfd_rx1_regs[] = {
+    { .name = "RB_ID_REGISTER_1", .addr = A_RB_ID_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DLC_REGISTER_1", .addr = A_RB_DLC_REGISTER_1,
+        .ro = 0xfe1fffff,
+    },{ .name = "RB0_DW0_REGISTER_1", .addr = A_RB0_DW0_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW1_REGISTER_1", .addr = A_RB_DW1_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW2_REGISTER_1", .addr = A_RB_DW2_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW3_REGISTER_1", .addr = A_RB_DW3_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW4_REGISTER_1", .addr = A_RB_DW4_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW5_REGISTER_1", .addr = A_RB_DW5_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW6_REGISTER_1", .addr = A_RB_DW6_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW7_REGISTER_1", .addr = A_RB_DW7_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW8_REGISTER_1", .addr = A_RB_DW8_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW9_REGISTER_1", .addr = A_RB_DW9_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW10_REGISTER_1", .addr = A_RB_DW10_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW11_REGISTER_1", .addr = A_RB_DW11_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW12_REGISTER_1", .addr = A_RB_DW12_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW13_REGISTER_1", .addr = A_RB_DW13_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW14_REGISTER_1", .addr = A_RB_DW14_REGISTER_1,
+        .ro = 0xffffffff,
+    },{ .name = "RB_DW15_REGISTER_1", .addr = A_RB_DW15_REGISTER_1,
+        .ro = 0xffffffff,
+    }
+};
+
+/* Acceptance filter registers. */
+static const RegisterAccessInfo canfd_af_regs[] = {
+    { .name = "AFMR_REGISTER", .addr = A_AFMR_REGISTER,
+        .pre_write = filter_mask,
+    },{ .name = "AFIR_REGISTER", .addr = A_AFIR_REGISTER,
+        .pre_write = filter_id,
+    }
+};
+
+static const RegisterAccessInfo canfd_txe_regs[] = {
+    { .name = "TXE_FIFO_TB_ID_REGISTER", .addr = A_TXE_FIFO_TB_ID_REGISTER,
+        .ro = 0xffffffff,
+    },{ .name = "TXE_FIFO_TB_DLC_REGISTER", .addr = A_TXE_FIFO_TB_DLC_REGISTER,
+        .ro = 0xffffffff,
+    }
+};
+
+static const RegisterAccessInfo canfd_regs_info[] = {
+    { .name = "SOFTWARE_RESET_REGISTER", .addr = A_SOFTWARE_RESET_REGISTER,
+        .pre_write = canfd_srr_pre_write,
+    },{ .name = "MODE_SELECT_REGISTER", .addr = A_MODE_SELECT_REGISTER,
+        .pre_write = canfd_msr_pre_write,
+    },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
+        .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
+        .pre_write = canfd_write_check_prew,
+    },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
+        .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
+        .pre_write = canfd_write_check_prew,
+    },{ .name = "ERROR_COUNTER_REGISTER", .addr = A_ERROR_COUNTER_REGISTER,
+        .ro = 0xffff,
+    },{ .name = "ERROR_STATUS_REGISTER", .addr = A_ERROR_STATUS_REGISTER,
+        .w1c = 0xf1f,
+    },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
+        .reset = 0x1,
+        .ro = 0x7f17ff,
+    },{ .name = "INTERRUPT_STATUS_REGISTER",
+        .addr = A_INTERRUPT_STATUS_REGISTER,
+        .ro = 0xffffff7f,
+    },{ .name = "INTERRUPT_ENABLE_REGISTER",
+        .addr = A_INTERRUPT_ENABLE_REGISTER,
+        .post_write = canfd_ier_post_write,
+    },{ .name = "INTERRUPT_CLEAR_REGISTER",
+        .addr = A_INTERRUPT_CLEAR_REGISTER, .pre_write = canfd_icr_pre_write,
+    },{ .name = "TIMESTAMP_REGISTER", .addr = A_TIMESTAMP_REGISTER,
+        .ro = 0xffff0000,
+        .pre_write = canfd_tsr_pre_write,
+    },{ .name = "DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER",
+        .addr = A_DATA_PHASE_BAUD_RATE_PRESCALER_REGISTER,
+        .pre_write = canfd_write_check_prew,
+    },{ .name = "DATA_PHASE_BIT_TIMING_REGISTER",
+        .addr = A_DATA_PHASE_BIT_TIMING_REGISTER,
+        .pre_write = canfd_write_check_prew,
+    },{ .name = "TX_BUFFER_READY_REQUEST_REGISTER",
+        .addr = A_TX_BUFFER_READY_REQUEST_REGISTER,
+        .pre_write = canfd_trr_reg_prew,
+        .post_write = canfd_trr_reg_postw,
+    },{ .name = "INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER",
+        .addr = A_INTERRUPT_ENABLE_TX_BUFFER_READY_REQUEST_REGISTER,
+    },{ .name = "TX_BUFFER_CANCEL_REQUEST_REGISTER",
+        .addr = A_TX_BUFFER_CANCEL_REQUEST_REGISTER,
+        .post_write = canfd_cancel_reg_postw,
+    },{ .name = "INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER",
+        .addr = A_INTERRUPT_ENABLE_TX_BUFFER_CANCELLATION_REQUEST_REGISTER,
+    },{ .name = "TX_EVENT_FIFO_STATUS_REGISTER",
+        .addr = A_TX_EVENT_FIFO_STATUS_REGISTER,
+        .ro = 0x3f1f, .pre_write = canfd_tx_fifo_status_prew,
+    },{ .name = "TX_EVENT_FIFO_WATERMARK_REGISTER",
+        .addr = A_TX_EVENT_FIFO_WATERMARK_REGISTER,
+        .reset = 0xf,
+        .pre_write = canfd_write_check_prew,
+    },{ .name = "ACCEPTANCE_FILTER_CONTROL_REGISTER",
+        .addr = A_ACCEPTANCE_FILTER_CONTROL_REGISTER,
+    },{ .name = "RX_FIFO_STATUS_REGISTER", .addr = A_RX_FIFO_STATUS_REGISTER,
+        .ro = 0x7f3f7f3f, .pre_write = canfd_rx_fifo_status_prew,
+    },{ .name = "RX_FIFO_WATERMARK_REGISTER",
+        .addr = A_RX_FIFO_WATERMARK_REGISTER,
+        .reset = 0x1f0f0f,
+        .pre_write = canfd_write_check_prew,
+    }
+};
+
+static void xlnx_versal_canfd_ptimer_cb(void *opaque)
+{
+    /* No action required on the timer rollover. */
+}
+
+static const MemoryRegionOps canfd_ops = {
+    .read = register_read_memory,
+    .write = register_write_memory,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+};
+
+static void canfd_reset(DeviceState *dev)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(dev);
+    unsigned int i;
+
+    for (i = 0; i < ARRAY_SIZE(s->reg_info); ++i) {
+        register_reset(&s->reg_info[i]);
+    }
+
+    ptimer_transaction_begin(s->canfd_timer);
+    ptimer_set_count(s->canfd_timer, 0);
+    ptimer_transaction_commit(s->canfd_timer);
+}
+
+static bool can_xilinx_canfd_receive(CanBusClientState *client)
+{
+    XlnxVersalCANFDState *s = container_of(client, XlnxVersalCANFDState,
+                                           bus_client);
+
+    bool reset_state = ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST);
+    bool can_enabled = ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN);
+
+    return !reset_state && can_enabled;
+}
+
+static ssize_t canfd_xilinx_receive(CanBusClientState *client,
+                                    const qemu_can_frame *buf,
+                                    size_t buf_size)
+{
+    XlnxVersalCANFDState *s = container_of(client, XlnxVersalCANFDState,
+                                           bus_client);
+    const qemu_can_frame *frame = buf;
+
+    assert(buf_size > 0);
+
+    if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
+        /*
+         * XlnxVersalCANFDState will not participate in normal bus communication
+         * and does not receive any messages transmitted by other CAN nodes.
+         */
+        return 1;
+    }
+
+    /* Update the status register to indicate that we are receiving. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, BBSY, 1);
+
+    if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
+        /* Snoop mode: just store the data, with no response back. */
+        update_rx_sequential(s, frame);
+    } else {
+        if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
+            /*
+             * XlnxVersalCANFDState is in sleep mode. Any data on the bus
+             * brings it back to the wake up state.
+             */
+            canfd_exit_sleep_mode(s);
+        }
+
+        update_rx_sequential(s, frame);
+    }
+
+    /* Message processing done. Update the status back to !busy. */
+    ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, BBSY, 0);
+    return 1;
+}
+
+static CanBusClientInfo canfd_xilinx_bus_client_info = {
+    .can_receive = can_xilinx_canfd_receive,
+    .receive = canfd_xilinx_receive,
+};
+
+static int xlnx_canfd_connect_to_bus(XlnxVersalCANFDState *s,
+                                     CanBusState *bus)
+{
+    s->bus_client.info = &canfd_xilinx_bus_client_info;
+
+    return can_bus_insert_client(bus, &s->bus_client);
+}
+
+#define NUM_REG_PER_AF ARRAY_SIZE(canfd_af_regs)
+#define NUM_AF 32
+#define NUM_REG_PER_TXE ARRAY_SIZE(canfd_txe_regs)
+#define NUM_TXE 32
+
+static int canfd_populate_regarray(XlnxVersalCANFDState *s,
+                                   RegisterInfoArray *r_array, int pos,
+                                   const RegisterAccessInfo *rae,
+                                   int num_rae)
+{
+    int i;
+
+    for (i = 0; i < num_rae; i++) {
+        int index = rae[i].addr / 4;
+        RegisterInfo *r = &s->reg_info[index];
+
+        object_initialize(r, sizeof(*r), TYPE_REGISTER);
+
+        *r = (RegisterInfo) {
+            .data = &s->regs[index],
+            .data_size = sizeof(uint32_t),
+            .access = &rae[i],
+            .opaque = OBJECT(s),
+        };
+
+        r_array->r[i + pos] = r;
+    }
+    return i + pos;
+}
+
+static void canfd_create_rai(RegisterAccessInfo *rai_array,
+                             const RegisterAccessInfo *canfd_regs,
+                             int template_rai_array_sz,
+                             int num_template_to_copy)
+{
+    int i;
+    int reg_num;
+
+    for (reg_num = 0; reg_num < num_template_to_copy; reg_num++) {
+        int pos = reg_num * template_rai_array_sz;
+
+        memcpy(rai_array + pos, canfd_regs,
+               template_rai_array_sz * sizeof(RegisterAccessInfo));
+
+        for (i = 0; i < template_rai_array_sz; i++) {
+            const char *name = canfd_regs[i].name;
+            uint64_t addr = canfd_regs[i].addr;
+            rai_array[i + pos].name = g_strdup_printf("%s%d", name, reg_num);
+            rai_array[i + pos].addr = addr + pos * 4;
+        }
+    }
+}
+
+static RegisterInfoArray *canfd_create_regarray(XlnxVersalCANFDState *s)
+{
+    const char *device_prefix = object_get_typename(OBJECT(s));
+    uint64_t memory_size = XLNX_VERSAL_CANFD_R_MAX * 4;
+    int num_regs;
+    int pos = 0;
+    RegisterInfoArray *r_array;
+
+    num_regs = ARRAY_SIZE(canfd_regs_info) +
+               s->cfg.tx_fifo * NUM_REGS_PER_MSG_SPACE +
+               s->cfg.rx0_fifo * NUM_REGS_PER_MSG_SPACE +
+               NUM_AF * NUM_REG_PER_AF +
+               NUM_TXE * NUM_REG_PER_TXE;
+
+    s->tx_regs = g_new0(RegisterAccessInfo,
+                        s->cfg.tx_fifo * ARRAY_SIZE(canfd_tx_regs));
+
+    canfd_create_rai(s->tx_regs, canfd_tx_regs,
+                     ARRAY_SIZE(canfd_tx_regs), s->cfg.tx_fifo);
+
+    s->rx0_regs = g_new0(RegisterAccessInfo,
+                         s->cfg.rx0_fifo * ARRAY_SIZE(canfd_rx0_regs));
+
+    canfd_create_rai(s->rx0_regs, canfd_rx0_regs,
+                     ARRAY_SIZE(canfd_rx0_regs), s->cfg.rx0_fifo);
+
+    s->af_regs = g_new0(RegisterAccessInfo,
+                        NUM_AF * ARRAY_SIZE(canfd_af_regs));
+
+    canfd_create_rai(s->af_regs, canfd_af_regs,
+                     ARRAY_SIZE(canfd_af_regs), NUM_AF);
+
+    s->txe_regs = g_new0(RegisterAccessInfo,
+                         NUM_TXE * ARRAY_SIZE(canfd_txe_regs));
+
+    canfd_create_rai(s->txe_regs, canfd_txe_regs,
+                     ARRAY_SIZE(canfd_txe_regs), NUM_TXE);
+
+    if (s->cfg.enable_rx_fifo1) {
+        num_regs += s->cfg.rx1_fifo * NUM_REGS_PER_MSG_SPACE;
+
+        s->rx1_regs = g_new0(RegisterAccessInfo,
+                             s->cfg.rx1_fifo * ARRAY_SIZE(canfd_rx1_regs));
+
+        canfd_create_rai(s->rx1_regs, canfd_rx1_regs,
+                         ARRAY_SIZE(canfd_rx1_regs), s->cfg.rx1_fifo);
+    }
+
+    r_array = g_new0(RegisterInfoArray, 1);
+    r_array->r = g_new0(RegisterInfo *, num_regs);
+    r_array->num_elements = num_regs;
+    r_array->prefix = device_prefix;
+
+    pos = canfd_populate_regarray(s, r_array, pos,
+                                  canfd_regs_info,
+                                  ARRAY_SIZE(canfd_regs_info));
+    pos = canfd_populate_regarray(s, r_array, pos,
+                                  s->tx_regs, s->cfg.tx_fifo *
+                                  NUM_REGS_PER_MSG_SPACE);
+    pos = canfd_populate_regarray(s, r_array, pos,
+                                  s->rx0_regs, s->cfg.rx0_fifo *
+                                  NUM_REGS_PER_MSG_SPACE);
+    if (s->cfg.enable_rx_fifo1) {
+        pos = canfd_populate_regarray(s, r_array, pos,
+                                      s->rx1_regs, s->cfg.rx1_fifo *
+                                      NUM_REGS_PER_MSG_SPACE);
+    }
+    pos = canfd_populate_regarray(s, r_array, pos,
+                                  s->af_regs, NUM_AF * NUM_REG_PER_AF);
+    pos = canfd_populate_regarray(s, r_array, pos,
+                                  s->txe_regs, NUM_TXE * NUM_REG_PER_TXE);
+
+    memory_region_init_io(&r_array->mem, OBJECT(s), &canfd_ops, r_array,
+                          device_prefix, memory_size);
+    return r_array;
+}
+
+static void canfd_realize(DeviceState *dev, Error **errp)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(dev);
+    RegisterInfoArray *reg_array;
+
+    reg_array = canfd_create_regarray(s);
+    memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
+    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
+    sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq_canfd_int);
+
+    if (s->canfdbus) {
+        if (xlnx_canfd_connect_to_bus(s, s->canfdbus) < 0) {
+            g_autofree char *path = object_get_canonical_path(OBJECT(s));
+
+            error_setg(errp, "%s: xlnx_canfd_connect_to_bus failed", path);
+            return;
+        }
+    }
+
+    /* Allocate a new timer. */
+    s->canfd_timer = ptimer_init(xlnx_versal_canfd_ptimer_cb, s,
+                                 PTIMER_POLICY_WRAP_AFTER_ONE_PERIOD |
+                                 PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT |
+                                 PTIMER_POLICY_NO_IMMEDIATE_RELOAD);
+
+    ptimer_transaction_begin(s->canfd_timer);
+
+    ptimer_set_freq(s->canfd_timer, s->cfg.ext_clk_freq);
+    ptimer_set_limit(s->canfd_timer, CANFD_TIMER_MAX, 1);
+    ptimer_run(s->canfd_timer, 0);
+    ptimer_transaction_commit(s->canfd_timer);
+}
+
+static void canfd_init(Object *obj)
+{
+    XlnxVersalCANFDState *s = XILINX_CANFD(obj);
+
+    memory_region_init(&s->iomem, obj, TYPE_XILINX_CANFD,
+                       XLNX_VERSAL_CANFD_R_MAX * 4);
+}
+
+static const VMStateDescription vmstate_canfd = {
+    .name = TYPE_XILINX_CANFD,
455
+ for (i = 0; i < 2; ++i) {
456
+ if (busses & (1 << i)) {
457
+ DB_PRINT_L(1, "bus %d push_byte = %02x\n", i, tx_rx[i]);
458
+ fifo8_push(&s->rx_fifo_g, tx_rx[i]);
459
+ s->rx_fifo_g_align++;
460
+ }
461
+ }
462
+ }
463
+ if (!s->regs[R_GQSPI_DATA_STS]) {
464
+ for (; s->tx_fifo_g_align % 4; s->tx_fifo_g_align++) {
465
+ fifo8_pop(&s->tx_fifo_g);
466
+ }
467
+ for (; s->rx_fifo_g_align % 4; s->rx_fifo_g_align++) {
468
+ fifo8_push(&s->rx_fifo_g, 0);
469
+ }
470
+ }
471
+ }
472
+}
473
+
474
static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
475
{
476
if (!qs) {
477
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_check_flush(XilinxSPIPS *s)
478
xilinx_spips_update_ixr(s);
479
}
480
481
+static void xlnx_zynqmp_qspips_check_flush(XlnxZynqMPQSPIPS *s)
482
+{
483
+ bool gqspi_has_work = s->regs[R_GQSPI_DATA_STS] ||
484
+ !fifo32_is_empty(&s->fifo_g);
485
+
486
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
487
+ if (s->man_start_com_g || (gqspi_has_work &&
488
+ !ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, GEN_FIFO_START_MODE))) {
489
+ xlnx_zynqmp_qspips_flush_fifo_g(s);
490
+ }
491
+ } else {
492
+ xilinx_spips_check_flush(XILINX_SPIPS(s));
493
+ }
494
+ if (!gqspi_has_work) {
495
+ s->man_start_com_g = false;
496
+ }
497
+ xlnx_zynqmp_qspips_update_ixr(s);
498
+}
499
+
500
static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
501
{
502
int i;
503
@@ -XXX,XX +XXX,XX @@ static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
504
return max - i;
505
}
506
507
+static const void *pop_buf(Fifo8 *fifo, uint32_t max, uint32_t *num)
508
+{
509
+ void *ret;
510
+
511
+ if (max == 0 || max > fifo->num) {
512
+ abort();
513
+ }
514
+ *num = MIN(fifo->capacity - fifo->head, max);
515
+ ret = &fifo->data[fifo->head];
516
+ fifo->head += *num;
517
+ fifo->head %= fifo->capacity;
518
+ fifo->num -= *num;
519
+ return ret;
520
+}
521
+
522
+static void xlnx_zynqmp_qspips_notify(void *opaque)
523
+{
524
+ XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(opaque);
525
+ XilinxSPIPS *s = XILINX_SPIPS(rq);
526
+ Fifo8 *recv_fifo;
527
+
528
+ if (ARRAY_FIELD_EX32(rq->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
529
+ if (!(ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN) == 2)) {
530
+ return;
531
+ }
532
+ recv_fifo = &rq->rx_fifo_g;
533
+ } else {
534
+ if (!(s->regs[R_CMND] & R_CMND_DMA_EN)) {
535
+ return;
536
+ }
537
+ recv_fifo = &s->rx_fifo;
538
+ }
539
+ while (recv_fifo->num >= 4
540
+ && stream_can_push(rq->dma, xlnx_zynqmp_qspips_notify, rq))
541
+ {
542
+ size_t ret;
543
+ uint32_t num;
544
+ const void *rxd = pop_buf(recv_fifo, 4, &num);
545
+
546
+ memcpy(rq->dma_buf, rxd, num);
547
+
548
+ ret = stream_push(rq->dma, rq->dma_buf, 4);
549
+ assert(ret == 4);
550
+ xlnx_zynqmp_qspips_check_flush(rq);
551
+ }
552
+}
553
+
554
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
555
unsigned size)
556
{
557
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
558
ret <<= 8 * shortfall;
559
}
560
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
561
+ xilinx_spips_check_flush(s);
562
xilinx_spips_update_ixr(s);
563
return ret;
564
}
565
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
566
567
}
568
569
+static uint64_t xlnx_zynqmp_qspips_read(void *opaque,
570
+ hwaddr addr, unsigned size)
571
+{
572
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
573
+ uint32_t reg = addr / 4;
574
+ uint32_t ret;
575
+ uint8_t rx_buf[4];
576
+ int shortfall;
577
+
578
+ if (reg <= R_MOD_ID) {
579
+ return xilinx_spips_read(opaque, addr, size);
580
+ } else {
581
+ switch (reg) {
582
+ case R_GQSPI_RXD:
583
+ if (fifo8_is_empty(&s->rx_fifo_g)) {
584
+ qemu_log_mask(LOG_GUEST_ERROR,
585
+ "Read from empty GQSPI RX FIFO\n");
586
+ return 0;
587
+ }
588
+ memset(rx_buf, 0, sizeof(rx_buf));
589
+ shortfall = rx_data_bytes(&s->rx_fifo_g, rx_buf,
590
+ XILINX_SPIPS(s)->num_txrx_bytes);
591
+ ret = ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN) ?
592
+ cpu_to_be32(*(uint32_t *)rx_buf) :
593
+ cpu_to_le32(*(uint32_t *)rx_buf);
594
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN)) {
595
+ ret <<= 8 * shortfall;
596
+ }
597
+ xlnx_zynqmp_qspips_check_flush(s);
598
+ xlnx_zynqmp_qspips_update_ixr(s);
599
+ return ret;
600
+ default:
601
+ return s->regs[reg];
602
+ }
603
+ }
604
+}
605
+
606
static void xilinx_spips_write(void *opaque, hwaddr addr,
607
uint64_t value, unsigned size)
608
{
609
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
610
}
611
}
612
613
+static void xlnx_zynqmp_qspips_write(void *opaque, hwaddr addr,
614
+ uint64_t value, unsigned size)
615
+{
616
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
617
+ uint32_t reg = addr / 4;
618
+
619
+ if (reg <= R_MOD_ID) {
620
+ xilinx_qspips_write(opaque, addr, value, size);
621
+ } else {
622
+ switch (reg) {
623
+ case R_GQSPI_CNFG:
624
+ if (FIELD_EX32(value, GQSPI_CNFG, GEN_FIFO_START) &&
625
+ ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, GEN_FIFO_START_MODE)) {
626
+ s->man_start_com_g = true;
627
+ }
628
+ s->regs[reg] = value & ~(R_GQSPI_CNFG_GEN_FIFO_START_MASK);
629
+ break;
630
+ case R_GQSPI_GEN_FIFO:
631
+ if (!fifo32_is_full(&s->fifo_g)) {
632
+ fifo32_push(&s->fifo_g, value);
633
+ }
634
+ break;
635
+ case R_GQSPI_TXD:
636
+ tx_data_bytes(&s->tx_fifo_g, (uint32_t)value, 4,
637
+ ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN));
638
+ break;
639
+ case R_GQSPI_FIFO_CTRL:
640
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, GENERIC_FIFO_RESET)) {
641
+ fifo32_reset(&s->fifo_g);
642
+ }
643
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, TX_FIFO_RESET)) {
644
+ fifo8_reset(&s->tx_fifo_g);
645
+ }
646
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, RX_FIFO_RESET)) {
647
+ fifo8_reset(&s->rx_fifo_g);
648
+ }
649
+ break;
650
+ case R_GQSPI_IDR:
651
+ s->regs[R_GQSPI_IMR] |= value;
652
+ break;
653
+ case R_GQSPI_IER:
654
+ s->regs[R_GQSPI_IMR] &= ~value;
655
+ break;
656
+ case R_GQSPI_ISR:
657
+ s->regs[R_GQSPI_ISR] &= ~value;
658
+ break;
659
+ case R_GQSPI_IMR:
660
+ case R_GQSPI_RXD:
661
+ case R_GQSPI_GF_SNAPSHOT:
662
+ case R_GQSPI_MOD_ID:
663
+ break;
664
+ default:
665
+ s->regs[reg] = value;
666
+ break;
667
+ }
668
+ xlnx_zynqmp_qspips_update_cs_lines(s);
669
+ xlnx_zynqmp_qspips_check_flush(s);
670
+ xlnx_zynqmp_qspips_update_cs_lines(s);
671
+ xlnx_zynqmp_qspips_update_ixr(s);
672
+ }
673
+ xlnx_zynqmp_qspips_notify(s);
674
+}
675
+
676
static const MemoryRegionOps qspips_ops = {
677
.read = xilinx_spips_read,
678
.write = xilinx_qspips_write,
679
.endianness = DEVICE_LITTLE_ENDIAN,
680
};
681
682
+static const MemoryRegionOps xlnx_zynqmp_qspips_ops = {
683
+ .read = xlnx_zynqmp_qspips_read,
684
+ .write = xlnx_zynqmp_qspips_write,
685
+ .endianness = DEVICE_LITTLE_ENDIAN,
686
+};
687
+
688
#define LQSPI_CACHE_SIZE 1024
689
690
static void lqspi_load_cache(void *opaque, hwaddr addr)
691
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_realize(DeviceState *dev, Error **errp)
692
}
693
694
memory_region_init_io(&s->iomem, OBJECT(s), xsc->reg_ops, s,
695
- "spi", XLNX_SPIPS_R_MAX * 4);
696
+ "spi", XLNX_ZYNQMP_SPIPS_R_MAX * 4);
697
sysbus_init_mmio(sbd, &s->iomem);
698
699
s->irqline = -1;
700
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_realize(DeviceState *dev, Error **errp)
701
}
702
}
703
704
+static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
705
+{
706
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(dev);
707
+ XilinxSPIPSClass *xsc = XILINX_SPIPS_GET_CLASS(s);
708
+
709
+ xilinx_qspips_realize(dev, errp);
710
+ fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
711
+ fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
712
+ fifo32_create(&s->fifo_g, 32);
713
+}
714
+
715
+static void xlnx_zynqmp_qspips_init(Object *obj)
716
+{
717
+ XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(obj);
718
+
719
+ object_property_add_link(obj, "stream-connected-dma", TYPE_STREAM_SLAVE,
720
+ (Object **)&rq->dma,
721
+ object_property_allow_set_link,
722
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
723
+ NULL);
724
+}
725
+
726
static int xilinx_spips_post_load(void *opaque, int version_id)
727
{
728
xilinx_spips_update_ixr((XilinxSPIPS *)opaque);
729
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_xilinx_spips = {
730
}
731
};
732
733
+static int xlnx_zynqmp_qspips_post_load(void *opaque, int version_id)
734
+{
735
+ XlnxZynqMPQSPIPS *s = (XlnxZynqMPQSPIPS *)opaque;
736
+ XilinxSPIPS *qs = XILINX_SPIPS(s);
737
+
738
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_SELECT, GENERIC_QSPI_EN) &&
739
+ fifo8_is_empty(&qs->rx_fifo) && fifo8_is_empty(&qs->tx_fifo)) {
740
+ xlnx_zynqmp_qspips_update_ixr(s);
741
+ xlnx_zynqmp_qspips_update_cs_lines(s);
742
+ }
743
+ return 0;
744
+}
745
+
746
+static const VMStateDescription vmstate_xilinx_qspips = {
747
+ .name = "xilinx_qspips",
748
+ .version_id = 1,
2178
+ .version_id = 1,
749
+ .minimum_version_id = 1,
2179
+ .minimum_version_id = 1,
750
+ .fields = (VMStateField[]) {
2180
+ .fields = (VMStateField[]) {
751
+ VMSTATE_STRUCT(parent_obj, XilinxQSPIPS, 0,
2181
+ VMSTATE_UINT32_ARRAY(regs, XlnxVersalCANFDState,
752
+ vmstate_xilinx_spips, XilinxSPIPS),
2182
+ XLNX_VERSAL_CANFD_R_MAX),
753
+ VMSTATE_END_OF_LIST()
2183
+ VMSTATE_PTIMER(canfd_timer, XlnxVersalCANFDState),
2184
+ VMSTATE_END_OF_LIST(),
754
+ }
2185
+ }
755
+};
2186
+};
756
+
2187
+
757
+static const VMStateDescription vmstate_xlnx_zynqmp_qspips = {
2188
+static Property canfd_core_properties[] = {
758
+ .name = "xlnx_zynqmp_qspips",
2189
+ DEFINE_PROP_UINT8("rx-fifo0", XlnxVersalCANFDState, cfg.rx0_fifo, 0x40),
759
+ .version_id = 1,
2190
+ DEFINE_PROP_UINT8("rx-fifo1", XlnxVersalCANFDState, cfg.rx1_fifo, 0x40),
760
+ .minimum_version_id = 1,
2191
+ DEFINE_PROP_UINT8("tx-fifo", XlnxVersalCANFDState, cfg.tx_fifo, 0x20),
761
+ .post_load = xlnx_zynqmp_qspips_post_load,
2192
+ DEFINE_PROP_BOOL("enable-rx-fifo1", XlnxVersalCANFDState,
762
+ .fields = (VMStateField[]) {
2193
+ cfg.enable_rx_fifo1, true),
763
+ VMSTATE_STRUCT(parent_obj, XlnxZynqMPQSPIPS, 0,
2194
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxVersalCANFDState, cfg.ext_clk_freq,
764
+ vmstate_xilinx_qspips, XilinxQSPIPS),
2195
+ CANFD_DEFAULT_CLOCK),
765
+ VMSTATE_FIFO8(tx_fifo_g, XlnxZynqMPQSPIPS),
2196
+ DEFINE_PROP_LINK("canfdbus", XlnxVersalCANFDState, canfdbus, TYPE_CAN_BUS,
766
+ VMSTATE_FIFO8(rx_fifo_g, XlnxZynqMPQSPIPS),
2197
+ CanBusState *),
767
+ VMSTATE_FIFO32(fifo_g, XlnxZynqMPQSPIPS),
2198
+ DEFINE_PROP_END_OF_LIST(),
768
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPQSPIPS, XLNX_ZYNQMP_SPIPS_R_MAX),
769
+ VMSTATE_END_OF_LIST()
770
+ }
771
+};
2199
+};
772
+
2200
+
773
static Property xilinx_qspips_properties[] = {
2201
+static void canfd_class_init(ObjectClass *klass, void *data)
774
/* We had to turn this off for 2.10 as it is not compatible with migration.
775
* It can be enabled but will prevent the device to be migrated.
776
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_class_init(ObjectClass *klass, void *data)
777
xsc->tx_fifo_size = TXFF_A;
778
}
779
780
+static void xlnx_zynqmp_qspips_class_init(ObjectClass *klass, void * data)
781
+{
2202
+{
782
+ DeviceClass *dc = DEVICE_CLASS(klass);
2203
+ DeviceClass *dc = DEVICE_CLASS(klass);
783
+ XilinxSPIPSClass *xsc = XILINX_SPIPS_CLASS(klass);
2204
+
784
+
2205
+ dc->reset = canfd_reset;
785
+ dc->realize = xlnx_zynqmp_qspips_realize;
2206
+ dc->realize = canfd_realize;
786
+ dc->reset = xlnx_zynqmp_qspips_reset;
2207
+ device_class_set_props(dc, canfd_core_properties);
787
+ dc->vmsd = &vmstate_xlnx_zynqmp_qspips;
2208
+ dc->vmsd = &vmstate_canfd;
788
+ xsc->reg_ops = &xlnx_zynqmp_qspips_ops;
2209
+}
789
+ xsc->rx_fifo_size = RXFF_A_Q;
2210
+
790
+ xsc->tx_fifo_size = TXFF_A_Q;
2211
+static const TypeInfo canfd_info = {
791
+}
2212
+ .name = TYPE_XILINX_CANFD,
792
+
2213
+ .parent = TYPE_SYS_BUS_DEVICE,
793
static const TypeInfo xilinx_spips_info = {
2214
+ .instance_size = sizeof(XlnxVersalCANFDState),
794
.name = TYPE_XILINX_SPIPS,
2215
+ .class_init = canfd_class_init,
795
.parent = TYPE_SYS_BUS_DEVICE,
2216
+ .instance_init = canfd_init,
796
@@ -XXX,XX +XXX,XX @@ static const TypeInfo xilinx_qspips_info = {
797
.class_init = xilinx_qspips_class_init,
798
};
799
800
+static const TypeInfo xlnx_zynqmp_qspips_info = {
801
+ .name = TYPE_XLNX_ZYNQMP_QSPIPS,
802
+ .parent = TYPE_XILINX_QSPIPS,
803
+ .instance_size = sizeof(XlnxZynqMPQSPIPS),
804
+ .instance_init = xlnx_zynqmp_qspips_init,
805
+ .class_init = xlnx_zynqmp_qspips_class_init,
806
+};
2217
+};
807
+
2218
+
808
static void xilinx_spips_register_types(void)
2219
+static void canfd_register_types(void)
809
{
2220
+{
810
type_register_static(&xilinx_spips_info);
2221
+ type_register_static(&canfd_info);
811
type_register_static(&xilinx_qspips_info);
2222
+}
812
+ type_register_static(&xlnx_zynqmp_qspips_info);
2223
+
813
}
2224
+type_init(canfd_register_types)
814
2225
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
815
type_init(xilinx_spips_register_types)
816
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
817
index XXXXXXX..XXXXXXX 100644
2226
index XXXXXXX..XXXXXXX 100644
818
--- a/default-configs/arm-softmmu.mak
2227
--- a/hw/net/can/meson.build
819
+++ b/default-configs/arm-softmmu.mak
2228
+++ b/hw/net/can/meson.build
820
@@ -XXX,XX +XXX,XX @@ CONFIG_SMBIOS=y
2229
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
821
CONFIG_ASPEED_SOC=y
2230
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
822
CONFIG_GPIO_KEY=y
2231
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
823
CONFIG_MSF2=y
2232
softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
824
-
2233
+softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-canfd.c'))
825
CONFIG_FW_CFG_DMA=y
2234
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
826
+CONFIG_XILINX_AXI=y
2235
index XXXXXXX..XXXXXXX 100644
2236
--- a/hw/net/can/trace-events
2237
+++ b/hw/net/can/trace-events
2238
@@ -XXX,XX +XXX,XX @@ xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MAS
2239
xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
2240
xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
2241
xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
2242
+
2243
+# xlnx-versal-canfd.c
2244
+xlnx_canfd_update_irq(char *path, uint32_t isr, uint32_t ier, uint32_t irq) "%s: ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
2245
+xlnx_canfd_rx_fifo_filter_reject(char *path, uint32_t id, uint8_t dlc) "%s: Frame: ID: 0x%08x DLC: 0x%02x"
2246
+xlnx_canfd_rx_data(char *path, uint32_t id, uint8_t dlc, uint8_t flags) "%s: Frame: ID: 0x%08x DLC: 0x%02x CANFD Flag: 0x%02x"
2247
+xlnx_canfd_tx_data(char *path, uint32_t id, uint8_t dlc, uint8_t flags) "%s: Frame: ID: 0x%08x DLC: 0x%02x CANFD Flag: 0x%02x"
2248
+xlnx_canfd_reset(char *path, uint32_t val) "%s: Resetting controller with value = 0x%08x"
827
--
2249
--
828
2.7.4
2250
2.34.1
829
830
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
2
2
3
At the moment the ITS is not properly reset and this causes
3
Connect CANFD0 and CANFD1 on the Versal-virt machine and update xlnx-versal-virt
4
various bugs on save/restore. We implement a minimalist reset
4
document with CANFD command line examples.
5
through individual register writes but for kernel versions
6
before v4.15 this fails, voiding the vITS cache. We cannot
7
claim we have a comprehensive reset (hence the error message)
8
but that's better than nothing.
9
5
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
6
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1511883692-11511-3-git-send-email-eric.auger@redhat.com
8
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
hw/intc/arm_gicv3_its_kvm.c | 42 ++++++++++++++++++++++++++++++++++++++++++
11
docs/system/arm/xlnx-versal-virt.rst | 31 ++++++++++++++++
16
1 file changed, 42 insertions(+)
12
include/hw/arm/xlnx-versal.h | 12 +++++++
13
hw/arm/xlnx-versal-virt.c | 53 ++++++++++++++++++++++++++++
14
hw/arm/xlnx-versal.c | 37 +++++++++++++++++++
15
4 files changed, 133 insertions(+)
17
16
18
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
17
diff --git a/docs/system/arm/xlnx-versal-virt.rst b/docs/system/arm/xlnx-versal-virt.rst
19
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gicv3_its_kvm.c
19
--- a/docs/system/arm/xlnx-versal-virt.rst
21
+++ b/hw/intc/arm_gicv3_its_kvm.c
20
+++ b/docs/system/arm/xlnx-versal-virt.rst
21
@@ -XXX,XX +XXX,XX @@ Implemented devices:
22
- DDR memory
23
- BBRAM (36 bytes of Battery-backed RAM)
24
- eFUSE (3072 bytes of one-time field-programmable bit array)
25
+- 2 CANFDs
26
27
QEMU does not yet model any other devices, including the PL and the AI Engine.
28
29
@@ -XXX,XX +XXX,XX @@ To use a different index value, N, from default of 1, add:
30
31
Better yet, do not use actual product data when running guest image
32
on this Xilinx Versal Virt board.
33
+
34
+Using CANFDs for Versal Virt
35
+""""""""""""""""""""""""""""
36
+The Versal CANFD controller is developed based on SocketCAN and the QEMU CAN
37
+bus implementation. The bus connection and SocketCAN connection for each CAN
38
+module can be configured on the command line.
39
+
40
+To connect both CANFD0 and CANFD1 on the same bus:
41
+
42
+.. code-block:: bash
43
+
44
+ -object can-bus,id=canbus -machine canbus0=canbus -machine canbus1=canbus
45
+
46
+To connect CANFD0 and CANFD1 to separate buses:
47
+
48
+.. code-block:: bash
49
+
50
+ -object can-bus,id=canbus0 -object can-bus,id=canbus1 \
51
+ -machine canbus0=canbus0 -machine canbus1=canbus1
52
+
53
+The SocketCAN interface can connect to a physical or a virtual CAN interface on
54
+the host machine. Please check this document to learn about CAN interfaces on
55
+Linux: docs/system/devices/can.rst
56
+
57
+To connect CANFD0 and CANFD1 to the host machine's CAN interface can0:
58
+
59
+.. code-block:: bash
60
+
61
+ -object can-bus,id=canbus -machine canbus0=canbus -machine canbus1=canbus
62
+ -object can-host-socketcan,id=canhost0,if=can0,canbus=canbus
63
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
64
index XXXXXXX..XXXXXXX 100644
65
--- a/include/hw/arm/xlnx-versal.h
66
+++ b/include/hw/arm/xlnx-versal.h
22
@@ -XXX,XX +XXX,XX @@
67
@@ -XXX,XX +XXX,XX @@
23
68
#include "hw/dma/xlnx_csu_dma.h"
24
#define TYPE_KVM_ARM_ITS "arm-its-kvm"
69
#include "hw/misc/xlnx-versal-crl.h"
25
#define KVM_ARM_ITS(obj) OBJECT_CHECK(GICv3ITSState, (obj), TYPE_KVM_ARM_ITS)
70
#include "hw/misc/xlnx-versal-pmc-iou-slcr.h"
26
+#define KVM_ARM_ITS_CLASS(klass) \
71
+#include "hw/net/xlnx-versal-canfd.h"
27
+ OBJECT_CLASS_CHECK(KVMARMITSClass, (klass), TYPE_KVM_ARM_ITS)
72
28
+#define KVM_ARM_ITS_GET_CLASS(obj) \
73
#define TYPE_XLNX_VERSAL "xlnx-versal"
29
+ OBJECT_GET_CLASS(KVMARMITSClass, (obj), TYPE_KVM_ARM_ITS)
74
OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
30
+
75
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
31
+typedef struct KVMARMITSClass {
76
#define XLNX_VERSAL_NR_SDS 2
32
+ GICv3ITSCommonClass parent_class;
77
#define XLNX_VERSAL_NR_XRAM 4
33
+ void (*parent_reset)(DeviceState *dev);
78
#define XLNX_VERSAL_NR_IRQS 192
34
+} KVMARMITSClass;
79
+#define XLNX_VERSAL_NR_CANFD 2
35
+
80
+#define XLNX_VERSAL_CANFD_REF_CLK (24 * 1000 * 1000)
36
81
37
static int kvm_its_send_msi(GICv3ITSState *s, uint32_t value, uint16_t devid)
82
struct Versal {
38
{
83
/*< private >*/
39
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
84
@@ -XXX,XX +XXX,XX @@ struct Versal {
40
GITS_CTLR, &s->ctlr, true, &error_abort);
85
CadenceGEMState gem[XLNX_VERSAL_NR_GEMS];
86
XlnxZDMA adma[XLNX_VERSAL_NR_ADMAS];
87
VersalUsb2 usb;
88
+ CanBusState *canbus[XLNX_VERSAL_NR_CANFD];
89
+ XlnxVersalCANFDState canfd[XLNX_VERSAL_NR_CANFD];
90
} iou;
91
92
/* Real-time Processing Unit. */
93
@@ -XXX,XX +XXX,XX @@ struct Versal {
94
#define VERSAL_CRL_IRQ 10
95
#define VERSAL_UART0_IRQ_0 18
96
#define VERSAL_UART1_IRQ_0 19
97
+#define VERSAL_CANFD0_IRQ_0 20
98
+#define VERSAL_CANFD1_IRQ_0 21
99
#define VERSAL_USB0_IRQ_0 22
100
#define VERSAL_GEM0_IRQ_0 56
101
#define VERSAL_GEM0_WAKE_IRQ_0 57
102
@@ -XXX,XX +XXX,XX @@ struct Versal {
103
#define MM_UART1 0xff010000U
104
#define MM_UART1_SIZE 0x10000
105
106
+#define MM_CANFD0 0xff060000U
107
+#define MM_CANFD0_SIZE 0x10000
108
+#define MM_CANFD1 0xff070000U
109
+#define MM_CANFD1_SIZE 0x10000
110
+
111
#define MM_GEM0 0xff0c0000U
112
#define MM_GEM0_SIZE 0x10000
113
#define MM_GEM1 0xff0d0000U
114
diff --git a/hw/arm/xlnx-versal-virt.c b/hw/arm/xlnx-versal-virt.c
115
index XXXXXXX..XXXXXXX 100644
116
--- a/hw/arm/xlnx-versal-virt.c
117
+++ b/hw/arm/xlnx-versal-virt.c
118
@@ -XXX,XX +XXX,XX @@ struct VersalVirt {
119
uint32_t clk_25Mhz;
120
uint32_t usb;
121
uint32_t dwc;
122
+ uint32_t canfd[2];
123
} phandle;
124
struct arm_boot_info binfo;
125
126
+ CanBusState *canbus[XLNX_VERSAL_NR_CANFD];
127
struct {
128
bool secure;
129
} cfg;
130
@@ -XXX,XX +XXX,XX @@ static void fdt_add_uart_nodes(VersalVirt *s)
131
}
41
}
132
}
42
133
43
+static void kvm_arm_its_reset(DeviceState *dev)
134
+static void fdt_add_canfd_nodes(VersalVirt *s)
44
+{
135
+{
45
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
136
+ uint64_t addrs[] = { MM_CANFD1, MM_CANFD0 };
46
+ KVMARMITSClass *c = KVM_ARM_ITS_GET_CLASS(s);
137
+ uint32_t size[] = { MM_CANFD1_SIZE, MM_CANFD0_SIZE };
138
+ unsigned int irqs[] = { VERSAL_CANFD1_IRQ_0, VERSAL_CANFD0_IRQ_0 };
139
+ const char clocknames[] = "can_clk\0s_axi_aclk";
47
+ int i;
140
+ int i;
48
+
141
+
49
+ c->parent_reset(dev);
142
+ /* Create and connect CANFD0 and CANFD1 nodes to canbus0. */
50
+
143
+ for (i = 0; i < ARRAY_SIZE(addrs); i++) {
51
+ error_report("ITS KVM: full reset is not supported by QEMU");
144
+ char *name = g_strdup_printf("/canfd@%" PRIx64, addrs[i]);
52
+
145
+ qemu_fdt_add_subnode(s->fdt, name);
53
+ if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
146
+
54
+ GITS_CTLR)) {
147
+ qemu_fdt_setprop_cell(s->fdt, name, "rx-fifo-depth", 0x40);
55
+ return;
148
+ qemu_fdt_setprop_cell(s->fdt, name, "tx-mailbox-count", 0x20);
56
+ }
149
+
57
+
150
+ qemu_fdt_setprop_cells(s->fdt, name, "clocks",
58
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
151
+ s->phandle.clk_25Mhz, s->phandle.clk_25Mhz);
59
+ GITS_CTLR, &s->ctlr, true, &error_abort);
152
+ qemu_fdt_setprop(s->fdt, name, "clock-names",
60
+
153
+ clocknames, sizeof(clocknames));
61
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
154
+ qemu_fdt_setprop_cells(s->fdt, name, "interrupts",
62
+ GITS_CBASER, &s->cbaser, true, &error_abort);
155
+ GIC_FDT_IRQ_TYPE_SPI, irqs[i],
63
+
156
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
64
+ for (i = 0; i < 8; i++) {
157
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
65
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
158
+ 2, addrs[i], 2, size[i]);
66
+ GITS_BASER + i * 8, &s->baser[i], true,
159
+ qemu_fdt_setprop_string(s->fdt, name, "compatible",
67
+ &error_abort);
160
+ "xlnx,canfd-2.0");
161
+
162
+ g_free(name);
68
+ }
163
+ }
69
+}
164
+}
70
+
165
+
71
static Property kvm_arm_its_props[] = {
166
static void fdt_add_fixed_link_nodes(VersalVirt *s, char *gemname,
72
DEFINE_PROP_LINK("parent-gicv3", GICv3ITSState, gicv3, "kvm-arm-gicv3",
167
uint32_t phandle)
73
GICv3State *),
74
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_class_init(ObjectClass *klass, void *data)
75
{
168
{
76
DeviceClass *dc = DEVICE_CLASS(klass);
169
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
77
GICv3ITSCommonClass *icc = ARM_GICV3_ITS_COMMON_CLASS(klass);
170
TYPE_XLNX_VERSAL);
78
+ KVMARMITSClass *ic = KVM_ARM_ITS_CLASS(klass);
171
object_property_set_link(OBJECT(&s->soc), "ddr", OBJECT(machine->ram),
79
172
&error_abort);
80
dc->realize = kvm_arm_its_realize;
173
+ object_property_set_link(OBJECT(&s->soc), "canbus0", OBJECT(s->canbus[0]),
81
dc->props = kvm_arm_its_props;
174
+ &error_abort);
82
+ ic->parent_reset = dc->reset;
175
+ object_property_set_link(OBJECT(&s->soc), "canbus1", OBJECT(s->canbus[1]),
83
icc->send_msi = kvm_its_send_msi;
176
+ &error_abort);
84
icc->pre_save = kvm_arm_its_pre_save;
177
sysbus_realize(SYS_BUS_DEVICE(&s->soc), &error_fatal);
85
icc->post_load = kvm_arm_its_post_load;
178
86
+ dc->reset = kvm_arm_its_reset;
179
fdt_create(s);
180
create_virtio_regions(s);
181
fdt_add_gem_nodes(s);
182
fdt_add_uart_nodes(s);
183
+ fdt_add_canfd_nodes(s);
184
fdt_add_gic_nodes(s);
185
fdt_add_timer_nodes(s);
186
fdt_add_zdma_nodes(s);
187
@@ -XXX,XX +XXX,XX @@ static void versal_virt_init(MachineState *machine)
188
189
static void versal_virt_machine_instance_init(Object *obj)
190
{
191
+ VersalVirt *s = XLNX_VERSAL_VIRT_MACHINE(obj);
192
+
193
+ /*
194
+ * The user can set the canbus0 and canbus1 properties to a can-bus object
195
+ * and optionally connect it to a SocketCAN interface via the command line.
196
+ */
197
+ object_property_add_link(obj, "canbus0", TYPE_CAN_BUS,
198
+ (Object **)&s->canbus[0],
199
+ object_property_allow_set_link,
200
+ 0);
201
+ object_property_add_link(obj, "canbus1", TYPE_CAN_BUS,
202
+ (Object **)&s->canbus[1],
203
+ object_property_allow_set_link,
204
+ 0);
87
}
205
}
88
206
89
static const TypeInfo kvm_arm_its_info = {
207
static void versal_virt_machine_class_init(ObjectClass *oc, void *data)
90
@@ -XXX,XX +XXX,XX @@ static const TypeInfo kvm_arm_its_info = {
208
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
91
.parent = TYPE_ARM_GICV3_ITS_COMMON,
209
index XXXXXXX..XXXXXXX 100644
92
.instance_size = sizeof(GICv3ITSState),
210
--- a/hw/arm/xlnx-versal.c
93
.class_init = kvm_arm_its_class_init,
211
+++ b/hw/arm/xlnx-versal.c
94
+ .class_size = sizeof(KVMARMITSClass),
212
@@ -XXX,XX +XXX,XX @@ static void versal_create_uarts(Versal *s, qemu_irq *pic)
213
}
214
}
215
216
+static void versal_create_canfds(Versal *s, qemu_irq *pic)
217
+{
218
+ int i;
219
+ uint32_t irqs[] = { VERSAL_CANFD0_IRQ_0, VERSAL_CANFD1_IRQ_0 };
220
+ uint64_t addrs[] = { MM_CANFD0, MM_CANFD1 };
221
+
222
+ for (i = 0; i < ARRAY_SIZE(s->lpd.iou.canfd); i++) {
223
+ char *name = g_strdup_printf("canfd%d", i);
224
+ SysBusDevice *sbd;
225
+ MemoryRegion *mr;
226
+
227
+ object_initialize_child(OBJECT(s), name, &s->lpd.iou.canfd[i],
228
+ TYPE_XILINX_CANFD);
229
+ sbd = SYS_BUS_DEVICE(&s->lpd.iou.canfd[i]);
230
+
231
+ object_property_set_int(OBJECT(&s->lpd.iou.canfd[i]), "ext_clk_freq",
232
+ XLNX_VERSAL_CANFD_REF_CLK, &error_abort);
233
+
234
+ object_property_set_link(OBJECT(&s->lpd.iou.canfd[i]), "canfdbus",
235
+ OBJECT(s->lpd.iou.canbus[i]),
236
+ &error_abort);
237
+
238
+ sysbus_realize(sbd, &error_fatal);
239
+
240
+ mr = sysbus_mmio_get_region(sbd, 0);
241
+ memory_region_add_subregion(&s->mr_ps, addrs[i], mr);
242
+
243
+ sysbus_connect_irq(sbd, 0, pic[irqs[i]]);
244
+ g_free(name);
245
+ }
246
+}
247
+
248
static void versal_create_usbs(Versal *s, qemu_irq *pic)
249
{
250
DeviceState *dev;
251
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
252
versal_create_apu_gic(s, pic);
253
versal_create_rpu_cpus(s);
254
versal_create_uarts(s, pic);
255
+ versal_create_canfds(s, pic);
256
versal_create_usbs(s, pic);
257
versal_create_gems(s, pic);
258
versal_create_admas(s, pic);
259
@@ -XXX,XX +XXX,XX @@ static void versal_init(Object *obj)
260
static Property versal_properties[] = {
261
DEFINE_PROP_LINK("ddr", Versal, cfg.mr_ddr, TYPE_MEMORY_REGION,
262
MemoryRegion *),
263
+ DEFINE_PROP_LINK("canbus0", Versal, lpd.iou.canbus[0],
264
+ TYPE_CAN_BUS, CanBusState *),
265
+ DEFINE_PROP_LINK("canbus1", Versal, lpd.iou.canbus[1],
266
+ TYPE_CAN_BUS, CanBusState *),
267
DEFINE_PROP_END_OF_LIST()
95
};
268
};
96
269
97
static void kvm_arm_its_register_types(void)
98
--
270
--
99
2.7.4
271
2.34.1
100
101
1
From: Zhaoshenglong <zhaoshenglong@huawei.com>
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
2
2
3
Since I'm no longer working as an assignee at Linaro, replace the Linaro email
3
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
4
address with my personal one.
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
5
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Zhaoshenglong <zhaoshenglong@huawei.com>
7
Message-id: 1513058845-9768-1-git-send-email-zhaoshenglong@huawei.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
7
---
10
MAINTAINERS | 2 +-
8
MAINTAINERS | 2 +-
11
1 file changed, 1 insertion(+), 1 deletion(-)
9
1 file changed, 1 insertion(+), 1 deletion(-)
12
10
13
diff --git a/MAINTAINERS b/MAINTAINERS
11
diff --git a/MAINTAINERS b/MAINTAINERS
14
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
15
--- a/MAINTAINERS
13
--- a/MAINTAINERS
16
+++ b/MAINTAINERS
14
+++ b/MAINTAINERS
17
@@ -XXX,XX +XXX,XX @@ F: include/hw/*/xlnx*.h
15
@@ -XXX,XX +XXX,XX @@ M: Francisco Iglesias <francisco.iglesias@amd.com>
18
19
ARM ACPI Subsystem
20
M: Shannon Zhao <zhaoshenglong@huawei.com>
21
-M: Shannon Zhao <shannon.zhao@linaro.org>
22
+M: Shannon Zhao <shannon.zhaosl@gmail.com>
23
L: qemu-arm@nongnu.org
24
S: Maintained
16
S: Maintained
25
F: hw/arm/virt-acpi-build.c
17
F: hw/net/can/xlnx-*
18
F: include/hw/net/xlnx-*
19
-F: tests/qtest/xlnx-can-test*
20
+F: tests/qtest/xlnx-can*-test*
21
22
EDU
23
M: Jiri Slaby <jslaby@suse.cz>
26
--
24
--
27
2.7.4
25
2.34.1
28
29
diff view generated by jsdifflib
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Vikram Garhwal <vikram.garhwal@amd.com>
2
2
3
Use memset() instead of a for loop to zero all of the registers.
3
The QTests perform three tests on the Xilinx VERSAL CANFD controller:
4
Tests the CANFD controllers in loopback mode.
Tests the CANFD controllers in normal mode with a CAN frame.
Tests the CANFD controllers in normal mode with a CANFD frame.
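The tests distinguish the two frame types via the FD bit in the received DLC word; a minimal Python sketch of that check, using the DLC_FD_BIT_SHIFT and frame-size constants from the test below (the helper name is mine):

```python
DLC_FD_BIT_SHIFT = 0x1b   # bit 27 of the DLC word flags an FD frame
CANFD_FRAME_SIZE = 18     # ID + DLC + 16 data words (64-byte payload)
CAN_FRAME_SIZE = 4        # ID + DLC + 2 data words (8-byte payload)

def frame_words(dlc_word: int) -> int:
    """Return how many 32-bit words a receiver reads back for a frame,
    depending on the FD bit in the DLC word (mirroring read_data())."""
    is_canfd = (dlc_word >> DLC_FD_BIT_SHIFT) & 1
    return CANFD_FRAME_SIZE if is_canfd else CAN_FRAME_SIZE
```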
4
7
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
8
Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
6
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
9
Acked-by: Thomas Huth <thuth@redhat.com>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
10
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
8
Message-id: c076e907f355923864cb1afde31b938ffb677778.1513104804.git.alistair.francis@xilinx.com
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
hw/ssi/xilinx_spips.c | 11 +++--------
14
tests/qtest/xlnx-canfd-test.c | 423 ++++++++++++++++++++++++++++++++++
12
1 file changed, 3 insertions(+), 8 deletions(-)
15
tests/qtest/meson.build | 1 +
16
2 files changed, 424 insertions(+)
17
create mode 100644 tests/qtest/xlnx-canfd-test.c
13
18
14
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
19
diff --git a/tests/qtest/xlnx-canfd-test.c b/tests/qtest/xlnx-canfd-test.c
20
new file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- /dev/null
23
+++ b/tests/qtest/xlnx-canfd-test.c
24
@@ -XXX,XX +XXX,XX @@
25
+/*
26
+ * SPDX-License-Identifier: MIT
27
+ *
28
+ * QTests for the Xilinx Versal CANFD controller.
29
+ *
30
+ * Copyright (c) 2022 AMD Inc.
31
+ *
32
+ * Written-by: Vikram Garhwal <vikram.garhwal@amd.com>
33
+ *
34
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
35
+ * of this software and associated documentation files (the "Software"), to deal
36
+ * in the Software without restriction, including without limitation the rights
37
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
38
+ * copies of the Software, and to permit persons to whom the Software is
39
+ * furnished to do so, subject to the following conditions:
40
+ *
41
+ * The above copyright notice and this permission notice shall be included in
42
+ * all copies or substantial portions of the Software.
43
+ *
44
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
45
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
46
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
47
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
48
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
49
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
50
+ * THE SOFTWARE.
51
+ */
52
+
53
+#include "qemu/osdep.h"
54
+#include "libqtest.h"
55
+
56
+/* Base address. */
57
+#define CANFD0_BASE_ADDR 0xff060000
58
+#define CANFD1_BASE_ADDR 0xff070000
59
+
60
+/* Register addresses. */
61
+#define R_SRR_OFFSET 0x00
62
+#define R_MSR_OFFSET 0x04
63
+#define R_FILTER_CONTROL_REGISTER 0xe0
64
+#define R_SR_OFFSET 0x18
65
+#define R_ISR_OFFSET 0x1c
66
+#define R_IER_OFFSET 0x20
67
+#define R_ICR_OFFSET 0x24
68
+#define R_TX_READY_REQ_REGISTER 0x90
69
+#define RX_FIFO_STATUS_REGISTER 0xe8
70
+#define R_TXID_OFFSET 0x100
71
+#define R_TXDLC_OFFSET 0x104
72
+#define R_TXDATA1_OFFSET 0x108
73
+#define R_TXDATA2_OFFSET 0x10c
74
+#define R_AFMR_REGISTER0 0xa00
75
+#define R_AFIR_REGISTER0 0xa04
76
+#define R_RX0_ID_OFFSET 0x2100
77
+#define R_RX0_DLC_OFFSET 0x2104
78
+#define R_RX0_DATA1_OFFSET 0x2108
79
+#define R_RX0_DATA2_OFFSET 0x210c
80
+
81
+/* CANFD modes. */
82
+#define SRR_CONFIG_MODE 0x00
83
+#define MSR_NORMAL_MODE 0x00
84
+#define MSR_LOOPBACK_MODE (1 << 1)
85
+#define ENABLE_CANFD (1 << 1)
86
+
87
+/* CANFD status. */
88
+#define STATUS_CONFIG_MODE (1 << 0)
89
+#define STATUS_NORMAL_MODE (1 << 3)
90
+#define STATUS_LOOPBACK_MODE (1 << 1)
91
+#define ISR_TXOK (1 << 1)
92
+#define ISR_RXOK (1 << 4)
93
+
94
+#define ENABLE_ALL_FILTERS 0xffffffff
95
+#define ENABLE_ALL_INTERRUPTS 0xffffffff
96
+
97
+/* We are sending one CANFD message. */
98
+#define TX_READY_REG_VAL 0x1
99
+
100
+#define FIRST_RX_STORE_INDEX 0x1
101
+#define STATUS_REG_MASK 0xf
102
+#define DLC_FD_BIT_SHIFT 0x1b
103
+#define DLC_FD_BIT_MASK 0xf8000000
104
+#define FIFO_STATUS_READ_INDEX_MASK 0x3f
105
+#define FIFO_STATUS_FILL_LEVEL_MASK 0x7f00
106
+#define FILL_LEVEL_SHIFT 0x8
107
+
108
+/* CANFD frame size: ID, DLC and 16 DATA words. */
+#define CANFD_FRAME_SIZE 18
+/* CAN frame size: ID, DLC and 2 DATA words. */
+#define CAN_FRAME_SIZE 4
112
+
113
+/* Set the filters for CANFD controller. */
114
+static void enable_filters(QTestState *qts)
115
+{
116
+ const uint32_t arr_afmr[32] = { 0xb423deaa, 0xa2a40bdc, 0x1b64f486,
117
+ 0x95c0d4ee, 0xe0c44528, 0x4b407904,
118
+ 0xd2673f46, 0x9fc638d6, 0x8844f3d8,
119
+ 0xa607d1e8, 0x67871bf4, 0xc2557dc,
120
+ 0x9ea5b53e, 0x3643c0cc, 0x5a05ea8e,
121
+ 0x83a46d84, 0x4a25c2b8, 0x93a66008,
122
+ 0x2e467470, 0xedc66118, 0x9086f9f2,
123
+ 0xfa23dd36, 0xb6654b90, 0xb221b8ca,
124
+ 0x3467d1e2, 0xa3a55542, 0x5b26a012,
125
+ 0x2281ea7e, 0xcea0ece8, 0xdc61e588,
126
+ 0x2e5676a, 0x16821320 };
127
+
128
+ const uint32_t arr_afir[32] = { 0xa833dfa1, 0x255a477e, 0x3a4bb1c5,
129
+ 0x8f560a6c, 0x27f38903, 0x2fecec4d,
130
+ 0xa014c66d, 0xec289b8, 0x7e52dead,
131
+ 0x82e94f3c, 0xcf3e3c5c, 0x66059871,
132
+ 0x3f213df4, 0x25ac3959, 0xa12e9bef,
133
+ 0xa3ad3af, 0xbafd7fe, 0xb3cb40fd,
134
+ 0x5d9caa81, 0x2ed61902, 0x7cd64a0,
135
+ 0x4b1fa538, 0x9b5ced8c, 0x150de059,
136
+ 0xd2794227, 0x635e820a, 0xbb6b02cf,
137
+ 0xbb58176, 0x570025bb, 0xa78d9658,
138
+ 0x49d735df, 0xe5399d2f };
139
+
140
+ /* Passing the respective array values to all the AFMR and AFIR pairs. */
141
+ for (int i = 0; i < 32; i++) {
142
+ /* For CANFD0. */
143
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_AFMR_REGISTER0 + 8 * i,
144
+ arr_afmr[i]);
145
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_AFIR_REGISTER0 + 8 * i,
146
+ arr_afir[i]);
147
+
148
+ /* For CANFD1. */
149
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_AFMR_REGISTER0 + 8 * i,
150
+ arr_afmr[i]);
151
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_AFIR_REGISTER0 + 8 * i,
152
+ arr_afir[i]);
153
+ }
154
+
155
+ /* Enable all the pairs from AFR register. */
156
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_FILTER_CONTROL_REGISTER,
157
+ ENABLE_ALL_FILTERS);
158
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_FILTER_CONTROL_REGISTER,
159
+ ENABLE_ALL_FILTERS);
160
+}
161
+
162
+static void configure_canfd(QTestState *qts, uint8_t mode)
163
+{
164
+ uint32_t status = 0;
165
+
166
+ /* Put CANFD0 and CANFD1 in config mode. */
167
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_SRR_OFFSET, SRR_CONFIG_MODE);
168
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_SRR_OFFSET, SRR_CONFIG_MODE);
169
+
170
+ /* Write mode of operation in Mode select register. */
171
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_MSR_OFFSET, mode);
172
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_MSR_OFFSET, mode);
173
+
174
+ enable_filters(qts);
175
+
176
+ /* Check here if CANFD0 and CANFD1 are in config mode. */
177
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
178
+ status = status & STATUS_REG_MASK;
179
+ g_assert_cmpint(status, ==, STATUS_CONFIG_MODE);
180
+
181
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
182
+ status = status & STATUS_REG_MASK;
183
+ g_assert_cmpint(status, ==, STATUS_CONFIG_MODE);
184
+
185
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_IER_OFFSET, ENABLE_ALL_INTERRUPTS);
186
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_IER_OFFSET, ENABLE_ALL_INTERRUPTS);
187
+
188
+ qtest_writel(qts, CANFD0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CANFD);
189
+ qtest_writel(qts, CANFD1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CANFD);
190
+}
191
+
192
+static void generate_random_data(uint32_t *buf_tx, bool is_canfd_frame)
193
+{
194
+ /* Generate random TX data for CANFD frame. */
195
+ if (is_canfd_frame) {
196
+ for (int i = 0; i < CANFD_FRAME_SIZE - 2; i++) {
197
+ buf_tx[2 + i] = rand();
198
+ }
199
+ } else {
200
+ /* Generate random TX data for CAN frame. */
201
+ for (int i = 0; i < CAN_FRAME_SIZE - 2; i++) {
202
+ buf_tx[2 + i] = rand();
203
+ }
204
+ }
205
+}
206
+
207
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
208
+{
209
+ uint32_t int_status;
210
+ uint32_t fifo_status_reg_value;
211
+ /* At which RX FIFO the received data is stored. */
212
+ uint8_t store_ind = 0;
213
+ bool is_canfd_frame = false;
214
+
215
+ /* Read the interrupt on CANFD rx. */
216
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
217
+
218
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
219
+
220
+ /* Find the fill level and read index. */
221
+ fifo_status_reg_value = qtest_readl(qts, can_base_addr +
222
+ RX_FIFO_STATUS_REGISTER);
223
+
224
+ store_ind = (fifo_status_reg_value & FIFO_STATUS_READ_INDEX_MASK) +
225
+ ((fifo_status_reg_value & FIFO_STATUS_FILL_LEVEL_MASK) >>
226
+ FILL_LEVEL_SHIFT);
227
+
228
+ g_assert_cmpint(store_ind, ==, FIRST_RX_STORE_INDEX);
229
+
230
+ /* Read the RX register data for CANFD. */
231
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RX0_ID_OFFSET);
232
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RX0_DLC_OFFSET);
233
+
234
+ is_canfd_frame = (buf_rx[1] >> DLC_FD_BIT_SHIFT) & 1;
235
+
236
+ if (is_canfd_frame) {
237
+ for (int i = 0; i < CANFD_FRAME_SIZE - 2; i++) {
238
+ buf_rx[i + 2] = qtest_readl(qts,
239
+ can_base_addr + R_RX0_DATA1_OFFSET + 4 * i);
240
+ }
241
+ } else {
242
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RX0_DATA1_OFFSET);
243
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RX0_DATA2_OFFSET);
244
+ }
245
+
246
+ /* Clear the RX interrupt. */
247
+ qtest_writel(qts, can_base_addr + R_ICR_OFFSET, ISR_RXOK);
248
+}
249
+
250
+static void write_data(QTestState *qts, uint64_t can_base_addr,
251
+ const uint32_t *buf_tx, bool is_canfd_frame)
252
+{
253
+ /* Write the TX register data for CANFD. */
254
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
255
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
256
+
257
+ if (is_canfd_frame) {
258
+ for (int i = 0; i < CANFD_FRAME_SIZE - 2; i++) {
259
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET + 4 * i,
260
+ buf_tx[2 + i]);
261
+ }
262
+ } else {
263
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
264
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
265
+ }
266
+}
267
+
268
+static void send_data(QTestState *qts, uint64_t can_base_addr)
269
+{
270
+ uint32_t int_status;
271
+
272
+ qtest_writel(qts, can_base_addr + R_TX_READY_REQ_REGISTER,
273
+ TX_READY_REG_VAL);
274
+
275
+ /* Read the interrupt on CANFD for tx. */
276
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
277
+
278
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
279
+
280
+ /* Clear the interrupt for tx. */
281
+ qtest_writel(qts, can_base_addr + R_ICR_OFFSET, ISR_TXOK);
282
+}
283
+
284
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
285
+ bool is_canfd_frame)
286
+{
287
+ uint16_t size = 0;
288
+ uint8_t len = CAN_FRAME_SIZE;
289
+
290
+ if (is_canfd_frame) {
291
+ len = CANFD_FRAME_SIZE;
292
+ }
293
+
294
+ while (size < len) {
295
+ if (R_RX0_ID_OFFSET + 4 * size == R_RX0_DLC_OFFSET) {
296
+ g_assert_cmpint((buf_rx[size] & DLC_FD_BIT_MASK), ==,
297
+ (buf_tx[size] & DLC_FD_BIT_MASK));
298
+ } else {
299
+ if (!is_canfd_frame && size == 4) {
300
+ break;
301
+ }
302
+
303
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
304
+ }
305
+
306
+ size++;
307
+ }
308
+}
309
+/*
310
+ * Xilinx CANFD supports both CAN and CANFD frames. This test transfers a
+ * CAN frame, i.e. 8 bytes of data, from CANFD0 to CANFD1 over the CAN
+ * bus: CANFD0 initiates the transfer and CANFD1 receives the data. The
+ * test compares the CAN frame sent from CANFD0 with the one received on
+ * CANFD1.
315
+ */
316
+static void test_can_data_transfer(void)
317
+{
318
+ uint32_t buf_tx[CAN_FRAME_SIZE] = { 0x5a5bb9a4, 0x80000000,
319
+ 0x12345678, 0x87654321 };
320
+ uint32_t buf_rx[CAN_FRAME_SIZE] = { 0x00, 0x00, 0x00, 0x00 };
321
+ uint32_t status = 0;
322
+
323
+ generate_random_data(buf_tx, false);
324
+
325
+ QTestState *qts = qtest_init("-machine xlnx-versal-virt"
326
+ " -object can-bus,id=canbus"
327
+ " -machine canbus0=canbus"
328
+ " -machine canbus1=canbus"
329
+ );
330
+
331
+ configure_canfd(qts, MSR_NORMAL_MODE);
332
+
333
+ /* Check if CANFD0 and CANFD1 are in Normal mode. */
334
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
335
+ status = status & STATUS_REG_MASK;
336
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
337
+
338
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
339
+ status = status & STATUS_REG_MASK;
340
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
341
+
342
+ write_data(qts, CANFD0_BASE_ADDR, buf_tx, false);
343
+
344
+ send_data(qts, CANFD0_BASE_ADDR);
345
+ read_data(qts, CANFD1_BASE_ADDR, buf_rx);
346
+ match_rx_tx_data(buf_tx, buf_rx, false);
347
+
348
+ qtest_quit(qts);
349
+}
350
+
351
+/*
352
+ * This test transfers a CANFD frame, i.e. 64 bytes of data, from CANFD0
+ * to CANFD1 over the CAN bus: CANFD0 initiates the transfer and CANFD1
+ * receives the data. The test compares the CANFD frame sent from CANFD0
+ * with the one received on CANFD1.
356
+ */
357
+static void test_canfd_data_transfer(void)
358
+{
359
+ uint32_t buf_tx[CANFD_FRAME_SIZE] = { 0x5a5bb9a4, 0xf8000000 };
360
+ uint32_t buf_rx[CANFD_FRAME_SIZE] = { 0x00, 0x00, 0x00, 0x00 };
361
+ uint32_t status = 0;
362
+
363
+ generate_random_data(buf_tx, true);
364
+
365
+ QTestState *qts = qtest_init("-machine xlnx-versal-virt"
366
+ " -object can-bus,id=canbus"
367
+ " -machine canbus0=canbus"
368
+ " -machine canbus1=canbus"
369
+ );
370
+
371
+ configure_canfd(qts, MSR_NORMAL_MODE);
372
+
373
+ /* Check if CANFD0 and CANFD1 are in Normal mode. */
374
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
375
+ status = status & STATUS_REG_MASK;
376
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
377
+
378
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
379
+ status = status & STATUS_REG_MASK;
380
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
381
+
382
+ write_data(qts, CANFD0_BASE_ADDR, buf_tx, true);
383
+
384
+ send_data(qts, CANFD0_BASE_ADDR);
385
+ read_data(qts, CANFD1_BASE_ADDR, buf_rx);
386
+ match_rx_tx_data(buf_tx, buf_rx, true);
387
+
388
+ qtest_quit(qts);
389
+}
390
+
391
+/*
392
+ * This test runs CANFD0 and CANFD1 in loopback mode. The data sent from
+ * the TX registers of each controller is compared with the data in its
+ * own RX registers.
395
+ */
396
+static void test_can_loopback(void)
397
+{
398
+ uint32_t buf_tx[CANFD_FRAME_SIZE] = { 0x5a5bb9a4, 0xf8000000 };
399
+ uint32_t buf_rx[CANFD_FRAME_SIZE] = { 0x00, 0x00, 0x00, 0x00 };
400
+ uint32_t status = 0;
401
+
402
+ generate_random_data(buf_tx, true);
403
+
404
+ QTestState *qts = qtest_init("-machine xlnx-versal-virt"
405
+ " -object can-bus,id=canbus"
406
+ " -machine canbus0=canbus"
407
+ " -machine canbus1=canbus"
408
+ );
409
+
410
+ configure_canfd(qts, MSR_LOOPBACK_MODE);
411
+
412
+ /* Check if CANFD0 and CANFD1 are set in correct loopback mode. */
413
+ status = qtest_readl(qts, CANFD0_BASE_ADDR + R_SR_OFFSET);
414
+ status = status & STATUS_REG_MASK;
415
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
416
+
417
+ status = qtest_readl(qts, CANFD1_BASE_ADDR + R_SR_OFFSET);
418
+ status = status & STATUS_REG_MASK;
419
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
420
+
421
+ write_data(qts, CANFD0_BASE_ADDR, buf_tx, true);
422
+
423
+ send_data(qts, CANFD0_BASE_ADDR);
424
+ read_data(qts, CANFD0_BASE_ADDR, buf_rx);
425
+ match_rx_tx_data(buf_tx, buf_rx, true);
426
+
427
+ generate_random_data(buf_tx, true);
428
+
429
+ write_data(qts, CANFD1_BASE_ADDR, buf_tx, true);
430
+
431
+ send_data(qts, CANFD1_BASE_ADDR);
432
+ read_data(qts, CANFD1_BASE_ADDR, buf_rx);
433
+ match_rx_tx_data(buf_tx, buf_rx, true);
434
+
435
+ qtest_quit(qts);
436
+}
437
+
438
+int main(int argc, char **argv)
439
+{
440
+ g_test_init(&argc, &argv, NULL);
441
+
442
+ qtest_add_func("/net/canfd/can_data_transfer", test_can_data_transfer);
443
+ qtest_add_func("/net/canfd/canfd_data_transfer", test_canfd_data_transfer);
444
+ qtest_add_func("/net/canfd/can_loopback", test_can_loopback);
445
+
446
+ return g_test_run();
447
+}
448
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
15
index XXXXXXX..XXXXXXX 100644
449
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/xilinx_spips.c
450
--- a/tests/qtest/meson.build
17
+++ b/hw/ssi/xilinx_spips.c
451
+++ b/tests/qtest/meson.build
18
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
452
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
19
{
453
(config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
20
XilinxSPIPS *s = XILINX_SPIPS(d);
454
['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
21
455
(config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
22
- int i;
456
+ (config_all_devices.has_key('CONFIG_XLNX_VERSAL') ? ['xlnx-canfd-test'] : []) + \
23
- for (i = 0; i < XLNX_SPIPS_R_MAX; i++) {
457
(config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
24
- s->regs[i] = 0;
458
(config_all.has_key('CONFIG_TCG') and \
25
- }
459
config_all_devices.has_key('CONFIG_TPM_TIS_I2C') ? ['tpm-tis-i2c-test'] : []) + \
26
+ memset(s->regs, 0, sizeof(s->regs));
27
28
fifo8_reset(&s->rx_fifo);
29
fifo8_reset(&s->rx_fifo);
30
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
31
static void xlnx_zynqmp_qspips_reset(DeviceState *d)
32
{
33
XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(d);
34
- int i;
35
36
xilinx_spips_reset(d);
37
38
- for (i = 0; i < XLNX_ZYNQMP_SPIPS_R_MAX; i++) {
39
- s->regs[i] = 0;
40
- }
41
+ memset(s->regs, 0, sizeof(s->regs));
42
+
43
fifo8_reset(&s->rx_fifo_g);
44
fifo8_reset(&s->rx_fifo_g);
45
fifo32_reset(&s->fifo_g);
46
--
460
--
47
2.7.4
461
2.34.1
48
49
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: qianfan Zhao <qianfanguijin@163.com>
2
2
3
Following the ZynqMP register spec let's ensure that all reset values
3
Allwinner R40 (sun8i) SoC features a Quad-Core Cortex-A7 ARM CPU,
4
are set.
4
and a Mali400 MP2 GPU from ARM. It's also known as the Allwinner T3
5
for In-Car Entertainment usage, A40i and A40pro are variants that
6
differ in applicable temperatures range (industrial and military).
5
7
6
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
8
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
9
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
8
Message-id: 19836f3e0a298b13343c5a59c87425355e7fd8bd.1513104804.git.alistair.francis@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
include/hw/ssi/xilinx_spips.h | 2 +-
12
include/hw/arm/allwinner-r40.h | 110 +++++++++
12
hw/ssi/xilinx_spips.c | 35 ++++++++++++++++++++++++++++++-----
13
hw/arm/allwinner-r40.c | 415 +++++++++++++++++++++++++++++++++
13
2 files changed, 31 insertions(+), 6 deletions(-)
14
hw/arm/bananapi_m2u.c | 129 ++++++++++
15
hw/arm/Kconfig | 10 +
16
hw/arm/meson.build | 1 +
17
5 files changed, 665 insertions(+)
18
create mode 100644 include/hw/arm/allwinner-r40.h
19
create mode 100644 hw/arm/allwinner-r40.c
20
create mode 100644 hw/arm/bananapi_m2u.c
14
21
15
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
22
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
23
new file mode 100644
24
index XXXXXXX..XXXXXXX
25
--- /dev/null
26
+++ b/include/hw/arm/allwinner-r40.h
27
@@ -XXX,XX +XXX,XX @@
28
+/*
29
+ * Allwinner R40/A40i/T3 System on Chip emulation
30
+ *
31
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
32
+ *
33
+ * This program is free software: you can redistribute it and/or modify
34
+ * it under the terms of the GNU General Public License as published by
35
+ * the Free Software Foundation, either version 2 of the License, or
36
+ * (at your option) any later version.
37
+ *
38
+ * This program is distributed in the hope that it will be useful,
39
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
40
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
41
+ * GNU General Public License for more details.
42
+ *
43
+ * You should have received a copy of the GNU General Public License
44
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
45
+ */
46
+
47
+#ifndef HW_ARM_ALLWINNER_R40_H
48
+#define HW_ARM_ALLWINNER_R40_H
49
+
50
+#include "qom/object.h"
51
+#include "hw/arm/boot.h"
52
+#include "hw/timer/allwinner-a10-pit.h"
53
+#include "hw/intc/arm_gic.h"
54
+#include "hw/sd/allwinner-sdhost.h"
55
+#include "target/arm/cpu.h"
56
+#include "sysemu/block-backend.h"
57
+
58
+enum {
59
+ AW_R40_DEV_SRAM_A1,
60
+ AW_R40_DEV_SRAM_A2,
61
+ AW_R40_DEV_SRAM_A3,
62
+ AW_R40_DEV_SRAM_A4,
63
+ AW_R40_DEV_MMC0,
64
+ AW_R40_DEV_MMC1,
65
+ AW_R40_DEV_MMC2,
66
+ AW_R40_DEV_MMC3,
67
+ AW_R40_DEV_CCU,
68
+ AW_R40_DEV_PIT,
69
+ AW_R40_DEV_UART0,
70
+ AW_R40_DEV_GIC_DIST,
71
+ AW_R40_DEV_GIC_CPU,
72
+ AW_R40_DEV_GIC_HYP,
73
+ AW_R40_DEV_GIC_VCPU,
74
+ AW_R40_DEV_SDRAM
75
+};
76
+
77
+#define AW_R40_NUM_CPUS (4)
78
+
79
+/**
80
+ * Allwinner R40 object model
81
+ * @{
82
+ */
83
+
84
+/** Object type for the Allwinner R40 SoC */
85
+#define TYPE_AW_R40 "allwinner-r40"
86
+
87
+/** Convert input object to Allwinner R40 state object */
88
+OBJECT_DECLARE_SIMPLE_TYPE(AwR40State, AW_R40)
89
+
90
+/** @} */
91
+
92
+/**
93
+ * Allwinner R40 object
94
+ *
95
+ * This struct contains the state of all the devices
96
+ * which are currently emulated by the R40 SoC code.
97
+ */
98
+#define AW_R40_NUM_MMCS 4
99
+
100
+struct AwR40State {
101
+ /*< private >*/
102
+ DeviceState parent_obj;
103
+ /*< public >*/
104
+
105
+ ARMCPU cpus[AW_R40_NUM_CPUS];
106
+ const hwaddr *memmap;
107
+ AwA10PITState timer;
108
+ AwSdHostState mmc[AW_R40_NUM_MMCS];
109
+ GICState gic;
110
+ MemoryRegion sram_a1;
111
+ MemoryRegion sram_a2;
112
+ MemoryRegion sram_a3;
113
+ MemoryRegion sram_a4;
114
+};
115
+
116
+/**
117
+ * Emulate Boot ROM firmware setup functionality.
118
+ *
119
+ * A real Allwinner R40 SoC contains a Boot ROM
120
+ * which is the first code that runs right after
121
+ * the SoC is powered on. The Boot ROM is responsible
122
+ * for loading user code (e.g. a bootloader) from any
123
+ * of the supported external devices and writing the
124
+ * downloaded code to internal SRAM. After loading the SoC
125
+ * begins executing the code written to SRAM.
126
+ *
127
+ * This function emulates the Boot ROM by copying 32 KiB
128
+ * of data from the given block device and writes it to
129
+ * the start of the first internal SRAM memory.
130
+ *
131
+ * @s: Allwinner R40 state object pointer
132
+ * @blk: Block backend device object pointer
133
+ * @unit: the mmc control's unit
134
+ */
135
+bool allwinner_r40_bootrom_setup(AwR40State *s, BlockBackend *blk, int unit);
136
+
137
+#endif /* HW_ARM_ALLWINNER_R40_H */
138
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
139
new file mode 100644
140
index XXXXXXX..XXXXXXX
141
--- /dev/null
142
+++ b/hw/arm/allwinner-r40.c
143
@@ -XXX,XX +XXX,XX @@
144
+/*
145
+ * Allwinner R40/A40i/T3 System on Chip emulation
146
+ *
147
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
148
+ *
149
+ * This program is free software: you can redistribute it and/or modify
150
+ * it under the terms of the GNU General Public License as published by
151
+ * the Free Software Foundation, either version 2 of the License, or
152
+ * (at your option) any later version.
153
+ *
154
+ * This program is distributed in the hope that it will be useful,
155
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
156
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
157
+ * GNU General Public License for more details.
158
+ *
159
+ * You should have received a copy of the GNU General Public License
160
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
161
+ */
162
+
163
+#include "qemu/osdep.h"
164
+#include "qapi/error.h"
165
+#include "qemu/error-report.h"
166
+#include "qemu/bswap.h"
167
+#include "qemu/module.h"
168
+#include "qemu/units.h"
169
+#include "hw/qdev-core.h"
170
+#include "hw/sysbus.h"
171
+#include "hw/char/serial.h"
172
+#include "hw/misc/unimp.h"
173
+#include "hw/usb/hcd-ehci.h"
174
+#include "hw/loader.h"
175
+#include "sysemu/sysemu.h"
176
+#include "hw/arm/allwinner-r40.h"
177
+
178
+/* Memory map */
179
+const hwaddr allwinner_r40_memmap[] = {
180
+ [AW_R40_DEV_SRAM_A1] = 0x00000000,
181
+ [AW_R40_DEV_SRAM_A2] = 0x00004000,
182
+ [AW_R40_DEV_SRAM_A3] = 0x00008000,
183
+ [AW_R40_DEV_SRAM_A4] = 0x0000b400,
184
+ [AW_R40_DEV_MMC0] = 0x01c0f000,
185
+ [AW_R40_DEV_MMC1] = 0x01c10000,
186
+ [AW_R40_DEV_MMC2] = 0x01c11000,
187
+ [AW_R40_DEV_MMC3] = 0x01c12000,
188
+ [AW_R40_DEV_PIT] = 0x01c20c00,
189
+ [AW_R40_DEV_UART0] = 0x01c28000,
190
+ [AW_R40_DEV_GIC_DIST] = 0x01c81000,
191
+ [AW_R40_DEV_GIC_CPU] = 0x01c82000,
192
+ [AW_R40_DEV_GIC_HYP] = 0x01c84000,
193
+ [AW_R40_DEV_GIC_VCPU] = 0x01c86000,
194
+ [AW_R40_DEV_SDRAM] = 0x40000000
195
+};
196
+
197
+/* List of unimplemented devices */
198
+struct AwR40Unimplemented {
199
+ const char *device_name;
200
+ hwaddr base;
201
+ hwaddr size;
202
+};
203
+
204
+static struct AwR40Unimplemented r40_unimplemented[] = {
205
+ { "d-engine", 0x01000000, 4 * MiB },
206
+ { "d-inter", 0x01400000, 128 * KiB },
207
+ { "sram-c", 0x01c00000, 4 * KiB },
208
+ { "dma", 0x01c02000, 4 * KiB },
209
+ { "nfdc", 0x01c03000, 4 * KiB },
210
+ { "ts", 0x01c04000, 4 * KiB },
211
+ { "spi0", 0x01c05000, 4 * KiB },
212
+ { "spi1", 0x01c06000, 4 * KiB },
213
+ { "cs0", 0x01c09000, 4 * KiB },
214
+ { "keymem", 0x01c0a000, 4 * KiB },
215
+ { "emac", 0x01c0b000, 4 * KiB },
216
+ { "usb0-otg", 0x01c13000, 4 * KiB },
217
+ { "usb0-host", 0x01c14000, 4 * KiB },
218
+ { "crypto", 0x01c15000, 4 * KiB },
219
+ { "spi2", 0x01c17000, 4 * KiB },
220
+ { "sata", 0x01c18000, 4 * KiB },
221
+ { "usb1-host", 0x01c19000, 4 * KiB },
222
+ { "sid", 0x01c1b000, 4 * KiB },
223
+ { "usb2-host", 0x01c1c000, 4 * KiB },
224
+ { "cs1", 0x01c1d000, 4 * KiB },
225
+ { "spi3", 0x01c1f000, 4 * KiB },
226
+ { "ccu", 0x01c20000, 1 * KiB },
227
+ { "rtc", 0x01c20400, 1 * KiB },
228
+ { "pio", 0x01c20800, 1 * KiB },
229
+ { "owa", 0x01c21000, 1 * KiB },
230
+ { "ac97", 0x01c21400, 1 * KiB },
231
+ { "cir0", 0x01c21800, 1 * KiB },
232
+ { "cir1", 0x01c21c00, 1 * KiB },
233
+ { "pcm0", 0x01c22000, 1 * KiB },
234
+ { "pcm1", 0x01c22400, 1 * KiB },
235
+ { "pcm2", 0x01c22800, 1 * KiB },
236
+ { "audio", 0x01c22c00, 1 * KiB },
237
+ { "keypad", 0x01c23000, 1 * KiB },
238
+ { "pwm", 0x01c23400, 1 * KiB },
239
+ { "keyadc", 0x01c24400, 1 * KiB },
240
+ { "ths", 0x01c24c00, 1 * KiB },
241
+ { "rtp", 0x01c25000, 1 * KiB },
242
+ { "pmu", 0x01c25400, 1 * KiB },
243
+ { "cpu-cfg", 0x01c25c00, 1 * KiB },
244
+ { "uart0", 0x01c28000, 1 * KiB },
245
+ { "uart1", 0x01c28400, 1 * KiB },
246
+ { "uart2", 0x01c28800, 1 * KiB },
247
+ { "uart3", 0x01c28c00, 1 * KiB },
248
+ { "uart4", 0x01c29000, 1 * KiB },
249
+ { "uart5", 0x01c29400, 1 * KiB },
250
+ { "uart6", 0x01c29800, 1 * KiB },
251
+ { "uart7", 0x01c29c00, 1 * KiB },
252
+ { "ps20", 0x01c2a000, 1 * KiB },
253
+ { "ps21", 0x01c2a400, 1 * KiB },
254
+ { "twi0", 0x01c2ac00, 1 * KiB },
255
+ { "twi1", 0x01c2b000, 1 * KiB },
256
+ { "twi2", 0x01c2b400, 1 * KiB },
257
+ { "twi3", 0x01c2b800, 1 * KiB },
258
+ { "twi4", 0x01c2c000, 1 * KiB },
259
+ { "scr", 0x01c2c400, 1 * KiB },
260
+ { "tvd-top", 0x01c30000, 4 * KiB },
261
+ { "tvd0", 0x01c31000, 4 * KiB },
262
+ { "tvd1", 0x01c32000, 4 * KiB },
263
+ { "tvd2", 0x01c33000, 4 * KiB },
264
+ { "tvd3", 0x01c34000, 4 * KiB },
265
+ { "gpu", 0x01c40000, 64 * KiB },
266
+ { "gmac", 0x01c50000, 64 * KiB },
267
+ { "hstmr", 0x01c60000, 4 * KiB },
268
+ { "dram-com", 0x01c62000, 4 * KiB },
269
+ { "dram-ctl", 0x01c63000, 4 * KiB },
270
+ { "tcon-top", 0x01c70000, 4 * KiB },
271
+ { "lcd0", 0x01c71000, 4 * KiB },
272
+ { "lcd1", 0x01c72000, 4 * KiB },
273
+ { "tv0", 0x01c73000, 4 * KiB },
274
+ { "tv1", 0x01c74000, 4 * KiB },
275
+ { "tve-top", 0x01c90000, 16 * KiB },
276
+ { "tve0", 0x01c94000, 16 * KiB },
277
+ { "tve1", 0x01c98000, 16 * KiB },
278
+ { "mipi_dsi", 0x01ca0000, 4 * KiB },
279
+ { "mipi_dphy", 0x01ca1000, 4 * KiB },
280
+ { "ve", 0x01d00000, 1024 * KiB },
281
+ { "mp", 0x01e80000, 128 * KiB },
282
+ { "hdmi", 0x01ee0000, 128 * KiB },
283
+ { "prcm", 0x01f01400, 1 * KiB },
284
+ { "debug", 0x3f500000, 64 * KiB },
285
+ { "cpubist", 0x3f501000, 4 * KiB },
286
+ { "dcu", 0x3fff0000, 64 * KiB },
288
+ { "brom", 0xffff0000, 36 * KiB }
289
+};
290
+
291
+/* Per Processor Interrupts */
292
+enum {
293
+ AW_R40_GIC_PPI_MAINT = 9,
294
+ AW_R40_GIC_PPI_HYPTIMER = 10,
295
+ AW_R40_GIC_PPI_VIRTTIMER = 11,
296
+ AW_R40_GIC_PPI_SECTIMER = 13,
297
+ AW_R40_GIC_PPI_PHYSTIMER = 14
298
+};
299
+
300
+/* Shared Processor Interrupts */
301
+enum {
302
+ AW_R40_GIC_SPI_UART0 = 1,
303
+ AW_R40_GIC_SPI_TIMER0 = 22,
304
+ AW_R40_GIC_SPI_TIMER1 = 23,
305
+ AW_R40_GIC_SPI_MMC0 = 32,
306
+ AW_R40_GIC_SPI_MMC1 = 33,
307
+ AW_R40_GIC_SPI_MMC2 = 34,
308
+ AW_R40_GIC_SPI_MMC3 = 35,
309
+};
310
+
311
+/* Allwinner R40 general constants */
312
+enum {
313
+ AW_R40_GIC_NUM_SPI = 128
314
+};
315
+
316
+#define BOOT0_MAGIC "eGON.BT0"
317
+
318
+/* The low 8-bits of the 'boot_media' field in the SPL header */
319
+#define SUNXI_BOOTED_FROM_MMC0 0
320
+#define SUNXI_BOOTED_FROM_NAND 1
321
+#define SUNXI_BOOTED_FROM_MMC2 2
322
+#define SUNXI_BOOTED_FROM_SPI 3
323
+
324
+struct boot_file_head {
325
+ uint32_t b_instruction;
326
+ uint8_t magic[8];
327
+ uint32_t check_sum;
328
+ uint32_t length;
329
+ uint32_t pub_head_size;
330
+ uint32_t fel_script_address;
331
+ uint32_t fel_uEnv_length;
332
+ uint32_t dt_name_offset;
333
+ uint32_t dram_size;
334
+ uint32_t boot_media;
335
+ uint32_t string_pool[13];
336
+};
337
+
338
+bool allwinner_r40_bootrom_setup(AwR40State *s, BlockBackend *blk, int unit)
339
+{
340
+ const int64_t rom_size = 32 * KiB;
341
+ g_autofree uint8_t *buffer = g_new0(uint8_t, rom_size);
342
+ struct boot_file_head *head = (struct boot_file_head *)buffer;
343
+
344
+ if (blk_pread(blk, 8 * KiB, rom_size, buffer, 0) < 0) {
345
+ error_setg(&error_fatal, "%s: failed to read BlockBackend data",
346
+ __func__);
347
+ return false;
348
+ }
349
+
350
+ /* we only check the magic string here. */
351
+ if (memcmp(head->magic, BOOT0_MAGIC, sizeof(head->magic))) {
352
+ return false;
353
+ }
354
+
355
+ /*
356
+ * Simulate the behavior of the bootROM, it will change the boot_media
357
+ * flag to indicate where the chip is booting from. R40 can boot from
358
+ * mmc0 or mmc2, the default value of boot_media is zero
359
+ * (SUNXI_BOOTED_FROM_MMC0), let's fix this flag when it is booting from
360
+ * the others.
361
+ */
362
+ if (unit == 2) {
363
+ head->boot_media = cpu_to_le32(SUNXI_BOOTED_FROM_MMC2);
364
+ } else {
365
+ head->boot_media = cpu_to_le32(SUNXI_BOOTED_FROM_MMC0);
366
+ }
367
+
368
+ rom_add_blob("allwinner-r40.bootrom", buffer, rom_size,
369
+ rom_size, s->memmap[AW_R40_DEV_SRAM_A1],
370
+ NULL, NULL, NULL, NULL, false);
371
+ return true;
372
+}
373
+
374
+static void allwinner_r40_init(Object *obj)
375
+{
376
+ static const char *mmc_names[AW_R40_NUM_MMCS] = {
377
+ "mmc0", "mmc1", "mmc2", "mmc3"
378
+ };
379
+ AwR40State *s = AW_R40(obj);
380
+
381
+ s->memmap = allwinner_r40_memmap;
382
+
383
+ for (int i = 0; i < AW_R40_NUM_CPUS; i++) {
384
+ object_initialize_child(obj, "cpu[*]", &s->cpus[i],
385
+ ARM_CPU_TYPE_NAME("cortex-a7"));
386
+ }
387
+
388
+ object_initialize_child(obj, "gic", &s->gic, TYPE_ARM_GIC);
389
+
390
+ object_initialize_child(obj, "timer", &s->timer, TYPE_AW_A10_PIT);
391
+ object_property_add_alias(obj, "clk0-freq", OBJECT(&s->timer),
392
+ "clk0-freq");
393
+ object_property_add_alias(obj, "clk1-freq", OBJECT(&s->timer),
394
+ "clk1-freq");
395
+
396
+ for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
397
+ object_initialize_child(obj, mmc_names[i], &s->mmc[i],
398
+ TYPE_AW_SDHOST_SUN5I);
399
+ }
400
+}
401
+
402
+static void allwinner_r40_realize(DeviceState *dev, Error **errp)
403
+{
404
+ AwR40State *s = AW_R40(dev);
405
+ unsigned i;
406
+
407
+ /* CPUs */
408
+ for (i = 0; i < AW_R40_NUM_CPUS; i++) {
409
+
410
+ /*
411
+ * Disable secondary CPUs. Guest EL3 firmware will start
412
+ * them via CPU reset control registers.
413
+ */
414
+ qdev_prop_set_bit(DEVICE(&s->cpus[i]), "start-powered-off",
415
+ i > 0);
416
+
417
+ /* All exception levels required */
418
+ qdev_prop_set_bit(DEVICE(&s->cpus[i]), "has_el3", true);
419
+ qdev_prop_set_bit(DEVICE(&s->cpus[i]), "has_el2", true);
420
+
421
+ /* Mark realized */
422
+ qdev_realize(DEVICE(&s->cpus[i]), NULL, &error_fatal);
423
+ }
424
+
425
+ /* Generic Interrupt Controller */
426
+ qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", AW_R40_GIC_NUM_SPI +
427
+ GIC_INTERNAL);
428
+ qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
429
+ qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", AW_R40_NUM_CPUS);
430
+ qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", false);
431
+ qdev_prop_set_bit(DEVICE(&s->gic), "has-virtualization-extensions", true);
432
+ sysbus_realize(SYS_BUS_DEVICE(&s->gic), &error_fatal);
433
+
434
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 0, s->memmap[AW_R40_DEV_GIC_DIST]);
435
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 1, s->memmap[AW_R40_DEV_GIC_CPU]);
436
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 2, s->memmap[AW_R40_DEV_GIC_HYP]);
437
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gic), 3, s->memmap[AW_R40_DEV_GIC_VCPU]);
438
+
439
+ /*
440
+ * Wire the outputs from each CPU's generic timer and the GICv2
441
+ * maintenance interrupt signal to the appropriate GIC PPI inputs,
442
+ * and the GIC's IRQ/FIQ/VIRQ/VFIQ interrupt outputs to the CPU's inputs.
443
+ */
444
+ for (i = 0; i < AW_R40_NUM_CPUS; i++) {
445
+ DeviceState *cpudev = DEVICE(&s->cpus[i]);
446
+ int ppibase = AW_R40_GIC_NUM_SPI + i * GIC_INTERNAL + GIC_NR_SGIS;
447
+ int irq;
448
+ /*
449
+ * Mapping from the output timer irq lines from the CPU to the
450
+ * GIC PPI inputs used for this board.
451
+ */
452
+ const int timer_irq[] = {
453
+ [GTIMER_PHYS] = AW_R40_GIC_PPI_PHYSTIMER,
454
+ [GTIMER_VIRT] = AW_R40_GIC_PPI_VIRTTIMER,
455
+ [GTIMER_HYP] = AW_R40_GIC_PPI_HYPTIMER,
456
+ [GTIMER_SEC] = AW_R40_GIC_PPI_SECTIMER,
457
+ };
458
+
459
+ /* Connect CPU timer outputs to GIC PPI inputs */
460
+ for (irq = 0; irq < ARRAY_SIZE(timer_irq); irq++) {
461
+ qdev_connect_gpio_out(cpudev, irq,
462
+ qdev_get_gpio_in(DEVICE(&s->gic),
463
+ ppibase + timer_irq[irq]));
464
+ }
465
+
466
+ /* Connect GIC outputs to CPU interrupt inputs */
467
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
468
+ qdev_get_gpio_in(cpudev, ARM_CPU_IRQ));
469
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + AW_R40_NUM_CPUS,
470
+ qdev_get_gpio_in(cpudev, ARM_CPU_FIQ));
471
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + (2 * AW_R40_NUM_CPUS),
472
+ qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
473
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + (3 * AW_R40_NUM_CPUS),
474
+ qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
475
+
476
+ /* GIC maintenance signal */
477
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + (4 * AW_R40_NUM_CPUS),
478
+ qdev_get_gpio_in(DEVICE(&s->gic),
479
+ ppibase + AW_R40_GIC_PPI_MAINT));
480
+ }
481
+
482
+ /* Timer */
483
+ sysbus_realize(SYS_BUS_DEVICE(&s->timer), &error_fatal);
484
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->timer), 0, s->memmap[AW_R40_DEV_PIT]);
485
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer), 0,
486
+ qdev_get_gpio_in(DEVICE(&s->gic),
487
+ AW_R40_GIC_SPI_TIMER0));
488
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer), 1,
489
+ qdev_get_gpio_in(DEVICE(&s->gic),
490
+ AW_R40_GIC_SPI_TIMER1));
491
+
492
+ /* SRAM */
493
+ memory_region_init_ram(&s->sram_a1, OBJECT(dev), "sram A1",
494
+ 16 * KiB, &error_abort);
495
+ memory_region_init_ram(&s->sram_a2, OBJECT(dev), "sram A2",
496
+ 16 * KiB, &error_abort);
497
+ memory_region_init_ram(&s->sram_a3, OBJECT(dev), "sram A3",
498
+ 13 * KiB, &error_abort);
499
+ memory_region_init_ram(&s->sram_a4, OBJECT(dev), "sram A4",
500
+ 3 * KiB, &error_abort);
501
+ memory_region_add_subregion(get_system_memory(),
502
+ s->memmap[AW_R40_DEV_SRAM_A1], &s->sram_a1);
503
+ memory_region_add_subregion(get_system_memory(),
504
+ s->memmap[AW_R40_DEV_SRAM_A2], &s->sram_a2);
505
+ memory_region_add_subregion(get_system_memory(),
506
+ s->memmap[AW_R40_DEV_SRAM_A3], &s->sram_a3);
507
+ memory_region_add_subregion(get_system_memory(),
508
+ s->memmap[AW_R40_DEV_SRAM_A4], &s->sram_a4);
509
+
510
+ /* SD/MMC */
511
+ for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
512
+ qemu_irq irq = qdev_get_gpio_in(DEVICE(&s->gic),
513
+ AW_R40_GIC_SPI_MMC0 + i);
514
+ const hwaddr addr = s->memmap[AW_R40_DEV_MMC0 + i];
515
+
516
+ object_property_set_link(OBJECT(&s->mmc[i]), "dma-memory",
517
+ OBJECT(get_system_memory()), &error_fatal);
518
+ sysbus_realize(SYS_BUS_DEVICE(&s->mmc[i]), &error_fatal);
519
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->mmc[i]), 0, addr);
520
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc[i]), 0, irq);
521
+ }
522
+
523
+ /* UART0. For future clocktree API: All UARTS are connected to APB2_CLK. */
524
+ serial_mm_init(get_system_memory(), s->memmap[AW_R40_DEV_UART0], 2,
525
+ qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_UART0),
526
+ 115200, serial_hd(0), DEVICE_NATIVE_ENDIAN);
527
+
528
+ /* Unimplemented devices */
529
+ for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
530
+ create_unimplemented_device(r40_unimplemented[i].device_name,
531
+ r40_unimplemented[i].base,
532
+ r40_unimplemented[i].size);
533
+ }
534
+}
535
+
536
+static void allwinner_r40_class_init(ObjectClass *oc, void *data)
537
+{
538
+ DeviceClass *dc = DEVICE_CLASS(oc);
539
+
540
+ dc->realize = allwinner_r40_realize;
541
+ /* Reason: uses serial_hd() in realize function */
542
+ dc->user_creatable = false;
543
+}
544
+
545
+static const TypeInfo allwinner_r40_type_info = {
546
+ .name = TYPE_AW_R40,
547
+ .parent = TYPE_DEVICE,
548
+ .instance_size = sizeof(AwR40State),
549
+ .instance_init = allwinner_r40_init,
550
+ .class_init = allwinner_r40_class_init,
551
+};
552
+
553
+static void allwinner_r40_register_types(void)
554
+{
555
+ type_register_static(&allwinner_r40_type_info);
556
+}
557
+
558
+type_init(allwinner_r40_register_types)
559
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/arm/bananapi_m2u.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Bananapi M2U emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "exec/address-spaces.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "hw/boards.h"
+#include "hw/qdev-properties.h"
+#include "hw/arm/allwinner-r40.h"
+
+static struct arm_boot_info bpim2u_binfo;
+
+/*
+ * R40 can boot from mmc0 and mmc2, and bpim2u has two mmc interface, one is
+ * connected to sdcard and another mount an emmc media.
+ * Attach the mmc driver and try loading bootloader.
+ */
+static void mmc_attach_drive(AwR40State *s, AwSdHostState *mmc, int unit,
+ bool load_bootroom, bool *bootroom_loaded)
+{
+ DriveInfo *di = drive_get(IF_SD, 0, unit);
+ BlockBackend *blk = di ? blk_by_legacy_dinfo(di) : NULL;
+ BusState *bus;
+ DeviceState *carddev;
+
+ bus = qdev_get_child_bus(DEVICE(mmc), "sd-bus");
+ if (bus == NULL) {
+ error_report("No SD bus found in SOC object");
+ exit(1);
+ }
+
+ carddev = qdev_new(TYPE_SD_CARD);
+ qdev_prop_set_drive_err(carddev, "drive", blk, &error_fatal);
+ qdev_realize_and_unref(carddev, bus, &error_fatal);
+
+ if (load_bootroom && blk && blk_is_available(blk)) {
+ /* Use Boot ROM to copy data from SD card to SRAM */
+ *bootroom_loaded = allwinner_r40_bootrom_setup(s, blk, unit);
+ }
+}
+
+static void bpim2u_init(MachineState *machine)
+{
+ bool bootroom_loaded = false;
+ AwR40State *r40;
+
+ /* BIOS is not supported by this board */
+ if (machine->firmware) {
+ error_report("BIOS not supported for this machine");
+ exit(1);
+ }
+
+ /* Only allow Cortex-A7 for this board */
+ if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a7")) != 0) {
+ error_report("This board can only be used with cortex-a7 CPU");
+ exit(1);
+ }
+
+ r40 = AW_R40(object_new(TYPE_AW_R40));
+ object_property_add_child(OBJECT(machine), "soc", OBJECT(r40));
+ object_unref(OBJECT(r40));
+
+ /* Setup timer properties */
+ object_property_set_int(OBJECT(r40), "clk0-freq", 32768, &error_abort);
+ object_property_set_int(OBJECT(r40), "clk1-freq", 24 * 1000 * 1000,
+ &error_abort);
+
+ /* Mark R40 object realized */
+ qdev_realize(DEVICE(r40), NULL, &error_abort);
+
+ /*
+ * Plug in SD card and try load bootrom, R40 has 4 mmc controllers but can
+ * only booting from mmc0 and mmc2.
+ */
+ for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
+ switch (i) {
+ case 0:
+ case 2:
+ mmc_attach_drive(r40, &r40->mmc[i], i,
+ !machine->kernel_filename && !bootroom_loaded,
+ &bootroom_loaded);
+ break;
+ default:
+ mmc_attach_drive(r40, &r40->mmc[i], i, false, NULL);
+ break;
+ }
+ }
+
+ /* SDRAM */
+ memory_region_add_subregion(get_system_memory(),
+ r40->memmap[AW_R40_DEV_SDRAM], machine->ram);
+
+ bpim2u_binfo.loader_start = r40->memmap[AW_R40_DEV_SDRAM];
+ bpim2u_binfo.ram_size = machine->ram_size;
+ bpim2u_binfo.psci_conduit = QEMU_PSCI_CONDUIT_SMC;
+ arm_load_kernel(ARM_CPU(first_cpu), machine, &bpim2u_binfo);
+}
+
+static void bpim2u_machine_init(MachineClass *mc)
+{
+ mc->desc = "Bananapi M2U (Cortex-A7)";
+ mc->init = bpim2u_init;
+ mc->min_cpus = AW_R40_NUM_CPUS;
+ mc->max_cpus = AW_R40_NUM_CPUS;
+ mc->default_cpus = AW_R40_NUM_CPUS;
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a7");
+ mc->default_ram_size = 1 * GiB;
+ mc->default_ram_id = "bpim2u.ram";
+}
+
+DEFINE_MACHINE("bpim2u", bpim2u_machine_init)
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_H3
 select USB_EHCI_SYSBUS
 select SD

+config ALLWINNER_R40
+ bool
+ default y if TCG && ARM
+ select ALLWINNER_A10_PIT
+ select SERIAL
+ select ARM_TIMER
+ select ARM_GIC
+ select UNIMP
+ select SD
+
 config RASPI
 bool
 default y
diff --git a/hw/arm/meson.build b/hw/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/meson.build
+++ b/hw/arm/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'CONFIG_OMAP', if_true: files('omap1.c', 'omap2.c'))
 arm_ss.add(when: 'CONFIG_STRONGARM', if_true: files('strongarm.c'))
 arm_ss.add(when: 'CONFIG_ALLWINNER_A10', if_true: files('allwinner-a10.c', 'cubieboard.c'))
 arm_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3.c', 'orangepi.c'))
+arm_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40.c', 'bananapi_m2u.c'))
 arm_ss.add(when: 'CONFIG_RASPI', if_true: files('bcm2836.c', 'raspi.c'))
 arm_ss.add(when: 'CONFIG_STM32F100_SOC', if_true: files('stm32f100_soc.c'))
 arm_ss.add(when: 'CONFIG_STM32F205_SOC', if_true: files('stm32f205_soc.c'))
--
2.34.1
From: qianfan Zhao <qianfanguijin@163.com>

The CCU provides the registers to program the PLLs and controls
most of the clock generation, division, distribution, synchronization
and gating.

This commit adds support for the Clock Control Unit which emulates
a simple read/write register interface.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h | 2 +
 include/hw/misc/allwinner-r40-ccu.h | 65 +++++++++
 hw/arm/allwinner-r40.c | 8 +-
 hw/misc/allwinner-r40-ccu.c | 209 ++++++++++++++++++++++++++++
 hw/misc/meson.build | 1 +
 5 files changed, 284 insertions(+), 1 deletion(-)
 create mode 100644 include/hw/misc/allwinner-r40-ccu.h
 create mode 100644 hw/misc/allwinner-r40-ccu.c

diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@
#include "hw/timer/allwinner-a10-pit.h"
#include "hw/intc/arm_gic.h"
#include "hw/sd/allwinner-sdhost.h"
+#include "hw/misc/allwinner-r40-ccu.h"
#include "target/arm/cpu.h"
#include "sysemu/block-backend.h"

@@ -XXX,XX +XXX,XX @@ struct AwR40State {
const hwaddr *memmap;
AwA10PITState timer;
AwSdHostState mmc[AW_R40_NUM_MMCS];
+ AwR40ClockCtlState ccu;
GICState gic;
MemoryRegion sram_a1;
MemoryRegion sram_a2;
diff --git a/include/hw/misc/allwinner-r40-ccu.h b/include/hw/misc/allwinner-r40-ccu.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/allwinner-r40-ccu.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Allwinner R40 Clock Control Unit emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_MISC_ALLWINNER_R40_CCU_H
+#define HW_MISC_ALLWINNER_R40_CCU_H
+
+#include "qom/object.h"
+#include "hw/sysbus.h"
+
+/**
+ * @name Constants
+ * @{
+ */
+
+/** Size of register I/O address space used by CCU device */
+#define AW_R40_CCU_IOSIZE (0x400)
+
+/** Total number of known registers */
+#define AW_R40_CCU_REGS_NUM (AW_R40_CCU_IOSIZE / sizeof(uint32_t))
+
+/** @} */
+
+/**
+ * @name Object model
+ * @{
+ */
+
+#define TYPE_AW_R40_CCU "allwinner-r40-ccu"
+OBJECT_DECLARE_SIMPLE_TYPE(AwR40ClockCtlState, AW_R40_CCU)
+
+/** @} */
+
+/**
+ * Allwinner R40 CCU object instance state.
+ */
+struct AwR40ClockCtlState {
+ /*< private >*/
+ SysBusDevice parent_obj;
+ /*< public >*/
+
+ /** Maps I/O registers in physical memory */
+ MemoryRegion iomem;
+
+ /** Array of hardware registers */
+ uint32_t regs[AW_R40_CCU_REGS_NUM];
+
+};
+
+#endif /* HW_MISC_ALLWINNER_R40_CCU_H */
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
[AW_R40_DEV_MMC1] = 0x01c10000,
[AW_R40_DEV_MMC2] = 0x01c11000,
[AW_R40_DEV_MMC3] = 0x01c12000,
+ [AW_R40_DEV_CCU] = 0x01c20000,
[AW_R40_DEV_PIT] = 0x01c20c00,
[AW_R40_DEV_UART0] = 0x01c28000,
[AW_R40_DEV_GIC_DIST] = 0x01c81000,
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
{ "usb2-host", 0x01c1c000, 4 * KiB },
{ "cs1", 0x01c1d000, 4 * KiB },
{ "spi3", 0x01c1f000, 4 * KiB },
- { "ccu", 0x01c20000, 1 * KiB },
{ "rtc", 0x01c20400, 1 * KiB },
{ "pio", 0x01c20800, 1 * KiB },
{ "owa", 0x01c21000, 1 * KiB },
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
object_property_add_alias(obj, "clk1-freq", OBJECT(&s->timer),
"clk1-freq");

+ object_initialize_child(obj, "ccu", &s->ccu, TYPE_AW_R40_CCU);
+
for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
object_initialize_child(obj, mmc_names[i], &s->mmc[i],
TYPE_AW_SDHOST_SUN5I);
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
memory_region_add_subregion(get_system_memory(),
s->memmap[AW_R40_DEV_SRAM_A4], &s->sram_a4);

+ /* Clock Control Unit */
+ sysbus_realize(SYS_BUS_DEVICE(&s->ccu), &error_fatal);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->ccu), 0, s->memmap[AW_R40_DEV_CCU]);
+
/* SD/MMC */
for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
qemu_irq irq = qdev_get_gpio_in(DEVICE(&s->gic),
diff --git a/hw/misc/allwinner-r40-ccu.c b/hw/misc/allwinner-r40-ccu.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/misc/allwinner-r40-ccu.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Allwinner R40 Clock Control Unit emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "hw/sysbus.h"
+#include "migration/vmstate.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "hw/misc/allwinner-r40-ccu.h"
+
+/* CCU register offsets */
+enum {
+ REG_PLL_CPUX_CTRL = 0x0000,
+ REG_PLL_AUDIO_CTRL = 0x0008,
+ REG_PLL_VIDEO0_CTRL = 0x0010,
+ REG_PLL_VE_CTRL = 0x0018,
+ REG_PLL_DDR0_CTRL = 0x0020,
+ REG_PLL_PERIPH0_CTRL = 0x0028,
+ REG_PLL_PERIPH1_CTRL = 0x002c,
+ REG_PLL_VIDEO1_CTRL = 0x0030,
+ REG_PLL_SATA_CTRL = 0x0034,
+ REG_PLL_GPU_CTRL = 0x0038,
+ REG_PLL_MIPI_CTRL = 0x0040,
+ REG_PLL_DE_CTRL = 0x0048,
+ REG_PLL_DDR1_CTRL = 0x004c,
+ REG_AHB1_APB1_CFG = 0x0054,
+ REG_APB2_CFG = 0x0058,
+ REG_MMC0_CLK = 0x0088,
+ REG_MMC1_CLK = 0x008c,
+ REG_MMC2_CLK = 0x0090,
+ REG_MMC3_CLK = 0x0094,
+ REG_USBPHY_CFG = 0x00cc,
+ REG_PLL_DDR_AUX = 0x00f0,
+ REG_DRAM_CFG = 0x00f4,
+ REG_PLL_DDR1_CFG = 0x00f8,
+ REG_DRAM_CLK_GATING = 0x0100,
+ REG_GMAC_CLK = 0x0164,
+ REG_SYS_32K_CLK = 0x0310,
+ REG_PLL_LOCK_CTRL = 0x0320,
+};
+
+#define REG_INDEX(offset) (offset / sizeof(uint32_t))
+
+/* CCU register flags */
+enum {
+ REG_PLL_ENABLE = (1 << 31),
+ REG_PLL_LOCK = (1 << 28),
+};
+
+static uint64_t allwinner_r40_ccu_read(void *opaque, hwaddr offset,
+ unsigned size)
+{
+ const AwR40ClockCtlState *s = AW_R40_CCU(opaque);
+ const uint32_t idx = REG_INDEX(offset);
+
+ switch (offset) {
+ case 0x324 ... AW_R40_CCU_IOSIZE:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ return 0;
+ }
+
+ return s->regs[idx];
+}
+
+static void allwinner_r40_ccu_write(void *opaque, hwaddr offset,
+ uint64_t val, unsigned size)
+{
+ AwR40ClockCtlState *s = AW_R40_CCU(opaque);
+
+ switch (offset) {
+ case REG_DRAM_CFG: /* DRAM Configuration(for DDR0) */
+ /* bit16: SDRCLK_UPD (SDRCLK configuration 0 update) */
+ val &= ~(1 << 16);
+ break;
+ case REG_PLL_DDR1_CTRL: /* DDR1 Control register */
+ /* bit30: SDRPLL_UPD */
+ val &= ~(1 << 30);
+ if (val & REG_PLL_ENABLE) {
+ val |= REG_PLL_LOCK;
+ }
+ break;
+ case REG_PLL_CPUX_CTRL:
+ case REG_PLL_AUDIO_CTRL:
+ case REG_PLL_VE_CTRL:
+ case REG_PLL_VIDEO0_CTRL:
+ case REG_PLL_DDR0_CTRL:
+ case REG_PLL_PERIPH0_CTRL:
+ case REG_PLL_PERIPH1_CTRL:
+ case REG_PLL_VIDEO1_CTRL:
+ case REG_PLL_SATA_CTRL:
+ case REG_PLL_GPU_CTRL:
+ case REG_PLL_MIPI_CTRL:
+ case REG_PLL_DE_CTRL:
+ if (val & REG_PLL_ENABLE) {
+ val |= REG_PLL_LOCK;
+ }
+ break;
+ case 0x324 ... AW_R40_CCU_IOSIZE:
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ break;
+ default:
+ qemu_log_mask(LOG_UNIMP, "%s: unimplemented write offset 0x%04x\n",
+ __func__, (uint32_t)offset);
+ break;
+ }
+
+ s->regs[REG_INDEX(offset)] = (uint32_t) val;
+}
+
+static const MemoryRegionOps allwinner_r40_ccu_ops = {
+ .read = allwinner_r40_ccu_read,
+ .write = allwinner_r40_ccu_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .valid = {
+ .min_access_size = 4,
+ .max_access_size = 4,
+ },
+ .impl.min_access_size = 4,
+};
+
+static void allwinner_r40_ccu_reset(DeviceState *dev)
+{
+ AwR40ClockCtlState *s = AW_R40_CCU(dev);
+
+ memset(s->regs, 0, sizeof(s->regs));
+
+ /* Set default values for registers */
+ s->regs[REG_INDEX(REG_PLL_CPUX_CTRL)] = 0x00001000;
+ s->regs[REG_INDEX(REG_PLL_AUDIO_CTRL)] = 0x00035514;
+ s->regs[REG_INDEX(REG_PLL_VIDEO0_CTRL)] = 0x03006207;
+ s->regs[REG_INDEX(REG_PLL_VE_CTRL)] = 0x03006207;
+ s->regs[REG_INDEX(REG_PLL_DDR0_CTRL)] = 0x00001000,
+ s->regs[REG_INDEX(REG_PLL_PERIPH0_CTRL)] = 0x00041811;
+ s->regs[REG_INDEX(REG_PLL_PERIPH1_CTRL)] = 0x00041811;
+ s->regs[REG_INDEX(REG_PLL_VIDEO1_CTRL)] = 0x03006207;
+ s->regs[REG_INDEX(REG_PLL_SATA_CTRL)] = 0x00001811;
+ s->regs[REG_INDEX(REG_PLL_GPU_CTRL)] = 0x03006207;
+ s->regs[REG_INDEX(REG_PLL_MIPI_CTRL)] = 0x00000515;
+ s->regs[REG_INDEX(REG_PLL_DE_CTRL)] = 0x03006207;
+ s->regs[REG_INDEX(REG_PLL_DDR1_CTRL)] = 0x00001800;
+ s->regs[REG_INDEX(REG_AHB1_APB1_CFG)] = 0x00001010;
+ s->regs[REG_INDEX(REG_APB2_CFG)] = 0x01000000;
+ s->regs[REG_INDEX(REG_PLL_DDR_AUX)] = 0x00000001;
+ s->regs[REG_INDEX(REG_PLL_DDR1_CFG)] = 0x0ccca000;
+ s->regs[REG_INDEX(REG_SYS_32K_CLK)] = 0x0000000f;
+}
+
+static void allwinner_r40_ccu_init(Object *obj)
+{
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+ AwR40ClockCtlState *s = AW_R40_CCU(obj);
+
+ /* Memory mapping */
+ memory_region_init_io(&s->iomem, OBJECT(s), &allwinner_r40_ccu_ops, s,
+ TYPE_AW_R40_CCU, AW_R40_CCU_IOSIZE);
+ sysbus_init_mmio(sbd, &s->iomem);
+}
+
+static const VMStateDescription allwinner_r40_ccu_vmstate = {
+ .name = "allwinner-r40-ccu",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT32_ARRAY(regs, AwR40ClockCtlState, AW_R40_CCU_REGS_NUM),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static void allwinner_r40_ccu_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+
+ dc->reset = allwinner_r40_ccu_reset;
+ dc->vmsd = &allwinner_r40_ccu_vmstate;
+}
+
+static const TypeInfo allwinner_r40_ccu_info = {
+ .name = TYPE_AW_R40_CCU,
+ .parent = TYPE_SYS_BUS_DEVICE,
+ .instance_init = allwinner_r40_ccu_init,
+ .instance_size = sizeof(AwR40ClockCtlState),
+ .class_init = allwinner_r40_ccu_class_init,
+};
+
+static void allwinner_r40_ccu_register(void)
+{
+ type_register_static(&allwinner_r40_ccu_info);
+}
+
+type_init(allwinner_r40_ccu_register)
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ specific_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-cpucfg.c'
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-dramc.c'))
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-sysctrl.c'))
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-sid.c'))
+softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-ccu.c'))
softmmu_ss.add(when: 'CONFIG_AXP209_PMU', if_true: files('axp209.c'))
softmmu_ss.add(when: 'CONFIG_REALVIEW', if_true: files('arm_sysctl.c'))
softmmu_ss.add(when: 'CONFIG_NSERIES', if_true: files('cbus.c'))
--
2.34.1
From: qianfan Zhao <qianfanguijin@163.com>

The R40 has eight UARTs, supporting both 16450- and 16550-compatible modes.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h | 8 ++++++++
 hw/arm/allwinner-r40.c | 34 +++++++++++++++++++++++++++++++---
 2 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@ enum {
AW_R40_DEV_CCU,
AW_R40_DEV_PIT,
AW_R40_DEV_UART0,
+ AW_R40_DEV_UART1,
+ AW_R40_DEV_UART2,
+ AW_R40_DEV_UART3,
+ AW_R40_DEV_UART4,
+ AW_R40_DEV_UART5,
+ AW_R40_DEV_UART6,
+ AW_R40_DEV_UART7,
AW_R40_DEV_GIC_DIST,
AW_R40_DEV_GIC_CPU,
AW_R40_DEV_GIC_HYP,
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(AwR40State, AW_R40)
* which are currently emulated by the R40 SoC code.
34
31
* which are currently emulated by the R40 SoC code.
35
#endif
32
*/
36
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
33
#define AW_R40_NUM_MMCS 4
34
+#define AW_R40_NUM_UARTS 8
35
36
struct AwR40State {
37
/*< private >*/
38
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
37
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/intc/armv7m_nvic.c
40
--- a/hw/arm/allwinner-r40.c
39
+++ b/hw/intc/armv7m_nvic.c
41
+++ b/hw/arm/allwinner-r40.c
40
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_sysreg_ns_ops = {
42
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
41
.endianness = DEVICE_NATIVE_ENDIAN,
43
[AW_R40_DEV_CCU] = 0x01c20000,
42
};
44
[AW_R40_DEV_PIT] = 0x01c20c00,
43
45
[AW_R40_DEV_UART0] = 0x01c28000,
44
+static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,
46
+ [AW_R40_DEV_UART1] = 0x01c28400,
45
+ uint64_t value, unsigned size,
47
+ [AW_R40_DEV_UART2] = 0x01c28800,
46
+ MemTxAttrs attrs)
48
+ [AW_R40_DEV_UART3] = 0x01c28c00,
47
+{
49
+ [AW_R40_DEV_UART4] = 0x01c29000,
48
+ NVICState *s = opaque;
50
+ [AW_R40_DEV_UART5] = 0x01c29400,
49
+ MemoryRegion *mr;
51
+ [AW_R40_DEV_UART6] = 0x01c29800,
52
+ [AW_R40_DEV_UART7] = 0x01c29c00,
53
[AW_R40_DEV_GIC_DIST] = 0x01c81000,
54
[AW_R40_DEV_GIC_CPU] = 0x01c82000,
55
[AW_R40_DEV_GIC_HYP] = 0x01c84000,
56
@@ -XXX,XX +XXX,XX @@ enum {
57
/* Shared Processor Interrupts */
58
enum {
59
AW_R40_GIC_SPI_UART0 = 1,
60
+ AW_R40_GIC_SPI_UART1 = 2,
61
+ AW_R40_GIC_SPI_UART2 = 3,
62
+ AW_R40_GIC_SPI_UART3 = 4,
63
+ AW_R40_GIC_SPI_UART4 = 17,
64
+ AW_R40_GIC_SPI_UART5 = 18,
65
+ AW_R40_GIC_SPI_UART6 = 19,
66
+ AW_R40_GIC_SPI_UART7 = 20,
67
AW_R40_GIC_SPI_TIMER0 = 22,
68
AW_R40_GIC_SPI_TIMER1 = 23,
69
AW_R40_GIC_SPI_MMC0 = 32,
70
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
71
}
72
73
/* UART0. For future clocktree API: All UARTS are connected to APB2_CLK. */
74
- serial_mm_init(get_system_memory(), s->memmap[AW_R40_DEV_UART0], 2,
75
- qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_UART0),
76
- 115200, serial_hd(0), DEVICE_NATIVE_ENDIAN);
77
+ for (int i = 0; i < AW_R40_NUM_UARTS; i++) {
78
+ static const int uart_irqs[AW_R40_NUM_UARTS] = {
79
+ AW_R40_GIC_SPI_UART0,
80
+ AW_R40_GIC_SPI_UART1,
81
+ AW_R40_GIC_SPI_UART2,
82
+ AW_R40_GIC_SPI_UART3,
83
+ AW_R40_GIC_SPI_UART4,
84
+ AW_R40_GIC_SPI_UART5,
85
+ AW_R40_GIC_SPI_UART6,
86
+ AW_R40_GIC_SPI_UART7,
87
+ };
88
+ const hwaddr addr = s->memmap[AW_R40_DEV_UART0 + i];
50
+
89
+
51
+ /* Direct the access to the correct systick */
90
+ serial_mm_init(get_system_memory(), addr, 2,
52
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
91
+ qdev_get_gpio_in(DEVICE(&s->gic), uart_irqs[i]),
53
+ return memory_region_dispatch_write(mr, addr, value, size, attrs);
92
+ 115200, serial_hd(i), DEVICE_NATIVE_ENDIAN);
54
+}
55
+
56
+static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
57
+ uint64_t *data, unsigned size,
58
+ MemTxAttrs attrs)
59
+{
60
+ NVICState *s = opaque;
61
+ MemoryRegion *mr;
62
+
63
+ /* Direct the access to the correct systick */
64
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
65
+ return memory_region_dispatch_read(mr, addr, data, size, attrs);
66
+}
67
+
68
+static const MemoryRegionOps nvic_systick_ops = {
69
+ .read_with_attrs = nvic_systick_read,
70
+ .write_with_attrs = nvic_systick_write,
71
+ .endianness = DEVICE_NATIVE_ENDIAN,
72
+};
73
+
74
static int nvic_post_load(void *opaque, int version_id)
75
{
76
NVICState *s = opaque;
77
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
78
/* SysTick just asked us to pend its exception.
79
* (This is different from an external interrupt line's
80
* behaviour.)
81
- * TODO: when we implement the banked systicks we must make
82
- * this pend the correct banked exception.
83
+ * n == 0 : NonSecure systick
84
+ * n == 1 : Secure systick
85
*/
86
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, false);
87
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, n);
88
}
89
}
90
91
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
92
{
93
NVICState *s = NVIC(dev);
94
- SysBusDevice *systick_sbd;
95
Error *err = NULL;
96
int regionlen;
97
98
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
99
/* include space for internal exception vectors */
100
s->num_irq += NVIC_FIRST_IRQ;
101
102
- object_property_set_bool(OBJECT(&s->systick), true, "realized", &err);
103
+ object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
104
+ "realized", &err);
105
if (err != NULL) {
106
error_propagate(errp, err);
107
return;
108
}
109
- systick_sbd = SYS_BUS_DEVICE(&s->systick);
110
- sysbus_connect_irq(systick_sbd, 0,
111
- qdev_get_gpio_in_named(dev, "systick-trigger", 0));
112
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systick[M_REG_NS]), 0,
113
+ qdev_get_gpio_in_named(dev, "systick-trigger",
114
+ M_REG_NS));
115
+
116
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
117
+ /* We couldn't init the secure systick device in instance_init
118
+ * as we didn't know then if the CPU had the security extensions;
119
+ * so we have to do it here.
120
+ */
121
+ object_initialize(&s->systick[M_REG_S], sizeof(s->systick[M_REG_S]),
122
+ TYPE_SYSTICK);
123
+ qdev_set_parent_bus(DEVICE(&s->systick[M_REG_S]), sysbus_get_default());
124
+
125
+ object_property_set_bool(OBJECT(&s->systick[M_REG_S]), true,
126
+ "realized", &err);
127
+ if (err != NULL) {
128
+ error_propagate(errp, err);
129
+ return;
130
+ }
131
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systick[M_REG_S]), 0,
132
+ qdev_get_gpio_in_named(dev, "systick-trigger",
133
+ M_REG_S));
134
+ }
93
+ }
135
94
136
/* The NVIC and System Control Space (SCS) starts at 0xe000e000
95
/* Unimplemented devices */
137
* and looks like this:
96
for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
138
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
139
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
140
"nvic_sysregs", 0x1000);
141
memory_region_add_subregion(&s->container, 0, &s->sysregmem);
142
+
143
+ memory_region_init_io(&s->systickmem, OBJECT(s),
144
+ &nvic_systick_ops, s,
145
+ "nvic_systick", 0xe0);
146
+
147
memory_region_add_subregion_overlap(&s->container, 0x10,
148
- sysbus_mmio_get_region(systick_sbd, 0),
149
- 1);
150
+ &s->systickmem, 1);
151
152
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
153
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
154
&nvic_sysreg_ns_ops, &s->sysregmem,
155
"nvic_sysregs_ns", 0x1000);
156
memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
157
+ memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
158
+ &nvic_sysreg_ns_ops, &s->systickmem,
159
+ "nvic_systick_ns", 0xe0);
160
+ memory_region_add_subregion_overlap(&s->container, 0x20010,
161
+ &s->systick_ns_mem, 1);
162
}
163
164
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
165
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_instance_init(Object *obj)
166
NVICState *nvic = NVIC(obj);
167
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
168
169
- object_initialize(&nvic->systick, sizeof(nvic->systick), TYPE_SYSTICK);
170
- qdev_set_parent_bus(DEVICE(&nvic->systick), sysbus_get_default());
171
+ object_initialize(&nvic->systick[M_REG_NS],
172
+ sizeof(nvic->systick[M_REG_NS]), TYPE_SYSTICK);
173
+ qdev_set_parent_bus(DEVICE(&nvic->systick[M_REG_NS]), sysbus_get_default());
174
+ /* We can't initialize the secure systick here, as we don't know
175
+ * yet if we need it.
176
+ */
177
178
sysbus_init_irq(sbd, &nvic->excpout);
179
qdev_init_gpio_out_named(dev, &nvic->sysresetreq, "SYSRESETREQ", 1);
180
- qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger", 1);
181
+ qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger",
182
+ M_REG_NUM_BANKS);
183
}
184
185
static void armv7m_nvic_class_init(ObjectClass *klass, void *data)
186
--
97
--
187
2.7.4
98
2.34.1
188
189
diff view generated by jsdifflib
Make get_phys_addr_lpae() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-6-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 41 ++++++++++++++++++-----------------------
 1 file changed, 18 insertions(+), 23 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
-                               target_ulong *page_size_ptr, uint32_t *fsr,
+                               target_ulong *page_size_ptr,
                                ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);

 /* Security attributes for an address, as returned by v8m_security_lookup. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         hwaddr s2pa;
         int s2prot;
         int ret;
-        uint32_t fsr;

         ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
-                                 &txattrs, &s2prot, &s2size, &fsr, fi, NULL);
+                                 &txattrs, &s2prot, &s2size, fi, NULL);
         if (ret) {
             fi->s2addr = addr;
             fi->stage2 = true;
@@ -XXX,XX +XXX,XX @@ do_fault:
     return true;
 }

-/* Fault type for long-descriptor MMU fault reporting; this corresponds
- * to bits [5..2] in the STATUS field in long-format DFSR/IFSR.
- */
-typedef enum {
-    translation_fault = 1,
-    access_fault = 2,
-    permission_fault = 3,
-} MMUFaultType;
-
 /*
  * check_s2_mmu_setup
  * @cpu:        ARMCPU
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
-                               target_ulong *page_size_ptr, uint32_t *fsr,
+                               target_ulong *page_size_ptr,
                                ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
     CPUState *cs = CPU(cpu);
     /* Read an LPAE long-descriptor translation table. */
-    MMUFaultType fault_type = translation_fault;
+    ARMFaultType fault_type = ARMFault_Translation;
     uint32_t level;
     uint32_t epd = 0;
     int32_t t0sz, t1sz;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
         ttbr_select = 1;
     } else {
         /* in the gap between the two regions, this is a Translation fault */
-        fault_type = translation_fault;
+        fault_type = ARMFault_Translation;
         goto do_fault;
     }

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
     ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
                             inputsize, stride);
     if (!ok) {
-        fault_type = translation_fault;
+        fault_type = ARMFault_Translation;
         goto do_fault;
     }
     level = startlevel;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
     /* Here descaddr is the final physical address, and attributes
      * are all in attrs.
      */
-    fault_type = access_fault;
+    fault_type = ARMFault_AccessFlag;
     if ((attrs & (1 << 8)) == 0) {
         /* Access flag */
         goto do_fault;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
         *prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
     }

-    fault_type = permission_fault;
+    fault_type = ARMFault_Permission;
     if (!(*prot & (1 << access_type))) {
         goto do_fault;
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
     return false;

 do_fault:
-    /* Long-descriptor format IFSR/DFSR value */
-    *fsr = (1 << 9) | (fault_type << 2) | level;
+    fi->type = fault_type;
+    fi->level = level;
     /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
     fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_S2NS);
     return true;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
         /* S1 is done. Now do S2 translation.  */
         ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_S2NS,
                                  phys_ptr, attrs, &s2_prot,
-                                 page_size, fsr, fi,
+                                 page_size, fi,
                                  cacheattrs != NULL ? &cacheattrs2 : NULL);
+        *fsr = arm_fi_to_lfsc(fi);
         fi->s2addr = ipa;
         /* Combine the S1 and S2 perms.  */
         *prot &= s2_prot;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
     }

     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, phys_ptr,
-                                  attrs, prot, page_size, fsr, fi, cacheattrs);
+        bool ret = get_phys_addr_lpae(env, address, access_type, mmu_idx,
+                                      phys_ptr, attrs, prot, page_size,
+                                      fi, cacheattrs);
+
+        *fsr = arm_fi_to_lfsc(fi);
+        return ret;
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
                                     phys_ptr, attrs, prot, page_size, fi);
--
2.7.4

From: qianfan Zhao <qianfanguijin@163.com>

TWI (I2C) is designed as an interface between the CPU host and the
serial 2-wire bus. It supports all standard 2-wire transfers and can be
operated in standard mode (100 kbit/s) or fast mode, with data rates up
to 400 kbit/s.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h |  3 +++
 hw/arm/allwinner-r40.c         | 11 ++++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/intc/arm_gic.h"
 #include "hw/sd/allwinner-sdhost.h"
 #include "hw/misc/allwinner-r40-ccu.h"
+#include "hw/i2c/allwinner-i2c.h"
 #include "target/arm/cpu.h"
 #include "sysemu/block-backend.h"

@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_DEV_UART5,
     AW_R40_DEV_UART6,
     AW_R40_DEV_UART7,
+    AW_R40_DEV_TWI0,
     AW_R40_DEV_GIC_DIST,
     AW_R40_DEV_GIC_CPU,
     AW_R40_DEV_GIC_HYP,
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
     AwA10PITState timer;
     AwSdHostState mmc[AW_R40_NUM_MMCS];
     AwR40ClockCtlState ccu;
+    AWI2CState i2c0;
     GICState gic;
     MemoryRegion sram_a1;
     MemoryRegion sram_a2;
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
     [AW_R40_DEV_UART5]      = 0x01c29400,
     [AW_R40_DEV_UART6]      = 0x01c29800,
     [AW_R40_DEV_UART7]      = 0x01c29c00,
+    [AW_R40_DEV_TWI0]       = 0x01c2ac00,
     [AW_R40_DEV_GIC_DIST]   = 0x01c81000,
     [AW_R40_DEV_GIC_CPU]    = 0x01c82000,
     [AW_R40_DEV_GIC_HYP]    = 0x01c84000,
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
     { "uart7",      0x01c29c00, 1 * KiB },
     { "ps20",       0x01c2a000, 1 * KiB },
     { "ps21",       0x01c2a400, 1 * KiB },
-    { "twi0",       0x01c2ac00, 1 * KiB },
     { "twi1",       0x01c2b000, 1 * KiB },
     { "twi2",       0x01c2b400, 1 * KiB },
     { "twi3",       0x01c2b800, 1 * KiB },
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_GIC_SPI_UART1     = 2,
     AW_R40_GIC_SPI_UART2     = 3,
     AW_R40_GIC_SPI_UART3     = 4,
+    AW_R40_GIC_SPI_TWI0      = 7,
     AW_R40_GIC_SPI_UART4     = 17,
     AW_R40_GIC_SPI_UART5     = 18,
     AW_R40_GIC_SPI_UART6     = 19,
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
         object_initialize_child(obj, mmc_names[i], &s->mmc[i],
                                 TYPE_AW_SDHOST_SUN5I);
     }
+
+    object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
 }

 static void allwinner_r40_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
                        115200, serial_hd(i), DEVICE_NATIVE_ENDIAN);
     }

+    /* I2C */
+    sysbus_realize(SYS_BUS_DEVICE(&s->i2c0), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(&s->i2c0), 0, s->memmap[AW_R40_DEV_TWI0]);
+    sysbus_connect_irq(SYS_BUS_DEVICE(&s->i2c0), 0,
+                       qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_TWI0));
+
     /* Unimplemented devices */
     for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
         create_unimplemented_device(r40_unimplemented[i].device_name,
--
2.34.1
1
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
1
From: qianfan Zhao <qianfanguijin@163.com>
2
structure, which we convert to the FSC at the callsite.
3
2
3
This patch adds minimal support for AXP-221 PMU and connect it to
4
bananapi M2U board.
5
6
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
8
Message-id: 1512503192-2239-5-git-send-email-peter.maydell@linaro.org
9
---
8
---
10
target/arm/helper.c | 40 ++++++++++++++++++++++------------------
9
hw/arm/bananapi_m2u.c | 6 +
11
1 file changed, 22 insertions(+), 18 deletions(-)
10
hw/misc/axp209.c | 238 -----------------------------------
11
hw/misc/axp2xx.c | 283 ++++++++++++++++++++++++++++++++++++++++++
12
hw/arm/Kconfig | 3 +-
13
hw/misc/Kconfig | 2 +-
14
hw/misc/meson.build | 2 +-
15
hw/misc/trace-events | 8 +-
16
7 files changed, 297 insertions(+), 245 deletions(-)
17
delete mode 100644 hw/misc/axp209.c
18
create mode 100644 hw/misc/axp2xx.c
12
19
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
14
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
22
--- a/hw/arm/bananapi_m2u.c
16
+++ b/target/arm/helper.c
23
+++ b/hw/arm/bananapi_m2u.c
17
@@ -XXX,XX +XXX,XX @@ do_fault:
24
@@ -XXX,XX +XXX,XX @@
18
static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
25
#include "qapi/error.h"
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
26
#include "qemu/error-report.h"
20
hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
27
#include "hw/boards.h"
21
- target_ulong *page_size, uint32_t *fsr,
28
+#include "hw/i2c/i2c.h"
22
- ARMMMUFaultInfo *fi)
29
#include "hw/qdev-properties.h"
23
+ target_ulong *page_size, ARMMMUFaultInfo *fi)
30
#include "hw/arm/allwinner-r40.h"
31
32
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
24
{
33
{
25
CPUState *cs = CPU(arm_env_get_cpu(env));
34
bool bootroom_loaded = false;
26
- int code;
35
AwR40State *r40;
27
+ int level = 1;
36
+ I2CBus *i2c;
28
uint32_t table;
37
29
uint32_t desc;
38
/* BIOS is not supported by this board */
30
uint32_t xn;
39
if (machine->firmware) {
31
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
40
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
32
/* Lookup l1 descriptor. */
33
if (!get_level1_table_address(env, mmu_idx, &table, address)) {
34
/* Section translation fault if page walk is disabled by PD0 or PD1 */
35
- code = 5;
36
+ fi->type = ARMFault_Translation;
37
goto do_fault;
38
}
39
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
41
/* Section translation fault, or attempt to use the encoding
42
* which is Reserved on implementations without PXN.
43
*/
44
- code = 5;
45
+ fi->type = ARMFault_Translation;
46
goto do_fault;
47
}
48
if ((type == 1) || !(desc & (1 << 18))) {
49
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
50
} else {
51
dacr = env->cp15.dacr_s;
52
}
53
+ if (type == 1) {
54
+ level = 2;
55
+ }
56
domain_prot = (dacr >> (domain * 2)) & 3;
57
if (domain_prot == 0 || domain_prot == 2) {
58
- if (type != 1) {
59
- code = 9; /* Section domain fault. */
60
- } else {
61
- code = 11; /* Page domain fault. */
62
- }
63
+ /* Section or Page domain fault */
64
+ fi->type = ARMFault_Domain;
65
goto do_fault;
66
}
67
if (type != 1) {
68
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
69
ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
70
xn = desc & (1 << 4);
71
pxn = desc & 1;
72
- code = 13;
73
ns = extract32(desc, 19, 1);
74
} else {
75
if (arm_feature(env, ARM_FEATURE_PXN)) {
76
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
77
ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
78
switch (desc & 3) {
79
case 0: /* Page translation fault. */
80
- code = 7;
81
+ fi->type = ARMFault_Translation;
82
goto do_fault;
83
case 1: /* 64k page. */
84
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
85
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
86
/* Never happens, but compiler isn't smart enough to tell. */
87
abort();
88
}
89
- code = 15;
90
}
91
if (domain_prot == 3) {
92
*prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
93
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
94
if (pxn && !regime_is_user(env, mmu_idx)) {
95
xn = 1;
96
}
97
- if (xn && access_type == MMU_INST_FETCH)
98
+ if (xn && access_type == MMU_INST_FETCH) {
99
+ fi->type = ARMFault_Permission;
100
goto do_fault;
101
+ }
102
103
if (arm_feature(env, ARM_FEATURE_V6K) &&
104
(regime_sctlr(env, mmu_idx) & SCTLR_AFE)) {
105
/* The simplified model uses AP[0] as an access control bit. */
106
if ((ap & 1) == 0) {
107
/* Access flag fault. */
108
- code = (code == 15) ? 6 : 3;
109
+ fi->type = ARMFault_AccessFlag;
110
goto do_fault;
111
}
112
*prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
113
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
114
}
115
if (!(*prot & (1 << access_type))) {
116
/* Access permission fault. */
117
+ fi->type = ARMFault_Permission;
118
goto do_fault;
119
}
41
}
120
}
42
}
121
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
43
122
*phys_ptr = phys_addr;
44
+ /* Connect AXP221 */
123
return false;
45
+ i2c = I2C_BUS(qdev_get_child_bus(DEVICE(&r40->i2c0), "i2c"));
124
do_fault:
46
+ i2c_slave_create_simple(i2c, "axp221_pmu", 0x34);
125
- *fsr = code | (domain << 4);
47
+
126
+ fi->domain = domain;
48
/* SDRAM */
127
+ fi->level = level;
49
memory_region_add_subregion(get_system_memory(),
128
return true;
50
r40->memmap[AW_R40_DEV_SDRAM], machine->ram);
129
}
51
diff --git a/hw/misc/axp209.c b/hw/misc/axp209.c
130
52
deleted file mode 100644
131
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
53
index XXXXXXX..XXXXXXX
132
return get_phys_addr_lpae(env, address, access_type, mmu_idx, phys_ptr,
54
--- a/hw/misc/axp209.c
133
attrs, prot, page_size, fsr, fi, cacheattrs);
55
+++ /dev/null
134
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
56
@@ -XXX,XX +XXX,XX @@
135
- return get_phys_addr_v6(env, address, access_type, mmu_idx, phys_ptr,
57
-/*
136
- attrs, prot, page_size, fsr, fi);
58
- * AXP-209 PMU Emulation
137
+ bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
59
- *
138
+ phys_ptr, attrs, prot, page_size, fi);
60
- * Copyright (C) 2022 Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
139
+
61
- *
140
+ *fsr = arm_fi_to_sfsc(fi);
62
- * Permission is hereby granted, free of charge, to any person obtaining a
141
+ return ret;
63
- * copy of this software and associated documentation files (the "Software"),
142
} else {
64
- * to deal in the Software without restriction, including without limitation
143
bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
65
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
144
phys_ptr, prot, page_size, fi);
66
- * and/or sell copies of the Software, and to permit persons to whom the
67
- * Software is furnished to do so, subject to the following conditions:
68
- *
69
- * The above copyright notice and this permission notice shall be included in
70
- * all copies or substantial portions of the Software.
71
- *
72
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
73
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
74
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
75
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
76
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
77
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
78
- * DEALINGS IN THE SOFTWARE.
79
- *
80
- * SPDX-License-Identifier: MIT
81
- */
82
-
83
-#include "qemu/osdep.h"
84
-#include "qemu/log.h"
85
-#include "trace.h"
86
-#include "hw/i2c/i2c.h"
87
-#include "migration/vmstate.h"
88
-
89
-#define TYPE_AXP209_PMU "axp209_pmu"
90
-
91
-#define AXP209(obj) \
92
- OBJECT_CHECK(AXP209I2CState, (obj), TYPE_AXP209_PMU)
93
-
94
-/* registers */
95
-enum {
96
- REG_POWER_STATUS = 0x0u,
97
- REG_OPERATING_MODE,
98
- REG_OTG_VBUS_STATUS,
99
- REG_CHIP_VERSION,
100
- REG_DATA_CACHE_0,
101
- REG_DATA_CACHE_1,
102
- REG_DATA_CACHE_2,
103
- REG_DATA_CACHE_3,
104
- REG_DATA_CACHE_4,
105
- REG_DATA_CACHE_5,
106
- REG_DATA_CACHE_6,
107
- REG_DATA_CACHE_7,
108
- REG_DATA_CACHE_8,
109
- REG_DATA_CACHE_9,
110
- REG_DATA_CACHE_A,
111
- REG_DATA_CACHE_B,
112
- REG_POWER_OUTPUT_CTRL = 0x12u,
113
- REG_DC_DC2_OUT_V_CTRL = 0x23u,
114
- REG_DC_DC2_DVS_CTRL = 0x25u,
115
- REG_DC_DC3_OUT_V_CTRL = 0x27u,
116
- REG_LDO2_4_OUT_V_CTRL,
117
- REG_LDO3_OUT_V_CTRL,
118
- REG_VBUS_CH_MGMT = 0x30u,
119
- REG_SHUTDOWN_V_CTRL,
120
- REG_SHUTDOWN_CTRL,
121
- REG_CHARGE_CTRL_1,
122
- REG_CHARGE_CTRL_2,
123
- REG_SPARE_CHARGE_CTRL,
124
- REG_PEK_KEY_CTRL,
125
- REG_DC_DC_FREQ_SET,
126
- REG_CHR_TEMP_TH_SET,
127
- REG_CHR_HIGH_TEMP_TH_CTRL,
128
- REG_IPSOUT_WARN_L1,
129
- REG_IPSOUT_WARN_L2,
130
- REG_DISCHR_TEMP_TH_SET,
131
- REG_DISCHR_HIGH_TEMP_TH_CTRL,
132
- REG_IRQ_BANK_1_CTRL = 0x40u,
133
- REG_IRQ_BANK_2_CTRL,
134
- REG_IRQ_BANK_3_CTRL,
135
- REG_IRQ_BANK_4_CTRL,
136
- REG_IRQ_BANK_5_CTRL,
137
- REG_IRQ_BANK_1_STAT = 0x48u,
138
- REG_IRQ_BANK_2_STAT,
139
- REG_IRQ_BANK_3_STAT,
140
- REG_IRQ_BANK_4_STAT,
141
- REG_IRQ_BANK_5_STAT,
142
- REG_ADC_ACIN_V_H = 0x56u,
143
- REG_ADC_ACIN_V_L,
144
- REG_ADC_ACIN_CURR_H,
145
- REG_ADC_ACIN_CURR_L,
146
- REG_ADC_VBUS_V_H,
147
- REG_ADC_VBUS_V_L,
148
- REG_ADC_VBUS_CURR_H,
149
- REG_ADC_VBUS_CURR_L,
150
- REG_ADC_INT_TEMP_H,
151
- REG_ADC_INT_TEMP_L,
152
- REG_ADC_TEMP_SENS_V_H = 0x62u,
153
- REG_ADC_TEMP_SENS_V_L,
154
- REG_ADC_BAT_V_H = 0x78u,
155
- REG_ADC_BAT_V_L,
156
- REG_ADC_BAT_DISCHR_CURR_H,
157
- REG_ADC_BAT_DISCHR_CURR_L,
158
- REG_ADC_BAT_CHR_CURR_H,
159
- REG_ADC_BAT_CHR_CURR_L,
160
- REG_ADC_IPSOUT_V_H,
161
- REG_ADC_IPSOUT_V_L,
162
- REG_DC_DC_MOD_SEL = 0x80u,
163
- REG_ADC_EN_1,
164
- REG_ADC_EN_2,
165
- REG_ADC_SR_CTRL,
166
- REG_ADC_IN_RANGE,
167
- REG_GPIO1_ADC_IRQ_RISING_TH,
168
- REG_GPIO1_ADC_IRQ_FALLING_TH,
169
- REG_TIMER_CTRL = 0x8au,
170
- REG_VBUS_CTRL_MON_SRP,
171
- REG_OVER_TEMP_SHUTDOWN = 0x8fu,
172
- REG_GPIO0_FEAT_SET,
173
- REG_GPIO_OUT_HIGH_SET,
174
- REG_GPIO1_FEAT_SET,
175
- REG_GPIO2_FEAT_SET,
176
- REG_GPIO_SIG_STATE_SET_MON,
177
- REG_GPIO3_SET,
178
- REG_COULOMB_CNTR_CTRL = 0xb8u,
179
- REG_POWER_MEAS_RES,
180
- NR_REGS
181
-};
182
-
183
-#define AXP209_CHIP_VERSION_ID (0x01)
184
-#define AXP209_DC_DC2_OUT_V_CTRL_RESET (0x16)
185
-#define AXP209_IRQ_BANK_1_CTRL_RESET (0xd8)
186
-
187
-/* A simple I2C slave which returns values of ID or CNT register. */
188
-typedef struct AXP209I2CState {
189
- /*< private >*/
190
- I2CSlave i2c;
191
- /*< public >*/
192
- uint8_t regs[NR_REGS]; /* peripheral registers */
193
- uint8_t ptr; /* current register index */
194
- uint8_t count; /* counter used for tx/rx */
195
-} AXP209I2CState;
196
-
197
-/* Reset all counters and load ID register */
198
-static void axp209_reset_enter(Object *obj, ResetType type)
199
-{
200
- AXP209I2CState *s = AXP209(obj);
201
-
202
- memset(s->regs, 0, NR_REGS);
203
- s->ptr = 0;
204
- s->count = 0;
205
- s->regs[REG_CHIP_VERSION] = AXP209_CHIP_VERSION_ID;
206
- s->regs[REG_DC_DC2_OUT_V_CTRL] = AXP209_DC_DC2_OUT_V_CTRL_RESET;
207
- s->regs[REG_IRQ_BANK_1_CTRL] = AXP209_IRQ_BANK_1_CTRL_RESET;
208
-}
209
-
210
-/* Handle events from master. */
211
-static int axp209_event(I2CSlave *i2c, enum i2c_event event)
212
-{
213
- AXP209I2CState *s = AXP209(i2c);
214
-
215
- s->count = 0;
216
-
217
- return 0;
218
-}
219
-
220
-/* Called when master requests read */
221
-static uint8_t axp209_rx(I2CSlave *i2c)
222
-{
223
- AXP209I2CState *s = AXP209(i2c);
224
- uint8_t ret = 0xff;
225
-
226
- if (s->ptr < NR_REGS) {
227
- ret = s->regs[s->ptr++];
228
- }
229
-
230
- trace_axp209_rx(s->ptr - 1, ret);
231
-
232
- return ret;
233
-}
234
-
235
-/*
236
- * Called when master sends write.
237
- * Update ptr with byte 0, then perform write with second byte.
238
- */
239
-static int axp209_tx(I2CSlave *i2c, uint8_t data)
240
-{
241
- AXP209I2CState *s = AXP209(i2c);
242
-
243
- if (s->count == 0) {
244
- /* Store register address */
245
- s->ptr = data;
246
- s->count++;
247
- trace_axp209_select(data);
248
- } else {
249
- trace_axp209_tx(s->ptr, data);
250
- if (s->ptr == REG_DC_DC2_OUT_V_CTRL) {
251
- s->regs[s->ptr++] = data;
252
- }
253
- }
254
-
255
- return 0;
256
-}
257
-
258
-static const VMStateDescription vmstate_axp209 = {
259
- .name = TYPE_AXP209_PMU,
260
- .version_id = 1,
261
- .fields = (VMStateField[]) {
262
- VMSTATE_UINT8_ARRAY(regs, AXP209I2CState, NR_REGS),
263
- VMSTATE_UINT8(count, AXP209I2CState),
264
- VMSTATE_UINT8(ptr, AXP209I2CState),
265
- VMSTATE_END_OF_LIST()
266
- }
267
-};
268
-
269
-static void axp209_class_init(ObjectClass *oc, void *data)
270
-{
271
- DeviceClass *dc = DEVICE_CLASS(oc);
272
- I2CSlaveClass *isc = I2C_SLAVE_CLASS(oc);
273
- ResettableClass *rc = RESETTABLE_CLASS(oc);
274
-
275
- rc->phases.enter = axp209_reset_enter;
276
- dc->vmsd = &vmstate_axp209;
277
- isc->event = axp209_event;
278
- isc->recv = axp209_rx;
279
- isc->send = axp209_tx;
280
-}
281
-
282
-static const TypeInfo axp209_info = {
283
- .name = TYPE_AXP209_PMU,
284
- .parent = TYPE_I2C_SLAVE,
285
- .instance_size = sizeof(AXP209I2CState),
286
- .class_init = axp209_class_init
287
-};
288
-
289
-static void axp209_register_devices(void)
290
-{
291
- type_register_static(&axp209_info);
292
-}
293
-
294
-type_init(axp209_register_devices);
295
diff --git a/hw/misc/axp2xx.c b/hw/misc/axp2xx.c
296
new file mode 100644
297
index XXXXXXX..XXXXXXX
298
--- /dev/null
299
+++ b/hw/misc/axp2xx.c
300
@@ -XXX,XX +XXX,XX @@
301
+/*
302
+ * AXP-2XX PMU Emulation, supported models:
303
+ * AXP209
304
+ * AXP221
305
+ *
306
+ * Copyright (C) 2022 Strahinja Jankovic <strahinja.p.jankovic@gmail.com>
307
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
308
+ *
309
+ * Permission is hereby granted, free of charge, to any person obtaining a
310
+ * copy of this software and associated documentation files (the "Software"),
311
+ * to deal in the Software without restriction, including without limitation
312
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
313
+ * and/or sell copies of the Software, and to permit persons to whom the
314
+ * Software is furnished to do so, subject to the following conditions:
315
+ *
316
+ * The above copyright notice and this permission notice shall be included in
317
+ * all copies or substantial portions of the Software.
318
+ *
319
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
320
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
321
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
322
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
323
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
324
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
325
+ * DEALINGS IN THE SOFTWARE.
326
+ *
327
+ * SPDX-License-Identifier: MIT
328
+ */
329
+
330
+#include "qemu/osdep.h"
331
+#include "qemu/log.h"
332
+#include "qom/object.h"
333
+#include "trace.h"
334
+#include "hw/i2c/i2c.h"
335
+#include "migration/vmstate.h"
336
+
337
+#define TYPE_AXP2XX "axp2xx_pmu"
338
+#define TYPE_AXP209_PMU "axp209_pmu"
339
+#define TYPE_AXP221_PMU "axp221_pmu"
340
+
341
+OBJECT_DECLARE_TYPE(AXP2xxI2CState, AXP2xxClass, AXP2XX)
342
+
343
+#define NR_REGS (0xff)
344
+
345
+/* A simple I2C slave which returns values of ID or CNT register. */
346
+typedef struct AXP2xxI2CState {
347
+ /*< private >*/
348
+ I2CSlave i2c;
349
+ /*< public >*/
350
+ uint8_t regs[NR_REGS]; /* peripheral registers */
351
+ uint8_t ptr; /* current register index */
352
+ uint8_t count; /* counter used for tx/rx */
353
+} AXP2xxI2CState;
354
+
355
+typedef struct AXP2xxClass {
356
+ /*< private >*/
357
+ I2CSlaveClass parent_class;
358
+ /*< public >*/
359
+ void (*reset_enter)(AXP2xxI2CState *s, ResetType type);
360
+} AXP2xxClass;
361
+
362
+#define AXP209_CHIP_VERSION_ID (0x01)
363
+#define AXP209_DC_DC2_OUT_V_CTRL_RESET (0x16)
364
+
365
+/* Reset all counters and load ID register */
366
+static void axp209_reset_enter(AXP2xxI2CState *s, ResetType type)
367
+{
368
+ memset(s->regs, 0, NR_REGS);
369
+ s->ptr = 0;
370
+ s->count = 0;
371
+
372
+ s->regs[0x03] = AXP209_CHIP_VERSION_ID;
373
+ s->regs[0x23] = AXP209_DC_DC2_OUT_V_CTRL_RESET;
374
+
375
+ s->regs[0x30] = 0x60;
376
+ s->regs[0x32] = 0x46;
377
+ s->regs[0x34] = 0x41;
378
+ s->regs[0x35] = 0x22;
379
+ s->regs[0x36] = 0x5d;
380
+ s->regs[0x37] = 0x08;
381
+ s->regs[0x38] = 0xa5;
382
+ s->regs[0x39] = 0x1f;
383
+ s->regs[0x3a] = 0x68;
384
+ s->regs[0x3b] = 0x5f;
385
+ s->regs[0x3c] = 0xfc;
386
+ s->regs[0x3d] = 0x16;
387
+ s->regs[0x40] = 0xd8;
388
+ s->regs[0x42] = 0xff;
389
+ s->regs[0x43] = 0x3b;
390
+ s->regs[0x80] = 0xe0;
391
+ s->regs[0x82] = 0x83;
392
+ s->regs[0x83] = 0x80;
393
+ s->regs[0x84] = 0x32;
394
+ s->regs[0x86] = 0xff;
395
+ s->regs[0x90] = 0x07;
396
+ s->regs[0x91] = 0xa0;
397
+ s->regs[0x92] = 0x07;
398
+ s->regs[0x93] = 0x07;
399
+}
400
+
401
+#define AXP221_PWR_STATUS_ACIN_PRESENT BIT(7)
402
+#define AXP221_PWR_STATUS_ACIN_AVAIL BIT(6)
403
+#define AXP221_PWR_STATUS_VBUS_PRESENT BIT(5)
404
+#define AXP221_PWR_STATUS_VBUS_USED BIT(4)
405
+#define AXP221_PWR_STATUS_BAT_CHARGING BIT(2)
406
+#define AXP221_PWR_STATUS_ACIN_VBUS_POWERED BIT(1)
407
+
408
+/* Reset all counters and load ID register */
409
+static void axp221_reset_enter(AXP2xxI2CState *s, ResetType type)
410
+{
411
+ memset(s->regs, 0, NR_REGS);
412
+ s->ptr = 0;
413
+ s->count = 0;
414
+
415
+ /* input power status register */
416
+ s->regs[0x00] = AXP221_PWR_STATUS_ACIN_PRESENT
417
+ | AXP221_PWR_STATUS_ACIN_AVAIL
418
+ | AXP221_PWR_STATUS_ACIN_VBUS_POWERED;
419
+
420
+ s->regs[0x01] = 0x00; /* no battery is connected */
421
+
422
+ /*
423
+ * CHIPID register, not documented in the datasheet, but checked by
424
+ * the U-Boot SPL. Reading it from a real AXP221s returned 0x06,
425
+ * so leave 0x06 here.
426
+ */
427
+ s->regs[0x03] = 0x06;
428
+
429
+ s->regs[0x10] = 0xbf;
430
+ s->regs[0x13] = 0x01;
431
+ s->regs[0x30] = 0x60;
432
+ s->regs[0x31] = 0x03;
433
+ s->regs[0x32] = 0x43;
434
+ s->regs[0x33] = 0xc6;
435
+ s->regs[0x34] = 0x45;
436
+ s->regs[0x35] = 0x0e;
437
+ s->regs[0x36] = 0x5d;
438
+ s->regs[0x37] = 0x08;
439
+ s->regs[0x38] = 0xa5;
440
+ s->regs[0x39] = 0x1f;
441
+ s->regs[0x3c] = 0xfc;
442
+ s->regs[0x3d] = 0x16;
443
+ s->regs[0x80] = 0x80;
444
+ s->regs[0x82] = 0xe0;
445
+ s->regs[0x84] = 0x32;
446
+ s->regs[0x8f] = 0x01;
447
+
448
+ s->regs[0x90] = 0x07;
449
+ s->regs[0x91] = 0x1f;
450
+ s->regs[0x92] = 0x07;
451
+ s->regs[0x93] = 0x1f;
452
+
453
+ s->regs[0x40] = 0xd8;
454
+ s->regs[0x41] = 0xff;
455
+ s->regs[0x42] = 0x03;
456
+ s->regs[0x43] = 0x03;
457
+
458
+ s->regs[0xb8] = 0xc0;
459
+ s->regs[0xb9] = 0x64;
460
+ s->regs[0xe6] = 0xa0;
461
+}
462
+
463
+static void axp2xx_reset_enter(Object *obj, ResetType type)
464
+{
465
+ AXP2xxI2CState *s = AXP2XX(obj);
466
+ AXP2xxClass *sc = AXP2XX_GET_CLASS(s);
467
+
468
+ sc->reset_enter(s, type);
469
+}
470
+
471
+/* Handle events from master. */
472
+static int axp2xx_event(I2CSlave *i2c, enum i2c_event event)
473
+{
474
+ AXP2xxI2CState *s = AXP2XX(i2c);
475
+
476
+ s->count = 0;
477
+
478
+ return 0;
479
+}
480
+
481
+/* Called when master requests read */
482
+static uint8_t axp2xx_rx(I2CSlave *i2c)
483
+{
484
+ AXP2xxI2CState *s = AXP2XX(i2c);
485
+ uint8_t ret = 0xff;
486
+
487
+ if (s->ptr < NR_REGS) {
488
+ ret = s->regs[s->ptr++];
489
+ }
490
+
491
+ trace_axp2xx_rx(s->ptr - 1, ret);
492
+
493
+ return ret;
494
+}
495
+
496
+/*
497
+ * Called when master sends write.
498
+ * Update ptr with byte 0, then perform write with second byte.
499
+ */
500
+static int axp2xx_tx(I2CSlave *i2c, uint8_t data)
501
+{
502
+ AXP2xxI2CState *s = AXP2XX(i2c);
503
+
504
+ if (s->count == 0) {
505
+ /* Store register address */
506
+ s->ptr = data;
507
+ s->count++;
508
+ trace_axp2xx_select(data);
509
+ } else {
510
+ trace_axp2xx_tx(s->ptr, data);
511
+ s->regs[s->ptr++] = data;
512
+ }
513
+
514
+ return 0;
515
+}
516
+
517
+static const VMStateDescription vmstate_axp2xx = {
518
+ .name = TYPE_AXP2XX,
519
+ .version_id = 1,
520
+ .fields = (VMStateField[]) {
521
+ VMSTATE_UINT8_ARRAY(regs, AXP2xxI2CState, NR_REGS),
522
+ VMSTATE_UINT8(ptr, AXP2xxI2CState),
523
+ VMSTATE_UINT8(count, AXP2xxI2CState),
524
+ VMSTATE_END_OF_LIST()
525
+ }
526
+};
527
+
528
+static void axp2xx_class_init(ObjectClass *oc, void *data)
529
+{
530
+ DeviceClass *dc = DEVICE_CLASS(oc);
531
+ I2CSlaveClass *isc = I2C_SLAVE_CLASS(oc);
532
+ ResettableClass *rc = RESETTABLE_CLASS(oc);
533
+
534
+ rc->phases.enter = axp2xx_reset_enter;
535
+ dc->vmsd = &vmstate_axp2xx;
536
+ isc->event = axp2xx_event;
537
+ isc->recv = axp2xx_rx;
538
+ isc->send = axp2xx_tx;
539
+}
540
+
541
+static const TypeInfo axp2xx_info = {
542
+ .name = TYPE_AXP2XX,
543
+ .parent = TYPE_I2C_SLAVE,
544
+ .instance_size = sizeof(AXP2xxI2CState),
545
+ .class_size = sizeof(AXP2xxClass),
546
+ .class_init = axp2xx_class_init,
547
+ .abstract = true,
548
+};
549
+
550
+static void axp209_class_init(ObjectClass *oc, void *data)
551
+{
552
+ AXP2xxClass *sc = AXP2XX_CLASS(oc);
553
+
554
+ sc->reset_enter = axp209_reset_enter;
555
+}
556
+
557
+static const TypeInfo axp209_info = {
558
+ .name = TYPE_AXP209_PMU,
559
+ .parent = TYPE_AXP2XX,
560
+ .class_init = axp209_class_init
561
+};
562
+
563
+static void axp221_class_init(ObjectClass *oc, void *data)
564
+{
565
+ AXP2xxClass *sc = AXP2XX_CLASS(oc);
566
+
567
+ sc->reset_enter = axp221_reset_enter;
568
+}
569
+
570
+static const TypeInfo axp221_info = {
571
+ .name = TYPE_AXP221_PMU,
572
+ .parent = TYPE_AXP2XX,
573
+ .class_init = axp221_class_init,
574
+};
575
+
576
+static void axp2xx_register_devices(void)
577
+{
578
+ type_register_static(&axp2xx_info);
579
+ type_register_static(&axp209_info);
580
+ type_register_static(&axp221_info);
581
+}
582
+
583
+type_init(axp2xx_register_devices);
584
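The axp2xx.c rewrite above turns the AXP209 model into an abstract AXP2XX base type whose concrete models (AXP209, AXP221) supply their own reset_enter hook. A minimal stand-alone sketch of that dispatch pattern, with plain function pointers standing in for QOM; PmuDev/PmuClass are hypothetical names, and the register values are the ones the patch writes:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch of the class-hook pattern used by the abstract AXP2XX type:
 * a base class carries a reset_enter function pointer that each
 * concrete model fills in.
 */
typedef struct PmuDev PmuDev;

typedef struct {
    void (*reset_enter)(PmuDev *s);
} PmuClass;

struct PmuDev {
    const PmuClass *klass;
    uint8_t regs[0xff];
};

static void axp209_reset(PmuDev *s)
{
    memset(s->regs, 0, sizeof(s->regs));
    s->regs[0x03] = 0x01;   /* AXP209 chip version ID */
}

static void axp221_reset(PmuDev *s)
{
    memset(s->regs, 0, sizeof(s->regs));
    s->regs[0x03] = 0x06;   /* CHIPID read from a real AXP221s */
}

static const PmuClass axp209_class = { .reset_enter = axp209_reset };
static const PmuClass axp221_class = { .reset_enter = axp221_reset };

/* Generic reset entry point: dispatch through the class hook. */
static void pmu_reset(PmuDev *s)
{
    s->klass->reset_enter(s);
}
```

Each model's TypeInfo only has to install its hook in class_init, which is what the axp209_class_init/axp221_class_init functions in the patch do.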
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
585
index XXXXXXX..XXXXXXX 100644
586
--- a/hw/arm/Kconfig
587
+++ b/hw/arm/Kconfig
588
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_A10
589
select ALLWINNER_WDT
590
select ALLWINNER_EMAC
591
select ALLWINNER_I2C
592
- select AXP209_PMU
593
+ select AXP2XX_PMU
594
select SERIAL
595
select UNIMP
596
597
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_R40
598
bool
599
default y if TCG && ARM
600
select ALLWINNER_A10_PIT
601
+ select AXP2XX_PMU
602
select SERIAL
603
select ARM_TIMER
604
select ARM_GIC
605
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
606
index XXXXXXX..XXXXXXX 100644
607
--- a/hw/misc/Kconfig
608
+++ b/hw/misc/Kconfig
609
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_A10_CCM
610
config ALLWINNER_A10_DRAMC
611
bool
612
613
-config AXP209_PMU
614
+config AXP2XX_PMU
615
bool
616
depends on I2C
617
618
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
619
index XXXXXXX..XXXXXXX 100644
620
--- a/hw/misc/meson.build
621
+++ b/hw/misc/meson.build
622
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-dramc.c
623
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-sysctrl.c'))
624
softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-sid.c'))
625
softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-ccu.c'))
626
-softmmu_ss.add(when: 'CONFIG_AXP209_PMU', if_true: files('axp209.c'))
627
+softmmu_ss.add(when: 'CONFIG_AXP2XX_PMU', if_true: files('axp2xx.c'))
628
softmmu_ss.add(when: 'CONFIG_REALVIEW', if_true: files('arm_sysctl.c'))
629
softmmu_ss.add(when: 'CONFIG_NSERIES', if_true: files('cbus.c'))
630
softmmu_ss.add(when: 'CONFIG_ECCMEMCTL', if_true: files('eccmemctl.c'))
631
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
632
index XXXXXXX..XXXXXXX 100644
633
--- a/hw/misc/trace-events
634
+++ b/hw/misc/trace-events
635
@@ -XXX,XX +XXX,XX @@ allwinner_sid_write(uint64_t offset, uint64_t data, unsigned size) "offset 0x%"
636
avr_power_read(uint8_t value) "power_reduc read value:%u"
637
avr_power_write(uint8_t value) "power_reduc write value:%u"
638
639
-# axp209.c
640
-axp209_rx(uint8_t reg, uint8_t data) "Read reg 0x%" PRIx8 " : 0x%" PRIx8
641
-axp209_select(uint8_t reg) "Accessing reg 0x%" PRIx8
642
-axp209_tx(uint8_t reg, uint8_t data) "Write reg 0x%" PRIx8 " : 0x%" PRIx8
643
+# axp2xx.c
644
+axp2xx_rx(uint8_t reg, uint8_t data) "Read reg 0x%" PRIx8 " : 0x%" PRIx8
645
+axp2xx_select(uint8_t reg) "Accessing reg 0x%" PRIx8
646
+axp2xx_tx(uint8_t reg, uint8_t data) "Write reg 0x%" PRIx8 " : 0x%" PRIx8
647
648
# eccmemctl.c
649
ecc_mem_writel_mer(uint32_t val) "Write memory enable 0x%08x"
145
--
650
--
146
2.7.4
651
2.34.1
147
148
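The register-pointer protocol the AXP2xx model implements (first written byte after a start condition selects the register, subsequent bytes are data, and reads auto-increment the pointer) can be sketched stand-alone. PmuModel and its helpers are hypothetical names, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

#define PMU_NR_REGS 0xff

/*
 * Hypothetical stand-alone model of the AXP2xx I2C access pattern:
 * byte 0 of a transfer selects the register pointer, later bytes are
 * data, and each read returns regs[ptr] with post-increment.
 */
typedef struct {
    uint8_t regs[PMU_NR_REGS];
    uint8_t ptr;    /* current register index */
    uint8_t count;  /* bytes seen since last start condition */
} PmuModel;

static void pmu_start(PmuModel *s)
{
    s->count = 0;
}

static void pmu_tx(PmuModel *s, uint8_t data)
{
    if (s->count == 0) {
        s->ptr = data;              /* byte 0: register address */
        s->count++;
    } else if (s->ptr < PMU_NR_REGS) {
        s->regs[s->ptr++] = data;   /* following bytes: payload */
    }
}

static uint8_t pmu_rx(PmuModel *s)
{
    return s->ptr < PMU_NR_REGS ? s->regs[s->ptr++] : 0xff;
}
```

Writing 0x16 to register 0x23 and then reading it back mirrors the axp2xx_tx()/axp2xx_rx() flow in the patch above.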
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: qianfan Zhao <qianfanguijin@163.com>
2
2
3
Add support for continuous read out of the RDSR and READ_FSR status
3
The SDRAM controller supports DDR2/DDR3 memory types
4
registers until the chip select is deasserted. This feature is supported
4
and capacities of up to 2GiB. This commit adds emulation support
5
by, amongst others, one or more flash types manufactured by Numonyx (Micron),
5
of the Allwinner R40 SDRAM controller.
6
Winbond, SST, Gigadevice, Eon and Macronix.
7
6
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
This driver currently supports only 256MiB, 512MiB and 1024MiB memory sizes.
9
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
8
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
11
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
12
Message-id: 20171126231634.9531-2-frasse.iglesias@gmail.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
hw/block/m25p80.c | 39 ++++++++++++++++++++++++++++++++++++++-
12
include/hw/arm/allwinner-r40.h | 13 +-
16
1 file changed, 38 insertions(+), 1 deletion(-)
13
include/hw/misc/allwinner-r40-dramc.h | 108 ++++++
14
hw/arm/allwinner-r40.c | 21 +-
15
hw/arm/bananapi_m2u.c | 7 +
16
hw/misc/allwinner-r40-dramc.c | 513 ++++++++++++++++++++++++++
17
hw/misc/meson.build | 1 +
18
hw/misc/trace-events | 14 +
19
7 files changed, 674 insertions(+), 3 deletions(-)
20
create mode 100644 include/hw/misc/allwinner-r40-dramc.h
21
create mode 100644 hw/misc/allwinner-r40-dramc.c
17
22
18
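The R40 DRAMC model added in this patch auto-detects DRAM geometry by decomposing guest offsets into row/bank/column addresses with extract32(). A minimal stand-alone sketch of that decomposition; extract32() is re-implemented locally for illustration, and the geometry values are taken from the 1024MiB entry of the patch's dummy_ddr_chips table:

```c
#include <assert.h>
#include <stdint.h>

/* Local re-implementation of QEMU's extract32() for illustration. */
static uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (~0U >> (32 - length));
}

/*
 * Split a DRAM offset into row/bank/column addresses the way
 * address_to_autodetect_cells() does: column bits are the lowest,
 * then bank bits, then row bits.
 */
static void dram_decode(uint32_t offset, int col_bits, int bank_bits,
                        int row_bits, uint32_t *row, uint32_t *bank,
                        uint32_t *col)
{
    *row = extract32(offset, col_bits + bank_bits, row_bits);
    *bank = extract32(offset, col_bits, bank_bits);
    *col = extract32(offset, 0, col_bits);
}
```

With 13 column bits and 3 bank bits, an offset built as `(row << 16) | (bank << 13) | col` decodes back into the same three fields.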
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
23
diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
19
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/block/m25p80.c
25
--- a/include/hw/arm/allwinner-r40.h
21
+++ b/hw/block/m25p80.c
26
+++ b/include/hw/arm/allwinner-r40.h
22
@@ -XXX,XX +XXX,XX @@ typedef struct Flash {
27
@@ -XXX,XX +XXX,XX @@
23
uint8_t data[M25P80_INTERNAL_DATA_BUFFER_SZ];
28
#include "hw/intc/arm_gic.h"
24
uint32_t len;
29
#include "hw/sd/allwinner-sdhost.h"
25
uint32_t pos;
30
#include "hw/misc/allwinner-r40-ccu.h"
26
+ bool data_read_loop;
31
+#include "hw/misc/allwinner-r40-dramc.h"
27
uint8_t needed_bytes;
32
#include "hw/i2c/allwinner-i2c.h"
28
uint8_t cmd_in_progress;
33
#include "target/arm/cpu.h"
29
uint32_t cur_addr;
34
#include "sysemu/block-backend.h"
30
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
35
@@ -XXX,XX +XXX,XX @@ enum {
31
}
36
AW_R40_DEV_GIC_CPU,
32
s->pos = 0;
37
AW_R40_DEV_GIC_HYP,
33
s->len = 1;
38
AW_R40_DEV_GIC_VCPU,
34
+ s->data_read_loop = true;
39
- AW_R40_DEV_SDRAM
35
s->state = STATE_READING_DATA;
40
+ AW_R40_DEV_SDRAM,
36
break;
41
+ AW_R40_DEV_DRAMCOM,
37
42
+ AW_R40_DEV_DRAMCTL,
38
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
43
+ AW_R40_DEV_DRAMPHY,
39
}
44
};
40
s->pos = 0;
45
41
s->len = 1;
46
#define AW_R40_NUM_CPUS (4)
42
+ s->data_read_loop = true;
47
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
43
s->state = STATE_READING_DATA;
48
DeviceState parent_obj;
44
break;
49
/*< public >*/
45
50
46
@@ -XXX,XX +XXX,XX @@ static int m25p80_cs(SSISlave *ss, bool select)
51
+ /** Physical base address for start of RAM */
47
s->pos = 0;
52
+ hwaddr ram_addr;
48
s->state = STATE_IDLE;
53
+
49
flash_sync_dirty(s, -1);
54
+ /** Total RAM size in megabytes */
50
+ s->data_read_loop = false;
55
+ uint32_t ram_size;
56
+
57
ARMCPU cpus[AW_R40_NUM_CPUS];
58
const hwaddr *memmap;
59
AwA10PITState timer;
60
AwSdHostState mmc[AW_R40_NUM_MMCS];
61
AwR40ClockCtlState ccu;
62
+ AwR40DramCtlState dramc;
63
AWI2CState i2c0;
64
GICState gic;
65
MemoryRegion sram_a1;
66
diff --git a/include/hw/misc/allwinner-r40-dramc.h b/include/hw/misc/allwinner-r40-dramc.h
67
new file mode 100644
68
index XXXXXXX..XXXXXXX
69
--- /dev/null
70
+++ b/include/hw/misc/allwinner-r40-dramc.h
71
@@ -XXX,XX +XXX,XX @@
72
+/*
73
+ * Allwinner R40 SDRAM Controller emulation
74
+ *
75
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
76
+ *
77
+ * This program is free software: you can redistribute it and/or modify
78
+ * it under the terms of the GNU General Public License as published by
79
+ * the Free Software Foundation, either version 2 of the License, or
80
+ * (at your option) any later version.
81
+ *
82
+ * This program is distributed in the hope that it will be useful,
83
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
84
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
85
+ * GNU General Public License for more details.
86
+ *
87
+ * You should have received a copy of the GNU General Public License
88
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
89
+ */
90
+
91
+#ifndef HW_MISC_ALLWINNER_R40_DRAMC_H
92
+#define HW_MISC_ALLWINNER_R40_DRAMC_H
93
+
94
+#include "qom/object.h"
95
+#include "hw/sysbus.h"
96
+#include "exec/hwaddr.h"
97
+
98
+/**
99
+ * Constants
100
+ * @{
101
+ */
102
+
103
+/** Highest register address used by DRAMCOM module */
104
+#define AW_R40_DRAMCOM_REGS_MAXADDR (0x804)
105
+
106
+/** Total number of known DRAMCOM registers */
107
+#define AW_R40_DRAMCOM_REGS_NUM (AW_R40_DRAMCOM_REGS_MAXADDR / \
108
+ sizeof(uint32_t))
109
+
110
+/** Highest register address used by DRAMCTL module */
111
+#define AW_R40_DRAMCTL_REGS_MAXADDR (0x88c)
112
+
113
+/** Total number of known DRAMCTL registers */
114
+#define AW_R40_DRAMCTL_REGS_NUM (AW_R40_DRAMCTL_REGS_MAXADDR / \
115
+ sizeof(uint32_t))
116
+
117
+/** Highest register address used by DRAMPHY module */
118
+#define AW_R40_DRAMPHY_REGS_MAXADDR (0x4)
119
+
120
+/** Total number of known DRAMPHY registers */
121
+#define AW_R40_DRAMPHY_REGS_NUM (AW_R40_DRAMPHY_REGS_MAXADDR / \
122
+ sizeof(uint32_t))
123
+
124
+/** @} */
125
+
126
+/**
127
+ * Object model
128
+ * @{
129
+ */
130
+
131
+#define TYPE_AW_R40_DRAMC "allwinner-r40-dramc"
132
+OBJECT_DECLARE_SIMPLE_TYPE(AwR40DramCtlState, AW_R40_DRAMC)
133
+
134
+/** @} */
135
+
136
+/**
137
+ * Allwinner R40 SDRAM Controller object instance state.
138
+ */
139
+struct AwR40DramCtlState {
140
+ /*< private >*/
141
+ SysBusDevice parent_obj;
142
+ /*< public >*/
143
+
144
+ /** Physical base address for start of RAM */
145
+ hwaddr ram_addr;
146
+
147
+ /** Total RAM size in megabytes */
148
+ uint32_t ram_size;
149
+
150
+ uint8_t set_row_bits;
151
+ uint8_t set_bank_bits;
152
+ uint8_t set_col_bits;
153
+
154
+ /**
155
+ * @name Memory Regions
156
+ * @{
157
+ */
158
+ MemoryRegion dramcom_iomem; /**< DRAMCOM module I/O registers */
159
+ MemoryRegion dramctl_iomem; /**< DRAMCTL module I/O registers */
160
+ MemoryRegion dramphy_iomem; /**< DRAMPHY module I/O registers */
161
+ MemoryRegion dram_high; /**< The high 1G dram for dualrank detect */
162
+ MemoryRegion detect_cells; /**< DRAM memory cells for auto detect */
163
+
164
+ /** @} */
165
+
166
+ /**
167
+ * @name Hardware Registers
168
+ * @{
169
+ */
170
+
171
+ uint32_t dramcom[AW_R40_DRAMCOM_REGS_NUM]; /**< DRAMCOM registers */
172
+ uint32_t dramctl[AW_R40_DRAMCTL_REGS_NUM]; /**< DRAMCTL registers */
173
+ uint32_t dramphy[AW_R40_DRAMPHY_REGS_NUM] ;/**< DRAMPHY registers */
174
+
175
+ /** @} */
176
+
177
+};
178
+
179
+#endif /* HW_MISC_ALLWINNER_R40_DRAMC_H */
180
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
181
index XXXXXXX..XXXXXXX 100644
182
--- a/hw/arm/allwinner-r40.c
183
+++ b/hw/arm/allwinner-r40.c
184
@@ -XXX,XX +XXX,XX @@
185
#include "hw/loader.h"
186
#include "sysemu/sysemu.h"
187
#include "hw/arm/allwinner-r40.h"
188
+#include "hw/misc/allwinner-r40-dramc.h"
189
190
/* Memory map */
191
const hwaddr allwinner_r40_memmap[] = {
192
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
193
[AW_R40_DEV_UART6] = 0x01c29800,
194
[AW_R40_DEV_UART7] = 0x01c29c00,
195
[AW_R40_DEV_TWI0] = 0x01c2ac00,
196
+ [AW_R40_DEV_DRAMCOM] = 0x01c62000,
197
+ [AW_R40_DEV_DRAMCTL] = 0x01c63000,
198
+ [AW_R40_DEV_DRAMPHY] = 0x01c65000,
199
[AW_R40_DEV_GIC_DIST] = 0x01c81000,
200
[AW_R40_DEV_GIC_CPU] = 0x01c82000,
201
[AW_R40_DEV_GIC_HYP] = 0x01c84000,
202
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
203
{ "gpu", 0x01c40000, 64 * KiB },
204
{ "gmac", 0x01c50000, 64 * KiB },
205
{ "hstmr", 0x01c60000, 4 * KiB },
206
- { "dram-com", 0x01c62000, 4 * KiB },
207
- { "dram-ctl", 0x01c63000, 4 * KiB },
208
{ "tcon-top", 0x01c70000, 4 * KiB },
209
{ "lcd0", 0x01c71000, 4 * KiB },
210
{ "lcd1", 0x01c72000, 4 * KiB },
211
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
51
}
212
}
52
213
53
DB_PRINT_L(0, "%sselect\n", select ? "de" : "");
214
object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
54
@@ -XXX,XX +XXX,XX @@ static uint32_t m25p80_transfer8(SSISlave *ss, uint32_t tx)
215
+
55
s->pos++;
216
+ object_initialize_child(obj, "dramc", &s->dramc, TYPE_AW_R40_DRAMC);
56
if (s->pos == s->len) {
217
+ object_property_add_alias(obj, "ram-addr", OBJECT(&s->dramc),
57
s->pos = 0;
218
+ "ram-addr");
58
- s->state = STATE_IDLE;
219
+ object_property_add_alias(obj, "ram-size", OBJECT(&s->dramc),
59
+ if (!s->data_read_loop) {
220
+ "ram-size");
60
+ s->state = STATE_IDLE;
221
}
61
+ }
222
62
}
223
static void allwinner_r40_realize(DeviceState *dev, Error **errp)
63
break;
224
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
64
225
sysbus_connect_irq(SYS_BUS_DEVICE(&s->i2c0), 0,
65
@@ -XXX,XX +XXX,XX @@ static Property m25p80_properties[] = {
226
qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_TWI0));
66
DEFINE_PROP_END_OF_LIST(),
227
67
};
228
+ /* DRAMC */
68
229
+ sysbus_realize(SYS_BUS_DEVICE(&s->dramc), &error_fatal);
69
+static int m25p80_pre_load(void *opaque)
230
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 0,
70
+{
231
+ s->memmap[AW_R40_DEV_DRAMCOM]);
71
+ Flash *s = (Flash *)opaque;
232
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 1,
72
+
233
+ s->memmap[AW_R40_DEV_DRAMCTL]);
73
+ s->data_read_loop = false;
234
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 2,
235
+ s->memmap[AW_R40_DEV_DRAMPHY]);
236
+
237
/* Unimplemented devices */
238
for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
239
create_unimplemented_device(r40_unimplemented[i].device_name,
240
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
241
index XXXXXXX..XXXXXXX 100644
242
--- a/hw/arm/bananapi_m2u.c
243
+++ b/hw/arm/bananapi_m2u.c
244
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
245
object_property_set_int(OBJECT(r40), "clk1-freq", 24 * 1000 * 1000,
246
&error_abort);
247
248
+ /* DRAMC */
249
+ r40->ram_size = machine->ram_size / MiB;
250
+ object_property_set_uint(OBJECT(r40), "ram-addr",
251
+ r40->memmap[AW_R40_DEV_SDRAM], &error_abort);
252
+ object_property_set_int(OBJECT(r40), "ram-size",
253
+ r40->ram_size, &error_abort);
254
+
255
/* Mark R40 object realized */
256
qdev_realize(DEVICE(r40), NULL, &error_abort);
257
258
diff --git a/hw/misc/allwinner-r40-dramc.c b/hw/misc/allwinner-r40-dramc.c
259
new file mode 100644
260
index XXXXXXX..XXXXXXX
261
--- /dev/null
262
+++ b/hw/misc/allwinner-r40-dramc.c
263
@@ -XXX,XX +XXX,XX @@
264
+/*
265
+ * Allwinner R40 SDRAM Controller emulation
266
+ *
267
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
268
+ *
269
+ * This program is free software: you can redistribute it and/or modify
270
+ * it under the terms of the GNU General Public License as published by
271
+ * the Free Software Foundation, either version 2 of the License, or
272
+ * (at your option) any later version.
273
+ *
274
+ * This program is distributed in the hope that it will be useful,
275
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
276
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
277
+ * GNU General Public License for more details.
278
+ *
279
+ * You should have received a copy of the GNU General Public License
280
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
281
+ */
282
+
283
+#include "qemu/osdep.h"
284
+#include "qemu/units.h"
285
+#include "qemu/error-report.h"
286
+#include "hw/sysbus.h"
287
+#include "migration/vmstate.h"
288
+#include "qemu/log.h"
289
+#include "qemu/module.h"
290
+#include "exec/address-spaces.h"
291
+#include "hw/qdev-properties.h"
292
+#include "qapi/error.h"
293
+#include "qemu/bitops.h"
294
+#include "hw/misc/allwinner-r40-dramc.h"
295
+#include "trace.h"
296
+
297
+#define REG_INDEX(offset) (offset / sizeof(uint32_t))
298
+
299
+/* DRAMCOM register offsets */
300
+enum {
301
+ REG_DRAMCOM_CR = 0x0000, /* Control Register */
302
+};
303
+
304
+/* DRAMCOM register flags */
305
+enum {
306
+ REG_DRAMCOM_CR_DUAL_RANK = (1 << 0),
307
+};
308
+
309
+/* DRAMCTL register offsets */
310
+enum {
311
+ REG_DRAMCTL_PIR = 0x0000, /* PHY Initialization Register */
312
+ REG_DRAMCTL_PGSR = 0x0010, /* PHY General Status Register */
313
+ REG_DRAMCTL_STATR = 0x0018, /* Status Register */
314
+ REG_DRAMCTL_PGCR = 0x0100, /* PHY general configuration registers */
315
+};
316
+
317
+/* DRAMCTL register flags */
318
+enum {
319
+ REG_DRAMCTL_PGSR_INITDONE = (1 << 0),
320
+ REG_DRAMCTL_PGSR_READ_TIMEOUT = (1 << 13),
321
+ REG_DRAMCTL_PGCR_ENABLE_READ_TIMEOUT = (1 << 25),
322
+};
323
+
324
+enum {
325
+ REG_DRAMCTL_STATR_ACTIVE = (1 << 0),
326
+};
327
+
328
+#define DRAM_MAX_ROW_BITS 16
329
+#define DRAM_MAX_COL_BITS 13 /* 8192 */
330
+#define DRAM_MAX_BANK 3
331
+
332
+static uint64_t dram_autodetect_cells[DRAM_MAX_ROW_BITS]
333
+ [DRAM_MAX_BANK]
334
+ [DRAM_MAX_COL_BITS];
335
+struct VirtualDDRChip {
336
+ uint32_t ram_size;
337
+ uint8_t bank_bits;
338
+ uint8_t row_bits;
339
+ uint8_t col_bits;
340
+};
341
+
342
+/*
343
+ * Only power-of-2 RAM sizes from 256MiB up to 1024MiB are supported;
344
+ * 2GiB memory is not supported due to the dual-rank feature.
345
+ */
346
+static const struct VirtualDDRChip dummy_ddr_chips[] = {
347
+ {
348
+ .ram_size = 256,
349
+ .bank_bits = 3,
350
+ .row_bits = 12,
351
+ .col_bits = 13,
352
+ }, {
353
+ .ram_size = 512,
354
+ .bank_bits = 3,
355
+ .row_bits = 13,
356
+ .col_bits = 13,
357
+ }, {
358
+ .ram_size = 1024,
359
+ .bank_bits = 3,
360
+ .row_bits = 14,
361
+ .col_bits = 13,
362
+ }, {
363
+ 0
364
+ }
365
+};
366
+
367
+static const struct VirtualDDRChip *get_match_ddr(uint32_t ram_size)
368
+{
369
+ const struct VirtualDDRChip *ddr;
370
+
371
+ for (ddr = &dummy_ddr_chips[0]; ddr->ram_size; ddr++) {
372
+ if (ddr->ram_size == ram_size) {
373
+ return ddr;
374
+ }
375
+ }
376
+
377
+ return NULL;
378
+}
379
+
380
+static uint64_t *address_to_autodetect_cells(AwR40DramCtlState *s,
381
+ const struct VirtualDDRChip *ddr,
382
+ uint32_t offset)
383
+{
384
+ int row_index = 0, bank_index = 0, col_index = 0;
385
+ uint32_t row_addr, bank_addr, col_addr;
386
+
387
+ row_addr = extract32(offset, s->set_col_bits + s->set_bank_bits,
388
+ s->set_row_bits);
389
+ bank_addr = extract32(offset, s->set_col_bits, s->set_bank_bits);
390
+ col_addr = extract32(offset, 0, s->set_col_bits);
391
+
392
+ for (int i = 0; i < ddr->row_bits; i++) {
393
+ if (row_addr & BIT(i)) {
394
+ row_index = i;
395
+ }
396
+ }
397
+
398
+ for (int i = 0; i < ddr->bank_bits; i++) {
399
+ if (bank_addr & BIT(i)) {
400
+ bank_index = i;
401
+ }
402
+ }
403
+
404
+ for (int i = 0; i < ddr->col_bits; i++) {
405
+ if (col_addr & BIT(i)) {
406
+ col_index = i;
407
+ }
408
+ }
409
+
410
+ trace_allwinner_r40_dramc_offset_to_cell(offset, row_index, bank_index,
411
+ col_index);
412
+ return &dram_autodetect_cells[row_index][bank_index][col_index];
413
+}
414
+
415
+static void allwinner_r40_dramc_map_rows(AwR40DramCtlState *s, uint8_t row_bits,
416
+ uint8_t bank_bits, uint8_t col_bits)
417
+{
418
+ const struct VirtualDDRChip *ddr = get_match_ddr(s->ram_size);
419
+ bool enable_detect_cells;
420
+
421
+ trace_allwinner_r40_dramc_map_rows(row_bits, bank_bits, col_bits);
422
+
423
+ if (!ddr) {
424
+ return;
425
+ }
426
+
427
+ s->set_row_bits = row_bits;
428
+ s->set_bank_bits = bank_bits;
429
+ s->set_col_bits = col_bits;
430
+
431
+ enable_detect_cells = ddr->bank_bits != bank_bits
432
+ || ddr->row_bits != row_bits
433
+ || ddr->col_bits != col_bits;
434
+
435
+ if (enable_detect_cells) {
436
+ trace_allwinner_r40_dramc_detect_cells_enable();
437
+ } else {
438
+ trace_allwinner_r40_dramc_detect_cells_disable();
439
+ }
440
+
441
+ memory_region_set_enabled(&s->detect_cells, enable_detect_cells);
442
+}
443
+
444
+static uint64_t allwinner_r40_dramcom_read(void *opaque, hwaddr offset,
445
+ unsigned size)
446
+{
447
+ const AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
448
+ const uint32_t idx = REG_INDEX(offset);
449
+
450
+ if (idx >= AW_R40_DRAMCOM_REGS_NUM) {
451
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
452
+ __func__, (uint32_t)offset);
453
+ return 0;
454
+ }
455
+
456
+ trace_allwinner_r40_dramcom_read(offset, s->dramcom[idx], size);
457
+ return s->dramcom[idx];
458
+}
459
+
460
+static void allwinner_r40_dramcom_write(void *opaque, hwaddr offset,
461
+ uint64_t val, unsigned size)
462
+{
463
+ AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
464
+ const uint32_t idx = REG_INDEX(offset);
465
+
466
+ trace_allwinner_r40_dramcom_write(offset, val, size);
467
+
468
+ if (idx >= AW_R40_DRAMCOM_REGS_NUM) {
469
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        return;
+    }
+
+    switch (offset) {
+    case REG_DRAMCOM_CR:   /* Control Register */
+        if (!(val & REG_DRAMCOM_CR_DUAL_RANK)) {
+            allwinner_r40_dramc_map_rows(s, ((val >> 4) & 0xf) + 1,
+                                         ((val >> 2) & 0x1) + 2,
+                                         (((val >> 8) & 0xf) + 3));
+        }
+        break;
+    };
+
+    s->dramcom[idx] = (uint32_t) val;
+}
+
+static uint64_t allwinner_r40_dramctl_read(void *opaque, hwaddr offset,
+                                           unsigned size)
+{
+    const AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    const uint32_t idx = REG_INDEX(offset);
+
+    if (idx >= AW_R40_DRAMCTL_REGS_NUM) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        return 0;
+    }
+
+    trace_allwinner_r40_dramctl_read(offset, s->dramctl[idx], size);
+    return s->dramctl[idx];
+}
+
+static void allwinner_r40_dramctl_write(void *opaque, hwaddr offset,
+                                        uint64_t val, unsigned size)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    const uint32_t idx = REG_INDEX(offset);
+
+    trace_allwinner_r40_dramctl_write(offset, val, size);
+
+    if (idx >= AW_R40_DRAMCTL_REGS_NUM) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        return;
+    }
+
+    switch (offset) {
+    case REG_DRAMCTL_PIR:   /* PHY Initialization Register */
+        s->dramctl[REG_INDEX(REG_DRAMCTL_PGSR)] |= REG_DRAMCTL_PGSR_INITDONE;
+        s->dramctl[REG_INDEX(REG_DRAMCTL_STATR)] |= REG_DRAMCTL_STATR_ACTIVE;
+        break;
+    }
+
+    s->dramctl[idx] = (uint32_t) val;
+}
+
+static uint64_t allwinner_r40_dramphy_read(void *opaque, hwaddr offset,
+                                           unsigned size)
+{
+    const AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    const uint32_t idx = REG_INDEX(offset);
+
+    if (idx >= AW_R40_DRAMPHY_REGS_NUM) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        return 0;
+    }
+
+    trace_allwinner_r40_dramphy_read(offset, s->dramphy[idx], size);
+    return s->dramphy[idx];
+}
+
+static void allwinner_r40_dramphy_write(void *opaque, hwaddr offset,
+                                        uint64_t val, unsigned size)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    const uint32_t idx = REG_INDEX(offset);
+
+    trace_allwinner_r40_dramphy_write(offset, val, size);
+
+    if (idx >= AW_R40_DRAMPHY_REGS_NUM) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        return;
+    }
+
+    s->dramphy[idx] = (uint32_t) val;
+}
+
+static const MemoryRegionOps allwinner_r40_dramcom_ops = {
+    .read = allwinner_r40_dramcom_read,
+    .write = allwinner_r40_dramcom_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+    .impl.min_access_size = 4,
+};
+
+static const MemoryRegionOps allwinner_r40_dramctl_ops = {
+    .read = allwinner_r40_dramctl_read,
+    .write = allwinner_r40_dramctl_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+    .impl.min_access_size = 4,
+};
+
+static const MemoryRegionOps allwinner_r40_dramphy_ops = {
+    .read = allwinner_r40_dramphy_read,
+    .write = allwinner_r40_dramphy_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+    .impl.min_access_size = 4,
+};
+
+static uint64_t allwinner_r40_detect_read(void *opaque, hwaddr offset,
+                                          unsigned size)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    const struct VirtualDDRChip *ddr = get_match_ddr(s->ram_size);
+    uint64_t data = 0;
+
+    if (ddr) {
+        data = *address_to_autodetect_cells(s, ddr, (uint32_t)offset);
+    }
+
+    trace_allwinner_r40_dramc_detect_cell_read(offset, data);
+    return data;
+}
+
+static void allwinner_r40_detect_write(void *opaque, hwaddr offset,
+                                       uint64_t data, unsigned size)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    const struct VirtualDDRChip *ddr = get_match_ddr(s->ram_size);
+
+    if (ddr) {
+        uint64_t *cell = address_to_autodetect_cells(s, ddr, (uint32_t)offset);
+        trace_allwinner_r40_dramc_detect_cell_write(offset, data);
+        *cell = data;
+    }
+}
+
+static const MemoryRegionOps allwinner_r40_detect_ops = {
+    .read = allwinner_r40_detect_read,
+    .write = allwinner_r40_detect_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+    .impl.min_access_size = 4,
+};
+
+/*
+ * mctl_r40_detect_rank_count in u-boot will write the high 1G of DDR
+ * to detect whether the board supports dual rank or not. Create a virtual
+ * memory region if the board's ram_size is less than or equal to 1G, and
+ * set the read timeout flag of REG_DRAMCTL_PGSR when the guest touches
+ * this high DRAM.
+ */
+static uint64_t allwinner_r40_dualrank_detect_read(void *opaque, hwaddr offset,
+                                                   unsigned size)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(opaque);
+    uint32_t reg;
+
+    reg = s->dramctl[REG_INDEX(REG_DRAMCTL_PGCR)];
+    if (reg & REG_DRAMCTL_PGCR_ENABLE_READ_TIMEOUT) { /* Enable read timeout */
+        /*
+         * This model only supports one rank, so mark READ_TIMEOUT when the
+         * second rank is read.
+         */
+        s->dramctl[REG_INDEX(REG_DRAMCTL_PGSR)]
+                |= REG_DRAMCTL_PGSR_READ_TIMEOUT;
+    }
+
+    return 0;
+}
+
+static const MemoryRegionOps allwinner_r40_dualrank_detect_ops = {
+    .read = allwinner_r40_dualrank_detect_read,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+    .impl.min_access_size = 4,
+};
+
+static void allwinner_r40_dramc_reset(DeviceState *dev)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(dev);
+
+    /* Set default values for registers */
+    memset(&s->dramcom, 0, sizeof(s->dramcom));
+    memset(&s->dramctl, 0, sizeof(s->dramctl));
+    memset(&s->dramphy, 0, sizeof(s->dramphy));
+}
+
+static void allwinner_r40_dramc_realize(DeviceState *dev, Error **errp)
+{
+    AwR40DramCtlState *s = AW_R40_DRAMC(dev);
+
+    if (!get_match_ddr(s->ram_size)) {
+        error_report("%s: ram-size %u MiB is not supported",
+                     __func__, s->ram_size);
+        exit(1);
+    }
+
+    /* detect_cells */
+    sysbus_mmio_map_overlap(SYS_BUS_DEVICE(s), 3, s->ram_addr, 10);
+    memory_region_set_enabled(&s->detect_cells, false);
+
+    /*
+     * We only support DRAM sizes up to 1G for now, so prepare a high memory
+     * page after 1G for dual-rank detection. index = 4
+     */
+    memory_region_init_io(&s->dram_high, OBJECT(s),
+                          &allwinner_r40_dualrank_detect_ops, s,
+                          "DRAMHIGH", KiB);
+    sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->dram_high);
+    sysbus_mmio_map(SYS_BUS_DEVICE(s), 4, s->ram_addr + GiB);
+}
+
+static void allwinner_r40_dramc_init(Object *obj)
+{
+    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+    AwR40DramCtlState *s = AW_R40_DRAMC(obj);
+
+    /* DRAMCOM registers, index 0 */
+    memory_region_init_io(&s->dramcom_iomem, OBJECT(s),
+                          &allwinner_r40_dramcom_ops, s,
+                          "DRAMCOM", 4 * KiB);
+    sysbus_init_mmio(sbd, &s->dramcom_iomem);
+
+    /* DRAMCTL registers, index 1 */
+    memory_region_init_io(&s->dramctl_iomem, OBJECT(s),
+                          &allwinner_r40_dramctl_ops, s,
+                          "DRAMCTL", 4 * KiB);
+    sysbus_init_mmio(sbd, &s->dramctl_iomem);
+
+    /* DRAMPHY registers, index 2 */
+    memory_region_init_io(&s->dramphy_iomem, OBJECT(s),
+                          &allwinner_r40_dramphy_ops, s,
+                          "DRAMPHY", 4 * KiB);
+    sysbus_init_mmio(sbd, &s->dramphy_iomem);
+
+    /* The R40 supports up to 2G of memory but we only model 1G. index 3 */
+    memory_region_init_io(&s->detect_cells, OBJECT(s),
+                          &allwinner_r40_detect_ops, s,
+                          "DRAMCELLS", 1 * GiB);
+    sysbus_init_mmio(sbd, &s->detect_cells);
+}
+
+static Property allwinner_r40_dramc_properties[] = {
+    DEFINE_PROP_UINT64("ram-addr", AwR40DramCtlState, ram_addr, 0x0),
+    DEFINE_PROP_UINT32("ram-size", AwR40DramCtlState, ram_size, 256), /* MiB */
+    DEFINE_PROP_END_OF_LIST()
+};
+
+static const VMStateDescription allwinner_r40_dramc_vmstate = {
+    .name = "allwinner-r40-dramc",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32_ARRAY(dramcom, AwR40DramCtlState,
+                             AW_R40_DRAMCOM_REGS_NUM),
+        VMSTATE_UINT32_ARRAY(dramctl, AwR40DramCtlState,
+                             AW_R40_DRAMCTL_REGS_NUM),
+        VMSTATE_UINT32_ARRAY(dramphy, AwR40DramCtlState,
+                             AW_R40_DRAMPHY_REGS_NUM),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void allwinner_r40_dramc_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->reset = allwinner_r40_dramc_reset;
+    dc->vmsd = &allwinner_r40_dramc_vmstate;
+    dc->realize = allwinner_r40_dramc_realize;
+    device_class_set_props(dc, allwinner_r40_dramc_properties);
+}
+
+static const TypeInfo allwinner_r40_dramc_info = {
+    .name = TYPE_AW_R40_DRAMC,
+    .parent = TYPE_SYS_BUS_DEVICE,
+    .instance_init = allwinner_r40_dramc_init,
+    .instance_size = sizeof(AwR40DramCtlState),
+    .class_init = allwinner_r40_dramc_class_init,
+};
+
+static void allwinner_r40_dramc_register(void)
+{
+    type_register_static(&allwinner_r40_dramc_info);
+}
+
+type_init(allwinner_r40_dramc_register)
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-dramc.c
 softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-sysctrl.c'))
 softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-sid.c'))
 softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-ccu.c'))
+softmmu_ss.add(when: 'CONFIG_ALLWINNER_R40', if_true: files('allwinner-r40-dramc.c'))
 softmmu_ss.add(when: 'CONFIG_AXP2XX_PMU', if_true: files('axp2xx.c'))
 softmmu_ss.add(when: 'CONFIG_REALVIEW', if_true: files('arm_sysctl.c'))
 softmmu_ss.add(when: 'CONFIG_NSERIES', if_true: files('cbus.c'))
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/trace-events
+++ b/hw/misc/trace-events
@@ -XXX,XX +XXX,XX @@ allwinner_h3_dramctl_write(uint64_t offset, uint64_t data, unsigned size) "Write
 allwinner_h3_dramphy_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
 allwinner_h3_dramphy_write(uint64_t offset, uint64_t data, unsigned size) "write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
 
+# allwinner-r40-dramc.c
+allwinner_r40_dramc_detect_cells_disable(void) "Disable detect cells"
+allwinner_r40_dramc_detect_cells_enable(void) "Enable detect cells"
+allwinner_r40_dramc_map_rows(uint8_t row_bits, uint8_t bank_bits, uint8_t col_bits) "DRAM layout: row_bits %d, bank_bits %d, col_bits %d"
+allwinner_r40_dramc_offset_to_cell(uint64_t offset, int row, int bank, int col) "offset 0x%" PRIx64 " row %d bank %d col %d"
+allwinner_r40_dramc_detect_cell_write(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64 ""
+allwinner_r40_dramc_detect_cell_read(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64 ""
+allwinner_r40_dramcom_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
+allwinner_r40_dramcom_write(uint64_t offset, uint64_t data, unsigned size) "Write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
+allwinner_r40_dramctl_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
+allwinner_r40_dramctl_write(uint64_t offset, uint64_t data, unsigned size) "Write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
+allwinner_r40_dramphy_read(uint64_t offset, uint64_t data, unsigned size) "Read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
+allwinner_r40_dramphy_write(uint64_t offset, uint64_t data, unsigned size) "write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
+
 # allwinner-sid.c
 allwinner_sid_read(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
 allwinner_sid_write(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
--
2.34.1
From: qianfan Zhao <qianfanguijin@163.com>

The A64's SD register layout is similar to the H3's, but it introduces a
new register named SAMP_DL_REG located at 0x144. The DMA descriptor
buffer size of mmc2 is only 8K, while the other MMC controllers have 64K.

Also fix allwinner-r40's MMC controller type.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/sd/allwinner-sdhost.h |  9 ++++
 hw/arm/allwinner-r40.c           |  2 +-
 hw/sd/allwinner-sdhost.c         | 72 ++++++++++++++++++++++++++++++--
 3 files changed, 79 insertions(+), 4 deletions(-)

diff --git a/include/hw/sd/allwinner-sdhost.h b/include/hw/sd/allwinner-sdhost.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/sd/allwinner-sdhost.h
+++ b/include/hw/sd/allwinner-sdhost.h
@@ -XXX,XX +XXX,XX @@
 /** Allwinner sun5i family and newer (A13, H2+, H3, etc) */
 #define TYPE_AW_SDHOST_SUN5I    TYPE_AW_SDHOST "-sun5i"
 
+/** Allwinner sun50i-a64 */
+#define TYPE_AW_SDHOST_SUN50I_A64    TYPE_AW_SDHOST "-sun50i-a64"
+
+/** Allwinner sun50i-a64 emmc */
+#define TYPE_AW_SDHOST_SUN50I_A64_EMMC    TYPE_AW_SDHOST "-sun50i-a64-emmc"
+
 /** @} */
 
 /**
@@ -XXX,XX +XXX,XX @@ struct AwSdHostState {
     uint32_t startbit_detect;   /**< eMMC DDR Start Bit Detection Control */
     uint32_t response_crc;      /**< Response CRC */
     uint32_t data_crc[8];       /**< Data CRC */
+    uint32_t sample_delay;      /**< Sample delay control */
     uint32_t status_crc;        /**< Status CRC */
 
     /** @} */
@@ -XXX,XX +XXX,XX @@ struct AwSdHostClass {
     size_t max_desc_size;
     bool is_sun4i;
 
+    /** does the IP block support autocalibration? */
+    bool can_calibrate;
 };
 
 #endif /* HW_SD_ALLWINNER_SDHOST_H */
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
 
     for (int i = 0; i < AW_R40_NUM_MMCS; i++) {
         object_initialize_child(obj, mmc_names[i], &s->mmc[i],
-                                TYPE_AW_SDHOST_SUN5I);
+                                TYPE_AW_SDHOST_SUN50I_A64);
     }
 
     object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
diff --git a/hw/sd/allwinner-sdhost.c b/hw/sd/allwinner-sdhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/allwinner-sdhost.c
+++ b/hw/sd/allwinner-sdhost.c
@@ -XXX,XX +XXX,XX @@ enum {
     REG_SD_DATA1_CRC  = 0x12C, /* CRC Data 1 from card/eMMC */
     REG_SD_DATA0_CRC  = 0x130, /* CRC Data 0 from card/eMMC */
     REG_SD_CRC_STA    = 0x134, /* CRC status from card/eMMC during write */
+    REG_SD_SAMP_DL    = 0x144, /* Sample Delay Control (sun50i-a64) */
     REG_SD_FIFO       = 0x200, /* Read/Write FIFO */
 };
 
@@ -XXX,XX +XXX,XX @@ enum {
     REG_SD_RES_CRC_RST      = 0x0,
     REG_SD_DATA_CRC_RST     = 0x0,
     REG_SD_CRC_STA_RST      = 0x0,
+    REG_SD_SAMPLE_DL_RST    = 0x00002000,
     REG_SD_FIFO_RST         = 0x0,
 };
 
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sdhost_read(void *opaque, hwaddr offset,
 {
     AwSdHostState *s = AW_SDHOST(opaque);
     AwSdHostClass *sc = AW_SDHOST_GET_CLASS(s);
+    bool out_of_bounds = false;
     uint32_t res = 0;
 
     switch (offset) {
@@ -XXX,XX +XXX,XX @@ static uint64_t allwinner_sdhost_read(void *opaque, hwaddr offset,
     case REG_SD_FIFO:      /* Read/Write FIFO */
         res = allwinner_sdhost_fifo_read(s);
         break;
+    case REG_SD_SAMP_DL:   /* Sample Delay */
+        if (sc->can_calibrate) {
+            res = s->sample_delay;
+        } else {
+            out_of_bounds = true;
+        }
+        break;
     default:
-        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset %"
-                      HWADDR_PRIx"\n", __func__, offset);
+        out_of_bounds = true;
         res = 0;
         break;
     }
 
+    if (out_of_bounds) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset %"
+                      HWADDR_PRIx"\n", __func__, offset);
+    }
+
     trace_allwinner_sdhost_read(offset, res, size);
     return res;
 }
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_write(void *opaque, hwaddr offset,
 {
     AwSdHostState *s = AW_SDHOST(opaque);
     AwSdHostClass *sc = AW_SDHOST_GET_CLASS(s);
+    bool out_of_bounds = false;
 
     trace_allwinner_sdhost_write(offset, value, size);
 
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_write(void *opaque, hwaddr offset,
     case REG_SD_DATA0_CRC:  /* CRC Data 0 from card/eMMC */
     case REG_SD_CRC_STA:    /* CRC status from card/eMMC in write operation */
         break;
+    case REG_SD_SAMP_DL:    /* Sample delay control */
+        if (sc->can_calibrate) {
+            s->sample_delay = value;
+        } else {
+            out_of_bounds = true;
+        }
+        break;
     default:
+        out_of_bounds = true;
+        break;
+    }
+
+    if (out_of_bounds) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset %"
                       HWADDR_PRIx"\n", __func__, offset);
-        break;
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_allwinner_sdhost = {
         VMSTATE_UINT32(response_crc, AwSdHostState),
         VMSTATE_UINT32_ARRAY(data_crc, AwSdHostState, 8),
         VMSTATE_UINT32(status_crc, AwSdHostState),
+        VMSTATE_UINT32(sample_delay, AwSdHostState),
         VMSTATE_END_OF_LIST()
     }
 };
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_realize(DeviceState *dev, Error **errp)
 static void allwinner_sdhost_reset(DeviceState *dev)
 {
     AwSdHostState *s = AW_SDHOST(dev);
+    AwSdHostClass *sc = AW_SDHOST_GET_CLASS(s);
 
     s->global_ctl = REG_SD_GCTL_RST;
     s->clock_ctl = REG_SD_CKCR_RST;
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_reset(DeviceState *dev)
     }
 
     s->status_crc = REG_SD_CRC_STA_RST;
+
+    if (sc->can_calibrate) {
+        s->sample_delay = REG_SD_SAMPLE_DL_RST;
+    }
 }
 
 static void allwinner_sdhost_bus_class_init(ObjectClass *klass, void *data)
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_sun4i_class_init(ObjectClass *klass, void *data)
     AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
     sc->max_desc_size = 8 * KiB;
     sc->is_sun4i = true;
+    sc->can_calibrate = false;
 }
 
 static void allwinner_sdhost_sun5i_class_init(ObjectClass *klass, void *data)
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_sun5i_class_init(ObjectClass *klass, void *data)
     AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
     sc->max_desc_size = 64 * KiB;
     sc->is_sun4i = false;
+    sc->can_calibrate = false;
+}
+
+static void allwinner_sdhost_sun50i_a64_class_init(ObjectClass *klass,
+                                                   void *data)
+{
+    AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
+    sc->max_desc_size = 64 * KiB;
+    sc->is_sun4i = false;
+    sc->can_calibrate = true;
+}
+
+static void allwinner_sdhost_sun50i_a64_emmc_class_init(ObjectClass *klass,
+                                                        void *data)
+{
+    AwSdHostClass *sc = AW_SDHOST_CLASS(klass);
+    sc->max_desc_size = 8 * KiB;
+    sc->is_sun4i = false;
+    sc->can_calibrate = true;
 }
 
 static const TypeInfo allwinner_sdhost_info = {
@@ -XXX,XX +XXX,XX @@ static const TypeInfo allwinner_sdhost_sun5i_info = {
     .class_init = allwinner_sdhost_sun5i_class_init,
 };
 
+static const TypeInfo allwinner_sdhost_sun50i_a64_info = {
+    .name = TYPE_AW_SDHOST_SUN50I_A64,
+    .parent = TYPE_AW_SDHOST,
+    .class_init = allwinner_sdhost_sun50i_a64_class_init,
+};
+
+static const TypeInfo allwinner_sdhost_sun50i_a64_emmc_info = {
+    .name = TYPE_AW_SDHOST_SUN50I_A64_EMMC,
+    .parent = TYPE_AW_SDHOST,
+    .class_init = allwinner_sdhost_sun50i_a64_emmc_class_init,
+};
+
 static const TypeInfo allwinner_sdhost_bus_info = {
     .name = TYPE_AW_SDHOST_BUS,
     .parent = TYPE_SD_BUS,
@@ -XXX,XX +XXX,XX @@ static void allwinner_sdhost_register_types(void)
     type_register_static(&allwinner_sdhost_info);
     type_register_static(&allwinner_sdhost_sun4i_info);
    type_register_static(&allwinner_sdhost_sun5i_info);
+    type_register_static(&allwinner_sdhost_sun50i_a64_info);
+    type_register_static(&allwinner_sdhost_sun50i_a64_emmc_info);
     type_register_static(&allwinner_sdhost_bus_info);
 }
 
--
2.34.1
From: qianfan Zhao <qianfanguijin@163.com>

The R40 has two ethernet controllers, named emac and gmac. The emac is
compatible with the A10's, and the gmac is compatible with the H3's.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h |  6 ++++
 hw/arm/allwinner-r40.c         | 50 ++++++++++++++++++++++++++++++++--
 hw/arm/bananapi_m2u.c          |  3 ++
 3 files changed, 57 insertions(+), 2 deletions(-)

diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/misc/allwinner-r40-ccu.h"
 #include "hw/misc/allwinner-r40-dramc.h"
 #include "hw/i2c/allwinner-i2c.h"
+#include "hw/net/allwinner_emac.h"
+#include "hw/net/allwinner-sun8i-emac.h"
 #include "target/arm/cpu.h"
 #include "sysemu/block-backend.h"
 
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_DEV_SRAM_A2,
     AW_R40_DEV_SRAM_A3,
     AW_R40_DEV_SRAM_A4,
+    AW_R40_DEV_EMAC,
     AW_R40_DEV_MMC0,
     AW_R40_DEV_MMC1,
     AW_R40_DEV_MMC2,
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_DEV_UART6,
     AW_R40_DEV_UART7,
     AW_R40_DEV_TWI0,
+    AW_R40_DEV_GMAC,
     AW_R40_DEV_GIC_DIST,
     AW_R40_DEV_GIC_CPU,
     AW_R40_DEV_GIC_HYP,
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
     AwR40ClockCtlState ccu;
     AwR40DramCtlState dramc;
     AWI2CState i2c0;
+    AwEmacState emac;
+    AwSun8iEmacState gmac;
     GICState gic;
     MemoryRegion sram_a1;
     MemoryRegion sram_a2;
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
     [AW_R40_DEV_SRAM_A2]    = 0x00004000,
     [AW_R40_DEV_SRAM_A3]    = 0x00008000,
     [AW_R40_DEV_SRAM_A4]    = 0x0000b400,
+    [AW_R40_DEV_EMAC]       = 0x01c0b000,
     [AW_R40_DEV_MMC0]       = 0x01c0f000,
     [AW_R40_DEV_MMC1]       = 0x01c10000,
     [AW_R40_DEV_MMC2]       = 0x01c11000,
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
     [AW_R40_DEV_UART6]      = 0x01c29800,
     [AW_R40_DEV_UART7]      = 0x01c29c00,
     [AW_R40_DEV_TWI0]       = 0x01c2ac00,
+    [AW_R40_DEV_GMAC]       = 0x01c50000,
     [AW_R40_DEV_DRAMCOM]    = 0x01c62000,
     [AW_R40_DEV_DRAMCTL]    = 0x01c63000,
     [AW_R40_DEV_DRAMPHY]    = 0x01c65000,
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
     { "spi1",       0x01c06000, 4 * KiB },
     { "cs0",        0x01c09000, 4 * KiB },
     { "keymem",     0x01c0a000, 4 * KiB },
-    { "emac",       0x01c0b000, 4 * KiB },
     { "usb0-otg",   0x01c13000, 4 * KiB },
     { "usb0-host",  0x01c14000, 4 * KiB },
     { "crypto",     0x01c15000, 4 * KiB },
@@ -XXX,XX +XXX,XX @@ static struct AwR40Unimplemented r40_unimplemented[] = {
     { "tvd2",       0x01c33000, 4 * KiB },
     { "tvd3",       0x01c34000, 4 * KiB },
     { "gpu",        0x01c40000, 64 * KiB },
-    { "gmac",       0x01c50000, 64 * KiB },
     { "hstmr",      0x01c60000, 4 * KiB },
     { "tcon-top",   0x01c70000, 4 * KiB },
     { "lcd0",       0x01c71000, 4 * KiB },
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_GIC_SPI_MMC1     = 33,
     AW_R40_GIC_SPI_MMC2     = 34,
     AW_R40_GIC_SPI_MMC3     = 35,
+    AW_R40_GIC_SPI_EMAC     = 55,
+    AW_R40_GIC_SPI_GMAC     = 85,
 };
 
 /* Allwinner R40 general constants */
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
 
     object_initialize_child(obj, "twi0", &s->i2c0, TYPE_AW_I2C_SUN6I);
 
+    object_initialize_child(obj, "emac", &s->emac, TYPE_AW_EMAC);
+    object_initialize_child(obj, "gmac", &s->gmac, TYPE_AW_SUN8I_EMAC);
+    object_property_add_alias(obj, "gmac-phy-addr",
+                              OBJECT(&s->gmac), "phy-addr");
+
     object_initialize_child(obj, "dramc", &s->dramc, TYPE_AW_R40_DRAMC);
     object_property_add_alias(obj, "ram-addr", OBJECT(&s->dramc),
                               "ram-addr");
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
 
 static void allwinner_r40_realize(DeviceState *dev, Error **errp)
 {
+    const char *r40_nic_models[] = { "gmac", "emac", NULL };
     AwR40State *s = AW_R40(dev);
     unsigned i;
 
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
     sysbus_mmio_map(SYS_BUS_DEVICE(&s->dramc), 2,
                     s->memmap[AW_R40_DEV_DRAMPHY]);
 
+    /* nic support gmac and emac */
+    for (int i = 0; i < ARRAY_SIZE(r40_nic_models) - 1; i++) {
+        NICInfo *nic = &nd_table[i];
+
+        if (!nic->used) {
+            continue;
+        }
+        if (qemu_show_nic_models(nic->model, r40_nic_models)) {
+            exit(0);
+        }
+
+        switch (qemu_find_nic_model(nic, r40_nic_models, r40_nic_models[0])) {
+        case 0: /* gmac */
+            qdev_set_nic_properties(DEVICE(&s->gmac), nic);
+            break;
+        case 1: /* emac */
+            qdev_set_nic_properties(DEVICE(&s->emac), nic);
+            break;
+        default:
+            exit(1);
+            break;
+        }
+    }
+
+    /* GMAC */
+    object_property_set_link(OBJECT(&s->gmac), "dma-memory",
+                             OBJECT(get_system_memory()), &error_fatal);
+    sysbus_realize(SYS_BUS_DEVICE(&s->gmac), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(&s->gmac), 0, s->memmap[AW_R40_DEV_GMAC]);
+    sysbus_connect_irq(SYS_BUS_DEVICE(&s->gmac), 0,
+                       qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_GMAC));
+
+    /* EMAC */
+    sysbus_realize(SYS_BUS_DEVICE(&s->emac), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(&s->emac), 0, s->memmap[AW_R40_DEV_EMAC]);
+    sysbus_connect_irq(SYS_BUS_DEVICE(&s->emac), 0,
+                       qdev_get_gpio_in(DEVICE(&s->gic), AW_R40_GIC_SPI_EMAC));
+
     /* Unimplemented devices */
     for (i = 0; i < ARRAY_SIZE(r40_unimplemented); i++) {
         create_unimplemented_device(r40_unimplemented[i].device_name,
diff --git a/hw/arm/bananapi_m2u.c b/hw/arm/bananapi_m2u.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/bananapi_m2u.c
+++ b/hw/arm/bananapi_m2u.c
@@ -XXX,XX +XXX,XX @@ static void bpim2u_init(MachineState *machine)
     object_property_set_int(OBJECT(r40), "ram-size",
                             r40->ram_size, &error_abort);
 
+    /* GMAC PHY */
+    object_property_set_uint(OBJECT(r40), "gmac-phy-addr", 1, &error_abort);
+
     /* Mark R40 object realized */
     qdev_realize(DEVICE(r40), NULL, &error_abort);
 
--
2.34.1
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Add support for the RX discard and RX drain functionality. Also transmit
one byte per dummy cycle (to the flash memories) with commands that require
these.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20171126231634.9531-8-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/ssi/xilinx_spips.h |   6 ++
 hw/ssi/xilinx_spips.c         | 167 +++++++++++++++++++++++++++++++++++++-----
 2 files changed, 155 insertions(+), 18 deletions(-)

diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/ssi/xilinx_spips.h
+++ b/include/hw/ssi/xilinx_spips.h
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
     uint8_t num_busses;
 
     uint8_t snoop_state;
+    int cmd_dummies;
+    uint8_t link_state;
+    uint8_t link_state_next;
+    uint8_t link_state_next_when;
     qemu_irq *cs_lines;
+    bool *cs_lines_state;
     SSIBus **spi;
 
     Fifo8 rx_fifo;
     Fifo8 tx_fifo;
 
     uint8_t num_txrx_bytes;
+    uint32_t rx_discard;
 
     uint32_t regs[XLNX_SPIPS_R_MAX];
 };
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/bitops.h"
 #include "hw/ssi/xilinx_spips.h"
 #include "qapi/error.h"
+#include "hw/register.h"
 #include "migration/blocker.h"
 
 #ifndef XILINX_SPIPS_ERR_DEBUG
@@ -XXX,XX +XXX,XX @@
 #define LQSPI_CFG_DUMMY_SHIFT 8
 #define LQSPI_CFG_INST_CODE 0xFF
 
+#define R_CMND        (0xc0 / 4)
+    #define R_CMND_RXFIFO_DRAIN   (1 << 19)
+    FIELD(CMND, PARTIAL_BYTE_LEN, 16, 3)
+#define R_CMND_EXT_ADD      (1 << 15)
+    FIELD(CMND, RX_DISCARD, 8, 7)
+    FIELD(CMND, DUMMY_CYCLES, 2, 6)
+#define R_CMND_DMA_EN         (1 << 1)
+#define R_CMND_PUSH_WAIT      (1 << 0)
 #define R_LQSPI_STS         (0xA4 / 4)
 #define LQSPI_STS_WR_RECVD  (1 << 1)
 
@@ -XXX,XX +XXX,XX @@
 #define LQSPI_ADDRESS_BITS 24
 
 #define SNOOP_CHECKING 0xFF
-#define SNOOP_NONE 0xFE
+#define SNOOP_ADDR 0xF0
+#define SNOOP_NONE 0xEE
 #define SNOOP_STRIPING 0
 
 static inline int num_effective_busses(XilinxSPIPS *s)
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
         if (xilinx_spips_cs_is_set(s, i, field) && !found) {
             DB_PRINT_L(0, "selecting slave %d\n", i);
             qemu_set_irq(s->cs_lines[cs_to_set], 0);
+            if (s->cs_lines_state[cs_to_set]) {
+                s->cs_lines_state[cs_to_set] = false;
+                s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
+            }
         } else {
             DB_PRINT_L(0, "deselecting slave %d\n", i);
             qemu_set_irq(s->cs_lines[cs_to_set], 1);
+            s->cs_lines_state[cs_to_set] = true;
         }
     }
     if (xilinx_spips_cs_is_set(s, i, field)) {
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
     }
     if (!found) {
         s->snoop_state = SNOOP_CHECKING;
+        s->cmd_dummies = 0;
+        s->link_state = 1;
+        s->link_state_next = 1;
+        s->link_state_next_when = 0;
         DB_PRINT_L(1, "moving to snoop check state\n");
     }
 }
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
     /* FIXME: move magic number definition somewhere sensible */
     s->regs[R_MOD_ID] = 0x01090106;
     s->regs[R_LQSPI_CFG] = R_LQSPI_CFG_RESET;
+    s->link_state = 1;
+    s->link_state_next = 1;
+    s->link_state_next_when = 0;
     s->snoop_state = SNOOP_CHECKING;
+    s->cmd_dummies = 0;
     xilinx_spips_update_ixr(s);
     xilinx_spips_update_cs_lines(s);
 }
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
     memcpy(x, r, sizeof(uint8_t) * num);
 }
 
+static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
+{
+    if (!qs) {
+        /* The SPI device is not a QSPI device */
+        return -1;
+    }
+
+    switch (command) { /* check for dummies */
+    case READ: /* no dummy bytes/cycles */
+    case PP:
+    case DPP:
+    case QPP:
+    case READ_4:
+    case PP_4:
+    case QPP_4:
+        return 0;
+    case FAST_READ:
+    case DOR:
+    case QOR:
+    case DOR_4:
+    case QOR_4:
+        return 1;
+    case DIOR:
+    case FAST_READ_4:
+    case DIOR_4:
+        return 2;
+    case QIOR:
+    case QIOR_4:
+        return 5;
+    default:
+        return -1;
+    }
+}
+
+static inline uint8_t get_addr_length(XilinxSPIPS *s, uint8_t cmd)
+{
+    switch (cmd) {
+    case PP_4:
+    case QPP_4:
+    case READ_4:
+    case QIOR_4:
+    case FAST_READ_4:
+    case DOR_4:
+    case QOR_4:
+    case DIOR_4:
+        return 4;
+    default:
+        return (s->regs[R_CMND] & R_CMND_EXT_ADD) ? 4 : 3;
+    }
+}
+
 static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
 {
     int debug_level = 0;
+    XilinxQSPIPS *q = (XilinxQSPIPS *) object_dynamic_cast(OBJECT(s),
+                                                           TYPE_XILINX_QSPIPS);
 
     for (;;) {
         int i;
         uint8_t tx = 0;
         uint8_t tx_rx[num_effective_busses(s)];
+        uint8_t dummy_cycles = 0;
+        uint8_t addr_length;
 
         if (fifo8_is_empty(&s->tx_fifo)) {
             if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
                 tx_rx[i] = fifo8_pop(&s->tx_fifo);
             }
             stripe8(tx_rx, num_effective_busses(s), false);
-        } else {
+        } else if (s->snoop_state >= SNOOP_ADDR) {
             tx = fifo8_pop(&s->tx_fifo);
             for (i = 0; i < num_effective_busses(s); ++i) {
                 tx_rx[i] = tx;
             }
+        } else {
+            /* Extract a dummy byte and generate dummy cycles according to the
+             * link state */
+            tx = fifo8_pop(&s->tx_fifo);
+            dummy_cycles = 8 / s->link_state;
         }
 
         for (i = 0; i < num_effective_busses(s); ++i) {
             int bus = num_effective_busses(s) - 1 - i;
-            DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
-            tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
-            DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
+            if (dummy_cycles) {
+                int d;
+                for (d = 0; d < dummy_cycles; ++d) {
+                    tx_rx[0] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[0]);
+                }
+            } else {
+                DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
+                tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
+                DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
+            }
         }
 
-        if (fifo8_is_full(&s->rx_fifo)) {
+        if (s->regs[R_CMND] & R_CMND_RXFIFO_DRAIN) {
+            DB_PRINT_L(debug_level, "dircarding drained rx byte\n");
+            /* Do nothing */
+        } else if (s->rx_discard) {
+            DB_PRINT_L(debug_level, "dircarding discarded rx byte\n");
+            s->rx_discard -= 8 / s->link_state;
+        } else if (fifo8_is_full(&s->rx_fifo)) {
             s->regs[R_INTR_STATUS] |= IXR_RX_FIFO_OVERFLOW;
             DB_PRINT_L(0, "rx FIFO overflow");
         } else if (s->snoop_state == SNOOP_STRIPING) {
             stripe8(tx_rx, num_effective_busses(s), true);
             for (i = 0; i < num_effective_busses(s); ++i) {
                 fifo8_push(&s->rx_fifo, (uint8_t)tx_rx[i]);
+                DB_PRINT_L(debug_level, "pushing striped rx byte\n");
             }
         } else {
+            DB_PRINT_L(debug_level, "pushing unstriped rx byte\n");
             fifo8_push(&s->rx_fifo, (uint8_t)tx_rx[0]);
         }
 
+        if (s->link_state_next_when) {
+            s->link_state_next_when--;
+            if (!s->link_state_next_when) {
+                s->link_state = s->link_state_next;
+            }
+        }
+
         DB_PRINT_L(debug_level, "initial snoop state: %x\n",
                    (unsigned)s->snoop_state);
         switch (s->snoop_state) {
         case (SNOOP_CHECKING):
-            switch (tx) { /* new instruction code */
-            case READ: /* 3 address bytes, no dummy bytes/cycles */
-            case PP:
+            /* Store the count of dummy bytes in the txfifo */
+            s->cmd_dummies = xilinx_spips_num_dummies(q, tx);
+            addr_length = get_addr_length(s, tx);
+            if (s->cmd_dummies < 0) {
+                s->snoop_state = SNOOP_NONE;
+            } else {
+                s->snoop_state = SNOOP_ADDR + addr_length - 1;
+            }
+            switch (tx) {
             case DPP:
-            case QPP:
-                s->snoop_state = 3;
-                break;
-            case FAST_READ: /* 3 address bytes, 1 dummy byte */
             case DOR:
+            case DOR_4:
+                s->link_state_next = 2;
+                s->link_state_next_when = addr_length + s->cmd_dummies;
+                break;
+            case QPP:
+            case QPP_4:
             case QOR:
-            case DIOR: /* FIXME: these vary between vendor - set to spansion */
-                s->snoop_state = 4;
+            case QOR_4:
+                s->link_state_next = 4;
+                s->link_state_next_when = addr_length + s->cmd_dummies;
+                break;
+            case DIOR:
+            case DIOR_4:
+                s->link_state = 2;
                 break;
-            case QIOR: /* 3 address bytes, 2 dummy bytes */
-                s->snoop_state = 6;
+            case QIOR:
+            case QIOR_4:
+                s->link_state = 4;
                 break;
-            default:
+            }
+            break;
+        case (SNOOP_ADDR):
+            /* Address has been transmitted, transmit dummy cycles now if
+             * needed */
+            if (s->cmd_dummies < 0) {
                 s->snoop_state = SNOOP_NONE;
+            } else {
+                s->snoop_state = s->cmd_dummies;
             }
             break;
         case (SNOOP_STRIPING):
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
                                 uint64_t value, unsigned size)
 {
     XilinxQSPIPS *q = XILINX_QSPIPS(opaque);
+    XilinxSPIPS *s = XILINX_SPIPS(opaque);
 
     xilinx_spips_write(opaque, addr, value, size);
     addr >>= 2;
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
     if (addr == R_LQSPI_CFG) {
         xilinx_qspips_invalidate_mmio_ptr(q);
     }
+    if (s->regs[R_CMND] & R_CMND_RXFIFO_DRAIN) {
+        fifo8_reset(&s->rx_fifo);
+    }
 }
 
 static const MemoryRegionOps qspips_ops = {
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_realize(DeviceState *dev, Error **errp)
     }
 
     s->cs_lines = g_new0(qemu_irq, s->num_cs * s->num_busses);
+    s->cs_lines_state = g_new0(bool, s->num_cs * s->num_busses);
     for (i = 0, cs = s->cs_lines; i < s->num_busses; ++i, cs += s->num_cs) {
         ssi_auto_connect_slaves(DEVICE(s), cs, s->spi[i]);
     }
-- 
2.7.4

From: qianfan Zhao <qianfanguijin@163.com>

Only a few important registers are added, especially the SRAM_VER
register.

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/allwinner-r40.h    |   3 +
 include/hw/misc/allwinner-sramc.h |  69 +++++++++++
 hw/arm/allwinner-r40.c            |   7 +-
 hw/misc/allwinner-sramc.c         | 184 ++++++++++++++++++++++++++++++
 hw/arm/Kconfig                    |   1 +
 hw/misc/Kconfig                   |   3 +
 hw/misc/meson.build               |   1 +
 hw/misc/trace-events              |   4 +
 8 files changed, 271 insertions(+), 1 deletion(-)
 create mode 100644 include/hw/misc/allwinner-sramc.h
 create mode 100644 hw/misc/allwinner-sramc.c

diff --git a/include/hw/arm/allwinner-r40.h b/include/hw/arm/allwinner-r40.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-r40.h
+++ b/include/hw/arm/allwinner-r40.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/sd/allwinner-sdhost.h"
 #include "hw/misc/allwinner-r40-ccu.h"
 #include "hw/misc/allwinner-r40-dramc.h"
+#include "hw/misc/allwinner-sramc.h"
 #include "hw/i2c/allwinner-i2c.h"
 #include "hw/net/allwinner_emac.h"
 #include "hw/net/allwinner-sun8i-emac.h"
@@ -XXX,XX +XXX,XX @@ enum {
     AW_R40_DEV_SRAM_A2,
     AW_R40_DEV_SRAM_A3,
     AW_R40_DEV_SRAM_A4,
+    AW_R40_DEV_SRAMC,
     AW_R40_DEV_EMAC,
     AW_R40_DEV_MMC0,
     AW_R40_DEV_MMC1,
@@ -XXX,XX +XXX,XX @@ struct AwR40State {
 
     ARMCPU cpus[AW_R40_NUM_CPUS];
     const hwaddr *memmap;
+    AwSRAMCState sramc;
     AwA10PITState timer;
     AwSdHostState mmc[AW_R40_NUM_MMCS];
     AwR40ClockCtlState ccu;
diff --git a/include/hw/misc/allwinner-sramc.h b/include/hw/misc/allwinner-sramc.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/allwinner-sramc.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Allwinner SRAM controller emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef HW_MISC_ALLWINNER_SRAMC_H
+#define HW_MISC_ALLWINNER_SRAMC_H
+
+#include "qom/object.h"
+#include "hw/sysbus.h"
+#include "qemu/uuid.h"
+
+/**
+ * Object model
+ * @{
+ */
+#define TYPE_AW_SRAMC "allwinner-sramc"
+#define TYPE_AW_SRAMC_SUN8I_R40 TYPE_AW_SRAMC "-sun8i-r40"
+OBJECT_DECLARE_TYPE(AwSRAMCState, AwSRAMCClass, AW_SRAMC)
+
+/** @} */
+
+/**
+ * Allwinner SRAMC object instance state
+ */
+struct AwSRAMCState {
+    /*< private >*/
+    SysBusDevice parent_obj;
+    /*< public >*/
+
+    /** Maps I/O registers in physical memory */
+    MemoryRegion iomem;
+
+    /* registers */
+    uint32_t sram_ctl1;
+    uint32_t sram_ver;
+    uint32_t sram_soft_entry_reg0;
+};
+
+/**
+ * Allwinner SRAM Controller class-level struct.
+ *
+ * This struct is filled by each sunxi device specific code
+ * such that the generic code can use this struct to support
+ * all devices.
+ */
+struct AwSRAMCClass {
+    /*< private >*/
+    SysBusDeviceClass parent_class;
+    /*< public >*/
+
+    uint32_t sram_version_code;
+};
+
+#endif /* HW_MISC_ALLWINNER_SRAMC_H */
diff --git a/hw/arm/allwinner-r40.c b/hw/arm/allwinner-r40.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-r40.c
+++ b/hw/arm/allwinner-r40.c
@@ -XXX,XX +XXX,XX @@ const hwaddr allwinner_r40_memmap[] = {
     [AW_R40_DEV_SRAM_A2]    = 0x00004000,
     [AW_R40_DEV_SRAM_A3]    = 0x00008000,
     [AW_R40_DEV_SRAM_A4]    = 0x0000b400,
+    [AW_R40_DEV_SRAMC]      = 0x01c00000,
     [AW_R40_DEV_EMAC]       = 0x01c0b000,
     [AW_R40_DEV_MMC0]       = 0x01c0f000,
     [AW_R40_DEV_MMC1]       = 0x01c10000,
@@ -XXX,XX +XXX,XX @@ struct AwR40Unimplemented {
 static struct AwR40Unimplemented r40_unimplemented[] = {
     { "d-engine",  0x01000000, 4 * MiB },
     { "d-inter",   0x01400000, 128 * KiB },
-    { "sram-c",    0x01c00000, 4 * KiB },
     { "dma",       0x01c02000, 4 * KiB },
     { "nfdc",      0x01c03000, 4 * KiB },
     { "ts",        0x01c04000, 4 * KiB },
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_init(Object *obj)
                               "ram-addr");
     object_property_add_alias(obj, "ram-size", OBJECT(&s->dramc),
                               "ram-size");
+
+    object_initialize_child(obj, "sramc", &s->sramc, TYPE_AW_SRAMC_SUN8I_R40);
 }
 
 static void allwinner_r40_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void allwinner_r40_realize(DeviceState *dev, Error **errp)
                                        AW_R40_GIC_SPI_TIMER1));
 
     /* SRAM */
+    sysbus_realize(SYS_BUS_DEVICE(&s->sramc), &error_fatal);
+    sysbus_mmio_map(SYS_BUS_DEVICE(&s->sramc), 0, s->memmap[AW_R40_DEV_SRAMC]);
+
     memory_region_init_ram(&s->sram_a1, OBJECT(dev), "sram A1",
                            16 * KiB, &error_abort);
     memory_region_init_ram(&s->sram_a2, OBJECT(dev), "sram A2",
diff --git a/hw/misc/allwinner-sramc.c b/hw/misc/allwinner-sramc.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/misc/allwinner-sramc.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Allwinner R40 SRAM controller emulation
+ *
+ * Copyright (C) 2023 qianfan Zhao <qianfanguijin@163.com>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/units.h"
+#include "hw/sysbus.h"
+#include "migration/vmstate.h"
+#include "qemu/log.h"
+#include "qemu/module.h"
+#include "qapi/error.h"
+#include "hw/qdev-properties.h"
+#include "hw/qdev-properties-system.h"
+#include "hw/misc/allwinner-sramc.h"
+#include "trace.h"
+
+/*
+ * register offsets
+ * https://linux-sunxi.org/SRAM_Controller_Register_Guide
+ */
+enum {
+    REG_SRAM_CTL1_CFG            = 0x04, /* SRAM Control register 1 */
+    REG_SRAM_VER                 = 0x24, /* SRAM Version register */
+    REG_SRAM_R40_SOFT_ENTRY_REG0 = 0xbc,
+};
+
+/* REG_SRAMC_VERSION bit defines */
+#define SRAM_VER_READ_ENABLE        (1 << 15)
+#define SRAM_VER_VERSION_SHIFT      16
+#define SRAM_VERSION_SUN8I_R40      0x1701
+
+static uint64_t allwinner_sramc_read(void *opaque, hwaddr offset,
+                                     unsigned size)
+{
+    AwSRAMCState *s = AW_SRAMC(opaque);
+    AwSRAMCClass *sc = AW_SRAMC_GET_CLASS(s);
+    uint64_t val = 0;
+
+    switch (offset) {
+    case REG_SRAM_CTL1_CFG:
+        val = s->sram_ctl1;
+        break;
+    case REG_SRAM_VER:
+        /* bit15: lock bit, set this bit before reading this register */
+        if (s->sram_ver & SRAM_VER_READ_ENABLE) {
+            val = SRAM_VER_READ_ENABLE |
+                  (sc->sram_version_code << SRAM_VER_VERSION_SHIFT);
+        }
+        break;
+    case REG_SRAM_R40_SOFT_ENTRY_REG0:
+        val = s->sram_soft_entry_reg0;
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        return 0;
+    }
+
+    trace_allwinner_sramc_read(offset, val);
+
+    return val;
+}
+
+static void allwinner_sramc_write(void *opaque, hwaddr offset,
+                                  uint64_t val, unsigned size)
+{
+    AwSRAMCState *s = AW_SRAMC(opaque);
+
+    trace_allwinner_sramc_write(offset, val);
+
+    switch (offset) {
+    case REG_SRAM_CTL1_CFG:
+        s->sram_ctl1 = val;
+        break;
+    case REG_SRAM_VER:
+        /* Only the READ_ENABLE bit is writeable */
+        s->sram_ver = val & SRAM_VER_READ_ENABLE;
+        break;
+    case REG_SRAM_R40_SOFT_ENTRY_REG0:
+        s->sram_soft_entry_reg0 = val;
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: out-of-bounds offset 0x%04x\n",
+                      __func__, (uint32_t)offset);
+        break;
+    }
+}
+
+static const MemoryRegionOps allwinner_sramc_ops = {
+    .read = allwinner_sramc_read,
+    .write = allwinner_sramc_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 4,
+        .max_access_size = 4,
+    },
+    .impl.min_access_size = 4,
+};
+
+static const VMStateDescription allwinner_sramc_vmstate = {
+    .name = "allwinner-sramc",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(sram_ver, AwSRAMCState),
+        VMSTATE_UINT32(sram_soft_entry_reg0, AwSRAMCState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void allwinner_sramc_reset(DeviceState *dev)
+{
+    AwSRAMCState *s = AW_SRAMC(dev);
+    AwSRAMCClass *sc = AW_SRAMC_GET_CLASS(s);
+
+    switch (sc->sram_version_code) {
+    case SRAM_VERSION_SUN8I_R40:
+        s->sram_ctl1 = 0x1300;
+        break;
+    }
+}
+
+static void allwinner_sramc_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->reset = allwinner_sramc_reset;
+    dc->vmsd = &allwinner_sramc_vmstate;
+}
+
+static void allwinner_sramc_init(Object *obj)
+{
+    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+    AwSRAMCState *s = AW_SRAMC(obj);
+
+    /* Memory mapping */
+    memory_region_init_io(&s->iomem, OBJECT(s), &allwinner_sramc_ops, s,
+                          TYPE_AW_SRAMC, 1 * KiB);
+    sysbus_init_mmio(sbd, &s->iomem);
+}
+
+static const TypeInfo allwinner_sramc_info = {
+    .name          = TYPE_AW_SRAMC,
+    .parent        = TYPE_SYS_BUS_DEVICE,
+    .instance_init = allwinner_sramc_init,
+    .instance_size = sizeof(AwSRAMCState),
+    .class_init    = allwinner_sramc_class_init,
+};
+
+static void allwinner_r40_sramc_class_init(ObjectClass *klass, void *data)
+{
+    AwSRAMCClass *sc = AW_SRAMC_CLASS(klass);
+
+    sc->sram_version_code = SRAM_VERSION_SUN8I_R40;
+}
+
+static const TypeInfo allwinner_r40_sramc_info = {
+    .name       = TYPE_AW_SRAMC_SUN8I_R40,
+    .parent     = TYPE_AW_SRAMC,
+    .class_init = allwinner_r40_sramc_class_init,
+};
+
+static void allwinner_sramc_register(void)
+{
+    type_register_static(&allwinner_sramc_info);
+    type_register_static(&allwinner_r40_sramc_info);
+}
+
+type_init(allwinner_sramc_register)
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config ALLWINNER_H3
 config ALLWINNER_R40
     bool
     default y if TCG && ARM
+    select ALLWINNER_SRAMC
     select ALLWINNER_A10_PIT
     select AXP2XX_PMU
     select SERIAL
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/Kconfig
+++ b/hw/misc/Kconfig
@@ -XXX,XX +XXX,XX @@ config VIRT_CTRL
 config LASI
     bool
 
+config ALLWINNER_SRAMC
+    bool
+
 config ALLWINNER_A10_CCM
     bool
 
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ subdir('macio')
 
 softmmu_ss.add(when: 'CONFIG_IVSHMEM_DEVICE', if_true: files('ivshmem.c'))
 
+softmmu_ss.add(when: 'CONFIG_ALLWINNER_SRAMC', if_true: files('allwinner-sramc.c'))
 softmmu_ss.add(when: 'CONFIG_ALLWINNER_A10_CCM', if_true: files('allwinner-a10-ccm.c'))
 softmmu_ss.add(when: 'CONFIG_ALLWINNER_A10_DRAMC', if_true: files('allwinner-a10-dramc.c'))
 softmmu_ss.add(when: 'CONFIG_ALLWINNER_H3', if_true: files('allwinner-h3-ccu.c'))
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/trace-events
+++ b/hw/misc/trace-events
@@ -XXX,XX +XXX,XX @@ allwinner_r40_dramphy_write(uint64_t offset, uint64_t data, unsigned size) "writ
 allwinner_sid_read(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
 allwinner_sid_write(uint64_t offset, uint64_t data, unsigned size) "offset 0x%" PRIx64 " data 0x%" PRIx64 " size %" PRIu32
 
+# allwinner-sramc.c
+allwinner_sramc_read(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64
+allwinner_sramc_write(uint64_t offset, uint64_t data) "offset 0x%" PRIx64 " data 0x%" PRIx64
+
 # avr_power.c
 avr_power_read(uint8_t value) "power_reduc read value:%u"
 avr_power_write(uint8_t value) "power_reduc write value:%u"
-- 
2.34.1
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: qianfan Zhao <qianfanguijin@163.com>
2
2
3
Update the reset value to match the latest ZynqMP register spec.
3
Add test case for booting from initrd and sd card.
4
4
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
5
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
6
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
6
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
8
Message-id: c03e51d041db7f055596084891aeb1e856e32b9f.1513104804.git.alistair.francis@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
hw/ssi/xilinx_spips.c | 1 +
10
tests/avocado/boot_linux_console.py | 176 ++++++++++++++++++++++++++++
12
1 file changed, 1 insertion(+)
11
1 file changed, 176 insertions(+)
13
12
14
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
13
diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/xilinx_spips.c
15
--- a/tests/avocado/boot_linux_console.py
17
+++ b/hw/ssi/xilinx_spips.c
16
+++ b/tests/avocado/boot_linux_console.py
18
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
17
@@ -XXX,XX +XXX,XX @@ def test_arm_quanta_gsj_initrd(self):
19
s->regs[R_GQSPI_RX_THRESH] = 1;
18
self.wait_for_console_pattern(
20
s->regs[R_GQSPI_GFIFO_THRESH] = 1;
19
'Give root password for system maintenance')
21
s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
20
22
+ s->regs[R_MOD_ID] = 0x01090101;
21
+ def test_arm_bpim2u(self):
23
s->man_start_com_g = false;
22
+ """
24
s->gqspi_irqline = 0;
23
+ :avocado: tags=arch:arm
25
xlnx_zynqmp_qspips_update_ixr(s);
24
+ :avocado: tags=machine:bpim2u
25
+ :avocado: tags=accel:tcg
26
+ """
27
+ deb_url = ('https://apt.armbian.com/pool/main/l/linux-5.10.16-sunxi/'
28
+ 'linux-image-current-sunxi_21.02.2_armhf.deb')
29
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
30
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
31
+ kernel_path = self.extract_from_deb(deb_path,
32
+ '/boot/vmlinuz-5.10.16-sunxi')
33
+ dtb_path = ('/usr/lib/linux-image-current-sunxi/'
34
+ 'sun8i-r40-bananapi-m2-ultra.dtb')
35
+ dtb_path = self.extract_from_deb(deb_path, dtb_path)
36
+
37
+ self.vm.set_console()
38
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
39
+ 'console=ttyS0,115200n8 '
40
+ 'earlycon=uart,mmio32,0x1c28000')
41
+ self.vm.add_args('-kernel', kernel_path,
42
+ '-dtb', dtb_path,
43
+ '-append', kernel_command_line)
44
+ self.vm.launch()
45
+ console_pattern = 'Kernel command line: %s' % kernel_command_line
46
+ self.wait_for_console_pattern(console_pattern)
47
+
48
+ def test_arm_bpim2u_initrd(self):
49
+ """
50
+ :avocado: tags=arch:arm
51
+ :avocado: tags=accel:tcg
52
+ :avocado: tags=machine:bpim2u
53
+ """
54
+ deb_url = ('https://apt.armbian.com/pool/main/l/linux-5.10.16-sunxi/'
55
+ 'linux-image-current-sunxi_21.02.2_armhf.deb')
56
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
57
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
58
+ kernel_path = self.extract_from_deb(deb_path,
59
+ '/boot/vmlinuz-5.10.16-sunxi')
60
+ dtb_path = ('/usr/lib/linux-image-current-sunxi/'
61
+ 'sun8i-r40-bananapi-m2-ultra.dtb')
62
+ dtb_path = self.extract_from_deb(deb_path, dtb_path)
63
+ initrd_url = ('https://github.com/groeck/linux-build-test/raw/'
64
+ '2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
65
+ 'arm/rootfs-armv7a.cpio.gz')
66
+ initrd_hash = '604b2e45cdf35045846b8bbfbf2129b1891bdc9c'
67
+ initrd_path_gz = self.fetch_asset(initrd_url, asset_hash=initrd_hash)
68
+ initrd_path = os.path.join(self.workdir, 'rootfs.cpio')
69
+ archive.gzip_uncompress(initrd_path_gz, initrd_path)
70
+
71
+ self.vm.set_console()
72
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
73
+ 'console=ttyS0,115200 '
74
+ 'panic=-1 noreboot')
75
+ self.vm.add_args('-kernel', kernel_path,
76
+ '-dtb', dtb_path,
77
+ '-initrd', initrd_path,
78
+ '-append', kernel_command_line,
79
+ '-no-reboot')
80
+ self.vm.launch()
81
+ self.wait_for_console_pattern('Boot successful.')
82
+
83
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
84
+ 'Allwinner sun8i Family')
85
+ exec_command_and_wait_for_pattern(self, 'cat /proc/iomem',
86
+ 'system-control@1c00000')
87
+ exec_command_and_wait_for_pattern(self, 'reboot',
88
+ 'reboot: Restarting system')
89
+ # Wait for VM to shut down gracefully
90
+ self.vm.wait()
91
+
92
+ def test_arm_bpim2u_gmac(self):
93
+ """
94
+ :avocado: tags=arch:arm
95
+ :avocado: tags=accel:tcg
96
+ :avocado: tags=machine:bpim2u
97
+ :avocado: tags=device:sd
98
+ """
99
+ self.require_netdev('user')
100
+
101
+ deb_url = ('https://apt.armbian.com/pool/main/l/linux-5.10.16-sunxi/'
102
+ 'linux-image-current-sunxi_21.02.2_armhf.deb')
103
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
104
+ deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
105
+ kernel_path = self.extract_from_deb(deb_path,
106
+ '/boot/vmlinuz-5.10.16-sunxi')
107
+ dtb_path = ('/usr/lib/linux-image-current-sunxi/'
108
+ 'sun8i-r40-bananapi-m2-ultra.dtb')
109
+ dtb_path = self.extract_from_deb(deb_path, dtb_path)
110
+ rootfs_url = ('http://storage.kernelci.org/images/rootfs/buildroot/'
111
+ 'buildroot-baseline/20221116.0/armel/rootfs.ext2.xz')
112
+ rootfs_hash = 'fae32f337c7b87547b10f42599acf109da8b6d9a'
113
+ rootfs_path_xz = self.fetch_asset(rootfs_url, asset_hash=rootfs_hash)
114
+ rootfs_path = os.path.join(self.workdir, 'rootfs.cpio')
115
+ archive.lzma_uncompress(rootfs_path_xz, rootfs_path)
116
+ image_pow2ceil_expand(rootfs_path)
117
+
118
+ self.vm.set_console()
119
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
120
+ 'console=ttyS0,115200 '
121
+ 'root=/dev/mmcblk0 rootwait rw '
122
+ 'panic=-1 noreboot')
123
+ self.vm.add_args('-kernel', kernel_path,
124
+ '-dtb', dtb_path,
125
+ '-drive', 'file=' + rootfs_path + ',if=sd,format=raw',
126
+ '-net', 'nic,model=gmac,netdev=host_gmac',
+ '-netdev', 'user,id=host_gmac',
+ '-append', kernel_command_line,
+ '-no-reboot')
+ self.vm.launch()
+ shell_ready = "/bin/sh: can't access tty; job control turned off"
+ self.wait_for_console_pattern(shell_ready)
+
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
+ 'Allwinner sun8i Family')
+ exec_command_and_wait_for_pattern(self, 'cat /proc/partitions',
+ 'mmcblk0')
+ exec_command_and_wait_for_pattern(self, 'ifconfig eth0 up',
+ 'eth0: Link is Up')
+ exec_command_and_wait_for_pattern(self, 'udhcpc eth0',
+ 'udhcpc: lease of 10.0.2.15 obtained')
+ exec_command_and_wait_for_pattern(self, 'ping -c 3 10.0.2.2',
+ '3 packets transmitted, 3 packets received, 0% packet loss')
+ exec_command_and_wait_for_pattern(self, 'reboot',
+ 'reboot: Restarting system')
+ # Wait for VM to shut down gracefully
+ self.vm.wait()
+
+ @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
+ def test_arm_bpim2u_openwrt_22_03_3(self):
+ """
+ :avocado: tags=arch:arm
+ :avocado: tags=machine:bpim2u
+ :avocado: tags=device:sd
+ """
+
+ # This test downloads an 8.9 MiB compressed image and expands it
+ # to 127 MiB.
+ image_url = ('https://downloads.openwrt.org/releases/22.03.3/targets/'
+ 'sunxi/cortexa7/openwrt-22.03.3-sunxi-cortexa7-'
+ 'sinovoip_bananapi-m2-ultra-ext4-sdcard.img.gz')
+ image_hash = ('5b41b4e11423e562c6011640f9a7cd3b'
+ 'dd0a3d42b83430f7caa70a432e6cd82c')
+ image_path_gz = self.fetch_asset(image_url, asset_hash=image_hash,
+ algorithm='sha256')
+ image_path = archive.extract(image_path_gz, self.workdir)
+ image_pow2ceil_expand(image_path)
+
+ self.vm.set_console()
+ self.vm.add_args('-drive', 'file=' + image_path + ',if=sd,format=raw',
+ '-nic', 'user',
+ '-no-reboot')
+ self.vm.launch()
+
+ kernel_command_line = (self.KERNEL_COMMON_COMMAND_LINE +
+ 'usbcore.nousb '
+ 'noreboot')
+
+ self.wait_for_console_pattern('U-Boot SPL')
+
+ interrupt_interactive_console_until_pattern(
+ self, 'Hit any key to stop autoboot:', '=>')
+ exec_command_and_wait_for_pattern(self, "setenv extraargs '" +
+ kernel_command_line + "'", '=>')
+ exec_command_and_wait_for_pattern(self, 'boot', 'Starting kernel ...')
+
+ self.wait_for_console_pattern(
+ 'Please press Enter to activate this console.')
+
+ exec_command_and_wait_for_pattern(self, ' ', 'root@')
+
+ exec_command_and_wait_for_pattern(self, 'cat /proc/cpuinfo',
+ 'Allwinner sun8i Family')
+ exec_command_and_wait_for_pattern(self, 'cat /proc/iomem',
+ 'system-control@1c00000')
+
def test_arm_orangepi(self):
198
"""
199
:avocado: tags=arch:arm
--
2.34.1
diff view generated by jsdifflib
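The test above relies on ``image_pow2ceil_expand`` to grow the 127 MiB OpenWrt image to the next power of two before attaching it as an SD card. The rounding step can be sketched as follows (a minimal illustration only; ``pow2ceil`` is a name introduced here, not the helper from the avocado test suite):

```python
def pow2ceil(n: int) -> int:
    """Round n up to the next power of two (identity if already one)."""
    assert n > 0
    return 1 << (n - 1).bit_length()

# A 127 MiB image would be grown to a 128 MiB SD card image:
print(pow2ceil(127 * 1024 * 1024) // (1024 * 1024))  # → 128
```

The expand helper then simply truncates/extends the file to that size, which is why the docs recommend only growing images, never shrinking them.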
1
From: Eric Auger <eric.auger@redhat.com>
1
From: qianfan Zhao <qianfanguijin@163.com>
2
2
3
Update headers against v4.15-rc1.
3
Add documents for Banana Pi M2U
4
4
5
Signed-off-by: Eric Auger <eric.auger@redhat.com>
5
Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
6
Message-id: 1511883692-11511-4-git-send-email-eric.auger@redhat.com
6
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>
7
[PMM: Minor format fixes to correct sphinx errors]
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
---
9
include/standard-headers/asm-s390/virtio-ccw.h | 1 +
10
docs/system/arm/bananapi_m2u.rst | 139 +++++++++++++++++++++++++++++++
10
include/standard-headers/asm-x86/hyperv.h | 394 +--------------------
11
docs/system/target-arm.rst | 1 +
11
include/standard-headers/linux/input-event-codes.h | 2 +
12
2 files changed, 140 insertions(+)
12
include/standard-headers/linux/input.h | 1 +
13
create mode 100644 docs/system/arm/bananapi_m2u.rst
13
include/standard-headers/linux/pci_regs.h | 45 ++-
14
linux-headers/asm-arm/kvm.h | 8 +
15
linux-headers/asm-arm/kvm_para.h | 1 +
16
linux-headers/asm-arm/unistd.h | 2 +
17
linux-headers/asm-arm64/kvm.h | 8 +
18
linux-headers/asm-arm64/unistd.h | 1 +
19
linux-headers/asm-powerpc/epapr_hcalls.h | 1 +
20
linux-headers/asm-powerpc/kvm.h | 1 +
21
linux-headers/asm-powerpc/kvm_para.h | 1 +
22
linux-headers/asm-powerpc/unistd.h | 1 +
23
linux-headers/asm-s390/kvm.h | 1 +
24
linux-headers/asm-s390/kvm_para.h | 1 +
25
linux-headers/asm-s390/unistd.h | 4 +-
26
linux-headers/asm-x86/kvm.h | 1 +
27
linux-headers/asm-x86/kvm_para.h | 2 +-
28
linux-headers/asm-x86/unistd.h | 1 +
29
linux-headers/linux/kvm.h | 2 +
30
linux-headers/linux/kvm_para.h | 1 +
31
linux-headers/linux/psci.h | 1 +
32
linux-headers/linux/userfaultfd.h | 1 +
33
linux-headers/linux/vfio.h | 1 +
34
linux-headers/linux/vfio_ccw.h | 1 +
35
linux-headers/linux/vhost.h | 1 +
36
27 files changed, 74 insertions(+), 411 deletions(-)
37
14
38
diff --git a/include/standard-headers/asm-s390/virtio-ccw.h b/include/standard-headers/asm-s390/virtio-ccw.h
15
diff --git a/docs/system/arm/bananapi_m2u.rst b/docs/system/arm/bananapi_m2u.rst
16
new file mode 100644
17
index XXXXXXX..XXXXXXX
18
--- /dev/null
19
+++ b/docs/system/arm/bananapi_m2u.rst
20
@@ -XXX,XX +XXX,XX @@
21
+Banana Pi BPI-M2U (``bpim2u``)
22
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
23
+
24
+Banana Pi BPI-M2 Ultra is a quad-core mini single board computer built with
25
+Allwinner A40i/R40/V40 SoC. It features 2GB of RAM and 8GB eMMC. It also
26
+has onboard WiFi and BT. On the ports side, the BPI-M2 Ultra has 2 USB A
27
+2.0 ports, 1 USB OTG port, 1 HDMI port, 1 audio jack, a DC power port,
28
+and last but not least, a SATA port.
29
+
30
+Supported devices
31
+"""""""""""""""""
32
+
33
+The Banana Pi M2U machine supports the following devices:
34
+
35
+ * SMP (Quad Core Cortex-A7)
36
+ * Generic Interrupt Controller configuration
37
+ * SRAM mappings
38
+ * SDRAM controller
39
+ * Timer device (re-used from Allwinner A10)
40
+ * UART
41
+ * SD/MMC storage controller
42
+ * EMAC ethernet
43
+ * GMAC ethernet
44
+ * Clock Control Unit
45
+ * TWI (I2C)
46
+
47
+Limitations
48
+"""""""""""
49
+
50
+Currently, Banana Pi M2U does *not* support the following features:
51
+
52
+- Graphical output via HDMI, GPU and/or the Display Engine
53
+- Audio output
54
+- Hardware Watchdog
55
+- Real Time Clock
56
+- USB 2.0 interfaces
57
+
58
+Also see the 'unimplemented' array in the Allwinner R40 SoC module
59
+for a complete list of unimplemented I/O devices: ``./hw/arm/allwinner-r40.c``
60
+
61
+Boot options
62
+""""""""""""
63
+
64
+The Banana Pi M2U machine can start using the standard -kernel functionality
65
+for loading a Linux kernel or ELF executable. Additionally, the Banana Pi M2U
66
+machine can also emulate the BootROM which is present on an actual Allwinner R40
67
+based SoC, which loads the bootloader from a SD card, specified via the -sd
68
+argument to qemu-system-arm.
69
+
70
+Running mainline Linux
71
+""""""""""""""""""""""
72
+
73
+To build a Linux mainline kernel that can be booted by the Banana Pi M2U machine,
74
+simply configure the kernel using the sunxi_defconfig configuration:
75
+
76
+.. code-block:: bash
77
+
78
+ $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make mrproper
79
+ $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make sunxi_defconfig
80
+
81
+To boot the newly built Linux kernel in QEMU with the Banana Pi M2U machine, use:
82
+
83
+.. code-block:: bash
84
+
85
+ $ qemu-system-arm -M bpim2u -nographic \
86
+ -kernel /path/to/linux/arch/arm/boot/zImage \
87
+ -append 'console=ttyS0,115200' \
88
+ -dtb /path/to/linux/arch/arm/boot/dts/sun8i-r40-bananapi-m2-ultra.dtb
89
+
90
+Banana Pi M2U images
91
+""""""""""""""""""""
92
+
93
+Note that the mainline kernel does not have a root filesystem. You can choose
94
+to build your own image with buildroot using the bananapi_m2_ultra_defconfig.
95
+Also see https://buildroot.org for more information.
96
+
97
+Another possibility is to run an OpenWrt image for Banana Pi M2U which
98
+can be downloaded from:
99
+
100
+ https://downloads.openwrt.org/releases/22.03.3/targets/sunxi/cortexa7/
101
+
102
+When using an image as an SD card, it must be resized to a power of two. This can be
103
+done with the ``qemu-img`` command. It is recommended to only increase the image size
104
+instead of shrinking it to a power of two, to avoid loss of data. For example,
105
+to prepare the downloaded OpenWrt image, first extract it and then increase
106
+its size to one gigabyte as follows:
107
+
108
+.. code-block:: bash
109
+
110
+ $ qemu-img resize \
111
+ openwrt-22.03.3-sunxi-cortexa7-sinovoip_bananapi-m2-ultra-ext4-sdcard.img \
112
+ 1G
113
+
114
+Instead of providing a custom Linux kernel via the -kernel command you may also
115
+choose to let the Banana Pi M2U machine load the bootloader from SD card, just like
116
+a real board would do using the BootROM. Simply pass the selected image via the -sd
117
+argument and remove the -kernel, -append, -dtb and -initrd arguments:
118
+
119
+.. code-block:: bash
120
+
121
+ $ qemu-system-arm -M bpim2u -nic user -nographic \
122
+ -sd openwrt-22.03.3-sunxi-cortexa7-sinovoip_bananapi-m2-ultra-ext4-sdcard.img
123
+
124
+Running U-Boot
125
+""""""""""""""
126
+
127
+U-Boot mainline can be built and configured using the Bananapi_M2_Ultra_defconfig
128
+using similar commands as described above for Linux. Note that it is recommended
129
+for development/testing to select the following configuration setting in U-Boot:
130
+
131
+ Device Tree Control > Provider for DTB for DT Control > Embedded DTB
132
+
133
+The BootROM of the Allwinner R40 loads U-Boot from an 8 KiB offset on the SD card.
134
+Let's create a bootable disk image:
135
+
136
+.. code-block:: bash
137
+
138
+ $ dd if=/dev/zero of=sd.img bs=32M count=1
139
+ $ dd if=u-boot-sunxi-with-spl.bin of=sd.img bs=1k seek=8 conv=notrunc
140
+
141
+And then boot it.
142
+
143
+.. code-block:: bash
144
+
145
+ $ qemu-system-arm -M bpim2u -nographic -sd sd.img
146
+
147
+Banana Pi M2U integration tests
148
+"""""""""""""""""""""""""""""""
149
+
150
+The Banana Pi M2U machine has several integration tests included.
151
+To run the whole set of tests, build QEMU from source and simply
152
+provide the following command:
153
+
154
+.. code-block:: bash
155
+
156
+ $ cd qemu-build-dir
157
+ $ AVOCADO_ALLOW_LARGE_STORAGE=yes tests/venv/bin/avocado \
158
+ --verbose --show=app,console run -t machine:bpim2u \
159
+ ../tests/avocado/boot_linux_console.py
160
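The two ``dd`` commands in the U-Boot section above amount to a simple byte-level layout: a zeroed 32 MiB file with the SPL binary spliced in at the 8 KiB offset where the Allwinner R40 BootROM looks for it. A rough Python equivalent (``make_sd_image`` is a name introduced here for illustration, not part of QEMU):

```python
def make_sd_image(spl: bytes, size: int = 32 * 1024 * 1024,
                  offset: int = 8 * 1024) -> bytes:
    """Mimic: dd if=/dev/zero of=sd.img bs=32M count=1
              dd if=spl.bin of=sd.img bs=1k seek=8 conv=notrunc"""
    assert offset + len(spl) <= size
    img = bytearray(size)                  # zero-filled 32 MiB image
    img[offset:offset + len(spl)] = spl    # overwrite at 8 KiB, keep size
    return bytes(img)
```

``conv=notrunc`` in the second ``dd`` matters: it overwrites bytes in place instead of truncating the image at the end of the SPL.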
diff --git a/docs/system/target-arm.rst b/docs/system/target-arm.rst
39
index XXXXXXX..XXXXXXX 100644
161
index XXXXXXX..XXXXXXX 100644
40
--- a/include/standard-headers/asm-s390/virtio-ccw.h
162
--- a/docs/system/target-arm.rst
41
+++ b/include/standard-headers/asm-s390/virtio-ccw.h
163
+++ b/docs/system/target-arm.rst
42
@@ -XXX,XX +XXX,XX @@
164
@@ -XXX,XX +XXX,XX @@ undocumented; you can get a complete list by running
43
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
165
arm/versatile
44
/*
166
arm/vexpress
45
* Definitions for virtio-ccw devices.
167
arm/aspeed
46
*
168
+ arm/bananapi_m2u.rst
47
diff --git a/include/standard-headers/asm-x86/hyperv.h b/include/standard-headers/asm-x86/hyperv.h
169
arm/sabrelite
48
index XXXXXXX..XXXXXXX 100644
170
arm/digic
49
--- a/include/standard-headers/asm-x86/hyperv.h
171
arm/cubieboard
50
+++ b/include/standard-headers/asm-x86/hyperv.h
51
@@ -1,393 +1 @@
52
-#ifndef _ASM_X86_HYPERV_H
53
-#define _ASM_X86_HYPERV_H
54
-
55
-#include "standard-headers/linux/types.h"
56
-
57
-/*
58
- * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
59
- * is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
60
- */
61
-#define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS    0x40000000
62
-#define HYPERV_CPUID_INTERFACE            0x40000001
63
-#define HYPERV_CPUID_VERSION            0x40000002
64
-#define HYPERV_CPUID_FEATURES            0x40000003
65
-#define HYPERV_CPUID_ENLIGHTMENT_INFO        0x40000004
66
-#define HYPERV_CPUID_IMPLEMENT_LIMITS        0x40000005
67
-
68
-#define HYPERV_HYPERVISOR_PRESENT_BIT        0x80000000
69
-#define HYPERV_CPUID_MIN            0x40000005
70
-#define HYPERV_CPUID_MAX            0x4000ffff
71
-
72
-/*
73
- * Feature identification. EAX indicates which features are available
74
- * to the partition based upon the current partition privileges.
75
- */
76
-
77
-/* VP Runtime (HV_X64_MSR_VP_RUNTIME) available */
78
-#define HV_X64_MSR_VP_RUNTIME_AVAILABLE        (1 << 0)
79
-/* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/
80
-#define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE    (1 << 1)
81
-/* Partition reference TSC MSR is available */
82
-#define HV_X64_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
83
-
84
-/* A partition's reference time stamp counter (TSC) page */
85
-#define HV_X64_MSR_REFERENCE_TSC        0x40000021
86
-
87
-/*
88
- * There is a single feature flag that signifies if the partition has access
89
- * to MSRs with local APIC and TSC frequencies.
90
- */
91
-#define HV_X64_ACCESS_FREQUENCY_MSRS        (1 << 11)
92
-
93
-/*
94
- * Basic SynIC MSRs (HV_X64_MSR_SCONTROL through HV_X64_MSR_EOM
95
- * and HV_X64_MSR_SINT0 through HV_X64_MSR_SINT15) available
96
- */
97
-#define HV_X64_MSR_SYNIC_AVAILABLE        (1 << 2)
98
-/*
99
- * Synthetic Timer MSRs (HV_X64_MSR_STIMER0_CONFIG through
100
- * HV_X64_MSR_STIMER3_COUNT) available
101
- */
102
-#define HV_X64_MSR_SYNTIMER_AVAILABLE        (1 << 3)
103
-/*
104
- * APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR)
105
- * are available
106
- */
107
-#define HV_X64_MSR_APIC_ACCESS_AVAILABLE    (1 << 4)
108
-/* Hypercall MSRs (HV_X64_MSR_GUEST_OS_ID and HV_X64_MSR_HYPERCALL) available*/
109
-#define HV_X64_MSR_HYPERCALL_AVAILABLE        (1 << 5)
110
-/* Access virtual processor index MSR (HV_X64_MSR_VP_INDEX) available*/
111
-#define HV_X64_MSR_VP_INDEX_AVAILABLE        (1 << 6)
112
-/* Virtual system reset MSR (HV_X64_MSR_RESET) is available*/
113
-#define HV_X64_MSR_RESET_AVAILABLE        (1 << 7)
114
- /*
115
- * Access statistics pages MSRs (HV_X64_MSR_STATS_PARTITION_RETAIL_PAGE,
116
- * HV_X64_MSR_STATS_PARTITION_INTERNAL_PAGE, HV_X64_MSR_STATS_VP_RETAIL_PAGE,
117
- * HV_X64_MSR_STATS_VP_INTERNAL_PAGE) available
118
- */
119
-#define HV_X64_MSR_STAT_PAGES_AVAILABLE        (1 << 8)
120
-
121
-/* Frequency MSRs available */
122
-#define HV_FEATURE_FREQUENCY_MSRS_AVAILABLE    (1 << 8)
123
-
124
-/* Crash MSR available */
125
-#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10)
126
-
127
-/*
128
- * Feature identification: EBX indicates which flags were specified at
129
- * partition creation. The format is the same as the partition creation
130
- * flag structure defined in section Partition Creation Flags.
131
- */
132
-#define HV_X64_CREATE_PARTITIONS        (1 << 0)
133
-#define HV_X64_ACCESS_PARTITION_ID        (1 << 1)
134
-#define HV_X64_ACCESS_MEMORY_POOL        (1 << 2)
135
-#define HV_X64_ADJUST_MESSAGE_BUFFERS        (1 << 3)
136
-#define HV_X64_POST_MESSAGES            (1 << 4)
137
-#define HV_X64_SIGNAL_EVENTS            (1 << 5)
138
-#define HV_X64_CREATE_PORT            (1 << 6)
139
-#define HV_X64_CONNECT_PORT            (1 << 7)
140
-#define HV_X64_ACCESS_STATS            (1 << 8)
141
-#define HV_X64_DEBUGGING            (1 << 11)
142
-#define HV_X64_CPU_POWER_MANAGEMENT        (1 << 12)
143
-#define HV_X64_CONFIGURE_PROFILER        (1 << 13)
144
-
145
-/*
146
- * Feature identification. EDX indicates which miscellaneous features
147
- * are available to the partition.
148
- */
149
-/* The MWAIT instruction is available (per section MONITOR / MWAIT) */
150
-#define HV_X64_MWAIT_AVAILABLE                (1 << 0)
151
-/* Guest debugging support is available */
152
-#define HV_X64_GUEST_DEBUGGING_AVAILABLE        (1 << 1)
153
-/* Performance Monitor support is available*/
154
-#define HV_X64_PERF_MONITOR_AVAILABLE            (1 << 2)
155
-/* Support for physical CPU dynamic partitioning events is available*/
156
-#define HV_X64_CPU_DYNAMIC_PARTITIONING_AVAILABLE    (1 << 3)
157
-/*
158
- * Support for passing hypercall input parameter block via XMM
159
- * registers is available
160
- */
161
-#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE        (1 << 4)
162
-/* Support for a virtual guest idle state is available */
163
-#define HV_X64_GUEST_IDLE_STATE_AVAILABLE        (1 << 5)
164
-/* Guest crash data handler available */
165
-#define HV_X64_GUEST_CRASH_MSR_AVAILABLE        (1 << 10)
166
-
167
-/*
168
- * Implementation recommendations. Indicates which behaviors the hypervisor
169
- * recommends the OS implement for optimal performance.
170
- */
171
- /*
172
- * Recommend using hypercall for address space switches rather
173
- * than MOV to CR3 instruction
174
- */
175
-#define HV_X64_AS_SWITCH_RECOMMENDED        (1 << 0)
176
-/* Recommend using hypercall for local TLB flushes rather
177
- * than INVLPG or MOV to CR3 instructions */
178
-#define HV_X64_LOCAL_TLB_FLUSH_RECOMMENDED    (1 << 1)
179
-/*
180
- * Recommend using hypercall for remote TLB flushes rather
181
- * than inter-processor interrupts
182
- */
183
-#define HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED    (1 << 2)
184
-/*
185
- * Recommend using MSRs for accessing APIC registers
186
- * EOI, ICR and TPR rather than their memory-mapped counterparts
187
- */
188
-#define HV_X64_APIC_ACCESS_RECOMMENDED        (1 << 3)
189
-/* Recommend using the hypervisor-provided MSR to initiate a system RESET */
190
-#define HV_X64_SYSTEM_RESET_RECOMMENDED        (1 << 4)
191
-/*
192
- * Recommend using relaxed timing for this partition. If used,
193
- * the VM should disable any watchdog timeouts that rely on the
194
- * timely delivery of external interrupts
195
- */
196
-#define HV_X64_RELAXED_TIMING_RECOMMENDED    (1 << 5)
197
-
198
-/*
199
- * Virtual APIC support
200
- */
201
-#define HV_X64_DEPRECATING_AEOI_RECOMMENDED    (1 << 9)
202
-
203
-/* Recommend using the newer ExProcessorMasks interface */
204
-#define HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED    (1 << 11)
205
-
206
-/*
207
- * Crash notification flag.
208
- */
209
-#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
210
-
211
-/* MSR used to identify the guest OS. */
212
-#define HV_X64_MSR_GUEST_OS_ID            0x40000000
213
-
214
-/* MSR used to setup pages used to communicate with the hypervisor. */
215
-#define HV_X64_MSR_HYPERCALL            0x40000001
216
-
217
-/* MSR used to provide vcpu index */
218
-#define HV_X64_MSR_VP_INDEX            0x40000002
219
-
220
-/* MSR used to reset the guest OS. */
221
-#define HV_X64_MSR_RESET            0x40000003
222
-
223
-/* MSR used to provide vcpu runtime in 100ns units */
224
-#define HV_X64_MSR_VP_RUNTIME            0x40000010
225
-
226
-/* MSR used to read the per-partition time reference counter */
227
-#define HV_X64_MSR_TIME_REF_COUNT        0x40000020
228
-
229
-/* MSR used to retrieve the TSC frequency */
230
-#define HV_X64_MSR_TSC_FREQUENCY        0x40000022
231
-
232
-/* MSR used to retrieve the local APIC timer frequency */
233
-#define HV_X64_MSR_APIC_FREQUENCY        0x40000023
234
-
235
-/* Define the virtual APIC registers */
236
-#define HV_X64_MSR_EOI                0x40000070
237
-#define HV_X64_MSR_ICR                0x40000071
238
-#define HV_X64_MSR_TPR                0x40000072
239
-#define HV_X64_MSR_APIC_ASSIST_PAGE        0x40000073
240
-
241
-/* Define synthetic interrupt controller model specific registers. */
242
-#define HV_X64_MSR_SCONTROL            0x40000080
243
-#define HV_X64_MSR_SVERSION            0x40000081
244
-#define HV_X64_MSR_SIEFP            0x40000082
245
-#define HV_X64_MSR_SIMP                0x40000083
246
-#define HV_X64_MSR_EOM                0x40000084
247
-#define HV_X64_MSR_SINT0            0x40000090
248
-#define HV_X64_MSR_SINT1            0x40000091
249
-#define HV_X64_MSR_SINT2            0x40000092
250
-#define HV_X64_MSR_SINT3            0x40000093
251
-#define HV_X64_MSR_SINT4            0x40000094
252
-#define HV_X64_MSR_SINT5            0x40000095
253
-#define HV_X64_MSR_SINT6            0x40000096
254
-#define HV_X64_MSR_SINT7            0x40000097
255
-#define HV_X64_MSR_SINT8            0x40000098
256
-#define HV_X64_MSR_SINT9            0x40000099
257
-#define HV_X64_MSR_SINT10            0x4000009A
258
-#define HV_X64_MSR_SINT11            0x4000009B
259
-#define HV_X64_MSR_SINT12            0x4000009C
260
-#define HV_X64_MSR_SINT13            0x4000009D
261
-#define HV_X64_MSR_SINT14            0x4000009E
262
-#define HV_X64_MSR_SINT15            0x4000009F
263
-
264
-/*
265
- * Synthetic Timer MSRs. Four timers per vcpu.
266
- */
267
-#define HV_X64_MSR_STIMER0_CONFIG        0x400000B0
268
-#define HV_X64_MSR_STIMER0_COUNT        0x400000B1
269
-#define HV_X64_MSR_STIMER1_CONFIG        0x400000B2
270
-#define HV_X64_MSR_STIMER1_COUNT        0x400000B3
271
-#define HV_X64_MSR_STIMER2_CONFIG        0x400000B4
272
-#define HV_X64_MSR_STIMER2_COUNT        0x400000B5
273
-#define HV_X64_MSR_STIMER3_CONFIG        0x400000B6
274
-#define HV_X64_MSR_STIMER3_COUNT        0x400000B7
275
-
276
-/* Hyper-V guest crash notification MSR's */
277
-#define HV_X64_MSR_CRASH_P0            0x40000100
278
-#define HV_X64_MSR_CRASH_P1            0x40000101
279
-#define HV_X64_MSR_CRASH_P2            0x40000102
280
-#define HV_X64_MSR_CRASH_P3            0x40000103
281
-#define HV_X64_MSR_CRASH_P4            0x40000104
282
-#define HV_X64_MSR_CRASH_CTL            0x40000105
283
-#define HV_X64_MSR_CRASH_CTL_NOTIFY        (1ULL << 63)
284
-#define HV_X64_MSR_CRASH_PARAMS        \
285
-        (1 + (HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0))
286
-
287
-#define HV_X64_MSR_HYPERCALL_ENABLE        0x00000001
288
-#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT    12
289
-#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_MASK    \
290
-        (~((1ull << HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT) - 1))
291
-
292
-/* Declare the various hypercall operations. */
293
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE    0x0002
294
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST    0x0003
295
-#define HVCALL_NOTIFY_LONG_SPIN_WAIT        0x0008
296
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX 0x0013
297
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX 0x0014
298
-#define HVCALL_POST_MESSAGE            0x005c
299
-#define HVCALL_SIGNAL_EVENT            0x005d
300
-
301
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ENABLE        0x00000001
302
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT    12
303
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_MASK    \
304
-        (~((1ull << HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
305
-
306
-#define HV_X64_MSR_TSC_REFERENCE_ENABLE        0x00000001
307
-#define HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT    12
308
-
309
-#define HV_PROCESSOR_POWER_STATE_C0        0
310
-#define HV_PROCESSOR_POWER_STATE_C1        1
311
-#define HV_PROCESSOR_POWER_STATE_C2        2
312
-#define HV_PROCESSOR_POWER_STATE_C3        3
313
-
314
-#define HV_FLUSH_ALL_PROCESSORS            BIT(0)
315
-#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES    BIT(1)
316
-#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY    BIT(2)
317
-#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT    BIT(3)
318
-
319
-enum HV_GENERIC_SET_FORMAT {
320
-    HV_GENERIC_SET_SPARCE_4K,
321
-    HV_GENERIC_SET_ALL,
322
-};
323
-
324
-/* hypercall status code */
325
-#define HV_STATUS_SUCCESS            0
326
-#define HV_STATUS_INVALID_HYPERCALL_CODE    2
327
-#define HV_STATUS_INVALID_HYPERCALL_INPUT    3
328
-#define HV_STATUS_INVALID_ALIGNMENT        4
329
-#define HV_STATUS_INSUFFICIENT_MEMORY        11
330
-#define HV_STATUS_INVALID_CONNECTION_ID        18
331
-#define HV_STATUS_INSUFFICIENT_BUFFERS        19
332
-
333
-typedef struct _HV_REFERENCE_TSC_PAGE {
334
-    uint32_t tsc_sequence;
335
-    uint32_t res1;
336
-    uint64_t tsc_scale;
337
-    int64_t tsc_offset;
338
-} HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
339
-
340
-/* Define the number of synthetic interrupt sources. */
341
-#define HV_SYNIC_SINT_COUNT        (16)
342
-/* Define the expected SynIC version. */
343
-#define HV_SYNIC_VERSION_1        (0x1)
344
-
345
-#define HV_SYNIC_CONTROL_ENABLE        (1ULL << 0)
346
-#define HV_SYNIC_SIMP_ENABLE        (1ULL << 0)
347
-#define HV_SYNIC_SIEFP_ENABLE        (1ULL << 0)
348
-#define HV_SYNIC_SINT_MASKED        (1ULL << 16)
349
-#define HV_SYNIC_SINT_AUTO_EOI        (1ULL << 17)
350
-#define HV_SYNIC_SINT_VECTOR_MASK    (0xFF)
351
-
352
-#define HV_SYNIC_STIMER_COUNT        (4)
353
-
354
-/* Define synthetic interrupt controller message constants. */
355
-#define HV_MESSAGE_SIZE            (256)
356
-#define HV_MESSAGE_PAYLOAD_BYTE_COUNT    (240)
357
-#define HV_MESSAGE_PAYLOAD_QWORD_COUNT    (30)
358
-
359
-/* Define hypervisor message types. */
360
-enum hv_message_type {
361
-    HVMSG_NONE            = 0x00000000,
362
-
363
-    /* Memory access messages. */
364
-    HVMSG_UNMAPPED_GPA        = 0x80000000,
365
-    HVMSG_GPA_INTERCEPT        = 0x80000001,
366
-
367
-    /* Timer notification messages. */
368
-    HVMSG_TIMER_EXPIRED            = 0x80000010,
369
-
370
-    /* Error messages. */
371
-    HVMSG_INVALID_VP_REGISTER_VALUE    = 0x80000020,
372
-    HVMSG_UNRECOVERABLE_EXCEPTION    = 0x80000021,
373
-    HVMSG_UNSUPPORTED_FEATURE        = 0x80000022,
374
-
375
-    /* Trace buffer complete messages. */
376
-    HVMSG_EVENTLOG_BUFFERCOMPLETE    = 0x80000040,
377
-
378
-    /* Platform-specific processor intercept messages. */
379
-    HVMSG_X64_IOPORT_INTERCEPT        = 0x80010000,
380
-    HVMSG_X64_MSR_INTERCEPT        = 0x80010001,
381
-    HVMSG_X64_CPUID_INTERCEPT        = 0x80010002,
382
-    HVMSG_X64_EXCEPTION_INTERCEPT    = 0x80010003,
383
-    HVMSG_X64_APIC_EOI            = 0x80010004,
384
-    HVMSG_X64_LEGACY_FP_ERROR        = 0x80010005
385
-};
386
-
387
-/* Define synthetic interrupt controller message flags. */
388
-union hv_message_flags {
389
-    uint8_t asu8;
390
-    struct {
391
-        uint8_t msg_pending:1;
392
-        uint8_t reserved:7;
393
-    };
394
-};
395
-
396
-/* Define port identifier type. */
397
-union hv_port_id {
398
-    uint32_t asu32;
399
-    struct {
400
-        uint32_t id:24;
401
-        uint32_t reserved:8;
402
-    } u;
403
-};
404
-
405
-/* Define synthetic interrupt controller message header. */
406
-struct hv_message_header {
407
-    uint32_t message_type;
408
-    uint8_t payload_size;
409
-    union hv_message_flags message_flags;
410
-    uint8_t reserved[2];
411
-    union {
412
-        uint64_t sender;
413
-        union hv_port_id port;
414
-    };
415
-};
416
-
417
-/* Define synthetic interrupt controller message format. */
418
-struct hv_message {
419
-    struct hv_message_header header;
420
-    union {
421
-        uint64_t payload[HV_MESSAGE_PAYLOAD_QWORD_COUNT];
422
-    } u;
423
-};
424
-
425
-/* Define the synthetic interrupt message page layout. */
426
-struct hv_message_page {
427
-    struct hv_message sint_message[HV_SYNIC_SINT_COUNT];
428
-};
429
-
430
-/* Define timer message payload structure. */
431
-struct hv_timer_message_payload {
432
-    uint32_t timer_index;
433
-    uint32_t reserved;
434
-    uint64_t expiration_time;    /* When the timer expired */
435
-    uint64_t delivery_time;    /* When the message was delivered */
436
-};
437
-
438
-#define HV_STIMER_ENABLE        (1ULL << 0)
439
-#define HV_STIMER_PERIODIC        (1ULL << 1)
440
-#define HV_STIMER_LAZY            (1ULL << 2)
441
-#define HV_STIMER_AUTOENABLE        (1ULL << 3)
442
-#define HV_STIMER_SINT(config)        (uint8_t)(((config) >> 16) & 0x0F)
443
-
444
-#endif
445
+ /* this is a temporary placeholder until kvm_para.h stops including it */
446
diff --git a/include/standard-headers/linux/input-event-codes.h b/include/standard-headers/linux/input-event-codes.h
447
index XXXXXXX..XXXXXXX 100644
448
--- a/include/standard-headers/linux/input-event-codes.h
449
+++ b/include/standard-headers/linux/input-event-codes.h
450
@@ -XXX,XX +XXX,XX @@
451
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
452
/*
453
* Input event codes
454
*
455
@@ -XXX,XX +XXX,XX @@
456
#define BTN_TOOL_MOUSE        0x146
457
#define BTN_TOOL_LENS        0x147
458
#define BTN_TOOL_QUINTTAP    0x148    /* Five fingers on trackpad */
459
+#define BTN_STYLUS3        0x149
460
#define BTN_TOUCH        0x14a
461
#define BTN_STYLUS        0x14b
462
#define BTN_STYLUS2        0x14c
463
diff --git a/include/standard-headers/linux/input.h b/include/standard-headers/linux/input.h
464
index XXXXXXX..XXXXXXX 100644
465
--- a/include/standard-headers/linux/input.h
466
+++ b/include/standard-headers/linux/input.h
467
@@ -XXX,XX +XXX,XX @@
468
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
469
/*
470
* Copyright (c) 1999-2002 Vojtech Pavlik
471
*
472
diff --git a/include/standard-headers/linux/pci_regs.h b/include/standard-headers/linux/pci_regs.h
473
index XXXXXXX..XXXXXXX 100644
474
--- a/include/standard-headers/linux/pci_regs.h
475
+++ b/include/standard-headers/linux/pci_regs.h
476
@@ -XXX,XX +XXX,XX @@
477
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
478
/*
479
*    pci_regs.h
480
*
481
@@ -XXX,XX +XXX,XX @@
482
#define PCI_ERR_ROOT_FIRST_FATAL    0x00000010 /* First UNC is Fatal */
483
#define PCI_ERR_ROOT_NONFATAL_RCV    0x00000020 /* Non-Fatal Received */
484
#define PCI_ERR_ROOT_FATAL_RCV        0x00000040 /* Fatal Received */
485
+#define PCI_ERR_ROOT_AER_IRQ        0xf8000000 /* Advanced Error Interrupt Message Number */
486
#define PCI_ERR_ROOT_ERR_SRC    52    /* Error Source Identification */
487
488
/* Virtual Channel */
489
@@ -XXX,XX +XXX,XX @@
490
#define PCI_SATA_SIZEOF_LONG    16
491
492
/* Resizable BARs */
493
+#define PCI_REBAR_CAP        4    /* capability register */
494
+#define PCI_REBAR_CAP_SIZES        0x00FFFFF0 /* supported BAR sizes */
495
#define PCI_REBAR_CTRL        8    /* control register */
496
-#define PCI_REBAR_CTRL_NBAR_MASK    (7 << 5)    /* mask for # bars */
497
-#define PCI_REBAR_CTRL_NBAR_SHIFT    5    /* shift for # bars */
498
+#define PCI_REBAR_CTRL_BAR_IDX        0x00000007 /* BAR index */
499
+#define PCI_REBAR_CTRL_NBAR_MASK    0x000000E0 /* # of resizable BARs */
500
+#define PCI_REBAR_CTRL_NBAR_SHIFT    5      /* shift for # of BARs */
501
+#define PCI_REBAR_CTRL_BAR_SIZE    0x00001F00 /* BAR size */
502
503
/* Dynamic Power Allocation */
504
#define PCI_DPA_CAP        4    /* capability register */
505
@@ -XXX,XX +XXX,XX @@
506
507
/* Downstream Port Containment */
508
#define PCI_EXP_DPC_CAP            4    /* DPC Capability */
509
+#define PCI_EXP_DPC_IRQ            0x1f    /* DPC Interrupt Message Number */
510
#define PCI_EXP_DPC_CAP_RP_EXT        0x20    /* Root Port Extensions for DPC */
511
#define PCI_EXP_DPC_CAP_POISONED_TLP    0x40    /* Poisoned TLP Egress Blocking Supported */
512
#define PCI_EXP_DPC_CAP_SW_TRIGGER    0x80    /* Software Triggering Supported */
513
@@ -XXX,XX +XXX,XX @@
514
#define PCI_PTM_CTRL_ENABLE        0x00000001 /* PTM enable */
515
#define PCI_PTM_CTRL_ROOT        0x00000002 /* Root select */
516
517
-/* L1 PM Substates */
518
-#define PCI_L1SS_CAP         4    /* capability register */
519
-#define PCI_L1SS_CAP_PCIPM_L1_2     1    /* PCI PM L1.2 Support */
520
-#define PCI_L1SS_CAP_PCIPM_L1_1     2    /* PCI PM L1.1 Support */
521
-#define PCI_L1SS_CAP_ASPM_L1_2         4    /* ASPM L1.2 Support */
522
-#define PCI_L1SS_CAP_ASPM_L1_1         8    /* ASPM L1.1 Support */
523
-#define PCI_L1SS_CAP_L1_PM_SS        16    /* L1 PM Substates Support */
524
-#define PCI_L1SS_CTL1         8    /* Control Register 1 */
525
-#define PCI_L1SS_CTL1_PCIPM_L1_2    1    /* PCI PM L1.2 Enable */
526
-#define PCI_L1SS_CTL1_PCIPM_L1_1    2    /* PCI PM L1.1 Support */
527
-#define PCI_L1SS_CTL1_ASPM_L1_2    4    /* ASPM L1.2 Support */
528
-#define PCI_L1SS_CTL1_ASPM_L1_1    8    /* ASPM L1.1 Support */
529
-#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000F
530
-#define PCI_L1SS_CTL2         0xC    /* Control Register 2 */
531
+/* ASPM L1 PM Substates */
532
+#define PCI_L1SS_CAP        0x04    /* Capabilities Register */
533
+#define PCI_L1SS_CAP_PCIPM_L1_2    0x00000001 /* PCI-PM L1.2 Supported */
534
+#define PCI_L1SS_CAP_PCIPM_L1_1    0x00000002 /* PCI-PM L1.1 Supported */
535
+#define PCI_L1SS_CAP_ASPM_L1_2        0x00000004 /* ASPM L1.2 Supported */
536
+#define PCI_L1SS_CAP_ASPM_L1_1        0x00000008 /* ASPM L1.1 Supported */
537
+#define PCI_L1SS_CAP_L1_PM_SS        0x00000010 /* L1 PM Substates Supported */
538
+#define PCI_L1SS_CAP_CM_RESTORE_TIME    0x0000ff00 /* Port Common_Mode_Restore_Time */
539
+#define PCI_L1SS_CAP_P_PWR_ON_SCALE    0x00030000 /* Port T_POWER_ON scale */
540
+#define PCI_L1SS_CAP_P_PWR_ON_VALUE    0x00f80000 /* Port T_POWER_ON value */
541
+#define PCI_L1SS_CTL1        0x08    /* Control 1 Register */
542
+#define PCI_L1SS_CTL1_PCIPM_L1_2    0x00000001 /* PCI-PM L1.2 Enable */
543
+#define PCI_L1SS_CTL1_PCIPM_L1_1    0x00000002 /* PCI-PM L1.1 Enable */
544
+#define PCI_L1SS_CTL1_ASPM_L1_2    0x00000004 /* ASPM L1.2 Enable */
545
+#define PCI_L1SS_CTL1_ASPM_L1_1    0x00000008 /* ASPM L1.1 Enable */
546
+#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000f
547
+#define PCI_L1SS_CTL1_CM_RESTORE_TIME    0x0000ff00 /* Common_Mode_Restore_Time */
548
+#define PCI_L1SS_CTL1_LTR_L12_TH_VALUE    0x03ff0000 /* LTR_L1.2_THRESHOLD_Value */
549
+#define PCI_L1SS_CTL1_LTR_L12_TH_SCALE    0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */
550
+#define PCI_L1SS_CTL2        0x0c    /* Control 2 Register */
551
552
#endif /* LINUX_PCI_REGS_H */
553
diff --git a/linux-headers/asm-arm/kvm.h b/linux-headers/asm-arm/kvm.h
554
index XXXXXXX..XXXXXXX 100644
555
--- a/linux-headers/asm-arm/kvm.h
556
+++ b/linux-headers/asm-arm/kvm.h
557
@@ -XXX,XX +XXX,XX @@
558
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
559
/*
560
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
561
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
562
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
563
    (__ARM_CP15_REG(op1, 0, crm, 0) | KVM_REG_SIZE_U64)
564
#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
565
566
+/* PL1 Physical Timer Registers */
567
+#define KVM_REG_ARM_PTIMER_CTL        ARM_CP15_REG32(0, 14, 2, 1)
568
+#define KVM_REG_ARM_PTIMER_CNT        ARM_CP15_REG64(0, 14)
569
+#define KVM_REG_ARM_PTIMER_CVAL        ARM_CP15_REG64(2, 14)
570
+
571
+/* Virtual Timer Registers */
572
#define KVM_REG_ARM_TIMER_CTL        ARM_CP15_REG32(0, 14, 3, 1)
573
#define KVM_REG_ARM_TIMER_CNT        ARM_CP15_REG64(1, 14)
574
#define KVM_REG_ARM_TIMER_CVAL        ARM_CP15_REG64(3, 14)
575
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
576
#define KVM_DEV_ARM_ITS_SAVE_TABLES        1
577
#define KVM_DEV_ARM_ITS_RESTORE_TABLES    2
578
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES    3
579
+#define KVM_DEV_ARM_ITS_CTRL_RESET        4
580
581
/* KVM_IRQ_LINE irq field index values */
582
#define KVM_ARM_IRQ_TYPE_SHIFT        24
583
diff --git a/linux-headers/asm-arm/kvm_para.h b/linux-headers/asm-arm/kvm_para.h
584
index XXXXXXX..XXXXXXX 100644
585
--- a/linux-headers/asm-arm/kvm_para.h
586
+++ b/linux-headers/asm-arm/kvm_para.h
587
@@ -1 +1,2 @@
588
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
589
#include <asm-generic/kvm_para.h>
590
diff --git a/linux-headers/asm-arm/unistd.h b/linux-headers/asm-arm/unistd.h
591
index XXXXXXX..XXXXXXX 100644
592
--- a/linux-headers/asm-arm/unistd.h
593
+++ b/linux-headers/asm-arm/unistd.h
594
@@ -XXX,XX +XXX,XX @@
595
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
596
/*
597
* arch/arm/include/asm/unistd.h
598
*
599
@@ -XXX,XX +XXX,XX @@
600
#define __ARM_NR_usr26            (__ARM_NR_BASE+3)
601
#define __ARM_NR_usr32            (__ARM_NR_BASE+4)
602
#define __ARM_NR_set_tls        (__ARM_NR_BASE+5)
603
+#define __ARM_NR_get_tls        (__ARM_NR_BASE+6)
604
605
#endif /* __ASM_ARM_UNISTD_H */
606
diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h
607
index XXXXXXX..XXXXXXX 100644
608
--- a/linux-headers/asm-arm64/kvm.h
609
+++ b/linux-headers/asm-arm64/kvm.h
610
@@ -XXX,XX +XXX,XX @@
611
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
612
/*
613
* Copyright (C) 2012,2013 - ARM Ltd
614
* Author: Marc Zyngier <marc.zyngier@arm.com>
615
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
616
617
#define ARM64_SYS_REG(...) (__ARM64_SYS_REG(__VA_ARGS__) | KVM_REG_SIZE_U64)
618
619
+/* Physical Timer EL0 Registers */
620
+#define KVM_REG_ARM_PTIMER_CTL        ARM64_SYS_REG(3, 3, 14, 2, 1)
621
+#define KVM_REG_ARM_PTIMER_CVAL        ARM64_SYS_REG(3, 3, 14, 2, 2)
622
+#define KVM_REG_ARM_PTIMER_CNT        ARM64_SYS_REG(3, 3, 14, 0, 1)
623
+
624
+/* EL0 Virtual Timer Registers */
625
#define KVM_REG_ARM_TIMER_CTL        ARM64_SYS_REG(3, 3, 14, 3, 1)
626
#define KVM_REG_ARM_TIMER_CNT        ARM64_SYS_REG(3, 3, 14, 3, 2)
627
#define KVM_REG_ARM_TIMER_CVAL        ARM64_SYS_REG(3, 3, 14, 0, 2)
628
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
629
#define KVM_DEV_ARM_ITS_SAVE_TABLES 1
630
#define KVM_DEV_ARM_ITS_RESTORE_TABLES 2
631
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES    3
632
+#define KVM_DEV_ARM_ITS_CTRL_RESET        4
633
634
/* Device Control API on vcpu fd */
635
#define KVM_ARM_VCPU_PMU_V3_CTRL    0
636
diff --git a/linux-headers/asm-arm64/unistd.h b/linux-headers/asm-arm64/unistd.h
637
index XXXXXXX..XXXXXXX 100644
638
--- a/linux-headers/asm-arm64/unistd.h
639
+++ b/linux-headers/asm-arm64/unistd.h
640
@@ -XXX,XX +XXX,XX @@
641
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
642
/*
643
* Copyright (C) 2012 ARM Ltd.
644
*
645
diff --git a/linux-headers/asm-powerpc/epapr_hcalls.h b/linux-headers/asm-powerpc/epapr_hcalls.h
646
index XXXXXXX..XXXXXXX 100644
647
--- a/linux-headers/asm-powerpc/epapr_hcalls.h
648
+++ b/linux-headers/asm-powerpc/epapr_hcalls.h
649
@@ -XXX,XX +XXX,XX @@
650
+/* SPDX-License-Identifier: ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) */
651
/*
652
* ePAPR hcall interface
653
*
654
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
655
index XXXXXXX..XXXXXXX 100644
656
--- a/linux-headers/asm-powerpc/kvm.h
657
+++ b/linux-headers/asm-powerpc/kvm.h
658
@@ -XXX,XX +XXX,XX @@
659
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
660
/*
661
* This program is free software; you can redistribute it and/or modify
662
* it under the terms of the GNU General Public License, version 2, as
663
diff --git a/linux-headers/asm-powerpc/kvm_para.h b/linux-headers/asm-powerpc/kvm_para.h
664
index XXXXXXX..XXXXXXX 100644
665
--- a/linux-headers/asm-powerpc/kvm_para.h
666
+++ b/linux-headers/asm-powerpc/kvm_para.h
667
@@ -XXX,XX +XXX,XX @@
668
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
669
/*
670
* This program is free software; you can redistribute it and/or modify
671
* it under the terms of the GNU General Public License, version 2, as
672
diff --git a/linux-headers/asm-powerpc/unistd.h b/linux-headers/asm-powerpc/unistd.h
673
index XXXXXXX..XXXXXXX 100644
674
--- a/linux-headers/asm-powerpc/unistd.h
675
+++ b/linux-headers/asm-powerpc/unistd.h
676
@@ -XXX,XX +XXX,XX @@
677
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
678
/*
679
* This file contains the system call numbers.
680
*
681
diff --git a/linux-headers/asm-s390/kvm.h b/linux-headers/asm-s390/kvm.h
682
index XXXXXXX..XXXXXXX 100644
683
--- a/linux-headers/asm-s390/kvm.h
684
+++ b/linux-headers/asm-s390/kvm.h
685
@@ -XXX,XX +XXX,XX @@
686
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
687
#ifndef __LINUX_KVM_S390_H
688
#define __LINUX_KVM_S390_H
689
/*
690
diff --git a/linux-headers/asm-s390/kvm_para.h b/linux-headers/asm-s390/kvm_para.h
691
index XXXXXXX..XXXXXXX 100644
692
--- a/linux-headers/asm-s390/kvm_para.h
693
+++ b/linux-headers/asm-s390/kvm_para.h
694
@@ -XXX,XX +XXX,XX @@
695
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
696
/*
697
* User API definitions for paravirtual devices on s390
698
*
699
diff --git a/linux-headers/asm-s390/unistd.h b/linux-headers/asm-s390/unistd.h
700
index XXXXXXX..XXXXXXX 100644
701
--- a/linux-headers/asm-s390/unistd.h
702
+++ b/linux-headers/asm-s390/unistd.h
703
@@ -XXX,XX +XXX,XX @@
704
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
705
/*
706
* S390 version
707
*
708
@@ -XXX,XX +XXX,XX @@
709
#define __NR_pwritev2        377
710
#define __NR_s390_guarded_storage    378
711
#define __NR_statx        379
712
-#define NR_syscalls 380
713
+#define __NR_s390_sthyi        380
714
+#define NR_syscalls 381
715
716
/*
717
* There are some system calls that are not present on 64 bit, some
718
diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h
719
index XXXXXXX..XXXXXXX 100644
720
--- a/linux-headers/asm-x86/kvm.h
721
+++ b/linux-headers/asm-x86/kvm.h
722
@@ -XXX,XX +XXX,XX @@
723
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
724
#ifndef _ASM_X86_KVM_H
725
#define _ASM_X86_KVM_H
726
727
diff --git a/linux-headers/asm-x86/kvm_para.h b/linux-headers/asm-x86/kvm_para.h
728
index XXXXXXX..XXXXXXX 100644
729
--- a/linux-headers/asm-x86/kvm_para.h
730
+++ b/linux-headers/asm-x86/kvm_para.h
731
@@ -XXX,XX +XXX,XX @@
732
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
733
#ifndef _ASM_X86_KVM_PARA_H
734
#define _ASM_X86_KVM_PARA_H
735
736
@@ -XXX,XX +XXX,XX @@ struct kvm_vcpu_pv_apf_data {
737
#define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
738
#define KVM_PV_EOI_DISABLED 0x0
739
740
-
741
#endif /* _ASM_X86_KVM_PARA_H */
742
diff --git a/linux-headers/asm-x86/unistd.h b/linux-headers/asm-x86/unistd.h
743
index XXXXXXX..XXXXXXX 100644
744
--- a/linux-headers/asm-x86/unistd.h
745
+++ b/linux-headers/asm-x86/unistd.h
746
@@ -XXX,XX +XXX,XX @@
747
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
748
#ifndef _ASM_X86_UNISTD_H
749
#define _ASM_X86_UNISTD_H
750
751
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
752
index XXXXXXX..XXXXXXX 100644
753
--- a/linux-headers/linux/kvm.h
754
+++ b/linux-headers/linux/kvm.h
755
@@ -XXX,XX +XXX,XX @@
756
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
757
#ifndef __LINUX_KVM_H
758
#define __LINUX_KVM_H
759
760
@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_resize_hpt {
761
#define KVM_CAP_PPC_SMT_POSSIBLE 147
762
#define KVM_CAP_HYPERV_SYNIC2 148
763
#define KVM_CAP_HYPERV_VP_INDEX 149
764
+#define KVM_CAP_S390_AIS_MIGRATION 150
765
766
#ifdef KVM_CAP_IRQ_ROUTING
767
768
diff --git a/linux-headers/linux/kvm_para.h b/linux-headers/linux/kvm_para.h
769
index XXXXXXX..XXXXXXX 100644
770
--- a/linux-headers/linux/kvm_para.h
771
+++ b/linux-headers/linux/kvm_para.h
772
@@ -XXX,XX +XXX,XX @@
773
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
774
#ifndef __LINUX_KVM_PARA_H
775
#define __LINUX_KVM_PARA_H
776
777
diff --git a/linux-headers/linux/psci.h b/linux-headers/linux/psci.h
778
index XXXXXXX..XXXXXXX 100644
779
--- a/linux-headers/linux/psci.h
780
+++ b/linux-headers/linux/psci.h
781
@@ -XXX,XX +XXX,XX @@
782
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
783
/*
784
* ARM Power State and Coordination Interface (PSCI) header
785
*
786
diff --git a/linux-headers/linux/userfaultfd.h b/linux-headers/linux/userfaultfd.h
787
index XXXXXXX..XXXXXXX 100644
788
--- a/linux-headers/linux/userfaultfd.h
789
+++ b/linux-headers/linux/userfaultfd.h
790
@@ -XXX,XX +XXX,XX @@
791
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
792
/*
793
* include/linux/userfaultfd.h
794
*
795
diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
796
index XXXXXXX..XXXXXXX 100644
797
--- a/linux-headers/linux/vfio.h
798
+++ b/linux-headers/linux/vfio.h
799
@@ -XXX,XX +XXX,XX @@
800
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
801
/*
802
* VFIO API definition
803
*
804
diff --git a/linux-headers/linux/vfio_ccw.h b/linux-headers/linux/vfio_ccw.h
805
index XXXXXXX..XXXXXXX 100644
806
--- a/linux-headers/linux/vfio_ccw.h
807
+++ b/linux-headers/linux/vfio_ccw.h
808
@@ -XXX,XX +XXX,XX @@
809
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
810
/*
811
* Interfaces for vfio-ccw
812
*
813
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
814
index XXXXXXX..XXXXXXX 100644
815
--- a/linux-headers/linux/vhost.h
816
+++ b/linux-headers/linux/vhost.h
817
@@ -XXX,XX +XXX,XX @@
818
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
819
#ifndef _LINUX_VHOST_H
820
#define _LINUX_VHOST_H
821
/* Userspace interface for in-kernel virtio accelerators. */
822
--
2.7.4
--
2.34.1
1
For M profile, we currently have an mmu index MNegPri for
1
From: Richard Henderson <richard.henderson@linaro.org>
2
"requested execution priority negative". This fails to
3
distinguish "requested execution priority negative, privileged"
4
from "requested execution priority negative, usermode", but
5
the two can return different results for MPU lookups. Fix this
6
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
7
similarly for the Secure equivalent MSNegPri.
8
2
9
This takes us from 6 M profile MMU modes to 8, which means
3
Document the meaning of exclusive_high in a big-endian context,
10
we need to bump NB_MMU_MODES; this is OK since the point
4
and why we can't change it now.
11
where we are forced to reduce TLB sizes is 9 MMU modes.
12
5
13
(It would in theory be possible to stick with 6 MMU indexes:
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
{mpu-disabled,user,privileged} x {secure,nonsecure} since
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
in the MPU-disabled case the result of an MPU lookup is
8
Message-id: 20230530191438.411344-2-richard.henderson@linaro.org
16
always the same for both user and privileged code. However
17
we would then need to rework the TB flags handling to put
18
user/priv into the TB flags separately from the mmuidx.
19
Adding an extra couple of mmu indexes is simpler.)
20
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Message-id: 1512153879-5291-5-git-send-email-peter.maydell@linaro.org
24
---
10
---
25
target/arm/cpu.h | 54 ++++++++++++++++++++++++++++++--------------------
11
target/arm/cpu.h | 8 ++++++++
26
target/arm/internals.h | 6 ++++--
12
1 file changed, 8 insertions(+)
27
target/arm/helper.c | 11 ++++++----
28
target/arm/translate.c | 8 ++++++--
29
4 files changed, 50 insertions(+), 29 deletions(-)
30
13
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
32
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu.h
16
--- a/target/arm/cpu.h
34
+++ b/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
35
@@ -XXX,XX +XXX,XX @@ enum {
18
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
36
#define ARM_CPU_VIRQ 2
19
uint64_t zcr_el[4]; /* ZCR_EL[1-3] */
37
#define ARM_CPU_VFIQ 3
20
uint64_t smcr_el[4]; /* SMCR_EL[1-3] */
38
21
} vfp;
39
-#define NB_MMU_MODES 7
40
+#define NB_MMU_MODES 8
41
/* ARM-specific extra insn start words:
42
* 1: Conditional execution bits
43
* 2: Partial exception syndrome for data aborts
44
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
45
* They have the following different MMU indexes:
46
* User
47
* Privileged
48
- * Execution priority negative (this is like privileged, but the
49
- * MPU HFNMIENA bit means that it may have different access permission
50
- * check results to normal privileged code, so can't share a TLB).
51
+ * User, execution priority negative (ie the MPU HFNMIENA bit may apply)
52
+ * Privileged, execution priority negative (ditto)
53
* If the CPU supports the v8M Security Extension then there are also:
54
* Secure User
55
* Secure Privileged
56
- * Secure, execution priority negative
57
+ * Secure User, execution priority negative
58
+ * Secure Privileged, execution priority negative
59
*
60
* The ARMMMUIdx and the mmu index value used by the core QEMU TLB code
61
* are not quite the same -- different CPU types (most notably M profile
62
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
63
* The constant names here are patterned after the general style of the names
64
* of the AT/ATS operations.
65
* The values used are carefully arranged to make mmu_idx => EL lookup easy.
66
+ * For M profile we arrange them to have a bit for priv, a bit for negpri
67
+ * and a bit for secure.
68
*/
69
#define ARM_MMU_IDX_A 0x10 /* A profile */
70
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
71
#define ARM_MMU_IDX_M 0x40 /* M profile */
72
73
+/* meanings of the bits for M profile mmu idx values */
74
+#define ARM_MMU_IDX_M_PRIV 0x1
75
+#define ARM_MMU_IDX_M_NEGPRI 0x2
76
+#define ARM_MMU_IDX_M_S 0x4
77
+
22
+
78
#define ARM_MMU_IDX_TYPE_MASK (~0x7)
23
uint64_t exclusive_addr;
79
#define ARM_MMU_IDX_COREIDX_MASK 0x7
24
uint64_t exclusive_val;
80
25
+ /*
81
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
26
+ * Contains the 'val' for the second 64-bit register of LDXP, which comes
82
ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
27
+ * from the higher address, not the high part of a complete 128-bit value.
83
ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
28
+ * In some ways it might be more convenient to record the exclusive value
84
ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
29
+ * as the low and high halves of a 128 bit data value, but the current
85
- ARMMMUIdx_MNegPri = 2 | ARM_MMU_IDX_M,
30
+ * semantics of these fields are baked into the migration format.
86
- ARMMMUIdx_MSUser = 3 | ARM_MMU_IDX_M,
31
+ */
87
- ARMMMUIdx_MSPriv = 4 | ARM_MMU_IDX_M,
32
uint64_t exclusive_high;
88
- ARMMMUIdx_MSNegPri = 5 | ARM_MMU_IDX_M,
33
89
+ ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
34
/* iwMMXt coprocessor state. */
90
+ ARMMMUIdx_MPrivNegPri = 3 | ARM_MMU_IDX_M,
91
+ ARMMMUIdx_MSUser = 4 | ARM_MMU_IDX_M,
92
+ ARMMMUIdx_MSPriv = 5 | ARM_MMU_IDX_M,
93
+ ARMMMUIdx_MSUserNegPri = 6 | ARM_MMU_IDX_M,
94
+ ARMMMUIdx_MSPrivNegPri = 7 | ARM_MMU_IDX_M,
95
/* Indexes below here don't have TLBs and are used only for AT system
96
* instructions or for the first stage of an S12 page table walk.
97
*/
98
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
99
ARMMMUIdxBit_S2NS = 1 << 6,
100
ARMMMUIdxBit_MUser = 1 << 0,
101
ARMMMUIdxBit_MPriv = 1 << 1,
102
- ARMMMUIdxBit_MNegPri = 1 << 2,
103
- ARMMMUIdxBit_MSUser = 1 << 3,
104
- ARMMMUIdxBit_MSPriv = 1 << 4,
105
- ARMMMUIdxBit_MSNegPri = 1 << 5,
106
+ ARMMMUIdxBit_MUserNegPri = 1 << 2,
107
+ ARMMMUIdxBit_MPrivNegPri = 1 << 3,
108
+ ARMMMUIdxBit_MSUser = 1 << 4,
109
+ ARMMMUIdxBit_MSPriv = 1 << 5,
110
+ ARMMMUIdxBit_MSUserNegPri = 1 << 6,
111
+ ARMMMUIdxBit_MSPrivNegPri = 1 << 7,
112
} ARMMMUIdxBit;
113
114
#define MMU_USER_IDX 0
115
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
116
case ARM_MMU_IDX_A:
117
return mmu_idx & 3;
118
case ARM_MMU_IDX_M:
119
- return (mmu_idx == ARMMMUIdx_MUser || mmu_idx == ARMMMUIdx_MSUser)
120
- ? 0 : 1;
121
+ return mmu_idx & ARM_MMU_IDX_M_PRIV;
122
default:
123
g_assert_not_reached();
124
}
125
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
126
bool secstate)
127
{
128
int el = arm_current_el(env);
129
- ARMMMUIdx mmu_idx;
130
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
131
132
- if (el == 0) {
133
- mmu_idx = secstate ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
134
- } else {
135
- mmu_idx = secstate ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
136
+ if (el != 0) {
137
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
138
}
139
140
if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
141
- mmu_idx = secstate ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
142
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
143
+ }
144
+
145
+ if (secstate) {
146
+ mmu_idx |= ARM_MMU_IDX_M_S;
147
}
148
149
return mmu_idx;
150
diff --git a/target/arm/internals.h b/target/arm/internals.h
151
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/internals.h
153
+++ b/target/arm/internals.h
154
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
155
case ARMMMUIdx_S1NSE1:
156
case ARMMMUIdx_S1E2:
157
case ARMMMUIdx_S2NS:
158
+ case ARMMMUIdx_MPrivNegPri:
159
+ case ARMMMUIdx_MUserNegPri:
160
case ARMMMUIdx_MPriv:
161
- case ARMMMUIdx_MNegPri:
162
case ARMMMUIdx_MUser:
163
return false;
164
case ARMMMUIdx_S1E3:
165
case ARMMMUIdx_S1SE0:
166
case ARMMMUIdx_S1SE1:
167
+ case ARMMMUIdx_MSPrivNegPri:
168
+ case ARMMMUIdx_MSUserNegPri:
169
case ARMMMUIdx_MSPriv:
170
- case ARMMMUIdx_MSNegPri:
171
case ARMMMUIdx_MSUser:
172
return true;
173
default:
174
diff --git a/target/arm/helper.c b/target/arm/helper.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/target/arm/helper.c
177
+++ b/target/arm/helper.c
178
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
179
case ARMMMUIdx_S1SE1:
180
case ARMMMUIdx_S1NSE0:
181
case ARMMMUIdx_S1NSE1:
182
+ case ARMMMUIdx_MPrivNegPri:
183
+ case ARMMMUIdx_MUserNegPri:
184
case ARMMMUIdx_MPriv:
185
- case ARMMMUIdx_MNegPri:
186
case ARMMMUIdx_MUser:
187
+ case ARMMMUIdx_MSPrivNegPri:
188
+ case ARMMMUIdx_MSUserNegPri:
189
case ARMMMUIdx_MSPriv:
190
- case ARMMMUIdx_MSNegPri:
191
case ARMMMUIdx_MSUser:
192
return 1;
193
default:
194
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
195
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
196
case R_V7M_MPU_CTRL_ENABLE_MASK:
197
/* Enabled, but not for HardFault and NMI */
198
- return mmu_idx == ARMMMUIdx_MNegPri ||
199
- mmu_idx == ARMMMUIdx_MSNegPri;
200
+ return mmu_idx & ARM_MMU_IDX_M_NEGPRI;
201
case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK:
202
/* Enabled for all cases */
203
return false;
204
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
205
case ARMMMUIdx_S1NSE0:
206
case ARMMMUIdx_MUser:
207
case ARMMMUIdx_MSUser:
208
+ case ARMMMUIdx_MUserNegPri:
209
+ case ARMMMUIdx_MSUserNegPri:
210
return true;
211
default:
212
return false;
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
218
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
219
case ARMMMUIdx_MUser:
220
case ARMMMUIdx_MPriv:
221
- case ARMMMUIdx_MNegPri:
222
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
223
+ case ARMMMUIdx_MUserNegPri:
224
+ case ARMMMUIdx_MPrivNegPri:
225
+ return arm_to_core_mmu_idx(ARMMMUIdx_MUserNegPri);
226
case ARMMMUIdx_MSUser:
227
case ARMMMUIdx_MSPriv:
228
- case ARMMMUIdx_MSNegPri:
229
return arm_to_core_mmu_idx(ARMMMUIdx_MSUser);
230
+ case ARMMMUIdx_MSUserNegPri:
231
+ case ARMMMUIdx_MSPrivNegPri:
232
+ return arm_to_core_mmu_idx(ARMMMUIdx_MSUserNegPri);
233
case ARMMMUIdx_S2NS:
234
default:
235
g_assert_not_reached();
236
--
2.7.4
--
2.34.1
1
The TT instruction is going to need to look up the MMU index
1
From: Richard Henderson <richard.henderson@linaro.org>
2
for a specified security and privilege state. Refactor the
3
existing arm_v7m_mmu_idx_for_secstate() into a version that
4
lets you specify the privilege state and one that uses the
5
current state of the CPU.
6
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230530191438.411344-3-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 1512153879-5291-6-git-send-email-peter.maydell@linaro.org
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
---
8
---
12
target/arm/cpu.h | 21 ++++++++++++++++-----
9
target/arm/cpu.h | 5 +++++
13
1 file changed, 16 insertions(+), 5 deletions(-)
10
1 file changed, 5 insertions(+)
14
11
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
14
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
15
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
16
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_st(const ARMISARegisters *id)
20
}
17
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, ST) != 0;
21
}
18
}
22
19
23
-/* Return the MMU index for a v7M CPU in the specified security state */
20
+static inline bool isar_feature_aa64_lse2(const ARMISARegisters *id)
24
-static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
25
- bool secstate)
26
+/* Return the MMU index for a v7M CPU in the specified security and
27
+ * privilege state
28
+ */
29
+static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
30
+ bool secstate,
31
+ bool priv)
32
{
33
- int el = arm_current_el(env);
34
ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
35
36
- if (el != 0) {
37
+ if (priv) {
38
mmu_idx |= ARM_MMU_IDX_M_PRIV;
39
}
40
41
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
42
return mmu_idx;
43
}
44
45
+/* Return the MMU index for a v7M CPU in the specified security state */
46
+static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
47
+ bool secstate)
48
+{
21
+{
49
+ bool priv = arm_current_el(env) != 0;
22
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, AT) != 0;
50
+
51
+ return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
52
+}
23
+}
53
+
24
+
54
/* Determine the current mmu_idx to use for normal loads/stores */
25
static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
55
static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
56
{
26
{
27
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
57
--
2.7.4
--
2.34.1
1
Implement the TT instruction which queries the security
1
From: Richard Henderson <richard.henderson@linaro.org>
2
state and access permissions of a memory location.
3
2
3
Let finalize_memop_atom be the new basic function, with
4
finalize_memop and finalize_memop_pair testing FEAT_LSE2
5
to apply the appropriate atomicity.
6
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20230530191438.411344-4-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1512153879-5291-8-git-send-email-peter.maydell@linaro.org
7
---
12
---
8
target/arm/helper.h | 2 +
13
target/arm/tcg/translate.h | 39 +++++++++++++++++++++++++++++-----
9
target/arm/helper.c | 108 +++++++++++++++++++++++++++++++++++++++++++++++++
14
target/arm/tcg/translate-a64.c | 2 ++
10
target/arm/translate.c | 29 ++++++++++++-
15
target/arm/tcg/translate.c | 1 +
11
3 files changed, 138 insertions(+), 1 deletion(-)
16
3 files changed, 37 insertions(+), 5 deletions(-)
12
17
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
20
--- a/target/arm/tcg/translate.h
16
+++ b/target/arm/helper.h
21
+++ b/target/arm/tcg/translate.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(v7m_mrs, i32, env, i32)
22
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
18
DEF_HELPER_2(v7m_bxns, void, env, i32)
23
uint64_t features; /* CPU features bits */
19
DEF_HELPER_2(v7m_blxns, void, env, i32)
24
bool aarch64;
20
25
bool thumb;
21
+DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
26
+ bool lse2;
22
+
27
/* Because unallocated encodings generate different exception syndrome
23
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
28
* information from traps due to FP being disabled, we can't do a single
24
DEF_HELPER_3(set_cp_reg, void, env, ptr, i32)
29
* "is fp access disabled" check at a high level in the decode tree.
25
DEF_HELPER_2(get_cp_reg, i32, env, ptr)
30
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
29
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
31
g_assert_not_reached();
32
}
31
}
33
32
34
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
33
/**
35
+{
34
- * finalize_memop:
36
+ /* The TT instructions can be used by unprivileged code, but in
35
+ * finalize_memop_atom:
37
+ * user-only emulation we don't have the MPU.
36
* @s: DisasContext
38
+ * Luckily since we know we are NonSecure unprivileged (and that in
37
* @opc: size+sign+align of the memory operation
39
+ * turn means that the A flag wasn't specified), all the bits in the
38
+ * @atom: atomicity of the memory operation
40
+ * register must be zero:
39
*
41
+ * IREGION: 0 because IRVALID is 0
40
- * Build the complete MemOp for a memory operation, including alignment
42
+ * IRVALID: 0 because NS
41
- * and endianness.
43
+ * S: 0 because NS
42
+ * Build the complete MemOp for a memory operation, including alignment,
44
+ * NSRW: 0 because NS
43
+ * endianness, and atomicity.
45
+ * NSR: 0 because NS
44
*
46
+ * RW: 0 because unpriv and A flag not set
45
* If (op & MO_AMASK) then the operation already contains the required
47
+ * R: 0 because unpriv and A flag not set
46
* alignment, e.g. for AccType_ATOMIC. Otherwise, this is an optionally
48
+ * SRVALID: 0 because NS
47
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
49
+ * MRVALID: 0 because unpriv and A flag not set
48
* and this is applied here. Note that there is no way to indicate that
50
+ * SREGION: 0 because SRVALID is 0
49
* no alignment should ever be enforced; this must be handled manually.
51
+ * MREGION: 0 because MRVALID is 0
50
*/
52
+ */
51
-static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
53
+ return 0;
52
+static inline MemOp finalize_memop_atom(DisasContext *s, MemOp opc, MemOp atom)
53
{
54
if (s->align_mem && !(opc & MO_AMASK)) {
55
opc |= MO_ALIGN;
56
}
57
- return opc | s->be_data;
58
+ return opc | atom | s->be_data;
54
+}
59
+}
55
+
60
+
56
void switch_mode(CPUARMState *env, int mode)
61
+/**
57
{
62
+ * finalize_memop:
58
ARMCPU *cpu = arm_env_get_cpu(env);
63
+ * @s: DisasContext
59
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
64
+ * @opc: size+sign+align of the memory operation
60
}
65
+ *
61
}
66
+ * Like finalize_memop_atom, but with default atomicity.
62
67
+ */
63
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
68
+static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
64
+{
69
+{
65
+ /* Implement the TT instruction. op is bits [7:6] of the insn. */
70
+ MemOp atom = s->lse2 ? MO_ATOM_WITHIN16 : MO_ATOM_IFALIGN;
66
+ bool forceunpriv = op & 1;
71
+ return finalize_memop_atom(s, opc, atom);
67
+ bool alt = op & 2;
68
+ V8M_SAttributes sattrs = {};
69
+ uint32_t tt_resp;
70
+ bool r, rw, nsr, nsrw, mrvalid;
71
+ int prot;
72
+ MemTxAttrs attrs = {};
73
+ hwaddr phys_addr;
74
+ uint32_t fsr;
75
+ ARMMMUIdx mmu_idx;
76
+ uint32_t mregion;
77
+ bool targetpriv;
78
+ bool targetsec = env->v7m.secure;
79
+
80
+ /* Work out what the security state and privilege level we're
81
+ * interested in is...
82
+ */
83
+ if (alt) {
84
+ targetsec = !targetsec;
85
+ }
86
+
87
+ if (forceunpriv) {
88
+ targetpriv = false;
89
+ } else {
90
+ targetpriv = arm_v7m_is_handler_mode(env) ||
91
+ !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
92
+ }
93
+
94
+ /* ...and then figure out which MMU index this is */
95
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
96
+
97
+ /* We know that the MPU and SAU don't care about the access type
98
+ * for our purposes beyond that we don't want to claim to be
99
+ * an insn fetch, so we arbitrarily call this a read.
100
+ */
101
+
102
+ /* MPU region info only available for privileged or if
103
+ * inspecting the other MPU state.
104
+ */
105
+ if (arm_current_el(env) != 0 || alt) {
106
+ /* We can ignore the return value as prot is always set */
107
+ pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
108
+ &phys_addr, &attrs, &prot, &fsr, &mregion);
109
+ if (mregion == -1) {
110
+ mrvalid = false;
111
+ mregion = 0;
112
+ } else {
113
+ mrvalid = true;
114
+ }
115
+ r = prot & PAGE_READ;
116
+ rw = prot & PAGE_WRITE;
117
+ } else {
118
+ r = false;
119
+ rw = false;
120
+ mrvalid = false;
121
+ mregion = 0;
122
+ }
123
+
124
+ if (env->v7m.secure) {
125
+ v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
126
+ nsr = sattrs.ns && r;
127
+ nsrw = sattrs.ns && rw;
128
+ } else {
129
+ sattrs.ns = true;
130
+ nsr = false;
131
+ nsrw = false;
132
+ }
133
+
134
+ tt_resp = (sattrs.iregion << 24) |
135
+ (sattrs.irvalid << 23) |
136
+ ((!sattrs.ns) << 22) |
137
+ (nsrw << 21) |
138
+ (nsr << 20) |
139
+ (rw << 19) |
140
+ (r << 18) |
141
+ (sattrs.srvalid << 17) |
142
+ (mrvalid << 16) |
143
+ (sattrs.sregion << 8) |
144
+ mregion;
145
+
146
+ return tt_resp;
147
+}
72
+}
148
+
73
+
74
+/**
75
+ * finalize_memop_pair:
76
+ * @s: DisasContext
77
+ * @opc: size+sign+align of the memory operation
78
+ *
79
+ * Like finalize_memop_atom, but with atomicity for a pair.
80
+ * C.f. Pseudocode for Mem[], operand ispair.
81
+ */
82
+static inline MemOp finalize_memop_pair(DisasContext *s, MemOp opc)
83
+{
84
+ MemOp atom = s->lse2 ? MO_ATOM_WITHIN16_PAIR : MO_ATOM_IFALIGN_PAIR;
85
+ return finalize_memop_atom(s, opc, atom);
86
}
87
88
/**
89
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/tcg/translate-a64.c
92
+++ b/target/arm/tcg/translate-a64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
94
tcg_debug_assert(dc->tbid & 1);
149
#endif
95
#endif
150
96
151
void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
97
+ dc->lse2 = dc_isar_feature(aa64_lse2, dc);
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
98
+
99
/* Single step state. The code-generation logic here is:
100
* SS_ACTIVE == 0:
101
* generate code with no special handling for single-stepping (except
102
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
153
index XXXXXXX..XXXXXXX 100644
103
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
104
--- a/target/arm/tcg/translate.c
155
+++ b/target/arm/translate.c
105
+++ b/target/arm/tcg/translate.c
156
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
106
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
157
if (insn & (1 << 22)) {
107
dc->sme_trap_nonstreaming =
158
/* 0b1110_100x_x1xx_xxxx_xxxx_xxxx_xxxx_xxxx
108
EX_TBFLAG_A32(tb_flags, SME_TRAP_NONSTREAMING);
159
* - load/store doubleword, load/store exclusive, ldacq/strel,
109
}
160
- * table branch.
110
+ dc->lse2 = false; /* applies only to aarch64 */
161
+ * table branch, TT.
111
dc->cp_regs = cpu->cp_regs;
162
*/
112
dc->features = env->features;
163
if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_M) &&
113
164
arm_dc_feature(s, ARM_FEATURE_V8)) {
165
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
166
} else if ((insn & (1 << 23)) == 0) {
167
/* 0b1110_1000_010x_xxxx_xxxx_xxxx_xxxx_xxxx
168
* - load/store exclusive word
169
+ * - TT (v8M only)
170
*/
171
if (rs == 15) {
172
+ if (!(insn & (1 << 20)) &&
173
+ arm_dc_feature(s, ARM_FEATURE_M) &&
174
+ arm_dc_feature(s, ARM_FEATURE_V8)) {
175
+ /* 0b1110_1000_0100_xxxx_1111_xxxx_xxxx_xxxx
176
+ * - TT (v8M only)
177
+ */
178
+ bool alt = insn & (1 << 7);
179
+ TCGv_i32 addr, op, ttresp;
180
+
181
+ if ((insn & 0x3f) || rd == 13 || rd == 15 || rn == 15) {
182
+ /* we UNDEF for these UNPREDICTABLE cases */
183
+ goto illegal_op;
184
+ }
185
+
186
+ if (alt && !s->v8m_secure) {
187
+ goto illegal_op;
188
+ }
189
+
190
+ addr = load_reg(s, rn);
191
+ op = tcg_const_i32(extract32(insn, 6, 2));
192
+ ttresp = tcg_temp_new_i32();
193
+ gen_helper_v7m_tt(ttresp, cpu_env, addr, op);
194
+ tcg_temp_free_i32(addr);
195
+ tcg_temp_free_i32(op);
196
+ store_reg(s, rd, ttresp);
197
+ }
198
goto illegal_op;
199
}
200
addr = tcg_temp_local_new_i32();
201
--
114
--
202
2.7.4
115
2.34.1
203
116
204
117
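The TT helper above packs its result into a single 32-bit response word. As a sanity check on the bit layout, here is a minimal standalone Python model (not QEMU code; the function name and defaults are illustrative) whose field positions are copied directly from the `tt_resp` expression in the patch:

```python
# Model of the TT response word assembled in HELPER(v7m_tt):
# bit positions mirror the tt_resp expression in the patch above.
def pack_tt_resp(*, mregion=0, sregion=0, mrvalid=False, srvalid=False,
                 r=False, rw=False, nsr=False, nsrw=False, ns=True,
                 irvalid=False, iregion=0):
    return ((iregion & 0xff) << 24 |
            int(irvalid) << 23 |
            int(not ns) << 22 |
            int(nsrw) << 21 |
            int(nsr) << 20 |
            int(rw) << 19 |
            int(r) << 18 |
            int(srvalid) << 17 |
            int(mrvalid) << 16 |
            (sregion & 0xff) << 8 |
            (mregion & 0xff))

# A readable MPU hit in region 5 with a recorded SAU region 2:
resp = pack_tt_resp(mregion=5, mrvalid=True, r=True, sregion=2, ns=True)
print(hex(resp))  # 0x50205
```

Note that `S` (bit 22) is the inverse of `sattrs.ns`, which is why a Non-secure result leaves that bit clear.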
1
In do_ats_write(), rather than using the FSR value from get_phys_addr(),
1
From: Richard Henderson <richard.henderson@linaro.org>
2
construct the PAR values using the information in the ARMMMUFaultInfo
3
struct. This allows us to create a PAR of the correct format regardless
4
of what the translation table format is.
5
2
6
For the moment we leave the condition for "when should this be a
3
While we don't require 16-byte atomicity here, using a single larger
7
64 bit PAR" as it was previously; this will need to be fixed to
4
load simplifies the code, and makes it a closer match to STXP.
8
properly support AArch32 Hyp mode.
9
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-5-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-11-git-send-email-peter.maydell@linaro.org
15
---
10
---
16
target/arm/helper.c | 16 ++++++++++------
11
target/arm/tcg/translate-a64.c | 31 ++++++++++++++++++++-----------
17
1 file changed, 10 insertions(+), 6 deletions(-)
12
1 file changed, 20 insertions(+), 11 deletions(-)
18
13
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
16
--- a/target/arm/tcg/translate-a64.c
22
+++ b/target/arm/helper.c
17
+++ b/target/arm/tcg/translate-a64.c
23
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
18
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
24
hwaddr phys_addr;
19
TCGv_i64 addr, int size, bool is_pair)
25
target_ulong page_size;
20
{
26
int prot;
21
int idx = get_mem_index(s);
27
- uint32_t fsr;
22
- MemOp memop = s->be_data;
28
+ uint32_t fsr_unused;
23
+ MemOp memop;
29
bool ret;
24
30
uint64_t par64;
25
g_assert(size <= 3);
31
MemTxAttrs attrs = {};
26
if (is_pair) {
32
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
27
g_assert(size >= 2);
33
ARMCacheAttrs cacheattrs = {};
28
if (size == 2) {
34
29
/* The pair must be single-copy atomic for the doubleword. */
35
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
30
- memop |= MO_64 | MO_ALIGN;
36
- &prot, &page_size, &fsr, &fi, &cacheattrs);
31
+ memop = finalize_memop(s, MO_64 | MO_ALIGN);
37
+ &prot, &page_size, &fsr_unused, &fi, &cacheattrs);
32
tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
38
+ /* TODO: this is not the correct condition to use to decide whether
33
if (s->be_data == MO_LE) {
39
+ * to report a PAR in 64-bit or 32-bit format.
34
tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
40
+ */
35
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
41
if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
36
tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
42
- /* fsr is a DFSR/IFSR value for the long descriptor
43
- * translation table format, but with WnR always clear.
44
- * Convert it to a 64-bit PAR.
45
- */
46
+ /* Create a 64-bit PAR */
47
par64 = (1 << 11); /* LPAE bit always set */
48
if (!ret) {
49
par64 |= phys_addr & ~0xfffULL;
50
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
51
par64 |= (uint64_t)cacheattrs.attrs << 56; /* ATTR */
52
par64 |= cacheattrs.shareability << 7; /* SH */
53
} else {
54
+ uint32_t fsr = arm_fi_to_lfsc(&fi);
55
+
56
par64 |= 1; /* F */
57
par64 |= (fsr & 0x3f) << 1; /* FS */
58
/* Note that S2WLK and FSTAGE are always zero, because we don't
59
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
60
par64 |= (1 << 9); /* NS */
61
}
37
}
62
} else {
38
} else {
63
+ uint32_t fsr = arm_fi_to_sfsc(&fi);
39
- /* The pair must be single-copy atomic for *each* doubleword, not
64
+
40
- the entire quadword, however it must be quadword aligned. */
65
par64 = ((fsr & (1 << 10)) >> 5) | ((fsr & (1 << 12)) >> 6) |
41
- memop |= MO_64;
66
((fsr & 0xf) << 1) | 1;
42
- tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
43
- memop | MO_ALIGN_16);
44
+ /*
45
+ * The pair must be single-copy atomic for *each* doubleword, not
46
+ * the entire quadword, however it must be quadword aligned.
47
+ * Expose the complete load to tcg, for ease of tlb lookup,
48
+ * but indicate that only 8-byte atomicity is required.
49
+ */
50
+ TCGv_i128 t16 = tcg_temp_new_i128();
51
52
- TCGv_i64 addr2 = tcg_temp_new_i64();
53
- tcg_gen_addi_i64(addr2, addr, 8);
54
- tcg_gen_qemu_ld_i64(cpu_exclusive_high, addr2, idx, memop);
55
+ memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16,
56
+ MO_ATOM_IFALIGN_PAIR);
57
+ tcg_gen_qemu_ld_i128(t16, addr, idx, memop);
58
59
+ if (s->be_data == MO_LE) {
60
+ tcg_gen_extr_i128_i64(cpu_exclusive_val,
61
+ cpu_exclusive_high, t16);
62
+ } else {
63
+ tcg_gen_extr_i128_i64(cpu_exclusive_high,
64
+ cpu_exclusive_val, t16);
65
+ }
66
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
67
tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
67
}
68
}
69
} else {
70
- memop |= size | MO_ALIGN;
71
+ memop = finalize_memop(s, size | MO_ALIGN);
72
tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
73
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
74
}
68
--
75
--
69
2.7.4
76
2.34.1
70
71
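In the LDXP change above, the single 16-byte load produces an i128 whose low half holds the bytes at the lower address (given the MO_LE layout), and `tcg_gen_extr_i128_i64` writes the (low, high) halves into its first two destination operands — which is why the big-endian case swaps which exclusive register receives which half. An illustrative host-side model of that split (an assumption-laden sketch, not QEMU code):

```python
# Model of splitting the 16-byte exclusive load back into two doublewords.
MASK64 = (1 << 64) - 1

def split_exclusive_pair(t16, big_endian=False):
    lo_half, hi_half = t16 & MASK64, t16 >> 64
    if big_endian:
        # mirrors: tcg_gen_extr_i128_i64(cpu_exclusive_high, cpu_exclusive_val, t16)
        exclusive_high, exclusive_val = lo_half, hi_half
    else:
        # mirrors: tcg_gen_extr_i128_i64(cpu_exclusive_val, cpu_exclusive_high, t16)
        exclusive_val, exclusive_high = lo_half, hi_half
    return exclusive_val, exclusive_high

val, high = split_exclusive_pair(0x11111111111111112222222222222222)
print(hex(val), hex(high))  # little-endian: val gets the low half
```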
1
Now that ARMMMUFaultInfo is guaranteed to have enough information
1
From: Richard Henderson <richard.henderson@linaro.org>
2
to construct a fault status code, we can pass it in to the
3
deliver_fault() function and let it generate the correct type
4
of FSR for the destination, rather than relying on the value
5
provided by get_phys_addr().
6
2
7
I don't think there are any cases the old code was getting
3
While we don't require 16-byte atomicity here, using a single larger
8
wrong, but this is more obviously correct.
4
operation simplifies the code. Introduce finalize_memop_asimd for this.
9
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-6-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-10-git-send-email-peter.maydell@linaro.org
15
---
10
---
16
target/arm/op_helper.c | 79 ++++++++++++++------------------------------------
11
target/arm/tcg/translate.h | 24 +++++++++++++++++++++++
17
1 file changed, 22 insertions(+), 57 deletions(-)
12
target/arm/tcg/translate-a64.c | 35 +++++++++++-----------------------
13
2 files changed, 35 insertions(+), 24 deletions(-)
18
14
19
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
15
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/op_helper.c
17
--- a/target/arm/tcg/translate.h
22
+++ b/target/arm/op_helper.c
18
+++ b/target/arm/tcg/translate.h
23
@@ -XXX,XX +XXX,XX @@ static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
19
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop_pair(DisasContext *s, MemOp opc)
20
return finalize_memop_atom(s, opc, atom);
24
}
21
}
25
22
26
static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
23
+/**
27
- uint32_t fsr, uint32_t fsc, ARMMMUFaultInfo *fi)
24
+ * finalize_memop_asimd:
28
+ int mmu_idx, ARMMMUFaultInfo *fi)
25
+ * @s: DisasContext
26
+ * @opc: size+sign+align of the memory operation
27
+ *
28
+ * Like finalize_memop_atom, but with atomicity of AccessType_ASIMD.
29
+ */
30
+static inline MemOp finalize_memop_asimd(DisasContext *s, MemOp opc)
31
+{
32
+ /*
33
+ * In the pseudocode for Mem[], with AccessType_ASIMD, size == 16,
34
+ * if IsAligned(8), the first case provides separate atomicity for
35
+ * the pair of 64-bit accesses. If !IsAligned(8), the middle cases
36
+ * do not apply, and we're left with the final case of no atomicity.
37
+ * Thus MO_ATOM_IFALIGN_PAIR.
38
+ *
39
+ * For other sizes, normal LSE2 rules apply.
40
+ */
41
+ if ((opc & MO_SIZE) == MO_128) {
42
+ return finalize_memop_atom(s, opc, MO_ATOM_IFALIGN_PAIR);
43
+ }
44
+ return finalize_memop(s, opc);
45
+}
46
+
47
/**
48
* asimd_imm_const: Expand an encoded SIMD constant value
49
*
50
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/tcg/translate-a64.c
53
+++ b/target/arm/tcg/translate-a64.c
54
@@ -XXX,XX +XXX,XX @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
29
{
55
{
30
CPUARMState *env = &cpu->env;
56
/* This writes the bottom N bits of a 128 bit wide vector to memory */
31
int target_el;
57
TCGv_i64 tmplo = tcg_temp_new_i64();
32
bool same_el;
58
- MemOp mop;
33
- uint32_t syn, exc;
59
+ MemOp mop = finalize_memop_asimd(s, size);
34
+ uint32_t syn, exc, fsr, fsc;
60
35
+ ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
61
tcg_gen_ld_i64(tmplo, cpu_env, fp_reg_offset(s, srcidx, MO_64));
36
62
37
target_el = exception_target_el(env);
63
- if (size < 4) {
38
if (fi->stage2) {
64
- mop = finalize_memop(s, size);
39
@@ -XXX,XX +XXX,XX @@ static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
65
+ if (size < MO_128) {
40
}
66
tcg_gen_qemu_st_i64(tmplo, tcg_addr, get_mem_index(s), mop);
41
same_el = (arm_current_el(env) == target_el);
67
} else {
42
68
- bool be = s->be_data == MO_BE;
43
- if (fsc == 0x3f) {
69
- TCGv_i64 tcg_hiaddr = tcg_temp_new_i64();
44
- /* Caller doesn't have a long-format fault status code. This
70
TCGv_i64 tmphi = tcg_temp_new_i64();
45
- * should only happen if this fault will never actually be reported
71
+ TCGv_i128 t16 = tcg_temp_new_i128();
46
- * to an EL that uses a syndrome register. Check that here.
72
47
- * 0x3f is a (currently) reserved FSC code, in case the constructed
73
tcg_gen_ld_i64(tmphi, cpu_env, fp_reg_hi_offset(s, srcidx));
48
- * syndrome does leak into the guest somehow.
74
+ tcg_gen_concat_i64_i128(t16, tmplo, tmphi);
49
+ if (target_el == 2 || arm_el_is_aa64(env, target_el) ||
75
50
+ arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
76
- mop = s->be_data | MO_UQ;
51
+ /* LPAE format fault status register : bottom 6 bits are
77
- tcg_gen_qemu_st_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
52
+ * status code in the same form as needed for syndrome
78
- mop | (s->align_mem ? MO_ALIGN_16 : 0));
53
+ */
79
- tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
54
+ fsr = arm_fi_to_lfsc(fi);
80
- tcg_gen_qemu_st_i64(be ? tmplo : tmphi, tcg_hiaddr,
55
+ fsc = extract32(fsr, 0, 6);
81
- get_mem_index(s), mop);
56
+ } else {
82
+ tcg_gen_qemu_st_i128(t16, tcg_addr, get_mem_index(s), mop);
57
+ fsr = arm_fi_to_sfsc(fi);
58
+ /* Short format FSR : this fault will never actually be reported
59
+ * to an EL that uses a syndrome register. Use a (currently)
60
+ * reserved FSR code in case the constructed syndrome does leak
61
+ * into the guest somehow.
62
*/
63
- assert(target_el != 2 && !arm_el_is_aa64(env, target_el));
64
+ fsc = 0x3f;
65
}
66
67
if (access_type == MMU_INST_FETCH) {
68
@@ -XXX,XX +XXX,XX @@ void tlb_fill(CPUState *cs, target_ulong addr, MMUAccessType access_type,
69
ret = arm_tlb_fill(cs, addr, access_type, mmu_idx, &fsr, &fi);
70
if (unlikely(ret)) {
71
ARMCPU *cpu = ARM_CPU(cs);
72
- uint32_t fsc;
73
74
if (retaddr) {
75
/* now we have a real cpu fault */
76
cpu_restore_state(cs, retaddr);
77
}
78
79
- if (fsr & (1 << 9)) {
80
- /* LPAE format fault status register : bottom 6 bits are
81
- * status code in the same form as needed for syndrome
82
- */
83
- fsc = extract32(fsr, 0, 6);
84
- } else {
85
- /* Short format FSR : this fault will never actually be reported
86
- * to an EL that uses a syndrome register. Use a (currently)
87
- * reserved FSR code in case the constructed syndrome does leak
88
- * into the guest somehow. deliver_fault will assert that
89
- * we don't target an EL using the syndrome.
90
- */
91
- fsc = 0x3f;
92
- }
93
-
94
- deliver_fault(cpu, addr, access_type, fsr, fsc, &fi);
95
+ deliver_fault(cpu, addr, access_type, mmu_idx, &fi);
96
}
83
}
97
}
84
}
98
85
99
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
86
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
100
int mmu_idx, uintptr_t retaddr)
87
/* This always zero-extends and writes to a full 128 bit wide vector */
101
{
88
TCGv_i64 tmplo = tcg_temp_new_i64();
102
ARMCPU *cpu = ARM_CPU(cs);
89
TCGv_i64 tmphi = NULL;
103
- CPUARMState *env = &cpu->env;
90
- MemOp mop;
104
- uint32_t fsr, fsc;
91
+ MemOp mop = finalize_memop_asimd(s, size);
105
ARMMMUFaultInfo fi = {};
92
106
- ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
93
- if (size < 4) {
107
94
- mop = finalize_memop(s, size);
108
if (retaddr) {
95
+ if (size < MO_128) {
109
/* now we have a real cpu fault */
96
tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), mop);
110
cpu_restore_state(cs, retaddr);
97
} else {
98
- bool be = s->be_data == MO_BE;
99
- TCGv_i64 tcg_hiaddr;
100
+ TCGv_i128 t16 = tcg_temp_new_i128();
101
+
102
+ tcg_gen_qemu_ld_i128(t16, tcg_addr, get_mem_index(s), mop);
103
104
tmphi = tcg_temp_new_i64();
105
- tcg_hiaddr = tcg_temp_new_i64();
106
-
107
- mop = s->be_data | MO_UQ;
108
- tcg_gen_qemu_ld_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
109
- mop | (s->align_mem ? MO_ALIGN_16 : 0));
110
- tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
111
- tcg_gen_qemu_ld_i64(be ? tmplo : tmphi, tcg_hiaddr,
112
- get_mem_index(s), mop);
113
+ tcg_gen_extr_i128_i64(tmplo, tmphi, t16);
111
}
114
}
112
115
113
- /* the DFSR for an alignment fault depends on whether we're using
116
tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64));
114
- * the LPAE long descriptor format, or the short descriptor format
115
- */
116
- if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
117
- fsr = (1 << 9) | 0x21;
118
- } else {
119
- fsr = 0x1;
120
- }
121
- fsc = 0x21;
122
-
123
- deliver_fault(cpu, vaddr, access_type, fsr, fsc, &fi);
124
+ fi.type = ARMFault_Alignment;
125
+ deliver_fault(cpu, vaddr, access_type, mmu_idx, &fi);
126
}
127
128
/* arm_cpu_do_transaction_failed: handle a memory system error response
129
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
130
MemTxResult response, uintptr_t retaddr)
131
{
132
ARMCPU *cpu = ARM_CPU(cs);
133
- CPUARMState *env = &cpu->env;
134
- uint32_t fsr, fsc;
135
ARMMMUFaultInfo fi = {};
136
- ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
137
138
if (retaddr) {
139
/* now we have a real cpu fault */
140
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
141
* Slave error (1); in QEMU we follow that.
142
*/
143
fi.ea = (response != MEMTX_DECODE_ERROR);
144
-
145
- /* The fault status register format depends on whether we're using
146
- * the LPAE long descriptor format, or the short descriptor format.
147
- */
148
- if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
149
- /* long descriptor form, STATUS 0b010000: synchronous ext abort */
150
- fsr = (fi.ea << 12) | (1 << 9) | 0x10;
151
- } else {
152
- /* short descriptor form, FSR 0b01000 : synchronous ext abort */
153
- fsr = (fi.ea << 12) | 0x8;
154
- }
155
- fsc = 0x10;
156
-
157
- deliver_fault(cpu, addr, access_type, fsr, fsc, &fi);
158
+ fi.type = ARMFault_SyncExternal;
159
+ deliver_fault(cpu, addr, access_type, mmu_idx, &fi);
160
}
161
162
#endif /* !defined(CONFIG_USER_ONLY) */
163
--
117
--
164
2.7.4
118
2.34.1
165
166
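The deliver_fault rework above centralises one small piece of logic: an LPAE-format DFSR/IFSR carries the fault status code in its bottom 6 bits, while for the short descriptor format the code deliberately uses the reserved code 0x3f, since such a fault should never be reported to an EL that uses a syndrome register. A sketch of just that extraction (not QEMU code):

```python
# FSR -> FSC extraction as performed in deliver_fault above.
def fsc_from_fsr(fsr, lpae_format):
    if lpae_format:
        return fsr & 0x3f   # extract32(fsr, 0, 6)
    return 0x3f             # reserved placeholder for short-format FSRs

# LPAE alignment fault, as in the old arm_cpu_do_unaligned_access path:
assert fsc_from_fsr((1 << 9) | 0x21, lpae_format=True) == 0x21
assert fsc_from_fsr(0x1, lpae_format=False) == 0x3f
print("fsc checks pass")
```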
1
Make get_phys_addr_pmsav8() return a fault type in the ARMMMUFaultInfo
1
From: Richard Henderson <richard.henderson@linaro.org>
2
structure, which we convert to the FSC at the callsite.
3
2
3
This fixes a bug in that these two insns should have been using atomic
4
16-byte stores, since MTE is ARMv8.5 and LSE2 is mandatory from ARMv8.4.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230530191438.411344-7-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
8
Message-id: 1512503192-2239-9-git-send-email-peter.maydell@linaro.org
9
---
10
---
10
target/arm/helper.c | 29 ++++++++++++++++++-----------
11
target/arm/tcg/translate-a64.c | 17 ++++++++++-------
11
1 file changed, 18 insertions(+), 11 deletions(-)
12
1 file changed, 10 insertions(+), 7 deletions(-)
12
13
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
--- a/target/arm/tcg/translate-a64.c
16
+++ b/target/arm/helper.c
17
+++ b/target/arm/tcg/translate-a64.c
17
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
18
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_tag(DisasContext *s, uint32_t insn)
18
static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
19
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
20
if (is_zero) {
20
hwaddr *phys_ptr, MemTxAttrs *txattrs,
21
TCGv_i64 clean_addr = clean_data_tbi(s, addr);
21
- int *prot, uint32_t *fsr, uint32_t *mregion)
22
- TCGv_i64 tcg_zero = tcg_constant_i64(0);
22
+ int *prot, ARMMMUFaultInfo *fi, uint32_t *mregion)
23
+ TCGv_i64 zero64 = tcg_constant_i64(0);
23
{
24
+ TCGv_i128 zero128 = tcg_temp_new_i128();
24
/* Perform a PMSAv8 MPU lookup (without also doing the SAU check
25
int mem_index = get_mem_index(s);
25
* that a full phys-to-virt translation does).
26
- int i, n = (1 + is_pair) << LOG2_TAG_GRANULE;
26
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
27
+ MemOp mop = finalize_memop(s, MO_128 | MO_ALIGN);
27
/* Multiple regions match -- always a failure (unlike
28
28
* PMSAv7 where highest-numbered-region wins)
29
- tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index,
29
*/
30
- MO_UQ | MO_ALIGN_16);
30
- *fsr = 0x00d; /* permission fault */
31
- for (i = 8; i < n; i += 8) {
31
+ fi->type = ARMFault_Permission;
32
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
32
+ fi->level = 1;
33
- tcg_gen_qemu_st_i64(tcg_zero, clean_addr, mem_index, MO_UQ);
33
return true;
34
+ tcg_gen_concat_i64_i128(zero128, zero64, zero64);
34
}
35
+
35
36
+ /* This is 1 or 2 atomic 16-byte operations. */
36
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
37
+ tcg_gen_qemu_st_i128(zero128, clean_addr, mem_index, mop);
37
38
+ if (is_pair) {
38
if (!hit) {
39
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
39
/* background fault */
40
+ tcg_gen_qemu_st_i128(zero128, clean_addr, mem_index, mop);
40
- *fsr = 0;
41
+ fi->type = ARMFault_Background;
42
return true;
43
}
44
45
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
46
}
41
}
47
}
42
}
48
43
49
- *fsr = 0x00d; /* Permission fault */
50
+ fi->type = ARMFault_Permission;
51
+ fi->level = 1;
52
return !(*prot & (1 << access_type));
53
}
54
55
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
56
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
57
MMUAccessType access_type, ARMMMUIdx mmu_idx,
58
hwaddr *phys_ptr, MemTxAttrs *txattrs,
59
- int *prot, uint32_t *fsr)
60
+ int *prot, ARMMMUFaultInfo *fi)
61
{
62
uint32_t secure = regime_is_secure(env, mmu_idx);
63
V8M_SAttributes sattrs = {};
64
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
65
* (including possibly emulating an SG instruction).
66
*/
67
if (sattrs.ns != !secure) {
68
- *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
69
+ if (sattrs.nsc) {
70
+ fi->type = ARMFault_QEMU_NSCExec;
71
+ } else {
72
+ fi->type = ARMFault_QEMU_SFault;
73
+ }
74
*phys_ptr = address;
75
*prot = 0;
76
return true;
77
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
78
* If we added it we would need to do so as a special case
79
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
80
*/
81
- *fsr = M_FAKE_FSR_SFAULT;
82
+ fi->type = ARMFault_QEMU_SFault;
83
*phys_ptr = address;
84
*prot = 0;
85
return true;
86
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
87
}
88
89
return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
90
- txattrs, prot, fsr, NULL);
91
+ txattrs, prot, fi, NULL);
92
}
93
94
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
95
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
96
if (arm_feature(env, ARM_FEATURE_V8)) {
97
/* PMSAv8 */
98
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
99
- phys_ptr, attrs, prot, fsr);
100
+ phys_ptr, attrs, prot, fi);
101
+ *fsr = arm_fi_to_sfsc(fi);
102
} else if (arm_feature(env, ARM_FEATURE_V7)) {
103
/* PMSAv7 */
104
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
105
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
106
uint32_t tt_resp;
107
bool r, rw, nsr, nsrw, mrvalid;
108
int prot;
109
+ ARMMMUFaultInfo fi = {};
110
MemTxAttrs attrs = {};
111
hwaddr phys_addr;
112
- uint32_t fsr;
113
ARMMMUIdx mmu_idx;
114
uint32_t mregion;
115
bool targetpriv;
116
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
117
if (arm_current_el(env) != 0 || alt) {
118
/* We can ignore the return value as prot is always set */
119
pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
120
- &phys_addr, &attrs, &prot, &fsr, &mregion);
121
+ &phys_addr, &attrs, &prot, &fi, &mregion);
122
if (mregion == -1) {
123
mrvalid = false;
124
mregion = 0;
125
--
44
--
126
2.7.4
45
2.34.1
127
128
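The tag-store patch above replaces a loop of 8-byte zero stores with one aligned 16-byte store per tag granule, which is what provides the "1 or 2 atomic 16-byte operations" the comment promises. A rough model of the store layout (assuming the 16-byte MTE tag granule, i.e. `LOG2_TAG_GRANULE == 4`; this constant is an assumption here, not quoted from the patch):

```python
LOG2_TAG_GRANULE = 4  # assumption: 16-byte MTE tag granule

def zeroing_stores(is_pair, atomic16):
    """Return the (offset, size) stores emitted for the zeroing path."""
    n = (1 + int(is_pair)) << LOG2_TAG_GRANULE  # total bytes to zero
    step = 16 if atomic16 else 8
    return [(off, step) for off in range(0, n, step)]

print(zeroing_stores(is_pair=True, atomic16=False))  # four 8-byte stores
print(zeroing_stores(is_pair=True, atomic16=True))   # two 16-byte stores
```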
1
Make get_phys_addr_pmsav5() return a fault type in the ARMMMUFaultInfo
1
From: Richard Henderson <richard.henderson@linaro.org>
2
structure, which we convert to the FSC at the callsite.
3
2
4
Note that PMSAv5 does not define any guest-visible fault status
3
Round len_align to 16 instead of 8, handling an odd 8-byte as part
5
register, so the different "fsr" values we were previously
4
of the tail. Use MO_ATOM_NONE to indicate that all of these memory
6
returning are entirely arbitrary. So we can just switch to using
5
ops have only byte atomicity.
7
the most appropriate fi->type values without worrying that we
8
need to special-case FaultInfo->FSC conversion for PMSAv5.
9
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230530191438.411344-8-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-7-git-send-email-peter.maydell@linaro.org
15
---
11
---
16
target/arm/helper.c | 20 +++++++++++++-------
12
target/arm/tcg/translate-sve.c | 95 +++++++++++++++++++++++++---------
17
1 file changed, 13 insertions(+), 7 deletions(-)
13
1 file changed, 70 insertions(+), 25 deletions(-)
18
14
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
17
--- a/target/arm/tcg/translate-sve.c
22
+++ b/target/arm/helper.c
18
+++ b/target/arm/tcg/translate-sve.c
23
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
19
@@ -XXX,XX +XXX,XX @@ TRANS_FEAT(UCVTF_dd, aa64_sve, gen_gvec_fpst_arg_zpz,
24
20
void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
25
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
21
int len, int rn, int imm)
26
MMUAccessType access_type, ARMMMUIdx mmu_idx,
27
- hwaddr *phys_ptr, int *prot, uint32_t *fsr)
28
+ hwaddr *phys_ptr, int *prot,
29
+ ARMMMUFaultInfo *fi)
30
{
22
{
31
int n;
23
- int len_align = QEMU_ALIGN_DOWN(len, 8);
32
uint32_t mask;
24
- int len_remain = len % 8;
33
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
25
- int nparts = len / 8 + ctpop8(len_remain);
26
+ int len_align = QEMU_ALIGN_DOWN(len, 16);
27
+ int len_remain = len % 16;
28
+ int nparts = len / 16 + ctpop8(len_remain);
29
int midx = get_mem_index(s);
30
TCGv_i64 dirty_addr, clean_addr, t0, t1;
31
+ TCGv_i128 t16;
32
33
dirty_addr = tcg_temp_new_i64();
34
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
35
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
36
int i;
37
38
t0 = tcg_temp_new_i64();
39
- for (i = 0; i < len_align; i += 8) {
40
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
41
+ t1 = tcg_temp_new_i64();
42
+ t16 = tcg_temp_new_i128();
43
+
44
+ for (i = 0; i < len_align; i += 16) {
45
+ tcg_gen_qemu_ld_i128(t16, clean_addr, midx,
46
+ MO_LE | MO_128 | MO_ATOM_NONE);
47
+ tcg_gen_extr_i128_i64(t0, t1, t16);
48
tcg_gen_st_i64(t0, base, vofs + i);
49
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
50
+ tcg_gen_st_i64(t1, base, vofs + i + 8);
51
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
34
}
52
}
53
} else {
54
TCGLabel *loop = gen_new_label();
55
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
56
tcg_gen_movi_ptr(i, 0);
57
gen_set_label(loop);
58
59
- t0 = tcg_temp_new_i64();
60
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
61
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
62
+ t16 = tcg_temp_new_i128();
63
+ tcg_gen_qemu_ld_i128(t16, clean_addr, midx,
64
+ MO_LE | MO_128 | MO_ATOM_NONE);
65
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
66
67
tp = tcg_temp_new_ptr();
68
tcg_gen_add_ptr(tp, base, i);
69
- tcg_gen_addi_ptr(i, i, 8);
70
+ tcg_gen_addi_ptr(i, i, 16);
71
+
72
+ t0 = tcg_temp_new_i64();
73
+ t1 = tcg_temp_new_i64();
74
+ tcg_gen_extr_i128_i64(t0, t1, t16);
75
+
76
tcg_gen_st_i64(t0, tp, vofs);
77
+ tcg_gen_st_i64(t1, tp, vofs + 8);
78
79
tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
35
}
80
}
36
if (n < 0) {
81
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
37
- *fsr = 2;
82
* Predicate register loads can be any multiple of 2.
38
+ fi->type = ARMFault_Background;
83
* Note that we still store the entire 64-bit unit into cpu_env.
39
return true;
84
*/
85
+ if (len_remain >= 8) {
86
+ t0 = tcg_temp_new_i64();
87
+ tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ | MO_ATOM_NONE);
88
+ tcg_gen_st_i64(t0, base, vofs + len_align);
89
+ len_remain -= 8;
90
+ len_align += 8;
91
+ if (len_remain) {
92
+ tcg_gen_addi_i64(clean_addr, clean_addr, 8);
93
+ }
94
+ }
95
if (len_remain) {
96
t0 = tcg_temp_new_i64();
97
switch (len_remain) {
98
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
99
case 4:
100
case 8:
101
tcg_gen_qemu_ld_i64(t0, clean_addr, midx,
102
- MO_LE | ctz32(len_remain));
103
+ MO_LE | ctz32(len_remain) | MO_ATOM_NONE);
104
break;
105
106
case 6:
107
t1 = tcg_temp_new_i64();
108
- tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUL);
109
+ tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUL | MO_ATOM_NONE);
110
tcg_gen_addi_i64(clean_addr, clean_addr, 4);
111
- tcg_gen_qemu_ld_i64(t1, clean_addr, midx, MO_LEUW);
112
+ tcg_gen_qemu_ld_i64(t1, clean_addr, midx, MO_LEUW | MO_ATOM_NONE);
113
tcg_gen_deposit_i64(t0, t0, t1, 32, 32);
114
break;
115
116
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
117
void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
118
int len, int rn, int imm)
119
{
120
- int len_align = QEMU_ALIGN_DOWN(len, 8);
121
- int len_remain = len % 8;
122
- int nparts = len / 8 + ctpop8(len_remain);
123
+ int len_align = QEMU_ALIGN_DOWN(len, 16);
124
+ int len_remain = len % 16;
125
+ int nparts = len / 16 + ctpop8(len_remain);
126
int midx = get_mem_index(s);
127
- TCGv_i64 dirty_addr, clean_addr, t0;
128
+ TCGv_i64 dirty_addr, clean_addr, t0, t1;
129
+ TCGv_i128 t16;
130
131
dirty_addr = tcg_temp_new_i64();
132
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
133
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
134
int i;
135
136
t0 = tcg_temp_new_i64();
137
+ t1 = tcg_temp_new_i64();
138
+ t16 = tcg_temp_new_i128();
139
for (i = 0; i < len_align; i += 8) {
140
tcg_gen_ld_i64(t0, base, vofs + i);
141
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
142
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
143
+ tcg_gen_ld_i64(t1, base, vofs + i + 8);
144
+ tcg_gen_concat_i64_i128(t16, t0, t1);
145
+ tcg_gen_qemu_st_i128(t16, clean_addr, midx,
146
+ MO_LE | MO_128 | MO_ATOM_NONE);
147
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
148
}
149
} else {
150
TCGLabel *loop = gen_new_label();
151
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
152
gen_set_label(loop);
153
154
t0 = tcg_temp_new_i64();
155
+ t1 = tcg_temp_new_i64();
156
tp = tcg_temp_new_ptr();
157
tcg_gen_add_ptr(tp, base, i);
158
tcg_gen_ld_i64(t0, tp, vofs);
159
- tcg_gen_addi_ptr(i, i, 8);
160
+ tcg_gen_ld_i64(t1, tp, vofs + 8);
161
+ tcg_gen_addi_ptr(i, i, 16);
162
163
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
164
- tcg_gen_addi_i64(clean_addr, clean_addr, 8);
165
+ t16 = tcg_temp_new_i128();
166
+ tcg_gen_concat_i64_i128(t16, t0, t1);
167
+
168
+ tcg_gen_qemu_st_i128(t16, clean_addr, midx, MO_LE | MO_128 | MO_ATOM_NONE);
169
+ tcg_gen_addi_i64(clean_addr, clean_addr, 16);
170
171
tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
40
}
172
}
41
173
42
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
174
/* Predicate register stores can be any multiple of 2. */
43
mask = (mask >> (n * 4)) & 0xf;
175
+ if (len_remain >= 8) {
44
switch (mask) {
176
+ t0 = tcg_temp_new_i64();
45
case 0:
177
+ tcg_gen_ld_i64(t0, base, vofs + len_align);
46
- *fsr = 1;
178
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ | MO_ATOM_NONE);
47
+ fi->type = ARMFault_Permission;
179
+ len_remain -= 8;
48
+ fi->level = 1;
180
+ len_align += 8;
49
return true;
181
+ if (len_remain) {
50
case 1:
182
+ tcg_gen_addi_i64(clean_addr, clean_addr, 8);
51
if (is_user) {
183
+ }
52
- *fsr = 1;
184
+ }
53
+ fi->type = ARMFault_Permission;
185
if (len_remain) {
54
+ fi->level = 1;
186
t0 = tcg_temp_new_i64();
55
return true;
187
tcg_gen_ld_i64(t0, base, vofs + len_align);
56
}
188
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
57
*prot = PAGE_READ | PAGE_WRITE;
189
case 4:
58
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
190
case 8:
59
break;
191
tcg_gen_qemu_st_i64(t0, clean_addr, midx,
60
case 5:
192
- MO_LE | ctz32(len_remain));
61
if (is_user) {
193
+ MO_LE | ctz32(len_remain) | MO_ATOM_NONE);
62
- *fsr = 1;
194
break;
63
+ fi->type = ARMFault_Permission;
195
64
+ fi->level = 1;
196
case 6:
65
return true;
197
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUL);
66
}
198
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUL | MO_ATOM_NONE);
67
*prot = PAGE_READ;
199
tcg_gen_addi_i64(clean_addr, clean_addr, 4);
68
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
200
tcg_gen_shri_i64(t0, t0, 32);
69
break;
201
- tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUW);
70
default:
202
+ tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUW | MO_ATOM_NONE);
71
/* Bad permission. */
203
break;
72
- *fsr = 1;
204
73
+ fi->type = ARMFault_Permission;
205
default:
74
+ fi->level = 1;
75
return true;
76
}
77
*prot |= PAGE_EXEC;
78
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
79
} else {
80
/* Pre-v7 MPU */
81
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
82
- phys_ptr, prot, fsr);
83
+ phys_ptr, prot, fi);
84
+ *fsr = arm_fi_to_sfsc(fi);
85
}
86
qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
87
" mmu_idx %u -> %s (prot %c%c%c)\n",
--
2.7.4

--
2.34.1
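The gen_sve_ldr/gen_sve_str changes above move the bulk of a vector as 16-byte quantities, then at most one 8-byte quantity, then a 2/4/6-byte tail (predicate registers can be any multiple of 2). A standalone sketch of that split (a hypothetical helper for illustration, not part of the patch):

```c
#include <assert.h>

/* Hypothetical helper mirroring the chunking in the SVE LDR/STR patch
 * above: whole 16-byte units first, then at most one 8-byte unit,
 * then a 2-, 4- or 6-byte tail. */
struct sve_split {
    int units16;   /* number of 16-byte transfers */
    int units8;    /* 0 or 1 eight-byte transfer */
    int tail;      /* remaining 0/2/4/6 bytes */
};

static struct sve_split sve_split_len(int len)
{
    struct sve_split s;
    int len_remain = len % 16;      /* as in the patch */

    s.units16 = len / 16;           /* QEMU_ALIGN_DOWN(len, 16) / 16 */
    s.units8 = len_remain >= 8;
    s.tail = len_remain % 8;
    return s;
}
```

For example a 26-byte predicate transfer becomes one 16-byte unit, one 8-byte unit and a 2-byte tail, matching the `len_remain >= 8` step added in both functions.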
From: Eric Auger <eric.auger@redhat.com>

From the very beginning, post_load() was called from common
reset. This is not standard, and it forced the code to distinguish
the reset case from the restore case using the iidr value.

Let's get rid of that call.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1511883692-11511-2-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_its_common.c | 2 --
hw/intc/arm_gicv3_its_kvm.c | 4 ----
2 files changed, 6 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

No need to duplicate this check across multiple call sites.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 44 ++++++++++++++++------------------
1 file changed, 21 insertions(+), 23 deletions(-)
diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
13
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
19
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gicv3_its_common.c
15
--- a/target/arm/tcg/translate-a64.c
21
+++ b/hw/intc/arm_gicv3_its_common.c
16
+++ b/target/arm/tcg/translate-a64.c
22
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_common_reset(DeviceState *dev)
17
@@ -XXX,XX +XXX,XX @@ static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
23
s->creadr = 0;
18
* races in multi-threaded linux-user and when MTTCG softmmu is
24
s->iidr = 0;
19
* enabled.
25
memset(&s->baser, 0, sizeof(s->baser));
20
*/
26
-
21
-static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
27
- gicv3_its_post_load(s, 0);
22
- TCGv_i64 addr, int size, bool is_pair)
23
+static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
24
+ int size, bool is_pair)
25
{
26
int idx = get_mem_index(s);
27
MemOp memop;
28
+ TCGv_i64 dirty_addr, clean_addr;
29
+
30
+ s->is_ldex = true;
31
+ dirty_addr = cpu_reg_sp(s, rn);
32
+ clean_addr = gen_mte_check1(s, dirty_addr, false, rn != 31, size);
33
34
g_assert(size <= 3);
35
if (is_pair) {
36
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
37
if (size == 2) {
38
/* The pair must be single-copy atomic for the doubleword. */
39
memop = finalize_memop(s, MO_64 | MO_ALIGN);
40
- tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
41
+ tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
42
if (s->be_data == MO_LE) {
43
tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
44
tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 32, 32);
45
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
46
47
memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16,
48
MO_ATOM_IFALIGN_PAIR);
49
- tcg_gen_qemu_ld_i128(t16, addr, idx, memop);
50
+ tcg_gen_qemu_ld_i128(t16, clean_addr, idx, memop);
51
52
if (s->be_data == MO_LE) {
53
tcg_gen_extr_i128_i64(cpu_exclusive_val,
54
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
55
}
56
} else {
57
memop = finalize_memop(s, size | MO_ALIGN);
58
- tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
59
+ tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
60
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
61
}
62
- tcg_gen_mov_i64(cpu_exclusive_addr, addr);
63
+ tcg_gen_mov_i64(cpu_exclusive_addr, clean_addr);
28
}
64
}
29
65
30
static void gicv3_its_common_class_init(ObjectClass *klass, void *data)
66
static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
31
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
67
- TCGv_i64 addr, int size, int is_pair)
32
index XXXXXXX..XXXXXXX 100644
68
+ int rn, int size, int is_pair)
33
--- a/hw/intc/arm_gicv3_its_kvm.c
34
+++ b/hw/intc/arm_gicv3_its_kvm.c
35
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
36
{
69
{
37
int i;
70
/* if (env->exclusive_addr == addr && env->exclusive_val == [addr]
38
71
* && (!is_pair || env->exclusive_high == [addr + datasize])) {
39
- if (!s->iidr) {
72
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
40
- return;
73
*/
41
- }
74
TCGLabel *fail_label = gen_new_label();
42
-
75
TCGLabel *done_label = gen_new_label();
43
kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
76
- TCGv_i64 tmp;
44
GITS_IIDR, &s->iidr, true, &error_abort);
77
+ TCGv_i64 tmp, dirty_addr, clean_addr;
45
78
79
- tcg_gen_brcond_i64(TCG_COND_NE, addr, cpu_exclusive_addr, fail_label);
80
+ dirty_addr = cpu_reg_sp(s, rn);
81
+ clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, size);
82
+
83
+ tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
84
85
tmp = tcg_temp_new_i64();
86
if (is_pair) {
87
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
88
if (is_lasr) {
89
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
90
}
91
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
92
- true, rn != 31, size);
93
- gen_store_exclusive(s, rs, rt, rt2, clean_addr, size, false);
94
+ gen_store_exclusive(s, rs, rt, rt2, rn, size, false);
95
return;
96
97
case 0x4: /* LDXR */
98
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
99
if (rn == 31) {
100
gen_check_sp_alignment(s);
101
}
102
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
103
- false, rn != 31, size);
104
- s->is_ldex = true;
105
- gen_load_exclusive(s, rt, rt2, clean_addr, size, false);
106
+ gen_load_exclusive(s, rt, rt2, rn, size, false);
107
if (is_lasr) {
108
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
109
}
110
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
111
if (is_lasr) {
112
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
113
}
114
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
115
- true, rn != 31, size);
116
- gen_store_exclusive(s, rs, rt, rt2, clean_addr, size, true);
117
+ gen_store_exclusive(s, rs, rt, rt2, rn, size, true);
118
return;
119
}
120
if (rt2 == 31
121
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
122
if (rn == 31) {
123
gen_check_sp_alignment(s);
124
}
125
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
126
- false, rn != 31, size);
127
- s->is_ldex = true;
128
- gen_load_exclusive(s, rt, rt2, clean_addr, size, true);
129
+ gen_load_exclusive(s, rt, rt2, rn, size, true);
130
if (is_lasr) {
131
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
132
}
46
--
2.7.4

--
2.34.1
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Update the striping functionality to use big-endian bit order (as
described in the Zynq-7000 Technical Reference Manual). The even bits
are then output to the flash memory connected to the lower QSPI bus and
the odd bits to the flash memory connected to the upper QSPI bus.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20171126231634.9531-7-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/ssi/xilinx_spips.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This is required for LSE2, where the pair must be treated atomically if
it does not cross a 16-byte boundary. But it simplifies the code to do
this always.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 70 ++++++++++++++++++++++++++--------
1 file changed, 55 insertions(+), 15 deletions(-)
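The big-endian bit striping described above can be exercised standalone. This sketch reproduces the patched stripe8() logic; the final copy-back from `r` to `x` is an assumption taken from the surrounding context, which the hunk itself does not show:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Patched stripe8: distribute the bits of num bytes column-wise in
 * big-endian bit order; dir == true reverses the transform. */
static void stripe8(uint8_t *x, int num, bool dir)
{
    uint8_t r[num];
    int idx[2] = {0, 0};
    int bit[2] = {0, 7};
    int d = dir;

    memset(r, 0, sizeof(uint8_t) * num);
    for (idx[0] = 0; idx[0] < num; ++idx[0]) {
        for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
            r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
            idx[1] = (idx[1] + 1) % num;
            if (!idx[1]) {
                bit[1]--;
            }
        }
    }
    memcpy(x, r, sizeof(uint8_t) * num);  /* assumed copy-back */
}
```

Striping followed by upstriping is the identity, which gives a quick sanity check of the direction handling.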
18
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
15
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/ssi/xilinx_spips.c
17
--- a/target/arm/tcg/translate-a64.c
21
+++ b/hw/ssi/xilinx_spips.c
18
+++ b/target/arm/tcg/translate-a64.c
22
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
19
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
23
xilinx_spips_update_cs_lines(s);
20
} else {
24
}
21
TCGv_i64 tcg_rt = cpu_reg(s, rt);
25
22
TCGv_i64 tcg_rt2 = cpu_reg(s, rt2);
26
-/* N way (num) in place bit striper. Lay out row wise bits (LSB to MSB)
23
+ MemOp mop = size + 1;
27
+/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
24
+
28
* column wise (from element 0 to N-1). num is the length of x, and dir
25
+ /*
29
* reverses the direction of the transform. Best illustrated by example:
26
+ * With LSE2, non-sign-extending pairs are treated atomically if
30
* Each digit in the below array is a single bit (num == 3):
27
+ * aligned, and if unaligned one of the pair will be completely
31
*
28
+ * within a 16-byte block and that element will be atomic.
32
- * {{ 76543210, } ----- stripe (dir == false) -----> {{ FCheb630, }
29
+ * Otherwise each element is separately atomic.
33
- * { hgfedcba, } { GDAfc741, }
30
+ * In all cases, issue one operation with the correct atomicity.
34
- * { HGFEDCBA, }} <---- upstripe (dir == true) ----- { HEBgda52, }}
31
+ *
35
+ * {{ 76543210, } ----- stripe (dir == false) -----> {{ 741gdaFC, }
32
+ * This treats sign-extending loads like zero-extending loads,
36
+ * { hgfedcba, } { 630fcHEB, }
33
+ * since that reuses the most code below.
37
+ * { HGFEDCBA, }} <---- upstripe (dir == true) ----- { 52hebGDA, }}
34
+ */
38
*/
35
+ if (s->align_mem) {
39
36
+ mop |= (size == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
40
static inline void stripe8(uint8_t *x, int num, bool dir)
37
+ }
41
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
38
+ mop = finalize_memop_pair(s, mop);
42
uint8_t r[num];
39
43
memset(r, 0, sizeof(uint8_t) * num);
40
if (is_load) {
44
int idx[2] = {0, 0};
41
- TCGv_i64 tmp = tcg_temp_new_i64();
45
- int bit[2] = {0, 0};
42
+ if (size == 2) {
46
+ int bit[2] = {0, 7};
43
+ int o2 = s->be_data == MO_LE ? 32 : 0;
47
int d = dir;
44
+ int o1 = o2 ^ 32;
48
45
49
for (idx[0] = 0; idx[0] < num; ++idx[0]) {
46
- /* Do not modify tcg_rt before recognizing any exception
50
- for (bit[0] = 0; bit[0] < 8; ++bit[0]) {
47
- * from the second load.
51
- r[idx[d]] |= x[idx[!d]] & 1 << bit[!d] ? 1 << bit[d] : 0;
48
- */
52
+ for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
49
- do_gpr_ld(s, tmp, clean_addr, size + is_signed * MO_SIGN,
53
+ r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
50
- false, false, 0, false, false);
54
idx[1] = (idx[1] + 1) % num;
51
- tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
55
if (!idx[1]) {
52
- do_gpr_ld(s, tcg_rt2, clean_addr, size + is_signed * MO_SIGN,
56
- bit[1]++;
53
- false, false, 0, false, false);
57
+ bit[1]--;
54
+ tcg_gen_qemu_ld_i64(tcg_rt, clean_addr, get_mem_index(s), mop);
58
}
55
+ if (is_signed) {
56
+ tcg_gen_sextract_i64(tcg_rt2, tcg_rt, o2, 32);
57
+ tcg_gen_sextract_i64(tcg_rt, tcg_rt, o1, 32);
58
+ } else {
59
+ tcg_gen_extract_i64(tcg_rt2, tcg_rt, o2, 32);
60
+ tcg_gen_extract_i64(tcg_rt, tcg_rt, o1, 32);
61
+ }
62
+ } else {
63
+ TCGv_i128 tmp = tcg_temp_new_i128();
64
65
- tcg_gen_mov_i64(tcg_rt, tmp);
66
+ tcg_gen_qemu_ld_i128(tmp, clean_addr, get_mem_index(s), mop);
67
+ if (s->be_data == MO_LE) {
68
+ tcg_gen_extr_i128_i64(tcg_rt, tcg_rt2, tmp);
69
+ } else {
70
+ tcg_gen_extr_i128_i64(tcg_rt2, tcg_rt, tmp);
71
+ }
72
+ }
73
} else {
74
- do_gpr_st(s, tcg_rt, clean_addr, size,
75
- false, 0, false, false);
76
- tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
77
- do_gpr_st(s, tcg_rt2, clean_addr, size,
78
- false, 0, false, false);
79
+ if (size == 2) {
80
+ TCGv_i64 tmp = tcg_temp_new_i64();
81
+
82
+ if (s->be_data == MO_LE) {
83
+ tcg_gen_concat32_i64(tmp, tcg_rt, tcg_rt2);
84
+ } else {
85
+ tcg_gen_concat32_i64(tmp, tcg_rt2, tcg_rt);
86
+ }
87
+ tcg_gen_qemu_st_i64(tmp, clean_addr, get_mem_index(s), mop);
88
+ } else {
89
+ TCGv_i128 tmp = tcg_temp_new_i128();
90
+
91
+ if (s->be_data == MO_LE) {
92
+ tcg_gen_concat_i64_i128(tmp, tcg_rt, tcg_rt2);
93
+ } else {
94
+ tcg_gen_concat_i64_i128(tmp, tcg_rt2, tcg_rt);
95
+ }
96
+ tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop);
97
+ }
59
}
98
}
60
}
99
}
61
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
62
}
63
64
for (i = 0; i < num_effective_busses(s); ++i) {
65
+ int bus = num_effective_busses(s) - 1 - i;
66
DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
67
- tx_rx[i] = ssi_transfer(s->spi[i], (uint32_t)tx_rx[i]);
68
+ tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
69
DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
70
}
71
100
72
--
2.7.4

--
2.34.1
diff view generated by jsdifflib
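The FEAT_LSE2 rule used in the LDP/STP patch above — a non-sign-extending pair is a single atom only when it does not cross a 16-byte boundary — can be stated as a small predicate (a hypothetical helper for illustration, not QEMU code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if a 2*esize-byte LDP/STP at addr lies within one 16-byte
 * block, so FEAT_LSE2 treats the whole pair as single-copy atomic;
 * otherwise only the element fully inside a block is atomic. */
static bool lse2_pair_is_atomic(uint64_t addr, unsigned esize)
{
    uint64_t last = addr + 2 * esize - 1;
    return (addr >> 4) == (last >> 4);
}
```

For example a doubleword pair at a 16-byte-aligned address is one atom, while the same pair starting 8 bytes into the block is not.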
1
All the callers of arm_ldq_ptw() and arm_ldl_ptw() ignore the value
that those functions store in the fsr argument on failure: if they
return failure to their callers they will always overwrite the fsr
value with something else.

Remove the argument from these functions and S1_ptw_translate().
This will simplify removing fsr from the calling functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-3-git-send-email-peter.maydell@linaro.org
---
target/arm/helper.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

We are going to need the complete memop beforehand,
so let's not compute it twice.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 61 +++++++++++++++++++---------------
1 file changed, 35 insertions(+), 26 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/helper.c
16
--- a/target/arm/tcg/translate-a64.c
21
+++ b/target/arm/helper.c
17
+++ b/target/arm/tcg/translate-a64.c
22
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
18
@@ -XXX,XX +XXX,XX @@ static void do_gpr_st_memidx(DisasContext *s, TCGv_i64 source,
23
/* Translate a S1 pagetable walk through S2 if needed. */
19
unsigned int iss_srt,
24
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
20
bool iss_sf, bool iss_ar)
25
hwaddr addr, MemTxAttrs txattrs,
26
- uint32_t *fsr,
27
ARMMMUFaultInfo *fi)
28
{
21
{
29
if ((mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1) &&
22
- memop = finalize_memop(s, memop);
30
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
23
tcg_gen_qemu_st_i64(source, tcg_addr, memidx, memop);
31
hwaddr s2pa;
24
32
int s2prot;
25
if (iss_valid) {
33
int ret;
26
@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld_memidx(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
34
+ uint32_t fsr;
27
bool iss_valid, unsigned int iss_srt,
35
28
bool iss_sf, bool iss_ar)
36
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
37
- &txattrs, &s2prot, &s2size, fsr, fi, NULL);
38
+ &txattrs, &s2prot, &s2size, &fsr, fi, NULL);
39
if (ret) {
40
fi->s2addr = addr;
41
fi->stage2 = true;
42
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
43
* (but not if it was for a debug access).
44
*/
45
static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
46
- ARMMMUIdx mmu_idx, uint32_t *fsr,
47
- ARMMMUFaultInfo *fi)
48
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
49
{
29
{
50
ARMCPU *cpu = ARM_CPU(cs);
30
- memop = finalize_memop(s, memop);
51
CPUARMState *env = &cpu->env;
31
tcg_gen_qemu_ld_i64(dest, tcg_addr, memidx, memop);
52
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
32
53
33
if (extend && (memop & MO_SIGN)) {
54
attrs.secure = is_secure;
34
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
55
as = arm_addressspace(cs, attrs);
35
int o2_L_o1_o0 = extract32(insn, 21, 3) * 2 | is_lasr;
56
- addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fsr, fi);
36
int size = extract32(insn, 30, 2);
57
+ addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
37
TCGv_i64 clean_addr;
58
if (fi->s1ptw) {
38
+ MemOp memop;
59
return 0;
39
60
}
40
switch (o2_L_o1_o0) {
61
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
41
case 0x0: /* STXR */
42
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
43
gen_check_sp_alignment(s);
44
}
45
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
46
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
47
+ memop = finalize_memop(s, size | MO_ALIGN);
48
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
49
true, rn != 31, size);
50
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
51
- do_gpr_st(s, cpu_reg(s, rt), clean_addr, size | MO_ALIGN, true, rt,
52
+ do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
53
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
54
return;
55
56
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
57
if (rn == 31) {
58
gen_check_sp_alignment(s);
59
}
60
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
61
+ memop = finalize_memop(s, size | MO_ALIGN);
62
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
63
false, rn != 31, size);
64
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
65
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size | MO_ALIGN, false, true,
66
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
67
rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
68
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
69
return;
70
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
71
} else {
72
/* Only unsigned 32bit loads target 32bit registers. */
73
bool iss_sf = opc != 0;
74
+ MemOp memop = finalize_memop(s, size + is_signed * MO_SIGN);
75
76
- do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
77
- false, true, rt, iss_sf, false);
78
+ do_gpr_ld(s, tcg_rt, clean_addr, memop, false, true, rt, iss_sf, false);
79
}
62
}
80
}
63
81
64
static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
82
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
65
- ARMMMUIdx mmu_idx, uint32_t *fsr,
83
bool post_index;
66
- ARMMMUFaultInfo *fi)
84
bool writeback;
67
+ ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
85
int memidx;
68
{
86
-
69
ARMCPU *cpu = ARM_CPU(cs);
87
+ MemOp memop;
70
CPUARMState *env = &cpu->env;
88
TCGv_i64 clean_addr, dirty_addr;
71
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
89
72
90
if (is_vector) {
73
attrs.secure = is_secure;
91
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
74
as = arm_addressspace(cs, attrs);
92
return;
75
- addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fsr, fi);
93
}
76
+ addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
94
is_store = (opc == 0);
77
if (fi->s1ptw) {
95
- is_signed = extract32(opc, 1, 1);
78
return 0;
96
+ is_signed = !is_store && extract32(opc, 1, 1);
79
}
97
is_extended = (size < 3) && extract32(opc, 0, 1);
80
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
98
}
81
goto do_fault;
99
82
}
100
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
83
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
101
}
84
- mmu_idx, fsr, fi);
102
85
+ mmu_idx, fi);
103
memidx = is_unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
86
type = (desc & 3);
104
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
87
domain = (desc >> 5) & 0x0f;
105
+
88
if (regime_el(env, mmu_idx) == 1) {
106
clean_addr = gen_mte_check1_mmuidx(s, dirty_addr, is_store,
89
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
107
writeback || rn != 31,
90
table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
108
size, is_unpriv, memidx);
91
}
109
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
92
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
110
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
93
- mmu_idx, fsr, fi);
111
94
+ mmu_idx, fi);
112
if (is_store) {
95
switch (desc & 3) {
113
- do_gpr_st_memidx(s, tcg_rt, clean_addr, size, memidx,
96
case 0: /* Page translation fault. */
114
+ do_gpr_st_memidx(s, tcg_rt, clean_addr, memop, memidx,
97
code = 7;
115
iss_valid, rt, iss_sf, false);
98
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
116
} else {
99
goto do_fault;
117
- do_gpr_ld_memidx(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
100
}
118
+ do_gpr_ld_memidx(s, tcg_rt, clean_addr, memop,
101
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
119
is_extended, memidx,
102
- mmu_idx, fsr, fi);
120
iss_valid, rt, iss_sf, false);
103
+ mmu_idx, fi);
121
}
104
type = (desc & 3);
122
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
105
if (type == 0 || (type == 3 && !arm_feature(env, ARM_FEATURE_PXN))) {
123
bool is_signed = false;
106
/* Section translation fault, or attempt to use the encoding
124
bool is_store = false;
107
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
125
bool is_extended = false;
108
/* Lookup l2 entry. */
126
-
109
table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
127
TCGv_i64 tcg_rm, clean_addr, dirty_addr;
110
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
128
+ MemOp memop;
111
- mmu_idx, fsr, fi);
129
112
+ mmu_idx, fi);
130
if (extract32(opt, 1, 1) == 0) {
113
ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
131
unallocated_encoding(s);
114
switch (desc & 3) {
132
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
115
case 0: /* Page translation fault. */
133
return;
116
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
134
}
117
descaddr |= (address >> (stride * (4 - level))) & indexmask;
135
is_store = (opc == 0);
118
descaddr &= ~7ULL;
136
- is_signed = extract32(opc, 1, 1);
119
nstable = extract32(tableattrs, 4, 1);
137
+ is_signed = !is_store && extract32(opc, 1, 1);
120
- descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fsr, fi);
138
is_extended = (size < 3) && extract32(opc, 0, 1);
121
+ descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi);
139
}
122
if (fi->s1ptw) {
140
123
goto do_fault;
141
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
124
}
142
ext_and_shift_reg(tcg_rm, tcg_rm, opt, shift ? size : 0);
143
144
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
145
+
146
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
147
clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, size);
148
149
if (is_vector) {
150
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
151
} else {
152
TCGv_i64 tcg_rt = cpu_reg(s, rt);
153
bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
154
+
155
if (is_store) {
156
- do_gpr_st(s, tcg_rt, clean_addr, size,
157
+ do_gpr_st(s, tcg_rt, clean_addr, memop,
158
true, rt, iss_sf, false);
159
} else {
160
- do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
161
+ do_gpr_ld(s, tcg_rt, clean_addr, memop,
162
is_extended, true, rt, iss_sf, false);
163
}
164
}
165
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
166
int rn = extract32(insn, 5, 5);
167
unsigned int imm12 = extract32(insn, 10, 12);
168
unsigned int offset;
169
-
170
TCGv_i64 clean_addr, dirty_addr;
171
-
172
bool is_store;
173
bool is_signed = false;
174
bool is_extended = false;
175
+ MemOp memop;
176
177
if (is_vector) {
178
size |= (opc & 2) << 1;
179
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
180
return;
181
}
182
is_store = (opc == 0);
183
- is_signed = extract32(opc, 1, 1);
184
+ is_signed = !is_store && extract32(opc, 1, 1);
185
is_extended = (size < 3) && extract32(opc, 0, 1);
186
}
187
188
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
189
dirty_addr = read_cpu_reg_sp(s, rn, 1);
190
offset = imm12 << size;
191
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
192
+
193
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
194
clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, size);
195
196
if (is_vector) {
197
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
         TCGv_i64 tcg_rt = cpu_reg(s, rt);
         bool iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
         if (is_store) {
-            do_gpr_st(s, tcg_rt, clean_addr, size,
-                      true, rt, iss_sf, false);
+            do_gpr_st(s, tcg_rt, clean_addr, memop, true, rt, iss_sf, false);
         } else {
-            do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
+            do_gpr_ld(s, tcg_rt, clean_addr, memop,
                       is_extended, true, rt, iss_sf, false);
         }
     }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
     bool a = extract32(insn, 23, 1);
     TCGv_i64 tcg_rs, tcg_rt, clean_addr;
     AtomicThreeOpFn *fn = NULL;
-    MemOp mop = s->be_data | size | MO_ALIGN;
+    MemOp mop = finalize_memop(s, size | MO_ALIGN);

     if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
      * full load-acquire (we only need "load-acquire processor consistent"),
      * but we choose to implement them as full LDAQ.
      */
-    do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false,
+    do_gpr_ld(s, cpu_reg(s, rt), clean_addr, mop, false,
               true, rt, disas_ldst_compute_iss_sf(size, false, 0), true);
     tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
     return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
     bool use_key_a = !extract32(insn, 23, 1);
     int offset;
     TCGv_i64 clean_addr, dirty_addr, tcg_rt;
+    MemOp memop;

     if (size != 3 || is_vector || !dc_isar_feature(aa64_pauth, s)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
     offset = sextract32(offset << size, 0, 10 + size);
     tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);

+    memop = finalize_memop(s, size);
+
     /* Note that "clean" and "dirty" here refer to TBI not PAC. */
     clean_addr = gen_mte_check1(s, dirty_addr, false,
                                 is_wback || rn != 31, size);

     tcg_rt = cpu_reg(s, rt);
-    do_gpr_ld(s, tcg_rt, clean_addr, size,
+    do_gpr_ld(s, tcg_rt, clean_addr, memop,
               /* extend */ false, /* iss_valid */ !is_wback,
               /* iss_srt */ rt, /* iss_sf */ true, /* iss_ar */ false);

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
     }

     /* TODO: ARMv8.4-LSE SCTLR.nAA */
-    mop = size | MO_ALIGN;
+    mop = finalize_memop(s, size | MO_ALIGN);

     switch (opc) {
     case 0: /* STLURB */
--
2.7.4

--
2.34.1
diff view generated by jsdifflib
For the TT instruction we're going to need to do an MPU lookup that
also tells us which MPU region the access hit. This requires us
to do the MPU lookup without first doing the SAU security access
check, so pull the MPU lookup parts of get_phys_addr_pmsav8()
out into their own function.

The TT instruction also needs to know the MPU region number which
the lookup hit, so provide this information to the caller of the
MPU lookup code, even though get_phys_addr_pmsav8() doesn't
need to know it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-7-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/arm/helper.c | 130 +++++++++++++++++++++++++++++++---------------------
 1 file changed, 79 insertions(+), 51 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

We are going to need the complete memop beforehand,
so let's not compute it twice.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 43 ++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 20 deletions(-)
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
17
--- a/target/arm/tcg/translate-a64.c
23
+++ b/target/arm/helper.c
18
+++ b/target/arm/tcg/translate-a64.c
24
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
19
@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
20
/*
21
* Store from FP register to memory
22
*/
23
-static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
24
+static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, MemOp mop)
25
{
26
/* This writes the bottom N bits of a 128 bit wide vector to memory */
27
TCGv_i64 tmplo = tcg_temp_new_i64();
28
- MemOp mop = finalize_memop_asimd(s, size);
29
30
tcg_gen_ld_i64(tmplo, cpu_env, fp_reg_offset(s, srcidx, MO_64));
31
32
- if (size < MO_128) {
33
+ if ((mop & MO_SIZE) < MO_128) {
34
tcg_gen_qemu_st_i64(tmplo, tcg_addr, get_mem_index(s), mop);
35
} else {
36
TCGv_i64 tmphi = tcg_temp_new_i64();
37
@@ -XXX,XX +XXX,XX @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
38
/*
39
* Load from memory to FP register
40
*/
41
-static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
42
+static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, MemOp mop)
43
{
44
/* This always zero-extends and writes to a full 128 bit wide vector */
45
TCGv_i64 tmplo = tcg_temp_new_i64();
46
TCGv_i64 tmphi = NULL;
47
- MemOp mop = finalize_memop_asimd(s, size);
48
49
- if (size < MO_128) {
50
+ if ((mop & MO_SIZE) < MO_128) {
51
tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), mop);
52
} else {
53
TCGv_i128 t16 = tcg_temp_new_i128();
54
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
55
bool is_signed = false;
56
int size = 2;
57
TCGv_i64 tcg_rt, clean_addr;
58
+ MemOp memop;
59
60
if (is_vector) {
61
if (opc == 3) {
62
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
63
if (!fp_access_check(s)) {
64
return;
65
}
66
+ memop = finalize_memop_asimd(s, size);
67
} else {
68
if (opc == 3) {
69
/* PRFM (literal) : prefetch */
70
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
71
}
72
size = 2 + extract32(opc, 0, 1);
73
is_signed = extract32(opc, 1, 1);
74
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
75
}
76
77
tcg_rt = cpu_reg(s, rt);
78
79
clean_addr = tcg_temp_new_i64();
80
gen_pc_plus_diff(s, clean_addr, imm);
81
+
82
if (is_vector) {
83
- do_fp_ld(s, rt, clean_addr, size);
84
+ do_fp_ld(s, rt, clean_addr, memop);
85
} else {
86
/* Only unsigned 32bit loads target 32bit registers. */
87
bool iss_sf = opc != 0;
88
- MemOp memop = finalize_memop(s, size + is_signed * MO_SIGN);
89
-
90
do_gpr_ld(s, tcg_rt, clean_addr, memop, false, true, rt, iss_sf, false);
25
}
91
}
26
}
92
}
27
93
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
28
-static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
94
(wback || rn != 31) && !set_tag, 2 << size);
29
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
95
30
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
96
if (is_vector) {
31
- int *prot, uint32_t *fsr)
97
+ MemOp mop = finalize_memop_asimd(s, size);
32
+static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
98
+
33
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
99
if (is_load) {
34
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
100
- do_fp_ld(s, rt, clean_addr, size);
35
+ int *prot, uint32_t *fsr, uint32_t *mregion)
101
+ do_fp_ld(s, rt, clean_addr, mop);
36
{
102
} else {
37
+ /* Perform a PMSAv8 MPU lookup (without also doing the SAU check
103
- do_fp_st(s, rt, clean_addr, size);
38
+ * that a full phys-to-virt translation does).
104
+ do_fp_st(s, rt, clean_addr, mop);
39
+ * mregion is (if not NULL) set to the region number which matched,
105
}
40
+ * or -1 if no region number is returned (MPU off, address did not
106
tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
41
+ * hit a region, address hit in multiple regions).
107
if (is_load) {
42
+ */
108
- do_fp_ld(s, rt2, clean_addr, size);
43
ARMCPU *cpu = arm_env_get_cpu(env);
109
+ do_fp_ld(s, rt2, clean_addr, mop);
44
bool is_user = regime_is_user(env, mmu_idx);
110
} else {
45
uint32_t secure = regime_is_secure(env, mmu_idx);
111
- do_fp_st(s, rt2, clean_addr, size);
46
int n;
112
+ do_fp_st(s, rt2, clean_addr, mop);
47
int matchregion = -1;
113
}
48
bool hit = false;
114
} else {
49
- V8M_SAttributes sattrs = {};
115
TCGv_i64 tcg_rt = cpu_reg(s, rt);
50
116
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
51
*phys_ptr = address;
117
if (!fp_access_check(s)) {
52
*prot = 0;
118
return;
53
-
119
}
54
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
120
+ memop = finalize_memop_asimd(s, size);
55
- v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
121
} else {
56
- if (access_type == MMU_INST_FETCH) {
122
if (size == 3 && opc == 2) {
57
- /* Instruction fetches always use the MMU bank and the
123
/* PRFM - prefetch */
58
- * transaction attribute determined by the fetch address,
124
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
59
- * regardless of CPU state. This is painful for QEMU
125
is_store = (opc == 0);
60
- * to handle, because it would mean we need to encode
126
is_signed = !is_store && extract32(opc, 1, 1);
61
- * into the mmu_idx not just the (user, negpri) information
127
is_extended = (size < 3) && extract32(opc, 0, 1);
62
- * for the current security state but also that for the
128
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
63
- * other security state, which would balloon the number
64
- * of mmu_idx values needed alarmingly.
65
- * Fortunately we can avoid this because it's not actually
66
- * possible to arbitrarily execute code from memory with
67
- * the wrong security attribute: it will always generate
68
- * an exception of some kind or another, apart from the
69
- * special case of an NS CPU executing an SG instruction
70
- * in S&NSC memory. So we always just fail the translation
71
- * here and sort things out in the exception handler
72
- * (including possibly emulating an SG instruction).
73
- */
74
- if (sattrs.ns != !secure) {
75
- *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
76
- return true;
77
- }
78
- } else {
79
- /* For data accesses we always use the MMU bank indicated
80
- * by the current CPU state, but the security attributes
81
- * might downgrade a secure access to nonsecure.
82
- */
83
- if (sattrs.ns) {
84
- txattrs->secure = false;
85
- } else if (!secure) {
86
- /* NS access to S memory must fault.
87
- * Architecturally we should first check whether the
88
- * MPU information for this address indicates that we
89
- * are doing an unaligned access to Device memory, which
90
- * should generate a UsageFault instead. QEMU does not
91
- * currently check for that kind of unaligned access though.
92
- * If we added it we would need to do so as a special case
93
- * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
94
- */
95
- *fsr = M_FAKE_FSR_SFAULT;
96
- return true;
97
- }
98
- }
99
+ if (mregion) {
100
+ *mregion = -1;
101
}
129
}
102
130
103
/* Unlike the ARM ARM pseudocode, we don't need to check whether this
131
switch (idx) {
104
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
132
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
105
/* We don't need to look the attribute up in the MAIR0/MAIR1
106
* registers because that only tells us about cacheability.
107
*/
108
+ if (mregion) {
109
+ *mregion = matchregion;
110
+ }
111
}
133
}
112
134
113
*fsr = 0x00d; /* Permission fault */
135
memidx = is_unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
114
return !(*prot & (1 << access_type));
136
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
115
}
137
116
138
clean_addr = gen_mte_check1_mmuidx(s, dirty_addr, is_store,
117
+
139
writeback || rn != 31,
118
+static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
140
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
119
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
141
120
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
142
if (is_vector) {
121
+ int *prot, uint32_t *fsr)
143
if (is_store) {
122
+{
144
- do_fp_st(s, rt, clean_addr, size);
123
+ uint32_t secure = regime_is_secure(env, mmu_idx);
145
+ do_fp_st(s, rt, clean_addr, memop);
124
+ V8M_SAttributes sattrs = {};
146
} else {
125
+
147
- do_fp_ld(s, rt, clean_addr, size);
126
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
148
+ do_fp_ld(s, rt, clean_addr, memop);
127
+ v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
149
}
128
+ if (access_type == MMU_INST_FETCH) {
150
} else {
129
+ /* Instruction fetches always use the MMU bank and the
151
TCGv_i64 tcg_rt = cpu_reg(s, rt);
130
+ * transaction attribute determined by the fetch address,
152
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
131
+ * regardless of CPU state. This is painful for QEMU
153
132
+ * to handle, because it would mean we need to encode
154
if (is_vector) {
133
+ * into the mmu_idx not just the (user, negpri) information
155
if (is_store) {
134
+ * for the current security state but also that for the
156
- do_fp_st(s, rt, clean_addr, size);
135
+ * other security state, which would balloon the number
157
+ do_fp_st(s, rt, clean_addr, memop);
136
+ * of mmu_idx values needed alarmingly.
158
} else {
137
+ * Fortunately we can avoid this because it's not actually
159
- do_fp_ld(s, rt, clean_addr, size);
138
+ * possible to arbitrarily execute code from memory with
160
+ do_fp_ld(s, rt, clean_addr, memop);
139
+ * the wrong security attribute: it will always generate
161
}
140
+ * an exception of some kind or another, apart from the
162
} else {
141
+ * special case of an NS CPU executing an SG instruction
163
TCGv_i64 tcg_rt = cpu_reg(s, rt);
142
+ * in S&NSC memory. So we always just fail the translation
164
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
143
+ * here and sort things out in the exception handler
165
144
+ * (including possibly emulating an SG instruction).
166
if (is_vector) {
145
+ */
167
if (is_store) {
146
+ if (sattrs.ns != !secure) {
168
- do_fp_st(s, rt, clean_addr, size);
147
+ *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
169
+ do_fp_st(s, rt, clean_addr, memop);
148
+ *phys_ptr = address;
170
} else {
149
+ *prot = 0;
171
- do_fp_ld(s, rt, clean_addr, size);
150
+ return true;
172
+ do_fp_ld(s, rt, clean_addr, memop);
151
+ }
173
}
152
+ } else {
174
} else {
153
+ /* For data accesses we always use the MMU bank indicated
175
TCGv_i64 tcg_rt = cpu_reg(s, rt);
154
+ * by the current CPU state, but the security attributes
155
+ * might downgrade a secure access to nonsecure.
156
+ */
157
+ if (sattrs.ns) {
158
+ txattrs->secure = false;
159
+ } else if (!secure) {
160
+ /* NS access to S memory must fault.
161
+ * Architecturally we should first check whether the
162
+ * MPU information for this address indicates that we
163
+ * are doing an unaligned access to Device memory, which
164
+ * should generate a UsageFault instead. QEMU does not
165
+ * currently check for that kind of unaligned access though.
166
+ * If we added it we would need to do so as a special case
167
+ * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
168
+ */
169
+ *fsr = M_FAKE_FSR_SFAULT;
170
+ *phys_ptr = address;
171
+ *prot = 0;
172
+ return true;
173
+ }
174
+ }
175
+ }
176
+
177
+ return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
178
+ txattrs, prot, fsr, NULL);
179
+}
180
+
181
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
182
MMUAccessType access_type, ARMMMUIdx mmu_idx,
183
hwaddr *phys_ptr, int *prot, uint32_t *fsr)
184
--
2.7.4

--
2.34.1
diff view generated by jsdifflib
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Make tx/rx_data_bytes more generic so they can be reused (when adding
support for the Zynqmp Generic QSPI).

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20171126231634.9531-9-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 64 +++++++++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 27 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Pass the completed memop to gen_mte_check1_mmuidx.
For the moment, do nothing more than extract the size.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.h | 2 +-
 target/arm/tcg/translate-a64.c | 82 ++++++++++++++++++----------------
 target/arm/tcg/translate-sve.c | 7 +--
 3 files changed, 49 insertions(+), 42 deletions(-)
14
15
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
16
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
18
--- a/target/arm/tcg/translate-a64.h
18
+++ b/hw/ssi/xilinx_spips.c
19
+++ b/target/arm/tcg/translate-a64.h
19
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static inline bool sme_smza_enabled_check(DisasContext *s)
20
/* config register */
21
21
#define R_CONFIG (0x00 / 4)
22
TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
22
#define IFMODE (1U << 31)
23
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
23
-#define ENDIAN (1 << 26)
24
- bool tag_checked, int log2_size);
24
+#define R_CONFIG_ENDIAN (1 << 26)
25
+ bool tag_checked, MemOp memop);
25
#define MODEFAIL_GEN_EN (1 << 17)
26
TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
26
#define MAN_START_COM (1 << 16)
27
bool tag_checked, int size);
27
#define MAN_START_EN (1 << 15)
28
28
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
29
}
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static void gen_probe_access(DisasContext *s, TCGv_i64 ptr,
34
*/
35
static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
36
bool is_write, bool tag_checked,
37
- int log2_size, bool is_unpriv,
38
+ MemOp memop, bool is_unpriv,
39
int core_idx)
40
{
41
if (tag_checked && s->mte_active[is_unpriv]) {
42
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
43
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
44
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
45
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
46
- desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << log2_size) - 1);
47
+ desc = FIELD_DP32(desc, MTEDESC, SIZEM1, memop_size(memop) - 1);
48
49
ret = tcg_temp_new_i64();
50
gen_helper_mte_check(ret, cpu_env, tcg_constant_i32(desc), addr);
51
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
30
}
52
}
31
53
32
-static inline void rx_data_bytes(XilinxSPIPS *s, uint8_t *value, int max)
54
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
33
+static inline void tx_data_bytes(Fifo8 *fifo, uint32_t value, int num, bool be)
55
- bool tag_checked, int log2_size)
56
+ bool tag_checked, MemOp memop)
34
{
57
{
35
int i;
58
- return gen_mte_check1_mmuidx(s, addr, is_write, tag_checked, log2_size,
36
+ for (i = 0; i < num && !fifo8_is_full(fifo); ++i) {
59
+ return gen_mte_check1_mmuidx(s, addr, is_write, tag_checked, memop,
37
+ if (be) {
60
false, get_mem_index(s));
38
+ fifo8_push(fifo, (uint8_t)(value >> 24));
61
}
39
+ value <<= 8;
62
40
+ } else {
63
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
41
+ fifo8_push(fifo, (uint8_t)value);
64
int size, bool is_pair)
42
+ value >>= 8;
65
{
43
+ }
66
int idx = get_mem_index(s);
67
- MemOp memop;
68
TCGv_i64 dirty_addr, clean_addr;
69
+ MemOp memop;
70
+
71
+ /*
72
+ * For pairs:
73
+ * if size == 2, the operation is single-copy atomic for the doubleword.
74
+ * if size == 3, the operation is single-copy atomic for *each* doubleword,
75
+ * not the entire quadword, however it must be quadword aligned.
76
+ */
77
+ memop = size + is_pair;
78
+ if (memop == MO_128) {
79
+ memop = finalize_memop_atom(s, MO_128 | MO_ALIGN,
80
+ MO_ATOM_IFALIGN_PAIR);
81
+ } else {
82
+ memop = finalize_memop(s, memop | MO_ALIGN);
44
+ }
83
+ }
45
+}
84
46
85
s->is_ldex = true;
47
- for (i = 0; i < max && !fifo8_is_empty(&s->rx_fifo); ++i) {
86
dirty_addr = cpu_reg_sp(s, rn);
48
- value[i] = fifo8_pop(&s->rx_fifo);
87
- clean_addr = gen_mte_check1(s, dirty_addr, false, rn != 31, size);
49
+static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
88
+ clean_addr = gen_mte_check1(s, dirty_addr, false, rn != 31, memop);
50
+{
89
51
+ int i;
90
g_assert(size <= 3);
91
if (is_pair) {
92
g_assert(size >= 2);
93
if (size == 2) {
94
- /* The pair must be single-copy atomic for the doubleword. */
95
- memop = finalize_memop(s, MO_64 | MO_ALIGN);
96
tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
97
if (s->be_data == MO_LE) {
98
tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
99
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
100
tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
101
}
102
} else {
103
- /*
104
- * The pair must be single-copy atomic for *each* doubleword, not
105
- * the entire quadword, however it must be quadword aligned.
106
- * Expose the complete load to tcg, for ease of tlb lookup,
107
- * but indicate that only 8-byte atomicity is required.
108
- */
109
TCGv_i128 t16 = tcg_temp_new_i128();
110
111
- memop = finalize_memop_atom(s, MO_128 | MO_ALIGN_16,
112
- MO_ATOM_IFALIGN_PAIR);
113
tcg_gen_qemu_ld_i128(t16, clean_addr, idx, memop);
114
115
if (s->be_data == MO_LE) {
116
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
117
tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
118
}
119
} else {
120
- memop = finalize_memop(s, size | MO_ALIGN);
121
tcg_gen_qemu_ld_i64(cpu_exclusive_val, clean_addr, idx, memop);
122
tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
123
}
124
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
125
TCGLabel *fail_label = gen_new_label();
126
TCGLabel *done_label = gen_new_label();
127
TCGv_i64 tmp, dirty_addr, clean_addr;
128
+ MemOp memop;
52
+
129
+
53
+ for (i = 0; i < max && !fifo8_is_empty(fifo); ++i) {
130
+ memop = (size + is_pair) | MO_ALIGN;
54
+ value[i] = fifo8_pop(fifo);
131
+ memop = finalize_memop(s, memop);
55
}
132
56
+ return max - i;
133
dirty_addr = cpu_reg_sp(s, rn);
134
- clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, size);
135
+ clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
136
137
tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
138
139
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
140
}
141
tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr,
142
cpu_exclusive_val, tmp,
143
- get_mem_index(s),
144
- MO_64 | MO_ALIGN | s->be_data);
145
+ get_mem_index(s), memop);
146
tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
147
} else {
148
TCGv_i128 t16 = tcg_temp_new_i128();
149
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
150
}
151
152
tcg_gen_atomic_cmpxchg_i128(t16, cpu_exclusive_addr, c16, t16,
153
- get_mem_index(s),
154
- MO_128 | MO_ALIGN | s->be_data);
155
+ get_mem_index(s), memop);
156
157
a = tcg_temp_new_i64();
158
b = tcg_temp_new_i64();
159
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
160
}
161
} else {
162
tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr, cpu_exclusive_val,
163
- cpu_reg(s, rt), get_mem_index(s),
164
- size | MO_ALIGN | s->be_data);
165
+ cpu_reg(s, rt), get_mem_index(s), memop);
166
tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
167
}
168
tcg_gen_mov_i64(cpu_reg(s, rd), tmp);
169
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap(DisasContext *s, int rs, int rt,
170
TCGv_i64 tcg_rt = cpu_reg(s, rt);
171
int memidx = get_mem_index(s);
172
TCGv_i64 clean_addr;
173
+ MemOp memop;
174
175
if (rn == 31) {
176
gen_check_sp_alignment(s);
177
}
178
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, size);
179
- tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt, memidx,
180
- size | MO_ALIGN | s->be_data);
181
+ memop = finalize_memop(s, size | MO_ALIGN);
182
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
183
+ tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt,
184
+ memidx, memop);
57
}
185
}
58
186
59
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
187
static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
60
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
188
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
61
uint32_t mask = ~0;
189
TCGv_i64 t2 = cpu_reg(s, rt + 1);
62
uint32_t ret;
190
TCGv_i64 clean_addr;
63
uint8_t rx_buf[4];
191
int memidx = get_mem_index(s);
64
+ int shortfall;
192
+ MemOp memop;
65
193
66
addr >>= 2;
194
if (rn == 31) {
67
switch (addr) {
195
gen_check_sp_alignment(s);
68
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
196
}
69
break;
197
70
case R_RX_DATA:
198
/* This is a single atomic access, despite the "pair". */
71
memset(rx_buf, 0, sizeof(rx_buf));
199
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, size + 1);
72
- rx_data_bytes(s, rx_buf, s->num_txrx_bytes);
200
+ memop = finalize_memop(s, (size + 1) | MO_ALIGN);
73
- ret = s->regs[R_CONFIG] & ENDIAN ? cpu_to_be32(*(uint32_t *)rx_buf)
201
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
74
- : cpu_to_le32(*(uint32_t *)rx_buf);
202
75
+ shortfall = rx_data_bytes(&s->rx_fifo, rx_buf, s->num_txrx_bytes);
203
if (size == 2) {
76
+ ret = s->regs[R_CONFIG] & R_CONFIG_ENDIAN ?
204
TCGv_i64 cmp = tcg_temp_new_i64();
77
+ cpu_to_be32(*(uint32_t *)rx_buf) :
205
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
78
+ cpu_to_le32(*(uint32_t *)rx_buf);
206
tcg_gen_concat32_i64(cmp, s2, s1);
79
+ if (!(s->regs[R_CONFIG] & R_CONFIG_ENDIAN)) {
80
+ ret <<= 8 * shortfall;
81
+ }
82
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
83
xilinx_spips_update_ixr(s);
84
return ret;
85
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
86
87
}
88
89
-static inline void tx_data_bytes(XilinxSPIPS *s, uint32_t value, int num)
90
-{
91
- int i;
92
- for (i = 0; i < num && !fifo8_is_full(&s->tx_fifo); ++i) {
93
- if (s->regs[R_CONFIG] & ENDIAN) {
94
- fifo8_push(&s->tx_fifo, (uint8_t)(value >> 24));
95
- value <<= 8;
96
- } else {
97
- fifo8_push(&s->tx_fifo, (uint8_t)value);
98
- value >>= 8;
99
- }
100
- }
101
-}
102
-
103
static void xilinx_spips_write(void *opaque, hwaddr addr,
104
uint64_t value, unsigned size)
105
{
106
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
107
mask = 0;
108
break;
109
case R_TX_DATA:
110
- tx_data_bytes(s, (uint32_t)value, s->num_txrx_bytes);
111
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, s->num_txrx_bytes,
112
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
113
goto no_reg_update;
114
case R_TXD1:
115
- tx_data_bytes(s, (uint32_t)value, 1);
116
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 1,
117
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
118
goto no_reg_update;
119
case R_TXD2:
120
- tx_data_bytes(s, (uint32_t)value, 2);
121
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 2,
122
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
123
goto no_reg_update;
124
case R_TXD3:
125
- tx_data_bytes(s, (uint32_t)value, 3);
126
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 3,
127
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
128
goto no_reg_update;
129
}
130
s->regs[addr] = (s->regs[addr] & ~mask) | (value & mask);
131
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
132
133
while (cache_entry < LQSPI_CACHE_SIZE) {
134
for (i = 0; i < 64; ++i) {
135
- tx_data_bytes(s, 0, 1);
136
+ tx_data_bytes(&s->tx_fifo, 0, 1, false);
137
}
138
xilinx_spips_flush_txfifo(s);
139
for (i = 0; i < 64; ++i) {
140
- rx_data_bytes(s, &q->lqspi_buf[cache_entry++], 1);
141
+ rx_data_bytes(&s->rx_fifo, &q->lqspi_buf[cache_entry++], 1);
142
}
143
}
207
}
144
208
209
- tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx,
210
- MO_64 | MO_ALIGN | s->be_data);
211
+ tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx, memop);
212
213
if (s->be_data == MO_LE) {
214
tcg_gen_extr32_i64(s1, s2, cmp);
215
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
216
tcg_gen_concat_i64_i128(cmp, s2, s1);
217
}
218
219
- tcg_gen_atomic_cmpxchg_i128(cmp, clean_addr, cmp, val, memidx,
220
- MO_128 | MO_ALIGN | s->be_data);
221
+ tcg_gen_atomic_cmpxchg_i128(cmp, clean_addr, cmp, val, memidx, memop);
222
223
if (s->be_data == MO_LE) {
224
tcg_gen_extr_i128_i64(s1, s2, cmp);
225
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
226
/* TODO: ARMv8.4-LSE SCTLR.nAA */
227
memop = finalize_memop(s, size | MO_ALIGN);
228
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
229
- true, rn != 31, size);
230
+ true, rn != 31, memop);
231
do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
232
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
233
return;
234
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
235
/* TODO: ARMv8.4-LSE SCTLR.nAA */
236
memop = finalize_memop(s, size | MO_ALIGN);
237
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
238
- false, rn != 31, size);
239
+ false, rn != 31, memop);
240
do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
241
rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
242
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
243
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
244
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
245
246
memop = finalize_memop(s, size + is_signed * MO_SIGN);
247
- clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, size);
248
+ clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, memop);
249
250
if (is_vector) {
251
if (is_store) {
252
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
253
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
254
255
memop = finalize_memop(s, size + is_signed * MO_SIGN);
256
- clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, size);
257
+ clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, memop);
258
259
if (is_vector) {
260
if (is_store) {
261
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
262
if (rn == 31) {
263
gen_check_sp_alignment(s);
264
}
265
- clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, size);
266
+ clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, mop);
267
268
if (o3_opc == 014) {
269
/*
270
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
271
272
/* Note that "clean" and "dirty" here refer to TBI not PAC. */
273
clean_addr = gen_mte_check1(s, dirty_addr, false,
274
- is_wback || rn != 31, size);
275
+ is_wback || rn != 31, memop);
276
277
tcg_rt = cpu_reg(s, rt);
278
do_gpr_ld(s, tcg_rt, clean_addr, memop,
279
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
280
index XXXXXXX..XXXXXXX 100644
281
--- a/target/arm/tcg/translate-sve.c
282
+++ b/target/arm/tcg/translate-sve.c
283
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
284
unsigned msz = dtype_msz(a->dtype);
285
TCGLabel *over;
286
TCGv_i64 temp, clean_addr;
287
+ MemOp memop;
288
289
if (!dc_isar_feature(aa64_sve, s)) {
290
return false;
291
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
292
/* Load the data. */
293
temp = tcg_temp_new_i64();
294
tcg_gen_addi_i64(temp, cpu_reg_sp(s, a->rn), a->imm << msz);
295
- clean_addr = gen_mte_check1(s, temp, false, true, msz);
296
297
- tcg_gen_qemu_ld_i64(temp, clean_addr, get_mem_index(s),
298
- finalize_memop(s, dtype_mop[a->dtype]));
299
+ memop = finalize_memop(s, dtype_mop[a->dtype]);
300
+ clean_addr = gen_mte_check1(s, temp, false, true, memop);
301
+ tcg_gen_qemu_ld_i64(temp, clean_addr, get_mem_index(s), memop);
302
303
/* Broadcast to *all* elements. */
304
tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd),
--
2.7.4

--
2.34.1
diff view generated by jsdifflib
Make get_phys_addr_pmsav7() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-8-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Pass the individual memop to gen_mte_checkN.
For the moment, do nothing with it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.h | 2 +-
 target/arm/tcg/translate-a64.c | 31 +++++++++++++++++++------------
 target/arm/tcg/translate-sve.c | 4 ++--
 3 files changed, 22 insertions(+), 15 deletions(-)
12
15
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
18
--- a/target/arm/tcg/translate-a64.h
16
+++ b/target/arm/helper.c
19
+++ b/target/arm/tcg/translate-a64.h
17
@@ -XXX,XX +XXX,XX @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
20
@@ -XXX,XX +XXX,XX @@ TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
18
21
TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
19
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
22
bool tag_checked, MemOp memop);
20
MMUAccessType access_type, ARMMMUIdx mmu_idx,
23
TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
21
- hwaddr *phys_ptr, int *prot, uint32_t *fsr)
24
- bool tag_checked, int size);
22
+ hwaddr *phys_ptr, int *prot,
25
+ bool tag_checked, int total_size, MemOp memop);
23
+ ARMMMUFaultInfo *fi)
26
27
/* We should have at some point before trying to access an FP register
28
* done the necessary access check, so assert that
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
34
* For MTE, check multiple logical sequential accesses.
35
*/
36
TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
37
- bool tag_checked, int size)
38
+ bool tag_checked, int total_size, MemOp single_mop)
24
{
39
{
25
ARMCPU *cpu = arm_env_get_cpu(env);
40
if (tag_checked && s->mte_active[0]) {
26
int n;
41
TCGv_i64 ret;
27
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
42
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
28
if (n == -1) { /* no hits */
43
desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
29
if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
44
desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
30
/* background fault */
45
desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
31
- *fsr = 0;
46
- desc = FIELD_DP32(desc, MTEDESC, SIZEM1, size - 1);
32
+ fi->type = ARMFault_Background;
47
+ desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
33
return true;
48
34
}
49
ret = tcg_temp_new_i64();
35
get_phys_addr_pmsav7_default(env, mmu_idx, address, prot);
50
gen_helper_mte_check(ret, cpu_env, tcg_constant_i32(desc), addr);
36
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
51
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
52
bool is_vector = extract32(insn, 26, 1);
53
bool is_load = extract32(insn, 22, 1);
54
int opc = extract32(insn, 30, 2);
55
-
56
bool is_signed = false;
57
bool postindex = false;
58
bool wback = false;
59
bool set_tag = false;
60
-
61
TCGv_i64 clean_addr, dirty_addr;
62
-
63
+ MemOp mop;
64
int size;
65
66
if (opc == 3) {
67
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
37
}
68
}
38
}
69
}
39
70
40
- *fsr = 0x00d; /* Permission fault */
71
+ if (is_vector) {
41
+ fi->type = ARMFault_Permission;
72
+ mop = finalize_memop_asimd(s, size);
42
+ fi->level = 1;
73
+ } else {
43
return !(*prot & (1 << access_type));
74
+ mop = finalize_memop(s, size);
44
}
75
+ }
45
76
clean_addr = gen_mte_checkN(s, dirty_addr, !is_load,
46
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
77
- (wback || rn != 31) && !set_tag, 2 << size);
47
} else if (arm_feature(env, ARM_FEATURE_V7)) {
78
+ (wback || rn != 31) && !set_tag,
48
/* PMSAv7 */
79
+ 2 << size, mop);
49
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
80
50
- phys_ptr, prot, fsr);
81
if (is_vector) {
51
+ phys_ptr, prot, fi);
82
- MemOp mop = finalize_memop_asimd(s, size);
52
+ *fsr = arm_fi_to_sfsc(fi);
83
-
84
+ /* LSE2 does not merge FP pairs; leave these as separate operations. */
85
if (is_load) {
86
do_fp_ld(s, rt, clean_addr, mop);
53
} else {
87
} else {
54
/* Pre-v7 MPU */
88
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
55
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
89
} else {
90
TCGv_i64 tcg_rt = cpu_reg(s, rt);
91
TCGv_i64 tcg_rt2 = cpu_reg(s, rt2);
92
- MemOp mop = size + 1;
93
94
/*
95
+ * We built mop above for the single logical access -- rebuild it
96
+ * now for the paired operation.
97
+ *
98
* With LSE2, non-sign-extending pairs are treated atomically if
99
* aligned, and if unaligned one of the pair will be completely
100
* within a 16-byte block and that element will be atomic.
101
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
102
* This treats sign-extending loads like zero-extending loads,
103
* since that reuses the most code below.
104
*/
105
+ mop = size + 1;
106
if (s->align_mem) {
107
mop |= (size == 2 ? MO_ALIGN_4 : MO_ALIGN_8);
108
}
109
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
110
* promote consecutive little-endian elements below.
111
*/
112
clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
113
- total);
114
+ total, finalize_memop(s, size));
115
116
/*
117
* Consecutive little-endian elements from a single register
118
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
119
total = selem << scale;
120
tcg_rn = cpu_reg_sp(s, rn);
121
122
- clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
123
- total);
124
mop = finalize_memop(s, scale);
125
126
+ clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
127
+ total, mop);
128
+
129
tcg_ebytes = tcg_constant_i64(1 << scale);
130
for (xs = 0; xs < selem; xs++) {
131
if (replicate) {
132
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/target/arm/tcg/translate-sve.c
135
+++ b/target/arm/tcg/translate-sve.c
136
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
137
138
dirty_addr = tcg_temp_new_i64();
139
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
140
- clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
141
+ clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
142
143
/*
144
* Note that unpredicated load/store of vector/predicate registers
145
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
146
147
dirty_addr = tcg_temp_new_i64();
148
tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
149
- clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
150
+ clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
151
152
/* Note that unpredicated load/store of vector/predicate registers
153
* are defined as a stream of bytes, which equates to little-endian
56
--
154
--
57
2.7.4
155
2.34.1
58
59
diff view generated by jsdifflib
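For readers following the series: the point of passing a MemOp instead of a bare byte count is that one MemOp value packs both the access size and the alignment requirement, so a later consumer can recover either. A minimal standalone sketch of that packing idea (invented `demo_*` names; this is not QEMU's actual `MemOp` layout from `include/exec/memop.h`):

```c
#include <assert.h>

/*
 * Illustrative MemOp-style encoding: log2(access size) in the low bits,
 * log2(required alignment) in a separate 3-bit field. One such value
 * carries both facts that a plain "int size" parameter could not.
 */
enum {
    DEMO_MO_SIZE_MASK   = 0x3,                       /* log2(size) */
    DEMO_MO_ALIGN_SHIFT = 2,
    DEMO_MO_ALIGN_MASK  = 0x7 << DEMO_MO_ALIGN_SHIFT /* log2(align) */
};

static inline int demo_make_memop(int log2_size, int log2_align)
{
    return log2_size | (log2_align << DEMO_MO_ALIGN_SHIFT);
}

static inline int demo_memop_size(int mop)
{
    return 1 << (mop & DEMO_MO_SIZE_MASK);   /* size in bytes */
}

static inline int demo_alignment_bits(int mop)
{
    return (mop & DEMO_MO_ALIGN_MASK) >> DEMO_MO_ALIGN_SHIFT;
}
```

With this shape, "pass the memop down and do nothing with it yet" costs nothing, while later patches can pull the alignment field back out.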
From: Richard Henderson <richard.henderson@linaro.org>

Fixes a bug in that with SCTLR.A set, we should raise any
alignment fault before raising any MTE check fault.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h         |  3 ++-
 target/arm/tcg/mte_helper.c    | 18 ++++++++++++++++++
 target/arm/tcg/translate-a64.c |  2 ++
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, MIDX, 0, 4)
 FIELD(MTEDESC, TBI, 4, 2)
 FIELD(MTEDESC, TCMA, 6, 2)
 FIELD(MTEDESC, WRITE, 8, 1)
-FIELD(MTEDESC, SIZEM1, 9, SIMD_DATA_BITS - 9)  /* size - 1 */
+FIELD(MTEDESC, ALIGN, 9, 3)
+FIELD(MTEDESC, SIZEM1, 12, SIMD_DATA_BITS - 12)  /* size - 1 */
 
 bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
 uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra)
 
 uint64_t HELPER(mte_check)(CPUARMState *env, uint32_t desc, uint64_t ptr)
 {
+    /*
+     * R_XCHFJ: Alignment check not caused by memory type is priority 1,
+     * higher than any translation fault.  When MTE is disabled, tcg
+     * performs the alignment check during the code generated for the
+     * memory access.  With MTE enabled, we must check this here before
+     * raising any translation fault in allocation_tag_mem.
+     */
+    unsigned align = FIELD_EX32(desc, MTEDESC, ALIGN);
+    if (unlikely(align)) {
+        align = (1u << align) - 1;
+        if (unlikely(ptr & align)) {
+            int idx = FIELD_EX32(desc, MTEDESC, MIDX);
+            bool w = FIELD_EX32(desc, MTEDESC, WRITE);
+            MMUAccessType type = w ? MMU_DATA_STORE : MMU_DATA_LOAD;
+            arm_cpu_do_unaligned_access(env_cpu(env), ptr, type, idx, GETPC());
+        }
+    }
+
     return mte_check(env, desc, ptr, GETPC());
 }
 
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
     desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
     desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
     desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
+    desc = FIELD_DP32(desc, MTEDESC, ALIGN, get_alignment_bits(memop));
     desc = FIELD_DP32(desc, MTEDESC, SIZEM1, memop_size(memop) - 1);
 
     ret = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
     desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
     desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
     desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
+    desc = FIELD_DP32(desc, MTEDESC, ALIGN, get_alignment_bits(single_mop));
     desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
 
     ret = tcg_temp_new_i64();
-- 
2.34.1
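The helper change in this patch stores log2(alignment) in a 3-bit field of `desc`, then expands it to a mask at runtime. A minimal standalone sketch of that decode-and-test step (the `demo_is_misaligned` name is invented; the real field extraction is done with QEMU's `FIELD_EX32`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * A value of 0 means "no alignment requirement encoded"; otherwise the
 * field holds log2(required alignment), so (1 << n) - 1 gives the mask
 * of address bits that must be zero.
 */
static bool demo_is_misaligned(uint64_t ptr, unsigned align_log2)
{
    if (align_log2 == 0) {
        return false;
    }
    uint64_t mask = (1u << align_log2) - 1;
    return (ptr & mask) != 0;
}
```

Encoding the exponent rather than the mask is what lets the requirement fit in 3 bits of the descriptor word.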
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h               | 3 ++-
 target/arm/tcg/translate.h     | 2 ++
 target/arm/tcg/hflags.c        | 6 ++++++
 target/arm/tcg/translate-a64.c | 1 +
 4 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_D       (1U << 5) /* up to v5; RAO in v6 */
 #define SCTLR_CP15BEN (1U << 5) /* v7 onward */
 #define SCTLR_L       (1U << 6) /* up to v5; RAO in v6 and v7; RAZ in v8 */
-#define SCTLR_nAA     (1U << 6) /* when v8.4-LSE is implemented */
+#define SCTLR_nAA     (1U << 6) /* when FEAT_LSE2 is implemented */
 #define SCTLR_B       (1U << 7) /* up to v6; RAZ in v7 */
 #define SCTLR_ITD     (1U << 7) /* v8 onward */
 #define SCTLR_S       (1U << 8) /* up to v6; RAZ in v7 */
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, SVL, 24, 4)
 /* Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not. */
 FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
 FIELD(TBFLAG_A64, FGT_ERET, 29, 1)
+FIELD(TBFLAG_A64, NAA, 30, 1)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool fgt_eret;
     /* True if fine-grained trap on SVC is enabled */
     bool fgt_svc;
+    /* True if FEAT_LSE2 SCTLR_ELx.nAA is set */
+    bool naa;
     /*
      * >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
      * < 0, set by the current instruction.
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         }
     }
 
+    if (cpu_isar_feature(aa64_lse2, env_archcpu(env))) {
+        if (sctlr & SCTLR_nAA) {
+            DP_TBFLAG_A64(flags, NAA, 1);
+        }
+    }
+
     /* Compute the condition for using AccType_UNPRIV for LDTR et al. */
     if (!(env->pstate & PSTATE_UAO)) {
         switch (mmu_idx) {
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
     dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
     dc->sme_trap_nonstreaming = EX_TBFLAG_A64(tb_flags, SME_TRAP_NONSTREAMING);
+    dc->naa = EX_TBFLAG_A64(tb_flags, NAA);
     dc->vec_len = 0;
     dc->vec_stride = 0;
     dc->cp_regs = arm_cpu->cp_regs;
-- 
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

FEAT_LSE2 only requires that atomic operations not cross a
16-byte boundary.  Ordered operations may be completely
unaligned if SCTLR.nAA is set.

Because this alignment check is so special, do it by hand.
Make sure not to keep TCG temps live across the branch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-17-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/helper-a64.h    |   3 +
 target/arm/tcg/helper-a64.c    |   7 ++
 target/arm/tcg/translate-a64.c | 120 ++++++++++++++++++++++++++-------
 3 files changed, 104 insertions(+), 26 deletions(-)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(st2g_stub, TCG_CALL_NO_WG, void, env, i64)
 DEF_HELPER_FLAGS_2(ldgm, TCG_CALL_NO_WG, i64, env, i64)
 DEF_HELPER_FLAGS_3(stgm, TCG_CALL_NO_WG, void, env, i64, i64)
 DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64)
+
+DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG,
+                   noreturn, env, i64, i32, i32)
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
 
     memset(mem, 0, blocklen);
 }
+
+void HELPER(unaligned_access)(CPUARMState *env, uint64_t addr,
+                              uint32_t access_type, uint32_t mmu_idx)
+{
+    arm_cpu_do_unaligned_access(env_cpu(env), addr, access_type,
+                                mmu_idx, GETPC());
+}
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
     return clean_data_tbi(s, addr);
 }
 
+/*
+ * Generate the special alignment check that applies to AccType_ATOMIC
+ * and AccType_ORDERED insns under FEAT_LSE2: the access need not be
+ * naturally aligned, but it must not cross a 16-byte boundary.
+ * See AArch64.CheckAlignment().
+ */
+static void check_lse2_align(DisasContext *s, int rn, int imm,
+                             bool is_write, MemOp mop)
+{
+    TCGv_i32 tmp;
+    TCGv_i64 addr;
+    TCGLabel *over_label;
+    MMUAccessType type;
+    int mmu_idx;
+
+    tmp = tcg_temp_new_i32();
+    tcg_gen_extrl_i64_i32(tmp, cpu_reg_sp(s, rn));
+    tcg_gen_addi_i32(tmp, tmp, imm & 15);
+    tcg_gen_andi_i32(tmp, tmp, 15);
+    tcg_gen_addi_i32(tmp, tmp, memop_size(mop));
+
+    over_label = gen_new_label();
+    tcg_gen_brcondi_i32(TCG_COND_LEU, tmp, 16, over_label);
+
+    addr = tcg_temp_new_i64();
+    tcg_gen_addi_i64(addr, cpu_reg_sp(s, rn), imm);
+
+    type = is_write ? MMU_DATA_STORE : MMU_DATA_LOAD,
+    mmu_idx = get_mem_index(s);
+    gen_helper_unaligned_access(cpu_env, addr, tcg_constant_i32(type),
+                                tcg_constant_i32(mmu_idx));
+
+    gen_set_label(over_label);
+
+}
+
+/* Handle the alignment check for AccType_ATOMIC instructions. */
+static MemOp check_atomic_align(DisasContext *s, int rn, MemOp mop)
+{
+    MemOp size = mop & MO_SIZE;
+
+    if (size == MO_8) {
+        return mop;
+    }
+
+    /*
+     * If size == MO_128, this is a LDXP, and the operation is single-copy
+     * atomic for each doubleword, not the entire quadword; it still must
+     * be quadword aligned.
+     */
+    if (size == MO_128) {
+        return finalize_memop_atom(s, MO_128 | MO_ALIGN,
+                                   MO_ATOM_IFALIGN_PAIR);
+    }
+    if (dc_isar_feature(aa64_lse2, s)) {
+        check_lse2_align(s, rn, 0, true, mop);
+    } else {
+        mop |= MO_ALIGN;
+    }
+    return finalize_memop(s, mop);
+}
+
+/* Handle the alignment check for AccType_ORDERED instructions. */
+static MemOp check_ordered_align(DisasContext *s, int rn, int imm,
+                                 bool is_write, MemOp mop)
+{
+    MemOp size = mop & MO_SIZE;
+
+    if (size == MO_8) {
+        return mop;
+    }
+    if (size == MO_128) {
+        return finalize_memop_atom(s, MO_128 | MO_ALIGN,
+                                   MO_ATOM_IFALIGN_PAIR);
+    }
+    if (!dc_isar_feature(aa64_lse2, s)) {
+        mop |= MO_ALIGN;
+    } else if (!s->naa) {
+        check_lse2_align(s, rn, imm, is_write, mop);
+    }
+    return finalize_memop(s, mop);
+}
+
 typedef struct DisasCompare64 {
     TCGCond cond;
     TCGv_i64 value;
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2, int rn,
 {
     int idx = get_mem_index(s);
     TCGv_i64 dirty_addr, clean_addr;
-    MemOp memop;
-
-    /*
-     * For pairs:
-     * if size == 2, the operation is single-copy atomic for the doubleword.
-     * if size == 3, the operation is single-copy atomic for *each* doubleword,
-     * not the entire quadword, however it must be quadword aligned.
151
- */
152
- memop = size + is_pair;
153
- if (memop == MO_128) {
154
- memop = finalize_memop_atom(s, MO_128 | MO_ALIGN,
155
- MO_ATOM_IFALIGN_PAIR);
156
- } else {
157
- memop = finalize_memop(s, memop | MO_ALIGN);
158
- }
159
+ MemOp memop = check_atomic_align(s, rn, size + is_pair);
160
161
s->is_ldex = true;
162
dirty_addr = cpu_reg_sp(s, rn);
163
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap(DisasContext *s, int rs, int rt,
164
if (rn == 31) {
165
gen_check_sp_alignment(s);
166
}
167
- memop = finalize_memop(s, size | MO_ALIGN);
168
+ memop = check_atomic_align(s, rn, size);
169
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
170
tcg_gen_atomic_cmpxchg_i64(tcg_rs, clean_addr, tcg_rs, tcg_rt,
171
memidx, memop);
172
@@ -XXX,XX +XXX,XX @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
173
}
174
175
/* This is a single atomic access, despite the "pair". */
176
- memop = finalize_memop(s, (size + 1) | MO_ALIGN);
177
+ memop = check_atomic_align(s, rn, size + 1);
178
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
179
180
if (size == 2) {
181
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
182
gen_check_sp_alignment(s);
183
}
184
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
185
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
186
- memop = finalize_memop(s, size | MO_ALIGN);
187
+ memop = check_ordered_align(s, rn, 0, true, size);
188
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
189
true, rn != 31, memop);
190
do_gpr_st(s, cpu_reg(s, rt), clean_addr, memop, true, rt,
191
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
192
if (rn == 31) {
193
gen_check_sp_alignment(s);
194
}
195
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
196
- memop = finalize_memop(s, size | MO_ALIGN);
197
+ memop = check_ordered_align(s, rn, 0, false, size);
198
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
199
false, rn != 31, memop);
200
do_gpr_ld(s, cpu_reg(s, rt), clean_addr, memop, false, true,
201
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
202
bool a = extract32(insn, 23, 1);
203
TCGv_i64 tcg_rs, tcg_rt, clean_addr;
204
AtomicThreeOpFn *fn = NULL;
205
- MemOp mop = finalize_memop(s, size | MO_ALIGN);
206
+ MemOp mop = size;
207
208
if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
209
unallocated_encoding(s);
210
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
211
if (rn == 31) {
212
gen_check_sp_alignment(s);
213
}
214
+
215
+ mop = check_atomic_align(s, rn, mop);
216
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn), false, rn != 31, mop);
217
218
if (o3_opc == 014) {
219
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
220
bool is_store = false;
221
bool extend = false;
222
bool iss_sf;
223
- MemOp mop;
224
+ MemOp mop = size;
225
226
if (!dc_isar_feature(aa64_rcpc_8_4, s)) {
227
unallocated_encoding(s);
228
return;
229
}
230
231
- /* TODO: ARMv8.4-LSE SCTLR.nAA */
232
- mop = finalize_memop(s, size | MO_ALIGN);
233
-
234
switch (opc) {
235
case 0: /* STLURB */
236
is_store = true;
237
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
238
gen_check_sp_alignment(s);
239
}
240
241
+ mop = check_ordered_align(s, rn, offset, is_store, mop);
242
+
243
dirty_addr = read_cpu_reg_sp(s, rn, 1);
244
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
245
clean_addr = clean_data_tbi(s, dirty_addr);
29
--
246
--
30
2.7.4
247
2.34.1
31
32
diff view generated by jsdifflib

From: Richard Henderson <richard.henderson@linaro.org>

Push the mte check behind the exclusive_addr check.
Document the several ways that we are still out of spec
with this implementation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 42 +++++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
      */
     TCGLabel *fail_label = gen_new_label();
     TCGLabel *done_label = gen_new_label();
-    TCGv_i64 tmp, dirty_addr, clean_addr;
+    TCGv_i64 tmp, clean_addr;
     MemOp memop;
 
-    memop = (size + is_pair) | MO_ALIGN;
-    memop = finalize_memop(s, memop);
-
-    dirty_addr = cpu_reg_sp(s, rn);
-    clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
+    /*
+     * FIXME: We are out of spec here.  We have recorded only the address
+     * from load_exclusive, not the entire range, and we assume that the
+     * size of the access on both sides match.  The architecture allows the
+     * store to be smaller than the load, so long as the stored bytes are
+     * within the range recorded by the load.
+     */
 
+    /* See AArch64.ExclusiveMonitorsPass() and AArch64.IsExclusiveVA(). */
+    clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
     tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr, fail_label);
 
+    /*
+     * The write, and any associated faults, only happen if the virtual
+     * and physical addresses pass the exclusive monitor check.  These
+     * faults are exceedingly unlikely, because normally the guest uses
+     * the exact same address register for the load_exclusive, and we
+     * would have recognized these faults there.
+     *
+     * It is possible to trigger an alignment fault pre-LSE2, e.g. with an
+     * unaligned 4-byte write within the range of an aligned 8-byte load.
+     * With LSE2, the store would need to cross a 16-byte boundary when the
+     * load did not, which would mean the store is outside the range
+     * recorded for the monitor, which would have failed a corrected monitor
+     * check above.  For now, we assume no size change and retain the
+     * MO_ALIGN to let tcg know what we checked in the load_exclusive.
+     *
+     * It is possible to trigger an MTE fault, by performing the load with
+     * a virtual address with a valid tag and performing the store with the
+     * same virtual address and a different invalid tag.
+     */
+    memop = size + is_pair;
+    if (memop == MO_128 || !dc_isar_feature(aa64_lse2, s)) {
+        memop |= MO_ALIGN;
+    }
+    memop = finalize_memop(s, memop);
+    gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
+
     tmp = tcg_temp_new_i64();
     if (is_pair) {
         if (size == 2) {
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

We have many other instances of stg in the testsuite;
change these to provide an instance of stz2g.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/mte-7.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tests/tcg/aarch64/mte-7.c b/tests/tcg/aarch64/mte-7.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/mte-7.c
+++ b/tests/tcg/aarch64/mte-7.c
@@ -XXX,XX +XXX,XX @@ int main(int ac, char **av)
     p = (void *)((unsigned long)p | (1ul << 56));
 
     /* Store tag in sequential granules. */
-    asm("stg %0, [%0]" : : "r"(p + 0x0ff0));
-    asm("stg %0, [%0]" : : "r"(p + 0x1000));
+    asm("stz2g %0, [%0]" : : "r"(p + 0x0ff0));
 
     /*
      * Perform an unaligned store with tag 1 crossing the pages.
--
2.34.1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add support for the bank address register access commands (BRRD/BRWR) and
3
With -cpu max and FEAT_LSE2, the __aarch64__ section will only raise
4
the BULK_ERASE (0x60) command.
4
an alignment exception when the load crosses a 16-byte boundary.
5
5
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20230530191438.411344-20-richard.henderson@linaro.org
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Message-id: 20171126231634.9531-4-frasse.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
hw/block/m25p80.c | 7 +++++++
11
tests/tcg/multiarch/sigbus.c | 13 +++++++++----
14
1 file changed, 7 insertions(+)
12
1 file changed, 9 insertions(+), 4 deletions(-)
15
13
16
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
14
diff --git a/tests/tcg/multiarch/sigbus.c b/tests/tcg/multiarch/sigbus.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/block/m25p80.c
16
--- a/tests/tcg/multiarch/sigbus.c
19
+++ b/hw/block/m25p80.c
17
+++ b/tests/tcg/multiarch/sigbus.c
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
18
@@ -XXX,XX +XXX,XX @@
21
WRDI = 0x4,
19
#include <endian.h>
22
RDSR = 0x5,
20
23
WREN = 0x6,
21
24
+ BRRD = 0x16,
22
-unsigned long long x = 0x8877665544332211ull;
25
+ BRWR = 0x17,
23
-void * volatile p = (void *)&x + 1;
26
JEDEC_READ = 0x9f,
24
+char x[32] __attribute__((aligned(16))) = {
27
+ BULK_ERASE_60 = 0x60,
25
+ 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
28
BULK_ERASE = 0xc7,
26
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
29
READ_FSR = 0x70,
27
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
30
RDCR = 0x15,
28
+ 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
31
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
29
+};
32
s->write_enable = false;
30
+void * volatile p = (void *)&x + 15;
33
}
31
34
break;
32
void sigbus(int sig, siginfo_t *info, void *uc)
35
+ case BRWR:
33
{
36
case EXTEND_ADDR_WRITE:
34
@@ -XXX,XX +XXX,XX @@ int main()
37
s->ear = s->data[0];
35
* We might as well validate the unaligned load worked.
38
break;
36
*/
39
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
37
if (BYTE_ORDER == LITTLE_ENDIAN) {
40
s->state = STATE_READING_DATA;
38
- assert(tmp == 0x55443322);
41
break;
39
+ assert(tmp == 0x13121110);
42
40
} else {
43
+ case BULK_ERASE_60:
41
- assert(tmp == 0x77665544);
44
case BULK_ERASE:
42
+ assert(tmp == 0x10111213);
45
if (s->write_enable) {
43
}
46
DB_PRINT_L(0, "chip erase\n");
44
return EXIT_SUCCESS;
47
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
45
}
48
case EX_4BYTE_ADDR:
49
s->four_bytes_address_mode = false;
50
break;
51
+ case BRRD:
52
case EXTEND_ADDR_READ:
53
s->data[0] = s->ear;
54
s->pos = 0;
55
s->len = 1;
56
s->state = STATE_READING_DATA;
57
break;
58
+ case BRWR:
59
case EXTEND_ADDR_WRITE:
60
if (s->write_enable) {
61
s->needed_bytes = 1;
62
--
46
--
63
2.7.4
47
2.34.1
64
65
diff view generated by jsdifflib

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230530191438.411344-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/tcg/cpu64.c        | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_LRCPC (Load-acquire RCpc instructions)
 - FEAT_LRCPC2 (Load-acquire RCpc instructions v2)
 - FEAT_LSE (Large System Extensions)
+- FEAT_LSE2 (Large System Extensions v2)
 - FEAT_LVA (Large Virtual Address space)
 - FEAT_MTE (Memory Tagging Extension)
 - FEAT_MTE2 (Memory Tagging Extension)
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1);     /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1);  /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1);       /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, AT, 1);       /* FEAT_LSE2 */
     t = FIELD_DP64(t, ID_AA64MMFR2, IDS, 1);      /* FEAT_IDST */
     t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1);      /* FEAT_S2FWB */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1);      /* FEAT_TTL */
--
2.34.1

From: Zhuojia Shen <chaosdefinition@hotmail.com>

DC CVAP and DC CVADP instructions can be executed in EL0 on Linux,
either directly when SCTLR_EL1.UCI == 1 or emulated by the kernel (see
user_cache_maint_handler() in arch/arm64/kernel/traps.c).

This patch enables execution of the two instructions in user mode
emulation.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
       .access = PL0_R, .readfn = rndr_readfn },
 };
 
-#ifndef CONFIG_USER_ONLY
 static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
                            uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
     /* This won't be crossing page boundaries */
     haddr = probe_read(env, vaddr, dline_size, mem_idx, GETPC());
     if (haddr) {
+#ifndef CONFIG_USER_ONLY
 
         ram_addr_t offset;
         MemoryRegion *mr;
@@ -XXX,XX +XXX,XX @@ static void dccvap_writefn(CPUARMState *env, const ARMCPRegInfo *opaque,
         if (mr) {
             memory_region_writeback(mr, offset, dline_size);
         }
+#endif /*CONFIG_USER_ONLY*/
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
       .fgt = FGT_DCCVADP,
       .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
 };
-#endif /*CONFIG_USER_ONLY*/
 
 static CPAccessResult access_aa64_tid5(CPUARMState *env, const ARMCPRegInfo *ri,
                                        bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_tlbios, cpu)) {
         define_arm_cp_regs(cpu, tlbios_reginfo);
     }
-#ifndef CONFIG_USER_ONLY
     /* Data Cache clean instructions up to PoP */
     if (cpu_isar_feature(aa64_dcpop, cpu)) {
         define_one_arm_cp_reg(cpu, dcpop_reg);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             define_one_arm_cp_reg(cpu, dcpodp_reg);
         }
     }
-#endif /*CONFIG_USER_ONLY*/
 
     /*
      * If full MTE is enabled, add all of the system registers.
--
2.34.1

From: Zhuojia Shen <chaosdefinition@hotmail.com>

Test execution of DC CVAP and DC CVADP instructions under user mode
emulation.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/dcpodp.c        | 63 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/dcpop.c         | 63 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target | 11 ++++++
 3 files changed, 137 insertions(+)
 create mode 100644 tests/tcg/aarch64/dcpodp.c
 create mode 100644 tests/tcg/aarch64/dcpop.c

diff --git a/tests/tcg/aarch64/dcpodp.c b/tests/tcg/aarch64/dcpodp.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/dcpodp.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Test execution of DC CVADP instruction.
+ *
+ * Copyright (c) 2023 Zhuojia Shen <chaosdefinition@hotmail.com>
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <asm/hwcap.h>
+#include <sys/auxv.h>
+
+#include <signal.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifndef HWCAP2_DCPODP
+#define HWCAP2_DCPODP (1 << 0)
+#endif
+
+bool should_fail = false;
+
+static void signal_handler(int sig, siginfo_t *si, void *data)
+{
+    ucontext_t *uc = (ucontext_t *)data;
+
+    if (should_fail) {
+        uc->uc_mcontext.pc += 4;
+    } else {
+        exit(EXIT_FAILURE);
+    }
+}
+
+static int do_dc_cvadp(void)
+{
+    struct sigaction sa = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = signal_handler,
+    };
+
+    sigemptyset(&sa.sa_mask);
+    if (sigaction(SIGSEGV, &sa, NULL) < 0) {
+        perror("sigaction");
+        return EXIT_FAILURE;
+    }
+
+    asm volatile("dc cvadp, %0\n\t" :: "r"(&sa));
+
+    should_fail = true;
+    asm volatile("dc cvadp, %0\n\t" :: "r"(NULL));
+    should_fail = false;
+
+    return EXIT_SUCCESS;
+}
+
+int main(void)
+{
+    if (getauxval(AT_HWCAP2) & HWCAP2_DCPODP) {
+        return do_dc_cvadp();
+    } else {
+        printf("SKIP: no HWCAP2_DCPODP on this system\n");
+        return EXIT_SUCCESS;
+    }
+}
diff --git a/tests/tcg/aarch64/dcpop.c b/tests/tcg/aarch64/dcpop.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/dcpop.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Test execution of DC CVAP instruction.
+ *
+ * Copyright (c) 2023 Zhuojia Shen <chaosdefinition@hotmail.com>
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <asm/hwcap.h>
+#include <sys/auxv.h>
+
+#include <signal.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifndef HWCAP_DCPOP
+#define HWCAP_DCPOP (1 << 16)
+#endif
+
+bool should_fail = false;
+
+static void signal_handler(int sig, siginfo_t *si, void *data)
+{
+    ucontext_t *uc = (ucontext_t *)data;
+
+    if (should_fail) {
+        uc->uc_mcontext.pc += 4;
+    } else {
+        exit(EXIT_FAILURE);
+    }
+}
+
+static int do_dc_cvap(void)
+{
+    struct sigaction sa = {
+        .sa_flags = SA_SIGINFO,
+        .sa_sigaction = signal_handler,
+    };
+
+    sigemptyset(&sa.sa_mask);
+    if (sigaction(SIGSEGV, &sa, NULL) < 0) {
+        perror("sigaction");
+        return EXIT_FAILURE;
+    }
+
+    asm volatile("dc cvap, %0\n\t" :: "r"(&sa));
+
+    should_fail = true;
+    asm volatile("dc cvap, %0\n\t" :: "r"(NULL));
+    should_fail = false;
+
+    return EXIT_SUCCESS;
+}
+
+int main(void)
+{
+    if (getauxval(AT_HWCAP) & HWCAP_DCPOP) {
+        return do_dc_cvap();
+    } else {
152
+ printf("SKIP: no HWCAP_DCPOP on this system\n");
153
+ return EXIT_SUCCESS;
154
+ }
155
+}
156
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
157
index XXXXXXX..XXXXXXX 100644
158
--- a/tests/tcg/aarch64/Makefile.target
159
+++ b/tests/tcg/aarch64/Makefile.target
160
@@ -XXX,XX +XXX,XX @@ config-cc.mak: Makefile
161
    $(quiet-@)( \
162
     $(call cc-option,-march=armv8.1-a+sve, CROSS_CC_HAS_SVE); \
163
     $(call cc-option,-march=armv8.1-a+sve2, CROSS_CC_HAS_SVE2); \
164
+     $(call cc-option,-march=armv8.2-a, CROSS_CC_HAS_ARMV8_2); \
165
     $(call cc-option,-march=armv8.3-a, CROSS_CC_HAS_ARMV8_3); \
166
+     $(call cc-option,-march=armv8.5-a, CROSS_CC_HAS_ARMV8_5); \
167
     $(call cc-option,-mbranch-protection=standard, CROSS_CC_HAS_ARMV8_BTI); \
168
     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE); \
169
     $(call cc-option,-march=armv9-a+sme, CROSS_CC_HAS_ARMV9_SME)) 3> config-cc.mak
170
-include config-cc.mak
171
172
+ifneq ($(CROSS_CC_HAS_ARMV8_2),)
173
+AARCH64_TESTS += dcpop
174
+dcpop: CFLAGS += -march=armv8.2-a
175
+endif
176
+ifneq ($(CROSS_CC_HAS_ARMV8_5),)
177
+AARCH64_TESTS += dcpodp
178
+dcpodp: CFLAGS += -march=armv8.5-a
179
+endif
180
+
181
# Pauth Tests
182
ifneq ($(CROSS_CC_HAS_ARMV8_3),)
183
AARCH64_TESTS += pauth-1 pauth-2 pauth-4 pauth-5
121
--
184
--
122
2.7.4
185
2.34.1
123
124
Deleted patch

In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
in Handler mode. In v8M the behaviour is slightly different:
writes to the bit are permitted but will have no effect.

We've already done the hard work to handle the value in
CONTROL.SPSEL being out of sync with what stack pointer is
actually in use, so all we need to do to fix this last loose
end is to update the condition we use to guard whether we
call write_v7m_control_spsel() on the register write.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-3-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
      * thread mode; other bits can be updated by any privileged code.
      * write_v7m_control_spsel() deals with updating the SPSEL bit in
      * env->v7m.control, so we only need update the others.
+     * For v7M, we must just ignore explicit writes to SPSEL in handler
+     * mode; for v8M the write is permitted but will have no effect.
      */
-    if (!arm_v7m_is_handler_mode(env)) {
+    if (arm_feature(env, ARM_FEATURE_V8) ||
+        !arm_v7m_is_handler_mode(env)) {
         write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
     }
     env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
-- 
2.7.4
Generalize nvic_sysreg_ns_ops so that we can pass it an
arbitrary MemoryRegion which it will use as the underlying
register implementation to apply the NS-alias behaviour
to. We'll want this so we can do the same with systick.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1512154296-5652-2-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
                                         uint64_t value, unsigned size,
                                         MemTxAttrs attrs)
 {
+    MemoryRegion *mr = opaque;
+
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return nvic_sysreg_write(opaque, addr, value, size, attrs);
+        return memory_region_dispatch_write(mr, addr, value, size, attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
                                        uint64_t *data, unsigned size,
                                        MemTxAttrs attrs)
 {
+    MemoryRegion *mr = opaque;
+
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return nvic_sysreg_read(opaque, addr, data, size, attrs);
+        return memory_region_dispatch_read(mr, addr, data, size, attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
 
     if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
         memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
-                              &nvic_sysreg_ns_ops, s,
+                              &nvic_sysreg_ns_ops, &s->sysregmem,
                               "nvic_sysregs_ns", 0x1000);
         memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
     }
-- 
2.7.4

From: Zhuojia Shen <chaosdefinition@hotmail.com>

Accessing EL0-accessible Debug Communication Channel (DCC) registers in
user mode emulation is currently enabled. However, it does not match
Linux behavior as Linux sets MDSCR_EL1.TDCC on startup to disable EL0
access to DCC (see __cpu_setup() in arch/arm64/mm/proc.S).

This patch fixes access_tdcc() to check MDSCR_EL1.TDCC for EL0 and sets
MDSCR_EL1.TDCC for user mode emulation to match Linux.

Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: DS7PR12MB630905198DD8E69F6817544CAC4EA@DS7PR12MB6309.namprd12.prod.outlook.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c          | 2 ++
 target/arm/debug_helper.c | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
      * This is not yet exposed from the Linux kernel in any way.
      */
     env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
+    /* Disable access to Debug Communication Channel (DCC). */
+    env->cp15.mdscr_el1 |= 1 << 12;
 #else
     /* Reset into the highest available EL */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
  * is implemented then these are controlled by MDCR_EL2.TDCC for
  * EL2 and MDCR_EL3.TDCC for EL3. They are also controlled by
  * the general debug access trap bits MDCR_EL2.TDA and MDCR_EL3.TDA.
+ * For EL0, they are also controlled by MDSCR_EL1.TDCC.
  */
 static CPAccessResult access_tdcc(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
     int el = arm_current_el(env);
     uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
+    bool mdscr_el1_tdcc = extract32(env->cp15.mdscr_el1, 12, 1);
     bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
                         (arm_hcr_el2_eff(env) & HCR_TGE);
     bool mdcr_el2_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdcc(CPUARMState *env, const ARMCPRegInfo *ri,
     bool mdcr_el3_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
                          (env->cp15.mdcr_el3 & MDCR_TDCC);
 
+    if (el < 1 && mdscr_el1_tdcc) {
+        return CP_ACCESS_TRAP;
+    }
     if (el < 2 && (mdcr_el2_tda || mdcr_el2_tdcc)) {
         return CP_ACCESS_TRAP_EL2;
     }
-- 
2.34.1