Hi; here's the first arm pullreq for 9.1.

This includes the reset method function signature change, so it has
some chance of compile failures due to merge conflicts if some other
pullreq added a device reset method and that pullreq got applied
before this one. If so, the changes needed to fix those up can be
created by running the spatch rune described in the commit message of
the "hw, target: Add ResetType argument to hold and exit phase
methods" commit.

thanks
-- PMM

The following changes since commit 5da72194df36535d773c8bdc951529ecd5e31707:

  Merge tag 'pull-tcg-20240424' of https://gitlab.com/rth7680/qemu into staging (2024-04-24 15:51:49 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20240425

for you to fetch changes up to 214652da123e3821657a64691ee556281e9f6238:

  tests/qtest: Add tests for the STM32L4x5 USART (2024-04-25 10:21:59 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement FEAT_NMI and NMI support in the GICv3
 * hw/dma: avoid apparent overflow in soc_dma_set_request
 * linux-user/flatload.c: Remove unused bFLT shared-library and ZFLAT code
 * Add ResetType argument to Resettable hold and exit phase methods
 * Add RESET_TYPE_SNAPSHOT_LOAD ResetType
 * Implement STM32L4x5 USART

----------------------------------------------------------------
Anastasia Belova (1):
      hw/dma: avoid apparent overflow in soc_dma_set_request

Arnaud Minier (5):
      hw/char: Implement STM32L4x5 USART skeleton
      hw/char/stm32l4x5_usart: Enable serial read and write
      hw/char/stm32l4x5_usart: Add options for serial parameters setting
      hw/arm: Add the USART to the stm32l4x5 SoC
      tests/qtest: Add tests for the STM32L4x5 USART

Jinjie Ruan (22):
      target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NMI
      target/arm: Add PSTATE.ALLINT
      target/arm: Add support for FEAT_NMI, Non-maskable Interrupt
      target/arm: Implement ALLINT MSR (immediate)
      target/arm: Support MSR access to ALLINT
      target/arm: Add support for Non-maskable Interrupt
      target/arm: Add support for NMI in arm_phys_excp_target_el()
      target/arm: Handle IS/FS in ISR_EL1 for NMI, VINMI and VFNMI
      target/arm: Handle PSTATE.ALLINT on taking an exception
      hw/intc/arm_gicv3: Add external IRQ lines for NMI
      hw/arm/virt: Wire NMI and VINMI irq lines from GIC to CPU
      target/arm: Handle NMI in arm_cpu_do_interrupt_aarch64()
      hw/intc/arm_gicv3: Add has-nmi property to GICv3 device
      hw/intc/arm_gicv3_kvm: Not set has-nmi=true for the KVM GICv3
      hw/intc/arm_gicv3: Add irq non-maskable property
      hw/intc/arm_gicv3_redist: Implement GICR_INMIR0
      hw/intc/arm_gicv3: Implement GICD_INMIR
      hw/intc/arm_gicv3: Implement NMI interrupt priority
      hw/intc/arm_gicv3: Report the NMI interrupt in gicv3_cpuif_update()
      hw/intc/arm_gicv3: Report the VINMI interrupt
      target/arm: Add FEAT_NMI to max
      hw/arm/virt: Enable NMI support in the GIC if the CPU has FEAT_NMI

Peter Maydell (9):
      hw/intc/arm_gicv3: Add NMI handling CPU interface registers
      hw/intc/arm_gicv3: Handle icv_nmiar1_read() for icc_nmiar1_read()
      linux-user/flatload.c: Remove unused bFLT shared-library and ZFLAT code
      hw/misc: Don't special case RESET_TYPE_COLD in npcm7xx_clk, gcr
      allwinner-i2c, adm1272: Use device_cold_reset() for software-triggered reset
      scripts/coccinelle: New script to add ResetType to hold and exit phases
      hw, target: Add ResetType argument to hold and exit phase methods
      docs/devel/reset: Update to new API for hold and exit phase methods
      reset: Add RESET_TYPE_SNAPSHOT_LOAD

 MAINTAINERS | 1 +
 docs/devel/reset.rst | 25 +-
 docs/system/arm/b-l475e-iot01a.rst | 2 +-
 docs/system/arm/emulation.rst | 1 +
 scripts/coccinelle/reset-type.cocci | 133 ++++++++
 hw/intc/gicv3_internal.h | 13 +
 include/hw/arm/stm32l4x5_soc.h | 7 +
 include/hw/char/stm32l4x5_usart.h | 67 ++++
 include/hw/intc/arm_gic_common.h | 2 +
 include/hw/intc/arm_gicv3_common.h | 14 +
 include/hw/resettable.h | 5 +-
 linux-user/flat.h | 5 +-
 target/arm/cpu-features.h | 5 +
 target/arm/cpu-qom.h | 5 +-
 target/arm/cpu.h | 9 +
 target/arm/internals.h | 21 ++
 target/arm/tcg/helper-a64.h | 1 +
 target/arm/tcg/a64.decode | 1 +
 hw/adc/npcm7xx_adc.c | 2 +-
 hw/arm/pxa2xx_pic.c | 2 +-
 hw/arm/smmu-common.c | 2 +-
 hw/arm/smmuv3.c | 4 +-
 hw/arm/stellaris.c | 10 +-
 hw/arm/stm32l4x5_soc.c | 83 ++++-
 hw/arm/virt.c | 29 +-
 hw/audio/asc.c | 2 +-
 hw/char/cadence_uart.c | 2 +-
 hw/char/sifive_uart.c | 2 +-
 hw/char/stm32l4x5_usart.c | 637 ++++++++++++++++++++++++++++++++++++
 hw/core/cpu-common.c | 2 +-
 hw/core/qdev.c | 4 +-
 hw/core/reset.c | 17 +-
 hw/core/resettable.c | 8 +-
 hw/display/virtio-vga.c | 4 +-
 hw/dma/soc_dma.c | 4 +-
 hw/gpio/npcm7xx_gpio.c | 2 +-
 hw/gpio/pl061.c | 2 +-
 hw/gpio/stm32l4x5_gpio.c | 2 +-
 hw/hyperv/vmbus.c | 2 +-
 hw/i2c/allwinner-i2c.c | 5 +-
 hw/i2c/npcm7xx_smbus.c | 2 +-
 hw/input/adb.c | 2 +-
 hw/input/ps2.c | 12 +-
 hw/intc/arm_gic_common.c | 2 +-
 hw/intc/arm_gic_kvm.c | 4 +-
 hw/intc/arm_gicv3.c | 67 +++-
 hw/intc/arm_gicv3_common.c | 50 ++-
 hw/intc/arm_gicv3_cpuif.c | 268 ++++++++++++++-
 hw/intc/arm_gicv3_dist.c | 36 ++
 hw/intc/arm_gicv3_its.c | 4 +-
 hw/intc/arm_gicv3_its_common.c | 2 +-
 hw/intc/arm_gicv3_its_kvm.c | 4 +-
 hw/intc/arm_gicv3_kvm.c | 9 +-
 hw/intc/arm_gicv3_redist.c | 22 ++
 hw/intc/xics.c | 2 +-
 hw/m68k/q800-glue.c | 2 +-
 hw/misc/djmemc.c | 2 +-
 hw/misc/iosb.c | 2 +-
 hw/misc/mac_via.c | 8 +-
 hw/misc/macio/cuda.c | 4 +-
 hw/misc/macio/pmu.c | 4 +-
 hw/misc/mos6522.c | 2 +-
 hw/misc/npcm7xx_clk.c | 13 +-
 hw/misc/npcm7xx_gcr.c | 12 +-
 hw/misc/npcm7xx_mft.c | 2 +-
 hw/misc/npcm7xx_pwm.c | 2 +-
 hw/misc/stm32l4x5_exti.c | 2 +-
 hw/misc/stm32l4x5_rcc.c | 10 +-
 hw/misc/stm32l4x5_syscfg.c | 2 +-
 hw/misc/xlnx-versal-cframe-reg.c | 2 +-
 hw/misc/xlnx-versal-crl.c | 2 +-
 hw/misc/xlnx-versal-pmc-iou-slcr.c | 2 +-
 hw/misc/xlnx-versal-trng.c | 2 +-
 hw/misc/xlnx-versal-xramc.c | 2 +-
 hw/misc/xlnx-zynqmp-apu-ctrl.c | 2 +-
 hw/misc/xlnx-zynqmp-crf.c | 2 +-
 hw/misc/zynq_slcr.c | 4 +-
 hw/net/can/xlnx-zynqmp-can.c | 2 +-
 hw/net/e1000.c | 2 +-
 hw/net/e1000e.c | 2 +-
 hw/net/igb.c | 2 +-
 hw/net/igbvf.c | 2 +-
 hw/nvram/xlnx-bbram.c | 2 +-
 hw/nvram/xlnx-versal-efuse-ctrl.c | 2 +-
 hw/nvram/xlnx-zynqmp-efuse.c | 2 +-
 hw/pci-bridge/cxl_root_port.c | 4 +-
 hw/pci-bridge/pcie_root_port.c | 2 +-
 hw/pci-host/bonito.c | 2 +-
 hw/pci-host/pnv_phb.c | 4 +-
 hw/pci-host/pnv_phb3_msi.c | 4 +-
 hw/pci/pci.c | 4 +-
 hw/rtc/mc146818rtc.c | 2 +-
 hw/s390x/css-bridge.c | 2 +-
 hw/sensor/adm1266.c | 2 +-
 hw/sensor/adm1272.c | 4 +-
 hw/sensor/isl_pmbus_vr.c | 10 +-
 hw/sensor/max31785.c | 2 +-
 hw/sensor/max34451.c | 2 +-
 hw/ssi/npcm7xx_fiu.c | 2 +-
 hw/timer/etraxfs_timer.c | 2 +-
 hw/timer/npcm7xx_timer.c | 2 +-
 hw/usb/hcd-dwc2.c | 8 +-
 hw/usb/xlnx-versal-usb2-ctrl-regs.c | 2 +-
 hw/virtio/virtio-pci.c | 2 +-
 linux-user/flatload.c | 293 +----------------
 target/arm/cpu.c | 151 ++++++++-
 target/arm/helper.c | 101 +++++-
 target/arm/tcg/cpu64.c | 1 +
 target/arm/tcg/helper-a64.c | 16 +-
 target/arm/tcg/translate-a64.c | 19 ++
 target/avr/cpu.c | 4 +-
 target/cris/cpu.c | 4 +-
 target/hexagon/cpu.c | 4 +-
 target/i386/cpu.c | 4 +-
 target/loongarch/cpu.c | 4 +-
 target/m68k/cpu.c | 4 +-
 target/microblaze/cpu.c | 4 +-
 target/mips/cpu.c | 4 +-
 target/openrisc/cpu.c | 4 +-
 target/ppc/cpu_init.c | 4 +-
 target/riscv/cpu.c | 4 +-
 target/rx/cpu.c | 4 +-
 target/sh4/cpu.c | 4 +-
 target/sparc/cpu.c | 4 +-
 target/tricore/cpu.c | 4 +-
 target/xtensa/cpu.c | 4 +-
 tests/qtest/stm32l4x5_usart-test.c | 315 ++++++++++++
 hw/arm/Kconfig | 1 +
 hw/char/Kconfig | 3 +
 hw/char/meson.build | 1 +
 hw/char/trace-events | 12 +
 hw/intc/trace-events | 2 +
 tests/qtest/meson.build | 4 +-
 133 files changed, 2239 insertions(+), 537 deletions(-)
 create mode 100644 scripts/coccinelle/reset-type.cocci
 create mode 100644 include/hw/char/stm32l4x5_usart.h
 create mode 100644 hw/char/stm32l4x5_usart.c
 create mode 100644 tests/qtest/stm32l4x5_usart-test.c
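
As a rough illustration of the reset method signature change described above
(this sketch is not part of the pull request, and the device and type names
are invented for the example), a converted hold phase method now receives the
ResetType as a second argument; the real conversions in the tree were
generated by the new scripts/coccinelle/reset-type.cocci script rather than
written by hand:

    #include "qemu/osdep.h"
    #include "qom/object.h"
    #include "hw/resettable.h"

    /*
     * Sketch only: "MyDemoState" is a made-up device.
     * Before this series the hold phase was:  void (*hold)(Object *obj);
     * afterwards hold and exit also take the reset type, as enter already did:
     *                                          void (*hold)(Object *obj, ResetType type);
     */
    typedef struct MyDemoState {
        Object parent_obj;
        uint32_t ctrl;
    } MyDemoState;

    static void mydemo_reset_hold(Object *obj, ResetType type)
    {
        MyDemoState *s = (MyDemoState *)obj;

        /* Most devices ignore 'type' and reset their state unconditionally. */
        s->ctrl = 0;
    }

One motivation visible later in the series is RESET_TYPE_SNAPSHOT_LOAD: a
device that needs to behave differently when reset as part of a snapshot load
can now test 'type' instead of relying on special-cased code paths.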
From: Jinjie Ruan <ruanjinjie@huawei.com>

FEAT_NMI defines another three new bits in HCRX_EL2: TALLINT, HCRX_VINMI and
HCRX_VFNMI. When the feature is enabled, allow these bits to be written in
HCRX_EL2.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-2-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-features.h | 5 +++++
 target/arm/helper.c | 8 +++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
 }
 
+static inline bool isar_feature_aa64_nmi(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, NMI) != 0;
+}
+
 static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
 {
     return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool el_is_in_host(CPUARMState *env, int el)
 static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
 {
+    ARMCPU *cpu = env_archcpu(env);
     uint64_t valid_mask = 0;
 
     /* FEAT_MOPS adds MSCEn and MCE2 */
-    if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+    if (cpu_isar_feature(aa64_mops, cpu)) {
         valid_mask |= HCRX_MSCEN | HCRX_MCE2;
     }
 
+    /* FEAT_NMI adds TALLINT, VINMI and VFNMI */
+    if (cpu_isar_feature(aa64_nmi, cpu)) {
+        valid_mask |= HCRX_TALLINT | HCRX_VINMI | HCRX_VFNMI;
+    }
+
     /* Clear RES0 bits. */
     env->cp15.hcrx_el2 = value & valid_mask;
 }
--
2.34.1
From: Jinjie Ruan <ruanjinjie@huawei.com>

When PSTATE.ALLINT is set, an IRQ or FIQ interrupt that is targeted to
ELx, with or without superpriority is masked. As Richard suggested, place
ALLINT bit in PSTATE in env->pstate.

In the pseudocode, AArch64.ExceptionReturn() calls SetPSTATEFromPSR(), which
treats PSTATE.ALLINT as one of the bits which are reinstated from SPSR to
PSTATE regardless of whether this is an illegal exception return or not. So
handle PSTATE.ALLINT the same way as PSTATE.DAIF in the illegal_return exit
path of the exception_return helper. With the change, exception entry and
return are automatically handled.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-3-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 1 +
 target/arm/tcg/helper-a64.c | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_D (1U << 9)
 #define PSTATE_BTYPE (3U << 10)
 #define PSTATE_SSBS (1U << 12)
+#define PSTATE_ALLINT (1U << 13)
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
 #define PSTATE_PAN (1U << 22)
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ illegal_return:
      */
    env->pstate |= PSTATE_IL;
    env->pc = new_pc;
-    spsr &= PSTATE_NZCV | PSTATE_DAIF;
-    spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF);
+    spsr &= PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT;
+    spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT);
     pstate_write(env, spsr);
     if (!arm_singlestep_active(env)) {
         env->pstate &= ~PSTATE_SS;
--
2.34.1
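
A brief aside (not part of the patch): because PSTATE_ALLINT is stored in
env->pstate, it is already picked up by pstate_read() and restored by
pstate_write(), which is why exception entry and return need no further
changes. A minimal sketch of reading the bit back, assuming QEMU's
target/arm internals; cpu_allint_set() itself is invented for the example:

    #include "cpu.h"    /* CPUARMState, PSTATE_ALLINT, pstate_read() */

    /* Illustrative helper only; not part of this series. */
    static inline bool cpu_allint_set(CPUARMState *env)
    {
        return (pstate_read(env) & PSTATE_ALLINT) != 0;
    }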
From: Jinjie Ruan <ruanjinjie@huawei.com>

Add support for FEAT_NMI. NMI (FEAT_NMI) is a mandatory feature in
ARMv8.8-A and ARM v9.3-A.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-4-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_mte(id)) {
         valid |= PSTATE_TCO;
     }
+    if (isar_feature_aa64_nmi(id)) {
+        valid |= PSTATE_ALLINT;
+    }
 
     return valid;
 }
--
2.34.1
From: Jinjie Ruan <ruanjinjie@huawei.com>

Add ALLINT MSR (immediate) to decodetree, in which the CRm is 0b000x. The
EL0 check is necessary to ALLINT, and the EL1 check is necessary when
imm == 1. So implement it inline for EL2/3, or EL1 with imm==0. Avoid the
unconditional write to pc and use raise_exception_ra to unwind.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-5-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/helper-a64.h | 1 +
 target/arm/tcg/a64.decode | 1 +
 target/arm/tcg/helper-a64.c | 12 ++++++++++++
 target/arm/tcg/translate-a64.c | 19 +++++++++++++++++++
 4 files changed, 33 insertions(+)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_1(rbit64, TCG_CALL_NO_RWG_SE, i64, i64)
 DEF_HELPER_2(msr_i_spsel, void, env, i32)
 DEF_HELPER_2(msr_i_daifset, void, env, i32)
 DEF_HELPER_2(msr_i_daifclear, void, env, i32)
+DEF_HELPER_1(msr_set_allint_el1, void, env)
 DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
 DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
 DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ MSR_i_DIT 1101 0101 0000 0 011 0100 .... 010 11111 @msr_i
 MSR_i_TCO 1101 0101 0000 0 011 0100 .... 100 11111 @msr_i
 MSR_i_DAIFSET 1101 0101 0000 0 011 0100 .... 110 11111 @msr_i
 MSR_i_DAIFCLEAR 1101 0101 0000 0 011 0100 .... 111 11111 @msr_i
+MSR_i_ALLINT 1101 0101 0000 0 001 0100 000 imm:1 000 11111
 MSR_i_SVCR 1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
 
 # MRS, MSR (register), SYS, SYSL. These are all essentially the
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(msr_i_spsel)(CPUARMState *env, uint32_t imm)
     update_spsel(env, imm);
 }
 
+void HELPER(msr_set_allint_el1)(CPUARMState *env)
+{
+    /* ALLINT update to PSTATE. */
+    if (arm_hcrx_el2_eff(env) & HCRX_TALLINT) {
+        raise_exception_ra(env, EXCP_UDEF,
+                           syn_aa64_sysregtrap(0, 1, 0, 4, 1, 0x1f, 0), 2,
+                           GETPC());
+    }
+
+    env->pstate |= PSTATE_ALLINT;
+}
+
 static void daif_check(CPUARMState *env, uint32_t op,
                        uint32_t imm, uintptr_t ra)
 {
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_MSR_i_DAIFCLEAR(DisasContext *s, arg_i *a)
     return true;
 }
 
+static bool trans_MSR_i_ALLINT(DisasContext *s, arg_i *a)
+{
+    if (!dc_isar_feature(aa64_nmi, s) || s->current_el == 0) {
+        return false;
+    }
+
+    if (a->imm == 0) {
+        clear_pstate_bits(PSTATE_ALLINT);
+    } else if (s->current_el > 1) {
+        set_pstate_bits(PSTATE_ALLINT);
+    } else {
+        gen_helper_msr_set_allint_el1(tcg_env);
+    }
+
+    /* Exit the cpu loop to re-evaluate pending IRQs. */
+    s->base.is_jmp = DISAS_UPDATE_EXIT;
+    return true;
+}
+
 static bool trans_MSR_i_SVCR(DisasContext *s, arg_MSR_i_SVCR *a)
 {
     if (!dc_isar_feature(aa64_sme, s) || a->mask == 0) {
--
2.34.1
From: Jinjie Ruan <ruanjinjie@huawei.com>

Support ALLINT msr access as follows:
    mrs <xt>, ALLINT    // read allint
    msr ALLINT, <xt>    // write allint with imm

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-6-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rme_mte_reginfo[] = {
       .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 14, .opc2 = 5,
       .access = PL3_W, .type = ARM_CP_NOP },
 };
+
+static void aa64_allint_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                              uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_ALLINT) | (value & PSTATE_ALLINT);
+}
+
+static uint64_t aa64_allint_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_ALLINT;
+}
+
+static CPAccessResult aa64_allint_access(CPUARMState *env,
+                                         const ARMCPRegInfo *ri, bool isread)
+{
+    if (!isread && arm_current_el(env) == 1 &&
+        (arm_hcrx_el2_eff(env) & HCRX_TALLINT)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo nmi_reginfo[] = {
+    { .name = "ALLINT", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .opc2 = 0, .crn = 4, .crm = 3,
+      .type = ARM_CP_NO_RAW,
+      .access = PL1_RW, .accessfn = aa64_allint_access,
+      .fieldoffset = offsetof(CPUARMState, pstate),
+      .writefn = aa64_allint_write, .readfn = aa64_allint_read,
+      .resetfn = arm_cp_reset_ignore },
+};
 #endif /* TARGET_AARCH64 */
 
 static void define_pmu_regs(ARMCPU *cpu)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_nv2, cpu)) {
         define_arm_cp_regs(cpu, nv2_reginfo);
     }
+
+    if (cpu_isar_feature(aa64_nmi, cpu)) {
+        define_arm_cp_regs(cpu, nmi_reginfo);
+    }
 #endif
 
     if (cpu_isar_feature(any_predinv, cpu)) {
--
2.34.1
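
For completeness, a guest-side sketch (bare-metal AArch64, EL1 or higher;
not part of the patch) of the accesses the commit message describes. Older
assemblers may not accept the ALLINT name, so this uses the generic
S3_0_C4_C3_0 form, which matches the opc0=3/opc1=0/CRn=4/CRm=3/opc2=0
encoding in nmi_reginfo above:

    /* Hypothetical bare-metal example; requires an AArch64 toolchain. */
    static inline unsigned long read_allint(void)
    {
        unsigned long v;
        __asm__ volatile("mrs %0, S3_0_C4_C3_0" : "=r"(v));
        return v;    /* PSTATE.ALLINT is reported in bit 13, as PSTATE_ALLINT above */
    }

    static inline void write_allint(unsigned long v)
    {
        __asm__ volatile("msr S3_0_C4_C3_0, %0" : : "r"(v));
    }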
From: Jinjie Ruan <ruanjinjie@huawei.com>

This only implements the external delivery method via the GICv3.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-7-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-qom.h | 5 +-
 target/arm/cpu.h | 6 ++
 target/arm/internals.h | 18 +++++
 target/arm/cpu.c | 147 ++++++++++++++++++++++++++++++++++++++---
 target/arm/helper.c | 33 +++++++--
 5 files changed, 193 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -XXX,XX +XXX,XX @@ DECLARE_CLASS_CHECKERS(AArch64CPUClass, AARCH64_CPU,
 #define ARM_CPU_TYPE_SUFFIX "-" TYPE_ARM_CPU
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 
-/* Meanings of the ARMCPU object's four inbound GPIO lines */
+/* Meanings of the ARMCPU object's seven inbound GPIO lines */
 #define ARM_CPU_IRQ 0
 #define ARM_CPU_FIQ 1
 #define ARM_CPU_VIRQ 2
 #define ARM_CPU_VFIQ 3
+#define ARM_CPU_NMI 4
+#define ARM_CPU_VINMI 5
+#define ARM_CPU_VFNMI 6
 
 /* For M profile, some registers are banked secure vs non-secure;
  * these are represented as a 2-element array where the first element
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
 #define EXCP_VSERR 24
 #define EXCP_GPC 25 /* v9 Granule Protection Check Fault */
+#define EXCP_NMI 26
+#define EXCP_VINMI 27
+#define EXCP_VFNMI 28
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET 1
@@ -XXX,XX +XXX,XX @@
 #define CPU_INTERRUPT_VIRQ CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ CPU_INTERRUPT_TGT_EXT_3
 #define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
+#define CPU_INTERRUPT_NMI CPU_INTERRUPT_TGT_EXT_4
+#define CPU_INTERRUPT_VINMI CPU_INTERRUPT_TGT_EXT_0
+#define CPU_INTERRUPT_VFNMI CPU_INTERRUPT_TGT_INT_1
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu);
  */
 void arm_cpu_update_vfiq(ARMCPU *cpu);
 
+/**
+ * arm_cpu_update_vinmi: Update CPU_INTERRUPT_VINMI bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VINMI bit in cs->interrupt_request, following
+ * a change to either the input VNMI line from the GIC or the HCRX_EL2.VINMI.
+ * Must be called with the BQL held.
+ */
+void arm_cpu_update_vinmi(ARMCPU *cpu);
+
+/**
+ * arm_cpu_update_vfnmi: Update CPU_INTERRUPT_VFNMI bit in cs->interrupt_request
+ *
+ * Update the CPU_INTERRUPT_VFNMI bit in cs->interrupt_request, following
+ * a change to the HCRX_EL2.VFNMI.
+ * Must be called with the BQL held.
+ */
+void arm_cpu_update_vfnmi(ARMCPU *cpu);
+
 /**
  * arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
  *
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_restore_state_to_opc(CPUState *cs,
 }
 #endif /* CONFIG_TCG */
 
+/*
+ * With SCTLR_ELx.NMI == 0, IRQ with Superpriority is masked identically with
+ * IRQ without Superpriority. Moreover, if the GIC is configured so that
+ * FEAT_GICv3_NMI is only set if FEAT_NMI is set, then we won't ever see
+ * CPU_INTERRUPT_*NMI anyway. So we might as well accept NMI here
+ * unconditionally.
+ */
 static bool arm_cpu_has_work(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
+         | CPU_INTERRUPT_NMI | CPU_INTERRUPT_VINMI | CPU_INTERRUPT_VFNMI
          | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
          | CPU_INTERRUPT_EXITTB);
 }
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
     CPUARMState *env = cpu_env(cs);
     bool pstate_unmasked;
     bool unmasked = false;
+    bool allIntMask = false;
 
     /*
      * Don't take exceptions if they target a lower EL.
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
         return false;
     }
 
+    if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
+        env->cp15.sctlr_el[target_el] & SCTLR_NMI && cur_el == target_el) {
+        allIntMask = env->pstate & PSTATE_ALLINT ||
+                     ((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
+                      (env->pstate & PSTATE_SP));
+    }
+
     switch (excp_idx) {
+    case EXCP_NMI:
+        pstate_unmasked = !allIntMask;
+        break;
+
+    case EXCP_VINMI:
+        if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
+            /* VINMIs are only taken when hypervized. */
+            return false;
+        }
+        return !allIntMask;
+    case EXCP_VFNMI:
+        if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
+            /* VFNMIs are only taken when hypervized. */
+            return false;
+        }
+        return !allIntMask;
     case EXCP_FIQ:
-        pstate_unmasked = !(env->daif & PSTATE_F);
+        pstate_unmasked = (!(env->daif & PSTATE_F)) && (!allIntMask);
         break;
 
     case EXCP_IRQ:
-        pstate_unmasked = !(env->daif & PSTATE_I);
+        pstate_unmasked = (!(env->daif & PSTATE_I)) && (!allIntMask);
         break;
 
     case EXCP_VFIQ:
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             /* VFIQs are only taken when hypervized. */
             return false;
         }
-        return !(env->daif & PSTATE_F);
+        return !(env->daif & PSTATE_F) && (!allIntMask);
     case EXCP_VIRQ:
         if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
             /* VIRQs are only taken when hypervized. */
             return false;
         }
-        return !(env->daif & PSTATE_I);
+        return !(env->daif & PSTATE_I) && (!allIntMask);
     case EXCP_VSERR:
         if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
             /* VIRQs are only taken when hypervized. */
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 
     /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
 
+    if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
+        (arm_sctlr(env, cur_el) & SCTLR_NMI)) {
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
+            excp_idx = EXCP_NMI;
+            target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+            if (arm_excp_unmasked(cs, excp_idx, target_el,
+                                  cur_el, secure, hcr_el2)) {
+                goto found;
+            }
+        }
+        if (interrupt_request & CPU_INTERRUPT_VINMI) {
+            excp_idx = EXCP_VINMI;
+            target_el = 1;
+            if (arm_excp_unmasked(cs, excp_idx, target_el,
+                                  cur_el, secure, hcr_el2)) {
+                goto found;
+            }
+        }
+        if (interrupt_request & CPU_INTERRUPT_VFNMI) {
+            excp_idx = EXCP_VFNMI;
+            target_el = 1;
+            if (arm_excp_unmasked(cs, excp_idx, target_el,
+                                  cur_el, secure, hcr_el2)) {
+                goto found;
+            }
+        }
+    } else {
+        /*
+         * NMI disabled: interrupts with superpriority are handled
+         * as if they didn't have it
+         */
+        if (interrupt_request & CPU_INTERRUPT_NMI) {
+            interrupt_request |= CPU_INTERRUPT_HARD;
+        }
+        if (interrupt_request & CPU_INTERRUPT_VINMI) {
+            interrupt_request |= CPU_INTERRUPT_VIRQ;
+        }
+        if (interrupt_request & CPU_INTERRUPT_VFNMI) {
+            interrupt_request |= CPU_INTERRUPT_VFIQ;
+        }
+    }
+
     if (interrupt_request & CPU_INTERRUPT_FIQ) {
         excp_idx = EXCP_FIQ;
         target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu)
     CPUARMState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
 
-    bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
+    bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
+                      !(arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
         (env->irq_line_state & CPU_INTERRUPT_VIRQ);
 
     if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     CPUARMState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
 
-    bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
+    bool new_state = ((arm_hcr_el2_eff(env) & HCR_VF) &&
+                      !(arm_hcrx_el2_eff(env) & HCRX_VFNMI)) ||
         (env->irq_line_state & CPU_INTERRUPT_VFIQ);
 
     if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }
 
+void arm_cpu_update_vinmi(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VINMI, which is the logical OR of
+     * the HCRX_EL2.VINMI bit and the input line level from the GIC.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
+                      (arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
+        (env->irq_line_state & CPU_INTERRUPT_VINMI);
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VINMI) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VINMI);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VINMI);
+        }
+    }
+}
+
+void arm_cpu_update_vfnmi(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VFNMI, which is the HCRX_EL2.VFNMI bit.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = (arm_hcr_el2_eff(env) & HCR_VF) &&
+                      (arm_hcrx_el2_eff(env) & HCRX_VFNMI);
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFNMI) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VFNMI);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VFNMI);
+        }
+    }
+}
+
 void arm_cpu_update_vserr(ARMCPU *cpu)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
         [ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
         [ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
         [ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
-        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
+        [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
+        [ARM_CPU_NMI] = CPU_INTERRUPT_NMI,
+        [ARM_CPU_VINMI] = CPU_INTERRUPT_VINMI,
     };
 
     if (!arm_feature(env, ARM_FEATURE_EL2) &&
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
     case ARM_CPU_VFIQ:
         arm_cpu_update_vfiq(cpu);
         break;
+    case ARM_CPU_VINMI:
+        arm_cpu_update_vinmi(cpu);
+        break;
     case ARM_CPU_IRQ:
     case ARM_CPU_FIQ:
+    case ARM_CPU_NMI:
         if (level) {
             cpu_interrupt(cs, mask[irq]);
         } else {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
 #else
     /* Our inbound IRQ and FIQ lines */
     if (kvm_enabled()) {
-        /* VIRQ and VFIQ are unused with KVM but we add them to maintain
-         * the same interface as non-KVM CPUs.
+        /*
+         * VIRQ, VFIQ, NMI, VINMI are unused with KVM but we add
+         * them to maintain the same interface as non-KVM CPUs.
          */
-        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
+        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 6);
     } else {
-        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
+        qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 6);
     }
 
     qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
      * and the state of the input lines from the GIC. (This requires
      * that we have the BQL, which is done by marking the
      * reginfo structs as ARM_CP_IO.)
-     * Note that if a write to HCR pends a VIRQ or VFIQ it is never
-     * possible for it to be taken immediately, because VIRQ and
-     * VFIQ are masked unless running at EL0 or EL1, and HCR
-     * can only be written at EL2.
+     * Note that if a write to HCR pends a VIRQ or VFIQ or VINMI or
+     * VFNMI, it is never possible for it to be taken immediately
+     * because VIRQ, VFIQ, VINMI and VFNMI are masked unless running
+     * at EL0 or EL1, and HCR can only be written at EL2.
      */
     g_assert(bql_locked());
     arm_cpu_update_virq(cpu);
     arm_cpu_update_vfiq(cpu);
     arm_cpu_update_vserr(cpu);
+    if (cpu_isar_feature(aa64_nmi, cpu)) {
+        arm_cpu_update_vinmi(cpu);
+        arm_cpu_update_vfnmi(cpu);
+    }
 }
 
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
368
@@ -XXX,XX +XXX,XX @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
369
370
/* Clear RES0 bits. */
371
env->cp15.hcrx_el2 = value & valid_mask;
372
+
373
+ /*
374
+ * Updates to VINMI and VFNMI require us to update the status of
375
+ * virtual NMI, which are the logical OR of these bits
376
+ * and the state of the input lines from the GIC. (This requires
377
+ * that we have the BQL, which is done by marking the
378
+ * reginfo structs as ARM_CP_IO.)
379
+ * Note that if a write to HCRX pends a VINMI or VFNMI it is never
380
+ * possible for it to be taken immediately, because VINMI and
381
+ * VFNMI are masked unless running at EL0 or EL1, and HCRX
382
+ * can only be written at EL2.
383
+ */
384
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
385
+ g_assert(bql_locked());
386
+ arm_cpu_update_vinmi(cpu);
387
+ arm_cpu_update_vfnmi(cpu);
388
+ }
389
}
390
391
static CPAccessResult access_hxen(CPUARMState *env, const ARMCPRegInfo *ri,
392
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_hxen(CPUARMState *env, const ARMCPRegInfo *ri,
393
394
static const ARMCPRegInfo hcrx_el2_reginfo = {
395
.name = "HCRX_EL2", .state = ARM_CP_STATE_AA64,
396
+ .type = ARM_CP_IO,
397
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 2,
398
.access = PL2_RW, .writefn = hcrx_write, .accessfn = access_hxen,
399
.nv2_redirect_offset = 0xa0,
400
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs)
401
[EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
402
[EXCP_VSERR] = "Virtual SERR",
403
[EXCP_GPC] = "Granule Protection Check",
404
+ [EXCP_NMI] = "NMI",
405
+ [EXCP_VINMI] = "Virtual IRQ NMI",
406
+ [EXCP_VFNMI] = "Virtual FIQ NMI",
407
};
408
409
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
73
--
410
--
74
2.20.1
411
2.34.1
75
76
New patch
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
1
2
3
According to Arm GIC section 4.6.3 Interrupt superpriority, the interrupt
4
with superpriority is always IRQ, never FIQ, so handle NMI the same as IRQ in
5
arm_phys_excp_target_el().
6
7
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20240407081733.3231820-8-ruanjinjie@huawei.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/helper.c | 1 +
14
1 file changed, 1 insertion(+)
15
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.c
19
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
21
hcr_el2 = arm_hcr_el2_eff(env);
22
switch (excp_idx) {
23
case EXCP_IRQ:
24
+ case EXCP_NMI:
25
scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
26
hcr = hcr_el2 & HCR_IMO;
27
break;
28
--
29
2.34.1
1
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
like the existing FPSCR, except that it reads and writes only bits
3
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
4
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
5
permitted.)
6
2
7
Implement the register. Since we don't yet implement MVE, we handle
3
Add the IS and FS bits in ISR_EL1 and handle the read. With CPU_INTERRUPT_NMI or
8
the QC bit as RES0, with todo comments for where we will need to add
4
CPU_INTERRUPT_VINMI, both CPSR_I and ISR_IS must be set. With
9
support later.
5
CPU_INTERRUPT_VFNMI, both CPSR_F and ISR_FS must be set.
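
[Not part of the patch: a rough sketch, for readers, of how the new
ISR_EL1 bits described above combine. The bit positions match the
defines the patch adds; the helper names are made up for illustration.]

  #include <stdbool.h>
  #include <stdint.h>

  #define CPSR_I  (1u << 7)
  #define CPSR_F  (1u << 6)
  #define ISR_FS  (1u << 9)   /* FIQ with superpriority pending */
  #define ISR_IS  (1u << 10)  /* IRQ with superpriority pending */

  /* A pending NMI or VINMI is reported as I together with IS */
  static bool isr_reports_irq_nmi(uint32_t isr)
  {
      return (isr & (CPSR_I | ISR_IS)) == (CPSR_I | ISR_IS);
  }

  /* A pending VFNMI is reported as F together with FS */
  static bool isr_reports_fiq_nmi(uint32_t isr)
  {
      return (isr & (CPSR_F | ISR_FS)) == (CPSR_F | ISR_FS);
  }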
10
6
7
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20240407081733.3231820-9-ruanjinjie@huawei.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20201119215617.29887-11-peter.maydell@linaro.org
14
---
12
---
15
target/arm/cpu.h | 13 +++++++++++++
13
target/arm/cpu.h | 2 ++
16
target/arm/translate-vfp.c.inc | 27 +++++++++++++++++++++++++++
14
target/arm/helper.c | 13 +++++++++++++
17
2 files changed, 40 insertions(+)
15
2 files changed, 15 insertions(+)
18
16
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
21
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
24
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
22
#define CPSR_N (1U << 31)
25
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
23
#define CPSR_NZCV (CPSR_N | CPSR_Z | CPSR_C | CPSR_V)
26
#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
24
#define CPSR_AIF (CPSR_A | CPSR_I | CPSR_F)
27
+#define FPCR_V (1 << 28) /* FP overflow flag */
25
+#define ISR_FS (1U << 9)
28
+#define FPCR_C (1 << 29) /* FP carry flag */
26
+#define ISR_IS (1U << 10)
29
+#define FPCR_Z (1 << 30) /* FP zero flag */
27
30
+#define FPCR_N (1 << 31) /* FP negative flag */
28
#define CPSR_IT (CPSR_IT_0_1 | CPSR_IT_2_7)
29
#define CACHED_CPSR_BITS (CPSR_T | CPSR_AIF | CPSR_GE | CPSR_IT | CPSR_Q \
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/helper.c
33
+++ b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
35
if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
36
ret |= CPSR_I;
37
}
38
+ if (cs->interrupt_request & CPU_INTERRUPT_VINMI) {
39
+ ret |= ISR_IS;
40
+ ret |= CPSR_I;
41
+ }
42
} else {
43
if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
44
ret |= CPSR_I;
45
}
31
+
46
+
32
+#define FPCR_NZCV_MASK (FPCR_N | FPCR_Z | FPCR_C | FPCR_V)
47
+ if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
33
+#define FPCR_NZCVQC_MASK (FPCR_NZCV_MASK | FPCR_QC)
48
+ ret |= ISR_IS;
34
49
+ ret |= CPSR_I;
35
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
36
{
37
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
38
#define ARM_VFP_FPEXC 8
39
#define ARM_VFP_FPINST 9
40
#define ARM_VFP_FPINST2 10
41
+/* These ones are M-profile only */
42
+#define ARM_VFP_FPSCR_NZCVQC 2
43
+#define ARM_VFP_VPR 12
44
+#define ARM_VFP_P0 13
45
+#define ARM_VFP_FPCXT_NS 14
46
+#define ARM_VFP_FPCXT_S 15
47
48
/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
49
#define QEMU_VFP_FPSCR_NZCV 0xffff
50
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate-vfp.c.inc
53
+++ b/target/arm/translate-vfp.c.inc
54
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
55
case ARM_VFP_FPSCR:
56
case QEMU_VFP_FPSCR_NZCV:
57
break;
58
+ case ARM_VFP_FPSCR_NZCVQC:
59
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
60
+ return false;
61
+ }
50
+ }
62
+ break;
63
default:
64
return FPSysRegCheckFailed;
65
}
51
}
66
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
52
67
tcg_temp_free_i32(tmp);
53
if (hcr_el2 & HCR_FMO) {
68
gen_lookup_tb(s);
54
if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
69
break;
55
ret |= CPSR_F;
70
+ case ARM_VFP_FPSCR_NZCVQC:
56
}
71
+ {
57
+ if (cs->interrupt_request & CPU_INTERRUPT_VFNMI) {
72
+ TCGv_i32 fpscr;
58
+ ret |= ISR_FS;
73
+ tmp = loadfn(s, opaque);
59
+ ret |= CPSR_F;
74
+ /*
60
+ }
75
+ * TODO: when we implement MVE, write the QC bit.
61
} else {
76
+ * For non-MVE, QC is RES0.
62
if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
77
+ */
63
ret |= CPSR_F;
78
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
79
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
80
+ tcg_gen_andi_i32(fpscr, fpscr, ~FPCR_NZCV_MASK);
81
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
82
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
83
+ tcg_temp_free_i32(tmp);
84
+ break;
85
+ }
86
default:
87
g_assert_not_reached();
88
}
89
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
90
gen_helper_vfp_get_fpscr(tmp, cpu_env);
91
storefn(s, opaque, tmp);
92
break;
93
+ case ARM_VFP_FPSCR_NZCVQC:
94
+ /*
95
+ * TODO: MVE has a QC bit, which we probably won't store
96
+ * in the xregs[] field. For non-MVE, where QC is RES0,
97
+ * we can just fall through to the FPSCR_NZCV case.
98
+ */
99
case QEMU_VFP_FPSCR_NZCV:
100
/*
101
* Read just NZCV; this is a special case to avoid the
102
--
64
--
103
2.20.1
65
2.34.1
104
105
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
3
Set or clear PSTATE.ALLINT on taking an exception to ELx according to the
4
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
4
SCTLR_ELx.SPINTMASK bit.
5
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
5
6
Message-id: 1605728926-352690-5-git-send-email-fnu.vikram@xilinx.com
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20240407081733.3231820-10-ruanjinjie@huawei.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
11
---
9
MAINTAINERS | 8 ++++++++
12
target/arm/helper.c | 8 ++++++++
10
1 file changed, 8 insertions(+)
13
1 file changed, 8 insertions(+)
11
14
12
diff --git a/MAINTAINERS b/MAINTAINERS
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/MAINTAINERS
17
--- a/target/arm/helper.c
15
+++ b/MAINTAINERS
18
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ F: hw/net/opencores_eth.c
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
17
20
}
18
Devices
21
}
19
-------
22
20
+Xilinx CAN
23
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
21
+M: Vikram Garhwal <fnu.vikram@xilinx.com>
24
+ if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPINTMASK)) {
22
+M: Francisco Iglesias <francisco.iglesias@xilinx.com>
25
+ new_mode |= PSTATE_ALLINT;
23
+S: Maintained
26
+ } else {
24
+F: hw/net/can/xlnx-*
27
+ new_mode &= ~PSTATE_ALLINT;
25
+F: include/hw/net/xlnx-*
28
+ }
26
+F: tests/qtest/xlnx-can-test*
29
+ }
27
+
30
+
28
EDU
31
pstate_write(env, PSTATE_DAIF | new_mode);
29
M: Jiri Slaby <jslaby@suse.cz>
32
env->aarch64 = true;
30
S: Maintained
33
aarch64_restore_sp(env, new_el);
31
--
34
--
32
2.20.1
35
2.34.1
33
34
diff view generated by jsdifflib
1
Correct a typo in the name we give the NVIC object.
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Augment the GICv3's QOM device interface by adding one
4
new set of sysbus IRQ lines, to signal NMI to each CPU.
5
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20240407081733.3231820-11-ruanjinjie@huawei.com
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201119215617.29887-28-peter.maydell@linaro.org
7
---
11
---
8
hw/arm/armv7m.c | 2 +-
12
include/hw/intc/arm_gic_common.h | 2 ++
9
1 file changed, 1 insertion(+), 1 deletion(-)
13
include/hw/intc/arm_gicv3_common.h | 2 ++
14
hw/intc/arm_gicv3_common.c | 6 ++++++
15
3 files changed, 10 insertions(+)
10
16
11
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
17
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
12
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/arm/armv7m.c
19
--- a/include/hw/intc/arm_gic_common.h
14
+++ b/hw/arm/armv7m.c
20
+++ b/include/hw/intc/arm_gic_common.h
15
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)
21
@@ -XXX,XX +XXX,XX @@ struct GICState {
16
22
qemu_irq parent_fiq[GIC_NCPU];
17
memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);
23
qemu_irq parent_virq[GIC_NCPU];
18
24
qemu_irq parent_vfiq[GIC_NCPU];
19
- object_initialize_child(obj, "nvnic", &s->nvic, TYPE_NVIC);
25
+ qemu_irq parent_nmi[GIC_NCPU];
20
+ object_initialize_child(obj, "nvic", &s->nvic, TYPE_NVIC);
26
+ qemu_irq parent_vnmi[GIC_NCPU];
21
object_property_add_alias(obj, "num-irq",
27
qemu_irq maintenance_irq[GIC_NCPU];
22
OBJECT(&s->nvic), "num-irq");
28
23
29
/* GICD_CTLR; for a GIC with the security extensions the NS banked version
30
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/include/hw/intc/arm_gicv3_common.h
33
+++ b/include/hw/intc/arm_gicv3_common.h
34
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
35
qemu_irq parent_fiq;
36
qemu_irq parent_virq;
37
qemu_irq parent_vfiq;
38
+ qemu_irq parent_nmi;
39
+ qemu_irq parent_vnmi;
40
41
/* Redistributor */
42
uint32_t level; /* Current IRQ level */
43
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/intc/arm_gicv3_common.c
46
+++ b/hw/intc/arm_gicv3_common.c
47
@@ -XXX,XX +XXX,XX @@ void gicv3_init_irqs_and_mmio(GICv3State *s, qemu_irq_handler handler,
48
for (i = 0; i < s->num_cpu; i++) {
49
sysbus_init_irq(sbd, &s->cpu[i].parent_vfiq);
50
}
51
+ for (i = 0; i < s->num_cpu; i++) {
52
+ sysbus_init_irq(sbd, &s->cpu[i].parent_nmi);
53
+ }
54
+ for (i = 0; i < s->num_cpu; i++) {
55
+ sysbus_init_irq(sbd, &s->cpu[i].parent_vnmi);
56
+ }
57
58
memory_region_init_io(&s->iomem_dist, OBJECT(s), ops, s,
59
"gicv3_dist", 0x10000);
24
--
60
--
25
2.20.1
61
2.34.1
26
27
1
In v8.1M a new exception return check is added which may cause a NOCP
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
UsageFault (see rule R_XLTP): before we clear s0..s15 and the FPSCR
3
we must check whether access to CP10 from the Security state of the
4
returning exception is disabled; if it is then we must take a fault.
5
2
6
(Note that for our implementation CPPWR is always RAZ/WI and so can
3
Wire the new NMI and VINMI interrupt lines from the GIC to each CPU if it
7
never cause CP10 accesses to fail.)
4
is not GICv2.
8
5
9
The other v8.1M change to this register-clearing code is that if MVE
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
10
is implemented VPR must also be cleared, so add a TODO comment to
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
that effect.
8
Message-id: 20240407081733.3231820-12-ruanjinjie@huawei.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/arm/virt.c | 10 +++++++++-
12
1 file changed, 9 insertions(+), 1 deletion(-)
12
13
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-20-peter.maydell@linaro.org
16
---
17
target/arm/m_helper.c | 22 +++++++++++++++++++++-
18
1 file changed, 21 insertions(+), 1 deletion(-)
19
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
21
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/m_helper.c
16
--- a/hw/arm/virt.c
23
+++ b/target/arm/m_helper.c
17
+++ b/hw/arm/virt.c
24
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
18
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
25
v7m_exception_taken(cpu, excret, true, false);
19
26
return;
20
/* Wire the outputs from each CPU's generic timer and the GICv3
27
} else {
21
* maintenance interrupt signal to the appropriate GIC PPI inputs,
28
- /* Clear s0..s15 and FPSCR */
22
- * and the GIC's IRQ/FIQ/VIRQ/VFIQ interrupt outputs to the CPU's inputs.
29
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
23
+ * and the GIC's IRQ/FIQ/VIRQ/VFIQ/NMI/VINMI interrupt outputs to the
30
+ /* v8.1M adds this NOCP check */
24
+ * CPU's inputs.
31
+ bool nsacr_pass = exc_secure ||
25
*/
32
+ extract32(env->v7m.nsacr, 10, 1);
26
for (i = 0; i < smp_cpus; i++) {
33
+ bool cpacr_pass = v7m_cpacr_pass(env, exc_secure, true);
27
DeviceState *cpudev = DEVICE(qemu_get_cpu(i));
34
+ if (!nsacr_pass) {
28
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
35
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
29
qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
36
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
30
sysbus_connect_irq(gicbusdev, i + 3 * smp_cpus,
37
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
31
qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
38
+ "stackframe: NSACR prevents clearing FPU registers\n");
32
+
39
+ v7m_exception_taken(cpu, excret, true, false);
33
+ if (vms->gic_version != VIRT_GIC_VERSION_2) {
40
+ } else if (!cpacr_pass) {
34
+ sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus,
41
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
35
+ qdev_get_gpio_in(cpudev, ARM_CPU_NMI));
42
+ exc_secure);
36
+ sysbus_connect_irq(gicbusdev, i + 5 * smp_cpus,
43
+ env->v7m.cfsr[exc_secure] |= R_V7M_CFSR_NOCP_MASK;
37
+ qdev_get_gpio_in(cpudev, ARM_CPU_VINMI));
44
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
38
+ }
45
+ "stackframe: CPACR prevents clearing FPU registers\n");
39
}
46
+ v7m_exception_taken(cpu, excret, true, false);
40
47
+ }
41
fdt_add_gic_node(vms);
48
+ }
49
+ /* Clear s0..s15 and FPSCR; TODO also VPR when MVE is implemented */
50
int i;
51
52
for (i = 0; i < 16; i += 2) {
53
--
42
--
54
2.20.1
43
2.34.1
55
56
1
In v8.1M the PXN architecture extension adds a new PXN bit to the
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
MPU_RLAR registers, which forbids execution of code in the region
3
from a privileged mode.
4
2
5
This is another feature which is just in the generic "in v8.1M" set
3
According to Arm GIC section 4.6.3 Interrupt superpriority, the interrupt
6
and has no ID register field indicating its presence.
4
with superpriority is always IRQ, never FIQ, so the NMI exception trap entry
5
behaves like IRQ. And VINMI (vIRQ with Superpriority) can be raised from the
6
GIC or from the hcrx_el2.HCRX_VINMI bit; VFNMI (vFIQ with Superpriority)
7
comes from the hcrx_el2.HCRX_VFNMI bit.
7
8
9
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20240407081733.3231820-13-ruanjinjie@huawei.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
11
---
14
---
12
target/arm/helper.c | 7 ++++++-
15
target/arm/helper.c | 3 +++
13
1 file changed, 6 insertions(+), 1 deletion(-)
16
1 file changed, 3 insertions(+)
14
17
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
20
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
21
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
22
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
20
} else {
23
break;
21
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
24
case EXCP_IRQ:
22
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
25
case EXCP_VIRQ:
23
+ bool pxn = false;
26
+ case EXCP_NMI:
24
+
27
+ case EXCP_VINMI:
25
+ if (arm_feature(env, ARM_FEATURE_V8_1M)) {
28
addr += 0x80;
26
+ pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
29
break;
27
+ }
30
case EXCP_FIQ:
28
31
case EXCP_VFIQ:
29
if (m_is_system_region(env, address)) {
32
+ case EXCP_VFNMI:
30
/* System space is always execute never */
33
addr += 0x100;
31
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
34
break;
32
}
35
case EXCP_VSERR:
33
34
*prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
35
- if (*prot && !xn) {
36
+ if (*prot && !xn && !(pxn && !is_user)) {
37
*prot |= PAGE_EXEC;
38
}
39
/* We don't need to look the attribute up in the MAIR0/MAIR1
40
--
36
--
41
2.20.1
37
2.34.1
42
43
1
In v8.0M, on exception entry the registers R0-R3, R12, APSR and EPSR
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
are zeroed for an exception taken to Non-secure state; for an
3
exception taken to Secure state they become UNKNOWN, and we chose to
4
leave them at their previous values.
5
2
6
In v8.1M the behaviour is specified more tightly and these registers
3
Add a property has-nmi to the GICv3 device, and use this to set
7
are always zeroed regardless of the security state that the exception
4
the NMI bit in the GICD_TYPER register. This isn't visible to
8
targets (see rule R_KPZV). Implement this.
5
guests yet because the property defaults to false and we won't
6
set it in the board code until we've landed all of the changes
7
needed to implement FEAT_GICV3_NMI.
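
[Not part of the patch: once FEAT_GICV3_NMI is fully wired up, board
code could enable the new property roughly like this; "gicdev" stands
in for the board's GICv3 DeviceState.]

  #include "hw/qdev-properties.h"

  static void board_enable_gic_nmi(DeviceState *gicdev)
  {
      /* "has-nmi" is the property added by this patch; it defaults to false */
      qdev_prop_set_bit(gicdev, "has-nmi", true);
  }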
9
8
9
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20240407081733.3231820-14-ruanjinjie@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20201119215617.29887-17-peter.maydell@linaro.org
13
---
14
---
14
target/arm/m_helper.c | 16 ++++++++++++----
15
hw/intc/gicv3_internal.h | 1 +
15
1 file changed, 12 insertions(+), 4 deletions(-)
16
include/hw/intc/arm_gicv3_common.h | 1 +
17
hw/intc/arm_gicv3_common.c | 1 +
18
hw/intc/arm_gicv3_dist.c | 2 ++
19
4 files changed, 5 insertions(+)
16
20
17
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
21
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/m_helper.c
23
--- a/hw/intc/gicv3_internal.h
20
+++ b/target/arm/m_helper.c
24
+++ b/hw/intc/gicv3_internal.h
21
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
25
@@ -XXX,XX +XXX,XX @@
22
* Clear registers if necessary to prevent non-secure exception
26
#define GICD_CTLR_E1NWF (1U << 7)
23
* code being able to see register values from secure code.
27
#define GICD_CTLR_RWP (1U << 31)
24
* Where register values become architecturally UNKNOWN we leave
28
25
- * them with their previous values.
29
+#define GICD_TYPER_NMI_SHIFT 9
26
+ * them with their previous values. v8.1M is tighter than v8.0M
30
#define GICD_TYPER_LPIS_SHIFT 17
27
+ * here and always zeroes the caller-saved registers regardless
31
28
+ * of the security state the exception is targeting.
32
/* 16 bits EventId */
33
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/include/hw/intc/arm_gicv3_common.h
36
+++ b/include/hw/intc/arm_gicv3_common.h
37
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
38
uint32_t num_irq;
39
uint32_t revision;
40
bool lpi_enable;
41
+ bool nmi_support;
42
bool security_extn;
43
bool force_8bit_prio;
44
bool irq_reset_nonsecure;
45
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/intc/arm_gicv3_common.c
48
+++ b/hw/intc/arm_gicv3_common.c
49
@@ -XXX,XX +XXX,XX @@ static Property arm_gicv3_common_properties[] = {
50
DEFINE_PROP_UINT32("num-irq", GICv3State, num_irq, 32),
51
DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
52
DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
53
+ DEFINE_PROP_BOOL("has-nmi", GICv3State, nmi_support, 0),
54
DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
55
/*
56
* Compatibility property: force 8 bits of physical priority, even
57
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/intc/arm_gicv3_dist.c
60
+++ b/hw/intc/arm_gicv3_dist.c
61
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
62
* by GICD_TYPER.IDbits)
63
* MBIS == 0 (message-based SPIs not supported)
64
* SecurityExtn == 1 if security extns supported
65
+ * NMI = 1 if Non-maskable interrupt property is supported
66
* CPUNumber == 0 since for us ARE is always 1
67
* ITLinesNumber == (((max SPI IntID + 1) / 32) - 1)
29
*/
68
*/
30
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
69
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
31
- if (!targets_secure) {
70
bool dvis = s->revision >= 4;
32
+ if (!targets_secure || arm_feature(env, ARM_FEATURE_V8_1M)) {
71
33
/*
72
*data = (1 << 25) | (1 << 24) | (dvis << 18) | (sec_extn << 10) |
34
* Always clear the caller-saved registers (they have been
73
+ (s->nmi_support << GICD_TYPER_NMI_SHIFT) |
35
* pushed to the stack earlier in v7m_push_stack()).
74
(s->lpi_enable << GICD_TYPER_LPIS_SHIFT) |
36
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
75
(0xf << 19) | itlinesnumber;
37
* v7m_push_callee_stack()).
76
return true;
38
*/
39
int i;
40
+ /*
41
+ * r4..r11 are callee-saves, zero only if background
42
+ * state was Secure (EXCRET.S == 1) and exception
43
+ * targets Non-secure state
44
+ */
45
+ bool zero_callee_saves = !targets_secure &&
46
+ (lr & R_V7M_EXCRET_S_MASK);
47
48
for (i = 0; i < 13; i++) {
49
- /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
50
- if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
51
+ if (i < 4 || i > 11 || zero_callee_saves) {
52
env->regs[i] = 0;
53
}
54
}
55
--
77
--
56
2.20.1
78
2.34.1
57
58
1
In v8.1M a REVIDR register is defined, which is at address 0xe00ecfc
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
and is a read-only IMPDEF register providing implementation specific
3
minor revision information, like the v8A REVIDR_EL1. Implement this.
4
2
3
So far, there is no FEAT_GICv3_NMI support in the in-kernel GIC, so make it
4
an error to try to set has-nmi=true for the KVM GICv3.
5
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Message-id: 20240407081733.3231820-15-ruanjinjie@huawei.com
8
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-19-peter.maydell@linaro.org
8
---
10
---
9
hw/intc/armv7m_nvic.c | 5 +++++
11
hw/intc/arm_gicv3_kvm.c | 5 +++++
10
1 file changed, 5 insertions(+)
12
1 file changed, 5 insertions(+)
11
13
12
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
14
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/intc/armv7m_nvic.c
16
--- a/hw/intc/arm_gicv3_kvm.c
15
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/hw/intc/arm_gicv3_kvm.c
16
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
18
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
17
}
19
return;
18
return val;
19
}
20
}
20
+ case 0xcfc:
21
21
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8_1M)) {
22
+ if (s->nmi_support) {
22
+ goto bad_offset;
23
+ error_setg(errp, "NMI is not supported with the in-kernel GIC");
23
+ }
24
+ return;
24
+ return cpu->revidr;
25
+ }
25
case 0xd00: /* CPUID Base. */
26
+
26
return cpu->midr;
27
gicv3_init_irqs_and_mmio(s, kvm_arm_gicv3_set_irq, NULL);
27
case 0xd04: /* Interrupt Control State (ICSR) */
28
29
for (i = 0; i < s->num_cpu; i++) {
28
--
30
--
29
2.20.1
31
2.34.1
30
31
1
For M-profile CPUs, the range from 0xe0000000 to 0xe00fffff is the
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
Private Peripheral Bus range, which includes all of the memory mapped
3
devices and registers that are part of the CPU itself, including the
4
NVIC, systick timer, and debug and trace components like the Data
5
Watchpoint and Trace unit (DWT). Within this large region, the range
6
0xe000e000 to 0xe000efff is the System Control Space (NVIC, system
7
registers, systick) and 0xe002e000 to 0exe002efff is its Non-secure
8
alias.
9
2
10
The architecture is clear that within the SCS unimplemented registers
3
A SPI, PPI or SGI interrupt can have the non-maskable property. So maintain
11
should be RES0 for privileged accesses and generate BusFault for
4
the non-maskable property in PendingIrq and GICR/GICD. Since this adds new device
12
unprivileged accesses, and we currently implement this.
5
state, it also needs to be migrated, so also save NMI info in
6
vmstate_gicv3_cpu and vmstate_gicv3.
13
7
14
It is less clear about how to handle accesses to unimplemented
8
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
15
regions of the wider PPB. Unprivileged accesses should definitely
9
Acked-by: Richard Henderson <richard.henderson@linaro.org>
16
cause BusFaults (R_DQQS), but the behaviour of privileged accesses is
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
not given as a general rule. However, the register definitions of
11
Message-id: 20240407081733.3231820-16-ruanjinjie@huawei.com
18
individual registers for components like the DWT all state that they
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
are RES0 if the relevant component is not implemented, so the
13
---
20
simplest way to provide that is to provide RAZ/WI for the whole range
14
include/hw/intc/arm_gicv3_common.h | 4 ++++
21
for privileged accesses. (The v7M Arm ARM does say that reserved
15
hw/intc/arm_gicv3_common.c | 38 ++++++++++++++++++++++++++++++
22
registers should be UNK/SBZP.)
16
2 files changed, 42 insertions(+)
23
17
24
Expand the container MemoryRegion that the NVIC exposes so that
18
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
25
it covers the whole PPB space. This means:
26
* moving the address that the ARMV7M device maps it to down by
27
0xe000 bytes
28
* moving the off and the offsets within the container of all the
29
subregions forward by 0xe000 bytes
30
* adding a new default MemoryRegion that covers the whole container
31
at a lower priority than anything else and which provides the
32
RAZWI/BusFault behaviour
33
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
Message-id: 20201119215617.29887-2-peter.maydell@linaro.org
37
---
38
include/hw/intc/armv7m_nvic.h | 1 +
39
hw/arm/armv7m.c | 2 +-
40
hw/intc/armv7m_nvic.c | 78 ++++++++++++++++++++++++++++++-----
41
3 files changed, 69 insertions(+), 12 deletions(-)
42
43
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
44
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
45
--- a/include/hw/intc/armv7m_nvic.h
20
--- a/include/hw/intc/arm_gicv3_common.h
46
+++ b/include/hw/intc/armv7m_nvic.h
21
+++ b/include/hw/intc/arm_gicv3_common.h
47
@@ -XXX,XX +XXX,XX @@ struct NVICState {
22
@@ -XXX,XX +XXX,XX @@ typedef struct {
48
MemoryRegion systickmem;
23
int irq;
49
MemoryRegion systick_ns_mem;
24
uint8_t prio;
50
MemoryRegion container;
25
int grp;
51
+ MemoryRegion defaultmem;
26
+ bool nmi;
52
27
} PendingIrq;
53
uint32_t num_irq;
28
54
qemu_irq excpout;
29
struct GICv3CPUState {
55
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
30
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
31
uint32_t gicr_ienabler0;
32
uint32_t gicr_ipendr0;
33
uint32_t gicr_iactiver0;
34
+ uint32_t gicr_inmir0;
35
uint32_t edge_trigger; /* ICFGR0 and ICFGR1 even bits */
36
uint32_t gicr_igrpmodr0;
37
uint32_t gicr_nsacr;
38
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
39
GIC_DECLARE_BITMAP(active); /* GICD_ISACTIVER */
40
GIC_DECLARE_BITMAP(level); /* Current level */
41
GIC_DECLARE_BITMAP(edge_trigger); /* GICD_ICFGR even bits */
42
+ GIC_DECLARE_BITMAP(nmi); /* GICD_INMIR */
43
uint8_t gicd_ipriority[GICV3_MAXIRQ];
44
uint64_t gicd_irouter[GICV3_MAXIRQ];
45
/* Cached information: pointer to the cpu i/f for the CPUs specified
46
@@ -XXX,XX +XXX,XX @@ GICV3_BITMAP_ACCESSORS(pending)
47
GICV3_BITMAP_ACCESSORS(active)
48
GICV3_BITMAP_ACCESSORS(level)
49
GICV3_BITMAP_ACCESSORS(edge_trigger)
50
+GICV3_BITMAP_ACCESSORS(nmi)
51
52
#define TYPE_ARM_GICV3_COMMON "arm-gicv3-common"
53
typedef struct ARMGICv3CommonClass ARMGICv3CommonClass;
54
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
56
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/arm/armv7m.c
56
--- a/hw/intc/arm_gicv3_common.c
58
+++ b/hw/arm/armv7m.c
57
+++ b/hw/intc/arm_gicv3_common.c
59
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
58
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_gicv3_gicv4 = {
60
sysbus_connect_irq(sbd, 0,
59
}
61
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
62
63
- memory_region_add_subregion(&s->container, 0xe000e000,
64
+ memory_region_add_subregion(&s->container, 0xe0000000,
65
sysbus_mmio_get_region(sbd, 0));
66
67
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
71
+++ b/hw/intc/armv7m_nvic.c
72
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
73
.endianness = DEVICE_NATIVE_ENDIAN,
74
};
60
};
75
61
76
+/*
62
+static bool gicv3_cpu_nmi_needed(void *opaque)
77
+ * Unassigned portions of the PPB space are RAZ/WI for privileged
78
+ * accesses, and fault for non-privileged accesses.
79
+ */
80
+static MemTxResult ppb_default_read(void *opaque, hwaddr addr,
81
+ uint64_t *data, unsigned size,
82
+ MemTxAttrs attrs)
83
+{
63
+{
84
+ qemu_log_mask(LOG_UNIMP, "Read of unassigned area of PPB: offset 0x%x\n",
64
+ GICv3CPUState *cs = opaque;
85
+ (uint32_t)addr);
65
+
86
+ if (attrs.user) {
66
+ return cs->gic->nmi_support;
87
+ return MEMTX_ERROR;
88
+ }
89
+ *data = 0;
90
+ return MEMTX_OK;
91
+}
67
+}
92
+
68
+
93
+static MemTxResult ppb_default_write(void *opaque, hwaddr addr,
69
+static const VMStateDescription vmstate_gicv3_cpu_nmi = {
94
+ uint64_t value, unsigned size,
70
+ .name = "arm_gicv3_cpu/nmi",
95
+ MemTxAttrs attrs)
71
+ .version_id = 1,
72
+ .minimum_version_id = 1,
73
+ .needed = gicv3_cpu_nmi_needed,
74
+ .fields = (const VMStateField[]) {
75
+ VMSTATE_UINT32(gicr_inmir0, GICv3CPUState),
76
+ VMSTATE_END_OF_LIST()
77
+ }
78
+};
79
+
80
static const VMStateDescription vmstate_gicv3_cpu = {
81
.name = "arm_gicv3_cpu",
82
.version_id = 1,
83
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3_cpu = {
84
&vmstate_gicv3_cpu_virt,
85
&vmstate_gicv3_cpu_sre_el1,
86
&vmstate_gicv3_gicv4,
87
+ &vmstate_gicv3_cpu_nmi,
88
NULL
89
}
90
};
91
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_gicv3_gicd_no_migration_shift_bug = {
92
}
93
};
94
95
+static bool gicv3_nmi_needed(void *opaque)
96
+{
96
+{
97
+ qemu_log_mask(LOG_UNIMP, "Write of unassigned area of PPB: offset 0x%x\n",
97
+ GICv3State *cs = opaque;
98
+ (uint32_t)addr);
98
+
99
+ if (attrs.user) {
99
+ return cs->nmi_support;
100
+ return MEMTX_ERROR;
101
+ }
102
+ return MEMTX_OK;
103
+}
100
+}
104
+
101
+
105
+static const MemoryRegionOps ppb_default_ops = {
102
+const VMStateDescription vmstate_gicv3_gicd_nmi = {
106
+ .read_with_attrs = ppb_default_read,
103
+ .name = "arm_gicv3/gicd_nmi",
107
+ .write_with_attrs = ppb_default_write,
104
+ .version_id = 1,
108
+ .endianness = DEVICE_NATIVE_ENDIAN,
105
+ .minimum_version_id = 1,
109
+ .valid.min_access_size = 1,
106
+ .needed = gicv3_nmi_needed,
110
+ .valid.max_access_size = 8,
107
+ .fields = (const VMStateField[]) {
108
+ VMSTATE_UINT32_ARRAY(nmi, GICv3State, GICV3_BMP_SIZE),
109
+ VMSTATE_END_OF_LIST()
110
+ }
111
+};
111
+};
112
+
112
+
113
static int nvic_post_load(void *opaque, int version_id)
113
static const VMStateDescription vmstate_gicv3 = {
114
{
114
.name = "arm_gicv3",
115
NVICState *s = opaque;
115
.version_id = 1,
116
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
116
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3 = {
117
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
117
},
118
{
118
.subsections = (const VMStateDescription * const []) {
119
NVICState *s = NVIC(dev);
119
&vmstate_gicv3_gicd_no_migration_shift_bug,
120
- int regionlen;
120
+ &vmstate_gicv3_gicd_nmi,
121
121
NULL
122
/* The armv7m container object will have set our CPU pointer */
123
if (!s->cpu || !arm_feature(&s->cpu->env, ARM_FEATURE_M)) {
124
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
125
M_REG_S));
126
}
122
}
127
123
};
128
- /* The NVIC and System Control Space (SCS) starts at 0xe000e000
129
+ /*
130
+ * This device provides a single sysbus memory region which
131
+ * represents the whole of the "System PPB" space. This is the
132
+ * range from 0xe0000000 to 0xe00fffff and includes the NVIC,
133
+ * the System Control Space (system registers), the systick timer,
134
+ * and for CPUs with the Security extension an NS banked version
135
+ * of all of these.
136
+ *
137
+ * The default behaviour for unimplemented registers/ranges
138
+ * (for instance the Data Watchpoint and Trace unit at 0xe0001000)
139
+ * is to RAZ/WI for privileged access and BusFault for non-privileged
140
+ * access.
141
+ *
142
+ * The NVIC and System Control Space (SCS) starts at 0xe000e000
143
* and looks like this:
144
* 0x004 - ICTR
145
* 0x010 - 0xff - systick
146
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
147
* generally code determining which banked register to use should
148
* use attrs.secure; code determining actual behaviour of the system
149
* should use env->v7m.secure.
150
+ *
151
+ * The container covers the whole PPB space. Within it the priority
152
+ * of overlapping regions is:
153
+ * - default region (for RAZ/WI and BusFault) : -1
154
+ * - system register regions : 0
155
+ * - systick : 1
156
+ * This is because the systick device is a small block of registers
157
+ * in the middle of the other system control registers.
158
*/
159
- regionlen = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? 0x21000 : 0x1000;
160
- memory_region_init(&s->container, OBJECT(s), "nvic", regionlen);
161
- /* The system register region goes at the bottom of the priority
162
- * stack as it covers the whole page.
163
- */
164
+ memory_region_init(&s->container, OBJECT(s), "nvic", 0x100000);
165
+ memory_region_init_io(&s->defaultmem, OBJECT(s), &ppb_default_ops, s,
166
+ "nvic-default", 0x100000);
167
+ memory_region_add_subregion_overlap(&s->container, 0, &s->defaultmem, -1);
168
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
169
"nvic_sysregs", 0x1000);
170
- memory_region_add_subregion(&s->container, 0, &s->sysregmem);
171
+ memory_region_add_subregion(&s->container, 0xe000, &s->sysregmem);
172
173
memory_region_init_io(&s->systickmem, OBJECT(s),
174
&nvic_systick_ops, s,
175
"nvic_systick", 0xe0);
176
177
- memory_region_add_subregion_overlap(&s->container, 0x10,
178
+ memory_region_add_subregion_overlap(&s->container, 0xe010,
179
&s->systickmem, 1);
180
181
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
182
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
183
&nvic_sysreg_ns_ops, &s->sysregmem,
184
"nvic_sysregs_ns", 0x1000);
185
- memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
186
+ memory_region_add_subregion(&s->container, 0x2e000, &s->sysreg_ns_mem);
187
memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
188
&nvic_sysreg_ns_ops, &s->systickmem,
189
"nvic_systick_ns", 0xe0);
190
- memory_region_add_subregion_overlap(&s->container, 0x20010,
191
+ memory_region_add_subregion_overlap(&s->container, 0x2e010,
192
&s->systick_ns_mem, 1);
193
}
194
195
--
124
--
196
2.20.1
125
2.34.1
197
198
1
For v8.1M the architecture mandates that CPUs must provide at
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
least the "minimal RAS implementation" from the Reliability,
3
Availability and Serviceability extension. This consists of:
4
* an ESB instruction which is a NOP
5
-- since it is in the HINT space we need only add a comment
6
* an RFSR register which will RAZ/WI
7
* a RAZ/WI AIRCR.IESB bit
8
-- the code which handles writes to AIRCR does not allow setting
9
of RES0 bits, so we already treat this as RAZ/WI; add a comment
10
noting that this is deliberate
11
* minimal implementation of the RAS register block at 0xe0005000
12
-- this will be in a subsequent commit
13
* setting the ID_PFR0.RAS field to 0b0010
14
-- we will do this when we add the Cortex-M55 CPU model
15
2
3
Add the GICR_INMIR0 register and support accesses to GICR_INMIR0.
4
5
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20240407081733.3231820-17-ruanjinjie@huawei.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20201119215617.29887-26-peter.maydell@linaro.org
19
---
10
---
20
target/arm/cpu.h | 14 ++++++++++++++
11
hw/intc/gicv3_internal.h | 1 +
21
target/arm/t32.decode | 4 ++++
12
hw/intc/arm_gicv3_redist.c | 19 +++++++++++++++++++
22
hw/intc/armv7m_nvic.c | 13 +++++++++++++
13
2 files changed, 20 insertions(+)
23
3 files changed, 31 insertions(+)
24
14
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
26
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
17
--- a/hw/intc/gicv3_internal.h
28
+++ b/target/arm/cpu.h
18
+++ b/hw/intc/gicv3_internal.h
29
@@ -XXX,XX +XXX,XX @@ FIELD(ID_MMFR4, LSM, 20, 4)
19
@@ -XXX,XX +XXX,XX @@
30
FIELD(ID_MMFR4, CCIDX, 24, 4)
20
#define GICR_ICFGR1 (GICR_SGI_OFFSET + 0x0C04)
31
FIELD(ID_MMFR4, EVT, 28, 4)
21
#define GICR_IGRPMODR0 (GICR_SGI_OFFSET + 0x0D00)
32
22
#define GICR_NSACR (GICR_SGI_OFFSET + 0x0E00)
33
+FIELD(ID_PFR0, STATE0, 0, 4)
23
+#define GICR_INMIR0 (GICR_SGI_OFFSET + 0x0F80)
34
+FIELD(ID_PFR0, STATE1, 4, 4)
24
35
+FIELD(ID_PFR0, STATE2, 8, 4)
25
/* VLPI redistributor registers, offsets from VLPI_base */
36
+FIELD(ID_PFR0, STATE3, 12, 4)
26
#define GICR_VPROPBASER (GICR_VLPI_OFFSET + 0x70)
37
+FIELD(ID_PFR0, CSV2, 16, 4)
27
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
38
+FIELD(ID_PFR0, AMU, 20, 4)
28
index XXXXXXX..XXXXXXX 100644
39
+FIELD(ID_PFR0, DIT, 24, 4)
29
--- a/hw/intc/arm_gicv3_redist.c
40
+FIELD(ID_PFR0, RAS, 28, 4)
30
+++ b/hw/intc/arm_gicv3_redist.c
41
+
31
@@ -XXX,XX +XXX,XX @@ static int gicr_ns_access(GICv3CPUState *cs, int irq)
42
FIELD(ID_PFR1, PROGMOD, 0, 4)
32
return extract32(cs->gicr_nsacr, irq * 2, 2);
43
FIELD(ID_PFR1, SECURITY, 4, 4)
44
FIELD(ID_PFR1, MPROGMOD, 8, 4)
45
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
46
return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
47
}
33
}
48
34
49
+static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
35
+static void gicr_write_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
36
+ uint32_t *reg, uint32_t val)
50
+{
37
+{
51
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
38
+ /* Helper routine to implement writing to a "set" register */
39
+ val &= mask_group(cs, attrs);
40
+ *reg = val;
41
+ gicv3_redist_update(cs);
52
+}
42
+}
53
+
43
+
54
static inline bool isar_feature_aa32_mprofile(const ARMISARegisters *id)
44
static void gicr_write_set_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
45
uint32_t *reg, uint32_t val)
55
{
46
{
56
return FIELD_EX32(id->id_pfr1, ID_PFR1, MPROGMOD) != 0;
47
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset,
57
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
48
*data = value;
58
index XXXXXXX..XXXXXXX 100644
49
return MEMTX_OK;
59
--- a/target/arm/t32.decode
50
}
60
+++ b/target/arm/t32.decode
51
+ case GICR_INMIR0:
61
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
52
+ *data = cs->gic->nmi_support ?
62
# SEV 1111 0011 1010 1111 1000 0000 0000 0100
53
+ gicr_read_bitmap_reg(cs, attrs, cs->gicr_inmir0) : 0;
63
# SEVL 1111 0011 1010 1111 1000 0000 0000 0101
54
+ return MEMTX_OK;
64
55
case GICR_ICFGR0:
65
+ # For M-profile minimal-RAS ESB can be a NOP, which is the
56
case GICR_ICFGR1:
66
+ # default behaviour since it is in the hint space.
57
{
67
+ # ESB 1111 0011 1010 1111 1000 0000 0001 0000
58
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
59
gicv3_redist_update(cs);
60
return MEMTX_OK;
61
}
62
+ case GICR_INMIR0:
63
+ if (cs->gic->nmi_support) {
64
+ gicr_write_bitmap_reg(cs, attrs, &cs->gicr_inmir0, value);
65
+ }
66
+ return MEMTX_OK;
68
+
67
+
69
# The canonical nop ends in 0000 0000, but the whole rest
68
case GICR_ICFGR0:
70
# of the space is "reserved hint, behaves as nop".
69
/* Register is all RAZ/WI or RAO/WI bits */
71
NOP 1111 0011 1010 1111 1000 0000 ---- ----
70
return MEMTX_OK;
72
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/intc/armv7m_nvic.c
75
+++ b/hw/intc/armv7m_nvic.c
76
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
77
return 0;
78
}
79
return cpu->env.v7m.sfar;
80
+ case 0xf04: /* RFSR */
81
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
82
+ goto bad_offset;
83
+ }
84
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
85
+ return 0;
86
case 0xf34: /* FPCCR */
87
if (!cpu_isar_feature(aa32_vfp_simd, cpu)) {
88
return 0;
89
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
90
R_V7M_AIRCR_PRIGROUP_SHIFT,
91
R_V7M_AIRCR_PRIGROUP_LENGTH);
92
}
93
+ /* AIRCR.IESB is RAZ/WI because we implement only minimal RAS */
94
if (attrs.secure) {
95
/* These bits are only writable by secure */
96
cpu->env.v7m.aircr = value &
97
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
98
}
99
break;
100
}
101
+ case 0xf04: /* RFSR */
102
+ if (!cpu_isar_feature(aa32_ras, cpu)) {
103
+ goto bad_offset;
104
+ }
105
+ /* We provide minimal-RAS only: RFSR is RAZ/WI */
106
+ break;
107
case 0xf34: /* FPCCR */
108
if (cpu_isar_feature(aa32_vfp_simd, cpu)) {
109
/* Not all bits here are banked. */
110
--
71
--
111
2.20.1
72
2.34.1
112
113
1
v8.1M introduces a new TRD flag in the CCR register, which enables
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
checking for stack frame integrity signatures on SG instructions.
3
Add the code in the SG insn implementation for the new behaviour.
4
2
3
Add the GICD_INMIR and GICD_INMIRnE registers and support access to GICD_INMIR0.
4
5
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20240407081733.3231820-18-ruanjinjie@huawei.com
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201119215617.29887-24-peter.maydell@linaro.org
8
---
10
---
9
target/arm/m_helper.c | 86 +++++++++++++++++++++++++++++++++++++++++++
11
hw/intc/gicv3_internal.h | 2 ++
10
1 file changed, 86 insertions(+)
12
hw/intc/arm_gicv3_dist.c | 34 ++++++++++++++++++++++++++++++++++
13
2 files changed, 36 insertions(+)
11
14
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
15
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/m_helper.c
17
--- a/hw/intc/gicv3_internal.h
15
+++ b/target/arm/m_helper.c
18
+++ b/hw/intc/gicv3_internal.h
16
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
19
@@ -XXX,XX +XXX,XX @@
17
return true;
20
#define GICD_SGIR 0x0F00
21
#define GICD_CPENDSGIR 0x0F10
22
#define GICD_SPENDSGIR 0x0F20
23
+#define GICD_INMIR 0x0F80
24
+#define GICD_INMIRnE 0x3B00
25
#define GICD_IROUTER 0x6000
26
#define GICD_IDREGS 0xFFD0
27
28
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/intc/arm_gicv3_dist.c
31
+++ b/hw/intc/arm_gicv3_dist.c
32
@@ -XXX,XX +XXX,XX @@ static int gicd_ns_access(GICv3State *s, int irq)
33
return extract32(s->gicd_nsacr[irq / 16], (irq % 16) * 2, 2);
18
}
34
}
19
35
20
+static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
36
+static void gicd_write_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
21
+ uint32_t addr, uint32_t *spdata)
37
+ uint32_t *bmp, maskfn *maskfn,
38
+ int offset, uint32_t val)
22
+{
39
+{
23
+ /*
40
+ /*
24
+ * Read a word of data from the stack for the SG instruction,
41
+ * Helper routine to implement writing to a "set" register
25
+ * writing the value into *spdata. If the load succeeds, return
42
+ * (GICD_INMIR, etc).
26
+ * true; otherwise pend an appropriate exception and return false.
43
+ * Semantics implemented here:
27
+ * (We can't use data load helpers here that throw an exception
44
+ * RAZ/WI for SGIs, PPIs, unimplemented IRQs
28
+ * because of the context we're called in, which is halfway through
45
+ * Bits corresponding to Group 0 or Secure Group 1 interrupts RAZ/WI.
29
+ * arm_v7m_cpu_do_interrupt().)
46
+ * offset should be the offset in bytes of the register from the start
47
+ * of its group.
30
+ */
48
+ */
31
+ CPUState *cs = CPU(cpu);
49
+ int irq = offset * 8;
32
+ CPUARMState *env = &cpu->env;
33
+ MemTxAttrs attrs = {};
34
+ MemTxResult txres;
35
+ target_ulong page_size;
36
+ hwaddr physaddr;
37
+ int prot;
38
+ ARMMMUFaultInfo fi = {};
39
+ ARMCacheAttrs cacheattrs = {};
40
+ uint32_t value;
41
+
50
+
42
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
51
+ if (irq < GIC_INTERNAL || irq >= s->num_irq) {
43
+ &attrs, &prot, &page_size, &fi, &cacheattrs)) {
52
+ return;
44
+ /* MPU/SAU lookup failed */
45
+ if (fi.type == ARMFault_QEMU_SFault) {
46
+ qemu_log_mask(CPU_LOG_INT,
47
+ "...SecureFault during stack word read\n");
48
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
49
+ env->v7m.sfar = addr;
50
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
51
+ } else {
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "...MemManageFault during stack word read\n");
54
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_DACCVIOL_MASK |
55
+ R_V7M_CFSR_MMARVALID_MASK;
56
+ env->v7m.mmfar[M_REG_S] = addr;
57
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, false);
58
+ }
59
+ return false;
60
+ }
53
+ }
61
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
54
+ val &= mask_group_and_nsacr(s, attrs, maskfn, irq);
62
+ attrs, &txres);
55
+ *gic_bmp_ptr32(bmp, irq) = val;
63
+ if (txres != MEMTX_OK) {
56
+ gicv3_update(s, irq, 32);
64
+ /* BusFault trying to read the data */
65
+ qemu_log_mask(CPU_LOG_INT,
66
+ "...BusFault during stack word read\n");
67
+ env->v7m.cfsr[M_REG_NS] |=
68
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
69
+ env->v7m.bfar = addr;
70
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
71
+ return false;
72
+ }
73
+
74
+ *spdata = value;
75
+ return true;
76
+}
57
+}
77
+
58
+
78
static bool v7m_handle_execute_nsc(ARMCPU *cpu)
59
static void gicd_write_set_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
79
{
60
uint32_t *bmp,
80
/*
61
maskfn *maskfn,
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
62
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
82
*/
63
/* RAZ/WI since affinity routing is always enabled */
83
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
64
*data = 0;
84
", executing it\n", env->regs[15]);
65
return true;
85
+
66
+ case GICD_INMIR ... GICD_INMIR + 0x7f:
86
+ if (cpu_isar_feature(aa32_m_sec_state, cpu) &&
67
+ *data = (!s->nmi_support) ? 0 :
87
+ !arm_v7m_is_handler_mode(env)) {
68
+ gicd_read_bitmap_reg(s, attrs, s->nmi, NULL,
88
+ /*
69
+ offset - GICD_INMIR);
89
+ * v8.1M exception stack frame integrity check. Note that we
70
+ return true;
90
+ * must perform the memory access even if CCR_S.TRD is zero
71
case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
91
+ * and we aren't going to check what the data loaded is.
72
{
92
+ */
73
uint64_t r;
93
+ uint32_t spdata, sp;
74
@@ -XXX,XX +XXX,XX @@ static bool gicd_writel(GICv3State *s, hwaddr offset,
94
+
75
case GICD_SPENDSGIR ... GICD_SPENDSGIR + 0xf:
95
+ /*
76
/* RAZ/WI since affinity routing is always enabled */
96
+ * We know we are currently NS, so the S stack pointers must be
77
return true;
97
+ * in other_ss_{psp,msp}, not in regs[13]/other_sp.
78
+ case GICD_INMIR ... GICD_INMIR + 0x7f:
98
+ */
79
+ if (s->nmi_support) {
99
+ sp = v7m_using_psp(env) ? env->v7m.other_ss_psp : env->v7m.other_ss_msp;
80
+ gicd_write_bitmap_reg(s, attrs, s->nmi, NULL,
100
+ if (!v7m_read_sg_stack_word(cpu, mmu_idx, sp, &spdata)) {
81
+ offset - GICD_INMIR, value);
101
+ /* Stack access failed and an exception has been pended */
102
+ return false;
103
+ }
82
+ }
104
+
83
+ return true;
105
+ if (env->v7m.ccr[M_REG_S] & R_V7M_CCR_TRD_MASK) {
84
case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
106
+ if (((spdata & ~1) == 0xfefa125a) ||
85
{
107
+ !(env->v7m.control[M_REG_S] & 1)) {
86
uint64_t r;
108
+ goto gen_invep;
109
+ }
110
+ }
111
+ }
112
+
113
env->regs[14] &= ~1;
114
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
115
switch_v7m_security_state(env, true);
116
--
87
--
117
2.20.1
88
2.34.1
118
119
1
The RAS feature has a block of memory-mapped registers at offset
1
Add the NMIAR CPU interface registers which deal with acknowledging NMI.
2
0x5000 within the PPB. For a "minimal RAS" implementation we provide
3
no error records and so the only registers that exist in the block
4
are ERRIIDR and ERRDEVID.
5
2
6
The "RAZ/WI for privileged, BusFault for nonprivileged" behaviour
3
With the introduction of the NMI interrupt, there are some updates to the semantics for the
7
of the "nvic-default" region is actually valid for minimal-RAS,
4
registers ICC_IAR1_EL1 and ICC_HPPIR1_EL1. For the ICC_IAR1_EL1 register, it
8
so the main benefit of providing an explicit implementation of
5
should return 1022 if the intid has the non-maskable property. And for the
9
the register block is more accurate LOG_UNIMP messages, and a
6
ICC_NMIAR1_EL1 register, it should return 1023 if the intid does not have the
10
framework for where we could add a real RAS implementation later
7
non-maskable property. However, these changes are not necessary for the ICC_HPPIR1_EL1
11
if necessary.
8
register.
12
9
10
The APR and RPR also have NMI bits which need to be handled correctly.
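
(Illustrative only: the acknowledge flow this implies for a guest kernel
is roughly the sketch below. It is not code from this series, and
read_sysreg(), handle_nmi() and handle_irq() are placeholder names.)

    uint64_t intid = read_sysreg(ICC_IAR1_EL1);    /* placeholder accessor */

    if (intid == 1022) {
        /* highest priority pending interrupt is an NMI: ack it via NMIAR */
        intid = read_sysreg(ICC_NMIAR1_EL1);
        if (intid != 1023) {                       /* 1023 means "spurious" */
            handle_nmi(intid);
        }
    } else if (intid != 1023) {
        handle_irq(intid);
    }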
11
12
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
[PMM: Separate out whether cpuif supports NMI from whether the
15
GIC proper (IRI) supports NMI]
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20240407081733.3231820-19-ruanjinjie@huawei.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20201119215617.29887-27-peter.maydell@linaro.org
16
---
19
---
17
include/hw/intc/armv7m_nvic.h | 1 +
20
hw/intc/gicv3_internal.h | 5 +
18
hw/intc/armv7m_nvic.c | 56 +++++++++++++++++++++++++++++++++++
21
include/hw/intc/arm_gicv3_common.h | 7 ++
19
2 files changed, 57 insertions(+)
22
hw/intc/arm_gicv3_cpuif.c | 147 ++++++++++++++++++++++++++++-
23
hw/intc/trace-events | 1 +
24
4 files changed, 155 insertions(+), 5 deletions(-)
20
25
21
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
26
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
22
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/intc/armv7m_nvic.h
28
--- a/hw/intc/gicv3_internal.h
24
+++ b/include/hw/intc/armv7m_nvic.h
29
+++ b/hw/intc/gicv3_internal.h
25
@@ -XXX,XX +XXX,XX @@ struct NVICState {
30
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_VPENDBASER, VALID, 63, 1)
26
MemoryRegion sysreg_ns_mem;
31
#define ICC_CTLR_EL3_A3V (1U << 15)
27
MemoryRegion systickmem;
32
#define ICC_CTLR_EL3_NDS (1U << 17)
28
MemoryRegion systick_ns_mem;
33
29
+ MemoryRegion ras_mem;
34
+#define ICC_AP1R_EL1_NMI (1ULL << 63)
30
MemoryRegion container;
35
+#define ICC_RPR_EL1_NSNMI (1ULL << 62)
31
MemoryRegion defaultmem;
36
+#define ICC_RPR_EL1_NMI (1ULL << 63)
32
37
+
33
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
38
#define ICH_VMCR_EL2_VENG0_SHIFT 0
39
#define ICH_VMCR_EL2_VENG0 (1U << ICH_VMCR_EL2_VENG0_SHIFT)
40
#define ICH_VMCR_EL2_VENG1_SHIFT 1
41
@@ -XXX,XX +XXX,XX @@ FIELD(VTE, RDBASE, 42, RDBASE_PROCNUM_LENGTH)
42
/* Special interrupt IDs */
43
#define INTID_SECURE 1020
44
#define INTID_NONSECURE 1021
45
+#define INTID_NMI 1022
46
#define INTID_SPURIOUS 1023
47
48
/* Functions internal to the emulated GICv3 */
49
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
34
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/armv7m_nvic.c
51
--- a/include/hw/intc/arm_gicv3_common.h
36
+++ b/hw/intc/armv7m_nvic.c
52
+++ b/include/hw/intc/arm_gicv3_common.h
37
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_systick_ops = {
53
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
38
.endianness = DEVICE_NATIVE_ENDIAN,
54
55
/* This is temporary working state, to avoid a malloc in gicv3_update() */
56
bool seenbetter;
57
+
58
+ /*
59
+ * Whether the CPU interface has NMI support (FEAT_GICv3_NMI). The
60
+ * CPU interface may support NMIs even when the GIC proper (what the
61
+ * spec calls the IRI; the redistributors and distributor) does not.
62
+ */
63
+ bool nmi_support;
39
};
64
};
40
65
41
+
66
/*
42
+static MemTxResult ras_read(void *opaque, hwaddr addr,
67
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
43
+ uint64_t *data, unsigned size,
68
index XXXXXXX..XXXXXXX 100644
44
+ MemTxAttrs attrs)
69
--- a/hw/intc/arm_gicv3_cpuif.c
70
+++ b/hw/intc/arm_gicv3_cpuif.c
71
@@ -XXX,XX +XXX,XX @@
72
#include "hw/irq.h"
73
#include "cpu.h"
74
#include "target/arm/cpregs.h"
75
+#include "target/arm/cpu-features.h"
76
#include "sysemu/tcg.h"
77
#include "sysemu/qtest.h"
78
79
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
80
return intid;
81
}
82
83
+static uint64_t icv_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
45
+{
84
+{
46
+ if (attrs.user) {
85
+ /* todo */
47
+ return MEMTX_ERROR;
86
+ uint64_t intid = INTID_SPURIOUS;
48
+ }
87
+ return intid;
49
+
50
+ switch (addr) {
51
+ case 0xe10: /* ERRIIDR */
52
+ /* architect field = Arm; product/variant/revision 0 */
53
+ *data = 0x43b;
54
+ break;
55
+ case 0xfc8: /* ERRDEVID */
56
+ /* Minimal RAS: we implement 0 error record indexes */
57
+ *data = 0;
58
+ break;
59
+ default:
60
+ qemu_log_mask(LOG_UNIMP, "Read RAS register offset 0x%x\n",
61
+ (uint32_t)addr);
62
+ *data = 0;
63
+ break;
64
+ }
65
+ return MEMTX_OK;
66
+}
88
+}
67
+
89
+
68
+static MemTxResult ras_write(void *opaque, hwaddr addr,
90
static uint32_t icc_fullprio_mask(GICv3CPUState *cs)
69
+ uint64_t value, unsigned size,
91
{
70
+ MemTxAttrs attrs)
92
/*
93
@@ -XXX,XX +XXX,XX @@ static int icc_highest_active_prio(GICv3CPUState *cs)
94
*/
95
int i;
96
97
+ if (cs->nmi_support) {
98
+ /*
99
+ * If an NMI is active this takes precedence over anything else
100
+ * for priority purposes; the NMI bit is only in the AP1R0 bit.
101
+ * We return here the effective priority of the NMI, which is
102
+ * either 0x0 or 0x80. Callers will need to check NMI again for
103
+ * purposes of either setting the RPR register bits or for
104
+ * prioritization of NMI vs non-NMI.
105
+ */
106
+ if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
107
+ return 0;
108
+ }
109
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
110
+ return (cs->gic->gicd_ctlr & GICD_CTLR_DS) ? 0 : 0x80;
111
+ }
112
+ }
113
+
114
for (i = 0; i < icc_num_aprs(cs); i++) {
115
uint32_t apr = cs->icc_apr[GICV3_G0][i] |
116
cs->icc_apr[GICV3_G1][i] | cs->icc_apr[GICV3_G1NS][i];
117
@@ -XXX,XX +XXX,XX @@ static bool icc_hppi_can_preempt(GICv3CPUState *cs)
118
*/
119
int rprio;
120
uint32_t mask;
121
+ ARMCPU *cpu = ARM_CPU(cs->cpu);
122
+ CPUARMState *env = &cpu->env;
123
124
if (icc_no_enabled_hppi(cs)) {
125
return false;
126
}
127
128
- if (cs->hppi.prio >= cs->icc_pmr_el1) {
129
+ if (cs->hppi.nmi) {
130
+ if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS) &&
131
+ cs->hppi.grp == GICV3_G1NS) {
132
+ if (cs->icc_pmr_el1 < 0x80) {
133
+ return false;
134
+ }
135
+ if (arm_is_secure(env) && cs->icc_pmr_el1 == 0x80) {
136
+ return false;
137
+ }
138
+ }
139
+ } else if (cs->hppi.prio >= cs->icc_pmr_el1) {
140
/* Priority mask masks this interrupt */
141
return false;
142
}
143
@@ -XXX,XX +XXX,XX @@ static bool icc_hppi_can_preempt(GICv3CPUState *cs)
144
return true;
145
}
146
147
+ if (cs->hppi.nmi && (cs->hppi.prio & mask) == (rprio & mask)) {
148
+ if (!(cs->icc_apr[cs->hppi.grp][0] & ICC_AP1R_EL1_NMI)) {
149
+ return true;
150
+ }
151
+ }
152
+
153
return false;
154
}
155
156
@@ -XXX,XX +XXX,XX @@ static void icc_activate_irq(GICv3CPUState *cs, int irq)
157
int aprbit = prio >> (8 - cs->prebits);
158
int regno = aprbit / 32;
159
int regbit = aprbit % 32;
160
+ bool nmi = cs->hppi.nmi;
161
162
- cs->icc_apr[cs->hppi.grp][regno] |= (1 << regbit);
163
+ if (nmi) {
164
+ cs->icc_apr[cs->hppi.grp][regno] |= ICC_AP1R_EL1_NMI;
165
+ } else {
166
+ cs->icc_apr[cs->hppi.grp][regno] |= (1 << regbit);
167
+ }
168
169
if (irq < GIC_INTERNAL) {
170
cs->gicr_iactiver0 = deposit32(cs->gicr_iactiver0, irq, 1, 1);
171
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_iar0_read(CPUARMState *env, const ARMCPRegInfo *ri)
172
static uint64_t icc_iar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
173
{
174
GICv3CPUState *cs = icc_cs_from_env(env);
175
+ int el = arm_current_el(env);
176
uint64_t intid;
177
178
if (icv_access(env, HCR_IMO)) {
179
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_iar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
180
}
181
182
if (!gicv3_intid_is_special(intid)) {
183
- icc_activate_irq(cs, intid);
184
+ if (cs->hppi.nmi && env->cp15.sctlr_el[el] & SCTLR_NMI) {
185
+ intid = INTID_NMI;
186
+ } else {
187
+ icc_activate_irq(cs, intid);
188
+ }
189
}
190
191
trace_gicv3_icc_iar1_read(gicv3_redist_affid(cs), intid);
192
return intid;
193
}
194
195
+static uint64_t icc_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
71
+{
196
+{
72
+ if (attrs.user) {
197
+ GICv3CPUState *cs = icc_cs_from_env(env);
73
+ return MEMTX_ERROR;
198
+ uint64_t intid;
74
+ }
199
+
75
+
200
+ if (icv_access(env, HCR_IMO)) {
76
+ switch (addr) {
201
+ return icv_nmiar1_read(env, ri);
77
+ default:
202
+ }
78
+ qemu_log_mask(LOG_UNIMP, "Write to RAS register offset 0x%x\n",
203
+
79
+ (uint32_t)addr);
204
+ if (!icc_hppi_can_preempt(cs)) {
80
+ break;
205
+ intid = INTID_SPURIOUS;
81
+ }
206
+ } else {
82
+ return MEMTX_OK;
207
+ intid = icc_hppir1_value(cs, env);
208
+ }
209
+
210
+ if (!gicv3_intid_is_special(intid)) {
211
+ if (!cs->hppi.nmi) {
212
+ intid = INTID_SPURIOUS;
213
+ } else {
214
+ icc_activate_irq(cs, intid);
215
+ }
216
+ }
217
+
218
+ trace_gicv3_icc_nmiar1_read(gicv3_redist_affid(cs), intid);
219
+ return intid;
83
+}
220
+}
84
+
221
+
85
+static const MemoryRegionOps ras_ops = {
222
static void icc_drop_prio(GICv3CPUState *cs, int grp)
86
+ .read_with_attrs = ras_read,
223
{
87
+ .write_with_attrs = ras_write,
224
/* Drop the priority of the currently active interrupt in
88
+ .endianness = DEVICE_NATIVE_ENDIAN,
225
@@ -XXX,XX +XXX,XX @@ static void icc_drop_prio(GICv3CPUState *cs, int grp)
226
if (!*papr) {
227
continue;
228
}
229
+
230
+ if (i == 0 && cs->nmi_support && (*papr & ICC_AP1R_EL1_NMI)) {
231
+ *papr &= (~ICC_AP1R_EL1_NMI);
232
+ break;
233
+ }
234
+
235
/* Clear the lowest set bit */
236
*papr &= *papr - 1;
237
break;
238
@@ -XXX,XX +XXX,XX @@ static int icc_highest_active_group(GICv3CPUState *cs)
239
*/
240
int i;
241
242
+ if (cs->nmi_support) {
243
+ if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
244
+ return GICV3_G1;
245
+ }
246
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
247
+ return GICV3_G1NS;
248
+ }
249
+ }
250
+
251
for (i = 0; i < ARRAY_SIZE(cs->icc_apr[0]); i++) {
252
int g0ctz = ctz32(cs->icc_apr[GICV3_G0][i]);
253
int g1ctz = ctz32(cs->icc_apr[GICV3_G1][i]);
254
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
255
return;
256
}
257
258
- cs->icc_apr[grp][regno] = value & 0xFFFFFFFFU;
259
+ if (cs->nmi_support) {
260
+ cs->icc_apr[grp][regno] = value & (0xFFFFFFFFU | ICC_AP1R_EL1_NMI);
261
+ } else {
262
+ cs->icc_apr[grp][regno] = value & 0xFFFFFFFFU;
263
+ }
264
gicv3_cpuif_update(cs);
265
}
266
267
@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
268
static uint64_t icc_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
269
{
270
GICv3CPUState *cs = icc_cs_from_env(env);
271
- int prio;
272
+ uint64_t prio;
273
274
if (icv_access(env, HCR_FMO | HCR_IMO)) {
275
return icv_rpr_read(env, ri);
276
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
277
}
278
}
279
280
+ if (cs->nmi_support) {
281
+ /* NMI info is reported in the high bits of RPR */
282
+ if (arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env)) {
283
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
284
+ prio |= ICC_RPR_EL1_NMI;
285
+ }
286
+ } else {
287
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
288
+ prio |= ICC_RPR_EL1_NSNMI;
289
+ }
290
+ if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
291
+ prio |= ICC_RPR_EL1_NMI;
292
+ }
293
+ }
294
+ }
295
+
296
trace_gicv3_icc_rpr_read(gicv3_redist_affid(cs), prio);
297
return prio;
298
}
299
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_icc_apxr23_reginfo[] = {
300
},
301
};
302
303
+static const ARMCPRegInfo gicv3_cpuif_gicv3_nmi_reginfo[] = {
304
+ { .name = "ICC_NMIAR1_EL1", .state = ARM_CP_STATE_BOTH,
305
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 5,
306
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
307
+ .access = PL1_R, .accessfn = gicv3_irq_access,
308
+ .readfn = icc_nmiar1_read,
309
+ },
89
+};
310
+};
90
+
311
+
91
/*
312
static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
92
* Unassigned portions of the PPB space are RAZ/WI for privileged
313
{
93
* accesses, and fault for non-privileged accesses.
314
GICv3CPUState *cs = icc_cs_from_env(env);
94
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
315
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
95
&s->systick_ns_mem, 1);
316
*/
96
}
317
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
97
318
98
+ if (cpu_isar_feature(aa32_ras, s->cpu)) {
319
+ /*
99
+ memory_region_init_io(&s->ras_mem, OBJECT(s),
320
+ * If the CPU implements FEAT_NMI and FEAT_GICv3 it must also
100
+ &ras_ops, s, "nvic_ras", 0x1000);
321
+ * implement FEAT_GICv3_NMI, which is the CPU interface part
101
+ memory_region_add_subregion(&s->container, 0x5000, &s->ras_mem);
322
+ * of NMI support. This is distinct from whether the GIC proper
102
+ }
323
+ * (redistributors and distributor) have NMI support. In QEMU
103
+
324
+ * that is a property of the GIC device in s->nmi_support;
104
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
325
+ * cs->nmi_support indicates the CPU interface's support.
105
}
326
+ */
106
327
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
328
+ cs->nmi_support = true;
329
+ define_arm_cp_regs(cpu, gicv3_cpuif_gicv3_nmi_reginfo);
330
+ }
331
+
332
/*
333
* The CPU implementation specifies the number of supported
334
* bits of physical priority. For backwards compatibility
335
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
336
index XXXXXXX..XXXXXXX 100644
337
--- a/hw/intc/trace-events
338
+++ b/hw/intc/trace-events
339
@@ -XXX,XX +XXX,XX @@ gicv3_cpuif_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel) "GICv3 CPU i/f
340
gicv3_icc_generate_sgi(uint32_t cpuid, int irq, int irm, uint32_t aff, uint32_t targetlist) "GICv3 CPU i/f 0x%x generating SGI %d IRM %d target affinity 0x%xxx targetlist 0x%x"
341
gicv3_icc_iar0_read(uint32_t cpu, uint64_t val) "GICv3 ICC_IAR0 read cpu 0x%x value 0x%" PRIx64
342
gicv3_icc_iar1_read(uint32_t cpu, uint64_t val) "GICv3 ICC_IAR1 read cpu 0x%x value 0x%" PRIx64
343
+gicv3_icc_nmiar1_read(uint32_t cpu, uint64_t val) "GICv3 ICC_NMIAR1 read cpu 0x%x value 0x%" PRIx64
344
gicv3_icc_eoir_write(int grp, uint32_t cpu, uint64_t val) "GICv3 ICC_EOIR%d write cpu 0x%x value 0x%" PRIx64
345
gicv3_icc_hppir0_read(uint32_t cpu, uint64_t val) "GICv3 ICC_HPPIR0 read cpu 0x%x value 0x%" PRIx64
346
gicv3_icc_hppir1_read(uint32_t cpu, uint64_t val) "GICv3 ICC_HPPIR1 read cpu 0x%x value 0x%" PRIx64
107
--
347
--
108
2.20.1
348
2.34.1
109
110
1
From: Alex Chen <alex.chen@huawei.com>
1
Implement icv_nmiar1_read() for icc_nmiar1_read(), and add definitions for the
2
2
ICH_LR_EL2.NMI and ICH_AP1R_EL2.NMI bits.
3
We should use printf format specifier "%u" instead of "%d" for
3
4
argument of type "unsigned int".
4
If FEAT_GICv3_NMI is supported, ich_ap_write() should consider the ICV_AP1R_EL1.NMI
5
5
bit. In icv_activate_irq() and icv_eoir_write(), the ICV_AP1R_EL1.NMI bit
6
Reported-by: Euler Robot <euler.robot@huawei.com>
6
should be set or cleared according to the non-maskable property, and the RPR
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
7
priority should also have its NMI bit updated according to the APR priority NMI bit.
8
Message-id: 20201126111109.112238-3-alex.chen@huawei.com
8
9
By the way, add gicv3_icv_nmiar1_read trace event.
10
11
If the hpp irq is an NMI, the icv iar read should return 1022 and trap for
12
NMI again.
13
14
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
[PMM: use cs->nmi_support instead of cs->gic->nmi_support]
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Message-id: 20240407081733.3231820-20-ruanjinjie@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
20
---
12
hw/misc/imx31_ccm.c | 14 +++++++-------
21
hw/intc/gicv3_internal.h | 4 ++
13
hw/misc/imx_ccm.c | 4 ++--
22
hw/intc/arm_gicv3_cpuif.c | 105 +++++++++++++++++++++++++++++++++-----
14
2 files changed, 9 insertions(+), 9 deletions(-)
23
hw/intc/trace-events | 1 +
15
24
3 files changed, 98 insertions(+), 12 deletions(-)
16
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
25
26
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
17
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/misc/imx31_ccm.c
28
--- a/hw/intc/gicv3_internal.h
19
+++ b/hw/misc/imx31_ccm.c
29
+++ b/hw/intc/gicv3_internal.h
20
@@ -XXX,XX +XXX,XX @@ static const char *imx31_ccm_reg_name(uint32_t reg)
30
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_VPENDBASER, VALID, 63, 1)
21
case IMX31_CCM_PDR2_REG:
31
#define ICH_LR_EL2_PRIORITY_SHIFT 48
22
return "PDR2";
32
#define ICH_LR_EL2_PRIORITY_LENGTH 8
23
default:
33
#define ICH_LR_EL2_PRIORITY_MASK (0xffULL << ICH_LR_EL2_PRIORITY_SHIFT)
24
- sprintf(unknown, "[%d ?]", reg);
34
+#define ICH_LR_EL2_NMI (1ULL << 59)
25
+ sprintf(unknown, "[%u ?]", reg);
35
#define ICH_LR_EL2_GROUP (1ULL << 60)
26
return unknown;
36
#define ICH_LR_EL2_HW (1ULL << 61)
37
#define ICH_LR_EL2_STATE_SHIFT 62
38
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_VPENDBASER, VALID, 63, 1)
39
#define ICH_VTR_EL2_PREBITS_SHIFT 26
40
#define ICH_VTR_EL2_PRIBITS_SHIFT 29
41
42
+#define ICV_AP1R_EL1_NMI (1ULL << 63)
43
+#define ICV_RPR_EL1_NMI (1ULL << 63)
44
+
45
/* ITS Registers */
46
47
FIELD(GITS_BASER, SIZE, 0, 8)
48
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/hw/intc/arm_gicv3_cpuif.c
51
+++ b/hw/intc/arm_gicv3_cpuif.c
52
@@ -XXX,XX +XXX,XX @@ static int ich_highest_active_virt_prio(GICv3CPUState *cs)
53
int i;
54
int aprmax = ich_num_aprs(cs);
55
56
+ if (cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI) {
57
+ return 0x0;
58
+ }
59
+
60
for (i = 0; i < aprmax; i++) {
61
uint32_t apr = cs->ich_apr[GICV3_G0][i] |
62
cs->ich_apr[GICV3_G1NS][i];
63
@@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs)
64
* correct behaviour.
65
*/
66
int prio = 0xff;
67
+ bool nmi = false;
68
69
if (!(cs->ich_vmcr_el2 & (ICH_VMCR_EL2_VENG0 | ICH_VMCR_EL2_VENG1))) {
70
/* Both groups disabled, definitely nothing to do */
71
@@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs)
72
73
for (i = 0; i < cs->num_list_regs; i++) {
74
uint64_t lr = cs->ich_lr_el2[i];
75
+ bool thisnmi;
76
int thisprio;
77
78
if (ich_lr_state(lr) != ICH_LR_EL2_STATE_PENDING) {
79
@@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs)
80
}
81
}
82
83
+ thisnmi = lr & ICH_LR_EL2_NMI;
84
thisprio = ich_lr_prio(lr);
85
86
- if (thisprio < prio) {
87
+ if ((thisprio < prio) || ((thisprio == prio) && (thisnmi & (!nmi)))) {
88
prio = thisprio;
89
+ nmi = thisnmi;
90
idx = i;
91
}
27
}
92
}
28
}
93
@@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
29
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
94
* equivalent of these checks.
30
freq = CKIH_FREQ;
95
*/
96
int grp;
97
+ bool is_nmi;
98
uint32_t mask, prio, rprio, vpmr;
99
100
if (!(cs->ich_hcr_el2 & ICH_HCR_EL2_EN)) {
101
@@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
102
*/
103
104
prio = ich_lr_prio(lr);
105
+ is_nmi = lr & ICH_LR_EL2_NMI;
106
vpmr = extract64(cs->ich_vmcr_el2, ICH_VMCR_EL2_VPMR_SHIFT,
107
ICH_VMCR_EL2_VPMR_LENGTH);
108
109
- if (prio >= vpmr) {
110
+ if (!is_nmi && prio >= vpmr) {
111
/* Priority mask masks this interrupt */
112
return false;
31
}
113
}
32
114
@@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
33
- DPRINTF("freq = %d\n", freq);
115
return true;
34
+ DPRINTF("freq = %u\n", freq);
35
36
return freq;
37
}
38
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mpll_clk(IMXCCMState *dev)
39
freq = imx_ccm_calc_pll(s->reg[IMX31_CCM_MPCTL_REG],
40
imx31_ccm_get_pll_ref_clk(dev));
41
42
- DPRINTF("freq = %d\n", freq);
43
+ DPRINTF("freq = %u\n", freq);
44
45
return freq;
46
}
47
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_mcu_main_clk(IMXCCMState *dev)
48
freq = imx31_ccm_get_mpll_clk(dev);
49
}
116
}
50
117
51
- DPRINTF("freq = %d\n", freq);
118
+ if ((prio & mask) == (rprio & mask) && is_nmi &&
52
+ DPRINTF("freq = %u\n", freq);
119
+ !(cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI)) {
53
120
+ return true;
54
return freq;
121
+ }
55
}
122
+
56
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_hclk_clk(IMXCCMState *dev)
123
return false;
57
freq = imx31_ccm_get_mcu_main_clk(dev)
124
}
58
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], MAX));
125
59
126
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
60
- DPRINTF("freq = %d\n", freq);
127
61
+ DPRINTF("freq = %u\n", freq);
128
trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
62
129
63
return freq;
130
- cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
64
}
131
+ if (cs->nmi_support) {
65
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_ipg_clk(IMXCCMState *dev)
132
+ cs->ich_apr[grp][regno] = value & (0xFFFFFFFFU | ICV_AP1R_EL1_NMI);
66
freq = imx31_ccm_get_hclk_clk(dev)
133
+ } else {
67
/ (1 + EXTRACT(s->reg[IMX31_CCM_PDR0_REG], IPG));
134
+ cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
68
135
+ }
69
- DPRINTF("freq = %d\n", freq);
136
70
+ DPRINTF("freq = %u\n", freq);
137
gicv3_cpuif_virt_irq_fiq_update(cs);
71
138
return;
72
return freq;
139
@@ -XXX,XX +XXX,XX @@ static void icv_ctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
73
}
140
static uint64_t icv_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
74
@@ -XXX,XX +XXX,XX @@ static uint32_t imx31_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
141
{
75
break;
142
GICv3CPUState *cs = icc_cs_from_env(env);
143
- int prio = ich_highest_active_virt_prio(cs);
144
+ uint64_t prio = ich_highest_active_virt_prio(cs);
145
+
146
+ if (cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI) {
147
+ prio |= ICV_RPR_EL1_NMI;
148
+ }
149
150
trace_gicv3_icv_rpr_read(gicv3_redist_affid(cs), prio);
151
return prio;
152
@@ -XXX,XX +XXX,XX @@ static void icv_activate_irq(GICv3CPUState *cs, int idx, int grp)
153
*/
154
uint32_t mask = icv_gprio_mask(cs, grp);
155
int prio = ich_lr_prio(cs->ich_lr_el2[idx]) & mask;
156
+ bool nmi = cs->ich_lr_el2[idx] & ICH_LR_EL2_NMI;
157
int aprbit = prio >> (8 - cs->vprebits);
158
int regno = aprbit / 32;
159
int regbit = aprbit % 32;
160
161
cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
162
cs->ich_lr_el2[idx] |= ICH_LR_EL2_STATE_ACTIVE_BIT;
163
- cs->ich_apr[grp][regno] |= (1 << regbit);
164
+
165
+ if (nmi) {
166
+ cs->ich_apr[grp][regno] |= ICV_AP1R_EL1_NMI;
167
+ } else {
168
+ cs->ich_apr[grp][regno] |= (1 << regbit);
169
+ }
170
}
171
172
static void icv_activate_vlpi(GICv3CPUState *cs)
173
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
174
int grp = ri->crm == 8 ? GICV3_G0 : GICV3_G1NS;
175
int idx = hppvi_index(cs);
176
uint64_t intid = INTID_SPURIOUS;
177
+ int el = arm_current_el(env);
178
179
if (idx == HPPVI_INDEX_VLPI) {
180
if (cs->hppvlpi.grp == grp && icv_hppvlpi_can_preempt(cs)) {
181
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
182
} else if (idx >= 0) {
183
uint64_t lr = cs->ich_lr_el2[idx];
184
int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0;
185
+ bool nmi = env->cp15.sctlr_el[el] & SCTLR_NMI && lr & ICH_LR_EL2_NMI;
186
187
if (thisgrp == grp && icv_hppi_can_preempt(cs, lr)) {
188
intid = ich_lr_vintid(lr);
189
if (!gicv3_intid_is_special(intid)) {
190
- icv_activate_irq(cs, idx, grp);
191
+ if (!nmi) {
192
+ icv_activate_irq(cs, idx, grp);
193
+ } else {
194
+ intid = INTID_NMI;
195
+ }
196
} else {
197
/* Interrupt goes from Pending to Invalid */
198
cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
199
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
200
201
static uint64_t icv_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
202
{
203
- /* todo */
204
+ GICv3CPUState *cs = icc_cs_from_env(env);
205
+ int idx = hppvi_index(cs);
206
uint64_t intid = INTID_SPURIOUS;
207
+
208
+ if (idx >= 0 && idx != HPPVI_INDEX_VLPI) {
209
+ uint64_t lr = cs->ich_lr_el2[idx];
210
+ int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0;
211
+
212
+ if ((thisgrp == GICV3_G1NS) && icv_hppi_can_preempt(cs, lr)) {
213
+ intid = ich_lr_vintid(lr);
214
+ if (!gicv3_intid_is_special(intid)) {
215
+ if (lr & ICH_LR_EL2_NMI) {
216
+ icv_activate_irq(cs, idx, GICV3_G1NS);
217
+ } else {
218
+ intid = INTID_SPURIOUS;
219
+ }
220
+ } else {
221
+ /* Interrupt goes from Pending to Invalid */
222
+ cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
223
+ /*
224
+ * We will now return the (bogus) ID from the list register,
225
+ * as per the pseudocode.
226
+ */
227
+ }
228
+ }
229
+ }
230
+
231
+ trace_gicv3_icv_nmiar1_read(gicv3_redist_affid(cs), intid);
232
+
233
+ gicv3_cpuif_virt_update(cs);
234
+
235
return intid;
236
}
237
238
@@ -XXX,XX +XXX,XX @@ static void icv_increment_eoicount(GICv3CPUState *cs)
239
ICH_HCR_EL2_EOICOUNT_LENGTH, eoicount + 1);
240
}
241
242
-static int icv_drop_prio(GICv3CPUState *cs)
243
+static int icv_drop_prio(GICv3CPUState *cs, bool *nmi)
244
{
245
/* Drop the priority of the currently active virtual interrupt
246
* (favouring group 0 if there is a set active bit at
247
@@ -XXX,XX +XXX,XX @@ static int icv_drop_prio(GICv3CPUState *cs)
248
continue;
249
}
250
251
+ if (i == 0 && cs->nmi_support && (*papr1 & ICV_AP1R_EL1_NMI)) {
252
+ *papr1 &= (~ICV_AP1R_EL1_NMI);
253
+ *nmi = true;
254
+ return 0xff;
255
+ }
256
+
257
/* We can't just use the bit-twiddling hack icc_drop_prio() does
258
* because we need to return the bit number we cleared so
259
* it can be compared against the list register's priority field.
260
@@ -XXX,XX +XXX,XX @@ static void icv_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
261
int irq = value & 0xffffff;
262
int grp = ri->crm == 8 ? GICV3_G0 : GICV3_G1NS;
263
int idx, dropprio;
264
+ bool nmi = false;
265
266
trace_gicv3_icv_eoir_write(ri->crm == 8 ? 0 : 1,
267
gicv3_redist_affid(cs), value);
268
@@ -XXX,XX +XXX,XX @@ static void icv_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
269
* error checks" (because that lets us avoid scanning the AP
270
* registers twice).
271
*/
272
- dropprio = icv_drop_prio(cs);
273
- if (dropprio == 0xff) {
274
+ dropprio = icv_drop_prio(cs, &nmi);
275
+ if (dropprio == 0xff && !nmi) {
276
/* No active interrupt. It is CONSTRAINED UNPREDICTABLE
277
* whether the list registers are checked in this
278
* situation; we choose not to.
279
@@ -XXX,XX +XXX,XX @@ static void icv_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
280
uint64_t lr = cs->ich_lr_el2[idx];
281
int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0;
282
int lr_gprio = ich_lr_prio(lr) & icv_gprio_mask(cs, grp);
283
+ bool thisnmi = lr & ICH_LR_EL2_NMI;
284
285
- if (thisgrp == grp && lr_gprio == dropprio) {
286
+ if (thisgrp == grp && (lr_gprio == dropprio || (thisnmi & nmi))) {
287
if (!icv_eoi_split(env, cs) || irq >= GICV3_LPI_INTID_START) {
288
/*
289
* Priority drop and deactivate not split: deactivate irq now.
290
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
291
292
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
293
294
- cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
295
+ if (cs->nmi_support) {
296
+ cs->ich_apr[grp][regno] = value & (0xFFFFFFFFU | ICV_AP1R_EL1_NMI);
297
+ } else {
298
+ cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
299
+ }
300
gicv3_cpuif_virt_irq_fiq_update(cs);
301
}
302
303
@@ -XXX,XX +XXX,XX @@ static void ich_lr_write(CPUARMState *env, const ARMCPRegInfo *ri,
304
8 - cs->vpribits, 0);
76
}
305
}
77
306
78
- DPRINTF("Clock = %d) = %d\n", clock, freq);
307
+ /* Enforce RES0 bit in NMI field when FEAT_GICv3_NMI is not implemented */
79
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
308
+ if (!cs->nmi_support) {
80
309
+ value &= ~ICH_LR_EL2_NMI;
81
return freq;
310
+ }
82
}
311
+
83
diff --git a/hw/misc/imx_ccm.c b/hw/misc/imx_ccm.c
312
cs->ich_lr_el2[regno] = value;
313
gicv3_cpuif_virt_update(cs);
314
}
315
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
84
index XXXXXXX..XXXXXXX 100644
316
index XXXXXXX..XXXXXXX 100644
85
--- a/hw/misc/imx_ccm.c
317
--- a/hw/intc/trace-events
86
+++ b/hw/misc/imx_ccm.c
318
+++ b/hw/intc/trace-events
87
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
319
@@ -XXX,XX +XXX,XX @@ gicv3_icv_rpr_read(uint32_t cpu, uint64_t val) "GICv3 ICV_RPR read cpu 0x%x valu
88
freq = klass->get_clock_frequency(dev, clock);
320
gicv3_icv_hppir_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_HPPIR%d read cpu 0x%x value 0x%" PRIx64
89
}
321
gicv3_icv_dir_write(uint32_t cpu, uint64_t val) "GICv3 ICV_DIR write cpu 0x%x value 0x%" PRIx64
90
322
gicv3_icv_iar_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_IAR%d read cpu 0x%x value 0x%" PRIx64
91
- DPRINTF("(clock = %d) = %d\n", clock, freq);
323
+gicv3_icv_nmiar1_read(uint32_t cpu, uint64_t val) "GICv3 ICV_NMIAR1 read cpu 0x%x value 0x%" PRIx64
92
+ DPRINTF("(clock = %d) = %u\n", clock, freq);
324
gicv3_icv_eoir_write(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_EOIR%d write cpu 0x%x value 0x%" PRIx64
93
325
gicv3_cpuif_virt_update(uint32_t cpuid, int idx, int hppvlpi, int grp, int prio) "GICv3 CPU i/f 0x%x virt HPPI update LR index %d HPPVLPI %d grp %d prio %d"
94
return freq;
326
gicv3_cpuif_virt_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting FIQ %d IRQ %d"
95
}
96
@@ -XXX,XX +XXX,XX @@ uint32_t imx_ccm_calc_pll(uint32_t pllreg, uint32_t base_freq)
97
freq = ((2 * (base_freq >> 10) * (mfi * mfd + mfn)) /
98
(mfd * pd)) << 10;
99
100
- DPRINTF("(pllreg = 0x%08x, base_freq = %d) = %d\n", pllreg, base_freq,
101
+ DPRINTF("(pllreg = 0x%08x, base_freq = %u) = %d\n", pllreg, base_freq,
102
freq);
103
104
return freq;
105
--
327
--
106
2.20.1
328
2.34.1
107
108
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Trusted Firmware now supports A72 on sbsa-ref by default [1] so enable
3
If the GICD_CTLR_DS bit is zero and the NMI is non-secure, the NMI priority is
4
it for QEMU as well. A53 was already enabled there.
4
higher than 0x80; otherwise it is higher than 0x0. Save the interrupt's
5
non-maskable property in hppi.nmi so the NMI exception can be delivered. Since both the GICR
6
and the GICD can deliver NMIs, it is necessary to check whether the pending
7
irq is an NMI in both gicv3_redist_update_noirqset() and gicv3_update_noirqset().
5
8
6
1. https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/7117
9
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
8
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20201120141705.246690-1-marcin.juszkiewicz@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20240407081733.3231820-21-ruanjinjie@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
---
14
hw/arm/sbsa-ref.c | 23 ++++++++++++++++++++---
15
hw/intc/arm_gicv3.c | 67 +++++++++++++++++++++++++++++++++-----
15
1 file changed, 20 insertions(+), 3 deletions(-)
16
hw/intc/arm_gicv3_common.c | 3 ++
17
hw/intc/arm_gicv3_redist.c | 3 ++
18
3 files changed, 64 insertions(+), 9 deletions(-)
16
19
17
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
20
diff --git a/hw/intc/arm_gicv3.c b/hw/intc/arm_gicv3.c
18
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/sbsa-ref.c
22
--- a/hw/intc/arm_gicv3.c
20
+++ b/hw/arm/sbsa-ref.c
23
+++ b/hw/intc/arm_gicv3.c
21
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
24
@@ -XXX,XX +XXX,XX @@
22
[SBSA_GWDT] = 16,
25
#include "hw/intc/arm_gicv3.h"
23
};
26
#include "gicv3_internal.h"
24
27
25
+static const char * const valid_cpus[] = {
28
-static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
26
+ ARM_CPU_TYPE_NAME("cortex-a53"),
29
+static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio, bool nmi)
27
+ ARM_CPU_TYPE_NAME("cortex-a57"),
30
{
28
+ ARM_CPU_TYPE_NAME("cortex-a72"),
31
/* Return true if this IRQ at this priority should take
29
+};
32
* precedence over the current recorded highest priority
30
+
33
@@ -XXX,XX +XXX,XX @@ static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
31
+static bool cpu_type_valid(const char *cpu)
34
* is the same as this one (a property which the calling code
35
* relies on).
36
*/
37
- if (prio < cs->hppi.prio) {
38
- return true;
39
+ if (prio != cs->hppi.prio) {
40
+ return prio < cs->hppi.prio;
41
}
42
+
43
+ /*
44
+ * The same priority IRQ with non-maskable property should signal to
45
+ * the CPU as it have the priority higher than the labelled 0x80 or 0x00.
46
+ */
47
+ if (nmi != cs->hppi.nmi) {
48
+ return nmi;
49
+ }
50
+
51
/* If multiple pending interrupts have the same priority then it is an
52
* IMPDEF choice which of them to signal to the CPU. We choose to
53
* signal the one with the lowest interrupt number.
54
*/
55
- if (prio == cs->hppi.prio && irq <= cs->hppi.irq) {
56
+ if (irq <= cs->hppi.irq) {
57
return true;
58
}
59
return false;
60
@@ -XXX,XX +XXX,XX @@ static uint32_t gicr_int_pending(GICv3CPUState *cs)
61
return pend;
62
}
63
64
+static bool gicv3_get_priority(GICv3CPUState *cs, bool is_redist, int irq,
65
+ uint8_t *prio)
32
+{
66
+{
33
+ int i;
67
+ uint32_t nmi = 0x0;
34
+
68
+
35
+ for (i = 0; i < ARRAY_SIZE(valid_cpus); i++) {
69
+ if (is_redist) {
36
+ if (strcmp(cpu, valid_cpus[i]) == 0) {
70
+ nmi = extract32(cs->gicr_inmir0, irq, 1);
37
+ return true;
71
+ } else {
72
+ nmi = *gic_bmp_ptr32(cs->gic->nmi, irq);
73
+ nmi = nmi & (1 << (irq & 0x1f));
74
+ }
75
+
76
+ if (nmi) {
77
+ /* DS = 0 & Non-secure NMI */
78
+ if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS) &&
79
+ ((is_redist && extract32(cs->gicr_igroupr0, irq, 1)) ||
80
+ (!is_redist && gicv3_gicd_group_test(cs->gic, irq)))) {
81
+ *prio = 0x80;
82
+ } else {
83
+ *prio = 0x0;
38
+ }
84
+ }
39
+ }
85
+
86
+ return true;
87
+ }
88
+
89
+ if (is_redist) {
90
+ *prio = cs->gicr_ipriorityr[irq];
91
+ } else {
92
+ *prio = cs->gic->gicd_ipriority[irq];
93
+ }
94
+
40
+ return false;
95
+ return false;
41
+}
96
+}
42
+
97
+
43
static uint64_t sbsa_ref_cpu_mp_affinity(SBSAMachineState *sms, int idx)
98
/* Update the interrupt status after state in a redistributor
44
{
99
* or CPU interface has changed, but don't tell the CPU i/f.
45
uint8_t clustersz = ARM_DEFAULT_CPUS_PER_CLUSTER;
100
*/
46
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_init(MachineState *machine)
101
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
47
const CPUArchIdList *possible_cpus;
102
uint8_t prio;
48
int n, sbsa_max_cpus;
103
int i;
49
104
uint32_t pend;
50
- if (strcmp(machine->cpu_type, ARM_CPU_TYPE_NAME("cortex-a57"))) {
105
+ bool nmi = false;
51
- error_report("sbsa-ref: CPU type other than the built-in "
106
52
- "cortex-a57 not supported");
107
/* Find out which redistributor interrupts are eligible to be
53
+ if (!cpu_type_valid(machine->cpu_type)) {
108
* signaled to the CPU interface.
54
+ error_report("mach-virt: CPU type %s not supported", machine->cpu_type);
109
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
55
exit(1);
110
if (!(pend & (1 << i))) {
56
}
111
continue;
112
}
113
- prio = cs->gicr_ipriorityr[i];
114
- if (irqbetter(cs, i, prio)) {
115
+ nmi = gicv3_get_priority(cs, true, i, &prio);
116
+ if (irqbetter(cs, i, prio, nmi)) {
117
cs->hppi.irq = i;
118
cs->hppi.prio = prio;
119
+ cs->hppi.nmi = nmi;
120
seenbetter = true;
121
}
122
}
123
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
124
if ((cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) && cs->gic->lpi_enable &&
125
(cs->gic->gicd_ctlr & GICD_CTLR_EN_GRP1NS) &&
126
(cs->hpplpi.prio != 0xff)) {
127
- if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio)) {
128
+ if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio, cs->hpplpi.nmi)) {
129
cs->hppi.irq = cs->hpplpi.irq;
130
cs->hppi.prio = cs->hpplpi.prio;
131
+ cs->hppi.nmi = cs->hpplpi.nmi;
132
cs->hppi.grp = cs->hpplpi.grp;
133
seenbetter = true;
134
}
135
@@ -XXX,XX +XXX,XX @@ static void gicv3_update_noirqset(GICv3State *s, int start, int len)
136
int i;
137
uint8_t prio;
138
uint32_t pend = 0;
139
+ bool nmi = false;
140
141
assert(start >= GIC_INTERNAL);
142
assert(len > 0);
143
@@ -XXX,XX +XXX,XX @@ static void gicv3_update_noirqset(GICv3State *s, int start, int len)
144
*/
145
continue;
146
}
147
- prio = s->gicd_ipriority[i];
148
- if (irqbetter(cs, i, prio)) {
149
+ nmi = gicv3_get_priority(cs, false, i, &prio);
150
+ if (irqbetter(cs, i, prio, nmi)) {
151
cs->hppi.irq = i;
152
cs->hppi.prio = prio;
153
+ cs->hppi.nmi = nmi;
154
cs->seenbetter = true;
155
}
156
}
157
@@ -XXX,XX +XXX,XX @@ void gicv3_full_update_noirqset(GICv3State *s)
158
159
for (i = 0; i < s->num_cpu; i++) {
160
s->cpu[i].hppi.prio = 0xff;
161
+ s->cpu[i].hppi.nmi = false;
162
}
163
164
/* Note that we can guarantee that these functions will not
165
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
166
index XXXXXXX..XXXXXXX 100644
167
--- a/hw/intc/arm_gicv3_common.c
168
+++ b/hw/intc/arm_gicv3_common.c
169
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_reset_hold(Object *obj)
170
memset(cs->gicr_ipriorityr, 0, sizeof(cs->gicr_ipriorityr));
171
172
cs->hppi.prio = 0xff;
173
+ cs->hppi.nmi = false;
174
cs->hpplpi.prio = 0xff;
175
+ cs->hpplpi.nmi = false;
176
cs->hppvlpi.prio = 0xff;
177
+ cs->hppvlpi.nmi = false;
178
179
/* State in the CPU interface must *not* be reset here, because it
180
* is part of the CPU's reset domain, not the GIC device's.
181
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
182
index XXXXXXX..XXXXXXX 100644
183
--- a/hw/intc/arm_gicv3_redist.c
184
+++ b/hw/intc/arm_gicv3_redist.c
185
@@ -XXX,XX +XXX,XX @@ static void update_for_one_lpi(GICv3CPUState *cs, int irq,
186
((prio == hpp->prio) && (irq <= hpp->irq))) {
187
hpp->irq = irq;
188
hpp->prio = prio;
189
+ hpp->nmi = false;
190
/* LPIs and vLPIs are always non-secure Grp1 interrupts */
191
hpp->grp = GICV3_G1NS;
192
}
193
@@ -XXX,XX +XXX,XX @@ static void update_for_all_lpis(GICv3CPUState *cs, uint64_t ptbase,
194
int i, bit;
195
196
hpp->prio = 0xff;
197
+ hpp->nmi = false;
198
199
for (i = GICV3_LPI_INTID_START / 8; i < pendt_size / 8; i++) {
200
address_space_read(as, ptbase + i, MEMTXATTRS_UNSPECIFIED, &pend, 1);
201
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_vlpi_only(GICv3CPUState *cs)
202
203
if (!FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, VALID)) {
204
cs->hppvlpi.prio = 0xff;
205
+ cs->hppvlpi.nmi = false;
206
return;
207
}
57
208
58
--
209
--
59
2.20.1
210
2.34.1
60
61
1
Implement the new-in-v8.1M FPCXT_S floating point system register.
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
This is for saving and restoring the secure floating point context,
3
and it reads and writes bits [27:0] from the FPSCR and the
4
CONTROL.SFPA bit in bit [31].
5
2
3
In CPU Interface, if the IRQ has the non-maskable property, report NMI to
4
the corresponding PE.
5
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20240407081733.3231820-22-ruanjinjie@huawei.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-14-peter.maydell@linaro.org
9
---
11
---
10
target/arm/translate-vfp.c.inc | 58 ++++++++++++++++++++++++++++++++++
12
hw/intc/arm_gicv3_cpuif.c | 4 ++++
11
1 file changed, 58 insertions(+)
13
1 file changed, 4 insertions(+)
12
14
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
15
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
17
--- a/hw/intc/arm_gicv3_cpuif.c
16
+++ b/target/arm/translate-vfp.c.inc
18
+++ b/hw/intc/arm_gicv3_cpuif.c
17
@@ -XXX,XX +XXX,XX @@ static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
19
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_update(GICv3CPUState *cs)
18
return false;
20
/* Tell the CPU about its highest priority pending interrupt */
21
int irqlevel = 0;
22
int fiqlevel = 0;
23
+ int nmilevel = 0;
24
ARMCPU *cpu = ARM_CPU(cs->cpu);
25
CPUARMState *env = &cpu->env;
26
27
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_update(GICv3CPUState *cs)
28
29
if (isfiq) {
30
fiqlevel = 1;
31
+ } else if (cs->hppi.nmi) {
32
+ nmilevel = 1;
33
} else {
34
irqlevel = 1;
19
}
35
}
20
break;
36
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_update(GICv3CPUState *cs)
21
+ case ARM_VFP_FPCXT_S:
37
22
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
38
qemu_set_irq(cs->parent_fiq, fiqlevel);
23
+ return false;
39
qemu_set_irq(cs->parent_irq, irqlevel);
24
+ }
40
+ qemu_set_irq(cs->parent_nmi, nmilevel);
25
+ if (!s->v8m_secure) {
41
}
26
+ return false;
42
27
+ }
43
static uint64_t icc_pmr_read(CPUARMState *env, const ARMCPRegInfo *ri)
28
+ break;
29
default:
30
return FPSysRegCheckFailed;
31
}
32
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
33
tcg_temp_free_i32(tmp);
34
break;
35
}
36
+ case ARM_VFP_FPCXT_S:
37
+ {
38
+ TCGv_i32 sfpa, control, fpscr;
39
+ /* Set FPSCR[27:0] and CONTROL.SFPA from value */
40
+ tmp = loadfn(s, opaque);
41
+ sfpa = tcg_temp_new_i32();
42
+ tcg_gen_shri_i32(sfpa, tmp, 31);
43
+ control = load_cpu_field(v7m.control[M_REG_S]);
44
+ tcg_gen_deposit_i32(control, control, sfpa,
45
+ R_V7M_CONTROL_SFPA_SHIFT, 1);
46
+ store_cpu_field(control, v7m.control[M_REG_S]);
47
+ fpscr = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
48
+ tcg_gen_andi_i32(fpscr, fpscr, FPCR_NZCV_MASK);
49
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
50
+ tcg_gen_or_i32(fpscr, fpscr, tmp);
51
+ store_cpu_field(fpscr, vfp.xregs[ARM_VFP_FPSCR]);
52
+ tcg_temp_free_i32(tmp);
53
+ tcg_temp_free_i32(sfpa);
54
+ break;
55
+ }
56
default:
57
g_assert_not_reached();
58
}
59
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
60
tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
61
storefn(s, opaque, tmp);
62
break;
63
+ case ARM_VFP_FPCXT_S:
64
+ {
65
+ TCGv_i32 control, sfpa, fpscr;
66
+ /* Bits [27:0] from FPSCR, bit [31] from CONTROL.SFPA */
67
+ tmp = tcg_temp_new_i32();
68
+ sfpa = tcg_temp_new_i32();
69
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
70
+ tcg_gen_andi_i32(tmp, tmp, ~FPCR_NZCV_MASK);
71
+ control = load_cpu_field(v7m.control[M_REG_S]);
72
+ tcg_gen_andi_i32(sfpa, control, R_V7M_CONTROL_SFPA_MASK);
73
+ tcg_gen_shli_i32(sfpa, sfpa, 31 - R_V7M_CONTROL_SFPA_SHIFT);
74
+ tcg_gen_or_i32(tmp, tmp, sfpa);
75
+ tcg_temp_free_i32(sfpa);
76
+ /*
77
+ * Store result before updating FPSCR etc, in case
78
+ * it is a memory write which causes an exception.
79
+ */
80
+ storefn(s, opaque, tmp);
81
+ /*
82
+ * Now we must reset FPSCR from FPDSCR_NS, and clear
83
+ * CONTROL.SFPA; so we'll end the TB here.
84
+ */
85
+ tcg_gen_andi_i32(control, control, ~R_V7M_CONTROL_SFPA_MASK);
86
+ store_cpu_field(control, v7m.control[M_REG_S]);
87
+ fpscr = load_cpu_field(v7m.fpdscr[M_REG_NS]);
88
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
89
+ tcg_temp_free_i32(fpscr);
90
+ gen_lookup_tb(s);
91
+ break;
92
+ }
93
default:
94
g_assert_not_reached();
95
}
96
--
44
--
97
2.20.1
45
2.34.1
98
99
1
From: Havard Skinnemoen <hskinnemoen@google.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Dump the collected random data after a randomness test failure.
3
In vCPU Interface, if the vIRQ has the non-maskable property, report
4
vINMI to the corresponding vPE.
4
5
5
Note that this relies on the test having called
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
g_test_set_nonfatal_assertions() so we don't abort immediately on the
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
assertion failure.
8
9
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
[PMM: minor commit message tweak]
9
Message-id: 20240407081733.3231820-23-ruanjinjie@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
tests/qtest/npcm7xx_rng-test.c | 12 ++++++++++++
12
hw/intc/arm_gicv3_cpuif.c | 14 ++++++++++++--
15
1 file changed, 12 insertions(+)
13
1 file changed, 12 insertions(+), 2 deletions(-)
16
14
17
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
15
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/tests/qtest/npcm7xx_rng-test.c
17
--- a/hw/intc/arm_gicv3_cpuif.c
20
+++ b/tests/qtest/npcm7xx_rng-test.c
18
+++ b/hw/intc/arm_gicv3_cpuif.c
21
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
22
20
int idx;
23
#include "libqtest-single.h"
21
int irqlevel = 0;
24
#include "qemu/bitops.h"
22
int fiqlevel = 0;
25
+#include "qemu-common.h"
23
+ int nmilevel = 0;
26
24
27
#define RNG_BASE_ADDR 0xf000b000
25
idx = hppvi_index(cs);
28
26
trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx,
29
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
30
/* Number of bits to collect for randomness tests. */
28
uint64_t lr = cs->ich_lr_el2[idx];
31
#define TEST_INPUT_BITS (128)
29
32
30
if (icv_hppi_can_preempt(cs, lr)) {
33
+static void dump_buf_if_failed(const uint8_t *buf, size_t size)
31
- /* Virtual interrupts are simple: G0 are always FIQ, and G1 IRQ */
34
+{
32
+ /*
35
+ if (g_test_failed()) {
33
+ * Virtual interrupts are simple: G0 are always FIQ, and G1 are
36
+ qemu_hexdump(stderr, "", buf, size);
34
+ * IRQ or NMI which depends on the ICH_LR<n>_EL2.NMI to have
37
+ }
35
+ * non-maskable property.
38
+}
36
+ */
39
+
37
if (lr & ICH_LR_EL2_GROUP) {
40
static void rng_writeb(unsigned int offset, uint8_t value)
38
- irqlevel = 1;
41
{
39
+ if (lr & ICH_LR_EL2_NMI) {
42
writeb(RNG_BASE_ADDR + offset, value);
40
+ nmilevel = 1;
43
@@ -XXX,XX +XXX,XX @@ static void test_continuous_monobit(void)
41
+ } else {
44
}
42
+ irqlevel = 1;
45
43
+ }
46
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
44
} else {
47
+ dump_buf_if_failed(buf, sizeof(buf));
45
fiqlevel = 1;
46
}
47
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
48
trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel, irqlevel);
49
qemu_set_irq(cs->parent_vfiq, fiqlevel);
50
qemu_set_irq(cs->parent_virq, irqlevel);
51
+ qemu_set_irq(cs->parent_vnmi, nmilevel);
48
}
52
}
49
53
50
/*
54
static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
51
@@ -XXX,XX +XXX,XX @@ static void test_continuous_runs(void)
52
}
53
54
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
55
+ dump_buf_if_failed(buf.c, sizeof(buf));
56
}
57
58
/*
59
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_monobit(void)
60
}
61
62
g_assert_cmpfloat(calc_monobit_p(buf, sizeof(buf)), >, 0.01);
63
+ dump_buf_if_failed(buf, sizeof(buf));
64
}
65
66
/*
67
@@ -XXX,XX +XXX,XX @@ static void test_first_byte_runs(void)
68
}
69
70
g_assert_cmpfloat(calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE), >, 0.01);
71
+ dump_buf_if_failed(buf.c, sizeof(buf));
72
}
73
74
int main(int argc, char **argv)
75
--
55
--
76
2.20.1
56
2.34.1
77
78
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
Enable FEAT_NMI on the 'max' CPU.
4
argument of type "unsigned int".
5
4
6
Reported-by: Euler Robot <euler.robot@huawei.com>
5
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201126111109.112238-2-alex.chen@huawei.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20240407081733.3231820-24-ruanjinjie@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
hw/misc/imx25_ccm.c | 12 ++++++------
11
docs/system/arm/emulation.rst | 1 +
13
1 file changed, 6 insertions(+), 6 deletions(-)
12
target/arm/tcg/cpu64.c | 1 +
13
2 files changed, 2 insertions(+)
14
14
15
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
15
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/imx25_ccm.c
17
--- a/docs/system/arm/emulation.rst
18
+++ b/hw/misc/imx25_ccm.c
18
+++ b/docs/system/arm/emulation.rst
19
@@ -XXX,XX +XXX,XX @@ static const char *imx25_ccm_reg_name(uint32_t reg)
19
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
20
case IMX25_CCM_LPIMR1_REG:
20
- FEAT_MTE (Memory Tagging Extension)
21
return "lpimr1";
21
- FEAT_MTE2 (Memory Tagging Extension)
22
default:
22
- FEAT_MTE3 (MTE Asymmetric Fault Handling)
23
- sprintf(unknown, "[%d ?]", reg);
23
+- FEAT_NMI (Non-maskable Interrupt)
24
+ sprintf(unknown, "[%u ?]", reg);
24
- FEAT_NV (Nested Virtualization)
25
return unknown;
25
- FEAT_NV2 (Enhanced nested virtualization support)
26
}
26
- FEAT_PACIMP (Pointer authentication - IMPLEMENTATION DEFINED algorithm)
27
}
27
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
28
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mpll_clk(IMXCCMState *dev)
28
index XXXXXXX..XXXXXXX 100644
29
freq = imx_ccm_calc_pll(s->reg[IMX25_CCM_MPCTL_REG], CKIH_FREQ);
29
--- a/target/arm/tcg/cpu64.c
30
}
30
+++ b/target/arm/tcg/cpu64.c
31
31
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
32
- DPRINTF("freq = %d\n", freq);
32
t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 0); /* FEAT_RASv1p1 + FEAT_DoubleFault */
33
+ DPRINTF("freq = %u\n", freq);
33
t = FIELD_DP64(t, ID_AA64PFR1, SME, 1); /* FEAT_SME */
34
34
t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
35
return freq;
35
+ t = FIELD_DP64(t, ID_AA64PFR1, NMI, 1); /* FEAT_NMI */
36
}
36
cpu->isar.id_aa64pfr1 = t;
37
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_mcu_clk(IMXCCMState *dev)
37
38
38
t = cpu->isar.id_aa64mmfr0;
39
freq = freq / (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], ARM_CLK_DIV));
40
41
- DPRINTF("freq = %d\n", freq);
42
+ DPRINTF("freq = %u\n", freq);
43
44
return freq;
45
}
46
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ahb_clk(IMXCCMState *dev)
47
freq = imx25_ccm_get_mcu_clk(dev)
48
/ (1 + EXTRACT(s->reg[IMX25_CCM_CCTL_REG], AHB_CLK_DIV));
49
50
- DPRINTF("freq = %d\n", freq);
51
+ DPRINTF("freq = %u\n", freq);
52
53
return freq;
54
}
55
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_ipg_clk(IMXCCMState *dev)
56
57
freq = imx25_ccm_get_ahb_clk(dev) / 2;
58
59
- DPRINTF("freq = %d\n", freq);
60
+ DPRINTF("freq = %u\n", freq);
61
62
return freq;
63
}
64
@@ -XXX,XX +XXX,XX @@ static uint32_t imx25_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
65
break;
66
}
67
68
- DPRINTF("Clock = %d) = %d\n", clock, freq);
69
+ DPRINTF("Clock = %d) = %u\n", clock, freq);
70
71
return freq;
72
}
73
--
39
--
74
2.20.1
40
2.34.1
75
76
1
Factor out the code which handles M-profile lazy FP state preservation
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
from full_vfp_access_check(); accesses to the FPCXT_NS register are
3
a special case which need to do just this part (corresponding in the
4
pseudocode to the PreserveFPState() function), and not the full
5
set of actions matching the pseudocode ExecuteFPCheck() which
6
normal FP instructions need to do.
7
2
3
If the CPU implements FEAT_NMI, then turn on the NMI support in the
4
GICv3 too. It's permitted to have a configuration with FEAT_NMI in
5
the CPU (and thus NMI support in the CPU interfaces too) but no NMI
6
support in the distributor and redistributor, but this isn't a very
7
useful setup as it's close to having no NMI support at all.
8
9
We don't need to gate the enabling of NMI in the GIC behind a
10
machine version property, because none of our current CPUs
11
implement FEAT_NMI, and '-cpu max' is not something we maintain
12
migration compatibility across versions for. So we can always
13
enable the GIC NMI support when the CPU has it.
14
15
Neither hvf nor KVM support NMI in the GIC yet, so we don't enable
16
it unless we're using TCG.
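
(For reference: with this series applied, a guest run under TCG with
something like "qemu-system-aarch64 -accel tcg -machine virt,gic-version=3
-cpu max" should get an NMI-capable GICv3 automatically; this is only an
illustrative invocation, no new command-line option is added.)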
17
18
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20240407081733.3231820-25-ruanjinjie@huawei.com
21
[PMM: Update comment and commit message]
22
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20201119215617.29887-13-peter.maydell@linaro.org
12
---
24
---
13
target/arm/translate-vfp.c.inc | 45 ++++++++++++++++++++--------------
25
hw/arm/virt.c | 19 +++++++++++++++++++
14
1 file changed, 27 insertions(+), 18 deletions(-)
26
1 file changed, 19 insertions(+)
15
27
16
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
28
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
17
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-vfp.c.inc
30
--- a/hw/arm/virt.c
19
+++ b/target/arm/translate-vfp.c.inc
31
+++ b/hw/arm/virt.c
20
@@ -XXX,XX +XXX,XX @@ static inline long vfp_f16_offset(unsigned reg, bool top)
32
@@ -XXX,XX +XXX,XX @@ static void create_v2m(VirtMachineState *vms)
21
return offs;
33
vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
22
}
34
}
23
35
24
+/*
36
+/*
25
+ * Generate code for M-profile lazy FP state preservation if needed;
37
+ * If the CPU has FEAT_NMI, then turn on the NMI support in the GICv3 too.
26
+ * this corresponds to the pseudocode PreserveFPState() function.
38
+ * It's permitted to have a configuration with NMI in the CPU (and thus the
39
+ * GICv3 CPU interface) but not in the distributor/redistributors, but it's
40
+ * not very useful.
27
+ */
41
+ */
28
+static void gen_preserve_fp_state(DisasContext *s)
42
+static bool gicv3_nmi_present(VirtMachineState *vms)
29
+{
43
+{
30
+ if (s->v7m_lspact) {
44
+ ARMCPU *cpu = ARM_CPU(qemu_get_cpu(0));
31
+ /*
45
+
32
+ * Lazy state saving affects external memory and also the NVIC,
46
+ return tcg_enabled() && cpu_isar_feature(aa64_nmi, cpu) &&
33
+ * so we must mark it as an IO operation for icount (and cause
47
+ (vms->gic_version != VIRT_GIC_VERSION_2);
34
+ * this to be the last insn in the TB).
35
+ */
36
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
37
+ s->base.is_jmp = DISAS_UPDATE_EXIT;
38
+ gen_io_start();
39
+ }
40
+ gen_helper_v7m_preserve_fp_state(cpu_env);
41
+ /*
42
+ * If the preserve_fp_state helper doesn't throw an exception
43
+ * then it will clear LSPACT; we don't need to repeat this for
44
+ * any further FP insns in this TB.
45
+ */
46
+ s->v7m_lspact = false;
47
+ }
48
+}
48
+}
49
+
49
+
50
/*
50
static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
51
* Check that VFP access is enabled. If it is, do the necessary
51
{
52
* M-profile lazy-FP handling and then return true.
52
MachineState *ms = MACHINE(vms);
53
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
53
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
54
/* Handle M-profile lazy FP state mechanics */
54
vms->virt);
55
55
}
56
/* Trigger lazy-state preservation if necessary */
56
}
57
- if (s->v7m_lspact) {
57
+
58
- /*
58
+ if (gicv3_nmi_present(vms)) {
59
- * Lazy state saving affects external memory and also the NVIC,
59
+ qdev_prop_set_bit(vms->gic, "has-nmi", true);
60
- * so we must mark it as an IO operation for icount (and cause
60
+ }
61
- * this to be the last insn in the TB).
61
+
62
- */
62
gicbusdev = SYS_BUS_DEVICE(vms->gic);
63
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
63
sysbus_realize_and_unref(gicbusdev, &error_fatal);
64
- s->base.is_jmp = DISAS_UPDATE_EXIT;
64
sysbus_mmio_map(gicbusdev, 0, vms->memmap[VIRT_GIC_DIST].base);
65
- gen_io_start();
66
- }
67
- gen_helper_v7m_preserve_fp_state(cpu_env);
68
- /*
69
- * If the preserve_fp_state helper doesn't throw an exception
70
- * then it will clear LSPACT; we don't need to repeat this for
71
- * any further FP insns in this TB.
72
- */
73
- s->v7m_lspact = false;
74
- }
75
+ gen_preserve_fp_state(s);
76
77
/* Update ownership of FP context: set FPCCR.S to match current state */
78
if (s->v8m_fpccr_s_wrong) {
79
--
65
--
80
2.20.1
66
2.34.1
81
82
diff view generated by jsdifflib
1
From: Alex Chen <alex.chen@huawei.com>
1
From: Anastasia Belova <abelova@astralinux.ru>
2
2
3
We should use printf format specifier "%u" instead of "%d" for
3
In soc_dma_set_request() we try to set a bit in a uint64_t, but we
4
arguments of type "unsigned int".
4
do it with "1 << ch->num", which can't set any bits past 31;
5
any use for a channel number of 32 or more would fail due to
6
integer overflow.
5
7
6
Reported-by: Euler Robot <euler.robot@huawei.com>
8
This doesn't happen in practice for our current use of this code,
7
Signed-off-by: Alex Chen <alex.chen@huawei.com>
9
because the worst case is when we call soc_dma_init() with an
8
Message-id: 20201126111109.112238-5-alex.chen@huawei.com
10
argument of 32 for the number of channels, and QEMU builds with
11
-fwrapv so the shift into the sign bit is well-defined. However,
12
it's obviously not the intended behaviour of the code.
13
14
Add casts to force the shift to be done as 64-bit arithmetic,
15
allowing up to 64 channels.
16
17
Found by Linux Verification Center (linuxtesting.org) with SVACE.
18
19
Fixes: afbb5194d4 ("Handle on-chip DMA controllers in one place, convert OMAP DMA to use it.")
20
Signed-off-by: Anastasia Belova <abelova@astralinux.ru>
21
Message-id: 20240409115301.21829-1-abelova@astralinux.ru
22
[PMM: Edit commit message to clarify that this doesn't actually
23
bite us in our current usage of this code.]
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
26
---
12
hw/misc/imx6ul_ccm.c | 4 ++--
27
hw/dma/soc_dma.c | 4 ++--
13
1 file changed, 2 insertions(+), 2 deletions(-)
28
1 file changed, 2 insertions(+), 2 deletions(-)
14
29
15
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
30
diff --git a/hw/dma/soc_dma.c b/hw/dma/soc_dma.c
16
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/misc/imx6ul_ccm.c
32
--- a/hw/dma/soc_dma.c
18
+++ b/hw/misc/imx6ul_ccm.c
33
+++ b/hw/dma/soc_dma.c
19
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_ccm_reg_name(uint32_t reg)
34
@@ -XXX,XX +XXX,XX @@ void soc_dma_set_request(struct soc_dma_ch_s *ch, int level)
20
case CCM_CMEOR:
35
dma->enabled_count += level - ch->enable;
21
return "CMEOR";
36
22
default:
37
if (level)
23
- sprintf(unknown, "%d ?", reg);
38
- dma->ch_enable_mask |= 1 << ch->num;
24
+ sprintf(unknown, "%u ?", reg);
39
+ dma->ch_enable_mask |= (uint64_t)1 << ch->num;
25
return unknown;
40
else
26
}
41
- dma->ch_enable_mask &= ~(1 << ch->num);
27
}
42
+ dma->ch_enable_mask &= ~((uint64_t)1 << ch->num);
28
@@ -XXX,XX +XXX,XX @@ static const char *imx6ul_analog_reg_name(uint32_t reg)
43
29
case USB_ANALOG_DIGPROG:
44
if (level != ch->enable) {
30
return "USB_ANALOG_DIGPROG";
45
soc_dma_ch_freq_update(dma);
31
default:
32
- sprintf(unknown, "%d ?", reg);
33
+ sprintf(unknown, "%u ?", reg);
34
return unknown;
35
}
36
}
37
--
46
--
38
2.20.1
47
2.34.1
39
40
diff view generated by jsdifflib
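
To make the overflow described above concrete, here is a small standalone sketch (the channel number is hypothetical) of why the operand must be widened before the shift rather than after:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t mask = 0;
        int num = 40;                   /* hypothetical channel number >= 32 */

        /*
         * "mask |= 1 << num;" would shift a 32-bit int, overflowing before
         * the result is widened; casting first keeps the shift in 64 bits.
         */
        mask |= (uint64_t)1 << num;

        printf("mask = 0x%016" PRIx64 "\n", mask);
        return 0;
    }
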
1
v8.1M adds new encodings of VLLDM and VLSTM (where bit 7 is set).
1
Ever since the bFLT format support was added in 2006, there has been
2
The only difference is that:
2
a chunk of code in the file guarded by CONFIG_BINFMT_SHARED_FLAT
3
* the old T1 encodings UNDEF if the implementation implements 32
3
which is supposedly for shared library support. This is not enabled
4
Dregs (this is currently architecturally impossible for M-profile)
4
and it's not possible to enable it, because if you do you'll run into
5
* the new T2 encodings have the implementation-defined option to
5
the "#error needs checking" in the calc_reloc() function.
6
read from memory (discarding the data) or write UNKNOWN values to
6
7
memory for the stack slots that would be D16-D31
7
Similarly, CONFIG_BINFMT_ZFLAT exists but can't be enabled because of
8
8
an "#error code needs checking" in load_flat_file().
9
We choose not to make those accesses, so for us the two
9
10
instructions behave identically assuming they don't UNDEF.
10
This code is obviously unfinished and has never been used; nobody in
11
the intervening 18 years has complained about this or fixed it, so
12
just delete the dead code. If anybody ever wants the feature they
13
can always pull it out of git, or (perhaps better) write it from
14
scratch based on the current Linux bFLT loader rather than the one of
15
18 years ago.
11
16
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
14
Message-id: 20201119215617.29887-21-peter.maydell@linaro.org
19
Message-id: 20240411115313.680433-1-peter.maydell@linaro.org
15
---
20
---
16
target/arm/m-nocp.decode | 2 +-
21
linux-user/flat.h | 5 +-
17
target/arm/translate-vfp.c.inc | 25 +++++++++++++++++++++++++
22
linux-user/flatload.c | 293 ++----------------------------------------
18
2 files changed, 26 insertions(+), 1 deletion(-)
23
2 files changed, 11 insertions(+), 287 deletions(-)
19
24
20
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
25
diff --git a/linux-user/flat.h b/linux-user/flat.h
21
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/m-nocp.decode
27
--- a/linux-user/flat.h
23
+++ b/target/arm/m-nocp.decode
28
+++ b/linux-user/flat.h
24
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@
25
30
31
#define    FLAT_VERSION            0x00000004L
32
33
-#ifdef CONFIG_BINFMT_SHARED_FLAT
34
-#define    MAX_SHARED_LIBS            (4)
35
-#else
36
+/* QEMU doesn't support bflt shared libraries */
37
#define    MAX_SHARED_LIBS            (1)
38
-#endif
39
40
/*
41
* To make everything easier to port and manage cross platform
42
diff --git a/linux-user/flatload.c b/linux-user/flatload.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/linux-user/flatload.c
45
+++ b/linux-user/flatload.c
46
@@ -XXX,XX +XXX,XX @@
47
*    JAN/99 -- coded full program relocation (gerg@snapgear.com)
48
*/
49
50
-/* ??? ZFLAT and shared library support is currently disabled. */
51
-
52
/****************************************************************************/
53
54
#include "qemu/osdep.h"
55
@@ -XXX,XX +XXX,XX @@ struct lib_info {
56
short loaded;        /* Has this library been loaded? */
57
};
58
59
-#ifdef CONFIG_BINFMT_SHARED_FLAT
60
-static int load_flat_shared_library(int id, struct lib_info *p);
61
-#endif
62
-
63
struct linux_binprm;
64
65
/****************************************************************************/
66
@@ -XXX,XX +XXX,XX @@ static int target_pread(int fd, abi_ulong ptr, abi_ulong len,
67
unlock_user(buf, ptr, len);
68
return ret;
69
}
70
-/****************************************************************************/
71
-
72
-#ifdef CONFIG_BINFMT_ZFLAT
73
-
74
-#include <linux/zlib.h>
75
-
76
-#define LBUFSIZE    4000
77
-
78
-/* gzip flag byte */
79
-#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
80
-#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
81
-#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
82
-#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
83
-#define COMMENT 0x10 /* bit 4 set: file comment present */
84
-#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
85
-#define RESERVED 0xC0 /* bit 6,7: reserved */
86
-
87
-static int decompress_exec(
88
-    struct linux_binprm *bprm,
89
-    unsigned long offset,
90
-    char *dst,
91
-    long len,
92
-    int fd)
93
-{
94
-    unsigned char *buf;
95
-    z_stream strm;
96
-    loff_t fpos;
97
-    int ret, retval;
98
-
99
-    DBG_FLT("decompress_exec(offset=%x,buf=%x,len=%x)\n",(int)offset, (int)dst, (int)len);
100
-
101
-    memset(&strm, 0, sizeof(strm));
102
-    strm.workspace = kmalloc(zlib_inflate_workspacesize(), GFP_KERNEL);
103
-    if (strm.workspace == NULL) {
104
-        DBG_FLT("binfmt_flat: no memory for decompress workspace\n");
105
-        return -ENOMEM;
106
-    }
107
-    buf = kmalloc(LBUFSIZE, GFP_KERNEL);
108
-    if (buf == NULL) {
109
-        DBG_FLT("binfmt_flat: no memory for read buffer\n");
110
-        retval = -ENOMEM;
111
-        goto out_free;
112
-    }
113
-
114
-    /* Read in first chunk of data and parse gzip header. */
115
-    fpos = offset;
116
-    ret = bprm->file->f_op->read(bprm->file, buf, LBUFSIZE, &fpos);
117
-
118
-    strm.next_in = buf;
119
-    strm.avail_in = ret;
120
-    strm.total_in = 0;
121
-
122
-    retval = -ENOEXEC;
123
-
124
-    /* Check minimum size -- gzip header */
125
-    if (ret < 10) {
126
-        DBG_FLT("binfmt_flat: file too small?\n");
127
-        goto out_free_buf;
128
-    }
129
-
130
-    /* Check gzip magic number */
131
-    if ((buf[0] != 037) || ((buf[1] != 0213) && (buf[1] != 0236))) {
132
-        DBG_FLT("binfmt_flat: unknown compression magic?\n");
133
-        goto out_free_buf;
134
-    }
135
-
136
-    /* Check gzip method */
137
-    if (buf[2] != 8) {
138
-        DBG_FLT("binfmt_flat: unknown compression method?\n");
139
-        goto out_free_buf;
140
-    }
141
-    /* Check gzip flags */
142
-    if ((buf[3] & ENCRYPTED) || (buf[3] & CONTINUATION) ||
143
-     (buf[3] & RESERVED)) {
144
-        DBG_FLT("binfmt_flat: unknown flags?\n");
145
-        goto out_free_buf;
146
-    }
147
-
148
-    ret = 10;
149
-    if (buf[3] & EXTRA_FIELD) {
150
-        ret += 2 + buf[10] + (buf[11] << 8);
151
-        if (unlikely(LBUFSIZE == ret)) {
152
-            DBG_FLT("binfmt_flat: buffer overflow (EXTRA)?\n");
153
-            goto out_free_buf;
154
-        }
155
-    }
156
-    if (buf[3] & ORIG_NAME) {
157
-        for (; ret < LBUFSIZE && (buf[ret] != 0); ret++)
158
-            ;
159
-        if (unlikely(LBUFSIZE == ret)) {
160
-            DBG_FLT("binfmt_flat: buffer overflow (ORIG_NAME)?\n");
161
-            goto out_free_buf;
162
-        }
163
-    }
164
-    if (buf[3] & COMMENT) {
165
-        for (; ret < LBUFSIZE && (buf[ret] != 0); ret++)
166
-            ;
167
-        if (unlikely(LBUFSIZE == ret)) {
168
-            DBG_FLT("binfmt_flat: buffer overflow (COMMENT)?\n");
169
-            goto out_free_buf;
170
-        }
171
-    }
172
-
173
-    strm.next_in += ret;
174
-    strm.avail_in -= ret;
175
-
176
-    strm.next_out = dst;
177
-    strm.avail_out = len;
178
-    strm.total_out = 0;
179
-
180
-    if (zlib_inflateInit2(&strm, -MAX_WBITS) != Z_OK) {
181
-        DBG_FLT("binfmt_flat: zlib init failed?\n");
182
-        goto out_free_buf;
183
-    }
184
-
185
-    while ((ret = zlib_inflate(&strm, Z_NO_FLUSH)) == Z_OK) {
186
-        ret = bprm->file->f_op->read(bprm->file, buf, LBUFSIZE, &fpos);
187
-        if (ret <= 0)
188
-            break;
189
- if (is_error(ret)) {
190
-            break;
191
- }
192
-        len -= ret;
193
-
194
-        strm.next_in = buf;
195
-        strm.avail_in = ret;
196
-        strm.total_in = 0;
197
-    }
198
-
199
-    if (ret < 0) {
200
-        DBG_FLT("binfmt_flat: decompression failed (%d), %s\n",
201
-            ret, strm.msg);
202
-        goto out_zlib;
203
-    }
204
-
205
-    retval = 0;
206
-out_zlib:
207
-    zlib_inflateEnd(&strm);
208
-out_free_buf:
209
-    kfree(buf);
210
-out_free:
211
-    kfree(strm.workspace);
212
-out:
213
-    return retval;
214
-}
215
-
216
-#endif /* CONFIG_BINFMT_ZFLAT */
217
218
/****************************************************************************/
219
220
@@ -XXX,XX +XXX,XX @@ calc_reloc(abi_ulong r, struct lib_info *p, int curid, int internalp)
221
abi_ulong text_len;
222
abi_ulong start_code;
223
224
-#ifdef CONFIG_BINFMT_SHARED_FLAT
225
-#error needs checking
226
- if (r == 0)
227
- id = curid;    /* Relocs of 0 are always self referring */
228
- else {
229
- id = (r >> 24) & 0xff;    /* Find ID for this reloc */
230
- r &= 0x00ffffff;    /* Trim ID off here */
231
- }
232
- if (id >= MAX_SHARED_LIBS) {
233
- fprintf(stderr, "BINFMT_FLAT: reference 0x%x to shared library %d\n",
234
- (unsigned) r, id);
235
- goto failed;
236
- }
237
- if (curid != id) {
238
- if (internalp) {
239
- fprintf(stderr, "BINFMT_FLAT: reloc address 0x%x not "
240
- "in same module (%d != %d)\n",
241
- (unsigned) r, curid, id);
242
- goto failed;
243
- } else if (!p[id].loaded && is_error(load_flat_shared_library(id, p))) {
244
- fprintf(stderr, "BINFMT_FLAT: failed to load library %d\n", id);
245
- goto failed;
246
- }
247
- /* Check versioning information (i.e. time stamps) */
248
- if (p[id].build_date && p[curid].build_date
249
- && p[curid].build_date < p[id].build_date) {
250
- fprintf(stderr, "BINFMT_FLAT: library %d is younger than %d\n",
251
- id, curid);
252
- goto failed;
253
- }
254
- }
255
-#else
256
id = 0;
257
-#endif
258
259
start_brk = p[id].start_brk;
260
start_data = p[id].start_data;
261
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
262
if (rev == OLD_FLAT_VERSION && flat_old_ram_flag(flags))
263
flags = FLAT_FLAG_RAM;
264
265
-#ifndef CONFIG_BINFMT_ZFLAT
266
if (flags & (FLAT_FLAG_GZIP|FLAT_FLAG_GZDATA)) {
267
- fprintf(stderr, "Support for ZFLAT executables is not enabled\n");
268
+ fprintf(stderr, "ZFLAT executables are not supported\n");
269
return -ENOEXEC;
270
}
271
-#endif
272
273
/*
274
* calculate the extra space we need to map in
275
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
276
(int)(data_len + bss_len + stack_len), (int)datapos);
277
278
fpos = ntohl(hdr->data_start);
279
-#ifdef CONFIG_BINFMT_ZFLAT
280
- if (flags & FLAT_FLAG_GZDATA) {
281
- result = decompress_exec(bprm, fpos, (char *) datapos,
282
- data_len + (relocs * sizeof(abi_ulong)))
283
- } else
284
-#endif
285
- {
286
- result = target_pread(bprm->src.fd, datapos,
287
- data_len + (relocs * sizeof(abi_ulong)),
288
- fpos);
289
- }
290
+ result = target_pread(bprm->src.fd, datapos,
291
+ data_len + (relocs * sizeof(abi_ulong)),
292
+ fpos);
293
if (result < 0) {
294
fprintf(stderr, "Unable to read data+bss\n");
295
return result;
296
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
297
datapos = realdatastart + indx_len;
298
reloc = (textpos + ntohl(hdr->reloc_start) + indx_len);
299
300
-#ifdef CONFIG_BINFMT_ZFLAT
301
-#error code needs checking
302
- /*
303
- * load it all in and treat it like a RAM load from now on
304
- */
305
- if (flags & FLAT_FLAG_GZIP) {
306
- result = decompress_exec(bprm, sizeof (struct flat_hdr),
307
- (((char *) textpos) + sizeof (struct flat_hdr)),
308
- (text_len + data_len + (relocs * sizeof(unsigned long))
309
- - sizeof (struct flat_hdr)),
310
- 0);
311
- memmove((void *) datapos, (void *) realdatastart,
312
- data_len + (relocs * sizeof(unsigned long)));
313
- } else if (flags & FLAT_FLAG_GZDATA) {
314
- fpos = 0;
315
- result = bprm->file->f_op->read(bprm->file,
316
- (char *) textpos, text_len, &fpos);
317
- if (!is_error(result)) {
318
- result = decompress_exec(bprm, text_len, (char *) datapos,
319
- data_len + (relocs * sizeof(unsigned long)), 0);
320
- }
321
- }
322
- else
323
-#endif
324
- {
325
- result = target_pread(bprm->src.fd, textpos,
326
- text_len, 0);
327
- if (result >= 0) {
328
- result = target_pread(bprm->src.fd, datapos,
329
- data_len + (relocs * sizeof(abi_ulong)),
330
- ntohl(hdr->data_start));
331
- }
332
+ result = target_pread(bprm->src.fd, textpos,
333
+ text_len, 0);
334
+ if (result >= 0) {
335
+ result = target_pread(bprm->src.fd, datapos,
336
+ data_len + (relocs * sizeof(abi_ulong)),
337
+ ntohl(hdr->data_start));
338
}
339
if (result < 0) {
340
fprintf(stderr, "Unable to read code+data+bss\n");
341
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
342
343
344
/****************************************************************************/
345
-#ifdef CONFIG_BINFMT_SHARED_FLAT
346
-
347
-/*
348
- * Load a shared library into memory. The library gets its own data
349
- * segment (including bss) but not argv/argc/environ.
350
- */
351
-
352
-static int load_flat_shared_library(int id, struct lib_info *libs)
353
-{
354
-    struct linux_binprm bprm;
355
-    int res;
356
-    char buf[16];
357
-
358
-    /* Create the file name */
359
-    sprintf(buf, "/lib/lib%d.so", id);
360
-
361
-    /* Open the file up */
362
-    bprm.filename = buf;
363
-    bprm.file = open_exec(bprm.filename);
364
-    res = PTR_ERR(bprm.file);
365
-    if (IS_ERR(bprm.file))
366
-        return res;
367
-
368
-    res = prepare_binprm(&bprm);
369
-
370
- if (!is_error(res)) {
371
-        res = load_flat_file(&bprm, libs, id, NULL);
372
- }
373
-    if (bprm.file) {
374
-        allow_write_access(bprm.file);
375
-        fput(bprm.file);
376
-        bprm.file = NULL;
377
-    }
378
-    return(res);
379
-}
380
-
381
-#endif /* CONFIG_BINFMT_SHARED_FLAT */
382
-
383
int load_flt_binary(struct linux_binprm *bprm, struct image_info *info)
26
{
384
{
27
# Special cases which do not take an early NOCP: VLLDM and VLSTM
385
struct lib_info libinfo[MAX_SHARED_LIBS];
28
- VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
386
@@ -XXX,XX +XXX,XX @@ int load_flt_binary(struct linux_binprm *bprm, struct image_info *info)
29
+ VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 op:1 000 0000
387
*/
30
# VSCCLRM (new in v8.1M) is similar:
388
start_addr = libinfo[0].entry;
31
VSCCLRM 1110 1100 1.01 1111 .... 1011 imm:7 0 vd=%vd_dp size=3
389
32
VSCCLRM 1110 1100 1.01 1111 .... 1010 imm:8 vd=%vd_sp size=2
390
-#ifdef CONFIG_BINFMT_SHARED_FLAT
33
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
391
-#error here
34
index XXXXXXX..XXXXXXX 100644
392
- for (i = MAX_SHARED_LIBS-1; i>0; i--) {
35
--- a/target/arm/translate-vfp.c.inc
393
- if (libinfo[i].loaded) {
36
+++ b/target/arm/translate-vfp.c.inc
394
- /* Push previous first to call address */
37
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
395
- --sp;
38
!arm_dc_feature(s, ARM_FEATURE_V8)) {
396
- if (put_user_ual(start_addr, sp))
39
return false;
397
- return -EFAULT;
40
}
398
- start_addr = libinfo[i].entry;
41
+
399
- }
42
+ if (a->op) {
400
- }
43
+ /*
401
-#endif
44
+ * T2 encoding ({D0-D31} reglist): v8.1M and up. We choose not
402
-
45
+ * to take the IMPDEF option to make memory accesses to the stack
403
/* Stash our initial stack pointer into the mm structure */
46
+ * slots that correspond to the D16-D31 registers (discarding
404
info->start_code = libinfo[0].start_code;
47
+ * read data and writing UNKNOWN values), so for us the T2
405
info->end_code = libinfo[0].start_code + libinfo[0].text_len;
48
+ * encoding behaves identically to the T1 encoding.
49
+ */
50
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
51
+ return false;
52
+ }
53
+ } else {
54
+ /*
55
+ * T1 encoding ({D0-D15} reglist); undef if we have 32 Dregs.
56
+ * This is currently architecturally impossible, but we add the
57
+ * check to stay in line with the pseudocode. Note that we must
58
+ * emit code for the UNDEF so it takes precedence over the NOCP.
59
+ */
60
+ if (dc_isar_feature(aa32_simd_r32, s)) {
61
+ unallocated_encoding(s);
62
+ return true;
63
+ }
64
+ }
65
+
66
/*
67
* If not secure, UNDEF. We must emit code for this
68
* rather than returning false so that this takes
69
--
406
--
70
2.20.1
407
2.34.1
71
408
72
409
diff view generated by jsdifflib
1
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
1
The npcm7xx_clk and npcm7xx_gcr device reset methods look at
2
in the previous commit; use it in a couple of places in existing code,
2
the ResetType argument and only handle RESET_TYPE_COLD,
3
where we're masking out everything except NZCV for the "load to Rt=15
3
producing a warning if another reset type is passed. This
4
sets CPSR.NZCV" special case.
4
is different from how every other three-phase-reset method
5
we have works, and makes it difficult to add new reset types.
6
7
A better pattern is "assume that any reset type you don't know
8
about should be handled like RESET_TYPE_COLD"; switch these
9
devices to do that. Then adding a new reset type will only
10
need to touch those devices where its behaviour really needs
11
to be different from the standard cold reset.
5
12
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-12-peter.maydell@linaro.org
15
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
16
Reviewed-by: Luc Michel <luc.michel@amd.com>
17
Message-id: 20240412160809.1260625-2-peter.maydell@linaro.org
9
---
18
---
10
target/arm/translate-vfp.c.inc | 4 ++--
19
hw/misc/npcm7xx_clk.c | 13 +++----------
11
1 file changed, 2 insertions(+), 2 deletions(-)
20
hw/misc/npcm7xx_gcr.c | 12 ++++--------
21
2 files changed, 7 insertions(+), 18 deletions(-)
12
22
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
23
diff --git a/hw/misc/npcm7xx_clk.c b/hw/misc/npcm7xx_clk.c
14
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
25
--- a/hw/misc/npcm7xx_clk.c
16
+++ b/target/arm/translate-vfp.c.inc
26
+++ b/hw/misc/npcm7xx_clk.c
17
@@ -XXX,XX +XXX,XX @@ static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
27
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_clk_enter_reset(Object *obj, ResetType type)
18
* helper call for the "VMRS to CPSR.NZCV" insn.
28
19
*/
29
QEMU_BUILD_BUG_ON(sizeof(s->regs) != sizeof(cold_reset_values));
20
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
30
21
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
31
- switch (type) {
22
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
32
- case RESET_TYPE_COLD:
23
storefn(s, opaque, tmp);
33
- memcpy(s->regs, cold_reset_values, sizeof(cold_reset_values));
24
break;
34
- s->ref_ns = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
25
default:
35
- npcm7xx_clk_update_all_clocks(s);
26
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
36
- return;
27
case ARM_VFP_FPSCR:
37
- }
28
if (a->rt == 15) {
38
-
29
tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
39
+ memcpy(s->regs, cold_reset_values, sizeof(cold_reset_values));
30
- tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
40
+ s->ref_ns = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
31
+ tcg_gen_andi_i32(tmp, tmp, FPCR_NZCV_MASK);
41
+ npcm7xx_clk_update_all_clocks(s);
32
} else {
42
/*
33
tmp = tcg_temp_new_i32();
43
* A small number of registers need to be reset on a core domain reset,
34
gen_helper_vfp_get_fpscr(tmp, cpu_env);
44
* but no such reset type exists yet.
45
*/
46
- qemu_log_mask(LOG_UNIMP, "%s: reset type %d not implemented.",
47
- __func__, type);
48
}
49
50
static void npcm7xx_clk_init_clock_hierarchy(NPCM7xxCLKState *s)
51
diff --git a/hw/misc/npcm7xx_gcr.c b/hw/misc/npcm7xx_gcr.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/hw/misc/npcm7xx_gcr.c
54
+++ b/hw/misc/npcm7xx_gcr.c
55
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_gcr_enter_reset(Object *obj, ResetType type)
56
57
QEMU_BUILD_BUG_ON(sizeof(s->regs) != sizeof(cold_reset_values));
58
59
- switch (type) {
60
- case RESET_TYPE_COLD:
61
- memcpy(s->regs, cold_reset_values, sizeof(s->regs));
62
- s->regs[NPCM7XX_GCR_PWRON] = s->reset_pwron;
63
- s->regs[NPCM7XX_GCR_MDLR] = s->reset_mdlr;
64
- s->regs[NPCM7XX_GCR_INTCR3] = s->reset_intcr3;
65
- break;
66
- }
67
+ memcpy(s->regs, cold_reset_values, sizeof(s->regs));
68
+ s->regs[NPCM7XX_GCR_PWRON] = s->reset_pwron;
69
+ s->regs[NPCM7XX_GCR_MDLR] = s->reset_mdlr;
70
+ s->regs[NPCM7XX_GCR_INTCR3] = s->reset_intcr3;
71
}
72
73
static void npcm7xx_gcr_realize(DeviceState *dev, Error **errp)
35
--
74
--
36
2.20.1
75
2.34.1
37
76
38
77
diff view generated by jsdifflib
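
The pattern the patch above moves to can be sketched for a hypothetical device (names are illustrative, not from the tree): the enter phase simply applies the cold-reset values for any ResetType it does not explicitly distinguish, so a new reset type works without further changes:

    static void mydev_reset_enter(Object *obj, ResetType type)
    {
        MyDevState *s = MYDEV(obj);

        /*
         * No switch on 'type': any reset kind we don't specifically
         * care about behaves like RESET_TYPE_COLD.
         */
        memcpy(s->regs, mydev_cold_reset_values, sizeof(s->regs));
    }
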
1
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
1
Rather than directly calling the device's implementation of its 'hold'
2
the FPSCR. We have a comment that states this, but the actual logic
2
reset phase, call device_cold_reset(). This means we don't have to
3
to forbid accesses for any other register value is missing, so we
3
adjust this callsite when we add another argument to the function
4
would end up with A-profile style behaviour. Add the missing check.
4
signature for the hold and exit reset methods.
5
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201119215617.29887-7-peter.maydell@linaro.org
8
Reviewed-by: Luc Michel <luc.michel@amd.com>
9
Message-id: 20240412160809.1260625-3-peter.maydell@linaro.org
9
---
10
---
10
target/arm/translate-vfp.c.inc | 5 ++++-
11
hw/i2c/allwinner-i2c.c | 3 +--
11
1 file changed, 4 insertions(+), 1 deletion(-)
12
hw/sensor/adm1272.c | 2 +-
13
2 files changed, 2 insertions(+), 3 deletions(-)
12
14
13
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
15
diff --git a/hw/i2c/allwinner-i2c.c b/hw/i2c/allwinner-i2c.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-vfp.c.inc
17
--- a/hw/i2c/allwinner-i2c.c
16
+++ b/target/arm/translate-vfp.c.inc
18
+++ b/hw/i2c/allwinner-i2c.c
17
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
19
@@ -XXX,XX +XXX,XX @@ static void allwinner_i2c_write(void *opaque, hwaddr offset,
18
* Accesses to R15 are UNPREDICTABLE; we choose to undef.
20
break;
19
* (FPSCR -> r15 is a special case which writes to the PSR flags.)
21
case TWI_SRST_REG:
20
*/
22
if (((value & TWI_SRST_MASK) == 0) && (s->srst & TWI_SRST_MASK)) {
21
- if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
23
- /* Perform reset */
22
+ if (a->reg != ARM_VFP_FPSCR) {
24
- allwinner_i2c_reset_hold(OBJECT(s));
23
+ return false;
25
+ device_cold_reset(DEVICE(s));
24
+ }
25
+ if (a->rt == 15 && !a->l) {
26
return false;
27
}
26
}
28
}
27
s->srst = value & TWI_SRST_MASK;
28
break;
29
diff --git a/hw/sensor/adm1272.c b/hw/sensor/adm1272.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/sensor/adm1272.c
32
+++ b/hw/sensor/adm1272.c
33
@@ -XXX,XX +XXX,XX @@ static int adm1272_write_data(PMBusDevice *pmdev, const uint8_t *buf,
34
break;
35
36
case ADM1272_MFR_POWER_CYCLE:
37
- adm1272_exit_reset((Object *)s);
38
+ device_cold_reset(DEVICE(s));
39
break;
40
41
case ADM1272_HYSTERESIS_LOW:
29
--
42
--
30
2.20.1
43
2.34.1
31
32
diff view generated by jsdifflib
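
A usage sketch of the idiom the patch above adopts (illustrative only; the register bit and state names are hypothetical): when a device needs to reset itself from its own code, device_cold_reset() runs the whole enter/hold/exit sequence, so the caller never depends on any single phase method's signature:

    static void mydev_write_ctrl(MyDevState *s, uint64_t value)
    {
        if (value & MYDEV_CTRL_SOFT_RESET) {    /* hypothetical bit */
            device_cold_reset(DEVICE(s));       /* full three-phase reset */
            return;
        }
        s->ctrl = value;
    }
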
1
The constant-expander functions like negate, plus_2, etc, are
1
We pass a ResetType argument to the Resettable class enter phase
2
generally useful; move them up in translate.c so we can use them in
2
method, but we don't pass it to hold and exit, even though the
3
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.
3
callsites have it readily available. This means that if a device
4
cared about the ResetType it would need to record it in the enter
5
phase method to use later on. We should pass the type to all three
6
of the phase methods to avoid having to do that.
7
8
This coccinelle script adds the ResetType argument to the hold and
9
exit phases of the Resettable interface.
10
11
The first part of the script (rules holdfn_assigned, holdfn_defined,
12
exitfn_assigned, exitfn_defined) update implementations of the
13
interface within device models, both to change the signature of their
14
method implementations and to pass on the reset type when they invoke
15
reset on some other device.
16
17
The second part of the script is various special cases:
18
* method callsites in resettable_phase_hold(), resettable_phase_exit()
19
and device_phases_reset()
20
* updating the typedefs for the methods
21
* isl_pmbus_vr.c has some code where one device's reset method directly
22
calls the implementation of a different device's method
4
23
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
25
Reviewed-by: Luc Michel <luc.michel@amd.com>
7
Message-id: 20201119215617.29887-9-peter.maydell@linaro.org
26
Message-id: 20240412160809.1260625-4-peter.maydell@linaro.org
8
---
27
---
9
target/arm/translate.c | 46 +++++++++++++++++++++++-------------------
28
scripts/coccinelle/reset-type.cocci | 133 ++++++++++++++++++++++++++++
10
1 file changed, 25 insertions(+), 21 deletions(-)
29
1 file changed, 133 insertions(+)
30
create mode 100644 scripts/coccinelle/reset-type.cocci
11
31
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
32
diff --git a/scripts/coccinelle/reset-type.cocci b/scripts/coccinelle/reset-type.cocci
13
index XXXXXXX..XXXXXXX 100644
33
new file mode 100644
14
--- a/target/arm/translate.c
34
index XXXXXXX..XXXXXXX
15
+++ b/target/arm/translate.c
35
--- /dev/null
16
@@ -XXX,XX +XXX,XX @@ static void arm_gen_condlabel(DisasContext *s)
36
+++ b/scripts/coccinelle/reset-type.cocci
17
}
37
@@ -XXX,XX +XXX,XX @@
18
}
38
+// Convert device code using three-phase reset to add a ResetType
19
39
+// argument to implementations of ResettableHoldPhase and
20
+/*
40
+// ResettableEnterPhase methods.
21
+ * Constant expanders for the decoders.
41
+//
22
+ */
42
+// Copyright Linaro Ltd 2024
43
+// SPDX-License-Identifier: GPL-2.0-or-later
44
+//
45
+// for dir in include hw target; do \
46
+// spatch --macro-file scripts/cocci-macro-file.h \
47
+// --sp-file scripts/coccinelle/reset-type.cocci \
48
+// --keep-comments --smpl-spacing --in-place --include-headers \
49
+// --dir $dir; done
50
+//
51
+// This coccinelle script aims to produce a complete change that needs
52
+// no human interaction, so as well as the generic "update device
53
+// implementations of the hold and exit phase methods" it includes
54
+// the special-case transformations needed for the core code and for
55
+// one device model that does something a bit nonstandard. Those
56
+// special cases are at the end of the file.
23
+
57
+
24
+static int negate(DisasContext *s, int x)
58
+// Look for where we use a function as a ResettableHoldPhase method,
59
+// either by directly assigning it to phases.hold or by calling
60
+// resettable_class_set_parent_phases, and remember the function name.
61
+@ holdfn_assigned @
62
+identifier enterfn, holdfn, exitfn;
63
+identifier rc;
64
+expression e;
65
+@@
66
+ResettableClass *rc;
67
+...
68
+(
69
+ rc->phases.hold = holdfn;
70
+|
71
+ resettable_class_set_parent_phases(rc, enterfn, holdfn, exitfn, e);
72
+)
73
+
74
+// Look for the definition of the function we found in holdfn_assigned,
75
+// and add the new argument. If the function calls a hold function
76
+// itself (probably chaining to the parent class reset) then add the
77
+// new argument there too.
78
+@ holdfn_defined @
79
+identifier holdfn_assigned.holdfn;
80
+typedef Object;
81
+identifier obj;
82
+expression parent;
83
+@@
84
+-holdfn(Object *obj)
85
++holdfn(Object *obj, ResetType type)
25
+{
86
+{
26
+ return -x;
87
+ <...
88
+- parent.hold(obj)
89
++ parent.hold(obj, type)
90
+ ...>
27
+}
91
+}
28
+
92
+
29
+static int plus_2(DisasContext *s, int x)
93
+// Similarly for ResettableExitPhase.
94
+@ exitfn_assigned @
95
+identifier enterfn, holdfn, exitfn;
96
+identifier rc;
97
+expression e;
98
+@@
99
+ResettableClass *rc;
100
+...
101
+(
102
+ rc->phases.exit = exitfn;
103
+|
104
+ resettable_class_set_parent_phases(rc, enterfn, holdfn, exitfn, e);
105
+)
106
+@ exitfn_defined @
107
+identifier exitfn_assigned.exitfn;
108
+typedef Object;
109
+identifier obj;
110
+expression parent;
111
+@@
112
+-exitfn(Object *obj)
113
++exitfn(Object *obj, ResetType type)
30
+{
114
+{
31
+ return x + 2;
115
+ <...
116
+- parent.exit(obj)
117
++ parent.exit(obj, type)
118
+ ...>
32
+}
119
+}
33
+
120
+
34
+static int times_2(DisasContext *s, int x)
121
+// SPECIAL CASES ONLY BELOW HERE
35
+{
122
+// We use a python scripted constraint on the position of the match
36
+ return x * 2;
123
+// to ensure that they only match in a particular function. See
37
+}
124
+// https://public-inbox.org/git/alpine.DEB.2.21.1808240652370.2344@hadrien/
125
+// which recommends this as the way to do "match only in this function".
38
+
126
+
39
+static int times_4(DisasContext *s, int x)
127
+// Special case: isl_pmbus_vr.c has some reset methods calling others directly
40
+{
128
+@ isl_pmbus_vr @
41
+ return x * 4;
129
+identifier obj;
42
+}
130
+@@
131
+- isl_pmbus_vr_exit_reset(obj);
132
++ isl_pmbus_vr_exit_reset(obj, type);
43
+
133
+
44
/* Flags for the disas_set_da_iss info argument:
134
+// Special case: device_phases_reset() needs to pass RESET_TYPE_COLD
45
* lower bits hold the Rt register number, higher bits are flags.
135
+@ device_phases_reset_hold @
46
*/
136
+expression obj;
47
@@ -XXX,XX +XXX,XX @@ static void arm_skip_unless(DisasContext *s, uint32_t cond)
137
+identifier rc;
48
138
+identifier phase;
49
139
+position p : script:python() { p[0].current_element == "device_phases_reset" };
50
/*
140
+@@
51
- * Constant expanders for the decoders.
141
+- rc->phases.phase(obj)@p
52
+ * Constant expanders used by T16/T32 decode
142
++ rc->phases.phase(obj, RESET_TYPE_COLD)
53
*/
143
+
54
144
+// Special case: in resettable_phase_hold() and resettable_phase_exit()
55
-static int negate(DisasContext *s, int x)
145
+// we need to pass through the ResetType argument to the method being called
56
-{
146
+@ resettable_phase_hold @
57
- return -x;
147
+expression obj;
58
-}
148
+identifier rc;
59
-
149
+position p : script:python() { p[0].current_element == "resettable_phase_hold" };
60
-static int plus_2(DisasContext *s, int x)
150
+@@
61
-{
151
+- rc->phases.hold(obj)@p
62
- return x + 2;
152
++ rc->phases.hold(obj, type)
63
-}
153
+@ resettable_phase_exit @
64
-
154
+expression obj;
65
-static int times_2(DisasContext *s, int x)
155
+identifier rc;
66
-{
156
+position p : script:python() { p[0].current_element == "resettable_phase_exit" };
67
- return x * 2;
157
+@@
68
-}
158
+- rc->phases.exit(obj)@p
69
-
159
++ rc->phases.exit(obj, type)
70
-static int times_4(DisasContext *s, int x)
160
+// Special case: the typedefs for the methods need to declare the new argument
71
-{
161
+@ phase_typedef_hold @
72
- return x * 4;
162
+identifier obj;
73
-}
163
+@@
74
-
164
+- typedef void (*ResettableHoldPhase)(Object *obj);
75
/* Return only the rotation part of T32ExpandImm. */
165
++ typedef void (*ResettableHoldPhase)(Object *obj, ResetType type);
76
static int t32_expandimm_rot(DisasContext *s, int x)
166
+@ phase_typedef_exit @
77
{
167
+identifier obj;
168
+@@
169
+- typedef void (*ResettableExitPhase)(Object *obj);
170
++ typedef void (*ResettableExitPhase)(Object *obj, ResetType type);
78
--
171
--
79
2.20.1
172
2.34.1
80
81
diff view generated by jsdifflib
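
The effect of the generic hold-phase rule can be sketched on a hypothetical device (names are illustrative; the next patch applies this across the tree):

    /* before */
    static void mydev_reset_hold(Object *obj)
    {
        MyDevClass *mc = MYDEV_GET_CLASS(obj);

        if (mc->parent_phases.hold) {
            mc->parent_phases.hold(obj);
        }
    }

    /* after */
    static void mydev_reset_hold(Object *obj, ResetType type)
    {
        MyDevClass *mc = MYDEV_GET_CLASS(obj);

        if (mc->parent_phases.hold) {
            mc->parent_phases.hold(obj, type);
        }
    }
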
1
In arm_cpu_realizefn() we check whether the board code disabled EL3
1
We pass a ResetType argument to the Resettable class enter
2
via the has_el3 CPU object property, which we create if the CPU
2
phase method, but we don't pass it to hold and exit, even though
3
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
3
the callsites have it readily available. This means that if
4
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
4
a device cared about the ResetType it would need to record it
5
the ID_PFR1 and ID_AA64PFR0 registers.
5
in the enter phase method to use later on. Pass the type to
6
all three of the phase methods to avoid having to do that.
6
7
7
This codepath was incorrectly being taken for M-profile CPUs, which
8
Commit created with
8
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
9
the M-profile Security extension and so should have non-zero values
10
in the ID_PFR1.Security field.
11
9
12
Restrict the handling of the feature flag to A/R-profile cores.
10
for dir in hw target include; do \
11
spatch --macro-file scripts/cocci-macro-file.h \
12
--sp-file scripts/coccinelle/reset-type.cocci \
13
--keep-comments --smpl-spacing --in-place \
14
--include-headers --dir $dir; done
15
16
and no manual edits.
13
17
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20201119215617.29887-4-peter.maydell@linaro.org
21
Reviewed-by: Luc Michel <luc.michel@amd.com>
22
Message-id: 20240412160809.1260625-5-peter.maydell@linaro.org
17
---
23
---
18
target/arm/cpu.c | 2 +-
24
include/hw/resettable.h | 4 ++--
19
1 file changed, 1 insertion(+), 1 deletion(-)
25
hw/adc/npcm7xx_adc.c | 2 +-
26
hw/arm/pxa2xx_pic.c | 2 +-
27
hw/arm/smmu-common.c | 2 +-
28
hw/arm/smmuv3.c | 4 ++--
29
hw/arm/stellaris.c | 10 +++++-----
30
hw/audio/asc.c | 2 +-
31
hw/char/cadence_uart.c | 2 +-
32
hw/char/sifive_uart.c | 2 +-
33
hw/core/cpu-common.c | 2 +-
34
hw/core/qdev.c | 4 ++--
35
hw/core/reset.c | 2 +-
36
hw/core/resettable.c | 4 ++--
37
hw/display/virtio-vga.c | 4 ++--
38
hw/gpio/npcm7xx_gpio.c | 2 +-
39
hw/gpio/pl061.c | 2 +-
40
hw/gpio/stm32l4x5_gpio.c | 2 +-
41
hw/hyperv/vmbus.c | 2 +-
42
hw/i2c/allwinner-i2c.c | 2 +-
43
hw/i2c/npcm7xx_smbus.c | 2 +-
44
hw/input/adb.c | 2 +-
45
hw/input/ps2.c | 12 ++++++------
46
hw/intc/arm_gic_common.c | 2 +-
47
hw/intc/arm_gic_kvm.c | 4 ++--
48
hw/intc/arm_gicv3_common.c | 2 +-
49
hw/intc/arm_gicv3_its.c | 4 ++--
50
hw/intc/arm_gicv3_its_common.c | 2 +-
51
hw/intc/arm_gicv3_its_kvm.c | 4 ++--
52
hw/intc/arm_gicv3_kvm.c | 4 ++--
53
hw/intc/xics.c | 2 +-
54
hw/m68k/q800-glue.c | 2 +-
55
hw/misc/djmemc.c | 2 +-
56
hw/misc/iosb.c | 2 +-
57
hw/misc/mac_via.c | 8 ++++----
58
hw/misc/macio/cuda.c | 4 ++--
59
hw/misc/macio/pmu.c | 4 ++--
60
hw/misc/mos6522.c | 2 +-
61
hw/misc/npcm7xx_mft.c | 2 +-
62
hw/misc/npcm7xx_pwm.c | 2 +-
63
hw/misc/stm32l4x5_exti.c | 2 +-
64
hw/misc/stm32l4x5_rcc.c | 10 +++++-----
65
hw/misc/stm32l4x5_syscfg.c | 2 +-
66
hw/misc/xlnx-versal-cframe-reg.c | 2 +-
67
hw/misc/xlnx-versal-crl.c | 2 +-
68
hw/misc/xlnx-versal-pmc-iou-slcr.c | 2 +-
69
hw/misc/xlnx-versal-trng.c | 2 +-
70
hw/misc/xlnx-versal-xramc.c | 2 +-
71
hw/misc/xlnx-zynqmp-apu-ctrl.c | 2 +-
72
hw/misc/xlnx-zynqmp-crf.c | 2 +-
73
hw/misc/zynq_slcr.c | 4 ++--
74
hw/net/can/xlnx-zynqmp-can.c | 2 +-
75
hw/net/e1000.c | 2 +-
76
hw/net/e1000e.c | 2 +-
77
hw/net/igb.c | 2 +-
78
hw/net/igbvf.c | 2 +-
79
hw/nvram/xlnx-bbram.c | 2 +-
80
hw/nvram/xlnx-versal-efuse-ctrl.c | 2 +-
81
hw/nvram/xlnx-zynqmp-efuse.c | 2 +-
82
hw/pci-bridge/cxl_root_port.c | 4 ++--
83
hw/pci-bridge/pcie_root_port.c | 2 +-
84
hw/pci-host/bonito.c | 2 +-
85
hw/pci-host/pnv_phb.c | 4 ++--
86
hw/pci-host/pnv_phb3_msi.c | 4 ++--
87
hw/pci/pci.c | 4 ++--
88
hw/rtc/mc146818rtc.c | 2 +-
89
hw/s390x/css-bridge.c | 2 +-
90
hw/sensor/adm1266.c | 2 +-
91
hw/sensor/adm1272.c | 2 +-
92
hw/sensor/isl_pmbus_vr.c | 10 +++++-----
93
hw/sensor/max31785.c | 2 +-
94
hw/sensor/max34451.c | 2 +-
95
hw/ssi/npcm7xx_fiu.c | 2 +-
96
hw/timer/etraxfs_timer.c | 2 +-
97
hw/timer/npcm7xx_timer.c | 2 +-
98
hw/usb/hcd-dwc2.c | 8 ++++----
99
hw/usb/xlnx-versal-usb2-ctrl-regs.c | 2 +-
100
hw/virtio/virtio-pci.c | 2 +-
101
target/arm/cpu.c | 4 ++--
102
target/avr/cpu.c | 4 ++--
103
target/cris/cpu.c | 4 ++--
104
target/hexagon/cpu.c | 4 ++--
105
target/i386/cpu.c | 4 ++--
106
target/loongarch/cpu.c | 4 ++--
107
target/m68k/cpu.c | 4 ++--
108
target/microblaze/cpu.c | 4 ++--
109
target/mips/cpu.c | 4 ++--
110
target/openrisc/cpu.c | 4 ++--
111
target/ppc/cpu_init.c | 4 ++--
112
target/riscv/cpu.c | 4 ++--
113
target/rx/cpu.c | 4 ++--
114
target/sh4/cpu.c | 4 ++--
115
target/sparc/cpu.c | 4 ++--
116
target/tricore/cpu.c | 4 ++--
117
target/xtensa/cpu.c | 4 ++--
118
94 files changed, 150 insertions(+), 150 deletions(-)
20
119
120
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
121
index XXXXXXX..XXXXXXX 100644
122
--- a/include/hw/resettable.h
123
+++ b/include/hw/resettable.h
124
@@ -XXX,XX +XXX,XX @@ typedef enum ResetType {
125
* the callback.
126
*/
127
typedef void (*ResettableEnterPhase)(Object *obj, ResetType type);
128
-typedef void (*ResettableHoldPhase)(Object *obj);
129
-typedef void (*ResettableExitPhase)(Object *obj);
130
+typedef void (*ResettableHoldPhase)(Object *obj, ResetType type);
131
+typedef void (*ResettableExitPhase)(Object *obj, ResetType type);
132
typedef ResettableState * (*ResettableGetState)(Object *obj);
133
typedef void (*ResettableTrFunction)(Object *obj);
134
typedef ResettableTrFunction (*ResettableGetTrFunction)(Object *obj);
135
diff --git a/hw/adc/npcm7xx_adc.c b/hw/adc/npcm7xx_adc.c
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/adc/npcm7xx_adc.c
138
+++ b/hw/adc/npcm7xx_adc.c
139
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_adc_enter_reset(Object *obj, ResetType type)
140
npcm7xx_adc_reset(s);
141
}
142
143
-static void npcm7xx_adc_hold_reset(Object *obj)
144
+static void npcm7xx_adc_hold_reset(Object *obj, ResetType type)
145
{
146
NPCM7xxADCState *s = NPCM7XX_ADC(obj);
147
148
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
149
index XXXXXXX..XXXXXXX 100644
150
--- a/hw/arm/pxa2xx_pic.c
151
+++ b/hw/arm/pxa2xx_pic.c
152
@@ -XXX,XX +XXX,XX @@ static int pxa2xx_pic_post_load(void *opaque, int version_id)
153
return 0;
154
}
155
156
-static void pxa2xx_pic_reset_hold(Object *obj)
157
+static void pxa2xx_pic_reset_hold(Object *obj, ResetType type)
158
{
159
PXA2xxPICState *s = PXA2XX_PIC(obj);
160
161
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/hw/arm/smmu-common.c
164
+++ b/hw/arm/smmu-common.c
165
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
166
}
167
}
168
169
-static void smmu_base_reset_hold(Object *obj)
170
+static void smmu_base_reset_hold(Object *obj, ResetType type)
171
{
172
SMMUState *s = ARM_SMMU(obj);
173
174
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/hw/arm/smmuv3.c
177
+++ b/hw/arm/smmuv3.c
178
@@ -XXX,XX +XXX,XX @@ static void smmu_init_irq(SMMUv3State *s, SysBusDevice *dev)
179
}
180
}
181
182
-static void smmu_reset_hold(Object *obj)
183
+static void smmu_reset_hold(Object *obj, ResetType type)
184
{
185
SMMUv3State *s = ARM_SMMUV3(obj);
186
SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);
187
188
if (c->parent_phases.hold) {
189
- c->parent_phases.hold(obj);
190
+ c->parent_phases.hold(obj, type);
191
}
192
193
smmuv3_init_regs(s);
194
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
195
index XXXXXXX..XXXXXXX 100644
196
--- a/hw/arm/stellaris.c
197
+++ b/hw/arm/stellaris.c
198
@@ -XXX,XX +XXX,XX @@ static void stellaris_sys_reset_enter(Object *obj, ResetType type)
199
s->dcgc[0] = 1;
200
}
201
202
-static void stellaris_sys_reset_hold(Object *obj)
203
+static void stellaris_sys_reset_hold(Object *obj, ResetType type)
204
{
205
ssys_state *s = STELLARIS_SYS(obj);
206
207
@@ -XXX,XX +XXX,XX @@ static void stellaris_sys_reset_hold(Object *obj)
208
ssys_calculate_system_clock(s, true);
209
}
210
211
-static void stellaris_sys_reset_exit(Object *obj)
212
+static void stellaris_sys_reset_exit(Object *obj, ResetType type)
213
{
214
}
215
216
@@ -XXX,XX +XXX,XX @@ static void stellaris_i2c_reset_enter(Object *obj, ResetType type)
217
i2c_end_transfer(s->bus);
218
}
219
220
-static void stellaris_i2c_reset_hold(Object *obj)
221
+static void stellaris_i2c_reset_hold(Object *obj, ResetType type)
222
{
223
stellaris_i2c_state *s = STELLARIS_I2C(obj);
224
225
@@ -XXX,XX +XXX,XX @@ static void stellaris_i2c_reset_hold(Object *obj)
226
s->mcr = 0;
227
}
228
229
-static void stellaris_i2c_reset_exit(Object *obj)
230
+static void stellaris_i2c_reset_exit(Object *obj, ResetType type)
231
{
232
stellaris_i2c_state *s = STELLARIS_I2C(obj);
233
234
@@ -XXX,XX +XXX,XX @@ static void stellaris_adc_trigger(void *opaque, int irq, int level)
235
}
236
}
237
238
-static void stellaris_adc_reset_hold(Object *obj)
239
+static void stellaris_adc_reset_hold(Object *obj, ResetType type)
240
{
241
StellarisADCState *s = STELLARIS_ADC(obj);
242
int n;
243
diff --git a/hw/audio/asc.c b/hw/audio/asc.c
244
index XXXXXXX..XXXXXXX 100644
245
--- a/hw/audio/asc.c
246
+++ b/hw/audio/asc.c
247
@@ -XXX,XX +XXX,XX @@ static void asc_fifo_init(ASCFIFOState *fs, int index)
248
g_free(name);
249
}
250
251
-static void asc_reset_hold(Object *obj)
252
+static void asc_reset_hold(Object *obj, ResetType type)
253
{
254
ASCState *s = ASC(obj);
255
256
diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
257
index XXXXXXX..XXXXXXX 100644
258
--- a/hw/char/cadence_uart.c
259
+++ b/hw/char/cadence_uart.c
260
@@ -XXX,XX +XXX,XX @@ static void cadence_uart_reset_init(Object *obj, ResetType type)
261
s->r[R_TTRIG] = 0x00000020;
262
}
263
264
-static void cadence_uart_reset_hold(Object *obj)
265
+static void cadence_uart_reset_hold(Object *obj, ResetType type)
266
{
267
CadenceUARTState *s = CADENCE_UART(obj);
268
269
diff --git a/hw/char/sifive_uart.c b/hw/char/sifive_uart.c
270
index XXXXXXX..XXXXXXX 100644
271
--- a/hw/char/sifive_uart.c
272
+++ b/hw/char/sifive_uart.c
273
@@ -XXX,XX +XXX,XX @@ static void sifive_uart_reset_enter(Object *obj, ResetType type)
274
s->rx_fifo_len = 0;
275
}
276
277
-static void sifive_uart_reset_hold(Object *obj)
278
+static void sifive_uart_reset_hold(Object *obj, ResetType type)
279
{
280
SiFiveUARTState *s = SIFIVE_UART(obj);
281
qemu_irq_lower(s->irq);
282
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
283
index XXXXXXX..XXXXXXX 100644
284
--- a/hw/core/cpu-common.c
285
+++ b/hw/core/cpu-common.c
286
@@ -XXX,XX +XXX,XX @@ void cpu_reset(CPUState *cpu)
287
trace_cpu_reset(cpu->cpu_index);
288
}
289
290
-static void cpu_common_reset_hold(Object *obj)
291
+static void cpu_common_reset_hold(Object *obj, ResetType type)
292
{
293
CPUState *cpu = CPU(obj);
294
CPUClass *cc = CPU_GET_CLASS(cpu);
295
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
296
index XXXXXXX..XXXXXXX 100644
297
--- a/hw/core/qdev.c
298
+++ b/hw/core/qdev.c
299
@@ -XXX,XX +XXX,XX @@ static void device_phases_reset(DeviceState *dev)
300
rc->phases.enter(OBJECT(dev), RESET_TYPE_COLD);
301
}
302
if (rc->phases.hold) {
303
- rc->phases.hold(OBJECT(dev));
304
+ rc->phases.hold(OBJECT(dev), RESET_TYPE_COLD);
305
}
306
if (rc->phases.exit) {
307
- rc->phases.exit(OBJECT(dev));
308
+ rc->phases.exit(OBJECT(dev), RESET_TYPE_COLD);
309
}
310
}
311
312
diff --git a/hw/core/reset.c b/hw/core/reset.c
313
index XXXXXXX..XXXXXXX 100644
314
--- a/hw/core/reset.c
315
+++ b/hw/core/reset.c
316
@@ -XXX,XX +XXX,XX @@ static ResettableState *legacy_reset_get_state(Object *obj)
317
return &lr->reset_state;
318
}
319
320
-static void legacy_reset_hold(Object *obj)
321
+static void legacy_reset_hold(Object *obj, ResetType type)
322
{
323
LegacyReset *lr = LEGACY_RESET(obj);
324
325
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
326
index XXXXXXX..XXXXXXX 100644
327
--- a/hw/core/resettable.c
328
+++ b/hw/core/resettable.c
329
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_hold(Object *obj, void *opaque, ResetType type)
330
trace_resettable_transitional_function(obj, obj_typename);
331
tr_func(obj);
332
} else if (rc->phases.hold) {
333
- rc->phases.hold(obj);
334
+ rc->phases.hold(obj, type);
335
}
336
}
337
trace_resettable_phase_hold_end(obj, obj_typename, s->count);
338
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_exit(Object *obj, void *opaque, ResetType type)
339
if (--s->count == 0) {
340
trace_resettable_phase_exit_exec(obj, obj_typename, !!rc->phases.exit);
341
if (rc->phases.exit && !resettable_get_tr_func(rc, obj)) {
342
- rc->phases.exit(obj);
343
+ rc->phases.exit(obj, type);
344
}
345
}
346
s->exit_phase_in_progress = false;
347
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
348
index XXXXXXX..XXXXXXX 100644
349
--- a/hw/display/virtio-vga.c
350
+++ b/hw/display/virtio-vga.c
351
@@ -XXX,XX +XXX,XX @@ static void virtio_vga_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
352
}
353
}
354
355
-static void virtio_vga_base_reset_hold(Object *obj)
356
+static void virtio_vga_base_reset_hold(Object *obj, ResetType type)
357
{
358
VirtIOVGABaseClass *klass = VIRTIO_VGA_BASE_GET_CLASS(obj);
359
VirtIOVGABase *vvga = VIRTIO_VGA_BASE(obj);
360
361
/* reset virtio-gpu */
362
if (klass->parent_phases.hold) {
363
- klass->parent_phases.hold(obj);
364
+ klass->parent_phases.hold(obj, type);
365
}
366
367
/* reset vga */
368
diff --git a/hw/gpio/npcm7xx_gpio.c b/hw/gpio/npcm7xx_gpio.c
369
index XXXXXXX..XXXXXXX 100644
370
--- a/hw/gpio/npcm7xx_gpio.c
371
+++ b/hw/gpio/npcm7xx_gpio.c
372
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_gpio_enter_reset(Object *obj, ResetType type)
373
s->regs[NPCM7XX_GPIO_ODSC] = s->reset_odsc;
374
}
375
376
-static void npcm7xx_gpio_hold_reset(Object *obj)
377
+static void npcm7xx_gpio_hold_reset(Object *obj, ResetType type)
378
{
379
NPCM7xxGPIOState *s = NPCM7XX_GPIO(obj);
380
381
diff --git a/hw/gpio/pl061.c b/hw/gpio/pl061.c
382
index XXXXXXX..XXXXXXX 100644
383
--- a/hw/gpio/pl061.c
384
+++ b/hw/gpio/pl061.c
385
@@ -XXX,XX +XXX,XX @@ static void pl061_enter_reset(Object *obj, ResetType type)
386
s->amsel = 0;
387
}
388
389
-static void pl061_hold_reset(Object *obj)
390
+static void pl061_hold_reset(Object *obj, ResetType type)
391
{
392
PL061State *s = PL061(obj);
393
int i, level;
394
diff --git a/hw/gpio/stm32l4x5_gpio.c b/hw/gpio/stm32l4x5_gpio.c
395
index XXXXXXX..XXXXXXX 100644
396
--- a/hw/gpio/stm32l4x5_gpio.c
397
+++ b/hw/gpio/stm32l4x5_gpio.c
398
@@ -XXX,XX +XXX,XX @@ static bool is_push_pull(Stm32l4x5GpioState *s, unsigned pin)
399
return extract32(s->otyper, pin, 1) == 0;
400
}
401
402
-static void stm32l4x5_gpio_reset_hold(Object *obj)
403
+static void stm32l4x5_gpio_reset_hold(Object *obj, ResetType type)
404
{
405
Stm32l4x5GpioState *s = STM32L4X5_GPIO(obj);
406
407
diff --git a/hw/hyperv/vmbus.c b/hw/hyperv/vmbus.c
408
index XXXXXXX..XXXXXXX 100644
409
--- a/hw/hyperv/vmbus.c
410
+++ b/hw/hyperv/vmbus.c
411
@@ -XXX,XX +XXX,XX @@ static void vmbus_unrealize(BusState *bus)
412
qemu_mutex_destroy(&vmbus->rx_queue_lock);
413
}
414
415
-static void vmbus_reset_hold(Object *obj)
416
+static void vmbus_reset_hold(Object *obj, ResetType type)
417
{
418
vmbus_deinit(VMBUS(obj));
419
}
420
diff --git a/hw/i2c/allwinner-i2c.c b/hw/i2c/allwinner-i2c.c
421
index XXXXXXX..XXXXXXX 100644
422
--- a/hw/i2c/allwinner-i2c.c
423
+++ b/hw/i2c/allwinner-i2c.c
424
@@ -XXX,XX +XXX,XX @@ static inline bool allwinner_i2c_interrupt_is_enabled(AWI2CState *s)
425
return s->cntr & TWI_CNTR_INT_EN;
426
}
427
428
-static void allwinner_i2c_reset_hold(Object *obj)
429
+static void allwinner_i2c_reset_hold(Object *obj, ResetType type)
430
{
431
AWI2CState *s = AW_I2C(obj);
432
433
diff --git a/hw/i2c/npcm7xx_smbus.c b/hw/i2c/npcm7xx_smbus.c
434
index XXXXXXX..XXXXXXX 100644
435
--- a/hw/i2c/npcm7xx_smbus.c
436
+++ b/hw/i2c/npcm7xx_smbus.c
437
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_enter_reset(Object *obj, ResetType type)
438
s->rx_cur = 0;
439
}
440
441
-static void npcm7xx_smbus_hold_reset(Object *obj)
442
+static void npcm7xx_smbus_hold_reset(Object *obj, ResetType type)
443
{
444
NPCM7xxSMBusState *s = NPCM7XX_SMBUS(obj);
445
446
diff --git a/hw/input/adb.c b/hw/input/adb.c
447
index XXXXXXX..XXXXXXX 100644
448
--- a/hw/input/adb.c
449
+++ b/hw/input/adb.c
450
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_adb_bus = {
451
}
452
};
453
454
-static void adb_bus_reset_hold(Object *obj)
455
+static void adb_bus_reset_hold(Object *obj, ResetType type)
456
{
457
ADBBusState *adb_bus = ADB_BUS(obj);
458
459
diff --git a/hw/input/ps2.c b/hw/input/ps2.c
460
index XXXXXXX..XXXXXXX 100644
461
--- a/hw/input/ps2.c
462
+++ b/hw/input/ps2.c
463
@@ -XXX,XX +XXX,XX @@ void ps2_write_mouse(PS2MouseState *s, int val)
464
}
465
}
466
467
-static void ps2_reset_hold(Object *obj)
468
+static void ps2_reset_hold(Object *obj, ResetType type)
469
{
470
PS2State *s = PS2_DEVICE(obj);
471
472
@@ -XXX,XX +XXX,XX @@ static void ps2_reset_hold(Object *obj)
473
ps2_reset_queue(s);
474
}
475
476
-static void ps2_reset_exit(Object *obj)
477
+static void ps2_reset_exit(Object *obj, ResetType type)
478
{
479
PS2State *s = PS2_DEVICE(obj);
480
481
@@ -XXX,XX +XXX,XX @@ static void ps2_common_post_load(PS2State *s)
482
q->cwptr = ccount ? (q->rptr + ccount) & (PS2_BUFFER_SIZE - 1) : -1;
483
}
484
485
-static void ps2_kbd_reset_hold(Object *obj)
486
+static void ps2_kbd_reset_hold(Object *obj, ResetType type)
487
{
488
PS2DeviceClass *ps2dc = PS2_DEVICE_GET_CLASS(obj);
489
PS2KbdState *s = PS2_KBD_DEVICE(obj);
490
@@ -XXX,XX +XXX,XX @@ static void ps2_kbd_reset_hold(Object *obj)
491
trace_ps2_kbd_reset(s);
492
493
if (ps2dc->parent_phases.hold) {
494
- ps2dc->parent_phases.hold(obj);
495
+ ps2dc->parent_phases.hold(obj, type);
496
}
497
498
s->scan_enabled = 1;
499
@@ -XXX,XX +XXX,XX @@ static void ps2_kbd_reset_hold(Object *obj)
500
s->modifiers = 0;
501
}
502
503
-static void ps2_mouse_reset_hold(Object *obj)
504
+static void ps2_mouse_reset_hold(Object *obj, ResetType type)
505
{
506
PS2DeviceClass *ps2dc = PS2_DEVICE_GET_CLASS(obj);
507
PS2MouseState *s = PS2_MOUSE_DEVICE(obj);
508
@@ -XXX,XX +XXX,XX @@ static void ps2_mouse_reset_hold(Object *obj)
509
trace_ps2_mouse_reset(s);
510
511
if (ps2dc->parent_phases.hold) {
512
- ps2dc->parent_phases.hold(obj);
513
+ ps2dc->parent_phases.hold(obj, type);
514
}
515
516
s->mouse_status = 0;
517
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
518
index XXXXXXX..XXXXXXX 100644
519
--- a/hw/intc/arm_gic_common.c
520
+++ b/hw/intc/arm_gic_common.c
521
@@ -XXX,XX +XXX,XX @@ static inline void arm_gic_common_reset_irq_state(GICState *s, int cidx,
522
}
523
}
524
525
-static void arm_gic_common_reset_hold(Object *obj)
526
+static void arm_gic_common_reset_hold(Object *obj, ResetType type)
527
{
528
GICState *s = ARM_GIC_COMMON(obj);
529
int i, j;
530
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
531
index XXXXXXX..XXXXXXX 100644
532
--- a/hw/intc/arm_gic_kvm.c
533
+++ b/hw/intc/arm_gic_kvm.c
534
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_get(GICState *s)
535
}
536
}
537
538
-static void kvm_arm_gic_reset_hold(Object *obj)
539
+static void kvm_arm_gic_reset_hold(Object *obj, ResetType type)
540
{
541
GICState *s = ARM_GIC_COMMON(obj);
542
KVMARMGICClass *kgc = KVM_ARM_GIC_GET_CLASS(s);
543
544
if (kgc->parent_phases.hold) {
545
- kgc->parent_phases.hold(obj);
546
+ kgc->parent_phases.hold(obj, type);
547
}
548
549
if (kvm_arm_gic_can_save_restore(s)) {
550
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
551
index XXXXXXX..XXXXXXX 100644
552
--- a/hw/intc/arm_gicv3_common.c
553
+++ b/hw/intc/arm_gicv3_common.c
554
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_finalize(Object *obj)
555
g_free(s->redist_region_count);
556
}
557
558
-static void arm_gicv3_common_reset_hold(Object *obj)
559
+static void arm_gicv3_common_reset_hold(Object *obj, ResetType type)
560
{
561
GICv3State *s = ARM_GICV3_COMMON(obj);
562
int i;
563
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
564
index XXXXXXX..XXXXXXX 100644
565
--- a/hw/intc/arm_gicv3_its.c
566
+++ b/hw/intc/arm_gicv3_its.c
567
@@ -XXX,XX +XXX,XX @@ static void gicv3_arm_its_realize(DeviceState *dev, Error **errp)
568
}
569
}
570
571
-static void gicv3_its_reset_hold(Object *obj)
572
+static void gicv3_its_reset_hold(Object *obj, ResetType type)
573
{
574
GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
575
GICv3ITSClass *c = ARM_GICV3_ITS_GET_CLASS(s);
576
577
if (c->parent_phases.hold) {
578
- c->parent_phases.hold(obj);
579
+ c->parent_phases.hold(obj, type);
580
}
581
582
/* Quiescent bit reset to 1 */
583
diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
584
index XXXXXXX..XXXXXXX 100644
585
--- a/hw/intc/arm_gicv3_its_common.c
586
+++ b/hw/intc/arm_gicv3_its_common.c
587
@@ -XXX,XX +XXX,XX @@ void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops,
588
msi_nonbroken = true;
589
}
590
591
-static void gicv3_its_common_reset_hold(Object *obj)
592
+static void gicv3_its_common_reset_hold(Object *obj, ResetType type)
593
{
594
GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
595
596
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
597
index XXXXXXX..XXXXXXX 100644
598
--- a/hw/intc/arm_gicv3_its_kvm.c
599
+++ b/hw/intc/arm_gicv3_its_kvm.c
600
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
601
GITS_CTLR, &s->ctlr, true, &error_abort);
602
}
603
604
-static void kvm_arm_its_reset_hold(Object *obj)
605
+static void kvm_arm_its_reset_hold(Object *obj, ResetType type)
606
{
607
GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
608
KVMARMITSClass *c = KVM_ARM_ITS_GET_CLASS(s);
609
int i;
610
611
if (c->parent_phases.hold) {
612
- c->parent_phases.hold(obj);
613
+ c->parent_phases.hold(obj, type);
614
}
615
616
if (kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
617
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
618
index XXXXXXX..XXXXXXX 100644
619
--- a/hw/intc/arm_gicv3_kvm.c
620
+++ b/hw/intc/arm_gicv3_kvm.c
621
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
622
c->icc_ctlr_el1[GICV3_S] = c->icc_ctlr_el1[GICV3_NS];
623
}
624
625
-static void kvm_arm_gicv3_reset_hold(Object *obj)
626
+static void kvm_arm_gicv3_reset_hold(Object *obj, ResetType type)
627
{
628
GICv3State *s = ARM_GICV3_COMMON(obj);
629
KVMARMGICv3Class *kgc = KVM_ARM_GICV3_GET_CLASS(s);
630
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_reset_hold(Object *obj)
631
DPRINTF("Reset\n");
632
633
if (kgc->parent_phases.hold) {
634
- kgc->parent_phases.hold(obj);
635
+ kgc->parent_phases.hold(obj, type);
636
}
637
638
if (s->migration_blocker) {
639
diff --git a/hw/intc/xics.c b/hw/intc/xics.c
640
index XXXXXXX..XXXXXXX 100644
641
--- a/hw/intc/xics.c
642
+++ b/hw/intc/xics.c
643
@@ -XXX,XX +XXX,XX @@ static void ics_reset_irq(ICSIRQState *irq)
644
irq->saved_priority = 0xff;
645
}
646
647
-static void ics_reset_hold(Object *obj)
648
+static void ics_reset_hold(Object *obj, ResetType type)
649
{
650
ICSState *ics = ICS(obj);
651
g_autofree uint8_t *flags = g_malloc(ics->nr_irqs);
652
diff --git a/hw/m68k/q800-glue.c b/hw/m68k/q800-glue.c
653
index XXXXXXX..XXXXXXX 100644
654
--- a/hw/m68k/q800-glue.c
655
+++ b/hw/m68k/q800-glue.c
656
@@ -XXX,XX +XXX,XX @@ static void glue_nmi_release(void *opaque)
657
GLUE_set_irq(s, GLUE_IRQ_IN_NMI, 0);
658
}
659
660
-static void glue_reset_hold(Object *obj)
661
+static void glue_reset_hold(Object *obj, ResetType type)
662
{
663
GLUEState *s = GLUE(obj);
664
665
diff --git a/hw/misc/djmemc.c b/hw/misc/djmemc.c
666
index XXXXXXX..XXXXXXX 100644
667
--- a/hw/misc/djmemc.c
668
+++ b/hw/misc/djmemc.c
669
@@ -XXX,XX +XXX,XX @@ static void djmemc_init(Object *obj)
670
sysbus_init_mmio(sbd, &s->mem_regs);
671
}
672
673
-static void djmemc_reset_hold(Object *obj)
674
+static void djmemc_reset_hold(Object *obj, ResetType type)
675
{
676
DJMEMCState *s = DJMEMC(obj);
677
678
diff --git a/hw/misc/iosb.c b/hw/misc/iosb.c
679
index XXXXXXX..XXXXXXX 100644
680
--- a/hw/misc/iosb.c
681
+++ b/hw/misc/iosb.c
682
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps iosb_mmio_ops = {
683
.endianness = DEVICE_BIG_ENDIAN,
684
};
685
686
-static void iosb_reset_hold(Object *obj)
687
+static void iosb_reset_hold(Object *obj, ResetType type)
688
{
689
IOSBState *s = IOSB(obj);
690
691
diff --git a/hw/misc/mac_via.c b/hw/misc/mac_via.c
692
index XXXXXXX..XXXXXXX 100644
693
--- a/hw/misc/mac_via.c
694
+++ b/hw/misc/mac_via.c
695
@@ -XXX,XX +XXX,XX @@ static int via1_post_load(void *opaque, int version_id)
696
}
697
698
/* VIA 1 */
699
-static void mos6522_q800_via1_reset_hold(Object *obj)
700
+static void mos6522_q800_via1_reset_hold(Object *obj, ResetType type)
701
{
702
MOS6522Q800VIA1State *v1s = MOS6522_Q800_VIA1(obj);
703
MOS6522State *ms = MOS6522(v1s);
704
@@ -XXX,XX +XXX,XX @@ static void mos6522_q800_via1_reset_hold(Object *obj)
705
ADBBusState *adb_bus = &v1s->adb_bus;
706
707
if (mdc->parent_phases.hold) {
708
- mdc->parent_phases.hold(obj);
709
+ mdc->parent_phases.hold(obj, type);
710
}
711
712
ms->timers[0].frequency = VIA_TIMER_FREQ;
713
@@ -XXX,XX +XXX,XX @@ static void mos6522_q800_via2_portB_write(MOS6522State *s)
714
}
715
}
716
717
-static void mos6522_q800_via2_reset_hold(Object *obj)
718
+static void mos6522_q800_via2_reset_hold(Object *obj, ResetType type)
719
{
720
MOS6522State *ms = MOS6522(obj);
721
MOS6522DeviceClass *mdc = MOS6522_GET_CLASS(ms);
722
723
if (mdc->parent_phases.hold) {
724
- mdc->parent_phases.hold(obj);
725
+ mdc->parent_phases.hold(obj, type);
726
}
727
728
ms->timers[0].frequency = VIA_TIMER_FREQ;
729
diff --git a/hw/misc/macio/cuda.c b/hw/misc/macio/cuda.c
730
index XXXXXXX..XXXXXXX 100644
731
--- a/hw/misc/macio/cuda.c
732
+++ b/hw/misc/macio/cuda.c
733
@@ -XXX,XX +XXX,XX @@ static void mos6522_cuda_portB_write(MOS6522State *s)
734
cuda_update(cs);
735
}
736
737
-static void mos6522_cuda_reset_hold(Object *obj)
738
+static void mos6522_cuda_reset_hold(Object *obj, ResetType type)
739
{
740
MOS6522State *ms = MOS6522(obj);
741
MOS6522DeviceClass *mdc = MOS6522_GET_CLASS(ms);
742
743
if (mdc->parent_phases.hold) {
744
- mdc->parent_phases.hold(obj);
745
+ mdc->parent_phases.hold(obj, type);
746
}
747
748
ms->timers[0].frequency = CUDA_TIMER_FREQ;
749
diff --git a/hw/misc/macio/pmu.c b/hw/misc/macio/pmu.c
750
index XXXXXXX..XXXXXXX 100644
751
--- a/hw/misc/macio/pmu.c
752
+++ b/hw/misc/macio/pmu.c
753
@@ -XXX,XX +XXX,XX @@ static void mos6522_pmu_portB_write(MOS6522State *s)
754
pmu_update(ps);
755
}
756
757
-static void mos6522_pmu_reset_hold(Object *obj)
758
+static void mos6522_pmu_reset_hold(Object *obj, ResetType type)
759
{
760
MOS6522State *ms = MOS6522(obj);
761
MOS6522PMUState *mps = container_of(ms, MOS6522PMUState, parent_obj);
762
@@ -XXX,XX +XXX,XX @@ static void mos6522_pmu_reset_hold(Object *obj)
763
MOS6522DeviceClass *mdc = MOS6522_GET_CLASS(ms);
764
765
if (mdc->parent_phases.hold) {
766
- mdc->parent_phases.hold(obj);
767
+ mdc->parent_phases.hold(obj, type);
768
}
769
770
ms->timers[0].frequency = VIA_TIMER_FREQ;
771
diff --git a/hw/misc/mos6522.c b/hw/misc/mos6522.c
772
index XXXXXXX..XXXXXXX 100644
773
--- a/hw/misc/mos6522.c
774
+++ b/hw/misc/mos6522.c
775
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_mos6522 = {
776
}
777
};
778
779
-static void mos6522_reset_hold(Object *obj)
780
+static void mos6522_reset_hold(Object *obj, ResetType type)
781
{
782
MOS6522State *s = MOS6522(obj);
783
784
diff --git a/hw/misc/npcm7xx_mft.c b/hw/misc/npcm7xx_mft.c
785
index XXXXXXX..XXXXXXX 100644
786
--- a/hw/misc/npcm7xx_mft.c
787
+++ b/hw/misc/npcm7xx_mft.c
788
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_mft_enter_reset(Object *obj, ResetType type)
789
npcm7xx_mft_reset(s);
790
}
791
792
-static void npcm7xx_mft_hold_reset(Object *obj)
793
+static void npcm7xx_mft_hold_reset(Object *obj, ResetType type)
794
{
795
NPCM7xxMFTState *s = NPCM7XX_MFT(obj);
796
797
diff --git a/hw/misc/npcm7xx_pwm.c b/hw/misc/npcm7xx_pwm.c
798
index XXXXXXX..XXXXXXX 100644
799
--- a/hw/misc/npcm7xx_pwm.c
800
+++ b/hw/misc/npcm7xx_pwm.c
801
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_pwm_enter_reset(Object *obj, ResetType type)
802
s->piir = 0x00000000;
803
}
804
805
-static void npcm7xx_pwm_hold_reset(Object *obj)
806
+static void npcm7xx_pwm_hold_reset(Object *obj, ResetType type)
807
{
808
NPCM7xxPWMState *s = NPCM7XX_PWM(obj);
809
int i;
810
diff --git a/hw/misc/stm32l4x5_exti.c b/hw/misc/stm32l4x5_exti.c
811
index XXXXXXX..XXXXXXX 100644
812
--- a/hw/misc/stm32l4x5_exti.c
813
+++ b/hw/misc/stm32l4x5_exti.c
814
@@ -XXX,XX +XXX,XX @@ static unsigned configurable_mask(unsigned bank)
815
return valid_mask(bank) & ~exti_romask[bank];
816
}
817
818
-static void stm32l4x5_exti_reset_hold(Object *obj)
819
+static void stm32l4x5_exti_reset_hold(Object *obj, ResetType type)
820
{
821
Stm32l4x5ExtiState *s = STM32L4X5_EXTI(obj);
822
823
diff --git a/hw/misc/stm32l4x5_rcc.c b/hw/misc/stm32l4x5_rcc.c
824
index XXXXXXX..XXXXXXX 100644
825
--- a/hw/misc/stm32l4x5_rcc.c
826
+++ b/hw/misc/stm32l4x5_rcc.c
827
@@ -XXX,XX +XXX,XX @@ static void clock_mux_reset_enter(Object *obj, ResetType type)
828
set_clock_mux_init_info(s, s->id);
829
}
830
831
-static void clock_mux_reset_hold(Object *obj)
832
+static void clock_mux_reset_hold(Object *obj, ResetType type)
833
{
834
RccClockMuxState *s = RCC_CLOCK_MUX(obj);
835
clock_mux_update(s, true);
836
}
837
838
-static void clock_mux_reset_exit(Object *obj)
839
+static void clock_mux_reset_exit(Object *obj, ResetType type)
840
{
841
RccClockMuxState *s = RCC_CLOCK_MUX(obj);
842
clock_mux_update(s, false);
843
@@ -XXX,XX +XXX,XX @@ static void pll_reset_enter(Object *obj, ResetType type)
844
set_pll_init_info(s, s->id);
845
}
846
847
-static void pll_reset_hold(Object *obj)
848
+static void pll_reset_hold(Object *obj, ResetType type)
849
{
850
RccPllState *s = RCC_PLL(obj);
851
pll_update(s, true);
852
}
853
854
-static void pll_reset_exit(Object *obj)
855
+static void pll_reset_exit(Object *obj, ResetType type)
856
{
857
RccPllState *s = RCC_PLL(obj);
858
pll_update(s, false);
859
@@ -XXX,XX +XXX,XX @@ static void rcc_update_csr(Stm32l4x5RccState *s)
860
rcc_update_irq(s);
861
}
862
863
-static void stm32l4x5_rcc_reset_hold(Object *obj)
864
+static void stm32l4x5_rcc_reset_hold(Object *obj, ResetType type)
865
{
866
Stm32l4x5RccState *s = STM32L4X5_RCC(obj);
867
s->cr = 0x00000063;
868
diff --git a/hw/misc/stm32l4x5_syscfg.c b/hw/misc/stm32l4x5_syscfg.c
869
index XXXXXXX..XXXXXXX 100644
870
--- a/hw/misc/stm32l4x5_syscfg.c
871
+++ b/hw/misc/stm32l4x5_syscfg.c
872
@@ -XXX,XX +XXX,XX @@
873
874
#define NUM_LINES_PER_EXTICR_REG 4
875
876
-static void stm32l4x5_syscfg_hold_reset(Object *obj)
877
+static void stm32l4x5_syscfg_hold_reset(Object *obj, ResetType type)
878
{
879
Stm32l4x5SyscfgState *s = STM32L4X5_SYSCFG(obj);
880
881
diff --git a/hw/misc/xlnx-versal-cframe-reg.c b/hw/misc/xlnx-versal-cframe-reg.c
882
index XXXXXXX..XXXXXXX 100644
883
--- a/hw/misc/xlnx-versal-cframe-reg.c
884
+++ b/hw/misc/xlnx-versal-cframe-reg.c
885
@@ -XXX,XX +XXX,XX @@ static void cframe_reg_reset_enter(Object *obj, ResetType type)
886
}
887
}
888
889
-static void cframe_reg_reset_hold(Object *obj)
890
+static void cframe_reg_reset_hold(Object *obj, ResetType type)
891
{
892
XlnxVersalCFrameReg *s = XLNX_VERSAL_CFRAME_REG(obj);
893
894
diff --git a/hw/misc/xlnx-versal-crl.c b/hw/misc/xlnx-versal-crl.c
895
index XXXXXXX..XXXXXXX 100644
896
--- a/hw/misc/xlnx-versal-crl.c
897
+++ b/hw/misc/xlnx-versal-crl.c
898
@@ -XXX,XX +XXX,XX @@ static void crl_reset_enter(Object *obj, ResetType type)
899
}
900
}
901
902
-static void crl_reset_hold(Object *obj)
903
+static void crl_reset_hold(Object *obj, ResetType type)
904
{
905
XlnxVersalCRL *s = XLNX_VERSAL_CRL(obj);
906
907
diff --git a/hw/misc/xlnx-versal-pmc-iou-slcr.c b/hw/misc/xlnx-versal-pmc-iou-slcr.c
908
index XXXXXXX..XXXXXXX 100644
909
--- a/hw/misc/xlnx-versal-pmc-iou-slcr.c
910
+++ b/hw/misc/xlnx-versal-pmc-iou-slcr.c
911
@@ -XXX,XX +XXX,XX @@ static void xlnx_versal_pmc_iou_slcr_reset_init(Object *obj, ResetType type)
912
}
913
}
914
915
-static void xlnx_versal_pmc_iou_slcr_reset_hold(Object *obj)
916
+static void xlnx_versal_pmc_iou_slcr_reset_hold(Object *obj, ResetType type)
917
{
918
XlnxVersalPmcIouSlcr *s = XILINX_VERSAL_PMC_IOU_SLCR(obj);
919
920
diff --git a/hw/misc/xlnx-versal-trng.c b/hw/misc/xlnx-versal-trng.c
921
index XXXXXXX..XXXXXXX 100644
922
--- a/hw/misc/xlnx-versal-trng.c
923
+++ b/hw/misc/xlnx-versal-trng.c
924
@@ -XXX,XX +XXX,XX @@ static void trng_unrealize(DeviceState *dev)
925
s->prng = NULL;
926
}
927
928
-static void trng_reset_hold(Object *obj)
929
+static void trng_reset_hold(Object *obj, ResetType type)
930
{
931
trng_reset(XLNX_VERSAL_TRNG(obj));
932
}
933
diff --git a/hw/misc/xlnx-versal-xramc.c b/hw/misc/xlnx-versal-xramc.c
934
index XXXXXXX..XXXXXXX 100644
935
--- a/hw/misc/xlnx-versal-xramc.c
936
+++ b/hw/misc/xlnx-versal-xramc.c
937
@@ -XXX,XX +XXX,XX @@ static void xram_ctrl_reset_enter(Object *obj, ResetType type)
938
ARRAY_FIELD_DP32(s->regs, XRAM_IMP, SIZE, s->cfg.encoded_size);
939
}
940
941
-static void xram_ctrl_reset_hold(Object *obj)
942
+static void xram_ctrl_reset_hold(Object *obj, ResetType type)
943
{
944
XlnxXramCtrl *s = XLNX_XRAM_CTRL(obj);
945
946
diff --git a/hw/misc/xlnx-zynqmp-apu-ctrl.c b/hw/misc/xlnx-zynqmp-apu-ctrl.c
947
index XXXXXXX..XXXXXXX 100644
948
--- a/hw/misc/xlnx-zynqmp-apu-ctrl.c
949
+++ b/hw/misc/xlnx-zynqmp-apu-ctrl.c
950
@@ -XXX,XX +XXX,XX @@ static void zynqmp_apu_reset_enter(Object *obj, ResetType type)
951
s->cpu_in_wfi = 0;
952
}
953
954
-static void zynqmp_apu_reset_hold(Object *obj)
955
+static void zynqmp_apu_reset_hold(Object *obj, ResetType type)
956
{
957
XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(obj);
958
959
diff --git a/hw/misc/xlnx-zynqmp-crf.c b/hw/misc/xlnx-zynqmp-crf.c
960
index XXXXXXX..XXXXXXX 100644
961
--- a/hw/misc/xlnx-zynqmp-crf.c
962
+++ b/hw/misc/xlnx-zynqmp-crf.c
963
@@ -XXX,XX +XXX,XX @@ static void crf_reset_enter(Object *obj, ResetType type)
964
}
965
}
966
967
-static void crf_reset_hold(Object *obj)
968
+static void crf_reset_hold(Object *obj, ResetType type)
969
{
970
XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(obj);
971
ir_update_irq(s);
972
diff --git a/hw/misc/zynq_slcr.c b/hw/misc/zynq_slcr.c
973
index XXXXXXX..XXXXXXX 100644
974
--- a/hw/misc/zynq_slcr.c
975
+++ b/hw/misc/zynq_slcr.c
976
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset_init(Object *obj, ResetType type)
977
s->regs[R_DDRIOB + 12] = 0x00000021;
978
}
979
980
-static void zynq_slcr_reset_hold(Object *obj)
981
+static void zynq_slcr_reset_hold(Object *obj, ResetType type)
982
{
983
ZynqSLCRState *s = ZYNQ_SLCR(obj);
984
985
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset_hold(Object *obj)
986
zynq_slcr_propagate_clocks(s);
987
}
988
989
-static void zynq_slcr_reset_exit(Object *obj)
990
+static void zynq_slcr_reset_exit(Object *obj, ResetType type)
991
{
992
ZynqSLCRState *s = ZYNQ_SLCR(obj);
993
994
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
995
index XXXXXXX..XXXXXXX 100644
996
--- a/hw/net/can/xlnx-zynqmp-can.c
997
+++ b/hw/net/can/xlnx-zynqmp-can.c
998
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
999
ptimer_transaction_commit(s->can_timer);
1000
}
1001
1002
-static void xlnx_zynqmp_can_reset_hold(Object *obj)
1003
+static void xlnx_zynqmp_can_reset_hold(Object *obj, ResetType type)
1004
{
1005
XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1006
unsigned int i;
1007
diff --git a/hw/net/e1000.c b/hw/net/e1000.c
1008
index XXXXXXX..XXXXXXX 100644
1009
--- a/hw/net/e1000.c
1010
+++ b/hw/net/e1000.c
1011
@@ -XXX,XX +XXX,XX @@ static bool e1000_vet_init_need(void *opaque)
1012
return chkflag(VET);
1013
}
1014
1015
-static void e1000_reset_hold(Object *obj)
1016
+static void e1000_reset_hold(Object *obj, ResetType type)
1017
{
1018
E1000State *d = E1000(obj);
1019
E1000BaseClass *edc = E1000_GET_CLASS(d);
1020
diff --git a/hw/net/e1000e.c b/hw/net/e1000e.c
1021
index XXXXXXX..XXXXXXX 100644
1022
--- a/hw/net/e1000e.c
1023
+++ b/hw/net/e1000e.c
1024
@@ -XXX,XX +XXX,XX @@ static void e1000e_pci_uninit(PCIDevice *pci_dev)
1025
msi_uninit(pci_dev);
1026
}
1027
1028
-static void e1000e_qdev_reset_hold(Object *obj)
1029
+static void e1000e_qdev_reset_hold(Object *obj, ResetType type)
1030
{
1031
E1000EState *s = E1000E(obj);
1032
1033
diff --git a/hw/net/igb.c b/hw/net/igb.c
1034
index XXXXXXX..XXXXXXX 100644
1035
--- a/hw/net/igb.c
1036
+++ b/hw/net/igb.c
1037
@@ -XXX,XX +XXX,XX @@ static void igb_pci_uninit(PCIDevice *pci_dev)
1038
msi_uninit(pci_dev);
1039
}
1040
1041
-static void igb_qdev_reset_hold(Object *obj)
1042
+static void igb_qdev_reset_hold(Object *obj, ResetType type)
1043
{
1044
IGBState *s = IGB(obj);
1045
1046
diff --git a/hw/net/igbvf.c b/hw/net/igbvf.c
1047
index XXXXXXX..XXXXXXX 100644
1048
--- a/hw/net/igbvf.c
1049
+++ b/hw/net/igbvf.c
1050
@@ -XXX,XX +XXX,XX @@ static void igbvf_pci_realize(PCIDevice *dev, Error **errp)
1051
pcie_ari_init(dev, 0x150);
1052
}
1053
1054
-static void igbvf_qdev_reset_hold(Object *obj)
1055
+static void igbvf_qdev_reset_hold(Object *obj, ResetType type)
1056
{
1057
PCIDevice *vf = PCI_DEVICE(obj);
1058
1059
diff --git a/hw/nvram/xlnx-bbram.c b/hw/nvram/xlnx-bbram.c
1060
index XXXXXXX..XXXXXXX 100644
1061
--- a/hw/nvram/xlnx-bbram.c
1062
+++ b/hw/nvram/xlnx-bbram.c
1063
@@ -XXX,XX +XXX,XX @@ static RegisterAccessInfo bbram_ctrl_regs_info[] = {
1064
}
1065
};
1066
1067
-static void bbram_ctrl_reset_hold(Object *obj)
1068
+static void bbram_ctrl_reset_hold(Object *obj, ResetType type)
1069
{
1070
XlnxBBRam *s = XLNX_BBRAM(obj);
1071
unsigned int i;
1072
diff --git a/hw/nvram/xlnx-versal-efuse-ctrl.c b/hw/nvram/xlnx-versal-efuse-ctrl.c
1073
index XXXXXXX..XXXXXXX 100644
1074
--- a/hw/nvram/xlnx-versal-efuse-ctrl.c
1075
+++ b/hw/nvram/xlnx-versal-efuse-ctrl.c
1076
@@ -XXX,XX +XXX,XX @@ static void efuse_ctrl_register_reset(RegisterInfo *reg)
1077
register_reset(reg);
1078
}
1079
1080
-static void efuse_ctrl_reset_hold(Object *obj)
1081
+static void efuse_ctrl_reset_hold(Object *obj, ResetType type)
1082
{
1083
XlnxVersalEFuseCtrl *s = XLNX_VERSAL_EFUSE_CTRL(obj);
1084
unsigned int i;
1085
diff --git a/hw/nvram/xlnx-zynqmp-efuse.c b/hw/nvram/xlnx-zynqmp-efuse.c
1086
index XXXXXXX..XXXXXXX 100644
1087
--- a/hw/nvram/xlnx-zynqmp-efuse.c
1088
+++ b/hw/nvram/xlnx-zynqmp-efuse.c
1089
@@ -XXX,XX +XXX,XX @@ static void zynqmp_efuse_register_reset(RegisterInfo *reg)
1090
register_reset(reg);
1091
}
1092
1093
-static void zynqmp_efuse_reset_hold(Object *obj)
1094
+static void zynqmp_efuse_reset_hold(Object *obj, ResetType type)
1095
{
1096
XlnxZynqMPEFuse *s = XLNX_ZYNQMP_EFUSE(obj);
1097
unsigned int i;
1098
diff --git a/hw/pci-bridge/cxl_root_port.c b/hw/pci-bridge/cxl_root_port.c
1099
index XXXXXXX..XXXXXXX 100644
1100
--- a/hw/pci-bridge/cxl_root_port.c
1101
+++ b/hw/pci-bridge/cxl_root_port.c
1102
@@ -XXX,XX +XXX,XX @@ static void cxl_rp_realize(DeviceState *dev, Error **errp)
1103
component_bar);
1104
}
1105
1106
-static void cxl_rp_reset_hold(Object *obj)
1107
+static void cxl_rp_reset_hold(Object *obj, ResetType type)
1108
{
1109
PCIERootPortClass *rpc = PCIE_ROOT_PORT_GET_CLASS(obj);
1110
CXLRootPort *crp = CXL_ROOT_PORT(obj);
1111
1112
if (rpc->parent_phases.hold) {
1113
- rpc->parent_phases.hold(obj);
1114
+ rpc->parent_phases.hold(obj, type);
1115
}
1116
1117
latch_registers(crp);
1118
diff --git a/hw/pci-bridge/pcie_root_port.c b/hw/pci-bridge/pcie_root_port.c
1119
index XXXXXXX..XXXXXXX 100644
1120
--- a/hw/pci-bridge/pcie_root_port.c
1121
+++ b/hw/pci-bridge/pcie_root_port.c
1122
@@ -XXX,XX +XXX,XX @@ static void rp_write_config(PCIDevice *d, uint32_t address,
1123
pcie_aer_root_write_config(d, address, val, len, root_cmd);
1124
}
1125
1126
-static void rp_reset_hold(Object *obj)
1127
+static void rp_reset_hold(Object *obj, ResetType type)
1128
{
1129
PCIDevice *d = PCI_DEVICE(obj);
1130
DeviceState *qdev = DEVICE(obj);
1131
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
1132
index XXXXXXX..XXXXXXX 100644
1133
--- a/hw/pci-host/bonito.c
1134
+++ b/hw/pci-host/bonito.c
1135
@@ -XXX,XX +XXX,XX @@ static int pci_bonito_map_irq(PCIDevice *pci_dev, int irq_num)
1136
}
1137
}
1138
1139
-static void bonito_reset_hold(Object *obj)
1140
+static void bonito_reset_hold(Object *obj, ResetType type)
1141
{
1142
PCIBonitoState *s = PCI_BONITO(obj);
1143
uint32_t val = 0;
1144
diff --git a/hw/pci-host/pnv_phb.c b/hw/pci-host/pnv_phb.c
1145
index XXXXXXX..XXXXXXX 100644
1146
--- a/hw/pci-host/pnv_phb.c
1147
+++ b/hw/pci-host/pnv_phb.c
1148
@@ -XXX,XX +XXX,XX @@ static void pnv_phb_class_init(ObjectClass *klass, void *data)
1149
dc->user_creatable = true;
1150
}
1151
1152
-static void pnv_phb_root_port_reset_hold(Object *obj)
1153
+static void pnv_phb_root_port_reset_hold(Object *obj, ResetType type)
1154
{
1155
PCIERootPortClass *rpc = PCIE_ROOT_PORT_GET_CLASS(obj);
1156
PnvPHBRootPort *phb_rp = PNV_PHB_ROOT_PORT(obj);
1157
@@ -XXX,XX +XXX,XX @@ static void pnv_phb_root_port_reset_hold(Object *obj)
1158
uint8_t *conf = d->config;
1159
1160
if (rpc->parent_phases.hold) {
1161
- rpc->parent_phases.hold(obj);
1162
+ rpc->parent_phases.hold(obj, type);
1163
}
1164
1165
if (phb_rp->version == 3) {
1166
diff --git a/hw/pci-host/pnv_phb3_msi.c b/hw/pci-host/pnv_phb3_msi.c
1167
index XXXXXXX..XXXXXXX 100644
1168
--- a/hw/pci-host/pnv_phb3_msi.c
1169
+++ b/hw/pci-host/pnv_phb3_msi.c
1170
@@ -XXX,XX +XXX,XX @@ static void phb3_msi_resend(ICSState *ics)
1171
}
1172
}
1173
1174
-static void phb3_msi_reset_hold(Object *obj)
1175
+static void phb3_msi_reset_hold(Object *obj, ResetType type)
1176
{
1177
Phb3MsiState *msi = PHB3_MSI(obj);
1178
ICSStateClass *icsc = ICS_GET_CLASS(obj);
1179
1180
if (icsc->parent_phases.hold) {
1181
- icsc->parent_phases.hold(obj);
1182
+ icsc->parent_phases.hold(obj, type);
1183
}
1184
1185
memset(msi->rba, 0, sizeof(msi->rba));
1186
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
1187
index XXXXXXX..XXXXXXX 100644
1188
--- a/hw/pci/pci.c
1189
+++ b/hw/pci/pci.c
1190
@@ -XXX,XX +XXX,XX @@ bool pci_available = true;
1191
1192
static char *pcibus_get_dev_path(DeviceState *dev);
1193
static char *pcibus_get_fw_dev_path(DeviceState *dev);
1194
-static void pcibus_reset_hold(Object *obj);
1195
+static void pcibus_reset_hold(Object *obj, ResetType type);
1196
static bool pcie_has_upstream_port(PCIDevice *dev);
1197
1198
static Property pci_props[] = {
1199
@@ -XXX,XX +XXX,XX @@ void pci_device_reset(PCIDevice *dev)
1200
* Called via bus_cold_reset on RST# assert, after the devices
1201
* have been reset device_cold_reset-ed already.
1202
*/
1203
-static void pcibus_reset_hold(Object *obj)
1204
+static void pcibus_reset_hold(Object *obj, ResetType type)
1205
{
1206
PCIBus *bus = PCI_BUS(obj);
1207
int i;
1208
diff --git a/hw/rtc/mc146818rtc.c b/hw/rtc/mc146818rtc.c
1209
index XXXXXXX..XXXXXXX 100644
1210
--- a/hw/rtc/mc146818rtc.c
1211
+++ b/hw/rtc/mc146818rtc.c
1212
@@ -XXX,XX +XXX,XX @@ static void rtc_reset_enter(Object *obj, ResetType type)
1213
}
1214
}
1215
1216
-static void rtc_reset_hold(Object *obj)
1217
+static void rtc_reset_hold(Object *obj, ResetType type)
1218
{
1219
MC146818RtcState *s = MC146818_RTC(obj);
1220
1221
diff --git a/hw/s390x/css-bridge.c b/hw/s390x/css-bridge.c
1222
index XXXXXXX..XXXXXXX 100644
1223
--- a/hw/s390x/css-bridge.c
1224
+++ b/hw/s390x/css-bridge.c
1225
@@ -XXX,XX +XXX,XX @@ static void ccw_device_unplug(HotplugHandler *hotplug_dev,
1226
qdev_unrealize(dev);
1227
}
1228
1229
-static void virtual_css_bus_reset_hold(Object *obj)
1230
+static void virtual_css_bus_reset_hold(Object *obj, ResetType type)
1231
{
1232
/* This should actually be modelled via the generic css */
1233
css_reset();
1234
diff --git a/hw/sensor/adm1266.c b/hw/sensor/adm1266.c
1235
index XXXXXXX..XXXXXXX 100644
1236
--- a/hw/sensor/adm1266.c
1237
+++ b/hw/sensor/adm1266.c
1238
@@ -XXX,XX +XXX,XX @@ static const uint8_t adm1266_ic_device_id[] = {0x03, 0x41, 0x12, 0x66};
1239
static const uint8_t adm1266_ic_device_rev[] = {0x08, 0x01, 0x08, 0x07, 0x0,
1240
0x0, 0x07, 0x41, 0x30};
1241
1242
-static void adm1266_exit_reset(Object *obj)
1243
+static void adm1266_exit_reset(Object *obj, ResetType type)
1244
{
1245
ADM1266State *s = ADM1266(obj);
1246
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1247
diff --git a/hw/sensor/adm1272.c b/hw/sensor/adm1272.c
1248
index XXXXXXX..XXXXXXX 100644
1249
--- a/hw/sensor/adm1272.c
1250
+++ b/hw/sensor/adm1272.c
1251
@@ -XXX,XX +XXX,XX @@ static uint32_t adm1272_direct_to_watts(uint16_t value)
1252
return pmbus_direct_mode2data(c, value);
1253
}
1254
1255
-static void adm1272_exit_reset(Object *obj)
1256
+static void adm1272_exit_reset(Object *obj, ResetType type)
1257
{
1258
ADM1272State *s = ADM1272(obj);
1259
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1260
diff --git a/hw/sensor/isl_pmbus_vr.c b/hw/sensor/isl_pmbus_vr.c
1261
index XXXXXXX..XXXXXXX 100644
1262
--- a/hw/sensor/isl_pmbus_vr.c
1263
+++ b/hw/sensor/isl_pmbus_vr.c
1264
@@ -XXX,XX +XXX,XX @@ static void isl_pmbus_vr_set(Object *obj, Visitor *v, const char *name,
1265
pmbus_check_limits(pmdev);
1266
}
1267
1268
-static void isl_pmbus_vr_exit_reset(Object *obj)
1269
+static void isl_pmbus_vr_exit_reset(Object *obj, ResetType type)
1270
{
1271
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1272
1273
@@ -XXX,XX +XXX,XX @@ static void isl_pmbus_vr_exit_reset(Object *obj)
1274
}
1275
1276
/* The raa228000 uses different direct mode coefficients from most isl devices */
1277
-static void raa228000_exit_reset(Object *obj)
1278
+static void raa228000_exit_reset(Object *obj, ResetType type)
1279
{
1280
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1281
1282
- isl_pmbus_vr_exit_reset(obj);
1283
+ isl_pmbus_vr_exit_reset(obj, type);
1284
1285
pmdev->pages[0].read_iout = 0;
1286
pmdev->pages[0].read_pout = 0;
1287
@@ -XXX,XX +XXX,XX @@ static void raa228000_exit_reset(Object *obj)
1288
pmdev->pages[0].read_temperature_3 = 0;
1289
}
1290
1291
-static void isl69259_exit_reset(Object *obj)
1292
+static void isl69259_exit_reset(Object *obj, ResetType type)
1293
{
1294
ISLState *s = ISL69260(obj);
1295
static const uint8_t ic_device_id[] = {0x04, 0x00, 0x81, 0xD2, 0x49, 0x3c};
1296
g_assert(sizeof(ic_device_id) <= sizeof(s->ic_device_id));
1297
1298
- isl_pmbus_vr_exit_reset(obj);
1299
+ isl_pmbus_vr_exit_reset(obj, type);
1300
1301
s->ic_device_id_len = sizeof(ic_device_id);
1302
memcpy(s->ic_device_id, ic_device_id, sizeof(ic_device_id));
1303
diff --git a/hw/sensor/max31785.c b/hw/sensor/max31785.c
1304
index XXXXXXX..XXXXXXX 100644
1305
--- a/hw/sensor/max31785.c
1306
+++ b/hw/sensor/max31785.c
1307
@@ -XXX,XX +XXX,XX @@ static int max31785_write_data(PMBusDevice *pmdev, const uint8_t *buf,
1308
return 0;
1309
}
1310
1311
-static void max31785_exit_reset(Object *obj)
1312
+static void max31785_exit_reset(Object *obj, ResetType type)
1313
{
1314
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1315
MAX31785State *s = MAX31785(obj);
1316
diff --git a/hw/sensor/max34451.c b/hw/sensor/max34451.c
1317
index XXXXXXX..XXXXXXX 100644
1318
--- a/hw/sensor/max34451.c
1319
+++ b/hw/sensor/max34451.c
1320
@@ -XXX,XX +XXX,XX @@ static inline void *memset_word(void *s, uint16_t c, size_t n)
1321
return s;
1322
}
1323
1324
-static void max34451_exit_reset(Object *obj)
1325
+static void max34451_exit_reset(Object *obj, ResetType type)
1326
{
1327
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1328
MAX34451State *s = MAX34451(obj);
1329
diff --git a/hw/ssi/npcm7xx_fiu.c b/hw/ssi/npcm7xx_fiu.c
1330
index XXXXXXX..XXXXXXX 100644
1331
--- a/hw/ssi/npcm7xx_fiu.c
1332
+++ b/hw/ssi/npcm7xx_fiu.c
1333
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_fiu_enter_reset(Object *obj, ResetType type)
1334
s->regs[NPCM7XX_FIU_CFG] = 0x0000000b;
1335
}
1336
1337
-static void npcm7xx_fiu_hold_reset(Object *obj)
1338
+static void npcm7xx_fiu_hold_reset(Object *obj, ResetType type)
1339
{
1340
NPCM7xxFIUState *s = NPCM7XX_FIU(obj);
1341
int i;
1342
diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
1343
index XXXXXXX..XXXXXXX 100644
1344
--- a/hw/timer/etraxfs_timer.c
1345
+++ b/hw/timer/etraxfs_timer.c
1346
@@ -XXX,XX +XXX,XX @@ static void etraxfs_timer_reset_enter(Object *obj, ResetType type)
1347
t->rw_intr_mask = 0;
1348
}
1349
1350
-static void etraxfs_timer_reset_hold(Object *obj)
1351
+static void etraxfs_timer_reset_hold(Object *obj, ResetType type)
1352
{
1353
ETRAXTimerState *t = ETRAX_TIMER(obj);
1354
1355
diff --git a/hw/timer/npcm7xx_timer.c b/hw/timer/npcm7xx_timer.c
1356
index XXXXXXX..XXXXXXX 100644
1357
--- a/hw/timer/npcm7xx_timer.c
1358
+++ b/hw/timer/npcm7xx_timer.c
1359
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_watchdog_timer_expired(void *opaque)
1360
}
1361
}
1362
1363
-static void npcm7xx_timer_hold_reset(Object *obj)
1364
+static void npcm7xx_timer_hold_reset(Object *obj, ResetType type)
1365
{
1366
NPCM7xxTimerCtrlState *s = NPCM7XX_TIMER(obj);
1367
int i;
1368
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
1369
index XXXXXXX..XXXXXXX 100644
1370
--- a/hw/usb/hcd-dwc2.c
1371
+++ b/hw/usb/hcd-dwc2.c
1372
@@ -XXX,XX +XXX,XX @@ static void dwc2_reset_enter(Object *obj, ResetType type)
1373
}
1374
}
1375
1376
-static void dwc2_reset_hold(Object *obj)
1377
+static void dwc2_reset_hold(Object *obj, ResetType type)
1378
{
1379
DWC2Class *c = DWC2_USB_GET_CLASS(obj);
1380
DWC2State *s = DWC2_USB(obj);
1381
@@ -XXX,XX +XXX,XX @@ static void dwc2_reset_hold(Object *obj)
1382
trace_usb_dwc2_reset_hold();
1383
1384
if (c->parent_phases.hold) {
1385
- c->parent_phases.hold(obj);
1386
+ c->parent_phases.hold(obj, type);
1387
}
1388
1389
dwc2_update_irq(s);
1390
}
1391
1392
-static void dwc2_reset_exit(Object *obj)
1393
+static void dwc2_reset_exit(Object *obj, ResetType type)
1394
{
1395
DWC2Class *c = DWC2_USB_GET_CLASS(obj);
1396
DWC2State *s = DWC2_USB(obj);
1397
@@ -XXX,XX +XXX,XX @@ static void dwc2_reset_exit(Object *obj)
1398
trace_usb_dwc2_reset_exit();
1399
1400
if (c->parent_phases.exit) {
1401
- c->parent_phases.exit(obj);
1402
+ c->parent_phases.exit(obj, type);
1403
}
1404
1405
s->hprt0 = HPRT0_PWR;
1406
diff --git a/hw/usb/xlnx-versal-usb2-ctrl-regs.c b/hw/usb/xlnx-versal-usb2-ctrl-regs.c
1407
index XXXXXXX..XXXXXXX 100644
1408
--- a/hw/usb/xlnx-versal-usb2-ctrl-regs.c
1409
+++ b/hw/usb/xlnx-versal-usb2-ctrl-regs.c
1410
@@ -XXX,XX +XXX,XX @@ static void usb2_ctrl_regs_reset_init(Object *obj, ResetType type)
1411
}
1412
}
1413
1414
-static void usb2_ctrl_regs_reset_hold(Object *obj)
1415
+static void usb2_ctrl_regs_reset_hold(Object *obj, ResetType type)
1416
{
1417
VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(obj);
1418
1419
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
1420
index XXXXXXX..XXXXXXX 100644
1421
--- a/hw/virtio/virtio-pci.c
1422
+++ b/hw/virtio/virtio-pci.c
1423
@@ -XXX,XX +XXX,XX @@ static void virtio_pci_reset(DeviceState *qdev)
1424
}
1425
}
1426
1427
-static void virtio_pci_bus_reset_hold(Object *obj)
1428
+static void virtio_pci_bus_reset_hold(Object *obj, ResetType type)
1429
{
1430
PCIDevice *dev = PCI_DEVICE(obj);
1431
DeviceState *qdev = DEVICE(obj);
21
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
1432
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
22
index XXXXXXX..XXXXXXX 100644
1433
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.c
1434
--- a/target/arm/cpu.c
24
+++ b/target/arm/cpu.c
1435
+++ b/target/arm/cpu.c
25
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
1436
@@ -XXX,XX +XXX,XX @@ static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
26
}
1437
assert(oldvalue == newvalue);
27
}
1438
}
28
1439
29
- if (!cpu->has_el3) {
1440
-static void arm_cpu_reset_hold(Object *obj)
30
+ if (!arm_feature(env, ARM_FEATURE_M) && !cpu->has_el3) {
1441
+static void arm_cpu_reset_hold(Object *obj, ResetType type)
31
/* If the has_el3 CPU property is disabled then we need to disable the
1442
{
32
* feature.
1443
CPUState *cs = CPU(obj);
33
*/
1444
ARMCPU *cpu = ARM_CPU(cs);
1445
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
1446
CPUARMState *env = &cpu->env;
1447
1448
if (acc->parent_phases.hold) {
1449
- acc->parent_phases.hold(obj);
1450
+ acc->parent_phases.hold(obj, type);
1451
}
1452
1453
memset(env, 0, offsetof(CPUARMState, end_reset_fields));
1454
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
1455
index XXXXXXX..XXXXXXX 100644
1456
--- a/target/avr/cpu.c
1457
+++ b/target/avr/cpu.c
1458
@@ -XXX,XX +XXX,XX @@ static void avr_restore_state_to_opc(CPUState *cs,
1459
cpu_env(cs)->pc_w = data[0];
1460
}
1461
1462
-static void avr_cpu_reset_hold(Object *obj)
1463
+static void avr_cpu_reset_hold(Object *obj, ResetType type)
1464
{
1465
CPUState *cs = CPU(obj);
1466
AVRCPU *cpu = AVR_CPU(cs);
1467
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_reset_hold(Object *obj)
1468
CPUAVRState *env = &cpu->env;
1469
1470
if (mcc->parent_phases.hold) {
1471
- mcc->parent_phases.hold(obj);
1472
+ mcc->parent_phases.hold(obj, type);
1473
}
1474
1475
env->pc_w = 0;
1476
diff --git a/target/cris/cpu.c b/target/cris/cpu.c
1477
index XXXXXXX..XXXXXXX 100644
1478
--- a/target/cris/cpu.c
1479
+++ b/target/cris/cpu.c
1480
@@ -XXX,XX +XXX,XX @@ static int cris_cpu_mmu_index(CPUState *cs, bool ifetch)
1481
return !!(cpu_env(cs)->pregs[PR_CCS] & U_FLAG);
1482
}
1483
1484
-static void cris_cpu_reset_hold(Object *obj)
1485
+static void cris_cpu_reset_hold(Object *obj, ResetType type)
1486
{
1487
CPUState *cs = CPU(obj);
1488
CRISCPUClass *ccc = CRIS_CPU_GET_CLASS(obj);
1489
@@ -XXX,XX +XXX,XX @@ static void cris_cpu_reset_hold(Object *obj)
1490
uint32_t vr;
1491
1492
if (ccc->parent_phases.hold) {
1493
- ccc->parent_phases.hold(obj);
1494
+ ccc->parent_phases.hold(obj, type);
1495
}
1496
1497
vr = env->pregs[PR_VR];
1498
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
1499
index XXXXXXX..XXXXXXX 100644
1500
--- a/target/hexagon/cpu.c
1501
+++ b/target/hexagon/cpu.c
1502
@@ -XXX,XX +XXX,XX @@ static void hexagon_restore_state_to_opc(CPUState *cs,
1503
cpu_env(cs)->gpr[HEX_REG_PC] = data[0];
1504
}
1505
1506
-static void hexagon_cpu_reset_hold(Object *obj)
1507
+static void hexagon_cpu_reset_hold(Object *obj, ResetType type)
1508
{
1509
CPUState *cs = CPU(obj);
1510
HexagonCPUClass *mcc = HEXAGON_CPU_GET_CLASS(obj);
1511
CPUHexagonState *env = cpu_env(cs);
1512
1513
if (mcc->parent_phases.hold) {
1514
- mcc->parent_phases.hold(obj);
1515
+ mcc->parent_phases.hold(obj, type);
1516
}
1517
1518
set_default_nan_mode(1, &env->fp_status);
1519
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
1520
index XXXXXXX..XXXXXXX 100644
1521
--- a/target/i386/cpu.c
1522
+++ b/target/i386/cpu.c
1523
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_set_sgxlepubkeyhash(CPUX86State *env)
1524
#endif
1525
}
1526
1527
-static void x86_cpu_reset_hold(Object *obj)
1528
+static void x86_cpu_reset_hold(Object *obj, ResetType type)
1529
{
1530
CPUState *cs = CPU(obj);
1531
X86CPU *cpu = X86_CPU(cs);
1532
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_reset_hold(Object *obj)
1533
int i;
1534
1535
if (xcc->parent_phases.hold) {
1536
- xcc->parent_phases.hold(obj);
1537
+ xcc->parent_phases.hold(obj, type);
1538
}
1539
1540
memset(env, 0, offsetof(CPUX86State, end_reset_fields));
1541
diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
1542
index XXXXXXX..XXXXXXX 100644
1543
--- a/target/loongarch/cpu.c
1544
+++ b/target/loongarch/cpu.c
1545
@@ -XXX,XX +XXX,XX @@ static void loongarch_max_initfn(Object *obj)
1546
loongarch_la464_initfn(obj);
1547
}
1548
1549
-static void loongarch_cpu_reset_hold(Object *obj)
1550
+static void loongarch_cpu_reset_hold(Object *obj, ResetType type)
1551
{
1552
CPUState *cs = CPU(obj);
1553
LoongArchCPUClass *lacc = LOONGARCH_CPU_GET_CLASS(obj);
1554
CPULoongArchState *env = cpu_env(cs);
1555
1556
if (lacc->parent_phases.hold) {
1557
- lacc->parent_phases.hold(obj);
1558
+ lacc->parent_phases.hold(obj, type);
1559
}
1560
1561
env->fcsr0_mask = FCSR0_M1 | FCSR0_M2 | FCSR0_M3;
1562
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
1563
index XXXXXXX..XXXXXXX 100644
1564
--- a/target/m68k/cpu.c
1565
+++ b/target/m68k/cpu.c
1566
@@ -XXX,XX +XXX,XX @@ static void m68k_unset_feature(CPUM68KState *env, int feature)
1567
env->features &= ~BIT_ULL(feature);
1568
}
1569
1570
-static void m68k_cpu_reset_hold(Object *obj)
1571
+static void m68k_cpu_reset_hold(Object *obj, ResetType type)
1572
{
1573
CPUState *cs = CPU(obj);
1574
M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
1575
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj)
1576
int i;
1577
1578
if (mcc->parent_phases.hold) {
1579
- mcc->parent_phases.hold(obj);
1580
+ mcc->parent_phases.hold(obj, type);
1581
}
1582
1583
memset(env, 0, offsetof(CPUM68KState, end_reset_fields));
1584
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
1585
index XXXXXXX..XXXXXXX 100644
1586
--- a/target/microblaze/cpu.c
1587
+++ b/target/microblaze/cpu.c
1588
@@ -XXX,XX +XXX,XX @@ static void microblaze_cpu_set_irq(void *opaque, int irq, int level)
1589
}
1590
#endif
1591
1592
-static void mb_cpu_reset_hold(Object *obj)
1593
+static void mb_cpu_reset_hold(Object *obj, ResetType type)
1594
{
1595
CPUState *cs = CPU(obj);
1596
MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);
1597
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj)
1598
CPUMBState *env = &cpu->env;
1599
1600
if (mcc->parent_phases.hold) {
1601
- mcc->parent_phases.hold(obj);
1602
+ mcc->parent_phases.hold(obj, type);
1603
}
1604
1605
memset(env, 0, offsetof(CPUMBState, end_reset_fields));
1606
diff --git a/target/mips/cpu.c b/target/mips/cpu.c
1607
index XXXXXXX..XXXXXXX 100644
1608
--- a/target/mips/cpu.c
1609
+++ b/target/mips/cpu.c
1610
@@ -XXX,XX +XXX,XX @@ static int mips_cpu_mmu_index(CPUState *cs, bool ifunc)
1611
1612
#include "cpu-defs.c.inc"
1613
1614
-static void mips_cpu_reset_hold(Object *obj)
1615
+static void mips_cpu_reset_hold(Object *obj, ResetType type)
1616
{
1617
CPUState *cs = CPU(obj);
1618
MIPSCPU *cpu = MIPS_CPU(cs);
1619
@@ -XXX,XX +XXX,XX @@ static void mips_cpu_reset_hold(Object *obj)
1620
CPUMIPSState *env = &cpu->env;
1621
1622
if (mcc->parent_phases.hold) {
1623
- mcc->parent_phases.hold(obj);
1624
+ mcc->parent_phases.hold(obj, type);
1625
}
1626
1627
memset(env, 0, offsetof(CPUMIPSState, end_reset_fields));
1628
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
1629
index XXXXXXX..XXXXXXX 100644
1630
--- a/target/openrisc/cpu.c
1631
+++ b/target/openrisc/cpu.c
1632
@@ -XXX,XX +XXX,XX @@ static void openrisc_disas_set_info(CPUState *cpu, disassemble_info *info)
1633
info->print_insn = print_insn_or1k;
1634
}
1635
1636
-static void openrisc_cpu_reset_hold(Object *obj)
1637
+static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
1638
{
1639
CPUState *cs = CPU(obj);
1640
OpenRISCCPU *cpu = OPENRISC_CPU(cs);
1641
OpenRISCCPUClass *occ = OPENRISC_CPU_GET_CLASS(obj);
1642
1643
if (occ->parent_phases.hold) {
1644
- occ->parent_phases.hold(obj);
1645
+ occ->parent_phases.hold(obj, type);
1646
}
1647
1648
memset(&cpu->env, 0, offsetof(CPUOpenRISCState, end_reset_fields));
1649
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
1650
index XXXXXXX..XXXXXXX 100644
1651
--- a/target/ppc/cpu_init.c
1652
+++ b/target/ppc/cpu_init.c
1653
@@ -XXX,XX +XXX,XX @@ static int ppc_cpu_mmu_index(CPUState *cs, bool ifetch)
1654
return ppc_env_mmu_index(cpu_env(cs), ifetch);
1655
}
1656
1657
-static void ppc_cpu_reset_hold(Object *obj)
1658
+static void ppc_cpu_reset_hold(Object *obj, ResetType type)
1659
{
1660
CPUState *cs = CPU(obj);
1661
PowerPCCPU *cpu = POWERPC_CPU(cs);
1662
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj)
1663
int i;
1664
1665
if (pcc->parent_phases.hold) {
1666
- pcc->parent_phases.hold(obj);
1667
+ pcc->parent_phases.hold(obj, type);
1668
}
1669
1670
msr = (target_ulong)0;
1671
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
1672
index XXXXXXX..XXXXXXX 100644
1673
--- a/target/riscv/cpu.c
1674
+++ b/target/riscv/cpu.c
1675
@@ -XXX,XX +XXX,XX @@ static int riscv_cpu_mmu_index(CPUState *cs, bool ifetch)
1676
return riscv_env_mmu_index(cpu_env(cs), ifetch);
1677
}
1678
1679
-static void riscv_cpu_reset_hold(Object *obj)
1680
+static void riscv_cpu_reset_hold(Object *obj, ResetType type)
1681
{
1682
#ifndef CONFIG_USER_ONLY
1683
uint8_t iprio;
1684
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)
1685
CPURISCVState *env = &cpu->env;
1686
1687
if (mcc->parent_phases.hold) {
1688
- mcc->parent_phases.hold(obj);
1689
+ mcc->parent_phases.hold(obj, type);
1690
}
1691
#ifndef CONFIG_USER_ONLY
1692
env->misa_mxl = mcc->misa_mxl_max;
1693
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
1694
index XXXXXXX..XXXXXXX 100644
1695
--- a/target/rx/cpu.c
1696
+++ b/target/rx/cpu.c
1697
@@ -XXX,XX +XXX,XX @@ static int riscv_cpu_mmu_index(CPUState *cs, bool ifunc)
1698
return 0;
1699
}
1700
1701
-static void rx_cpu_reset_hold(Object *obj)
1702
+static void rx_cpu_reset_hold(Object *obj, ResetType type)
1703
{
1704
CPUState *cs = CPU(obj);
1705
RXCPUClass *rcc = RX_CPU_GET_CLASS(obj);
1706
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj)
1707
uint32_t *resetvec;
1708
1709
if (rcc->parent_phases.hold) {
1710
- rcc->parent_phases.hold(obj);
1711
+ rcc->parent_phases.hold(obj, type);
1712
}
1713
1714
memset(env, 0, offsetof(CPURXState, end_reset_fields));
1715
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
1716
index XXXXXXX..XXXXXXX 100644
1717
--- a/target/sh4/cpu.c
1718
+++ b/target/sh4/cpu.c
1719
@@ -XXX,XX +XXX,XX @@ static int sh4_cpu_mmu_index(CPUState *cs, bool ifetch)
1720
}
1721
}
1722
1723
-static void superh_cpu_reset_hold(Object *obj)
1724
+static void superh_cpu_reset_hold(Object *obj, ResetType type)
1725
{
1726
CPUState *cs = CPU(obj);
1727
SuperHCPUClass *scc = SUPERH_CPU_GET_CLASS(obj);
1728
CPUSH4State *env = cpu_env(cs);
1729
1730
if (scc->parent_phases.hold) {
1731
- scc->parent_phases.hold(obj);
1732
+ scc->parent_phases.hold(obj, type);
1733
}
1734
1735
memset(env, 0, offsetof(CPUSH4State, end_reset_fields));
1736
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
1737
index XXXXXXX..XXXXXXX 100644
1738
--- a/target/sparc/cpu.c
1739
+++ b/target/sparc/cpu.c
1740
@@ -XXX,XX +XXX,XX @@
1741
1742
//#define DEBUG_FEATURES
1743
1744
-static void sparc_cpu_reset_hold(Object *obj)
1745
+static void sparc_cpu_reset_hold(Object *obj, ResetType type)
1746
{
1747
CPUState *cs = CPU(obj);
1748
SPARCCPUClass *scc = SPARC_CPU_GET_CLASS(obj);
1749
CPUSPARCState *env = cpu_env(cs);
1750
1751
if (scc->parent_phases.hold) {
1752
- scc->parent_phases.hold(obj);
1753
+ scc->parent_phases.hold(obj, type);
1754
}
1755
1756
memset(env, 0, offsetof(CPUSPARCState, end_reset_fields));
1757
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
1758
index XXXXXXX..XXXXXXX 100644
1759
--- a/target/tricore/cpu.c
1760
+++ b/target/tricore/cpu.c
1761
@@ -XXX,XX +XXX,XX @@ static void tricore_restore_state_to_opc(CPUState *cs,
1762
cpu_env(cs)->PC = data[0];
1763
}
1764
1765
-static void tricore_cpu_reset_hold(Object *obj)
1766
+static void tricore_cpu_reset_hold(Object *obj, ResetType type)
1767
{
1768
CPUState *cs = CPU(obj);
1769
TriCoreCPUClass *tcc = TRICORE_CPU_GET_CLASS(obj);
1770
1771
if (tcc->parent_phases.hold) {
1772
- tcc->parent_phases.hold(obj);
1773
+ tcc->parent_phases.hold(obj, type);
1774
}
1775
1776
cpu_state_reset(cpu_env(cs));
1777
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
1778
index XXXXXXX..XXXXXXX 100644
1779
--- a/target/xtensa/cpu.c
1780
+++ b/target/xtensa/cpu.c
1781
@@ -XXX,XX +XXX,XX @@ bool xtensa_abi_call0(void)
1782
}
1783
#endif
1784
1785
-static void xtensa_cpu_reset_hold(Object *obj)
1786
+static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
1787
{
1788
CPUState *cs = CPU(obj);
1789
XtensaCPUClass *xcc = XTENSA_CPU_GET_CLASS(obj);
1790
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj)
1791
XTENSA_OPTION_DFP_COPROCESSOR);
1792
1793
if (xcc->parent_phases.hold) {
1794
- xcc->parent_phases.hold(obj);
1795
+ xcc->parent_phases.hold(obj, type);
1796
}
1797
1798
env->pc = env->config->exception_vector[EXC_RESET0 + env->static_vectors];
34
--
1799
--
35
2.20.1
1800
2.34.1
36
37
diff view generated by jsdifflib
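Aside for anyone skimming the long mechanical conversion above: every hunk follows the same pattern. A minimal sketch of what a hold phase method looks like after the change, using the made-up MYDEV device from the reset documentation rather than code from this series:

    static void mydev_reset_hold(Object *obj, ResetType type)
    {
        MyDevClass *myclass = MYDEV_GET_CLASS(obj);
        MyDevState *mydev = MYDEV(obj);

        /* chain to the parent class's hold phase, now forwarding the type */
        if (myclass->parent_phases.hold) {
            myclass->parent_phases.hold(obj, type);
        }
        /* device-specific hold-phase work is unchanged by this series */
        qemu_set_irq(mydev->irq, 1);
    }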
1
In commit 077d7449100d824a4 we added code to handle the v8M
1
Update the reset documentation's example code to match the new API
2
requirement that returns from NMI or HardFault forcibly deactivate
2
for the hold and exit phase method APIs where they take a ResetType
3
those exceptions regardless of what interrupt the guest is trying to
3
argument.
4
deactivate. Unfortunately this broke the handling of the "illegal
5
exception return because the returning exception number is not
6
active" check for those cases. In the pseudocode this test is done
7
on the exception the guest asks to return from, but because our
8
implementation was doing this in armv7m_nvic_complete_irq() after the
9
new "deactivate NMI/HardFault regardless" code we ended up doing the
10
test on the VecInfo for that exception instead, which usually meant
11
failing to raise the illegal exception return fault.
12
13
In the case for "configurable exception targeting the opposite
14
security state" we detected the illegal-return case but went ahead
15
and deactivated the VecInfo anyway, which is wrong because that is
16
the VecInfo for the other security state.
17
18
Rearrange the code so that we first identify the illegal return
19
cases, then see if we really need to deactivate NMI or HardFault
20
instead, and finally do the deactivation.
21
4
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-25-peter.maydell@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Reviewed-by: Luc Michel <luc.michel@amd.com>
9
Message-id: 20240412160809.1260625-6-peter.maydell@linaro.org
25
---
10
---
26
hw/intc/armv7m_nvic.c | 59 +++++++++++++++++++++++--------------------
11
docs/devel/reset.rst | 8 ++++----
27
1 file changed, 32 insertions(+), 27 deletions(-)
12
1 file changed, 4 insertions(+), 4 deletions(-)
28
13
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
14
diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
30
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/intc/armv7m_nvic.c
16
--- a/docs/devel/reset.rst
32
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/docs/devel/reset.rst
33
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
18
@@ -XXX,XX +XXX,XX @@ in reset.
34
{
19
mydev->var = 0;
35
NVICState *s = (NVICState *)opaque;
36
VecInfo *vec = NULL;
37
- int ret;
38
+ int ret = 0;
39
40
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
41
42
+ trace_nvic_complete_irq(irq, secure);
43
+
44
+ if (secure && exc_is_banked(irq)) {
45
+ vec = &s->sec_vectors[irq];
46
+ } else {
47
+ vec = &s->vectors[irq];
48
+ }
49
+
50
+ /*
51
+ * Identify illegal exception return cases. We can't immediately
52
+ * return at this point because we still need to deactivate
53
+ * (either this exception or NMI/HardFault) first.
54
+ */
55
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
56
+ /*
57
+ * Return from a configurable exception targeting the opposite
58
+ * security state from the one we're trying to complete it for.
59
+ * Clear vec because it's not really the VecInfo for this
60
+ * (irq, secstate) so we mustn't deactivate it.
61
+ */
62
+ ret = -1;
63
+ vec = NULL;
64
+ } else if (!vec->active) {
65
+ /* Return from an inactive interrupt */
66
+ ret = -1;
67
+ } else {
68
+ /* Legal return, we will return the RETTOBASE bit value to the caller */
69
+ ret = nvic_rettobase(s);
70
+ }
71
+
72
/*
73
* For negative priorities, v8M will forcibly deactivate the appropriate
74
* NMI or HardFault regardless of what interrupt we're being asked to
75
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
76
}
20
}
77
21
78
if (!vec) {
22
- static void mydev_reset_hold(Object *obj)
79
- if (secure && exc_is_banked(irq)) {
23
+ static void mydev_reset_hold(Object *obj, ResetType type)
80
- vec = &s->sec_vectors[irq];
24
{
81
- } else {
25
MyDevClass *myclass = MYDEV_GET_CLASS(obj);
82
- vec = &s->vectors[irq];
26
MyDevState *mydev = MYDEV(obj);
83
- }
27
/* call parent class hold phase */
84
- }
28
if (myclass->parent_phases.hold) {
85
-
29
- myclass->parent_phases.hold(obj);
86
- trace_nvic_complete_irq(irq, secure);
30
+ myclass->parent_phases.hold(obj, type);
87
-
31
}
88
- if (!vec->active) {
32
/* set an IO */
89
- /* Tell the caller this was an illegal exception return */
33
qemu_set_irq(mydev->irq, 1);
90
- return -1;
91
- }
92
-
93
- /*
94
- * If this is a configurable exception and it is currently
95
- * targeting the opposite security state from the one we're trying
96
- * to complete it for, this counts as an illegal exception return.
97
- * We still need to deactivate whatever vector the logic above has
98
- * selected, though, as it might not be the same as the one for the
99
- * requested exception number.
100
- */
101
- if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
102
- ret = -1;
103
- } else {
104
- ret = nvic_rettobase(s);
105
+ return ret;
106
}
34
}
107
35
108
vec->active = 0;
36
- static void mydev_reset_exit(Object *obj)
37
+ static void mydev_reset_exit(Object *obj, ResetType type)
38
{
39
MyDevClass *myclass = MYDEV_GET_CLASS(obj);
40
MyDevState *mydev = MYDEV(obj);
41
/* call parent class exit phase */
42
if (myclass->parent_phases.exit) {
43
- myclass->parent_phases.exit(obj);
44
+ myclass->parent_phases.exit(obj, type);
45
}
46
/* clear an IO */
47
qemu_set_irq(mydev->irq, 0);
109
--
48
--
110
2.20.1
49
2.34.1
111
50
112
51
diff view generated by jsdifflib
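For context alongside the documentation update above: the class_init wiring that registers the hold and exit phases is not changed by this series. A hedged sketch with the same made-up MYDEV names, using QEMU's existing resettable_class_set_parent_phases() helper:

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        ResettableClass *rc = RESETTABLE_CLASS(klass);
        MyDevClass *myclass = MYDEV_CLASS(klass);

        /* Register our phase methods and stash the parent's phases so
         * that mydev_reset_hold()/mydev_reset_exit() can chain to them
         * with the ResetType argument. */
        resettable_class_set_parent_phases(rc, NULL,
                                           mydev_reset_hold,
                                           mydev_reset_exit,
                                           &myclass->parent_phases);
    }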
1
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
1
Some devices and machines need to handle differently the reset that is
2
the general-purpose registers and APSR. Implement this.
2
done before a vmsave snapshot is loaded -- the main user is the handling of
3
RNG seed information, which does not want to put a new RNG seed into
4
a ROM blob when we are doing a snapshot load.
3
5
4
The encoding is a subset of the LDMIA T2 encoding, using what would
6
Currently this kind of reset handling is supported only for:
5
be Rn=0b1111 (which UNDEFs for LDMIA).
7
* TYPE_MACHINE reset methods, which take a ShutdownCause argument
8
* reset functions registered with qemu_register_reset_nosnapshotload
9
10
To allow a three-phase-reset device to also distinguish "snapshot
11
load" reset from the normal kind, add a new ResetType
12
RESET_TYPE_SNAPSHOT_LOAD. All our existing reset methods ignore
13
the reset type, so we don't need to update any device code.
14
15
Add the enum type, and make qemu_devices_reset() use the
16
right reset type for the ShutdownCause it is passed. This
17
allows us to get rid of the device_reset_reason global we
18
were using to implement qemu_register_reset_nosnapshotload().
6
19
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20201119215617.29887-6-peter.maydell@linaro.org
22
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
23
Reviewed-by: Luc Michel <luc.michel@amd.com>
24
Message-id: 20240412160809.1260625-7-peter.maydell@linaro.org
10
---
25
---
11
target/arm/t32.decode | 6 +++++-
26
docs/devel/reset.rst | 17 ++++++++++++++---
12
target/arm/translate.c | 38 ++++++++++++++++++++++++++++++++++++++
27
include/hw/resettable.h | 1 +
13
2 files changed, 43 insertions(+), 1 deletion(-)
28
hw/core/reset.c | 15 ++++-----------
29
hw/core/resettable.c | 4 ----
30
4 files changed, 19 insertions(+), 18 deletions(-)
14
31
15
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
32
diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
16
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/t32.decode
34
--- a/docs/devel/reset.rst
18
+++ b/target/arm/t32.decode
35
+++ b/docs/devel/reset.rst
19
@@ -XXX,XX +XXX,XX @@ UXTAB 1111 1010 0101 .... 1111 .... 10.. .... @rrr_rot
36
@@ -XXX,XX +XXX,XX @@ instantly reset an object, without keeping it in reset state, just call
20
37
``resettable_reset()``. These functions take two parameters: a pointer to the
21
STM_t32 1110 1000 10.0 .... ................ @ldstm i=1 b=0
38
object to reset and a reset type.
22
STM_t32 1110 1001 00.0 .... ................ @ldstm i=0 b=1
39
23
-LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
40
-Several types of reset will be supported. For now only cold reset is defined;
24
+{
41
-others may be added later. The Resettable interface handles reset types with an
25
+ # Rn=15 UNDEFs for LDM; M-profile CLRM uses that encoding
42
-enum:
26
+ CLRM 1110 1000 1001 1111 list:16
43
+The Resettable interface handles reset types with an enum ``ResetType``:
27
+ LDM_t32 1110 1000 10.1 .... ................ @ldstm i=1 b=0
44
28
+}
45
``RESET_TYPE_COLD``
29
LDM_t32 1110 1001 00.1 .... ................ @ldstm i=0 b=1
46
Cold reset is supported by every resettable object. In QEMU, it means we reset
30
47
@@ -XXX,XX +XXX,XX @@ enum:
31
&rfe !extern rn w pu
48
from what is a real hardware cold reset. It differs from other resets (like
32
diff --git a/target/arm/translate.c b/target/arm/translate.c
49
warm or bus resets) which may keep certain parts untouched.
50
51
+``RESET_TYPE_SNAPSHOT_LOAD``
52
+ This is called for a reset which is being done to put the system into a
53
+ clean state prior to loading a snapshot. (This corresponds to a reset
54
+ with ``SHUTDOWN_CAUSE_SNAPSHOT_LOAD``.) Almost all devices should treat
55
+ this the same as ``RESET_TYPE_COLD``. The main exception is devices which
56
+ have some non-deterministic state they want to reinitialize to a different
57
+ value on each cold reset, such as RNG seed information, and which they
58
+ must not reinitialize on a snapshot-load reset.
59
+
60
+Devices which implement reset methods must treat any unknown ``ResetType``
61
+as equivalent to ``RESET_TYPE_COLD``; this will reduce the amount of
62
+existing code we need to change if we add more types in future.
63
+
64
Calling ``resettable_reset()`` is equivalent to calling
65
``resettable_assert_reset()`` then ``resettable_release_reset()``. It is
66
possible to interleave multiple calls to these three functions. There may
67
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
33
index XXXXXXX..XXXXXXX 100644
68
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate.c
69
--- a/include/hw/resettable.h
35
+++ b/target/arm/translate.c
70
+++ b/include/hw/resettable.h
36
@@ -XXX,XX +XXX,XX @@ static bool trans_LDM_t16(DisasContext *s, arg_ldst_block *a)
71
@@ -XXX,XX +XXX,XX @@ typedef struct ResettableState ResettableState;
37
return do_ldm(s, a, 1);
72
*/
73
typedef enum ResetType {
74
RESET_TYPE_COLD,
75
+ RESET_TYPE_SNAPSHOT_LOAD,
76
} ResetType;
77
78
/*
79
diff --git a/hw/core/reset.c b/hw/core/reset.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/core/reset.c
82
+++ b/hw/core/reset.c
83
@@ -XXX,XX +XXX,XX @@ static ResettableContainer *get_root_reset_container(void)
84
return root_reset_container;
38
}
85
}
39
86
40
+static bool trans_CLRM(DisasContext *s, arg_CLRM *a)
87
-/*
41
+{
88
- * Reason why the currently in-progress qemu_devices_reset() was called.
42
+ int i;
89
- * If we made at least SHUTDOWN_CAUSE_SNAPSHOT_LOAD have a corresponding
43
+ TCGv_i32 zero;
90
- * ResetType we could perhaps avoid the need for this global.
44
+
91
- */
45
+ if (!dc_isar_feature(aa32_m_sec_state, s)) {
92
-static ShutdownCause device_reset_reason;
46
+ return false;
93
-
47
+ }
48
+
49
+ if (extract32(a->list, 13, 1)) {
50
+ return false;
51
+ }
52
+
53
+ if (!a->list) {
54
+ /* UNPREDICTABLE; we choose to UNDEF */
55
+ return false;
56
+ }
57
+
58
+ zero = tcg_const_i32(0);
59
+ for (i = 0; i < 15; i++) {
60
+ if (extract32(a->list, i, 1)) {
61
+ /* Clear R[i] */
62
+ tcg_gen_mov_i32(cpu_R[i], zero);
63
+ }
64
+ }
65
+ if (extract32(a->list, 15, 1)) {
66
+ /*
67
+ * Clear APSR (by calling the MSR helper with the same argument
68
+ * as for "MSR APSR_nzcvqg, Rn": mask = 0b1100, SYSM=0)
69
+ */
70
+ TCGv_i32 maskreg = tcg_const_i32(0xc << 8);
71
+ gen_helper_v7m_msr(cpu_env, maskreg, zero);
72
+ tcg_temp_free_i32(maskreg);
73
+ }
74
+ tcg_temp_free_i32(zero);
75
+ return true;
76
+}
77
+
78
/*
94
/*
79
* Branch, branch with link
95
* This is an Object which implements Resettable simply to call the
80
*/
96
* callback function in the hold phase.
97
@@ -XXX,XX +XXX,XX @@ static void legacy_reset_hold(Object *obj, ResetType type)
98
{
99
LegacyReset *lr = LEGACY_RESET(obj);
100
101
- if (device_reset_reason == SHUTDOWN_CAUSE_SNAPSHOT_LOAD &&
102
- lr->skip_on_snapshot_load) {
103
+ if (type == RESET_TYPE_SNAPSHOT_LOAD && lr->skip_on_snapshot_load) {
104
return;
105
}
106
lr->func(lr->opaque);
107
@@ -XXX,XX +XXX,XX @@ void qemu_unregister_resettable(Object *obj)
108
109
void qemu_devices_reset(ShutdownCause reason)
110
{
111
- device_reset_reason = reason;
112
+ ResetType type = (reason == SHUTDOWN_CAUSE_SNAPSHOT_LOAD) ?
113
+ RESET_TYPE_SNAPSHOT_LOAD : RESET_TYPE_COLD;
114
115
/* Reset the simulation */
116
- resettable_reset(OBJECT(get_root_reset_container()), RESET_TYPE_COLD);
117
+ resettable_reset(OBJECT(get_root_reset_container()), type);
118
}
119
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/hw/core/resettable.c
122
+++ b/hw/core/resettable.c
123
@@ -XXX,XX +XXX,XX @@ void resettable_reset(Object *obj, ResetType type)
124
125
void resettable_assert_reset(Object *obj, ResetType type)
126
{
127
- /* TODO: change this assert when adding support for other reset types */
128
- assert(type == RESET_TYPE_COLD);
129
trace_resettable_reset_assert_begin(obj, type);
130
assert(!enter_phase_in_progress);
131
132
@@ -XXX,XX +XXX,XX @@ void resettable_assert_reset(Object *obj, ResetType type)
133
134
void resettable_release_reset(Object *obj, ResetType type)
135
{
136
- /* TODO: change this assert when adding support for other reset types */
137
- assert(type == RESET_TYPE_COLD);
138
trace_resettable_reset_release_begin(obj, type);
139
assert(!enter_phase_in_progress);
140
81
--
141
--
82
2.20.1
142
2.34.1
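
The new hold/exit phase signature and reset types reduce, for most devices, to
a small pattern. A minimal sketch of a hold-phase method following the rules
documented above (MyDeviceState, MY_DEVICE and the 'seed' field are
hypothetical names, not part of this series): clear ordinary state on every
reset, and treat anything that is not a snapshot-load reset like a cold reset.

static void mydev_reset_hold(Object *obj, ResetType type)
{
    MyDeviceState *s = MY_DEVICE(obj);

    /* State that must be cleared on every kind of reset. */
    s->ctrl = 0;

    /*
     * Non-deterministic state: re-seed on a cold reset (and on any unknown
     * future reset type), but keep it across a snapshot-load reset so that
     * loading the snapshot stays deterministic.
     */
    if (type != RESET_TYPE_SNAPSHOT_LOAD) {
        qemu_guest_getrandom_nofail(&s->seed, sizeof(s->seed));
    }
}

Devices see RESET_TYPE_SNAPSHOT_LOAD because qemu_devices_reset() now maps
SHUTDOWN_CAUSE_SNAPSHOT_LOAD to that reset type before resetting the root
container, as in the hw/core/reset.c hunk above.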
83
143
84
144
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
The Xilinx ZynqMP CAN controller model is based on SocketCAN and the QEMU CAN bus
3
Add the basic infrastructure (register read/write, type...)
4
implementation. The bus connection and socketCAN connection for each CAN module
4
to implement the STM32L4x5 USART.
5
can be set on the command line.
6
5
7
Example for using single CAN:
6
Also create different types for the USART, UART and LPUART
8
-object can-bus,id=canbus0 \
7
of the STM32L4x5 to deduplicate code and enable the
9
-machine xlnx-zcu102.canbus0=canbus0 \
8
implementation of different behaviors depending on the type.
10
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0
11
9
12
Example for connecting both CAN to same virtual CAN on host machine:
10
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
13
-object can-bus,id=canbus0 -object can-bus,id=canbus1 \
11
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
14
-machine xlnx-zcu102.canbus0=canbus0 \
15
-machine xlnx-zcu102.canbus1=canbus1 \
16
-object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0 \
17
-object can-host-socketcan,id=socketcan1,if=vcan0,canbus=canbus1
18
19
To create virtual CAN on the host machine, please check the QEMU CAN docs:
20
https://github.com/qemu/qemu/blob/master/docs/can.txt
21
22
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
23
Message-id: 1605728926-352690-2-git-send-email-fnu.vikram@xilinx.com
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20240329174402.60382-2-arnaud.minier@telecom-paris.fr
14
[PMM: update to new reset hold method signature;
15
fixed a few checkpatch nits]
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
---
17
---
27
meson.build | 1 +
18
MAINTAINERS | 1 +
28
hw/net/can/trace.h | 1 +
19
include/hw/char/stm32l4x5_usart.h | 66 +++++
29
include/hw/net/xlnx-zynqmp-can.h | 78 ++
20
hw/char/stm32l4x5_usart.c | 396 ++++++++++++++++++++++++++++++
30
hw/net/can/xlnx-zynqmp-can.c | 1161 ++++++++++++++++++++++++++++++
21
hw/char/Kconfig | 3 +
31
hw/Kconfig | 1 +
22
hw/char/meson.build | 1 +
32
hw/net/can/meson.build | 1 +
23
hw/char/trace-events | 4 +
33
hw/net/can/trace-events | 9 +
24
6 files changed, 471 insertions(+)
34
7 files changed, 1252 insertions(+)
25
create mode 100644 include/hw/char/stm32l4x5_usart.h
35
create mode 100644 hw/net/can/trace.h
26
create mode 100644 hw/char/stm32l4x5_usart.c
36
create mode 100644 include/hw/net/xlnx-zynqmp-can.h
37
create mode 100644 hw/net/can/xlnx-zynqmp-can.c
38
create mode 100644 hw/net/can/trace-events
39
27
40
diff --git a/meson.build b/meson.build
28
diff --git a/MAINTAINERS b/MAINTAINERS
41
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
42
--- a/meson.build
30
--- a/MAINTAINERS
43
+++ b/meson.build
31
+++ b/MAINTAINERS
44
@@ -XXX,XX +XXX,XX @@ if have_system
32
@@ -XXX,XX +XXX,XX @@ M: Inès Varhol <ines.varhol@telecom-paris.fr>
45
'hw/misc',
33
L: qemu-arm@nongnu.org
46
'hw/misc/macio',
34
S: Maintained
47
'hw/net',
35
F: hw/arm/stm32l4x5_soc.c
48
+ 'hw/net/can',
36
+F: hw/char/stm32l4x5_usart.c
49
'hw/nvram',
37
F: hw/misc/stm32l4x5_exti.c
50
'hw/pci',
38
F: hw/misc/stm32l4x5_syscfg.c
51
'hw/pci-host',
39
F: hw/misc/stm32l4x5_rcc.c
52
diff --git a/hw/net/can/trace.h b/hw/net/can/trace.h
40
diff --git a/include/hw/char/stm32l4x5_usart.h b/include/hw/char/stm32l4x5_usart.h
53
new file mode 100644
41
new file mode 100644
54
index XXXXXXX..XXXXXXX
42
index XXXXXXX..XXXXXXX
55
--- /dev/null
43
--- /dev/null
56
+++ b/hw/net/can/trace.h
44
+++ b/include/hw/char/stm32l4x5_usart.h
57
@@ -0,0 +1 @@
45
@@ -XXX,XX +XXX,XX @@
58
+#include "trace/trace-hw_net_can.h"
46
+/*
59
diff --git a/include/hw/net/xlnx-zynqmp-can.h b/include/hw/net/xlnx-zynqmp-can.h
47
+ * STM32L4X5 USART (Universal Synchronous Asynchronous Receiver Transmitter)
48
+ *
49
+ * Copyright (c) 2023 Arnaud Minier <arnaud.minier@telecom-paris.fr>
50
+ * Copyright (c) 2023 Inès Varhol <ines.varhol@telecom-paris.fr>
51
+ *
52
+ * SPDX-License-Identifier: GPL-2.0-or-later
53
+ *
54
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
55
+ * See the COPYING file in the top-level directory.
56
+ *
57
+ * The STM32L4X5 USART is heavily inspired by the stm32f2xx_usart
58
+ * by Alistair Francis.
59
+ * The reference used is the STMicroElectronics RM0351 Reference manual
60
+ * for STM32L4x5 and STM32L4x6 advanced Arm®-based 32-bit MCUs.
61
+ */
62
+
63
+#ifndef HW_STM32L4X5_USART_H
64
+#define HW_STM32L4X5_USART_H
65
+
66
+#include "hw/sysbus.h"
67
+#include "chardev/char-fe.h"
68
+#include "qom/object.h"
69
+
70
+#define TYPE_STM32L4X5_USART_BASE "stm32l4x5-usart-base"
71
+#define TYPE_STM32L4X5_USART "stm32l4x5-usart"
72
+#define TYPE_STM32L4X5_UART "stm32l4x5-uart"
73
+#define TYPE_STM32L4X5_LPUART "stm32l4x5-lpuart"
74
+OBJECT_DECLARE_TYPE(Stm32l4x5UsartBaseState, Stm32l4x5UsartBaseClass,
75
+ STM32L4X5_USART_BASE)
76
+
77
+typedef enum {
78
+ STM32L4x5_USART,
79
+ STM32L4x5_UART,
80
+ STM32L4x5_LPUART,
81
+} Stm32l4x5UsartType;
82
+
83
+struct Stm32l4x5UsartBaseState {
84
+ SysBusDevice parent_obj;
85
+
86
+ MemoryRegion mmio;
87
+
88
+ uint32_t cr1;
89
+ uint32_t cr2;
90
+ uint32_t cr3;
91
+ uint32_t brr;
92
+ uint32_t gtpr;
93
+ uint32_t rtor;
94
+ /* rqr is write-only */
95
+ uint32_t isr;
96
+ /* icr is a clear register */
97
+ uint32_t rdr;
98
+ uint32_t tdr;
99
+
100
+ Clock *clk;
101
+ CharBackend chr;
102
+ qemu_irq irq;
103
+};
104
+
105
+struct Stm32l4x5UsartBaseClass {
106
+ SysBusDeviceClass parent_class;
107
+
108
+ Stm32l4x5UsartType type;
109
+};
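+
+/*
+ * Illustrative only, not part of this patch: with the usual QOM pattern,
+ * each concrete type (USART/UART/LPUART) sets the class 'type' field from
+ * its class_init so that shared base-class code can branch on it, e.g.
+ *
+ *     static void stm32l4x5_uart_class_init(ObjectClass *oc, void *data)
+ *     {
+ *         STM32L4X5_USART_BASE_CLASS(oc)->type = STM32L4x5_UART;
+ *     }
+ */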
110
+
111
+#endif /* HW_STM32L4X5_USART_H */
112
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
60
new file mode 100644
113
new file mode 100644
61
index XXXXXXX..XXXXXXX
114
index XXXXXXX..XXXXXXX
62
--- /dev/null
115
--- /dev/null
63
+++ b/include/hw/net/xlnx-zynqmp-can.h
116
+++ b/hw/char/stm32l4x5_usart.c
64
@@ -XXX,XX +XXX,XX @@
117
@@ -XXX,XX +XXX,XX @@
65
+/*
118
+/*
66
+ * QEMU model of the Xilinx ZynqMP CAN controller.
119
+ * STM32L4X5 USART (Universal Synchronous Asynchronous Receiver Transmitter)
67
+ *
120
+ *
68
+ * Copyright (c) 2020 Xilinx Inc.
121
+ * Copyright (c) 2023 Arnaud Minier <arnaud.minier@telecom-paris.fr>
69
+ *
122
+ * Copyright (c) 2023 Inès Varhol <ines.varhol@telecom-paris.fr>
70
+ * Written-by: Vikram Garhwal <fnu.vikram@xilinx.com>
123
+ *
71
+ *
124
+ * SPDX-License-Identifier: GPL-2.0-or-later
72
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
125
+ *
73
+ * Pavel Pisa.
126
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
74
+ *
127
+ * See the COPYING file in the top-level directory.
75
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
128
+ *
76
+ * of this software and associated documentation files (the "Software"), to deal
129
+ * The STM32L4X5 USART is heavily inspired by the stm32f2xx_usart
77
+ * in the Software without restriction, including without limitation the rights
130
+ * by Alistair Francis.
78
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
131
+ * The reference used is the STMicroElectronics RM0351 Reference manual
79
+ * copies of the Software, and to permit persons to whom the Software is
132
+ * for STM32L4x5 and STM32L4x6 advanced Arm ® -based 32-bit MCUs.
80
+ * furnished to do so, subject to the following conditions:
81
+ *
82
+ * The above copyright notice and this permission notice shall be included in
83
+ * all copies or substantial portions of the Software.
84
+ *
85
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
86
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
87
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
88
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
89
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
90
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
91
+ * THE SOFTWARE.
92
+ */
133
+ */
93
+
134
+
94
+#ifndef XLNX_ZYNQMP_CAN_H
135
+#include "qemu/osdep.h"
95
+#define XLNX_ZYNQMP_CAN_H
136
+#include "qemu/log.h"
96
+
137
+#include "qemu/module.h"
97
+#include "hw/register.h"
138
+#include "qapi/error.h"
98
+#include "net/can_emu.h"
139
+#include "chardev/char-fe.h"
99
+#include "net/can_host.h"
140
+#include "chardev/char-serial.h"
100
+#include "qemu/fifo32.h"
141
+#include "migration/vmstate.h"
101
+#include "hw/ptimer.h"
142
+#include "hw/char/stm32l4x5_usart.h"
143
+#include "hw/clock.h"
144
+#include "hw/irq.h"
102
+#include "hw/qdev-clock.h"
145
+#include "hw/qdev-clock.h"
103
+
104
+#define TYPE_XLNX_ZYNQMP_CAN "xlnx.zynqmp-can"
105
+
106
+#define XLNX_ZYNQMP_CAN(obj) \
107
+ OBJECT_CHECK(XlnxZynqMPCANState, (obj), TYPE_XLNX_ZYNQMP_CAN)
108
+
109
+#define MAX_CAN_CTRLS 2
110
+#define XLNX_ZYNQMP_CAN_R_MAX (0x84 / 4)
111
+#define MAILBOX_CAPACITY 64
112
+#define CAN_TIMER_MAX 0XFFFFUL
113
+#define CAN_DEFAULT_CLOCK (24 * 1000 * 1000)
114
+
115
+/* Each CAN_FRAME will have 4 * 32bit size. */
116
+#define CAN_FRAME_SIZE 4
117
+#define RXFIFO_SIZE (MAILBOX_CAPACITY * CAN_FRAME_SIZE)
118
+
119
+typedef struct XlnxZynqMPCANState {
120
+ SysBusDevice parent_obj;
121
+ MemoryRegion iomem;
122
+
123
+ qemu_irq irq;
124
+
125
+ CanBusClientState bus_client;
126
+ CanBusState *canbus;
127
+
128
+ struct {
129
+ uint32_t ext_clk_freq;
130
+ } cfg;
131
+
132
+ RegisterInfo reg_info[XLNX_ZYNQMP_CAN_R_MAX];
133
+ uint32_t regs[XLNX_ZYNQMP_CAN_R_MAX];
134
+
135
+ Fifo32 rx_fifo;
136
+ Fifo32 tx_fifo;
137
+ Fifo32 txhpb_fifo;
138
+
139
+ ptimer_state *can_timer;
140
+} XlnxZynqMPCANState;
141
+
142
+#endif
143
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
144
new file mode 100644
145
index XXXXXXX..XXXXXXX
146
--- /dev/null
147
+++ b/hw/net/can/xlnx-zynqmp-can.c
148
@@ -XXX,XX +XXX,XX @@
149
+/*
150
+ * QEMU model of the Xilinx ZynqMP CAN controller.
151
+ * This implementation is based on the following datasheet:
152
+ * https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
153
+ *
154
+ * Copyright (c) 2020 Xilinx Inc.
155
+ *
156
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
157
+ *
158
+ * Based on QEMU CAN Device emulation implemented by Jin Yang, Deniz Eren and
159
+ * Pavel Pisa
160
+ *
161
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
162
+ * of this software and associated documentation files (the "Software"), to deal
163
+ * in the Software without restriction, including without limitation the rights
164
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
165
+ * copies of the Software, and to permit persons to whom the Software is
166
+ * furnished to do so, subject to the following conditions:
167
+ *
168
+ * The above copyright notice and this permission notice shall be included in
169
+ * all copies or substantial portions of the Software.
170
+ *
171
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
172
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
173
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
174
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
175
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
176
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
177
+ * THE SOFTWARE.
178
+ */
179
+
180
+#include "qemu/osdep.h"
181
+#include "hw/sysbus.h"
182
+#include "hw/register.h"
183
+#include "hw/irq.h"
184
+#include "qapi/error.h"
185
+#include "qemu/bitops.h"
186
+#include "qemu/log.h"
187
+#include "qemu/cutils.h"
188
+#include "sysemu/sysemu.h"
189
+#include "migration/vmstate.h"
190
+#include "hw/qdev-properties.h"
146
+#include "hw/qdev-properties.h"
191
+#include "net/can_emu.h"
147
+#include "hw/qdev-properties-system.h"
192
+#include "net/can_host.h"
148
+#include "hw/registerfields.h"
193
+#include "qemu/event_notifier.h"
194
+#include "qom/object_interfaces.h"
195
+#include "hw/net/xlnx-zynqmp-can.h"
196
+#include "trace.h"
149
+#include "trace.h"
197
+
150
+
198
+#ifndef XLNX_ZYNQMP_CAN_ERR_DEBUG
151
+
199
+#define XLNX_ZYNQMP_CAN_ERR_DEBUG 0
152
+REG32(CR1, 0x00)
200
+#endif
153
+ FIELD(CR1, M1, 28, 1) /* Word length (part 2, see M0) */
201
+
154
+ FIELD(CR1, EOBIE, 27, 1) /* End of Block interrupt enable */
202
+#define MAX_DLC 8
155
+ FIELD(CR1, RTOIE, 26, 1) /* Receiver timeout interrupt enable */
203
+#undef ERROR
156
+ FIELD(CR1, DEAT, 21, 5) /* Driver Enable assertion time */
204
+
157
+ FIELD(CR1, DEDT, 16, 5) /* Driver Enable de-assertion time */
205
+REG32(SOFTWARE_RESET_REGISTER, 0x0)
158
+ FIELD(CR1, OVER8, 15, 1) /* Oversampling mode */
206
+ FIELD(SOFTWARE_RESET_REGISTER, CEN, 1, 1)
159
+ FIELD(CR1, CMIE, 14, 1) /* Character match interrupt enable */
207
+ FIELD(SOFTWARE_RESET_REGISTER, SRST, 0, 1)
160
+ FIELD(CR1, MME, 13, 1) /* Mute mode enable */
208
+REG32(MODE_SELECT_REGISTER, 0x4)
161
+ FIELD(CR1, M0, 12, 1) /* Word length (part 1, see M1) */
209
+ FIELD(MODE_SELECT_REGISTER, SNOOP, 2, 1)
162
+ FIELD(CR1, WAKE, 11, 1) /* Receiver wakeup method */
210
+ FIELD(MODE_SELECT_REGISTER, LBACK, 1, 1)
163
+ FIELD(CR1, PCE, 10, 1) /* Parity control enable */
211
+ FIELD(MODE_SELECT_REGISTER, SLEEP, 0, 1)
164
+ FIELD(CR1, PS, 9, 1) /* Parity selection */
212
+REG32(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, 0x8)
165
+ FIELD(CR1, PEIE, 8, 1) /* PE interrupt enable */
213
+ FIELD(ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER, BRP, 0, 8)
166
+ FIELD(CR1, TXEIE, 7, 1) /* TXE interrupt enable */
214
+REG32(ARBITRATION_PHASE_BIT_TIMING_REGISTER, 0xc)
167
+ FIELD(CR1, TCIE, 6, 1) /* Transmission complete interrupt enable */
215
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, SJW, 7, 2)
168
+ FIELD(CR1, RXNEIE, 5, 1) /* RXNE interrupt enable */
216
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS2, 4, 3)
169
+ FIELD(CR1, IDLEIE, 4, 1) /* IDLE interrupt enable */
217
+ FIELD(ARBITRATION_PHASE_BIT_TIMING_REGISTER, TS1, 0, 4)
170
+ FIELD(CR1, TE, 3, 1) /* Transmitter enable */
218
+REG32(ERROR_COUNTER_REGISTER, 0x10)
171
+ FIELD(CR1, RE, 2, 1) /* Receiver enable */
219
+ FIELD(ERROR_COUNTER_REGISTER, REC, 8, 8)
172
+ FIELD(CR1, UESM, 1, 1) /* USART enable in Stop mode */
220
+ FIELD(ERROR_COUNTER_REGISTER, TEC, 0, 8)
173
+ FIELD(CR1, UE, 0, 1) /* USART enable */
221
+REG32(ERROR_STATUS_REGISTER, 0x14)
174
+REG32(CR2, 0x04)
222
+ FIELD(ERROR_STATUS_REGISTER, ACKER, 4, 1)
175
+ FIELD(CR2, ADD_1, 28, 4) /* ADD[7:4] */
223
+ FIELD(ERROR_STATUS_REGISTER, BERR, 3, 1)
176
+ FIELD(CR2, ADD_0, 24, 1) /* ADD[3:0] */
224
+ FIELD(ERROR_STATUS_REGISTER, STER, 2, 1)
177
+ FIELD(CR2, RTOEN, 23, 1) /* Receiver timeout enable */
225
+ FIELD(ERROR_STATUS_REGISTER, FMER, 1, 1)
178
+ FIELD(CR2, ABRMOD, 21, 2) /* Auto baud rate mode */
226
+ FIELD(ERROR_STATUS_REGISTER, CRCER, 0, 1)
179
+ FIELD(CR2, ABREN, 20, 1) /* Auto baud rate enable */
227
+REG32(STATUS_REGISTER, 0x18)
180
+ FIELD(CR2, MSBFIRST, 19, 1) /* Most significant bit first */
228
+ FIELD(STATUS_REGISTER, SNOOP, 12, 1)
181
+ FIELD(CR2, DATAINV, 18, 1) /* Binary data inversion */
229
+ FIELD(STATUS_REGISTER, ACFBSY, 11, 1)
182
+ FIELD(CR2, TXINV, 17, 1) /* TX pin active level inversion */
230
+ FIELD(STATUS_REGISTER, TXFLL, 10, 1)
183
+ FIELD(CR2, RXINV, 16, 1) /* RX pin active level inversion */
231
+ FIELD(STATUS_REGISTER, TXBFLL, 9, 1)
184
+ FIELD(CR2, SWAP, 15, 1) /* Swap RX/TX pins */
232
+ FIELD(STATUS_REGISTER, ESTAT, 7, 2)
185
+ FIELD(CR2, LINEN, 14, 1) /* LIN mode enable */
233
+ FIELD(STATUS_REGISTER, ERRWRN, 6, 1)
186
+ FIELD(CR2, STOP, 12, 2) /* STOP bits */
234
+ FIELD(STATUS_REGISTER, BBSY, 5, 1)
187
+ FIELD(CR2, CLKEN, 11, 1) /* Clock enable */
235
+ FIELD(STATUS_REGISTER, BIDLE, 4, 1)
188
+ FIELD(CR2, CPOL, 10, 1) /* Clock polarity */
236
+ FIELD(STATUS_REGISTER, NORMAL, 3, 1)
189
+ FIELD(CR2, CPHA, 9, 1) /* Clock phase */
237
+ FIELD(STATUS_REGISTER, SLEEP, 2, 1)
190
+ FIELD(CR2, LBCL, 8, 1) /* Last bit clock pulse */
238
+ FIELD(STATUS_REGISTER, LBACK, 1, 1)
191
+ FIELD(CR2, LBDIE, 6, 1) /* LIN break detection interrupt enable */
239
+ FIELD(STATUS_REGISTER, CONFIG, 0, 1)
192
+ FIELD(CR2, LBDL, 5, 1) /* LIN break detection length */
240
+REG32(INTERRUPT_STATUS_REGISTER, 0x1c)
193
+ FIELD(CR2, ADDM7, 4, 1) /* 7-bit / 4-bit Address Detection */
241
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFEMP, 14, 1)
194
+
242
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFWMEMP, 13, 1)
195
+REG32(CR3, 0x08)
243
+ FIELD(INTERRUPT_STATUS_REGISTER, RXFWMFLL, 12, 1)
196
+ /* TCBGTIE only on STM32L496xx/4A6xx devices */
244
+ FIELD(INTERRUPT_STATUS_REGISTER, WKUP, 11, 1)
197
+ FIELD(CR3, UCESM, 23, 1) /* USART Clock Enable in Stop Mode */
245
+ FIELD(INTERRUPT_STATUS_REGISTER, SLP, 10, 1)
198
+ FIELD(CR3, WUFIE, 22, 1) /* Wakeup from Stop mode interrupt enable */
246
+ FIELD(INTERRUPT_STATUS_REGISTER, BSOFF, 9, 1)
199
+ FIELD(CR3, WUS, 20, 2) /* Wakeup from Stop mode interrupt flag selection */
247
+ FIELD(INTERRUPT_STATUS_REGISTER, ERROR, 8, 1)
200
+ FIELD(CR3, SCARCNT, 17, 3) /* Smartcard auto-retry count */
248
+ FIELD(INTERRUPT_STATUS_REGISTER, RXNEMP, 7, 1)
201
+ FIELD(CR3, DEP, 15, 1) /* Driver enable polarity selection */
249
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOFLW, 6, 1)
202
+ FIELD(CR3, DEM, 14, 1) /* Driver enable mode */
250
+ FIELD(INTERRUPT_STATUS_REGISTER, RXUFLW, 5, 1)
203
+ FIELD(CR3, DDRE, 13, 1) /* DMA Disable on Reception Error */
251
+ FIELD(INTERRUPT_STATUS_REGISTER, RXOK, 4, 1)
204
+ FIELD(CR3, OVRDIS, 12, 1) /* Overrun Disable */
252
+ FIELD(INTERRUPT_STATUS_REGISTER, TXBFLL, 3, 1)
205
+ FIELD(CR3, ONEBIT, 11, 1) /* One sample bit method enable */
253
+ FIELD(INTERRUPT_STATUS_REGISTER, TXFLL, 2, 1)
206
+ FIELD(CR3, CTSIE, 10, 1) /* CTS interrupt enable */
254
+ FIELD(INTERRUPT_STATUS_REGISTER, TXOK, 1, 1)
207
+ FIELD(CR3, CTSE, 9, 1) /* CTS enable */
255
+ FIELD(INTERRUPT_STATUS_REGISTER, ARBLST, 0, 1)
208
+ FIELD(CR3, RTSE, 8, 1) /* RTS enable */
256
+REG32(INTERRUPT_ENABLE_REGISTER, 0x20)
209
+ FIELD(CR3, DMAT, 7, 1) /* DMA enable transmitter */
257
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFEMP, 14, 1)
210
+ FIELD(CR3, DMAR, 6, 1) /* DMA enable receiver */
258
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFWMEMP, 13, 1)
211
+ FIELD(CR3, SCEN, 5, 1) /* Smartcard mode enable */
259
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXFWMFLL, 12, 1)
212
+ FIELD(CR3, NACK, 4, 1) /* Smartcard NACK enable */
260
+ FIELD(INTERRUPT_ENABLE_REGISTER, EWKUP, 11, 1)
213
+ FIELD(CR3, HDSEL, 3, 1) /* Half-duplex selection */
261
+ FIELD(INTERRUPT_ENABLE_REGISTER, ESLP, 10, 1)
214
+ FIELD(CR3, IRLP, 2, 1) /* IrDA low-power */
262
+ FIELD(INTERRUPT_ENABLE_REGISTER, EBSOFF, 9, 1)
215
+ FIELD(CR3, IREN, 1, 1) /* IrDA mode enable */
263
+ FIELD(INTERRUPT_ENABLE_REGISTER, EERROR, 8, 1)
216
+ FIELD(CR3, EIE, 0, 1) /* Error interrupt enable */
264
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXNEMP, 7, 1)
217
+REG32(BRR, 0x0C)
265
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOFLW, 6, 1)
218
+ FIELD(BRR, BRR, 0, 16)
266
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXUFLW, 5, 1)
219
+REG32(GTPR, 0x10)
267
+ FIELD(INTERRUPT_ENABLE_REGISTER, ERXOK, 4, 1)
220
+ FIELD(GTPR, GT, 8, 8) /* Guard time value */
268
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXBFLL, 3, 1)
221
+ FIELD(GTPR, PSC, 0, 8) /* Prescaler value */
269
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXFLL, 2, 1)
222
+REG32(RTOR, 0x14)
270
+ FIELD(INTERRUPT_ENABLE_REGISTER, ETXOK, 1, 1)
223
+ FIELD(RTOR, BLEN, 24, 8) /* Block Length */
271
+ FIELD(INTERRUPT_ENABLE_REGISTER, EARBLST, 0, 1)
224
+ FIELD(RTOR, RTO, 0, 24) /* Receiver timeout value */
272
+REG32(INTERRUPT_CLEAR_REGISTER, 0x24)
225
+REG32(RQR, 0x18)
273
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFEMP, 14, 1)
226
+ FIELD(RQR, TXFRQ, 4, 1) /* Transmit data flush request */
274
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFWMEMP, 13, 1)
227
+ FIELD(RQR, RXFRQ, 3, 1) /* Receive data flush request */
275
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXFWMFLL, 12, 1)
228
+ FIELD(RQR, MMRQ, 2, 1) /* Mute mode request */
276
+ FIELD(INTERRUPT_CLEAR_REGISTER, CWKUP, 11, 1)
229
+ FIELD(RQR, SBKRQ, 1, 1) /* Send break request */
277
+ FIELD(INTERRUPT_CLEAR_REGISTER, CSLP, 10, 1)
230
+ FIELD(RQR, ABBRRQ, 0, 1) /* Auto baud rate request */
278
+ FIELD(INTERRUPT_CLEAR_REGISTER, CBSOFF, 9, 1)
231
+REG32(ISR, 0x1C)
279
+ FIELD(INTERRUPT_CLEAR_REGISTER, CERROR, 8, 1)
232
+ /* TCBGT only for STM32L475xx/476xx/486xx devices */
280
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXNEMP, 7, 1)
233
+ FIELD(ISR, REACK, 22, 1) /* Receive enable acknowledge flag */
281
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOFLW, 6, 1)
234
+ FIELD(ISR, TEACK, 21, 1) /* Transmit enable acknowledge flag */
282
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXUFLW, 5, 1)
235
+ FIELD(ISR, WUF, 20, 1) /* Wakeup from Stop mode flag */
283
+ FIELD(INTERRUPT_CLEAR_REGISTER, CRXOK, 4, 1)
236
+ FIELD(ISR, RWU, 19, 1) /* Receiver wakeup from Mute mode */
284
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXBFLL, 3, 1)
237
+ FIELD(ISR, SBKF, 18, 1) /* Send break flag */
285
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXFLL, 2, 1)
238
+ FIELD(ISR, CMF, 17, 1) /* Character match flag */
286
+ FIELD(INTERRUPT_CLEAR_REGISTER, CTXOK, 1, 1)
239
+ FIELD(ISR, BUSY, 16, 1) /* Busy flag */
287
+ FIELD(INTERRUPT_CLEAR_REGISTER, CARBLST, 0, 1)
240
+ FIELD(ISR, ABRF, 15, 1) /* Auto Baud rate flag */
288
+REG32(TIMESTAMP_REGISTER, 0x28)
241
+ FIELD(ISR, ABRE, 14, 1) /* Auto Baud rate error */
289
+ FIELD(TIMESTAMP_REGISTER, CTS, 0, 1)
242
+ FIELD(ISR, EOBF, 12, 1) /* End of block flag */
290
+REG32(WIR, 0x2c)
243
+ FIELD(ISR, RTOF, 11, 1) /* Receiver timeout */
291
+ FIELD(WIR, EW, 8, 8)
244
+ FIELD(ISR, CTS, 10, 1) /* CTS flag */
292
+ FIELD(WIR, FW, 0, 8)
245
+ FIELD(ISR, CTSIF, 9, 1) /* CTS interrupt flag */
293
+REG32(TXFIFO_ID, 0x30)
246
+ FIELD(ISR, LBDF, 8, 1) /* LIN break detection flag */
294
+ FIELD(TXFIFO_ID, IDH, 21, 11)
247
+ FIELD(ISR, TXE, 7, 1) /* Transmit data register empty */
295
+ FIELD(TXFIFO_ID, SRRRTR, 20, 1)
248
+ FIELD(ISR, TC, 6, 1) /* Transmission complete */
296
+ FIELD(TXFIFO_ID, IDE, 19, 1)
249
+ FIELD(ISR, RXNE, 5, 1) /* Read data register not empty */
297
+ FIELD(TXFIFO_ID, IDL, 1, 18)
250
+ FIELD(ISR, IDLE, 4, 1) /* Idle line detected */
298
+ FIELD(TXFIFO_ID, RTR, 0, 1)
251
+ FIELD(ISR, ORE, 3, 1) /* Overrun error */
299
+REG32(TXFIFO_DLC, 0x34)
252
+ FIELD(ISR, NF, 2, 1) /* START bit Noise detection flag */
300
+ FIELD(TXFIFO_DLC, DLC, 28, 4)
253
+ FIELD(ISR, FE, 1, 1) /* Framing Error */
301
+REG32(TXFIFO_DATA1, 0x38)
254
+ FIELD(ISR, PE, 0, 1) /* Parity Error */
302
+ FIELD(TXFIFO_DATA1, DB0, 24, 8)
255
+REG32(ICR, 0x20)
303
+ FIELD(TXFIFO_DATA1, DB1, 16, 8)
256
+ FIELD(ICR, WUCF, 20, 1) /* Wakeup from Stop mode clear flag */
304
+ FIELD(TXFIFO_DATA1, DB2, 8, 8)
257
+ FIELD(ICR, CMCF, 17, 1) /* Character match clear flag */
305
+ FIELD(TXFIFO_DATA1, DB3, 0, 8)
258
+ FIELD(ICR, EOBCF, 12, 1) /* End of block clear flag */
306
+REG32(TXFIFO_DATA2, 0x3c)
259
+ FIELD(ICR, RTOCF, 11, 1) /* Receiver timeout clear flag */
307
+ FIELD(TXFIFO_DATA2, DB4, 24, 8)
260
+ FIELD(ICR, CTSCF, 9, 1) /* CTS clear flag */
308
+ FIELD(TXFIFO_DATA2, DB5, 16, 8)
261
+ FIELD(ICR, LBDCF, 8, 1) /* LIN break detection clear flag */
309
+ FIELD(TXFIFO_DATA2, DB6, 8, 8)
262
+ /* TCBGTCF only on STM32L496xx/4A6xx devices */
310
+ FIELD(TXFIFO_DATA2, DB7, 0, 8)
263
+ FIELD(ICR, TCCF, 6, 1) /* Transmission complete clear flag */
311
+REG32(TXHPB_ID, 0x40)
264
+ FIELD(ICR, IDLECF, 4, 1) /* Idle line detected clear flag */
312
+ FIELD(TXHPB_ID, IDH, 21, 11)
265
+ FIELD(ICR, ORECF, 3, 1) /* Overrun error clear flag */
313
+ FIELD(TXHPB_ID, SRRRTR, 20, 1)
266
+ FIELD(ICR, NCF, 2, 1) /* Noise detected clear flag */
314
+ FIELD(TXHPB_ID, IDE, 19, 1)
267
+ FIELD(ICR, FECF, 1, 1) /* Framing error clear flag */
315
+ FIELD(TXHPB_ID, IDL, 1, 18)
268
+ FIELD(ICR, PECF, 0, 1) /* Parity error clear flag */
316
+ FIELD(TXHPB_ID, RTR, 0, 1)
269
+REG32(RDR, 0x24)
317
+REG32(TXHPB_DLC, 0x44)
270
+ FIELD(RDR, RDR, 0, 9)
318
+ FIELD(TXHPB_DLC, DLC, 28, 4)
271
+REG32(TDR, 0x28)
319
+REG32(TXHPB_DATA1, 0x48)
272
+ FIELD(TDR, TDR, 0, 9)
320
+ FIELD(TXHPB_DATA1, DB0, 24, 8)
273
+
321
+ FIELD(TXHPB_DATA1, DB1, 16, 8)
274
+static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
322
+ FIELD(TXHPB_DATA1, DB2, 8, 8)
275
+{
323
+ FIELD(TXHPB_DATA1, DB3, 0, 8)
276
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
324
+REG32(TXHPB_DATA2, 0x4c)
277
+
325
+ FIELD(TXHPB_DATA2, DB4, 24, 8)
278
+ s->cr1 = 0x00000000;
326
+ FIELD(TXHPB_DATA2, DB5, 16, 8)
279
+ s->cr2 = 0x00000000;
327
+ FIELD(TXHPB_DATA2, DB6, 8, 8)
280
+ s->cr3 = 0x00000000;
328
+ FIELD(TXHPB_DATA2, DB7, 0, 8)
281
+ s->brr = 0x00000000;
329
+REG32(RXFIFO_ID, 0x50)
282
+ s->gtpr = 0x00000000;
330
+ FIELD(RXFIFO_ID, IDH, 21, 11)
283
+ s->rtor = 0x00000000;
331
+ FIELD(RXFIFO_ID, SRRRTR, 20, 1)
284
+ s->isr = 0x020000C0;
332
+ FIELD(RXFIFO_ID, IDE, 19, 1)
285
+ s->rdr = 0x00000000;
333
+ FIELD(RXFIFO_ID, IDL, 1, 18)
286
+ s->tdr = 0x00000000;
334
+ FIELD(RXFIFO_ID, RTR, 0, 1)
287
+}
335
+REG32(RXFIFO_DLC, 0x54)
288
+
336
+ FIELD(RXFIFO_DLC, DLC, 28, 4)
289
+static uint64_t stm32l4x5_usart_base_read(void *opaque, hwaddr addr,
337
+ FIELD(RXFIFO_DLC, RXT, 0, 16)
290
+ unsigned int size)
338
+REG32(RXFIFO_DATA1, 0x58)
291
+{
339
+ FIELD(RXFIFO_DATA1, DB0, 24, 8)
292
+ Stm32l4x5UsartBaseState *s = opaque;
340
+ FIELD(RXFIFO_DATA1, DB1, 16, 8)
293
+ uint64_t retvalue = 0;
341
+ FIELD(RXFIFO_DATA1, DB2, 8, 8)
294
+
342
+ FIELD(RXFIFO_DATA1, DB3, 0, 8)
295
+ switch (addr) {
343
+REG32(RXFIFO_DATA2, 0x5c)
296
+ case A_CR1:
344
+ FIELD(RXFIFO_DATA2, DB4, 24, 8)
297
+ retvalue = s->cr1;
345
+ FIELD(RXFIFO_DATA2, DB5, 16, 8)
298
+ break;
346
+ FIELD(RXFIFO_DATA2, DB6, 8, 8)
299
+ case A_CR2:
347
+ FIELD(RXFIFO_DATA2, DB7, 0, 8)
300
+ retvalue = s->cr2;
348
+REG32(AFR, 0x60)
301
+ break;
349
+ FIELD(AFR, UAF4, 3, 1)
302
+ case A_CR3:
350
+ FIELD(AFR, UAF3, 2, 1)
303
+ retvalue = s->cr3;
351
+ FIELD(AFR, UAF2, 1, 1)
304
+ break;
352
+ FIELD(AFR, UAF1, 0, 1)
305
+ case A_BRR:
353
+REG32(AFMR1, 0x64)
306
+ retvalue = FIELD_EX32(s->brr, BRR, BRR);
354
+ FIELD(AFMR1, AMIDH, 21, 11)
307
+ break;
355
+ FIELD(AFMR1, AMSRR, 20, 1)
308
+ case A_GTPR:
356
+ FIELD(AFMR1, AMIDE, 19, 1)
309
+ retvalue = s->gtpr;
357
+ FIELD(AFMR1, AMIDL, 1, 18)
310
+ break;
358
+ FIELD(AFMR1, AMRTR, 0, 1)
311
+ case A_RTOR:
359
+REG32(AFIR1, 0x68)
312
+ retvalue = s->rtor;
360
+ FIELD(AFIR1, AIIDH, 21, 11)
313
+ break;
361
+ FIELD(AFIR1, AISRR, 20, 1)
314
+ case A_RQR:
362
+ FIELD(AFIR1, AIIDE, 19, 1)
315
+ /* RQR is a write only register */
363
+ FIELD(AFIR1, AIIDL, 1, 18)
316
+ retvalue = 0x00000000;
364
+ FIELD(AFIR1, AIRTR, 0, 1)
317
+ break;
365
+REG32(AFMR2, 0x6c)
318
+ case A_ISR:
366
+ FIELD(AFMR2, AMIDH, 21, 11)
319
+ retvalue = s->isr;
367
+ FIELD(AFMR2, AMSRR, 20, 1)
320
+ break;
368
+ FIELD(AFMR2, AMIDE, 19, 1)
321
+ case A_ICR:
369
+ FIELD(AFMR2, AMIDL, 1, 18)
322
+ /* ICR is a clear register */
370
+ FIELD(AFMR2, AMRTR, 0, 1)
323
+ retvalue = 0x00000000;
371
+REG32(AFIR2, 0x70)
324
+ break;
372
+ FIELD(AFIR2, AIIDH, 21, 11)
325
+ case A_RDR:
373
+ FIELD(AFIR2, AISRR, 20, 1)
326
+ retvalue = FIELD_EX32(s->rdr, RDR, RDR);
374
+ FIELD(AFIR2, AIIDE, 19, 1)
327
+ /* Reset RXNE flag */
375
+ FIELD(AFIR2, AIIDL, 1, 18)
328
+ s->isr &= ~R_ISR_RXNE_MASK;
376
+ FIELD(AFIR2, AIRTR, 0, 1)
329
+ break;
377
+REG32(AFMR3, 0x74)
330
+ case A_TDR:
378
+ FIELD(AFMR3, AMIDH, 21, 11)
331
+ retvalue = FIELD_EX32(s->tdr, TDR, TDR);
379
+ FIELD(AFMR3, AMSRR, 20, 1)
332
+ break;
380
+ FIELD(AFMR3, AMIDE, 19, 1)
333
+ default:
381
+ FIELD(AFMR3, AMIDL, 1, 18)
334
+ qemu_log_mask(LOG_GUEST_ERROR,
382
+ FIELD(AFMR3, AMRTR, 0, 1)
335
+ "%s: Bad offset 0x%"HWADDR_PRIx"\n", __func__, addr);
383
+REG32(AFIR3, 0x78)
336
+ break;
384
+ FIELD(AFIR3, AIIDH, 21, 11)
385
+ FIELD(AFIR3, AISRR, 20, 1)
386
+ FIELD(AFIR3, AIIDE, 19, 1)
387
+ FIELD(AFIR3, AIIDL, 1, 18)
388
+ FIELD(AFIR3, AIRTR, 0, 1)
389
+REG32(AFMR4, 0x7c)
390
+ FIELD(AFMR4, AMIDH, 21, 11)
391
+ FIELD(AFMR4, AMSRR, 20, 1)
392
+ FIELD(AFMR4, AMIDE, 19, 1)
393
+ FIELD(AFMR4, AMIDL, 1, 18)
394
+ FIELD(AFMR4, AMRTR, 0, 1)
395
+REG32(AFIR4, 0x80)
396
+ FIELD(AFIR4, AIIDH, 21, 11)
397
+ FIELD(AFIR4, AISRR, 20, 1)
398
+ FIELD(AFIR4, AIIDE, 19, 1)
399
+ FIELD(AFIR4, AIIDL, 1, 18)
400
+ FIELD(AFIR4, AIRTR, 0, 1)
401
+
402
+static void can_update_irq(XlnxZynqMPCANState *s)
403
+{
404
+ uint32_t irq;
405
+
406
+ /* Watermark register interrupts. */
407
+ if ((fifo32_num_free(&s->tx_fifo) / CAN_FRAME_SIZE) >
408
+ ARRAY_FIELD_EX32(s->regs, WIR, EW)) {
409
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFWMEMP, 1);
410
+ }
337
+ }
411
+
338
+
412
+ if ((fifo32_num_used(&s->rx_fifo) / CAN_FRAME_SIZE) >
339
+ trace_stm32l4x5_usart_read(addr, retvalue);
413
+ ARRAY_FIELD_EX32(s->regs, WIR, FW)) {
340
+
414
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXFWMFLL, 1);
341
+ return retvalue;
342
+}
343
+
344
+static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
345
+ uint64_t val64, unsigned int size)
346
+{
347
+ Stm32l4x5UsartBaseState *s = opaque;
348
+ const uint32_t value = val64;
349
+
350
+ trace_stm32l4x5_usart_write(addr, value);
351
+
352
+ switch (addr) {
353
+ case A_CR1:
354
+ s->cr1 = value;
355
+ return;
356
+ case A_CR2:
357
+ s->cr2 = value;
358
+ return;
359
+ case A_CR3:
360
+ s->cr3 = value;
361
+ return;
362
+ case A_BRR:
363
+ s->brr = value;
364
+ return;
365
+ case A_GTPR:
366
+ s->gtpr = value;
367
+ return;
368
+ case A_RTOR:
369
+ s->rtor = value;
370
+ return;
371
+ case A_RQR:
372
+ return;
373
+ case A_ISR:
374
+ qemu_log_mask(LOG_GUEST_ERROR,
375
+ "%s: ISR is read only !\n", __func__);
376
+ return;
377
+ case A_ICR:
378
+ /* Clear the status flags */
379
+ s->isr &= ~value;
380
+ return;
381
+ case A_RDR:
382
+ qemu_log_mask(LOG_GUEST_ERROR,
383
+ "%s: RDR is read only !\n", __func__);
384
+ return;
385
+ case A_TDR:
386
+ s->tdr = value;
387
+ return;
388
+ default:
389
+ qemu_log_mask(LOG_GUEST_ERROR,
390
+ "%s: Bad offset 0x%"HWADDR_PRIx"\n", __func__, addr);
415
+ }
391
+ }
416
+
392
+}
417
+ /* RX Interrupts. */
393
+
418
+ if (fifo32_num_used(&s->rx_fifo) >= CAN_FRAME_SIZE) {
394
+static const MemoryRegionOps stm32l4x5_usart_base_ops = {
419
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXNEMP, 1);
395
+ .read = stm32l4x5_usart_base_read,
420
+ }
396
+ .write = stm32l4x5_usart_base_write,
421
+
397
+ .endianness = DEVICE_NATIVE_ENDIAN,
422
+ /* TX interrupts. */
423
+ if (fifo32_is_empty(&s->tx_fifo)) {
424
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFEMP, 1);
425
+ }
426
+
427
+ if (fifo32_is_full(&s->tx_fifo)) {
428
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXFLL, 1);
429
+ }
430
+
431
+ if (fifo32_is_full(&s->txhpb_fifo)) {
432
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXBFLL, 1);
433
+ }
434
+
435
+ irq = s->regs[R_INTERRUPT_STATUS_REGISTER];
436
+ irq &= s->regs[R_INTERRUPT_ENABLE_REGISTER];
437
+
438
+ trace_xlnx_can_update_irq(s->regs[R_INTERRUPT_STATUS_REGISTER],
439
+ s->regs[R_INTERRUPT_ENABLE_REGISTER], irq);
440
+ qemu_set_irq(s->irq, irq);
441
+}
442
+
443
+static void can_ier_post_write(RegisterInfo *reg, uint64_t val)
444
+{
445
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
446
+
447
+ can_update_irq(s);
448
+}
449
+
450
+static uint64_t can_icr_pre_write(RegisterInfo *reg, uint64_t val)
451
+{
452
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
453
+
454
+ s->regs[R_INTERRUPT_STATUS_REGISTER] &= ~val;
455
+ can_update_irq(s);
456
+
457
+ return 0;
458
+}
459
+
460
+static void can_config_reset(XlnxZynqMPCANState *s)
461
+{
462
+ /* Reset all the configuration registers. */
463
+ register_reset(&s->reg_info[R_SOFTWARE_RESET_REGISTER]);
464
+ register_reset(&s->reg_info[R_MODE_SELECT_REGISTER]);
465
+ register_reset(
466
+ &s->reg_info[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER]);
467
+ register_reset(&s->reg_info[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER]);
468
+ register_reset(&s->reg_info[R_STATUS_REGISTER]);
469
+ register_reset(&s->reg_info[R_INTERRUPT_STATUS_REGISTER]);
470
+ register_reset(&s->reg_info[R_INTERRUPT_ENABLE_REGISTER]);
471
+ register_reset(&s->reg_info[R_INTERRUPT_CLEAR_REGISTER]);
472
+ register_reset(&s->reg_info[R_WIR]);
473
+}
474
+
475
+static void can_config_mode(XlnxZynqMPCANState *s)
476
+{
477
+ register_reset(&s->reg_info[R_ERROR_COUNTER_REGISTER]);
478
+ register_reset(&s->reg_info[R_ERROR_STATUS_REGISTER]);
479
+
480
+ /* Put XlnxZynqMPCAN in configuration mode. */
481
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 1);
482
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP, 0);
483
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP, 0);
484
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, BSOFF, 0);
485
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ERROR, 0);
486
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 0);
487
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 0);
488
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 0);
489
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, ARBLST, 0);
490
+
491
+ can_update_irq(s);
492
+}
493
+
494
+static void update_status_register_mode_bits(XlnxZynqMPCANState *s)
495
+{
496
+ bool sleep_status = ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP);
497
+ bool sleep_mode = ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP);
498
+ /* Wake up interrupt bit. */
499
+ bool wakeup_irq_val = sleep_status && (sleep_mode == 0);
500
+ /* Sleep interrupt bit. */
501
+ bool sleep_irq_val = sleep_mode && (sleep_status == 0);
502
+
503
+ /* Clear previous core mode status bits. */
504
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 0);
505
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 0);
506
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 0);
507
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 0);
508
+
509
+ /* set current mode bit and generate irqs accordingly. */
510
+ if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, LBACK)) {
511
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, LBACK, 1);
512
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SLEEP)) {
513
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SLEEP, 1);
514
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, SLP,
515
+ sleep_irq_val);
516
+ } else if (ARRAY_FIELD_EX32(s->regs, MODE_SELECT_REGISTER, SNOOP)) {
517
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, SNOOP, 1);
518
+ } else {
519
+ /*
520
+ * If all bits are zero then XlnxZynqMPCAN is set in normal mode.
521
+ */
522
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, NORMAL, 1);
523
+ /* Set wakeup interrupt bit. */
524
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, WKUP,
525
+ wakeup_irq_val);
526
+ }
527
+
528
+ can_update_irq(s);
529
+}
530
+
531
+static void can_exit_sleep_mode(XlnxZynqMPCANState *s)
532
+{
533
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, 0);
534
+ update_status_register_mode_bits(s);
535
+}
536
+
537
+static void generate_frame(qemu_can_frame *frame, uint32_t *data)
538
+{
539
+ frame->can_id = data[0];
540
+ frame->can_dlc = FIELD_EX32(data[1], TXFIFO_DLC, DLC);
541
+
542
+ frame->data[0] = FIELD_EX32(data[2], TXFIFO_DATA1, DB3);
543
+ frame->data[1] = FIELD_EX32(data[2], TXFIFO_DATA1, DB2);
544
+ frame->data[2] = FIELD_EX32(data[2], TXFIFO_DATA1, DB1);
545
+ frame->data[3] = FIELD_EX32(data[2], TXFIFO_DATA1, DB0);
546
+
547
+ frame->data[4] = FIELD_EX32(data[3], TXFIFO_DATA2, DB7);
548
+ frame->data[5] = FIELD_EX32(data[3], TXFIFO_DATA2, DB6);
549
+ frame->data[6] = FIELD_EX32(data[3], TXFIFO_DATA2, DB5);
550
+ frame->data[7] = FIELD_EX32(data[3], TXFIFO_DATA2, DB4);
551
+}
552
+
553
+static bool tx_ready_check(XlnxZynqMPCANState *s)
554
+{
555
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
556
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
557
+
558
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
559
+ " data while controller is in reset mode.\n",
560
+ path);
561
+ return false;
562
+ }
563
+
564
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
565
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
566
+
567
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
568
+ " data while controller is in configuration mode. Reset"
569
+ " the core so operations can start fresh.\n",
570
+ path);
571
+ return false;
572
+ }
573
+
574
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
575
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
576
+
577
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to transfer"
578
+ " data while controller is in SNOOP MODE.\n",
579
+ path);
580
+ return false;
581
+ }
582
+
583
+ return true;
584
+}
585
+
586
+static void transfer_fifo(XlnxZynqMPCANState *s, Fifo32 *fifo)
587
+{
588
+ qemu_can_frame frame;
589
+ uint32_t data[CAN_FRAME_SIZE];
590
+ int i;
591
+ bool can_tx = tx_ready_check(s);
592
+
593
+ if (!can_tx) {
594
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
595
+
596
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is not enabled for data"
597
+ " transfer.\n", path);
598
+ can_update_irq(s);
599
+ return;
600
+ }
601
+
602
+ while (!fifo32_is_empty(fifo)) {
603
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
604
+ data[i] = fifo32_pop(fifo);
605
+ }
606
+
607
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, LBACK)) {
608
+ /*
609
+ * Controller is in loopback. In Loopback mode, the CAN core
610
+ * transmits a recessive bitstream on to the XlnxZynqMPCAN Bus.
611
+ * Any message transmitted is looped back to the RX line and
612
+ * acknowledged. The XlnxZynqMPCAN core receives any message
613
+ * that it transmits.
614
+ */
615
+ if (fifo32_is_full(&s->rx_fifo)) {
616
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
617
+ } else {
618
+ for (i = 0; i < CAN_FRAME_SIZE; i++) {
619
+ fifo32_push(&s->rx_fifo, data[i]);
620
+ }
621
+
622
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
623
+ }
624
+ } else {
625
+ /* Normal mode Tx. */
626
+ generate_frame(&frame, data);
627
+
628
+ trace_xlnx_can_tx_data(frame.can_id, frame.can_dlc,
629
+ frame.data[0], frame.data[1],
630
+ frame.data[2], frame.data[3],
631
+ frame.data[4], frame.data[5],
632
+ frame.data[6], frame.data[7]);
633
+ can_bus_client_send(&s->bus_client, &frame, 1);
634
+ }
635
+ }
636
+
637
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, TXOK, 1);
638
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, TXBFLL, 0);
639
+
640
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) {
641
+ can_exit_sleep_mode(s);
642
+ }
643
+
644
+ can_update_irq(s);
645
+}
646
+
647
+static uint64_t can_srr_pre_write(RegisterInfo *reg, uint64_t val)
648
+{
649
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
650
+
651
+ ARRAY_FIELD_DP32(s->regs, SOFTWARE_RESET_REGISTER, CEN,
652
+ FIELD_EX32(val, SOFTWARE_RESET_REGISTER, CEN));
653
+
654
+ if (FIELD_EX32(val, SOFTWARE_RESET_REGISTER, SRST)) {
655
+ trace_xlnx_can_reset(val);
656
+
657
+ /* First, the core does a software reset, then enters configuration mode. */
658
+ can_config_reset(s);
659
+ }
660
+
661
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
662
+ can_config_mode(s);
663
+ } else {
664
+ /*
665
+ * Leave config mode. Now XlnxZynqMPCAN core will enter normal,
666
+ * sleep, snoop or loopback mode depending upon LBACK, SLEEP, SNOOP
667
+ * register states.
668
+ */
669
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, CONFIG, 0);
670
+
671
+ ptimer_transaction_begin(s->can_timer);
672
+ ptimer_set_count(s->can_timer, 0);
673
+ ptimer_transaction_commit(s->can_timer);
674
+
675
+ /* XlnxZynqMPCAN is out of config mode. It will send pending data. */
676
+ transfer_fifo(s, &s->txhpb_fifo);
677
+ transfer_fifo(s, &s->tx_fifo);
678
+ }
679
+
680
+ update_status_register_mode_bits(s);
681
+
682
+ return s->regs[R_SOFTWARE_RESET_REGISTER];
683
+}
684
+
685
+static uint64_t can_msr_pre_write(RegisterInfo *reg, uint64_t val)
686
+{
687
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
688
+ uint8_t multi_mode;
689
+
690
+ /*
691
+ * Multiple mode set check. This is done to make sure the user doesn't set
692
+ * multiple modes.
693
+ */
694
+ multi_mode = FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK) +
695
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP) +
696
+ FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP);
697
+
698
+ if (multi_mode > 1) {
699
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
700
+
701
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to configure"
702
+ " several modes simultaneously. One mode will be selected"
703
+ " according to their priority: LBACK > SLEEP > SNOOP.\n",
704
+ path);
705
+ }
706
+
707
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN) == 0) {
708
+ /* We are in configuration mode, any mode can be selected. */
709
+ s->regs[R_MODE_SELECT_REGISTER] = val;
710
+ } else {
711
+ bool sleep_mode_bit = FIELD_EX32(val, MODE_SELECT_REGISTER, SLEEP);
712
+
713
+ ARRAY_FIELD_DP32(s->regs, MODE_SELECT_REGISTER, SLEEP, sleep_mode_bit);
714
+
715
+ if (FIELD_EX32(val, MODE_SELECT_REGISTER, LBACK)) {
716
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
717
+
718
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
719
+ " LBACK mode without setting CEN bit as 0.\n",
720
+ path);
721
+ } else if (FIELD_EX32(val, MODE_SELECT_REGISTER, SNOOP)) {
722
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
723
+
724
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Attempting to set"
725
+ " SNOOP mode without setting CEN bit as 0.\n",
726
+ path);
727
+ }
728
+
729
+ update_status_register_mode_bits(s);
730
+ }
731
+
732
+ return s->regs[R_MODE_SELECT_REGISTER];
733
+}
734
+
735
+static uint64_t can_brpr_pre_write(RegisterInfo *reg, uint64_t val)
736
+{
737
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
738
+
739
+ /* Only allow writes when in config mode. */
740
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
741
+ return s->regs[R_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER];
742
+ }
743
+
744
+ return val;
745
+}
746
+
747
+static uint64_t can_btr_pre_write(RegisterInfo *reg, uint64_t val)
748
+{
749
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
750
+
751
+ /* Only allow writes when in config mode. */
752
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
753
+ return s->regs[R_ARBITRATION_PHASE_BIT_TIMING_REGISTER];
754
+ }
755
+
756
+ return val;
757
+}
758
+
759
+static uint64_t can_tcr_pre_write(RegisterInfo *reg, uint64_t val)
760
+{
761
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
762
+
763
+ if (FIELD_EX32(val, TIMESTAMP_REGISTER, CTS)) {
764
+ ptimer_transaction_begin(s->can_timer);
765
+ ptimer_set_count(s->can_timer, 0);
766
+ ptimer_transaction_commit(s->can_timer);
767
+ }
768
+
769
+ return 0;
770
+}
771
+
772
+static void update_rx_fifo(XlnxZynqMPCANState *s, const qemu_can_frame *frame)
773
+{
774
+ bool filter_pass = false;
775
+ uint16_t timestamp = 0;
776
+
777
+ /* If no filter is enabled, the message will be stored in the FIFO. */
778
+ if (!((ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) |
779
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) |
780
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) |
781
+ (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)))) {
782
+ filter_pass = true;
783
+ }
784
+
785
+ /*
786
+ * Messages that pass any of the acceptance filters will be stored in
787
+ * the RX FIFO.
788
+ */
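+ /*
+ * Worked example with illustrative values: if AFMR1 = 0x000000FF and
+ * AFIR1 = 0x00000042, an incoming can_id of 0x00000142 gives
+ * id_masked = 0x42 and filter_id_masked = 0x42, so the frame is
+ * accepted; a can_id of 0x00000041 gives id_masked = 0x41 and is
+ * rejected unless another enabled filter matches it.
+ */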
789
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1)) {
790
+ uint32_t id_masked = s->regs[R_AFMR1] & frame->can_id;
791
+ uint32_t filter_id_masked = s->regs[R_AFMR1] & s->regs[R_AFIR1];
792
+
793
+ if (filter_id_masked == id_masked) {
794
+ filter_pass = true;
795
+ }
796
+ }
797
+
798
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF2)) {
799
+ uint32_t id_masked = s->regs[R_AFMR2] & frame->can_id;
800
+ uint32_t filter_id_masked = s->regs[R_AFMR2] & s->regs[R_AFIR2];
801
+
802
+ if (filter_id_masked == id_masked) {
803
+ filter_pass = true;
804
+ }
805
+ }
806
+
807
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF3)) {
808
+ uint32_t id_masked = s->regs[R_AFMR3] & frame->can_id;
809
+ uint32_t filter_id_masked = s->regs[R_AFMR3] & s->regs[R_AFIR3];
810
+
811
+ if (filter_id_masked == id_masked) {
812
+ filter_pass = true;
813
+ }
814
+ }
815
+
816
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
817
+ uint32_t id_masked = s->regs[R_AFMR4] & frame->can_id;
818
+ uint32_t filter_id_masked = s->regs[R_AFMR4] & s->regs[R_AFIR4];
819
+
820
+ if (filter_id_masked == id_masked) {
821
+ filter_pass = true;
822
+ }
823
+ }
824
+
825
+ if (!filter_pass) {
826
+ trace_xlnx_can_rx_fifo_filter_reject(frame->can_id, frame->can_dlc);
827
+ return;
828
+ }
829
+
830
+ /* Store the message in fifo if it passed through any of the filters. */
831
+ if (filter_pass && frame->can_dlc <= MAX_DLC) {
832
+
833
+ if (fifo32_is_full(&s->rx_fifo)) {
834
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOFLW, 1);
835
+ } else {
836
+ timestamp = CAN_TIMER_MAX - ptimer_get_count(s->can_timer);
837
+
838
+ fifo32_push(&s->rx_fifo, frame->can_id);
839
+
840
+ fifo32_push(&s->rx_fifo, deposit32(0, R_RXFIFO_DLC_DLC_SHIFT,
841
+ R_RXFIFO_DLC_DLC_LENGTH,
842
+ frame->can_dlc) |
843
+ deposit32(0, R_RXFIFO_DLC_RXT_SHIFT,
844
+ R_RXFIFO_DLC_RXT_LENGTH,
845
+ timestamp));
846
+
847
+ /* First 32 bit of the data. */
848
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA1_DB3_SHIFT,
849
+ R_TXFIFO_DATA1_DB3_LENGTH,
850
+ frame->data[0]) |
851
+ deposit32(0, R_TXFIFO_DATA1_DB2_SHIFT,
852
+ R_TXFIFO_DATA1_DB2_LENGTH,
853
+ frame->data[1]) |
854
+ deposit32(0, R_TXFIFO_DATA1_DB1_SHIFT,
855
+ R_TXFIFO_DATA1_DB1_LENGTH,
856
+ frame->data[2]) |
857
+ deposit32(0, R_TXFIFO_DATA1_DB0_SHIFT,
858
+ R_TXFIFO_DATA1_DB0_LENGTH,
859
+ frame->data[3]));
860
+ /* Last 32 bit of the data. */
861
+ fifo32_push(&s->rx_fifo, deposit32(0, R_TXFIFO_DATA2_DB7_SHIFT,
862
+ R_TXFIFO_DATA2_DB7_LENGTH,
863
+ frame->data[4]) |
864
+ deposit32(0, R_TXFIFO_DATA2_DB6_SHIFT,
865
+ R_TXFIFO_DATA2_DB6_LENGTH,
866
+ frame->data[5]) |
867
+ deposit32(0, R_TXFIFO_DATA2_DB5_SHIFT,
868
+ R_TXFIFO_DATA2_DB5_LENGTH,
869
+ frame->data[6]) |
870
+ deposit32(0, R_TXFIFO_DATA2_DB4_SHIFT,
871
+ R_TXFIFO_DATA2_DB4_LENGTH,
872
+ frame->data[7]));
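+
+ /*
+ * Illustrative layout, values chosen only for the example: a frame with
+ * can_id 0x123, can_dlc 8, data bytes 11 22 33 44 55 66 77 88 and
+ * timestamp 0x0042 is stored in the RX FIFO as the four words
+ * 0x00000123, 0x80000042, 0x44332211 and 0x88776655.
+ */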
873
+
874
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXOK, 1);
875
+ trace_xlnx_can_rx_data(frame->can_id, frame->can_dlc,
876
+ frame->data[0], frame->data[1],
877
+ frame->data[2], frame->data[3],
878
+ frame->data[4], frame->data[5],
879
+ frame->data[6], frame->data[7]);
880
+ }
881
+
882
+ can_update_irq(s);
883
+ }
884
+}
885
+
886
+static uint64_t can_rxfifo_pre_read(RegisterInfo *reg, uint64_t val)
887
+{
888
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
889
+
890
+ if (!fifo32_is_empty(&s->rx_fifo)) {
891
+ val = fifo32_pop(&s->rx_fifo);
892
+ } else {
893
+ ARRAY_FIELD_DP32(s->regs, INTERRUPT_STATUS_REGISTER, RXUFLW, 1);
894
+ }
895
+
896
+ can_update_irq(s);
897
+ return val;
898
+}
899
+
900
+static void can_filter_enable_post_write(RegisterInfo *reg, uint64_t val)
901
+{
902
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
903
+
904
+ if (ARRAY_FIELD_EX32(s->regs, AFR, UAF1) &&
905
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF2) &&
906
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF3) &&
907
+ ARRAY_FIELD_EX32(s->regs, AFR, UAF4)) {
908
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 1);
909
+ } else {
910
+ ARRAY_FIELD_DP32(s->regs, STATUS_REGISTER, ACFBSY, 0);
911
+ }
912
+}
913
+
914
+static uint64_t can_filter_mask_pre_write(RegisterInfo *reg, uint64_t val)
915
+{
916
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
917
+ uint32_t reg_idx = (reg->access->addr) / 4;
918
+ uint32_t filter_number = (reg_idx - R_AFMR1) / 2;
919
+
920
+ /* To modify an acceptance filter, the corresponding UAF bit must be '0'. */
921
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
922
+ s->regs[reg_idx] = val;
923
+
924
+ trace_xlnx_can_filter_mask_pre_write(filter_number, s->regs[reg_idx]);
925
+ } else {
926
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
927
+
928
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
929
+ " mask is not set as corresponding UAF bit is not 0.\n",
930
+ path, filter_number + 1);
931
+ }
932
+
933
+ return s->regs[reg_idx];
934
+}
935
+
936
+static uint64_t can_filter_id_pre_write(RegisterInfo *reg, uint64_t val)
937
+{
938
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
939
+ uint32_t reg_idx = (reg->access->addr) / 4;
940
+ uint32_t filter_number = (reg_idx - R_AFIR1) / 2;
941
+
942
+ if (!(s->regs[R_AFR] & (1 << filter_number))) {
943
+ s->regs[reg_idx] = val;
944
+
945
+ trace_xlnx_can_filter_id_pre_write(filter_number, s->regs[reg_idx]);
946
+ } else {
947
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
948
+
949
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Acceptance filter %d"
950
+ " id is not set as corresponding UAF bit is not 0.\n",
951
+ path, filter_number + 1);
952
+ }
953
+
954
+ return s->regs[reg_idx];
955
+}
956
+
957
+static void can_tx_post_write(RegisterInfo *reg, uint64_t val)
958
+{
959
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(reg->opaque);
960
+
961
+ bool is_txhpb = reg->access->addr > A_TXFIFO_DATA2;
962
+
963
+ bool initiate_transfer = (reg->access->addr == A_TXFIFO_DATA2) ||
964
+ (reg->access->addr == A_TXHPB_DATA2);
965
+
966
+ Fifo32 *f = is_txhpb ? &s->txhpb_fifo : &s->tx_fifo;
967
+
968
+ if (!fifo32_is_full(f)) {
969
+ fifo32_push(f, val);
970
+ } else {
971
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
972
+
973
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: TX FIFO is full.\n", path);
974
+ }
975
+
976
+ /* Initiate the message send if TX register is written. */
977
+ if (initiate_transfer &&
978
+ ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) {
979
+ transfer_fifo(s, f);
980
+ }
981
+
982
+ can_update_irq(s);
983
+}
984
+
985
+static const RegisterAccessInfo can_regs_info[] = {
986
+ { .name = "SOFTWARE_RESET_REGISTER",
987
+ .addr = A_SOFTWARE_RESET_REGISTER,
988
+ .rsvd = 0xfffffffc,
989
+ .pre_write = can_srr_pre_write,
990
+ },{ .name = "MODE_SELECT_REGISTER",
991
+ .addr = A_MODE_SELECT_REGISTER,
992
+ .rsvd = 0xfffffff8,
993
+ .pre_write = can_msr_pre_write,
994
+ },{ .name = "ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER",
995
+ .addr = A_ARBITRATION_PHASE_BAUD_RATE_PRESCALER_REGISTER,
996
+ .rsvd = 0xffffff00,
997
+ .pre_write = can_brpr_pre_write,
998
+ },{ .name = "ARBITRATION_PHASE_BIT_TIMING_REGISTER",
999
+ .addr = A_ARBITRATION_PHASE_BIT_TIMING_REGISTER,
1000
+ .rsvd = 0xfffffe00,
1001
+ .pre_write = can_btr_pre_write,
1002
+ },{ .name = "ERROR_COUNTER_REGISTER",
1003
+ .addr = A_ERROR_COUNTER_REGISTER,
1004
+ .rsvd = 0xffff0000,
1005
+ .ro = 0xffffffff,
1006
+ },{ .name = "ERROR_STATUS_REGISTER",
1007
+ .addr = A_ERROR_STATUS_REGISTER,
1008
+ .rsvd = 0xffffffe0,
1009
+ .w1c = 0x1f,
1010
+ },{ .name = "STATUS_REGISTER", .addr = A_STATUS_REGISTER,
1011
+ .reset = 0x1,
1012
+ .rsvd = 0xffffe000,
1013
+ .ro = 0x1fff,
1014
+ },{ .name = "INTERRUPT_STATUS_REGISTER",
1015
+ .addr = A_INTERRUPT_STATUS_REGISTER,
1016
+ .reset = 0x6000,
1017
+ .rsvd = 0xffff8000,
1018
+ .ro = 0x7fff,
1019
+ },{ .name = "INTERRUPT_ENABLE_REGISTER",
1020
+ .addr = A_INTERRUPT_ENABLE_REGISTER,
1021
+ .rsvd = 0xffff8000,
1022
+ .post_write = can_ier_post_write,
1023
+ },{ .name = "INTERRUPT_CLEAR_REGISTER",
1024
+ .addr = A_INTERRUPT_CLEAR_REGISTER,
1025
+ .rsvd = 0xffff8000,
1026
+ .pre_write = can_icr_pre_write,
1027
+ },{ .name = "TIMESTAMP_REGISTER",
1028
+ .addr = A_TIMESTAMP_REGISTER,
1029
+ .rsvd = 0xfffffffe,
1030
+ .pre_write = can_tcr_pre_write,
1031
+ },{ .name = "WIR", .addr = A_WIR,
1032
+ .reset = 0x3f3f,
1033
+ .rsvd = 0xffff0000,
1034
+ },{ .name = "TXFIFO_ID", .addr = A_TXFIFO_ID,
1035
+ .post_write = can_tx_post_write,
1036
+ },{ .name = "TXFIFO_DLC", .addr = A_TXFIFO_DLC,
1037
+ .rsvd = 0xfffffff,
1038
+ .post_write = can_tx_post_write,
1039
+ },{ .name = "TXFIFO_DATA1", .addr = A_TXFIFO_DATA1,
1040
+ .post_write = can_tx_post_write,
1041
+ },{ .name = "TXFIFO_DATA2", .addr = A_TXFIFO_DATA2,
1042
+ .post_write = can_tx_post_write,
1043
+ },{ .name = "TXHPB_ID", .addr = A_TXHPB_ID,
1044
+ .post_write = can_tx_post_write,
1045
+ },{ .name = "TXHPB_DLC", .addr = A_TXHPB_DLC,
1046
+ .rsvd = 0xfffffff,
1047
+ .post_write = can_tx_post_write,
1048
+ },{ .name = "TXHPB_DATA1", .addr = A_TXHPB_DATA1,
1049
+ .post_write = can_tx_post_write,
1050
+ },{ .name = "TXHPB_DATA2", .addr = A_TXHPB_DATA2,
1051
+ .post_write = can_tx_post_write,
1052
+ },{ .name = "RXFIFO_ID", .addr = A_RXFIFO_ID,
1053
+ .ro = 0xffffffff,
1054
+ .post_read = can_rxfifo_pre_read,
1055
+ },{ .name = "RXFIFO_DLC", .addr = A_RXFIFO_DLC,
1056
+ .rsvd = 0xfff0000,
1057
+ .post_read = can_rxfifo_pre_read,
1058
+ },{ .name = "RXFIFO_DATA1", .addr = A_RXFIFO_DATA1,
1059
+ .post_read = can_rxfifo_pre_read,
1060
+ },{ .name = "RXFIFO_DATA2", .addr = A_RXFIFO_DATA2,
1061
+ .post_read = can_rxfifo_pre_read,
1062
+ },{ .name = "AFR", .addr = A_AFR,
1063
+ .rsvd = 0xfffffff0,
1064
+ .post_write = can_filter_enable_post_write,
1065
+ },{ .name = "AFMR1", .addr = A_AFMR1,
1066
+ .pre_write = can_filter_mask_pre_write,
1067
+ },{ .name = "AFIR1", .addr = A_AFIR1,
1068
+ .pre_write = can_filter_id_pre_write,
1069
+ },{ .name = "AFMR2", .addr = A_AFMR2,
1070
+ .pre_write = can_filter_mask_pre_write,
1071
+ },{ .name = "AFIR2", .addr = A_AFIR2,
1072
+ .pre_write = can_filter_id_pre_write,
1073
+ },{ .name = "AFMR3", .addr = A_AFMR3,
1074
+ .pre_write = can_filter_mask_pre_write,
1075
+ },{ .name = "AFIR3", .addr = A_AFIR3,
1076
+ .pre_write = can_filter_id_pre_write,
1077
+ },{ .name = "AFMR4", .addr = A_AFMR4,
1078
+ .pre_write = can_filter_mask_pre_write,
1079
+ },{ .name = "AFIR4", .addr = A_AFIR4,
1080
+ .pre_write = can_filter_id_pre_write,
1081
+ }
1082
+};
1083
+
1084
+static void xlnx_zynqmp_can_ptimer_cb(void *opaque)
1085
+{
1086
+ /* No action required on the timer rollover. */
1087
+}
1088
+
1089
+static const MemoryRegionOps can_ops = {
1090
+ .read = register_read_memory,
1091
+ .write = register_write_memory,
1092
+ .endianness = DEVICE_LITTLE_ENDIAN,
1093
+ .valid = {
398
+ .valid = {
399
+ .max_access_size = 4,
1094
+ .min_access_size = 4,
400
+ .min_access_size = 4,
401
+ .unaligned = false
402
+ },
403
+ .impl = {
1095
+ .max_access_size = 4,
404
+ .max_access_size = 4,
405
+ .min_access_size = 4,
406
+ .unaligned = false
1096
+ },
407
+ },
1097
+};
408
+};
1098
+
409
+
1099
+static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
410
+static Property stm32l4x5_usart_base_properties[] = {
1100
+{
411
+ DEFINE_PROP_CHR("chardev", Stm32l4x5UsartBaseState, chr),
1101
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
412
+ DEFINE_PROP_END_OF_LIST(),
1102
+ unsigned int i;
1103
+
1104
+ for (i = R_RXFIFO_ID; i < ARRAY_SIZE(s->reg_info); ++i) {
1105
+ register_reset(&s->reg_info[i]);
1106
+ }
1107
+
1108
+ ptimer_transaction_begin(s->can_timer);
1109
+ ptimer_set_count(s->can_timer, 0);
1110
+ ptimer_transaction_commit(s->can_timer);
1111
+}
1112
+
1113
+static void xlnx_zynqmp_can_reset_hold(Object *obj)
1114
+{
1115
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1116
+ unsigned int i;
1117
+
1118
+ for (i = 0; i < R_RXFIFO_ID; ++i) {
1119
+ register_reset(&s->reg_info[i]);
1120
+ }
1121
+
1122
+ /*
1123
+ * Reset the FIFOs when the CAN model is reset. This will clear the FIFO writes
1124
+ * done by post_write, which gets called from the register_reset() function;
1125
+ * the post_write handler will not be able to trigger a transmit because CAN is
1126
+ * disabled, since the software_reset_register is cleared first.
1127
+ */
1128
+ fifo32_reset(&s->rx_fifo);
1129
+ fifo32_reset(&s->tx_fifo);
1130
+ fifo32_reset(&s->txhpb_fifo);
1131
+}
1132
+
1133
+static bool xlnx_zynqmp_can_can_receive(CanBusClientState *client)
1134
+{
1135
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1136
+ bus_client);
1137
+
1138
+ if (ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, SRST)) {
1139
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1140
+
1141
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is in reset state.\n",
1142
+ path);
1143
+ return false;
1144
+ }
1145
+
1146
+ if ((ARRAY_FIELD_EX32(s->regs, SOFTWARE_RESET_REGISTER, CEN)) == 0) {
1147
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1148
+
1149
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Controller is disabled. Incoming"
1150
+ " messages will be discarded.\n", path);
1151
+ return false;
1152
+ }
1153
+
1154
+ return true;
1155
+}
1156
+
1157
+static ssize_t xlnx_zynqmp_can_receive(CanBusClientState *client,
1158
+ const qemu_can_frame *buf, size_t buf_size) {
1159
+ XlnxZynqMPCANState *s = container_of(client, XlnxZynqMPCANState,
1160
+ bus_client);
1161
+ const qemu_can_frame *frame = buf;
1162
+
1163
+ if (buf_size <= 0) {
1164
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1165
+
1166
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Error in the data received.\n",
1167
+ path);
1168
+ return 0;
1169
+ }
1170
+
1171
+ if (ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SNOOP)) {
1172
+ /* Snoop Mode: Just keep the data. No response back. */
1173
+ update_rx_fifo(s, frame);
1174
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP))) {
1175
+ /*
1176
+ * XlnxZynqMPCAN is in sleep mode. Any data on bus will bring it to wake
1177
+ * up state.
1178
+ */
1179
+ can_exit_sleep_mode(s);
1180
+ update_rx_fifo(s, frame);
1181
+ } else if ((ARRAY_FIELD_EX32(s->regs, STATUS_REGISTER, SLEEP)) == 0) {
1182
+ update_rx_fifo(s, frame);
1183
+ } else {
1184
+ /*
1185
+ * XlnxZynqMPCAN will not participate in normal bus communication
1186
+ * and will not receive any messages transmitted by other CAN nodes.
1187
+ */
1188
+ trace_xlnx_can_rx_discard(s->regs[R_STATUS_REGISTER]);
1189
+ }
1190
+
1191
+ return 1;
1192
+}
1193
+
1194
+static CanBusClientInfo can_xilinx_bus_client_info = {
1195
+ .can_receive = xlnx_zynqmp_can_can_receive,
1196
+ .receive = xlnx_zynqmp_can_receive,
1197
+};
413
+};
1198
+
414
+
1199
+static int xlnx_zynqmp_can_connect_to_bus(XlnxZynqMPCANState *s,
415
+static void stm32l4x5_usart_base_init(Object *obj)
1200
+ CanBusState *bus)
416
+{
1201
+{
417
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
1202
+ s->bus_client.info = &can_xilinx_bus_client_info;
418
+
1203
+
1204
+ if (can_bus_insert_client(bus, &s->bus_client) < 0) {
1205
+ return -1;
1206
+ }
1207
+ return 0;
1208
+}
1209
+
1210
+static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
1211
+{
1212
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(dev);
1213
+
1214
+ if (s->canbus) {
1215
+ if (xlnx_zynqmp_can_connect_to_bus(s, s->canbus) < 0) {
1216
+ g_autofree char *path = object_get_canonical_path(OBJECT(s));
1217
+
1218
+ error_setg(errp, "%s: xlnx_zynqmp_can_connect_to_bus"
1219
+ " failed.", path);
1220
+ return;
1221
+ }
1222
+ }
1223
+
1224
+ /* Create RX FIFO, TXFIFO, TXHPB storage. */
1225
+ fifo32_create(&s->rx_fifo, RXFIFO_SIZE);
1226
+ fifo32_create(&s->tx_fifo, RXFIFO_SIZE);
1227
+ fifo32_create(&s->txhpb_fifo, CAN_FRAME_SIZE);
1228
+
1229
+ /* Allocate a new timer. */
1230
+ s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
1231
+ PTIMER_POLICY_DEFAULT);
1232
+
1233
+ ptimer_transaction_begin(s->can_timer);
1234
+
1235
+ ptimer_set_freq(s->can_timer, s->cfg.ext_clk_freq);
1236
+ ptimer_set_limit(s->can_timer, CAN_TIMER_MAX, 1);
1237
+ ptimer_run(s->can_timer, 0);
1238
+ ptimer_transaction_commit(s->can_timer);
1239
+}
1240
+
1241
+static void xlnx_zynqmp_can_init(Object *obj)
1242
+{
1243
+ XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1244
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
1245
+
1246
+ RegisterInfoArray *reg_array;
1247
+
1248
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_CAN,
1249
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1250
+ reg_array = register_init_block32(DEVICE(obj), can_regs_info,
1251
+ ARRAY_SIZE(can_regs_info),
1252
+ s->reg_info, s->regs,
1253
+ &can_ops,
1254
+ XLNX_ZYNQMP_CAN_ERR_DEBUG,
1255
+ XLNX_ZYNQMP_CAN_R_MAX * 4);
1256
+
1257
+ memory_region_add_subregion(&s->iomem, 0x00, &reg_array->mem);
1258
+ sysbus_init_mmio(sbd, &s->iomem);
1259
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
419
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
1260
+}
420
+
1261
+
421
+ memory_region_init_io(&s->mmio, obj, &stm32l4x5_usart_base_ops, s,
1262
+static const VMStateDescription vmstate_can = {
422
+ TYPE_STM32L4X5_USART_BASE, 0x400);
1263
+ .name = TYPE_XLNX_ZYNQMP_CAN,
423
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
424
+
425
+ s->clk = qdev_init_clock_in(DEVICE(s), "clk", NULL, s, 0);
426
+}
427
+
428
+static const VMStateDescription vmstate_stm32l4x5_usart_base = {
429
+ .name = TYPE_STM32L4X5_USART_BASE,
1264
+ .version_id = 1,
430
+ .version_id = 1,
1265
+ .minimum_version_id = 1,
431
+ .minimum_version_id = 1,
1266
+ .fields = (VMStateField[]) {
432
+ .fields = (VMStateField[]) {
1267
+ VMSTATE_FIFO32(rx_fifo, XlnxZynqMPCANState),
433
+ VMSTATE_UINT32(cr1, Stm32l4x5UsartBaseState),
1268
+ VMSTATE_FIFO32(tx_fifo, XlnxZynqMPCANState),
434
+ VMSTATE_UINT32(cr2, Stm32l4x5UsartBaseState),
1269
+ VMSTATE_FIFO32(txhpb_fifo, XlnxZynqMPCANState),
435
+ VMSTATE_UINT32(cr3, Stm32l4x5UsartBaseState),
1270
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPCANState, XLNX_ZYNQMP_CAN_R_MAX),
436
+ VMSTATE_UINT32(brr, Stm32l4x5UsartBaseState),
1271
+ VMSTATE_PTIMER(can_timer, XlnxZynqMPCANState),
437
+ VMSTATE_UINT32(gtpr, Stm32l4x5UsartBaseState),
1272
+ VMSTATE_END_OF_LIST(),
438
+ VMSTATE_UINT32(rtor, Stm32l4x5UsartBaseState),
439
+ VMSTATE_UINT32(isr, Stm32l4x5UsartBaseState),
440
+ VMSTATE_UINT32(rdr, Stm32l4x5UsartBaseState),
441
+ VMSTATE_UINT32(tdr, Stm32l4x5UsartBaseState),
442
+ VMSTATE_CLOCK(clk, Stm32l4x5UsartBaseState),
443
+ VMSTATE_END_OF_LIST()
1273
+ }
444
+ }
1274
+};
445
+};
1275
+
446
+
1276
+static Property xlnx_zynqmp_can_properties[] = {
447
+
1277
+ DEFINE_PROP_UINT32("ext_clk_freq", XlnxZynqMPCANState, cfg.ext_clk_freq,
448
+static void stm32l4x5_usart_base_realize(DeviceState *dev, Error **errp)
1278
+ CAN_DEFAULT_CLOCK),
449
+{
1279
+ DEFINE_PROP_LINK("canbus", XlnxZynqMPCANState, canbus, TYPE_CAN_BUS,
450
+ ERRP_GUARD();
1280
+ CanBusState *),
451
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(dev);
1281
+ DEFINE_PROP_END_OF_LIST(),
452
+ if (!clock_has_source(s->clk)) {
1282
+};
453
+ error_setg(errp, "USART clock must be wired up by SoC code");
1283
+
454
+ return;
1284
+static void xlnx_zynqmp_can_class_init(ObjectClass *klass, void *data)
455
+ }
456
+}
457
+
458
+static void stm32l4x5_usart_base_class_init(ObjectClass *klass, void *data)
1285
+{
459
+{
1286
+ DeviceClass *dc = DEVICE_CLASS(klass);
460
+ DeviceClass *dc = DEVICE_CLASS(klass);
1287
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
461
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
1288
+
462
+
1289
+ rc->phases.enter = xlnx_zynqmp_can_reset_init;
463
+ rc->phases.hold = stm32l4x5_usart_base_reset_hold;
1290
+ rc->phases.hold = xlnx_zynqmp_can_reset_hold;
464
+ device_class_set_props(dc, stm32l4x5_usart_base_properties);
1291
+ dc->realize = xlnx_zynqmp_can_realize;
465
+ dc->realize = stm32l4x5_usart_base_realize;
1292
+ device_class_set_props(dc, xlnx_zynqmp_can_properties);
466
+ dc->vmsd = &vmstate_stm32l4x5_usart_base;
1293
+ dc->vmsd = &vmstate_can;
467
+}
1294
+}
468
+
1295
+
469
+static void stm32l4x5_usart_class_init(ObjectClass *oc, void *data)
1296
+static const TypeInfo can_info = {
470
+{
1297
+ .name = TYPE_XLNX_ZYNQMP_CAN,
471
+ Stm32l4x5UsartBaseClass *subc = STM32L4X5_USART_BASE_CLASS(oc);
1298
+ .parent = TYPE_SYS_BUS_DEVICE,
472
+
1299
+ .instance_size = sizeof(XlnxZynqMPCANState),
473
+ subc->type = STM32L4x5_USART;
1300
+ .class_init = xlnx_zynqmp_can_class_init,
474
+}
1301
+ .instance_init = xlnx_zynqmp_can_init,
475
+
476
+static void stm32l4x5_uart_class_init(ObjectClass *oc, void *data)
477
+{
478
+ Stm32l4x5UsartBaseClass *subc = STM32L4X5_USART_BASE_CLASS(oc);
479
+
480
+ subc->type = STM32L4x5_UART;
481
+}
482
+
483
+static void stm32l4x5_lpuart_class_init(ObjectClass *oc, void *data)
484
+{
485
+ Stm32l4x5UsartBaseClass *subc = STM32L4X5_USART_BASE_CLASS(oc);
486
+
487
+ subc->type = STM32L4x5_LPUART;
488
+}
489
+
490
+static const TypeInfo stm32l4x5_usart_types[] = {
491
+ {
492
+ .name = TYPE_STM32L4X5_USART_BASE,
493
+ .parent = TYPE_SYS_BUS_DEVICE,
494
+ .instance_size = sizeof(Stm32l4x5UsartBaseState),
495
+ .instance_init = stm32l4x5_usart_base_init,
496
+ .class_init = stm32l4x5_usart_base_class_init,
497
+ .abstract = true,
498
+ }, {
499
+ .name = TYPE_STM32L4X5_USART,
500
+ .parent = TYPE_STM32L4X5_USART_BASE,
501
+ .class_init = stm32l4x5_usart_class_init,
502
+ }, {
503
+ .name = TYPE_STM32L4X5_UART,
504
+ .parent = TYPE_STM32L4X5_USART_BASE,
505
+ .class_init = stm32l4x5_uart_class_init,
506
+ }, {
507
+ .name = TYPE_STM32L4X5_LPUART,
508
+ .parent = TYPE_STM32L4X5_USART_BASE,
509
+ .class_init = stm32l4x5_lpuart_class_init,
510
+ }
1302
+};
511
+};
1303
+
512
+
1304
+static void can_register_types(void)
513
+DEFINE_TYPES(stm32l4x5_usart_types)
1305
+{
514
diff --git a/hw/char/Kconfig b/hw/char/Kconfig
1306
+ type_register_static(&can_info);
1307
+}
1308
+
1309
+type_init(can_register_types)
1310
diff --git a/hw/Kconfig b/hw/Kconfig
1311
index XXXXXXX..XXXXXXX 100644
515
index XXXXXXX..XXXXXXX 100644
1312
--- a/hw/Kconfig
516
--- a/hw/char/Kconfig
1313
+++ b/hw/Kconfig
517
+++ b/hw/char/Kconfig
1314
@@ -XXX,XX +XXX,XX @@ config XILINX_AXI
518
@@ -XXX,XX +XXX,XX @@ config VIRTIO_SERIAL
1315
config XLNX_ZYNQMP
519
config STM32F2XX_USART
1316
bool
520
bool
1317
select REGISTER
521
1318
+ select CAN_BUS
522
+config STM32L4X5_USART
1319
diff --git a/hw/net/can/meson.build b/hw/net/can/meson.build
523
+ bool
524
+
525
config CMSDK_APB_UART
526
bool
527
528
diff --git a/hw/char/meson.build b/hw/char/meson.build
1320
index XXXXXXX..XXXXXXX 100644
529
index XXXXXXX..XXXXXXX 100644
1321
--- a/hw/net/can/meson.build
530
--- a/hw/char/meson.build
1322
+++ b/hw/net/can/meson.build
531
+++ b/hw/char/meson.build
1323
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_pcm3680_pci.c'))
532
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_RENESAS_SCI', if_true: files('renesas_sci.c'))
1324
softmmu_ss.add(when: 'CONFIG_CAN_PCI', if_true: files('can_mioe3680_pci.c'))
533
system_ss.add(when: 'CONFIG_SIFIVE_UART', if_true: files('sifive_uart.c'))
1325
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD', if_true: files('ctucan_core.c'))
534
system_ss.add(when: 'CONFIG_SH_SCI', if_true: files('sh_serial.c'))
1326
softmmu_ss.add(when: 'CONFIG_CAN_CTUCANFD_PCI', if_true: files('ctucan_pci.c'))
535
system_ss.add(when: 'CONFIG_STM32F2XX_USART', if_true: files('stm32f2xx_usart.c'))
1327
+softmmu_ss.add(when: 'CONFIG_XLNX_ZYNQMP', if_true: files('xlnx-zynqmp-can.c'))
536
+system_ss.add(when: 'CONFIG_STM32L4X5_USART', if_true: files('stm32l4x5_usart.c'))
1328
diff --git a/hw/net/can/trace-events b/hw/net/can/trace-events
537
system_ss.add(when: 'CONFIG_MCHP_PFSOC_MMUART', if_true: files('mchp_pfsoc_mmuart.c'))
1329
new file mode 100644
538
system_ss.add(when: 'CONFIG_HTIF', if_true: files('riscv_htif.c'))
1330
index XXXXXXX..XXXXXXX
539
system_ss.add(when: 'CONFIG_GOLDFISH_TTY', if_true: files('goldfish_tty.c'))
1331
--- /dev/null
540
diff --git a/hw/char/trace-events b/hw/char/trace-events
1332
+++ b/hw/net/can/trace-events
541
index XXXXXXX..XXXXXXX 100644
1333
@@ -XXX,XX +XXX,XX @@
542
--- a/hw/char/trace-events
1334
+# xlnx-zynqmp-can.c
543
+++ b/hw/char/trace-events
1335
+xlnx_can_update_irq(uint32_t isr, uint32_t ier, uint32_t irq) "ISR: 0x%08x IER: 0x%08x IRQ: 0x%08x"
544
@@ -XXX,XX +XXX,XX @@ cadence_uart_baudrate(unsigned baudrate) "baudrate %u"
1336
+xlnx_can_reset(uint32_t val) "Resetting controller with value = 0x%08x"
545
sh_serial_read(char *id, unsigned size, uint64_t offs, uint64_t val) " %s size %d offs 0x%02" PRIx64 " -> 0x%02" PRIx64
1337
+xlnx_can_rx_fifo_filter_reject(uint32_t id, uint8_t dlc) "Frame: ID: 0x%08x DLC: 0x%02x"
546
sh_serial_write(char *id, unsigned size, uint64_t offs, uint64_t val) "%s size %d offs 0x%02" PRIx64 " <- 0x%02" PRIx64
1338
+xlnx_can_filter_id_pre_write(uint8_t filter_num, uint32_t value) "Filter%d ID: 0x%08x"
547
1339
+xlnx_can_filter_mask_pre_write(uint8_t filter_num, uint32_t value) "Filter%d MASK: 0x%08x"
548
+# stm32l4x5_usart.c
1340
+xlnx_can_tx_data(uint32_t id, uint8_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
549
+stm32l4x5_usart_read(uint64_t addr, uint32_t data) "USART: Read <0x%" PRIx64 "> -> 0x%" PRIx32 ""
1341
+xlnx_can_rx_data(uint32_t id, uint32_t dlc, uint8_t db0, uint8_t db1, uint8_t db2, uint8_t db3, uint8_t db4, uint8_t db5, uint8_t db6, uint8_t db7) "Frame: ID: 0x%08x DLC: 0x%02x DATA: 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x"
550
+stm32l4x5_usart_write(uint64_t addr, uint32_t data) "USART: Write <0x%" PRIx64 "> <- 0x%" PRIx32 ""
1342
+xlnx_can_rx_discard(uint32_t status) "Controller is not enabled for bus communication. Status Register: 0x%08x"
551
+
552
# xen_console.c
553
xen_console_connect(unsigned int idx, unsigned int ring_ref, unsigned int port, unsigned int limit) "idx %u ring_ref %u port %u limit %u"
554
xen_console_disconnect(unsigned int idx) "idx %u"
1343
--
555
--
1344
2.20.1
556
2.34.1
1345
557
1346
558
1
From: Kunkun Jiang <jiangkunkun@huawei.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
According to the SMMUv3 spec, the SPAN field of the Level1 Stream Table
3
Implement the ability to read and write characters to the
4
Descriptor is 5 bits ([4:0]).
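(A standalone sketch, not part of the patch, of why a 4-bit extract truncates larger SPAN values; the helper below only mirrors the semantics of QEMU's extract32() from include/qemu/bitops.h, and the word[0] value is made up:)

    #include <stdint.h>
    #include <stdio.h>

    /* same semantics as QEMU's extract32(value, start, length) */
    static uint32_t extract_bits(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t word0 = 0x10;  /* hypothetical L1STD word[0] with SPAN = 16 */
        printf("4-bit decode: %u\n", extract_bits(word0, 0, 4)); /* 0: bit 4 lost */
        printf("5-bit decode: %u\n", extract_bits(word0, 0, 5)); /* 16: correct */
        return 0;
    }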
4
usart using the serial port.
5
5
6
Fixes: 9bde7f0674f (hw/arm/smmuv3: Implement translate callback)
6
The character transmission is based on the
7
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
7
cmsdk-apb-uart implementation.
8
Message-id: 20201124023711.1184-1-jiangkunkun@huawei.com
8
9
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
10
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Acked-by: Eric Auger <eric.auger@redhat.com>
12
Message-id: 20240329174402.60382-3-arnaud.minier@telecom-paris.fr
13
[PMM: fixed a few checkpatch nits]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
15
---
13
hw/arm/smmuv3-internal.h | 2 +-
16
include/hw/char/stm32l4x5_usart.h | 1 +
14
1 file changed, 1 insertion(+), 1 deletion(-)
17
hw/char/stm32l4x5_usart.c | 143 ++++++++++++++++++++++++++++++
15
18
hw/char/trace-events | 7 ++
16
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
19
3 files changed, 151 insertions(+)
20
21
diff --git a/include/hw/char/stm32l4x5_usart.h b/include/hw/char/stm32l4x5_usart.h
17
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/smmuv3-internal.h
23
--- a/include/hw/char/stm32l4x5_usart.h
19
+++ b/hw/arm/smmuv3-internal.h
24
+++ b/include/hw/char/stm32l4x5_usart.h
20
@@ -XXX,XX +XXX,XX @@ static inline uint64_t l1std_l2ptr(STEDesc *desc)
25
@@ -XXX,XX +XXX,XX @@ struct Stm32l4x5UsartBaseState {
21
return hi << 32 | lo;
26
Clock *clk;
27
CharBackend chr;
28
qemu_irq irq;
29
+ guint watch_tag;
30
};
31
32
struct Stm32l4x5UsartBaseClass {
33
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/char/stm32l4x5_usart.c
36
+++ b/hw/char/stm32l4x5_usart.c
37
@@ -XXX,XX +XXX,XX @@ REG32(RDR, 0x24)
38
REG32(TDR, 0x28)
39
FIELD(TDR, TDR, 0, 9)
40
41
+static void stm32l4x5_update_irq(Stm32l4x5UsartBaseState *s)
42
+{
43
+ if (((s->isr & R_ISR_WUF_MASK) && (s->cr3 & R_CR3_WUFIE_MASK)) ||
44
+ ((s->isr & R_ISR_CMF_MASK) && (s->cr1 & R_CR1_CMIE_MASK)) ||
45
+ ((s->isr & R_ISR_ABRF_MASK) && (s->cr1 & R_CR1_RXNEIE_MASK)) ||
46
+ ((s->isr & R_ISR_EOBF_MASK) && (s->cr1 & R_CR1_EOBIE_MASK)) ||
47
+ ((s->isr & R_ISR_RTOF_MASK) && (s->cr1 & R_CR1_RTOIE_MASK)) ||
48
+ ((s->isr & R_ISR_CTSIF_MASK) && (s->cr3 & R_CR3_CTSIE_MASK)) ||
49
+ ((s->isr & R_ISR_LBDF_MASK) && (s->cr2 & R_CR2_LBDIE_MASK)) ||
50
+ ((s->isr & R_ISR_TXE_MASK) && (s->cr1 & R_CR1_TXEIE_MASK)) ||
51
+ ((s->isr & R_ISR_TC_MASK) && (s->cr1 & R_CR1_TCIE_MASK)) ||
52
+ ((s->isr & R_ISR_RXNE_MASK) && (s->cr1 & R_CR1_RXNEIE_MASK)) ||
53
+ ((s->isr & R_ISR_IDLE_MASK) && (s->cr1 & R_CR1_IDLEIE_MASK)) ||
54
+ ((s->isr & R_ISR_ORE_MASK) &&
55
+ ((s->cr1 & R_CR1_RXNEIE_MASK) || (s->cr3 & R_CR3_EIE_MASK))) ||
56
+ /* TODO: Handle NF ? */
57
+ ((s->isr & R_ISR_FE_MASK) && (s->cr3 & R_CR3_EIE_MASK)) ||
58
+ ((s->isr & R_ISR_PE_MASK) && (s->cr1 & R_CR1_PEIE_MASK))) {
59
+ qemu_irq_raise(s->irq);
60
+ trace_stm32l4x5_usart_irq_raised(s->isr);
61
+ } else {
62
+ qemu_irq_lower(s->irq);
63
+ trace_stm32l4x5_usart_irq_lowered();
64
+ }
65
+}
66
+
67
+static int stm32l4x5_usart_base_can_receive(void *opaque)
68
+{
69
+ Stm32l4x5UsartBaseState *s = opaque;
70
+
71
+ if (!(s->isr & R_ISR_RXNE_MASK)) {
72
+ return 1;
73
+ }
74
+
75
+ return 0;
76
+}
77
+
78
+static void stm32l4x5_usart_base_receive(void *opaque, const uint8_t *buf,
79
+ int size)
80
+{
81
+ Stm32l4x5UsartBaseState *s = opaque;
82
+
83
+ if (!((s->cr1 & R_CR1_UE_MASK) && (s->cr1 & R_CR1_RE_MASK))) {
84
+ trace_stm32l4x5_usart_receiver_not_enabled(
85
+ FIELD_EX32(s->cr1, CR1, UE), FIELD_EX32(s->cr1, CR1, RE));
86
+ return;
87
+ }
88
+
89
+ /* Check if overrun detection is enabled and if there is an overrun */
90
+ if (!(s->cr3 & R_CR3_OVRDIS_MASK) && (s->isr & R_ISR_RXNE_MASK)) {
91
+ /*
92
+ * A character has been received while
93
+ * the previous has not been read = Overrun.
94
+ */
95
+ s->isr |= R_ISR_ORE_MASK;
96
+ trace_stm32l4x5_usart_overrun_detected(s->rdr, *buf);
97
+ } else {
98
+ /* No overrun */
99
+ s->rdr = *buf;
100
+ s->isr |= R_ISR_RXNE_MASK;
101
+ trace_stm32l4x5_usart_rx(s->rdr);
102
+ }
103
+
104
+ stm32l4x5_update_irq(s);
105
+}
106
+
107
+/*
108
+ * Try to send tx data, and arrange to be called back later if
109
+ * we can't (ie the char backend is busy/blocking).
110
+ */
111
+static gboolean usart_transmit(void *do_not_use, GIOCondition cond,
112
+ void *opaque)
113
+{
114
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(opaque);
115
+ int ret;
116
+ /* TODO: Handle 9 bits transmission */
117
+ uint8_t ch = s->tdr;
118
+
119
+ s->watch_tag = 0;
120
+
121
+ if (!(s->cr1 & R_CR1_TE_MASK) || (s->isr & R_ISR_TXE_MASK)) {
122
+ return G_SOURCE_REMOVE;
123
+ }
124
+
125
+ ret = qemu_chr_fe_write(&s->chr, &ch, 1);
126
+ if (ret <= 0) {
127
+ s->watch_tag = qemu_chr_fe_add_watch(&s->chr, G_IO_OUT | G_IO_HUP,
128
+ usart_transmit, s);
129
+ if (!s->watch_tag) {
130
+ /*
131
+ * Most common reason to be here is "no chardev backend":
132
+ * just insta-drain the buffer, so the serial output
133
+ * goes into a void, rather than blocking the guest.
134
+ */
135
+ goto buffer_drained;
136
+ }
137
+ /* Transmit pending */
138
+ trace_stm32l4x5_usart_tx_pending();
139
+ return G_SOURCE_REMOVE;
140
+ }
141
+
142
+buffer_drained:
143
+ /* Character successfully sent */
144
+ trace_stm32l4x5_usart_tx(ch);
145
+ s->isr |= R_ISR_TC_MASK | R_ISR_TXE_MASK;
146
+ stm32l4x5_update_irq(s);
147
+ return G_SOURCE_REMOVE;
148
+}
149
+
150
+static void usart_cancel_transmit(Stm32l4x5UsartBaseState *s)
151
+{
152
+ if (s->watch_tag) {
153
+ g_source_remove(s->watch_tag);
154
+ s->watch_tag = 0;
155
+ }
156
+}
157
+
158
static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
159
{
160
Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
161
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
162
s->isr = 0x020000C0;
163
s->rdr = 0x00000000;
164
s->tdr = 0x00000000;
165
+
166
+ usart_cancel_transmit(s);
167
+ stm32l4x5_update_irq(s);
168
+}
169
+
170
+static void usart_update_rqr(Stm32l4x5UsartBaseState *s, uint32_t value)
171
+{
172
+ /* TXFRQ */
173
+ /* Reset RXNE flag */
174
+ if (value & R_RQR_RXFRQ_MASK) {
175
+ s->isr &= ~R_ISR_RXNE_MASK;
176
+ }
177
+ /* MMRQ */
178
+ /* SBKRQ */
179
+ /* ABRRQ */
180
+ stm32l4x5_update_irq(s);
22
}
181
}
23
182
24
-#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
183
static uint64_t stm32l4x5_usart_base_read(void *opaque, hwaddr addr,
25
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 5))
184
@@ -XXX,XX +XXX,XX @@ static uint64_t stm32l4x5_usart_base_read(void *opaque, hwaddr addr,
26
185
retvalue = FIELD_EX32(s->rdr, RDR, RDR);
27
#endif
186
/* Reset RXNE flag */
187
s->isr &= ~R_ISR_RXNE_MASK;
188
+ stm32l4x5_update_irq(s);
189
break;
190
case A_TDR:
191
retvalue = FIELD_EX32(s->tdr, TDR, TDR);
192
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
193
switch (addr) {
194
case A_CR1:
195
s->cr1 = value;
196
+ stm32l4x5_update_irq(s);
197
return;
198
case A_CR2:
199
s->cr2 = value;
200
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
201
s->rtor = value;
202
return;
203
case A_RQR:
204
+ usart_update_rqr(s, value);
205
return;
206
case A_ISR:
207
qemu_log_mask(LOG_GUEST_ERROR,
208
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
209
case A_ICR:
210
/* Clear the status flags */
211
s->isr &= ~value;
212
+ stm32l4x5_update_irq(s);
213
return;
214
case A_RDR:
215
qemu_log_mask(LOG_GUEST_ERROR,
216
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
217
return;
218
case A_TDR:
219
s->tdr = value;
220
+ s->isr &= ~R_ISR_TXE_MASK;
221
+ usart_transmit(NULL, G_IO_OUT, s);
222
return;
223
default:
224
qemu_log_mask(LOG_GUEST_ERROR,
225
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_realize(DeviceState *dev, Error **errp)
226
error_setg(errp, "USART clock must be wired up by SoC code");
227
return;
228
}
229
+
230
+ qemu_chr_fe_set_handlers(&s->chr, stm32l4x5_usart_base_can_receive,
231
+ stm32l4x5_usart_base_receive, NULL, NULL,
232
+ s, NULL, true);
233
}
234
235
static void stm32l4x5_usart_base_class_init(ObjectClass *klass, void *data)
236
diff --git a/hw/char/trace-events b/hw/char/trace-events
237
index XXXXXXX..XXXXXXX 100644
238
--- a/hw/char/trace-events
239
+++ b/hw/char/trace-events
240
@@ -XXX,XX +XXX,XX @@ sh_serial_write(char *id, unsigned size, uint64_t offs, uint64_t val) "%s size %
241
# stm32l4x5_usart.c
242
stm32l4x5_usart_read(uint64_t addr, uint32_t data) "USART: Read <0x%" PRIx64 "> -> 0x%" PRIx32 ""
243
stm32l4x5_usart_write(uint64_t addr, uint32_t data) "USART: Write <0x%" PRIx64 "> <- 0x%" PRIx32 ""
244
+stm32l4x5_usart_rx(uint8_t c) "USART: got character 0x%x from backend"
245
+stm32l4x5_usart_tx(uint8_t c) "USART: character 0x%x sent to backend"
246
+stm32l4x5_usart_tx_pending(void) "USART: character send to backend pending"
247
+stm32l4x5_usart_irq_raised(uint32_t reg) "USART: IRQ raised: 0x%08"PRIx32
248
+stm32l4x5_usart_irq_lowered(void) "USART: IRQ lowered"
249
+stm32l4x5_usart_overrun_detected(uint8_t current, uint8_t received) "USART: Overrun detected, RDR='0x%x', received 0x%x"
250
+stm32l4x5_usart_receiver_not_enabled(uint8_t ue_bit, uint8_t re_bit) "USART: Receiver not enabled, UE=0x%x, RE=0x%x"
251
252
# xen_console.c
253
xen_console_connect(unsigned int idx, unsigned int ring_ref, unsigned int port, unsigned int limit) "idx %u ring_ref %u port %u limit %u"
28
--
254
--
29
2.20.1
255
2.34.1
30
256
31
257
1
Currently M-profile borrows the A-profile code for VMSR and VMRS
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
(access to the FP system registers), because all it needs to support
3
is the FPSCR. In v8.1M things become significantly more complicated
4
in two ways:
5
2
6
* there are several new FP system registers; some have side effects
3
Add a function to change the settings of the
7
on read, and one (FPCXT_NS) needs to avoid the usual
4
serial connection.
8
vfp_access_check() and the "only if FPU implemented" check
9
5
10
* all sysregs are now accessible both by VMRS/VMSR (which
6
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
11
reads/writes a general purpose register) and also by VLDR/VSTR
7
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
12
(which reads/writes them directly to memory)
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20240329174402.60382-4-arnaud.minier@telecom-paris.fr
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/char/stm32l4x5_usart.c | 98 +++++++++++++++++++++++++++++++++++++++
13
hw/char/trace-events | 1 +
14
2 files changed, 99 insertions(+)
13
15
14
Refactor the structure of how we handle VMSR/VMRS to cope with this:
16
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
15
16
* keep the M-profile code entirely separate from the A-profile code
17
18
* abstract out the "read or write the general purpose register" part
19
of the code into a loadfn or storefn function pointer, so we can
20
reuse it for VLDR/VSTR.
21
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20201119215617.29887-8-peter.maydell@linaro.org
25
---
26
target/arm/cpu.h | 3 +
27
target/arm/translate-vfp.c.inc | 182 ++++++++++++++++++++++++++++++---
28
2 files changed, 171 insertions(+), 14 deletions(-)
29
30
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
31
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.h
18
--- a/hw/char/stm32l4x5_usart.c
33
+++ b/target/arm/cpu.h
19
+++ b/hw/char/stm32l4x5_usart.c
34
@@ -XXX,XX +XXX,XX @@ enum arm_cpu_mode {
20
@@ -XXX,XX +XXX,XX @@ static void usart_cancel_transmit(Stm32l4x5UsartBaseState *s)
35
#define ARM_VFP_FPINST 9
21
}
36
#define ARM_VFP_FPINST2 10
22
}
37
23
38
+/* QEMU-internal value meaning "FPSCR, but we care only about NZCV" */
24
+static void stm32l4x5_update_params(Stm32l4x5UsartBaseState *s)
39
+#define QEMU_VFP_FPSCR_NZCV 0xffff
25
+{
26
+ int speed, parity, data_bits, stop_bits;
27
+ uint32_t value, usart_div;
28
+ QEMUSerialSetParams ssp;
40
+
29
+
41
/* iwMMXt coprocessor control registers. */
30
+ /* Select the parity type */
42
#define ARM_IWMMXT_wCID 0
31
+ if (s->cr1 & R_CR1_PCE_MASK) {
43
#define ARM_IWMMXT_wCon 1
32
+ if (s->cr1 & R_CR1_PS_MASK) {
44
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
33
+ parity = 'O';
45
index XXXXXXX..XXXXXXX 100644
34
+ } else {
46
--- a/target/arm/translate-vfp.c.inc
35
+ parity = 'E';
47
+++ b/target/arm/translate-vfp.c.inc
36
+ }
48
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
37
+ } else {
49
return true;
38
+ parity = 'N';
50
}
51
52
+/*
53
+ * M-profile provides two different sets of instructions that can
54
+ * access floating point system registers: VMSR/VMRS (which move
55
+ * to/from a general purpose register) and VLDR/VSTR sysreg (which
56
+ * move directly to/from memory). In some cases there are also side
57
+ * effects which must happen after any write to memory (which could
58
+ * cause an exception). So we implement the common logic for the
59
+ * sysreg access in gen_M_fp_sysreg_write() and gen_M_fp_sysreg_read(),
60
+ * which take pointers to callback functions which will perform the
61
+ * actual "read/write general purpose register" and "read/write
62
+ * memory" operations.
63
+ */
64
+
65
+/*
66
+ * Emit code to store the sysreg to its final destination; frees the
67
+ * TCG temp 'value' it is passed.
68
+ */
69
+typedef void fp_sysreg_storefn(DisasContext *s, void *opaque, TCGv_i32 value);
70
+/*
71
+ * Emit code to load the value to be copied to the sysreg; returns
72
+ * a new TCG temporary
73
+ */
74
+typedef TCGv_i32 fp_sysreg_loadfn(DisasContext *s, void *opaque);
75
+
76
+/* Common decode/access checks for fp sysreg read/write */
77
+typedef enum FPSysRegCheckResult {
78
+ FPSysRegCheckFailed, /* caller should return false */
79
+ FPSysRegCheckDone, /* caller should return true */
80
+ FPSysRegCheckContinue, /* caller should continue generating code */
81
+} FPSysRegCheckResult;
82
+
83
+static FPSysRegCheckResult fp_sysreg_checks(DisasContext *s, int regno)
84
+{
85
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
86
+ return FPSysRegCheckFailed;
87
+ }
39
+ }
88
+
40
+
89
+ switch (regno) {
41
+ /* Select the number of stop bits */
90
+ case ARM_VFP_FPSCR:
42
+ switch (FIELD_EX32(s->cr2, CR2, STOP)) {
91
+ case QEMU_VFP_FPSCR_NZCV:
43
+ case 0:
44
+ stop_bits = 1;
45
+ break;
46
+ case 2:
47
+ stop_bits = 2;
92
+ break;
48
+ break;
93
+ default:
49
+ default:
94
+ return FPSysRegCheckFailed;
50
+ qemu_log_mask(LOG_UNIMP,
51
+ "UNIMPLEMENTED: fractionnal stop bits; CR2[13:12] = %u",
52
+ FIELD_EX32(s->cr2, CR2, STOP));
53
+ return;
95
+ }
54
+ }
96
+
55
+
97
+ if (!vfp_access_check(s)) {
56
+ /* Select the length of the word */
98
+ return FPSysRegCheckDone;
57
+ switch ((FIELD_EX32(s->cr1, CR1, M1) << 1) | FIELD_EX32(s->cr1, CR1, M0)) {
58
+ case 0:
59
+ data_bits = 8;
60
+ break;
61
+ case 1:
62
+ data_bits = 9;
63
+ break;
64
+ case 2:
65
+ data_bits = 7;
66
+ break;
67
+ default:
68
+ qemu_log_mask(LOG_GUEST_ERROR,
69
+ "UNDEFINED: invalid word length, CR1.M = 0b11");
70
+ return;
99
+ }
71
+ }
100
+
72
+
101
+ return FPSysRegCheckContinue;
73
+ /* Select the baud rate */
74
+ value = FIELD_EX32(s->brr, BRR, BRR);
75
+ if (value < 16) {
76
+ qemu_log_mask(LOG_GUEST_ERROR,
77
+ "UNDEFINED: BRR less than 16: %u", value);
78
+ return;
79
+ }
80
+
81
+ if (FIELD_EX32(s->cr1, CR1, OVER8) == 0) {
82
+ /*
83
+ * Oversampling by 16
84
+ * BRR = USARTDIV
85
+ */
86
+ usart_div = value;
87
+ } else {
88
+ /*
89
+ * Oversampling by 8
90
+ * - BRR[2:0] = USARTDIV[3:0] shifted 1 bit to the right.
91
+ * - BRR[3] must be kept cleared.
92
+ * - BRR[15:4] = USARTDIV[15:4]
93
+ * - The frequency is multiplied by 2
94
+ */
95
+ usart_div = ((value & 0xFFF0) | ((value & 0x0007) << 1)) / 2;
96
+ }
97
+
98
+ speed = clock_get_hz(s->clk) / usart_div;
99
+
100
+ ssp.speed = speed;
101
+ ssp.parity = parity;
102
+ ssp.data_bits = data_bits;
103
+ ssp.stop_bits = stop_bits;
104
+
105
+ qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_PARAMS, &ssp);
106
+
107
+ trace_stm32l4x5_usart_update_params(speed, parity, data_bits, stop_bits);
102
+}
108
+}
103
+
109
+
104
+static bool gen_M_fp_sysreg_write(DisasContext *s, int regno,
110
static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
111
{
112
Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
113
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
114
switch (addr) {
115
case A_CR1:
116
s->cr1 = value;
117
+ stm32l4x5_update_params(s);
118
stm32l4x5_update_irq(s);
119
return;
120
case A_CR2:
121
s->cr2 = value;
122
+ stm32l4x5_update_params(s);
123
return;
124
case A_CR3:
125
s->cr3 = value;
126
return;
127
case A_BRR:
128
s->brr = value;
129
+ stm32l4x5_update_params(s);
130
return;
131
case A_GTPR:
132
s->gtpr = value;
133
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_init(Object *obj)
134
s->clk = qdev_init_clock_in(DEVICE(s), "clk", NULL, s, 0);
135
}
136
137
+static int stm32l4x5_usart_base_post_load(void *opaque, int version_id)
138
+{
139
+ Stm32l4x5UsartBaseState *s = (Stm32l4x5UsartBaseState *)opaque;
105
+
140
+
106
+ fp_sysreg_loadfn *loadfn,
141
+ stm32l4x5_update_params(s);
107
+ void *opaque)
142
+ return 0;
108
+{
109
+ /* Do a write to an M-profile floating point system register */
110
+ TCGv_i32 tmp;
111
+
112
+ switch (fp_sysreg_checks(s, regno)) {
113
+ case FPSysRegCheckFailed:
114
+ return false;
115
+ case FPSysRegCheckDone:
116
+ return true;
117
+ case FPSysRegCheckContinue:
118
+ break;
119
+ }
120
+
121
+ switch (regno) {
122
+ case ARM_VFP_FPSCR:
123
+ tmp = loadfn(s, opaque);
124
+ gen_helper_vfp_set_fpscr(cpu_env, tmp);
125
+ tcg_temp_free_i32(tmp);
126
+ gen_lookup_tb(s);
127
+ break;
128
+ default:
129
+ g_assert_not_reached();
130
+ }
131
+ return true;
132
+}
143
+}
133
+
144
+
134
+static bool gen_M_fp_sysreg_read(DisasContext *s, int regno,
145
static const VMStateDescription vmstate_stm32l4x5_usart_base = {
135
+ fp_sysreg_storefn *storefn,
146
.name = TYPE_STM32L4X5_USART_BASE,
136
+ void *opaque)
147
.version_id = 1,
137
+{
148
.minimum_version_id = 1,
138
+ /* Do a read from an M-profile floating point system register */
149
+ .post_load = stm32l4x5_usart_base_post_load,
139
+ TCGv_i32 tmp;
150
.fields = (VMStateField[]) {
140
+
151
VMSTATE_UINT32(cr1, Stm32l4x5UsartBaseState),
141
+ switch (fp_sysreg_checks(s, regno)) {
152
VMSTATE_UINT32(cr2, Stm32l4x5UsartBaseState),
142
+ case FPSysRegCheckFailed:
153
diff --git a/hw/char/trace-events b/hw/char/trace-events
143
+ return false;
154
index XXXXXXX..XXXXXXX 100644
144
+ case FPSysRegCheckDone:
155
--- a/hw/char/trace-events
145
+ return true;
156
+++ b/hw/char/trace-events
146
+ case FPSysRegCheckContinue:
157
@@ -XXX,XX +XXX,XX @@ stm32l4x5_usart_irq_raised(uint32_t reg) "USART: IRQ raised: 0x%08"PRIx32
147
+ break;
158
stm32l4x5_usart_irq_lowered(void) "USART: IRQ lowered"
148
+ }
159
stm32l4x5_usart_overrun_detected(uint8_t current, uint8_t received) "USART: Overrun detected, RDR='0x%x', received 0x%x"
149
+
160
stm32l4x5_usart_receiver_not_enabled(uint8_t ue_bit, uint8_t re_bit) "USART: Receiver not enabled, UE=0x%x, RE=0x%x"
150
+ switch (regno) {
161
+stm32l4x5_usart_update_params(int speed, uint8_t parity, int data, int stop) "USART: speed: %d, parity: %c, data bits: %d, stop bits: %d"
151
+ case ARM_VFP_FPSCR:
162
152
+ tmp = tcg_temp_new_i32();
163
# xen_console.c
153
+ gen_helper_vfp_get_fpscr(tmp, cpu_env);
164
xen_console_connect(unsigned int idx, unsigned int ring_ref, unsigned int port, unsigned int limit) "idx %u ring_ref %u port %u limit %u"
154
+ storefn(s, opaque, tmp);
155
+ break;
156
+ case QEMU_VFP_FPSCR_NZCV:
157
+ /*
158
+ * Read just NZCV; this is a special case to avoid the
159
+ * helper call for the "VMRS to CPSR.NZCV" insn.
160
+ */
161
+ tmp = load_cpu_field(vfp.xregs[ARM_VFP_FPSCR]);
162
+ tcg_gen_andi_i32(tmp, tmp, 0xf0000000);
163
+ storefn(s, opaque, tmp);
164
+ break;
165
+ default:
166
+ g_assert_not_reached();
167
+ }
168
+ return true;
169
+}
170
+
171
+static void fp_sysreg_to_gpr(DisasContext *s, void *opaque, TCGv_i32 value)
172
+{
173
+ arg_VMSR_VMRS *a = opaque;
174
+
175
+ if (a->rt == 15) {
176
+ /* Set the 4 flag bits in the CPSR */
177
+ gen_set_nzcv(value);
178
+ tcg_temp_free_i32(value);
179
+ } else {
180
+ store_reg(s, a->rt, value);
181
+ }
182
+}
183
+
184
+static TCGv_i32 gpr_to_fp_sysreg(DisasContext *s, void *opaque)
185
+{
186
+ arg_VMSR_VMRS *a = opaque;
187
+
188
+ return load_reg(s, a->rt);
189
+}
190
+
191
+static bool gen_M_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
192
+{
193
+ /*
194
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
195
+ * FPSCR -> r15 is a special case which writes to the PSR flags;
196
+ * set a->reg to a special value to tell gen_M_fp_sysreg_read()
197
+ * we only care about the top 4 bits of FPSCR there.
198
+ */
199
+ if (a->rt == 15) {
200
+ if (a->l && a->reg == ARM_VFP_FPSCR) {
201
+ a->reg = QEMU_VFP_FPSCR_NZCV;
202
+ } else {
203
+ return false;
204
+ }
205
+ }
206
+
207
+ if (a->l) {
208
+ /* VMRS, move FP system register to gp register */
209
+ return gen_M_fp_sysreg_read(s, a->reg, fp_sysreg_to_gpr, a);
210
+ } else {
211
+ /* VMSR, move gp register to FP system register */
212
+ return gen_M_fp_sysreg_write(s, a->reg, gpr_to_fp_sysreg, a);
213
+ }
214
+}
215
+
216
static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
217
{
218
TCGv_i32 tmp;
219
bool ignore_vfp_enabled = false;
220
221
- if (!dc_isar_feature(aa32_fpsp_v2, s)) {
222
- return false;
223
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
224
+ return gen_M_VMSR_VMRS(s, a);
225
}
226
227
- if (arm_dc_feature(s, ARM_FEATURE_M)) {
228
- /*
229
- * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
230
- * Accesses to R15 are UNPREDICTABLE; we choose to undef.
231
- * (FPSCR -> r15 is a special case which writes to the PSR flags.)
232
- */
233
- if (a->reg != ARM_VFP_FPSCR) {
234
- return false;
235
- }
236
- if (a->rt == 15 && !a->l) {
237
- return false;
238
- }
239
+ if (!dc_isar_feature(aa32_fpsp_v2, s)) {
240
+ return false;
241
}
242
243
switch (a->reg) {
244
--
165
--
245
2.20.1
166
2.34.1
246
167
247
168
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
Connect CAN0 and CAN1 on the ZynqMP.
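For reference, one plausible way to wire these controllers up on the command line, using the link property this patch registers on the xlnx-zcu102 machine (the vcan0 host SocketCAN interface and the exact option spelling are assumptions, not taken from this series):

    qemu-system-aarch64 -machine xlnx-zcu102 \
        -object can-bus,id=canbus0 \
        -machine xlnx-zcu102.canbus0=canbus0 \
        -object can-host-socketcan,id=socketcan0,if=vcan0,canbus=canbus0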
3
Add the USART to the SoC and connect it to the other implemented devices.
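Since the SoC code wires USART1..3, UART4..5 and then LPUART1 to serial_hd() in that order, the Nth -serial option on the command line becomes the Nth of those ports. A usage sketch (the firmware file name is only an example):

    qemu-system-arm -M b-l475e-iot01a -kernel firmware.elf \
        -serial stdio -serial none    # USART1 on stdio, USART2 disconnected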
4
4
5
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
5
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
6
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
7
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 1605728926-352690-3-git-send-email-fnu.vikram@xilinx.com
8
Message-id: 20240329174402.60382-5-arnaud.minier@telecom-paris.fr
9
[PMM: fixed a few checkpatch nits]
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
include/hw/arm/xlnx-zynqmp.h | 8 ++++++++
12
docs/system/arm/b-l475e-iot01a.rst | 2 +-
12
hw/arm/xlnx-zcu102.c | 20 ++++++++++++++++++++
13
include/hw/arm/stm32l4x5_soc.h | 7 +++
13
hw/arm/xlnx-zynqmp.c | 34 ++++++++++++++++++++++++++++++++++
14
hw/arm/stm32l4x5_soc.c | 83 +++++++++++++++++++++++++++---
14
3 files changed, 62 insertions(+)
15
hw/arm/Kconfig | 1 +
15
16
4 files changed, 86 insertions(+), 7 deletions(-)
16
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
17
17
index XXXXXXX..XXXXXXX 100644
18
diff --git a/docs/system/arm/b-l475e-iot01a.rst b/docs/system/arm/b-l475e-iot01a.rst
18
--- a/include/hw/arm/xlnx-zynqmp.h
19
index XXXXXXX..XXXXXXX 100644
19
+++ b/include/hw/arm/xlnx-zynqmp.h
20
--- a/docs/system/arm/b-l475e-iot01a.rst
21
+++ b/docs/system/arm/b-l475e-iot01a.rst
22
@@ -XXX,XX +XXX,XX @@ Currently B-L475E-IOT01A machine's only supports the following devices:
23
- STM32L4x5 SYSCFG (System configuration controller)
24
- STM32L4x5 RCC (Reset and clock control)
25
- STM32L4x5 GPIOs (General-purpose I/Os)
26
+- STM32L4x5 USARTs, UARTs and LPUART (Serial ports)
27
28
Missing devices
29
"""""""""""""""
30
31
The B-L475E-IOT01A does *not* support the following devices:
32
33
-- Serial ports (UART)
34
- Analog to Digital Converter (ADC)
35
- SPI controller
36
- Timer controller (TIMER)
37
diff --git a/include/hw/arm/stm32l4x5_soc.h b/include/hw/arm/stm32l4x5_soc.h
38
index XXXXXXX..XXXXXXX 100644
39
--- a/include/hw/arm/stm32l4x5_soc.h
40
+++ b/include/hw/arm/stm32l4x5_soc.h
20
@@ -XXX,XX +XXX,XX @@
41
@@ -XXX,XX +XXX,XX @@
21
#include "hw/intc/arm_gic.h"
42
#include "hw/misc/stm32l4x5_exti.h"
22
#include "hw/net/cadence_gem.h"
43
#include "hw/misc/stm32l4x5_rcc.h"
23
#include "hw/char/cadence_uart.h"
44
#include "hw/gpio/stm32l4x5_gpio.h"
24
+#include "hw/net/xlnx-zynqmp-can.h"
45
+#include "hw/char/stm32l4x5_usart.h"
25
#include "hw/ide/ahci.h"
46
#include "qom/object.h"
26
#include "hw/sd/sdhci.h"
47
27
#include "hw/ssi/xilinx_spips.h"
48
#define TYPE_STM32L4X5_SOC "stm32l4x5-soc"
49
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_TYPE(Stm32l4x5SocState, Stm32l4x5SocClass, STM32L4X5_SOC)
50
51
#define NUM_EXTI_OR_GATES 4
52
53
+#define STM_NUM_USARTS 3
54
+#define STM_NUM_UARTS 2
55
+
56
struct Stm32l4x5SocState {
57
SysBusDevice parent_obj;
58
59
@@ -XXX,XX +XXX,XX @@ struct Stm32l4x5SocState {
60
Stm32l4x5SyscfgState syscfg;
61
Stm32l4x5RccState rcc;
62
Stm32l4x5GpioState gpio[NUM_GPIOS];
63
+ Stm32l4x5UsartBaseState usart[STM_NUM_USARTS];
64
+ Stm32l4x5UsartBaseState uart[STM_NUM_UARTS];
65
+ Stm32l4x5UsartBaseState lpuart;
66
67
MemoryRegion sram1;
68
MemoryRegion sram2;
69
diff --git a/hw/arm/stm32l4x5_soc.c b/hw/arm/stm32l4x5_soc.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/hw/arm/stm32l4x5_soc.c
72
+++ b/hw/arm/stm32l4x5_soc.c
28
@@ -XXX,XX +XXX,XX @@
73
@@ -XXX,XX +XXX,XX @@
29
#include "hw/cpu/cluster.h"
74
#include "sysemu/sysemu.h"
30
#include "target/arm/cpu.h"
75
#include "hw/or-irq.h"
31
#include "qom/object.h"
76
#include "hw/arm/stm32l4x5_soc.h"
32
+#include "net/can_emu.h"
77
+#include "hw/char/stm32l4x5_usart.h"
33
78
#include "hw/gpio/stm32l4x5_gpio.h"
34
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
79
#include "hw/qdev-clock.h"
35
OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
80
#include "hw/misc/unimp.h"
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(XlnxZynqMPState, XLNX_ZYNQMP)
81
@@ -XXX,XX +XXX,XX @@ static const struct {
37
#define XLNX_ZYNQMP_NUM_RPU_CPUS 2
82
{ 0x48001C00, 0x0000000F, 0x00000000, 0x00000000 },
38
#define XLNX_ZYNQMP_NUM_GEMS 4
39
#define XLNX_ZYNQMP_NUM_UARTS 2
40
+#define XLNX_ZYNQMP_NUM_CAN 2
41
+#define XLNX_ZYNQMP_CAN_REF_CLK (24 * 1000 * 1000)
42
#define XLNX_ZYNQMP_NUM_SDHCI 2
43
#define XLNX_ZYNQMP_NUM_SPIS 2
44
#define XLNX_ZYNQMP_NUM_GDMA_CH 8
45
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
46
47
CadenceGEMState gem[XLNX_ZYNQMP_NUM_GEMS];
48
CadenceUARTState uart[XLNX_ZYNQMP_NUM_UARTS];
49
+ XlnxZynqMPCANState can[XLNX_ZYNQMP_NUM_CAN];
50
SysbusAHCIState sata;
51
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
52
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
53
@@ -XXX,XX +XXX,XX @@ struct XlnxZynqMPState {
54
bool virt;
55
/* Has the RPU subsystem? */
56
bool has_rpu;
57
+
58
+ /* CAN bus. */
59
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
60
};
83
};
61
84
62
#endif
85
+static const hwaddr usart_addr[] = {
63
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
86
+ 0x40013800, /* "USART1", 0x400 */
64
index XXXXXXX..XXXXXXX 100644
87
+ 0x40004400, /* "USART2", 0x400 */
65
--- a/hw/arm/xlnx-zcu102.c
88
+ 0x40004800, /* "USART3", 0x400 */
66
+++ b/hw/arm/xlnx-zcu102.c
89
+};
67
@@ -XXX,XX +XXX,XX @@
90
+static const hwaddr uart_addr[] = {
68
#include "sysemu/qtest.h"
91
+ 0x40004C00, /* "UART4" , 0x400 */
69
#include "sysemu/device_tree.h"
92
+ 0x40005000 /* "UART5" , 0x400 */
70
#include "qom/object.h"
93
+};
71
+#include "net/can_emu.h"
94
+
72
95
+#define LPUART_BASE_ADDRESS 0x40008000
73
struct XlnxZCU102 {
96
+
74
MachineState parent_obj;
97
+static const int usart_irq[] = { 37, 38, 39 };
75
@@ -XXX,XX +XXX,XX @@ struct XlnxZCU102 {
98
+static const int uart_irq[] = { 52, 53 };
76
bool secure;
99
+#define LPUART_IRQ 70
77
bool virt;
100
+
78
101
static void stm32l4x5_soc_initfn(Object *obj)
79
+ CanBusState *canbus[XLNX_ZYNQMP_NUM_CAN];
102
{
80
+
103
Stm32l4x5SocState *s = STM32L4X5_SOC(obj);
81
struct arm_boot_info binfo;
104
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_initfn(Object *obj)
82
};
105
g_autofree char *name = g_strdup_printf("gpio%c", 'a' + i);
83
106
object_initialize_child(obj, name, &s->gpio[i], TYPE_STM32L4X5_GPIO);
84
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_init(MachineState *machine)
107
}
85
object_property_set_bool(OBJECT(&s->soc), "virtualization", s->virt,
108
+
86
&error_fatal);
109
+ for (int i = 0; i < STM_NUM_USARTS; i++) {
87
110
+ object_initialize_child(obj, "usart[*]", &s->usart[i],
88
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
111
+ TYPE_STM32L4X5_USART);
89
+ gchar *bus_name = g_strdup_printf("canbus%d", i);
112
+ }
90
+
113
+
91
+ object_property_set_link(OBJECT(&s->soc), bus_name,
114
+ for (int i = 0; i < STM_NUM_UARTS; i++) {
92
+ OBJECT(s->canbus[i]), &error_fatal);
115
+ object_initialize_child(obj, "uart[*]", &s->uart[i],
93
+ g_free(bus_name);
116
+ TYPE_STM32L4X5_UART);
94
+ }
117
+ }
95
+
118
+ object_initialize_child(obj, "lpuart1", &s->lpuart,
96
qdev_realize(DEVICE(&s->soc), NULL, &error_fatal);
119
+ TYPE_STM32L4X5_LPUART);
97
98
/* Create and plug in the SD cards */
99
@@ -XXX,XX +XXX,XX @@ static void xlnx_zcu102_machine_instance_init(Object *obj)
100
s->secure = false;
101
/* Default to virt (EL2) being disabled */
102
s->virt = false;
103
+ object_property_add_link(obj, "xlnx-zcu102.canbus0", TYPE_CAN_BUS,
104
+ (Object **)&s->canbus[0],
105
+ object_property_allow_set_link,
106
+ 0);
107
+
108
+ object_property_add_link(obj, "xlnx-zcu102.canbus1", TYPE_CAN_BUS,
109
+ (Object **)&s->canbus[1],
110
+ object_property_allow_set_link,
111
+ 0);
112
}
120
}
113
121
114
static void xlnx_zcu102_machine_class_init(ObjectClass *oc, void *data)
122
static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
115
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
123
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
116
index XXXXXXX..XXXXXXX 100644
124
sysbus_mmio_map(busdev, 0, RCC_BASE_ADDRESS);
117
--- a/hw/arm/xlnx-zynqmp.c
125
sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, RCC_IRQ));
118
+++ b/hw/arm/xlnx-zynqmp.c
126
119
@@ -XXX,XX +XXX,XX @@ static const int uart_intr[XLNX_ZYNQMP_NUM_UARTS] = {
127
+ /* USART devices */
120
21, 22,
128
+ for (int i = 0; i < STM_NUM_USARTS; i++) {
121
};
129
+ g_autofree char *name = g_strdup_printf("usart%d-out", i + 1);
122
130
+ dev = DEVICE(&(s->usart[i]));
123
+static const uint64_t can_addr[XLNX_ZYNQMP_NUM_CAN] = {
131
+ qdev_prop_set_chr(dev, "chardev", serial_hd(i));
124
+ 0xFF060000, 0xFF070000,
132
+ qdev_connect_clock_in(dev, "clk",
125
+};
133
+ qdev_get_clock_out(DEVICE(&(s->rcc)), name));
126
+
134
+ busdev = SYS_BUS_DEVICE(dev);
127
+static const int can_intr[XLNX_ZYNQMP_NUM_CAN] = {
135
+ if (!sysbus_realize(busdev, errp)) {
128
+ 23, 24,
129
+};
130
+
131
static const uint64_t sdhci_addr[XLNX_ZYNQMP_NUM_SDHCI] = {
132
0xFF160000, 0xFF170000,
133
};
134
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
135
TYPE_CADENCE_UART);
136
}
137
138
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
139
+ object_initialize_child(obj, "can[*]", &s->can[i],
140
+ TYPE_XLNX_ZYNQMP_CAN);
141
+ }
142
+
143
object_initialize_child(obj, "sata", &s->sata, TYPE_SYSBUS_AHCI);
144
145
for (i = 0; i < XLNX_ZYNQMP_NUM_SDHCI; i++) {
146
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
147
gic_spi[uart_intr[i]]);
148
}
149
150
+ for (i = 0; i < XLNX_ZYNQMP_NUM_CAN; i++) {
151
+ object_property_set_int(OBJECT(&s->can[i]), "ext_clk_freq",
152
+ XLNX_ZYNQMP_CAN_REF_CLK, &error_abort);
153
+
154
+ object_property_set_link(OBJECT(&s->can[i]), "canbus",
155
+ OBJECT(s->canbus[i]), &error_fatal);
156
+
157
+ sysbus_realize(SYS_BUS_DEVICE(&s->can[i]), &err);
158
+ if (err) {
159
+ error_propagate(errp, err);
160
+ return;
136
+ return;
161
+ }
137
+ }
162
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->can[i]), 0, can_addr[i]);
138
+ sysbus_mmio_map(busdev, 0, usart_addr[i]);
163
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->can[i]), 0,
139
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, usart_irq[i]));
164
+ gic_spi[can_intr[i]]);
140
+ }
165
+ }
141
+
166
+
142
+ /*
167
object_property_set_int(OBJECT(&s->sata), "num-ports", SATA_NUM_PORTS,
143
+ * TODO: Connect the USARTs, UARTs and LPUART to the EXTI once the EXTI
168
&error_abort);
144
+ * can handle other gpio-in than the gpios. (e.g. Direct Lines for the
169
if (!sysbus_realize(SYS_BUS_DEVICE(&s->sata), errp)) {
145
+ * usarts)
170
@@ -XXX,XX +XXX,XX @@ static Property xlnx_zynqmp_props[] = {
146
+ */
171
DEFINE_PROP_BOOL("has_rpu", XlnxZynqMPState, has_rpu, false),
147
+
172
DEFINE_PROP_LINK("ddr-ram", XlnxZynqMPState, ddr_ram, TYPE_MEMORY_REGION,
148
+ /* UART devices */
173
MemoryRegion *),
149
+ for (int i = 0; i < STM_NUM_UARTS; i++) {
174
+ DEFINE_PROP_LINK("canbus0", XlnxZynqMPState, canbus[0], TYPE_CAN_BUS,
150
+ g_autofree char *name = g_strdup_printf("uart%d-out", STM_NUM_USARTS + i + 1);
175
+ CanBusState *),
151
+ dev = DEVICE(&(s->uart[i]));
176
+ DEFINE_PROP_LINK("canbus1", XlnxZynqMPState, canbus[1], TYPE_CAN_BUS,
152
+ qdev_prop_set_chr(dev, "chardev", serial_hd(STM_NUM_USARTS + i));
177
+ CanBusState *),
153
+ qdev_connect_clock_in(dev, "clk",
178
DEFINE_PROP_END_OF_LIST()
154
+ qdev_get_clock_out(DEVICE(&(s->rcc)), name));
179
};
155
+ busdev = SYS_BUS_DEVICE(dev);
180
156
+ if (!sysbus_realize(busdev, errp)) {
157
+ return;
158
+ }
159
+ sysbus_mmio_map(busdev, 0, uart_addr[i]);
160
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, uart_irq[i]));
161
+ }
162
+
163
+ /* LPUART device*/
164
+ dev = DEVICE(&(s->lpuart));
165
+ qdev_prop_set_chr(dev, "chardev", serial_hd(STM_NUM_USARTS + STM_NUM_UARTS));
166
+ qdev_connect_clock_in(dev, "clk",
167
+ qdev_get_clock_out(DEVICE(&(s->rcc)), "lpuart1-out"));
168
+ busdev = SYS_BUS_DEVICE(dev);
169
+ if (!sysbus_realize(busdev, errp)) {
170
+ return;
171
+ }
172
+ sysbus_mmio_map(busdev, 0, LPUART_BASE_ADDRESS);
173
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, LPUART_IRQ));
174
+
175
/* APB1 BUS */
176
create_unimplemented_device("TIM2", 0x40000000, 0x400);
177
create_unimplemented_device("TIM3", 0x40000400, 0x400);
178
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
179
create_unimplemented_device("SPI2", 0x40003800, 0x400);
180
create_unimplemented_device("SPI3", 0x40003C00, 0x400);
181
/* RESERVED: 0x40004000, 0x400 */
182
- create_unimplemented_device("USART2", 0x40004400, 0x400);
183
- create_unimplemented_device("USART3", 0x40004800, 0x400);
184
- create_unimplemented_device("UART4", 0x40004C00, 0x400);
185
- create_unimplemented_device("UART5", 0x40005000, 0x400);
186
create_unimplemented_device("I2C1", 0x40005400, 0x400);
187
create_unimplemented_device("I2C2", 0x40005800, 0x400);
188
create_unimplemented_device("I2C3", 0x40005C00, 0x400);
189
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
190
create_unimplemented_device("DAC1", 0x40007400, 0x400);
191
create_unimplemented_device("OPAMP", 0x40007800, 0x400);
192
create_unimplemented_device("LPTIM1", 0x40007C00, 0x400);
193
- create_unimplemented_device("LPUART1", 0x40008000, 0x400);
194
/* RESERVED: 0x40008400, 0x400 */
195
create_unimplemented_device("SWPMI1", 0x40008800, 0x400);
196
/* RESERVED: 0x40008C00, 0x800 */
197
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
198
create_unimplemented_device("TIM1", 0x40012C00, 0x400);
199
create_unimplemented_device("SPI1", 0x40013000, 0x400);
200
create_unimplemented_device("TIM8", 0x40013400, 0x400);
201
- create_unimplemented_device("USART1", 0x40013800, 0x400);
202
/* RESERVED: 0x40013C00, 0x400 */
203
create_unimplemented_device("TIM15", 0x40014000, 0x400);
204
create_unimplemented_device("TIM16", 0x40014400, 0x400);
205
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
206
index XXXXXXX..XXXXXXX 100644
207
--- a/hw/arm/Kconfig
208
+++ b/hw/arm/Kconfig
209
@@ -XXX,XX +XXX,XX @@ config STM32L4X5_SOC
210
select STM32L4X5_SYSCFG
211
select STM32L4X5_RCC
212
select STM32L4X5_GPIO
213
+ select STM32L4X5_USART
214
215
config XLNX_ZYNQMP_ARM
216
bool
181
--
217
--
182
2.20.1
218
2.34.1
183
219
184
220
1
From: Vikram Garhwal <fnu.vikram@xilinx.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
The QTests perform five tests on the Xilinx ZynqMP CAN controller:
3
Test:
4
Tests the CAN controller in loopback, sleep and snoop mode.
4
- read/write from/to the usart registers
5
Tests filtering of incoming CAN messages.
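One quick way to run such a qtest binary by hand from the build tree, as an alternative to the usual "make check-qtest-aarch64" (paths are illustrative and assume a completed build):

    cd build
    QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/xlnx-can-test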
5
- send/receive a character/string over the serial port
6
6
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
8
Reviewed-by: Francisco Iglesias <francisco.iglesias@xilinx.com>
8
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
9
Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 1605728926-352690-4-git-send-email-fnu.vikram@xilinx.com
10
Message-id: 20240329174402.60382-6-arnaud.minier@telecom-paris.fr
11
[PMM: fix checkpatch nits, remove commented out code]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
tests/qtest/xlnx-can-test.c | 360 ++++++++++++++++++++++++++++++++++++
14
tests/qtest/stm32l4x5_usart-test.c | 315 +++++++++++++++++++++++++++++
14
tests/qtest/meson.build | 1 +
15
tests/qtest/meson.build | 4 +-
15
2 files changed, 361 insertions(+)
16
2 files changed, 318 insertions(+), 1 deletion(-)
16
create mode 100644 tests/qtest/xlnx-can-test.c
17
create mode 100644 tests/qtest/stm32l4x5_usart-test.c
17
18
18
diff --git a/tests/qtest/xlnx-can-test.c b/tests/qtest/xlnx-can-test.c
19
diff --git a/tests/qtest/stm32l4x5_usart-test.c b/tests/qtest/stm32l4x5_usart-test.c
19
new file mode 100644
20
new file mode 100644
20
index XXXXXXX..XXXXXXX
21
index XXXXXXX..XXXXXXX
21
--- /dev/null
22
--- /dev/null
22
+++ b/tests/qtest/xlnx-can-test.c
23
+++ b/tests/qtest/stm32l4x5_usart-test.c
23
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@
24
+/*
25
+/*
25
+ * QTests for the Xilinx ZynqMP CAN controller.
26
+ * QTest testcase for STML4X5_USART
26
+ *
27
+ *
27
+ * Copyright (c) 2020 Xilinx Inc.
28
+ * Copyright (c) 2023 Arnaud Minier <arnaud.minier@telecom-paris.fr>
29
+ * Copyright (c) 2023 Inès Varhol <ines.varhol@telecom-paris.fr>
28
+ *
30
+ *
29
+ * Written-by: Vikram Garhwal<fnu.vikram@xilinx.com>
31
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
30
+ *
32
+ * See the COPYING file in the top-level directory.
31
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
32
+ * of this software and associated documentation files (the "Software"), to deal
33
+ * in the Software without restriction, including without limitation the rights
34
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
35
+ * copies of the Software, and to permit persons to whom the Software is
36
+ * furnished to do so, subject to the following conditions:
37
+ *
38
+ * The above copyright notice and this permission notice shall be included in
39
+ * all copies or substantial portions of the Software.
40
+ *
41
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
42
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
43
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
44
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
45
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
46
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
47
+ * THE SOFTWARE.
48
+ */
33
+ */
49
+
34
+
50
+#include "qemu/osdep.h"
35
+#include "qemu/osdep.h"
51
+#include "libqos/libqtest.h"
36
+#include "libqtest.h"
52
+
37
+#include "hw/misc/stm32l4x5_rcc_internals.h"
53
+/* Base address. */
38
+#include "hw/registerfields.h"
54
+#define CAN0_BASE_ADDR 0xFF060000
39
+
55
+#define CAN1_BASE_ADDR 0xFF070000
40
+#define RCC_BASE_ADDR 0x40021000
56
+
41
+/* Use USART1 ADDR, assume the others work the same */
57
+/* Register addresses. */
42
+#define USART1_BASE_ADDR 0x40013800
58
+#define R_SRR_OFFSET 0x00
43
+
59
+#define R_MSR_OFFSET 0x04
44
+/* See stm32l4x5_usart for definitions */
60
+#define R_SR_OFFSET 0x18
45
+REG32(CR1, 0x00)
61
+#define R_ISR_OFFSET 0x1C
46
+ FIELD(CR1, M1, 28, 1)
62
+#define R_ICR_OFFSET 0x24
47
+ FIELD(CR1, OVER8, 15, 1)
63
+#define R_TXID_OFFSET 0x30
48
+ FIELD(CR1, M0, 12, 1)
64
+#define R_TXDLC_OFFSET 0x34
49
+ FIELD(CR1, PCE, 10, 1)
65
+#define R_TXDATA1_OFFSET 0x38
50
+ FIELD(CR1, TXEIE, 7, 1)
66
+#define R_TXDATA2_OFFSET 0x3C
51
+ FIELD(CR1, RXNEIE, 5, 1)
67
+#define R_RXID_OFFSET 0x50
52
+ FIELD(CR1, TE, 3, 1)
68
+#define R_RXDLC_OFFSET 0x54
53
+ FIELD(CR1, RE, 2, 1)
69
+#define R_RXDATA1_OFFSET 0x58
54
+ FIELD(CR1, UE, 0, 1)
70
+#define R_RXDATA2_OFFSET 0x5C
55
+REG32(CR2, 0x04)
71
+#define R_AFR 0x60
56
+REG32(CR3, 0x08)
72
+#define R_AFMR1 0x64
57
+ FIELD(CR3, OVRDIS, 12, 1)
73
+#define R_AFIR1 0x68
58
+REG32(BRR, 0x0C)
74
+#define R_AFMR2 0x6C
59
+REG32(GTPR, 0x10)
75
+#define R_AFIR2 0x70
60
+REG32(RTOR, 0x14)
76
+#define R_AFMR3 0x74
61
+REG32(RQR, 0x18)
77
+#define R_AFIR3 0x78
62
+REG32(ISR, 0x1C)
78
+#define R_AFMR4 0x7C
63
+ FIELD(ISR, TXE, 7, 1)
79
+#define R_AFIR4 0x80
64
+ FIELD(ISR, RXNE, 5, 1)
80
+
65
+ FIELD(ISR, ORE, 3, 1)
81
+/* CAN modes. */
66
+REG32(ICR, 0x20)
82
+#define CONFIG_MODE 0x00
67
+REG32(RDR, 0x24)
83
+#define NORMAL_MODE 0x00
68
+REG32(TDR, 0x28)
84
+#define LOOPBACK_MODE 0x02
69
+
85
+#define SNOOP_MODE 0x04
70
+#define NVIC_ISPR1 0XE000E204
86
+#define SLEEP_MODE 0x01
71
+#define NVIC_ICPR1 0xE000E284
87
+#define ENABLE_CAN (1 << 1)
72
+#define USART1_IRQ 37
88
+#define STATUS_NORMAL_MODE (1 << 3)
73
+
89
+#define STATUS_LOOPBACK_MODE (1 << 1)
74
+static bool check_nvic_pending(QTestState *qts, unsigned int n)
90
+#define STATUS_SNOOP_MODE (1 << 12)
75
+{
91
+#define STATUS_SLEEP_MODE (1 << 2)
76
+ /* No USART interrupts are less than 32 */
92
+#define ISR_TXOK (1 << 1)
77
+ assert(n > 32);
93
+#define ISR_RXOK (1 << 4)
78
+ n -= 32;
94
+
79
+ return qtest_readl(qts, NVIC_ISPR1) & (1 << n);
95
+static void match_rx_tx_data(const uint32_t *buf_tx, const uint32_t *buf_rx,
80
+}
96
+ uint8_t can_timestamp)
81
+
97
+{
82
+static bool clear_nvic_pending(QTestState *qts, unsigned int n)
98
+ uint16_t size = 0;
83
+{
99
+ uint8_t len = 4;
84
+ /* No USART interrupts are less than 32 */
100
+
85
+ assert(n > 32);
101
+ while (size < len) {
86
+ n -= 32;
102
+ if (R_RXID_OFFSET + 4 * size == R_RXDLC_OFFSET) {
87
+ qtest_writel(qts, NVIC_ICPR1, (1 << n));
103
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size] + can_timestamp);
88
+ return true;
104
+ } else {
89
+}
105
+ g_assert_cmpint(buf_rx[size], ==, buf_tx[size]);
90
+
91
+/*
92
+ * Wait indefinitely for the flag to be updated.
93
+ * If this is run on a slow CI runner,
94
+ * the meson harness will time out after 10 minutes for us.
95
+ */
96
+static bool usart_wait_for_flag(QTestState *qts, uint32_t event_addr,
97
+ uint32_t flag)
98
+{
99
+ while (true) {
100
+ if ((qtest_readl(qts, event_addr) & flag)) {
101
+ return true;
106
+ }
102
+ }
107
+
103
+ g_usleep(1000);
108
+ size++;
109
+ }
104
+ }
110
+}
105
+
111
+
106
+ return false;
112
+static void read_data(QTestState *qts, uint64_t can_base_addr, uint32_t *buf_rx)
107
+}
113
+{
108
+
114
+ uint32_t int_status;
109
+static void usart_receive_string(QTestState *qts, int sock_fd, const char *in,
115
+
110
+ char *out)
116
+ /* Read the interrupt on CAN rx. */
111
+{
117
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_RXOK;
112
+ int i, in_len = strlen(in);
118
+
113
+
119
+ g_assert_cmpint(int_status, ==, ISR_RXOK);
114
+ g_assert_true(send(sock_fd, in, in_len, 0) == in_len);
120
+
115
+ for (i = 0; i < in_len; i++) {
121
+ /* Read the RX register data for CAN. */
116
+ g_assert_true(usart_wait_for_flag(qts,
122
+ buf_rx[0] = qtest_readl(qts, can_base_addr + R_RXID_OFFSET);
117
+ USART1_BASE_ADDR + A_ISR, R_ISR_RXNE_MASK));
123
+ buf_rx[1] = qtest_readl(qts, can_base_addr + R_RXDLC_OFFSET);
118
+ out[i] = qtest_readl(qts, USART1_BASE_ADDR + A_RDR);
124
+ buf_rx[2] = qtest_readl(qts, can_base_addr + R_RXDATA1_OFFSET);
119
+ }
125
+ buf_rx[3] = qtest_readl(qts, can_base_addr + R_RXDATA2_OFFSET);
120
+ out[i] = '\0';
126
+
121
+}
127
+ /* Clear the RX interrupt. */
122
+
128
+ qtest_writel(qts, CAN1_BASE_ADDR + R_ICR_OFFSET, ISR_RXOK);
123
+static void usart_send_string(QTestState *qts, const char *in)
129
+}
124
+{
130
+
125
+ int i, in_len = strlen(in);
131
+static void send_data(QTestState *qts, uint64_t can_base_addr,
126
+
132
+ const uint32_t *buf_tx)
127
+ for (i = 0; i < in_len; i++) {
133
+{
128
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, in[i]);
134
+ uint32_t int_status;
129
+ g_assert_true(usart_wait_for_flag(qts,
135
+
130
+ USART1_BASE_ADDR + A_ISR, R_ISR_TXE_MASK));
136
+ /* Write the TX register data for CAN. */
131
+ }
137
+ qtest_writel(qts, can_base_addr + R_TXID_OFFSET, buf_tx[0]);
132
+}
138
+ qtest_writel(qts, can_base_addr + R_TXDLC_OFFSET, buf_tx[1]);
133
+
139
+ qtest_writel(qts, can_base_addr + R_TXDATA1_OFFSET, buf_tx[2]);
134
+/* Init the RCC clocks to run at 80 MHz */
140
+ qtest_writel(qts, can_base_addr + R_TXDATA2_OFFSET, buf_tx[3]);
135
+static void init_clocks(QTestState *qts)
141
+
136
+{
142
+ /* Read the interrupt on CAN for tx. */
137
+ uint32_t value;
143
+ int_status = qtest_readl(qts, can_base_addr + R_ISR_OFFSET) & ISR_TXOK;
138
+
144
+
139
+ /* MSIRANGE can be set only when MSI is OFF or READY */
145
+ g_assert_cmpint(int_status, ==, ISR_TXOK);
140
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CR), R_CR_MSION_MASK);
146
+
141
+
147
+ /* Clear the interrupt for tx. */
142
+ /* Clocking from MSI, in case MSI was not the default source */
148
+ qtest_writel(qts, CAN0_BASE_ADDR + R_ICR_OFFSET, ISR_TXOK);
143
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CFGR), 0);
149
+}
144
+
150
+
145
+ /*
151
+/*
146
+ * Update PLL and set MSI as the source clock.
152
+ * This test transfers data between CAN0 and CAN1 through the canbus. CAN0
147
+ * PLLM = 1 --> 000
153
+ * initiates the data transfer on the can-bus, and CAN1 receives the data. The test compares
148
+ * PLLN = 40 --> 40
154
+ * the data sent from CAN0 with the data received on CAN1.
149
+ * PLLR = 2 --> 00
155
+ */
150
+ * PLLDIV = unused, PLLP = unused (SAI3), PLLQ = unused (48M1)
156
+static void test_can_bus(void)
151
+ * SRC = MSI --> 01
157
+{
152
+ */
158
+ const uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
153
+ qtest_writel(qts, (RCC_BASE_ADDR + A_PLLCFGR), R_PLLCFGR_PLLREN_MASK |
159
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
154
+ (40 << R_PLLCFGR_PLLN_SHIFT) |
160
+ uint32_t status = 0;
155
+ (0b01 << R_PLLCFGR_PLLSRC_SHIFT));
161
+ uint8_t can_timestamp = 1;
156
+
162
+
157
+ /* PLL activation */
163
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
158
+
164
+ " -object can-bus,id=canbus0"
159
+ value = qtest_readl(qts, (RCC_BASE_ADDR + A_CR));
165
+ " -machine xlnx-zcu102.canbus0=canbus0"
160
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CR), value | R_CR_PLLON_MASK);
166
+ " -machine xlnx-zcu102.canbus1=canbus0"
161
+
167
+ );
162
+ /* RCC_CFGR is OK by default */
168
+
163
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CFGR), 0);
169
+ /* Configure the CAN0 and CAN1. */
164
+
170
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
165
+ /* CCIPR : no periph clock by default */
171
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
166
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CCIPR), 0);
172
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
167
+
173
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
168
+ /* Select the PLL as the system clock source */
174
+
169
+ value = qtest_readl(qts, (RCC_BASE_ADDR + A_CFGR));
175
+ /* Check here if CAN0 and CAN1 are in normal mode. */
170
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CFGR), (value & ~R_CFGR_SW_MASK) |
176
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
171
+ (0b11 << R_CFGR_SW_SHIFT));
177
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
172
+
178
+
173
+ /* Enable the SYSCFG clock */
179
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
174
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2ENR), R_APB2ENR_SYSCFGEN_MASK);
180
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
175
+
181
+
176
+ /* Enable the IO port B clock (See p.252) */
182
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
177
+ qtest_writel(qts, (RCC_BASE_ADDR + A_AHB2ENR), R_AHB2ENR_GPIOBEN_MASK);
183
+
178
+
184
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
179
+ /* Enable the clock for USART1 (cf p.259) */
185
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
180
+ /* We rewrite SYSCFGEN to not disable it */
181
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2ENR),
182
+ R_APB2ENR_SYSCFGEN_MASK | R_APB2ENR_USART1EN_MASK);
183
+
184
+ /* TODO: Enable usart via gpio */
185
+
186
+ /* Set PCLK as the clock for USART1 (cf p.272) i.e. reset both bits */
187
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CCIPR), 0);
188
+
189
+ /* Reset USART1 (see p.249) */
190
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2RSTR), 1 << 14);
191
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2RSTR), 0);
192
+}
193
+
194
+static void init_uart(QTestState *qts)
195
+{
196
+ uint32_t cr1;
197
+
198
+ init_clocks(qts);
199
+
200
+ /*
201
+ * For 115200 bauds, see p.1349.
202
+ * The clock has a frequency of 80 MHz,
203
+ * so for 115200 baud we need a divider of 695 = 0x2B7.
204
+ */
205
+ qtest_writel(qts, (USART1_BASE_ADDR + A_BRR), 0x2B7);
206
+
207
+ /*
208
+ * Set the oversampling by 16,
209
+ * disable the parity control and
210
+ * set the word length to 8. (cf p.1377)
211
+ */
212
+ cr1 = qtest_readl(qts, (USART1_BASE_ADDR + A_CR1));
213
+ cr1 &= ~(R_CR1_M1_MASK | R_CR1_M0_MASK | R_CR1_OVER8_MASK | R_CR1_PCE_MASK);
214
+ qtest_writel(qts, (USART1_BASE_ADDR + A_CR1), cr1);
215
+
216
+ /* Enable the transmitter, the receiver and the USART. */
217
+ qtest_writel(qts, (USART1_BASE_ADDR + A_CR1),
218
+ R_CR1_UE_MASK | R_CR1_RE_MASK | R_CR1_TE_MASK);
219
+}
220
+
221
+static void test_write_read(void)
222
+{
223
+ QTestState *qts = qtest_init("-M b-l475e-iot01a");
224
+
225
+ /* Test that we can write and retrieve a value from the device */
226
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, 0xFFFFFFFF);
227
+ const uint32_t tdr = qtest_readl(qts, USART1_BASE_ADDR + A_TDR);
228
+ g_assert_cmpuint(tdr, ==, 0x000001FF);
229
+}
230
+
231
+static void test_receive_char(void)
232
+{
233
+ int sock_fd;
234
+ uint32_t cr1;
235
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
236
+
237
+ init_uart(qts);
238
+
239
+ /* Try without initializing IRQ */
240
+ g_assert_true(send(sock_fd, "a", 1, 0) == 1);
241
+ usart_wait_for_flag(qts, USART1_BASE_ADDR + A_ISR, R_ISR_RXNE_MASK);
242
+ g_assert_cmphex(qtest_readl(qts, USART1_BASE_ADDR + A_RDR), ==, 'a');
243
+ g_assert_false(check_nvic_pending(qts, USART1_IRQ));
244
+
245
+ /* Now with the IRQ */
246
+ cr1 = qtest_readl(qts, (USART1_BASE_ADDR + A_CR1));
247
+ cr1 |= R_CR1_RXNEIE_MASK;
248
+ qtest_writel(qts, USART1_BASE_ADDR + A_CR1, cr1);
249
+ g_assert_true(send(sock_fd, "b", 1, 0) == 1);
250
+ usart_wait_for_flag(qts, USART1_BASE_ADDR + A_ISR, R_ISR_RXNE_MASK);
251
+ g_assert_cmphex(qtest_readl(qts, USART1_BASE_ADDR + A_RDR), ==, 'b');
252
+ g_assert_true(check_nvic_pending(qts, USART1_IRQ));
253
+ clear_nvic_pending(qts, USART1_IRQ);
254
+
255
+ close(sock_fd);
186
+
256
+
187
+ qtest_quit(qts);
257
+ qtest_quit(qts);
188
+}
258
+}
189
+
259
+
190
+/*
260
+static void test_send_char(void)
191
+ * This test is performing loopback mode on CAN0 and CAN1. Data sent from TX of
261
+{
192
+ * each CAN0 and CAN1 are compared with RX register data for respective CAN.
262
+ int sock_fd;
193
+ */
263
+ char s[1];
194
+static void test_can_loopback(void)
264
+ uint32_t cr1;
195
+{
265
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
196
+ uint32_t buf_tx[4] = { 0xFF, 0x80000000, 0x12345678, 0x87654321 };
266
+
197
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
267
+ init_uart(qts);
198
+ uint32_t status = 0;
268
+
199
+
269
+ /* Try without initializing IRQ */
200
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
270
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, 'c');
201
+ " -object can-bus,id=canbus0"
271
+ g_assert_true(recv(sock_fd, s, 1, 0) == 1);
202
+ " -machine xlnx-zcu102.canbus0=canbus0"
272
+ g_assert_cmphex(s[0], ==, 'c');
203
+ " -machine xlnx-zcu102.canbus1=canbus0"
273
+ g_assert_false(check_nvic_pending(qts, USART1_IRQ));
204
+ );
274
+
205
+
275
+ /* Now with the IRQ */
206
+ /* Configure the CAN0 in loopback mode. */
276
+ cr1 = qtest_readl(qts, (USART1_BASE_ADDR + A_CR1));
207
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
277
+ cr1 |= R_CR1_TXEIE_MASK;
208
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
278
+ qtest_writel(qts, USART1_BASE_ADDR + A_CR1, cr1);
209
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
279
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, 'd');
210
+
280
+ g_assert_true(recv(sock_fd, s, 1, 0) == 1);
211
+ /* Check here if CAN0 is set in loopback mode. */
281
+ g_assert_cmphex(s[0], ==, 'd');
212
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
282
+ g_assert_true(check_nvic_pending(qts, USART1_IRQ));
213
+
283
+ clear_nvic_pending(qts, USART1_IRQ);
214
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
284
+
215
+
285
+ close(sock_fd);
216
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
217
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
218
+ match_rx_tx_data(buf_tx, buf_rx, 0);
219
+
220
+ /* Configure the CAN1 in loopback mode. */
221
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
222
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, LOOPBACK_MODE);
223
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
224
+
225
+ /* Check here if CAN1 is set in loopback mode. */
226
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
227
+
228
+ g_assert_cmpint(status, ==, STATUS_LOOPBACK_MODE);
229
+
230
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
231
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
232
+ match_rx_tx_data(buf_tx, buf_rx, 0);
233
+
286
+
234
+ qtest_quit(qts);
287
+ qtest_quit(qts);
235
+}
288
+}
236
+
289
+
237
+/*
290
+static void test_receive_str(void)
238
+ * Enable filters for CAN1. This will filter incoming messages with ID. In this
291
+{
239
+ * test message will pass through filter 2.
292
+ int sock_fd;
240
+ */
293
+ char s[10];
241
+static void test_can_filter(void)
294
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
242
+{
295
+
243
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
296
+ init_uart(qts);
244
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
297
+
245
+ uint32_t status = 0;
298
+ usart_receive_string(qts, sock_fd, "hello", s);
246
+ uint8_t can_timestamp = 1;
299
+ g_assert_true(memcmp(s, "hello", 5) == 0);
247
+
300
+
248
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
301
+ close(sock_fd);
249
+ " -object can-bus,id=canbus0"
250
+ " -machine xlnx-zcu102.canbus0=canbus0"
251
+ " -machine xlnx-zcu102.canbus1=canbus0"
252
+ );
253
+
254
+ /* Configure the CAN0 and CAN1. */
255
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
256
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
257
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
258
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
259
+
260
+ /* Check here if CAN0 and CAN1 are in normal mode. */
261
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
262
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
263
+
264
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
265
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
266
+
267
+ /* Set filter for CAN1 for incoming messages. */
268
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0x0);
269
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR1, 0xF7);
270
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR1, 0x121F);
271
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR2, 0x5431);
272
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR2, 0x14);
273
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR3, 0x1234);
274
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR3, 0x5431);
275
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFMR4, 0xFFF);
276
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFIR4, 0x1234);
277
+
278
+ qtest_writel(qts, CAN1_BASE_ADDR + R_AFR, 0xF);
279
+
280
+ send_data(qts, CAN0_BASE_ADDR, buf_tx);
281
+
282
+ read_data(qts, CAN1_BASE_ADDR, buf_rx);
283
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
284
+
302
+
285
+ qtest_quit(qts);
303
+ qtest_quit(qts);
286
+}
304
+}
287
+
305
+
288
+/* Testing sleep mode on CAN0 while CAN1 is in normal mode. */
306
+static void test_send_str(void)
289
+static void test_can_sleepmode(void)
307
+{
290
+{
308
+ int sock_fd;
291
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
309
+ char s[10];
292
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
310
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
293
+ uint32_t status = 0;
311
+
294
+ uint8_t can_timestamp = 1;
312
+ init_uart(qts);
295
+
313
+
296
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
314
+ usart_send_string(qts, "world");
297
+ " -object can-bus,id=canbus0"
315
+ g_assert_true(recv(sock_fd, s, 10, 0) == 5);
298
+ " -machine xlnx-zcu102.canbus0=canbus0"
316
+ g_assert_true(memcmp(s, "world", 5) == 0);
299
+ " -machine xlnx-zcu102.canbus1=canbus0"
317
+
300
+ );
318
+ close(sock_fd);
301
+
302
+ /* Configure the CAN0. */
303
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
304
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SLEEP_MODE);
305
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
306
+
307
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
308
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
309
+
310
+ /* Check here if CAN0 is in SLEEP mode and CAN1 in normal mode. */
311
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
312
+ g_assert_cmpint(status, ==, STATUS_SLEEP_MODE);
313
+
314
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
315
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
316
+
317
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
318
+
319
+ /*
320
+ * Once CAN1 sends data on can-bus. CAN0 should exit sleep mode.
321
+ * Check the CAN0 status now. It should exit the sleep mode and receive the
322
+ * incoming data.
323
+ */
324
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
325
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
326
+
327
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
328
+
329
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
330
+
319
+
331
+ qtest_quit(qts);
320
+ qtest_quit(qts);
332
+}
321
+}
333
+
322
+
334
+/* Testing Snoop mode on CAN0 while CAN1 is in normal mode. */
335
+static void test_can_snoopmode(void)
336
+{
337
+ uint32_t buf_tx[4] = { 0x14, 0x80000000, 0x12345678, 0x87654321 };
338
+ uint32_t buf_rx[4] = { 0x00, 0x00, 0x00, 0x00 };
339
+ uint32_t status = 0;
340
+ uint8_t can_timestamp = 1;
341
+
342
+ QTestState *qts = qtest_init("-machine xlnx-zcu102"
343
+ " -object can-bus,id=canbus0"
344
+ " -machine xlnx-zcu102.canbus0=canbus0"
345
+ " -machine xlnx-zcu102.canbus1=canbus0"
346
+ );
347
+
348
+ /* Configure the CAN0. */
349
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, CONFIG_MODE);
350
+ qtest_writel(qts, CAN0_BASE_ADDR + R_MSR_OFFSET, SNOOP_MODE);
351
+ qtest_writel(qts, CAN0_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
352
+
353
+ qtest_writel(qts, CAN1_BASE_ADDR + R_SRR_OFFSET, ENABLE_CAN);
354
+ qtest_writel(qts, CAN1_BASE_ADDR + R_MSR_OFFSET, NORMAL_MODE);
355
+
356
+ /* Check here if CAN0 is in SNOOP mode and CAN1 in normal mode. */
357
+ status = qtest_readl(qts, CAN0_BASE_ADDR + R_SR_OFFSET);
358
+ g_assert_cmpint(status, ==, STATUS_SNOOP_MODE);
359
+
360
+ status = qtest_readl(qts, CAN1_BASE_ADDR + R_SR_OFFSET);
361
+ g_assert_cmpint(status, ==, STATUS_NORMAL_MODE);
362
+
363
+ send_data(qts, CAN1_BASE_ADDR, buf_tx);
364
+
365
+ read_data(qts, CAN0_BASE_ADDR, buf_rx);
366
+
367
+ match_rx_tx_data(buf_tx, buf_rx, can_timestamp);
368
+
369
+ qtest_quit(qts);
370
+}
371
+
372
+int main(int argc, char **argv)
323
+int main(int argc, char **argv)
373
+{
324
+{
325
+ int ret;
326
+
374
+ g_test_init(&argc, &argv, NULL);
327
+ g_test_init(&argc, &argv, NULL);
375
+
328
+ g_test_set_nonfatal_assertions();
376
+ qtest_add_func("/net/can/can_bus", test_can_bus);
329
+
377
+ qtest_add_func("/net/can/can_loopback", test_can_loopback);
330
+ qtest_add_func("stm32l4x5/usart/write_read", test_write_read);
378
+ qtest_add_func("/net/can/can_filter", test_can_filter);
331
+ qtest_add_func("stm32l4x5/usart/receive_char", test_receive_char);
379
+ qtest_add_func("/net/can/can_test_snoopmode", test_can_snoopmode);
332
+ qtest_add_func("stm32l4x5/usart/send_char", test_send_char);
380
+ qtest_add_func("/net/can/can_test_sleepmode", test_can_sleepmode);
333
+ qtest_add_func("stm32l4x5/usart/receive_str", test_receive_str);
381
+
334
+ qtest_add_func("stm32l4x5/usart/send_str", test_send_str);
382
+ return g_test_run();
335
+ ret = g_test_run();
383
+}
336
+
337
+ return ret;
338
+}
339
+
384
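A note on the NVIC constants used by the check_nvic_pending() and
clear_nvic_pending() helpers above: USART1 is external interrupt 37 on this
SoC, and the ISPR1/ICPR1 registers at 0xE000E204/0xE000E284 each cover
interrupts 32-63, so the helpers operate on bit 37 - 32 = 5. A minimal sketch
of the same check (the helper name is hypothetical; the constants are the
patch's):

    #include "qemu/osdep.h"
    #include "libqtest.h"

    #define NVIC_ISPR1 0xE000E204   /* pending bits for IRQs 32..63 */
    #define USART1_IRQ 37           /* USART1 global interrupt */

    /* Is the USART1 interrupt currently pending in the NVIC? */
    static bool usart1_irq_pending(QTestState *qts)
    {
        /* IRQ 37 lives in ISPR1, bit 37 - 32 = 5 */
        return qtest_readl(qts, NVIC_ISPR1) & (1u << (USART1_IRQ - 32));
    }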
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
340
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
385
index XXXXXXX..XXXXXXX 100644
341
index XXXXXXX..XXXXXXX 100644
386
--- a/tests/qtest/meson.build
342
--- a/tests/qtest/meson.build
387
+++ b/tests/qtest/meson.build
343
+++ b/tests/qtest/meson.build
388
@@ -XXX,XX +XXX,XX @@ qtests_aarch64 = \
344
@@ -XXX,XX +XXX,XX @@ slow_qtests = {
389
['arm-cpu-features',
345
'npcm7xx_pwm-test': 300,
390
'numa-test',
346
'npcm7xx_watchdog_timer-test': 120,
391
'boot-serial-test',
347
'qom-test' : 900,
392
+ 'xlnx-can-test',
348
+ 'stm32l4x5_usart-test' : 600,
393
'migration-test']
349
'test-hmp' : 240,
394
350
'pxe-test': 610,
395
qtests_s390x = \
351
'prom-env-test': 360,
352
@@ -XXX,XX +XXX,XX @@ qtests_stm32l4x5 = \
353
['stm32l4x5_exti-test',
354
'stm32l4x5_syscfg-test',
355
'stm32l4x5_rcc-test',
356
- 'stm32l4x5_gpio-test']
357
+ 'stm32l4x5_gpio-test',
358
+ 'stm32l4x5_usart-test']
359
360
qtests_arm = \
361
(config_all_devices.has_key('CONFIG_MPS2') ? ['sse-timer-test'] : []) + \
396
--
362
--
397
2.20.1
363
2.34.1
398
364
399
365
diff view generated by jsdifflib
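For reference, the BRR value written by init_uart() follows from the 80 MHz
clock configured in init_clocks() (MSI at its 4 MHz reset default, multiplied
by PLLN = 40 and divided by PLLR = 2): with oversampling by 16, BRR holds
USARTDIV directly, and 80000000 / 115200 is about 694.4, so the 0x2B7 (695)
used by the test lands well within tolerance of 115200 baud. A stand-alone
arithmetic check (hypothetical, not part of the patch):

    #include <assert.h>

    int main(void)
    {
        const unsigned int clk_hz = 80000000;   /* PLL output from init_clocks() */
        const unsigned int brr = 0x2B7;         /* 695, written to BRR in init_uart() */
        const unsigned int baud = clk_hz / brr; /* 80000000 / 695 = 115107 */

        /* within roughly 0.1% of the requested 115200 baud */
        assert(baud > 115000 && baud < 115400);
        return 0;
    }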