Hi; here's the first arm pullreq for 9.1.

This includes the reset method function signature change, so it has
some chance of compile failures due to merge conflicts if some other
pullreq added a device reset method and that pullreq got applied
before this one. If so, the changes needed to fix those up can be
created by running the spatch rune described in the commit message of
the "hw, target: Add ResetType argument to hold and exit phase
methods" commit.

thanks
-- PMM

The following changes since commit 5da72194df36535d773c8bdc951529ecd5e31707:

  Merge tag 'pull-tcg-20240424' of https://gitlab.com/rth7680/qemu into staging (2024-04-24 15:51:49 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20240425

for you to fetch changes up to 214652da123e3821657a64691ee556281e9f6238:

  tests/qtest: Add tests for the STM32L4x5 USART (2024-04-25 10:21:59 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement FEAT_NMI and NMI support in the GICv3
 * hw/dma: avoid apparent overflow in soc_dma_set_request
 * linux-user/flatload.c: Remove unused bFLT shared-library and ZFLAT code
 * Add ResetType argument to Resettable hold and exit phase methods
 * Add RESET_TYPE_SNAPSHOT_LOAD ResetType
 * Implement STM32L4x5 USART

----------------------------------------------------------------
Anastasia Belova (1):
      hw/dma: avoid apparent overflow in soc_dma_set_request

Arnaud Minier (5):
      hw/char: Implement STM32L4x5 USART skeleton
      hw/char/stm32l4x5_usart: Enable serial read and write
      hw/char/stm32l4x5_usart: Add options for serial parameters setting
      hw/arm: Add the USART to the stm32l4x5 SoC
      tests/qtest: Add tests for the STM32L4x5 USART

Jinjie Ruan (22):
      target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NMI
      target/arm: Add PSTATE.ALLINT
      target/arm: Add support for FEAT_NMI, Non-maskable Interrupt
      target/arm: Implement ALLINT MSR (immediate)
      target/arm: Support MSR access to ALLINT
      target/arm: Add support for Non-maskable Interrupt
      target/arm: Add support for NMI in arm_phys_excp_target_el()
      target/arm: Handle IS/FS in ISR_EL1 for NMI, VINMI and VFNMI
      target/arm: Handle PSTATE.ALLINT on taking an exception
      hw/intc/arm_gicv3: Add external IRQ lines for NMI
      hw/arm/virt: Wire NMI and VINMI irq lines from GIC to CPU
      target/arm: Handle NMI in arm_cpu_do_interrupt_aarch64()
      hw/intc/arm_gicv3: Add has-nmi property to GICv3 device
      hw/intc/arm_gicv3_kvm: Not set has-nmi=true for the KVM GICv3
      hw/intc/arm_gicv3: Add irq non-maskable property
      hw/intc/arm_gicv3_redist: Implement GICR_INMIR0
      hw/intc/arm_gicv3: Implement GICD_INMIR
      hw/intc/arm_gicv3: Implement NMI interrupt priority
      hw/intc/arm_gicv3: Report the NMI interrupt in gicv3_cpuif_update()
      hw/intc/arm_gicv3: Report the VINMI interrupt
      target/arm: Add FEAT_NMI to max
      hw/arm/virt: Enable NMI support in the GIC if the CPU has FEAT_NMI

Peter Maydell (9):
      hw/intc/arm_gicv3: Add NMI handling CPU interface registers
      hw/intc/arm_gicv3: Handle icv_nmiar1_read() for icc_nmiar1_read()
      linux-user/flatload.c: Remove unused bFLT shared-library and ZFLAT code
      hw/misc: Don't special case RESET_TYPE_COLD in npcm7xx_clk, gcr
      allwinner-i2c, adm1272: Use device_cold_reset() for software-triggered reset
      scripts/coccinelle: New script to add ResetType to hold and exit phases
      hw, target: Add ResetType argument to hold and exit phase methods
      docs/devel/reset: Update to new API for hold and exit phase methods
      reset: Add RESET_TYPE_SNAPSHOT_LOAD

 MAINTAINERS | 1 +
 docs/devel/reset.rst | 25 +-
 docs/system/arm/b-l475e-iot01a.rst | 2 +-
 docs/system/arm/emulation.rst | 1 +
 scripts/coccinelle/reset-type.cocci | 133 ++++++++
 hw/intc/gicv3_internal.h | 13 +
 include/hw/arm/stm32l4x5_soc.h | 7 +
 include/hw/char/stm32l4x5_usart.h | 67 ++++
 include/hw/intc/arm_gic_common.h | 2 +
 include/hw/intc/arm_gicv3_common.h | 14 +
 include/hw/resettable.h | 5 +-
 linux-user/flat.h | 5 +-
 target/arm/cpu-features.h | 5 +
 target/arm/cpu-qom.h | 5 +-
 target/arm/cpu.h | 9 +
 target/arm/internals.h | 21 ++
 target/arm/tcg/helper-a64.h | 1 +
 target/arm/tcg/a64.decode | 1 +
 hw/adc/npcm7xx_adc.c | 2 +-
 hw/arm/pxa2xx_pic.c | 2 +-
 hw/arm/smmu-common.c | 2 +-
 hw/arm/smmuv3.c | 4 +-
 hw/arm/stellaris.c | 10 +-
 hw/arm/stm32l4x5_soc.c | 83 ++++-
 hw/arm/virt.c | 29 +-
 hw/audio/asc.c | 2 +-
 hw/char/cadence_uart.c | 2 +-
 hw/char/sifive_uart.c | 2 +-
 hw/char/stm32l4x5_usart.c | 637 ++++++++++++++++++++++++++++++++++++
 hw/core/cpu-common.c | 2 +-
 hw/core/qdev.c | 4 +-
 hw/core/reset.c | 17 +-
 hw/core/resettable.c | 8 +-
 hw/display/virtio-vga.c | 4 +-
 hw/dma/soc_dma.c | 4 +-
 hw/gpio/npcm7xx_gpio.c | 2 +-
 hw/gpio/pl061.c | 2 +-
 hw/gpio/stm32l4x5_gpio.c | 2 +-
 hw/hyperv/vmbus.c | 2 +-
 hw/i2c/allwinner-i2c.c | 5 +-
 hw/i2c/npcm7xx_smbus.c | 2 +-
 hw/input/adb.c | 2 +-
 hw/input/ps2.c | 12 +-
 hw/intc/arm_gic_common.c | 2 +-
 hw/intc/arm_gic_kvm.c | 4 +-
 hw/intc/arm_gicv3.c | 67 +++-
 hw/intc/arm_gicv3_common.c | 50 ++-
 hw/intc/arm_gicv3_cpuif.c | 268 ++++++++++++++-
 hw/intc/arm_gicv3_dist.c | 36 ++
 hw/intc/arm_gicv3_its.c | 4 +-
 hw/intc/arm_gicv3_its_common.c | 2 +-
 hw/intc/arm_gicv3_its_kvm.c | 4 +-
 hw/intc/arm_gicv3_kvm.c | 9 +-
 hw/intc/arm_gicv3_redist.c | 22 ++
 hw/intc/xics.c | 2 +-
 hw/m68k/q800-glue.c | 2 +-
 hw/misc/djmemc.c | 2 +-
 hw/misc/iosb.c | 2 +-
 hw/misc/mac_via.c | 8 +-
 hw/misc/macio/cuda.c | 4 +-
 hw/misc/macio/pmu.c | 4 +-
 hw/misc/mos6522.c | 2 +-
 hw/misc/npcm7xx_clk.c | 13 +-
 hw/misc/npcm7xx_gcr.c | 12 +-
 hw/misc/npcm7xx_mft.c | 2 +-
 hw/misc/npcm7xx_pwm.c | 2 +-
 hw/misc/stm32l4x5_exti.c | 2 +-
 hw/misc/stm32l4x5_rcc.c | 10 +-
 hw/misc/stm32l4x5_syscfg.c | 2 +-
 hw/misc/xlnx-versal-cframe-reg.c | 2 +-
 hw/misc/xlnx-versal-crl.c | 2 +-
 hw/misc/xlnx-versal-pmc-iou-slcr.c | 2 +-
 hw/misc/xlnx-versal-trng.c | 2 +-
 hw/misc/xlnx-versal-xramc.c | 2 +-
 hw/misc/xlnx-zynqmp-apu-ctrl.c | 2 +-
 hw/misc/xlnx-zynqmp-crf.c | 2 +-
 hw/misc/zynq_slcr.c | 4 +-
 hw/net/can/xlnx-zynqmp-can.c | 2 +-
 hw/net/e1000.c | 2 +-
 hw/net/e1000e.c | 2 +-
 hw/net/igb.c | 2 +-
 hw/net/igbvf.c | 2 +-
 hw/nvram/xlnx-bbram.c | 2 +-
 hw/nvram/xlnx-versal-efuse-ctrl.c | 2 +-
 hw/nvram/xlnx-zynqmp-efuse.c | 2 +-
 hw/pci-bridge/cxl_root_port.c | 4 +-
 hw/pci-bridge/pcie_root_port.c | 2 +-
 hw/pci-host/bonito.c | 2 +-
 hw/pci-host/pnv_phb.c | 4 +-
 hw/pci-host/pnv_phb3_msi.c | 4 +-
 hw/pci/pci.c | 4 +-
 hw/rtc/mc146818rtc.c | 2 +-
 hw/s390x/css-bridge.c | 2 +-
 hw/sensor/adm1266.c | 2 +-
 hw/sensor/adm1272.c | 4 +-
 hw/sensor/isl_pmbus_vr.c | 10 +-
 hw/sensor/max31785.c | 2 +-
 hw/sensor/max34451.c | 2 +-
 hw/ssi/npcm7xx_fiu.c | 2 +-
 hw/timer/etraxfs_timer.c | 2 +-
 hw/timer/npcm7xx_timer.c | 2 +-
 hw/usb/hcd-dwc2.c | 8 +-
 hw/usb/xlnx-versal-usb2-ctrl-regs.c | 2 +-
 hw/virtio/virtio-pci.c | 2 +-
 linux-user/flatload.c | 293 +----------------
 target/arm/cpu.c | 151 ++++++++-
 target/arm/helper.c | 101 +++++-
 target/arm/tcg/cpu64.c | 1 +
 target/arm/tcg/helper-a64.c | 16 +-
 target/arm/tcg/translate-a64.c | 19 ++
 target/avr/cpu.c | 4 +-
 target/cris/cpu.c | 4 +-
 target/hexagon/cpu.c | 4 +-
 target/i386/cpu.c | 4 +-
 target/loongarch/cpu.c | 4 +-
 target/m68k/cpu.c | 4 +-
 target/microblaze/cpu.c | 4 +-
 target/mips/cpu.c | 4 +-
 target/openrisc/cpu.c | 4 +-
 target/ppc/cpu_init.c | 4 +-
 target/riscv/cpu.c | 4 +-
 target/rx/cpu.c | 4 +-
 target/sh4/cpu.c | 4 +-
 target/sparc/cpu.c | 4 +-
 target/tricore/cpu.c | 4 +-
 target/xtensa/cpu.c | 4 +-
 tests/qtest/stm32l4x5_usart-test.c | 315 ++++++++++++++++++
 hw/arm/Kconfig | 1 +
 hw/char/Kconfig | 3 +
 hw/char/meson.build | 1 +
 hw/char/trace-events | 12 +
 hw/intc/trace-events | 2 +
 tests/qtest/meson.build | 4 +-
 133 files changed, 2239 insertions(+), 537 deletions(-)
 create mode 100644 scripts/coccinelle/reset-type.cocci
 create mode 100644 include/hw/char/stm32l4x5_usart.h
 create mode 100644 hw/char/stm32l4x5_usart.c
 create mode 100644 tests/qtest/stm32l4x5_usart-test.c
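(Note, not part of the series: for anyone hit by the merge-conflict caveat above, the change being referred to is that Resettable hold and exit phase methods now take a ResetType argument. The sketch below uses a made-up device -- MyDevState, MYDEV and mydev_* are placeholder names, not anything in this pullreq -- purely to show the shape of the fix-up; the authoritative conversion is the Coccinelle script this series adds as scripts/coccinelle/reset-type.cocci, run as described in the "hw, target: Add ResetType argument to hold and exit phase methods" commit message.)

/* Assumes the usual QEMU headers (qemu/osdep.h, hw/resettable.h, etc.). */

/* Old signature was: static void mydev_reset_hold(Object *obj) */
static void mydev_reset_hold(Object *obj, ResetType type)
{
    MyDevState *s = MYDEV(obj);   /* placeholder type/cast macro */

    /* The body is unchanged; devices that don't care can ignore 'type'. */
    s->ctrl = 0;
}

static void mydev_class_init(ObjectClass *klass, void *data)
{
    ResettableClass *rc = RESETTABLE_CLASS(klass);

    rc->phases.hold = mydev_reset_hold;   /* registration is unchanged */
}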
From: Jinjie Ruan <ruanjinjie@huawei.com>

FEAT_NMI defines three new bits in HCRX_EL2: TALLINT, VINMI and
VFNMI. When the feature is enabled, allow these bits to be written in
HCRX_EL2.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-2-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-features.h | 5 +++++
 target/arm/helper.c | 8 +++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
 }
 
+static inline bool isar_feature_aa64_nmi(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, NMI) != 0;
+}
+
 static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
 {
     return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool el_is_in_host(CPUARMState *env, int el)
 static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
 {
+    ARMCPU *cpu = env_archcpu(env);
     uint64_t valid_mask = 0;
 
     /* FEAT_MOPS adds MSCEn and MCE2 */
-    if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+    if (cpu_isar_feature(aa64_mops, cpu)) {
         valid_mask |= HCRX_MSCEN | HCRX_MCE2;
     }
 
+    /* FEAT_NMI adds TALLINT, VINMI and VFNMI */
+    if (cpu_isar_feature(aa64_nmi, cpu)) {
+        valid_mask |= HCRX_TALLINT | HCRX_VINMI | HCRX_VFNMI;
+    }
+
     /* Clear RES0 bits. */
     env->cp15.hcrx_el2 = value & valid_mask;
 }
-- 
2.34.1
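(Aside, not QEMU code: the hunk above follows a common pattern for feature-gated control registers -- a write only keeps bits whose owning feature is implemented, and everything else stays RES0. A stand-alone restatement with made-up bit positions:)

#include <stdint.h>
#include <stdbool.h>

/* Illustrative only: feature-gated valid-mask write, as in hcrx_write(). */
static uint64_t ctrl_reg_write(uint64_t value, bool have_feat_a, bool have_feat_b)
{
    uint64_t valid_mask = 0;

    if (have_feat_a) {
        valid_mask |= 0x3ULL << 0;   /* bits owned by feature A (made-up positions) */
    }
    if (have_feat_b) {
        valid_mask |= 0x7ULL << 6;   /* bits owned by feature B (made-up positions) */
    }
    return value & valid_mask;       /* unimplemented bits never stick */
}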
From: Jinjie Ruan <ruanjinjie@huawei.com>

When PSTATE.ALLINT is set, an IRQ or FIQ interrupt that is targeted to
ELx, with or without superpriority, is masked. As Richard suggested, place
the ALLINT bit in PSTATE in env->pstate.

In the pseudocode, AArch64.ExceptionReturn() calls SetPSTATEFromPSR(), which
treats PSTATE.ALLINT as one of the bits that are reinstated from SPSR to
PSTATE regardless of whether this is an illegal exception return or not. So
handle PSTATE.ALLINT the same way as PSTATE.DAIF in the illegal_return exit
path of the exception_return helper. With the change, exception entry and
return are automatically handled.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-3-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 1 +
 target/arm/tcg/helper-a64.c | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_D (1U << 9)
 #define PSTATE_BTYPE (3U << 10)
 #define PSTATE_SSBS (1U << 12)
+#define PSTATE_ALLINT (1U << 13)
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
 #define PSTATE_PAN (1U << 22)
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ illegal_return:
      */
     env->pstate |= PSTATE_IL;
     env->pc = new_pc;
-    spsr &= PSTATE_NZCV | PSTATE_DAIF;
-    spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF);
+    spsr &= PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT;
+    spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT);
     pstate_write(env, spsr);
     if (!arm_singlestep_active(env)) {
         env->pstate &= ~PSTATE_SS;
-- 
2.34.1
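(Aside, not QEMU code: a minimal stand-alone restatement of the illegal-return handling in the hunk above. The bit positions are the ones the patch itself uses -- NZCV in bits [31:28], DAIF in bits [9:6], ALLINT in bit 13.)

#include <stdint.h>

#define PSTATE_NZCV   (0xfULL << 28)
#define PSTATE_DAIF   (0xfULL << 6)
#define PSTATE_ALLINT (1ULL << 13)

/*
 * On an illegal exception return only NZCV, DAIF and (with FEAT_NMI)
 * ALLINT are taken from the SPSR; everything else is kept from the
 * current PSTATE.  Mirrors the two changed lines in helper-a64.c.
 */
static uint64_t illegal_return_pstate(uint64_t spsr, uint64_t cur_pstate)
{
    uint64_t from_spsr = PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT;

    return (spsr & from_spsr) | (cur_pstate & ~from_spsr);
}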
From: Jinjie Ruan <ruanjinjie@huawei.com>

Add support for FEAT_NMI. NMI (FEAT_NMI) is a mandatory feature in
ARMv8.8-A and ARMv9.3-A.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-4-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
     if (isar_feature_aa64_mte(id)) {
         valid |= PSTATE_TCO;
     }
+    if (isar_feature_aa64_nmi(id)) {
+        valid |= PSTATE_ALLINT;
+    }
 
     return valid;
 }
-- 
2.34.1
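(Aside, not part of the series: the NMI field that the new isar_feature_aa64_nmi() helper tests lives in ID_AA64PFR1_EL1, so a guest running at EL1 or above can probe for the feature itself. The field position used below, bits [39:36], is an assumption taken from my reading of the Arm ARM and of QEMU's ID_AA64PFR1 field definitions -- verify it before relying on it.)

#include <stdint.h>
#include <stdbool.h>

/* Guest-side probe (AArch64, EL1+): non-zero ID_AA64PFR1_EL1.NMI means
 * FEAT_NMI is implemented.  Assumes the field is bits [39:36]. */
static bool cpu_has_feat_nmi(void)
{
    uint64_t pfr1;

    __asm__ volatile("mrs %0, ID_AA64PFR1_EL1" : "=r"(pfr1));
    return ((pfr1 >> 36) & 0xf) != 0;
}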
From: Jinjie Ruan <ruanjinjie@huawei.com>

Add ALLINT MSR (immediate) to decodetree; its CRm field is 0b000x. The
EL0 check is necessary for ALLINT, and the EL1 check is only necessary when
imm == 1, so implement it inline for EL2/3, or for EL1 with imm == 0. Avoid the
unconditional write to pc and use raise_exception_ra to unwind.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-5-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/helper-a64.h | 1 +
 target/arm/tcg/a64.decode | 1 +
 target/arm/tcg/helper-a64.c | 12 ++++++++++++
 target/arm/tcg/translate-a64.c | 19 +++++++++++++++++++
 4 files changed, 33 insertions(+)

diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_1(rbit64, TCG_CALL_NO_RWG_SE, i64, i64)
 DEF_HELPER_2(msr_i_spsel, void, env, i32)
 DEF_HELPER_2(msr_i_daifset, void, env, i32)
 DEF_HELPER_2(msr_i_daifclear, void, env, i32)
+DEF_HELPER_1(msr_set_allint_el1, void, env)
 DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
 DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
 DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -XXX,XX +XXX,XX @@ MSR_i_DIT 1101 0101 0000 0 011 0100 .... 010 11111 @msr_i
 MSR_i_TCO       1101 0101 0000 0 011 0100 .... 100 11111 @msr_i
 MSR_i_DAIFSET   1101 0101 0000 0 011 0100 .... 110 11111 @msr_i
 MSR_i_DAIFCLEAR 1101 0101 0000 0 011 0100 .... 111 11111 @msr_i
+MSR_i_ALLINT    1101 0101 0000 0 001 0100 000 imm:1 000 11111
 MSR_i_SVCR      1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
 
 # MRS, MSR (register), SYS, SYSL. These are all essentially the
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(msr_i_spsel)(CPUARMState *env, uint32_t imm)
     update_spsel(env, imm);
 }
 
+void HELPER(msr_set_allint_el1)(CPUARMState *env)
+{
+    /* ALLINT update to PSTATE. */
+    if (arm_hcrx_el2_eff(env) & HCRX_TALLINT) {
+        raise_exception_ra(env, EXCP_UDEF,
+                           syn_aa64_sysregtrap(0, 1, 0, 4, 1, 0x1f, 0), 2,
+                           GETPC());
+    }
+
+    env->pstate |= PSTATE_ALLINT;
+}
+
 static void daif_check(CPUARMState *env, uint32_t op,
                        uint32_t imm, uintptr_t ra)
 {
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool trans_MSR_i_DAIFCLEAR(DisasContext *s, arg_i *a)
     return true;
 }
 
+static bool trans_MSR_i_ALLINT(DisasContext *s, arg_i *a)
+{
+    if (!dc_isar_feature(aa64_nmi, s) || s->current_el == 0) {
+        return false;
+    }
+
+    if (a->imm == 0) {
+        clear_pstate_bits(PSTATE_ALLINT);
+    } else if (s->current_el > 1) {
+        set_pstate_bits(PSTATE_ALLINT);
+    } else {
+        gen_helper_msr_set_allint_el1(tcg_env);
+    }
+
+    /* Exit the cpu loop to re-evaluate pending IRQs. */
+    s->base.is_jmp = DISAS_UPDATE_EXIT;
+    return true;
+}
+
 static bool trans_MSR_i_SVCR(DisasContext *s, arg_MSR_i_SVCR *a)
 {
     if (!dc_isar_feature(aa64_sme, s) || a->mask == 0) {
-- 
2.34.1
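(Aside, not part of the series: in guest code the new MSR (immediate) form is written with an immediate of 0 or 1, and per the earlier patch setting ALLINT masks IRQ/FIQ whether or not they have superpriority. The sketch below assumes a toolchain new enough to accept the ALLINT name, e.g. with -march=armv8.8-a; if the assembler rejects it, the register form shown in the next patch can be used instead.)

/* Hypothetical guest-side helpers; require EL1 or higher and FEAT_NMI. */
static inline void allint_mask_all(void)
{
    __asm__ volatile("msr ALLINT, #1");   /* mask IRQ/FIQ, including superpriority */
}

static inline void allint_unmask_all(void)
{
    __asm__ volatile("msr ALLINT, #0");
}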
From: Jinjie Ruan <ruanjinjie@huawei.com>

Support ALLINT MSR access as follows:
    mrs <xt>, ALLINT    // read allint
    msr ALLINT, <xt>    // write allint with imm

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-6-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rme_mte_reginfo[] = {
       .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 14, .opc2 = 5,
       .access = PL3_W, .type = ARM_CP_NOP },
 };
+
+static void aa64_allint_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                              uint64_t value)
+{
+    env->pstate = (env->pstate & ~PSTATE_ALLINT) | (value & PSTATE_ALLINT);
+}
+
+static uint64_t aa64_allint_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return env->pstate & PSTATE_ALLINT;
+}
+
+static CPAccessResult aa64_allint_access(CPUARMState *env,
+                                         const ARMCPRegInfo *ri, bool isread)
+{
+    if (!isread && arm_current_el(env) == 1 &&
+        (arm_hcrx_el2_eff(env) & HCRX_TALLINT)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo nmi_reginfo[] = {
+    { .name = "ALLINT", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .opc2 = 0, .crn = 4, .crm = 3,
+      .type = ARM_CP_NO_RAW,
+      .access = PL1_RW, .accessfn = aa64_allint_access,
+      .fieldoffset = offsetof(CPUARMState, pstate),
+      .writefn = aa64_allint_write, .readfn = aa64_allint_read,
+      .resetfn = arm_cp_reset_ignore },
+};
 #endif /* TARGET_AARCH64 */
 
 static void define_pmu_regs(ARMCPU *cpu)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_nv2, cpu)) {
         define_arm_cp_regs(cpu, nv2_reginfo);
     }
+
+    if (cpu_isar_feature(aa64_nmi, cpu)) {
+        define_arm_cp_regs(cpu, nmi_reginfo);
+    }
 #endif
 
     if (cpu_isar_feature(any_predinv, cpu)) {
-- 
2.34.1
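(Aside, not part of the series: the register definition above gives ALLINT the encoding op0=3, op1=0, CRn=4, CRm=3, op2=0, so a guest whose assembler does not know the ALLINT name can use the generic S-form name built from those values. Note that, as the accessfn shows, an EL1 write traps to EL2 when HCRX_EL2.TALLINT is set. A rough sketch:)

#include <stdint.h>

/* S3_0_C4_C3_0 is the generic-encoding spelling of ALLINT (op0=3, op1=0,
 * CRn=4, CRm=3, op2=0), matching the .opc0/.opc1/.crn/.crm/.opc2 values
 * in the cpreg definition above. */
static inline uint64_t allint_read(void)
{
    uint64_t v;

    __asm__ volatile("mrs %0, S3_0_C4_C3_0" : "=r"(v));
    return v;                         /* bit 13 is ALLINT */
}

static inline void allint_write(uint64_t v)
{
    __asm__ volatile("msr S3_0_C4_C3_0, %0" :: "r"(v));
}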
From: Jinjie Ruan <ruanjinjie@huawei.com>

This only implements the external delivery method via the GICv3.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20240407081733.3231820-7-ruanjinjie@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-qom.h | 5 +-
 target/arm/cpu.h | 6 ++
 target/arm/internals.h | 18 +++++
 target/arm/cpu.c | 147 ++++++++++++++++++++++++++++++++++++---
 target/arm/helper.c | 33 +++++++--
 5 files changed, 193 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -XXX,XX +XXX,XX @@ DECLARE_CLASS_CHECKERS(AArch64CPUClass, AARCH64_CPU,
 #define ARM_CPU_TYPE_SUFFIX "-" TYPE_ARM_CPU
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 
-/* Meanings of the ARMCPU object's four inbound GPIO lines */
+/* Meanings of the ARMCPU object's seven inbound GPIO lines */
 #define ARM_CPU_IRQ 0
 #define ARM_CPU_FIQ 1
 #define ARM_CPU_VIRQ 2
 #define ARM_CPU_VFIQ 3
+#define ARM_CPU_NMI 4
+#define ARM_CPU_VINMI 5
+#define ARM_CPU_VFNMI 6
 
 /* For M profile, some registers are banked secure vs non-secure;
  * these are represented as a 2-element array where the first element
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
 #define EXCP_VSERR 24
 #define EXCP_GPC 25 /* v9 Granule Protection Check Fault */
+#define EXCP_NMI 26
+#define EXCP_VINMI 27
+#define EXCP_VFNMI 28
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */
 
 #define ARMV7M_EXCP_RESET 1
@@ -XXX,XX +XXX,XX @@
 #define CPU_INTERRUPT_VIRQ CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ CPU_INTERRUPT_TGT_EXT_3
 #define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
+#define CPU_INTERRUPT_NMI CPU_INTERRUPT_TGT_EXT_4
+#define CPU_INTERRUPT_VINMI CPU_INTERRUPT_TGT_EXT_0
+#define CPU_INTERRUPT_VFNMI CPU_INTERRUPT_TGT_INT_1
 
 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
123
g_assert_not_reached();
124
}
125
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env,
126
bool secstate)
127
{
128
int el = arm_current_el(env);
129
- ARMMMUIdx mmu_idx;
130
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
131
132
- if (el == 0) {
133
- mmu_idx = secstate ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser;
134
- } else {
135
- mmu_idx = secstate ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv;
136
+ if (el != 0) {
137
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
138
}
139
140
if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
141
- mmu_idx = secstate ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri;
142
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
143
+ }
144
+
145
+ if (secstate) {
146
+ mmu_idx |= ARM_MMU_IDX_M_S;
147
}
148
149
return mmu_idx;
150
diff --git a/target/arm/internals.h b/target/arm/internals.h
62
diff --git a/target/arm/internals.h b/target/arm/internals.h
151
index XXXXXXX..XXXXXXX 100644
63
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/internals.h
64
--- a/target/arm/internals.h
153
+++ b/target/arm/internals.h
65
+++ b/target/arm/internals.h
154
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
66
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu);
155
case ARMMMUIdx_S1NSE1:
67
*/
156
case ARMMMUIdx_S1E2:
68
void arm_cpu_update_vfiq(ARMCPU *cpu);
157
case ARMMMUIdx_S2NS:
69
158
+ case ARMMMUIdx_MPrivNegPri:
70
+/**
159
+ case ARMMMUIdx_MUserNegPri:
71
+ * arm_cpu_update_vinmi: Update CPU_INTERRUPT_VINMI bit in cs->interrupt_request
160
case ARMMMUIdx_MPriv:
72
+ *
161
- case ARMMMUIdx_MNegPri:
73
+ * Update the CPU_INTERRUPT_VINMI bit in cs->interrupt_request, following
162
case ARMMMUIdx_MUser:
74
+ * a change to either the input VNMI line from the GIC or the HCRX_EL2.VINMI.
75
+ * Must be called with the BQL held.
76
+ */
77
+void arm_cpu_update_vinmi(ARMCPU *cpu);
78
+
79
+/**
80
+ * arm_cpu_update_vfnmi: Update CPU_INTERRUPT_VFNMI bit in cs->interrupt_request
81
+ *
82
+ * Update the CPU_INTERRUPT_VFNMI bit in cs->interrupt_request, following
83
+ * a change to the HCRX_EL2.VFNMI.
84
+ * Must be called with the BQL held.
85
+ */
86
+void arm_cpu_update_vfnmi(ARMCPU *cpu);
87
+
88
/**
89
* arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
90
*
91
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/cpu.c
94
+++ b/target/arm/cpu.c
95
@@ -XXX,XX +XXX,XX @@ void arm_restore_state_to_opc(CPUState *cs,
96
}
97
#endif /* CONFIG_TCG */
98
99
+/*
100
+ * With SCTLR_ELx.NMI == 0, IRQ with Superpriority is masked identically with
101
+ * IRQ without Superpriority. Moreover, if the GIC is configured so that
102
+ * FEAT_GICv3_NMI is only set if FEAT_NMI is set, then we won't ever see
103
+ * CPU_INTERRUPT_*NMI anyway. So we might as well accept NMI here
104
+ * unconditionally.
105
+ */
106
static bool arm_cpu_has_work(CPUState *cs)
107
{
108
ARMCPU *cpu = ARM_CPU(cs);
109
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
110
return (cpu->power_state != PSCI_OFF)
111
&& cs->interrupt_request &
112
(CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
113
+ | CPU_INTERRUPT_NMI | CPU_INTERRUPT_VINMI | CPU_INTERRUPT_VFNMI
114
| CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
115
| CPU_INTERRUPT_EXITTB);
116
}
117
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
118
CPUARMState *env = cpu_env(cs);
119
bool pstate_unmasked;
120
bool unmasked = false;
121
+ bool allIntMask = false;
122
123
/*
124
* Don't take exceptions if they target a lower EL.
125
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
163
return false;
126
return false;
164
case ARMMMUIdx_S1E3:
127
}
165
case ARMMMUIdx_S1SE0:
128
166
case ARMMMUIdx_S1SE1:
129
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
167
+ case ARMMMUIdx_MSPrivNegPri:
130
+ env->cp15.sctlr_el[target_el] & SCTLR_NMI && cur_el == target_el) {
168
+ case ARMMMUIdx_MSUserNegPri:
131
+ allIntMask = env->pstate & PSTATE_ALLINT ||
169
case ARMMMUIdx_MSPriv:
132
+ ((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
170
- case ARMMMUIdx_MSNegPri:
133
+ (env->pstate & PSTATE_SP));
171
case ARMMMUIdx_MSUser:
134
+ }
172
return true;
135
+
173
default:
136
switch (excp_idx) {
137
+ case EXCP_NMI:
138
+ pstate_unmasked = !allIntMask;
139
+ break;
140
+
141
+ case EXCP_VINMI:
142
+ if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
143
+ /* VINMIs are only taken when hypervized. */
144
+ return false;
145
+ }
146
+ return !allIntMask;
147
+ case EXCP_VFNMI:
148
+ if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
149
+ /* VFNMIs are only taken when hypervized. */
150
+ return false;
151
+ }
152
+ return !allIntMask;
153
case EXCP_FIQ:
154
- pstate_unmasked = !(env->daif & PSTATE_F);
155
+ pstate_unmasked = (!(env->daif & PSTATE_F)) && (!allIntMask);
156
break;
157
158
case EXCP_IRQ:
159
- pstate_unmasked = !(env->daif & PSTATE_I);
160
+ pstate_unmasked = (!(env->daif & PSTATE_I)) && (!allIntMask);
161
break;
162
163
case EXCP_VFIQ:
164
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
165
/* VFIQs are only taken when hypervized. */
166
return false;
167
}
168
- return !(env->daif & PSTATE_F);
169
+ return !(env->daif & PSTATE_F) && (!allIntMask);
170
case EXCP_VIRQ:
171
if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
172
/* VIRQs are only taken when hypervized. */
173
return false;
174
}
175
- return !(env->daif & PSTATE_I);
176
+ return !(env->daif & PSTATE_I) && (!allIntMask);
177
case EXCP_VSERR:
178
if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
179
/* VIRQs are only taken when hypervized. */
180
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
181
182
/* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
183
184
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
185
+ (arm_sctlr(env, cur_el) & SCTLR_NMI)) {
186
+ if (interrupt_request & CPU_INTERRUPT_NMI) {
187
+ excp_idx = EXCP_NMI;
188
+ target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
189
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
190
+ cur_el, secure, hcr_el2)) {
191
+ goto found;
192
+ }
193
+ }
194
+ if (interrupt_request & CPU_INTERRUPT_VINMI) {
195
+ excp_idx = EXCP_VINMI;
196
+ target_el = 1;
197
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
198
+ cur_el, secure, hcr_el2)) {
199
+ goto found;
200
+ }
201
+ }
202
+ if (interrupt_request & CPU_INTERRUPT_VFNMI) {
203
+ excp_idx = EXCP_VFNMI;
204
+ target_el = 1;
205
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
206
+ cur_el, secure, hcr_el2)) {
207
+ goto found;
208
+ }
209
+ }
210
+ } else {
211
+ /*
212
+ * NMI disabled: interrupts with superpriority are handled
213
+ * as if they didn't have it
214
+ */
215
+ if (interrupt_request & CPU_INTERRUPT_NMI) {
216
+ interrupt_request |= CPU_INTERRUPT_HARD;
217
+ }
218
+ if (interrupt_request & CPU_INTERRUPT_VINMI) {
219
+ interrupt_request |= CPU_INTERRUPT_VIRQ;
220
+ }
221
+ if (interrupt_request & CPU_INTERRUPT_VFNMI) {
222
+ interrupt_request |= CPU_INTERRUPT_VFIQ;
223
+ }
224
+ }
225
+
226
if (interrupt_request & CPU_INTERRUPT_FIQ) {
227
excp_idx = EXCP_FIQ;
228
target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
229
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu)
230
CPUARMState *env = &cpu->env;
231
CPUState *cs = CPU(cpu);
232
233
- bool new_state = (env->cp15.hcr_el2 & HCR_VI) ||
234
+ bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
235
+ !(arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
236
(env->irq_line_state & CPU_INTERRUPT_VIRQ);
237
238
if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VIRQ) != 0)) {
239
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
240
CPUARMState *env = &cpu->env;
241
CPUState *cs = CPU(cpu);
242
243
- bool new_state = (env->cp15.hcr_el2 & HCR_VF) ||
244
+ bool new_state = ((arm_hcr_el2_eff(env) & HCR_VF) &&
245
+ !(arm_hcrx_el2_eff(env) & HCRX_VFNMI)) ||
246
(env->irq_line_state & CPU_INTERRUPT_VFIQ);
247
248
if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFIQ) != 0)) {
249
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
250
}
251
}
252
253
+void arm_cpu_update_vinmi(ARMCPU *cpu)
254
+{
255
+ /*
256
+ * Update the interrupt level for VINMI, which is the logical OR of
257
+ * the HCRX_EL2.VINMI bit and the input line level from the GIC.
258
+ */
259
+ CPUARMState *env = &cpu->env;
260
+ CPUState *cs = CPU(cpu);
261
+
262
+ bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
263
+ (arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
264
+ (env->irq_line_state & CPU_INTERRUPT_VINMI);
265
+
266
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VINMI) != 0)) {
267
+ if (new_state) {
268
+ cpu_interrupt(cs, CPU_INTERRUPT_VINMI);
269
+ } else {
270
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VINMI);
271
+ }
272
+ }
273
+}
274
+
275
+void arm_cpu_update_vfnmi(ARMCPU *cpu)
276
+{
277
+ /*
278
+ * Update the interrupt level for VFNMI, which is the HCRX_EL2.VFNMI bit.
279
+ */
280
+ CPUARMState *env = &cpu->env;
281
+ CPUState *cs = CPU(cpu);
282
+
283
+ bool new_state = (arm_hcr_el2_eff(env) & HCR_VF) &&
284
+ (arm_hcrx_el2_eff(env) & HCRX_VFNMI);
285
+
286
+ if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VFNMI) != 0)) {
287
+ if (new_state) {
288
+ cpu_interrupt(cs, CPU_INTERRUPT_VFNMI);
289
+ } else {
290
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VFNMI);
291
+ }
292
+ }
293
+}
294
+
295
void arm_cpu_update_vserr(ARMCPU *cpu)
296
{
297
/*
298
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
299
[ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
300
[ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
301
[ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
302
- [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
303
+ [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
304
+ [ARM_CPU_NMI] = CPU_INTERRUPT_NMI,
305
+ [ARM_CPU_VINMI] = CPU_INTERRUPT_VINMI,
306
};
307
308
if (!arm_feature(env, ARM_FEATURE_EL2) &&
309
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
310
case ARM_CPU_VFIQ:
311
arm_cpu_update_vfiq(cpu);
312
break;
313
+ case ARM_CPU_VINMI:
314
+ arm_cpu_update_vinmi(cpu);
315
+ break;
316
case ARM_CPU_IRQ:
317
case ARM_CPU_FIQ:
318
+ case ARM_CPU_NMI:
319
if (level) {
320
cpu_interrupt(cs, mask[irq]);
321
} else {
322
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
323
#else
324
/* Our inbound IRQ and FIQ lines */
325
if (kvm_enabled()) {
326
- /* VIRQ and VFIQ are unused with KVM but we add them to maintain
327
- * the same interface as non-KVM CPUs.
328
+ /*
329
+ * VIRQ, VFIQ, NMI, VINMI are unused with KVM but we add
330
+ * them to maintain the same interface as non-KVM CPUs.
331
*/
332
- qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
333
+ qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 6);
334
} else {
335
- qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
336
+ qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 6);
337
}
338
339
qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
174
diff --git a/target/arm/helper.c b/target/arm/helper.c
340
diff --git a/target/arm/helper.c b/target/arm/helper.c
175
index XXXXXXX..XXXXXXX 100644
341
index XXXXXXX..XXXXXXX 100644
176
--- a/target/arm/helper.c
342
--- a/target/arm/helper.c
177
+++ b/target/arm/helper.c
343
+++ b/target/arm/helper.c
178
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
344
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
179
case ARMMMUIdx_S1SE1:
345
* and the state of the input lines from the GIC. (This requires
180
case ARMMMUIdx_S1NSE0:
346
* that we have the BQL, which is done by marking the
181
case ARMMMUIdx_S1NSE1:
347
* reginfo structs as ARM_CP_IO.)
182
+ case ARMMMUIdx_MPrivNegPri:
348
- * Note that if a write to HCR pends a VIRQ or VFIQ it is never
183
+ case ARMMMUIdx_MUserNegPri:
349
- * possible for it to be taken immediately, because VIRQ and
184
case ARMMMUIdx_MPriv:
350
- * VFIQ are masked unless running at EL0 or EL1, and HCR
185
- case ARMMMUIdx_MNegPri:
351
- * can only be written at EL2.
186
case ARMMMUIdx_MUser:
352
+ * Note that if a write to HCR pends a VIRQ or VFIQ or VINMI or
187
+ case ARMMMUIdx_MSPrivNegPri:
353
+ * VFNMI, it is never possible for it to be taken immediately
188
+ case ARMMMUIdx_MSUserNegPri:
354
+ * because VIRQ, VFIQ, VINMI and VFNMI are masked unless running
189
case ARMMMUIdx_MSPriv:
355
+ * at EL0 or EL1, and HCR can only be written at EL2.
190
- case ARMMMUIdx_MSNegPri:
356
*/
191
case ARMMMUIdx_MSUser:
357
g_assert(bql_locked());
192
return 1;
358
arm_cpu_update_virq(cpu);
193
default:
359
arm_cpu_update_vfiq(cpu);
194
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
360
arm_cpu_update_vserr(cpu);
195
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
361
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
196
case R_V7M_MPU_CTRL_ENABLE_MASK:
362
+ arm_cpu_update_vinmi(cpu);
197
/* Enabled, but not for HardFault and NMI */
363
+ arm_cpu_update_vfnmi(cpu);
198
- return mmu_idx == ARMMMUIdx_MNegPri ||
364
+ }
199
- mmu_idx == ARMMMUIdx_MSNegPri;
365
}
200
+ return mmu_idx & ARM_MMU_IDX_M_NEGPRI;
366
201
case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK:
367
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
202
/* Enabled for all cases */
368
@@ -XXX,XX +XXX,XX @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
203
return false;
369
204
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
370
/* Clear RES0 bits. */
205
case ARMMMUIdx_S1NSE0:
371
env->cp15.hcrx_el2 = value & valid_mask;
206
case ARMMMUIdx_MUser:
372
+
207
case ARMMMUIdx_MSUser:
373
+ /*
208
+ case ARMMMUIdx_MUserNegPri:
374
+ * Updates to VINMI and VFNMI require us to update the status of
209
+ case ARMMMUIdx_MSUserNegPri:
375
+ * virtual NMI, which are the logical OR of these bits
210
return true;
376
+ * and the state of the input lines from the GIC. (This requires
211
default:
377
+ * that we have the BQL, which is done by marking the
212
return false;
378
+ * reginfo structs as ARM_CP_IO.)
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
379
+ * Note that if a write to HCRX pends a VINMI or VFNMI it is never
214
index XXXXXXX..XXXXXXX 100644
380
+ * possible for it to be taken immediately, because VINMI and
215
--- a/target/arm/translate.c
381
+ * VFNMI are masked unless running at EL0 or EL1, and HCRX
216
+++ b/target/arm/translate.c
382
+ * can only be written at EL2.
217
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
383
+ */
218
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
384
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
219
case ARMMMUIdx_MUser:
385
+ g_assert(bql_locked());
220
case ARMMMUIdx_MPriv:
386
+ arm_cpu_update_vinmi(cpu);
221
- case ARMMMUIdx_MNegPri:
387
+ arm_cpu_update_vfnmi(cpu);
222
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
388
+ }
223
+ case ARMMMUIdx_MUserNegPri:
389
}
224
+ case ARMMMUIdx_MPrivNegPri:
390
225
+ return arm_to_core_mmu_idx(ARMMMUIdx_MUserNegPri);
391
static CPAccessResult access_hxen(CPUARMState *env, const ARMCPRegInfo *ri,
226
case ARMMMUIdx_MSUser:
392
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_hxen(CPUARMState *env, const ARMCPRegInfo *ri,
227
case ARMMMUIdx_MSPriv:
393
228
- case ARMMMUIdx_MSNegPri:
394
static const ARMCPRegInfo hcrx_el2_reginfo = {
229
return arm_to_core_mmu_idx(ARMMMUIdx_MSUser);
395
.name = "HCRX_EL2", .state = ARM_CP_STATE_AA64,
230
+ case ARMMMUIdx_MSUserNegPri:
396
+ .type = ARM_CP_IO,
231
+ case ARMMMUIdx_MSPrivNegPri:
397
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 2,
232
+ return arm_to_core_mmu_idx(ARMMMUIdx_MSUserNegPri);
398
.access = PL2_RW, .writefn = hcrx_write, .accessfn = access_hxen,
233
case ARMMMUIdx_S2NS:
399
.nv2_redirect_offset = 0xa0,
234
default:
400
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs)
235
g_assert_not_reached();
401
[EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
402
[EXCP_VSERR] = "Virtual SERR",
403
[EXCP_GPC] = "Granule Protection Check",
404
+ [EXCP_NMI] = "NMI",
405
+ [EXCP_VINMI] = "Virtual IRQ NMI",
406
+ [EXCP_VFNMI] = "Virtual FIQ NMI",
407
};
408
409
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
236
--
410
--
237
2.7.4
411
2.34.1
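[Illustrative aside, not part of the series.] The masking rule the hunks above implement can be summarised as: when SCTLR_ELx.NMI is set, an ordinary IRQ or FIQ is additionally masked if PSTATE.ALLINT is set, or if SCTLR_ELx.SPINTMASK and PSTATE.SP are both set, while NMI/VINMI/VFNMI ignore PSTATE.I/F and honour only that extra mask. The sketch below shows just that predicate in standalone C; the struct and field names are invented for the example, and it deliberately ignores the current-EL/target-EL checks the real code performs.

#include <stdbool.h>
#include <stdio.h>

/* Simplified pseudo-state; field names are illustrative only. */
struct cpu_state {
    bool sctlr_nmi;       /* SCTLR_ELx.NMI */
    bool sctlr_spintmask; /* SCTLR_ELx.SPINTMASK */
    bool pstate_allint;   /* PSTATE.ALLINT */
    bool pstate_sp;       /* PSTATE.SP */
    bool pstate_i;        /* PSTATE.I */
};

/* Mirror of the "allIntMask" condition in the hunk above (EL checks omitted). */
static bool all_int_masked(const struct cpu_state *s)
{
    if (!s->sctlr_nmi) {
        return false;
    }
    return s->pstate_allint || (s->sctlr_spintmask && s->pstate_sp);
}

static bool irq_unmasked(const struct cpu_state *s)
{
    return !s->pstate_i && !all_int_masked(s);
}

static bool nmi_unmasked(const struct cpu_state *s)
{
    return !all_int_masked(s); /* NMI does not look at PSTATE.I */
}

int main(void)
{
    struct cpu_state s = { .sctlr_nmi = true, .pstate_i = true };
    printf("IRQ unmasked: %d, NMI unmasked: %d\n",
           irq_unmasked(&s), nmi_unmasked(&s));
    return 0;
}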
1
When we added the ARMMMUIdx_MSUser MMU index we forgot to
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
add it to the case statement in regime_is_user(), so we
3
weren't treating it as unprivileged when doing MPU lookups.
4
Correct the omission.
5
2
3
According to Arm GIC section 4.6.3 Interrupt superpriority, the interrupt
4
with superpriority is always IRQ, never FIQ, so handle NMI the same as IRQ in
5
arm_phys_excp_target_el().
6
7
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20240407081733.3231820-8-ruanjinjie@huawei.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 1512153879-5291-4-git-send-email-peter.maydell@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
---
12
---
11
target/arm/helper.c | 1 +
13
target/arm/helper.c | 1 +
12
1 file changed, 1 insertion(+)
14
1 file changed, 1 insertion(+)
13
15
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
18
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
19
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
20
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
19
case ARMMMUIdx_S1SE0:
21
hcr_el2 = arm_hcr_el2_eff(env);
20
case ARMMMUIdx_S1NSE0:
22
switch (excp_idx) {
21
case ARMMMUIdx_MUser:
23
case EXCP_IRQ:
22
+ case ARMMMUIdx_MSUser:
24
+ case EXCP_NMI:
23
return true;
25
scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
24
default:
26
hcr = hcr_el2 & HCR_IMO;
25
return false;
27
break;
26
--
28
--
27
2.7.4
29
2.34.1
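As an aside (not part of the patch): the change really is just a case fall-through, because the target-EL routing controls consulted for an NMI are the same ones used for an IRQ, namely SCR_EL3.IRQ and HCR_EL2.IMO. A toy standalone version of that selection looks like this; the bit masks are defined locally for illustration and real code should use the architectural definitions.

#include <stdbool.h>
#include <stdio.h>

/* Bit positions for illustration only; use the architectural definitions. */
#define SCR_IRQ  (1u << 1)
#define HCR_IMO  (1u << 4)

enum excp { EXCP_IRQ, EXCP_FIQ, EXCP_NMI };

/* Pick the routing controls the way the hunk above does: NMI reuses the
 * IRQ controls, because a superpriority interrupt is always an IRQ. */
static void routing_bits(enum excp e, unsigned scr_el3, unsigned hcr_el2,
                         bool *scr, bool *hcr)
{
    switch (e) {
    case EXCP_IRQ:
    case EXCP_NMI:
        *scr = (scr_el3 & SCR_IRQ) != 0;
        *hcr = (hcr_el2 & HCR_IMO) != 0;
        break;
    default:
        *scr = *hcr = false;
        break;
    }
}

int main(void)
{
    bool scr, hcr;
    routing_bits(EXCP_NMI, SCR_IRQ, 0, &scr, &hcr);
    printf("route to EL3: %d, route to EL2: %d\n", scr, hcr);
    return 0;
}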
1
For the TT instruction we're going to need to do an MPU lookup that
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
also tells us which MPU region the access hit. This requires us
3
to do the MPU lookup without first doing the SAU security access
4
check, so pull the MPU lookup parts of get_phys_addr_pmsav8()
5
out into their own function.
6
2
7
The TT instruction also needs to know the MPU region number which
3
Add the IS and FS bits in ISR_EL1 and handle the read. With CPU_INTERRUPT_NMI or
8
the lookup hit, so provide this information to the caller of the
4
CPU_INTERRUPT_VINMI, both CPSR_I and ISR_IS must be set. With
9
MPU lookup code, even though get_phys_addr_pmsav8() doesn't
5
CPU_INTERRUPT_VFNMI, both CPSR_F and ISR_FS must be set.
10
need to know it.
11
6
7
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20240407081733.3231820-9-ruanjinjie@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 1512153879-5291-7-git-send-email-peter.maydell@linaro.org
15
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
16
---
12
---
17
target/arm/helper.c | 130 +++++++++++++++++++++++++++++++---------------------
13
target/arm/cpu.h | 2 ++
18
1 file changed, 79 insertions(+), 51 deletions(-)
14
target/arm/helper.c | 13 +++++++++++++
15
2 files changed, 15 insertions(+)
19
16
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
22
#define CPSR_N (1U << 31)
23
#define CPSR_NZCV (CPSR_N | CPSR_Z | CPSR_C | CPSR_V)
24
#define CPSR_AIF (CPSR_A | CPSR_I | CPSR_F)
25
+#define ISR_FS (1U << 9)
26
+#define ISR_IS (1U << 10)
27
28
#define CPSR_IT (CPSR_IT_0_1 | CPSR_IT_2_7)
29
#define CACHED_CPSR_BITS (CPSR_T | CPSR_AIF | CPSR_GE | CPSR_IT | CPSR_Q \
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
32
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
33
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
34
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
25
}
35
if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
26
}
36
ret |= CPSR_I;
27
37
}
28
-static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
38
+ if (cs->interrupt_request & CPU_INTERRUPT_VINMI) {
29
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
39
+ ret |= ISR_IS;
30
- hwaddr *phys_ptr, MemTxAttrs *txattrs,
40
+ ret |= CPSR_I;
31
- int *prot, uint32_t *fsr)
41
+ }
32
+static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
42
} else {
33
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
43
if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
34
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
44
ret |= CPSR_I;
35
+ int *prot, uint32_t *fsr, uint32_t *mregion)
45
}
36
{
46
+
37
+ /* Perform a PMSAv8 MPU lookup (without also doing the SAU check
47
+ if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
38
+ * that a full phys-to-virt translation does).
48
+ ret |= ISR_IS;
39
+ * mregion is (if not NULL) set to the region number which matched,
49
+ ret |= CPSR_I;
40
+ * or -1 if no region number is returned (MPU off, address did not
41
+ * hit a region, address hit in multiple regions).
42
+ */
43
ARMCPU *cpu = arm_env_get_cpu(env);
44
bool is_user = regime_is_user(env, mmu_idx);
45
uint32_t secure = regime_is_secure(env, mmu_idx);
46
int n;
47
int matchregion = -1;
48
bool hit = false;
49
- V8M_SAttributes sattrs = {};
50
51
*phys_ptr = address;
52
*prot = 0;
53
-
54
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
55
- v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
56
- if (access_type == MMU_INST_FETCH) {
57
- /* Instruction fetches always use the MMU bank and the
58
- * transaction attribute determined by the fetch address,
59
- * regardless of CPU state. This is painful for QEMU
60
- * to handle, because it would mean we need to encode
61
- * into the mmu_idx not just the (user, negpri) information
62
- * for the current security state but also that for the
63
- * other security state, which would balloon the number
64
- * of mmu_idx values needed alarmingly.
65
- * Fortunately we can avoid this because it's not actually
66
- * possible to arbitrarily execute code from memory with
67
- * the wrong security attribute: it will always generate
68
- * an exception of some kind or another, apart from the
69
- * special case of an NS CPU executing an SG instruction
70
- * in S&NSC memory. So we always just fail the translation
71
- * here and sort things out in the exception handler
72
- * (including possibly emulating an SG instruction).
73
- */
74
- if (sattrs.ns != !secure) {
75
- *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
76
- return true;
77
- }
78
- } else {
79
- /* For data accesses we always use the MMU bank indicated
80
- * by the current CPU state, but the security attributes
81
- * might downgrade a secure access to nonsecure.
82
- */
83
- if (sattrs.ns) {
84
- txattrs->secure = false;
85
- } else if (!secure) {
86
- /* NS access to S memory must fault.
87
- * Architecturally we should first check whether the
88
- * MPU information for this address indicates that we
89
- * are doing an unaligned access to Device memory, which
90
- * should generate a UsageFault instead. QEMU does not
91
- * currently check for that kind of unaligned access though.
92
- * If we added it we would need to do so as a special case
93
- * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
94
- */
95
- *fsr = M_FAKE_FSR_SFAULT;
96
- return true;
97
- }
98
- }
99
+ if (mregion) {
100
+ *mregion = -1;
101
}
102
103
/* Unlike the ARM ARM pseudocode, we don't need to check whether this
104
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
105
/* We don't need to look the attribute up in the MAIR0/MAIR1
106
* registers because that only tells us about cacheability.
107
*/
108
+ if (mregion) {
109
+ *mregion = matchregion;
110
+ }
50
+ }
111
}
51
}
112
52
113
*fsr = 0x00d; /* Permission fault */
53
if (hcr_el2 & HCR_FMO) {
114
return !(*prot & (1 << access_type));
54
if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
115
}
55
ret |= CPSR_F;
116
56
}
117
+
57
+ if (cs->interrupt_request & CPU_INTERRUPT_VFNMI) {
118
+static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
58
+ ret |= ISR_FS;
119
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
59
+ ret |= CPSR_F;
120
+ hwaddr *phys_ptr, MemTxAttrs *txattrs,
121
+ int *prot, uint32_t *fsr)
122
+{
123
+ uint32_t secure = regime_is_secure(env, mmu_idx);
124
+ V8M_SAttributes sattrs = {};
125
+
126
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
127
+ v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
128
+ if (access_type == MMU_INST_FETCH) {
129
+ /* Instruction fetches always use the MMU bank and the
130
+ * transaction attribute determined by the fetch address,
131
+ * regardless of CPU state. This is painful for QEMU
132
+ * to handle, because it would mean we need to encode
133
+ * into the mmu_idx not just the (user, negpri) information
134
+ * for the current security state but also that for the
135
+ * other security state, which would balloon the number
136
+ * of mmu_idx values needed alarmingly.
137
+ * Fortunately we can avoid this because it's not actually
138
+ * possible to arbitrarily execute code from memory with
139
+ * the wrong security attribute: it will always generate
140
+ * an exception of some kind or another, apart from the
141
+ * special case of an NS CPU executing an SG instruction
142
+ * in S&NSC memory. So we always just fail the translation
143
+ * here and sort things out in the exception handler
144
+ * (including possibly emulating an SG instruction).
145
+ */
146
+ if (sattrs.ns != !secure) {
147
+ *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
148
+ *phys_ptr = address;
149
+ *prot = 0;
150
+ return true;
151
+ }
152
+ } else {
153
+ /* For data accesses we always use the MMU bank indicated
154
+ * by the current CPU state, but the security attributes
155
+ * might downgrade a secure access to nonsecure.
156
+ */
157
+ if (sattrs.ns) {
158
+ txattrs->secure = false;
159
+ } else if (!secure) {
160
+ /* NS access to S memory must fault.
161
+ * Architecturally we should first check whether the
162
+ * MPU information for this address indicates that we
163
+ * are doing an unaligned access to Device memory, which
164
+ * should generate a UsageFault instead. QEMU does not
165
+ * currently check for that kind of unaligned access though.
166
+ * If we added it we would need to do so as a special case
167
+ * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
168
+ */
169
+ *fsr = M_FAKE_FSR_SFAULT;
170
+ *phys_ptr = address;
171
+ *prot = 0;
172
+ return true;
173
+ }
174
+ }
60
+ }
175
+ }
61
} else {
176
+
62
if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
177
+ return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
63
ret |= CPSR_F;
178
+ txattrs, prot, fsr, NULL);
179
+}
180
+
181
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
182
MMUAccessType access_type, ARMMMUIdx mmu_idx,
183
hwaddr *phys_ptr, int *prot, uint32_t *fsr)
184
--
64
--
185
2.7.4
65
2.34.1
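For the ISR_EL1 half of this pair of patches, the reported value is a simple OR of the pending lines: a pending NMI or VINMI sets ISR.IS together with the I bit, and a pending VFNMI sets ISR.FS together with the F bit. A self-contained sketch of that composition follows; the ISR_IS/ISR_FS positions are the ones added by the patch, while the PEND_* flags are invented stand-ins for cs->interrupt_request.

#include <stdint.h>
#include <stdio.h>

#define CPSR_I (1u << 7)
#define CPSR_F (1u << 6)
#define ISR_FS (1u << 9)
#define ISR_IS (1u << 10)

/* Invented pending-interrupt flags, standing in for cs->interrupt_request. */
#define PEND_IRQ   (1u << 0)
#define PEND_FIQ   (1u << 1)
#define PEND_NMI   (1u << 2)
#define PEND_VFNMI (1u << 3)

static uint32_t isr_value(uint32_t pending)
{
    uint32_t ret = 0;

    if (pending & PEND_IRQ) {
        ret |= CPSR_I;
    }
    if (pending & PEND_NMI) {
        ret |= CPSR_I | ISR_IS;   /* an NMI is an IRQ with superpriority */
    }
    if (pending & PEND_FIQ) {
        ret |= CPSR_F;
    }
    if (pending & PEND_VFNMI) {
        ret |= CPSR_F | ISR_FS;   /* a VFNMI is a FIQ with superpriority */
    }
    return ret;
}

int main(void)
{
    printf("ISR for a pending NMI: 0x%x\n", (unsigned)isr_value(PEND_NMI));
    return 0;
}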
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Now that do_ats_write() is entirely in control of whether to
3
Set or clear PSTATE.ALLINT on taking an exception to ELx according to the
4
generate a 32-bit PAR or a 64-bit PAR, we can make it use the
4
SCTLR_ELx.SPINTMASK bit.
5
correct (complicated) condition for doing so.
6
5
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
9
Message-id: 20240407081733.3231820-10-ruanjinjie@huawei.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1512503192-2239-13-git-send-email-peter.maydell@linaro.org
13
[PMM: Rebased Edgar's patch on top of get_phys_addr() refactoring;
14
use arm_s1_regime_using_lpae_format() rather than
15
regime_using_lpae_format() because the latter will assert
16
if passed ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1;
17
updated commit message appropriately]
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
11
---
20
target/arm/helper.c | 33 +++++++++++++++++++++++++++++----
12
target/arm/helper.c | 8 ++++++++
21
1 file changed, 29 insertions(+), 4 deletions(-)
13
1 file changed, 8 insertions(+)
22
14
23
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/helper.c
17
--- a/target/arm/helper.c
26
+++ b/target/arm/helper.c
18
+++ b/target/arm/helper.c
27
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
28
int prot;
20
}
29
bool ret;
21
}
30
uint64_t par64;
22
31
+ bool format64 = false;
23
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
32
MemTxAttrs attrs = {};
24
+ if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPINTMASK)) {
33
ARMMMUFaultInfo fi = {};
25
+ new_mode |= PSTATE_ALLINT;
34
ARMCacheAttrs cacheattrs = {};
26
+ } else {
35
27
+ new_mode &= ~PSTATE_ALLINT;
36
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
37
&prot, &page_size, &fi, &cacheattrs);
38
- /* TODO: this is not the correct condition to use to decide whether
39
- * to report a PAR in 64-bit or 32-bit format.
40
- */
41
- if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
42
+
43
+ if (is_a64(env)) {
44
+ format64 = true;
45
+ } else if (arm_feature(env, ARM_FEATURE_LPAE)) {
46
+ /*
47
+ * ATS1Cxx:
48
+ * * TTBCR.EAE determines whether the result is returned using the
49
+ * 32-bit or the 64-bit PAR format
50
+ * * Instructions executed in Hyp mode always use the 64bit format
51
+ *
52
+ * ATS1S2NSOxx uses the 64bit format if any of the following is true:
53
+ * * The Non-secure TTBCR.EAE bit is set to 1
54
+ * * The implementation includes EL2, and the value of HCR.VM is 1
55
+ *
56
+ * ATS1Hx always uses the 64bit format (not supported yet).
57
+ */
58
+ format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
59
+
60
+ if (arm_feature(env, ARM_FEATURE_EL2)) {
61
+ if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
62
+ format64 |= env->cp15.hcr_el2 & HCR_VM;
63
+ } else {
64
+ format64 |= arm_current_el(env) == 2;
65
+ }
66
+ }
28
+ }
67
+ }
29
+ }
68
+
30
+
69
+ if (format64) {
31
pstate_write(env, PSTATE_DAIF | new_mode);
70
/* Create a 64-bit PAR */
32
env->aarch64 = true;
71
par64 = (1 << 11); /* LPAE bit always set */
33
aarch64_restore_sp(env, new_el);
72
if (!ret) {
73
--
34
--
74
2.7.4
35
2.34.1
75
76
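A quick illustration of the SPINTMASK rule above (illustrative only, not the QEMU helpers): when FEAT_NMI is present, exception entry sets PSTATE.ALLINT if SCTLR_ELx.SPINTMASK for the target EL is 0 and clears it if SPINTMASK is 1. The ALLINT bit position below is a stand-in for the sketch; real code should use the architectural definition.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PSTATE_ALLINT (1u << 13)   /* stand-in bit position for the sketch */

/* Compute the ALLINT contribution to the PSTATE written on exception entry. */
static uint32_t entry_pstate(uint32_t new_mode, bool have_nmi, bool spintmask)
{
    if (!have_nmi) {
        return new_mode;            /* no FEAT_NMI: ALLINT does not exist */
    }
    if (!spintmask) {
        return new_mode | PSTATE_ALLINT;
    }
    return new_mode & ~PSTATE_ALLINT;
}

int main(void)
{
    printf("ALLINT set on entry: %d\n",
           !!(entry_pstate(0, true, false) & PSTATE_ALLINT));
    return 0;
}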
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Use memset() instead of a for loop to zero all of the registers.
3
Augment the GICv3's QOM device interface by adding one
4
new set of sysbus IRQ lines, to signal NMI to each CPU.
4
5
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: c076e907f355923864cb1afde31b938ffb677778.1513104804.git.alistair.francis@xilinx.com
9
Message-id: 20240407081733.3231820-11-ruanjinjie@huawei.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
hw/ssi/xilinx_spips.c | 11 +++--------
12
include/hw/intc/arm_gic_common.h | 2 ++
12
1 file changed, 3 insertions(+), 8 deletions(-)
13
include/hw/intc/arm_gicv3_common.h | 2 ++
14
hw/intc/arm_gicv3_common.c | 6 ++++++
15
3 files changed, 10 insertions(+)
13
16
14
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
17
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/xilinx_spips.c
19
--- a/include/hw/intc/arm_gic_common.h
17
+++ b/hw/ssi/xilinx_spips.c
20
+++ b/include/hw/intc/arm_gic_common.h
18
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
21
@@ -XXX,XX +XXX,XX @@ struct GICState {
19
{
22
qemu_irq parent_fiq[GIC_NCPU];
20
XilinxSPIPS *s = XILINX_SPIPS(d);
23
qemu_irq parent_virq[GIC_NCPU];
21
24
qemu_irq parent_vfiq[GIC_NCPU];
22
- int i;
25
+ qemu_irq parent_nmi[GIC_NCPU];
23
- for (i = 0; i < XLNX_SPIPS_R_MAX; i++) {
26
+ qemu_irq parent_vnmi[GIC_NCPU];
24
- s->regs[i] = 0;
27
qemu_irq maintenance_irq[GIC_NCPU];
25
- }
28
26
+ memset(s->regs, 0, sizeof(s->regs));
29
/* GICD_CTLR; for a GIC with the security extensions the NS banked version
27
30
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
28
fifo8_reset(&s->rx_fifo);
31
index XXXXXXX..XXXXXXX 100644
29
fifo8_reset(&s->rx_fifo);
32
--- a/include/hw/intc/arm_gicv3_common.h
30
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
33
+++ b/include/hw/intc/arm_gicv3_common.h
31
static void xlnx_zynqmp_qspips_reset(DeviceState *d)
34
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
32
{
35
qemu_irq parent_fiq;
33
XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(d);
36
qemu_irq parent_virq;
34
- int i;
37
qemu_irq parent_vfiq;
35
38
+ qemu_irq parent_nmi;
36
xilinx_spips_reset(d);
39
+ qemu_irq parent_vnmi;
37
40
38
- for (i = 0; i < XLNX_ZYNQMP_SPIPS_R_MAX; i++) {
41
/* Redistributor */
39
- s->regs[i] = 0;
42
uint32_t level; /* Current IRQ level */
40
- }
43
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
41
+ memset(s->regs, 0, sizeof(s->regs));
44
index XXXXXXX..XXXXXXX 100644
42
+
45
--- a/hw/intc/arm_gicv3_common.c
43
fifo8_reset(&s->rx_fifo_g);
46
+++ b/hw/intc/arm_gicv3_common.c
44
fifo8_reset(&s->rx_fifo_g);
47
@@ -XXX,XX +XXX,XX @@ void gicv3_init_irqs_and_mmio(GICv3State *s, qemu_irq_handler handler,
45
fifo32_reset(&s->fifo_g);
48
for (i = 0; i < s->num_cpu; i++) {
49
sysbus_init_irq(sbd, &s->cpu[i].parent_vfiq);
50
}
51
+ for (i = 0; i < s->num_cpu; i++) {
52
+ sysbus_init_irq(sbd, &s->cpu[i].parent_nmi);
53
+ }
54
+ for (i = 0; i < s->num_cpu; i++) {
55
+ sysbus_init_irq(sbd, &s->cpu[i].parent_vnmi);
56
+ }
57
58
memory_region_init_io(&s->iomem_dist, OBJECT(s), ops, s,
59
"gicv3_dist", 0x10000);
46
--
60
--
47
2.7.4
61
2.34.1
48
49
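A small aside on why the memset() form in the xilinx_spips change is safe: regs is a true array member of the device struct, so sizeof(s->regs) is the size of the whole register file rather than the size of a pointer. The structs below are invented purely for the illustration.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define R_MAX 64

struct dev_array   { uint32_t regs[R_MAX]; };  /* array member, like XilinxSPIPS */
struct dev_pointer { uint32_t *regs; };        /* sizeof() here would be a pointer */

int main(void)
{
    struct dev_array d;

    memset(d.regs, 0, sizeof(d.regs));          /* zeroes all R_MAX registers */
    printf("array member: %zu bytes, pointer member: %zu bytes\n",
           sizeof(d.regs), sizeof(((struct dev_pointer *)0)->regs));
    return 0;
}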
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Following the ZynqMP register spec let's ensure that all reset values
3
Wire the new NMI and VINMI interrupt lines from the GIC to each CPU if it
4
are set.
4
is not GICv2.
5
5
6
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 19836f3e0a298b13343c5a59c87425355e7fd8bd.1513104804.git.alistair.francis@xilinx.com
8
Message-id: 20240407081733.3231820-12-ruanjinjie@huawei.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
include/hw/ssi/xilinx_spips.h | 2 +-
11
hw/arm/virt.c | 10 +++++++++-
12
hw/ssi/xilinx_spips.c | 35 ++++++++++++++++++++++++++++++-----
12
1 file changed, 9 insertions(+), 1 deletion(-)
13
2 files changed, 31 insertions(+), 6 deletions(-)
14
13
15
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
14
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/ssi/xilinx_spips.h
16
--- a/hw/arm/virt.c
18
+++ b/include/hw/ssi/xilinx_spips.h
17
+++ b/hw/arm/virt.c
19
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
20
typedef struct XilinxSPIPS XilinxSPIPS;
19
21
20
/* Wire the outputs from each CPU's generic timer and the GICv3
22
#define XLNX_SPIPS_R_MAX (0x100 / 4)
21
* maintenance interrupt signal to the appropriate GIC PPI inputs,
23
-#define XLNX_ZYNQMP_SPIPS_R_MAX (0x200 / 4)
22
- * and the GIC's IRQ/FIQ/VIRQ/VFIQ interrupt outputs to the CPU's inputs.
24
+#define XLNX_ZYNQMP_SPIPS_R_MAX (0x830 / 4)
23
+ * and the GIC's IRQ/FIQ/VIRQ/VFIQ/NMI/VINMI interrupt outputs to the
25
24
+ * CPU's inputs.
26
/* Bite off 4k chunks at a time */
25
*/
27
#define LQSPI_CACHE_SIZE 1024
26
for (i = 0; i < smp_cpus; i++) {
28
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
27
DeviceState *cpudev = DEVICE(qemu_get_cpu(i));
29
index XXXXXXX..XXXXXXX 100644
28
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
30
--- a/hw/ssi/xilinx_spips.c
29
qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
31
+++ b/hw/ssi/xilinx_spips.c
30
sysbus_connect_irq(gicbusdev, i + 3 * smp_cpus,
32
@@ -XXX,XX +XXX,XX @@
31
qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
33
34
/* interrupt mechanism */
35
#define R_INTR_STATUS (0x04 / 4)
36
+#define R_INTR_STATUS_RESET (0x104)
37
#define R_INTR_EN (0x08 / 4)
38
#define R_INTR_DIS (0x0C / 4)
39
#define R_INTR_MASK (0x10 / 4)
40
@@ -XXX,XX +XXX,XX @@
41
#define R_SLAVE_IDLE_COUNT (0x24 / 4)
42
#define R_TX_THRES (0x28 / 4)
43
#define R_RX_THRES (0x2C / 4)
44
+#define R_GPIO (0x30 / 4)
45
+#define R_LPBK_DLY_ADJ (0x38 / 4)
46
+#define R_LPBK_DLY_ADJ_RESET (0x33)
47
#define R_TXD1 (0x80 / 4)
48
#define R_TXD2 (0x84 / 4)
49
#define R_TXD3 (0x88 / 4)
50
@@ -XXX,XX +XXX,XX @@
51
#define R_GQSPI_IER (0x108 / 4)
52
#define R_GQSPI_IDR (0x10c / 4)
53
#define R_GQSPI_IMR (0x110 / 4)
54
+#define R_GQSPI_IMR_RESET (0xfbe)
55
#define R_GQSPI_TX_THRESH (0x128 / 4)
56
#define R_GQSPI_RX_THRESH (0x12c / 4)
57
+#define R_GQSPI_GPIO (0x130 / 4)
58
+#define R_GQSPI_LPBK_DLY_ADJ (0x138 / 4)
59
+#define R_GQSPI_LPBK_DLY_ADJ_RESET (0x33)
60
#define R_GQSPI_CNFG (0x100 / 4)
61
FIELD(GQSPI_CNFG, MODE_EN, 30, 2)
62
FIELD(GQSPI_CNFG, GEN_FIFO_START_MODE, 29, 1)
63
@@ -XXX,XX +XXX,XX @@
64
FIELD(GQSPI_GF_SNAPSHOT, EXPONENT, 9, 1)
65
FIELD(GQSPI_GF_SNAPSHOT, DATA_XFER, 8, 1)
66
FIELD(GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA, 0, 8)
67
-#define R_GQSPI_MOD_ID (0x168 / 4)
68
-#define R_GQSPI_MOD_ID_VALUE 0x010A0000
69
+#define R_GQSPI_MOD_ID (0x1fc / 4)
70
+#define R_GQSPI_MOD_ID_RESET (0x10a0000)
71
+
32
+
72
+#define R_QSPIDMA_DST_CTRL (0x80c / 4)
33
+ if (vms->gic_version != VIRT_GIC_VERSION_2) {
73
+#define R_QSPIDMA_DST_CTRL_RESET (0x803ffa00)
34
+ sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus,
74
+#define R_QSPIDMA_DST_I_MASK (0x820 / 4)
35
+ qdev_get_gpio_in(cpudev, ARM_CPU_NMI));
75
+#define R_QSPIDMA_DST_I_MASK_RESET (0xfe)
36
+ sysbus_connect_irq(gicbusdev, i + 5 * smp_cpus,
76
+#define R_QSPIDMA_DST_CTRL2 (0x824 / 4)
37
+ qdev_get_gpio_in(cpudev, ARM_CPU_VINMI));
77
+#define R_QSPIDMA_DST_CTRL2_RESET (0x081bfff8)
38
+ }
78
+
39
}
79
/* size of TXRX FIFOs */
40
80
#define RXFF_A (128)
41
fdt_add_gic_node(vms);
81
#define TXFF_A (128)
82
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
83
fifo8_reset(&s->rx_fifo_g);
84
fifo8_reset(&s->rx_fifo_g);
85
fifo32_reset(&s->fifo_g);
86
+ s->regs[R_INTR_STATUS] = R_INTR_STATUS_RESET;
87
+ s->regs[R_GPIO] = 1;
88
+ s->regs[R_LPBK_DLY_ADJ] = R_LPBK_DLY_ADJ_RESET;
89
+ s->regs[R_GQSPI_GFIFO_THRESH] = 0x10;
90
+ s->regs[R_MOD_ID] = 0x01090101;
91
+ s->regs[R_GQSPI_IMR] = R_GQSPI_IMR_RESET;
92
s->regs[R_GQSPI_TX_THRESH] = 1;
93
s->regs[R_GQSPI_RX_THRESH] = 1;
94
- s->regs[R_GQSPI_GFIFO_THRESH] = 1;
95
- s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
96
- s->regs[R_MOD_ID] = 0x01090101;
97
+ s->regs[R_GQSPI_GPIO] = 1;
98
+ s->regs[R_GQSPI_LPBK_DLY_ADJ] = R_GQSPI_LPBK_DLY_ADJ_RESET;
99
+ s->regs[R_GQSPI_MOD_ID] = R_GQSPI_MOD_ID_RESET;
100
+ s->regs[R_QSPIDMA_DST_CTRL] = R_QSPIDMA_DST_CTRL_RESET;
101
+ s->regs[R_QSPIDMA_DST_I_MASK] = R_QSPIDMA_DST_I_MASK_RESET;
102
+ s->regs[R_QSPIDMA_DST_CTRL2] = R_QSPIDMA_DST_CTRL2_RESET;
103
s->man_start_com_g = false;
104
s->gqspi_irqline = 0;
105
xlnx_zynqmp_qspips_update_ixr(s);
106
--
42
--
107
2.7.4
43
2.34.1
108
109
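The virt wiring above relies on the GIC's sysbus outputs being grouped per signal type: for N CPUs, outputs [0, N) are IRQ, then FIQ, VIRQ and VFIQ, with the new NMI and VINMI groups following at [4N, 5N) and [5N, 6N). A sketch of that index arithmetic in plain C (no QEMU types; the enum is local to the example):

#include <stdio.h>

enum gic_out { GIC_OUT_IRQ, GIC_OUT_FIQ, GIC_OUT_VIRQ, GIC_OUT_VFIQ,
               GIC_OUT_NMI, GIC_OUT_VINMI };

/* Sysbus output index for signal 'kind' of CPU 'cpu' with 'smp_cpus' CPUs. */
static int gic_sysbus_irq_index(enum gic_out kind, int cpu, int smp_cpus)
{
    return cpu + (int)kind * smp_cpus;
}

int main(void)
{
    int smp_cpus = 4;

    printf("NMI output for CPU 2: %d\n",
           gic_sysbus_irq_index(GIC_OUT_NMI, 2, smp_cpus));  /* 2 + 4*4 = 18 */
    return 0;
}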
1
In do_ats_write(), rather than using the FSR value from get_phys_addr(),
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
construct the PAR values using the information in the ARMMMUFaultInfo
3
struct. This allows us to create a PAR of the correct format regardless
4
of what the translation table format is.
5
2
6
For the moment we leave the condition for "when should this be a
3
According to Arm GIC section 4.6.3 Interrupt superpriority, the interrupt
7
64 bit PAR" as it was previously; this will need to be fixed to
4
with superpriority is always IRQ, never FIQ, so the NMI exception trap entry
8
properly support AArch32 Hyp mode.
5
behaves like IRQ. A VINMI (vIRQ with Superpriority) can be raised from the
6
GIC or come from the hcrx_el2.HCRX_VINMI bit, while a VFNMI (vFIQ with Superpriority)
7
comes from the hcrx_el2.HCRX_VFNMI bit.
9
8
9
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20240407081733.3231820-13-ruanjinjie@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-11-git-send-email-peter.maydell@linaro.org
15
---
14
---
16
target/arm/helper.c | 16 ++++++++++------
15
target/arm/helper.c | 3 +++
17
1 file changed, 10 insertions(+), 6 deletions(-)
16
1 file changed, 3 insertions(+)
18
17
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
20
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
21
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
22
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
24
hwaddr phys_addr;
23
break;
25
target_ulong page_size;
24
case EXCP_IRQ:
26
int prot;
25
case EXCP_VIRQ:
27
- uint32_t fsr;
26
+ case EXCP_NMI:
28
+ uint32_t fsr_unused;
27
+ case EXCP_VINMI:
29
bool ret;
28
addr += 0x80;
30
uint64_t par64;
29
break;
31
MemTxAttrs attrs = {};
30
case EXCP_FIQ:
32
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
31
case EXCP_VFIQ:
33
ARMCacheAttrs cacheattrs = {};
32
+ case EXCP_VFNMI:
34
33
addr += 0x100;
35
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
34
break;
36
- &prot, &page_size, &fsr, &fi, &cacheattrs);
35
case EXCP_VSERR:
37
+ &prot, &page_size, &fsr_unused, &fi, &cacheattrs);
38
+ /* TODO: this is not the correct condition to use to decide whether
39
+ * to report a PAR in 64-bit or 32-bit format.
40
+ */
41
if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
42
- /* fsr is a DFSR/IFSR value for the long descriptor
43
- * translation table format, but with WnR always clear.
44
- * Convert it to a 64-bit PAR.
45
- */
46
+ /* Create a 64-bit PAR */
47
par64 = (1 << 11); /* LPAE bit always set */
48
if (!ret) {
49
par64 |= phys_addr & ~0xfffULL;
50
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
51
par64 |= (uint64_t)cacheattrs.attrs << 56; /* ATTR */
52
par64 |= cacheattrs.shareability << 7; /* SH */
53
} else {
54
+ uint32_t fsr = arm_fi_to_lfsc(&fi);
55
+
56
par64 |= 1; /* F */
57
par64 |= (fsr & 0x3f) << 1; /* FS */
58
/* Note that S2WLK and FSTAGE are always zero, because we don't
59
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
60
par64 |= (1 << 9); /* NS */
61
}
62
} else {
63
+ uint32_t fsr = arm_fi_to_sfsc(&fi);
64
+
65
par64 = ((fsr & (1 << 10)) >> 5) | ((fsr & (1 << 12)) >> 6) |
66
((fsr & 0xf) << 1) | 1;
67
}
68
--
36
--
69
2.7.4
37
2.34.1
70
71
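For the trap-entry patch, the vector placement is the whole change: an NMI or VINMI is delivered through the IRQ vector slot (offset 0x80 within the selected vector block) and a VFNMI through the FIQ slot (offset 0x100). A minimal sketch of that offset selection, with a locally defined exception enum standing in for the real EXCP_* values:

#include <stdio.h>
#include <stdint.h>

enum excp { EXCP_IRQ, EXCP_VIRQ, EXCP_NMI, EXCP_VINMI,
            EXCP_FIQ, EXCP_VFIQ, EXCP_VFNMI };

/* Offset within the AArch64 vector block, as in the hunk above. */
static uint32_t vector_offset(enum excp e)
{
    switch (e) {
    case EXCP_IRQ:
    case EXCP_VIRQ:
    case EXCP_NMI:
    case EXCP_VINMI:
        return 0x80;   /* IRQ slot */
    case EXCP_FIQ:
    case EXCP_VFIQ:
    case EXCP_VFNMI:
        return 0x100;  /* FIQ slot */
    default:
        return 0;
    }
}

int main(void)
{
    printf("NMI vector offset: 0x%x\n", (unsigned)vector_offset(EXCP_NMI));
    return 0;
}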
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Update the reset value to match the latest ZynqMP register spec.
3
Add a property has-nmi to the GICv3 device, and use this to set
4
the NMI bit in the GICD_TYPER register. This isn't visible to
5
guests yet because the property defaults to false and we won't
6
set it in the board code until we've landed all of the changes
7
needed to implement FEAT_GICV3_NMI.
4
8
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
9
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
Reviewed-by: KONRAD Frederic <frederic.konrad@adacore.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: c03e51d041db7f055596084891aeb1e856e32b9f.1513104804.git.alistair.francis@xilinx.com
12
Message-id: 20240407081733.3231820-14-ruanjinjie@huawei.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
14
---
11
hw/ssi/xilinx_spips.c | 1 +
15
hw/intc/gicv3_internal.h | 1 +
12
1 file changed, 1 insertion(+)
16
include/hw/intc/arm_gicv3_common.h | 1 +
17
hw/intc/arm_gicv3_common.c | 1 +
18
hw/intc/arm_gicv3_dist.c | 2 ++
19
4 files changed, 5 insertions(+)
13
20
14
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
21
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/xilinx_spips.c
23
--- a/hw/intc/gicv3_internal.h
17
+++ b/hw/ssi/xilinx_spips.c
24
+++ b/hw/intc/gicv3_internal.h
18
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
25
@@ -XXX,XX +XXX,XX @@
19
s->regs[R_GQSPI_RX_THRESH] = 1;
26
#define GICD_CTLR_E1NWF (1U << 7)
20
s->regs[R_GQSPI_GFIFO_THRESH] = 1;
27
#define GICD_CTLR_RWP (1U << 31)
21
s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
28
22
+ s->regs[R_MOD_ID] = 0x01090101;
29
+#define GICD_TYPER_NMI_SHIFT 9
23
s->man_start_com_g = false;
30
#define GICD_TYPER_LPIS_SHIFT 17
24
s->gqspi_irqline = 0;
31
25
xlnx_zynqmp_qspips_update_ixr(s);
32
/* 16 bits EventId */
33
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/include/hw/intc/arm_gicv3_common.h
36
+++ b/include/hw/intc/arm_gicv3_common.h
37
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
38
uint32_t num_irq;
39
uint32_t revision;
40
bool lpi_enable;
41
+ bool nmi_support;
42
bool security_extn;
43
bool force_8bit_prio;
44
bool irq_reset_nonsecure;
45
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/intc/arm_gicv3_common.c
48
+++ b/hw/intc/arm_gicv3_common.c
49
@@ -XXX,XX +XXX,XX @@ static Property arm_gicv3_common_properties[] = {
50
DEFINE_PROP_UINT32("num-irq", GICv3State, num_irq, 32),
51
DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
52
DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
53
+ DEFINE_PROP_BOOL("has-nmi", GICv3State, nmi_support, 0),
54
DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
55
/*
56
* Compatibility property: force 8 bits of physical priority, even
57
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/intc/arm_gicv3_dist.c
60
+++ b/hw/intc/arm_gicv3_dist.c
61
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
62
* by GICD_TYPER.IDbits)
63
* MBIS == 0 (message-based SPIs not supported)
64
* SecurityExtn == 1 if security extns supported
65
+ * NMI = 1 if Non-maskable interrupt property is supported
66
* CPUNumber == 0 since for us ARE is always 1
67
* ITLinesNumber == (((max SPI IntID + 1) / 32) - 1)
68
*/
69
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
70
bool dvis = s->revision >= 4;
71
72
*data = (1 << 25) | (1 << 24) | (dvis << 18) | (sec_extn << 10) |
73
+ (s->nmi_support << GICD_TYPER_NMI_SHIFT) |
74
(s->lpi_enable << GICD_TYPER_LPIS_SHIFT) |
75
(0xf << 19) | itlinesnumber;
76
return true;
26
--
77
--
27
2.7.4
78
2.34.1
28
29
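The has-nmi property only becomes guest-visible through GICD_TYPER: bit 9 (GICD_TYPER_NMI_SHIFT) is set when the distributor advertises non-maskable interrupts, next to the existing LPIS bit at 17. A rough sketch composing just those two fields; everything else in TYPER is deliberately omitted.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define GICD_TYPER_NMI_SHIFT  9
#define GICD_TYPER_LPIS_SHIFT 17

/* Compose only the two feature bits discussed in the patch. */
static uint32_t gicd_typer_feature_bits(bool nmi_support, bool lpi_enable)
{
    return ((uint32_t)nmi_support << GICD_TYPER_NMI_SHIFT) |
           ((uint32_t)lpi_enable << GICD_TYPER_LPIS_SHIFT);
}

int main(void)
{
    printf("TYPER feature bits with NMI and LPI: 0x%x\n",
           (unsigned)gicd_typer_feature_bits(true, true));
    return 0;
}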
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Add support for the ZynqMP QSPI (consisting of the Generic QSPI and Legacy
3
So far, there is no FEAT_GICv3_NMI support in the in-kernel GIC, so make it
4
QSPI) and connect Numonyx n25q512a11 flashes to it.
4
an error to try to set has-nmi=true for the KVM GICv3.
5
5
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Message-id: 20240407081733.3231820-15-ruanjinjie@huawei.com
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20171126231634.9531-14-frasse.iglesias@gmail.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
10
---
14
include/hw/arm/xlnx-zynqmp.h | 5 +++++
11
hw/intc/arm_gicv3_kvm.c | 5 +++++
15
hw/arm/xlnx-zcu102.c | 23 +++++++++++++++++++++++
12
1 file changed, 5 insertions(+)
16
hw/arm/xlnx-zynqmp.c | 26 ++++++++++++++++++++++++++
17
3 files changed, 54 insertions(+)
18
13
19
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
14
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/include/hw/arm/xlnx-zynqmp.h
16
--- a/hw/intc/arm_gicv3_kvm.c
22
+++ b/include/hw/arm/xlnx-zynqmp.h
17
+++ b/hw/intc/arm_gicv3_kvm.c
23
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
24
#define XLNX_ZYNQMP_NUM_SDHCI 2
19
return;
25
#define XLNX_ZYNQMP_NUM_SPIS 2
26
27
+#define XLNX_ZYNQMP_NUM_QSPI_BUS 2
28
+#define XLNX_ZYNQMP_NUM_QSPI_BUS_CS 2
29
+#define XLNX_ZYNQMP_NUM_QSPI_FLASH 4
30
+
31
#define XLNX_ZYNQMP_NUM_OCM_BANKS 4
32
#define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
33
#define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
34
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPState {
35
SysbusAHCIState sata;
36
SDHCIState sdhci[XLNX_ZYNQMP_NUM_SDHCI];
37
XilinxSPIPS spi[XLNX_ZYNQMP_NUM_SPIS];
38
+ XlnxZynqMPQSPIPS qspi;
39
XlnxDPState dp;
40
XlnxDPDMAState dpdma;
41
42
diff --git a/hw/arm/xlnx-zcu102.c b/hw/arm/xlnx-zcu102.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/xlnx-zcu102.c
45
+++ b/hw/arm/xlnx-zcu102.c
46
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(XlnxZCU102 *s, MachineState *machine)
47
sysbus_connect_irq(SYS_BUS_DEVICE(&s->soc.spi[i]), 1, cs_line);
48
}
20
}
49
21
50
+ for (i = 0; i < XLNX_ZYNQMP_NUM_QSPI_FLASH; i++) {
22
+ if (s->nmi_support) {
51
+ SSIBus *spi_bus;
23
+ error_setg(errp, "NMI is not supported with the in-kernel GIC");
52
+ DeviceState *flash_dev;
24
+ return;
53
+ qemu_irq cs_line;
54
+ DriveInfo *dinfo = drive_get_next(IF_MTD);
55
+ int bus = i / XLNX_ZYNQMP_NUM_QSPI_BUS_CS;
56
+ gchar *bus_name = g_strdup_printf("qspi%d", bus);
57
+
58
+ spi_bus = (SSIBus *)qdev_get_child_bus(DEVICE(&s->soc), bus_name);
59
+ g_free(bus_name);
60
+
61
+ flash_dev = ssi_create_slave_no_init(spi_bus, "n25q512a11");
62
+ if (dinfo) {
63
+ qdev_prop_set_drive(flash_dev, "drive", blk_by_legacy_dinfo(dinfo),
64
+ &error_fatal);
65
+ }
66
+ qdev_init_nofail(flash_dev);
67
+
68
+ cs_line = qdev_get_gpio_in_named(flash_dev, SSI_GPIO_CS, 0);
69
+
70
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->soc.qspi), i + 1, cs_line);
71
+ }
25
+ }
72
+
26
+
73
/* TODO create and connect IDE devices for ide_drive_get() */
27
gicv3_init_irqs_and_mmio(s, kvm_arm_gicv3_set_irq, NULL);
74
28
75
xlnx_zcu102_binfo.ram_size = ram_size;
29
for (i = 0; i < s->num_cpu; i++) {
76
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/hw/arm/xlnx-zynqmp.c
79
+++ b/hw/arm/xlnx-zynqmp.c
80
@@ -XXX,XX +XXX,XX @@
81
#define SATA_ADDR 0xFD0C0000
82
#define SATA_NUM_PORTS 2
83
84
+#define QSPI_ADDR 0xff0f0000
85
+#define LQSPI_ADDR 0xc0000000
86
+#define QSPI_IRQ 15
87
+
88
#define DP_ADDR 0xfd4a0000
89
#define DP_IRQ 113
90
91
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
92
qdev_set_parent_bus(DEVICE(&s->spi[i]), sysbus_get_default());
93
}
94
95
+ object_initialize(&s->qspi, sizeof(s->qspi), TYPE_XLNX_ZYNQMP_QSPIPS);
96
+ qdev_set_parent_bus(DEVICE(&s->qspi), sysbus_get_default());
97
+
98
object_initialize(&s->dp, sizeof(s->dp), TYPE_XLNX_DP);
99
qdev_set_parent_bus(DEVICE(&s->dp), sysbus_get_default());
100
101
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
102
g_free(bus_name);
103
}
104
105
+ object_property_set_bool(OBJECT(&s->qspi), true, "realized", &err);
106
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->qspi), 0, QSPI_ADDR);
107
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->qspi), 1, LQSPI_ADDR);
108
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->qspi), 0, gic_spi[QSPI_IRQ]);
109
+
110
+ for (i = 0; i < XLNX_ZYNQMP_NUM_QSPI_BUS; i++) {
111
+ gchar *bus_name;
112
+ gchar *target_bus;
113
+
114
+ /* Alias controller SPI bus to the SoC itself */
115
+ bus_name = g_strdup_printf("qspi%d", i);
116
+ target_bus = g_strdup_printf("spi%d", i);
117
+ object_property_add_alias(OBJECT(s), bus_name,
118
+ OBJECT(&s->qspi), target_bus,
119
+ &error_abort);
120
+ g_free(bus_name);
121
+ g_free(target_bus);
122
+ }
123
+
124
object_property_set_bool(OBJECT(&s->dp), true, "realized", &err);
125
if (err) {
126
error_propagate(errp, err);
127
--
30
--
128
2.7.4
31
2.34.1
129
130
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Add support for continuous read out of the RDSR and READ_FSR status
3
An SPI, PPI or SGI interrupt can have the non-maskable property, so maintain the
4
registers until the chip select is deasserted. This feature is supported
4
non-maskable property in PendingIrq and GICR/GICD. Since this adds new device
5
by, amongst others, one or more flash types manufactured by Numonyx (Micron),
5
state, it also needs to be migrated, so save the NMI info in
6
Winbond, SST, Gigadevice, Eon and Macronix.
6
vmstate_gicv3_cpu and vmstate_gicv3.
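
Regarding the m25p80 continuous status read described above: a minimal,
self-contained sketch of the idea, with invented names rather than the real
Flash state (the actual change is in the m25p80.c hunks below). Once an
RDSR/READ_FSR-style command has started, every further transfer re-serves the
status byte until chip select is deasserted.

#include <stdint.h>
#include <stdbool.h>

/* Sketch only: hypothetical device state, not the real m25p80 Flash struct. */
struct status_loop {
    uint8_t status;          /* emulated status register value */
    bool data_read_loop;     /* set when an RDSR/READ_FSR read begins */
};

/* One byte clocked out while chip select stays asserted. */
static uint8_t loop_transfer(struct status_loop *s)
{
    return s->data_read_loop ? s->status : 0xff;
}

/* Deasserting chip select ends the continuous read. */
static void loop_cs_deassert(struct status_loop *s)
{
    s->data_read_loop = false;
}
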
7
7
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
9
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
9
Acked-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
Message-id: 20240407081733.3231820-16-ruanjinjie@huawei.com
12
Message-id: 20171126231634.9531-2-frasse.iglesias@gmail.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
13
---
15
hw/block/m25p80.c | 39 ++++++++++++++++++++++++++++++++++++++-
14
include/hw/intc/arm_gicv3_common.h | 4 ++++
16
1 file changed, 38 insertions(+), 1 deletion(-)
15
hw/intc/arm_gicv3_common.c | 38 ++++++++++++++++++++++++++++++
16
2 files changed, 42 insertions(+)
17
17
18
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
18
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
19
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/block/m25p80.c
20
--- a/include/hw/intc/arm_gicv3_common.h
21
+++ b/hw/block/m25p80.c
21
+++ b/include/hw/intc/arm_gicv3_common.h
22
@@ -XXX,XX +XXX,XX @@ typedef struct Flash {
22
@@ -XXX,XX +XXX,XX @@ typedef struct {
23
uint8_t data[M25P80_INTERNAL_DATA_BUFFER_SZ];
23
int irq;
24
uint32_t len;
24
uint8_t prio;
25
uint32_t pos;
25
int grp;
26
+ bool data_read_loop;
26
+ bool nmi;
27
uint8_t needed_bytes;
27
} PendingIrq;
28
uint8_t cmd_in_progress;
28
29
uint32_t cur_addr;
29
struct GICv3CPUState {
30
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
30
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
31
}
31
uint32_t gicr_ienabler0;
32
s->pos = 0;
32
uint32_t gicr_ipendr0;
33
s->len = 1;
33
uint32_t gicr_iactiver0;
34
+ s->data_read_loop = true;
34
+ uint32_t gicr_inmir0;
35
s->state = STATE_READING_DATA;
35
uint32_t edge_trigger; /* ICFGR0 and ICFGR1 even bits */
36
break;
36
uint32_t gicr_igrpmodr0;
37
37
uint32_t gicr_nsacr;
38
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
38
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
39
}
39
GIC_DECLARE_BITMAP(active); /* GICD_ISACTIVER */
40
s->pos = 0;
40
GIC_DECLARE_BITMAP(level); /* Current level */
41
s->len = 1;
41
GIC_DECLARE_BITMAP(edge_trigger); /* GICD_ICFGR even bits */
42
+ s->data_read_loop = true;
42
+ GIC_DECLARE_BITMAP(nmi); /* GICD_INMIR */
43
s->state = STATE_READING_DATA;
43
uint8_t gicd_ipriority[GICV3_MAXIRQ];
44
break;
44
uint64_t gicd_irouter[GICV3_MAXIRQ];
45
45
/* Cached information: pointer to the cpu i/f for the CPUs specified
46
@@ -XXX,XX +XXX,XX @@ static int m25p80_cs(SSISlave *ss, bool select)
46
@@ -XXX,XX +XXX,XX @@ GICV3_BITMAP_ACCESSORS(pending)
47
s->pos = 0;
47
GICV3_BITMAP_ACCESSORS(active)
48
s->state = STATE_IDLE;
48
GICV3_BITMAP_ACCESSORS(level)
49
flash_sync_dirty(s, -1);
49
GICV3_BITMAP_ACCESSORS(edge_trigger)
50
+ s->data_read_loop = false;
50
+GICV3_BITMAP_ACCESSORS(nmi)
51
52
#define TYPE_ARM_GICV3_COMMON "arm-gicv3-common"
53
typedef struct ARMGICv3CommonClass ARMGICv3CommonClass;
54
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/hw/intc/arm_gicv3_common.c
57
+++ b/hw/intc/arm_gicv3_common.c
58
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_gicv3_gicv4 = {
51
}
59
}
52
53
DB_PRINT_L(0, "%sselect\n", select ? "de" : "");
54
@@ -XXX,XX +XXX,XX @@ static uint32_t m25p80_transfer8(SSISlave *ss, uint32_t tx)
55
s->pos++;
56
if (s->pos == s->len) {
57
s->pos = 0;
58
- s->state = STATE_IDLE;
59
+ if (!s->data_read_loop) {
60
+ s->state = STATE_IDLE;
61
+ }
62
}
63
break;
64
65
@@ -XXX,XX +XXX,XX @@ static Property m25p80_properties[] = {
66
DEFINE_PROP_END_OF_LIST(),
67
};
60
};
68
61
69
+static int m25p80_pre_load(void *opaque)
62
+static bool gicv3_cpu_nmi_needed(void *opaque)
70
+{
63
+{
71
+ Flash *s = (Flash *)opaque;
64
+ GICv3CPUState *cs = opaque;
72
+
65
+
73
+ s->data_read_loop = false;
66
+ return cs->gic->nmi_support;
74
+ return 0;
75
+}
67
+}
76
+
68
+
77
+static bool m25p80_data_read_loop_needed(void *opaque)
69
+static const VMStateDescription vmstate_gicv3_cpu_nmi = {
78
+{
70
+ .name = "arm_gicv3_cpu/nmi",
79
+ Flash *s = (Flash *)opaque;
80
+
81
+ return s->data_read_loop;
82
+}
83
+
84
+static const VMStateDescription vmstate_m25p80_data_read_loop = {
85
+ .name = "m25p80/data_read_loop",
86
+ .version_id = 1,
71
+ .version_id = 1,
87
+ .minimum_version_id = 1,
72
+ .minimum_version_id = 1,
88
+ .needed = m25p80_data_read_loop_needed,
73
+ .needed = gicv3_cpu_nmi_needed,
89
+ .fields = (VMStateField[]) {
74
+ .fields = (const VMStateField[]) {
90
+ VMSTATE_BOOL(data_read_loop, Flash),
75
+ VMSTATE_UINT32(gicr_inmir0, GICv3CPUState),
91
+ VMSTATE_END_OF_LIST()
76
+ VMSTATE_END_OF_LIST()
92
+ }
77
+ }
93
+};
78
+};
94
+
79
+
95
static const VMStateDescription vmstate_m25p80 = {
80
static const VMStateDescription vmstate_gicv3_cpu = {
96
.name = "m25p80",
81
.name = "arm_gicv3_cpu",
97
.version_id = 0,
82
.version_id = 1,
98
.minimum_version_id = 0,
83
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3_cpu = {
99
.pre_save = m25p80_pre_save,
84
&vmstate_gicv3_cpu_virt,
100
+ .pre_load = m25p80_pre_load,
85
&vmstate_gicv3_cpu_sre_el1,
101
.fields = (VMStateField[]) {
86
&vmstate_gicv3_gicv4,
102
VMSTATE_UINT8(state, Flash),
87
+ &vmstate_gicv3_cpu_nmi,
103
VMSTATE_UINT8_ARRAY(data, Flash, M25P80_INTERNAL_DATA_BUFFER_SZ),
88
NULL
104
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m25p80 = {
105
VMSTATE_UINT8(spansion_cr3nv, Flash),
106
VMSTATE_UINT8(spansion_cr4nv, Flash),
107
VMSTATE_END_OF_LIST()
108
+ },
109
+ .subsections = (const VMStateDescription * []) {
110
+ &vmstate_m25p80_data_read_loop,
111
+ NULL
112
}
89
}
113
};
90
};
114
91
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_gicv3_gicd_no_migration_shift_bug = {
92
}
93
};
94
95
+static bool gicv3_nmi_needed(void *opaque)
96
+{
97
+ GICv3State *cs = opaque;
98
+
99
+ return cs->nmi_support;
100
+}
101
+
102
+const VMStateDescription vmstate_gicv3_gicd_nmi = {
103
+ .name = "arm_gicv3/gicd_nmi",
104
+ .version_id = 1,
105
+ .minimum_version_id = 1,
106
+ .needed = gicv3_nmi_needed,
107
+ .fields = (const VMStateField[]) {
108
+ VMSTATE_UINT32_ARRAY(nmi, GICv3State, GICV3_BMP_SIZE),
109
+ VMSTATE_END_OF_LIST()
110
+ }
111
+};
112
+
113
static const VMStateDescription vmstate_gicv3 = {
114
.name = "arm_gicv3",
115
.version_id = 1,
116
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3 = {
117
},
118
.subsections = (const VMStateDescription * const []) {
119
&vmstate_gicv3_gicd_no_migration_shift_bug,
120
+ &vmstate_gicv3_gicd_nmi,
121
NULL
122
}
123
};
115
--
124
--
116
2.7.4
125
2.34.1
117
118
Deleted patch
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
2
1
3
Add support for SST READ ID 0x90/0xAB commands for reading out the flash
4
manufacturer ID and device ID.
5
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20171126231634.9531-3-frasse.iglesias@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/block/m25p80.c | 32 ++++++++++++++++++++++++++++++++
12
1 file changed, 32 insertions(+)
13
14
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/block/m25p80.c
17
+++ b/hw/block/m25p80.c
18
@@ -XXX,XX +XXX,XX @@ typedef enum {
19
DPP = 0xa2,
20
QPP = 0x32,
21
QPP_4 = 0x34,
22
+ RDID_90 = 0x90,
23
+ RDID_AB = 0xab,
24
25
ERASE_4K = 0x20,
26
ERASE4_4K = 0x21,
27
@@ -XXX,XX +XXX,XX @@ typedef enum {
28
MAN_MACRONIX,
29
MAN_NUMONYX,
30
MAN_WINBOND,
31
+ MAN_SST,
32
MAN_GENERIC,
33
} Manufacturer;
34
35
@@ -XXX,XX +XXX,XX @@ static inline Manufacturer get_man(Flash *s)
36
return MAN_SPANSION;
37
case 0xC2:
38
return MAN_MACRONIX;
39
+ case 0xBF:
40
+ return MAN_SST;
41
default:
42
return MAN_GENERIC;
43
}
44
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
45
case WEVCR:
46
s->enh_volatile_cfg = s->data[0];
47
break;
48
+ case RDID_90:
49
+ case RDID_AB:
50
+ if (get_man(s) == MAN_SST) {
51
+ if (s->cur_addr <= 1) {
52
+ if (s->cur_addr) {
53
+ s->data[0] = s->pi->id[2];
54
+ s->data[1] = s->pi->id[0];
55
+ } else {
56
+ s->data[0] = s->pi->id[0];
57
+ s->data[1] = s->pi->id[2];
58
+ }
59
+ s->pos = 0;
60
+ s->len = 2;
61
+ s->data_read_loop = true;
62
+ s->state = STATE_READING_DATA;
63
+ } else {
64
+ qemu_log_mask(LOG_GUEST_ERROR,
65
+ "M25P80: Invalid read id address\n");
66
+ }
67
+ } else {
68
+ qemu_log_mask(LOG_GUEST_ERROR,
69
+ "M25P80: Read id (command 0x90/0xAB) is not supported"
70
+ " by device\n");
71
+ }
72
+ break;
73
default:
74
break;
75
}
76
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
77
case PP4:
78
case PP4_4:
79
case DIE_ERASE:
80
+ case RDID_90:
81
+ case RDID_AB:
82
s->needed_bytes = get_addr_length(s);
83
s->pos = 0;
84
s->len = 0;
85
--
86
2.7.4
87
88
Deleted patch
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
2
1
3
Add support for the bank address register access commands (BRRD/BRWR) and
4
the BULK_ERASE (0x60) command.
5
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Message-id: 20171126231634.9531-4-frasse.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/block/m25p80.c | 7 +++++++
14
1 file changed, 7 insertions(+)
15
16
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/block/m25p80.c
19
+++ b/hw/block/m25p80.c
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
21
WRDI = 0x4,
22
RDSR = 0x5,
23
WREN = 0x6,
24
+ BRRD = 0x16,
25
+ BRWR = 0x17,
26
JEDEC_READ = 0x9f,
27
+ BULK_ERASE_60 = 0x60,
28
BULK_ERASE = 0xc7,
29
READ_FSR = 0x70,
30
RDCR = 0x15,
31
@@ -XXX,XX +XXX,XX @@ static void complete_collecting_data(Flash *s)
32
s->write_enable = false;
33
}
34
break;
35
+ case BRWR:
36
case EXTEND_ADDR_WRITE:
37
s->ear = s->data[0];
38
break;
39
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
40
s->state = STATE_READING_DATA;
41
break;
42
43
+ case BULK_ERASE_60:
44
case BULK_ERASE:
45
if (s->write_enable) {
46
DB_PRINT_L(0, "chip erase\n");
47
@@ -XXX,XX +XXX,XX @@ static void decode_new_cmd(Flash *s, uint32_t value)
48
case EX_4BYTE_ADDR:
49
s->four_bytes_address_mode = false;
50
break;
51
+ case BRRD:
52
case EXTEND_ADDR_READ:
53
s->data[0] = s->ear;
54
s->pos = 0;
55
s->len = 1;
56
s->state = STATE_READING_DATA;
57
break;
58
+ case BRWR:
59
case EXTEND_ADDR_WRITE:
60
if (s->write_enable) {
61
s->needed_bytes = 1;
62
--
63
2.7.4
64
65
1
From: Zhaoshenglong <zhaoshenglong@huawei.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Since I'm no longer working as an assignee at Linaro, replace the Linaro email
3
Add the GICR_INMIR0 register and support access to it.
4
address with my personal one.
5
4
6
Signed-off-by: Zhaoshenglong <zhaoshenglong@huawei.com>
5
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Message-id: 1513058845-9768-1-git-send-email-zhaoshenglong@huawei.com
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20240407081733.3231820-17-ruanjinjie@huawei.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
MAINTAINERS | 2 +-
11
hw/intc/gicv3_internal.h | 1 +
11
1 file changed, 1 insertion(+), 1 deletion(-)
12
hw/intc/arm_gicv3_redist.c | 19 +++++++++++++++++++
13
2 files changed, 20 insertions(+)
12
14
13
diff --git a/MAINTAINERS b/MAINTAINERS
15
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/MAINTAINERS
17
--- a/hw/intc/gicv3_internal.h
16
+++ b/MAINTAINERS
18
+++ b/hw/intc/gicv3_internal.h
17
@@ -XXX,XX +XXX,XX @@ F: include/hw/*/xlnx*.h
19
@@ -XXX,XX +XXX,XX @@
18
20
#define GICR_ICFGR1 (GICR_SGI_OFFSET + 0x0C04)
19
ARM ACPI Subsystem
21
#define GICR_IGRPMODR0 (GICR_SGI_OFFSET + 0x0D00)
20
M: Shannon Zhao <zhaoshenglong@huawei.com>
22
#define GICR_NSACR (GICR_SGI_OFFSET + 0x0E00)
21
-M: Shannon Zhao <shannon.zhao@linaro.org>
23
+#define GICR_INMIR0 (GICR_SGI_OFFSET + 0x0F80)
22
+M: Shannon Zhao <shannon.zhaosl@gmail.com>
24
23
L: qemu-arm@nongnu.org
25
/* VLPI redistributor registers, offsets from VLPI_base */
24
S: Maintained
26
#define GICR_VPROPBASER (GICR_VLPI_OFFSET + 0x70)
25
F: hw/arm/virt-acpi-build.c
27
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/intc/arm_gicv3_redist.c
30
+++ b/hw/intc/arm_gicv3_redist.c
31
@@ -XXX,XX +XXX,XX @@ static int gicr_ns_access(GICv3CPUState *cs, int irq)
32
return extract32(cs->gicr_nsacr, irq * 2, 2);
33
}
34
35
+static void gicr_write_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
36
+ uint32_t *reg, uint32_t val)
37
+{
38
+ /* Helper routine to implement writing to a "set" register */
39
+ val &= mask_group(cs, attrs);
40
+ *reg = val;
41
+ gicv3_redist_update(cs);
42
+}
43
+
44
static void gicr_write_set_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
45
uint32_t *reg, uint32_t val)
46
{
47
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset,
48
*data = value;
49
return MEMTX_OK;
50
}
51
+ case GICR_INMIR0:
52
+ *data = cs->gic->nmi_support ?
53
+ gicr_read_bitmap_reg(cs, attrs, cs->gicr_inmir0) : 0;
54
+ return MEMTX_OK;
55
case GICR_ICFGR0:
56
case GICR_ICFGR1:
57
{
58
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
59
gicv3_redist_update(cs);
60
return MEMTX_OK;
61
}
62
+ case GICR_INMIR0:
63
+ if (cs->gic->nmi_support) {
64
+ gicr_write_bitmap_reg(cs, attrs, &cs->gicr_inmir0, value);
65
+ }
66
+ return MEMTX_OK;
67
+
68
case GICR_ICFGR0:
69
/* Register is all RAZ/WI or RAO/WI bits */
70
return MEMTX_OK;
26
--
71
--
27
2.7.4
72
2.34.1
28
29
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
At the moment the ITS is not properly reset and this causes
3
Add the GICD_INMIR and GICD_INMIRnE registers and support access to GICD_INMIR0.
4
various bugs on save/restore. We implement a minimalist reset
5
through individual register writes but for kernel versions
6
before v4.15 this fails to void the vITS cache. We cannot
7
claim we have a comprehensive reset (hence the error message)
8
but that's better than nothing.
9
4
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
5
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1511883692-11511-3-git-send-email-eric.auger@redhat.com
8
Message-id: 20240407081733.3231820-18-ruanjinjie@huawei.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
hw/intc/arm_gicv3_its_kvm.c | 42 ++++++++++++++++++++++++++++++++++++++++++
11
hw/intc/gicv3_internal.h | 2 ++
16
1 file changed, 42 insertions(+)
12
hw/intc/arm_gicv3_dist.c | 34 ++++++++++++++++++++++++++++++++++
13
2 files changed, 36 insertions(+)
17
14
18
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
15
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gicv3_its_kvm.c
17
--- a/hw/intc/gicv3_internal.h
21
+++ b/hw/intc/arm_gicv3_its_kvm.c
18
+++ b/hw/intc/gicv3_internal.h
22
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
23
20
#define GICD_SGIR 0x0F00
24
#define TYPE_KVM_ARM_ITS "arm-its-kvm"
21
#define GICD_CPENDSGIR 0x0F10
25
#define KVM_ARM_ITS(obj) OBJECT_CHECK(GICv3ITSState, (obj), TYPE_KVM_ARM_ITS)
22
#define GICD_SPENDSGIR 0x0F20
26
+#define KVM_ARM_ITS_CLASS(klass) \
23
+#define GICD_INMIR 0x0F80
27
+ OBJECT_CLASS_CHECK(KVMARMITSClass, (klass), TYPE_KVM_ARM_ITS)
24
+#define GICD_INMIRnE 0x3B00
28
+#define KVM_ARM_ITS_GET_CLASS(obj) \
25
#define GICD_IROUTER 0x6000
29
+ OBJECT_GET_CLASS(KVMARMITSClass, (obj), TYPE_KVM_ARM_ITS)
26
#define GICD_IDREGS 0xFFD0
27
28
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/intc/arm_gicv3_dist.c
31
+++ b/hw/intc/arm_gicv3_dist.c
32
@@ -XXX,XX +XXX,XX @@ static int gicd_ns_access(GICv3State *s, int irq)
33
return extract32(s->gicd_nsacr[irq / 16], (irq % 16) * 2, 2);
34
}
35
36
+static void gicd_write_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
37
+ uint32_t *bmp, maskfn *maskfn,
38
+ int offset, uint32_t val)
39
+{
40
+ /*
41
+ * Helper routine to implement writing to a "set" register
42
+ * (GICD_INMIR, etc).
43
+ * Semantics implemented here:
44
+ * RAZ/WI for SGIs, PPIs, unimplemented IRQs
45
+ * Bits corresponding to Group 0 or Secure Group 1 interrupts RAZ/WI.
46
+ * offset should be the offset in bytes of the register from the start
47
+ * of its group.
48
+ */
49
+ int irq = offset * 8;
30
+
50
+
31
+typedef struct KVMARMITSClass {
51
+ if (irq < GIC_INTERNAL || irq >= s->num_irq) {
32
+ GICv3ITSCommonClass parent_class;
33
+ void (*parent_reset)(DeviceState *dev);
34
+} KVMARMITSClass;
35
+
36
37
static int kvm_its_send_msi(GICv3ITSState *s, uint32_t value, uint16_t devid)
38
{
39
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
40
GITS_CTLR, &s->ctlr, true, &error_abort);
41
}
42
43
+static void kvm_arm_its_reset(DeviceState *dev)
44
+{
45
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
46
+ KVMARMITSClass *c = KVM_ARM_ITS_GET_CLASS(s);
47
+ int i;
48
+
49
+ c->parent_reset(dev);
50
+
51
+ error_report("ITS KVM: full reset is not supported by QEMU");
52
+
53
+ if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
54
+ GITS_CTLR)) {
55
+ return;
52
+ return;
56
+ }
53
+ }
57
+
54
+ val &= mask_group_and_nsacr(s, attrs, maskfn, irq);
58
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
55
+ *gic_bmp_ptr32(bmp, irq) = val;
59
+ GITS_CTLR, &s->ctlr, true, &error_abort);
56
+ gicv3_update(s, irq, 32);
60
+
61
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
62
+ GITS_CBASER, &s->cbaser, true, &error_abort);
63
+
64
+ for (i = 0; i < 8; i++) {
65
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
66
+ GITS_BASER + i * 8, &s->baser[i], true,
67
+ &error_abort);
68
+ }
69
+}
57
+}
70
+
58
+
71
static Property kvm_arm_its_props[] = {
59
static void gicd_write_set_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
72
DEFINE_PROP_LINK("parent-gicv3", GICv3ITSState, gicv3, "kvm-arm-gicv3",
60
uint32_t *bmp,
73
GICv3State *),
61
maskfn *maskfn,
74
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_class_init(ObjectClass *klass, void *data)
62
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
75
{
63
/* RAZ/WI since affinity routing is always enabled */
76
DeviceClass *dc = DEVICE_CLASS(klass);
64
*data = 0;
77
GICv3ITSCommonClass *icc = ARM_GICV3_ITS_COMMON_CLASS(klass);
65
return true;
78
+ KVMARMITSClass *ic = KVM_ARM_ITS_CLASS(klass);
66
+ case GICD_INMIR ... GICD_INMIR + 0x7f:
79
67
+ *data = (!s->nmi_support) ? 0 :
80
dc->realize = kvm_arm_its_realize;
68
+ gicd_read_bitmap_reg(s, attrs, s->nmi, NULL,
81
dc->props = kvm_arm_its_props;
69
+ offset - GICD_INMIR);
82
+ ic->parent_reset = dc->reset;
70
+ return true;
83
icc->send_msi = kvm_its_send_msi;
71
case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
84
icc->pre_save = kvm_arm_its_pre_save;
72
{
85
icc->post_load = kvm_arm_its_post_load;
73
uint64_t r;
86
+ dc->reset = kvm_arm_its_reset;
74
@@ -XXX,XX +XXX,XX @@ static bool gicd_writel(GICv3State *s, hwaddr offset,
87
}
75
case GICD_SPENDSGIR ... GICD_SPENDSGIR + 0xf:
88
76
/* RAZ/WI since affinity routing is always enabled */
89
static const TypeInfo kvm_arm_its_info = {
77
return true;
90
@@ -XXX,XX +XXX,XX @@ static const TypeInfo kvm_arm_its_info = {
78
+ case GICD_INMIR ... GICD_INMIR + 0x7f:
91
.parent = TYPE_ARM_GICV3_ITS_COMMON,
79
+ if (s->nmi_support) {
92
.instance_size = sizeof(GICv3ITSState),
80
+ gicd_write_bitmap_reg(s, attrs, s->nmi, NULL,
93
.class_init = kvm_arm_its_class_init,
81
+ offset - GICD_INMIR, value);
94
+ .class_size = sizeof(KVMARMITSClass),
82
+ }
95
};
83
+ return true;
96
84
case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
97
static void kvm_arm_its_register_types(void)
85
{
86
uint64_t r;
98
--
87
--
99
2.7.4
88
2.34.1
100
101
1
For the v8M security extension, there should be two systick
1
Add the NMIAR CPU interface registers which deal with acknowledging NMI.
2
devices, which use separate banked systick exceptions. The
3
register interface is banked in the same way as for other
4
banked registers, including the existence of an NS alias
5
region for secure code to access the nonsecure timer.
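
A minimal sketch of that banked dispatch, with illustrative types and names
only (not the NVIC code; the real routing is in the armv7m_nvic.c hunks
below): the access is steered by the transaction's security attribute, and
the NS alias region simply presents the nonsecure bank.

#include <stdbool.h>

enum { BANK_NS = 0, BANK_S = 1 };

struct banked_systick {
    void *timer[2];    /* one SysTick instance per security state */
};

/* Secure transactions see the secure bank; accesses via the NS alias
 * region arrive with secure == false, so secure code can still reach
 * the nonsecure timer. */
static void *systick_for_access(struct banked_systick *b, bool secure)
{
    return b->timer[secure ? BANK_S : BANK_NS];
}
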
6
2
3
When introducing the NMI interrupt, there are some updates to the semantics of the
4
ICC_IAR1_EL1 and ICC_HPPIR1_EL1 registers. The ICC_IAR1_EL1 register
5
should return 1022 if the intid has the non-maskable property, and the
6
ICC_NMIAR1_EL1 register should return 1023 if the intid does not have
7
the non-maskable property. However, these changes are not needed for the ICC_HPPIR1_EL1
8
register.
9
10
The APR and RPR registers have NMI bits which must be handled correctly.
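
A compact way to read those acknowledge rules (a standalone sketch of the
description above; the helper name and its arguments are invented for
illustration, the authoritative logic is in the arm_gicv3_cpuif.c hunks below):

#include <stdint.h>
#include <stdbool.h>

#define INTID_NMI      1022   /* IAR1 result when the HPPI is an NMI */
#define INTID_SPURIOUS 1023   /* NMIAR1 result when the HPPI is not an NMI */

/* hppi_intid: highest priority pending interrupt ID
 * hppi_is_nmi: whether that interrupt has the non-maskable property
 * reading_nmiar1: true for ICC_NMIAR1_EL1, false for ICC_IAR1_EL1 */
static uint64_t ack_result(uint64_t hppi_intid, bool hppi_is_nmi,
                           bool reading_nmiar1)
{
    if (reading_nmiar1) {
        return hppi_is_nmi ? hppi_intid : INTID_SPURIOUS;
    }
    return hppi_is_nmi ? INTID_NMI : hppi_intid;
}
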
11
12
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
[PMM: Separate out whether cpuif supports NMI from whether the
15
GIC proper (IRI) supports NMI]
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20240407081733.3231820-19-ruanjinjie@huawei.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 1512154296-5652-3-git-send-email-peter.maydell@linaro.org
10
---
19
---
11
include/hw/intc/armv7m_nvic.h | 4 +-
20
hw/intc/gicv3_internal.h | 5 +
12
hw/intc/armv7m_nvic.c | 90 ++++++++++++++++++++++++++++++++++++-------
21
include/hw/intc/arm_gicv3_common.h | 7 ++
13
2 files changed, 80 insertions(+), 14 deletions(-)
22
hw/intc/arm_gicv3_cpuif.c | 147 ++++++++++++++++++++++++++++-
23
hw/intc/trace-events | 1 +
24
4 files changed, 155 insertions(+), 5 deletions(-)
14
25
15
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
26
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
16
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/intc/armv7m_nvic.h
28
--- a/hw/intc/gicv3_internal.h
18
+++ b/include/hw/intc/armv7m_nvic.h
29
+++ b/hw/intc/gicv3_internal.h
19
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
30
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_VPENDBASER, VALID, 63, 1)
20
31
#define ICC_CTLR_EL3_A3V (1U << 15)
21
MemoryRegion sysregmem;
32
#define ICC_CTLR_EL3_NDS (1U << 17)
22
MemoryRegion sysreg_ns_mem;
33
23
+ MemoryRegion systickmem;
34
+#define ICC_AP1R_EL1_NMI (1ULL << 63)
24
+ MemoryRegion systick_ns_mem;
35
+#define ICC_RPR_EL1_NSNMI (1ULL << 62)
25
MemoryRegion container;
36
+#define ICC_RPR_EL1_NMI (1ULL << 63)
26
37
+
27
uint32_t num_irq;
38
#define ICH_VMCR_EL2_VENG0_SHIFT 0
28
qemu_irq excpout;
39
#define ICH_VMCR_EL2_VENG0 (1U << ICH_VMCR_EL2_VENG0_SHIFT)
29
qemu_irq sysresetreq;
40
#define ICH_VMCR_EL2_VENG1_SHIFT 1
30
41
@@ -XXX,XX +XXX,XX @@ FIELD(VTE, RDBASE, 42, RDBASE_PROCNUM_LENGTH)
31
- SysTickState systick;
42
/* Special interrupt IDs */
32
+ SysTickState systick[M_REG_NUM_BANKS];
43
#define INTID_SECURE 1020
33
} NVICState;
44
#define INTID_NONSECURE 1021
34
45
+#define INTID_NMI 1022
35
#endif
46
#define INTID_SPURIOUS 1023
36
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
47
48
/* Functions internal to the emulated GICv3 */
49
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
37
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/intc/armv7m_nvic.c
51
--- a/include/hw/intc/arm_gicv3_common.h
39
+++ b/hw/intc/armv7m_nvic.c
52
+++ b/include/hw/intc/arm_gicv3_common.h
40
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps nvic_sysreg_ns_ops = {
53
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
41
.endianness = DEVICE_NATIVE_ENDIAN,
54
55
/* This is temporary working state, to avoid a malloc in gicv3_update() */
56
bool seenbetter;
57
+
58
+ /*
59
+ * Whether the CPU interface has NMI support (FEAT_GICv3_NMI). The
60
+ * CPU interface may support NMIs even when the GIC proper (what the
61
+ * spec calls the IRI; the redistributors and distributor) does not.
62
+ */
63
+ bool nmi_support;
42
};
64
};
43
65
44
+static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,
66
/*
45
+ uint64_t value, unsigned size,
67
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
46
+ MemTxAttrs attrs)
68
index XXXXXXX..XXXXXXX 100644
69
--- a/hw/intc/arm_gicv3_cpuif.c
70
+++ b/hw/intc/arm_gicv3_cpuif.c
71
@@ -XXX,XX +XXX,XX @@
72
#include "hw/irq.h"
73
#include "cpu.h"
74
#include "target/arm/cpregs.h"
75
+#include "target/arm/cpu-features.h"
76
#include "sysemu/tcg.h"
77
#include "sysemu/qtest.h"
78
79
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
80
return intid;
81
}
82
83
+static uint64_t icv_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
47
+{
84
+{
48
+ NVICState *s = opaque;
85
+ /* todo */
49
+ MemoryRegion *mr;
86
+ uint64_t intid = INTID_SPURIOUS;
50
+
87
+ return intid;
51
+ /* Direct the access to the correct systick */
52
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
53
+ return memory_region_dispatch_write(mr, addr, value, size, attrs);
54
+}
88
+}
55
+
89
+
56
+static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
90
static uint32_t icc_fullprio_mask(GICv3CPUState *cs)
57
+ uint64_t *data, unsigned size,
91
{
58
+ MemTxAttrs attrs)
92
/*
93
@@ -XXX,XX +XXX,XX @@ static int icc_highest_active_prio(GICv3CPUState *cs)
94
*/
95
int i;
96
97
+ if (cs->nmi_support) {
98
+ /*
99
+ * If an NMI is active this takes precedence over anything else
100
+ * for priority purposes; the NMI bit is only in the AP1R0 bit.
101
+ * We return here the effective priority of the NMI, which is
102
+ * either 0x0 or 0x80. Callers will need to check NMI again for
103
+ * purposes of either setting the RPR register bits or for
104
+ * prioritization of NMI vs non-NMI.
105
+ */
106
+ if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
107
+ return 0;
108
+ }
109
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
110
+ return (cs->gic->gicd_ctlr & GICD_CTLR_DS) ? 0 : 0x80;
111
+ }
112
+ }
113
+
114
for (i = 0; i < icc_num_aprs(cs); i++) {
115
uint32_t apr = cs->icc_apr[GICV3_G0][i] |
116
cs->icc_apr[GICV3_G1][i] | cs->icc_apr[GICV3_G1NS][i];
117
@@ -XXX,XX +XXX,XX @@ static bool icc_hppi_can_preempt(GICv3CPUState *cs)
118
*/
119
int rprio;
120
uint32_t mask;
121
+ ARMCPU *cpu = ARM_CPU(cs->cpu);
122
+ CPUARMState *env = &cpu->env;
123
124
if (icc_no_enabled_hppi(cs)) {
125
return false;
126
}
127
128
- if (cs->hppi.prio >= cs->icc_pmr_el1) {
129
+ if (cs->hppi.nmi) {
130
+ if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS) &&
131
+ cs->hppi.grp == GICV3_G1NS) {
132
+ if (cs->icc_pmr_el1 < 0x80) {
133
+ return false;
134
+ }
135
+ if (arm_is_secure(env) && cs->icc_pmr_el1 == 0x80) {
136
+ return false;
137
+ }
138
+ }
139
+ } else if (cs->hppi.prio >= cs->icc_pmr_el1) {
140
/* Priority mask masks this interrupt */
141
return false;
142
}
143
@@ -XXX,XX +XXX,XX @@ static bool icc_hppi_can_preempt(GICv3CPUState *cs)
144
return true;
145
}
146
147
+ if (cs->hppi.nmi && (cs->hppi.prio & mask) == (rprio & mask)) {
148
+ if (!(cs->icc_apr[cs->hppi.grp][0] & ICC_AP1R_EL1_NMI)) {
149
+ return true;
150
+ }
151
+ }
152
+
153
return false;
154
}
155
156
@@ -XXX,XX +XXX,XX @@ static void icc_activate_irq(GICv3CPUState *cs, int irq)
157
int aprbit = prio >> (8 - cs->prebits);
158
int regno = aprbit / 32;
159
int regbit = aprbit % 32;
160
+ bool nmi = cs->hppi.nmi;
161
162
- cs->icc_apr[cs->hppi.grp][regno] |= (1 << regbit);
163
+ if (nmi) {
164
+ cs->icc_apr[cs->hppi.grp][regno] |= ICC_AP1R_EL1_NMI;
165
+ } else {
166
+ cs->icc_apr[cs->hppi.grp][regno] |= (1 << regbit);
167
+ }
168
169
if (irq < GIC_INTERNAL) {
170
cs->gicr_iactiver0 = deposit32(cs->gicr_iactiver0, irq, 1, 1);
171
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_iar0_read(CPUARMState *env, const ARMCPRegInfo *ri)
172
static uint64_t icc_iar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
173
{
174
GICv3CPUState *cs = icc_cs_from_env(env);
175
+ int el = arm_current_el(env);
176
uint64_t intid;
177
178
if (icv_access(env, HCR_IMO)) {
179
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_iar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
180
}
181
182
if (!gicv3_intid_is_special(intid)) {
183
- icc_activate_irq(cs, intid);
184
+ if (cs->hppi.nmi && env->cp15.sctlr_el[el] & SCTLR_NMI) {
185
+ intid = INTID_NMI;
186
+ } else {
187
+ icc_activate_irq(cs, intid);
188
+ }
189
}
190
191
trace_gicv3_icc_iar1_read(gicv3_redist_affid(cs), intid);
192
return intid;
193
}
194
195
+static uint64_t icc_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
59
+{
196
+{
60
+ NVICState *s = opaque;
197
+ GICv3CPUState *cs = icc_cs_from_env(env);
61
+ MemoryRegion *mr;
198
+ uint64_t intid;
62
+
199
+
63
+ /* Direct the access to the correct systick */
200
+ if (icv_access(env, HCR_IMO)) {
64
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
201
+ return icv_nmiar1_read(env, ri);
65
+ return memory_region_dispatch_read(mr, addr, data, size, attrs);
202
+ }
203
+
204
+ if (!icc_hppi_can_preempt(cs)) {
205
+ intid = INTID_SPURIOUS;
206
+ } else {
207
+ intid = icc_hppir1_value(cs, env);
208
+ }
209
+
210
+ if (!gicv3_intid_is_special(intid)) {
211
+ if (!cs->hppi.nmi) {
212
+ intid = INTID_SPURIOUS;
213
+ } else {
214
+ icc_activate_irq(cs, intid);
215
+ }
216
+ }
217
+
218
+ trace_gicv3_icc_nmiar1_read(gicv3_redist_affid(cs), intid);
219
+ return intid;
66
+}
220
+}
67
+
221
+
68
+static const MemoryRegionOps nvic_systick_ops = {
222
static void icc_drop_prio(GICv3CPUState *cs, int grp)
69
+ .read_with_attrs = nvic_systick_read,
223
{
70
+ .write_with_attrs = nvic_systick_write,
224
/* Drop the priority of the currently active interrupt in
71
+ .endianness = DEVICE_NATIVE_ENDIAN,
225
@@ -XXX,XX +XXX,XX @@ static void icc_drop_prio(GICv3CPUState *cs, int grp)
226
if (!*papr) {
227
continue;
228
}
229
+
230
+ if (i == 0 && cs->nmi_support && (*papr & ICC_AP1R_EL1_NMI)) {
231
+ *papr &= (~ICC_AP1R_EL1_NMI);
232
+ break;
233
+ }
234
+
235
/* Clear the lowest set bit */
236
*papr &= *papr - 1;
237
break;
238
@@ -XXX,XX +XXX,XX @@ static int icc_highest_active_group(GICv3CPUState *cs)
239
*/
240
int i;
241
242
+ if (cs->nmi_support) {
243
+ if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
244
+ return GICV3_G1;
245
+ }
246
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
247
+ return GICV3_G1NS;
248
+ }
249
+ }
250
+
251
for (i = 0; i < ARRAY_SIZE(cs->icc_apr[0]); i++) {
252
int g0ctz = ctz32(cs->icc_apr[GICV3_G0][i]);
253
int g1ctz = ctz32(cs->icc_apr[GICV3_G1][i]);
254
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
255
return;
256
}
257
258
- cs->icc_apr[grp][regno] = value & 0xFFFFFFFFU;
259
+ if (cs->nmi_support) {
260
+ cs->icc_apr[grp][regno] = value & (0xFFFFFFFFU | ICC_AP1R_EL1_NMI);
261
+ } else {
262
+ cs->icc_apr[grp][regno] = value & 0xFFFFFFFFU;
263
+ }
264
gicv3_cpuif_update(cs);
265
}
266
267
@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
268
static uint64_t icc_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
269
{
270
GICv3CPUState *cs = icc_cs_from_env(env);
271
- int prio;
272
+ uint64_t prio;
273
274
if (icv_access(env, HCR_FMO | HCR_IMO)) {
275
return icv_rpr_read(env, ri);
276
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
277
}
278
}
279
280
+ if (cs->nmi_support) {
281
+ /* NMI info is reported in the high bits of RPR */
282
+ if (arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env)) {
283
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
284
+ prio |= ICC_RPR_EL1_NMI;
285
+ }
286
+ } else {
287
+ if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
288
+ prio |= ICC_RPR_EL1_NSNMI;
289
+ }
290
+ if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
291
+ prio |= ICC_RPR_EL1_NMI;
292
+ }
293
+ }
294
+ }
295
+
296
trace_gicv3_icc_rpr_read(gicv3_redist_affid(cs), prio);
297
return prio;
298
}
299
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_icc_apxr23_reginfo[] = {
300
},
301
};
302
303
+static const ARMCPRegInfo gicv3_cpuif_gicv3_nmi_reginfo[] = {
304
+ { .name = "ICC_NMIAR1_EL1", .state = ARM_CP_STATE_BOTH,
305
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 5,
306
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
307
+ .access = PL1_R, .accessfn = gicv3_irq_access,
308
+ .readfn = icc_nmiar1_read,
309
+ },
72
+};
310
+};
73
+
311
+
74
static int nvic_post_load(void *opaque, int version_id)
312
static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
75
{
313
{
76
NVICState *s = opaque;
314
GICv3CPUState *cs = icc_cs_from_env(env);
77
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
315
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
78
/* SysTick just asked us to pend its exception.
79
* (This is different from an external interrupt line's
80
* behaviour.)
81
- * TODO: when we implement the banked systicks we must make
82
- * this pend the correct banked exception.
83
+ * n == 0 : NonSecure systick
84
+ * n == 1 : Secure systick
85
*/
316
*/
86
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, false);
317
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
87
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, n);
318
88
}
319
+ /*
89
}
320
+ * If the CPU implements FEAT_NMI and FEAT_GICv3 it must also
90
321
+ * implement FEAT_GICv3_NMI, which is the CPU interface part
91
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
322
+ * of NMI support. This is distinct from whether the GIC proper
92
{
323
+ * (redistributors and distributor) have NMI support. In QEMU
93
NVICState *s = NVIC(dev);
324
+ * that is a property of the GIC device in s->nmi_support;
94
- SysBusDevice *systick_sbd;
325
+ * cs->nmi_support indicates the CPU interface's support.
95
Error *err = NULL;
96
int regionlen;
97
98
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
99
/* include space for internal exception vectors */
100
s->num_irq += NVIC_FIRST_IRQ;
101
102
- object_property_set_bool(OBJECT(&s->systick), true, "realized", &err);
103
+ object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
104
+ "realized", &err);
105
if (err != NULL) {
106
error_propagate(errp, err);
107
return;
108
}
109
- systick_sbd = SYS_BUS_DEVICE(&s->systick);
110
- sysbus_connect_irq(systick_sbd, 0,
111
- qdev_get_gpio_in_named(dev, "systick-trigger", 0));
112
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systick[M_REG_NS]), 0,
113
+ qdev_get_gpio_in_named(dev, "systick-trigger",
114
+ M_REG_NS));
115
+
116
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
117
+ /* We couldn't init the secure systick device in instance_init
118
+ * as we didn't know then if the CPU had the security extensions;
119
+ * so we have to do it here.
120
+ */
326
+ */
121
+ object_initialize(&s->systick[M_REG_S], sizeof(s->systick[M_REG_S]),
327
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
122
+ TYPE_SYSTICK);
328
+ cs->nmi_support = true;
123
+ qdev_set_parent_bus(DEVICE(&s->systick[M_REG_S]), sysbus_get_default());
329
+ define_arm_cp_regs(cpu, gicv3_cpuif_gicv3_nmi_reginfo);
124
+
330
+ }
125
+ object_property_set_bool(OBJECT(&s->systick[M_REG_S]), true,
331
+
126
+ "realized", &err);
332
/*
127
+ if (err != NULL) {
333
* The CPU implementation specifies the number of supported
128
+ error_propagate(errp, err);
334
* bits of physical priority. For backwards compatibility
129
+ return;
335
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
130
+ }
336
index XXXXXXX..XXXXXXX 100644
131
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systick[M_REG_S]), 0,
337
--- a/hw/intc/trace-events
132
+ qdev_get_gpio_in_named(dev, "systick-trigger",
338
+++ b/hw/intc/trace-events
133
+ M_REG_S));
339
@@ -XXX,XX +XXX,XX @@ gicv3_cpuif_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel) "GICv3 CPU i/f
134
+ }
340
gicv3_icc_generate_sgi(uint32_t cpuid, int irq, int irm, uint32_t aff, uint32_t targetlist) "GICv3 CPU i/f 0x%x generating SGI %d IRM %d target affinity 0x%xxx targetlist 0x%x"
135
341
gicv3_icc_iar0_read(uint32_t cpu, uint64_t val) "GICv3 ICC_IAR0 read cpu 0x%x value 0x%" PRIx64
136
/* The NVIC and System Control Space (SCS) starts at 0xe000e000
342
gicv3_icc_iar1_read(uint32_t cpu, uint64_t val) "GICv3 ICC_IAR1 read cpu 0x%x value 0x%" PRIx64
137
* and looks like this:
343
+gicv3_icc_nmiar1_read(uint32_t cpu, uint64_t val) "GICv3 ICC_NMIAR1 read cpu 0x%x value 0x%" PRIx64
138
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
344
gicv3_icc_eoir_write(int grp, uint32_t cpu, uint64_t val) "GICv3 ICC_EOIR%d write cpu 0x%x value 0x%" PRIx64
139
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
345
gicv3_icc_hppir0_read(uint32_t cpu, uint64_t val) "GICv3 ICC_HPPIR0 read cpu 0x%x value 0x%" PRIx64
140
"nvic_sysregs", 0x1000);
346
gicv3_icc_hppir1_read(uint32_t cpu, uint64_t val) "GICv3 ICC_HPPIR1 read cpu 0x%x value 0x%" PRIx64
141
memory_region_add_subregion(&s->container, 0, &s->sysregmem);
142
+
143
+ memory_region_init_io(&s->systickmem, OBJECT(s),
144
+ &nvic_systick_ops, s,
145
+ "nvic_systick", 0xe0);
146
+
147
memory_region_add_subregion_overlap(&s->container, 0x10,
148
- sysbus_mmio_get_region(systick_sbd, 0),
149
- 1);
150
+ &s->systickmem, 1);
151
152
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
153
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
154
&nvic_sysreg_ns_ops, &s->sysregmem,
155
"nvic_sysregs_ns", 0x1000);
156
memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
157
+ memory_region_init_io(&s->systick_ns_mem, OBJECT(s),
158
+ &nvic_sysreg_ns_ops, &s->systickmem,
159
+ "nvic_systick_ns", 0xe0);
160
+ memory_region_add_subregion_overlap(&s->container, 0x20010,
161
+ &s->systick_ns_mem, 1);
162
}
163
164
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
165
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_instance_init(Object *obj)
166
NVICState *nvic = NVIC(obj);
167
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
168
169
- object_initialize(&nvic->systick, sizeof(nvic->systick), TYPE_SYSTICK);
170
- qdev_set_parent_bus(DEVICE(&nvic->systick), sysbus_get_default());
171
+ object_initialize(&nvic->systick[M_REG_NS],
172
+ sizeof(nvic->systick[M_REG_NS]), TYPE_SYSTICK);
173
+ qdev_set_parent_bus(DEVICE(&nvic->systick[M_REG_NS]), sysbus_get_default());
174
+ /* We can't initialize the secure systick here, as we don't know
175
+ * yet if we need it.
176
+ */
177
178
sysbus_init_irq(sbd, &nvic->excpout);
179
qdev_init_gpio_out_named(dev, &nvic->sysresetreq, "SYSRESETREQ", 1);
180
- qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger", 1);
181
+ qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger",
182
+ M_REG_NUM_BANKS);
183
}
184
185
static void armv7m_nvic_class_init(ObjectClass *klass, void *data)
186
--
347
--
187
2.7.4
348
2.34.1
188
189
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
Implement icv_nmiar1_read() for icc_nmiar1_read(), and add definitions for
2
2
the ICH_LR_EL2.NMI and ICH_AP1R_EL2.NMI bits.
3
Add support for the RX discard and RX drain functionality. Also transmit
3
4
one byte per dummy cycle (to the flash memories) with commands that require
4
If FEAT_GICv3_NMI is supported, ich_ap_write() should consider the ICV_AP1R_EL1.NMI
5
these.
5
bit. In icv_activate_irq() and icv_eoir_write(), the ICV_AP1R_EL1.NMI bit
6
6
should be set or cleared according to the non-maskable property, and the RPR
7
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
7
priority should also update the NMI bit according to the APR priority NMI bit.
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Also add a gicv3_icv_nmiar1_read trace event.
10
Message-id: 20171126231634.9531-8-frasse.iglesias@gmail.com
10
11
If the HPP IRQ is an NMI, the ICV IAR read should return 1022 and trap for
12
NMI again.
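
Roughly, for the virtual interface the rules above boil down to something
like this sketch (invented helper names; the authoritative logic is in the
icv_* hunks below, and it assumes SCTLR_ELx.NMI gates the 1022 return as
described):

#include <stdint.h>
#include <stdbool.h>

#define INTID_NMI     1022
#define VRPR_NMI_BIT  (1ULL << 63)   /* NMI flag reported in ICV_RPR_EL1 */

/* ICV_IAR1_EL1: with SCTLR_ELx.NMI set, an NMI is not acknowledged here;
 * 1022 is returned so the guest takes it again via ICV_NMIAR1_EL1. */
static uint64_t viar1_result(uint64_t vintid, bool lr_nmi, bool sctlr_nmi)
{
    return (lr_nmi && sctlr_nmi) ? INTID_NMI : vintid;
}

/* ICV_RPR_EL1: mirror the active-priority NMI state into the top RPR bit. */
static uint64_t vrpr_result(uint64_t prio, bool ap1r0_nmi)
{
    return ap1r0_nmi ? (prio | VRPR_NMI_BIT) : prio;
}
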
13
14
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
[PMM: use cs->nmi_support instead of cs->gic->nmi_support]
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Message-id: 20240407081733.3231820-20-ruanjinjie@huawei.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
20
---
13
include/hw/ssi/xilinx_spips.h | 6 ++
21
hw/intc/gicv3_internal.h | 4 ++
14
hw/ssi/xilinx_spips.c | 167 +++++++++++++++++++++++++++++++++++++-----
22
hw/intc/arm_gicv3_cpuif.c | 105 +++++++++++++++++++++++++++++++++-----
15
2 files changed, 155 insertions(+), 18 deletions(-)
23
hw/intc/trace-events | 1 +
16
24
3 files changed, 98 insertions(+), 12 deletions(-)
17
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
25
26
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
18
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/ssi/xilinx_spips.h
28
--- a/hw/intc/gicv3_internal.h
20
+++ b/include/hw/ssi/xilinx_spips.h
29
+++ b/hw/intc/gicv3_internal.h
21
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
30
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_VPENDBASER, VALID, 63, 1)
22
uint8_t num_busses;
31
#define ICH_LR_EL2_PRIORITY_SHIFT 48
23
32
#define ICH_LR_EL2_PRIORITY_LENGTH 8
24
uint8_t snoop_state;
33
#define ICH_LR_EL2_PRIORITY_MASK (0xffULL << ICH_LR_EL2_PRIORITY_SHIFT)
25
+ int cmd_dummies;
34
+#define ICH_LR_EL2_NMI (1ULL << 59)
26
+ uint8_t link_state;
35
#define ICH_LR_EL2_GROUP (1ULL << 60)
27
+ uint8_t link_state_next;
36
#define ICH_LR_EL2_HW (1ULL << 61)
28
+ uint8_t link_state_next_when;
37
#define ICH_LR_EL2_STATE_SHIFT 62
29
qemu_irq *cs_lines;
38
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_VPENDBASER, VALID, 63, 1)
30
+ bool *cs_lines_state;
39
#define ICH_VTR_EL2_PREBITS_SHIFT 26
31
SSIBus **spi;
40
#define ICH_VTR_EL2_PRIBITS_SHIFT 29
32
41
33
Fifo8 rx_fifo;
42
+#define ICV_AP1R_EL1_NMI (1ULL << 63)
34
Fifo8 tx_fifo;
43
+#define ICV_RPR_EL1_NMI (1ULL << 63)
35
44
+
36
uint8_t num_txrx_bytes;
45
/* ITS Registers */
37
+ uint32_t rx_discard;
46
38
47
FIELD(GITS_BASER, SIZE, 0, 8)
39
uint32_t regs[XLNX_SPIPS_R_MAX];
48
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
40
};
41
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
42
index XXXXXXX..XXXXXXX 100644
49
index XXXXXXX..XXXXXXX 100644
43
--- a/hw/ssi/xilinx_spips.c
50
--- a/hw/intc/arm_gicv3_cpuif.c
44
+++ b/hw/ssi/xilinx_spips.c
51
+++ b/hw/intc/arm_gicv3_cpuif.c
45
@@ -XXX,XX +XXX,XX @@
52
@@ -XXX,XX +XXX,XX @@ static int ich_highest_active_virt_prio(GICv3CPUState *cs)
46
#include "qemu/bitops.h"
53
int i;
47
#include "hw/ssi/xilinx_spips.h"
54
int aprmax = ich_num_aprs(cs);
48
#include "qapi/error.h"
55
49
+#include "hw/register.h"
56
+ if (cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI) {
50
#include "migration/blocker.h"
57
+ return 0x0;
51
58
+ }
52
#ifndef XILINX_SPIPS_ERR_DEBUG
59
+
53
@@ -XXX,XX +XXX,XX @@
60
for (i = 0; i < aprmax; i++) {
54
#define LQSPI_CFG_DUMMY_SHIFT 8
61
uint32_t apr = cs->ich_apr[GICV3_G0][i] |
55
#define LQSPI_CFG_INST_CODE 0xFF
62
cs->ich_apr[GICV3_G1NS][i];
56
63
@@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs)
57
+#define R_CMND (0xc0 / 4)
64
* correct behaviour.
58
+ #define R_CMND_RXFIFO_DRAIN (1 << 19)
65
*/
59
+ FIELD(CMND, PARTIAL_BYTE_LEN, 16, 3)
66
int prio = 0xff;
60
+#define R_CMND_EXT_ADD (1 << 15)
67
+ bool nmi = false;
61
+ FIELD(CMND, RX_DISCARD, 8, 7)
68
62
+ FIELD(CMND, DUMMY_CYCLES, 2, 6)
69
if (!(cs->ich_vmcr_el2 & (ICH_VMCR_EL2_VENG0 | ICH_VMCR_EL2_VENG1))) {
63
+#define R_CMND_DMA_EN (1 << 1)
70
/* Both groups disabled, definitely nothing to do */
64
+#define R_CMND_PUSH_WAIT (1 << 0)
71
@@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs)
65
#define R_LQSPI_STS (0xA4 / 4)
72
66
#define LQSPI_STS_WR_RECVD (1 << 1)
73
for (i = 0; i < cs->num_list_regs; i++) {
67
74
uint64_t lr = cs->ich_lr_el2[i];
68
@@ -XXX,XX +XXX,XX @@
75
+ bool thisnmi;
69
#define LQSPI_ADDRESS_BITS 24
76
int thisprio;
70
77
71
#define SNOOP_CHECKING 0xFF
78
if (ich_lr_state(lr) != ICH_LR_EL2_STATE_PENDING) {
72
-#define SNOOP_NONE 0xFE
79
@@ -XXX,XX +XXX,XX @@ static int hppvi_index(GICv3CPUState *cs)
73
+#define SNOOP_ADDR 0xF0
80
}
74
+#define SNOOP_NONE 0xEE
81
}
75
#define SNOOP_STRIPING 0
82
76
83
+ thisnmi = lr & ICH_LR_EL2_NMI;
77
static inline int num_effective_busses(XilinxSPIPS *s)
84
thisprio = ich_lr_prio(lr);
78
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
85
79
if (xilinx_spips_cs_is_set(s, i, field) && !found) {
86
- if (thisprio < prio) {
80
DB_PRINT_L(0, "selecting slave %d\n", i);
87
+ if ((thisprio < prio) || ((thisprio == prio) && (thisnmi & (!nmi)))) {
81
qemu_set_irq(s->cs_lines[cs_to_set], 0);
88
prio = thisprio;
82
+ if (s->cs_lines_state[cs_to_set]) {
89
+ nmi = thisnmi;
83
+ s->cs_lines_state[cs_to_set] = false;
90
idx = i;
84
+ s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
91
}
92
}
93
@@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
94
* equivalent of these checks.
95
*/
96
int grp;
97
+ bool is_nmi;
98
uint32_t mask, prio, rprio, vpmr;
99
100
if (!(cs->ich_hcr_el2 & ICH_HCR_EL2_EN)) {
101
@@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
102
*/
103
104
prio = ich_lr_prio(lr);
105
+ is_nmi = lr & ICH_LR_EL2_NMI;
106
vpmr = extract64(cs->ich_vmcr_el2, ICH_VMCR_EL2_VPMR_SHIFT,
107
ICH_VMCR_EL2_VPMR_LENGTH);
108
109
- if (prio >= vpmr) {
110
+ if (!is_nmi && prio >= vpmr) {
111
/* Priority mask masks this interrupt */
112
return false;
113
}
114
@@ -XXX,XX +XXX,XX @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
115
return true;
116
}
117
118
+ if ((prio & mask) == (rprio & mask) && is_nmi &&
119
+ !(cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI)) {
120
+ return true;
121
+ }
122
+
123
return false;
124
}
125
126
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
127
128
trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
129
130
- cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
131
+ if (cs->nmi_support) {
132
+ cs->ich_apr[grp][regno] = value & (0xFFFFFFFFU | ICV_AP1R_EL1_NMI);
133
+ } else {
134
+ cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
135
+ }
136
137
gicv3_cpuif_virt_irq_fiq_update(cs);
138
return;
139
@@ -XXX,XX +XXX,XX @@ static void icv_ctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
140
static uint64_t icv_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
141
{
142
GICv3CPUState *cs = icc_cs_from_env(env);
143
- int prio = ich_highest_active_virt_prio(cs);
144
+ uint64_t prio = ich_highest_active_virt_prio(cs);
145
+
146
+ if (cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI) {
147
+ prio |= ICV_RPR_EL1_NMI;
148
+ }
149
150
trace_gicv3_icv_rpr_read(gicv3_redist_affid(cs), prio);
151
return prio;
152
@@ -XXX,XX +XXX,XX @@ static void icv_activate_irq(GICv3CPUState *cs, int idx, int grp)
153
*/
154
uint32_t mask = icv_gprio_mask(cs, grp);
155
int prio = ich_lr_prio(cs->ich_lr_el2[idx]) & mask;
156
+ bool nmi = cs->ich_lr_el2[idx] & ICH_LR_EL2_NMI;
157
int aprbit = prio >> (8 - cs->vprebits);
158
int regno = aprbit / 32;
159
int regbit = aprbit % 32;
160
161
cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
162
cs->ich_lr_el2[idx] |= ICH_LR_EL2_STATE_ACTIVE_BIT;
163
- cs->ich_apr[grp][regno] |= (1 << regbit);
164
+
165
+ if (nmi) {
166
+ cs->ich_apr[grp][regno] |= ICV_AP1R_EL1_NMI;
167
+ } else {
168
+ cs->ich_apr[grp][regno] |= (1 << regbit);
169
+ }
170
}
171
172
static void icv_activate_vlpi(GICv3CPUState *cs)
173
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
174
int grp = ri->crm == 8 ? GICV3_G0 : GICV3_G1NS;
175
int idx = hppvi_index(cs);
176
uint64_t intid = INTID_SPURIOUS;
177
+ int el = arm_current_el(env);
178
179
if (idx == HPPVI_INDEX_VLPI) {
180
if (cs->hppvlpi.grp == grp && icv_hppvlpi_can_preempt(cs)) {
181
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
182
} else if (idx >= 0) {
183
uint64_t lr = cs->ich_lr_el2[idx];
184
int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0;
185
+ bool nmi = env->cp15.sctlr_el[el] & SCTLR_NMI && lr & ICH_LR_EL2_NMI;
186
187
if (thisgrp == grp && icv_hppi_can_preempt(cs, lr)) {
188
intid = ich_lr_vintid(lr);
189
if (!gicv3_intid_is_special(intid)) {
190
- icv_activate_irq(cs, idx, grp);
191
+ if (!nmi) {
192
+ icv_activate_irq(cs, idx, grp);
193
+ } else {
194
+ intid = INTID_NMI;
85
+ }
195
+ }
86
} else {
196
} else {
87
DB_PRINT_L(0, "deselecting slave %d\n", i);
197
/* Interrupt goes from Pending to Invalid */
88
qemu_set_irq(s->cs_lines[cs_to_set], 1);
198
cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
89
+ s->cs_lines_state[cs_to_set] = true;
199
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
90
}
200
91
}
201
static uint64_t icv_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
92
if (xilinx_spips_cs_is_set(s, i, field)) {
93
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
94
}
95
if (!found) {
96
s->snoop_state = SNOOP_CHECKING;
97
+ s->cmd_dummies = 0;
98
+ s->link_state = 1;
99
+ s->link_state_next = 1;
100
+ s->link_state_next_when = 0;
101
DB_PRINT_L(1, "moving to snoop check state\n");
102
}
103
}
104
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
105
/* FIXME: move magic number definition somewhere sensible */
106
s->regs[R_MOD_ID] = 0x01090106;
107
s->regs[R_LQSPI_CFG] = R_LQSPI_CFG_RESET;
108
+ s->link_state = 1;
109
+ s->link_state_next = 1;
110
+ s->link_state_next_when = 0;
111
s->snoop_state = SNOOP_CHECKING;
112
+ s->cmd_dummies = 0;
113
xilinx_spips_update_ixr(s);
114
xilinx_spips_update_cs_lines(s);
115
}
116
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
117
memcpy(x, r, sizeof(uint8_t) * num);
118
}
119
120
+static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
121
+{
122
+ if (!qs) {
123
+ /* The SPI device is not a QSPI device */
124
+ return -1;
125
+ }
126
+
127
+ switch (command) { /* check for dummies */
128
+ case READ: /* no dummy bytes/cycles */
129
+ case PP:
130
+ case DPP:
131
+ case QPP:
132
+ case READ_4:
133
+ case PP_4:
134
+ case QPP_4:
135
+ return 0;
136
+ case FAST_READ:
137
+ case DOR:
138
+ case QOR:
139
+ case DOR_4:
140
+ case QOR_4:
141
+ return 1;
142
+ case DIOR:
143
+ case FAST_READ_4:
144
+ case DIOR_4:
145
+ return 2;
146
+ case QIOR:
147
+ case QIOR_4:
148
+ return 5;
149
+ default:
150
+ return -1;
151
+ }
152
+}
153
+
154
+static inline uint8_t get_addr_length(XilinxSPIPS *s, uint8_t cmd)
155
+{
156
+ switch (cmd) {
157
+ case PP_4:
158
+ case QPP_4:
159
+ case READ_4:
160
+ case QIOR_4:
161
+ case FAST_READ_4:
162
+ case DOR_4:
163
+ case QOR_4:
164
+ case DIOR_4:
165
+ return 4;
166
+ default:
167
+ return (s->regs[R_CMND] & R_CMND_EXT_ADD) ? 4 : 3;
168
+ }
169
+}
170
+
171
static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
172
{
202
{
173
int debug_level = 0;
203
- /* todo */
174
+ XilinxQSPIPS *q = (XilinxQSPIPS *) object_dynamic_cast(OBJECT(s),
204
+ GICv3CPUState *cs = icc_cs_from_env(env);
175
+ TYPE_XILINX_QSPIPS);
205
+ int idx = hppvi_index(cs);
176
206
uint64_t intid = INTID_SPURIOUS;
177
for (;;) {
207
+
178
int i;
208
+ if (idx >= 0 && idx != HPPVI_INDEX_VLPI) {
179
uint8_t tx = 0;
209
+ uint64_t lr = cs->ich_lr_el2[idx];
180
uint8_t tx_rx[num_effective_busses(s)];
210
+ int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0;
181
+ uint8_t dummy_cycles = 0;
211
+
182
+ uint8_t addr_length;
212
+ if ((thisgrp == GICV3_G1NS) && icv_hppi_can_preempt(cs, lr)) {
183
213
+ intid = ich_lr_vintid(lr);
184
if (fifo8_is_empty(&s->tx_fifo)) {
214
+ if (!gicv3_intid_is_special(intid)) {
185
if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
215
+ if (lr & ICH_LR_EL2_NMI) {
186
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
216
+ icv_activate_irq(cs, idx, GICV3_G1NS);
187
tx_rx[i] = fifo8_pop(&s->tx_fifo);
217
+ } else {
188
}
218
+ intid = INTID_SPURIOUS;
189
stripe8(tx_rx, num_effective_busses(s), false);
190
- } else {
191
+ } else if (s->snoop_state >= SNOOP_ADDR) {
192
tx = fifo8_pop(&s->tx_fifo);
193
for (i = 0; i < num_effective_busses(s); ++i) {
194
tx_rx[i] = tx;
195
}
196
+ } else {
197
+ /* Extract a dummy byte and generate dummy cycles according to the
198
+ * link state */
199
+ tx = fifo8_pop(&s->tx_fifo);
200
+ dummy_cycles = 8 / s->link_state;
201
}
202
203
for (i = 0; i < num_effective_busses(s); ++i) {
204
int bus = num_effective_busses(s) - 1 - i;
205
- DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
206
- tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
207
- DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
208
+ if (dummy_cycles) {
209
+ int d;
210
+ for (d = 0; d < dummy_cycles; ++d) {
211
+ tx_rx[0] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[0]);
212
+ }
219
+ }
213
+ } else {
220
+ } else {
214
+ DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
221
+ /* Interrupt goes from Pending to Invalid */
215
+ tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
222
+ cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
216
+ DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
223
+ /*
217
+ }
224
+ * We will now return the (bogus) ID from the list register,
218
}
225
+ * as per the pseudocode.
219
226
+ */
220
- if (fifo8_is_full(&s->rx_fifo)) {
221
+ if (s->regs[R_CMND] & R_CMND_RXFIFO_DRAIN) {
222
+ DB_PRINT_L(debug_level, "dircarding drained rx byte\n");
223
+ /* Do nothing */
224
+ } else if (s->rx_discard) {
225
+ DB_PRINT_L(debug_level, "dircarding discarded rx byte\n");
226
+ s->rx_discard -= 8 / s->link_state;
227
+ } else if (fifo8_is_full(&s->rx_fifo)) {
228
s->regs[R_INTR_STATUS] |= IXR_RX_FIFO_OVERFLOW;
229
DB_PRINT_L(0, "rx FIFO overflow");
230
} else if (s->snoop_state == SNOOP_STRIPING) {
231
stripe8(tx_rx, num_effective_busses(s), true);
232
for (i = 0; i < num_effective_busses(s); ++i) {
233
fifo8_push(&s->rx_fifo, (uint8_t)tx_rx[i]);
234
+ DB_PRINT_L(debug_level, "pushing striped rx byte\n");
235
}
236
} else {
237
+ DB_PRINT_L(debug_level, "pushing unstriped rx byte\n");
238
fifo8_push(&s->rx_fifo, (uint8_t)tx_rx[0]);
239
}
240
241
+ if (s->link_state_next_when) {
242
+ s->link_state_next_when--;
243
+ if (!s->link_state_next_when) {
244
+ s->link_state = s->link_state_next;
245
+ }
227
+ }
246
+ }
228
+ }
247
+
229
+ }
248
DB_PRINT_L(debug_level, "initial snoop state: %x\n",
230
+
249
(unsigned)s->snoop_state);
231
+ trace_gicv3_icv_nmiar1_read(gicv3_redist_affid(cs), intid);
250
switch (s->snoop_state) {
232
+
251
case (SNOOP_CHECKING):
233
+ gicv3_cpuif_virt_update(cs);
252
- switch (tx) { /* new instruction code */
234
+
253
- case READ: /* 3 address bytes, no dummy bytes/cycles */
235
return intid;
254
- case PP:
236
}
255
+ /* Store the count of dummy bytes in the txfifo */
237
256
+ s->cmd_dummies = xilinx_spips_num_dummies(q, tx);
238
@@ -XXX,XX +XXX,XX @@ static void icv_increment_eoicount(GICv3CPUState *cs)
257
+ addr_length = get_addr_length(s, tx);
239
ICH_HCR_EL2_EOICOUNT_LENGTH, eoicount + 1);
258
+ if (s->cmd_dummies < 0) {
240
}
259
+ s->snoop_state = SNOOP_NONE;
241
260
+ } else {
242
-static int icv_drop_prio(GICv3CPUState *cs)
261
+ s->snoop_state = SNOOP_ADDR + addr_length - 1;
243
+static int icv_drop_prio(GICv3CPUState *cs, bool *nmi)
262
+ }
263
+ switch (tx) {
264
case DPP:
265
- case QPP:
266
- s->snoop_state = 3;
267
- break;
268
- case FAST_READ: /* 3 address bytes, 1 dummy byte */
269
case DOR:
270
+ case DOR_4:
271
+ s->link_state_next = 2;
272
+ s->link_state_next_when = addr_length + s->cmd_dummies;
273
+ break;
274
+ case QPP:
275
+ case QPP_4:
276
case QOR:
277
- case DIOR: /* FIXME: these vary between vendor - set to spansion */
278
- s->snoop_state = 4;
279
+ case QOR_4:
280
+ s->link_state_next = 4;
281
+ s->link_state_next_when = addr_length + s->cmd_dummies;
282
+ break;
283
+ case DIOR:
284
+ case DIOR_4:
285
+ s->link_state = 2;
286
break;
287
- case QIOR: /* 3 address bytes, 2 dummy bytes */
288
- s->snoop_state = 6;
289
+ case QIOR:
290
+ case QIOR_4:
291
+ s->link_state = 4;
292
break;
293
- default:
294
+ }
295
+ break;
296
+ case (SNOOP_ADDR):
297
+ /* Address has been transmitted, transmit dummy cycles now if
298
+ * needed */
299
+ if (s->cmd_dummies < 0) {
300
s->snoop_state = SNOOP_NONE;
301
+ } else {
302
+ s->snoop_state = s->cmd_dummies;
303
}
304
break;
305
case (SNOOP_STRIPING):
306
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
307
uint64_t value, unsigned size)
308
{
244
{
309
XilinxQSPIPS *q = XILINX_QSPIPS(opaque);
245
/* Drop the priority of the currently active virtual interrupt
310
+ XilinxSPIPS *s = XILINX_SPIPS(opaque);
246
* (favouring group 0 if there is a set active bit at
311
247
@@ -XXX,XX +XXX,XX @@ static int icv_drop_prio(GICv3CPUState *cs)
312
xilinx_spips_write(opaque, addr, value, size);
248
continue;
313
addr >>= 2;
249
}
314
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
250
315
if (addr == R_LQSPI_CFG) {
251
+ if (i == 0 && cs->nmi_support && (*papr1 & ICV_AP1R_EL1_NMI)) {
316
xilinx_qspips_invalidate_mmio_ptr(q);
252
+ *papr1 &= (~ICV_AP1R_EL1_NMI);
253
+ *nmi = true;
254
+ return 0xff;
255
+ }
256
+
257
/* We can't just use the bit-twiddling hack icc_drop_prio() does
258
* because we need to return the bit number we cleared so
259
* it can be compared against the list register's priority field.
260
@@ -XXX,XX +XXX,XX @@ static void icv_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
261
int irq = value & 0xffffff;
262
int grp = ri->crm == 8 ? GICV3_G0 : GICV3_G1NS;
263
int idx, dropprio;
264
+ bool nmi = false;
265
266
trace_gicv3_icv_eoir_write(ri->crm == 8 ? 0 : 1,
267
gicv3_redist_affid(cs), value);
268
@@ -XXX,XX +XXX,XX @@ static void icv_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
269
* error checks" (because that lets us avoid scanning the AP
270
* registers twice).
271
*/
272
- dropprio = icv_drop_prio(cs);
273
- if (dropprio == 0xff) {
274
+ dropprio = icv_drop_prio(cs, &nmi);
275
+ if (dropprio == 0xff && !nmi) {
276
/* No active interrupt. It is CONSTRAINED UNPREDICTABLE
277
* whether the list registers are checked in this
278
* situation; we choose not to.
279
@@ -XXX,XX +XXX,XX @@ static void icv_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
280
uint64_t lr = cs->ich_lr_el2[idx];
281
int thisgrp = (lr & ICH_LR_EL2_GROUP) ? GICV3_G1NS : GICV3_G0;
282
int lr_gprio = ich_lr_prio(lr) & icv_gprio_mask(cs, grp);
283
+ bool thisnmi = lr & ICH_LR_EL2_NMI;
284
285
- if (thisgrp == grp && lr_gprio == dropprio) {
286
+ if (thisgrp == grp && (lr_gprio == dropprio || (thisnmi & nmi))) {
287
if (!icv_eoi_split(env, cs) || irq >= GICV3_LPI_INTID_START) {
288
/*
289
* Priority drop and deactivate not split: deactivate irq now.
290
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
291
292
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
293
294
- cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
295
+ if (cs->nmi_support) {
296
+ cs->ich_apr[grp][regno] = value & (0xFFFFFFFFU | ICV_AP1R_EL1_NMI);
297
+ } else {
298
+ cs->ich_apr[grp][regno] = value & 0xFFFFFFFFU;
299
+ }
300
gicv3_cpuif_virt_irq_fiq_update(cs);
301
}
302
303
@@ -XXX,XX +XXX,XX @@ static void ich_lr_write(CPUARMState *env, const ARMCPRegInfo *ri,
304
8 - cs->vpribits, 0);
317
}
305
}
318
+ if (s->regs[R_CMND] & R_CMND_RXFIFO_DRAIN) {
306
319
+ fifo8_reset(&s->rx_fifo);
307
+ /* Enforce RES0 bit in NMI field when FEAT_GICv3_NMI is not implemented */
320
+ }
308
+ if (!cs->nmi_support) {
321
}
309
+ value &= ~ICH_LR_EL2_NMI;
322
310
+ }
323
static const MemoryRegionOps qspips_ops = {
311
+
324
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_realize(DeviceState *dev, Error **errp)
312
cs->ich_lr_el2[regno] = value;
325
}
313
gicv3_cpuif_virt_update(cs);
326
314
}
327
s->cs_lines = g_new0(qemu_irq, s->num_cs * s->num_busses);
315
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
328
+ s->cs_lines_state = g_new0(bool, s->num_cs * s->num_busses);
316
index XXXXXXX..XXXXXXX 100644
329
for (i = 0, cs = s->cs_lines; i < s->num_busses; ++i, cs += s->num_cs) {
317
--- a/hw/intc/trace-events
330
ssi_auto_connect_slaves(DEVICE(s), cs, s->spi[i]);
318
+++ b/hw/intc/trace-events
331
}
319
@@ -XXX,XX +XXX,XX @@ gicv3_icv_rpr_read(uint32_t cpu, uint64_t val) "GICv3 ICV_RPR read cpu 0x%x valu
320
gicv3_icv_hppir_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_HPPIR%d read cpu 0x%x value 0x%" PRIx64
321
gicv3_icv_dir_write(uint32_t cpu, uint64_t val) "GICv3 ICV_DIR write cpu 0x%x value 0x%" PRIx64
322
gicv3_icv_iar_read(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_IAR%d read cpu 0x%x value 0x%" PRIx64
323
+gicv3_icv_nmiar1_read(uint32_t cpu, uint64_t val) "GICv3 ICV_NMIAR1 read cpu 0x%x value 0x%" PRIx64
324
gicv3_icv_eoir_write(int grp, uint32_t cpu, uint64_t val) "GICv3 ICV_EOIR%d write cpu 0x%x value 0x%" PRIx64
325
gicv3_cpuif_virt_update(uint32_t cpuid, int idx, int hppvlpi, int grp, int prio) "GICv3 CPU i/f 0x%x virt HPPI update LR index %d HPPVLPI %d grp %d prio %d"
326
gicv3_cpuif_virt_set_irqs(uint32_t cpuid, int fiqlevel, int irqlevel) "GICv3 CPU i/f 0x%x virt HPPI update: setting FIQ %d IRQ %d"
332
--
327
--
333
2.7.4
328
2.34.1
334
335
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Make tx/rx_data_bytes more generic so they can be reused (when adding
3
If the GICD_CTLR_DS bit is zero and the NMI is non-secure, the NMI priority is
4
support for the Zynqmp Generic QSPI).
4
higher than 0x80; otherwise it is higher than 0x0. Save the interrupt
5
non-maskable property in hppi.nmi so that the NMI exception can be delivered. Since both GICR
6
and GICD can deliver an NMI, it is necessary to check whether the pending
7
irq is an NMI in both gicv3_redist_update_noirqset and gicv3_update_noirqset.
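
The labelling rule above amounts to the following minimal sketch (the
parameter names are illustrative only, not QEMU state fields; in the
patch itself the logic lives in the new gicv3_get_priority() helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the effective priority used for a pending interrupt,
     * following the rule described above. 'is_nmi', 'ds' (GICD_CTLR.DS)
     * and 'group1ns' (interrupt is Non-secure Group 1) are illustrative
     * inputs. */
    static inline uint8_t effective_priority(bool is_nmi, bool ds,
                                             bool group1ns,
                                             uint8_t configured_prio)
    {
        if (!is_nmi) {
            return configured_prio;
        }
        /* DS == 0 and a Non-secure NMI: labelled 0x80, which ranks above
         * any Non-secure priority value; otherwise labelled 0x0. */
        return (!ds && group1ns) ? 0x80 : 0x0;
    }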
5
8
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
9
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20171126231634.9531-9-frasse.iglesias@gmail.com
12
Message-id: 20240407081733.3231820-21-ruanjinjie@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
14
---
12
hw/ssi/xilinx_spips.c | 64 +++++++++++++++++++++++++++++----------------------
15
hw/intc/arm_gicv3.c | 67 +++++++++++++++++++++++++++++++++-----
13
1 file changed, 37 insertions(+), 27 deletions(-)
16
hw/intc/arm_gicv3_common.c | 3 ++
17
hw/intc/arm_gicv3_redist.c | 3 ++
18
3 files changed, 64 insertions(+), 9 deletions(-)
14
19
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
20
diff --git a/hw/intc/arm_gicv3.c b/hw/intc/arm_gicv3.c
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
22
--- a/hw/intc/arm_gicv3.c
18
+++ b/hw/ssi/xilinx_spips.c
23
+++ b/hw/intc/arm_gicv3.c
19
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@
20
/* config register */
25
#include "hw/intc/arm_gicv3.h"
21
#define R_CONFIG (0x00 / 4)
26
#include "gicv3_internal.h"
22
#define IFMODE (1U << 31)
27
23
-#define ENDIAN (1 << 26)
28
-static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
24
+#define R_CONFIG_ENDIAN (1 << 26)
29
+static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio, bool nmi)
25
#define MODEFAIL_GEN_EN (1 << 17)
30
{
26
#define MAN_START_COM (1 << 16)
31
/* Return true if this IRQ at this priority should take
27
#define MAN_START_EN (1 << 15)
32
* precedence over the current recorded highest priority
28
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
33
@@ -XXX,XX +XXX,XX @@ static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
29
}
34
* is the same as this one (a property which the calling code
35
* relies on).
36
*/
37
- if (prio < cs->hppi.prio) {
38
- return true;
39
+ if (prio != cs->hppi.prio) {
40
+ return prio < cs->hppi.prio;
41
}
42
+
43
+ /*
44
+ * The same priority IRQ with non-maskable property should signal to
45
+ * the CPU as it have the priority higher than the labelled 0x80 or 0x00.
46
+ */
47
+ if (nmi != cs->hppi.nmi) {
48
+ return nmi;
49
+ }
50
+
51
/* If multiple pending interrupts have the same priority then it is an
52
* IMPDEF choice which of them to signal to the CPU. We choose to
53
* signal the one with the lowest interrupt number.
54
*/
55
- if (prio == cs->hppi.prio && irq <= cs->hppi.irq) {
56
+ if (irq <= cs->hppi.irq) {
57
return true;
58
}
59
return false;
60
@@ -XXX,XX +XXX,XX @@ static uint32_t gicr_int_pending(GICv3CPUState *cs)
61
return pend;
30
}
62
}
31
63
32
-static inline void rx_data_bytes(XilinxSPIPS *s, uint8_t *value, int max)
64
+static bool gicv3_get_priority(GICv3CPUState *cs, bool is_redist, int irq,
33
+static inline void tx_data_bytes(Fifo8 *fifo, uint32_t value, int num, bool be)
65
+ uint8_t *prio)
34
{
66
+{
67
+ uint32_t nmi = 0x0;
68
+
69
+ if (is_redist) {
70
+ nmi = extract32(cs->gicr_inmir0, irq, 1);
71
+ } else {
72
+ nmi = *gic_bmp_ptr32(cs->gic->nmi, irq);
73
+ nmi = nmi & (1 << (irq & 0x1f));
74
+ }
75
+
76
+ if (nmi) {
77
+ /* DS = 0 & Non-secure NMI */
78
+ if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS) &&
79
+ ((is_redist && extract32(cs->gicr_igroupr0, irq, 1)) ||
80
+ (!is_redist && gicv3_gicd_group_test(cs->gic, irq)))) {
81
+ *prio = 0x80;
82
+ } else {
83
+ *prio = 0x0;
84
+ }
85
+
86
+ return true;
87
+ }
88
+
89
+ if (is_redist) {
90
+ *prio = cs->gicr_ipriorityr[irq];
91
+ } else {
92
+ *prio = cs->gic->gicd_ipriority[irq];
93
+ }
94
+
95
+ return false;
96
+}
97
+
98
/* Update the interrupt status after state in a redistributor
99
* or CPU interface has changed, but don't tell the CPU i/f.
100
*/
101
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
102
uint8_t prio;
35
int i;
103
int i;
36
+ for (i = 0; i < num && !fifo8_is_full(fifo); ++i) {
104
uint32_t pend;
37
+ if (be) {
105
+ bool nmi = false;
38
+ fifo8_push(fifo, (uint8_t)(value >> 24));
106
39
+ value <<= 8;
107
/* Find out which redistributor interrupts are eligible to be
40
+ } else {
108
* signaled to the CPU interface.
41
+ fifo8_push(fifo, (uint8_t)value);
109
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
42
+ value >>= 8;
110
if (!(pend & (1 << i))) {
43
+ }
111
continue;
44
+ }
45
+}
46
47
- for (i = 0; i < max && !fifo8_is_empty(&s->rx_fifo); ++i) {
48
- value[i] = fifo8_pop(&s->rx_fifo);
49
+static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
50
+{
51
+ int i;
52
+
53
+ for (i = 0; i < max && !fifo8_is_empty(fifo); ++i) {
54
+ value[i] = fifo8_pop(fifo);
55
}
56
+ return max - i;
57
}
58
59
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
60
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
61
uint32_t mask = ~0;
62
uint32_t ret;
63
uint8_t rx_buf[4];
64
+ int shortfall;
65
66
addr >>= 2;
67
switch (addr) {
68
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
69
break;
70
case R_RX_DATA:
71
memset(rx_buf, 0, sizeof(rx_buf));
72
- rx_data_bytes(s, rx_buf, s->num_txrx_bytes);
73
- ret = s->regs[R_CONFIG] & ENDIAN ? cpu_to_be32(*(uint32_t *)rx_buf)
74
- : cpu_to_le32(*(uint32_t *)rx_buf);
75
+ shortfall = rx_data_bytes(&s->rx_fifo, rx_buf, s->num_txrx_bytes);
76
+ ret = s->regs[R_CONFIG] & R_CONFIG_ENDIAN ?
77
+ cpu_to_be32(*(uint32_t *)rx_buf) :
78
+ cpu_to_le32(*(uint32_t *)rx_buf);
79
+ if (!(s->regs[R_CONFIG] & R_CONFIG_ENDIAN)) {
80
+ ret <<= 8 * shortfall;
81
+ }
82
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
83
xilinx_spips_update_ixr(s);
84
return ret;
85
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
86
87
}
88
89
-static inline void tx_data_bytes(XilinxSPIPS *s, uint32_t value, int num)
90
-{
91
- int i;
92
- for (i = 0; i < num && !fifo8_is_full(&s->tx_fifo); ++i) {
93
- if (s->regs[R_CONFIG] & ENDIAN) {
94
- fifo8_push(&s->tx_fifo, (uint8_t)(value >> 24));
95
- value <<= 8;
96
- } else {
97
- fifo8_push(&s->tx_fifo, (uint8_t)value);
98
- value >>= 8;
99
- }
100
- }
101
-}
102
-
103
static void xilinx_spips_write(void *opaque, hwaddr addr,
104
uint64_t value, unsigned size)
105
{
106
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
107
mask = 0;
108
break;
109
case R_TX_DATA:
110
- tx_data_bytes(s, (uint32_t)value, s->num_txrx_bytes);
111
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, s->num_txrx_bytes,
112
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
113
goto no_reg_update;
114
case R_TXD1:
115
- tx_data_bytes(s, (uint32_t)value, 1);
116
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 1,
117
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
118
goto no_reg_update;
119
case R_TXD2:
120
- tx_data_bytes(s, (uint32_t)value, 2);
121
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 2,
122
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
123
goto no_reg_update;
124
case R_TXD3:
125
- tx_data_bytes(s, (uint32_t)value, 3);
126
+ tx_data_bytes(&s->tx_fifo, (uint32_t)value, 3,
127
+ s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
128
goto no_reg_update;
129
}
130
s->regs[addr] = (s->regs[addr] & ~mask) | (value & mask);
131
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
132
133
while (cache_entry < LQSPI_CACHE_SIZE) {
134
for (i = 0; i < 64; ++i) {
135
- tx_data_bytes(s, 0, 1);
136
+ tx_data_bytes(&s->tx_fifo, 0, 1, false);
137
}
112
}
138
xilinx_spips_flush_txfifo(s);
113
- prio = cs->gicr_ipriorityr[i];
139
for (i = 0; i < 64; ++i) {
114
- if (irqbetter(cs, i, prio)) {
140
- rx_data_bytes(s, &q->lqspi_buf[cache_entry++], 1);
115
+ nmi = gicv3_get_priority(cs, true, i, &prio);
141
+ rx_data_bytes(&s->rx_fifo, &q->lqspi_buf[cache_entry++], 1);
116
+ if (irqbetter(cs, i, prio, nmi)) {
117
cs->hppi.irq = i;
118
cs->hppi.prio = prio;
119
+ cs->hppi.nmi = nmi;
120
seenbetter = true;
142
}
121
}
143
}
122
}
123
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
124
if ((cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) && cs->gic->lpi_enable &&
125
(cs->gic->gicd_ctlr & GICD_CTLR_EN_GRP1NS) &&
126
(cs->hpplpi.prio != 0xff)) {
127
- if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio)) {
128
+ if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio, cs->hpplpi.nmi)) {
129
cs->hppi.irq = cs->hpplpi.irq;
130
cs->hppi.prio = cs->hpplpi.prio;
131
+ cs->hppi.nmi = cs->hpplpi.nmi;
132
cs->hppi.grp = cs->hpplpi.grp;
133
seenbetter = true;
134
}
135
@@ -XXX,XX +XXX,XX @@ static void gicv3_update_noirqset(GICv3State *s, int start, int len)
136
int i;
137
uint8_t prio;
138
uint32_t pend = 0;
139
+ bool nmi = false;
140
141
assert(start >= GIC_INTERNAL);
142
assert(len > 0);
143
@@ -XXX,XX +XXX,XX @@ static void gicv3_update_noirqset(GICv3State *s, int start, int len)
144
*/
145
continue;
146
}
147
- prio = s->gicd_ipriority[i];
148
- if (irqbetter(cs, i, prio)) {
149
+ nmi = gicv3_get_priority(cs, false, i, &prio);
150
+ if (irqbetter(cs, i, prio, nmi)) {
151
cs->hppi.irq = i;
152
cs->hppi.prio = prio;
153
+ cs->hppi.nmi = nmi;
154
cs->seenbetter = true;
155
}
156
}
157
@@ -XXX,XX +XXX,XX @@ void gicv3_full_update_noirqset(GICv3State *s)
158
159
for (i = 0; i < s->num_cpu; i++) {
160
s->cpu[i].hppi.prio = 0xff;
161
+ s->cpu[i].hppi.nmi = false;
162
}
163
164
/* Note that we can guarantee that these functions will not
165
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
166
index XXXXXXX..XXXXXXX 100644
167
--- a/hw/intc/arm_gicv3_common.c
168
+++ b/hw/intc/arm_gicv3_common.c
169
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_reset_hold(Object *obj)
170
memset(cs->gicr_ipriorityr, 0, sizeof(cs->gicr_ipriorityr));
171
172
cs->hppi.prio = 0xff;
173
+ cs->hppi.nmi = false;
174
cs->hpplpi.prio = 0xff;
175
+ cs->hpplpi.nmi = false;
176
cs->hppvlpi.prio = 0xff;
177
+ cs->hppvlpi.nmi = false;
178
179
/* State in the CPU interface must *not* be reset here, because it
180
* is part of the CPU's reset domain, not the GIC device's.
181
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
182
index XXXXXXX..XXXXXXX 100644
183
--- a/hw/intc/arm_gicv3_redist.c
184
+++ b/hw/intc/arm_gicv3_redist.c
185
@@ -XXX,XX +XXX,XX @@ static void update_for_one_lpi(GICv3CPUState *cs, int irq,
186
((prio == hpp->prio) && (irq <= hpp->irq))) {
187
hpp->irq = irq;
188
hpp->prio = prio;
189
+ hpp->nmi = false;
190
/* LPIs and vLPIs are always non-secure Grp1 interrupts */
191
hpp->grp = GICV3_G1NS;
192
}
193
@@ -XXX,XX +XXX,XX @@ static void update_for_all_lpis(GICv3CPUState *cs, uint64_t ptbase,
194
int i, bit;
195
196
hpp->prio = 0xff;
197
+ hpp->nmi = false;
198
199
for (i = GICV3_LPI_INTID_START / 8; i < pendt_size / 8; i++) {
200
address_space_read(as, ptbase + i, MEMTXATTRS_UNSPECIFIED, &pend, 1);
201
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_vlpi_only(GICv3CPUState *cs)
202
203
if (!FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, VALID)) {
204
cs->hppvlpi.prio = 0xff;
205
+ cs->hppvlpi.nmi = false;
206
return;
207
}
144
208
145
--
209
--
146
2.7.4
210
2.34.1
147
148
diff view generated by jsdifflib
1
Now that ARMMMUFaultInfo is guaranteed to have enough information
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
to construct a fault status code, we can pass it in to the
3
deliver_fault() function and let it generate the correct type
4
of FSR for the destination, rather than relying on the value
5
provided by get_phys_addr().
6
2
7
I don't think there are any cases the old code was getting
3
In CPU Interface, if the IRQ has the non-maskable property, report NMI to
8
wrong, but this is more obviously correct.
4
the corresponding PE.
9
5
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20240407081733.3231820-22-ruanjinjie@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
14
Message-id: 1512503192-2239-10-git-send-email-peter.maydell@linaro.org
15
---
11
---
16
target/arm/op_helper.c | 79 ++++++++++++++------------------------------------
12
hw/intc/arm_gicv3_cpuif.c | 4 ++++
17
1 file changed, 22 insertions(+), 57 deletions(-)
13
1 file changed, 4 insertions(+)
18
14
19
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
15
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/op_helper.c
17
--- a/hw/intc/arm_gicv3_cpuif.c
22
+++ b/target/arm/op_helper.c
18
+++ b/hw/intc/arm_gicv3_cpuif.c
23
@@ -XXX,XX +XXX,XX @@ static inline uint32_t merge_syn_data_abort(uint32_t template_syn,
19
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_update(GICv3CPUState *cs)
20
/* Tell the CPU about its highest priority pending interrupt */
21
int irqlevel = 0;
22
int fiqlevel = 0;
23
+ int nmilevel = 0;
24
ARMCPU *cpu = ARM_CPU(cs->cpu);
25
CPUARMState *env = &cpu->env;
26
27
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_update(GICv3CPUState *cs)
28
29
if (isfiq) {
30
fiqlevel = 1;
31
+ } else if (cs->hppi.nmi) {
32
+ nmilevel = 1;
33
} else {
34
irqlevel = 1;
35
}
36
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_update(GICv3CPUState *cs)
37
38
qemu_set_irq(cs->parent_fiq, fiqlevel);
39
qemu_set_irq(cs->parent_irq, irqlevel);
40
+ qemu_set_irq(cs->parent_nmi, nmilevel);
24
}
41
}
25
42
26
static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
43
static uint64_t icc_pmr_read(CPUARMState *env, const ARMCPRegInfo *ri)
27
- uint32_t fsr, uint32_t fsc, ARMMMUFaultInfo *fi)
28
+ int mmu_idx, ARMMMUFaultInfo *fi)
29
{
30
CPUARMState *env = &cpu->env;
31
int target_el;
32
bool same_el;
33
- uint32_t syn, exc;
34
+ uint32_t syn, exc, fsr, fsc;
35
+ ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
36
37
target_el = exception_target_el(env);
38
if (fi->stage2) {
39
@@ -XXX,XX +XXX,XX @@ static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
40
}
41
same_el = (arm_current_el(env) == target_el);
42
43
- if (fsc == 0x3f) {
44
- /* Caller doesn't have a long-format fault status code. This
45
- * should only happen if this fault will never actually be reported
46
- * to an EL that uses a syndrome register. Check that here.
47
- * 0x3f is a (currently) reserved FSC code, in case the constructed
48
- * syndrome does leak into the guest somehow.
49
+ if (target_el == 2 || arm_el_is_aa64(env, target_el) ||
50
+ arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
51
+ /* LPAE format fault status register : bottom 6 bits are
52
+ * status code in the same form as needed for syndrome
53
+ */
54
+ fsr = arm_fi_to_lfsc(fi);
55
+ fsc = extract32(fsr, 0, 6);
56
+ } else {
57
+ fsr = arm_fi_to_sfsc(fi);
58
+ /* Short format FSR : this fault will never actually be reported
59
+ * to an EL that uses a syndrome register. Use a (currently)
60
+ * reserved FSR code in case the constructed syndrome does leak
61
+ * into the guest somehow.
62
*/
63
- assert(target_el != 2 && !arm_el_is_aa64(env, target_el));
64
+ fsc = 0x3f;
65
}
66
67
if (access_type == MMU_INST_FETCH) {
68
@@ -XXX,XX +XXX,XX @@ void tlb_fill(CPUState *cs, target_ulong addr, MMUAccessType access_type,
69
ret = arm_tlb_fill(cs, addr, access_type, mmu_idx, &fsr, &fi);
70
if (unlikely(ret)) {
71
ARMCPU *cpu = ARM_CPU(cs);
72
- uint32_t fsc;
73
74
if (retaddr) {
75
/* now we have a real cpu fault */
76
cpu_restore_state(cs, retaddr);
77
}
78
79
- if (fsr & (1 << 9)) {
80
- /* LPAE format fault status register : bottom 6 bits are
81
- * status code in the same form as needed for syndrome
82
- */
83
- fsc = extract32(fsr, 0, 6);
84
- } else {
85
- /* Short format FSR : this fault will never actually be reported
86
- * to an EL that uses a syndrome register. Use a (currently)
87
- * reserved FSR code in case the constructed syndrome does leak
88
- * into the guest somehow. deliver_fault will assert that
89
- * we don't target an EL using the syndrome.
90
- */
91
- fsc = 0x3f;
92
- }
93
-
94
- deliver_fault(cpu, addr, access_type, fsr, fsc, &fi);
95
+ deliver_fault(cpu, addr, access_type, mmu_idx, &fi);
96
}
97
}
98
99
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
100
int mmu_idx, uintptr_t retaddr)
101
{
102
ARMCPU *cpu = ARM_CPU(cs);
103
- CPUARMState *env = &cpu->env;
104
- uint32_t fsr, fsc;
105
ARMMMUFaultInfo fi = {};
106
- ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
107
108
if (retaddr) {
109
/* now we have a real cpu fault */
110
cpu_restore_state(cs, retaddr);
111
}
112
113
- /* the DFSR for an alignment fault depends on whether we're using
114
- * the LPAE long descriptor format, or the short descriptor format
115
- */
116
- if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
117
- fsr = (1 << 9) | 0x21;
118
- } else {
119
- fsr = 0x1;
120
- }
121
- fsc = 0x21;
122
-
123
- deliver_fault(cpu, vaddr, access_type, fsr, fsc, &fi);
124
+ fi.type = ARMFault_Alignment;
125
+ deliver_fault(cpu, vaddr, access_type, mmu_idx, &fi);
126
}
127
128
/* arm_cpu_do_transaction_failed: handle a memory system error response
129
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
130
MemTxResult response, uintptr_t retaddr)
131
{
132
ARMCPU *cpu = ARM_CPU(cs);
133
- CPUARMState *env = &cpu->env;
134
- uint32_t fsr, fsc;
135
ARMMMUFaultInfo fi = {};
136
- ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
137
138
if (retaddr) {
139
/* now we have a real cpu fault */
140
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
141
* Slave error (1); in QEMU we follow that.
142
*/
143
fi.ea = (response != MEMTX_DECODE_ERROR);
144
-
145
- /* The fault status register format depends on whether we're using
146
- * the LPAE long descriptor format, or the short descriptor format.
147
- */
148
- if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
149
- /* long descriptor form, STATUS 0b010000: synchronous ext abort */
150
- fsr = (fi.ea << 12) | (1 << 9) | 0x10;
151
- } else {
152
- /* short descriptor form, FSR 0b01000 : synchronous ext abort */
153
- fsr = (fi.ea << 12) | 0x8;
154
- }
155
- fsc = 0x10;
156
-
157
- deliver_fault(cpu, addr, access_type, fsr, fsc, &fi);
158
+ fi.type = ARMFault_SyncExternal;
159
+ deliver_fault(cpu, addr, access_type, mmu_idx, &fi);
160
}
161
162
#endif /* !defined(CONFIG_USER_ONLY) */
163
--
44
--
164
2.7.4
45
2.34.1
165
166
diff view generated by jsdifflib
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Add support for 4 byte addresses in the LQSPI and correct LQSPI_CFG_SEP_BUS.
3
In vCPU Interface, if the vIRQ has the non-maskable property, report
4
vINMI to the corresponding vPE.
4
5
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
6
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Message-id: 20240407081733.3231820-23-ruanjinjie@huawei.com
9
Message-id: 20171126231634.9531-11-frasse.iglesias@gmail.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
hw/ssi/xilinx_spips.c | 6 +++++-
12
hw/intc/arm_gicv3_cpuif.c | 14 ++++++++++++--
13
1 file changed, 5 insertions(+), 1 deletion(-)
13
1 file changed, 12 insertions(+), 2 deletions(-)
14
14
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
15
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
17
--- a/hw/intc/arm_gicv3_cpuif.c
18
+++ b/hw/ssi/xilinx_spips.c
18
+++ b/hw/intc/arm_gicv3_cpuif.c
19
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
20
#define R_LQSPI_CFG_RESET 0x03A002EB
20
int idx;
21
#define LQSPI_CFG_LQ_MODE (1U << 31)
21
int irqlevel = 0;
22
#define LQSPI_CFG_TWO_MEM (1 << 30)
22
int fiqlevel = 0;
23
-#define LQSPI_CFG_SEP_BUS (1 << 30)
23
+ int nmilevel = 0;
24
+#define LQSPI_CFG_SEP_BUS (1 << 29)
24
25
#define LQSPI_CFG_U_PAGE (1 << 28)
25
idx = hppvi_index(cs);
26
+#define LQSPI_CFG_ADDR4 (1 << 27)
26
trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx,
27
#define LQSPI_CFG_MODE_EN (1 << 25)
27
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
28
#define LQSPI_CFG_MODE_WIDTH 8
28
uint64_t lr = cs->ich_lr_el2[idx];
29
#define LQSPI_CFG_MODE_SHIFT 16
29
30
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
30
if (icv_hppi_can_preempt(cs, lr)) {
31
fifo8_push(&s->tx_fifo, s->regs[R_LQSPI_CFG] & LQSPI_CFG_INST_CODE);
31
- /* Virtual interrupts are simple: G0 are always FIQ, and G1 IRQ */
32
/* read address */
32
+ /*
33
DB_PRINT_L(0, "pushing read address %06x\n", flash_addr);
33
+ * Virtual interrupts are simple: G0 are always FIQ, and G1 are
34
+ if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_ADDR4) {
34
+ * IRQ or NMI which depends on the ICH_LR<n>_EL2.NMI to have
35
+ fifo8_push(&s->tx_fifo, (uint8_t)(flash_addr >> 24));
35
+ * non-maskable property.
36
+ }
36
+ */
37
fifo8_push(&s->tx_fifo, (uint8_t)(flash_addr >> 16));
37
if (lr & ICH_LR_EL2_GROUP) {
38
fifo8_push(&s->tx_fifo, (uint8_t)(flash_addr >> 8));
38
- irqlevel = 1;
39
fifo8_push(&s->tx_fifo, (uint8_t)flash_addr);
39
+ if (lr & ICH_LR_EL2_NMI) {
40
+ nmilevel = 1;
41
+ } else {
42
+ irqlevel = 1;
43
+ }
44
} else {
45
fiqlevel = 1;
46
}
47
@@ -XXX,XX +XXX,XX @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
48
trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel, irqlevel);
49
qemu_set_irq(cs->parent_vfiq, fiqlevel);
50
qemu_set_irq(cs->parent_virq, irqlevel);
51
+ qemu_set_irq(cs->parent_vnmi, nmilevel);
52
}
53
54
static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
40
--
55
--
41
2.7.4
56
2.34.1
42
43
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Don't set TX FIFO UNDERFLOW interrupt after transmitting the commands.
3
Enable FEAT_NMI on the 'max' CPU.
4
Also update interrupts after reading out the interrupt status.
5
4
6
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
5
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
7
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20240407081733.3231820-24-ruanjinjie@huawei.com
10
Message-id: 20171126231634.9531-12-frasse.iglesias@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
10
---
13
hw/ssi/xilinx_spips.c | 4 +---
11
docs/system/arm/emulation.rst | 1 +
14
1 file changed, 1 insertion(+), 3 deletions(-)
12
target/arm/tcg/cpu64.c | 1 +
13
2 files changed, 2 insertions(+)
15
14
16
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
15
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/ssi/xilinx_spips.c
17
--- a/docs/system/arm/emulation.rst
19
+++ b/hw/ssi/xilinx_spips.c
18
+++ b/docs/system/arm/emulation.rst
20
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
19
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
uint8_t addr_length;
20
- FEAT_MTE (Memory Tagging Extension)
22
21
- FEAT_MTE2 (Memory Tagging Extension)
23
if (fifo8_is_empty(&s->tx_fifo)) {
22
- FEAT_MTE3 (MTE Asymmetric Fault Handling)
24
- if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
23
+- FEAT_NMI (Non-maskable Interrupt)
25
- s->regs[R_INTR_STATUS] |= IXR_TX_FIFO_UNDERFLOW;
24
- FEAT_NV (Nested Virtualization)
26
- }
25
- FEAT_NV2 (Enhanced nested virtualization support)
27
xilinx_spips_update_ixr(s);
26
- FEAT_PACIMP (Pointer authentication - IMPLEMENTATION DEFINED algorithm)
28
return;
27
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
29
} else if (s->snoop_state == SNOOP_STRIPING) {
28
index XXXXXXX..XXXXXXX 100644
30
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
29
--- a/target/arm/tcg/cpu64.c
31
ret = s->regs[addr] & IXR_ALL;
30
+++ b/target/arm/tcg/cpu64.c
32
s->regs[addr] = 0;
31
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
33
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
32
t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 0); /* FEAT_RASv1p1 + FEAT_DoubleFault */
34
+ xilinx_spips_update_ixr(s);
33
t = FIELD_DP64(t, ID_AA64PFR1, SME, 1); /* FEAT_SME */
35
return ret;
34
t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
36
case R_INTR_MASK:
35
+ t = FIELD_DP64(t, ID_AA64PFR1, NMI, 1); /* FEAT_NMI */
37
mask = IXR_ALL;
36
cpu->isar.id_aa64pfr1 = t;
37
38
t = cpu->isar.id_aa64mmfr0;
38
--
39
--
39
2.7.4
40
2.34.1
40
41
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Jinjie Ruan <ruanjinjie@huawei.com>
2
2
3
Update the striping functionality to use big-endian bit order (according to
3
If the CPU implements FEAT_NMI, then turn on the NMI support in the
4
the Zynq-7000 Technical Reference Manual). Then output the even bits
4
GICv3 too. It's permitted to have a configuration with FEAT_NMI in
5
into the flash memory connected to the lower QSPI bus and the odd bits into
5
the CPU (and thus NMI support in the CPU interfaces too) but no NMI
6
the flash memory connected to the upper QSPI bus.
6
support in the distributor and redistributor, but this isn't a very
7
useful setup as it's close to having no NMI support at all.
7
8
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
9
We don't need to gate the enabling of NMI in the GIC behind a
9
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
10
machine version property, because none of our current CPUs
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
implement FEAT_NMI, and '-cpu max' is not something we maintain
11
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
12
migration compatibility across versions for. So we can always
12
Message-id: 20171126231634.9531-7-frasse.iglesias@gmail.com
13
enable the GIC NMI support when the CPU has it.
14
15
Neither hvf nor KVM support NMI in the GIC yet, so we don't enable
16
it unless we're using TCG.
17
18
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20240407081733.3231820-25-ruanjinjie@huawei.com
21
[PMM: Update comment and commit message]
22
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
24
---
15
hw/ssi/xilinx_spips.c | 19 ++++++++++---------
25
hw/arm/virt.c | 19 +++++++++++++++++++
16
1 file changed, 10 insertions(+), 9 deletions(-)
26
1 file changed, 19 insertions(+)
17
27
18
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
28
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
19
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/ssi/xilinx_spips.c
30
--- a/hw/arm/virt.c
21
+++ b/hw/ssi/xilinx_spips.c
31
+++ b/hw/arm/virt.c
22
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
32
@@ -XXX,XX +XXX,XX @@ static void create_v2m(VirtMachineState *vms)
23
xilinx_spips_update_cs_lines(s);
33
vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
24
}
34
}
25
35
26
-/* N way (num) in place bit striper. Lay out row wise bits (LSB to MSB)
36
+/*
27
+/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
37
+ * If the CPU has FEAT_NMI, then turn on the NMI support in the GICv3 too.
28
* column wise (from element 0 to N-1). num is the length of x, and dir
38
+ * It's permitted to have a configuration with NMI in the CPU (and thus the
29
* reverses the direction of the transform. Best illustrated by example:
39
+ * GICv3 CPU interface) but not in the distributor/redistributors, but it's
30
* Each digit in the below array is a single bit (num == 3):
40
+ * not very useful.
31
*
41
+ */
32
- * {{ 76543210, } ----- stripe (dir == false) -----> {{ FCheb630, }
42
+static bool gicv3_nmi_present(VirtMachineState *vms)
33
- * { hgfedcba, } { GDAfc741, }
43
+{
34
- * { HGFEDCBA, }} <---- upstripe (dir == true) ----- { HEBgda52, }}
44
+ ARMCPU *cpu = ARM_CPU(qemu_get_cpu(0));
35
+ * {{ 76543210, } ----- stripe (dir == false) -----> {{ 741gdaFC, }
45
+
36
+ * { hgfedcba, } { 630fcHEB, }
46
+ return tcg_enabled() && cpu_isar_feature(aa64_nmi, cpu) &&
37
+ * { HGFEDCBA, }} <---- upstripe (dir == true) ----- { 52hebGDA, }}
47
+ (vms->gic_version != VIRT_GIC_VERSION_2);
38
*/
48
+}
39
49
+
40
static inline void stripe8(uint8_t *x, int num, bool dir)
50
static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
41
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
51
{
42
uint8_t r[num];
52
MachineState *ms = MACHINE(vms);
43
memset(r, 0, sizeof(uint8_t) * num);
53
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
44
int idx[2] = {0, 0};
54
vms->virt);
45
- int bit[2] = {0, 0};
46
+ int bit[2] = {0, 7};
47
int d = dir;
48
49
for (idx[0] = 0; idx[0] < num; ++idx[0]) {
50
- for (bit[0] = 0; bit[0] < 8; ++bit[0]) {
51
- r[idx[d]] |= x[idx[!d]] & 1 << bit[!d] ? 1 << bit[d] : 0;
52
+ for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
53
+ r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
54
idx[1] = (idx[1] + 1) % num;
55
if (!idx[1]) {
56
- bit[1]++;
57
+ bit[1]--;
58
}
59
}
55
}
60
}
56
}
61
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_flush_txfifo(XilinxSPIPS *s)
57
+
62
}
58
+ if (gicv3_nmi_present(vms)) {
63
59
+ qdev_prop_set_bit(vms->gic, "has-nmi", true);
64
for (i = 0; i < num_effective_busses(s); ++i) {
60
+ }
65
+ int bus = num_effective_busses(s) - 1 - i;
61
+
66
DB_PRINT_L(debug_level, "tx = %02x\n", tx_rx[i]);
62
gicbusdev = SYS_BUS_DEVICE(vms->gic);
67
- tx_rx[i] = ssi_transfer(s->spi[i], (uint32_t)tx_rx[i]);
63
sysbus_realize_and_unref(gicbusdev, &error_fatal);
68
+ tx_rx[i] = ssi_transfer(s->spi[bus], (uint32_t)tx_rx[i]);
64
sysbus_mmio_map(gicbusdev, 0, vms->memmap[VIRT_GIC_DIST].base);
69
DB_PRINT_L(debug_level, "rx = %02x\n", tx_rx[i]);
70
}
71
72
--
65
--
73
2.7.4
66
2.34.1
74
75
1
From: Prasad J Pandit <pjp@fedoraproject.org>
1
From: Anastasia Belova <abelova@astralinux.ru>
2
2
3
The ctz32() routine could return a value greater than
3
In soc_dma_set_request() we try to set a bit in a uint64_t, but we
4
TC6393XB_GPIOS=16, because the device has 24 GPIO level
4
do it with "1 << ch->num", which can't set any bits past 31;
5
bits but we only implement 16 outgoing lines. This could
5
any use of a channel number of 32 or more would fail due to
6
lead to an OOB array access. Mask 'level' to avoid it.
6
integer overflow.
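
The width problem is easy to demonstrate in isolation; a minimal
standalone sketch (not QEMU code, and the channel number here is
hypothetical):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int ch_num = 33;            /* hypothetical channel number >= 32 */

        /* Wrong: "1 << ch_num" is evaluated in 32-bit int, so it
         * overflows before any widening to uint64_t takes place. */
        /* uint64_t bad = 1 << ch_num; */

        /* Right: force 64-bit arithmetic first, as the patch does. */
        uint64_t good = (uint64_t)1 << ch_num;

        printf("bit %d -> 0x%016" PRIx64 "\n", ch_num, good);
        return 0;
    }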
7
7
8
Reported-by: Moguofang <moguofang@huawei.com>
8
This doesn't happen in practice for our current use of this code,
9
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
9
because the worst case is when we call soc_dma_init() with an
10
Message-id: 20171212041539.25700-1-ppandit@redhat.com
10
argument of 32 for the number of channels, and QEMU builds with
11
-fwrapv so the shift into the sign bit is well-defined. However,
12
it's obviously not the intended behaviour of the code.
13
14
Add casts to force the shift to be done as 64-bit arithmetic,
15
allowing up to 64 channels.
16
17
Found by Linux Verification Center (linuxtesting.org) with SVACE.
18
19
Fixes: afbb5194d4 ("Handle on-chip DMA controllers in one place, convert OMAP DMA to use it.")
20
Signed-off-by: Anastasia Belova <abelova@astralinux.ru>
21
Message-id: 20240409115301.21829-1-abelova@astralinux.ru
22
[PMM: Edit commit message to clarify that this doesn't actually
23
bite us in our current usage of this code.]
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
26
---
14
hw/display/tc6393xb.c | 1 +
27
hw/dma/soc_dma.c | 4 ++--
15
1 file changed, 1 insertion(+)
28
1 file changed, 2 insertions(+), 2 deletions(-)
16
29
17
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
30
diff --git a/hw/dma/soc_dma.c b/hw/dma/soc_dma.c
18
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/display/tc6393xb.c
32
--- a/hw/dma/soc_dma.c
20
+++ b/hw/display/tc6393xb.c
33
+++ b/hw/dma/soc_dma.c
21
@@ -XXX,XX +XXX,XX @@ static void tc6393xb_gpio_handler_update(TC6393xbState *s)
34
@@ -XXX,XX +XXX,XX @@ void soc_dma_set_request(struct soc_dma_ch_s *ch, int level)
22
int bit;
35
dma->enabled_count += level - ch->enable;
23
36
24
level = s->gpio_level & s->gpio_dir;
37
if (level)
25
+ level &= MAKE_64BIT_MASK(0, TC6393XB_GPIOS);
38
- dma->ch_enable_mask |= 1 << ch->num;
26
39
+ dma->ch_enable_mask |= (uint64_t)1 << ch->num;
27
for (diff = s->prev_level ^ level; diff; diff ^= 1 << bit) {
40
else
28
bit = ctz32(diff);
41
- dma->ch_enable_mask &= ~(1 << ch->num);
42
+ dma->ch_enable_mask &= ~((uint64_t)1 << ch->num);
43
44
if (level != ch->enable) {
45
soc_dma_ch_freq_update(dma);
29
--
46
--
30
2.7.4
47
2.34.1
31
32
1
From: Eric Auger <eric.auger@redhat.com>
1
Ever since the bFLT format support was added in 2006, there has been
2
2
a chunk of code in the file guarded by CONFIG_BINFMT_SHARED_FLAT
3
Update headers against v4.15-rc1.
3
which is supposedly for shared library support. This is not enabled
4
4
and it's not possible to enable it, because if you do you'll run into
5
Signed-off-by: Eric Auger <eric.auger@redhat.com>
5
the "#error needs checking" in the calc_reloc() function.
6
Message-id: 1511883692-11511-4-git-send-email-eric.auger@redhat.com
6
7
Similarly, CONFIG_BINFMT_ZFLAT exists but can't be enabled because of
8
an "#error code needs checking" in load_flat_file().
9
10
This code is obviously unfinished and has never been used; nobody in
11
the intervening 18 years has complained about this or fixed it, so
12
just delete the dead code. If anybody ever wants the feature they
13
can always pull it out of git, or (perhaps better) write it from
14
scratch based on the current Linux bFLT loader rather than the one of
15
18 years ago.
16
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
19
Message-id: 20240411115313.680433-1-peter.maydell@linaro.org
8
---
20
---
9
include/standard-headers/asm-s390/virtio-ccw.h | 1 +
21
linux-user/flat.h | 5 +-
10
include/standard-headers/asm-x86/hyperv.h | 394 +--------------------
22
linux-user/flatload.c | 293 ++----------------------------------------
11
include/standard-headers/linux/input-event-codes.h | 2 +
23
2 files changed, 11 insertions(+), 287 deletions(-)
12
include/standard-headers/linux/input.h | 1 +
24
13
include/standard-headers/linux/pci_regs.h | 45 ++-
25
diff --git a/linux-user/flat.h b/linux-user/flat.h
14
linux-headers/asm-arm/kvm.h | 8 +
15
linux-headers/asm-arm/kvm_para.h | 1 +
16
linux-headers/asm-arm/unistd.h | 2 +
17
linux-headers/asm-arm64/kvm.h | 8 +
18
linux-headers/asm-arm64/unistd.h | 1 +
19
linux-headers/asm-powerpc/epapr_hcalls.h | 1 +
20
linux-headers/asm-powerpc/kvm.h | 1 +
21
linux-headers/asm-powerpc/kvm_para.h | 1 +
22
linux-headers/asm-powerpc/unistd.h | 1 +
23
linux-headers/asm-s390/kvm.h | 1 +
24
linux-headers/asm-s390/kvm_para.h | 1 +
25
linux-headers/asm-s390/unistd.h | 4 +-
26
linux-headers/asm-x86/kvm.h | 1 +
27
linux-headers/asm-x86/kvm_para.h | 2 +-
28
linux-headers/asm-x86/unistd.h | 1 +
29
linux-headers/linux/kvm.h | 2 +
30
linux-headers/linux/kvm_para.h | 1 +
31
linux-headers/linux/psci.h | 1 +
32
linux-headers/linux/userfaultfd.h | 1 +
33
linux-headers/linux/vfio.h | 1 +
34
linux-headers/linux/vfio_ccw.h | 1 +
35
linux-headers/linux/vhost.h | 1 +
36
27 files changed, 74 insertions(+), 411 deletions(-)
37
38
diff --git a/include/standard-headers/asm-s390/virtio-ccw.h b/include/standard-headers/asm-s390/virtio-ccw.h
39
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
40
--- a/include/standard-headers/asm-s390/virtio-ccw.h
27
--- a/linux-user/flat.h
41
+++ b/include/standard-headers/asm-s390/virtio-ccw.h
28
+++ b/linux-user/flat.h
42
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@
43
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
30
31
#define    FLAT_VERSION            0x00000004L
32
33
-#ifdef CONFIG_BINFMT_SHARED_FLAT
34
-#define    MAX_SHARED_LIBS            (4)
35
-#else
36
+/* QEMU doesn't support bflt shared libraries */
37
#define    MAX_SHARED_LIBS            (1)
38
-#endif
39
44
/*
40
/*
45
* Definitions for virtio-ccw devices.
41
* To make everything easier to port and manage cross platform
46
*
42
diff --git a/linux-user/flatload.c b/linux-user/flatload.c
47
diff --git a/include/standard-headers/asm-x86/hyperv.h b/include/standard-headers/asm-x86/hyperv.h
48
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
49
--- a/include/standard-headers/asm-x86/hyperv.h
44
--- a/linux-user/flatload.c
50
+++ b/include/standard-headers/asm-x86/hyperv.h
45
+++ b/linux-user/flatload.c
51
@@ -1,393 +1 @@
46
@@ -XXX,XX +XXX,XX @@
52
-#ifndef _ASM_X86_HYPERV_H
47
*    JAN/99 -- coded full program relocation (gerg@snapgear.com)
53
-#define _ASM_X86_HYPERV_H
48
*/
54
-
49
55
-#include "standard-headers/linux/types.h"
50
-/* ??? ZFLAT and shared library support is currently disabled. */
51
-
52
/****************************************************************************/
53
54
#include "qemu/osdep.h"
55
@@ -XXX,XX +XXX,XX @@ struct lib_info {
56
short loaded;        /* Has this library been loaded? */
57
};
58
59
-#ifdef CONFIG_BINFMT_SHARED_FLAT
60
-static int load_flat_shared_library(int id, struct lib_info *p);
61
-#endif
62
-
63
struct linux_binprm;
64
65
/****************************************************************************/
66
@@ -XXX,XX +XXX,XX @@ static int target_pread(int fd, abi_ulong ptr, abi_ulong len,
67
unlock_user(buf, ptr, len);
68
return ret;
69
}
70
-/****************************************************************************/
71
-
72
-#ifdef CONFIG_BINFMT_ZFLAT
73
-
74
-#include <linux/zlib.h>
75
-
76
-#define LBUFSIZE    4000
77
-
78
-/* gzip flag byte */
79
-#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
80
-#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
81
-#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
82
-#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
83
-#define COMMENT 0x10 /* bit 4 set: file comment present */
84
-#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
85
-#define RESERVED 0xC0 /* bit 6,7: reserved */
86
-
87
-static int decompress_exec(
88
-    struct linux_binprm *bprm,
89
-    unsigned long offset,
90
-    char *dst,
91
-    long len,
92
-    int fd)
93
-{
94
-    unsigned char *buf;
95
-    z_stream strm;
96
-    loff_t fpos;
97
-    int ret, retval;
98
-
99
-    DBG_FLT("decompress_exec(offset=%x,buf=%x,len=%x)\n",(int)offset, (int)dst, (int)len);
100
-
101
-    memset(&strm, 0, sizeof(strm));
102
-    strm.workspace = kmalloc(zlib_inflate_workspacesize(), GFP_KERNEL);
103
-    if (strm.workspace == NULL) {
104
-        DBG_FLT("binfmt_flat: no memory for decompress workspace\n");
105
-        return -ENOMEM;
106
-    }
107
-    buf = kmalloc(LBUFSIZE, GFP_KERNEL);
108
-    if (buf == NULL) {
109
-        DBG_FLT("binfmt_flat: no memory for read buffer\n");
110
-        retval = -ENOMEM;
111
-        goto out_free;
112
-    }
113
-
114
-    /* Read in first chunk of data and parse gzip header. */
115
-    fpos = offset;
116
-    ret = bprm->file->f_op->read(bprm->file, buf, LBUFSIZE, &fpos);
117
-
118
-    strm.next_in = buf;
119
-    strm.avail_in = ret;
120
-    strm.total_in = 0;
121
-
122
-    retval = -ENOEXEC;
123
-
124
-    /* Check minimum size -- gzip header */
125
-    if (ret < 10) {
126
-        DBG_FLT("binfmt_flat: file too small?\n");
127
-        goto out_free_buf;
128
-    }
129
-
130
-    /* Check gzip magic number */
131
-    if ((buf[0] != 037) || ((buf[1] != 0213) && (buf[1] != 0236))) {
132
-        DBG_FLT("binfmt_flat: unknown compression magic?\n");
133
-        goto out_free_buf;
134
-    }
135
-
136
-    /* Check gzip method */
137
-    if (buf[2] != 8) {
138
-        DBG_FLT("binfmt_flat: unknown compression method?\n");
139
-        goto out_free_buf;
140
-    }
141
-    /* Check gzip flags */
142
-    if ((buf[3] & ENCRYPTED) || (buf[3] & CONTINUATION) ||
143
-     (buf[3] & RESERVED)) {
144
-        DBG_FLT("binfmt_flat: unknown flags?\n");
145
-        goto out_free_buf;
146
-    }
147
-
148
-    ret = 10;
149
-    if (buf[3] & EXTRA_FIELD) {
150
-        ret += 2 + buf[10] + (buf[11] << 8);
151
-        if (unlikely(LBUFSIZE == ret)) {
152
-            DBG_FLT("binfmt_flat: buffer overflow (EXTRA)?\n");
153
-            goto out_free_buf;
154
-        }
155
-    }
156
-    if (buf[3] & ORIG_NAME) {
157
-        for (; ret < LBUFSIZE && (buf[ret] != 0); ret++)
158
-            ;
159
-        if (unlikely(LBUFSIZE == ret)) {
160
-            DBG_FLT("binfmt_flat: buffer overflow (ORIG_NAME)?\n");
161
-            goto out_free_buf;
162
-        }
163
-    }
164
-    if (buf[3] & COMMENT) {
165
-        for (; ret < LBUFSIZE && (buf[ret] != 0); ret++)
166
-            ;
167
-        if (unlikely(LBUFSIZE == ret)) {
168
-            DBG_FLT("binfmt_flat: buffer overflow (COMMENT)?\n");
169
-            goto out_free_buf;
170
-        }
171
-    }
172
-
173
-    strm.next_in += ret;
174
-    strm.avail_in -= ret;
175
-
176
-    strm.next_out = dst;
177
-    strm.avail_out = len;
178
-    strm.total_out = 0;
179
-
180
-    if (zlib_inflateInit2(&strm, -MAX_WBITS) != Z_OK) {
181
-        DBG_FLT("binfmt_flat: zlib init failed?\n");
182
-        goto out_free_buf;
183
-    }
184
-
185
-    while ((ret = zlib_inflate(&strm, Z_NO_FLUSH)) == Z_OK) {
186
-        ret = bprm->file->f_op->read(bprm->file, buf, LBUFSIZE, &fpos);
187
-        if (ret <= 0)
188
-            break;
189
- if (is_error(ret)) {
190
-            break;
191
- }
192
-        len -= ret;
193
-
194
-        strm.next_in = buf;
195
-        strm.avail_in = ret;
196
-        strm.total_in = 0;
197
-    }
198
-
199
-    if (ret < 0) {
200
-        DBG_FLT("binfmt_flat: decompression failed (%d), %s\n",
201
-            ret, strm.msg);
202
-        goto out_zlib;
203
-    }
204
-
205
-    retval = 0;
206
-out_zlib:
207
-    zlib_inflateEnd(&strm);
208
-out_free_buf:
209
-    kfree(buf);
210
-out_free:
211
-    kfree(strm.workspace);
212
-out:
213
-    return retval;
214
-}
215
-
216
-#endif /* CONFIG_BINFMT_ZFLAT */
217
218
/****************************************************************************/
219
220
@@ -XXX,XX +XXX,XX @@ calc_reloc(abi_ulong r, struct lib_info *p, int curid, int internalp)
221
abi_ulong text_len;
222
abi_ulong start_code;
223
224
-#ifdef CONFIG_BINFMT_SHARED_FLAT
225
-#error needs checking
226
- if (r == 0)
227
- id = curid;    /* Relocs of 0 are always self referring */
228
- else {
229
- id = (r >> 24) & 0xff;    /* Find ID for this reloc */
230
- r &= 0x00ffffff;    /* Trim ID off here */
231
- }
232
- if (id >= MAX_SHARED_LIBS) {
233
- fprintf(stderr, "BINFMT_FLAT: reference 0x%x to shared library %d\n",
234
- (unsigned) r, id);
235
- goto failed;
236
- }
237
-        if (curid != id) {
-            if (internalp) {
-                fprintf(stderr, "BINFMT_FLAT: reloc address 0x%x not "
-                        "in same module (%d != %d)\n",
-                        (unsigned) r, curid, id);
-                goto failed;
-            } else if (!p[id].loaded && is_error(load_flat_shared_library(id, p))) {
-                fprintf(stderr, "BINFMT_FLAT: failed to load library %d\n", id);
-                goto failed;
-            }
-            /* Check versioning information (i.e. time stamps) */
-            if (p[id].build_date && p[curid].build_date
-                && p[curid].build_date < p[id].build_date) {
-                fprintf(stderr, "BINFMT_FLAT: library %d is younger than %d\n",
-                        id, curid);
-                goto failed;
-            }
-        }
-#else
         id = 0;
-#endif

         start_brk = p[id].start_brk;
         start_data = p[id].start_data;
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
     if (rev == OLD_FLAT_VERSION && flat_old_ram_flag(flags))
         flags = FLAT_FLAG_RAM;

-#ifndef CONFIG_BINFMT_ZFLAT
     if (flags & (FLAT_FLAG_GZIP|FLAT_FLAG_GZDATA)) {
-        fprintf(stderr, "Support for ZFLAT executables is not enabled\n");
+        fprintf(stderr, "ZFLAT executables are not supported\n");
         return -ENOEXEC;
     }
-#endif

     /*
      * calculate the extra space we need to map in
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
            (int)(data_len + bss_len + stack_len), (int)datapos);

     fpos = ntohl(hdr->data_start);
-#ifdef CONFIG_BINFMT_ZFLAT
-    if (flags & FLAT_FLAG_GZDATA) {
-        result = decompress_exec(bprm, fpos, (char *) datapos,
-                                 data_len + (relocs * sizeof(abi_ulong)))
-    } else
-#endif
-    {
-        result = target_pread(bprm->src.fd, datapos,
-                              data_len + (relocs * sizeof(abi_ulong)),
-                              fpos);
-    }
+    result = target_pread(bprm->src.fd, datapos,
+                          data_len + (relocs * sizeof(abi_ulong)),
+                          fpos);
     if (result < 0) {
         fprintf(stderr, "Unable to read data+bss\n");
         return result;
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
         datapos = realdatastart + indx_len;
         reloc = (textpos + ntohl(hdr->reloc_start) + indx_len);

-#ifdef CONFIG_BINFMT_ZFLAT
-#error code needs checking
-        /*
-         * load it all in and treat it like a RAM load from now on
-         */
-        if (flags & FLAT_FLAG_GZIP) {
-            result = decompress_exec(bprm, sizeof (struct flat_hdr),
-                    (((char *) textpos) + sizeof (struct flat_hdr)),
-                    (text_len + data_len + (relocs * sizeof(unsigned long))
-                     - sizeof (struct flat_hdr)),
-                    0);
-            memmove((void *) datapos, (void *) realdatastart,
-                    data_len + (relocs * sizeof(unsigned long)));
-        } else if (flags & FLAT_FLAG_GZDATA) {
-            fpos = 0;
-            result = bprm->file->f_op->read(bprm->file,
-                    (char *) textpos, text_len, &fpos);
-            if (!is_error(result)) {
-                result = decompress_exec(bprm, text_len, (char *) datapos,
-                         data_len + (relocs * sizeof(unsigned long)), 0);
-            }
-        }
-        else
-#endif
-        {
-            result = target_pread(bprm->src.fd, textpos,
-                                  text_len, 0);
-            if (result >= 0) {
-                result = target_pread(bprm->src.fd, datapos,
-                                      data_len + (relocs * sizeof(abi_ulong)),
-                                      ntohl(hdr->data_start));
-            }
+        result = target_pread(bprm->src.fd, textpos,
+                              text_len, 0);
+        if (result >= 0) {
+            result = target_pread(bprm->src.fd, datapos,
+                                  data_len + (relocs * sizeof(abi_ulong)),
+                                  ntohl(hdr->data_start));
         }
         if (result < 0) {
             fprintf(stderr, "Unable to read code+data+bss\n");
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,


 /****************************************************************************/
-#ifdef CONFIG_BINFMT_SHARED_FLAT
56
-
346
-
57
-/*
347
-/*
58
- * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
348
- * Load a shared library into memory. The library gets its own data
59
- * is set by CPUID(HvCpuIdFunctionVersionAndFeatures).
349
- * segment (including bss) but not argv/argc/environ.
60
- */
350
- */
61
-#define HYPERV_CPUID_VENDOR_AND_MAX_FUNCTIONS    0x40000000
351
-
62
-#define HYPERV_CPUID_INTERFACE            0x40000001
352
-static int load_flat_shared_library(int id, struct lib_info *libs)
63
-#define HYPERV_CPUID_VERSION            0x40000002
353
-{
64
-#define HYPERV_CPUID_FEATURES            0x40000003
354
-    struct linux_binprm bprm;
65
-#define HYPERV_CPUID_ENLIGHTMENT_INFO        0x40000004
355
-    int res;
66
-#define HYPERV_CPUID_IMPLEMENT_LIMITS        0x40000005
356
-    char buf[16];
67
-
357
-
68
-#define HYPERV_HYPERVISOR_PRESENT_BIT        0x80000000
358
-    /* Create the file name */
69
-#define HYPERV_CPUID_MIN            0x40000005
359
-    sprintf(buf, "/lib/lib%d.so", id);
70
-#define HYPERV_CPUID_MAX            0x4000ffff
360
-
71
-
361
-    /* Open the file up */
72
-/*
362
-    bprm.filename = buf;
73
- * Feature identification. EAX indicates which features are available
363
-    bprm.file = open_exec(bprm.filename);
74
- * to the partition based upon the current partition privileges.
364
-    res = PTR_ERR(bprm.file);
75
- */
365
-    if (IS_ERR(bprm.file))
76
-
366
-        return res;
77
-/* VP Runtime (HV_X64_MSR_VP_RUNTIME) available */
367
-
78
-#define HV_X64_MSR_VP_RUNTIME_AVAILABLE        (1 << 0)
368
-    res = prepare_binprm(&bprm);
79
-/* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/
369
-
80
-#define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE    (1 << 1)
370
- if (!is_error(res)) {
81
-/* Partition reference TSC MSR is available */
371
-        res = load_flat_file(&bprm, libs, id, NULL);
82
-#define HV_X64_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
372
- }
83
-
373
-    if (bprm.file) {
84
-/* A partition's reference time stamp counter (TSC) page */
374
-        allow_write_access(bprm.file);
85
-#define HV_X64_MSR_REFERENCE_TSC        0x40000021
375
-        fput(bprm.file);
86
-
376
-        bprm.file = NULL;
87
-/*
377
-    }
88
- * There is a single feature flag that signifies if the partition has access
378
-    return(res);
89
- * to MSRs with local APIC and TSC frequencies.
379
-}
90
- */
380
-
91
-#define HV_X64_ACCESS_FREQUENCY_MSRS        (1 << 11)
381
-#endif /* CONFIG_BINFMT_SHARED_FLAT */
92
-
382
-
93
-/*
383
int load_flt_binary(struct linux_binprm *bprm, struct image_info *info)
94
- * Basic SynIC MSRs (HV_X64_MSR_SCONTROL through HV_X64_MSR_EOM
384
{
95
- * and HV_X64_MSR_SINT0 through HV_X64_MSR_SINT15) available
385
struct lib_info libinfo[MAX_SHARED_LIBS];
96
- */
386
@@ -XXX,XX +XXX,XX @@ int load_flt_binary(struct linux_binprm *bprm, struct image_info *info)
97
-#define HV_X64_MSR_SYNIC_AVAILABLE        (1 << 2)
387
*/
98
-/*
388
start_addr = libinfo[0].entry;
99
- * Synthetic Timer MSRs (HV_X64_MSR_STIMER0_CONFIG through
389
100
- * HV_X64_MSR_STIMER3_COUNT) available
390
-#ifdef CONFIG_BINFMT_SHARED_FLAT
101
- */
391
-#error here
102
-#define HV_X64_MSR_SYNTIMER_AVAILABLE        (1 << 3)
392
- for (i = MAX_SHARED_LIBS-1; i>0; i--) {
103
-/*
393
- if (libinfo[i].loaded) {
104
- * APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR)
394
- /* Push previous first to call address */
105
- * are available
395
- --sp;
106
- */
396
- if (put_user_ual(start_addr, sp))
107
-#define HV_X64_MSR_APIC_ACCESS_AVAILABLE    (1 << 4)
397
- return -EFAULT;
108
-/* Hypercall MSRs (HV_X64_MSR_GUEST_OS_ID and HV_X64_MSR_HYPERCALL) available*/
398
- start_addr = libinfo[i].entry;
109
-#define HV_X64_MSR_HYPERCALL_AVAILABLE        (1 << 5)
399
- }
110
-/* Access virtual processor index MSR (HV_X64_MSR_VP_INDEX) available*/
400
- }
111
-#define HV_X64_MSR_VP_INDEX_AVAILABLE        (1 << 6)
401
-#endif
112
-/* Virtual system reset MSR (HV_X64_MSR_RESET) is available*/
402
-
113
-#define HV_X64_MSR_RESET_AVAILABLE        (1 << 7)
403
/* Stash our initial stack pointer into the mm structure */
114
- /*
404
info->start_code = libinfo[0].start_code;
115
- * Access statistics pages MSRs (HV_X64_MSR_STATS_PARTITION_RETAIL_PAGE,
405
info->end_code = libinfo[0].start_code + libinfo[0].text_len;
116
- * HV_X64_MSR_STATS_PARTITION_INTERNAL_PAGE, HV_X64_MSR_STATS_VP_RETAIL_PAGE,
117
- * HV_X64_MSR_STATS_VP_INTERNAL_PAGE) available
118
- */
119
-#define HV_X64_MSR_STAT_PAGES_AVAILABLE        (1 << 8)
120
-
121
-/* Frequency MSRs available */
122
-#define HV_FEATURE_FREQUENCY_MSRS_AVAILABLE    (1 << 8)
123
-
124
-/* Crash MSR available */
125
-#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10)
126
-
127
-/*
128
- * Feature identification: EBX indicates which flags were specified at
129
- * partition creation. The format is the same as the partition creation
130
- * flag structure defined in section Partition Creation Flags.
131
- */
132
-#define HV_X64_CREATE_PARTITIONS        (1 << 0)
133
-#define HV_X64_ACCESS_PARTITION_ID        (1 << 1)
134
-#define HV_X64_ACCESS_MEMORY_POOL        (1 << 2)
135
-#define HV_X64_ADJUST_MESSAGE_BUFFERS        (1 << 3)
136
-#define HV_X64_POST_MESSAGES            (1 << 4)
137
-#define HV_X64_SIGNAL_EVENTS            (1 << 5)
138
-#define HV_X64_CREATE_PORT            (1 << 6)
139
-#define HV_X64_CONNECT_PORT            (1 << 7)
140
-#define HV_X64_ACCESS_STATS            (1 << 8)
141
-#define HV_X64_DEBUGGING            (1 << 11)
142
-#define HV_X64_CPU_POWER_MANAGEMENT        (1 << 12)
143
-#define HV_X64_CONFIGURE_PROFILER        (1 << 13)
144
-
145
-/*
146
- * Feature identification. EDX indicates which miscellaneous features
147
- * are available to the partition.
148
- */
149
-/* The MWAIT instruction is available (per section MONITOR / MWAIT) */
150
-#define HV_X64_MWAIT_AVAILABLE                (1 << 0)
151
-/* Guest debugging support is available */
152
-#define HV_X64_GUEST_DEBUGGING_AVAILABLE        (1 << 1)
153
-/* Performance Monitor support is available*/
154
-#define HV_X64_PERF_MONITOR_AVAILABLE            (1 << 2)
155
-/* Support for physical CPU dynamic partitioning events is available*/
156
-#define HV_X64_CPU_DYNAMIC_PARTITIONING_AVAILABLE    (1 << 3)
157
-/*
158
- * Support for passing hypercall input parameter block via XMM
159
- * registers is available
160
- */
161
-#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE        (1 << 4)
162
-/* Support for a virtual guest idle state is available */
163
-#define HV_X64_GUEST_IDLE_STATE_AVAILABLE        (1 << 5)
164
-/* Guest crash data handler available */
165
-#define HV_X64_GUEST_CRASH_MSR_AVAILABLE        (1 << 10)
166
-
167
-/*
168
- * Implementation recommendations. Indicates which behaviors the hypervisor
169
- * recommends the OS implement for optimal performance.
170
- */
171
- /*
172
- * Recommend using hypercall for address space switches rather
173
- * than MOV to CR3 instruction
174
- */
175
-#define HV_X64_AS_SWITCH_RECOMMENDED        (1 << 0)
176
-/* Recommend using hypercall for local TLB flushes rather
177
- * than INVLPG or MOV to CR3 instructions */
178
-#define HV_X64_LOCAL_TLB_FLUSH_RECOMMENDED    (1 << 1)
179
-/*
180
- * Recommend using hypercall for remote TLB flushes rather
181
- * than inter-processor interrupts
182
- */
183
-#define HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED    (1 << 2)
184
-/*
185
- * Recommend using MSRs for accessing APIC registers
186
- * EOI, ICR and TPR rather than their memory-mapped counterparts
187
- */
188
-#define HV_X64_APIC_ACCESS_RECOMMENDED        (1 << 3)
189
-/* Recommend using the hypervisor-provided MSR to initiate a system RESET */
190
-#define HV_X64_SYSTEM_RESET_RECOMMENDED        (1 << 4)
191
-/*
192
- * Recommend using relaxed timing for this partition. If used,
193
- * the VM should disable any watchdog timeouts that rely on the
194
- * timely delivery of external interrupts
195
- */
196
-#define HV_X64_RELAXED_TIMING_RECOMMENDED    (1 << 5)
197
-
198
-/*
199
- * Virtual APIC support
200
- */
201
-#define HV_X64_DEPRECATING_AEOI_RECOMMENDED    (1 << 9)
202
-
203
-/* Recommend using the newer ExProcessorMasks interface */
204
-#define HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED    (1 << 11)
205
-
206
-/*
207
- * Crash notification flag.
208
- */
209
-#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
210
-
211
-/* MSR used to identify the guest OS. */
212
-#define HV_X64_MSR_GUEST_OS_ID            0x40000000
213
-
214
-/* MSR used to setup pages used to communicate with the hypervisor. */
215
-#define HV_X64_MSR_HYPERCALL            0x40000001
216
-
217
-/* MSR used to provide vcpu index */
218
-#define HV_X64_MSR_VP_INDEX            0x40000002
219
-
220
-/* MSR used to reset the guest OS. */
221
-#define HV_X64_MSR_RESET            0x40000003
222
-
223
-/* MSR used to provide vcpu runtime in 100ns units */
224
-#define HV_X64_MSR_VP_RUNTIME            0x40000010
225
-
226
-/* MSR used to read the per-partition time reference counter */
227
-#define HV_X64_MSR_TIME_REF_COUNT        0x40000020
228
-
229
-/* MSR used to retrieve the TSC frequency */
230
-#define HV_X64_MSR_TSC_FREQUENCY        0x40000022
231
-
232
-/* MSR used to retrieve the local APIC timer frequency */
233
-#define HV_X64_MSR_APIC_FREQUENCY        0x40000023
234
-
235
-/* Define the virtual APIC registers */
236
-#define HV_X64_MSR_EOI                0x40000070
237
-#define HV_X64_MSR_ICR                0x40000071
238
-#define HV_X64_MSR_TPR                0x40000072
239
-#define HV_X64_MSR_APIC_ASSIST_PAGE        0x40000073
240
-
241
-/* Define synthetic interrupt controller model specific registers. */
242
-#define HV_X64_MSR_SCONTROL            0x40000080
243
-#define HV_X64_MSR_SVERSION            0x40000081
244
-#define HV_X64_MSR_SIEFP            0x40000082
245
-#define HV_X64_MSR_SIMP                0x40000083
246
-#define HV_X64_MSR_EOM                0x40000084
247
-#define HV_X64_MSR_SINT0            0x40000090
248
-#define HV_X64_MSR_SINT1            0x40000091
249
-#define HV_X64_MSR_SINT2            0x40000092
250
-#define HV_X64_MSR_SINT3            0x40000093
251
-#define HV_X64_MSR_SINT4            0x40000094
252
-#define HV_X64_MSR_SINT5            0x40000095
253
-#define HV_X64_MSR_SINT6            0x40000096
254
-#define HV_X64_MSR_SINT7            0x40000097
255
-#define HV_X64_MSR_SINT8            0x40000098
256
-#define HV_X64_MSR_SINT9            0x40000099
257
-#define HV_X64_MSR_SINT10            0x4000009A
258
-#define HV_X64_MSR_SINT11            0x4000009B
259
-#define HV_X64_MSR_SINT12            0x4000009C
260
-#define HV_X64_MSR_SINT13            0x4000009D
261
-#define HV_X64_MSR_SINT14            0x4000009E
262
-#define HV_X64_MSR_SINT15            0x4000009F
263
-
264
-/*
265
- * Synthetic Timer MSRs. Four timers per vcpu.
266
- */
267
-#define HV_X64_MSR_STIMER0_CONFIG        0x400000B0
268
-#define HV_X64_MSR_STIMER0_COUNT        0x400000B1
269
-#define HV_X64_MSR_STIMER1_CONFIG        0x400000B2
270
-#define HV_X64_MSR_STIMER1_COUNT        0x400000B3
271
-#define HV_X64_MSR_STIMER2_CONFIG        0x400000B4
272
-#define HV_X64_MSR_STIMER2_COUNT        0x400000B5
273
-#define HV_X64_MSR_STIMER3_CONFIG        0x400000B6
274
-#define HV_X64_MSR_STIMER3_COUNT        0x400000B7
275
-
276
-/* Hyper-V guest crash notification MSR's */
277
-#define HV_X64_MSR_CRASH_P0            0x40000100
278
-#define HV_X64_MSR_CRASH_P1            0x40000101
279
-#define HV_X64_MSR_CRASH_P2            0x40000102
280
-#define HV_X64_MSR_CRASH_P3            0x40000103
281
-#define HV_X64_MSR_CRASH_P4            0x40000104
282
-#define HV_X64_MSR_CRASH_CTL            0x40000105
283
-#define HV_X64_MSR_CRASH_CTL_NOTIFY        (1ULL << 63)
284
-#define HV_X64_MSR_CRASH_PARAMS        \
285
-        (1 + (HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0))
286
-
287
-#define HV_X64_MSR_HYPERCALL_ENABLE        0x00000001
288
-#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT    12
289
-#define HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_MASK    \
290
-        (~((1ull << HV_X64_MSR_HYPERCALL_PAGE_ADDRESS_SHIFT) - 1))
291
-
292
-/* Declare the various hypercall operations. */
293
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE    0x0002
294
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST    0x0003
295
-#define HVCALL_NOTIFY_LONG_SPIN_WAIT        0x0008
296
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX 0x0013
297
-#define HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX 0x0014
298
-#define HVCALL_POST_MESSAGE            0x005c
299
-#define HVCALL_SIGNAL_EVENT            0x005d
300
-
301
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ENABLE        0x00000001
302
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT    12
303
-#define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_MASK    \
304
-        (~((1ull << HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
305
-
306
-#define HV_X64_MSR_TSC_REFERENCE_ENABLE        0x00000001
307
-#define HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT    12
308
-
309
-#define HV_PROCESSOR_POWER_STATE_C0        0
310
-#define HV_PROCESSOR_POWER_STATE_C1        1
311
-#define HV_PROCESSOR_POWER_STATE_C2        2
312
-#define HV_PROCESSOR_POWER_STATE_C3        3
313
-
314
-#define HV_FLUSH_ALL_PROCESSORS            BIT(0)
315
-#define HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES    BIT(1)
316
-#define HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY    BIT(2)
317
-#define HV_FLUSH_USE_EXTENDED_RANGE_FORMAT    BIT(3)
318
-
319
-enum HV_GENERIC_SET_FORMAT {
320
-    HV_GENERIC_SET_SPARCE_4K,
321
-    HV_GENERIC_SET_ALL,
322
-};
323
-
324
-/* hypercall status code */
325
-#define HV_STATUS_SUCCESS            0
326
-#define HV_STATUS_INVALID_HYPERCALL_CODE    2
327
-#define HV_STATUS_INVALID_HYPERCALL_INPUT    3
328
-#define HV_STATUS_INVALID_ALIGNMENT        4
329
-#define HV_STATUS_INSUFFICIENT_MEMORY        11
330
-#define HV_STATUS_INVALID_CONNECTION_ID        18
331
-#define HV_STATUS_INSUFFICIENT_BUFFERS        19
332
-
333
-typedef struct _HV_REFERENCE_TSC_PAGE {
334
-    uint32_t tsc_sequence;
335
-    uint32_t res1;
336
-    uint64_t tsc_scale;
337
-    int64_t tsc_offset;
338
-} HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
339
-
340
-/* Define the number of synthetic interrupt sources. */
341
-#define HV_SYNIC_SINT_COUNT        (16)
342
-/* Define the expected SynIC version. */
343
-#define HV_SYNIC_VERSION_1        (0x1)
344
-
345
-#define HV_SYNIC_CONTROL_ENABLE        (1ULL << 0)
346
-#define HV_SYNIC_SIMP_ENABLE        (1ULL << 0)
347
-#define HV_SYNIC_SIEFP_ENABLE        (1ULL << 0)
348
-#define HV_SYNIC_SINT_MASKED        (1ULL << 16)
349
-#define HV_SYNIC_SINT_AUTO_EOI        (1ULL << 17)
350
-#define HV_SYNIC_SINT_VECTOR_MASK    (0xFF)
351
-
352
-#define HV_SYNIC_STIMER_COUNT        (4)
353
-
354
-/* Define synthetic interrupt controller message constants. */
355
-#define HV_MESSAGE_SIZE            (256)
356
-#define HV_MESSAGE_PAYLOAD_BYTE_COUNT    (240)
357
-#define HV_MESSAGE_PAYLOAD_QWORD_COUNT    (30)
358
-
359
-/* Define hypervisor message types. */
360
-enum hv_message_type {
361
-    HVMSG_NONE            = 0x00000000,
362
-
363
-    /* Memory access messages. */
364
-    HVMSG_UNMAPPED_GPA        = 0x80000000,
365
-    HVMSG_GPA_INTERCEPT        = 0x80000001,
366
-
367
-    /* Timer notification messages. */
368
-    HVMSG_TIMER_EXPIRED            = 0x80000010,
369
-
370
-    /* Error messages. */
371
-    HVMSG_INVALID_VP_REGISTER_VALUE    = 0x80000020,
372
-    HVMSG_UNRECOVERABLE_EXCEPTION    = 0x80000021,
373
-    HVMSG_UNSUPPORTED_FEATURE        = 0x80000022,
374
-
375
-    /* Trace buffer complete messages. */
376
-    HVMSG_EVENTLOG_BUFFERCOMPLETE    = 0x80000040,
377
-
378
-    /* Platform-specific processor intercept messages. */
379
-    HVMSG_X64_IOPORT_INTERCEPT        = 0x80010000,
380
-    HVMSG_X64_MSR_INTERCEPT        = 0x80010001,
381
-    HVMSG_X64_CPUID_INTERCEPT        = 0x80010002,
382
-    HVMSG_X64_EXCEPTION_INTERCEPT    = 0x80010003,
383
-    HVMSG_X64_APIC_EOI            = 0x80010004,
384
-    HVMSG_X64_LEGACY_FP_ERROR        = 0x80010005
385
-};
386
-
387
-/* Define synthetic interrupt controller message flags. */
388
-union hv_message_flags {
389
-    uint8_t asu8;
390
-    struct {
391
-        uint8_t msg_pending:1;
392
-        uint8_t reserved:7;
393
-    };
394
-};
395
-
396
-/* Define port identifier type. */
397
-union hv_port_id {
398
-    uint32_t asu32;
399
-    struct {
400
-        uint32_t id:24;
401
-        uint32_t reserved:8;
402
-    } u;
403
-};
404
-
405
-/* Define synthetic interrupt controller message header. */
406
-struct hv_message_header {
407
-    uint32_t message_type;
408
-    uint8_t payload_size;
409
-    union hv_message_flags message_flags;
410
-    uint8_t reserved[2];
411
-    union {
412
-        uint64_t sender;
413
-        union hv_port_id port;
414
-    };
415
-};
416
-
417
-/* Define synthetic interrupt controller message format. */
418
-struct hv_message {
419
-    struct hv_message_header header;
420
-    union {
421
-        uint64_t payload[HV_MESSAGE_PAYLOAD_QWORD_COUNT];
422
-    } u;
423
-};
424
-
425
-/* Define the synthetic interrupt message page layout. */
426
-struct hv_message_page {
427
-    struct hv_message sint_message[HV_SYNIC_SINT_COUNT];
428
-};
429
-
430
-/* Define timer message payload structure. */
431
-struct hv_timer_message_payload {
432
-    uint32_t timer_index;
433
-    uint32_t reserved;
434
-    uint64_t expiration_time;    /* When the timer expired */
435
-    uint64_t delivery_time;    /* When the message was delivered */
436
-};
437
-
438
-#define HV_STIMER_ENABLE        (1ULL << 0)
439
-#define HV_STIMER_PERIODIC        (1ULL << 1)
440
-#define HV_STIMER_LAZY            (1ULL << 2)
441
-#define HV_STIMER_AUTOENABLE        (1ULL << 3)
442
-#define HV_STIMER_SINT(config)        (uint8_t)(((config) >> 16) & 0x0F)
443
-
444
-#endif
445
+ /* this is a temporary placeholder until kvm_para.h stops including it */
446
diff --git a/include/standard-headers/linux/input-event-codes.h b/include/standard-headers/linux/input-event-codes.h
447
index XXXXXXX..XXXXXXX 100644
448
--- a/include/standard-headers/linux/input-event-codes.h
449
+++ b/include/standard-headers/linux/input-event-codes.h
450
@@ -XXX,XX +XXX,XX @@
451
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
452
/*
453
* Input event codes
454
*
455
@@ -XXX,XX +XXX,XX @@
456
#define BTN_TOOL_MOUSE        0x146
457
#define BTN_TOOL_LENS        0x147
458
#define BTN_TOOL_QUINTTAP    0x148    /* Five fingers on trackpad */
459
+#define BTN_STYLUS3        0x149
460
#define BTN_TOUCH        0x14a
461
#define BTN_STYLUS        0x14b
462
#define BTN_STYLUS2        0x14c
463
diff --git a/include/standard-headers/linux/input.h b/include/standard-headers/linux/input.h
464
index XXXXXXX..XXXXXXX 100644
465
--- a/include/standard-headers/linux/input.h
466
+++ b/include/standard-headers/linux/input.h
467
@@ -XXX,XX +XXX,XX @@
468
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
469
/*
470
* Copyright (c) 1999-2002 Vojtech Pavlik
471
*
472
diff --git a/include/standard-headers/linux/pci_regs.h b/include/standard-headers/linux/pci_regs.h
473
index XXXXXXX..XXXXXXX 100644
474
--- a/include/standard-headers/linux/pci_regs.h
475
+++ b/include/standard-headers/linux/pci_regs.h
476
@@ -XXX,XX +XXX,XX @@
477
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
478
/*
479
*    pci_regs.h
480
*
481
@@ -XXX,XX +XXX,XX @@
482
#define PCI_ERR_ROOT_FIRST_FATAL    0x00000010 /* First UNC is Fatal */
483
#define PCI_ERR_ROOT_NONFATAL_RCV    0x00000020 /* Non-Fatal Received */
484
#define PCI_ERR_ROOT_FATAL_RCV        0x00000040 /* Fatal Received */
485
+#define PCI_ERR_ROOT_AER_IRQ        0xf8000000 /* Advanced Error Interrupt Message Number */
486
#define PCI_ERR_ROOT_ERR_SRC    52    /* Error Source Identification */
487
488
/* Virtual Channel */
489
@@ -XXX,XX +XXX,XX @@
490
#define PCI_SATA_SIZEOF_LONG    16
491
492
/* Resizable BARs */
493
+#define PCI_REBAR_CAP        4    /* capability register */
494
+#define PCI_REBAR_CAP_SIZES        0x00FFFFF0 /* supported BAR sizes */
495
#define PCI_REBAR_CTRL        8    /* control register */
496
-#define PCI_REBAR_CTRL_NBAR_MASK    (7 << 5)    /* mask for # bars */
497
-#define PCI_REBAR_CTRL_NBAR_SHIFT    5    /* shift for # bars */
498
+#define PCI_REBAR_CTRL_BAR_IDX        0x00000007 /* BAR index */
499
+#define PCI_REBAR_CTRL_NBAR_MASK    0x000000E0 /* # of resizable BARs */
500
+#define PCI_REBAR_CTRL_NBAR_SHIFT    5      /* shift for # of BARs */
501
+#define PCI_REBAR_CTRL_BAR_SIZE    0x00001F00 /* BAR size */
502
503
/* Dynamic Power Allocation */
504
#define PCI_DPA_CAP        4    /* capability register */
505
@@ -XXX,XX +XXX,XX @@
506
507
/* Downstream Port Containment */
508
#define PCI_EXP_DPC_CAP            4    /* DPC Capability */
509
+#define PCI_EXP_DPC_IRQ            0x1f    /* DPC Interrupt Message Number */
510
#define PCI_EXP_DPC_CAP_RP_EXT        0x20    /* Root Port Extensions for DPC */
511
#define PCI_EXP_DPC_CAP_POISONED_TLP    0x40    /* Poisoned TLP Egress Blocking Supported */
512
#define PCI_EXP_DPC_CAP_SW_TRIGGER    0x80    /* Software Triggering Supported */
513
@@ -XXX,XX +XXX,XX @@
514
#define PCI_PTM_CTRL_ENABLE        0x00000001 /* PTM enable */
515
#define PCI_PTM_CTRL_ROOT        0x00000002 /* Root select */
516
517
-/* L1 PM Substates */
518
-#define PCI_L1SS_CAP         4    /* capability register */
519
-#define PCI_L1SS_CAP_PCIPM_L1_2     1    /* PCI PM L1.2 Support */
520
-#define PCI_L1SS_CAP_PCIPM_L1_1     2    /* PCI PM L1.1 Support */
521
-#define PCI_L1SS_CAP_ASPM_L1_2         4    /* ASPM L1.2 Support */
522
-#define PCI_L1SS_CAP_ASPM_L1_1         8    /* ASPM L1.1 Support */
523
-#define PCI_L1SS_CAP_L1_PM_SS        16    /* L1 PM Substates Support */
524
-#define PCI_L1SS_CTL1         8    /* Control Register 1 */
525
-#define PCI_L1SS_CTL1_PCIPM_L1_2    1    /* PCI PM L1.2 Enable */
526
-#define PCI_L1SS_CTL1_PCIPM_L1_1    2    /* PCI PM L1.1 Support */
527
-#define PCI_L1SS_CTL1_ASPM_L1_2    4    /* ASPM L1.2 Support */
528
-#define PCI_L1SS_CTL1_ASPM_L1_1    8    /* ASPM L1.1 Support */
529
-#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000F
530
-#define PCI_L1SS_CTL2         0xC    /* Control Register 2 */
531
+/* ASPM L1 PM Substates */
532
+#define PCI_L1SS_CAP        0x04    /* Capabilities Register */
533
+#define PCI_L1SS_CAP_PCIPM_L1_2    0x00000001 /* PCI-PM L1.2 Supported */
534
+#define PCI_L1SS_CAP_PCIPM_L1_1    0x00000002 /* PCI-PM L1.1 Supported */
535
+#define PCI_L1SS_CAP_ASPM_L1_2        0x00000004 /* ASPM L1.2 Supported */
536
+#define PCI_L1SS_CAP_ASPM_L1_1        0x00000008 /* ASPM L1.1 Supported */
537
+#define PCI_L1SS_CAP_L1_PM_SS        0x00000010 /* L1 PM Substates Supported */
538
+#define PCI_L1SS_CAP_CM_RESTORE_TIME    0x0000ff00 /* Port Common_Mode_Restore_Time */
539
+#define PCI_L1SS_CAP_P_PWR_ON_SCALE    0x00030000 /* Port T_POWER_ON scale */
540
+#define PCI_L1SS_CAP_P_PWR_ON_VALUE    0x00f80000 /* Port T_POWER_ON value */
541
+#define PCI_L1SS_CTL1        0x08    /* Control 1 Register */
542
+#define PCI_L1SS_CTL1_PCIPM_L1_2    0x00000001 /* PCI-PM L1.2 Enable */
543
+#define PCI_L1SS_CTL1_PCIPM_L1_1    0x00000002 /* PCI-PM L1.1 Enable */
544
+#define PCI_L1SS_CTL1_ASPM_L1_2    0x00000004 /* ASPM L1.2 Enable */
545
+#define PCI_L1SS_CTL1_ASPM_L1_1    0x00000008 /* ASPM L1.1 Enable */
546
+#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000f
547
+#define PCI_L1SS_CTL1_CM_RESTORE_TIME    0x0000ff00 /* Common_Mode_Restore_Time */
548
+#define PCI_L1SS_CTL1_LTR_L12_TH_VALUE    0x03ff0000 /* LTR_L1.2_THRESHOLD_Value */
549
+#define PCI_L1SS_CTL1_LTR_L12_TH_SCALE    0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */
550
+#define PCI_L1SS_CTL2        0x0c    /* Control 2 Register */
551
552
#endif /* LINUX_PCI_REGS_H */
553
diff --git a/linux-headers/asm-arm/kvm.h b/linux-headers/asm-arm/kvm.h
554
index XXXXXXX..XXXXXXX 100644
555
--- a/linux-headers/asm-arm/kvm.h
556
+++ b/linux-headers/asm-arm/kvm.h
557
@@ -XXX,XX +XXX,XX @@
558
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
559
/*
560
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
561
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
562
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
563
    (__ARM_CP15_REG(op1, 0, crm, 0) | KVM_REG_SIZE_U64)
564
#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
565
566
+/* PL1 Physical Timer Registers */
567
+#define KVM_REG_ARM_PTIMER_CTL        ARM_CP15_REG32(0, 14, 2, 1)
568
+#define KVM_REG_ARM_PTIMER_CNT        ARM_CP15_REG64(0, 14)
569
+#define KVM_REG_ARM_PTIMER_CVAL        ARM_CP15_REG64(2, 14)
570
+
571
+/* Virtual Timer Registers */
572
#define KVM_REG_ARM_TIMER_CTL        ARM_CP15_REG32(0, 14, 3, 1)
573
#define KVM_REG_ARM_TIMER_CNT        ARM_CP15_REG64(1, 14)
574
#define KVM_REG_ARM_TIMER_CVAL        ARM_CP15_REG64(3, 14)
575
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
576
#define KVM_DEV_ARM_ITS_SAVE_TABLES        1
577
#define KVM_DEV_ARM_ITS_RESTORE_TABLES    2
578
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES    3
579
+#define KVM_DEV_ARM_ITS_CTRL_RESET        4
580
581
/* KVM_IRQ_LINE irq field index values */
582
#define KVM_ARM_IRQ_TYPE_SHIFT        24
583
diff --git a/linux-headers/asm-arm/kvm_para.h b/linux-headers/asm-arm/kvm_para.h
584
index XXXXXXX..XXXXXXX 100644
585
--- a/linux-headers/asm-arm/kvm_para.h
586
+++ b/linux-headers/asm-arm/kvm_para.h
587
@@ -1 +1,2 @@
588
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
589
#include <asm-generic/kvm_para.h>
590
diff --git a/linux-headers/asm-arm/unistd.h b/linux-headers/asm-arm/unistd.h
591
index XXXXXXX..XXXXXXX 100644
592
--- a/linux-headers/asm-arm/unistd.h
593
+++ b/linux-headers/asm-arm/unistd.h
594
@@ -XXX,XX +XXX,XX @@
595
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
596
/*
597
* arch/arm/include/asm/unistd.h
598
*
599
@@ -XXX,XX +XXX,XX @@
600
#define __ARM_NR_usr26            (__ARM_NR_BASE+3)
601
#define __ARM_NR_usr32            (__ARM_NR_BASE+4)
602
#define __ARM_NR_set_tls        (__ARM_NR_BASE+5)
603
+#define __ARM_NR_get_tls        (__ARM_NR_BASE+6)
604
605
#endif /* __ASM_ARM_UNISTD_H */
606
diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h
607
index XXXXXXX..XXXXXXX 100644
608
--- a/linux-headers/asm-arm64/kvm.h
609
+++ b/linux-headers/asm-arm64/kvm.h
610
@@ -XXX,XX +XXX,XX @@
611
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
612
/*
613
* Copyright (C) 2012,2013 - ARM Ltd
614
* Author: Marc Zyngier <marc.zyngier@arm.com>
615
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
616
617
#define ARM64_SYS_REG(...) (__ARM64_SYS_REG(__VA_ARGS__) | KVM_REG_SIZE_U64)
618
619
+/* Physical Timer EL0 Registers */
620
+#define KVM_REG_ARM_PTIMER_CTL        ARM64_SYS_REG(3, 3, 14, 2, 1)
621
+#define KVM_REG_ARM_PTIMER_CVAL        ARM64_SYS_REG(3, 3, 14, 2, 2)
622
+#define KVM_REG_ARM_PTIMER_CNT        ARM64_SYS_REG(3, 3, 14, 0, 1)
623
+
624
+/* EL0 Virtual Timer Registers */
625
#define KVM_REG_ARM_TIMER_CTL        ARM64_SYS_REG(3, 3, 14, 3, 1)
626
#define KVM_REG_ARM_TIMER_CNT        ARM64_SYS_REG(3, 3, 14, 3, 2)
627
#define KVM_REG_ARM_TIMER_CVAL        ARM64_SYS_REG(3, 3, 14, 0, 2)
628
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
629
#define KVM_DEV_ARM_ITS_SAVE_TABLES 1
630
#define KVM_DEV_ARM_ITS_RESTORE_TABLES 2
631
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES    3
632
+#define KVM_DEV_ARM_ITS_CTRL_RESET        4
633
634
/* Device Control API on vcpu fd */
635
#define KVM_ARM_VCPU_PMU_V3_CTRL    0
636
diff --git a/linux-headers/asm-arm64/unistd.h b/linux-headers/asm-arm64/unistd.h
637
index XXXXXXX..XXXXXXX 100644
638
--- a/linux-headers/asm-arm64/unistd.h
639
+++ b/linux-headers/asm-arm64/unistd.h
640
@@ -XXX,XX +XXX,XX @@
641
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
642
/*
643
* Copyright (C) 2012 ARM Ltd.
644
*
645
diff --git a/linux-headers/asm-powerpc/epapr_hcalls.h b/linux-headers/asm-powerpc/epapr_hcalls.h
646
index XXXXXXX..XXXXXXX 100644
647
--- a/linux-headers/asm-powerpc/epapr_hcalls.h
648
+++ b/linux-headers/asm-powerpc/epapr_hcalls.h
649
@@ -XXX,XX +XXX,XX @@
650
+/* SPDX-License-Identifier: ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) */
651
/*
652
* ePAPR hcall interface
653
*
654
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
655
index XXXXXXX..XXXXXXX 100644
656
--- a/linux-headers/asm-powerpc/kvm.h
657
+++ b/linux-headers/asm-powerpc/kvm.h
658
@@ -XXX,XX +XXX,XX @@
659
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
660
/*
661
* This program is free software; you can redistribute it and/or modify
662
* it under the terms of the GNU General Public License, version 2, as
663
diff --git a/linux-headers/asm-powerpc/kvm_para.h b/linux-headers/asm-powerpc/kvm_para.h
664
index XXXXXXX..XXXXXXX 100644
665
--- a/linux-headers/asm-powerpc/kvm_para.h
666
+++ b/linux-headers/asm-powerpc/kvm_para.h
667
@@ -XXX,XX +XXX,XX @@
668
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
669
/*
670
* This program is free software; you can redistribute it and/or modify
671
* it under the terms of the GNU General Public License, version 2, as
672
diff --git a/linux-headers/asm-powerpc/unistd.h b/linux-headers/asm-powerpc/unistd.h
673
index XXXXXXX..XXXXXXX 100644
674
--- a/linux-headers/asm-powerpc/unistd.h
675
+++ b/linux-headers/asm-powerpc/unistd.h
676
@@ -XXX,XX +XXX,XX @@
677
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
678
/*
679
* This file contains the system call numbers.
680
*
681
diff --git a/linux-headers/asm-s390/kvm.h b/linux-headers/asm-s390/kvm.h
682
index XXXXXXX..XXXXXXX 100644
683
--- a/linux-headers/asm-s390/kvm.h
684
+++ b/linux-headers/asm-s390/kvm.h
685
@@ -XXX,XX +XXX,XX @@
686
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
687
#ifndef __LINUX_KVM_S390_H
688
#define __LINUX_KVM_S390_H
689
/*
690
diff --git a/linux-headers/asm-s390/kvm_para.h b/linux-headers/asm-s390/kvm_para.h
691
index XXXXXXX..XXXXXXX 100644
692
--- a/linux-headers/asm-s390/kvm_para.h
693
+++ b/linux-headers/asm-s390/kvm_para.h
694
@@ -XXX,XX +XXX,XX @@
695
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
696
/*
697
* User API definitions for paravirtual devices on s390
698
*
699
diff --git a/linux-headers/asm-s390/unistd.h b/linux-headers/asm-s390/unistd.h
700
index XXXXXXX..XXXXXXX 100644
701
--- a/linux-headers/asm-s390/unistd.h
702
+++ b/linux-headers/asm-s390/unistd.h
703
@@ -XXX,XX +XXX,XX @@
704
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
705
/*
706
* S390 version
707
*
708
@@ -XXX,XX +XXX,XX @@
709
#define __NR_pwritev2        377
710
#define __NR_s390_guarded_storage    378
711
#define __NR_statx        379
712
-#define NR_syscalls 380
713
+#define __NR_s390_sthyi        380
714
+#define NR_syscalls 381
715
716
/*
717
* There are some system calls that are not present on 64 bit, some
718
diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h
719
index XXXXXXX..XXXXXXX 100644
720
--- a/linux-headers/asm-x86/kvm.h
721
+++ b/linux-headers/asm-x86/kvm.h
722
@@ -XXX,XX +XXX,XX @@
723
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
724
#ifndef _ASM_X86_KVM_H
725
#define _ASM_X86_KVM_H
726
727
diff --git a/linux-headers/asm-x86/kvm_para.h b/linux-headers/asm-x86/kvm_para.h
728
index XXXXXXX..XXXXXXX 100644
729
--- a/linux-headers/asm-x86/kvm_para.h
730
+++ b/linux-headers/asm-x86/kvm_para.h
731
@@ -XXX,XX +XXX,XX @@
732
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
733
#ifndef _ASM_X86_KVM_PARA_H
734
#define _ASM_X86_KVM_PARA_H
735
736
@@ -XXX,XX +XXX,XX @@ struct kvm_vcpu_pv_apf_data {
737
#define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
738
#define KVM_PV_EOI_DISABLED 0x0
739
740
-
741
#endif /* _ASM_X86_KVM_PARA_H */
742
diff --git a/linux-headers/asm-x86/unistd.h b/linux-headers/asm-x86/unistd.h
743
index XXXXXXX..XXXXXXX 100644
744
--- a/linux-headers/asm-x86/unistd.h
745
+++ b/linux-headers/asm-x86/unistd.h
746
@@ -XXX,XX +XXX,XX @@
747
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
748
#ifndef _ASM_X86_UNISTD_H
749
#define _ASM_X86_UNISTD_H
750
751
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
752
index XXXXXXX..XXXXXXX 100644
753
--- a/linux-headers/linux/kvm.h
754
+++ b/linux-headers/linux/kvm.h
755
@@ -XXX,XX +XXX,XX @@
756
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
757
#ifndef __LINUX_KVM_H
758
#define __LINUX_KVM_H
759
760
@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_resize_hpt {
761
#define KVM_CAP_PPC_SMT_POSSIBLE 147
762
#define KVM_CAP_HYPERV_SYNIC2 148
763
#define KVM_CAP_HYPERV_VP_INDEX 149
764
+#define KVM_CAP_S390_AIS_MIGRATION 150
765
766
#ifdef KVM_CAP_IRQ_ROUTING
767
768
diff --git a/linux-headers/linux/kvm_para.h b/linux-headers/linux/kvm_para.h
769
index XXXXXXX..XXXXXXX 100644
770
--- a/linux-headers/linux/kvm_para.h
771
+++ b/linux-headers/linux/kvm_para.h
772
@@ -XXX,XX +XXX,XX @@
773
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
774
#ifndef __LINUX_KVM_PARA_H
775
#define __LINUX_KVM_PARA_H
776
777
diff --git a/linux-headers/linux/psci.h b/linux-headers/linux/psci.h
778
index XXXXXXX..XXXXXXX 100644
779
--- a/linux-headers/linux/psci.h
780
+++ b/linux-headers/linux/psci.h
781
@@ -XXX,XX +XXX,XX @@
782
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
783
/*
784
* ARM Power State and Coordination Interface (PSCI) header
785
*
786
diff --git a/linux-headers/linux/userfaultfd.h b/linux-headers/linux/userfaultfd.h
787
index XXXXXXX..XXXXXXX 100644
788
--- a/linux-headers/linux/userfaultfd.h
789
+++ b/linux-headers/linux/userfaultfd.h
790
@@ -XXX,XX +XXX,XX @@
791
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
792
/*
793
* include/linux/userfaultfd.h
794
*
795
diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
796
index XXXXXXX..XXXXXXX 100644
797
--- a/linux-headers/linux/vfio.h
798
+++ b/linux-headers/linux/vfio.h
799
@@ -XXX,XX +XXX,XX @@
800
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
801
/*
802
* VFIO API definition
803
*
804
diff --git a/linux-headers/linux/vfio_ccw.h b/linux-headers/linux/vfio_ccw.h
805
index XXXXXXX..XXXXXXX 100644
806
--- a/linux-headers/linux/vfio_ccw.h
807
+++ b/linux-headers/linux/vfio_ccw.h
808
@@ -XXX,XX +XXX,XX @@
809
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
810
/*
811
* Interfaces for vfio-ccw
812
*
813
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
814
index XXXXXXX..XXXXXXX 100644
815
--- a/linux-headers/linux/vhost.h
816
+++ b/linux-headers/linux/vhost.h
817
@@ -XXX,XX +XXX,XX @@
818
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
819
#ifndef _LINUX_VHOST_H
820
#define _LINUX_VHOST_H
821
/* Userspace interface for in-kernel virtio accelerators. */
822
--
2.7.4

--
2.34.1

Make get_phys_addr_v5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-4-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

The npcm7xx_clk and npcm7xx_gcr device reset methods look at
the ResetType argument and only handle RESET_TYPE_COLD,
producing a warning if another reset type is passed. This
is different from how every other three-phase-reset method
we have works, and makes it difficult to add new reset types.

A better pattern is "assume that any reset type you don't know
about should be handled like RESET_TYPE_COLD"; switch these
devices to do that. Then adding a new reset type will only
need to touch those devices where its behaviour really needs
to be different from the standard cold reset.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-2-peter.maydell@linaro.org
---
 hw/misc/npcm7xx_clk.c | 13 +++----------
 hw/misc/npcm7xx_gcr.c | 12 ++++--------
 2 files changed, 7 insertions(+), 18 deletions(-)

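As an illustration of the pattern described in the commit message above
(a hand-written sketch, not code from the npcm7xx patch itself; the
"mydev" names and the register array are invented for the example), a
reset phase method that treats any reset type it does not recognise
like a cold reset simply does not switch on the type argument:

    /* Hypothetical device model, for illustration only. */
    static void mydev_reset_enter(Object *obj, ResetType type)
    {
        MyDevState *s = MYDEV(obj);

        /*
         * No switch on 'type': any reset type we don't know about is
         * handled exactly like RESET_TYPE_COLD.
         */
        memcpy(s->regs, mydev_cold_reset_values, sizeof(s->regs));
    }

Written this way, adding a new ResetType to the core code needs no
change in the device at all.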
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
diff --git a/hw/misc/npcm7xx_clk.c b/hw/misc/npcm7xx_clk.c
14
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
25
--- a/hw/misc/npcm7xx_clk.c
16
+++ b/target/arm/helper.c
26
+++ b/hw/misc/npcm7xx_clk.c
17
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
27
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_clk_enter_reset(Object *obj, ResetType type)
18
static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
28
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
29
QEMU_BUILD_BUG_ON(sizeof(s->regs) != sizeof(cold_reset_values));
20
hwaddr *phys_ptr, int *prot,
30
21
- target_ulong *page_size, uint32_t *fsr,
31
- switch (type) {
22
+ target_ulong *page_size,
32
- case RESET_TYPE_COLD:
23
ARMMMUFaultInfo *fi)
33
- memcpy(s->regs, cold_reset_values, sizeof(cold_reset_values));
24
{
34
- s->ref_ns = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
25
CPUState *cs = CPU(arm_env_get_cpu(env));
35
- npcm7xx_clk_update_all_clocks(s);
26
- int code;
36
- return;
27
+ int level = 1;
37
- }
28
uint32_t table;
38
-
29
uint32_t desc;
39
+ memcpy(s->regs, cold_reset_values, sizeof(cold_reset_values));
30
int type;
40
+ s->ref_ns = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
31
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
41
+ npcm7xx_clk_update_all_clocks(s);
32
/* Lookup l1 descriptor. */
42
/*
33
if (!get_level1_table_address(env, mmu_idx, &table, address)) {
43
* A small number of registers need to be reset on a core domain reset,
34
/* Section translation fault if page walk is disabled by PD0 or PD1 */
44
* but no such reset type exists yet.
35
- code = 5;
45
*/
36
+ fi->type = ARMFault_Translation;
46
- qemu_log_mask(LOG_UNIMP, "%s: reset type %d not implemented.",
37
goto do_fault;
47
- __func__, type);
38
}
39
desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
40
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
41
domain_prot = (dacr >> (domain * 2)) & 3;
42
if (type == 0) {
43
/* Section translation fault. */
44
- code = 5;
45
+ fi->type = ARMFault_Translation;
46
goto do_fault;
47
}
48
+ if (type != 2) {
49
+ level = 2;
50
+ }
51
if (domain_prot == 0 || domain_prot == 2) {
52
- if (type == 2)
53
- code = 9; /* Section domain fault. */
54
- else
55
- code = 11; /* Page domain fault. */
56
+ fi->type = ARMFault_Domain;
57
goto do_fault;
58
}
59
if (type == 2) {
60
/* 1Mb section. */
61
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
62
ap = (desc >> 10) & 3;
63
- code = 13;
64
*page_size = 1024 * 1024;
65
} else {
66
/* Lookup l2 entry. */
67
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
68
mmu_idx, fi);
69
switch (desc & 3) {
70
case 0: /* Page translation fault. */
71
- code = 7;
72
+ fi->type = ARMFault_Translation;
73
goto do_fault;
74
case 1: /* 64k page. */
75
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
76
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
77
/* UNPREDICTABLE in ARMv5; we choose to take a
78
* page translation fault.
79
*/
80
- code = 7;
81
+ fi->type = ARMFault_Translation;
82
goto do_fault;
83
}
84
} else {
85
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
86
/* Never happens, but compiler isn't smart enough to tell. */
87
abort();
88
}
89
- code = 15;
90
}
91
*prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
92
*prot |= *prot ? PAGE_EXEC : 0;
93
if (!(*prot & (1 << access_type))) {
94
/* Access permission fault. */
95
+ fi->type = ARMFault_Permission;
96
goto do_fault;
97
}
98
*phys_ptr = phys_addr;
99
return false;
100
do_fault:
101
- *fsr = code | (domain << 4);
102
+ fi->domain = domain;
103
+ fi->level = level;
104
return true;
105
}
48
}
106
49
107
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
50
static void npcm7xx_clk_init_clock_hierarchy(NPCM7xxCLKState *s)
108
return get_phys_addr_v6(env, address, access_type, mmu_idx, phys_ptr,
51
diff --git a/hw/misc/npcm7xx_gcr.c b/hw/misc/npcm7xx_gcr.c
109
attrs, prot, page_size, fsr, fi);
52
index XXXXXXX..XXXXXXX 100644
110
} else {
53
--- a/hw/misc/npcm7xx_gcr.c
111
- return get_phys_addr_v5(env, address, access_type, mmu_idx, phys_ptr,
54
+++ b/hw/misc/npcm7xx_gcr.c
112
- prot, page_size, fsr, fi);
55
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_gcr_enter_reset(Object *obj, ResetType type)
113
+ bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
56
114
+ phys_ptr, prot, page_size, fi);
57
QEMU_BUILD_BUG_ON(sizeof(s->regs) != sizeof(cold_reset_values));
115
+
58
116
+ *fsr = arm_fi_to_sfsc(fi);
59
- switch (type) {
117
+ return ret;
60
- case RESET_TYPE_COLD:
118
}
61
- memcpy(s->regs, cold_reset_values, sizeof(s->regs));
62
- s->regs[NPCM7XX_GCR_PWRON] = s->reset_pwron;
63
- s->regs[NPCM7XX_GCR_MDLR] = s->reset_mdlr;
64
- s->regs[NPCM7XX_GCR_INTCR3] = s->reset_intcr3;
65
- break;
66
- }
67
+ memcpy(s->regs, cold_reset_values, sizeof(s->regs));
68
+ s->regs[NPCM7XX_GCR_PWRON] = s->reset_pwron;
69
+ s->regs[NPCM7XX_GCR_MDLR] = s->reset_mdlr;
70
+ s->regs[NPCM7XX_GCR_INTCR3] = s->reset_intcr3;
119
}
71
}
120
72
73
static void npcm7xx_gcr_realize(DeviceState *dev, Error **errp)
121
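As a rough sketch of the calling pattern the target/arm/helper.c hunks
above introduce (simplified and condensed; this is not the exact QEMU
code), the page-table walker records what went wrong in the
ARMMMUFaultInfo and the caller derives the legacy short-format FSC
from it:

    /* Simplified sketch of the callsite, assuming a local fault-info
     * struct rather than the pointer the real code passes through.
     */
    ARMMMUFaultInfo fi = {};
    bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
                                phys_ptr, prot, page_size, &fi);
    /* fi.type, fi.level and fi.domain were filled in by the walker. */
    *fsr = arm_fi_to_sfsc(&fi);
    return ret;
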
--
2.7.4

--
2.34.1

1
Make get_phys_addr_pmsav5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Note that PMSAv5 does not define any guest-visible fault status
register, so the different "fsr" values we were previously
returning are entirely arbitrary. So we can just switch to using
the most appropriate fi->type values without worrying that we
need to special-case FaultInfo->FSC conversion for PMSAv5.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-7-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

Rather than directly calling the device's implementation of its 'hold'
reset phase, call device_cold_reset(). This means we don't have to
adjust this callsite when we add another argument to the function
signature for the hold and exit reset methods.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-3-peter.maydell@linaro.org
---
 hw/i2c/allwinner-i2c.c | 3 +--
 hw/sensor/adm1272.c | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

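A minimal sketch of the pattern this change adopts (illustrative only;
the "mydev" type, SRST_MASK and the helper are stand-ins, not real QEMU
identifiers): when a guest register write requests a software reset,
trigger the device's own full cold reset instead of calling one reset
phase method directly:

    static void mydev_software_reset(MyDevState *s, uint64_t value)
    {
        if (value & SRST_MASK) {
            /*
             * Runs the complete three-phase reset (enter, hold, exit)
             * for this device, so this callsite does not care what
             * arguments the individual phase methods take.
             */
            device_cold_reset(DEVICE(s));
        }
    }

device_cold_reset() is the existing core API; only the surrounding
device code here is invented for the example.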
14
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/hw/i2c/allwinner-i2c.c b/hw/i2c/allwinner-i2c.c
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
17
--- a/hw/i2c/allwinner-i2c.c
22
+++ b/target/arm/helper.c
18
+++ b/hw/i2c/allwinner-i2c.c
23
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
19
@@ -XXX,XX +XXX,XX @@ static void allwinner_i2c_write(void *opaque, hwaddr offset,
24
20
break;
25
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
21
case TWI_SRST_REG:
26
MMUAccessType access_type, ARMMMUIdx mmu_idx,
22
if (((value & TWI_SRST_MASK) == 0) && (s->srst & TWI_SRST_MASK)) {
27
- hwaddr *phys_ptr, int *prot, uint32_t *fsr)
23
- /* Perform reset */
28
+ hwaddr *phys_ptr, int *prot,
24
- allwinner_i2c_reset_hold(OBJECT(s));
29
+ ARMMMUFaultInfo *fi)
25
+ device_cold_reset(DEVICE(s));
30
{
31
int n;
32
uint32_t mask;
33
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
34
}
26
}
35
}
27
s->srst = value & TWI_SRST_MASK;
36
if (n < 0) {
37
- *fsr = 2;
38
+ fi->type = ARMFault_Background;
39
return true;
40
}
41
42
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
43
mask = (mask >> (n * 4)) & 0xf;
44
switch (mask) {
45
case 0:
46
- *fsr = 1;
47
+ fi->type = ARMFault_Permission;
48
+ fi->level = 1;
49
return true;
50
case 1:
51
if (is_user) {
52
- *fsr = 1;
53
+ fi->type = ARMFault_Permission;
54
+ fi->level = 1;
55
return true;
56
}
57
*prot = PAGE_READ | PAGE_WRITE;
58
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
59
break;
28
break;
60
case 5:
29
diff --git a/hw/sensor/adm1272.c b/hw/sensor/adm1272.c
61
if (is_user) {
30
index XXXXXXX..XXXXXXX 100644
62
- *fsr = 1;
31
--- a/hw/sensor/adm1272.c
63
+ fi->type = ARMFault_Permission;
32
+++ b/hw/sensor/adm1272.c
64
+ fi->level = 1;
33
@@ -XXX,XX +XXX,XX @@ static int adm1272_write_data(PMBusDevice *pmdev, const uint8_t *buf,
65
return true;
66
}
67
*prot = PAGE_READ;
68
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
69
break;
34
break;
70
default:
35
71
/* Bad permission. */
36
case ADM1272_MFR_POWER_CYCLE:
72
- *fsr = 1;
37
- adm1272_exit_reset((Object *)s);
73
+ fi->type = ARMFault_Permission;
38
+ device_cold_reset(DEVICE(s));
74
+ fi->level = 1;
39
break;
75
return true;
40
76
}
41
case ADM1272_HYSTERESIS_LOW:
77
*prot |= PAGE_EXEC;
78
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
79
} else {
80
/* Pre-v7 MPU */
81
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
82
- phys_ptr, prot, fsr);
83
+ phys_ptr, prot, fi);
84
+ *fsr = arm_fi_to_sfsc(fi);
85
}
86
qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
87
" mmu_idx %u -> %s (prot %c%c%c)\n",
88
--
42
--
89
2.7.4
43
2.34.1
90
91
diff view generated by jsdifflib
1
Generalize nvic_sysreg_ns_ops so that we can pass it an
arbitrary MemoryRegion which it will use as the underlying
register implementation to apply the NS-alias behaviour
to. We'll want this so we can do the same with systick.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1512154296-5652-2-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

We pass a ResetType argument to the Resettable class enter phase
method, but we don't pass it to hold and exit, even though the
callsites have it readily available. This means that if a device
cared about the ResetType it would need to record it in the enter
phase method to use later on. We should pass the type to all three
of the phase methods to avoid having to do that.

This coccinelle script adds the ResetType argument to the hold and
exit phases of the Resettable interface.

The first part of the script (rules holdfn_assigned, holdfn_defined,
exitfn_assigned, exitfn_defined) updates implementations of the
interface within device models, both to change the signature of their
method implementations and to pass on the reset type when they invoke
reset on some other device.

The second part of the script is various special cases:
 * method callsites in resettable_phase_hold(), resettable_phase_exit()
   and device_phases_reset()
 * updating the typedefs for the methods
 * isl_pmbus_vr.c has some code where one device's reset method directly
   calls the implementation of a different device's method

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-4-peter.maydell@linaro.org
---
 scripts/coccinelle/reset-type.cocci | 133 ++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 scripts/coccinelle/reset-type.cocci

12
31
13
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
32
diff --git a/scripts/coccinelle/reset-type.cocci b/scripts/coccinelle/reset-type.cocci
14
index XXXXXXX..XXXXXXX 100644
33
new file mode 100644
15
--- a/hw/intc/armv7m_nvic.c
34
index XXXXXXX..XXXXXXX
16
+++ b/hw/intc/armv7m_nvic.c
35
--- /dev/null
17
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
36
+++ b/scripts/coccinelle/reset-type.cocci
18
uint64_t value, unsigned size,
37
@@ -XXX,XX +XXX,XX @@
19
MemTxAttrs attrs)
38
+// Convert device code using three-phase reset to add a ResetType
20
{
39
+// argument to implementations of ResettableHoldPhase and
21
+ MemoryRegion *mr = opaque;
40
+// ResettableEnterPhase methods.
41
+//
42
+// Copyright Linaro Ltd 2024
43
+// SPDX-License-Identifier: GPL-2.0-or-later
44
+//
45
+// for dir in include hw target; do \
46
+// spatch --macro-file scripts/cocci-macro-file.h \
47
+// --sp-file scripts/coccinelle/reset-type.cocci \
48
+// --keep-comments --smpl-spacing --in-place --include-headers \
49
+// --dir $dir; done
50
+//
51
+// This coccinelle script aims to produce a complete change that needs
52
+// no human interaction, so as well as the generic "update device
53
+// implementations of the hold and exit phase methods" it includes
54
+// the special-case transformations needed for the core code and for
55
+// one device model that does something a bit nonstandard. Those
56
+// special cases are at the end of the file.
22
+
57
+
23
if (attrs.secure) {
58
+// Look for where we use a function as a ResettableHoldPhase method,
24
/* S accesses to the alias act like NS accesses to the real region */
59
+// either by directly assigning it to phases.hold or by calling
25
attrs.secure = 0;
60
+// resettable_class_set_parent_phases, and remember the function name.
26
- return nvic_sysreg_write(opaque, addr, value, size, attrs);
61
+@ holdfn_assigned @
27
+ return memory_region_dispatch_write(mr, addr, value, size, attrs);
62
+identifier enterfn, holdfn, exitfn;
28
} else {
63
+identifier rc;
29
/* NS attrs are RAZ/WI for privileged, and BusFault for user */
64
+expression e;
30
if (attrs.user) {
65
+@@
31
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
66
+ResettableClass *rc;
32
uint64_t *data, unsigned size,
67
+...
33
MemTxAttrs attrs)
68
+(
34
{
69
+ rc->phases.hold = holdfn;
35
+ MemoryRegion *mr = opaque;
70
+|
71
+ resettable_class_set_parent_phases(rc, enterfn, holdfn, exitfn, e);
72
+)
36
+
73
+
37
if (attrs.secure) {
74
+// Look for the definition of the function we found in holdfn_assigned,
38
/* S accesses to the alias act like NS accesses to the real region */
75
+// and add the new argument. If the function calls a hold function
39
attrs.secure = 0;
76
+// itself (probably chaining to the parent class reset) then add the
40
- return nvic_sysreg_read(opaque, addr, data, size, attrs);
77
+// new argument there too.
41
+ return memory_region_dispatch_read(mr, addr, data, size, attrs);
78
+@ holdfn_defined @
42
} else {
79
+identifier holdfn_assigned.holdfn;
43
/* NS attrs are RAZ/WI for privileged, and BusFault for user */
80
+typedef Object;
44
if (attrs.user) {
81
+identifier obj;
45
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
82
+expression parent;
46
83
+@@
47
if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
84
+-holdfn(Object *obj)
48
memory_region_init_io(&s->sysreg_ns_mem, OBJECT(s),
85
++holdfn(Object *obj, ResetType type)
49
- &nvic_sysreg_ns_ops, s,
86
+{
50
+ &nvic_sysreg_ns_ops, &s->sysregmem,
87
+ <...
51
"nvic_sysregs_ns", 0x1000);
88
+- parent.hold(obj)
52
memory_region_add_subregion(&s->container, 0x20000, &s->sysreg_ns_mem);
89
++ parent.hold(obj, type)
53
}
90
+ ...>
91
+}
92
+
93
+// Similarly for ResettableExitPhase.
94
+@ exitfn_assigned @
95
+identifier enterfn, holdfn, exitfn;
96
+identifier rc;
97
+expression e;
98
+@@
99
+ResettableClass *rc;
100
+...
101
+(
102
+ rc->phases.exit = exitfn;
103
+|
104
+ resettable_class_set_parent_phases(rc, enterfn, holdfn, exitfn, e);
105
+)
106
+@ exitfn_defined @
107
+identifier exitfn_assigned.exitfn;
108
+typedef Object;
109
+identifier obj;
110
+expression parent;
111
+@@
112
+-exitfn(Object *obj)
113
++exitfn(Object *obj, ResetType type)
114
+{
115
+ <...
116
+- parent.exit(obj)
117
++ parent.exit(obj, type)
118
+ ...>
119
+}
120
+
121
+// SPECIAL CASES ONLY BELOW HERE
122
+// We use a python scripted constraint on the position of the match
123
+// to ensure that they only match in a particular function. See
124
+// https://public-inbox.org/git/alpine.DEB.2.21.1808240652370.2344@hadrien/
125
+// which recommends this as the way to do "match only in this function".
126
+
127
+// Special case: isl_pmbus_vr.c has some reset methods calling others directly
128
+@ isl_pmbus_vr @
129
+identifier obj;
130
+@@
131
+- isl_pmbus_vr_exit_reset(obj);
132
++ isl_pmbus_vr_exit_reset(obj, type);
133
+
134
+// Special case: device_phases_reset() needs to pass RESET_TYPE_COLD
135
+@ device_phases_reset_hold @
136
+expression obj;
137
+identifier rc;
138
+identifier phase;
139
+position p : script:python() { p[0].current_element == "device_phases_reset" };
140
+@@
141
+- rc->phases.phase(obj)@p
142
++ rc->phases.phase(obj, RESET_TYPE_COLD)
143
+
144
+// Special case: in resettable_phase_hold() and resettable_phase_exit()
145
+// we need to pass through the ResetType argument to the method being called
146
+@ resettable_phase_hold @
147
+expression obj;
148
+identifier rc;
149
+position p : script:python() { p[0].current_element == "resettable_phase_hold" };
150
+@@
151
+- rc->phases.hold(obj)@p
152
++ rc->phases.hold(obj, type)
153
+@ resettable_phase_exit @
154
+expression obj;
155
+identifier rc;
156
+position p : script:python() { p[0].current_element == "resettable_phase_exit" };
157
+@@
158
+- rc->phases.exit(obj)@p
159
++ rc->phases.exit(obj, type)
160
+// Special case: the typedefs for the methods need to declare the new argument
161
+@ phase_typedef_hold @
162
+identifier obj;
163
+@@
164
+- typedef void (*ResettableHoldPhase)(Object *obj);
165
++ typedef void (*ResettableHoldPhase)(Object *obj, ResetType type);
166
+@ phase_typedef_exit @
167
+identifier obj;
168
+@@
169
+- typedef void (*ResettableExitPhase)(Object *obj);
170
++ typedef void (*ResettableExitPhase)(Object *obj, ResetType type);
54
--
171
--
55
2.7.4
172
2.34.1
56
57
diff view generated by jsdifflib
1
From: Eric Auger <eric.auger@redhat.com>

From the very beginning, post_load() was called from common
reset. This is not standard and obliged to discriminate the
reset case from the restore case using the iidr value.

Let's get rid of that call.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1511883692-11511-2-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---

We pass a ResetType argument to the Resettable class enter
phase method, but we don't pass it to hold and exit, even though
the callsites have it readily available. This means that if
a device cared about the ResetType it would need to record it
in the enter phase method to use later on. Pass the type to
all three of the phase methods to avoid having to do that.

Commit created with

 for dir in hw target include; do \
 spatch --macro-file scripts/cocci-macro-file.h \
 --sp-file scripts/coccinelle/reset-type.cocci \
 --keep-comments --smpl-spacing --in-place \
 --include-headers --dir $dir; done

and no manual edits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Luc Michel <luc.michel@amd.com>
Message-id: 20240412160809.1260625-5-peter.maydell@linaro.org
---
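A short sketch of what the new signatures allow (a hypothetical device
written for illustration; only RESET_TYPE_COLD is a real core
identifier here): a device that needs type-specific behaviour can now
look at the ResetType directly in its hold phase, instead of having to
stash it away during the enter phase:

    static void mydev_reset_hold(Object *obj, ResetType type)
    {
        MyDevState *s = MYDEV(obj);

        mydev_reset_registers(s);
        if (type == RESET_TYPE_COLD) {
            /* Only a full cold reset clears the persistent counters. */
            mydev_clear_counters(s);
        }
    }
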
23
---
14
hw/intc/arm_gicv3_its_common.c | 2 --
24
include/hw/resettable.h | 4 ++--
15
hw/intc/arm_gicv3_its_kvm.c | 4 ----
25
hw/adc/npcm7xx_adc.c | 2 +-
16
2 files changed, 6 deletions(-)
26
hw/arm/pxa2xx_pic.c | 2 +-
27
hw/arm/smmu-common.c | 2 +-
28
hw/arm/smmuv3.c | 4 ++--
29
hw/arm/stellaris.c | 10 +++++-----
30
hw/audio/asc.c | 2 +-
31
hw/char/cadence_uart.c | 2 +-
32
hw/char/sifive_uart.c | 2 +-
33
hw/core/cpu-common.c | 2 +-
34
hw/core/qdev.c | 4 ++--
35
hw/core/reset.c | 2 +-
36
hw/core/resettable.c | 4 ++--
37
hw/display/virtio-vga.c | 4 ++--
38
hw/gpio/npcm7xx_gpio.c | 2 +-
39
hw/gpio/pl061.c | 2 +-
40
hw/gpio/stm32l4x5_gpio.c | 2 +-
41
hw/hyperv/vmbus.c | 2 +-
42
hw/i2c/allwinner-i2c.c | 2 +-
43
hw/i2c/npcm7xx_smbus.c | 2 +-
44
hw/input/adb.c | 2 +-
45
hw/input/ps2.c | 12 ++++++------
46
hw/intc/arm_gic_common.c | 2 +-
47
hw/intc/arm_gic_kvm.c | 4 ++--
48
hw/intc/arm_gicv3_common.c | 2 +-
49
hw/intc/arm_gicv3_its.c | 4 ++--
50
hw/intc/arm_gicv3_its_common.c | 2 +-
51
hw/intc/arm_gicv3_its_kvm.c | 4 ++--
52
hw/intc/arm_gicv3_kvm.c | 4 ++--
53
hw/intc/xics.c | 2 +-
54
hw/m68k/q800-glue.c | 2 +-
55
hw/misc/djmemc.c | 2 +-
56
hw/misc/iosb.c | 2 +-
57
hw/misc/mac_via.c | 8 ++++----
58
hw/misc/macio/cuda.c | 4 ++--
59
hw/misc/macio/pmu.c | 4 ++--
60
hw/misc/mos6522.c | 2 +-
61
hw/misc/npcm7xx_mft.c | 2 +-
62
hw/misc/npcm7xx_pwm.c | 2 +-
63
hw/misc/stm32l4x5_exti.c | 2 +-
64
hw/misc/stm32l4x5_rcc.c | 10 +++++-----
65
hw/misc/stm32l4x5_syscfg.c | 2 +-
66
hw/misc/xlnx-versal-cframe-reg.c | 2 +-
67
hw/misc/xlnx-versal-crl.c | 2 +-
68
hw/misc/xlnx-versal-pmc-iou-slcr.c | 2 +-
69
hw/misc/xlnx-versal-trng.c | 2 +-
70
hw/misc/xlnx-versal-xramc.c | 2 +-
71
hw/misc/xlnx-zynqmp-apu-ctrl.c | 2 +-
72
hw/misc/xlnx-zynqmp-crf.c | 2 +-
73
hw/misc/zynq_slcr.c | 4 ++--
74
hw/net/can/xlnx-zynqmp-can.c | 2 +-
75
hw/net/e1000.c | 2 +-
76
hw/net/e1000e.c | 2 +-
77
hw/net/igb.c | 2 +-
78
hw/net/igbvf.c | 2 +-
79
hw/nvram/xlnx-bbram.c | 2 +-
80
hw/nvram/xlnx-versal-efuse-ctrl.c | 2 +-
81
hw/nvram/xlnx-zynqmp-efuse.c | 2 +-
82
hw/pci-bridge/cxl_root_port.c | 4 ++--
83
hw/pci-bridge/pcie_root_port.c | 2 +-
84
hw/pci-host/bonito.c | 2 +-
85
hw/pci-host/pnv_phb.c | 4 ++--
86
hw/pci-host/pnv_phb3_msi.c | 4 ++--
87
hw/pci/pci.c | 4 ++--
88
hw/rtc/mc146818rtc.c | 2 +-
89
hw/s390x/css-bridge.c | 2 +-
90
hw/sensor/adm1266.c | 2 +-
91
hw/sensor/adm1272.c | 2 +-
92
hw/sensor/isl_pmbus_vr.c | 10 +++++-----
93
hw/sensor/max31785.c | 2 +-
94
hw/sensor/max34451.c | 2 +-
95
hw/ssi/npcm7xx_fiu.c | 2 +-
96
hw/timer/etraxfs_timer.c | 2 +-
97
hw/timer/npcm7xx_timer.c | 2 +-
98
hw/usb/hcd-dwc2.c | 8 ++++----
99
hw/usb/xlnx-versal-usb2-ctrl-regs.c | 2 +-
100
hw/virtio/virtio-pci.c | 2 +-
101
target/arm/cpu.c | 4 ++--
102
target/avr/cpu.c | 4 ++--
103
target/cris/cpu.c | 4 ++--
104
target/hexagon/cpu.c | 4 ++--
105
target/i386/cpu.c | 4 ++--
106
target/loongarch/cpu.c | 4 ++--
107
target/m68k/cpu.c | 4 ++--
108
target/microblaze/cpu.c | 4 ++--
109
target/mips/cpu.c | 4 ++--
110
target/openrisc/cpu.c | 4 ++--
111
target/ppc/cpu_init.c | 4 ++--
112
target/riscv/cpu.c | 4 ++--
113
target/rx/cpu.c | 4 ++--
114
target/sh4/cpu.c | 4 ++--
115
target/sparc/cpu.c | 4 ++--
116
target/tricore/cpu.c | 4 ++--
117
target/xtensa/cpu.c | 4 ++--
118
94 files changed, 150 insertions(+), 150 deletions(-)
17
119
120
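The diff body that follows is largely mechanical: every overriding hold or exit phase gains the extra parameter, and methods that chain to their saved parent-class phases now forward it. A small self-contained sketch of that forwarding shape, with illustrative names that are not taken from the patch or from QEMU:

    /* Illustrative model of the parent-phase forwarding that recurs in the
     * diff below; none of these names come from QEMU itself. */
    #include <stdio.h>

    typedef enum { DEMO_RESET_COLD } DemoResetType;

    typedef struct {
        void (*hold)(void *obj, DemoResetType type);
    } DemoParentPhases;

    static void parent_hold(void *obj, DemoResetType type)
    {
        (void)obj;
        printf("parent hold, type=%d\n", type);
    }

    /* A subclass keeps the parent's saved phase methods and forwards 'type',
     * mirroring the repeated "c->parent_phases.hold(obj, type)" hunks. */
    static DemoParentPhases demo_parent_phases = { parent_hold };

    static void child_hold(void *obj, DemoResetType type)
    {
        if (demo_parent_phases.hold) {
            demo_parent_phases.hold(obj, type); /* was: hold(obj) before the change */
        }
        printf("child hold, type=%d\n", type);
    }

    int main(void)
    {
        child_hold(NULL, DEMO_RESET_COLD);
        return 0;
    }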
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
121
index XXXXXXX..XXXXXXX 100644
122
--- a/include/hw/resettable.h
123
+++ b/include/hw/resettable.h
124
@@ -XXX,XX +XXX,XX @@ typedef enum ResetType {
125
* the callback.
126
*/
127
typedef void (*ResettableEnterPhase)(Object *obj, ResetType type);
128
-typedef void (*ResettableHoldPhase)(Object *obj);
129
-typedef void (*ResettableExitPhase)(Object *obj);
130
+typedef void (*ResettableHoldPhase)(Object *obj, ResetType type);
131
+typedef void (*ResettableExitPhase)(Object *obj, ResetType type);
132
typedef ResettableState * (*ResettableGetState)(Object *obj);
133
typedef void (*ResettableTrFunction)(Object *obj);
134
typedef ResettableTrFunction (*ResettableGetTrFunction)(Object *obj);
135
diff --git a/hw/adc/npcm7xx_adc.c b/hw/adc/npcm7xx_adc.c
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/adc/npcm7xx_adc.c
138
+++ b/hw/adc/npcm7xx_adc.c
139
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_adc_enter_reset(Object *obj, ResetType type)
140
npcm7xx_adc_reset(s);
141
}
142
143
-static void npcm7xx_adc_hold_reset(Object *obj)
144
+static void npcm7xx_adc_hold_reset(Object *obj, ResetType type)
145
{
146
NPCM7xxADCState *s = NPCM7XX_ADC(obj);
147
148
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
149
index XXXXXXX..XXXXXXX 100644
150
--- a/hw/arm/pxa2xx_pic.c
151
+++ b/hw/arm/pxa2xx_pic.c
152
@@ -XXX,XX +XXX,XX @@ static int pxa2xx_pic_post_load(void *opaque, int version_id)
153
return 0;
154
}
155
156
-static void pxa2xx_pic_reset_hold(Object *obj)
157
+static void pxa2xx_pic_reset_hold(Object *obj, ResetType type)
158
{
159
PXA2xxPICState *s = PXA2XX_PIC(obj);
160
161
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/hw/arm/smmu-common.c
164
+++ b/hw/arm/smmu-common.c
165
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
166
}
167
}
168
169
-static void smmu_base_reset_hold(Object *obj)
170
+static void smmu_base_reset_hold(Object *obj, ResetType type)
171
{
172
SMMUState *s = ARM_SMMU(obj);
173
174
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/hw/arm/smmuv3.c
177
+++ b/hw/arm/smmuv3.c
178
@@ -XXX,XX +XXX,XX @@ static void smmu_init_irq(SMMUv3State *s, SysBusDevice *dev)
179
}
180
}
181
182
-static void smmu_reset_hold(Object *obj)
183
+static void smmu_reset_hold(Object *obj, ResetType type)
184
{
185
SMMUv3State *s = ARM_SMMUV3(obj);
186
SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);
187
188
if (c->parent_phases.hold) {
189
- c->parent_phases.hold(obj);
190
+ c->parent_phases.hold(obj, type);
191
}
192
193
smmuv3_init_regs(s);
194
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
195
index XXXXXXX..XXXXXXX 100644
196
--- a/hw/arm/stellaris.c
197
+++ b/hw/arm/stellaris.c
198
@@ -XXX,XX +XXX,XX @@ static void stellaris_sys_reset_enter(Object *obj, ResetType type)
199
s->dcgc[0] = 1;
200
}
201
202
-static void stellaris_sys_reset_hold(Object *obj)
203
+static void stellaris_sys_reset_hold(Object *obj, ResetType type)
204
{
205
ssys_state *s = STELLARIS_SYS(obj);
206
207
@@ -XXX,XX +XXX,XX @@ static void stellaris_sys_reset_hold(Object *obj)
208
ssys_calculate_system_clock(s, true);
209
}
210
211
-static void stellaris_sys_reset_exit(Object *obj)
212
+static void stellaris_sys_reset_exit(Object *obj, ResetType type)
213
{
214
}
215
216
@@ -XXX,XX +XXX,XX @@ static void stellaris_i2c_reset_enter(Object *obj, ResetType type)
217
i2c_end_transfer(s->bus);
218
}
219
220
-static void stellaris_i2c_reset_hold(Object *obj)
221
+static void stellaris_i2c_reset_hold(Object *obj, ResetType type)
222
{
223
stellaris_i2c_state *s = STELLARIS_I2C(obj);
224
225
@@ -XXX,XX +XXX,XX @@ static void stellaris_i2c_reset_hold(Object *obj)
226
s->mcr = 0;
227
}
228
229
-static void stellaris_i2c_reset_exit(Object *obj)
230
+static void stellaris_i2c_reset_exit(Object *obj, ResetType type)
231
{
232
stellaris_i2c_state *s = STELLARIS_I2C(obj);
233
234
@@ -XXX,XX +XXX,XX @@ static void stellaris_adc_trigger(void *opaque, int irq, int level)
235
}
236
}
237
238
-static void stellaris_adc_reset_hold(Object *obj)
239
+static void stellaris_adc_reset_hold(Object *obj, ResetType type)
240
{
241
StellarisADCState *s = STELLARIS_ADC(obj);
242
int n;
243
diff --git a/hw/audio/asc.c b/hw/audio/asc.c
244
index XXXXXXX..XXXXXXX 100644
245
--- a/hw/audio/asc.c
246
+++ b/hw/audio/asc.c
247
@@ -XXX,XX +XXX,XX @@ static void asc_fifo_init(ASCFIFOState *fs, int index)
248
g_free(name);
249
}
250
251
-static void asc_reset_hold(Object *obj)
252
+static void asc_reset_hold(Object *obj, ResetType type)
253
{
254
ASCState *s = ASC(obj);
255
256
diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
257
index XXXXXXX..XXXXXXX 100644
258
--- a/hw/char/cadence_uart.c
259
+++ b/hw/char/cadence_uart.c
260
@@ -XXX,XX +XXX,XX @@ static void cadence_uart_reset_init(Object *obj, ResetType type)
261
s->r[R_TTRIG] = 0x00000020;
262
}
263
264
-static void cadence_uart_reset_hold(Object *obj)
265
+static void cadence_uart_reset_hold(Object *obj, ResetType type)
266
{
267
CadenceUARTState *s = CADENCE_UART(obj);
268
269
diff --git a/hw/char/sifive_uart.c b/hw/char/sifive_uart.c
270
index XXXXXXX..XXXXXXX 100644
271
--- a/hw/char/sifive_uart.c
272
+++ b/hw/char/sifive_uart.c
273
@@ -XXX,XX +XXX,XX @@ static void sifive_uart_reset_enter(Object *obj, ResetType type)
274
s->rx_fifo_len = 0;
275
}
276
277
-static void sifive_uart_reset_hold(Object *obj)
278
+static void sifive_uart_reset_hold(Object *obj, ResetType type)
279
{
280
SiFiveUARTState *s = SIFIVE_UART(obj);
281
qemu_irq_lower(s->irq);
282
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
283
index XXXXXXX..XXXXXXX 100644
284
--- a/hw/core/cpu-common.c
285
+++ b/hw/core/cpu-common.c
286
@@ -XXX,XX +XXX,XX @@ void cpu_reset(CPUState *cpu)
287
trace_cpu_reset(cpu->cpu_index);
288
}
289
290
-static void cpu_common_reset_hold(Object *obj)
291
+static void cpu_common_reset_hold(Object *obj, ResetType type)
292
{
293
CPUState *cpu = CPU(obj);
294
CPUClass *cc = CPU_GET_CLASS(cpu);
295
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
296
index XXXXXXX..XXXXXXX 100644
297
--- a/hw/core/qdev.c
298
+++ b/hw/core/qdev.c
299
@@ -XXX,XX +XXX,XX @@ static void device_phases_reset(DeviceState *dev)
300
rc->phases.enter(OBJECT(dev), RESET_TYPE_COLD);
301
}
302
if (rc->phases.hold) {
303
- rc->phases.hold(OBJECT(dev));
304
+ rc->phases.hold(OBJECT(dev), RESET_TYPE_COLD);
305
}
306
if (rc->phases.exit) {
307
- rc->phases.exit(OBJECT(dev));
308
+ rc->phases.exit(OBJECT(dev), RESET_TYPE_COLD);
309
}
310
}
311
312
diff --git a/hw/core/reset.c b/hw/core/reset.c
313
index XXXXXXX..XXXXXXX 100644
314
--- a/hw/core/reset.c
315
+++ b/hw/core/reset.c
316
@@ -XXX,XX +XXX,XX @@ static ResettableState *legacy_reset_get_state(Object *obj)
317
return &lr->reset_state;
318
}
319
320
-static void legacy_reset_hold(Object *obj)
321
+static void legacy_reset_hold(Object *obj, ResetType type)
322
{
323
LegacyReset *lr = LEGACY_RESET(obj);
324
325
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
326
index XXXXXXX..XXXXXXX 100644
327
--- a/hw/core/resettable.c
328
+++ b/hw/core/resettable.c
329
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_hold(Object *obj, void *opaque, ResetType type)
330
trace_resettable_transitional_function(obj, obj_typename);
331
tr_func(obj);
332
} else if (rc->phases.hold) {
333
- rc->phases.hold(obj);
334
+ rc->phases.hold(obj, type);
335
}
336
}
337
trace_resettable_phase_hold_end(obj, obj_typename, s->count);
338
@@ -XXX,XX +XXX,XX @@ static void resettable_phase_exit(Object *obj, void *opaque, ResetType type)
339
if (--s->count == 0) {
340
trace_resettable_phase_exit_exec(obj, obj_typename, !!rc->phases.exit);
341
if (rc->phases.exit && !resettable_get_tr_func(rc, obj)) {
342
- rc->phases.exit(obj);
343
+ rc->phases.exit(obj, type);
344
}
345
}
346
s->exit_phase_in_progress = false;
347
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
348
index XXXXXXX..XXXXXXX 100644
349
--- a/hw/display/virtio-vga.c
350
+++ b/hw/display/virtio-vga.c
351
@@ -XXX,XX +XXX,XX @@ static void virtio_vga_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
352
}
353
}
354
355
-static void virtio_vga_base_reset_hold(Object *obj)
356
+static void virtio_vga_base_reset_hold(Object *obj, ResetType type)
357
{
358
VirtIOVGABaseClass *klass = VIRTIO_VGA_BASE_GET_CLASS(obj);
359
VirtIOVGABase *vvga = VIRTIO_VGA_BASE(obj);
360
361
/* reset virtio-gpu */
362
if (klass->parent_phases.hold) {
363
- klass->parent_phases.hold(obj);
364
+ klass->parent_phases.hold(obj, type);
365
}
366
367
/* reset vga */
368
diff --git a/hw/gpio/npcm7xx_gpio.c b/hw/gpio/npcm7xx_gpio.c
369
index XXXXXXX..XXXXXXX 100644
370
--- a/hw/gpio/npcm7xx_gpio.c
371
+++ b/hw/gpio/npcm7xx_gpio.c
372
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_gpio_enter_reset(Object *obj, ResetType type)
373
s->regs[NPCM7XX_GPIO_ODSC] = s->reset_odsc;
374
}
375
376
-static void npcm7xx_gpio_hold_reset(Object *obj)
377
+static void npcm7xx_gpio_hold_reset(Object *obj, ResetType type)
378
{
379
NPCM7xxGPIOState *s = NPCM7XX_GPIO(obj);
380
381
diff --git a/hw/gpio/pl061.c b/hw/gpio/pl061.c
382
index XXXXXXX..XXXXXXX 100644
383
--- a/hw/gpio/pl061.c
384
+++ b/hw/gpio/pl061.c
385
@@ -XXX,XX +XXX,XX @@ static void pl061_enter_reset(Object *obj, ResetType type)
386
s->amsel = 0;
387
}
388
389
-static void pl061_hold_reset(Object *obj)
390
+static void pl061_hold_reset(Object *obj, ResetType type)
391
{
392
PL061State *s = PL061(obj);
393
int i, level;
394
diff --git a/hw/gpio/stm32l4x5_gpio.c b/hw/gpio/stm32l4x5_gpio.c
395
index XXXXXXX..XXXXXXX 100644
396
--- a/hw/gpio/stm32l4x5_gpio.c
397
+++ b/hw/gpio/stm32l4x5_gpio.c
398
@@ -XXX,XX +XXX,XX @@ static bool is_push_pull(Stm32l4x5GpioState *s, unsigned pin)
399
return extract32(s->otyper, pin, 1) == 0;
400
}
401
402
-static void stm32l4x5_gpio_reset_hold(Object *obj)
403
+static void stm32l4x5_gpio_reset_hold(Object *obj, ResetType type)
404
{
405
Stm32l4x5GpioState *s = STM32L4X5_GPIO(obj);
406
407
diff --git a/hw/hyperv/vmbus.c b/hw/hyperv/vmbus.c
408
index XXXXXXX..XXXXXXX 100644
409
--- a/hw/hyperv/vmbus.c
410
+++ b/hw/hyperv/vmbus.c
411
@@ -XXX,XX +XXX,XX @@ static void vmbus_unrealize(BusState *bus)
412
qemu_mutex_destroy(&vmbus->rx_queue_lock);
413
}
414
415
-static void vmbus_reset_hold(Object *obj)
416
+static void vmbus_reset_hold(Object *obj, ResetType type)
417
{
418
vmbus_deinit(VMBUS(obj));
419
}
420
diff --git a/hw/i2c/allwinner-i2c.c b/hw/i2c/allwinner-i2c.c
421
index XXXXXXX..XXXXXXX 100644
422
--- a/hw/i2c/allwinner-i2c.c
423
+++ b/hw/i2c/allwinner-i2c.c
424
@@ -XXX,XX +XXX,XX @@ static inline bool allwinner_i2c_interrupt_is_enabled(AWI2CState *s)
425
return s->cntr & TWI_CNTR_INT_EN;
426
}
427
428
-static void allwinner_i2c_reset_hold(Object *obj)
429
+static void allwinner_i2c_reset_hold(Object *obj, ResetType type)
430
{
431
AWI2CState *s = AW_I2C(obj);
432
433
diff --git a/hw/i2c/npcm7xx_smbus.c b/hw/i2c/npcm7xx_smbus.c
434
index XXXXXXX..XXXXXXX 100644
435
--- a/hw/i2c/npcm7xx_smbus.c
436
+++ b/hw/i2c/npcm7xx_smbus.c
437
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_smbus_enter_reset(Object *obj, ResetType type)
438
s->rx_cur = 0;
439
}
440
441
-static void npcm7xx_smbus_hold_reset(Object *obj)
442
+static void npcm7xx_smbus_hold_reset(Object *obj, ResetType type)
443
{
444
NPCM7xxSMBusState *s = NPCM7XX_SMBUS(obj);
445
446
diff --git a/hw/input/adb.c b/hw/input/adb.c
447
index XXXXXXX..XXXXXXX 100644
448
--- a/hw/input/adb.c
449
+++ b/hw/input/adb.c
450
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_adb_bus = {
451
}
452
};
453
454
-static void adb_bus_reset_hold(Object *obj)
455
+static void adb_bus_reset_hold(Object *obj, ResetType type)
456
{
457
ADBBusState *adb_bus = ADB_BUS(obj);
458
459
diff --git a/hw/input/ps2.c b/hw/input/ps2.c
460
index XXXXXXX..XXXXXXX 100644
461
--- a/hw/input/ps2.c
462
+++ b/hw/input/ps2.c
463
@@ -XXX,XX +XXX,XX @@ void ps2_write_mouse(PS2MouseState *s, int val)
464
}
465
}
466
467
-static void ps2_reset_hold(Object *obj)
468
+static void ps2_reset_hold(Object *obj, ResetType type)
469
{
470
PS2State *s = PS2_DEVICE(obj);
471
472
@@ -XXX,XX +XXX,XX @@ static void ps2_reset_hold(Object *obj)
473
ps2_reset_queue(s);
474
}
475
476
-static void ps2_reset_exit(Object *obj)
477
+static void ps2_reset_exit(Object *obj, ResetType type)
478
{
479
PS2State *s = PS2_DEVICE(obj);
480
481
@@ -XXX,XX +XXX,XX @@ static void ps2_common_post_load(PS2State *s)
482
q->cwptr = ccount ? (q->rptr + ccount) & (PS2_BUFFER_SIZE - 1) : -1;
483
}
484
485
-static void ps2_kbd_reset_hold(Object *obj)
486
+static void ps2_kbd_reset_hold(Object *obj, ResetType type)
487
{
488
PS2DeviceClass *ps2dc = PS2_DEVICE_GET_CLASS(obj);
489
PS2KbdState *s = PS2_KBD_DEVICE(obj);
490
@@ -XXX,XX +XXX,XX @@ static void ps2_kbd_reset_hold(Object *obj)
491
trace_ps2_kbd_reset(s);
492
493
if (ps2dc->parent_phases.hold) {
494
- ps2dc->parent_phases.hold(obj);
495
+ ps2dc->parent_phases.hold(obj, type);
496
}
497
498
s->scan_enabled = 1;
499
@@ -XXX,XX +XXX,XX @@ static void ps2_kbd_reset_hold(Object *obj)
500
s->modifiers = 0;
501
}
502
503
-static void ps2_mouse_reset_hold(Object *obj)
504
+static void ps2_mouse_reset_hold(Object *obj, ResetType type)
505
{
506
PS2DeviceClass *ps2dc = PS2_DEVICE_GET_CLASS(obj);
507
PS2MouseState *s = PS2_MOUSE_DEVICE(obj);
508
@@ -XXX,XX +XXX,XX @@ static void ps2_mouse_reset_hold(Object *obj)
509
trace_ps2_mouse_reset(s);
510
511
if (ps2dc->parent_phases.hold) {
512
- ps2dc->parent_phases.hold(obj);
513
+ ps2dc->parent_phases.hold(obj, type);
514
}
515
516
s->mouse_status = 0;
517
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
518
index XXXXXXX..XXXXXXX 100644
519
--- a/hw/intc/arm_gic_common.c
520
+++ b/hw/intc/arm_gic_common.c
521
@@ -XXX,XX +XXX,XX @@ static inline void arm_gic_common_reset_irq_state(GICState *s, int cidx,
522
}
523
}
524
525
-static void arm_gic_common_reset_hold(Object *obj)
526
+static void arm_gic_common_reset_hold(Object *obj, ResetType type)
527
{
528
GICState *s = ARM_GIC_COMMON(obj);
529
int i, j;
530
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
531
index XXXXXXX..XXXXXXX 100644
532
--- a/hw/intc/arm_gic_kvm.c
533
+++ b/hw/intc/arm_gic_kvm.c
534
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_get(GICState *s)
535
}
536
}
537
538
-static void kvm_arm_gic_reset_hold(Object *obj)
539
+static void kvm_arm_gic_reset_hold(Object *obj, ResetType type)
540
{
541
GICState *s = ARM_GIC_COMMON(obj);
542
KVMARMGICClass *kgc = KVM_ARM_GIC_GET_CLASS(s);
543
544
if (kgc->parent_phases.hold) {
545
- kgc->parent_phases.hold(obj);
546
+ kgc->parent_phases.hold(obj, type);
547
}
548
549
if (kvm_arm_gic_can_save_restore(s)) {
550
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
551
index XXXXXXX..XXXXXXX 100644
552
--- a/hw/intc/arm_gicv3_common.c
553
+++ b/hw/intc/arm_gicv3_common.c
554
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_finalize(Object *obj)
555
g_free(s->redist_region_count);
556
}
557
558
-static void arm_gicv3_common_reset_hold(Object *obj)
559
+static void arm_gicv3_common_reset_hold(Object *obj, ResetType type)
560
{
561
GICv3State *s = ARM_GICV3_COMMON(obj);
562
int i;
563
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
564
index XXXXXXX..XXXXXXX 100644
565
--- a/hw/intc/arm_gicv3_its.c
566
+++ b/hw/intc/arm_gicv3_its.c
567
@@ -XXX,XX +XXX,XX @@ static void gicv3_arm_its_realize(DeviceState *dev, Error **errp)
568
}
569
}
570
571
-static void gicv3_its_reset_hold(Object *obj)
572
+static void gicv3_its_reset_hold(Object *obj, ResetType type)
573
{
574
GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
575
GICv3ITSClass *c = ARM_GICV3_ITS_GET_CLASS(s);
576
577
if (c->parent_phases.hold) {
578
- c->parent_phases.hold(obj);
579
+ c->parent_phases.hold(obj, type);
580
}
581
582
/* Quiescent bit reset to 1 */
18
diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
583
diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
19
index XXXXXXX..XXXXXXX 100644
584
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/arm_gicv3_its_common.c
585
--- a/hw/intc/arm_gicv3_its_common.c
21
+++ b/hw/intc/arm_gicv3_its_common.c
586
+++ b/hw/intc/arm_gicv3_its_common.c
22
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_common_reset(DeviceState *dev)
587
@@ -XXX,XX +XXX,XX @@ void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops,
23
s->creadr = 0;
588
msi_nonbroken = true;
24
s->iidr = 0;
589
}
25
memset(&s->baser, 0, sizeof(s->baser));
590
26
-
591
-static void gicv3_its_common_reset_hold(Object *obj)
27
- gicv3_its_post_load(s, 0);
592
+static void gicv3_its_common_reset_hold(Object *obj, ResetType type)
28
}
593
{
29
594
GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
30
static void gicv3_its_common_class_init(ObjectClass *klass, void *data)
595
31
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
596
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
32
index XXXXXXX..XXXXXXX 100644
597
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/intc/arm_gicv3_its_kvm.c
598
--- a/hw/intc/arm_gicv3_its_kvm.c
34
+++ b/hw/intc/arm_gicv3_its_kvm.c
599
+++ b/hw/intc/arm_gicv3_its_kvm.c
35
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
600
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_post_load(GICv3ITSState *s)
36
{
601
GITS_CTLR, &s->ctlr, true, &error_abort);
602
}
603
604
-static void kvm_arm_its_reset_hold(Object *obj)
605
+static void kvm_arm_its_reset_hold(Object *obj, ResetType type)
606
{
607
GICv3ITSState *s = ARM_GICV3_ITS_COMMON(obj);
608
KVMARMITSClass *c = KVM_ARM_ITS_GET_CLASS(s);
37
int i;
609
int i;
38
610
39
- if (!s->iidr) {
611
if (c->parent_phases.hold) {
40
- return;
612
- c->parent_phases.hold(obj);
41
- }
613
+ c->parent_phases.hold(obj, type);
42
-
614
}
43
kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
615
44
GITS_IIDR, &s->iidr, true, &error_abort);
616
if (kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
45
617
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
618
index XXXXXXX..XXXXXXX 100644
619
--- a/hw/intc/arm_gicv3_kvm.c
620
+++ b/hw/intc/arm_gicv3_kvm.c
621
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
622
c->icc_ctlr_el1[GICV3_S] = c->icc_ctlr_el1[GICV3_NS];
623
}
624
625
-static void kvm_arm_gicv3_reset_hold(Object *obj)
626
+static void kvm_arm_gicv3_reset_hold(Object *obj, ResetType type)
627
{
628
GICv3State *s = ARM_GICV3_COMMON(obj);
629
KVMARMGICv3Class *kgc = KVM_ARM_GICV3_GET_CLASS(s);
630
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_reset_hold(Object *obj)
631
DPRINTF("Reset\n");
632
633
if (kgc->parent_phases.hold) {
634
- kgc->parent_phases.hold(obj);
635
+ kgc->parent_phases.hold(obj, type);
636
}
637
638
if (s->migration_blocker) {
639
diff --git a/hw/intc/xics.c b/hw/intc/xics.c
640
index XXXXXXX..XXXXXXX 100644
641
--- a/hw/intc/xics.c
642
+++ b/hw/intc/xics.c
643
@@ -XXX,XX +XXX,XX @@ static void ics_reset_irq(ICSIRQState *irq)
644
irq->saved_priority = 0xff;
645
}
646
647
-static void ics_reset_hold(Object *obj)
648
+static void ics_reset_hold(Object *obj, ResetType type)
649
{
650
ICSState *ics = ICS(obj);
651
g_autofree uint8_t *flags = g_malloc(ics->nr_irqs);
652
diff --git a/hw/m68k/q800-glue.c b/hw/m68k/q800-glue.c
653
index XXXXXXX..XXXXXXX 100644
654
--- a/hw/m68k/q800-glue.c
655
+++ b/hw/m68k/q800-glue.c
656
@@ -XXX,XX +XXX,XX @@ static void glue_nmi_release(void *opaque)
657
GLUE_set_irq(s, GLUE_IRQ_IN_NMI, 0);
658
}
659
660
-static void glue_reset_hold(Object *obj)
661
+static void glue_reset_hold(Object *obj, ResetType type)
662
{
663
GLUEState *s = GLUE(obj);
664
665
diff --git a/hw/misc/djmemc.c b/hw/misc/djmemc.c
666
index XXXXXXX..XXXXXXX 100644
667
--- a/hw/misc/djmemc.c
668
+++ b/hw/misc/djmemc.c
669
@@ -XXX,XX +XXX,XX @@ static void djmemc_init(Object *obj)
670
sysbus_init_mmio(sbd, &s->mem_regs);
671
}
672
673
-static void djmemc_reset_hold(Object *obj)
674
+static void djmemc_reset_hold(Object *obj, ResetType type)
675
{
676
DJMEMCState *s = DJMEMC(obj);
677
678
diff --git a/hw/misc/iosb.c b/hw/misc/iosb.c
679
index XXXXXXX..XXXXXXX 100644
680
--- a/hw/misc/iosb.c
681
+++ b/hw/misc/iosb.c
682
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps iosb_mmio_ops = {
683
.endianness = DEVICE_BIG_ENDIAN,
684
};
685
686
-static void iosb_reset_hold(Object *obj)
687
+static void iosb_reset_hold(Object *obj, ResetType type)
688
{
689
IOSBState *s = IOSB(obj);
690
691
diff --git a/hw/misc/mac_via.c b/hw/misc/mac_via.c
692
index XXXXXXX..XXXXXXX 100644
693
--- a/hw/misc/mac_via.c
694
+++ b/hw/misc/mac_via.c
695
@@ -XXX,XX +XXX,XX @@ static int via1_post_load(void *opaque, int version_id)
696
}
697
698
/* VIA 1 */
699
-static void mos6522_q800_via1_reset_hold(Object *obj)
700
+static void mos6522_q800_via1_reset_hold(Object *obj, ResetType type)
701
{
702
MOS6522Q800VIA1State *v1s = MOS6522_Q800_VIA1(obj);
703
MOS6522State *ms = MOS6522(v1s);
704
@@ -XXX,XX +XXX,XX @@ static void mos6522_q800_via1_reset_hold(Object *obj)
705
ADBBusState *adb_bus = &v1s->adb_bus;
706
707
if (mdc->parent_phases.hold) {
708
- mdc->parent_phases.hold(obj);
709
+ mdc->parent_phases.hold(obj, type);
710
}
711
712
ms->timers[0].frequency = VIA_TIMER_FREQ;
713
@@ -XXX,XX +XXX,XX @@ static void mos6522_q800_via2_portB_write(MOS6522State *s)
714
}
715
}
716
717
-static void mos6522_q800_via2_reset_hold(Object *obj)
718
+static void mos6522_q800_via2_reset_hold(Object *obj, ResetType type)
719
{
720
MOS6522State *ms = MOS6522(obj);
721
MOS6522DeviceClass *mdc = MOS6522_GET_CLASS(ms);
722
723
if (mdc->parent_phases.hold) {
724
- mdc->parent_phases.hold(obj);
725
+ mdc->parent_phases.hold(obj, type);
726
}
727
728
ms->timers[0].frequency = VIA_TIMER_FREQ;
729
diff --git a/hw/misc/macio/cuda.c b/hw/misc/macio/cuda.c
730
index XXXXXXX..XXXXXXX 100644
731
--- a/hw/misc/macio/cuda.c
732
+++ b/hw/misc/macio/cuda.c
733
@@ -XXX,XX +XXX,XX @@ static void mos6522_cuda_portB_write(MOS6522State *s)
734
cuda_update(cs);
735
}
736
737
-static void mos6522_cuda_reset_hold(Object *obj)
738
+static void mos6522_cuda_reset_hold(Object *obj, ResetType type)
739
{
740
MOS6522State *ms = MOS6522(obj);
741
MOS6522DeviceClass *mdc = MOS6522_GET_CLASS(ms);
742
743
if (mdc->parent_phases.hold) {
744
- mdc->parent_phases.hold(obj);
745
+ mdc->parent_phases.hold(obj, type);
746
}
747
748
ms->timers[0].frequency = CUDA_TIMER_FREQ;
749
diff --git a/hw/misc/macio/pmu.c b/hw/misc/macio/pmu.c
750
index XXXXXXX..XXXXXXX 100644
751
--- a/hw/misc/macio/pmu.c
752
+++ b/hw/misc/macio/pmu.c
753
@@ -XXX,XX +XXX,XX @@ static void mos6522_pmu_portB_write(MOS6522State *s)
754
pmu_update(ps);
755
}
756
757
-static void mos6522_pmu_reset_hold(Object *obj)
758
+static void mos6522_pmu_reset_hold(Object *obj, ResetType type)
759
{
760
MOS6522State *ms = MOS6522(obj);
761
MOS6522PMUState *mps = container_of(ms, MOS6522PMUState, parent_obj);
762
@@ -XXX,XX +XXX,XX @@ static void mos6522_pmu_reset_hold(Object *obj)
763
MOS6522DeviceClass *mdc = MOS6522_GET_CLASS(ms);
764
765
if (mdc->parent_phases.hold) {
766
- mdc->parent_phases.hold(obj);
767
+ mdc->parent_phases.hold(obj, type);
768
}
769
770
ms->timers[0].frequency = VIA_TIMER_FREQ;
771
diff --git a/hw/misc/mos6522.c b/hw/misc/mos6522.c
772
index XXXXXXX..XXXXXXX 100644
773
--- a/hw/misc/mos6522.c
774
+++ b/hw/misc/mos6522.c
775
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_mos6522 = {
776
}
777
};
778
779
-static void mos6522_reset_hold(Object *obj)
780
+static void mos6522_reset_hold(Object *obj, ResetType type)
781
{
782
MOS6522State *s = MOS6522(obj);
783
784
diff --git a/hw/misc/npcm7xx_mft.c b/hw/misc/npcm7xx_mft.c
785
index XXXXXXX..XXXXXXX 100644
786
--- a/hw/misc/npcm7xx_mft.c
787
+++ b/hw/misc/npcm7xx_mft.c
788
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_mft_enter_reset(Object *obj, ResetType type)
789
npcm7xx_mft_reset(s);
790
}
791
792
-static void npcm7xx_mft_hold_reset(Object *obj)
793
+static void npcm7xx_mft_hold_reset(Object *obj, ResetType type)
794
{
795
NPCM7xxMFTState *s = NPCM7XX_MFT(obj);
796
797
diff --git a/hw/misc/npcm7xx_pwm.c b/hw/misc/npcm7xx_pwm.c
798
index XXXXXXX..XXXXXXX 100644
799
--- a/hw/misc/npcm7xx_pwm.c
800
+++ b/hw/misc/npcm7xx_pwm.c
801
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_pwm_enter_reset(Object *obj, ResetType type)
802
s->piir = 0x00000000;
803
}
804
805
-static void npcm7xx_pwm_hold_reset(Object *obj)
806
+static void npcm7xx_pwm_hold_reset(Object *obj, ResetType type)
807
{
808
NPCM7xxPWMState *s = NPCM7XX_PWM(obj);
809
int i;
810
diff --git a/hw/misc/stm32l4x5_exti.c b/hw/misc/stm32l4x5_exti.c
811
index XXXXXXX..XXXXXXX 100644
812
--- a/hw/misc/stm32l4x5_exti.c
813
+++ b/hw/misc/stm32l4x5_exti.c
814
@@ -XXX,XX +XXX,XX @@ static unsigned configurable_mask(unsigned bank)
815
return valid_mask(bank) & ~exti_romask[bank];
816
}
817
818
-static void stm32l4x5_exti_reset_hold(Object *obj)
819
+static void stm32l4x5_exti_reset_hold(Object *obj, ResetType type)
820
{
821
Stm32l4x5ExtiState *s = STM32L4X5_EXTI(obj);
822
823
diff --git a/hw/misc/stm32l4x5_rcc.c b/hw/misc/stm32l4x5_rcc.c
824
index XXXXXXX..XXXXXXX 100644
825
--- a/hw/misc/stm32l4x5_rcc.c
826
+++ b/hw/misc/stm32l4x5_rcc.c
827
@@ -XXX,XX +XXX,XX @@ static void clock_mux_reset_enter(Object *obj, ResetType type)
828
set_clock_mux_init_info(s, s->id);
829
}
830
831
-static void clock_mux_reset_hold(Object *obj)
832
+static void clock_mux_reset_hold(Object *obj, ResetType type)
833
{
834
RccClockMuxState *s = RCC_CLOCK_MUX(obj);
835
clock_mux_update(s, true);
836
}
837
838
-static void clock_mux_reset_exit(Object *obj)
839
+static void clock_mux_reset_exit(Object *obj, ResetType type)
840
{
841
RccClockMuxState *s = RCC_CLOCK_MUX(obj);
842
clock_mux_update(s, false);
843
@@ -XXX,XX +XXX,XX @@ static void pll_reset_enter(Object *obj, ResetType type)
844
set_pll_init_info(s, s->id);
845
}
846
847
-static void pll_reset_hold(Object *obj)
848
+static void pll_reset_hold(Object *obj, ResetType type)
849
{
850
RccPllState *s = RCC_PLL(obj);
851
pll_update(s, true);
852
}
853
854
-static void pll_reset_exit(Object *obj)
855
+static void pll_reset_exit(Object *obj, ResetType type)
856
{
857
RccPllState *s = RCC_PLL(obj);
858
pll_update(s, false);
859
@@ -XXX,XX +XXX,XX @@ static void rcc_update_csr(Stm32l4x5RccState *s)
860
rcc_update_irq(s);
861
}
862
863
-static void stm32l4x5_rcc_reset_hold(Object *obj)
864
+static void stm32l4x5_rcc_reset_hold(Object *obj, ResetType type)
865
{
866
Stm32l4x5RccState *s = STM32L4X5_RCC(obj);
867
s->cr = 0x00000063;
868
diff --git a/hw/misc/stm32l4x5_syscfg.c b/hw/misc/stm32l4x5_syscfg.c
869
index XXXXXXX..XXXXXXX 100644
870
--- a/hw/misc/stm32l4x5_syscfg.c
871
+++ b/hw/misc/stm32l4x5_syscfg.c
872
@@ -XXX,XX +XXX,XX @@
873
874
#define NUM_LINES_PER_EXTICR_REG 4
875
876
-static void stm32l4x5_syscfg_hold_reset(Object *obj)
877
+static void stm32l4x5_syscfg_hold_reset(Object *obj, ResetType type)
878
{
879
Stm32l4x5SyscfgState *s = STM32L4X5_SYSCFG(obj);
880
881
diff --git a/hw/misc/xlnx-versal-cframe-reg.c b/hw/misc/xlnx-versal-cframe-reg.c
882
index XXXXXXX..XXXXXXX 100644
883
--- a/hw/misc/xlnx-versal-cframe-reg.c
884
+++ b/hw/misc/xlnx-versal-cframe-reg.c
885
@@ -XXX,XX +XXX,XX @@ static void cframe_reg_reset_enter(Object *obj, ResetType type)
886
}
887
}
888
889
-static void cframe_reg_reset_hold(Object *obj)
890
+static void cframe_reg_reset_hold(Object *obj, ResetType type)
891
{
892
XlnxVersalCFrameReg *s = XLNX_VERSAL_CFRAME_REG(obj);
893
894
diff --git a/hw/misc/xlnx-versal-crl.c b/hw/misc/xlnx-versal-crl.c
895
index XXXXXXX..XXXXXXX 100644
896
--- a/hw/misc/xlnx-versal-crl.c
897
+++ b/hw/misc/xlnx-versal-crl.c
898
@@ -XXX,XX +XXX,XX @@ static void crl_reset_enter(Object *obj, ResetType type)
899
}
900
}
901
902
-static void crl_reset_hold(Object *obj)
903
+static void crl_reset_hold(Object *obj, ResetType type)
904
{
905
XlnxVersalCRL *s = XLNX_VERSAL_CRL(obj);
906
907
diff --git a/hw/misc/xlnx-versal-pmc-iou-slcr.c b/hw/misc/xlnx-versal-pmc-iou-slcr.c
908
index XXXXXXX..XXXXXXX 100644
909
--- a/hw/misc/xlnx-versal-pmc-iou-slcr.c
910
+++ b/hw/misc/xlnx-versal-pmc-iou-slcr.c
911
@@ -XXX,XX +XXX,XX @@ static void xlnx_versal_pmc_iou_slcr_reset_init(Object *obj, ResetType type)
912
}
913
}
914
915
-static void xlnx_versal_pmc_iou_slcr_reset_hold(Object *obj)
916
+static void xlnx_versal_pmc_iou_slcr_reset_hold(Object *obj, ResetType type)
917
{
918
XlnxVersalPmcIouSlcr *s = XILINX_VERSAL_PMC_IOU_SLCR(obj);
919
920
diff --git a/hw/misc/xlnx-versal-trng.c b/hw/misc/xlnx-versal-trng.c
921
index XXXXXXX..XXXXXXX 100644
922
--- a/hw/misc/xlnx-versal-trng.c
923
+++ b/hw/misc/xlnx-versal-trng.c
924
@@ -XXX,XX +XXX,XX @@ static void trng_unrealize(DeviceState *dev)
925
s->prng = NULL;
926
}
927
928
-static void trng_reset_hold(Object *obj)
929
+static void trng_reset_hold(Object *obj, ResetType type)
930
{
931
trng_reset(XLNX_VERSAL_TRNG(obj));
932
}
933
diff --git a/hw/misc/xlnx-versal-xramc.c b/hw/misc/xlnx-versal-xramc.c
934
index XXXXXXX..XXXXXXX 100644
935
--- a/hw/misc/xlnx-versal-xramc.c
936
+++ b/hw/misc/xlnx-versal-xramc.c
937
@@ -XXX,XX +XXX,XX @@ static void xram_ctrl_reset_enter(Object *obj, ResetType type)
938
ARRAY_FIELD_DP32(s->regs, XRAM_IMP, SIZE, s->cfg.encoded_size);
939
}
940
941
-static void xram_ctrl_reset_hold(Object *obj)
942
+static void xram_ctrl_reset_hold(Object *obj, ResetType type)
943
{
944
XlnxXramCtrl *s = XLNX_XRAM_CTRL(obj);
945
946
diff --git a/hw/misc/xlnx-zynqmp-apu-ctrl.c b/hw/misc/xlnx-zynqmp-apu-ctrl.c
947
index XXXXXXX..XXXXXXX 100644
948
--- a/hw/misc/xlnx-zynqmp-apu-ctrl.c
949
+++ b/hw/misc/xlnx-zynqmp-apu-ctrl.c
950
@@ -XXX,XX +XXX,XX @@ static void zynqmp_apu_reset_enter(Object *obj, ResetType type)
951
s->cpu_in_wfi = 0;
952
}
953
954
-static void zynqmp_apu_reset_hold(Object *obj)
955
+static void zynqmp_apu_reset_hold(Object *obj, ResetType type)
956
{
957
XlnxZynqMPAPUCtrl *s = XLNX_ZYNQMP_APU_CTRL(obj);
958
959
diff --git a/hw/misc/xlnx-zynqmp-crf.c b/hw/misc/xlnx-zynqmp-crf.c
960
index XXXXXXX..XXXXXXX 100644
961
--- a/hw/misc/xlnx-zynqmp-crf.c
962
+++ b/hw/misc/xlnx-zynqmp-crf.c
963
@@ -XXX,XX +XXX,XX @@ static void crf_reset_enter(Object *obj, ResetType type)
964
}
965
}
966
967
-static void crf_reset_hold(Object *obj)
968
+static void crf_reset_hold(Object *obj, ResetType type)
969
{
970
XlnxZynqMPCRF *s = XLNX_ZYNQMP_CRF(obj);
971
ir_update_irq(s);
972
diff --git a/hw/misc/zynq_slcr.c b/hw/misc/zynq_slcr.c
973
index XXXXXXX..XXXXXXX 100644
974
--- a/hw/misc/zynq_slcr.c
975
+++ b/hw/misc/zynq_slcr.c
976
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset_init(Object *obj, ResetType type)
977
s->regs[R_DDRIOB + 12] = 0x00000021;
978
}
979
980
-static void zynq_slcr_reset_hold(Object *obj)
981
+static void zynq_slcr_reset_hold(Object *obj, ResetType type)
982
{
983
ZynqSLCRState *s = ZYNQ_SLCR(obj);
984
985
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset_hold(Object *obj)
986
zynq_slcr_propagate_clocks(s);
987
}
988
989
-static void zynq_slcr_reset_exit(Object *obj)
990
+static void zynq_slcr_reset_exit(Object *obj, ResetType type)
991
{
992
ZynqSLCRState *s = ZYNQ_SLCR(obj);
993
994
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
995
index XXXXXXX..XXXXXXX 100644
996
--- a/hw/net/can/xlnx-zynqmp-can.c
997
+++ b/hw/net/can/xlnx-zynqmp-can.c
998
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_can_reset_init(Object *obj, ResetType type)
999
ptimer_transaction_commit(s->can_timer);
1000
}
1001
1002
-static void xlnx_zynqmp_can_reset_hold(Object *obj)
1003
+static void xlnx_zynqmp_can_reset_hold(Object *obj, ResetType type)
1004
{
1005
XlnxZynqMPCANState *s = XLNX_ZYNQMP_CAN(obj);
1006
unsigned int i;
1007
diff --git a/hw/net/e1000.c b/hw/net/e1000.c
1008
index XXXXXXX..XXXXXXX 100644
1009
--- a/hw/net/e1000.c
1010
+++ b/hw/net/e1000.c
1011
@@ -XXX,XX +XXX,XX @@ static bool e1000_vet_init_need(void *opaque)
1012
return chkflag(VET);
1013
}
1014
1015
-static void e1000_reset_hold(Object *obj)
1016
+static void e1000_reset_hold(Object *obj, ResetType type)
1017
{
1018
E1000State *d = E1000(obj);
1019
E1000BaseClass *edc = E1000_GET_CLASS(d);
1020
diff --git a/hw/net/e1000e.c b/hw/net/e1000e.c
1021
index XXXXXXX..XXXXXXX 100644
1022
--- a/hw/net/e1000e.c
1023
+++ b/hw/net/e1000e.c
1024
@@ -XXX,XX +XXX,XX @@ static void e1000e_pci_uninit(PCIDevice *pci_dev)
1025
msi_uninit(pci_dev);
1026
}
1027
1028
-static void e1000e_qdev_reset_hold(Object *obj)
1029
+static void e1000e_qdev_reset_hold(Object *obj, ResetType type)
1030
{
1031
E1000EState *s = E1000E(obj);
1032
1033
diff --git a/hw/net/igb.c b/hw/net/igb.c
1034
index XXXXXXX..XXXXXXX 100644
1035
--- a/hw/net/igb.c
1036
+++ b/hw/net/igb.c
1037
@@ -XXX,XX +XXX,XX @@ static void igb_pci_uninit(PCIDevice *pci_dev)
1038
msi_uninit(pci_dev);
1039
}
1040
1041
-static void igb_qdev_reset_hold(Object *obj)
1042
+static void igb_qdev_reset_hold(Object *obj, ResetType type)
1043
{
1044
IGBState *s = IGB(obj);
1045
1046
diff --git a/hw/net/igbvf.c b/hw/net/igbvf.c
1047
index XXXXXXX..XXXXXXX 100644
1048
--- a/hw/net/igbvf.c
1049
+++ b/hw/net/igbvf.c
1050
@@ -XXX,XX +XXX,XX @@ static void igbvf_pci_realize(PCIDevice *dev, Error **errp)
1051
pcie_ari_init(dev, 0x150);
1052
}
1053
1054
-static void igbvf_qdev_reset_hold(Object *obj)
1055
+static void igbvf_qdev_reset_hold(Object *obj, ResetType type)
1056
{
1057
PCIDevice *vf = PCI_DEVICE(obj);
1058
1059
diff --git a/hw/nvram/xlnx-bbram.c b/hw/nvram/xlnx-bbram.c
1060
index XXXXXXX..XXXXXXX 100644
1061
--- a/hw/nvram/xlnx-bbram.c
1062
+++ b/hw/nvram/xlnx-bbram.c
1063
@@ -XXX,XX +XXX,XX @@ static RegisterAccessInfo bbram_ctrl_regs_info[] = {
1064
}
1065
};
1066
1067
-static void bbram_ctrl_reset_hold(Object *obj)
1068
+static void bbram_ctrl_reset_hold(Object *obj, ResetType type)
1069
{
1070
XlnxBBRam *s = XLNX_BBRAM(obj);
1071
unsigned int i;
1072
diff --git a/hw/nvram/xlnx-versal-efuse-ctrl.c b/hw/nvram/xlnx-versal-efuse-ctrl.c
1073
index XXXXXXX..XXXXXXX 100644
1074
--- a/hw/nvram/xlnx-versal-efuse-ctrl.c
1075
+++ b/hw/nvram/xlnx-versal-efuse-ctrl.c
1076
@@ -XXX,XX +XXX,XX @@ static void efuse_ctrl_register_reset(RegisterInfo *reg)
1077
register_reset(reg);
1078
}
1079
1080
-static void efuse_ctrl_reset_hold(Object *obj)
1081
+static void efuse_ctrl_reset_hold(Object *obj, ResetType type)
1082
{
1083
XlnxVersalEFuseCtrl *s = XLNX_VERSAL_EFUSE_CTRL(obj);
1084
unsigned int i;
1085
diff --git a/hw/nvram/xlnx-zynqmp-efuse.c b/hw/nvram/xlnx-zynqmp-efuse.c
1086
index XXXXXXX..XXXXXXX 100644
1087
--- a/hw/nvram/xlnx-zynqmp-efuse.c
1088
+++ b/hw/nvram/xlnx-zynqmp-efuse.c
1089
@@ -XXX,XX +XXX,XX @@ static void zynqmp_efuse_register_reset(RegisterInfo *reg)
1090
register_reset(reg);
1091
}
1092
1093
-static void zynqmp_efuse_reset_hold(Object *obj)
1094
+static void zynqmp_efuse_reset_hold(Object *obj, ResetType type)
1095
{
1096
XlnxZynqMPEFuse *s = XLNX_ZYNQMP_EFUSE(obj);
1097
unsigned int i;
1098
diff --git a/hw/pci-bridge/cxl_root_port.c b/hw/pci-bridge/cxl_root_port.c
1099
index XXXXXXX..XXXXXXX 100644
1100
--- a/hw/pci-bridge/cxl_root_port.c
1101
+++ b/hw/pci-bridge/cxl_root_port.c
1102
@@ -XXX,XX +XXX,XX @@ static void cxl_rp_realize(DeviceState *dev, Error **errp)
1103
component_bar);
1104
}
1105
1106
-static void cxl_rp_reset_hold(Object *obj)
1107
+static void cxl_rp_reset_hold(Object *obj, ResetType type)
1108
{
1109
PCIERootPortClass *rpc = PCIE_ROOT_PORT_GET_CLASS(obj);
1110
CXLRootPort *crp = CXL_ROOT_PORT(obj);
1111
1112
if (rpc->parent_phases.hold) {
1113
- rpc->parent_phases.hold(obj);
1114
+ rpc->parent_phases.hold(obj, type);
1115
}
1116
1117
latch_registers(crp);
1118
diff --git a/hw/pci-bridge/pcie_root_port.c b/hw/pci-bridge/pcie_root_port.c
1119
index XXXXXXX..XXXXXXX 100644
1120
--- a/hw/pci-bridge/pcie_root_port.c
1121
+++ b/hw/pci-bridge/pcie_root_port.c
1122
@@ -XXX,XX +XXX,XX @@ static void rp_write_config(PCIDevice *d, uint32_t address,
1123
pcie_aer_root_write_config(d, address, val, len, root_cmd);
1124
}
1125
1126
-static void rp_reset_hold(Object *obj)
1127
+static void rp_reset_hold(Object *obj, ResetType type)
1128
{
1129
PCIDevice *d = PCI_DEVICE(obj);
1130
DeviceState *qdev = DEVICE(obj);
1131
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
1132
index XXXXXXX..XXXXXXX 100644
1133
--- a/hw/pci-host/bonito.c
1134
+++ b/hw/pci-host/bonito.c
1135
@@ -XXX,XX +XXX,XX @@ static int pci_bonito_map_irq(PCIDevice *pci_dev, int irq_num)
1136
}
1137
}
1138
1139
-static void bonito_reset_hold(Object *obj)
1140
+static void bonito_reset_hold(Object *obj, ResetType type)
1141
{
1142
PCIBonitoState *s = PCI_BONITO(obj);
1143
uint32_t val = 0;
1144
diff --git a/hw/pci-host/pnv_phb.c b/hw/pci-host/pnv_phb.c
1145
index XXXXXXX..XXXXXXX 100644
1146
--- a/hw/pci-host/pnv_phb.c
1147
+++ b/hw/pci-host/pnv_phb.c
1148
@@ -XXX,XX +XXX,XX @@ static void pnv_phb_class_init(ObjectClass *klass, void *data)
1149
dc->user_creatable = true;
1150
}
1151
1152
-static void pnv_phb_root_port_reset_hold(Object *obj)
1153
+static void pnv_phb_root_port_reset_hold(Object *obj, ResetType type)
1154
{
1155
PCIERootPortClass *rpc = PCIE_ROOT_PORT_GET_CLASS(obj);
1156
PnvPHBRootPort *phb_rp = PNV_PHB_ROOT_PORT(obj);
1157
@@ -XXX,XX +XXX,XX @@ static void pnv_phb_root_port_reset_hold(Object *obj)
1158
uint8_t *conf = d->config;
1159
1160
if (rpc->parent_phases.hold) {
1161
- rpc->parent_phases.hold(obj);
1162
+ rpc->parent_phases.hold(obj, type);
1163
}
1164
1165
if (phb_rp->version == 3) {
1166
diff --git a/hw/pci-host/pnv_phb3_msi.c b/hw/pci-host/pnv_phb3_msi.c
1167
index XXXXXXX..XXXXXXX 100644
1168
--- a/hw/pci-host/pnv_phb3_msi.c
1169
+++ b/hw/pci-host/pnv_phb3_msi.c
1170
@@ -XXX,XX +XXX,XX @@ static void phb3_msi_resend(ICSState *ics)
1171
}
1172
}
1173
1174
-static void phb3_msi_reset_hold(Object *obj)
1175
+static void phb3_msi_reset_hold(Object *obj, ResetType type)
1176
{
1177
Phb3MsiState *msi = PHB3_MSI(obj);
1178
ICSStateClass *icsc = ICS_GET_CLASS(obj);
1179
1180
if (icsc->parent_phases.hold) {
1181
- icsc->parent_phases.hold(obj);
1182
+ icsc->parent_phases.hold(obj, type);
1183
}
1184
1185
memset(msi->rba, 0, sizeof(msi->rba));
1186
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
1187
index XXXXXXX..XXXXXXX 100644
1188
--- a/hw/pci/pci.c
1189
+++ b/hw/pci/pci.c
1190
@@ -XXX,XX +XXX,XX @@ bool pci_available = true;
1191
1192
static char *pcibus_get_dev_path(DeviceState *dev);
1193
static char *pcibus_get_fw_dev_path(DeviceState *dev);
1194
-static void pcibus_reset_hold(Object *obj);
1195
+static void pcibus_reset_hold(Object *obj, ResetType type);
1196
static bool pcie_has_upstream_port(PCIDevice *dev);
1197
1198
static Property pci_props[] = {
1199
@@ -XXX,XX +XXX,XX @@ void pci_device_reset(PCIDevice *dev)
1200
* Called via bus_cold_reset on RST# assert, after the devices
1201
* have been reset device_cold_reset-ed already.
1202
*/
1203
-static void pcibus_reset_hold(Object *obj)
1204
+static void pcibus_reset_hold(Object *obj, ResetType type)
1205
{
1206
PCIBus *bus = PCI_BUS(obj);
1207
int i;
1208
diff --git a/hw/rtc/mc146818rtc.c b/hw/rtc/mc146818rtc.c
1209
index XXXXXXX..XXXXXXX 100644
1210
--- a/hw/rtc/mc146818rtc.c
1211
+++ b/hw/rtc/mc146818rtc.c
1212
@@ -XXX,XX +XXX,XX @@ static void rtc_reset_enter(Object *obj, ResetType type)
1213
}
1214
}
1215
1216
-static void rtc_reset_hold(Object *obj)
1217
+static void rtc_reset_hold(Object *obj, ResetType type)
1218
{
1219
MC146818RtcState *s = MC146818_RTC(obj);
1220
1221
diff --git a/hw/s390x/css-bridge.c b/hw/s390x/css-bridge.c
1222
index XXXXXXX..XXXXXXX 100644
1223
--- a/hw/s390x/css-bridge.c
1224
+++ b/hw/s390x/css-bridge.c
1225
@@ -XXX,XX +XXX,XX @@ static void ccw_device_unplug(HotplugHandler *hotplug_dev,
1226
qdev_unrealize(dev);
1227
}
1228
1229
-static void virtual_css_bus_reset_hold(Object *obj)
1230
+static void virtual_css_bus_reset_hold(Object *obj, ResetType type)
1231
{
1232
/* This should actually be modelled via the generic css */
1233
css_reset();
1234
diff --git a/hw/sensor/adm1266.c b/hw/sensor/adm1266.c
1235
index XXXXXXX..XXXXXXX 100644
1236
--- a/hw/sensor/adm1266.c
1237
+++ b/hw/sensor/adm1266.c
1238
@@ -XXX,XX +XXX,XX @@ static const uint8_t adm1266_ic_device_id[] = {0x03, 0x41, 0x12, 0x66};
1239
static const uint8_t adm1266_ic_device_rev[] = {0x08, 0x01, 0x08, 0x07, 0x0,
1240
0x0, 0x07, 0x41, 0x30};
1241
1242
-static void adm1266_exit_reset(Object *obj)
1243
+static void adm1266_exit_reset(Object *obj, ResetType type)
1244
{
1245
ADM1266State *s = ADM1266(obj);
1246
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1247
diff --git a/hw/sensor/adm1272.c b/hw/sensor/adm1272.c
1248
index XXXXXXX..XXXXXXX 100644
1249
--- a/hw/sensor/adm1272.c
1250
+++ b/hw/sensor/adm1272.c
1251
@@ -XXX,XX +XXX,XX @@ static uint32_t adm1272_direct_to_watts(uint16_t value)
1252
return pmbus_direct_mode2data(c, value);
1253
}
1254
1255
-static void adm1272_exit_reset(Object *obj)
1256
+static void adm1272_exit_reset(Object *obj, ResetType type)
1257
{
1258
ADM1272State *s = ADM1272(obj);
1259
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1260
diff --git a/hw/sensor/isl_pmbus_vr.c b/hw/sensor/isl_pmbus_vr.c
1261
index XXXXXXX..XXXXXXX 100644
1262
--- a/hw/sensor/isl_pmbus_vr.c
1263
+++ b/hw/sensor/isl_pmbus_vr.c
1264
@@ -XXX,XX +XXX,XX @@ static void isl_pmbus_vr_set(Object *obj, Visitor *v, const char *name,
1265
pmbus_check_limits(pmdev);
1266
}
1267
1268
-static void isl_pmbus_vr_exit_reset(Object *obj)
1269
+static void isl_pmbus_vr_exit_reset(Object *obj, ResetType type)
1270
{
1271
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1272
1273
@@ -XXX,XX +XXX,XX @@ static void isl_pmbus_vr_exit_reset(Object *obj)
1274
}
1275
1276
/* The raa228000 uses different direct mode coefficients from most isl devices */
1277
-static void raa228000_exit_reset(Object *obj)
1278
+static void raa228000_exit_reset(Object *obj, ResetType type)
1279
{
1280
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1281
1282
- isl_pmbus_vr_exit_reset(obj);
1283
+ isl_pmbus_vr_exit_reset(obj, type);
1284
1285
pmdev->pages[0].read_iout = 0;
1286
pmdev->pages[0].read_pout = 0;
1287
@@ -XXX,XX +XXX,XX @@ static void raa228000_exit_reset(Object *obj)
1288
pmdev->pages[0].read_temperature_3 = 0;
1289
}
1290
1291
-static void isl69259_exit_reset(Object *obj)
1292
+static void isl69259_exit_reset(Object *obj, ResetType type)
1293
{
1294
ISLState *s = ISL69260(obj);
1295
static const uint8_t ic_device_id[] = {0x04, 0x00, 0x81, 0xD2, 0x49, 0x3c};
1296
g_assert(sizeof(ic_device_id) <= sizeof(s->ic_device_id));
1297
1298
- isl_pmbus_vr_exit_reset(obj);
1299
+ isl_pmbus_vr_exit_reset(obj, type);
1300
1301
s->ic_device_id_len = sizeof(ic_device_id);
1302
memcpy(s->ic_device_id, ic_device_id, sizeof(ic_device_id));
1303
diff --git a/hw/sensor/max31785.c b/hw/sensor/max31785.c
1304
index XXXXXXX..XXXXXXX 100644
1305
--- a/hw/sensor/max31785.c
1306
+++ b/hw/sensor/max31785.c
1307
@@ -XXX,XX +XXX,XX @@ static int max31785_write_data(PMBusDevice *pmdev, const uint8_t *buf,
1308
return 0;
1309
}
1310
1311
-static void max31785_exit_reset(Object *obj)
1312
+static void max31785_exit_reset(Object *obj, ResetType type)
1313
{
1314
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1315
MAX31785State *s = MAX31785(obj);
1316
diff --git a/hw/sensor/max34451.c b/hw/sensor/max34451.c
1317
index XXXXXXX..XXXXXXX 100644
1318
--- a/hw/sensor/max34451.c
1319
+++ b/hw/sensor/max34451.c
1320
@@ -XXX,XX +XXX,XX @@ static inline void *memset_word(void *s, uint16_t c, size_t n)
1321
return s;
1322
}
1323
1324
-static void max34451_exit_reset(Object *obj)
1325
+static void max34451_exit_reset(Object *obj, ResetType type)
1326
{
1327
PMBusDevice *pmdev = PMBUS_DEVICE(obj);
1328
MAX34451State *s = MAX34451(obj);
1329
diff --git a/hw/ssi/npcm7xx_fiu.c b/hw/ssi/npcm7xx_fiu.c
1330
index XXXXXXX..XXXXXXX 100644
1331
--- a/hw/ssi/npcm7xx_fiu.c
1332
+++ b/hw/ssi/npcm7xx_fiu.c
1333
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_fiu_enter_reset(Object *obj, ResetType type)
1334
s->regs[NPCM7XX_FIU_CFG] = 0x0000000b;
1335
}
1336
1337
-static void npcm7xx_fiu_hold_reset(Object *obj)
1338
+static void npcm7xx_fiu_hold_reset(Object *obj, ResetType type)
1339
{
1340
NPCM7xxFIUState *s = NPCM7XX_FIU(obj);
1341
int i;
1342
diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
1343
index XXXXXXX..XXXXXXX 100644
1344
--- a/hw/timer/etraxfs_timer.c
1345
+++ b/hw/timer/etraxfs_timer.c
1346
@@ -XXX,XX +XXX,XX @@ static void etraxfs_timer_reset_enter(Object *obj, ResetType type)
1347
t->rw_intr_mask = 0;
1348
}
1349
1350
-static void etraxfs_timer_reset_hold(Object *obj)
1351
+static void etraxfs_timer_reset_hold(Object *obj, ResetType type)
1352
{
1353
ETRAXTimerState *t = ETRAX_TIMER(obj);
1354
1355
diff --git a/hw/timer/npcm7xx_timer.c b/hw/timer/npcm7xx_timer.c
1356
index XXXXXXX..XXXXXXX 100644
1357
--- a/hw/timer/npcm7xx_timer.c
1358
+++ b/hw/timer/npcm7xx_timer.c
1359
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_watchdog_timer_expired(void *opaque)
1360
}
1361
}
1362
1363
-static void npcm7xx_timer_hold_reset(Object *obj)
1364
+static void npcm7xx_timer_hold_reset(Object *obj, ResetType type)
1365
{
1366
NPCM7xxTimerCtrlState *s = NPCM7XX_TIMER(obj);
1367
int i;
1368
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
1369
index XXXXXXX..XXXXXXX 100644
1370
--- a/hw/usb/hcd-dwc2.c
1371
+++ b/hw/usb/hcd-dwc2.c
1372
@@ -XXX,XX +XXX,XX @@ static void dwc2_reset_enter(Object *obj, ResetType type)
1373
}
1374
}
1375
1376
-static void dwc2_reset_hold(Object *obj)
1377
+static void dwc2_reset_hold(Object *obj, ResetType type)
1378
{
1379
DWC2Class *c = DWC2_USB_GET_CLASS(obj);
1380
DWC2State *s = DWC2_USB(obj);
1381
@@ -XXX,XX +XXX,XX @@ static void dwc2_reset_hold(Object *obj)
1382
trace_usb_dwc2_reset_hold();
1383
1384
if (c->parent_phases.hold) {
1385
- c->parent_phases.hold(obj);
1386
+ c->parent_phases.hold(obj, type);
1387
}
1388
1389
dwc2_update_irq(s);
1390
}
1391
1392
-static void dwc2_reset_exit(Object *obj)
1393
+static void dwc2_reset_exit(Object *obj, ResetType type)
1394
{
1395
DWC2Class *c = DWC2_USB_GET_CLASS(obj);
1396
DWC2State *s = DWC2_USB(obj);
1397
@@ -XXX,XX +XXX,XX @@ static void dwc2_reset_exit(Object *obj)
1398
trace_usb_dwc2_reset_exit();
1399
1400
if (c->parent_phases.exit) {
1401
- c->parent_phases.exit(obj);
1402
+ c->parent_phases.exit(obj, type);
1403
}
1404
1405
s->hprt0 = HPRT0_PWR;
1406
diff --git a/hw/usb/xlnx-versal-usb2-ctrl-regs.c b/hw/usb/xlnx-versal-usb2-ctrl-regs.c
1407
index XXXXXXX..XXXXXXX 100644
1408
--- a/hw/usb/xlnx-versal-usb2-ctrl-regs.c
1409
+++ b/hw/usb/xlnx-versal-usb2-ctrl-regs.c
1410
@@ -XXX,XX +XXX,XX @@ static void usb2_ctrl_regs_reset_init(Object *obj, ResetType type)
1411
}
1412
}
1413
1414
-static void usb2_ctrl_regs_reset_hold(Object *obj)
1415
+static void usb2_ctrl_regs_reset_hold(Object *obj, ResetType type)
1416
{
1417
VersalUsb2CtrlRegs *s = XILINX_VERSAL_USB2_CTRL_REGS(obj);
1418
1419
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
1420
index XXXXXXX..XXXXXXX 100644
1421
--- a/hw/virtio/virtio-pci.c
1422
+++ b/hw/virtio/virtio-pci.c
1423
@@ -XXX,XX +XXX,XX @@ static void virtio_pci_reset(DeviceState *qdev)
1424
}
1425
}
1426
1427
-static void virtio_pci_bus_reset_hold(Object *obj)
1428
+static void virtio_pci_bus_reset_hold(Object *obj, ResetType type)
1429
{
1430
PCIDevice *dev = PCI_DEVICE(obj);
1431
DeviceState *qdev = DEVICE(obj);
1432
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
1433
index XXXXXXX..XXXXXXX 100644
1434
--- a/target/arm/cpu.c
1435
+++ b/target/arm/cpu.c
1436
@@ -XXX,XX +XXX,XX @@ static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
1437
assert(oldvalue == newvalue);
1438
}
1439
1440
-static void arm_cpu_reset_hold(Object *obj)
1441
+static void arm_cpu_reset_hold(Object *obj, ResetType type)
1442
{
1443
CPUState *cs = CPU(obj);
1444
ARMCPU *cpu = ARM_CPU(cs);
1445
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
1446
CPUARMState *env = &cpu->env;
1447
1448
if (acc->parent_phases.hold) {
1449
- acc->parent_phases.hold(obj);
1450
+ acc->parent_phases.hold(obj, type);
1451
}
1452
1453
memset(env, 0, offsetof(CPUARMState, end_reset_fields));
1454
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
1455
index XXXXXXX..XXXXXXX 100644
1456
--- a/target/avr/cpu.c
1457
+++ b/target/avr/cpu.c
1458
@@ -XXX,XX +XXX,XX @@ static void avr_restore_state_to_opc(CPUState *cs,
1459
cpu_env(cs)->pc_w = data[0];
1460
}
1461
1462
-static void avr_cpu_reset_hold(Object *obj)
1463
+static void avr_cpu_reset_hold(Object *obj, ResetType type)
1464
{
1465
CPUState *cs = CPU(obj);
1466
AVRCPU *cpu = AVR_CPU(cs);
1467
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_reset_hold(Object *obj)
1468
CPUAVRState *env = &cpu->env;
1469
1470
if (mcc->parent_phases.hold) {
1471
- mcc->parent_phases.hold(obj);
1472
+ mcc->parent_phases.hold(obj, type);
1473
}
1474
1475
env->pc_w = 0;
1476
diff --git a/target/cris/cpu.c b/target/cris/cpu.c
1477
index XXXXXXX..XXXXXXX 100644
1478
--- a/target/cris/cpu.c
1479
+++ b/target/cris/cpu.c
1480
@@ -XXX,XX +XXX,XX @@ static int cris_cpu_mmu_index(CPUState *cs, bool ifetch)
1481
return !!(cpu_env(cs)->pregs[PR_CCS] & U_FLAG);
1482
}
1483
1484
-static void cris_cpu_reset_hold(Object *obj)
1485
+static void cris_cpu_reset_hold(Object *obj, ResetType type)
1486
{
1487
CPUState *cs = CPU(obj);
1488
CRISCPUClass *ccc = CRIS_CPU_GET_CLASS(obj);
1489
@@ -XXX,XX +XXX,XX @@ static void cris_cpu_reset_hold(Object *obj)
1490
uint32_t vr;
1491
1492
if (ccc->parent_phases.hold) {
1493
- ccc->parent_phases.hold(obj);
1494
+ ccc->parent_phases.hold(obj, type);
1495
}
1496
1497
vr = env->pregs[PR_VR];
1498
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
1499
index XXXXXXX..XXXXXXX 100644
1500
--- a/target/hexagon/cpu.c
1501
+++ b/target/hexagon/cpu.c
1502
@@ -XXX,XX +XXX,XX @@ static void hexagon_restore_state_to_opc(CPUState *cs,
1503
cpu_env(cs)->gpr[HEX_REG_PC] = data[0];
1504
}
1505
1506
-static void hexagon_cpu_reset_hold(Object *obj)
1507
+static void hexagon_cpu_reset_hold(Object *obj, ResetType type)
1508
{
1509
CPUState *cs = CPU(obj);
1510
HexagonCPUClass *mcc = HEXAGON_CPU_GET_CLASS(obj);
1511
CPUHexagonState *env = cpu_env(cs);
1512
1513
if (mcc->parent_phases.hold) {
1514
- mcc->parent_phases.hold(obj);
1515
+ mcc->parent_phases.hold(obj, type);
1516
}
1517
1518
set_default_nan_mode(1, &env->fp_status);
1519
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
1520
index XXXXXXX..XXXXXXX 100644
1521
--- a/target/i386/cpu.c
1522
+++ b/target/i386/cpu.c
1523
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_set_sgxlepubkeyhash(CPUX86State *env)
1524
#endif
1525
}
1526
1527
-static void x86_cpu_reset_hold(Object *obj)
1528
+static void x86_cpu_reset_hold(Object *obj, ResetType type)
1529
{
1530
CPUState *cs = CPU(obj);
1531
X86CPU *cpu = X86_CPU(cs);
1532
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_reset_hold(Object *obj)
1533
int i;
1534
1535
if (xcc->parent_phases.hold) {
1536
- xcc->parent_phases.hold(obj);
1537
+ xcc->parent_phases.hold(obj, type);
1538
}
1539
1540
memset(env, 0, offsetof(CPUX86State, end_reset_fields));
1541
diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
1542
index XXXXXXX..XXXXXXX 100644
1543
--- a/target/loongarch/cpu.c
1544
+++ b/target/loongarch/cpu.c
1545
@@ -XXX,XX +XXX,XX @@ static void loongarch_max_initfn(Object *obj)
1546
loongarch_la464_initfn(obj);
1547
}
1548
1549
-static void loongarch_cpu_reset_hold(Object *obj)
1550
+static void loongarch_cpu_reset_hold(Object *obj, ResetType type)
1551
{
1552
CPUState *cs = CPU(obj);
1553
LoongArchCPUClass *lacc = LOONGARCH_CPU_GET_CLASS(obj);
1554
CPULoongArchState *env = cpu_env(cs);
1555
1556
if (lacc->parent_phases.hold) {
1557
- lacc->parent_phases.hold(obj);
1558
+ lacc->parent_phases.hold(obj, type);
1559
}
1560
1561
env->fcsr0_mask = FCSR0_M1 | FCSR0_M2 | FCSR0_M3;
1562
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
1563
index XXXXXXX..XXXXXXX 100644
1564
--- a/target/m68k/cpu.c
1565
+++ b/target/m68k/cpu.c
1566
@@ -XXX,XX +XXX,XX @@ static void m68k_unset_feature(CPUM68KState *env, int feature)
1567
env->features &= ~BIT_ULL(feature);
1568
}
1569
1570
-static void m68k_cpu_reset_hold(Object *obj)
1571
+static void m68k_cpu_reset_hold(Object *obj, ResetType type)
1572
{
1573
CPUState *cs = CPU(obj);
1574
M68kCPUClass *mcc = M68K_CPU_GET_CLASS(obj);
1575
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_reset_hold(Object *obj)
1576
int i;
1577
1578
if (mcc->parent_phases.hold) {
1579
- mcc->parent_phases.hold(obj);
1580
+ mcc->parent_phases.hold(obj, type);
1581
}
1582
1583
memset(env, 0, offsetof(CPUM68KState, end_reset_fields));
1584
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
1585
index XXXXXXX..XXXXXXX 100644
1586
--- a/target/microblaze/cpu.c
1587
+++ b/target/microblaze/cpu.c
1588
@@ -XXX,XX +XXX,XX @@ static void microblaze_cpu_set_irq(void *opaque, int irq, int level)
1589
}
1590
#endif
1591
1592
-static void mb_cpu_reset_hold(Object *obj)
1593
+static void mb_cpu_reset_hold(Object *obj, ResetType type)
1594
{
1595
CPUState *cs = CPU(obj);
1596
MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);
1597
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_reset_hold(Object *obj)
1598
CPUMBState *env = &cpu->env;
1599
1600
if (mcc->parent_phases.hold) {
1601
- mcc->parent_phases.hold(obj);
1602
+ mcc->parent_phases.hold(obj, type);
1603
}
1604
1605
memset(env, 0, offsetof(CPUMBState, end_reset_fields));
1606
diff --git a/target/mips/cpu.c b/target/mips/cpu.c
1607
index XXXXXXX..XXXXXXX 100644
1608
--- a/target/mips/cpu.c
1609
+++ b/target/mips/cpu.c
1610
@@ -XXX,XX +XXX,XX @@ static int mips_cpu_mmu_index(CPUState *cs, bool ifunc)
1611
1612
#include "cpu-defs.c.inc"
1613
1614
-static void mips_cpu_reset_hold(Object *obj)
1615
+static void mips_cpu_reset_hold(Object *obj, ResetType type)
1616
{
1617
CPUState *cs = CPU(obj);
1618
MIPSCPU *cpu = MIPS_CPU(cs);
1619
@@ -XXX,XX +XXX,XX @@ static void mips_cpu_reset_hold(Object *obj)
1620
CPUMIPSState *env = &cpu->env;
1621
1622
if (mcc->parent_phases.hold) {
1623
- mcc->parent_phases.hold(obj);
1624
+ mcc->parent_phases.hold(obj, type);
1625
}
1626
1627
memset(env, 0, offsetof(CPUMIPSState, end_reset_fields));
1628
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
1629
index XXXXXXX..XXXXXXX 100644
1630
--- a/target/openrisc/cpu.c
1631
+++ b/target/openrisc/cpu.c
1632
@@ -XXX,XX +XXX,XX @@ static void openrisc_disas_set_info(CPUState *cpu, disassemble_info *info)
1633
info->print_insn = print_insn_or1k;
1634
}
1635
1636
-static void openrisc_cpu_reset_hold(Object *obj)
1637
+static void openrisc_cpu_reset_hold(Object *obj, ResetType type)
1638
{
1639
CPUState *cs = CPU(obj);
1640
OpenRISCCPU *cpu = OPENRISC_CPU(cs);
1641
OpenRISCCPUClass *occ = OPENRISC_CPU_GET_CLASS(obj);
1642
1643
if (occ->parent_phases.hold) {
1644
- occ->parent_phases.hold(obj);
1645
+ occ->parent_phases.hold(obj, type);
1646
}
1647
1648
memset(&cpu->env, 0, offsetof(CPUOpenRISCState, end_reset_fields));
1649
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
1650
index XXXXXXX..XXXXXXX 100644
1651
--- a/target/ppc/cpu_init.c
1652
+++ b/target/ppc/cpu_init.c
1653
@@ -XXX,XX +XXX,XX @@ static int ppc_cpu_mmu_index(CPUState *cs, bool ifetch)
1654
return ppc_env_mmu_index(cpu_env(cs), ifetch);
1655
}
1656
1657
-static void ppc_cpu_reset_hold(Object *obj)
1658
+static void ppc_cpu_reset_hold(Object *obj, ResetType type)
1659
{
1660
CPUState *cs = CPU(obj);
1661
PowerPCCPU *cpu = POWERPC_CPU(cs);
1662
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_reset_hold(Object *obj)
1663
int i;
1664
1665
if (pcc->parent_phases.hold) {
1666
- pcc->parent_phases.hold(obj);
1667
+ pcc->parent_phases.hold(obj, type);
1668
}
1669
1670
msr = (target_ulong)0;
1671
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
1672
index XXXXXXX..XXXXXXX 100644
1673
--- a/target/riscv/cpu.c
1674
+++ b/target/riscv/cpu.c
1675
@@ -XXX,XX +XXX,XX @@ static int riscv_cpu_mmu_index(CPUState *cs, bool ifetch)
1676
return riscv_env_mmu_index(cpu_env(cs), ifetch);
1677
}
1678
1679
-static void riscv_cpu_reset_hold(Object *obj)
1680
+static void riscv_cpu_reset_hold(Object *obj, ResetType type)
1681
{
1682
#ifndef CONFIG_USER_ONLY
1683
uint8_t iprio;
1684
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_reset_hold(Object *obj)
1685
CPURISCVState *env = &cpu->env;
1686
1687
if (mcc->parent_phases.hold) {
1688
- mcc->parent_phases.hold(obj);
1689
+ mcc->parent_phases.hold(obj, type);
1690
}
1691
#ifndef CONFIG_USER_ONLY
1692
env->misa_mxl = mcc->misa_mxl_max;
1693
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
1694
index XXXXXXX..XXXXXXX 100644
1695
--- a/target/rx/cpu.c
1696
+++ b/target/rx/cpu.c
1697
@@ -XXX,XX +XXX,XX @@ static int riscv_cpu_mmu_index(CPUState *cs, bool ifunc)
1698
return 0;
1699
}
1700
1701
-static void rx_cpu_reset_hold(Object *obj)
1702
+static void rx_cpu_reset_hold(Object *obj, ResetType type)
1703
{
1704
CPUState *cs = CPU(obj);
1705
RXCPUClass *rcc = RX_CPU_GET_CLASS(obj);
1706
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_reset_hold(Object *obj)
1707
uint32_t *resetvec;
1708
1709
if (rcc->parent_phases.hold) {
1710
- rcc->parent_phases.hold(obj);
1711
+ rcc->parent_phases.hold(obj, type);
1712
}
1713
1714
memset(env, 0, offsetof(CPURXState, end_reset_fields));
1715
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
1716
index XXXXXXX..XXXXXXX 100644
1717
--- a/target/sh4/cpu.c
1718
+++ b/target/sh4/cpu.c
1719
@@ -XXX,XX +XXX,XX @@ static int sh4_cpu_mmu_index(CPUState *cs, bool ifetch)
1720
}
1721
}
1722
1723
-static void superh_cpu_reset_hold(Object *obj)
1724
+static void superh_cpu_reset_hold(Object *obj, ResetType type)
1725
{
1726
CPUState *cs = CPU(obj);
1727
SuperHCPUClass *scc = SUPERH_CPU_GET_CLASS(obj);
1728
CPUSH4State *env = cpu_env(cs);
1729
1730
if (scc->parent_phases.hold) {
1731
- scc->parent_phases.hold(obj);
1732
+ scc->parent_phases.hold(obj, type);
1733
}
1734
1735
memset(env, 0, offsetof(CPUSH4State, end_reset_fields));
1736
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
1737
index XXXXXXX..XXXXXXX 100644
1738
--- a/target/sparc/cpu.c
1739
+++ b/target/sparc/cpu.c
1740
@@ -XXX,XX +XXX,XX @@
1741
1742
//#define DEBUG_FEATURES
1743
1744
-static void sparc_cpu_reset_hold(Object *obj)
1745
+static void sparc_cpu_reset_hold(Object *obj, ResetType type)
1746
{
1747
CPUState *cs = CPU(obj);
1748
SPARCCPUClass *scc = SPARC_CPU_GET_CLASS(obj);
1749
CPUSPARCState *env = cpu_env(cs);
1750
1751
if (scc->parent_phases.hold) {
1752
- scc->parent_phases.hold(obj);
1753
+ scc->parent_phases.hold(obj, type);
1754
}
1755
1756
memset(env, 0, offsetof(CPUSPARCState, end_reset_fields));
1757
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
1758
index XXXXXXX..XXXXXXX 100644
1759
--- a/target/tricore/cpu.c
1760
+++ b/target/tricore/cpu.c
1761
@@ -XXX,XX +XXX,XX @@ static void tricore_restore_state_to_opc(CPUState *cs,
1762
cpu_env(cs)->PC = data[0];
1763
}
1764
1765
-static void tricore_cpu_reset_hold(Object *obj)
1766
+static void tricore_cpu_reset_hold(Object *obj, ResetType type)
1767
{
1768
CPUState *cs = CPU(obj);
1769
TriCoreCPUClass *tcc = TRICORE_CPU_GET_CLASS(obj);
1770
1771
if (tcc->parent_phases.hold) {
1772
- tcc->parent_phases.hold(obj);
1773
+ tcc->parent_phases.hold(obj, type);
1774
}
1775
1776
cpu_state_reset(cpu_env(cs));
1777
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
1778
index XXXXXXX..XXXXXXX 100644
1779
--- a/target/xtensa/cpu.c
1780
+++ b/target/xtensa/cpu.c
1781
@@ -XXX,XX +XXX,XX @@ bool xtensa_abi_call0(void)
1782
}
1783
#endif
1784
1785
-static void xtensa_cpu_reset_hold(Object *obj)
1786
+static void xtensa_cpu_reset_hold(Object *obj, ResetType type)
1787
{
1788
CPUState *cs = CPU(obj);
1789
XtensaCPUClass *xcc = XTENSA_CPU_GET_CLASS(obj);
1790
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_reset_hold(Object *obj)
1791
XTENSA_OPTION_DFP_COPROCESSOR);
1792
1793
if (xcc->parent_phases.hold) {
1794
- xcc->parent_phases.hold(obj);
1795
+ xcc->parent_phases.hold(obj, type);
1796
}
1797
1798
env->pc = env->config->exception_vector[EXC_RESET0 + env->static_vectors];
46
--
1799
--
47
2.7.4
1800
2.34.1
48
49
1
Make get_phys_addr_pmsav8() return a fault type in the ARMMMUFaultInfo
1
Update the reset documentation's example code to match the new API
2
structure, which we convert to the FSC at the callsite.
2
for the hold and exit phase methods, which now take a ResetType
3
argument.
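
As a quick illustrative sketch (not part of the doc change itself): the
class_init wiring is unchanged by this series, only the phase method
prototypes grow the ResetType parameter. Using the same placeholder
mydev/MyDev names as the documentation example, and assuming the usual
resettable_class_set_parent_phases() helper from hw/resettable.h:

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        ResettableClass *rc = RESETTABLE_CLASS(klass);
        MyDevClass *myclass = MYDEV_CLASS(klass);

        /* Register hold/exit phases; parent phases are saved for chaining */
        resettable_class_set_parent_phases(rc, NULL,
                                           mydev_reset_hold,
                                           mydev_reset_exit,
                                           &myclass->parent_phases);
    }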
3
4
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
8
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 1512503192-2239-9-git-send-email-peter.maydell@linaro.org
9
Message-id: 20240412160809.1260625-6-peter.maydell@linaro.org
9
---
10
---
10
target/arm/helper.c | 29 ++++++++++++++++++-----------
11
docs/devel/reset.rst | 8 ++++----
11
1 file changed, 18 insertions(+), 11 deletions(-)
12
1 file changed, 4 insertions(+), 4 deletions(-)
12
13
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
--- a/docs/devel/reset.rst
16
+++ b/target/arm/helper.c
17
+++ b/docs/devel/reset.rst
17
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
18
@@ -XXX,XX +XXX,XX @@ in reset.
18
static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
19
mydev->var = 0;
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
20
hwaddr *phys_ptr, MemTxAttrs *txattrs,
21
- int *prot, uint32_t *fsr, uint32_t *mregion)
22
+ int *prot, ARMMMUFaultInfo *fi, uint32_t *mregion)
23
{
24
/* Perform a PMSAv8 MPU lookup (without also doing the SAU check
25
* that a full phys-to-virt translation does).
26
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
27
/* Multiple regions match -- always a failure (unlike
28
* PMSAv7 where highest-numbered-region wins)
29
*/
30
- *fsr = 0x00d; /* permission fault */
31
+ fi->type = ARMFault_Permission;
32
+ fi->level = 1;
33
return true;
34
}
35
36
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
37
38
if (!hit) {
39
/* background fault */
40
- *fsr = 0;
41
+ fi->type = ARMFault_Background;
42
return true;
43
}
20
}
44
21
45
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
22
- static void mydev_reset_hold(Object *obj)
23
+ static void mydev_reset_hold(Object *obj, ResetType type)
24
{
25
MyDevClass *myclass = MYDEV_GET_CLASS(obj);
26
MyDevState *mydev = MYDEV(obj);
27
/* call parent class hold phase */
28
if (myclass->parent_phases.hold) {
29
- myclass->parent_phases.hold(obj);
30
+ myclass->parent_phases.hold(obj, type);
46
}
31
}
32
/* set an IO */
33
qemu_set_irq(mydev->irq, 1);
47
}
34
}
48
35
49
- *fsr = 0x00d; /* Permission fault */
36
- static void mydev_reset_exit(Object *obj)
50
+ fi->type = ARMFault_Permission;
37
+ static void mydev_reset_exit(Object *obj, ResetType type)
51
+ fi->level = 1;
38
{
52
return !(*prot & (1 << access_type));
39
MyDevClass *myclass = MYDEV_GET_CLASS(obj);
53
}
40
MyDevState *mydev = MYDEV(obj);
54
41
/* call parent class exit phase */
55
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
42
if (myclass->parent_phases.exit) {
56
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
43
- myclass->parent_phases.exit(obj);
57
MMUAccessType access_type, ARMMMUIdx mmu_idx,
44
+ myclass->parent_phases.exit(obj, type);
58
hwaddr *phys_ptr, MemTxAttrs *txattrs,
45
}
59
- int *prot, uint32_t *fsr)
46
/* clear an IO */
60
+ int *prot, ARMMMUFaultInfo *fi)
47
qemu_set_irq(mydev->irq, 0);
61
{
62
uint32_t secure = regime_is_secure(env, mmu_idx);
63
V8M_SAttributes sattrs = {};
64
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
65
* (including possibly emulating an SG instruction).
66
*/
67
if (sattrs.ns != !secure) {
68
- *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
69
+ if (sattrs.nsc) {
70
+ fi->type = ARMFault_QEMU_NSCExec;
71
+ } else {
72
+ fi->type = ARMFault_QEMU_SFault;
73
+ }
74
*phys_ptr = address;
75
*prot = 0;
76
return true;
77
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
78
* If we added it we would need to do so as a special case
79
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
80
*/
81
- *fsr = M_FAKE_FSR_SFAULT;
82
+ fi->type = ARMFault_QEMU_SFault;
83
*phys_ptr = address;
84
*prot = 0;
85
return true;
86
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
87
}
88
89
return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
90
- txattrs, prot, fsr, NULL);
91
+ txattrs, prot, fi, NULL);
92
}
93
94
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
95
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
96
if (arm_feature(env, ARM_FEATURE_V8)) {
97
/* PMSAv8 */
98
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
99
- phys_ptr, attrs, prot, fsr);
100
+ phys_ptr, attrs, prot, fi);
101
+ *fsr = arm_fi_to_sfsc(fi);
102
} else if (arm_feature(env, ARM_FEATURE_V7)) {
103
/* PMSAv7 */
104
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
105
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
106
uint32_t tt_resp;
107
bool r, rw, nsr, nsrw, mrvalid;
108
int prot;
109
+ ARMMMUFaultInfo fi = {};
110
MemTxAttrs attrs = {};
111
hwaddr phys_addr;
112
- uint32_t fsr;
113
ARMMMUIdx mmu_idx;
114
uint32_t mregion;
115
bool targetpriv;
116
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
117
if (arm_current_el(env) != 0 || alt) {
118
/* We can ignore the return value as prot is always set */
119
pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
120
- &phys_addr, &attrs, &prot, &fsr, &mregion);
121
+ &phys_addr, &attrs, &prot, &fi, &mregion);
122
if (mregion == -1) {
123
mrvalid = false;
124
mregion = 0;
125
--
48
--
126
2.7.4
49
2.34.1
127
50
128
51
diff view generated by jsdifflib
1
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
1
Some devices and machines need to handle the reset before a vmsave
2
structure, which we convert to the FSC at the callsite.
2
snapshot is loaded differently -- the main user is the handling of
3
RNG seed information, which does not want to put a new RNG seed into
4
a ROM blob when we are doing a snapshot load.
5
6
Currently this kind of reset handling is supported only for:
7
* TYPE_MACHINE reset methods, which take a ShutdownCause argument
8
* reset functions registered with qemu_register_reset_nosnapshotload
9
10
To allow a three-phase-reset device to also distinguish "snapshot
11
load" reset from the normal kind, add a new ResetType
12
RESET_TYPE_SNAPSHOT_LOAD. All our existing reset methods ignore
13
the reset type, so we don't need to update any device code.
14
15
Add the enum type, and make qemu_devices_reset() use the
16
right reset type for the ShutdownCause it is passed. This
17
allows us to get rid of the device_reset_reason global we
18
were using to implement qemu_register_reset_nosnapshotload().
3
19
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
22
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
23
Reviewed-by: Luc Michel <luc.michel@amd.com>
8
Message-id: 1512503192-2239-6-git-send-email-peter.maydell@linaro.org
24
Message-id: 20240412160809.1260625-7-peter.maydell@linaro.org
9
---
25
---
10
target/arm/helper.c | 41 ++++++++++++++++++-----------------------
26
docs/devel/reset.rst | 17 ++++++++++++++---
11
1 file changed, 18 insertions(+), 23 deletions(-)
27
include/hw/resettable.h | 1 +
28
hw/core/reset.c | 15 ++++-----------
29
hw/core/resettable.c | 4 ----
30
4 files changed, 19 insertions(+), 18 deletions(-)
12
31
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
32
diff --git a/docs/devel/reset.rst b/docs/devel/reset.rst
14
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
34
--- a/docs/devel/reset.rst
16
+++ b/target/arm/helper.c
35
+++ b/docs/devel/reset.rst
17
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
36
@@ -XXX,XX +XXX,XX @@ instantly reset an object, without keeping it in reset state, just call
18
static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
37
``resettable_reset()``. These functions take two parameters: a pointer to the
19
MMUAccessType access_type, ARMMMUIdx mmu_idx,
38
object to reset and a reset type.
20
hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
39
21
- target_ulong *page_size_ptr, uint32_t *fsr,
40
-Several types of reset will be supported. For now only cold reset is defined;
22
+ target_ulong *page_size_ptr,
41
-others may be added later. The Resettable interface handles reset types with an
23
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
42
-enum:
24
43
+The Resettable interface handles reset types with an enum ``ResetType``:
25
/* Security attributes for an address, as returned by v8m_security_lookup. */
44
26
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
45
``RESET_TYPE_COLD``
27
hwaddr s2pa;
46
Cold reset is supported by every resettable object. In QEMU, it means we reset
28
int s2prot;
47
@@ -XXX,XX +XXX,XX @@ enum:
29
int ret;
48
from what is a real hardware cold reset. It differs from other resets (like
30
- uint32_t fsr;
49
warm or bus resets) which may keep certain parts untouched.
31
50
32
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
51
+``RESET_TYPE_SNAPSHOT_LOAD``
33
- &txattrs, &s2prot, &s2size, &fsr, fi, NULL);
52
+ This is called for a reset which is being done to put the system into a
34
+ &txattrs, &s2prot, &s2size, fi, NULL);
53
+ clean state prior to loading a snapshot. (This corresponds to a reset
35
if (ret) {
54
+ with ``SHUTDOWN_CAUSE_SNAPSHOT_LOAD``.) Almost all devices should treat
36
fi->s2addr = addr;
55
+ this the same as ``RESET_TYPE_COLD``. The main exception is devices which
37
fi->stage2 = true;
56
+ have some non-deterministic state they want to reinitialize to a different
38
@@ -XXX,XX +XXX,XX @@ do_fault:
57
+ value on each cold reset, such as RNG seed information, and which they
39
return true;
58
+ must not reinitialize on a snapshot-load reset.
59
+
60
+Devices which implement reset methods must treat any unknown ``ResetType``
61
+as equivalent to ``RESET_TYPE_COLD``; this will reduce the amount of
62
+existing code we need to change if we add more types in future.
63
+
64
Calling ``resettable_reset()`` is equivalent to calling
65
``resettable_assert_reset()`` then ``resettable_release_reset()``. It is
66
possible to interleave multiple calls to these three functions. There may
67
diff --git a/include/hw/resettable.h b/include/hw/resettable.h
68
index XXXXXXX..XXXXXXX 100644
69
--- a/include/hw/resettable.h
70
+++ b/include/hw/resettable.h
71
@@ -XXX,XX +XXX,XX @@ typedef struct ResettableState ResettableState;
72
*/
73
typedef enum ResetType {
74
RESET_TYPE_COLD,
75
+ RESET_TYPE_SNAPSHOT_LOAD,
76
} ResetType;
77
78
/*
79
diff --git a/hw/core/reset.c b/hw/core/reset.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/core/reset.c
82
+++ b/hw/core/reset.c
83
@@ -XXX,XX +XXX,XX @@ static ResettableContainer *get_root_reset_container(void)
84
return root_reset_container;
40
}
85
}
41
86
42
-/* Fault type for long-descriptor MMU fault reporting; this corresponds
87
-/*
43
- * to bits [5..2] in the STATUS field in long-format DFSR/IFSR.
88
- * Reason why the currently in-progress qemu_devices_reset() was called.
89
- * If we made at least SHUTDOWN_CAUSE_SNAPSHOT_LOAD have a corresponding
90
- * ResetType we could perhaps avoid the need for this global.
44
- */
91
- */
45
-typedef enum {
92
-static ShutdownCause device_reset_reason;
46
- translation_fault = 1,
47
- access_fault = 2,
48
- permission_fault = 3,
49
-} MMUFaultType;
50
-
93
-
51
/*
94
/*
52
* check_s2_mmu_setup
95
* This is an Object which implements Resettable simply to call the
53
* @cpu: ARMCPU
96
* callback function in the hold phase.
54
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
97
@@ -XXX,XX +XXX,XX @@ static void legacy_reset_hold(Object *obj, ResetType type)
55
static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
56
MMUAccessType access_type, ARMMMUIdx mmu_idx,
57
hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
58
- target_ulong *page_size_ptr, uint32_t *fsr,
59
+ target_ulong *page_size_ptr,
60
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
61
{
98
{
62
ARMCPU *cpu = arm_env_get_cpu(env);
99
LegacyReset *lr = LEGACY_RESET(obj);
63
CPUState *cs = CPU(cpu);
100
64
/* Read an LPAE long-descriptor translation table. */
101
- if (device_reset_reason == SHUTDOWN_CAUSE_SNAPSHOT_LOAD &&
65
- MMUFaultType fault_type = translation_fault;
102
- lr->skip_on_snapshot_load) {
66
+ ARMFaultType fault_type = ARMFault_Translation;
103
+ if (type == RESET_TYPE_SNAPSHOT_LOAD && lr->skip_on_snapshot_load) {
67
uint32_t level;
104
return;
68
uint32_t epd = 0;
69
int32_t t0sz, t1sz;
70
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
71
ttbr_select = 1;
72
} else {
73
/* in the gap between the two regions, this is a Translation fault */
74
- fault_type = translation_fault;
75
+ fault_type = ARMFault_Translation;
76
goto do_fault;
77
}
105
}
78
106
lr->func(lr->opaque);
79
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
107
@@ -XXX,XX +XXX,XX @@ void qemu_unregister_resettable(Object *obj)
80
ok = check_s2_mmu_setup(cpu, aarch64, startlevel,
108
81
inputsize, stride);
109
void qemu_devices_reset(ShutdownCause reason)
82
if (!ok) {
110
{
83
- fault_type = translation_fault;
111
- device_reset_reason = reason;
84
+ fault_type = ARMFault_Translation;
112
+ ResetType type = (reason == SHUTDOWN_CAUSE_SNAPSHOT_LOAD) ?
85
goto do_fault;
113
+ RESET_TYPE_SNAPSHOT_LOAD : RESET_TYPE_COLD;
86
}
114
87
level = startlevel;
115
/* Reset the simulation */
88
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
116
- resettable_reset(OBJECT(get_root_reset_container()), RESET_TYPE_COLD);
89
/* Here descaddr is the final physical address, and attributes
117
+ resettable_reset(OBJECT(get_root_reset_container()), type);
90
* are all in attrs.
118
}
91
*/
119
diff --git a/hw/core/resettable.c b/hw/core/resettable.c
92
- fault_type = access_fault;
120
index XXXXXXX..XXXXXXX 100644
93
+ fault_type = ARMFault_AccessFlag;
121
--- a/hw/core/resettable.c
94
if ((attrs & (1 << 8)) == 0) {
122
+++ b/hw/core/resettable.c
95
/* Access flag */
123
@@ -XXX,XX +XXX,XX @@ void resettable_reset(Object *obj, ResetType type)
96
goto do_fault;
124
97
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
125
void resettable_assert_reset(Object *obj, ResetType type)
98
*prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
126
{
99
}
127
- /* TODO: change this assert when adding support for other reset types */
100
128
- assert(type == RESET_TYPE_COLD);
101
- fault_type = permission_fault;
129
trace_resettable_reset_assert_begin(obj, type);
102
+ fault_type = ARMFault_Permission;
130
assert(!enter_phase_in_progress);
103
if (!(*prot & (1 << access_type))) {
131
104
goto do_fault;
132
@@ -XXX,XX +XXX,XX @@ void resettable_assert_reset(Object *obj, ResetType type)
105
}
133
106
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
134
void resettable_release_reset(Object *obj, ResetType type)
107
return false;
135
{
108
136
- /* TODO: change this assert when adding support for other reset types */
109
do_fault:
137
- assert(type == RESET_TYPE_COLD);
110
- /* Long-descriptor format IFSR/DFSR value */
138
trace_resettable_reset_release_begin(obj, type);
111
- *fsr = (1 << 9) | (fault_type << 2) | level;
139
assert(!enter_phase_in_progress);
112
+ fi->type = fault_type;
140
113
+ fi->level = level;
114
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
115
fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_S2NS);
116
return true;
117
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
118
/* S1 is done. Now do S2 translation. */
119
ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_S2NS,
120
phys_ptr, attrs, &s2_prot,
121
- page_size, fsr, fi,
122
+ page_size, fi,
123
cacheattrs != NULL ? &cacheattrs2 : NULL);
124
+ *fsr = arm_fi_to_lfsc(fi);
125
fi->s2addr = ipa;
126
/* Combine the S1 and S2 perms. */
127
*prot &= s2_prot;
128
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
129
}
130
131
if (regime_using_lpae_format(env, mmu_idx)) {
132
- return get_phys_addr_lpae(env, address, access_type, mmu_idx, phys_ptr,
133
- attrs, prot, page_size, fsr, fi, cacheattrs);
134
+ bool ret = get_phys_addr_lpae(env, address, access_type, mmu_idx,
135
+ phys_ptr, attrs, prot, page_size,
136
+ fi, cacheattrs);
137
+
138
+ *fsr = arm_fi_to_lfsc(fi);
139
+ return ret;
140
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
141
bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
142
phys_ptr, attrs, prot, page_size, fi);
143
--
141
--
144
2.7.4
142
2.34.1
145
143
146
144
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
Add support for the Zynq UltraScale+ MPSoC Generic QSPI.
3
Add the basic infrastructure (register read/write, type...)
4
to implement the STM32L4x5 USART.
4
5
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Also create different types for the USART, UART and LPUART
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
of the STM32L4x5 to deduplicate code and enable the
7
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
implementation of different behaviors depending on the type.
8
Message-id: 20171126231634.9531-13-frasse.iglesias@gmail.com
9
10
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
11
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20240329174402.60382-2-arnaud.minier@telecom-paris.fr
14
[PMM: update to new reset hold method signature;
15
fixed a few checkpatch nits]
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
17
---
11
include/hw/ssi/xilinx_spips.h | 32 ++-
18
MAINTAINERS | 1 +
12
hw/ssi/xilinx_spips.c | 579 ++++++++++++++++++++++++++++++++++++----
19
include/hw/char/stm32l4x5_usart.h | 66 +++++
13
default-configs/arm-softmmu.mak | 2 +-
20
hw/char/stm32l4x5_usart.c | 396 ++++++++++++++++++++++++++++++
14
3 files changed, 564 insertions(+), 49 deletions(-)
21
hw/char/Kconfig | 3 +
22
hw/char/meson.build | 1 +
23
hw/char/trace-events | 4 +
24
6 files changed, 471 insertions(+)
25
create mode 100644 include/hw/char/stm32l4x5_usart.h
26
create mode 100644 hw/char/stm32l4x5_usart.c
15
27
16
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
28
diff --git a/MAINTAINERS b/MAINTAINERS
17
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/ssi/xilinx_spips.h
30
--- a/MAINTAINERS
19
+++ b/include/hw/ssi/xilinx_spips.h
31
+++ b/MAINTAINERS
32
@@ -XXX,XX +XXX,XX @@ M: Inès Varhol <ines.varhol@telecom-paris.fr>
33
L: qemu-arm@nongnu.org
34
S: Maintained
35
F: hw/arm/stm32l4x5_soc.c
36
+F: hw/char/stm32l4x5_usart.c
37
F: hw/misc/stm32l4x5_exti.c
38
F: hw/misc/stm32l4x5_syscfg.c
39
F: hw/misc/stm32l4x5_rcc.c
40
diff --git a/include/hw/char/stm32l4x5_usart.h b/include/hw/char/stm32l4x5_usart.h
41
new file mode 100644
42
index XXXXXXX..XXXXXXX
43
--- /dev/null
44
+++ b/include/hw/char/stm32l4x5_usart.h
20
@@ -XXX,XX +XXX,XX @@
45
@@ -XXX,XX +XXX,XX @@
21
#define XILINX_SPIPS_H
46
+/*
22
47
+ * STM32L4X5 USART (Universal Synchronous Asynchronous Receiver Transmitter)
23
#include "hw/ssi/ssi.h"
48
+ *
24
-#include "qemu/fifo8.h"
49
+ * Copyright (c) 2023 Arnaud Minier <arnaud.minier@telecom-paris.fr>
25
+#include "qemu/fifo32.h"
50
+ * Copyright (c) 2023 Inès Varhol <ines.varhol@telecom-paris.fr>
26
+#include "hw/stream.h"
51
+ *
27
52
+ * SPDX-License-Identifier: GPL-2.0-or-later
28
typedef struct XilinxSPIPS XilinxSPIPS;
53
+ *
29
54
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
30
#define XLNX_SPIPS_R_MAX (0x100 / 4)
55
+ * See the COPYING file in the top-level directory.
31
+#define XLNX_ZYNQMP_SPIPS_R_MAX (0x200 / 4)
56
+ *
32
57
+ * The STM32L4X5 USART is heavily inspired by the stm32f2xx_usart
33
/* Bite off 4k chunks at a time */
58
+ * by Alistair Francis.
34
#define LQSPI_CACHE_SIZE 1024
59
+ * The reference used is the STMicroElectronics RM0351 Reference manual
35
@@ -XXX,XX +XXX,XX @@ typedef struct {
60
+ * for STM32L4x5 and STM32L4x6 advanced Arm®-based 32-bit MCUs.
36
bool mmio_execution_enabled;
61
+ */
37
} XilinxQSPIPS;
62
+
38
63
+#ifndef HW_STM32L4X5_USART_H
39
+typedef struct {
64
+#define HW_STM32L4X5_USART_H
40
+ XilinxQSPIPS parent_obj;
65
+
41
+
66
+#include "hw/sysbus.h"
42
+ StreamSlave *dma;
67
+#include "chardev/char-fe.h"
43
+ uint8_t dma_buf[4];
68
+#include "qom/object.h"
44
+ int gqspi_irqline;
69
+
45
+
70
+#define TYPE_STM32L4X5_USART_BASE "stm32l4x5-usart-base"
46
+ uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
71
+#define TYPE_STM32L4X5_USART "stm32l4x5-usart"
47
+
72
+#define TYPE_STM32L4X5_UART "stm32l4x5-uart"
48
+ /* GQSPI has seperate tx/rx fifos */
73
+#define TYPE_STM32L4X5_LPUART "stm32l4x5-lpuart"
49
+ Fifo8 rx_fifo_g;
74
+OBJECT_DECLARE_TYPE(Stm32l4x5UsartBaseState, Stm32l4x5UsartBaseClass,
50
+ Fifo8 tx_fifo_g;
75
+ STM32L4X5_USART_BASE)
51
+ Fifo32 fifo_g;
76
+
52
+ /*
77
+typedef enum {
53
+ * At the end of each generic command, misaligned extra bytes are discarded
78
+ STM32L4x5_USART,
54
+ * or padded to tx and rx respectively to round it out (and avoid the need for
79
+ STM32L4x5_UART,
55
+ * individual byte access). Since we use byte fifos, keep track of the
80
+ STM32L4x5_LPUART,
56
+ * alignment WRT word access.
81
+} Stm32l4x5UsartType;
57
+ */
82
+
58
+ uint8_t rx_fifo_g_align;
83
+struct Stm32l4x5UsartBaseState {
59
+ uint8_t tx_fifo_g_align;
84
+ SysBusDevice parent_obj;
60
+ bool man_start_com_g;
85
+
61
+} XlnxZynqMPQSPIPS;
86
+ MemoryRegion mmio;
62
+
87
+
63
typedef struct XilinxSPIPSClass {
88
+ uint32_t cr1;
64
SysBusDeviceClass parent_class;
89
+ uint32_t cr2;
65
90
+ uint32_t cr3;
66
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPSClass {
91
+ uint32_t brr;
67
92
+ uint32_t gtpr;
68
#define TYPE_XILINX_SPIPS "xlnx.ps7-spi"
93
+ uint32_t rtor;
69
#define TYPE_XILINX_QSPIPS "xlnx.ps7-qspi"
94
+ /* rqr is write-only */
70
+#define TYPE_XLNX_ZYNQMP_QSPIPS "xlnx.usmp-gqspi"
95
+ uint32_t isr;
71
96
+ /* icr is a clear register */
72
#define XILINX_SPIPS(obj) \
97
+ uint32_t rdr;
73
OBJECT_CHECK(XilinxSPIPS, (obj), TYPE_XILINX_SPIPS)
98
+ uint32_t tdr;
74
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPSClass {
99
+
75
#define XILINX_QSPIPS(obj) \
100
+ Clock *clk;
76
OBJECT_CHECK(XilinxQSPIPS, (obj), TYPE_XILINX_QSPIPS)
101
+ CharBackend chr;
77
102
+ qemu_irq irq;
78
+#define XLNX_ZYNQMP_QSPIPS(obj) \
103
+};
79
+ OBJECT_CHECK(XlnxZynqMPQSPIPS, (obj), TYPE_XLNX_ZYNQMP_QSPIPS)
104
+
80
+
105
+struct Stm32l4x5UsartBaseClass {
81
#endif /* XILINX_SPIPS_H */
106
+ SysBusDeviceClass parent_class;
82
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
107
+
83
index XXXXXXX..XXXXXXX 100644
108
+ Stm32l4x5UsartType type;
84
--- a/hw/ssi/xilinx_spips.c
109
+};
85
+++ b/hw/ssi/xilinx_spips.c
110
+
111
+#endif /* HW_STM32L4X5_USART_H */
112
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
113
new file mode 100644
114
index XXXXXXX..XXXXXXX
115
--- /dev/null
116
+++ b/hw/char/stm32l4x5_usart.c
86
@@ -XXX,XX +XXX,XX @@
117
@@ -XXX,XX +XXX,XX @@
87
#include "hw/ssi/xilinx_spips.h"
118
+/*
88
#include "qapi/error.h"
119
+ * STM32L4X5 USART (Universal Synchronous Asynchronous Receiver Transmitter)
89
#include "hw/register.h"
120
+ *
90
+#include "sysemu/dma.h"
121
+ * Copyright (c) 2023 Arnaud Minier <arnaud.minier@telecom-paris.fr>
91
#include "migration/blocker.h"
122
+ * Copyright (c) 2023 Inès Varhol <ines.varhol@telecom-paris.fr>
92
123
+ *
93
#ifndef XILINX_SPIPS_ERR_DEBUG
124
+ * SPDX-License-Identifier: GPL-2.0-or-later
94
@@ -XXX,XX +XXX,XX @@
125
+ *
95
#define R_INTR_DIS (0x0C / 4)
126
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
96
#define R_INTR_MASK (0x10 / 4)
127
+ * See the COPYING file in the top-level directory.
97
#define IXR_TX_FIFO_UNDERFLOW (1 << 6)
128
+ *
98
+/* Poll timeout not implemented */
129
+ * The STM32L4X5 USART is heavily inspired by the stm32f2xx_usart
99
+#define IXR_RX_FIFO_EMPTY (1 << 11)
130
+ * by Alistair Francis.
100
+#define IXR_GENERIC_FIFO_FULL (1 << 10)
131
+ * The reference used is the STMicroElectronics RM0351 Reference manual
101
+#define IXR_GENERIC_FIFO_NOT_FULL (1 << 9)
132
+ * for STM32L4x5 and STM32L4x6 advanced Arm®-based 32-bit MCUs.
102
+#define IXR_TX_FIFO_EMPTY (1 << 8)
103
+#define IXR_GENERIC_FIFO_EMPTY (1 << 7)
104
#define IXR_RX_FIFO_FULL (1 << 5)
105
#define IXR_RX_FIFO_NOT_EMPTY (1 << 4)
106
#define IXR_TX_FIFO_FULL (1 << 3)
107
#define IXR_TX_FIFO_NOT_FULL (1 << 2)
108
#define IXR_TX_FIFO_MODE_FAIL (1 << 1)
109
#define IXR_RX_FIFO_OVERFLOW (1 << 0)
110
-#define IXR_ALL ((IXR_TX_FIFO_UNDERFLOW<<1)-1)
111
+#define IXR_ALL ((1 << 13) - 1)
112
+#define GQSPI_IXR_MASK 0xFBE
113
+#define IXR_SELF_CLEAR \
114
+(IXR_GENERIC_FIFO_EMPTY \
115
+| IXR_GENERIC_FIFO_FULL \
116
+| IXR_GENERIC_FIFO_NOT_FULL \
117
+| IXR_TX_FIFO_EMPTY \
118
+| IXR_TX_FIFO_FULL \
119
+| IXR_TX_FIFO_NOT_FULL \
120
+| IXR_RX_FIFO_EMPTY \
121
+| IXR_RX_FIFO_FULL \
122
+| IXR_RX_FIFO_NOT_EMPTY)
123
124
#define R_EN (0x14 / 4)
125
#define R_DELAY (0x18 / 4)
126
@@ -XXX,XX +XXX,XX @@
127
128
#define R_MOD_ID (0xFC / 4)
129
130
+#define R_GQSPI_SELECT (0x144 / 4)
131
+ FIELD(GQSPI_SELECT, GENERIC_QSPI_EN, 0, 1)
132
+#define R_GQSPI_ISR (0x104 / 4)
133
+#define R_GQSPI_IER (0x108 / 4)
134
+#define R_GQSPI_IDR (0x10c / 4)
135
+#define R_GQSPI_IMR (0x110 / 4)
136
+#define R_GQSPI_TX_THRESH (0x128 / 4)
137
+#define R_GQSPI_RX_THRESH (0x12c / 4)
138
+#define R_GQSPI_CNFG (0x100 / 4)
139
+ FIELD(GQSPI_CNFG, MODE_EN, 30, 2)
140
+ FIELD(GQSPI_CNFG, GEN_FIFO_START_MODE, 29, 1)
141
+ FIELD(GQSPI_CNFG, GEN_FIFO_START, 28, 1)
142
+ FIELD(GQSPI_CNFG, ENDIAN, 26, 1)
143
+ /* Poll timeout not implemented */
144
+ FIELD(GQSPI_CNFG, EN_POLL_TIMEOUT, 20, 1)
145
+ /* QEMU doesn't care about any of these last three */
146
+ FIELD(GQSPI_CNFG, BR, 3, 3)
147
+ FIELD(GQSPI_CNFG, CPH, 2, 1)
148
+ FIELD(GQSPI_CNFG, CPL, 1, 1)
149
+#define R_GQSPI_GEN_FIFO (0x140 / 4)
150
+#define R_GQSPI_TXD (0x11c / 4)
151
+#define R_GQSPI_RXD (0x120 / 4)
152
+#define R_GQSPI_FIFO_CTRL (0x14c / 4)
153
+ FIELD(GQSPI_FIFO_CTRL, RX_FIFO_RESET, 2, 1)
154
+ FIELD(GQSPI_FIFO_CTRL, TX_FIFO_RESET, 1, 1)
155
+ FIELD(GQSPI_FIFO_CTRL, GENERIC_FIFO_RESET, 0, 1)
156
+#define R_GQSPI_GFIFO_THRESH (0x150 / 4)
157
+#define R_GQSPI_DATA_STS (0x15c / 4)
158
+/* We use the snapshot register to hold the core state for the currently
159
+ * or most recently executed command. So the generic fifo format is defined
160
+ * for the snapshot register
161
+ */
133
+ */
162
+#define R_GQSPI_GF_SNAPSHOT (0x160 / 4)
134
+
163
+ FIELD(GQSPI_GF_SNAPSHOT, POLL, 19, 1)
135
+#include "qemu/osdep.h"
164
+ FIELD(GQSPI_GF_SNAPSHOT, STRIPE, 18, 1)
136
+#include "qemu/log.h"
165
+ FIELD(GQSPI_GF_SNAPSHOT, RECIEVE, 17, 1)
137
+#include "qemu/module.h"
166
+ FIELD(GQSPI_GF_SNAPSHOT, TRANSMIT, 16, 1)
138
+#include "qapi/error.h"
167
+ FIELD(GQSPI_GF_SNAPSHOT, DATA_BUS_SELECT, 14, 2)
139
+#include "chardev/char-fe.h"
168
+ FIELD(GQSPI_GF_SNAPSHOT, CHIP_SELECT, 12, 2)
140
+#include "chardev/char-serial.h"
169
+ FIELD(GQSPI_GF_SNAPSHOT, SPI_MODE, 10, 2)
141
+#include "migration/vmstate.h"
170
+ FIELD(GQSPI_GF_SNAPSHOT, EXPONENT, 9, 1)
142
+#include "hw/char/stm32l4x5_usart.h"
171
+ FIELD(GQSPI_GF_SNAPSHOT, DATA_XFER, 8, 1)
143
+#include "hw/clock.h"
172
+ FIELD(GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA, 0, 8)
144
+#include "hw/irq.h"
173
+#define R_GQSPI_MOD_ID (0x168 / 4)
145
+#include "hw/qdev-clock.h"
174
+#define R_GQSPI_MOD_ID_VALUE 0x010A0000
146
+#include "hw/qdev-properties.h"
175
/* size of TXRX FIFOs */
147
+#include "hw/qdev-properties-system.h"
176
-#define RXFF_A 32
148
+#include "hw/registerfields.h"
177
-#define TXFF_A 32
149
+#include "trace.h"
178
+#define RXFF_A (128)
150
+
179
+#define TXFF_A (128)
151
+
180
152
+REG32(CR1, 0x00)
181
#define RXFF_A_Q (64 * 4)
153
+ FIELD(CR1, M1, 28, 1) /* Word length (part 2, see M0) */
182
#define TXFF_A_Q (64 * 4)
154
+ FIELD(CR1, EOBIE, 27, 1) /* End of Block interrupt enable */
183
@@ -XXX,XX +XXX,XX @@ static inline int num_effective_busses(XilinxSPIPS *s)
155
+ FIELD(CR1, RTOIE, 26, 1) /* Receiver timeout interrupt enable */
184
s->regs[R_LQSPI_CFG] & LQSPI_CFG_TWO_MEM) ? s->num_busses : 1;
156
+ FIELD(CR1, DEAT, 21, 5) /* Driver Enable assertion time */
185
}
157
+ FIELD(CR1, DEDT, 16, 5) /* Driver Enable de-assertion time */
186
158
+ FIELD(CR1, OVER8, 15, 1) /* Oversampling mode */
187
-static inline bool xilinx_spips_cs_is_set(XilinxSPIPS *s, int i, int field)
159
+ FIELD(CR1, CMIE, 14, 1) /* Character match interrupt enable */
188
-{
160
+ FIELD(CR1, MME, 13, 1) /* Mute mode enable */
189
- return ~field & (1 << i) && (s->regs[R_CONFIG] & MANUAL_CS
161
+ FIELD(CR1, M0, 12, 1) /* Word length (part 1, see M1) */
190
- || !fifo8_is_empty(&s->tx_fifo));
162
+ FIELD(CR1, WAKE, 11, 1) /* Receiver wakeup method */
191
-}
163
+ FIELD(CR1, PCE, 10, 1) /* Parity control enable */
192
-
164
+ FIELD(CR1, PS, 9, 1) /* Parity selection */
193
-static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
165
+ FIELD(CR1, PEIE, 8, 1) /* PE interrupt enable */
194
+static void xilinx_spips_update_cs(XilinxSPIPS *s, int field)
166
+ FIELD(CR1, TXEIE, 7, 1) /* TXE interrupt enable */
195
{
167
+ FIELD(CR1, TCIE, 6, 1) /* Transmission complete interrupt enable */
196
- int i, j;
168
+ FIELD(CR1, RXNEIE, 5, 1) /* RXNE interrupt enable */
197
- bool found = false;
169
+ FIELD(CR1, IDLEIE, 4, 1) /* IDLE interrupt enable */
198
- int field = s->regs[R_CONFIG] >> CS_SHIFT;
170
+ FIELD(CR1, TE, 3, 1) /* Transmitter enable */
199
+ int i;
171
+ FIELD(CR1, RE, 2, 1) /* Receiver enable */
200
172
+ FIELD(CR1, UESM, 1, 1) /* USART enable in Stop mode */
201
for (i = 0; i < s->num_cs; i++) {
173
+ FIELD(CR1, UE, 0, 1) /* USART enable */
202
- for (j = 0; j < num_effective_busses(s); j++) {
174
+REG32(CR2, 0x04)
203
- int upage = !!(s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE);
175
+ FIELD(CR2, ADD_1, 28, 4) /* ADD[7:4] */
204
- int cs_to_set = (j * s->num_cs + i + upage) %
176
+ FIELD(CR2, ADD_0, 24, 1) /* ADD[3:0] */
205
- (s->num_cs * s->num_busses);
177
+ FIELD(CR2, RTOEN, 23, 1) /* Receiver timeout enable */
206
-
178
+ FIELD(CR2, ABRMOD, 21, 2) /* Auto baud rate mode */
207
- if (xilinx_spips_cs_is_set(s, i, field) && !found) {
179
+ FIELD(CR2, ABREN, 20, 1) /* Auto baud rate enable */
208
- DB_PRINT_L(0, "selecting slave %d\n", i);
180
+ FIELD(CR2, MSBFIRST, 19, 1) /* Most significant bit first */
209
- qemu_set_irq(s->cs_lines[cs_to_set], 0);
181
+ FIELD(CR2, DATAINV, 18, 1) /* Binary data inversion */
210
- if (s->cs_lines_state[cs_to_set]) {
182
+ FIELD(CR2, TXINV, 17, 1) /* TX pin active level inversion */
211
- s->cs_lines_state[cs_to_set] = false;
183
+ FIELD(CR2, RXINV, 16, 1) /* RX pin active level inversion */
212
- s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
184
+ FIELD(CR2, SWAP, 15, 1) /* Swap RX/TX pins */
213
- }
185
+ FIELD(CR2, LINEN, 14, 1) /* LIN mode enable */
214
- } else {
186
+ FIELD(CR2, STOP, 12, 2) /* STOP bits */
215
- DB_PRINT_L(0, "deselecting slave %d\n", i);
187
+ FIELD(CR2, CLKEN, 11, 1) /* Clock enable */
216
- qemu_set_irq(s->cs_lines[cs_to_set], 1);
188
+ FIELD(CR2, CPOL, 10, 1) /* Clock polarity */
217
- s->cs_lines_state[cs_to_set] = true;
189
+ FIELD(CR2, CPHA, 9, 1) /* Clock phase */
218
- }
190
+ FIELD(CR2, LBCL, 8, 1) /* Last bit clock pulse */
219
- }
191
+ FIELD(CR2, LBDIE, 6, 1) /* LIN break detection interrupt enable */
220
- if (xilinx_spips_cs_is_set(s, i, field)) {
192
+ FIELD(CR2, LBDL, 5, 1) /* LIN break detection length */
221
- found = true;
193
+ FIELD(CR2, ADDM7, 4, 1) /* 7-bit / 4-bit Address Detection */
222
+ bool old_state = s->cs_lines_state[i];
194
+
223
+ bool new_state = field & (1 << i);
195
+REG32(CR3, 0x08)
224
+
196
+ /* TCBGTIE only on STM32L496xx/4A6xx devices */
225
+ if (old_state != new_state) {
197
+ FIELD(CR3, UCESM, 23, 1) /* USART Clock Enable in Stop Mode */
226
+ s->cs_lines_state[i] = new_state;
198
+ FIELD(CR3, WUFIE, 22, 1) /* Wakeup from Stop mode interrupt enable */
227
+ s->rx_discard = ARRAY_FIELD_EX32(s->regs, CMND, RX_DISCARD);
199
+ FIELD(CR3, WUS, 20, 2) /* Wakeup from Stop mode interrupt flag selection */
228
+ DB_PRINT_L(1, "%sselecting slave %d\n", new_state ? "" : "de", i);
200
+ FIELD(CR3, SCARCNT, 17, 3) /* Smartcard auto-retry count */
229
}
201
+ FIELD(CR3, DEP, 15, 1) /* Driver enable polarity selection */
230
+ qemu_set_irq(s->cs_lines[i], !new_state);
202
+ FIELD(CR3, DEM, 14, 1) /* Driver enable mode */
231
}
203
+ FIELD(CR3, DDRE, 13, 1) /* DMA Disable on Reception Error */
232
- if (!found) {
204
+ FIELD(CR3, OVRDIS, 12, 1) /* Overrun Disable */
233
+ if (!(field & ((1 << s->num_cs) - 1))) {
205
+ FIELD(CR3, ONEBIT, 11, 1) /* One sample bit method enable */
234
s->snoop_state = SNOOP_CHECKING;
206
+ FIELD(CR3, CTSIE, 10, 1) /* CTS interrupt enable */
235
s->cmd_dummies = 0;
207
+ FIELD(CR3, CTSE, 9, 1) /* CTS enable */
236
s->link_state = 1;
208
+ FIELD(CR3, RTSE, 8, 1) /* RTS enable */
237
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
209
+ FIELD(CR3, DMAT, 7, 1) /* DMA enable transmitter */
238
}
210
+ FIELD(CR3, DMAR, 6, 1) /* DMA enable receiver */
239
}
211
+ FIELD(CR3, SCEN, 5, 1) /* Smartcard mode enable */
240
212
+ FIELD(CR3, NACK, 4, 1) /* Smartcard NACK enable */
241
+static void xlnx_zynqmp_qspips_update_cs_lines(XlnxZynqMPQSPIPS *s)
213
+ FIELD(CR3, HDSEL, 3, 1) /* Half-duplex selection */
242
+{
214
+ FIELD(CR3, IRLP, 2, 1) /* IrDA low-power */
243
+ if (s->regs[R_GQSPI_GF_SNAPSHOT]) {
215
+ FIELD(CR3, IREN, 1, 1) /* IrDA mode enable */
244
+ int field = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, CHIP_SELECT);
216
+ FIELD(CR3, EIE, 0, 1) /* Error interrupt enable */
245
+ xilinx_spips_update_cs(XILINX_SPIPS(s), field);
217
+REG32(BRR, 0x0C)
218
+ FIELD(BRR, BRR, 0, 16)
219
+REG32(GTPR, 0x10)
220
+ FIELD(GTPR, GT, 8, 8) /* Guard time value */
221
+ FIELD(GTPR, PSC, 0, 8) /* Prescaler value */
222
+REG32(RTOR, 0x14)
223
+ FIELD(RTOR, BLEN, 24, 8) /* Block Length */
224
+ FIELD(RTOR, RTO, 0, 24) /* Receiver timeout value */
225
+REG32(RQR, 0x18)
226
+ FIELD(RQR, TXFRQ, 4, 1) /* Transmit data flush request */
227
+ FIELD(RQR, RXFRQ, 3, 1) /* Receive data flush request */
228
+ FIELD(RQR, MMRQ, 2, 1) /* Mute mode request */
229
+ FIELD(RQR, SBKRQ, 1, 1) /* Send break request */
230
+ FIELD(RQR, ABBRRQ, 0, 1) /* Auto baud rate request */
231
+REG32(ISR, 0x1C)
232
+ /* TCBGT only for STM32L475xx/476xx/486xx devices */
233
+ FIELD(ISR, REACK, 22, 1) /* Receive enable acknowledge flag */
234
+ FIELD(ISR, TEACK, 21, 1) /* Transmit enable acknowledge flag */
235
+ FIELD(ISR, WUF, 20, 1) /* Wakeup from Stop mode flag */
236
+ FIELD(ISR, RWU, 19, 1) /* Receiver wakeup from Mute mode */
237
+ FIELD(ISR, SBKF, 18, 1) /* Send break flag */
238
+ FIELD(ISR, CMF, 17, 1) /* Character match flag */
239
+ FIELD(ISR, BUSY, 16, 1) /* Busy flag */
240
+ FIELD(ISR, ABRF, 15, 1) /* Auto Baud rate flag */
241
+ FIELD(ISR, ABRE, 14, 1) /* Auto Baud rate error */
242
+ FIELD(ISR, EOBF, 12, 1) /* End of block flag */
243
+ FIELD(ISR, RTOF, 11, 1) /* Receiver timeout */
244
+ FIELD(ISR, CTS, 10, 1) /* CTS flag */
245
+ FIELD(ISR, CTSIF, 9, 1) /* CTS interrupt flag */
246
+ FIELD(ISR, LBDF, 8, 1) /* LIN break detection flag */
247
+ FIELD(ISR, TXE, 7, 1) /* Transmit data register empty */
248
+ FIELD(ISR, TC, 6, 1) /* Transmission complete */
249
+ FIELD(ISR, RXNE, 5, 1) /* Read data register not empty */
250
+ FIELD(ISR, IDLE, 4, 1) /* Idle line detected */
251
+ FIELD(ISR, ORE, 3, 1) /* Overrun error */
252
+ FIELD(ISR, NF, 2, 1) /* START bit Noise detection flag */
253
+ FIELD(ISR, FE, 1, 1) /* Framing Error */
254
+ FIELD(ISR, PE, 0, 1) /* Parity Error */
255
+REG32(ICR, 0x20)
256
+ FIELD(ICR, WUCF, 20, 1) /* Wakeup from Stop mode clear flag */
257
+ FIELD(ICR, CMCF, 17, 1) /* Character match clear flag */
258
+ FIELD(ICR, EOBCF, 12, 1) /* End of block clear flag */
259
+ FIELD(ICR, RTOCF, 11, 1) /* Receiver timeout clear flag */
260
+ FIELD(ICR, CTSCF, 9, 1) /* CTS clear flag */
261
+ FIELD(ICR, LBDCF, 8, 1) /* LIN break detection clear flag */
262
+ /* TCBGTCF only on STM32L496xx/4A6xx devices */
263
+ FIELD(ICR, TCCF, 6, 1) /* Transmission complete clear flag */
264
+ FIELD(ICR, IDLECF, 4, 1) /* Idle line detected clear flag */
265
+ FIELD(ICR, ORECF, 3, 1) /* Overrun error clear flag */
266
+ FIELD(ICR, NCF, 2, 1) /* Noise detected clear flag */
267
+ FIELD(ICR, FECF, 1, 1) /* Framing error clear flag */
268
+ FIELD(ICR, PECF, 0, 1) /* Parity error clear flag */
269
+REG32(RDR, 0x24)
270
+ FIELD(RDR, RDR, 0, 9)
271
+REG32(TDR, 0x28)
272
+ FIELD(TDR, TDR, 0, 9)
273
+
274
+static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
275
+{
276
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
277
+
278
+ s->cr1 = 0x00000000;
279
+ s->cr2 = 0x00000000;
280
+ s->cr3 = 0x00000000;
281
+ s->brr = 0x00000000;
282
+ s->gtpr = 0x00000000;
283
+ s->rtor = 0x00000000;
284
+ s->isr = 0x020000C0;
285
+ s->rdr = 0x00000000;
286
+ s->tdr = 0x00000000;
287
+}
288
+
289
+static uint64_t stm32l4x5_usart_base_read(void *opaque, hwaddr addr,
290
+ unsigned int size)
291
+{
292
+ Stm32l4x5UsartBaseState *s = opaque;
293
+ uint64_t retvalue = 0;
294
+
295
+ switch (addr) {
296
+ case A_CR1:
297
+ retvalue = s->cr1;
298
+ break;
299
+ case A_CR2:
300
+ retvalue = s->cr2;
301
+ break;
302
+ case A_CR3:
303
+ retvalue = s->cr3;
304
+ break;
305
+ case A_BRR:
306
+ retvalue = FIELD_EX32(s->brr, BRR, BRR);
307
+ break;
308
+ case A_GTPR:
309
+ retvalue = s->gtpr;
310
+ break;
311
+ case A_RTOR:
312
+ retvalue = s->rtor;
313
+ break;
314
+ case A_RQR:
315
+ /* RQR is a write only register */
316
+ retvalue = 0x00000000;
317
+ break;
318
+ case A_ISR:
319
+ retvalue = s->isr;
320
+ break;
321
+ case A_ICR:
322
+ /* ICR is a clear register */
323
+ retvalue = 0x00000000;
324
+ break;
325
+ case A_RDR:
326
+ retvalue = FIELD_EX32(s->rdr, RDR, RDR);
327
+ /* Reset RXNE flag */
328
+ s->isr &= ~R_ISR_RXNE_MASK;
329
+ break;
330
+ case A_TDR:
331
+ retvalue = FIELD_EX32(s->tdr, TDR, TDR);
332
+ break;
333
+ default:
334
+ qemu_log_mask(LOG_GUEST_ERROR,
335
+ "%s: Bad offset 0x%"HWADDR_PRIx"\n", __func__, addr);
336
+ break;
246
+ }
337
+ }
247
+}
338
+
248
+
339
+ trace_stm32l4x5_usart_read(addr, retvalue);
249
+static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
340
+
250
+{
341
+ return retvalue;
251
+ int field = ~((s->regs[R_CONFIG] & CS) >> CS_SHIFT);
342
+}
252
+
343
+
253
+ /* In dual parallel, mirror low CS to both */
344
+static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
254
+ if (num_effective_busses(s) == 2) {
345
+ uint64_t val64, unsigned int size)
255
+ /* Single bit chip-select for qspi */
346
+{
256
+ field &= 0x1;
347
+ Stm32l4x5UsartBaseState *s = opaque;
257
+ field |= field << 1;
348
+ const uint32_t value = val64;
258
+ /* Dual stack U-Page */
349
+
259
+ } else if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_TWO_MEM &&
350
+ trace_stm32l4x5_usart_write(addr, value);
260
+ s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE) {
351
+
261
+ /* Single bit chip-select for qspi */
352
+ switch (addr) {
262
+ field &= 0x1;
353
+ case A_CR1:
263
+ /* change from CS0 to CS1 */
354
+ s->cr1 = value;
264
+ field <<= 1;
355
+ return;
356
+ case A_CR2:
357
+ s->cr2 = value;
358
+ return;
359
+ case A_CR3:
360
+ s->cr3 = value;
361
+ return;
362
+ case A_BRR:
363
+ s->brr = value;
364
+ return;
365
+ case A_GTPR:
366
+ s->gtpr = value;
367
+ return;
368
+ case A_RTOR:
369
+ s->rtor = value;
370
+ return;
371
+ case A_RQR:
372
+ return;
373
+ case A_ISR:
374
+ qemu_log_mask(LOG_GUEST_ERROR,
375
+ "%s: ISR is read only !\n", __func__);
376
+ return;
377
+ case A_ICR:
378
+ /* Clear the status flags */
379
+ s->isr &= ~value;
380
+ return;
381
+ case A_RDR:
382
+ qemu_log_mask(LOG_GUEST_ERROR,
383
+ "%s: RDR is read only !\n", __func__);
384
+ return;
385
+ case A_TDR:
386
+ s->tdr = value;
387
+ return;
388
+ default:
389
+ qemu_log_mask(LOG_GUEST_ERROR,
390
+ "%s: Bad offset 0x%"HWADDR_PRIx"\n", __func__, addr);
265
+ }
391
+ }
266
+ /* Auto CS */
392
+}
267
+ if (!(s->regs[R_CONFIG] & MANUAL_CS) &&
393
+
268
+ fifo8_is_empty(&s->tx_fifo)) {
394
+static const MemoryRegionOps stm32l4x5_usart_base_ops = {
269
+ field = 0;
395
+ .read = stm32l4x5_usart_base_read,
270
+ }
396
+ .write = stm32l4x5_usart_base_write,
271
+ xilinx_spips_update_cs(s, field);
397
+ .endianness = DEVICE_NATIVE_ENDIAN,
272
+}
398
+ .valid = {
273
+
399
+ .max_access_size = 4,
274
static void xilinx_spips_update_ixr(XilinxSPIPS *s)
400
+ .min_access_size = 4,
275
{
401
+ .unaligned = false
276
- if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE) {
402
+ },
277
- return;
403
+ .impl = {
278
+ if (!(s->regs[R_LQSPI_CFG] & LQSPI_CFG_LQ_MODE)) {
404
+ .max_access_size = 4,
279
+ s->regs[R_INTR_STATUS] &= ~IXR_SELF_CLEAR;
405
+ .min_access_size = 4,
280
+ s->regs[R_INTR_STATUS] |=
406
+ .unaligned = false
281
+ (fifo8_is_full(&s->rx_fifo) ? IXR_RX_FIFO_FULL : 0) |
407
+ },
282
+ (s->rx_fifo.num >= s->regs[R_RX_THRES] ?
283
+ IXR_RX_FIFO_NOT_EMPTY : 0) |
284
+ (fifo8_is_full(&s->tx_fifo) ? IXR_TX_FIFO_FULL : 0) |
285
+ (fifo8_is_empty(&s->tx_fifo) ? IXR_TX_FIFO_EMPTY : 0) |
286
+ (s->tx_fifo.num < s->regs[R_TX_THRES] ? IXR_TX_FIFO_NOT_FULL : 0);
287
}
288
- /* These are set/cleared as they occur */
289
- s->regs[R_INTR_STATUS] &= (IXR_TX_FIFO_UNDERFLOW | IXR_RX_FIFO_OVERFLOW |
290
- IXR_TX_FIFO_MODE_FAIL);
291
- /* these are pure functions of fifo state, set them here */
292
- s->regs[R_INTR_STATUS] |=
293
- (fifo8_is_full(&s->rx_fifo) ? IXR_RX_FIFO_FULL : 0) |
294
- (s->rx_fifo.num >= s->regs[R_RX_THRES] ? IXR_RX_FIFO_NOT_EMPTY : 0) |
295
- (fifo8_is_full(&s->tx_fifo) ? IXR_TX_FIFO_FULL : 0) |
296
- (s->tx_fifo.num < s->regs[R_TX_THRES] ? IXR_TX_FIFO_NOT_FULL : 0);
297
- /* drive external interrupt pin */
298
int new_irqline = !!(s->regs[R_INTR_MASK] & s->regs[R_INTR_STATUS] &
299
IXR_ALL);
300
if (new_irqline != s->irqline) {
301
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_update_ixr(XilinxSPIPS *s)
302
}
303
}
304
305
+static void xlnx_zynqmp_qspips_update_ixr(XlnxZynqMPQSPIPS *s)
306
+{
307
+ uint32_t gqspi_int;
308
+ int new_irqline;
309
+
310
+ s->regs[R_GQSPI_ISR] &= ~IXR_SELF_CLEAR;
311
+ s->regs[R_GQSPI_ISR] |=
312
+ (fifo32_is_empty(&s->fifo_g) ? IXR_GENERIC_FIFO_EMPTY : 0) |
313
+ (fifo32_is_full(&s->fifo_g) ? IXR_GENERIC_FIFO_FULL : 0) |
314
+ (s->fifo_g.fifo.num < s->regs[R_GQSPI_GFIFO_THRESH] ?
315
+ IXR_GENERIC_FIFO_NOT_FULL : 0) |
316
+ (fifo8_is_empty(&s->rx_fifo_g) ? IXR_RX_FIFO_EMPTY : 0) |
317
+ (fifo8_is_full(&s->rx_fifo_g) ? IXR_RX_FIFO_FULL : 0) |
318
+ (s->rx_fifo_g.num >= s->regs[R_GQSPI_RX_THRESH] ?
319
+ IXR_RX_FIFO_NOT_EMPTY : 0) |
320
+ (fifo8_is_empty(&s->tx_fifo_g) ? IXR_TX_FIFO_EMPTY : 0) |
321
+ (fifo8_is_full(&s->tx_fifo_g) ? IXR_TX_FIFO_FULL : 0) |
322
+ (s->tx_fifo_g.num < s->regs[R_GQSPI_TX_THRESH] ?
323
+ IXR_TX_FIFO_NOT_FULL : 0);
324
+
325
+ /* GQSPI Interrupt Trigger Status */
326
+ gqspi_int = (~s->regs[R_GQSPI_IMR]) & s->regs[R_GQSPI_ISR] & GQSPI_IXR_MASK;
327
+ new_irqline = !!(gqspi_int & IXR_ALL);
328
+
329
+ /* drive external interrupt pin */
330
+ if (new_irqline != s->gqspi_irqline) {
331
+ s->gqspi_irqline = new_irqline;
332
+ qemu_set_irq(XILINX_SPIPS(s)->irq, s->gqspi_irqline);
333
+ }
334
+}
335
+
336
static void xilinx_spips_reset(DeviceState *d)
337
{
338
XilinxSPIPS *s = XILINX_SPIPS(d);
339
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
340
xilinx_spips_update_cs_lines(s);
341
}
342
343
+static void xlnx_zynqmp_qspips_reset(DeviceState *d)
344
+{
345
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(d);
346
+ int i;
347
+
348
+ xilinx_spips_reset(d);
349
+
350
+ for (i = 0; i < XLNX_ZYNQMP_SPIPS_R_MAX; i++) {
351
+ s->regs[i] = 0;
352
+ }
353
+ fifo8_reset(&s->rx_fifo_g);
354
+ fifo8_reset(&s->rx_fifo_g);
355
+ fifo32_reset(&s->fifo_g);
356
+ s->regs[R_GQSPI_TX_THRESH] = 1;
357
+ s->regs[R_GQSPI_RX_THRESH] = 1;
358
+ s->regs[R_GQSPI_GFIFO_THRESH] = 1;
359
+ s->regs[R_GQSPI_IMR] = GQSPI_IXR_MASK;
360
+ s->man_start_com_g = false;
361
+ s->gqspi_irqline = 0;
362
+ xlnx_zynqmp_qspips_update_ixr(s);
363
+}
364
+
365
/* N way (num) in place bit striper. Lay out row wise bits (MSB to LSB)
366
* column wise (from element 0 to N-1). num is the length of x, and dir
367
* reverses the direction of the transform. Best illustrated by example:
368
@@ -XXX,XX +XXX,XX @@ static inline void stripe8(uint8_t *x, int num, bool dir)
369
memcpy(x, r, sizeof(uint8_t) * num);
370
}
371
372
+static void xlnx_zynqmp_qspips_flush_fifo_g(XlnxZynqMPQSPIPS *s)
373
+{
374
+ while (s->regs[R_GQSPI_DATA_STS] || !fifo32_is_empty(&s->fifo_g)) {
375
+ uint8_t tx_rx[2] = { 0 };
376
+ int num_stripes = 1;
377
+ uint8_t busses;
378
+ int i;
379
+
380
+ if (!s->regs[R_GQSPI_DATA_STS]) {
381
+ uint8_t imm;
382
+
383
+ s->regs[R_GQSPI_GF_SNAPSHOT] = fifo32_pop(&s->fifo_g);
384
+ DB_PRINT_L(0, "GQSPI command: %x\n", s->regs[R_GQSPI_GF_SNAPSHOT]);
385
+ if (!s->regs[R_GQSPI_GF_SNAPSHOT]) {
386
+ DB_PRINT_L(0, "Dummy GQSPI Delay Command Entry, Do nothing");
387
+ continue;
388
+ }
389
+ xlnx_zynqmp_qspips_update_cs_lines(s);
390
+
391
+ imm = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA);
392
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_XFER)) {
393
+ /* immediate transfer */
394
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT) ||
395
+ ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE)) {
396
+ s->regs[R_GQSPI_DATA_STS] = 1;
397
+ /* CS setup/hold - do nothing */
398
+ } else {
399
+ s->regs[R_GQSPI_DATA_STS] = 0;
400
+ }
401
+ } else if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, EXPONENT)) {
402
+ if (imm > 31) {
403
+ qemu_log_mask(LOG_UNIMP, "QSPI exponential transfer too"
404
+ " long - 2 ^ %" PRId8 " requested\n", imm);
405
+ }
406
+ s->regs[R_GQSPI_DATA_STS] = 1ul << imm;
407
+ } else {
408
+ s->regs[R_GQSPI_DATA_STS] = imm;
409
+ }
410
+ }
411
+ /* Zero length transfer check */
412
+ if (!s->regs[R_GQSPI_DATA_STS]) {
413
+ continue;
414
+ }
415
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE) &&
416
+ fifo8_is_full(&s->rx_fifo_g)) {
417
+ /* No space in RX fifo for transfer - try again later */
418
+ return;
419
+ }
420
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, STRIPE) &&
421
+ (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT) ||
422
+ ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE))) {
423
+ num_stripes = 2;
424
+ }
425
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_XFER)) {
426
+ tx_rx[0] = ARRAY_FIELD_EX32(s->regs,
427
+ GQSPI_GF_SNAPSHOT, IMMEDIATE_DATA);
428
+ } else if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, TRANSMIT)) {
429
+ for (i = 0; i < num_stripes; ++i) {
430
+ if (!fifo8_is_empty(&s->tx_fifo_g)) {
431
+ tx_rx[i] = fifo8_pop(&s->tx_fifo_g);
432
+ s->tx_fifo_g_align++;
433
+ } else {
434
+ return;
435
+ }
436
+ }
437
+ }
438
+ if (num_stripes == 1) {
439
+ /* mirror */
440
+ tx_rx[1] = tx_rx[0];
441
+ }
442
+ busses = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_BUS_SELECT);
443
+ for (i = 0; i < 2; ++i) {
444
+ DB_PRINT_L(1, "bus %d tx = %02x\n", i, tx_rx[i]);
445
+ tx_rx[i] = ssi_transfer(XILINX_SPIPS(s)->spi[i], tx_rx[i]);
446
+ DB_PRINT_L(1, "bus %d rx = %02x\n", i, tx_rx[i]);
447
+ }
448
+ if (s->regs[R_GQSPI_DATA_STS] > 1 &&
449
+ busses == 0x3 && num_stripes == 2) {
450
+ s->regs[R_GQSPI_DATA_STS] -= 2;
451
+ } else if (s->regs[R_GQSPI_DATA_STS] > 0) {
452
+ s->regs[R_GQSPI_DATA_STS]--;
453
+ }
454
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, RECIEVE)) {
455
+ for (i = 0; i < 2; ++i) {
456
+ if (busses & (1 << i)) {
457
+ DB_PRINT_L(1, "bus %d push_byte = %02x\n", i, tx_rx[i]);
458
+ fifo8_push(&s->rx_fifo_g, tx_rx[i]);
459
+ s->rx_fifo_g_align++;
460
+ }
461
+ }
462
+ }
463
+ if (!s->regs[R_GQSPI_DATA_STS]) {
464
+ for (; s->tx_fifo_g_align % 4; s->tx_fifo_g_align++) {
465
+ fifo8_pop(&s->tx_fifo_g);
466
+ }
467
+ for (; s->rx_fifo_g_align % 4; s->rx_fifo_g_align++) {
468
+ fifo8_push(&s->rx_fifo_g, 0);
469
+ }
470
+ }
471
+ }
472
+}
473
+
474
static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
475
{
476
if (!qs) {
477
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_check_flush(XilinxSPIPS *s)
478
xilinx_spips_update_ixr(s);
479
}
480
481
+static void xlnx_zynqmp_qspips_check_flush(XlnxZynqMPQSPIPS *s)
482
+{
483
+ bool gqspi_has_work = s->regs[R_GQSPI_DATA_STS] ||
484
+ !fifo32_is_empty(&s->fifo_g);
485
+
486
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
487
+ if (s->man_start_com_g || (gqspi_has_work &&
488
+ !ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, GEN_FIFO_START_MODE))) {
489
+ xlnx_zynqmp_qspips_flush_fifo_g(s);
490
+ }
491
+ } else {
492
+ xilinx_spips_check_flush(XILINX_SPIPS(s));
493
+ }
494
+ if (!gqspi_has_work) {
495
+ s->man_start_com_g = false;
496
+ }
497
+ xlnx_zynqmp_qspips_update_ixr(s);
498
+}
499
+
500
static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
501
{
502
int i;
503
@@ -XXX,XX +XXX,XX @@ static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
504
return max - i;
505
}
506
507
+static const void *pop_buf(Fifo8 *fifo, uint32_t max, uint32_t *num)
508
+{
509
+ void *ret;
510
+
511
+ if (max == 0 || max > fifo->num) {
512
+ abort();
513
+ }
514
+ *num = MIN(fifo->capacity - fifo->head, max);
515
+ ret = &fifo->data[fifo->head];
516
+ fifo->head += *num;
517
+ fifo->head %= fifo->capacity;
518
+ fifo->num -= *num;
519
+ return ret;
520
+}
521
+
522
+static void xlnx_zynqmp_qspips_notify(void *opaque)
523
+{
524
+ XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(opaque);
525
+ XilinxSPIPS *s = XILINX_SPIPS(rq);
526
+ Fifo8 *recv_fifo;
527
+
528
+ if (ARRAY_FIELD_EX32(rq->regs, GQSPI_SELECT, GENERIC_QSPI_EN)) {
529
+ if (!(ARRAY_FIELD_EX32(rq->regs, GQSPI_CNFG, MODE_EN) == 2)) {
530
+ return;
531
+ }
532
+ recv_fifo = &rq->rx_fifo_g;
533
+ } else {
534
+ if (!(s->regs[R_CMND] & R_CMND_DMA_EN)) {
535
+ return;
536
+ }
537
+ recv_fifo = &s->rx_fifo;
538
+ }
539
+ while (recv_fifo->num >= 4
540
+ && stream_can_push(rq->dma, xlnx_zynqmp_qspips_notify, rq))
541
+ {
542
+ size_t ret;
543
+ uint32_t num;
544
+ const void *rxd = pop_buf(recv_fifo, 4, &num);
545
+
546
+ memcpy(rq->dma_buf, rxd, num);
547
+
548
+ ret = stream_push(rq->dma, rq->dma_buf, 4);
549
+ assert(ret == 4);
550
+ xlnx_zynqmp_qspips_check_flush(rq);
551
+ }
552
+}
553
+
554
static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
555
unsigned size)
556
{
557
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
558
ret <<= 8 * shortfall;
559
}
560
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr * 4, ret);
561
+ xilinx_spips_check_flush(s);
562
xilinx_spips_update_ixr(s);
563
return ret;
564
}
565
@@ -XXX,XX +XXX,XX @@ static uint64_t xilinx_spips_read(void *opaque, hwaddr addr,
566
567
}
568
569
+static uint64_t xlnx_zynqmp_qspips_read(void *opaque,
570
+ hwaddr addr, unsigned size)
571
+{
572
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
573
+ uint32_t reg = addr / 4;
574
+ uint32_t ret;
575
+ uint8_t rx_buf[4];
576
+ int shortfall;
577
+
578
+ if (reg <= R_MOD_ID) {
579
+ return xilinx_spips_read(opaque, addr, size);
580
+ } else {
581
+ switch (reg) {
582
+ case R_GQSPI_RXD:
583
+ if (fifo8_is_empty(&s->rx_fifo_g)) {
584
+ qemu_log_mask(LOG_GUEST_ERROR,
585
+ "Read from empty GQSPI RX FIFO\n");
586
+ return 0;
587
+ }
588
+ memset(rx_buf, 0, sizeof(rx_buf));
589
+ shortfall = rx_data_bytes(&s->rx_fifo_g, rx_buf,
590
+ XILINX_SPIPS(s)->num_txrx_bytes);
591
+ ret = ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN) ?
592
+ cpu_to_be32(*(uint32_t *)rx_buf) :
593
+ cpu_to_le32(*(uint32_t *)rx_buf);
594
+ if (!ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN)) {
595
+ ret <<= 8 * shortfall;
596
+ }
597
+ xlnx_zynqmp_qspips_check_flush(s);
598
+ xlnx_zynqmp_qspips_update_ixr(s);
599
+ return ret;
600
+ default:
601
+ return s->regs[reg];
602
+ }
603
+ }
604
+}
605
+
606
static void xilinx_spips_write(void *opaque, hwaddr addr,
607
uint64_t value, unsigned size)
608
{
609
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_write(void *opaque, hwaddr addr,
610
}
611
}
612
613
+static void xlnx_zynqmp_qspips_write(void *opaque, hwaddr addr,
614
+ uint64_t value, unsigned size)
615
+{
616
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(opaque);
617
+ uint32_t reg = addr / 4;
618
+
619
+ if (reg <= R_MOD_ID) {
620
+ xilinx_qspips_write(opaque, addr, value, size);
621
+ } else {
622
+ switch (reg) {
623
+ case R_GQSPI_CNFG:
624
+ if (FIELD_EX32(value, GQSPI_CNFG, GEN_FIFO_START) &&
625
+ ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, GEN_FIFO_START_MODE)) {
626
+ s->man_start_com_g = true;
627
+ }
628
+ s->regs[reg] = value & ~(R_GQSPI_CNFG_GEN_FIFO_START_MASK);
629
+ break;
630
+ case R_GQSPI_GEN_FIFO:
631
+ if (!fifo32_is_full(&s->fifo_g)) {
632
+ fifo32_push(&s->fifo_g, value);
633
+ }
634
+ break;
635
+ case R_GQSPI_TXD:
636
+ tx_data_bytes(&s->tx_fifo_g, (uint32_t)value, 4,
637
+ ARRAY_FIELD_EX32(s->regs, GQSPI_CNFG, ENDIAN));
638
+ break;
639
+ case R_GQSPI_FIFO_CTRL:
640
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, GENERIC_FIFO_RESET)) {
641
+ fifo32_reset(&s->fifo_g);
642
+ }
643
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, TX_FIFO_RESET)) {
644
+ fifo8_reset(&s->tx_fifo_g);
645
+ }
646
+ if (FIELD_EX32(value, GQSPI_FIFO_CTRL, RX_FIFO_RESET)) {
647
+ fifo8_reset(&s->rx_fifo_g);
648
+ }
649
+ break;
650
+ case R_GQSPI_IDR:
651
+ s->regs[R_GQSPI_IMR] |= value;
652
+ break;
653
+ case R_GQSPI_IER:
654
+ s->regs[R_GQSPI_IMR] &= ~value;
655
+ break;
656
+ case R_GQSPI_ISR:
657
+ s->regs[R_GQSPI_ISR] &= ~value;
658
+ break;
659
+ case R_GQSPI_IMR:
660
+ case R_GQSPI_RXD:
661
+ case R_GQSPI_GF_SNAPSHOT:
662
+ case R_GQSPI_MOD_ID:
663
+ break;
664
+ default:
665
+ s->regs[reg] = value;
666
+ break;
667
+ }
668
+ xlnx_zynqmp_qspips_update_cs_lines(s);
669
+ xlnx_zynqmp_qspips_check_flush(s);
670
+ xlnx_zynqmp_qspips_update_cs_lines(s);
671
+ xlnx_zynqmp_qspips_update_ixr(s);
672
+ }
673
+ xlnx_zynqmp_qspips_notify(s);
674
+}
675
+
676
static const MemoryRegionOps qspips_ops = {
677
.read = xilinx_spips_read,
678
.write = xilinx_qspips_write,
679
.endianness = DEVICE_LITTLE_ENDIAN,
680
};
681
682
+static const MemoryRegionOps xlnx_zynqmp_qspips_ops = {
683
+ .read = xlnx_zynqmp_qspips_read,
684
+ .write = xlnx_zynqmp_qspips_write,
685
+ .endianness = DEVICE_LITTLE_ENDIAN,
686
+};
408
+};
687
+
409
+
688
#define LQSPI_CACHE_SIZE 1024
410
+static Property stm32l4x5_usart_base_properties[] = {
689
411
+ DEFINE_PROP_CHR("chardev", Stm32l4x5UsartBaseState, chr),
690
static void lqspi_load_cache(void *opaque, hwaddr addr)
412
+ DEFINE_PROP_END_OF_LIST(),
691
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_realize(DeviceState *dev, Error **errp)
413
+};
692
}
414
+
693
415
+static void stm32l4x5_usart_base_init(Object *obj)
694
memory_region_init_io(&s->iomem, OBJECT(s), xsc->reg_ops, s,
416
+{
695
- "spi", XLNX_SPIPS_R_MAX * 4);
417
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
696
+ "spi", XLNX_ZYNQMP_SPIPS_R_MAX * 4);
418
+
697
sysbus_init_mmio(sbd, &s->iomem);
419
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
698
420
+
699
s->irqline = -1;
421
+ memory_region_init_io(&s->mmio, obj, &stm32l4x5_usart_base_ops, s,
700
@@ -XXX,XX +XXX,XX @@ static void xilinx_qspips_realize(DeviceState *dev, Error **errp)
422
+ TYPE_STM32L4X5_USART_BASE, 0x400);
701
}
423
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->mmio);
702
}
424
+
703
425
+ s->clk = qdev_init_clock_in(DEVICE(s), "clk", NULL, s, 0);
704
+static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
426
+}
705
+{
427
+
706
+ XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(dev);
428
+static const VMStateDescription vmstate_stm32l4x5_usart_base = {
707
+ XilinxSPIPSClass *xsc = XILINX_SPIPS_GET_CLASS(s);
429
+ .name = TYPE_STM32L4X5_USART_BASE,
708
+
709
+ xilinx_qspips_realize(dev, errp);
710
+ fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
711
+ fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
712
+ fifo32_create(&s->fifo_g, 32);
713
+}
714
+
715
+static void xlnx_zynqmp_qspips_init(Object *obj)
716
+{
717
+ XlnxZynqMPQSPIPS *rq = XLNX_ZYNQMP_QSPIPS(obj);
718
+
719
+ object_property_add_link(obj, "stream-connected-dma", TYPE_STREAM_SLAVE,
720
+ (Object **)&rq->dma,
721
+ object_property_allow_set_link,
722
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
723
+ NULL);
724
+}
725
+
726
static int xilinx_spips_post_load(void *opaque, int version_id)
727
{
728
xilinx_spips_update_ixr((XilinxSPIPS *)opaque);
729
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_xilinx_spips = {
730
}
731
};
732
733
+static int xlnx_zynqmp_qspips_post_load(void *opaque, int version_id)
734
+{
735
+ XlnxZynqMPQSPIPS *s = (XlnxZynqMPQSPIPS *)opaque;
736
+ XilinxSPIPS *qs = XILINX_SPIPS(s);
737
+
738
+ if (ARRAY_FIELD_EX32(s->regs, GQSPI_SELECT, GENERIC_QSPI_EN) &&
739
+ fifo8_is_empty(&qs->rx_fifo) && fifo8_is_empty(&qs->tx_fifo)) {
740
+ xlnx_zynqmp_qspips_update_ixr(s);
741
+ xlnx_zynqmp_qspips_update_cs_lines(s);
742
+ }
743
+ return 0;
744
+}
745
+
746
+static const VMStateDescription vmstate_xilinx_qspips = {
747
+ .name = "xilinx_qspips",
748
+ .version_id = 1,
430
+ .version_id = 1,
749
+ .minimum_version_id = 1,
431
+ .minimum_version_id = 1,
750
+ .fields = (VMStateField[]) {
432
+ .fields = (VMStateField[]) {
751
+ VMSTATE_STRUCT(parent_obj, XilinxQSPIPS, 0,
433
+ VMSTATE_UINT32(cr1, Stm32l4x5UsartBaseState),
752
+ vmstate_xilinx_spips, XilinxSPIPS),
434
+ VMSTATE_UINT32(cr2, Stm32l4x5UsartBaseState),
435
+ VMSTATE_UINT32(cr3, Stm32l4x5UsartBaseState),
436
+ VMSTATE_UINT32(brr, Stm32l4x5UsartBaseState),
437
+ VMSTATE_UINT32(gtpr, Stm32l4x5UsartBaseState),
438
+ VMSTATE_UINT32(rtor, Stm32l4x5UsartBaseState),
439
+ VMSTATE_UINT32(isr, Stm32l4x5UsartBaseState),
440
+ VMSTATE_UINT32(rdr, Stm32l4x5UsartBaseState),
441
+ VMSTATE_UINT32(tdr, Stm32l4x5UsartBaseState),
442
+ VMSTATE_CLOCK(clk, Stm32l4x5UsartBaseState),
753
+ VMSTATE_END_OF_LIST()
443
+ VMSTATE_END_OF_LIST()
754
+ }
444
+ }
755
+};
445
+};
756
+
446
+
757
+static const VMStateDescription vmstate_xlnx_zynqmp_qspips = {
447
+
758
+ .name = "xlnx_zynqmp_qspips",
448
+static void stm32l4x5_usart_base_realize(DeviceState *dev, Error **errp)
759
+ .version_id = 1,
449
+{
760
+ .minimum_version_id = 1,
450
+ ERRP_GUARD();
761
+ .post_load = xlnx_zynqmp_qspips_post_load,
451
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(dev);
762
+ .fields = (VMStateField[]) {
452
+ if (!clock_has_source(s->clk)) {
763
+ VMSTATE_STRUCT(parent_obj, XlnxZynqMPQSPIPS, 0,
453
+ error_setg(errp, "USART clock must be wired up by SoC code");
764
+ vmstate_xilinx_qspips, XilinxQSPIPS),
454
+ return;
765
+ VMSTATE_FIFO8(tx_fifo_g, XlnxZynqMPQSPIPS),
455
+ }
766
+ VMSTATE_FIFO8(rx_fifo_g, XlnxZynqMPQSPIPS),
456
+}
767
+ VMSTATE_FIFO32(fifo_g, XlnxZynqMPQSPIPS),
457
+
768
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPQSPIPS, XLNX_ZYNQMP_SPIPS_R_MAX),
458
+static void stm32l4x5_usart_base_class_init(ObjectClass *klass, void *data)
769
+ VMSTATE_END_OF_LIST()
459
+{
460
+ DeviceClass *dc = DEVICE_CLASS(klass);
461
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
462
+
463
+ rc->phases.hold = stm32l4x5_usart_base_reset_hold;
464
+ device_class_set_props(dc, stm32l4x5_usart_base_properties);
465
+ dc->realize = stm32l4x5_usart_base_realize;
466
+ dc->vmsd = &vmstate_stm32l4x5_usart_base;
467
+}
468
+
469
+static void stm32l4x5_usart_class_init(ObjectClass *oc, void *data)
470
+{
471
+ Stm32l4x5UsartBaseClass *subc = STM32L4X5_USART_BASE_CLASS(oc);
472
+
473
+ subc->type = STM32L4x5_USART;
474
+}
475
+
476
+static void stm32l4x5_uart_class_init(ObjectClass *oc, void *data)
477
+{
478
+ Stm32l4x5UsartBaseClass *subc = STM32L4X5_USART_BASE_CLASS(oc);
479
+
480
+ subc->type = STM32L4x5_UART;
481
+}
482
+
483
+static void stm32l4x5_lpuart_class_init(ObjectClass *oc, void *data)
484
+{
485
+ Stm32l4x5UsartBaseClass *subc = STM32L4X5_USART_BASE_CLASS(oc);
486
+
487
+ subc->type = STM32L4x5_LPUART;
488
+}
489
+
490
+static const TypeInfo stm32l4x5_usart_types[] = {
491
+ {
492
+ .name = TYPE_STM32L4X5_USART_BASE,
493
+ .parent = TYPE_SYS_BUS_DEVICE,
494
+ .instance_size = sizeof(Stm32l4x5UsartBaseState),
495
+ .instance_init = stm32l4x5_usart_base_init,
496
+ .class_init = stm32l4x5_usart_base_class_init,
497
+ .abstract = true,
498
+ }, {
499
+ .name = TYPE_STM32L4X5_USART,
500
+ .parent = TYPE_STM32L4X5_USART_BASE,
501
+ .class_init = stm32l4x5_usart_class_init,
502
+ }, {
503
+ .name = TYPE_STM32L4X5_UART,
504
+ .parent = TYPE_STM32L4X5_USART_BASE,
505
+ .class_init = stm32l4x5_uart_class_init,
506
+ }, {
507
+ .name = TYPE_STM32L4X5_LPUART,
508
+ .parent = TYPE_STM32L4X5_USART_BASE,
509
+ .class_init = stm32l4x5_lpuart_class_init,
770
+ }
510
+ }
771
+};
511
+};
772
+
512
+
773
static Property xilinx_qspips_properties[] = {
513
+DEFINE_TYPES(stm32l4x5_usart_types)
774
/* We had to turn this off for 2.10 as it is not compatible with migration.
514
diff --git a/hw/char/Kconfig b/hw/char/Kconfig
775
* It can be enabled but will prevent the device to be migrated.
776
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_class_init(ObjectClass *klass, void *data)
777
xsc->tx_fifo_size = TXFF_A;
778
}
779
780
+static void xlnx_zynqmp_qspips_class_init(ObjectClass *klass, void *data)
781
+{
782
+ DeviceClass *dc = DEVICE_CLASS(klass);
783
+ XilinxSPIPSClass *xsc = XILINX_SPIPS_CLASS(klass);
784
+
785
+ dc->realize = xlnx_zynqmp_qspips_realize;
786
+ dc->reset = xlnx_zynqmp_qspips_reset;
787
+ dc->vmsd = &vmstate_xlnx_zynqmp_qspips;
788
+ xsc->reg_ops = &xlnx_zynqmp_qspips_ops;
789
+ xsc->rx_fifo_size = RXFF_A_Q;
790
+ xsc->tx_fifo_size = TXFF_A_Q;
791
+}
792
+
793
static const TypeInfo xilinx_spips_info = {
794
.name = TYPE_XILINX_SPIPS,
795
.parent = TYPE_SYS_BUS_DEVICE,
796
@@ -XXX,XX +XXX,XX @@ static const TypeInfo xilinx_qspips_info = {
797
.class_init = xilinx_qspips_class_init,
798
};
799
800
+static const TypeInfo xlnx_zynqmp_qspips_info = {
801
+ .name = TYPE_XLNX_ZYNQMP_QSPIPS,
802
+ .parent = TYPE_XILINX_QSPIPS,
803
+ .instance_size = sizeof(XlnxZynqMPQSPIPS),
804
+ .instance_init = xlnx_zynqmp_qspips_init,
805
+ .class_init = xlnx_zynqmp_qspips_class_init,
806
+};
807
+
808
static void xilinx_spips_register_types(void)
809
{
810
type_register_static(&xilinx_spips_info);
811
type_register_static(&xilinx_qspips_info);
812
+ type_register_static(&xlnx_zynqmp_qspips_info);
813
}
814
815
type_init(xilinx_spips_register_types)
816
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
817
index XXXXXXX..XXXXXXX 100644
515
index XXXXXXX..XXXXXXX 100644
818
--- a/default-configs/arm-softmmu.mak
516
--- a/hw/char/Kconfig
819
+++ b/default-configs/arm-softmmu.mak
517
+++ b/hw/char/Kconfig
820
@@ -XXX,XX +XXX,XX @@ CONFIG_SMBIOS=y
518
@@ -XXX,XX +XXX,XX @@ config VIRTIO_SERIAL
821
CONFIG_ASPEED_SOC=y
519
config STM32F2XX_USART
822
CONFIG_GPIO_KEY=y
520
bool
823
CONFIG_MSF2=y
521
824
-
522
+config STM32L4X5_USART
825
CONFIG_FW_CFG_DMA=y
523
+ bool
826
+CONFIG_XILINX_AXI=y
524
+
525
config CMSDK_APB_UART
526
bool
527
528
diff --git a/hw/char/meson.build b/hw/char/meson.build
529
index XXXXXXX..XXXXXXX 100644
530
--- a/hw/char/meson.build
531
+++ b/hw/char/meson.build
532
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_RENESAS_SCI', if_true: files('renesas_sci.c'))
533
system_ss.add(when: 'CONFIG_SIFIVE_UART', if_true: files('sifive_uart.c'))
534
system_ss.add(when: 'CONFIG_SH_SCI', if_true: files('sh_serial.c'))
535
system_ss.add(when: 'CONFIG_STM32F2XX_USART', if_true: files('stm32f2xx_usart.c'))
536
+system_ss.add(when: 'CONFIG_STM32L4X5_USART', if_true: files('stm32l4x5_usart.c'))
537
system_ss.add(when: 'CONFIG_MCHP_PFSOC_MMUART', if_true: files('mchp_pfsoc_mmuart.c'))
538
system_ss.add(when: 'CONFIG_HTIF', if_true: files('riscv_htif.c'))
539
system_ss.add(when: 'CONFIG_GOLDFISH_TTY', if_true: files('goldfish_tty.c'))
540
diff --git a/hw/char/trace-events b/hw/char/trace-events
541
index XXXXXXX..XXXXXXX 100644
542
--- a/hw/char/trace-events
543
+++ b/hw/char/trace-events
544
@@ -XXX,XX +XXX,XX @@ cadence_uart_baudrate(unsigned baudrate) "baudrate %u"
545
sh_serial_read(char *id, unsigned size, uint64_t offs, uint64_t val) " %s size %d offs 0x%02" PRIx64 " -> 0x%02" PRIx64
546
sh_serial_write(char *id, unsigned size, uint64_t offs, uint64_t val) "%s size %d offs 0x%02" PRIx64 " <- 0x%02" PRIx64
547
548
+# stm32l4x5_usart.c
549
+stm32l4x5_usart_read(uint64_t addr, uint32_t data) "USART: Read <0x%" PRIx64 "> -> 0x%" PRIx32 ""
550
+stm32l4x5_usart_write(uint64_t addr, uint32_t data) "USART: Write <0x%" PRIx64 "> <- 0x%" PRIx32 ""
551
+
552
# xen_console.c
553
xen_console_connect(unsigned int idx, unsigned int ring_ref, unsigned int port, unsigned int limit) "idx %u ring_ref %u port %u limit %u"
554
xen_console_disconnect(unsigned int idx) "idx %u"
827
--
555
--
828
2.7.4
556
2.34.1
829
557
830
558
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
Voiding the ITS caches is not supposed to happen via
3
Implement the ability to read and write characters to the
4
individual register writes. So we introduced a dedicated
4
usart using the serial port.
5
ITS KVM device ioctl to perform a cold reset of the ITS:
5
6
KVM_DEV_ARM_VGIC_GRP_CTRL/KVM_DEV_ARM_ITS_CTRL_RESET. Let's
6
The character transmission is based on the
7
use this latter if the kernel supports it.
7
cmsdk-apb-uart implementation.
8
8
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
9
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
10
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 1511883692-11511-5-git-send-email-eric.auger@redhat.com
12
Message-id: 20240329174402.60382-3-arnaud.minier@telecom-paris.fr
13
[PMM: fixed a few checkpatch nits]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
15
---
14
hw/intc/arm_gicv3_its_kvm.c | 9 ++++++++-
16
include/hw/char/stm32l4x5_usart.h | 1 +
15
1 file changed, 8 insertions(+), 1 deletion(-)
17
hw/char/stm32l4x5_usart.c | 143 ++++++++++++++++++++++++++++++
16
18
hw/char/trace-events | 7 ++
17
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
19
3 files changed, 151 insertions(+)
20
21
diff --git a/include/hw/char/stm32l4x5_usart.h b/include/hw/char/stm32l4x5_usart.h
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/arm_gicv3_its_kvm.c
23
--- a/include/hw/char/stm32l4x5_usart.h
20
+++ b/hw/intc/arm_gicv3_its_kvm.c
24
+++ b/include/hw/char/stm32l4x5_usart.h
21
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_reset(DeviceState *dev)
25
@@ -XXX,XX +XXX,XX @@ struct Stm32l4x5UsartBaseState {
22
26
Clock *clk;
23
c->parent_reset(dev);
27
CharBackend chr;
24
28
qemu_irq irq;
25
- error_report("ITS KVM: full reset is not supported by QEMU");
29
+ guint watch_tag;
26
+ if (kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
30
};
27
+ KVM_DEV_ARM_ITS_CTRL_RESET)) {
31
28
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
32
struct Stm32l4x5UsartBaseClass {
29
+ KVM_DEV_ARM_ITS_CTRL_RESET, NULL, true, &error_abort);
33
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/char/stm32l4x5_usart.c
36
+++ b/hw/char/stm32l4x5_usart.c
37
@@ -XXX,XX +XXX,XX @@ REG32(RDR, 0x24)
38
REG32(TDR, 0x28)
39
FIELD(TDR, TDR, 0, 9)
40
41
+static void stm32l4x5_update_irq(Stm32l4x5UsartBaseState *s)
42
+{
43
+ if (((s->isr & R_ISR_WUF_MASK) && (s->cr3 & R_CR3_WUFIE_MASK)) ||
44
+ ((s->isr & R_ISR_CMF_MASK) && (s->cr1 & R_CR1_CMIE_MASK)) ||
45
+ ((s->isr & R_ISR_ABRF_MASK) && (s->cr1 & R_CR1_RXNEIE_MASK)) ||
46
+ ((s->isr & R_ISR_EOBF_MASK) && (s->cr1 & R_CR1_EOBIE_MASK)) ||
47
+ ((s->isr & R_ISR_RTOF_MASK) && (s->cr1 & R_CR1_RTOIE_MASK)) ||
48
+ ((s->isr & R_ISR_CTSIF_MASK) && (s->cr3 & R_CR3_CTSIE_MASK)) ||
49
+ ((s->isr & R_ISR_LBDF_MASK) && (s->cr2 & R_CR2_LBDIE_MASK)) ||
50
+ ((s->isr & R_ISR_TXE_MASK) && (s->cr1 & R_CR1_TXEIE_MASK)) ||
51
+ ((s->isr & R_ISR_TC_MASK) && (s->cr1 & R_CR1_TCIE_MASK)) ||
52
+ ((s->isr & R_ISR_RXNE_MASK) && (s->cr1 & R_CR1_RXNEIE_MASK)) ||
53
+ ((s->isr & R_ISR_IDLE_MASK) && (s->cr1 & R_CR1_IDLEIE_MASK)) ||
54
+ ((s->isr & R_ISR_ORE_MASK) &&
55
+ ((s->cr1 & R_CR1_RXNEIE_MASK) || (s->cr3 & R_CR3_EIE_MASK))) ||
56
+ /* TODO: Handle NF ? */
57
+ ((s->isr & R_ISR_FE_MASK) && (s->cr3 & R_CR3_EIE_MASK)) ||
58
+ ((s->isr & R_ISR_PE_MASK) && (s->cr1 & R_CR1_PEIE_MASK))) {
59
+ qemu_irq_raise(s->irq);
60
+ trace_stm32l4x5_usart_irq_raised(s->isr);
61
+ } else {
62
+ qemu_irq_lower(s->irq);
63
+ trace_stm32l4x5_usart_irq_lowered();
64
+ }
65
+}
66
+
67
+static int stm32l4x5_usart_base_can_receive(void *opaque)
68
+{
69
+ Stm32l4x5UsartBaseState *s = opaque;
70
+
71
+ if (!(s->isr & R_ISR_RXNE_MASK)) {
72
+ return 1;
73
+ }
74
+
75
+ return 0;
76
+}
77
+
78
+static void stm32l4x5_usart_base_receive(void *opaque, const uint8_t *buf,
79
+ int size)
80
+{
81
+ Stm32l4x5UsartBaseState *s = opaque;
82
+
83
+ if (!((s->cr1 & R_CR1_UE_MASK) && (s->cr1 & R_CR1_RE_MASK))) {
84
+ trace_stm32l4x5_usart_receiver_not_enabled(
85
+ FIELD_EX32(s->cr1, CR1, UE), FIELD_EX32(s->cr1, CR1, RE));
30
+ return;
86
+ return;
31
+ }
87
+ }
32
+
88
+
33
+ error_report("ITS KVM: full reset is not supported by the host kernel");
89
+ /* Check if overrun detection is enabled and if there is an overrun */
34
90
+ if (!(s->cr3 & R_CR3_OVRDIS_MASK) && (s->isr & R_ISR_RXNE_MASK)) {
35
if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
91
+ /*
36
GITS_CTLR)) {
92
+ * A character has been received while
93
+ * the previous has not been read = Overrun.
94
+ */
95
+ s->isr |= R_ISR_ORE_MASK;
96
+ trace_stm32l4x5_usart_overrun_detected(s->rdr, *buf);
97
+ } else {
98
+ /* No overrun */
99
+ s->rdr = *buf;
100
+ s->isr |= R_ISR_RXNE_MASK;
101
+ trace_stm32l4x5_usart_rx(s->rdr);
102
+ }
103
+
104
+ stm32l4x5_update_irq(s);
105
+}
106
+
107
+/*
108
+ * Try to send tx data, and arrange to be called back later if
109
+ * we can't (ie the char backend is busy/blocking).
110
+ */
111
+static gboolean usart_transmit(void *do_not_use, GIOCondition cond,
112
+ void *opaque)
113
+{
114
+ Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(opaque);
115
+ int ret;
116
+ /* TODO: Handle 9 bits transmission */
117
+ uint8_t ch = s->tdr;
118
+
119
+ s->watch_tag = 0;
120
+
121
+ if (!(s->cr1 & R_CR1_TE_MASK) || (s->isr & R_ISR_TXE_MASK)) {
122
+ return G_SOURCE_REMOVE;
123
+ }
124
+
125
+ ret = qemu_chr_fe_write(&s->chr, &ch, 1);
126
+ if (ret <= 0) {
127
+ s->watch_tag = qemu_chr_fe_add_watch(&s->chr, G_IO_OUT | G_IO_HUP,
128
+ usart_transmit, s);
129
+ if (!s->watch_tag) {
130
+ /*
131
+ * Most common reason to be here is "no chardev backend":
132
+ * just insta-drain the buffer, so the serial output
133
+ * goes into a void, rather than blocking the guest.
134
+ */
135
+ goto buffer_drained;
136
+ }
137
+ /* Transmit pending */
138
+ trace_stm32l4x5_usart_tx_pending();
139
+ return G_SOURCE_REMOVE;
140
+ }
141
+
142
+buffer_drained:
143
+ /* Character successfully sent */
144
+ trace_stm32l4x5_usart_tx(ch);
145
+ s->isr |= R_ISR_TC_MASK | R_ISR_TXE_MASK;
146
+ stm32l4x5_update_irq(s);
147
+ return G_SOURCE_REMOVE;
148
+}
149
+
150
+static void usart_cancel_transmit(Stm32l4x5UsartBaseState *s)
151
+{
152
+ if (s->watch_tag) {
153
+ g_source_remove(s->watch_tag);
154
+ s->watch_tag = 0;
155
+ }
156
+}
157
+
158
static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
159
{
160
Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
161
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
162
s->isr = 0x020000C0;
163
s->rdr = 0x00000000;
164
s->tdr = 0x00000000;
165
+
166
+ usart_cancel_transmit(s);
167
+ stm32l4x5_update_irq(s);
168
+}
169
+
170
+static void usart_update_rqr(Stm32l4x5UsartBaseState *s, uint32_t value)
171
+{
172
+ /* TXFRQ */
173
+ /* Reset RXNE flag */
174
+ if (value & R_RQR_RXFRQ_MASK) {
175
+ s->isr &= ~R_ISR_RXNE_MASK;
176
+ }
177
+ /* MMRQ */
178
+ /* SBKRQ */
179
+ /* ABRRQ */
180
+ stm32l4x5_update_irq(s);
181
}
182
183
static uint64_t stm32l4x5_usart_base_read(void *opaque, hwaddr addr,
184
@@ -XXX,XX +XXX,XX @@ static uint64_t stm32l4x5_usart_base_read(void *opaque, hwaddr addr,
185
retvalue = FIELD_EX32(s->rdr, RDR, RDR);
186
/* Reset RXNE flag */
187
s->isr &= ~R_ISR_RXNE_MASK;
188
+ stm32l4x5_update_irq(s);
189
break;
190
case A_TDR:
191
retvalue = FIELD_EX32(s->tdr, TDR, TDR);
192
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
193
switch (addr) {
194
case A_CR1:
195
s->cr1 = value;
196
+ stm32l4x5_update_irq(s);
197
return;
198
case A_CR2:
199
s->cr2 = value;
200
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
201
s->rtor = value;
202
return;
203
case A_RQR:
204
+ usart_update_rqr(s, value);
205
return;
206
case A_ISR:
207
qemu_log_mask(LOG_GUEST_ERROR,
208
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
209
case A_ICR:
210
/* Clear the status flags */
211
s->isr &= ~value;
212
+ stm32l4x5_update_irq(s);
213
return;
214
case A_RDR:
215
qemu_log_mask(LOG_GUEST_ERROR,
216
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
217
return;
218
case A_TDR:
219
s->tdr = value;
220
+ s->isr &= ~R_ISR_TXE_MASK;
221
+ usart_transmit(NULL, G_IO_OUT, s);
222
return;
223
default:
224
qemu_log_mask(LOG_GUEST_ERROR,
225
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_realize(DeviceState *dev, Error **errp)
226
error_setg(errp, "USART clock must be wired up by SoC code");
227
return;
228
}
229
+
230
+ qemu_chr_fe_set_handlers(&s->chr, stm32l4x5_usart_base_can_receive,
231
+ stm32l4x5_usart_base_receive, NULL, NULL,
232
+ s, NULL, true);
233
}
234
235
static void stm32l4x5_usart_base_class_init(ObjectClass *klass, void *data)
236
diff --git a/hw/char/trace-events b/hw/char/trace-events
237
index XXXXXXX..XXXXXXX 100644
238
--- a/hw/char/trace-events
239
+++ b/hw/char/trace-events
240
@@ -XXX,XX +XXX,XX @@ sh_serial_write(char *id, unsigned size, uint64_t offs, uint64_t val) "%s size %
241
# stm32l4x5_usart.c
242
stm32l4x5_usart_read(uint64_t addr, uint32_t data) "USART: Read <0x%" PRIx64 "> -> 0x%" PRIx32 ""
243
stm32l4x5_usart_write(uint64_t addr, uint32_t data) "USART: Write <0x%" PRIx64 "> <- 0x%" PRIx32 ""
244
+stm32l4x5_usart_rx(uint8_t c) "USART: got character 0x%x from backend"
245
+stm32l4x5_usart_tx(uint8_t c) "USART: character 0x%x sent to backend"
246
+stm32l4x5_usart_tx_pending(void) "USART: character send to backend pending"
247
+stm32l4x5_usart_irq_raised(uint32_t reg) "USART: IRQ raised: 0x%08"PRIx32
248
+stm32l4x5_usart_irq_lowered(void) "USART: IRQ lowered"
249
+stm32l4x5_usart_overrun_detected(uint8_t current, uint8_t received) "USART: Overrun detected, RDR='0x%x', received 0x%x"
250
+stm32l4x5_usart_receiver_not_enabled(uint8_t ue_bit, uint8_t re_bit) "USART: Receiver not enabled, UE=0x%x, RE=0x%x"
251
252
# xen_console.c
253
xen_console_connect(unsigned int idx, unsigned int ring_ref, unsigned int port, unsigned int limit) "idx %u ring_ref %u port %u limit %u"
37
--
254
--
38
2.7.4
255
2.34.1
39
256
40
257
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
Add support for zero pumping according to the transfer size register.
3
Add a function to change the settings of the
4
serial connection.
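
Not part of the patch, but for reference the BRR-to-baud-rate mapping that
stm32l4x5_update_params() below implements can be checked standalone. The
80 MHz clock and the 0x2B7 divider are the values the qtest later in this
series programs for roughly 115200 baud; everything else here is just a
host-side sketch.

#include <stdint.h>
#include <stdio.h>

/* Mirrors the divider handling in stm32l4x5_update_params() */
static unsigned brr_to_baud(uint32_t brr, int over8, unsigned clk_hz)
{
    uint32_t usart_div;

    if (!over8) {
        /* oversampling by 16: BRR holds USARTDIV directly */
        usart_div = brr;
    } else {
        /* oversampling by 8: BRR[2:0] is USARTDIV[3:0] shifted right by one */
        usart_div = ((brr & 0xFFF0) | ((brr & 0x0007) << 1)) / 2;
    }
    return clk_hz / usart_div;
}

int main(void)
{
    /* 0x2B7 == 695: 80 MHz / 695 is about 115100, i.e. ~115200 baud */
    printf("%u\n", brr_to_baud(0x2B7, 0, 80000000));
    return 0;
}
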
4
5
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
6
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
7
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20171126231634.9531-10-frasse.iglesias@gmail.com
9
Message-id: 20240329174402.60382-4-arnaud.minier@telecom-paris.fr
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
include/hw/ssi/xilinx_spips.h | 2 ++
12
hw/char/stm32l4x5_usart.c | 98 +++++++++++++++++++++++++++++++++++++++
12
hw/ssi/xilinx_spips.c | 47 ++++++++++++++++++++++++++++++++++++-------
13
hw/char/trace-events | 1 +
13
2 files changed, 42 insertions(+), 7 deletions(-)
14
2 files changed, 99 insertions(+)
14
15
15
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
16
diff --git a/hw/char/stm32l4x5_usart.c b/hw/char/stm32l4x5_usart.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/ssi/xilinx_spips.h
18
--- a/hw/char/stm32l4x5_usart.c
18
+++ b/include/hw/ssi/xilinx_spips.h
19
+++ b/hw/char/stm32l4x5_usart.c
19
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
20
@@ -XXX,XX +XXX,XX @@ static void usart_cancel_transmit(Stm32l4x5UsartBaseState *s)
20
uint32_t rx_discard;
21
22
uint32_t regs[XLNX_SPIPS_R_MAX];
23
+
24
+ bool man_start_com;
25
};
26
27
typedef struct {
28
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/ssi/xilinx_spips.c
31
+++ b/hw/ssi/xilinx_spips.c
32
@@ -XXX,XX +XXX,XX @@
33
FIELD(CMND, DUMMY_CYCLES, 2, 6)
34
#define R_CMND_DMA_EN (1 << 1)
35
#define R_CMND_PUSH_WAIT (1 << 0)
36
+#define R_TRANSFER_SIZE (0xc4 / 4)
37
#define R_LQSPI_STS (0xA4 / 4)
38
#define LQSPI_STS_WR_RECVD (1 << 1)
39
40
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_reset(DeviceState *d)
41
s->link_state_next_when = 0;
42
s->snoop_state = SNOOP_CHECKING;
43
s->cmd_dummies = 0;
44
+ s->man_start_com = false;
45
xilinx_spips_update_ixr(s);
46
xilinx_spips_update_cs_lines(s);
47
}
48
@@ -XXX,XX +XXX,XX @@ static inline void tx_data_bytes(Fifo8 *fifo, uint32_t value, int num, bool be)
49
}
21
}
50
}
22
}
51
23
52
+static void xilinx_spips_check_zero_pump(XilinxSPIPS *s)
24
+static void stm32l4x5_update_params(Stm32l4x5UsartBaseState *s)
53
+{
25
+{
54
+ if (!s->regs[R_TRANSFER_SIZE]) {
26
+ int speed, parity, data_bits, stop_bits;
27
+ uint32_t value, usart_div;
28
+ QEMUSerialSetParams ssp;
29
+
30
+ /* Select the parity type */
31
+ if (s->cr1 & R_CR1_PCE_MASK) {
32
+ if (s->cr1 & R_CR1_PS_MASK) {
33
+ parity = 'O';
34
+ } else {
35
+ parity = 'E';
36
+ }
37
+ } else {
38
+ parity = 'N';
39
+ }
40
+
41
+ /* Select the number of stop bits */
42
+ switch (FIELD_EX32(s->cr2, CR2, STOP)) {
43
+ case 0:
44
+ stop_bits = 1;
45
+ break;
46
+ case 2:
47
+ stop_bits = 2;
48
+ break;
49
+ default:
50
+ qemu_log_mask(LOG_UNIMP,
51
+ "UNIMPLEMENTED: fractionnal stop bits; CR2[13:12] = %u",
52
+ FIELD_EX32(s->cr2, CR2, STOP));
55
+ return;
53
+ return;
56
+ }
54
+ }
57
+ if (!fifo8_is_empty(&s->tx_fifo) && s->regs[R_CMND] & R_CMND_PUSH_WAIT) {
55
+
56
+ /* Select the length of the word */
57
+ switch ((FIELD_EX32(s->cr1, CR1, M1) << 1) | FIELD_EX32(s->cr1, CR1, M0)) {
58
+ case 0:
59
+ data_bits = 8;
60
+ break;
61
+ case 1:
62
+ data_bits = 9;
63
+ break;
64
+ case 2:
65
+ data_bits = 7;
66
+ break;
67
+ default:
68
+ qemu_log_mask(LOG_GUEST_ERROR,
69
+ "UNDEFINED: invalid word length, CR1.M = 0b11");
58
+ return;
70
+ return;
59
+ }
71
+ }
60
+ /*
72
+
61
+ * The zero pump must never fill tx fifo such that rx overflow is
73
+ /* Select the baud rate */
62
+ * possible
74
+ value = FIELD_EX32(s->brr, BRR, BRR);
63
+ */
75
+ if (value < 16) {
64
+ while (s->regs[R_TRANSFER_SIZE] &&
76
+ qemu_log_mask(LOG_GUEST_ERROR,
65
+ s->rx_fifo.num + s->tx_fifo.num < RXFF_A_Q - 3) {
77
+ "UNDEFINED: BRR less than 16: %u", value);
66
+ /* endianess just doesn't matter when zero pumping */
78
+ return;
67
+ tx_data_bytes(&s->tx_fifo, 0, 4, false);
68
+ s->regs[R_TRANSFER_SIZE] &= ~0x03ull;
69
+ s->regs[R_TRANSFER_SIZE] -= 4;
70
+ }
79
+ }
80
+
81
+ if (FIELD_EX32(s->cr1, CR1, OVER8) == 0) {
82
+ /*
83
+ * Oversampling by 16
84
+ * BRR = USARTDIV
85
+ */
86
+ usart_div = value;
87
+ } else {
88
+ /*
89
+ * Oversampling by 8
90
+ * - BRR[2:0] = USARTDIV[3:0] shifted 1 bit to the right.
91
+ * - BRR[3] must be kept cleared.
92
+ * - BRR[15:4] = USARTDIV[15:4]
93
+ * - The frequency is multiplied by 2
94
+ */
95
+ usart_div = ((value & 0xFFF0) | ((value & 0x0007) << 1)) / 2;
96
+ }
97
+
98
+ speed = clock_get_hz(s->clk) / usart_div;
99
+
100
+ ssp.speed = speed;
101
+ ssp.parity = parity;
102
+ ssp.data_bits = data_bits;
103
+ ssp.stop_bits = stop_bits;
104
+
105
+ qemu_chr_fe_ioctl(&s->chr, CHR_IOCTL_SERIAL_SET_PARAMS, &ssp);
106
+
107
+ trace_stm32l4x5_usart_update_params(speed, parity, data_bits, stop_bits);
71
+}
108
+}
72
+
109
+
73
+static void xilinx_spips_check_flush(XilinxSPIPS *s)
110
static void stm32l4x5_usart_base_reset_hold(Object *obj, ResetType type)
111
{
112
Stm32l4x5UsartBaseState *s = STM32L4X5_USART_BASE(obj);
113
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_write(void *opaque, hwaddr addr,
114
switch (addr) {
115
case A_CR1:
116
s->cr1 = value;
117
+ stm32l4x5_update_params(s);
118
stm32l4x5_update_irq(s);
119
return;
120
case A_CR2:
121
s->cr2 = value;
122
+ stm32l4x5_update_params(s);
123
return;
124
case A_CR3:
125
s->cr3 = value;
126
return;
127
case A_BRR:
128
s->brr = value;
129
+ stm32l4x5_update_params(s);
130
return;
131
case A_GTPR:
132
s->gtpr = value;
133
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_usart_base_init(Object *obj)
134
s->clk = qdev_init_clock_in(DEVICE(s), "clk", NULL, s, 0);
135
}
136
137
+static int stm32l4x5_usart_base_post_load(void *opaque, int version_id)
74
+{
138
+{
75
+ if (s->man_start_com ||
139
+ Stm32l4x5UsartBaseState *s = (Stm32l4x5UsartBaseState *)opaque;
76
+ (!fifo8_is_empty(&s->tx_fifo) &&
140
+
77
+ !(s->regs[R_CONFIG] & MAN_START_EN))) {
141
+ stm32l4x5_update_params(s);
78
+ xilinx_spips_check_zero_pump(s);
142
+ return 0;
79
+ xilinx_spips_flush_txfifo(s);
80
+ }
81
+ if (fifo8_is_empty(&s->tx_fifo) && !s->regs[R_TRANSFER_SIZE]) {
82
+ s->man_start_com = false;
83
+ }
84
+ xilinx_spips_update_ixr(s);
85
+}
143
+}
86
+
144
+
87
static inline int rx_data_bytes(Fifo8 *fifo, uint8_t *value, int max)
145
static const VMStateDescription vmstate_stm32l4x5_usart_base = {
88
{
146
.name = TYPE_STM32L4X5_USART_BASE,
89
int i;
147
.version_id = 1,
90
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
148
.minimum_version_id = 1,
91
uint64_t value, unsigned size)
149
+ .post_load = stm32l4x5_usart_base_post_load,
92
{
150
.fields = (VMStateField[]) {
93
int mask = ~0;
151
VMSTATE_UINT32(cr1, Stm32l4x5UsartBaseState),
94
- int man_start_com = 0;
152
VMSTATE_UINT32(cr2, Stm32l4x5UsartBaseState),
95
XilinxSPIPS *s = opaque;
153
diff --git a/hw/char/trace-events b/hw/char/trace-events
96
154
index XXXXXXX..XXXXXXX 100644
97
DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr, (unsigned)value);
155
--- a/hw/char/trace-events
98
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
156
+++ b/hw/char/trace-events
99
switch (addr) {
157
@@ -XXX,XX +XXX,XX @@ stm32l4x5_usart_irq_raised(uint32_t reg) "USART: IRQ raised: 0x%08"PRIx32
100
case R_CONFIG:
158
stm32l4x5_usart_irq_lowered(void) "USART: IRQ lowered"
101
mask = ~(R_CONFIG_RSVD | MAN_START_COM);
159
stm32l4x5_usart_overrun_detected(uint8_t current, uint8_t received) "USART: Overrun detected, RDR='0x%x', received 0x%x"
102
- if (value & MAN_START_COM) {
160
stm32l4x5_usart_receiver_not_enabled(uint8_t ue_bit, uint8_t re_bit) "USART: Receiver not enabled, UE=0x%x, RE=0x%x"
103
- man_start_com = 1;
161
+stm32l4x5_usart_update_params(int speed, uint8_t parity, int data, int stop) "USART: speed: %d, parity: %c, data bits: %d, stop bits: %d"
104
+ if ((value & MAN_START_COM) && (s->regs[R_CONFIG] & MAN_START_EN)) {
162
105
+ s->man_start_com = true;
163
# xen_console.c
106
}
164
xen_console_connect(unsigned int idx, unsigned int ring_ref, unsigned int port, unsigned int limit) "idx %u ring_ref %u port %u limit %u"
107
break;
108
case R_INTR_STATUS:
109
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
110
s->regs[addr] = (s->regs[addr] & ~mask) | (value & mask);
111
no_reg_update:
112
xilinx_spips_update_cs_lines(s);
113
- if ((man_start_com && s->regs[R_CONFIG] & MAN_START_EN) ||
114
- (fifo8_is_empty(&s->tx_fifo) && s->regs[R_CONFIG] & MAN_START_EN)) {
115
- xilinx_spips_flush_txfifo(s);
116
- }
117
+ xilinx_spips_check_flush(s);
118
xilinx_spips_update_cs_lines(s);
119
xilinx_spips_update_ixr(s);
120
}
121
--
165
--
122
2.7.4
166
2.34.1
123
167
124
168
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
Move the FlashCMD enum, XilinxQSPIPS and XilinxSPIPSClass structures to the
3
Add the USART to the SoC and connect it to the other implemented devices.
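
One user-visible detail of the wiring below, noted here for reference (the
mapping comes straight from the serial_hd() calls in the patch; the snippet
itself is only a hypothetical sanity check): the first three -serial
options attach to USART1..3, the next two to UART4/5, and the sixth to
LPUART1.

#include <stdio.h>

#define STM_NUM_USARTS 3
#define STM_NUM_UARTS  2

int main(void)
{
    for (int i = 0; i < STM_NUM_USARTS; i++) {
        printf("USART%d  -> serial_hd(%d)\n", i + 1, i);
    }
    for (int i = 0; i < STM_NUM_UARTS; i++) {
        printf("UART%d   -> serial_hd(%d)\n",
               STM_NUM_USARTS + i + 1, STM_NUM_USARTS + i);
    }
    printf("LPUART1 -> serial_hd(%d)\n", STM_NUM_USARTS + STM_NUM_UARTS);
    return 0;
}
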
4
header for consistency (struct XilinxSPIPS is found there). Also move out
4
5
a define and remove two doubly-included headers (while touching the code).
5
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
6
Finally, add 4 byte address commands to the FlashCMD enum.
6
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
7
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
Message-id: 20240329174402.60382-5-arnaud.minier@telecom-paris.fr
9
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
9
[PMM: fixed a few checkpatch nits]
10
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Message-id: 20171126231634.9531-6-frasse.iglesias@gmail.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
11
---
16
include/hw/ssi/xilinx_spips.h | 34 ++++++++++++++++++++++++++++++++++
12
docs/system/arm/b-l475e-iot01a.rst | 2 +-
17
hw/ssi/xilinx_spips.c | 35 -----------------------------------
13
include/hw/arm/stm32l4x5_soc.h | 7 +++
18
2 files changed, 34 insertions(+), 35 deletions(-)
14
hw/arm/stm32l4x5_soc.c | 83 +++++++++++++++++++++++++++---
19
15
hw/arm/Kconfig | 1 +
20
diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
16
4 files changed, 86 insertions(+), 7 deletions(-)
21
index XXXXXXX..XXXXXXX 100644
17
22
--- a/include/hw/ssi/xilinx_spips.h
18
diff --git a/docs/system/arm/b-l475e-iot01a.rst b/docs/system/arm/b-l475e-iot01a.rst
23
+++ b/include/hw/ssi/xilinx_spips.h
19
index XXXXXXX..XXXXXXX 100644
24
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPS XilinxSPIPS;
20
--- a/docs/system/arm/b-l475e-iot01a.rst
25
21
+++ b/docs/system/arm/b-l475e-iot01a.rst
26
#define XLNX_SPIPS_R_MAX (0x100 / 4)
22
@@ -XXX,XX +XXX,XX @@ Currently B-L475E-IOT01A machine's only supports the following devices:
27
23
- STM32L4x5 SYSCFG (System configuration controller)
28
+/* Bite off 4k chunks at a time */
24
- STM32L4x5 RCC (Reset and clock control)
29
+#define LQSPI_CACHE_SIZE 1024
25
- STM32L4x5 GPIOs (General-purpose I/Os)
30
+
26
+- STM32L4x5 USARTs, UARTs and LPUART (Serial ports)
31
+typedef enum {
27
32
+ READ = 0x3, READ_4 = 0x13,
28
Missing devices
33
+ FAST_READ = 0xb, FAST_READ_4 = 0x0c,
29
"""""""""""""""
34
+ DOR = 0x3b, DOR_4 = 0x3c,
30
35
+ QOR = 0x6b, QOR_4 = 0x6c,
31
The B-L475E-IOT01A does *not* support the following devices:
36
+ DIOR = 0xbb, DIOR_4 = 0xbc,
32
37
+ QIOR = 0xeb, QIOR_4 = 0xec,
33
-- Serial ports (UART)
38
+
34
- Analog to Digital Converter (ADC)
39
+ PP = 0x2, PP_4 = 0x12,
35
- SPI controller
40
+ DPP = 0xa2,
36
- Timer controller (TIMER)
41
+ QPP = 0x32, QPP_4 = 0x34,
37
diff --git a/include/hw/arm/stm32l4x5_soc.h b/include/hw/arm/stm32l4x5_soc.h
42
+} FlashCMD;
38
index XXXXXXX..XXXXXXX 100644
43
+
39
--- a/include/hw/arm/stm32l4x5_soc.h
44
struct XilinxSPIPS {
40
+++ b/include/hw/arm/stm32l4x5_soc.h
41
@@ -XXX,XX +XXX,XX @@
42
#include "hw/misc/stm32l4x5_exti.h"
43
#include "hw/misc/stm32l4x5_rcc.h"
44
#include "hw/gpio/stm32l4x5_gpio.h"
45
+#include "hw/char/stm32l4x5_usart.h"
46
#include "qom/object.h"
47
48
#define TYPE_STM32L4X5_SOC "stm32l4x5-soc"
49
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_TYPE(Stm32l4x5SocState, Stm32l4x5SocClass, STM32L4X5_SOC)
50
51
#define NUM_EXTI_OR_GATES 4
52
53
+#define STM_NUM_USARTS 3
54
+#define STM_NUM_UARTS 2
55
+
56
struct Stm32l4x5SocState {
45
SysBusDevice parent_obj;
57
SysBusDevice parent_obj;
46
58
47
@@ -XXX,XX +XXX,XX @@ struct XilinxSPIPS {
59
@@ -XXX,XX +XXX,XX @@ struct Stm32l4x5SocState {
48
uint32_t regs[XLNX_SPIPS_R_MAX];
60
Stm32l4x5SyscfgState syscfg;
49
};
61
Stm32l4x5RccState rcc;
50
62
Stm32l4x5GpioState gpio[NUM_GPIOS];
51
+typedef struct {
63
+ Stm32l4x5UsartBaseState usart[STM_NUM_USARTS];
52
+ XilinxSPIPS parent_obj;
64
+ Stm32l4x5UsartBaseState uart[STM_NUM_UARTS];
53
+
65
+ Stm32l4x5UsartBaseState lpuart;
54
+ uint8_t lqspi_buf[LQSPI_CACHE_SIZE];
66
55
+ hwaddr lqspi_cached_addr;
67
MemoryRegion sram1;
56
+ Error *migration_blocker;
68
MemoryRegion sram2;
57
+ bool mmio_execution_enabled;
69
diff --git a/hw/arm/stm32l4x5_soc.c b/hw/arm/stm32l4x5_soc.c
58
+} XilinxQSPIPS;
70
index XXXXXXX..XXXXXXX 100644
59
+
71
--- a/hw/arm/stm32l4x5_soc.c
60
+typedef struct XilinxSPIPSClass {
72
+++ b/hw/arm/stm32l4x5_soc.c
61
+ SysBusDeviceClass parent_class;
62
+
63
+ const MemoryRegionOps *reg_ops;
64
+
65
+ uint32_t rx_fifo_size;
66
+ uint32_t tx_fifo_size;
67
+} XilinxSPIPSClass;
68
+
69
#define TYPE_XILINX_SPIPS "xlnx.ps7-spi"
70
#define TYPE_XILINX_QSPIPS "xlnx.ps7-qspi"
71
72
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/ssi/xilinx_spips.c
75
+++ b/hw/ssi/xilinx_spips.c
76
@@ -XXX,XX +XXX,XX @@
73
@@ -XXX,XX +XXX,XX @@
77
#include "sysemu/sysemu.h"
74
#include "sysemu/sysemu.h"
78
#include "hw/ptimer.h"
75
#include "hw/or-irq.h"
79
#include "qemu/log.h"
76
#include "hw/arm/stm32l4x5_soc.h"
80
-#include "qemu/fifo8.h"
77
+#include "hw/char/stm32l4x5_usart.h"
81
-#include "hw/ssi/ssi.h"
78
#include "hw/gpio/stm32l4x5_gpio.h"
82
#include "qemu/bitops.h"
79
#include "hw/qdev-clock.h"
83
#include "hw/ssi/xilinx_spips.h"
80
#include "hw/misc/unimp.h"
84
#include "qapi/error.h"
81
@@ -XXX,XX +XXX,XX @@ static const struct {
85
@@ -XXX,XX +XXX,XX @@
82
{ 0x48001C00, 0x0000000F, 0x00000000, 0x00000000 },
86
83
};
87
/* 16MB per linear region */
84
88
#define LQSPI_ADDRESS_BITS 24
85
+static const hwaddr usart_addr[] = {
89
-/* Bite off 4k chunks at a time */
86
+ 0x40013800, /* "USART1", 0x400 */
90
-#define LQSPI_CACHE_SIZE 1024
87
+ 0x40004400, /* "USART2", 0x400 */
91
88
+ 0x40004800, /* "USART3", 0x400 */
92
#define SNOOP_CHECKING 0xFF
89
+};
93
#define SNOOP_NONE 0xFE
90
+static const hwaddr uart_addr[] = {
94
#define SNOOP_STRIPING 0
91
+ 0x40004C00, /* "UART4" , 0x400 */
95
92
+ 0x40005000 /* "UART5" , 0x400 */
96
-typedef enum {
93
+};
97
- READ = 0x3,
94
+
98
- FAST_READ = 0xb,
95
+#define LPUART_BASE_ADDRESS 0x40008000
99
- DOR = 0x3b,
96
+
100
- QOR = 0x6b,
97
+static const int usart_irq[] = { 37, 38, 39 };
101
- DIOR = 0xbb,
98
+static const int uart_irq[] = { 52, 53 };
102
- QIOR = 0xeb,
99
+#define LPUART_IRQ 70
103
-
100
+
104
- PP = 0x2,
101
static void stm32l4x5_soc_initfn(Object *obj)
105
- DPP = 0xa2,
106
- QPP = 0x32,
107
-} FlashCMD;
108
-
109
-typedef struct {
110
- XilinxSPIPS parent_obj;
111
-
112
- uint8_t lqspi_buf[LQSPI_CACHE_SIZE];
113
- hwaddr lqspi_cached_addr;
114
- Error *migration_blocker;
115
- bool mmio_execution_enabled;
116
-} XilinxQSPIPS;
117
-
118
-typedef struct XilinxSPIPSClass {
119
- SysBusDeviceClass parent_class;
120
-
121
- const MemoryRegionOps *reg_ops;
122
-
123
- uint32_t rx_fifo_size;
124
- uint32_t tx_fifo_size;
125
-} XilinxSPIPSClass;
126
-
127
static inline int num_effective_busses(XilinxSPIPS *s)
128
{
102
{
129
return (s->regs[R_LQSPI_CFG] & LQSPI_CFG_SEP_BUS &&
103
Stm32l4x5SocState *s = STM32L4X5_SOC(obj);
104
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_initfn(Object *obj)
105
g_autofree char *name = g_strdup_printf("gpio%c", 'a' + i);
106
object_initialize_child(obj, name, &s->gpio[i], TYPE_STM32L4X5_GPIO);
107
}
108
+
109
+ for (int i = 0; i < STM_NUM_USARTS; i++) {
110
+ object_initialize_child(obj, "usart[*]", &s->usart[i],
111
+ TYPE_STM32L4X5_USART);
112
+ }
113
+
114
+ for (int i = 0; i < STM_NUM_UARTS; i++) {
115
+ object_initialize_child(obj, "uart[*]", &s->uart[i],
116
+ TYPE_STM32L4X5_UART);
117
+ }
118
+ object_initialize_child(obj, "lpuart1", &s->lpuart,
119
+ TYPE_STM32L4X5_LPUART);
120
}
121
122
static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
123
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
124
sysbus_mmio_map(busdev, 0, RCC_BASE_ADDRESS);
125
sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, RCC_IRQ));
126
127
+ /* USART devices */
128
+ for (int i = 0; i < STM_NUM_USARTS; i++) {
129
+ g_autofree char *name = g_strdup_printf("usart%d-out", i + 1);
130
+ dev = DEVICE(&(s->usart[i]));
131
+ qdev_prop_set_chr(dev, "chardev", serial_hd(i));
132
+ qdev_connect_clock_in(dev, "clk",
133
+ qdev_get_clock_out(DEVICE(&(s->rcc)), name));
134
+ busdev = SYS_BUS_DEVICE(dev);
135
+ if (!sysbus_realize(busdev, errp)) {
136
+ return;
137
+ }
138
+ sysbus_mmio_map(busdev, 0, usart_addr[i]);
139
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, usart_irq[i]));
140
+ }
141
+
142
+ /*
143
+ * TODO: Connect the USARTs, UARTs and LPUART to the EXTI once the EXTI
144
+ * can handle other gpio-in than the gpios. (e.g. Direct Lines for the
145
+ * usarts)
146
+ */
147
+
148
+ /* UART devices */
149
+ for (int i = 0; i < STM_NUM_UARTS; i++) {
150
+ g_autofree char *name = g_strdup_printf("uart%d-out", STM_NUM_USARTS + i + 1);
151
+ dev = DEVICE(&(s->uart[i]));
152
+ qdev_prop_set_chr(dev, "chardev", serial_hd(STM_NUM_USARTS + i));
153
+ qdev_connect_clock_in(dev, "clk",
154
+ qdev_get_clock_out(DEVICE(&(s->rcc)), name));
155
+ busdev = SYS_BUS_DEVICE(dev);
156
+ if (!sysbus_realize(busdev, errp)) {
157
+ return;
158
+ }
159
+ sysbus_mmio_map(busdev, 0, uart_addr[i]);
160
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, uart_irq[i]));
161
+ }
162
+
163
+ /* LPUART device */
164
+ dev = DEVICE(&(s->lpuart));
165
+ qdev_prop_set_chr(dev, "chardev", serial_hd(STM_NUM_USARTS + STM_NUM_UARTS));
166
+ qdev_connect_clock_in(dev, "clk",
167
+ qdev_get_clock_out(DEVICE(&(s->rcc)), "lpuart1-out"));
168
+ busdev = SYS_BUS_DEVICE(dev);
169
+ if (!sysbus_realize(busdev, errp)) {
170
+ return;
171
+ }
172
+ sysbus_mmio_map(busdev, 0, LPUART_BASE_ADDRESS);
173
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, LPUART_IRQ));
174
+
175
/* APB1 BUS */
176
create_unimplemented_device("TIM2", 0x40000000, 0x400);
177
create_unimplemented_device("TIM3", 0x40000400, 0x400);
178
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
179
create_unimplemented_device("SPI2", 0x40003800, 0x400);
180
create_unimplemented_device("SPI3", 0x40003C00, 0x400);
181
/* RESERVED: 0x40004000, 0x400 */
182
- create_unimplemented_device("USART2", 0x40004400, 0x400);
183
- create_unimplemented_device("USART3", 0x40004800, 0x400);
184
- create_unimplemented_device("UART4", 0x40004C00, 0x400);
185
- create_unimplemented_device("UART5", 0x40005000, 0x400);
186
create_unimplemented_device("I2C1", 0x40005400, 0x400);
187
create_unimplemented_device("I2C2", 0x40005800, 0x400);
188
create_unimplemented_device("I2C3", 0x40005C00, 0x400);
189
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
190
create_unimplemented_device("DAC1", 0x40007400, 0x400);
191
create_unimplemented_device("OPAMP", 0x40007800, 0x400);
192
create_unimplemented_device("LPTIM1", 0x40007C00, 0x400);
193
- create_unimplemented_device("LPUART1", 0x40008000, 0x400);
194
/* RESERVED: 0x40008400, 0x400 */
195
create_unimplemented_device("SWPMI1", 0x40008800, 0x400);
196
/* RESERVED: 0x40008C00, 0x800 */
197
@@ -XXX,XX +XXX,XX @@ static void stm32l4x5_soc_realize(DeviceState *dev_soc, Error **errp)
198
create_unimplemented_device("TIM1", 0x40012C00, 0x400);
199
create_unimplemented_device("SPI1", 0x40013000, 0x400);
200
create_unimplemented_device("TIM8", 0x40013400, 0x400);
201
- create_unimplemented_device("USART1", 0x40013800, 0x400);
202
/* RESERVED: 0x40013C00, 0x400 */
203
create_unimplemented_device("TIM15", 0x40014000, 0x400);
204
create_unimplemented_device("TIM16", 0x40014400, 0x400);
205
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
206
index XXXXXXX..XXXXXXX 100644
207
--- a/hw/arm/Kconfig
208
+++ b/hw/arm/Kconfig
209
@@ -XXX,XX +XXX,XX @@ config STM32L4X5_SOC
210
select STM32L4X5_SYSCFG
211
select STM32L4X5_RCC
212
select STM32L4X5_GPIO
213
+ select STM32L4X5_USART
214
215
config XLNX_ZYNQMP_ARM
216
bool
130
--
217
--
131
2.7.4
218
2.34.1
132
219
133
220
1
From: Francisco Iglesias <frasse.iglesias@gmail.com>
1
From: Arnaud Minier <arnaud.minier@telecom-paris.fr>
2
2
3
Add support for Micron (Numonyx) n25q512a11 and n25q512a13 flashes.
3
Test:
4
4
- read/write from/to the usart registers
5
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
5
- send/receive a character/string over the serial port
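
One bit of arithmetic worth spelling out (a worked example, not part of the
test): an external interrupt n sits in NVIC set-pending register
ISPR(n / 32) at bit n % 32, so USART1's IRQ 37 is bit 5 of ISPR1 at
0xE000E204. That is why check_nvic_pending() below subtracts 32 and reads
NVIC_ISPR1 directly.

#include <stdint.h>
#include <stdio.h>

#define NVIC_ISPR_BASE 0xE000E200u   /* set-pending registers, 32 IRQs each */

int main(void)
{
    unsigned irq = 37;               /* USART1 on this SoC */
    unsigned reg = irq / 32;         /* -> ISPR1 */
    unsigned bit = irq % 32;         /* -> bit 5 */

    printf("ISPR%u @ 0x%08X, mask 0x%08X\n",
           reg, NVIC_ISPR_BASE + 4 * reg, 1u << bit);
    return 0;
}
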
6
Acked-by: Marcin Krzemiński <mar.krzeminski@gmail.com>
6
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Signed-off-by: Arnaud Minier <arnaud.minier@telecom-paris.fr>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr>
9
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20171126231634.9531-5-frasse.iglesias@gmail.com
10
Message-id: 20240329174402.60382-6-arnaud.minier@telecom-paris.fr
11
[PMM: fix checkpatch nits, remove commented out code]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
hw/block/m25p80.c | 2 ++
14
tests/qtest/stm32l4x5_usart-test.c | 315 +++++++++++++++++++++++++++++
14
1 file changed, 2 insertions(+)
15
tests/qtest/meson.build | 4 +-
15
16
2 files changed, 318 insertions(+), 1 deletion(-)
16
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
17
create mode 100644 tests/qtest/stm32l4x5_usart-test.c
18
19
diff --git a/tests/qtest/stm32l4x5_usart-test.c b/tests/qtest/stm32l4x5_usart-test.c
20
new file mode 100644
21
index XXXXXXX..XXXXXXX
22
--- /dev/null
23
+++ b/tests/qtest/stm32l4x5_usart-test.c
24
@@ -XXX,XX +XXX,XX @@
25
+/*
26
+ * QTest testcase for STML4X5_USART
27
+ *
28
+ * Copyright (c) 2023 Arnaud Minier <arnaud.minier@telecom-paris.fr>
29
+ * Copyright (c) 2023 Inès Varhol <ines.varhol@telecom-paris.fr>
30
+ *
31
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
32
+ * See the COPYING file in the top-level directory.
33
+ */
34
+
35
+#include "qemu/osdep.h"
36
+#include "libqtest.h"
37
+#include "hw/misc/stm32l4x5_rcc_internals.h"
38
+#include "hw/registerfields.h"
39
+
40
+#define RCC_BASE_ADDR 0x40021000
41
+/* Use USART 1 ADDR, assume the others work the same */
42
+#define USART1_BASE_ADDR 0x40013800
43
+
44
+/* See stm32l4x5_usart for definitions */
45
+REG32(CR1, 0x00)
46
+ FIELD(CR1, M1, 28, 1)
47
+ FIELD(CR1, OVER8, 15, 1)
48
+ FIELD(CR1, M0, 12, 1)
49
+ FIELD(CR1, PCE, 10, 1)
50
+ FIELD(CR1, TXEIE, 7, 1)
51
+ FIELD(CR1, RXNEIE, 5, 1)
52
+ FIELD(CR1, TE, 3, 1)
53
+ FIELD(CR1, RE, 2, 1)
54
+ FIELD(CR1, UE, 0, 1)
55
+REG32(CR2, 0x04)
56
+REG32(CR3, 0x08)
57
+ FIELD(CR3, OVRDIS, 12, 1)
58
+REG32(BRR, 0x0C)
59
+REG32(GTPR, 0x10)
60
+REG32(RTOR, 0x14)
61
+REG32(RQR, 0x18)
62
+REG32(ISR, 0x1C)
63
+ FIELD(ISR, TXE, 7, 1)
64
+ FIELD(ISR, RXNE, 5, 1)
65
+ FIELD(ISR, ORE, 3, 1)
66
+REG32(ICR, 0x20)
67
+REG32(RDR, 0x24)
68
+REG32(TDR, 0x28)
69
+
70
+#define NVIC_ISPR1 0XE000E204
71
+#define NVIC_ICPR1 0xE000E284
72
+#define USART1_IRQ 37
73
+
74
+static bool check_nvic_pending(QTestState *qts, unsigned int n)
75
+{
76
+ /* No USART interrupts are less than 32 */
77
+ assert(n > 32);
78
+ n -= 32;
79
+ return qtest_readl(qts, NVIC_ISPR1) & (1 << n);
80
+}
81
+
82
+static bool clear_nvic_pending(QTestState *qts, unsigned int n)
83
+{
84
+ /* No USART interrupts are less than 32 */
85
+ assert(n > 32);
86
+ n -= 32;
87
+ qtest_writel(qts, NVIC_ICPR1, (1 << n));
88
+ return true;
89
+}
90
+
91
+/*
92
+ * Wait indefinitely for the flag to be updated.
93
+ * If this is run on a slow CI runner,
94
+ * the meson harness will timeout after 10 minutes for us.
95
+ */
96
+static bool usart_wait_for_flag(QTestState *qts, uint32_t event_addr,
97
+ uint32_t flag)
98
+{
99
+ while (true) {
100
+ if ((qtest_readl(qts, event_addr) & flag)) {
101
+ return true;
102
+ }
103
+ g_usleep(1000);
104
+ }
105
+
106
+ return false;
107
+}
108
+
109
+static void usart_receive_string(QTestState *qts, int sock_fd, const char *in,
110
+ char *out)
111
+{
112
+ int i, in_len = strlen(in);
113
+
114
+ g_assert_true(send(sock_fd, in, in_len, 0) == in_len);
115
+ for (i = 0; i < in_len; i++) {
116
+ g_assert_true(usart_wait_for_flag(qts,
117
+ USART1_BASE_ADDR + A_ISR, R_ISR_RXNE_MASK));
118
+ out[i] = qtest_readl(qts, USART1_BASE_ADDR + A_RDR);
119
+ }
120
+ out[i] = '\0';
121
+}
122
+
123
+static void usart_send_string(QTestState *qts, const char *in)
124
+{
125
+ int i, in_len = strlen(in);
126
+
127
+ for (i = 0; i < in_len; i++) {
128
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, in[i]);
129
+ g_assert_true(usart_wait_for_flag(qts,
130
+ USART1_BASE_ADDR + A_ISR, R_ISR_TXE_MASK));
131
+ }
132
+}
133
+
134
+/* Init the RCC clocks to run at 80 MHz */
135
+static void init_clocks(QTestState *qts)
136
+{
137
+ uint32_t value;
138
+
139
+ /* MSIRANGE can be set only when MSI is OFF or READY */
140
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CR), R_CR_MSION_MASK);
141
+
142
+ /* Clocking from MSI, in case MSI was not the default source */
143
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CFGR), 0);
144
+
145
+ /*
146
+ * Update PLL and set MSI as the source clock.
147
+ * PLLM = 1 --> 000
148
+ * PLLN = 40 --> 40
149
+ * PPLLR = 2 --> 00
150
+ * PLLDIV = unused, PLLP = unused (SAI3), PLLQ = unused (48M1)
151
+ * SRC = MSI --> 01
152
+ */
153
+ qtest_writel(qts, (RCC_BASE_ADDR + A_PLLCFGR), R_PLLCFGR_PLLREN_MASK |
154
+ (40 << R_PLLCFGR_PLLN_SHIFT) |
155
+ (0b01 << R_PLLCFGR_PLLSRC_SHIFT));
156
+
157
+ /* PLL activation */
158
+
159
+ value = qtest_readl(qts, (RCC_BASE_ADDR + A_CR));
160
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CR), value | R_CR_PLLON_MASK);
161
+
162
+ /* RCC_CFGR is OK by default */
163
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CFGR), 0);
164
+
165
+ /* CCIPR : no periph clock by default */
166
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CCIPR), 0);
167
+
168
+ /* Switches on the PLL clock source */
169
+ value = qtest_readl(qts, (RCC_BASE_ADDR + A_CFGR));
170
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CFGR), (value & ~R_CFGR_SW_MASK) |
171
+ (0b11 << R_CFGR_SW_SHIFT));
172
+
173
+ /* Enable SYSCFG clock enabled */
174
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2ENR), R_APB2ENR_SYSCFGEN_MASK);
175
+
176
+ /* Enable the IO port B clock (See p.252) */
177
+ qtest_writel(qts, (RCC_BASE_ADDR + A_AHB2ENR), R_AHB2ENR_GPIOBEN_MASK);
178
+
179
+ /* Enable the clock for USART1 (cf p.259) */
180
+ /* We rewrite SYSCFGEN to not disable it */
181
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2ENR),
182
+ R_APB2ENR_SYSCFGEN_MASK | R_APB2ENR_USART1EN_MASK);
183
+
184
+ /* TODO: Enable usart via gpio */
185
+
186
+ /* Set PCLK as the clock for USART1(cf p.272) i.e. reset both bits */
187
+ qtest_writel(qts, (RCC_BASE_ADDR + A_CCIPR), 0);
188
+
189
+ /* Reset USART1 (see p.249) */
190
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2RSTR), 1 << 14);
191
+ qtest_writel(qts, (RCC_BASE_ADDR + A_APB2RSTR), 0);
192
+}
193
+
194
+static void init_uart(QTestState *qts)
195
+{
196
+ uint32_t cr1;
197
+
198
+ init_clocks(qts);
199
+
200
+ /*
201
+ * For 115200 bauds, see p.1349.
202
+ * The clock has a frequency of 80 MHz,
203
+ * for 115200, we have to put a divider of 695 = 0x2B7.
204
+ */
205
+ qtest_writel(qts, (USART1_BASE_ADDR + A_BRR), 0x2B7);
206
+
207
+ /*
208
+ * Set the oversampling by 16,
209
+ * disable the parity control and
210
+ * set the word length to 8. (cf p.1377)
211
+ */
212
+ cr1 = qtest_readl(qts, (USART1_BASE_ADDR + A_CR1));
213
+ cr1 &= ~(R_CR1_M1_MASK | R_CR1_M0_MASK | R_CR1_OVER8_MASK | R_CR1_PCE_MASK);
214
+ qtest_writel(qts, (USART1_BASE_ADDR + A_CR1), cr1);
215
+
216
+ /* Enable the transmitter, the receiver and the USART. */
217
+ qtest_writel(qts, (USART1_BASE_ADDR + A_CR1),
218
+ R_CR1_UE_MASK | R_CR1_RE_MASK | R_CR1_TE_MASK);
219
+}
220
+
221
+static void test_write_read(void)
222
+{
223
+ QTestState *qts = qtest_init("-M b-l475e-iot01a");
224
+
225
+ /* Test that we can write and retrieve a value from the device */
226
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, 0xFFFFFFFF);
227
+ const uint32_t tdr = qtest_readl(qts, USART1_BASE_ADDR + A_TDR);
228
+ g_assert_cmpuint(tdr, ==, 0x000001FF);
229
+}
230
+
231
+static void test_receive_char(void)
232
+{
233
+ int sock_fd;
234
+ uint32_t cr1;
235
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
236
+
237
+ init_uart(qts);
238
+
239
+ /* Try without initializing IRQ */
240
+ g_assert_true(send(sock_fd, "a", 1, 0) == 1);
241
+ usart_wait_for_flag(qts, USART1_BASE_ADDR + A_ISR, R_ISR_RXNE_MASK);
242
+ g_assert_cmphex(qtest_readl(qts, USART1_BASE_ADDR + A_RDR), ==, 'a');
243
+ g_assert_false(check_nvic_pending(qts, USART1_IRQ));
244
+
245
+ /* Now with the IRQ */
246
+ cr1 = qtest_readl(qts, (USART1_BASE_ADDR + A_CR1));
247
+ cr1 |= R_CR1_RXNEIE_MASK;
248
+ qtest_writel(qts, USART1_BASE_ADDR + A_CR1, cr1);
249
+ g_assert_true(send(sock_fd, "b", 1, 0) == 1);
250
+ usart_wait_for_flag(qts, USART1_BASE_ADDR + A_ISR, R_ISR_RXNE_MASK);
251
+ g_assert_cmphex(qtest_readl(qts, USART1_BASE_ADDR + A_RDR), ==, 'b');
252
+ g_assert_true(check_nvic_pending(qts, USART1_IRQ));
253
+ clear_nvic_pending(qts, USART1_IRQ);
254
+
255
+ close(sock_fd);
256
+
257
+ qtest_quit(qts);
258
+}
259
+
260
+static void test_send_char(void)
261
+{
262
+ int sock_fd;
263
+ char s[1];
264
+ uint32_t cr1;
265
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
266
+
267
+ init_uart(qts);
268
+
269
+ /* Try without initializing IRQ */
270
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, 'c');
271
+ g_assert_true(recv(sock_fd, s, 1, 0) == 1);
272
+ g_assert_cmphex(s[0], ==, 'c');
273
+ g_assert_false(check_nvic_pending(qts, USART1_IRQ));
274
+
275
+ /* Now with the IRQ */
276
+ cr1 = qtest_readl(qts, (USART1_BASE_ADDR + A_CR1));
277
+ cr1 |= R_CR1_TXEIE_MASK;
278
+ qtest_writel(qts, USART1_BASE_ADDR + A_CR1, cr1);
279
+ qtest_writel(qts, USART1_BASE_ADDR + A_TDR, 'd');
280
+ g_assert_true(recv(sock_fd, s, 1, 0) == 1);
281
+ g_assert_cmphex(s[0], ==, 'd');
282
+ g_assert_true(check_nvic_pending(qts, USART1_IRQ));
283
+ clear_nvic_pending(qts, USART1_IRQ);
284
+
285
+ close(sock_fd);
286
+
287
+ qtest_quit(qts);
288
+}
289
+
290
+static void test_receive_str(void)
291
+{
292
+ int sock_fd;
293
+ char s[10];
294
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
295
+
296
+ init_uart(qts);
297
+
298
+ usart_receive_string(qts, sock_fd, "hello", s);
299
+ g_assert_true(memcmp(s, "hello", 5) == 0);
300
+
301
+ close(sock_fd);
302
+
303
+ qtest_quit(qts);
304
+}
305
+
306
+static void test_send_str(void)
307
+{
308
+ int sock_fd;
309
+ char s[10];
310
+ QTestState *qts = qtest_init_with_serial("-M b-l475e-iot01a", &sock_fd);
311
+
312
+ init_uart(qts);
313
+
314
+ usart_send_string(qts, "world");
315
+ g_assert_true(recv(sock_fd, s, 10, 0) == 5);
316
+ g_assert_true(memcmp(s, "world", 5) == 0);
317
+
318
+ close(sock_fd);
319
+
320
+ qtest_quit(qts);
321
+}
322
+
323
+int main(int argc, char **argv)
324
+{
325
+ int ret;
326
+
327
+ g_test_init(&argc, &argv, NULL);
328
+ g_test_set_nonfatal_assertions();
329
+
330
+ qtest_add_func("stm32l4x5/usart/write_read", test_write_read);
331
+ qtest_add_func("stm32l4x5/usart/receive_char", test_receive_char);
332
+ qtest_add_func("stm32l4x5/usart/send_char", test_send_char);
333
+ qtest_add_func("stm32l4x5/usart/receive_str", test_receive_str);
334
+ qtest_add_func("stm32l4x5/usart/send_str", test_send_str);
335
+ ret = g_test_run();
336
+
337
+ return ret;
338
+}
339
+
340
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ slow_qtests = {
   'npcm7xx_pwm-test': 300,
   'npcm7xx_watchdog_timer-test': 120,
   'qom-test' : 900,
+  'stm32l4x5_usart-test' : 600,
   'test-hmp' : 240,
   'pxe-test': 610,
   'prom-env-test': 360,
@@ -XXX,XX +XXX,XX @@ qtests_stm32l4x5 = \
   ['stm32l4x5_exti-test',
    'stm32l4x5_syscfg-test',
    'stm32l4x5_rcc-test',
-   'stm32l4x5_gpio-test']
+   'stm32l4x5_gpio-test',
+   'stm32l4x5_usart-test']

 qtests_arm = \
   (config_all_devices.has_key('CONFIG_MPS2') ? ['sse-timer-test'] : []) + \
--
2.34.1

index XXXXXXX..XXXXXXX 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -XXX,XX +XXX,XX @@ static const FlashPartInfo known_devices[] = {
     { INFO("n25q128a13", 0x20ba18, 0, 64 << 10, 256, ER_4K) },
     { INFO("n25q256a11", 0x20bb19, 0, 64 << 10, 512, ER_4K) },
     { INFO("n25q256a13", 0x20ba19, 0, 64 << 10, 512, ER_4K) },
+    { INFO("n25q512a11", 0x20bb20, 0, 64 << 10, 1024, ER_4K) },
+    { INFO("n25q512a13", 0x20ba20, 0, 64 << 10, 1024, ER_4K) },
     { INFO("n25q128", 0x20ba18, 0, 64 << 10, 256, 0) },
     { INFO("n25q256a", 0x20ba19, 0, 64 << 10, 512, ER_4K) },
     { INFO("n25q512a", 0x20ba20, 0, 64 << 10, 1024, ER_4K) },
--
2.7.4
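For reference, the magic numbers in init_clocks()/init_uart() above fit together as follows: starting from the MSI oscillator at its 4 MHz reset frequency (an assumption from the STM32L4 reset defaults, consistent with the 80 MHz the comment targets), the PLL factors PLLM = 1, PLLN = 40, PLLR = 2 give an 80 MHz SYSCLK, and BRR = 0x2B7 is that clock divided by the 115200 baud rate, rounded up. A standalone sketch of the arithmetic, not part of the patch:

/* Sketch only: reproduces the clock/baud numbers used by the test above. */
#include <stdio.h>

int main(void)
{
    const unsigned msi_hz = 4000000;                     /* MSI reset frequency (assumed) */
    const unsigned pllm = 1, plln = 40, pllr = 2;        /* factors from the test comment */
    const unsigned sysclk = msi_hz / pllm * plln / pllr; /* 80000000 Hz */
    const unsigned baud = 115200;
    /* 16x oversampling: BRR holds fCK / baud; rounding up gives 695 == 0x2B7 */
    const unsigned brr = (sysclk + baud - 1) / baud;

    printf("sysclk=%u Hz, brr=%u (0x%X)\n", sysclk, brr, brr);
    return 0;
}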
Deleted patch
For v8M it is possible for the CONTROL.SPSEL bit value and the
current stack to be out of sync. This means we need to update
the checks used in reads and writes of the PSP and MSP special
registers to use v7m_using_psp() rather than directly checking
the SPSEL bit in the control register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-2-git-send-email-peter.maydell@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/arm/helper.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)

     switch (reg) {
     case 8: /* MSP */
-        return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
-            env->v7m.other_sp : env->regs[13];
+        return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
     case 9: /* PSP */
-        return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
-            env->regs[13] : env->v7m.other_sp;
+        return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
     case 16: /* PRIMASK */
         return env->v7m.primask[env->v7m.secure];
     case 17: /* BASEPRI */
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         }
         break;
     case 8: /* MSP */
-        if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
+        if (v7m_using_psp(env)) {
             env->v7m.other_sp = val;
         } else {
             env->regs[13] = val;
         }
         break;
     case 9: /* PSP */
-        if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) {
+        if (v7m_using_psp(env)) {
             env->regs[13] = val;
         } else {
             env->v7m.other_sp = val;
--
2.7.4
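The helper this patch switches to is not shown in the hunks above; roughly, it folds the Handler-mode check in with the SPSEL bit, along these lines (a sketch of the check, not the exact QEMU source):

/* Sketch: what a v7m_using_psp()-style check amounts to. */
static bool v7m_using_psp(CPUARMState *env)
{
    /*
     * Handler mode always runs on the main stack; in Thread mode the
     * process stack is in use only when CONTROL.SPSEL is set.
     */
    return !arm_v7m_is_handler_mode(env) &&
        (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK);
}

With that predicate, MSP and PSP accesses stay correct even when SPSEL and the active stack pointer have been allowed to diverge, which is exactly the v8M situation the commit message describes.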
Deleted patch
In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
in Handler mode. In v8M the behaviour is slightly different:
writes to the bit are permitted but will have no effect.

We've already done the hard work to handle the value in
CONTROL.SPSEL being out of sync with what stack pointer is
actually in use, so all we need to do to fix this last loose
end is to update the condition we use to guard whether we
call write_v7m_control_spsel() on the register write.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1512153879-5291-3-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
          * thread mode; other bits can be updated by any privileged code.
          * write_v7m_control_spsel() deals with updating the SPSEL bit in
          * env->v7m.control, so we only need update the others.
+         * For v7M, we must just ignore explicit writes to SPSEL in handler
+         * mode; for v8M the write is permitted but will have no effect.
          */
-        if (!arm_v7m_is_handler_mode(env)) {
+        if (arm_feature(env, ARM_FEATURE_V8) ||
+            !arm_v7m_is_handler_mode(env)) {
             write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
         env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
--
2.7.4
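For readers comparing the two architecture revisions, the guard added above boils down to the following predicate (an illustration only, not code from helper.c):

/*
 * Illustration: does a write to CONTROL.SPSEL update the architectural
 * bit?  v7M ignores the write entirely in Handler mode; v8M always
 * latches the bit, although the active stack pointer does not change
 * until the core is back in Thread mode.
 */
static bool spsel_write_updates_bit(bool is_v8m, bool in_handler_mode)
{
    return is_v8m || !in_handler_mode;
}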
Deleted patch
All the callers of arm_ldq_ptw() and arm_ldl_ptw() ignore the value
that those functions store in the fsr argument on failure: if they
return failure to their callers they will always overwrite the fsr
value with something else.

Remove the argument from these functions and S1_ptw_translate().
This will simplify removing fsr from the calling functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-3-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
 /* Translate a S1 pagetable walk through S2 if needed. */
 static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, MemTxAttrs txattrs,
-                               uint32_t *fsr,
                                ARMMMUFaultInfo *fi)
 {
     if ((mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1) &&
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         hwaddr s2pa;
         int s2prot;
         int ret;
+        uint32_t fsr;

         ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
-                                 &txattrs, &s2prot, &s2size, fsr, fi, NULL);
+                                 &txattrs, &s2prot, &s2size, &fsr, fi, NULL);
         if (ret) {
             fi->s2addr = addr;
             fi->stage2 = true;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
  * (but not if it was for a debug access).
  */
 static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
-                            ARMMMUIdx mmu_idx, uint32_t *fsr,
-                            ARMMMUFaultInfo *fi)
+                            ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,

     attrs.secure = is_secure;
     as = arm_addressspace(cs, attrs);
-    addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fsr, fi);
+    addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
     if (fi->s1ptw) {
         return 0;
     }
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
 }

 static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
-                            ARMMMUIdx mmu_idx, uint32_t *fsr,
-                            ARMMMUFaultInfo *fi)
+                            ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,

     attrs.secure = is_secure;
     as = arm_addressspace(cs, attrs);
-    addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fsr, fi);
+    addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
     if (fi->s1ptw) {
         return 0;
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
         goto do_fault;
     }
     desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
-                       mmu_idx, fsr, fi);
+                       mmu_idx, fi);
     type = (desc & 3);
     domain = (desc >> 5) & 0x0f;
     if (regime_el(env, mmu_idx) == 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
             table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
         }
         desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
-                           mmu_idx, fsr, fi);
+                           mmu_idx, fi);
         switch (desc & 3) {
         case 0: /* Page translation fault. */
             code = 7;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         goto do_fault;
     }
     desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
-                       mmu_idx, fsr, fi);
+                       mmu_idx, fi);
     type = (desc & 3);
     if (type == 0 || (type == 3 && !arm_feature(env, ARM_FEATURE_PXN))) {
         /* Section translation fault, or attempt to use the encoding
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         /* Lookup l2 entry. */
         table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
         desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
-                           mmu_idx, fsr, fi);
+                           mmu_idx, fi);
         ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
         switch (desc & 3) {
         case 0: /* Page translation fault. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
         descaddr |= (address >> (stride * (4 - level))) & indexmask;
         descaddr &= ~7ULL;
         nstable = extract32(tableattrs, 4, 1);
-        descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fsr, fi);
+        descriptor = arm_ldq_ptw(cs, descaddr, !nstable, mmu_idx, fi);
         if (fi->s1ptw) {
             goto do_fault;
         }
--
2.7.4
Deleted patch
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
Message-id: 1512503192-2239-5-git-send-email-peter.maydell@linaro.org
---
 target/arm/helper.c | 40 ++++++++++++++++++++++------------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ do_fault:
 static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
                              MMUAccessType access_type, ARMMMUIdx mmu_idx,
                              hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
-                             target_ulong *page_size, uint32_t *fsr,
-                             ARMMMUFaultInfo *fi)
+                             target_ulong *page_size, ARMMMUFaultInfo *fi)
 {
     CPUState *cs = CPU(arm_env_get_cpu(env));
-    int code;
+    int level = 1;
     uint32_t table;
     uint32_t desc;
     uint32_t xn;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
     /* Lookup l1 descriptor. */
     if (!get_level1_table_address(env, mmu_idx, &table, address)) {
         /* Section translation fault if page walk is disabled by PD0 or PD1 */
-        code = 5;
+        fi->type = ARMFault_Translation;
         goto do_fault;
     }
     desc = arm_ldl_ptw(cs, table, regime_is_secure(env, mmu_idx),
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         /* Section translation fault, or attempt to use the encoding
          * which is Reserved on implementations without PXN.
          */
-        code = 5;
+        fi->type = ARMFault_Translation;
         goto do_fault;
     }
     if ((type == 1) || !(desc & (1 << 18))) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
     } else {
         dacr = env->cp15.dacr_s;
     }
+    if (type == 1) {
+        level = 2;
+    }
     domain_prot = (dacr >> (domain * 2)) & 3;
     if (domain_prot == 0 || domain_prot == 2) {
-        if (type != 1) {
-            code = 9; /* Section domain fault. */
-        } else {
-            code = 11; /* Page domain fault. */
-        }
+        /* Section or Page domain fault */
+        fi->type = ARMFault_Domain;
         goto do_fault;
     }
     if (type != 1) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
         xn = desc & (1 << 4);
         pxn = desc & 1;
-        code = 13;
         ns = extract32(desc, 19, 1);
     } else {
         if (arm_feature(env, ARM_FEATURE_PXN)) {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         ap = ((desc >> 4) & 3) | ((desc >> 7) & 4);
         switch (desc & 3) {
         case 0: /* Page translation fault. */
-            code = 7;
+            fi->type = ARMFault_Translation;
             goto do_fault;
         case 1: /* 64k page. */
             phys_addr = (desc & 0xffff0000) | (address & 0xffff);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
             /* Never happens, but compiler isn't smart enough to tell. */
             abort();
         }
-        code = 15;
     }
     if (domain_prot == 3) {
         *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         if (pxn && !regime_is_user(env, mmu_idx)) {
             xn = 1;
         }
-        if (xn && access_type == MMU_INST_FETCH)
+        if (xn && access_type == MMU_INST_FETCH) {
+            fi->type = ARMFault_Permission;
             goto do_fault;
+        }

         if (arm_feature(env, ARM_FEATURE_V6K) &&
                 (regime_sctlr(env, mmu_idx) & SCTLR_AFE)) {
             /* The simplified model uses AP[0] as an access control bit. */
             if ((ap & 1) == 0) {
                 /* Access flag fault. */
-                code = (code == 15) ? 6 : 3;
+                fi->type = ARMFault_AccessFlag;
                 goto do_fault;
             }
             *prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         }
         if (!(*prot & (1 << access_type))) {
             /* Access permission fault. */
+            fi->type = ARMFault_Permission;
             goto do_fault;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
     *phys_ptr = phys_addr;
     return false;
 do_fault:
-    *fsr = code | (domain << 4);
+    fi->domain = domain;
+    fi->level = level;
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
         return get_phys_addr_lpae(env, address, access_type, mmu_idx, phys_ptr,
                                   attrs, prot, page_size, fsr, fi, cacheattrs);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
-        return get_phys_addr_v6(env, address, access_type, mmu_idx, phys_ptr,
-                                attrs, prot, page_size, fsr, fi);
+        bool ret = get_phys_addr_v6(env, address, access_type, mmu_idx,
+                                    phys_ptr, attrs, prot, page_size, fi);
+
+        *fsr = arm_fi_to_sfsc(fi);
+        return ret;
     } else {
         bool ret = get_phys_addr_v5(env, address, access_type, mmu_idx,
                                     phys_ptr, prot, page_size, fi);
--
2.7.4
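The FSC values that disappear from this function (5, 7, 9, 11, 13, 15, plus the 3/6 access-flag codes) are the short-descriptor fault status encodings that the callsite now reconstructs from the (fault type, level) pair; the deleted "*fsr = code | (domain << 4)" line shows that the domain field is ORed in on top. As a rough illustration of that mapping, not the actual arm_fi_to_sfsc() implementation:

/*
 * Illustration of the short-descriptor FSC encodings implied by the
 * codes removed above; level 1 = section, level 2 = page.
 */
static int fault_type_to_fsc(int type, int level)
{
    switch (type) {
    case ARMFault_Translation:
        return level == 1 ? 0b00101 : 0b00111;   /* 5 : 7 */
    case ARMFault_Domain:
        return level == 1 ? 0b01001 : 0b01011;   /* 9 : 11 */
    case ARMFault_Permission:
        return level == 1 ? 0b01101 : 0b01111;   /* 13 : 15 */
    case ARMFault_AccessFlag:
        return level == 1 ? 0b00011 : 0b00110;   /* 3 : 6 */
    default:
        return 0;
    }
}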