Last pullreq before 6.0 softfreeze: a few minor feature patches,
some bugfixes, some cleanups.

-- PMM

The following changes since commit 6f34661b6c97a37a5efc27d31c037ddeda4547e2:

  Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-for-6.0-pull-request' into staging (2021-03-11 18:55:27 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210312-1

for you to fetch changes up to 41f09f2e9f09e4dd386d84174a6dcb5136af17ca:

  hw/display/pxa2xx: Inline template header (2021-03-12 13:26:08 +0000)

----------------------------------------------------------------
target-arm queue:
 * versal: Support XRAMs and XRAM controller
 * smmu: Various minor bug fixes
 * SVE emulation: fix bugs handling odd vector lengths
 * allwinner-sun8i-emac: traverse transmit queue using TX_CUR_DESC register value
 * tests/acceptance: fix orangepi-pc acceptance tests
 * hw/timer/sse-timer: Propagate eventual error in sse_timer_realize()
 * hw/arm/virt: KVM: The IPA lower bound is 32
 * npcm7xx: support MFT module
 * pl110, pxa2xx_lcd: tidy up template headers

----------------------------------------------------------------
Andrew Jones (2):
      accel: kvm: Fix kvm_type invocation
      hw/arm/virt: KVM: The IPA lower bound is 32

Edgar E. Iglesias (2):
      hw/misc: versal: Add a model of the XRAM controller
      hw/arm: versal: Add support for the XRAMs

Eric Auger (7):
      intel_iommu: Fix mask may be uninitialized in vtd_context_device_invalidate
      dma: Introduce dma_aligned_pow2_mask()
      virtio-iommu: Handle non power of 2 range invalidations
      hw/arm/smmu-common: Fix smmu_iotlb_inv_iova when asid is not set
      hw/arm/smmuv3: Enforce invalidation on a power of two range
      hw/arm/smmuv3: Fix SMMU_CMD_CFGI_STE_RANGE handling
      hw/arm/smmuv3: Uniformize sid traces

Hao Wu (5):
      hw/misc: Add GPIOs for duty in NPCM7xx PWM
      hw/misc: Add NPCM7XX MFT Module
      hw/arm: Add MFT device to NPCM7xx Soc
      hw/arm: Connect PWM fans in NPCM7XX boards
      tests/qtest: Test PWM fan RPM using MFT in PWM test

Niek Linnenbank (5):
      hw/net/allwinner-sun8i-emac: traverse transmit queue using TX_CUR_DESC register value
      tests/acceptance/boot_linux_console: remove Armbian 19.11.3 bionic test for orangepi-pc machine
      tests/acceptance/boot_linux_console: change URL for test_arm_orangepi_bionic_20_08
      tests/acceptance: update sunxi kernel from armbian to 5.10.16
      tests/acceptance: drop ARMBIAN_ARTIFACTS_CACHED condition for orangepi-pc, cubieboard tests

Peter Maydell (9):
      hw/display/pl110: Remove dead code for non-32-bpp surfaces
      hw/display/pl110: Pull included-once parts of template header into pl110.c
      hw/display/pl110: Remove use of BITS from pl110_template.h
      hw/display/pxa2xx_lcd: Remove dead code for non-32-bpp surfaces
      hw/display/pxa2xx_lcd: Remove dest_width state field
      hw/display/pxa2xx: Remove use of BITS in pxa2xx_template.h
      hw/display/pxa2xx: Apply brace-related coding style fixes to template header
      hw/display/pxa2xx: Apply whitespace-only coding style fixes to template header
      hw/display/pxa2xx: Inline template header

Philippe Mathieu-Daudé (1):
      hw/timer/sse-timer: Propagate eventual error in sse_timer_realize()

Richard Henderson (8):
      target/arm: Fix sve_uzp_p vs odd vector lengths
      target/arm: Fix sve_zip_p vs odd vector lengths
      target/arm: Fix sve_punpk_p vs odd vector lengths
      target/arm: Update find_last_active for PREDDESC
      target/arm: Update BRKA, BRKB, BRKN for PREDDESC
      target/arm: Update CNTP for PREDDESC
      target/arm: Update WHILE for PREDDESC
      target/arm: Update sve reduction vs simd_desc

 docs/system/arm/nuvoton.rst            |   2 +-
 docs/system/arm/xlnx-versal-virt.rst   |   1 +
 hw/arm/smmu-internal.h                 |   5 +
 hw/display/pl110_template.h            | 120 +-------
 hw/display/pxa2xx_template.h           | 447 ---------------------------
 include/hw/arm/npcm7xx.h               |  13 +-
 include/hw/arm/xlnx-versal.h           |  13 +
 include/hw/boards.h                    |   1 +
 include/hw/misc/npcm7xx_mft.h          |  70 +++++
 include/hw/misc/npcm7xx_pwm.h          |   4 +-
 include/hw/misc/xlnx-versal-xramc.h    |  97 ++++++
 include/sysemu/dma.h                   |  12 +
 target/arm/kvm_arm.h                   |   6 +-
 accel/kvm/kvm-all.c                    |   2 +
 hw/arm/npcm7xx.c                       |  45 ++-
 hw/arm/npcm7xx_boards.c                |  99 ++++++
 hw/arm/smmu-common.c                   |  32 +-
 hw/arm/smmuv3.c                        |  58 ++--
 hw/arm/virt.c                          |  23 +-
 hw/arm/xlnx-versal.c                   |  36 +++
 hw/display/pl110.c                     | 123 +++++---
 hw/display/pxa2xx_lcd.c                | 520 ++++++++++++++++++++++++++-----
 hw/i386/intel_iommu.c                  |  32 +-
 hw/misc/npcm7xx_mft.c                  | 540 +++++++++++++++++++++++++++++++++
 hw/misc/npcm7xx_pwm.c                  |   4 +
 hw/misc/xlnx-versal-xramc.c            | 253 +++++++++++++++
 hw/net/allwinner-sun8i-emac.c          |  62 ++--
 hw/timer/sse-timer.c                   |   1 +
 hw/virtio/virtio-iommu.c               |  19 +-
 softmmu/dma-helpers.c                  |  26 ++
 target/arm/kvm.c                       |   4 +-
 target/arm/sve_helper.c                | 107 ++++---
 target/arm/translate-sve.c             |  26 +-
 tests/qtest/npcm7xx_pwm-test.c         | 205 ++++++++++++-
 hw/arm/trace-events                    |  24 +-
 hw/misc/meson.build                    |   2 +
 hw/misc/trace-events                   |   8 +
 tests/acceptance/boot_linux_console.py | 120 +++-----
 tests/acceptance/replay_kernel.py      |  10 +-
 39 files changed, 2235 insertions(+), 937 deletions(-)
 delete mode 100644 hw/display/pxa2xx_template.h
 create mode 100644 include/hw/misc/npcm7xx_mft.h
 create mode 100644 include/hw/misc/xlnx-versal-xramc.h
 create mode 100644 hw/misc/npcm7xx_mft.c
 create mode 100644 hw/misc/xlnx-versal-xramc.c
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Add a model of the Xilinx Versal Accelerator RAM (XRAM).
This is mainly a stub to make firmware happy. The size of
the RAMs can be probed. The interrupt mask logic is
modelled but none of the interrupts will ever be raised
unless injected.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20210308224637.2949533-2-edgar.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/misc/xlnx-versal-xramc.h |  97 +++++++++++
 hw/misc/xlnx-versal-xramc.c         | 253 ++++++++++++++++++++++++++++
 hw/misc/meson.build                 |   1 +
 3 files changed, 351 insertions(+)
 create mode 100644 include/hw/misc/xlnx-versal-xramc.h
 create mode 100644 hw/misc/xlnx-versal-xramc.c

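The "interrupt mask logic" mentioned in the commit message is the usual Xilinx write-one-to-enable / write-one-to-disable pattern that the patch implements in xram_ien_prew()/xram_ids_prew(): writing 1 to a bit of XRAM_IEN clears that bit in the read-only mask register XRAM_IMR, writing 1 to XRAM_IDS sets it, and the interrupt line is asserted whenever ISR & ~IMR is non-zero. A minimal stand-alone sketch of that behaviour (the MaskState type and helper names here are hypothetical, not the patch's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-alone model of the IEN/IDS/IMR mask scheme. */
typedef struct {
    uint32_t isr;   /* interrupt status; write-1-to-clear in the real device */
    uint32_t imr;   /* interrupt mask: 1 = masked (resets to all-masked) */
} MaskState;

static void mask_ien_write(MaskState *s, uint32_t val)
{
    s->imr &= ~val;   /* enable = unmask the written bits */
}

static void mask_ids_write(MaskState *s, uint32_t val)
{
    s->imr |= val;    /* disable = mask the written bits */
}

static bool mask_irq_pending(const MaskState *s)
{
    /* Same expression the patch uses in xram_update_irq() */
    return (s->isr & ~s->imr) != 0;
}
```

With XRAM_IMR resetting to 0x7ff, a status bit latched in ISR only reaches the IRQ line after the guest unmasks it via IEN.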
diff --git a/include/hw/misc/xlnx-versal-xramc.h b/include/hw/misc/xlnx-versal-xramc.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/xlnx-versal-xramc.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * QEMU model of the Xilinx XRAM Controller.
+ *
+ * Copyright (c) 2021 Xilinx Inc.
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>
+ */
+
+#ifndef XLNX_VERSAL_XRAMC_H
+#define XLNX_VERSAL_XRAMC_H
+
+#include "hw/sysbus.h"
+#include "hw/register.h"
+
+#define TYPE_XLNX_XRAM_CTRL "xlnx.versal-xramc"
+
+#define XLNX_XRAM_CTRL(obj) \
+     OBJECT_CHECK(XlnxXramCtrl, (obj), TYPE_XLNX_XRAM_CTRL)
+
+REG32(XRAM_ERR_CTRL, 0x0)
+    FIELD(XRAM_ERR_CTRL, UE_RES, 3, 1)
+    FIELD(XRAM_ERR_CTRL, PWR_ERR_RES, 2, 1)
+    FIELD(XRAM_ERR_CTRL, PZ_ERR_RES, 1, 1)
+    FIELD(XRAM_ERR_CTRL, APB_ERR_RES, 0, 1)
+REG32(XRAM_ISR, 0x4)
+    FIELD(XRAM_ISR, INV_APB, 0, 1)
+REG32(XRAM_IMR, 0x8)
+    FIELD(XRAM_IMR, INV_APB, 0, 1)
+REG32(XRAM_IEN, 0xc)
+    FIELD(XRAM_IEN, INV_APB, 0, 1)
+REG32(XRAM_IDS, 0x10)
+    FIELD(XRAM_IDS, INV_APB, 0, 1)
+REG32(XRAM_ECC_CNTL, 0x14)
+    FIELD(XRAM_ECC_CNTL, FI_MODE, 2, 1)
+    FIELD(XRAM_ECC_CNTL, DET_ONLY, 1, 1)
+    FIELD(XRAM_ECC_CNTL, ECC_ON_OFF, 0, 1)
+REG32(XRAM_CLR_EXE, 0x18)
+    FIELD(XRAM_CLR_EXE, MON_7, 7, 1)
+    FIELD(XRAM_CLR_EXE, MON_6, 6, 1)
+    FIELD(XRAM_CLR_EXE, MON_5, 5, 1)
+    FIELD(XRAM_CLR_EXE, MON_4, 4, 1)
+    FIELD(XRAM_CLR_EXE, MON_3, 3, 1)
+    FIELD(XRAM_CLR_EXE, MON_2, 2, 1)
+    FIELD(XRAM_CLR_EXE, MON_1, 1, 1)
+    FIELD(XRAM_CLR_EXE, MON_0, 0, 1)
+REG32(XRAM_CE_FFA, 0x1c)
+    FIELD(XRAM_CE_FFA, ADDR, 0, 20)
+REG32(XRAM_CE_FFD0, 0x20)
+REG32(XRAM_CE_FFD1, 0x24)
+REG32(XRAM_CE_FFD2, 0x28)
+REG32(XRAM_CE_FFD3, 0x2c)
+REG32(XRAM_CE_FFE, 0x30)
+    FIELD(XRAM_CE_FFE, SYNDROME, 0, 16)
+REG32(XRAM_UE_FFA, 0x34)
+    FIELD(XRAM_UE_FFA, ADDR, 0, 20)
+REG32(XRAM_UE_FFD0, 0x38)
+REG32(XRAM_UE_FFD1, 0x3c)
+REG32(XRAM_UE_FFD2, 0x40)
+REG32(XRAM_UE_FFD3, 0x44)
+REG32(XRAM_UE_FFE, 0x48)
+    FIELD(XRAM_UE_FFE, SYNDROME, 0, 16)
+REG32(XRAM_FI_D0, 0x4c)
+REG32(XRAM_FI_D1, 0x50)
+REG32(XRAM_FI_D2, 0x54)
+REG32(XRAM_FI_D3, 0x58)
+REG32(XRAM_FI_SY, 0x5c)
+    FIELD(XRAM_FI_SY, DATA, 0, 16)
+REG32(XRAM_RMW_UE_FFA, 0x70)
+    FIELD(XRAM_RMW_UE_FFA, ADDR, 0, 20)
+REG32(XRAM_FI_CNTR, 0x74)
+    FIELD(XRAM_FI_CNTR, COUNT, 0, 24)
+REG32(XRAM_IMP, 0x80)
+    FIELD(XRAM_IMP, SIZE, 0, 4)
+REG32(XRAM_PRDY_DBG, 0x84)
+    FIELD(XRAM_PRDY_DBG, ISLAND3, 12, 4)
+    FIELD(XRAM_PRDY_DBG, ISLAND2, 8, 4)
+    FIELD(XRAM_PRDY_DBG, ISLAND1, 4, 4)
+    FIELD(XRAM_PRDY_DBG, ISLAND0, 0, 4)
+REG32(XRAM_SAFETY_CHK, 0xff8)
+
+#define XRAM_CTRL_R_MAX (R_XRAM_SAFETY_CHK + 1)
+
+typedef struct XlnxXramCtrl {
+    SysBusDevice parent_obj;
+    MemoryRegion ram;
+    qemu_irq irq;
+
+    struct {
+        uint64_t size;
+        unsigned int encoded_size;
+    } cfg;
+
+    RegisterInfoArray *reg_array;
+    uint32_t regs[XRAM_CTRL_R_MAX];
+    RegisterInfo regs_info[XRAM_CTRL_R_MAX];
+} XlnxXramCtrl;
+#endif
124
diff --git a/hw/misc/xlnx-versal-xramc.c b/hw/misc/xlnx-versal-xramc.c
125
new file mode 100644
126
index XXXXXXX..XXXXXXX
127
--- /dev/null
128
+++ b/hw/misc/xlnx-versal-xramc.c
129
@@ -XXX,XX +XXX,XX @@
130
+/*
131
+ * QEMU model of the Xilinx XRAM Controller.
67
+ *
132
+ *
133
+ * Copyright (c) 2021 Xilinx Inc.
68
+ * SPDX-License-Identifier: GPL-2.0-or-later
134
+ * SPDX-License-Identifier: GPL-2.0-or-later
135
+ * Written by Edgar E. Iglesias <edgar.iglesias@xilinx.com>
69
+ */
136
+ */
137
+
70
+#include "qemu/osdep.h"
138
+#include "qemu/osdep.h"
71
+#include "cpu.h"
139
+#include "qemu/units.h"
72
+#include "internals.h"
140
+#include "qapi/error.h"
73
+#include "exec/exec-all.h"
141
+#include "migration/vmstate.h"
74
+#include "exec/helper-proto.h"
142
+#include "hw/sysbus.h"
75
+
143
+#include "hw/register.h"
76
+/* Return true if the linked breakpoint entry lbn passes its checks */
144
+#include "hw/qdev-properties.h"
77
+static bool linked_bp_matches(ARMCPU *cpu, int lbn)
145
+#include "hw/irq.h"
78
+{
146
+#include "hw/misc/xlnx-versal-xramc.h"
79
+ CPUARMState *env = &cpu->env;
147
+
80
+ uint64_t bcr = env->cp15.dbgbcr[lbn];
148
+#ifndef XLNX_XRAM_CTRL_ERR_DEBUG
81
+ int brps = extract32(cpu->dbgdidr, 24, 4);
149
+#define XLNX_XRAM_CTRL_ERR_DEBUG 0
82
+ int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
150
+#endif
83
+ int bt;
151
+
84
+ uint32_t contextidr;
152
+static void xram_update_irq(XlnxXramCtrl *s)
85
+
153
+{
86
+ /*
154
+ bool pending = s->regs[R_XRAM_ISR] & ~s->regs[R_XRAM_IMR];
87
+ * Links to unimplemented or non-context aware breakpoints are
155
+ qemu_set_irq(s->irq, pending);
88
+ * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or
156
+}
89
+ * as if linked to an UNKNOWN context-aware breakpoint (in which
157
+
90
+ * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
158
+static void xram_isr_postw(RegisterInfo *reg, uint64_t val64)
91
+ * We choose the former.
159
+{
92
+ */
160
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(reg->opaque);
93
+ if (lbn > brps || lbn < (brps - ctx_cmps)) {
161
+ xram_update_irq(s);
94
+ return false;
162
+}
163
+
164
+static uint64_t xram_ien_prew(RegisterInfo *reg, uint64_t val64)
165
+{
166
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(reg->opaque);
167
+ uint32_t val = val64;
168
+
169
+ s->regs[R_XRAM_IMR] &= ~val;
170
+ xram_update_irq(s);
171
+ return 0;
172
+}
173
+
174
+static uint64_t xram_ids_prew(RegisterInfo *reg, uint64_t val64)
175
+{
176
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(reg->opaque);
177
+ uint32_t val = val64;
178
+
179
+ s->regs[R_XRAM_IMR] |= val;
180
+ xram_update_irq(s);
181
+ return 0;
182
+}
183
+
184
+static const RegisterAccessInfo xram_ctrl_regs_info[] = {
185
+ { .name = "XRAM_ERR_CTRL", .addr = A_XRAM_ERR_CTRL,
186
+ .reset = 0xf,
187
+ .rsvd = 0xfffffff0,
188
+ },{ .name = "XRAM_ISR", .addr = A_XRAM_ISR,
189
+ .rsvd = 0xfffff800,
190
+ .w1c = 0x7ff,
191
+ .post_write = xram_isr_postw,
192
+ },{ .name = "XRAM_IMR", .addr = A_XRAM_IMR,
193
+ .reset = 0x7ff,
194
+ .rsvd = 0xfffff800,
195
+ .ro = 0x7ff,
196
+ },{ .name = "XRAM_IEN", .addr = A_XRAM_IEN,
197
+ .rsvd = 0xfffff800,
198
+ .pre_write = xram_ien_prew,
199
+ },{ .name = "XRAM_IDS", .addr = A_XRAM_IDS,
200
+ .rsvd = 0xfffff800,
201
+ .pre_write = xram_ids_prew,
202
+ },{ .name = "XRAM_ECC_CNTL", .addr = A_XRAM_ECC_CNTL,
203
+ .rsvd = 0xfffffff8,
204
+ },{ .name = "XRAM_CLR_EXE", .addr = A_XRAM_CLR_EXE,
205
+ .rsvd = 0xffffff00,
206
+ },{ .name = "XRAM_CE_FFA", .addr = A_XRAM_CE_FFA,
207
+ .rsvd = 0xfff00000,
208
+ .ro = 0xfffff,
209
+ },{ .name = "XRAM_CE_FFD0", .addr = A_XRAM_CE_FFD0,
210
+ .ro = 0xffffffff,
211
+ },{ .name = "XRAM_CE_FFD1", .addr = A_XRAM_CE_FFD1,
212
+ .ro = 0xffffffff,
213
+ },{ .name = "XRAM_CE_FFD2", .addr = A_XRAM_CE_FFD2,
214
+ .ro = 0xffffffff,
215
+ },{ .name = "XRAM_CE_FFD3", .addr = A_XRAM_CE_FFD3,
216
+ .ro = 0xffffffff,
217
+ },{ .name = "XRAM_CE_FFE", .addr = A_XRAM_CE_FFE,
218
+ .rsvd = 0xffff0000,
219
+ .ro = 0xffff,
220
+ },{ .name = "XRAM_UE_FFA", .addr = A_XRAM_UE_FFA,
221
+ .rsvd = 0xfff00000,
222
+ .ro = 0xfffff,
223
+ },{ .name = "XRAM_UE_FFD0", .addr = A_XRAM_UE_FFD0,
224
+ .ro = 0xffffffff,
225
+ },{ .name = "XRAM_UE_FFD1", .addr = A_XRAM_UE_FFD1,
226
+ .ro = 0xffffffff,
227
+ },{ .name = "XRAM_UE_FFD2", .addr = A_XRAM_UE_FFD2,
228
+ .ro = 0xffffffff,
229
+ },{ .name = "XRAM_UE_FFD3", .addr = A_XRAM_UE_FFD3,
230
+ .ro = 0xffffffff,
231
+ },{ .name = "XRAM_UE_FFE", .addr = A_XRAM_UE_FFE,
232
+ .rsvd = 0xffff0000,
233
+ .ro = 0xffff,
234
+ },{ .name = "XRAM_FI_D0", .addr = A_XRAM_FI_D0,
235
+ },{ .name = "XRAM_FI_D1", .addr = A_XRAM_FI_D1,
236
+ },{ .name = "XRAM_FI_D2", .addr = A_XRAM_FI_D2,
237
+ },{ .name = "XRAM_FI_D3", .addr = A_XRAM_FI_D3,
238
+ },{ .name = "XRAM_FI_SY", .addr = A_XRAM_FI_SY,
239
+ .rsvd = 0xffff0000,
240
+ },{ .name = "XRAM_RMW_UE_FFA", .addr = A_XRAM_RMW_UE_FFA,
241
+ .rsvd = 0xfff00000,
242
+ .ro = 0xfffff,
243
+ },{ .name = "XRAM_FI_CNTR", .addr = A_XRAM_FI_CNTR,
244
+ .rsvd = 0xff000000,
245
+ },{ .name = "XRAM_IMP", .addr = A_XRAM_IMP,
246
+ .reset = 0x4,
247
+ .rsvd = 0xfffffff0,
248
+ .ro = 0xf,
249
+ },{ .name = "XRAM_PRDY_DBG", .addr = A_XRAM_PRDY_DBG,
250
+ .reset = 0xffff,
251
+ .rsvd = 0xffff0000,
252
+ .ro = 0xffff,
253
+ },{ .name = "XRAM_SAFETY_CHK", .addr = A_XRAM_SAFETY_CHK,
95
+ }
254
+ }
96
+
255
+};
97
+ bcr = env->cp15.dbgbcr[lbn];
256
+
98
+
257
+static void xram_ctrl_reset_enter(Object *obj, ResetType type)
99
+ if (extract64(bcr, 0, 1) == 0) {
258
+{
100
+ /* Linked breakpoint disabled : generate no events */
259
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(obj);
101
+ return false;
260
+ unsigned int i;
261
+
262
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
263
+ register_reset(&s->regs_info[i]);
102
+ }
264
+ }
103
+
265
+
104
+ bt = extract64(bcr, 20, 4);
266
+ ARRAY_FIELD_DP32(s->regs, XRAM_IMP, SIZE, s->cfg.encoded_size);
105
+
267
+}
106
+ /*
268
+
107
+ * We match the whole register even if this is AArch32 using the
269
+static void xram_ctrl_reset_hold(Object *obj)
108
+ * short descriptor format (in which case it holds both PROCID and ASID),
270
+{
109
+ * since we don't implement the optional v7 context ID masking.
271
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(obj);
110
+ */
272
+
111
+ contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
273
+ xram_update_irq(s);
112
+
274
+}
113
+ switch (bt) {
275
+
114
+ case 3: /* linked context ID match */
276
+static const MemoryRegionOps xram_ctrl_ops = {
115
+ if (arm_current_el(env) > 1) {
277
+ .read = register_read_memory,
116
+ /* Context matches never fire in EL2 or (AArch64) EL3 */
278
+ .write = register_write_memory,
117
+ return false;
279
+ .endianness = DEVICE_LITTLE_ENDIAN,
118
+ }
280
+ .valid = {
119
+ return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
281
+ .min_access_size = 4,
120
+ case 5: /* linked address mismatch (reserved in AArch64) */
282
+ .max_access_size = 4,
121
+ case 9: /* linked VMID match (reserved if no EL2) */
283
+ },
122
+ case 11: /* linked context ID and VMID match (reserved if no EL2) */
284
+};
285
+
286
+static void xram_ctrl_realize(DeviceState *dev, Error **errp)
287
+{
288
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
289
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(dev);
290
+
291
+ switch (s->cfg.size) {
292
+ case 64 * KiB:
293
+ s->cfg.encoded_size = 0;
294
+ break;
295
+ case 128 * KiB:
296
+ s->cfg.encoded_size = 1;
297
+ break;
298
+ case 256 * KiB:
299
+ s->cfg.encoded_size = 2;
300
+ break;
301
+ case 512 * KiB:
302
+ s->cfg.encoded_size = 3;
303
+ break;
304
+ case 1 * MiB:
305
+ s->cfg.encoded_size = 4;
306
+ break;
123
+ default:
307
+ default:
124
+ /*
308
+ error_setg(errp, "Unsupported XRAM size %" PRId64, s->cfg.size);
125
+ * Links to Unlinked context breakpoints must generate no
309
+ return;
126
+ * events; we choose to do the same for reserved values too.
127
+ */
128
+ return false;
129
+ }
310
+ }
130
+
311
+
131
+ return false;
312
+ memory_region_init_ram(&s->ram, OBJECT(s),
132
+}
313
+ object_get_canonical_path_component(OBJECT(s)),
133
+
314
+ s->cfg.size, &error_fatal);
134
+static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
315
+ sysbus_init_mmio(sbd, &s->ram);
135
+{
316
+}
136
+ CPUARMState *env = &cpu->env;
317
+
137
+ uint64_t cr;
318
+static void xram_ctrl_init(Object *obj)
138
+ int pac, hmc, ssc, wt, lbn;
319
+{
139
+ /*
320
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(obj);
140
+ * Note that for watchpoints the check is against the CPU security
321
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
141
+ * state, not the S/NS attribute on the offending data access.
322
+
142
+ */
323
+ s->reg_array =
143
+ bool is_secure = arm_is_secure(env);
324
+ register_init_block32(DEVICE(obj), xram_ctrl_regs_info,
144
+ int access_el = arm_current_el(env);
325
+ ARRAY_SIZE(xram_ctrl_regs_info),
145
+
326
+ s->regs_info, s->regs,
146
+ if (is_wp) {
327
+ &xram_ctrl_ops,
147
+ CPUWatchpoint *wp = env->cpu_watchpoint[n];
328
+ XLNX_XRAM_CTRL_ERR_DEBUG,
148
+
329
+ XRAM_CTRL_R_MAX * 4);
149
+ if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {
330
+ sysbus_init_mmio(sbd, &s->reg_array->mem);
150
+ return false;
331
+ sysbus_init_irq(sbd, &s->irq);
151
+ }
332
+}
152
+ cr = env->cp15.dbgwcr[n];
333
+
153
+ if (wp->hitattrs.user) {
334
+static void xram_ctrl_finalize(Object *obj)
154
+ /*
335
+{
155
+ * The LDRT/STRT/LDT/STT "unprivileged access" instructions should
336
+ XlnxXramCtrl *s = XLNX_XRAM_CTRL(obj);
156
+ * match watchpoints as if they were accesses done at EL0, even if
337
+ register_finalize_block(s->reg_array);
157
+ * the CPU is at EL1 or higher.
338
+}
158
+ */
339
+
159
+ access_el = 0;
340
+static const VMStateDescription vmstate_xram_ctrl = {
160
+ }
341
+ .name = TYPE_XLNX_XRAM_CTRL,
161
+ } else {
342
+ .version_id = 1,
162
+ uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
343
+ .minimum_version_id = 1,
163
+
344
+ .fields = (VMStateField[]) {
164
+ if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {
345
+ VMSTATE_UINT32_ARRAY(regs, XlnxXramCtrl, XRAM_CTRL_R_MAX),
165
+ return false;
346
+ VMSTATE_END_OF_LIST(),
166
+ }
167
+ cr = env->cp15.dbgbcr[n];
168
+ }
347
+ }
169
+ /*
348
+};
170
+ * The WATCHPOINT_HIT flag guarantees us that the watchpoint is
349
+
171
+ * enabled and that the address and access type match; for breakpoints
350
+static Property xram_ctrl_properties[] = {
172
+ * we know the address matched; check the remaining fields, including
351
+ DEFINE_PROP_UINT64("size", XlnxXramCtrl, cfg.size, 1 * MiB),
173
+ * linked breakpoints. We rely on WCR and BCR having the same layout
352
+ DEFINE_PROP_END_OF_LIST(),
174
+ * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.
353
+};
175
+ * Note that some combinations of {PAC, HMC, SSC} are reserved and
354
+
176
+ * must act either like some valid combination or as if the watchpoint
355
+static void xram_ctrl_class_init(ObjectClass *klass, void *data)
177
+ * were disabled. We choose the former, and use this together with
356
+{
178
+ * the fact that EL3 must always be Secure and EL2 must always be
357
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
179
+ * Non-Secure to simplify the code slightly compared to the full
358
+ DeviceClass *dc = DEVICE_CLASS(klass);
180
+ * table in the ARM ARM.
359
+
181
+ */
360
+ dc->realize = xram_ctrl_realize;
182
+ pac = extract64(cr, 1, 2);
361
+ dc->vmsd = &vmstate_xram_ctrl;
183
+ hmc = extract64(cr, 13, 1);
362
+ device_class_set_props(dc, xram_ctrl_properties);
184
+ ssc = extract64(cr, 14, 2);
363
+
185
+
364
+ rc->phases.enter = xram_ctrl_reset_enter;
186
+ switch (ssc) {
365
+ rc->phases.hold = xram_ctrl_reset_hold;
187
+ case 0:
366
+}
188
+ break;
367
+
189
+ case 1:
368
+static const TypeInfo xram_ctrl_info = {
190
+ case 3:
369
+ .name = TYPE_XLNX_XRAM_CTRL,
191
+ if (is_secure) {
370
+ .parent = TYPE_SYS_BUS_DEVICE,
192
+ return false;
371
+ .instance_size = sizeof(XlnxXramCtrl),
193
+ }
372
+ .class_init = xram_ctrl_class_init,
194
+ break;
373
+ .instance_init = xram_ctrl_init,
195
+ case 2:
374
+ .instance_finalize = xram_ctrl_finalize,
196
+ if (!is_secure) {
375
+};
197
+ return false;
376
+
198
+ }
377
+static void xram_ctrl_register_types(void)
199
+ break;
378
+{
200
+ }
379
+ type_register_static(&xram_ctrl_info);
201
+
380
+}
202
+ switch (access_el) {
381
+
203
+ case 3:
382
+type_init(xram_ctrl_register_types)
204
+ case 2:
383
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
205
+ if (!hmc) {
206
+ return false;
207
+ }
208
+ break;
209
+ case 1:
210
+ if (extract32(pac, 0, 1) == 0) {
211
+ return false;
212
+ }
213
+ break;
214
+ case 0:
215
+ if (extract32(pac, 1, 1) == 0) {
216
+ return false;
217
+ }
218
+ break;
219
+ default:
220
+ g_assert_not_reached();
221
+ }
222
+
223
+ wt = extract64(cr, 20, 1);
224
+ lbn = extract64(cr, 16, 4);
225
+
226
+ if (wt && !linked_bp_matches(cpu, lbn)) {
227
+ return false;
228
+ }
229
+
230
+ return true;
231
+}
232
+
233
+static bool check_watchpoints(ARMCPU *cpu)
234
+{
235
+ CPUARMState *env = &cpu->env;
236
+ int n;
237
+
238
+ /*
239
+ * If watchpoints are disabled globally or we can't take debug
240
+ * exceptions here then watchpoint firings are ignored.
241
+ */
242
+ if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
243
+ || !arm_generate_debug_exceptions(env)) {
244
+ return false;
245
+ }
246
+
247
+ for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
248
+ if (bp_wp_matches(cpu, n, true)) {
249
+ return true;
250
+ }
251
+ }
252
+ return false;
253
+}
254
+
255
+static bool check_breakpoints(ARMCPU *cpu)
256
+{
257
+ CPUARMState *env = &cpu->env;
258
+ int n;
259
+
260
+ /*
261
+ * If breakpoints are disabled globally or we can't take debug
262
+ * exceptions here then breakpoint firings are ignored.
263
+ */
264
+ if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
265
+ || !arm_generate_debug_exceptions(env)) {
266
+ return false;
267
+ }
268
+
269
+ for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
270
+ if (bp_wp_matches(cpu, n, false)) {
271
+ return true;
272
+ }
273
+ }
274
+ return false;
275
+}
276
+
277
+void HELPER(check_breakpoints)(CPUARMState *env)
278
+{
279
+ ARMCPU *cpu = env_archcpu(env);
280
+
281
+ if (check_breakpoints(cpu)) {
282
+ HELPER(exception_internal(env, EXCP_DEBUG));
283
+ }
284
+}
285
+
286
+bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
287
+{
288
+ /*
289
+ * Called by core code when a CPU watchpoint fires; need to check if this
290
+ * is also an architectural watchpoint match.
291
+ */
292
+ ARMCPU *cpu = ARM_CPU(cs);
293
+
294
+ return check_watchpoints(cpu);
295
+}
296
+
297
+void arm_debug_excp_handler(CPUState *cs)
298
+{
299
+ /*
300
+ * Called by core code when a watchpoint or breakpoint fires;
301
+ * need to check which one and raise the appropriate exception.
302
+ */
303
+ ARMCPU *cpu = ARM_CPU(cs);
304
+ CPUARMState *env = &cpu->env;
305
+ CPUWatchpoint *wp_hit = cs->watchpoint_hit;
306
+
307
+ if (wp_hit) {
308
+ if (wp_hit->flags & BP_CPU) {
309
+ bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;
+ bool same_el = arm_debug_target_el(env) == arm_current_el(env);
+
+ cs->watchpoint_hit = NULL;
+
+ env->exception.fsr = arm_debug_exception_fsr(env);
+ env->exception.vaddress = wp_hit->hitaddr;
+ raise_exception(env, EXCP_DATA_ABORT,
+ syn_watchpoint(same_el, 0, wnr),
+ arm_debug_target_el(env));
+ }
+ } else {
+ uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
+ bool same_el = (arm_debug_target_el(env) == arm_current_el(env));
+
+ /*
+ * (1) GDB breakpoints should be handled first.
+ * (2) Do not raise a CPU exception if no CPU breakpoint has fired,
+ * since singlestep is also done by generating a debug internal
+ * exception.
+ */
+ if (cpu_breakpoint_test(cs, pc, BP_GDB)
+ || !cpu_breakpoint_test(cs, pc, BP_CPU)) {
+ return;
+ }
+
+ env->exception.fsr = arm_debug_exception_fsr(env);
+ /*
+ * FAR is UNKNOWN: clear vaddress to avoid potentially exposing
+ * values to the guest that it shouldn't be able to see at its
+ * exception/security level.
+ */
+ env->exception.vaddress = 0;
+ raise_exception(env, EXCP_PREFETCH_ABORT,
+ syn_breakpoint(same_el),
+ arm_debug_target_el(env));
+ }
+}
+
+#if !defined(CONFIG_USER_ONLY)
+
+vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
+{
+ ARMCPU *cpu = ARM_CPU(cs);
+ CPUARMState *env = &cpu->env;
+
+ /*
+ * In BE32 system mode, target memory is stored byteswapped (on a
+ * little-endian host system), and by the time we reach here (via an
+ * opcode helper) the addresses of subword accesses have been adjusted
+ * to account for that, which means that watchpoints will not match.
+ * Undo the adjustment here.
+ */
+ if (arm_sctlr_b(env)) {
+ if (len == 1) {
+ addr ^= 3;
+ } else if (len == 2) {
+ addr ^= 2;
+ }
+ }
+
+ return addr;
+}
+
+#endif
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(pre_smc)(CPUARMState *env, uint32_t syndrome)
}
}

-/* Return true if the linked breakpoint entry lbn passes its checks */
-static bool linked_bp_matches(ARMCPU *cpu, int lbn)
-{
- CPUARMState *env = &cpu->env;
- uint64_t bcr = env->cp15.dbgbcr[lbn];
- int brps = extract32(cpu->dbgdidr, 24, 4);
- int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
- int bt;
- uint32_t contextidr;
-
- /*
- * Links to unimplemented or non-context aware breakpoints are
- * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or
- * as if linked to an UNKNOWN context-aware breakpoint (in which
- * case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
- * We choose the former.
- */
- if (lbn > brps || lbn < (brps - ctx_cmps)) {
- return false;
- }
-
- bcr = env->cp15.dbgbcr[lbn];
-
- if (extract64(bcr, 0, 1) == 0) {
- /* Linked breakpoint disabled : generate no events */
- return false;
- }
-
- bt = extract64(bcr, 20, 4);
-
- /*
- * We match the whole register even if this is AArch32 using the
- * short descriptor format (in which case it holds both PROCID and ASID),
- * since we don't implement the optional v7 context ID masking.
- */
- contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
-
- switch (bt) {
- case 3: /* linked context ID match */
- if (arm_current_el(env) > 1) {
- /* Context matches never fire in EL2 or (AArch64) EL3 */
- return false;
- }
- return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
- case 5: /* linked address mismatch (reserved in AArch64) */
- case 9: /* linked VMID match (reserved if no EL2) */
- case 11: /* linked context ID and VMID match (reserved if no EL2) */
- default:
- /*
- * Links to Unlinked context breakpoints must generate no
- * events; we choose to do the same for reserved values too.
- */
- return false;
- }
-
- return false;
-}
-
-static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
-{
- CPUARMState *env = &cpu->env;
- uint64_t cr;
- int pac, hmc, ssc, wt, lbn;
- /*
- * Note that for watchpoints the check is against the CPU security
- * state, not the S/NS attribute on the offending data access.
- */
- bool is_secure = arm_is_secure(env);
- int access_el = arm_current_el(env);
-
- if (is_wp) {
- CPUWatchpoint *wp = env->cpu_watchpoint[n];
-
- if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) {
- return false;
- }
- cr = env->cp15.dbgwcr[n];
- if (wp->hitattrs.user) {
- /*
- * The LDRT/STRT/LDT/STT "unprivileged access" instructions should
- * match watchpoints as if they were accesses done at EL0, even if
- * the CPU is at EL1 or higher.
- */
- access_el = 0;
- }
- } else {
- uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
-
- if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc != pc) {
- return false;
- }
- cr = env->cp15.dbgbcr[n];
- }
- /*
- * The WATCHPOINT_HIT flag guarantees us that the watchpoint is
- * enabled and that the address and access type match; for breakpoints
- * we know the address matched; check the remaining fields, including
- * linked breakpoints. We rely on WCR and BCR having the same layout
- * for the LBN, SSC, HMC, PAC/PMC and is-linked fields.
- * Note that some combinations of {PAC, HMC, SSC} are reserved and
- * must act either like some valid combination or as if the watchpoint
- * were disabled. We choose the former, and use this together with
- * the fact that EL3 must always be Secure and EL2 must always be
- * Non-Secure to simplify the code slightly compared to the full
- * table in the ARM ARM.
- */
- pac = extract64(cr, 1, 2);
- hmc = extract64(cr, 13, 1);
- ssc = extract64(cr, 14, 2);
-
- switch (ssc) {
- case 0:
- break;
- case 1:
- case 3:
- if (is_secure) {
- return false;
- }
- break;
- case 2:
- if (!is_secure) {
- return false;
- }
- break;
- }
-
- switch (access_el) {
- case 3:
- case 2:
- if (!hmc) {
- return false;
- }
- break;
- case 1:
- if (extract32(pac, 0, 1) == 0) {
- return false;
- }
- break;
- case 0:
- if (extract32(pac, 1, 1) == 0) {
- return false;
- }
- break;
- default:
- g_assert_not_reached();
- }
-
- wt = extract64(cr, 20, 1);
- lbn = extract64(cr, 16, 4);
-
- if (wt && !linked_bp_matches(cpu, lbn)) {
- return false;
- }
-
- return true;
-}
-
-static bool check_watchpoints(ARMCPU *cpu)
-{
- CPUARMState *env = &cpu->env;
- int n;
-
- /*
- * If watchpoints are disabled globally or we can't take debug
- * exceptions here then watchpoint firings are ignored.
- */
- if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
- || !arm_generate_debug_exceptions(env)) {
- return false;
- }
-
- for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
- if (bp_wp_matches(cpu, n, true)) {
- return true;
- }
- }
- return false;
-}
-
-static bool check_breakpoints(ARMCPU *cpu)
-{
- CPUARMState *env = &cpu->env;
- int n;
-
- /*
- * If breakpoints are disabled globally or we can't take debug
- * exceptions here then breakpoint firings are ignored.
- */
- if (extract32(env->cp15.mdscr_el1, 15, 1) == 0
- || !arm_generate_debug_exceptions(env)) {
- return false;
- }
-
- for (n = 0; n < ARRAY_SIZE(env->cpu_breakpoint); n++) {
- if (bp_wp_matches(cpu, n, false)) {
- return true;
- }
- }
- return false;
-}
-
-void HELPER(check_breakpoints)(CPUARMState *env)
-{
- ARMCPU *cpu = env_archcpu(env);
-
- if (check_breakpoints(cpu)) {
- HELPER(exception_internal(env, EXCP_DEBUG));
- }
-}
-
-bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp)
-{
- /*
- * Called by core code when a CPU watchpoint fires; need to check if this
- * is also an architectural watchpoint match.
- */
- ARMCPU *cpu = ARM_CPU(cs);
-
- return check_watchpoints(cpu);
-}
-
-vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
-{
- ARMCPU *cpu = ARM_CPU(cs);
- CPUARMState *env = &cpu->env;
-
- /*
- * In BE32 system mode, target memory is stored byteswapped (on a
- * little-endian host system), and by the time we reach here (via an
- * opcode helper) the addresses of subword accesses have been adjusted
- * to account for that, which means that watchpoints will not match.
- * Undo the adjustment here.
- */
- if (arm_sctlr_b(env)) {
- if (len == 1) {
- addr ^= 3;
- } else if (len == 2) {
- addr ^= 2;
- }
- }
-
- return addr;
-}
-
-void arm_debug_excp_handler(CPUState *cs)
-{
- /*
- * Called by core code when a watchpoint or breakpoint fires;
- * need to check which one and raise the appropriate exception.
- */
- ARMCPU *cpu = ARM_CPU(cs);
- CPUARMState *env = &cpu->env;
- CPUWatchpoint *wp_hit = cs->watchpoint_hit;
-
- if (wp_hit) {
- if (wp_hit->flags & BP_CPU) {
- bool wnr = (wp_hit->flags & BP_WATCHPOINT_HIT_WRITE) != 0;
- bool same_el = arm_debug_target_el(env) == arm_current_el(env);
-
- cs->watchpoint_hit = NULL;
-
- env->exception.fsr = arm_debug_exception_fsr(env);
- env->exception.vaddress = wp_hit->hitaddr;
- raise_exception(env, EXCP_DATA_ABORT,
- syn_watchpoint(same_el, 0, wnr),
- arm_debug_target_el(env));
- }
- } else {
- uint64_t pc = is_a64(env) ? env->pc : env->regs[15];
- bool same_el = (arm_debug_target_el(env) == arm_current_el(env));
-
- /*
- * (1) GDB breakpoints should be handled first.
- * (2) Do not raise a CPU exception if no CPU breakpoint has fired,
- * since singlestep is also done by generating a debug internal
- * exception.
- */
- if (cpu_breakpoint_test(cs, pc, BP_GDB)
- || !cpu_breakpoint_test(cs, pc, BP_CPU)) {
- return;
- }
-
- env->exception.fsr = arm_debug_exception_fsr(env);
- /*
- * FAR is UNKNOWN: clear vaddress to avoid potentially exposing
- * values to the guest that it shouldn't be able to see at its
- * exception/security level.
- */
- env->exception.vaddress = 0;
- raise_exception(env, EXCP_PREFETCH_ABORT,
- syn_breakpoint(same_el),
- arm_debug_target_el(env));
- }
-}
-
/* ??? Flag setting arithmetic is awkward because we need to do comparisons.
The only way to do that in TCG is a conditional branch, which clobbers
all our temporaries. For now implement these as helper functions. */
--
2.20.1
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_RASPI', if_true: files(
))
softmmu_ss.add(when: 'CONFIG_SLAVIO', if_true: files('slavio_misc.c'))
softmmu_ss.add(when: 'CONFIG_ZYNQ', if_true: files('zynq_slcr.c', 'zynq-xadc.c'))
+softmmu_ss.add(when: 'CONFIG_XLNX_VERSAL', if_true: files('xlnx-versal-xramc.c'))
softmmu_ss.add(when: 'CONFIG_STM32F2XX_SYSCFG', if_true: files('stm32f2xx_syscfg.c'))
softmmu_ss.add(when: 'CONFIG_STM32F4XX_SYSCFG', if_true: files('stm32f4xx_syscfg.c'))
softmmu_ss.add(when: 'CONFIG_STM32F4XX_EXTI', if_true: files('stm32f4xx_exti.c'))
--
2.20.1
diff view generated by jsdifflib
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Connect the support for the Versal Accelerator RAMs (XRAMs).

Reviewed-by: Luc Michel <luc@lmichel.fr>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20210308224637.2949533-3-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/xlnx-versal-virt.rst | 1 +
include/hw/arm/xlnx-versal.h | 13 ++++++++++
hw/arm/xlnx-versal.c | 36 ++++++++++++++++++++++++++++
3 files changed, 50 insertions(+)

diff --git a/docs/system/arm/xlnx-versal-virt.rst b/docs/system/arm/xlnx-versal-virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/xlnx-versal-virt.rst
+++ b/docs/system/arm/xlnx-versal-virt.rst
@@ -XXX,XX +XXX,XX @@ Implemented devices:
- 8 ADMA (Xilinx zDMA) channels
- 2 SD Controllers
- OCM (256KB of On Chip Memory)
+- XRAM (4MB of on chip Accelerator RAM)
- DDR memory

QEMU does not yet model any other devices, including the PL and the AI Engine.
diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@

#include "hw/sysbus.h"
#include "hw/arm/boot.h"
+#include "hw/or-irq.h"
#include "hw/sd/sdhci.h"
#include "hw/intc/arm_gicv3.h"
#include "hw/char/pl011.h"
@@ -XXX,XX +XXX,XX @@
#include "hw/rtc/xlnx-zynqmp-rtc.h"
#include "qom/object.h"
#include "hw/usb/xlnx-usb-subsystem.h"
+#include "hw/misc/xlnx-versal-xramc.h"

#define TYPE_XLNX_VERSAL "xlnx-versal"
OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(Versal, XLNX_VERSAL)
#define XLNX_VERSAL_NR_GEMS 2
#define XLNX_VERSAL_NR_ADMAS 8
#define XLNX_VERSAL_NR_SDS 2
+#define XLNX_VERSAL_NR_XRAM 4
#define XLNX_VERSAL_NR_IRQS 192

struct Versal {
@@ -XXX,XX +XXX,XX @@ struct Versal {
XlnxZDMA adma[XLNX_VERSAL_NR_ADMAS];
VersalUsb2 usb;
} iou;
+
+ struct {
+ qemu_or_irq irq_orgate;
+ XlnxXramCtrl ctrl[XLNX_VERSAL_NR_XRAM];
+ } xram;
} lpd;

/* The Platform Management Controller subsystem. */
@@ -XXX,XX +XXX,XX @@ struct Versal {
#define VERSAL_GEM1_IRQ_0 58
#define VERSAL_GEM1_WAKE_IRQ_0 59
#define VERSAL_ADMA_IRQ_0 60
+#define VERSAL_XRAM_IRQ_0 79
#define VERSAL_RTC_APB_ERR_IRQ 121
#define VERSAL_SD0_IRQ_0 126
#define VERSAL_RTC_ALARM_IRQ 142
@@ -XXX,XX +XXX,XX @@ struct Versal {
#define MM_OCM 0xfffc0000U
#define MM_OCM_SIZE 0x40000

+#define MM_XRAM 0xfe800000
+#define MM_XRAMC 0xff8e0000
+#define MM_XRAMC_SIZE 0x10000
+
#define MM_USB2_CTRL_REGS 0xFF9D0000
#define MM_USB2_CTRL_REGS_SIZE 0x10000

diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@
*/

#include "qemu/osdep.h"
+#include "qemu/units.h"
#include "qapi/error.h"
#include "qemu/log.h"
#include "qemu/module.h"
@@ -XXX,XX +XXX,XX @@ static void versal_create_rtc(Versal *s, qemu_irq *pic)
sysbus_connect_irq(sbd, 1, pic[VERSAL_RTC_APB_ERR_IRQ]);
}

+static void versal_create_xrams(Versal *s, qemu_irq *pic)
+{
+ int nr_xrams = ARRAY_SIZE(s->lpd.xram.ctrl);
+ DeviceState *orgate;
+ int i;
+
+ /* XRAM IRQs get ORed into a single line. */
+ object_initialize_child(OBJECT(s), "xram-irq-orgate",
+ &s->lpd.xram.irq_orgate, TYPE_OR_IRQ);
+ orgate = DEVICE(&s->lpd.xram.irq_orgate);
+ object_property_set_int(OBJECT(orgate),
+ "num-lines", nr_xrams, &error_fatal);
+ qdev_realize(orgate, NULL, &error_fatal);
+ qdev_connect_gpio_out(orgate, 0, pic[VERSAL_XRAM_IRQ_0]);
+
+ for (i = 0; i < ARRAY_SIZE(s->lpd.xram.ctrl); i++) {
+ SysBusDevice *sbd;
+ MemoryRegion *mr;
+
+ object_initialize_child(OBJECT(s), "xram[*]", &s->lpd.xram.ctrl[i],
+ TYPE_XLNX_XRAM_CTRL);
+ sbd = SYS_BUS_DEVICE(&s->lpd.xram.ctrl[i]);
+ sysbus_realize(sbd, &error_fatal);
+
+ mr = sysbus_mmio_get_region(sbd, 0);
+ memory_region_add_subregion(&s->mr_ps,
+ MM_XRAMC + i * MM_XRAMC_SIZE, mr);
+ mr = sysbus_mmio_get_region(sbd, 1);
+ memory_region_add_subregion(&s->mr_ps, MM_XRAM + i * MiB, mr);
+
+ sysbus_connect_irq(sbd, 0, qdev_get_gpio_in(orgate, i));
+ }
+}
+
/* This takes the board allocated linear DDR memory and creates aliases
* for each split DDR range/aperture on the Versal address map.
*/
@@ -XXX,XX +XXX,XX @@ static void versal_realize(DeviceState *dev, Error **errp)
versal_create_admas(s, pic);
versal_create_sds(s, pic);
versal_create_rtc(s, pic);
+ versal_create_xrams(s, pic);
versal_map_ddr(s);
versal_unimp(s);

--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

With -Werror=maybe-uninitialized configuration we get
../hw/i386/intel_iommu.c: In function ‘vtd_context_device_invalidate’:
../hw/i386/intel_iommu.c:1888:10: error: ‘mask’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
1888 | mask = ~mask;
| ~~~~~^~~~~~~

Add a g_assert_not_reached() to avoid the error.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210309102742.30442-2-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/i386/intel_iommu.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@ static void vtd_context_device_invalidate(IntelIOMMUState *s,
case 3:
mask = 7; /* Mask bit 2:0 in the SID field */
break;
+ default:
+ g_assert_not_reached();
}
mask = ~mask;

--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

Currently get_naturally_aligned_size() is used by the intel iommu
to compute the maximum invalidation range based on @size which is
a power of 2 while being aligned with the @start address and less
than the maximum range defined by @gaw.

This helper is also useful for other iommu devices (virtio-iommu,
SMMUv3) to make sure IOMMU UNMAP notifiers only are called with
power of 2 range sizes.

Let's move this latter into dma-helpers.c and rename it into
dma_aligned_pow2_mask(). Also rewrite the helper so that it
accomodates UINT64_MAX values for the size mask and max mask.
It now returns a mask instead of a size. Change the caller.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-id: 20210309102742.30442-3-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/sysemu/dma.h | 12 ++++++++++++
hw/i386/intel_iommu.c | 30 +++++++-----------------------
softmmu/dma-helpers.c | 26 ++++++++++++++++++++++++++
3 files changed, 45 insertions(+), 23 deletions(-)

diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ uint64_t dma_buf_write(uint8_t *ptr, int32_t len, QEMUSGList *sg);
void dma_acct_start(BlockBackend *blk, BlockAcctCookie *cookie,
QEMUSGList *sg, enum BlockAcctType type);

+/**
+ * dma_aligned_pow2_mask: Return the address bit mask of the largest
+ * power of 2 size less or equal than @end - @start + 1, aligned with @start,
+ * and bounded by 1 << @max_addr_bits bits.
+ *
+ * @start: range start address
+ * @end: range end address (greater than @start)
+ * @max_addr_bits: max address bits (<= 64)
+ */
+uint64_t dma_aligned_pow2_mask(uint64_t start, uint64_t end,
+ int max_addr_bits);
+
#endif
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -XXX,XX +XXX,XX @@
#include "hw/i386/x86-iommu.h"
#include "hw/pci-host/q35.h"
#include "sysemu/kvm.h"
+#include "sysemu/dma.h"
#include "sysemu/sysemu.h"
#include "hw/i386/apic_internal.h"
#include "kvm/kvm_i386.h"
@@ -XXX,XX +XXX,XX @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus, int devfn)
return vtd_dev_as;
}

-static uint64_t get_naturally_aligned_size(uint64_t start,
- uint64_t size, int gaw)
-{
- uint64_t max_mask = 1ULL << gaw;
- uint64_t alignment = start ? start & -start : max_mask;
-
- alignment = MIN(alignment, max_mask);
- size = MIN(size, max_mask);
-
- if (alignment <= size) {
- /* Increase the alignment of start */
- return alignment;
- } else {
- /* Find the largest page mask from size */
- return 1ULL << (63 - clz64(size));
- }
-}
-
/* Unmap the whole range in the notifier's scope. */
static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
{
@@ -XXX,XX +XXX,XX @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)

while (remain >= VTD_PAGE_SIZE) {
IOMMUTLBEvent event;
- uint64_t mask = get_naturally_aligned_size(start, remain, s->aw_bits);
+ uint64_t mask = dma_aligned_pow2_mask(start, end, s->aw_bits);
+ uint64_t size = mask + 1;

- assert(mask);
+ assert(size);

event.type = IOMMU_NOTIFIER_UNMAP;
event.entry.iova = start;
- event.entry.addr_mask = mask - 1;
+ event.entry.addr_mask = mask;
event.entry.target_as = &address_space_memory;
event.entry.perm = IOMMU_NONE;
/* This field is meaningless for unmap */
@@ -XXX,XX +XXX,XX @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)

memory_region_notify_iommu_one(n, &event);

- start += mask;
- remain -= mask;
+ start += size;
+ remain -= size;
}

assert(!remain);
diff --git a/softmmu/dma-helpers.c b/softmmu/dma-helpers.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/dma-helpers.c
+++ b/softmmu/dma-helpers.c
@@ -XXX,XX +XXX,XX @@ void dma_acct_start(BlockBackend *blk, BlockAcctCookie *cookie,
{
block_acct_start(blk_get_stats(blk), cookie, sg->size, type);
}
+
+uint64_t dma_aligned_pow2_mask(uint64_t start, uint64_t end, int max_addr_bits)
+{
+ uint64_t max_mask = UINT64_MAX, addr_mask = end - start;
+ uint64_t alignment_mask, size_mask;
+
+ if (max_addr_bits != 64) {
+ max_mask = (1ULL << max_addr_bits) - 1;
+ }
+
+ alignment_mask = start ? (start & -start) - 1 : max_mask;
+ alignment_mask = MIN(alignment_mask, max_mask);
+ size_mask = MIN(addr_mask, max_mask);
+
+ if (alignment_mask <= size_mask) {
+ /* Increase the alignment of start */
+ return alignment_mask;
+ } else {
+ /* Find the largest page mask from size */
+ if (addr_mask == UINT64_MAX) {
+ return UINT64_MAX;
+ }
+ return (1ULL << (63 - clz64(addr_mask + 1))) - 1;
+ }
+}
+
--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

Unmap notifiers work with an address mask assuming an
invalidation range of a power of 2. Nothing mandates this
in the VIRTIO-IOMMU spec.

So in case the range is not a power of 2, split it into
several invalidations.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-id: 20210309102742.30442-4-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/virtio/virtio-iommu.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -XXX,XX +XXX,XX @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr virt_start,
hwaddr virt_end)
{
IOMMUTLBEvent event;
+ uint64_t delta = virt_end - virt_start;

if (!(mr->iommu_notify_flags & IOMMU_NOTIFIER_UNMAP)) {
return;
@@ -XXX,XX +XXX,XX @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr virt_start,

event.type = IOMMU_NOTIFIER_UNMAP;
event.entry.target_as = &address_space_memory;
- event.entry.addr_mask = virt_end - virt_start;
- event.entry.iova = virt_start;
event.entry.perm = IOMMU_NONE;
event.entry.translated_addr = 0;
+ event.entry.addr_mask = delta;
+ event.entry.iova = virt_start;

- memory_region_notify_iommu(mr, 0, event);
+ if (delta == UINT64_MAX) {
+ memory_region_notify_iommu(mr, 0, event);
+ }
+
+
+ while (virt_start != virt_end + 1) {
+ uint64_t mask = dma_aligned_pow2_mask(virt_start, virt_end, 64);
+
+ event.entry.addr_mask = mask;
+ event.entry.iova = virt_start;
+ memory_region_notify_iommu(mr, 0, event);
+ virt_start += mask + 1;
+ }
}

static gboolean virtio_iommu_notify_unmap_cb(gpointer key, gpointer value,
--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

If the asid is not set, do not attempt to locate the key directly
as all inserted keys have a valid asid.

Use g_hash_table_foreach_remove instead.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20210309102742.30442-5-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/smmu-common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ inline void
smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl)
{
- if (ttl && (num_pages == 1)) {
+ if (ttl && (num_pages == 1) && (asid >= 0)) {
SMMUIOTLBKey key = smmu_get_iotlb_key(asid, iova, tg, ttl);

g_hash_table_remove(s->iotlb, &key);
--
2.20.1
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Per Peter Maydell:

Semihosting hooks either SVC or HLT instructions, and inside KVM
both of those go to EL1, ie to the guest, and can't be trapped to
KVM.

Let check_for_semihosting() return False when not running on TCG.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190701194942.10092-3-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/Makefile.objs | 2 +-
target/arm/cpu.h | 7 +++++++
target/arm/helper.c | 8 +++++++-
3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -XXX,XX +XXX,XX @@
-obj-y += arm-semi.o
+obj-$(CONFIG_TCG) += arm-semi.o
obj-y += helper.o vfp_helper.o
obj-y += cpu.o gdbstub.o
obj-$(TARGET_AARCH64) += cpu64.o gdbstub64.o
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void aarch64_sve_change_el(CPUARMState *env, int o,
{ }
#endif

+#if !defined(CONFIG_TCG)
+static inline target_ulong do_arm_semihosting(CPUARMState *env)
+{
+ g_assert_not_reached();
+}
+#else
target_ulong do_arm_semihosting(CPUARMState *env);
+#endif
void aarch64_sync_32_to_64(CPUARMState *env);
void aarch64_sync_64_to_32(CPUARMState *env);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/qemu-print.h"
#include "exec/exec-all.h"
#include "exec/cpu_ldst.h"
-#include "arm_ldst.h"
#include <zlib.h> /* For crc32 */
#include "hw/semihosting/semihost.h"
#include "sysemu/cpus.h"
@@ -XXX,XX +XXX,XX @@
#include "qapi/qapi-commands-machine-target.h"
#include "qapi/error.h"
#include "qemu/guest-random.h"
+#ifdef CONFIG_TCG
+#include "arm_ldst.h"
+#endif

#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */

@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)

static inline bool check_for_semihosting(CPUState *cs)
{
+#ifdef CONFIG_TCG
/* Check whether this exception is a semihosting call; if so
* then handle it and return true; otherwise return false.
*/
@@ -XXX,XX +XXX,XX @@ static inline bool check_for_semihosting(CPUState *cs)
env->regs[0] = do_arm_semihosting(env);
return true;
}
}
+#else
+ return false;
+#endif
}

/* Handle a CPU exception for A and R profile CPUs.
--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

As of today, the driver can invalidate a number of pages that is
not a power of 2. However IOTLB unmap notifications and internal
IOTLB invalidations work with masks leading to erroneous
invalidations.

In case the range is not a power of 2, split invalidations into
power of 2 invalidations.

When looking for a single page entry in the vSMMU internal IOTLB,
let's make sure that if the entry is not found using a
g_hash_table_remove() we iterate over all the entries to find a
potential range that overlaps it.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20210309102742.30442-6-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/smmu-common.c | 30 ++++++++++++++++++------------
hw/arm/smmuv3.c | 24 ++++++++++++++++++++----
2 files changed, 38 insertions(+), 16 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ inline void
smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
uint8_t tg, uint64_t num_pages, uint8_t ttl)
{
+ /* if tg is not set we use 4KB range invalidation */
+ uint8_t granule = tg ? tg * 2 + 10 : 12;
+
if (ttl && (num_pages == 1) && (asid >= 0)) {
SMMUIOTLBKey key = smmu_get_iotlb_key(asid, iova, tg, ttl);

- g_hash_table_remove(s->iotlb, &key);
- } else {
- /* if tg is not set we use 4KB range invalidation */
- uint8_t granule = tg ? tg * 2 + 10 : 12;
-
- SMMUIOTLBPageInvInfo info = {
- .asid = asid, .iova = iova,
- .mask = (num_pages * 1 << granule) - 1};
-
- g_hash_table_foreach_remove(s->iotlb,
- smmu_hash_remove_by_asid_iova,
- &info);
+ if (g_hash_table_remove(s->iotlb, &key)) {
+ return;
+ }
+ /*
+ * if the entry is not found, let's see if it does not
+ * belong to a larger IOTLB entry
+ */
}
+
+ SMMUIOTLBPageInvInfo info = {
+ .asid = asid, .iova = iova,
+ .mask = (num_pages * 1 << granule) - 1};
+
+ g_hash_table_foreach_remove(s->iotlb,
+ smmu_hash_remove_by_asid_iova,
+ &info);
}

inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
uint16_t vmid = CMD_VMID(cmd);
bool leaf = CMD_LEAF(cmd);
uint8_t tg = CMD_TG(cmd);
- hwaddr num_pages = 1;
+ uint64_t first_page = 0, last_page;
+ uint64_t num_pages = 1;
int asid = -1;

if (tg) {
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
if (type == SMMU_CMD_TLBI_NH_VA) {
asid = CMD_ASID(cmd);
}
- trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
- smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
- smmu_iotlb_inv_iova(s, asid, addr, tg, num_pages, ttl);
+
+ /* Split invalidations into ^2 range invalidations */
+ last_page = num_pages - 1;
+ while (num_pages) {
+ uint8_t granule = tg * 2 + 10;
+ uint64_t mask, count;
+
+ mask = dma_aligned_pow2_mask(first_page, last_page, 64 - granule);
+ count = mask + 1;
+
+ trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, count, ttl, leaf);
+ smmuv3_inv_notifiers_iova(s, asid, addr, tg, count);
+ smmu_iotlb_inv_iova(s, asid, addr, tg, count, ttl);
+
+ num_pages -= count;
+ first_page += count;
+ addr += count * BIT_ULL(granule);
+ }
}

static int smmuv3_cmdq_consume(SMMUv3State *s)
--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

If the whole SID range (32b) is invalidated (SMMU_CMD_CFGI_ALL),
@end overflows and we fail to handle the command properly.

Once this gets fixed, the current code really is awkward in the
sense it loops over the whole range instead of removing the
currently cached configs through a hash table lookup.

Fix both the overflow and the lookup.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210309102742.30442-7-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-internal.h |  5 +++++
 hw/arm/smmuv3.c        | 34 ++++++++++++++++++++--------------
 2 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-internal.h
+++ b/hw/arm/smmu-internal.h
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUIOTLBPageInvInfo {
     uint64_t mask;
 } SMMUIOTLBPageInvInfo;

+typedef struct SMMUSIDRange {
+    uint32_t start;
+    uint32_t end;
+} SMMUSIDRange;
+
 #endif
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@

 #include "hw/arm/smmuv3.h"
 #include "smmuv3-internal.h"
+#include "smmu-internal.h"

 /**
  * smmuv3_trigger_irq - pulse @irq if enabled and update
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
     }
 }

+static gboolean
+smmuv3_invalidate_ste(gpointer key, gpointer value, gpointer user_data)
+{
+    SMMUDevice *sdev = (SMMUDevice *)key;
+    uint32_t sid = smmu_get_sid(sdev);
+    SMMUSIDRange *sid_range = (SMMUSIDRange *)user_data;
+
+    if (sid < sid_range->start || sid > sid_range->end) {
+        return false;
+    }
+    trace_smmuv3_config_cache_inv(sid);
+    return true;
+}
+
 static int smmuv3_cmdq_consume(SMMUv3State *s)
 {
     SMMUState *bs = ARM_SMMU(s);
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
         }
         case SMMU_CMD_CFGI_STE_RANGE: /* same as SMMU_CMD_CFGI_ALL */
         {
-            uint32_t start = CMD_SID(&cmd), end, i;
+            uint32_t start = CMD_SID(&cmd);
             uint8_t range = CMD_STE_RANGE(&cmd);
+            uint64_t end = start + (1ULL << (range + 1)) - 1;
+            SMMUSIDRange sid_range = {start, end};

             if (CMD_SSEC(&cmd)) {
                 cmd_error = SMMU_CERROR_ILL;
                 break;
             }
-
-            end = start + (1 << (range + 1)) - 1;
             trace_smmuv3_cmdq_cfgi_ste_range(start, end);
-
-            for (i = start; i <= end; i++) {
-                IOMMUMemoryRegion *mr = smmu_iommu_mr(bs, i);
-                SMMUDevice *sdev;
-
-                if (!mr) {
-                    continue;
-                }
-                sdev = container_of(mr, SMMUDevice, iommu);
-                smmuv3_flush_config(sdev);
-            }
+            g_hash_table_foreach_remove(bs->configs, smmuv3_invalidate_ste,
+                                        &sid_range);
             break;
         }
         case SMMU_CMD_CFGI_CD:
--
2.20.1
From: Eric Auger <eric.auger@redhat.com>

Convert all sid printouts to sid=0x%x.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210309102742.30442-8-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/trace-events | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
 smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
 smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
 smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
-smmuv3_record_event(const char *type, uint32_t sid) "%s sid=%d"
-smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "SID:0x%x features:0x%x, sid_split:0x%x"
+smmuv3_record_event(const char *type, uint32_t sid) "%s sid=0x%x"
+smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "sid=0x%x features:0x%x, sid_split:0x%x"
 smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%"PRIx64" l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
 smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
-smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
-smmuv3_translate_bypass(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d STE bypass iova:0x%"PRIx64" is_write=%d"
-smmuv3_translate_abort(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d abort on iova:0x%"PRIx64" is_write=%d"
-smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=%d iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
+smmuv3_translate_disable(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x bypass (smmu disabled) iova:0x%"PRIx64" is_write=%d"
+smmuv3_translate_bypass(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x STE bypass iova:0x%"PRIx64" is_write=%d"
+smmuv3_translate_abort(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=0x%x abort on iova:0x%"PRIx64" is_write=%d"
+smmuv3_translate_success(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=0x%x iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
 smmuv3_get_cd(uint64_t addr) "CD addr: 0x%"PRIx64
 smmuv3_decode_cd(uint32_t oas) "oas=%d"
 smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz, bool had) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d had:%d"
-smmuv3_cmdq_cfgi_ste(int streamid) "streamid =%d"
+smmuv3_cmdq_cfgi_ste(int streamid) "streamid= 0x%x"
 smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%x - end=0x%x"
-smmuv3_cmdq_cfgi_cd(uint32_t sid) "streamid = %d"
-smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid %d (hits=%d, misses=%d, hit rate=%d)"
-smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid %d (hits=%d, misses=%d, hit rate=%d)"
-smmuv3_s1_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf) "vmid =%d asid =%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d"
+smmuv3_cmdq_cfgi_cd(uint32_t sid) "sid=0x%x"
+smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
+smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
+smmuv3_s1_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d"
 smmuv3_cmdq_tlbi_nh(void) ""
 smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
-smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
+smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
 smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
 smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
 smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint64_t iova, uint8_t tg, uint64_t num_pages) "iommu mr=%s asid=%d iova=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Missed out on compressing the second half of a predicate
with length vl % 512 > 256.

Adjust all of the x + (y << s) to x | (y << s) as a
general style fix.  Drop the extract64 because the input
uint64_t are known to be already zero-extended from the
current size of the predicate.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 30 +++++++++++++++++++++---------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
     if (oprsz <= 8) {
         l = compress_bits(n[0] >> odd, esz);
         h = compress_bits(m[0] >> odd, esz);
-        d[0] = extract64(l + (h << (4 * oprsz)), 0, 8 * oprsz);
+        d[0] = l | (h << (4 * oprsz));
     } else {
         ARMPredicateReg tmp_m;
         intptr_t oprsz_16 = oprsz / 16;
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
             h = n[2 * i + 1];
             l = compress_bits(l >> odd, esz);
             h = compress_bits(h >> odd, esz);
-            d[i] = l + (h << 32);
+            d[i] = l | (h << 32);
         }

-        /* For VL which is not a power of 2, the results from M do not
-           align nicely with the uint64_t for D.  Put the aligned results
-           from M into TMP_M and then copy it into place afterward.  */
+        /*
+         * For VL which is not a multiple of 512, the results from M do not
+         * align nicely with the uint64_t for D.  Put the aligned results
+         * from M into TMP_M and then copy it into place afterward.
+         */
         if (oprsz & 15) {
-            d[i] = compress_bits(n[2 * i] >> odd, esz);
+            int final_shift = (oprsz & 15) * 2;
+
+            l = n[2 * i + 0];
+            h = n[2 * i + 1];
+            l = compress_bits(l >> odd, esz);
+            h = compress_bits(h >> odd, esz);
+            d[i] = l | (h << final_shift);

             for (i = 0; i < oprsz_16; i++) {
                 l = m[2 * i + 0];
                 h = m[2 * i + 1];
                 l = compress_bits(l >> odd, esz);
                 h = compress_bits(h >> odd, esz);
-                tmp_m.p[i] = l + (h << 32);
+                tmp_m.p[i] = l | (h << 32);
             }
-            tmp_m.p[i] = compress_bits(m[2 * i] >> odd, esz);
+            l = m[2 * i + 0];
+            h = m[2 * i + 1];
+            l = compress_bits(l >> odd, esz);
+            h = compress_bits(h >> odd, esz);
+            tmp_m.p[i] = l | (h << final_shift);

             swap_memmove(vd + oprsz / 2, &tmp_m, oprsz / 2);
         } else {
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
                 h = m[2 * i + 1];
                 l = compress_bits(l >> odd, esz);
                 h = compress_bits(h >> odd, esz);
-                d[oprsz_16 + i] = l + (h << 32);
+                d[oprsz_16 + i] = l | (h << 32);
             }
         }
     }
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Wrote too much with low-half zip (zip1) with vl % 512 != 0.

Adjust all of the x + (y << s) to x | (y << s) as a style fix.

We only ever have exact overlap between D, M, and N.  Therefore
we only need a single temporary, and we do not need to check for
partial overlap.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
     intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
     intptr_t high = FIELD_EX32(pred_desc, PREDDESC, DATA);
+    int esize = 1 << esz;
     uint64_t *d = vd;
     intptr_t i;

@@ -XXX,XX +XXX,XX @@ void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
         mm = extract64(mm, high * half, half);
         nn = expand_bits(nn, esz);
         mm = expand_bits(mm, esz);
-        d[0] = nn + (mm << (1 << esz));
+        d[0] = nn | (mm << esize);
     } else {
-        ARMPredicateReg tmp_n, tmp_m;
+        ARMPredicateReg tmp;

         /* We produce output faster than we consume input.
            Therefore we must be mindful of possible overlap. */
-        if ((vn - vd) < (uintptr_t)oprsz) {
-            vn = memcpy(&tmp_n, vn, oprsz);
-        }
-        if ((vm - vd) < (uintptr_t)oprsz) {
-            vm = memcpy(&tmp_m, vm, oprsz);
+        if (vd == vn) {
+            vn = memcpy(&tmp, vn, oprsz);
+            if (vd == vm) {
+                vm = vn;
+            }
+        } else if (vd == vm) {
+            vm = memcpy(&tmp, vm, oprsz);
         }
         if (high) {
             high = oprsz >> 1;
         }

-        if ((high & 3) == 0) {
+        if ((oprsz & 7) == 0) {
             uint32_t *n = vn, *m = vm;
             high >>= 2;

-            for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
+            for (i = 0; i < oprsz / 8; i++) {
                 uint64_t nn = n[H4(high + i)];
                 uint64_t mm = m[H4(high + i)];

                 nn = expand_bits(nn, esz);
                 mm = expand_bits(mm, esz);
-                d[i] = nn + (mm << (1 << esz));
+                d[i] = nn | (mm << esize);
             }
         } else {
             uint8_t *n = vn, *m = vm;
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)

                 nn = expand_bits(nn, esz);
                 mm = expand_bits(mm, esz);
-                d16[H2(i)] = nn + (mm << (1 << esz));
+                d16[H2(i)] = nn | (mm << esize);
             }
         }
     }
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Wrote too much with punpk1 with vl % 512 != 0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_punpk_p)(void *vd, void *vn, uint32_t pred_desc)
         high = oprsz >> 1;
     }

-    if ((high & 3) == 0) {
+    if ((oprsz & 7) == 0) {
         uint32_t *n = vn;
         high >>= 2;

-        for (i = 0; i < DIV_ROUND_UP(oprsz, 8); i++) {
+        for (i = 0; i < oprsz / 8; i++) {
             uint64_t nn = n[H4(high + i)];
             d[i] = expand_bits(nn, 0);
         }
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Since b64ee454a4a0, all predicate operations should be
using these field macros for predicates.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    | 6 +++---
 target/arm/translate-sve.c | 7 +++----
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_compact_d)(void *vd, void *vn, void *vg, uint32_t desc)
  */
 int32_t HELPER(sve_last_active_element)(void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
-    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    intptr_t words = DIV_ROUND_UP(FIELD_EX32(pred_desc, PREDDESC, OPRSZ), 8);
+    intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);

-    return last_active_element(vg, DIV_ROUND_UP(oprsz, 8), esz);
+    return last_active_element(vg, words, esz);
 }

 void HELPER(sve_splice)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void find_last_active(DisasContext *s, TCGv_i32 ret, int esz, int pg)
      */
     TCGv_ptr t_p = tcg_temp_new_ptr();
     TCGv_i32 t_desc;
-    unsigned vsz = pred_full_reg_size(s);
-    unsigned desc;
+    unsigned desc = 0;

-    desc = vsz - 2;
-    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+    desc = FIELD_DP32(desc, PREDDESC, OPRSZ, pred_full_reg_size(s));
+    desc = FIELD_DP32(desc, PREDDESC, ESZ, esz);

     tcg_gen_addi_ptr(t_p, cpu_env, pred_full_reg_offset(s, pg));
     t_desc = tcg_const_i32(desc);
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Since b64ee454a4a0, all predicate operations should be
using these field macros for predicates.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    | 30 ++++++++++++++----------------
 target/arm/translate-sve.c |  4 ++--
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t do_zero(ARMPredicateReg *d, intptr_t oprsz)
 void HELPER(sve_brkpa)(void *vd, void *vn, void *vm, void *vg,
                        uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     if (last_active_pred(vn, vg, oprsz)) {
         compute_brk_z(vd, vm, vg, oprsz, true);
     } else {
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_brkpa)(void *vd, void *vn, void *vm, void *vg,
 uint32_t HELPER(sve_brkpas)(void *vd, void *vn, void *vm, void *vg,
                             uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     if (last_active_pred(vn, vg, oprsz)) {
         return compute_brks_z(vd, vm, vg, oprsz, true);
     } else {
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkpas)(void *vd, void *vn, void *vm, void *vg,
 void HELPER(sve_brkpb)(void *vd, void *vn, void *vm, void *vg,
                        uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     if (last_active_pred(vn, vg, oprsz)) {
         compute_brk_z(vd, vm, vg, oprsz, false);
     } else {
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_brkpb)(void *vd, void *vn, void *vm, void *vg,
 uint32_t HELPER(sve_brkpbs)(void *vd, void *vn, void *vm, void *vg,
                             uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     if (last_active_pred(vn, vg, oprsz)) {
         return compute_brks_z(vd, vm, vg, oprsz, false);
     } else {
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkpbs)(void *vd, void *vn, void *vm, void *vg,

 void HELPER(sve_brka_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     compute_brk_z(vd, vn, vg, oprsz, true);
 }

 uint32_t HELPER(sve_brkas_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     return compute_brks_z(vd, vn, vg, oprsz, true);
 }

 void HELPER(sve_brkb_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     compute_brk_z(vd, vn, vg, oprsz, false);
 }

 uint32_t HELPER(sve_brkbs_z)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     return compute_brks_z(vd, vn, vg, oprsz, false);
 }

 void HELPER(sve_brka_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     compute_brk_m(vd, vn, vg, oprsz, true);
 }

 uint32_t HELPER(sve_brkas_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     return compute_brks_m(vd, vn, vg, oprsz, true);
 }

 void HELPER(sve_brkb_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     compute_brk_m(vd, vn, vg, oprsz, false);
 }

 uint32_t HELPER(sve_brkbs_m)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     return compute_brks_m(vd, vn, vg, oprsz, false);
 }

 void HELPER(sve_brkn)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
-
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     if (!last_active_pred(vn, vg, oprsz)) {
         do_zero(vd, oprsz);
     }
@@ -XXX,XX +XXX,XX @@ static uint32_t predtest_ones(ARMPredicateReg *d, intptr_t oprsz,

 uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
-
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
     if (last_active_pred(vn, vg, oprsz)) {
         return predtest_ones(vd, oprsz, -1);
     } else {
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool do_brk3(DisasContext *s, arg_rprr_s *a,
     TCGv_ptr n = tcg_temp_new_ptr();
     TCGv_ptr m = tcg_temp_new_ptr();
     TCGv_ptr g = tcg_temp_new_ptr();
-    TCGv_i32 t = tcg_const_i32(vsz - 2);
+    TCGv_i32 t = tcg_const_i32(FIELD_DP32(0, PREDDESC, OPRSZ, vsz));

     tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
     tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
@@ -XXX,XX +XXX,XX @@ static bool do_brk2(DisasContext *s, arg_rpr_s *a,
     TCGv_ptr d = tcg_temp_new_ptr();
     TCGv_ptr n = tcg_temp_new_ptr();
     TCGv_ptr g = tcg_temp_new_ptr();
-    TCGv_i32 t = tcg_const_i32(vsz - 2);
+    TCGv_i32 t = tcg_const_i32(FIELD_DP32(0, PREDDESC, OPRSZ, vsz));

     tcg_gen_addi_ptr(d, cpu_env, pred_full_reg_offset(s, a->rd));
     tcg_gen_addi_ptr(n, cpu_env, pred_full_reg_offset(s, a->rn));
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Since b64ee454a4a0, all predicate operations should be
using these field macros for predicates.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    | 6 +++---
 target/arm/translate-sve.c | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_brkns)(void *vd, void *vn, void *vg, uint32_t pred_desc)

 uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
 {
-    intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
-    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    intptr_t words = DIV_ROUND_UP(FIELD_EX32(pred_desc, PREDDESC, OPRSZ), 8);
+    intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
     uint64_t *n = vn, *g = vg, sum = 0, mask = pred_esz_masks[esz];
     intptr_t i;

-    for (i = 0; i < DIV_ROUND_UP(oprsz, 8); ++i) {
+    for (i = 0; i < words; ++i) {
         uint64_t t = n[i] & g[i] & mask;
         sum += ctpop64(t);
     }
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_cntp(DisasContext *s, TCGv_i64 val, int esz, int pn, int pg)
     } else {
         TCGv_ptr t_pn = tcg_temp_new_ptr();
         TCGv_ptr t_pg = tcg_temp_new_ptr();
-        unsigned desc;
+        unsigned desc = 0;
         TCGv_i32 t_desc;

-        desc = psz - 2;
-        desc = deposit32(desc, SIMD_DATA_SHIFT, 2, esz);
+        desc = FIELD_DP32(desc, PREDDESC, OPRSZ, psz);
+        desc = FIELD_DP32(desc, PREDDESC, ESZ, esz);

         tcg_gen_addi_ptr(t_pn, cpu_env, pred_full_reg_offset(s, pn));
         tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, pg));
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Since b64ee454a4a0, all predicate operations should be
using these field macros for predicates.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    | 4 ++--
 target/arm/translate-sve.c | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)

 uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
 {
-    uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
-    intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
+    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+    intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
     uint64_t esz_mask = pred_esz_masks[esz];
     ARMPredicateReg *d = vd;
     uint32_t flags;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
     TCGv_i64 op0, op1, t0, t1, tmax;
     TCGv_i32 t2, t3;
     TCGv_ptr ptr;
-    unsigned desc, vsz = vec_full_reg_size(s);
+    unsigned vsz = vec_full_reg_size(s);
+    unsigned desc = 0;
     TCGCond cond;

     if (!sve_access_check(s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
     /* Scale elements to bits. */
     tcg_gen_shli_i32(t2, t2, a->esz);

-    desc = (vsz / 8) - 2;
-    desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
+    desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz / 8);
+    desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
     t3 = tcg_const_i32(desc);

     ptr = tcg_temp_new_ptr();
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

With the reduction operations, we intentionally increase maxsz to
the next power of 2, so as to fill out the reduction tree correctly.
Since e2e7168a214b, oprsz must equal maxsz, with exceptions for small
vectors, so this triggers an assertion for vector sizes > 32 that are
not themselves a power of 2.

Pass the power-of-two value in the simd_data field instead.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210309155305.11301-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    | 2 +-
 target/arm/translate-sve.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static TYPE NAME##_reduce(TYPE *data, float_status *status, uintptr_t n) \
 } \
 uint64_t HELPER(NAME)(void *vn, void *vg, void *vs, uint32_t desc) \
 { \
-    uintptr_t i, oprsz = simd_oprsz(desc), maxsz = simd_maxsz(desc); \
+    uintptr_t i, oprsz = simd_oprsz(desc), maxsz = simd_data(desc); \
     TYPE data[sizeof(ARMVectorReg) / sizeof(TYPE)]; \
     for (i = 0; i < oprsz; ) { \
         uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,
 {
     unsigned vsz = vec_full_reg_size(s);
     unsigned p2vsz = pow2ceil(vsz);
-    TCGv_i32 t_desc = tcg_const_i32(simd_desc(vsz, p2vsz, 0));
+    TCGv_i32 t_desc = tcg_const_i32(simd_desc(vsz, vsz, p2vsz));
     TCGv_ptr t_zn, t_pg, status;
     TCGv_i64 temp;

--
2.20.1
From: Niek Linnenbank <nieklinnenbank@gmail.com>

Currently the emulated EMAC for sun8i always traverses the transmit queue
from the head when transferring packets. It searches for a list of consecutive
descriptors which are flagged as ready for processing and transmits their
payloads accordingly. The controller stops processing once it finds a
descriptor that is not marked ready.

While the above behaviour works in most situations, it is not the same as the
actual EMAC in hardware. Actual hardware uses the TX_CUR_DESC register value
to keep track of the last position in the transmit queue and continues
processing from that position when software triggers the start of DMA
processing. The currently emulated behaviour can lead to packet loss on
transmit when software fills the transmit queue with ready descriptors that
overlap the tail of the circular list.

This commit modifies the emulated EMAC for sun8i such that it processes
the transmit queue using the TX_CUR_DESC register in the same way as hardware.

Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210310195820.21950-2-nieklinnenbank@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/allwinner-sun8i-emac.c | 62 +++++++++++++++++++----------------
 1 file changed, 34 insertions(+), 28 deletions(-)

diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_update_irq(AwSun8iEmacState *s)
     qemu_set_irq(s->irq, (s->int_sta & s->int_en) != 0);
 }

-static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
-                                               FrameDescriptor *desc,
-                                               size_t min_size)
+static bool allwinner_sun8i_emac_desc_owned(FrameDescriptor *desc,
+                                            size_t min_buf_size)
 {
-    uint32_t paddr = desc->next;
-
-    dma_memory_read(&s->dma_as, paddr, desc, sizeof(*desc));
-
-    if ((desc->status & DESC_STATUS_CTL) &&
-        (desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
-        return paddr;
-    } else {
-        return 0;
-    }
+    return (desc->status & DESC_STATUS_CTL) && (min_buf_size == 0 ||
+           (desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_buf_size);
 }

-static uint32_t allwinner_sun8i_emac_get_desc(AwSun8iEmacState *s,
-                                              FrameDescriptor *desc,
-                                              uint32_t start_addr,
-                                              size_t min_size)
+static void allwinner_sun8i_emac_get_desc(AwSun8iEmacState *s,
+                                          FrameDescriptor *desc,
+                                          uint32_t phys_addr)
+{
+    dma_memory_read(&s->dma_as, phys_addr, desc, sizeof(*desc));
+}
+
+static uint32_t allwinner_sun8i_emac_next_desc(AwSun8iEmacState *s,
+                                               FrameDescriptor *desc)
+{
+    const uint32_t nxt = desc->next;
+    allwinner_sun8i_emac_get_desc(s, desc, nxt);
+    return nxt;
+}
+
+static uint32_t allwinner_sun8i_emac_find_desc(AwSun8iEmacState *s,
+                                               FrameDescriptor *desc,
+                                               uint32_t start_addr,
+                                               size_t min_size)
 {
     uint32_t desc_addr = start_addr;

     /* Note that the list is a cycle. Last entry points back to the head. */
     while (desc_addr != 0) {
-        dma_memory_read(&s->dma_as, desc_addr, desc, sizeof(*desc));
+        allwinner_sun8i_emac_get_desc(s, desc, desc_addr);

-        if ((desc->status & DESC_STATUS_CTL) &&
-            (desc->status2 & DESC_STATUS2_BUF_SIZE_MASK) >= min_size) {
+        if (allwinner_sun8i_emac_desc_owned(desc, min_size)) {
             return desc_addr;
         } else if (desc->next == start_addr) {
             break;
@@ -XXX,XX +XXX,XX @@ static uint32_t allwinner_sun8i_emac_rx_desc(AwSun8iEmacState *s,
                                              FrameDescriptor *desc,
                                              size_t min_size)
 {
-    return allwinner_sun8i_emac_get_desc(s, desc, s->rx_desc_curr, min_size);
+    return allwinner_sun8i_emac_find_desc(s, desc, s->rx_desc_curr, min_size);
 }

 static uint32_t allwinner_sun8i_emac_tx_desc(AwSun8iEmacState *s,
-                                             FrameDescriptor *desc,
-                                             size_t min_size)
+                                             FrameDescriptor *desc)
 {
-    return allwinner_sun8i_emac_get_desc(s, desc, s->tx_desc_head, min_size);
+    allwinner_sun8i_emac_get_desc(s, desc, s->tx_desc_curr);
107
+ return s->tx_desc_curr;
108
}
109
110
static void allwinner_sun8i_emac_flush_desc(AwSun8iEmacState *s,
111
@@ -XXX,XX +XXX,XX @@ static ssize_t allwinner_sun8i_emac_receive(NetClientState *nc,
112
bytes_left -= desc_bytes;
113
114
/* Move to the next descriptor */
115
- s->rx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc, 64);
116
+ s->rx_desc_curr = allwinner_sun8i_emac_find_desc(s, &desc, desc.next,
117
+ AW_SUN8I_EMAC_MIN_PKT_SZ);
118
if (!s->rx_desc_curr) {
119
/* Not enough buffer space available */
120
s->int_sta |= INT_STA_RX_BUF_UA;
121
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
122
size_t transmitted = 0;
123
static uint8_t packet_buf[2048];
124
125
- s->tx_desc_curr = allwinner_sun8i_emac_tx_desc(s, &desc, 0);
126
+ s->tx_desc_curr = allwinner_sun8i_emac_tx_desc(s, &desc);
127
128
/* Read all transmit descriptors */
129
- while (s->tx_desc_curr != 0) {
130
+ while (allwinner_sun8i_emac_desc_owned(&desc, 0)) {
131
132
/* Read from physical memory into packet buffer */
133
bytes = desc.status2 & DESC_STATUS2_BUF_SIZE_MASK;
134
@@ -XXX,XX +XXX,XX @@ static void allwinner_sun8i_emac_transmit(AwSun8iEmacState *s)
135
packet_bytes = 0;
136
transmitted++;
137
}
138
- s->tx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc, 0);
139
+ s->tx_desc_curr = allwinner_sun8i_emac_next_desc(s, &desc);
140
}
141
142
/* Raise transmit completed interrupt */
143
--
144
2.20.1
145
146
New patch
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
1
2
3
The image for Armbian 19.11.3 bionic has been removed from the armbian server.
4
Without the image as input the test arm_orangepi_bionic_19_11 cannot run.
5
6
This commit removes the test completely and merges the code of the generic function
7
do_test_arm_orangepi_uboot_armbian back into the 20.08 test.
8
9
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
10
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
11
Message-id: 20210310195820.21950-3-nieklinnenbank@gmail.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
tests/acceptance/boot_linux_console.py | 72 ++++++++------------------
15
1 file changed, 23 insertions(+), 49 deletions(-)
16
17
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
18
index XXXXXXX..XXXXXXX 100644
19
--- a/tests/acceptance/boot_linux_console.py
20
+++ b/tests/acceptance/boot_linux_console.py
21
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_sd(self):
22
# Wait for VM to shut down gracefully
23
self.vm.wait()
24
25
- def do_test_arm_orangepi_uboot_armbian(self, image_path):
26
+ @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
27
+ 'Test artifacts fetched from unreliable apt.armbian.com')
28
+ @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
29
+ def test_arm_orangepi_bionic_20_08(self):
30
+ """
31
+ :avocado: tags=arch:arm
32
+ :avocado: tags=machine:orangepi-pc
33
+ :avocado: tags=device:sd
34
+ """
35
+
36
+ # This test download a 275 MiB compressed image and expand it
37
+ # to 1036 MiB, but the underlying filesystem is 1552 MiB...
38
+ # As we expand it to 2 GiB we are safe.
39
+
40
+ image_url = ('https://dl.armbian.com/orangepipc/archive/'
41
+ 'Armbian_20.08.1_Orangepipc_bionic_current_5.8.5.img.xz')
42
+ image_hash = ('b4d6775f5673486329e45a0586bf06b6'
43
+ 'dbe792199fd182ac6b9c7bb6c7d3e6dd')
44
+ image_path_xz = self.fetch_asset(image_url, asset_hash=image_hash,
45
+ algorithm='sha256')
46
+ image_path = archive.extract(image_path_xz, self.workdir)
47
+ image_pow2ceil_expand(image_path)
48
+
49
self.vm.set_console()
50
self.vm.add_args('-drive', 'file=' + image_path + ',if=sd,format=raw',
51
'-nic', 'user',
52
@@ -XXX,XX +XXX,XX @@ def do_test_arm_orangepi_uboot_armbian(self, image_path):
53
'to <orangepipc>')
54
self.wait_for_console_pattern('Starting Load Kernel Modules...')
55
56
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
57
- 'Test artifacts fetched from unreliable apt.armbian.com')
58
- @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
59
- @skipUnless(P7ZIP_AVAILABLE, '7z not installed')
60
- def test_arm_orangepi_bionic_19_11(self):
61
- """
62
- :avocado: tags=arch:arm
63
- :avocado: tags=machine:orangepi-pc
64
- :avocado: tags=device:sd
65
- """
66
-
67
- # This test download a 196MB compressed image and expand it to 1GB
68
- image_url = ('https://dl.armbian.com/orangepipc/archive/'
69
- 'Armbian_19.11.3_Orangepipc_bionic_current_5.3.9.7z')
70
- image_hash = '196a8ffb72b0123d92cea4a070894813d305c71e'
71
- image_path_7z = self.fetch_asset(image_url, asset_hash=image_hash)
72
- image_name = 'Armbian_19.11.3_Orangepipc_bionic_current_5.3.9.img'
73
- image_path = os.path.join(self.workdir, image_name)
74
- process.run("7z e -o%s %s" % (self.workdir, image_path_7z))
75
- image_pow2ceil_expand(image_path)
76
-
77
- self.do_test_arm_orangepi_uboot_armbian(image_path)
78
-
79
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
80
- 'Test artifacts fetched from unreliable apt.armbian.com')
81
- @skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
82
- def test_arm_orangepi_bionic_20_08(self):
83
- """
84
- :avocado: tags=arch:arm
85
- :avocado: tags=machine:orangepi-pc
86
- :avocado: tags=device:sd
87
- """
88
-
89
- # This test download a 275 MiB compressed image and expand it
90
- # to 1036 MiB, but the underlying filesystem is 1552 MiB...
91
- # As we expand it to 2 GiB we are safe.
92
-
93
- image_url = ('https://dl.armbian.com/orangepipc/archive/'
94
- 'Armbian_20.08.1_Orangepipc_bionic_current_5.8.5.img.xz')
95
- image_hash = ('b4d6775f5673486329e45a0586bf06b6'
96
- 'dbe792199fd182ac6b9c7bb6c7d3e6dd')
97
- image_path_xz = self.fetch_asset(image_url, asset_hash=image_hash,
98
- algorithm='sha256')
99
- image_path = archive.extract(image_path_xz, self.workdir)
100
- image_pow2ceil_expand(image_path)
101
-
102
- self.do_test_arm_orangepi_uboot_armbian(image_path)
103
-
104
@skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
105
def test_arm_orangepi_uboot_netbsd9(self):
106
"""
107
--
108
2.20.1
109
110
New patch
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
1
2
3
Update the download URL of the Armbian 20.08 Bionic image for
4
test_arm_orangepi_bionic_20_08 of the orangepi-pc machine.
5
6
The archive.armbian.com URL contains more images and should remain stable
7
for a longer period of time than dl.armbian.com.
8
9
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
11
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
12
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
13
Message-id: 20210310195820.21950-4-nieklinnenbank@gmail.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
tests/acceptance/boot_linux_console.py | 2 +-
17
1 file changed, 1 insertion(+), 1 deletion(-)
18
19
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
20
index XXXXXXX..XXXXXXX 100644
21
--- a/tests/acceptance/boot_linux_console.py
22
+++ b/tests/acceptance/boot_linux_console.py
23
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_bionic_20_08(self):
24
# to 1036 MiB, but the underlying filesystem is 1552 MiB...
25
# As we expand it to 2 GiB we are safe.
26
27
- image_url = ('https://dl.armbian.com/orangepipc/archive/'
28
+ image_url = ('https://archive.armbian.com/orangepipc/archive/'
29
'Armbian_20.08.1_Orangepipc_bionic_current_5.8.5.img.xz')
30
image_hash = ('b4d6775f5673486329e45a0586bf06b6'
31
'dbe792199fd182ac6b9c7bb6c7d3e6dd')
32
--
33
2.20.1
34
35
New patch
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
1
2
3
The Linux kernel 4.20.7 binary for sunxi has been removed from apt.armbian.com:
4
5
$ ARMBIAN_ARTIFACTS_CACHED=yes AVOCADO_ALLOW_LARGE_STORAGE=yes avocado --show=app,console run -t machine:orangepi-pc tests/acceptance/boot_linux_console.py
6
Fetching asset from tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi
7
...
8
(1/6) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi:
9
CANCEL: Missing asset https://apt.armbian.com/pool/main/l/linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb (0.55 s)
10
11
This commit updates the sunxi kernel to 5.10.16 for the acceptance
12
tests of the orangepi-pc and cubieboard machines.
13
14
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
15
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
16
Message-id: 20210310195820.21950-5-nieklinnenbank@gmail.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
tests/acceptance/boot_linux_console.py | 40 +++++++++++++-------------
20
tests/acceptance/replay_kernel.py | 8 +++---
21
2 files changed, 24 insertions(+), 24 deletions(-)
22
23
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
24
index XXXXXXX..XXXXXXX 100644
25
--- a/tests/acceptance/boot_linux_console.py
26
+++ b/tests/acceptance/boot_linux_console.py
27
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_initrd(self):
28
:avocado: tags=machine:cubieboard
29
"""
30
deb_url = ('https://apt.armbian.com/pool/main/l/'
31
- 'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
32
- deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
33
+ 'linux-5.10.16-sunxi/linux-image-current-sunxi_21.02.2_armhf.deb')
34
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
35
deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
36
kernel_path = self.extract_from_deb(deb_path,
37
- '/boot/vmlinuz-4.20.7-sunxi')
38
- dtb_path = '/usr/lib/linux-image-dev-sunxi/sun4i-a10-cubieboard.dtb'
39
+ '/boot/vmlinuz-5.10.16-sunxi')
40
+ dtb_path = '/usr/lib/linux-image-current-sunxi/sun4i-a10-cubieboard.dtb'
41
dtb_path = self.extract_from_deb(deb_path, dtb_path)
42
initrd_url = ('https://github.com/groeck/linux-build-test/raw/'
43
'2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
44
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_sata(self):
45
:avocado: tags=machine:cubieboard
46
"""
47
deb_url = ('https://apt.armbian.com/pool/main/l/'
48
- 'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
49
- deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
50
+ 'linux-5.10.16-sunxi/linux-image-current-sunxi_21.02.2_armhf.deb')
51
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
52
deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
53
kernel_path = self.extract_from_deb(deb_path,
54
- '/boot/vmlinuz-4.20.7-sunxi')
55
- dtb_path = '/usr/lib/linux-image-dev-sunxi/sun4i-a10-cubieboard.dtb'
56
+ '/boot/vmlinuz-5.10.16-sunxi')
57
+ dtb_path = '/usr/lib/linux-image-current-sunxi/sun4i-a10-cubieboard.dtb'
58
dtb_path = self.extract_from_deb(deb_path, dtb_path)
59
rootfs_url = ('https://github.com/groeck/linux-build-test/raw/'
60
'2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
61
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi(self):
62
:avocado: tags=machine:orangepi-pc
63
"""
64
deb_url = ('https://apt.armbian.com/pool/main/l/'
65
- 'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
66
- deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
67
+ 'linux-5.10.16-sunxi/linux-image-current-sunxi_21.02.2_armhf.deb')
68
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
69
deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
70
kernel_path = self.extract_from_deb(deb_path,
71
- '/boot/vmlinuz-4.20.7-sunxi')
72
- dtb_path = '/usr/lib/linux-image-dev-sunxi/sun8i-h3-orangepi-pc.dtb'
73
+ '/boot/vmlinuz-5.10.16-sunxi')
74
+ dtb_path = '/usr/lib/linux-image-current-sunxi/sun8i-h3-orangepi-pc.dtb'
75
dtb_path = self.extract_from_deb(deb_path, dtb_path)
76
77
self.vm.set_console()
78
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_initrd(self):
79
:avocado: tags=machine:orangepi-pc
80
"""
81
deb_url = ('https://apt.armbian.com/pool/main/l/'
82
- 'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
83
- deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
84
+ 'linux-5.10.16-sunxi/linux-image-current-sunxi_21.02.2_armhf.deb')
85
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
86
deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
87
kernel_path = self.extract_from_deb(deb_path,
88
- '/boot/vmlinuz-4.20.7-sunxi')
89
- dtb_path = '/usr/lib/linux-image-dev-sunxi/sun8i-h3-orangepi-pc.dtb'
90
+ '/boot/vmlinuz-5.10.16-sunxi')
91
+ dtb_path = '/usr/lib/linux-image-current-sunxi/sun8i-h3-orangepi-pc.dtb'
92
dtb_path = self.extract_from_deb(deb_path, dtb_path)
93
initrd_url = ('https://github.com/groeck/linux-build-test/raw/'
94
'2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
95
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_sd(self):
96
:avocado: tags=device:sd
97
"""
98
deb_url = ('https://apt.armbian.com/pool/main/l/'
99
- 'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
100
- deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
101
+ 'linux-5.10.16-sunxi/linux-image-current-sunxi_21.02.2_armhf.deb')
102
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
103
deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
104
kernel_path = self.extract_from_deb(deb_path,
105
- '/boot/vmlinuz-4.20.7-sunxi')
106
- dtb_path = '/usr/lib/linux-image-dev-sunxi/sun8i-h3-orangepi-pc.dtb'
107
+ '/boot/vmlinuz-5.10.16-sunxi')
108
+ dtb_path = '/usr/lib/linux-image-current-sunxi/sun8i-h3-orangepi-pc.dtb'
109
dtb_path = self.extract_from_deb(deb_path, dtb_path)
110
rootfs_url = ('http://storage.kernelci.org/images/rootfs/buildroot/'
111
'kci-2019.02/armel/base/rootfs.ext2.xz')
112
diff --git a/tests/acceptance/replay_kernel.py b/tests/acceptance/replay_kernel.py
113
index XXXXXXX..XXXXXXX 100644
114
--- a/tests/acceptance/replay_kernel.py
115
+++ b/tests/acceptance/replay_kernel.py
116
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_initrd(self):
117
:avocado: tags=machine:cubieboard
118
"""
119
deb_url = ('https://apt.armbian.com/pool/main/l/'
120
- 'linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb')
121
- deb_hash = '1334c29c44d984ffa05ed10de8c3361f33d78315'
122
+ 'linux-5.10.16-sunxi/linux-image-current-sunxi_21.02.2_armhf.deb')
123
+ deb_hash = '9fa84beda245cabf0b4fa84cf6eaa7738ead1da0'
124
deb_path = self.fetch_asset(deb_url, asset_hash=deb_hash)
125
kernel_path = self.extract_from_deb(deb_path,
126
- '/boot/vmlinuz-4.20.7-sunxi')
127
- dtb_path = '/usr/lib/linux-image-dev-sunxi/sun4i-a10-cubieboard.dtb'
128
+ '/boot/vmlinuz-5.10.16-sunxi')
129
+ dtb_path = '/usr/lib/linux-image-current-sunxi/sun4i-a10-cubieboard.dtb'
130
dtb_path = self.extract_from_deb(deb_path, dtb_path)
131
initrd_url = ('https://github.com/groeck/linux-build-test/raw/'
132
'2eb0a73b5d5a28df3170c546ddaaa9757e1e0848/rootfs/'
133
--
134
2.20.1
135
136
New patch
1
From: Niek Linnenbank <nieklinnenbank@gmail.com>
1
2
3
Previously the ARMBIAN_ARTIFACTS_CACHED pre-condition was added to allow running
4
tests whose armbian.com artifacts already exist in the local avocado cache
5
but can no longer be downloaded from a working URL.
6
7
At the time of writing, the URLs for artifacts on the armbian.com server are updated and working.
8
Any future broken URLs will result in a skipped acceptance test, for example:
9
10
(1/5) tests/acceptance/boot_linux_console.py:BootLinuxConsole.test_arm_orangepi:
11
CANCEL: Missing asset https://apt.armbian.com/pool/main/l/linux-4.20.7-sunxi/linux-image-dev-sunxi_5.75_armhf.deb (0.53 s)
12
13
This commit removes the ARMBIAN_ARTIFACTS_CACHED pre-condition such that
14
the acceptance tests for the orangepi-pc and cubieboard machines can run.
15
16
Signed-off-by: Niek Linnenbank <nieklinnenbank@gmail.com>
17
Reviewed-by: Willian Rampazzo <willianr@redhat.com>
18
Message-id: 20210310195820.21950-6-nieklinnenbank@gmail.com
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
tests/acceptance/boot_linux_console.py | 12 ------------
22
tests/acceptance/replay_kernel.py | 2 --
23
2 files changed, 14 deletions(-)
24
25
diff --git a/tests/acceptance/boot_linux_console.py b/tests/acceptance/boot_linux_console.py
26
index XXXXXXX..XXXXXXX 100644
27
--- a/tests/acceptance/boot_linux_console.py
28
+++ b/tests/acceptance/boot_linux_console.py
29
@@ -XXX,XX +XXX,XX @@ def test_arm_exynos4210_initrd(self):
30
self.wait_for_console_pattern('Boot successful.')
31
# TODO user command, for now the uart is stuck
32
33
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
34
- 'Test artifacts fetched from unreliable apt.armbian.com')
35
def test_arm_cubieboard_initrd(self):
36
"""
37
:avocado: tags=arch:arm
38
@@ -XXX,XX +XXX,XX @@ def test_arm_cubieboard_initrd(self):
39
'system-control@1c00000')
40
# cubieboard's reboot is not functioning; omit reboot test.
41
42
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
43
- 'Test artifacts fetched from unreliable apt.armbian.com')
44
def test_arm_cubieboard_sata(self):
45
"""
46
:avocado: tags=arch:arm
47
@@ -XXX,XX +XXX,XX @@ def test_arm_quanta_gsj_initrd(self):
48
self.wait_for_console_pattern(
49
'Give root password for system maintenance')
50
51
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
52
- 'Test artifacts fetched from unreliable apt.armbian.com')
53
def test_arm_orangepi(self):
54
"""
55
:avocado: tags=arch:arm
56
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi(self):
57
console_pattern = 'Kernel command line: %s' % kernel_command_line
58
self.wait_for_console_pattern(console_pattern)
59
60
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
61
- 'Test artifacts fetched from unreliable apt.armbian.com')
62
def test_arm_orangepi_initrd(self):
63
"""
64
:avocado: tags=arch:arm
65
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_initrd(self):
66
# Wait for VM to shut down gracefully
67
self.vm.wait()
68
69
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
70
- 'Test artifacts fetched from unreliable apt.armbian.com')
71
def test_arm_orangepi_sd(self):
72
"""
73
:avocado: tags=arch:arm
74
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_sd(self):
75
# Wait for VM to shut down gracefully
76
self.vm.wait()
77
78
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
79
- 'Test artifacts fetched from unreliable apt.armbian.com')
80
@skipUnless(os.getenv('AVOCADO_ALLOW_LARGE_STORAGE'), 'storage limited')
81
def test_arm_orangepi_bionic_20_08(self):
82
"""
83
diff --git a/tests/acceptance/replay_kernel.py b/tests/acceptance/replay_kernel.py
84
index XXXXXXX..XXXXXXX 100644
85
--- a/tests/acceptance/replay_kernel.py
86
+++ b/tests/acceptance/replay_kernel.py
87
@@ -XXX,XX +XXX,XX @@ def test_arm_virt(self):
88
self.run_rr(kernel_path, kernel_command_line, console_pattern, shift=1)
89
90
@skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
91
- @skipUnless(os.getenv('ARMBIAN_ARTIFACTS_CACHED'),
92
- 'Test artifacts fetched from unreliable apt.armbian.com')
93
def test_arm_cubieboard_initrd(self):
94
"""
95
:avocado: tags=arch:arm
96
--
97
2.20.1
98
99
New patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
2
3
If the SSECounter link is absent, we set an error message
4
in sse_timer_realize() but forgot to propagate this error.
5
Add the missing 'return'.
6
7
Fixes: CID 1450755 (Null pointer dereferences)
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20210312001845.1562670-1-f4bug@amsat.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/timer/sse-timer.c | 1 +
14
1 file changed, 1 insertion(+)
15
16
diff --git a/hw/timer/sse-timer.c b/hw/timer/sse-timer.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/timer/sse-timer.c
19
+++ b/hw/timer/sse-timer.c
20
@@ -XXX,XX +XXX,XX @@ static void sse_timer_realize(DeviceState *dev, Error **errp)
21
22
if (!s->counter) {
23
error_setg(errp, "counter property was not set");
24
+ return;
25
}
26
27
s->counter_notifier.notify = sse_timer_counter_callback;
28
--
29
2.20.1
30
31
New patch
1
From: Andrew Jones <drjones@redhat.com>
1
2
3
Prior to commit f2ce39b4f067 a MachineClass kvm_type method
4
only needed to be registered to ensure it would be executed.
5
With commit f2ce39b4f067 a kvm-type machine property must also
6
be specified. hw/arm/virt relies on the kvm_type method to pass
7
its selected IPA limit to KVM, but this is not exposed as a
8
machine property. Restore the previous functionality of invoking
9
kvm_type when it's present.
10
11
Fixes: f2ce39b4f067 ("vl: make qemu_get_machine_opts static")
12
Signed-off-by: Andrew Jones <drjones@redhat.com>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Message-id: 20210310135218.255205-2-drjones@redhat.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
include/hw/boards.h | 1 +
18
accel/kvm/kvm-all.c | 2 ++
19
2 files changed, 3 insertions(+)
20
21
diff --git a/include/hw/boards.h b/include/hw/boards.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/boards.h
24
+++ b/include/hw/boards.h
25
@@ -XXX,XX +XXX,XX @@ typedef struct {
26
* @kvm_type:
27
* Return the type of KVM corresponding to the kvm-type string option or
28
* computed based on other criteria such as the host kernel capabilities.
29
+ * kvm-type may be NULL if it is not needed.
30
* @numa_mem_supported:
31
* true if '--numa node.mem' option is supported and false otherwise
32
* @smp_parse:
33
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/accel/kvm/kvm-all.c
36
+++ b/accel/kvm/kvm-all.c
37
@@ -XXX,XX +XXX,XX @@ static int kvm_init(MachineState *ms)
38
"kvm-type",
39
&error_abort);
40
type = mc->kvm_type(ms, kvm_type);
41
+ } else if (mc->kvm_type) {
42
+ type = mc->kvm_type(ms, NULL);
43
}
44
45
do {
46
--
47
2.20.1
48
49
1
Like most of the v7M memory mapped system registers, the systick
1
From: Andrew Jones <drjones@redhat.com>
2
registers are accessible to privileged code only and user accesses
3
must generate a BusFault. We implement that for registers in
4
the NVIC proper already, but missed it for systick since we
5
implement it as a separate device. Correct the omission.
6
2
3
The virt machine already checks KVM_CAP_ARM_VM_IPA_SIZE to get the
4
upper bound of the IPA size. If that bound is lower than the highest
5
possible GPA for the machine, then QEMU will error out. However, the
6
IPA is set to 40 when the highest GPA is less than or equal to 40,
7
even when KVM may support an IPA limit as low as 32. This means KVM
8
may fail the VM creation unnecessarily. Additionally, 40 is selected
9
with the value 0, which means use the default, and that gets around
10
a check in some versions of KVM, causing a failure that is difficult to debug.
11
Always use the IPA size that corresponds to the highest possible GPA,
12
unless it's lower than 32, in which case use 32. Also, we must still
13
use 0 when KVM only supports the legacy fixed 40 bit IPA.
14
15
Suggested-by: Marc Zyngier <maz@kernel.org>
16
Signed-off-by: Andrew Jones <drjones@redhat.com>
17
Reviewed-by: Eric Auger <eric.auger@redhat.com>
18
Reviewed-by: Marc Zyngier <maz@kernel.org>
19
Message-id: 20210310135218.255205-3-drjones@redhat.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Message-id: 20190617175317.27557-6-peter.maydell@linaro.org
11
---
21
---
12
hw/timer/armv7m_systick.c | 26 ++++++++++++++++++++------
22
target/arm/kvm_arm.h | 6 ++++--
13
1 file changed, 20 insertions(+), 6 deletions(-)
23
hw/arm/virt.c | 23 ++++++++++++++++-------
24
target/arm/kvm.c | 4 +++-
25
3 files changed, 23 insertions(+), 10 deletions(-)
14
26
15
diff --git a/hw/timer/armv7m_systick.c b/hw/timer/armv7m_systick.c
27
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
16
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/timer/armv7m_systick.c
29
--- a/target/arm/kvm_arm.h
18
+++ b/hw/timer/armv7m_systick.c
30
+++ b/target/arm/kvm_arm.h
19
@@ -XXX,XX +XXX,XX @@ static void systick_timer_tick(void *opaque)
31
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(void);
20
}
32
/**
33
* kvm_arm_get_max_vm_ipa_size:
34
* @ms: Machine state handle
35
+ * @fixed_ipa: True when the IPA limit is fixed at 40. This is the case
36
+ * for legacy KVM.
37
*
38
* Returns the number of bits in the IPA address space supported by KVM
39
*/
40
-int kvm_arm_get_max_vm_ipa_size(MachineState *ms);
41
+int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa);
42
43
/**
44
* kvm_arm_sync_mpstate_to_kvm:
45
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_add_vcpu_properties(Object *obj)
46
g_assert_not_reached();
21
}
47
}
22
48
23
-static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
49
-static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
24
+static MemTxResult systick_read(void *opaque, hwaddr addr, uint64_t *data,
50
+static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa)
25
+ unsigned size, MemTxAttrs attrs)
26
{
51
{
27
SysTickState *s = opaque;
52
g_assert_not_reached();
28
uint32_t val;
53
}
29
54
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
30
+ if (attrs.user) {
55
index XXXXXXX..XXXXXXX 100644
31
+ /* Generate BusFault for unprivileged accesses */
56
--- a/hw/arm/virt.c
32
+ return MEMTX_ERROR;
57
+++ b/hw/arm/virt.c
58
@@ -XXX,XX +XXX,XX @@ static HotplugHandler *virt_machine_get_hotplug_handler(MachineState *machine,
59
static int virt_kvm_type(MachineState *ms, const char *type_str)
60
{
61
VirtMachineState *vms = VIRT_MACHINE(ms);
62
- int max_vm_pa_size = kvm_arm_get_max_vm_ipa_size(ms);
63
- int requested_pa_size;
64
+ int max_vm_pa_size, requested_pa_size;
65
+ bool fixed_ipa;
66
+
67
+ max_vm_pa_size = kvm_arm_get_max_vm_ipa_size(ms, &fixed_ipa);
68
69
/* we freeze the memory map to compute the highest gpa */
70
virt_set_memmap(vms);
71
72
requested_pa_size = 64 - clz64(vms->highest_gpa);
73
74
+ /*
75
+ * KVM requires the IPA size to be at least 32 bits.
76
+ */
77
+ if (requested_pa_size < 32) {
78
+ requested_pa_size = 32;
33
+ }
79
+ }
34
+
80
+
35
switch (addr) {
81
if (requested_pa_size > max_vm_pa_size) {
36
case 0x0: /* SysTick Control and Status. */
82
error_report("-m and ,maxmem option values "
37
val = s->control;
83
"require an IPA range (%d bits) larger than "
38
@@ -XXX,XX +XXX,XX @@ static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
84
"the one supported by the host (%d bits)",
85
requested_pa_size, max_vm_pa_size);
86
- exit(1);
87
+ exit(1);
39
}
88
}
40
89
/*
41
trace_systick_read(addr, val, size);
90
- * By default we return 0 which corresponds to an implicit legacy
42
- return val;
91
- * 40b IPA setting. Otherwise we return the actual requested PA
43
+ *data = val;
92
- * logsize
44
+ return MEMTX_OK;
93
+ * We return the requested PA log size, unless KVM only supports
94
+ * the implicit legacy 40b IPA setting, in which case the kvm_type
95
+ * must be 0.
96
*/
97
- return requested_pa_size > 40 ? requested_pa_size : 0;
98
+ return fixed_ipa ? 0 : requested_pa_size;
45
}
99
}
46
100
47
-static void systick_write(void *opaque, hwaddr addr,
101
static void virt_machine_class_init(ObjectClass *oc, void *data)
48
- uint64_t value, unsigned size)
102
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
49
+static MemTxResult systick_write(void *opaque, hwaddr addr,
103
index XXXXXXX..XXXXXXX 100644
50
+ uint64_t value, unsigned size,
104
--- a/target/arm/kvm.c
51
+ MemTxAttrs attrs)
105
+++ b/target/arm/kvm.c
106
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_pmu_supported(void)
107
return kvm_check_extension(kvm_state, KVM_CAP_ARM_PMU_V3);
108
}
109
110
-int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
111
+int kvm_arm_get_max_vm_ipa_size(MachineState *ms, bool *fixed_ipa)
52
{
112
{
53
SysTickState *s = opaque;
113
KVMState *s = KVM_STATE(ms->accelerator);
54
114
int ret;
55
+ if (attrs.user) {
115
56
+ /* Generate BusFault for unprivileged accesses */
116
ret = kvm_check_extension(s, KVM_CAP_ARM_VM_IPA_SIZE);
57
+ return MEMTX_ERROR;
117
+ *fixed_ipa = ret <= 0;
58
+ }
59
+
118
+
60
trace_systick_write(addr, value, size);
119
return ret > 0 ? ret : 40;
61
62
switch (addr) {
63
@@ -XXX,XX +XXX,XX @@ static void systick_write(void *opaque, hwaddr addr,
64
qemu_log_mask(LOG_GUEST_ERROR,
65
"SysTick: Bad write offset 0x%" HWADDR_PRIx "\n", addr);
66
}
67
+ return MEMTX_OK;
68
}
120
}
69
121
70
static const MemoryRegionOps systick_ops = {
71
- .read = systick_read,
72
- .write = systick_write,
73
+ .read_with_attrs = systick_read,
74
+ .write_with_attrs = systick_write,
75
.endianness = DEVICE_NATIVE_ENDIAN,
76
.valid.min_access_size = 4,
77
.valid.max_access_size = 4,
78
--
122
--
79
2.20.1
123
2.20.1
80
124
81
125
New patch
1
From: Hao Wu <wuhaotsh@google.com>
1
2
3
This patch adds GPIOs to the NPCM7xx PWM module for its duty values.
4
The purpose of this is to connect it to the MFT module to provide
5
an input for measuring a PWM fan's RPM. Each PWM module has
6
NPCM7XX_PWM_PER_MODULE GPIOs, each of which corresponds to
7
one PWM instance and can connect to multiple fan instances in MFT.
8
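The fan-out described above can be modelled roughly as follows (an illustrative sketch, not the QEMU qdev GPIO API; `NPCM7XX_PWM_PER_MODULE` and the per-channel connection count are assumed values for the example):

```c
#include <assert.h>
#include <stdint.h>

#define NPCM7XX_PWM_PER_MODULE 4   /* assumption for this sketch */
#define MAX_FANS_PER_PWM       2   /* assumption for this sketch */

/* One output line delivering a duty value to every connected fan */
typedef struct {
    uint32_t fan_duty_input[MAX_FANS_PER_PWM];
} DutyLine;

typedef struct {
    DutyLine duty_gpio_out[NPCM7XX_PWM_PER_MODULE];
} PwmModule;

/* Updating a channel's duty propagates the value to all fans on it */
static void pwm_set_duty(PwmModule *m, int channel, uint32_t duty)
{
    for (int i = 0; i < MAX_FANS_PER_PWM; i++) {
        m->duty_gpio_out[channel].fan_duty_input[i] = duty;
    }
}
```

In the real device the MFT module reads this propagated duty value to derive a measurable fan RPM.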
9
Reviewed-by: Doug Evans <dje@google.com>
10
Reviewed-by: Tyrone Ting <kfting@nuvoton.com>
11
Signed-off-by: Hao Wu <wuhaotsh@google.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20210311180855.149764-2-wuhaotsh@google.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
include/hw/misc/npcm7xx_pwm.h | 4 +++-
17
hw/misc/npcm7xx_pwm.c | 4 ++++
18
2 files changed, 7 insertions(+), 1 deletion(-)
19
20
diff --git a/include/hw/misc/npcm7xx_pwm.h b/include/hw/misc/npcm7xx_pwm.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/include/hw/misc/npcm7xx_pwm.h
23
+++ b/include/hw/misc/npcm7xx_pwm.h
24
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxPWM {
25
* @iomem: Memory region through which registers are accessed.
26
* @clock: The PWM clock.
27
* @pwm: The PWM channels owned by this module.
28
+ * @duty_gpio_out: The duty cycle of each PWM channels as a output GPIO.
29
* @ppr: The prescaler register.
30
* @csr: The clock selector register.
31
* @pcr: The control register.
32
@@ -XXX,XX +XXX,XX @@ struct NPCM7xxPWMState {
33
MemoryRegion iomem;
34
35
Clock *clock;
36
- NPCM7xxPWM pwm[NPCM7XX_PWM_PER_MODULE];
37
+ NPCM7xxPWM pwm[NPCM7XX_PWM_PER_MODULE];
38
+ qemu_irq duty_gpio_out[NPCM7XX_PWM_PER_MODULE];
39
40
uint32_t ppr;
41
uint32_t csr;
42
diff --git a/hw/misc/npcm7xx_pwm.c b/hw/misc/npcm7xx_pwm.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/misc/npcm7xx_pwm.c
45
+++ b/hw/misc/npcm7xx_pwm.c
46
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_pwm_update_duty(NPCM7xxPWM *p)
47
trace_npcm7xx_pwm_update_duty(DEVICE(p->module)->canonical_path,
48
p->index, p->duty, duty);
49
p->duty = duty;
50
+ qemu_set_irq(p->module->duty_gpio_out[p->index], p->duty);
51
}
52
}
53
54
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_pwm_init(Object *obj)
55
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
56
int i;
57
58
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(s->pwm) != NPCM7XX_PWM_PER_MODULE);
59
for (i = 0; i < NPCM7XX_PWM_PER_MODULE; i++) {
60
NPCM7xxPWM *p = &s->pwm[i];
61
p->module = s;
62
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_pwm_init(Object *obj)
63
object_property_add_uint32_ptr(obj, "duty[*]",
64
&s->pwm[i].duty, OBJ_PROP_FLAG_READ);
65
}
66
+ qdev_init_gpio_out_named(DEVICE(s), s->duty_gpio_out,
67
+ "duty-gpio-out", NPCM7XX_PWM_PER_MODULE);
68
}
69
70
static const VMStateDescription vmstate_npcm7xx_pwm = {
71
--
72
2.20.1
73
74
diff view generated by jsdifflib
From: Hao Wu <wuhaotsh@google.com>

This patch implements Multi Function Timer (MFT) module for NPCM7XX.
This module is mainly used to configure PWM fans. It has just enough
functionality to make the PWM fan kernel module work.

The module takes two input, the max_rpm of a fan (modifiable via QMP)
and duty cycle (a GPIO from the PWM module.) The actual measured RPM
is equal to max_rpm * duty_cycle / NPCM7XX_PWM_MAX_DUTY. The RPM is
measured as a counter compared to a prescaled input clock. The kernel
driver reads this counter and report to user space.

Refs:
https://github.com/torvalds/linux/blob/master/drivers/hwmon/npcm750-pwm-fan.c

Reviewed-by: Doug Evans <dje@google.com>
Reviewed-by: Tyrone Ting <kfting@nuvoton.com>
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Message-id: 20210311180855.149764-3-wuhaotsh@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/misc/npcm7xx_mft.h | 70 +++++
 hw/misc/npcm7xx_mft.c | 540 ++++++++++++++++++++++++++++++++++
 hw/misc/meson.build | 1 +
 hw/misc/trace-events | 8 +
 4 files changed, 619 insertions(+)
 create mode 100644 include/hw/misc/npcm7xx_mft.h
 create mode 100644 hw/misc/npcm7xx_mft.c

diff --git a/include/hw/misc/npcm7xx_mft.h b/include/hw/misc/npcm7xx_mft.h

From: Philippe Mathieu-Daudé <philmd@redhat.com>

In preparation for supporting TCG disablement on ARM, we move most
of TCG related v7m/v8m helpers and APIs into their own file.

Note: It is easier to review this commit using the 'histogram'
diff algorithm:

  $ git diff --diff-algorithm=histogram ...
or
  $ git diff --histogram ...

Suggested-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190702144335.10717-2-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/Makefile.objs | 1 +
 target/arm/helper.c | 2638 +------------------------------------
 target/arm/m_helper.c | 2676 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 2681 insertions(+), 2634 deletions(-)
 create mode 100644 target/arm/m_helper.c

diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -XXX,XX +XXX,XX @@ obj-y += tlb_helper.o debug_helper.o
 obj-y += translate.o op_helper.o
 obj-y += crypto_helper.o
 obj-y += iwmmxt_helper.o vec_helper.o neon_helper.o
+obj-y += m_helper.o

 obj-$(CONFIG_SOFTMMU) += psci.o

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/crc32c.h"
 #include "qemu/qemu-print.h"
 #include "exec/exec-all.h"
-#include "exec/cpu_ldst.h"
 #include <zlib.h> /* For crc32 */
 #include "hw/semihosting/semihost.h"
 #include "sysemu/cpus.h"
@@ -XXX,XX +XXX,XX @@
 #include "qemu/guest-random.h"
 #ifdef CONFIG_TCG
 #include "arm_ldst.h"
+#include "exec/cpu_ldst.h"
 #endif

 #define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(rbit)(uint32_t x)

 #ifdef CONFIG_USER_ONLY

-/* These should probably raise undefined insn exceptions. */
-void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    cpu_abort(CPU(cpu), "v7m_msr %d\n", reg);
-}
-
-uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    cpu_abort(CPU(cpu), "v7m_mrs %d\n", reg);
-    return 0;
-}
-
-void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
-{
-    /* translate.c should never generate calls here in user-only mode */
-    g_assert_not_reached();
-}
-
-void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
-{
-    /* translate.c should never generate calls here in user-only mode */
-    g_assert_not_reached();
-}
-
-void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
-{
-    /* translate.c should never generate calls here in user-only mode */
-    g_assert_not_reached();
-}
-
-void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
-{
-    /* translate.c should never generate calls here in user-only mode */
-    g_assert_not_reached();
-}
-
-void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
-{
-    /* translate.c should never generate calls here in user-only mode */
-    g_assert_not_reached();
-}
-
-uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
-{
-    /*
-     * The TT instructions can be used by unprivileged code, but in
-     * user-only emulation we don't have the MPU.
-     * Luckily since we know we are NonSecure unprivileged (and that in
-     * turn means that the A flag wasn't specified), all the bits in the
-     * register must be zero:
-     *  IREGION: 0 because IRVALID is 0
-     *  IRVALID: 0 because NS
-     *  S: 0 because NS
-     *  NSRW: 0 because NS
-     *  NSR: 0 because NS
-     *  RW: 0 because unpriv and A flag not set
-     *  R: 0 because unpriv and A flag not set
-     *  SRVALID: 0 because NS
-     *  MRVALID: 0 because unpriv and A flag not set
-     *  SREGION: 0 becaus SRVALID is 0
-     *  MREGION: 0 because MRVALID is 0
-     */
-    return 0;
-}
-
 static void switch_mode(CPUARMState *env, int mode)
 {
     ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(int idx)
     }
 }

-/*
- * What kind of stack write are we doing? This affects how exceptions
- * generated during the stacking are treated.
- */
-typedef enum StackingMode {
-    STACK_NORMAL,
-    STACK_IGNFAULTS,
-    STACK_LAZYFP,
-} StackingMode;
-
-static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
-                            ARMMMUIdx mmu_idx, StackingMode mode)
-{
-    CPUState *cs = CPU(cpu);
-    CPUARMState *env = &cpu->env;
-    MemTxAttrs attrs = {};
-    MemTxResult txres;
-    target_ulong page_size;
-    hwaddr physaddr;
-    int prot;
-    ARMMMUFaultInfo fi = {};
-    bool secure = mmu_idx & ARM_MMU_IDX_M_S;
-    int exc;
-    bool exc_secure;
-
-    if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr,
-                      &attrs, &prot, &page_size, &fi, NULL)) {
-        /* MPU/SAU lookup failed */
-        if (fi.type == ARMFault_QEMU_SFault) {
-            if (mode == STACK_LAZYFP) {
-                qemu_log_mask(CPU_LOG_INT,
-                              "...SecureFault with SFSR.LSPERR "
-                              "during lazy stacking\n");
-                env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
-            } else {
-                qemu_log_mask(CPU_LOG_INT,
-                              "...SecureFault with SFSR.AUVIOL "
-                              "during stacking\n");
-                env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
-            }
-            env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
-            env->v7m.sfar = addr;
-            exc = ARMV7M_EXCP_SECURE;
-            exc_secure = false;
-        } else {
-            if (mode == STACK_LAZYFP) {
-                qemu_log_mask(CPU_LOG_INT,
-                              "...MemManageFault with CFSR.MLSPERR\n");
-                env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
-            } else {
-                qemu_log_mask(CPU_LOG_INT,
-                              "...MemManageFault with CFSR.MSTKERR\n");
-                env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
-            }
-            exc = ARMV7M_EXCP_MEM;
-            exc_secure = secure;
-        }
-        goto pend_fault;
-    }
-    address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value,
-                         attrs, &txres);
-    if (txres != MEMTX_OK) {
-        /* BusFault trying to write the data */
-        if (mode == STACK_LAZYFP) {
-            qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
-            env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
-        } else {
-            qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
-            env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
-        }
-        exc = ARMV7M_EXCP_BUS;
-        exc_secure = false;
-        goto pend_fault;
-    }
-    return true;
-
-pend_fault:
-    /*
-     * By pending the exception at this point we are making
-     * the IMPDEF choice "overridden exceptions pended" (see the
-     * MergeExcInfo() pseudocode). The other choice would be to not
-     * pend them now and then make a choice about which to throw away
-     * later if we have two derived exceptions.
-     * The only case when we must not pend the exception but instead
-     * throw it away is if we are doing the push of the callee registers
-     * and we've already generated a derived exception (this is indicated
-     * by the caller passing STACK_IGNFAULTS). Even in this case we will
-     * still update the fault status registers.
-     */
-    switch (mode) {
-    case STACK_NORMAL:
-        armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
-        break;
-    case STACK_LAZYFP:
-        armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
-        break;
-    case STACK_IGNFAULTS:
-        break;
-    }
-    return false;
-}
-
-static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
-                           ARMMMUIdx mmu_idx)
-{
-    CPUState *cs = CPU(cpu);
-    CPUARMState *env = &cpu->env;
-    MemTxAttrs attrs = {};
-    MemTxResult txres;
-    target_ulong page_size;
-    hwaddr physaddr;
-    int prot;
-    ARMMMUFaultInfo fi = {};
-    bool secure = mmu_idx & ARM_MMU_IDX_M_S;
-    int exc;
-    bool exc_secure;
-    uint32_t value;
-
-    if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
-                      &attrs, &prot, &page_size, &fi, NULL)) {
-        /* MPU/SAU lookup failed */
-        if (fi.type == ARMFault_QEMU_SFault) {
-            qemu_log_mask(CPU_LOG_INT,
-                          "...SecureFault with SFSR.AUVIOL during unstack\n");
-            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
-            env->v7m.sfar = addr;
-            exc = ARMV7M_EXCP_SECURE;
-            exc_secure = false;
-        } else {
-            qemu_log_mask(CPU_LOG_INT,
-                          "...MemManageFault with CFSR.MUNSTKERR\n");
-            env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK;
-            exc = ARMV7M_EXCP_MEM;
-            exc_secure = secure;
-        }
-        goto pend_fault;
-    }
-
-    value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
-                              attrs, &txres);
-    if (txres != MEMTX_OK) {
-        /* BusFault trying to read the data */
-        qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
-        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK;
-        exc = ARMV7M_EXCP_BUS;
-        exc_secure = false;
-        goto pend_fault;
-    }
-
-    *dest = value;
-    return true;
-
-pend_fault:
-    /*
-     * By pending the exception at this point we are making
-     * the IMPDEF choice "overridden exceptions pended" (see the
-     * MergeExcInfo() pseudocode). The other choice would be to not
-     * pend them now and then make a choice about which to throw away
-     * later if we have two derived exceptions.
-     */
-    armv7m_nvic_set_pending(env->nvic, exc, exc_secure);
-    return false;
-}
-
-void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
-{
-    /*
-     * Preserve FP state (because LSPACT was set and we are about
-     * to execute an FP instruction). This corresponds to the
-     * PreserveFPState() pseudocode.
-     * We may throw an exception if the stacking fails.
-     */
-    ARMCPU *cpu = env_archcpu(env);
-    bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
-    bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
-    bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
-    bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
-    uint32_t fpcar = env->v7m.fpcar[is_secure];
-    bool stacked_ok = true;
-    bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
-    bool take_exception;
-
-    /* Take the iothread lock as we are going to touch the NVIC */
-    qemu_mutex_lock_iothread();
-
-    /* Check the background context had access to the FPU */
-    if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
-        armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
-        env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
-        stacked_ok = false;
-    } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
-        armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
-        env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
-        stacked_ok = false;
-    }
-
-    if (!splimviol && stacked_ok) {
-        /* We only stack if the stack limit wasn't violated */
-        int i;
-        ARMMMUIdx mmu_idx;
-
-        mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
-        for (i = 0; i < (ts ? 32 : 16); i += 2) {
-            uint64_t dn = *aa32_vfp_dreg(env, i / 2);
-            uint32_t faddr = fpcar + 4 * i;
-            uint32_t slo = extract64(dn, 0, 32);
-            uint32_t shi = extract64(dn, 32, 32);
-
-            if (i >= 16) {
-                faddr += 8; /* skip the slot for the FPSCR */
-            }
-            stacked_ok = stacked_ok &&
-                v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
-                v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
-        }
-
-        stacked_ok = stacked_ok &&
-            v7m_stack_write(cpu, fpcar + 0x40,
-                            vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
-    }
-
-    /*
-     * We definitely pended an exception, but it's possible that it
-     * might not be able to be taken now. If its priority permits us
-     * to take it now, then we must not update the LSPACT or FP regs,
-     * but instead jump out to take the exception immediately.
-     * If it's just pending and won't be taken until the current
-     * handler exits, then we do update LSPACT and the FP regs.
-     */
-    take_exception = !stacked_ok &&
-        armv7m_nvic_can_take_pending_exception(env->nvic);
-
-    qemu_mutex_unlock_iothread();
-
-    if (take_exception) {
-        raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
-    }
-
-    env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
-
-    if (ts) {
-        /* Clear s0 to s31 and the FPSCR */
-        int i;
-
-        for (i = 0; i < 32; i += 2) {
-            *aa32_vfp_dreg(env, i / 2) = 0;
-        }
-        vfp_set_fpscr(env, 0);
-    }
-    /*
-     * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
-     * unchanged.
-     */
-}
-
-/*
- * Write to v7M CONTROL.SPSEL bit for the specified security bank.
- * This may change the current stack pointer between Main and Process
- * stack pointers if it is done for the CONTROL register for the current
- * security state.
- */
-static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
-                                                 bool new_spsel,
-                                                 bool secstate)
-{
-    bool old_is_psp = v7m_using_psp(env);
-
-    env->v7m.control[secstate] =
-        deposit32(env->v7m.control[secstate],
-                  R_V7M_CONTROL_SPSEL_SHIFT,
-                  R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
-
-    if (secstate == env->v7m.secure) {
-        bool new_is_psp = v7m_using_psp(env);
-        uint32_t tmp;
-
-        if (old_is_psp != new_is_psp) {
-            tmp = env->v7m.other_sp;
-            env->v7m.other_sp = env->regs[13];
-            env->regs[13] = tmp;
-        }
-    }
-}
-
-/*
- * Write to v7M CONTROL.SPSEL bit. This may change the current
- * stack pointer between Main and Process stack pointers.
- */
-static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
-{
-    write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
-}
-
-void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
-{
-    /*
-     * Write a new value to v7m.exception, thus transitioning into or out
-     * of Handler mode; this may result in a change of active stack pointer.
-     */
-    bool new_is_psp, old_is_psp = v7m_using_psp(env);
-    uint32_t tmp;
-
-    env->v7m.exception = new_exc;
-
-    new_is_psp = v7m_using_psp(env);
-
-    if (old_is_psp != new_is_psp) {
-        tmp = env->v7m.other_sp;
-        env->v7m.other_sp = env->regs[13];
-        env->regs[13] = tmp;
-    }
-}
-
-/* Switch M profile security state between NS and S */
-static void switch_v7m_security_state(CPUARMState *env, bool new_secstate)
-{
-    uint32_t new_ss_msp, new_ss_psp;
-
-    if (env->v7m.secure == new_secstate) {
-        return;
-    }
-
-    /*
-     * All the banked state is accessed by looking at env->v7m.secure
-     * except for the stack pointer; rearrange the SP appropriately.
-     */
-    new_ss_msp = env->v7m.other_ss_msp;
-    new_ss_psp = env->v7m.other_ss_psp;
-
-    if (v7m_using_psp(env)) {
-        env->v7m.other_ss_psp = env->regs[13];
-        env->v7m.other_ss_msp = env->v7m.other_sp;
-    } else {
-        env->v7m.other_ss_msp = env->regs[13];
-        env->v7m.other_ss_psp = env->v7m.other_sp;
-    }
-
-    env->v7m.secure = new_secstate;
-
-    if (v7m_using_psp(env)) {
-        env->regs[13] = new_ss_psp;
-        env->v7m.other_sp = new_ss_msp;
-    } else {
-        env->regs[13] = new_ss_msp;
-        env->v7m.other_sp = new_ss_psp;
-    }
-}
-
-void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
-{
-    /*
-     * Handle v7M BXNS:
-     *  - if the return value is a magic value, do exception return (like BX)
-     *  - otherwise bit 0 of the return value is the target security state
-     */
-    uint32_t min_magic;
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        /* Covers FNC_RETURN and EXC_RETURN magic */
-        min_magic = FNC_RETURN_MIN_MAGIC;
-    } else {
-        /* EXC_RETURN magic only */
-        min_magic = EXC_RETURN_MIN_MAGIC;
-    }
-
-    if (dest >= min_magic) {
-        /*
-         * This is an exception return magic value; put it where
-         * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
-         * Note that if we ever add gen_ss_advance() singlestep support to
-         * M profile this should count as an "instruction execution complete"
-         * event (compare gen_bx_excret_final_code()).
-         */
-        env->regs[15] = dest & ~1;
-        env->thumb = dest & 1;
-        HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT);
-        /* notreached */
-    }
-
-    /* translate.c should have made BXNS UNDEF unless we're secure */
-    assert(env->v7m.secure);
-
-    if (!(dest & 1)) {
-        env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
-    }
-    switch_v7m_security_state(env, dest & 1);
-    env->thumb = 1;
-    env->regs[15] = dest & ~1;
-}
-
-void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
-{
-    /*
-     * Handle v7M BLXNS:
-     *  - bit 0 of the destination address is the target security state
-     */
-
-    /* At this point regs[15] is the address just after the BLXNS */
-    uint32_t nextinst = env->regs[15] | 1;
-    uint32_t sp = env->regs[13] - 8;
-    uint32_t saved_psr;
-
-    /* translate.c will have made BLXNS UNDEF unless we're secure */
-    assert(env->v7m.secure);
-
-    if (dest & 1) {
-        /*
-         * Target is Secure, so this is just a normal BLX,
-         * except that the low bit doesn't indicate Thumb/not.
-         */
-        env->regs[14] = nextinst;
-        env->thumb = 1;
-        env->regs[15] = dest & ~1;
-        return;
-    }
-
-    /* Target is non-secure: first push a stack frame */
-    if (!QEMU_IS_ALIGNED(sp, 8)) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "BLXNS with misaligned SP is UNPREDICTABLE\n");
-    }
-
-    if (sp < v7m_sp_limit(env)) {
-        raise_exception(env, EXCP_STKOF, 0, 1);
-    }
-
-    saved_psr = env->v7m.exception;
-    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
-        saved_psr |= XPSR_SFPA;
-    }
-
-    /* Note that these stores can throw exceptions on MPU faults */
-    cpu_stl_data(env, sp, nextinst);
-    cpu_stl_data(env, sp + 4, saved_psr);
-
-    env->regs[13] = sp;
-    env->regs[14] = 0xfeffffff;
-    if (arm_v7m_is_handler_mode(env)) {
-        /*
-         * Write a dummy value to IPSR, to avoid leaking the current secure
-         * exception number to non-secure code. This is guaranteed not
-         * to cause write_v7m_exception() to actually change stacks.
-         */
-        write_v7m_exception(env, 1);
-    }
-    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
-    switch_v7m_security_state(env, 0);
-    env->thumb = 1;
-    env->regs[15] = dest;
-}
-
-static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
-                                bool spsel)
-{
-    /*
-     * Return a pointer to the location where we currently store the
-     * stack pointer for the requested security state and thread mode.
-     * This pointer will become invalid if the CPU state is updated
-     * such that the stack pointers are switched around (eg changing
-     * the SPSEL control bit).
-     * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
-     * Unlike that pseudocode, we require the caller to pass us in the
-     * SPSEL control bit value; this is because we also use this
-     * function in handling of pushing of the callee-saves registers
-     * part of the v8M stack frame (pseudocode PushCalleeStack()),
-     * and in the tailchain codepath the SPSEL bit comes from the exception
-     * return magic LR value from the previous exception. The pseudocode
-     * opencodes the stack-selection in PushCalleeStack(), but we prefer
-     * to make this utility function generic enough to do the job.
-     */
-    bool want_psp = threadmode && spsel;
-
-    if (secure == env->v7m.secure) {
-        if (want_psp == v7m_using_psp(env)) {
-            return &env->regs[13];
-        } else {
-            return &env->v7m.other_sp;
-        }
-    } else {
-        if (want_psp) {
-            return &env->v7m.other_ss_psp;
-        } else {
-            return &env->v7m.other_ss_msp;
-        }
-    }
-}
-
-static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
-                                uint32_t *pvec)
-{
-    CPUState *cs = CPU(cpu);
-    CPUARMState *env = &cpu->env;
-    MemTxResult result;
-    uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4;
-    uint32_t vector_entry;
-    MemTxAttrs attrs = {};
-    ARMMMUIdx mmu_idx;
-    bool exc_secure;
-
-    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
-
-    /*
-     * We don't do a get_phys_addr() here because the rules for vector
-     * loads are special: they always use the default memory map, and
-     * the default memory map permits reads from all addresses.
-     * Since there's no easy way to pass through to pmsav8_mpu_lookup()
-     * that we want this special case which would always say "yes",
-     * we just do the SAU lookup here followed by a direct physical load.
-     */
-    attrs.secure = targets_secure;
-    attrs.user = false;
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        V8M_SAttributes sattrs = {};
-
-        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
-        if (sattrs.ns) {
-            attrs.secure = false;
-        } else if (!targets_secure) {
-            /* NS access to S memory */
-            goto load_fail;
-        }
-    }
-
-    vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
-                                     attrs, &result);
-    if (result != MEMTX_OK) {
-        goto load_fail;
-    }
-    *pvec = vector_entry;
-    return true;
-
-load_fail:
-    /*
-     * All vector table fetch fails are reported as HardFault, with
-     * HFSR.VECTTBL and .FORCED set. (FORCED is set because
-     * technically the underlying exception is a MemManage or BusFault
-     * that is escalated to HardFault.) This is a terminal exception,
-     * so we will either take the HardFault immediately or else enter
-     * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
-     */
-    exc_secure = targets_secure ||
-        !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
-    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
-    armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
-    return false;
-}
-
-static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
-{
-    /*
-     * Return the integrity signature value for the callee-saves
-     * stack frame section. @lr is the exception return payload/LR value
-     * whose FType bit forms bit 0 of the signature if FP is present.
-     */
-    uint32_t sig = 0xfefa125a;
-
-    if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
-        sig |= 1;
-    }
-    return sig;
-}
-
-static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
-                                  bool ignore_faults)
-{
-    /*
-     * For v8M, push the callee-saves register part of the stack frame.
-     * Compare the v8M pseudocode PushCalleeStack().
-     * In the tailchaining case this may not be the current stack.
-     */
-    CPUARMState *env = &cpu->env;
-    uint32_t *frame_sp_p;
-    uint32_t frameptr;
-    ARMMMUIdx mmu_idx;
-    bool stacked_ok;
-    uint32_t limit;
-    bool want_psp;
-    uint32_t sig;
-    StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
-
-    if (dotailchain) {
-        bool mode = lr & R_V7M_EXCRET_MODE_MASK;
-        bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) ||
-            !mode;
-
-        mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv);
-        frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode,
-                                    lr & R_V7M_EXCRET_SPSEL_MASK);
-        want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK);
-        if (want_psp) {
-            limit = env->v7m.psplim[M_REG_S];
-        } else {
-            limit = env->v7m.msplim[M_REG_S];
-        }
-    } else {
-        mmu_idx = arm_mmu_idx(env);
-        frame_sp_p = &env->regs[13];
-        limit = v7m_sp_limit(env);
-    }
-
-    frameptr = *frame_sp_p - 0x28;
-    if (frameptr < limit) {
-        /*
-         * Stack limit failure: set SP to the limit value, and generate
-         * STKOF UsageFault. Stack pushes below the limit must not be
-         * performed. It is IMPDEF whether pushes above the limit are
-         * performed; we choose not to.
-         */
-        qemu_log_mask(CPU_LOG_INT,
-                      "...STKOF during callee-saves register stacking\n");
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
-                                env->v7m.secure);
-        *frame_sp_p = limit;
-        return true;
-    }
-
-    /*
-     * Write as much of the stack frame as we can. A write failure may
-     * cause us to pend a derived exception.
-     */
-    sig = v7m_integrity_sig(env, lr);
-    stacked_ok =
-        v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
-        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
-
-    /* Update SP regardless of whether any of the stack accesses failed. */
-    *frame_sp_p = frameptr;
-
-    return !stacked_ok;
-}
-
-static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
-                                bool ignore_stackfaults)
-{
-    /*
-     * Do the "take the exception" parts of exception entry,
-     * but not the pushing of state to the stack. This is
-     * similar to the pseudocode ExceptionTaken() function.
-     */
-    CPUARMState *env = &cpu->env;
-    uint32_t addr;
-    bool targets_secure;
-    int exc;
-    bool push_failed = false;
-
-    armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
-    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
-                  targets_secure ? "secure" : "nonsecure", exc);
-
-    if (dotailchain) {
-        /* Sanitize LR FType and PREFIX bits */
-        if (!arm_feature(env, ARM_FEATURE_VFP)) {
-            lr |= R_V7M_EXCRET_FTYPE_MASK;
-        }
-        lr = deposit32(lr, 24, 8, 0xff);
-    }
-
-    if (arm_feature(env, ARM_FEATURE_V8)) {
-        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
-            (lr & R_V7M_EXCRET_S_MASK)) {
-            /*
-             * The background code (the owner of the registers in the
-             * exception frame) is Secure. This means it may either already
-             * have or now needs to push callee-saves registers.
-             */
-            if (targets_secure) {
-                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
-                    /*
-                     * We took an exception from Secure to NonSecure
-                     * (which means the callee-saved registers got stacked)
-                     * and are now tailchaining to a Secure exception.
-                     * Clear DCRS so eventual return from this Secure
-                     * exception unstacks the callee-saved registers.
-                     */
-                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
-                }
-            } else {
-                /*
-                 * We're going to a non-secure exception; push the
-                 * callee-saves registers to the stack now, if they're
-                 * not already saved.
-                 */
-                if (lr & R_V7M_EXCRET_DCRS_MASK &&
-                    !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) {
-                    push_failed = v7m_push_callee_stack(cpu, lr, dotailchain,
-                                                        ignore_stackfaults);
-                }
-                lr |= R_V7M_EXCRET_DCRS_MASK;
-            }
-        }
-
-        lr &= ~R_V7M_EXCRET_ES_MASK;
-        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            lr |= R_V7M_EXCRET_ES_MASK;
-        }
-        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
-        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
-            lr |= R_V7M_EXCRET_SPSEL_MASK;
-        }
-
-        /*
-         * Clear registers if necessary to prevent non-secure exception
-         * code being able to see register values from secure code.
-         * Where register values become architecturally UNKNOWN we leave
-         * them with their previous values.
-         */
-        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            if (!targets_secure) {
-                /*
-                 * Always clear the caller-saved registers (they have been
-                 * pushed to the stack earlier in v7m_push_stack()).
-                 * Clear callee-saved registers if the background code is
-                 * Secure (in which case these regs were saved in
-                 * v7m_push_callee_stack()).
-                 */
-                int i;
-
-                for (i = 0; i < 13; i++) {
-                    /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
-                    if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
-                        env->regs[i] = 0;
-                    }
-                }
-                /* Clear EAPSR */
-                xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
-            }
-        }
-    }
-
-    if (push_failed && !ignore_stackfaults) {
-        /*
-         * Derived exception on callee-saves register stacking:
-         * we might now want to take a different exception which
-         * targets a different security state, so try again from the top.
-         */
-        qemu_log_mask(CPU_LOG_INT,
-                      "...derived exception on callee-saves register stacking");
-        v7m_exception_taken(cpu, lr, true, true);
-        return;
-    }
-
-    if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
-        /* Vector load failed: derived exception */
-        qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
-        v7m_exception_taken(cpu, lr, true, true);
-        return;
-    }
-
-    /*
-     * Now we've done everything that might cause a derived exception
-     * we can go ahead and activate whichever exception we're going to
-     * take (which might now be the derived exception).
-     */
-    armv7m_nvic_acknowledge_irq(env->nvic);
-
-    /* Switch to target security state -- must do this before writing SPSEL */
-    switch_v7m_security_state(env, targets_secure);
903
- write_v7m_control_spsel(env, 0);
904
- arm_clear_exclusive(env);
905
- /* Clear SFPA and FPCA (has no effect if no FPU) */
906
- env->v7m.control[M_REG_S] &=
907
- ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
908
- /* Clear IT bits */
909
- env->condexec_bits = 0;
910
- env->regs[14] = lr;
911
- env->regs[15] = addr & 0xfffffffe;
912
- env->thumb = addr & 1;
913
-}
914
-
915
-static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
916
- bool apply_splim)
917
-{
918
- /*
919
- * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
920
- * that we will need later in order to do lazy FP reg stacking.
921
- */
922
- bool is_secure = env->v7m.secure;
923
- void *nvic = env->nvic;
924
- /*
925
- * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
926
- * are banked and we want to update the bit in the bank for the
927
- * current security state; and in one case we want to specifically
928
- * update the NS banked version of a bit even if we are secure.
929
- */
930
- uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
931
- uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
932
- uint32_t *fpccr = &env->v7m.fpccr[is_secure];
933
- bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
934
-
935
- env->v7m.fpcar[is_secure] = frameptr & ~0x7;
936
-
937
- if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
938
- bool splimviol;
939
- uint32_t splim = v7m_sp_limit(env);
940
- bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
941
- (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
942
-
943
- splimviol = !ign && frameptr < splim;
944
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
945
- }
946
-
947
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
948
-
949
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
950
-
951
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
952
-
953
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
954
- !arm_v7m_is_handler_mode(env));
955
-
956
- hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
957
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
958
-
959
- bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
960
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
961
-
962
- mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
963
- *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
964
-
965
- ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
966
- *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
967
-
968
- monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
969
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
970
-
971
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
972
- s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
973
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
974
-
975
- sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
976
- *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
977
- }
978
-}
979
-
980
-void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
981
-{
982
- /* fptr is the value of Rn, the frame pointer we store the FP regs to */
983
- bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
984
- bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
985
-
986
- assert(env->v7m.secure);
987
-
988
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
989
- return;
990
- }
991
-
992
- /* Check access to the coprocessor is permitted */
993
- if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
994
- raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
995
- }
996
-
997
- if (lspact) {
998
- /* LSPACT should not be active when there is active FP state */
999
- raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
1000
- }
1001
-
1002
- if (fptr & 7) {
1003
- raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
1004
- }
1005
-
1006
- /*
1007
- * Note that we do not use v7m_stack_write() here, because the
1008
- * accesses should not set the FSR bits for stacking errors if they
1009
- * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
1010
- * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
1011
- * and longjmp out.
1012
- */
1013
- if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
1014
- bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
1015
- int i;
1016
-
1017
- for (i = 0; i < (ts ? 32 : 16); i += 2) {
1018
- uint64_t dn = *aa32_vfp_dreg(env, i / 2);
1019
- uint32_t faddr = fptr + 4 * i;
1020
- uint32_t slo = extract64(dn, 0, 32);
1021
- uint32_t shi = extract64(dn, 32, 32);
1022
-
1023
- if (i >= 16) {
1024
- faddr += 8; /* skip the slot for the FPSCR */
1025
- }
1026
- cpu_stl_data(env, faddr, slo);
1027
- cpu_stl_data(env, faddr + 4, shi);
1028
- }
1029
- cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
1030
-
1031
- /*
1032
- * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
1033
- * leave them unchanged, matching our choice in v7m_preserve_fp_state.
1034
- */
1035
- if (ts) {
1036
- for (i = 0; i < 32; i += 2) {
1037
- *aa32_vfp_dreg(env, i / 2) = 0;
1038
- }
1039
- vfp_set_fpscr(env, 0);
1040
- }
1041
- } else {
1042
- v7m_update_fpccr(env, fptr, false);
1043
- }
1044
-
1045
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
1046
-}
1047
-
1048
-void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
1049
-{
1050
- /* fptr is the value of Rn, the frame pointer we load the FP regs from */
1051
- assert(env->v7m.secure);
1052
-
1053
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
1054
- return;
1055
- }
1056
-
1057
- /* Check access to the coprocessor is permitted */
1058
- if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
1059
- raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
1060
- }
1061
-
1062
- if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
1063
- /* State in FP is still valid */
1064
- env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
1065
- } else {
1066
- bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
1067
- int i;
1068
- uint32_t fpscr;
1069
-
1070
- if (fptr & 7) {
1071
- raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
1072
- }
1073
-
1074
- for (i = 0; i < (ts ? 32 : 16); i += 2) {
1075
- uint32_t slo, shi;
1076
- uint64_t dn;
1077
- uint32_t faddr = fptr + 4 * i;
1078
-
1079
- if (i >= 16) {
1080
- faddr += 8; /* skip the slot for the FPSCR */
1081
- }
1082
-
1083
- slo = cpu_ldl_data(env, faddr);
1084
- shi = cpu_ldl_data(env, faddr + 4);
1085
-
1086
- dn = (uint64_t) shi << 32 | slo;
1087
- *aa32_vfp_dreg(env, i / 2) = dn;
1088
- }
1089
- fpscr = cpu_ldl_data(env, fptr + 0x40);
1090
- vfp_set_fpscr(env, fpscr);
1091
- }
1092
-
1093
- env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
1094
-}
1095
-
1096
-static bool v7m_push_stack(ARMCPU *cpu)
1097
-{
1098
- /*
1099
- * Do the "set up stack frame" part of exception entry,
1100
- * similar to pseudocode PushStack().
1101
- * Return true if we generate a derived exception (and so
1102
- * should ignore further stack faults trying to process
1103
- * that derived exception.)
1104
- */
1105
- bool stacked_ok = true, limitviol = false;
1106
- CPUARMState *env = &cpu->env;
1107
- uint32_t xpsr = xpsr_read(env);
1108
- uint32_t frameptr = env->regs[13];
1109
- ARMMMUIdx mmu_idx = arm_mmu_idx(env);
1110
- uint32_t framesize;
1111
- bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
1112
-
1113
- if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
1114
- (env->v7m.secure || nsacr_cp10)) {
1115
- if (env->v7m.secure &&
1116
- env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
1117
- framesize = 0xa8;
1118
- } else {
1119
- framesize = 0x68;
1120
- }
1121
- } else {
1122
- framesize = 0x20;
1123
- }
1124
-
1125
- /* Align stack pointer if the guest wants that */
1126
- if ((frameptr & 4) &&
1127
- (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
1128
- frameptr -= 4;
1129
- xpsr |= XPSR_SPREALIGN;
1130
- }
1131
-
1132
- xpsr &= ~XPSR_SFPA;
1133
- if (env->v7m.secure &&
1134
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
1135
- xpsr |= XPSR_SFPA;
1136
- }
1137
-
1138
- frameptr -= framesize;
1139
-
1140
- if (arm_feature(env, ARM_FEATURE_V8)) {
1141
- uint32_t limit = v7m_sp_limit(env);
1142
-
1143
- if (frameptr < limit) {
1144
- /*
1145
- * Stack limit failure: set SP to the limit value, and generate
1146
- * STKOF UsageFault. Stack pushes below the limit must not be
1147
- * performed. It is IMPDEF whether pushes above the limit are
1148
- * performed; we choose not to.
1149
- */
1150
- qemu_log_mask(CPU_LOG_INT,
1151
- "...STKOF during stacking\n");
1152
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
1153
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1154
- env->v7m.secure);
1155
- env->regs[13] = limit;
1156
- /*
1157
- * We won't try to perform any further memory accesses but
1158
- * we must continue through the following code to check for
1159
- * permission faults during FPU state preservation, and we
1160
- * must update FPCCR if lazy stacking is enabled.
1161
- */
1162
- limitviol = true;
1163
- stacked_ok = false;
1164
- }
1165
- }
1166
-
1167
- /*
1168
- * Write as much of the stack frame as we can. If we fail a stack
1169
- * write this will result in a derived exception being pended
1170
- * (which may be taken in preference to the one we started with
1171
- * if it has higher priority).
1172
- */
1173
- stacked_ok = stacked_ok &&
1174
- v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
1175
- v7m_stack_write(cpu, frameptr + 4, env->regs[1],
1176
- mmu_idx, STACK_NORMAL) &&
1177
- v7m_stack_write(cpu, frameptr + 8, env->regs[2],
1178
- mmu_idx, STACK_NORMAL) &&
1179
- v7m_stack_write(cpu, frameptr + 12, env->regs[3],
1180
- mmu_idx, STACK_NORMAL) &&
1181
- v7m_stack_write(cpu, frameptr + 16, env->regs[12],
1182
- mmu_idx, STACK_NORMAL) &&
1183
- v7m_stack_write(cpu, frameptr + 20, env->regs[14],
1184
- mmu_idx, STACK_NORMAL) &&
1185
- v7m_stack_write(cpu, frameptr + 24, env->regs[15],
1186
- mmu_idx, STACK_NORMAL) &&
1187
- v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
1188
-
1189
- if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
1190
- /* FPU is active, try to save its registers */
1191
- bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
1192
- bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
1193
-
1194
- if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1195
- qemu_log_mask(CPU_LOG_INT,
1196
- "...SecureFault because LSPACT and FPCA both set\n");
1197
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1198
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1199
- } else if (!env->v7m.secure && !nsacr_cp10) {
1200
- qemu_log_mask(CPU_LOG_INT,
1201
- "...Secure UsageFault with CFSR.NOCP because "
1202
- "NSACR.CP10 prevents stacking FP regs\n");
1203
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
1204
- env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
1205
- } else {
1206
- if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
1207
- /* Lazy stacking disabled, save registers now */
1208
- int i;
1209
- bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
1210
- arm_current_el(env) != 0);
1211
-
1212
- if (stacked_ok && !cpacr_pass) {
1213
- /*
1214
- * Take UsageFault if CPACR forbids access. The pseudocode
1215
- * here does a full CheckCPEnabled() but we know the NSACR
1216
- * check can never fail as we have already handled that.
1217
- */
1218
- qemu_log_mask(CPU_LOG_INT,
1219
- "...UsageFault with CFSR.NOCP because "
1220
- "CPACR.CP10 prevents stacking FP regs\n");
1221
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1222
- env->v7m.secure);
1223
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
1224
- stacked_ok = false;
1225
- }
1226
-
1227
- for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
1228
- uint64_t dn = *aa32_vfp_dreg(env, i / 2);
1229
- uint32_t faddr = frameptr + 0x20 + 4 * i;
1230
- uint32_t slo = extract64(dn, 0, 32);
1231
- uint32_t shi = extract64(dn, 32, 32);
1232
-
1233
- if (i >= 16) {
1234
- faddr += 8; /* skip the slot for the FPSCR */
1235
- }
1236
- stacked_ok = stacked_ok &&
1237
- v7m_stack_write(cpu, faddr, slo,
1238
- mmu_idx, STACK_NORMAL) &&
1239
- v7m_stack_write(cpu, faddr + 4, shi,
1240
- mmu_idx, STACK_NORMAL);
1241
- }
1242
- stacked_ok = stacked_ok &&
1243
- v7m_stack_write(cpu, frameptr + 0x60,
1244
- vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
1245
- if (cpacr_pass) {
1246
- for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
1247
- *aa32_vfp_dreg(env, i / 2) = 0;
1248
- }
1249
- vfp_set_fpscr(env, 0);
1250
- }
1251
- } else {
1252
- /* Lazy stacking enabled, save necessary info to stack later */
1253
- v7m_update_fpccr(env, frameptr + 0x20, true);
1254
- }
1255
- }
1256
- }
1257
-
1258
- /*
1259
- * If we broke a stack limit then SP was already updated earlier;
1260
- * otherwise we update SP regardless of whether any of the stack
1261
- * accesses failed or we took some other kind of fault.
1262
- */
1263
- if (!limitviol) {
1264
- env->regs[13] = frameptr;
1265
- }
1266
-
1267
- return !stacked_ok;
1268
-}
1269
-
1270
-static void do_v7m_exception_exit(ARMCPU *cpu)
1271
-{
1272
- CPUARMState *env = &cpu->env;
1273
- uint32_t excret;
1274
- uint32_t xpsr, xpsr_mask;
1275
- bool ufault = false;
1276
- bool sfault = false;
1277
- bool return_to_sp_process;
1278
- bool return_to_handler;
1279
- bool rettobase = false;
1280
- bool exc_secure = false;
1281
- bool return_to_secure;
1282
- bool ftype;
1283
- bool restore_s16_s31;
1284
-
1285
- /*
1286
- * If we're not in Handler mode then jumps to magic exception-exit
1287
- * addresses don't have magic behaviour. However for the v8M
1288
- * security extensions the magic secure-function-return has to
1289
- * work in thread mode too, so to avoid doing an extra check in
1290
- * the generated code we allow exception-exit magic to also cause the
1291
- * internal exception and bring us here in thread mode. Correct code
1292
- * will never try to do this (the following insn fetch will always
1293
- * fault) so we the overhead of having taken an unnecessary exception
1294
- * doesn't matter.
1295
- */
1296
- if (!arm_v7m_is_handler_mode(env)) {
1297
- return;
1298
- }
1299
-
1300
- /*
1301
- * In the spec pseudocode ExceptionReturn() is called directly
1302
- * from BXWritePC() and gets the full target PC value including
1303
- * bit zero. In QEMU's implementation we treat it as a normal
1304
- * jump-to-register (which is then caught later on), and so split
1305
- * the target value up between env->regs[15] and env->thumb in
1306
- * gen_bx(). Reconstitute it.
1307
- */
1308
- excret = env->regs[15];
1309
- if (env->thumb) {
1310
- excret |= 1;
1311
- }
1312
-
1313
- qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
1314
- " previous exception %d\n",
1315
- excret, env->v7m.exception);
1316
-
1317
- if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
1318
- qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
1319
- "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
1320
- excret);
1321
- }
1322
-
1323
- ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
1324
-
1325
- if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
1326
- qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
1327
- "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
1328
- "if FPU not present\n",
1329
- excret);
1330
- ftype = true;
1331
- }
1332
-
1333
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1334
- /*
1335
- * EXC_RETURN.ES validation check (R_SMFL). We must do this before
1336
- * we pick which FAULTMASK to clear.
1337
- */
1338
- if (!env->v7m.secure &&
1339
- ((excret & R_V7M_EXCRET_ES_MASK) ||
1340
- !(excret & R_V7M_EXCRET_DCRS_MASK))) {
1341
- sfault = 1;
1342
- /* For all other purposes, treat ES as 0 (R_HXSR) */
1343
- excret &= ~R_V7M_EXCRET_ES_MASK;
1344
- }
1345
- exc_secure = excret & R_V7M_EXCRET_ES_MASK;
1346
- }
1347
-
1348
- if (env->v7m.exception != ARMV7M_EXCP_NMI) {
1349
- /*
1350
- * Auto-clear FAULTMASK on return from other than NMI.
1351
- * If the security extension is implemented then this only
1352
- * happens if the raw execution priority is >= 0; the
1353
- * value of the ES bit in the exception return value indicates
1354
- * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
1355
- */
1356
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1357
- if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
1358
- env->v7m.faultmask[exc_secure] = 0;
1359
- }
1360
- } else {
1361
- env->v7m.faultmask[M_REG_NS] = 0;
1362
- }
1363
- }
1364
-
1365
- switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
1366
- exc_secure)) {
1367
- case -1:
1368
- /* attempt to exit an exception that isn't active */
1369
- ufault = true;
1370
- break;
1371
- case 0:
1372
- /* still an irq active now */
1373
- break;
1374
- case 1:
1375
- /*
1376
- * We returned to base exception level, no nesting.
1377
- * (In the pseudocode this is written using "NestedActivation != 1"
1378
- * where we have 'rettobase == false'.)
1379
- */
1380
- rettobase = true;
1381
- break;
1382
- default:
1383
- g_assert_not_reached();
1384
- }
1385
-
1386
- return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
1387
- return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
1388
- return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
1389
- (excret & R_V7M_EXCRET_S_MASK);
1390
-
1391
- if (arm_feature(env, ARM_FEATURE_V8)) {
1392
- if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
1393
- /*
1394
- * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
1395
- * we choose to take the UsageFault.
1396
- */
1397
- if ((excret & R_V7M_EXCRET_S_MASK) ||
1398
- (excret & R_V7M_EXCRET_ES_MASK) ||
1399
- !(excret & R_V7M_EXCRET_DCRS_MASK)) {
1400
- ufault = true;
1401
- }
1402
- }
1403
- if (excret & R_V7M_EXCRET_RES0_MASK) {
1404
- ufault = true;
1405
- }
1406
- } else {
1407
- /* For v7M we only recognize certain combinations of the low bits */
1408
- switch (excret & 0xf) {
1409
- case 1: /* Return to Handler */
1410
- break;
1411
- case 13: /* Return to Thread using Process stack */
1412
- case 9: /* Return to Thread using Main stack */
1413
- /*
1414
- * We only need to check NONBASETHRDENA for v7M, because in
1415
- * v8M this bit does not exist (it is RES1).
1416
- */
1417
- if (!rettobase &&
1418
- !(env->v7m.ccr[env->v7m.secure] &
1419
- R_V7M_CCR_NONBASETHRDENA_MASK)) {
1420
- ufault = true;
1421
- }
1422
- break;
1423
- default:
1424
- ufault = true;
1425
- }
1426
- }
1427
-
1428
- /*
1429
- * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
1430
- * Handler mode (and will be until we write the new XPSR.Interrupt
1431
- * field) this does not switch around the current stack pointer.
1432
- * We must do this before we do any kind of tailchaining, including
1433
- * for the derived exceptions on integrity check failures, or we will
1434
- * give the guest an incorrect EXCRET.SPSEL value on exception entry.
1435
- */
1436
- write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
1437
-
1438
- /*
1439
- * Clear scratch FP values left in caller saved registers; this
1440
- * must happen before any kind of tail chaining.
1441
- */
1442
- if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
1443
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
1444
- if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
1445
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1446
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1447
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1448
- "stackframe: error during lazy state deactivation\n");
1449
- v7m_exception_taken(cpu, excret, true, false);
1450
- return;
1451
- } else {
1452
- /* Clear s0..s15 and FPSCR */
1453
- int i;
1454
-
1455
- for (i = 0; i < 16; i += 2) {
1456
- *aa32_vfp_dreg(env, i / 2) = 0;
1457
- }
1458
- vfp_set_fpscr(env, 0);
1459
- }
1460
- }
1461
-
1462
- if (sfault) {
1463
- env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
1464
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1465
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1466
- "stackframe: failed EXC_RETURN.ES validity check\n");
1467
- v7m_exception_taken(cpu, excret, true, false);
1468
- return;
1469
- }
1470
-
1471
- if (ufault) {
1472
- /*
1473
- * Bad exception return: instead of popping the exception
1474
- * stack, directly take a usage fault on the current stack.
1475
- */
1476
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1477
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
1478
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
1479
- "stackframe: failed exception return integrity check\n");
1480
- v7m_exception_taken(cpu, excret, true, false);
1481
- return;
1482
- }
1483
-
1484
- /*
1485
- * Tailchaining: if there is currently a pending exception that
1486
- * is high enough priority to preempt execution at the level we're
1487
- * about to return to, then just directly take that exception now,
1488
- * avoiding an unstack-and-then-stack. Note that now we have
1489
- * deactivated the previous exception by calling armv7m_nvic_complete_irq()
1490
- * our current execution priority is already the execution priority we are
1491
- * returning to -- none of the state we would unstack or set based on
1492
- * the EXCRET value affects it.
1493
- */
1494
- if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
1495
- qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
1496
- v7m_exception_taken(cpu, excret, true, false);
1497
- return;
1498
- }
1499
-
1500
- switch_v7m_security_state(env, return_to_secure);
1501
-
1502
- {
1503
- /*
1504
- * The stack pointer we should be reading the exception frame from
1505
- * depends on bits in the magic exception return type value (and
1506
- * for v8M isn't necessarily the stack pointer we will eventually
1507
- * end up resuming execution with). Get a pointer to the location
1508
- * in the CPU state struct where the SP we need is currently being
1509
- * stored; we will use and modify it in place.
1510
- * We use this limited C variable scope so we don't accidentally
1511
- * use 'frame_sp_p' after we do something that makes it invalid.
1512
- */
1513
- uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
1514
- return_to_secure,
1515
- !return_to_handler,
1516
- return_to_sp_process);
1517
- uint32_t frameptr = *frame_sp_p;
1518
- bool pop_ok = true;
1519
- ARMMMUIdx mmu_idx;
1520
- bool return_to_priv = return_to_handler ||
1521
- !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
1522
-
1523
- mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
1524
- return_to_priv);
1525
-
1526
- if (!QEMU_IS_ALIGNED(frameptr, 8) &&
1527
- arm_feature(env, ARM_FEATURE_V8)) {
1528
- qemu_log_mask(LOG_GUEST_ERROR,
1529
- "M profile exception return with non-8-aligned SP "
1530
- "for destination state is UNPREDICTABLE\n");
1531
- }
1532
-
1533
- /* Do we need to pop callee-saved registers? */
1534
- if (return_to_secure &&
1535
- ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
1536
- (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
1537
- uint32_t actual_sig;
1538
-
1539
- pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
1540
-
1541
- if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
1542
- /* Take a SecureFault on the current stack */
1543
- env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
1544
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1545
- qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
1546
- "stackframe: failed exception return integrity "
1547
- "signature check\n");
1548
- v7m_exception_taken(cpu, excret, true, false);
1549
- return;
1550
- }
1551
-
1552
- pop_ok = pop_ok &&
1553
- v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
1554
- v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
1555
- v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
1556
- v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
1557
- v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
1558
- v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
1559
- v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
1560
- v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
1561
-
1562
- frameptr += 0x28;
1563
- }
1564
-
1565
- /* Pop registers */
1566
- pop_ok = pop_ok &&
1567
- v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
1568
- v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
1569
- v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
1570
- v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
1571
- v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
1572
- v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
1573
- v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
1574
- v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
1575
-
1576
- if (!pop_ok) {
1577
- /*
1578
- * v7m_stack_read() pended a fault, so take it (as a tail
1579
- * chained exception on the same stack frame)
1580
- */
1581
- qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
1582
- v7m_exception_taken(cpu, excret, true, false);
1583
- return;
1584
- }
1585
-
1586
- /*
1587
- * Returning from an exception with a PC with bit 0 set is defined
1588
- * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
1589
- * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
1590
- * the lsbit, and there are several RTOSes out there which incorrectly
1591
- * assume the r15 in the stack frame should be a Thumb-style "lsbit
1592
- * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
1593
- * complain about the badly behaved guest.
1594
- */
1595
- if (env->regs[15] & 1) {
1596
- env->regs[15] &= ~1U;
1597
- if (!arm_feature(env, ARM_FEATURE_V8)) {
1598
- qemu_log_mask(LOG_GUEST_ERROR,
1599
- "M profile return from interrupt with misaligned "
1600
- "PC is UNPREDICTABLE on v7M\n");
1601
- }
1602
- }
1603
-
1604
- if (arm_feature(env, ARM_FEATURE_V8)) {
1605
- /*
1606
- * For v8M we have to check whether the xPSR exception field
1607
- * matches the EXCRET value for return to handler/thread
1608
- * before we commit to changing the SP and xPSR.
1609
- */
1610
- bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
1611
- if (return_to_handler != will_be_handler) {
1612
- /*
1613
- * Take an INVPC UsageFault on the current stack.
1614
- * By this point we will have switched to the security state
1615
- * for the background state, so this UsageFault will target
1616
- * that state.
1617
- */
1618
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1619
- env->v7m.secure);
1620
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1621
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
1622
- "stackframe: failed exception return integrity "
1623
- "check\n");
1624
- v7m_exception_taken(cpu, excret, true, false);
1625
- return;
1626
- }
1627
- }
1628
-
1629
- if (!ftype) {
1630
- /* FP present and we need to handle it */
1631
- if (!return_to_secure &&
1632
- (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
1633
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1634
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
1635
- qemu_log_mask(CPU_LOG_INT,
1636
- "...taking SecureFault on existing stackframe: "
1637
- "Secure LSPACT set but exception return is "
1638
- "not to secure state\n");
1639
- v7m_exception_taken(cpu, excret, true, false);
1640
- return;
1641
- }
1642
-
1643
- restore_s16_s31 = return_to_secure &&
1644
- (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
1645
-
1646
- if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
1647
- /* State in FPU is still valid, just clear LSPACT */
1648
- env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
1649
- } else {
1650
- int i;
1651
- uint32_t fpscr;
1652
- bool cpacr_pass, nsacr_pass;
1653
-
1654
- cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
1655
- return_to_priv);
1656
- nsacr_pass = return_to_secure ||
1657
- extract32(env->v7m.nsacr, 10, 1);
1658
-
1659
- if (!cpacr_pass) {
1660
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1661
- return_to_secure);
1662
- env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
1663
- qemu_log_mask(CPU_LOG_INT,
1664
- "...taking UsageFault on existing "
1665
- "stackframe: CPACR.CP10 prevents unstacking "
1666
- "FP regs\n");
1667
- v7m_exception_taken(cpu, excret, true, false);
1668
- return;
1669
- } else if (!nsacr_pass) {
1670
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
1671
- env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
1672
- qemu_log_mask(CPU_LOG_INT,
1673
- "...taking Secure UsageFault on existing "
1674
- "stackframe: NSACR.CP10 prevents unstacking "
1675
- "FP regs\n");
1676
- v7m_exception_taken(cpu, excret, true, false);
1677
- return;
1678
- }
1679
-
1680
- for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
1681
- uint32_t slo, shi;
1682
- uint64_t dn;
1683
- uint32_t faddr = frameptr + 0x20 + 4 * i;
1684
-
1685
- if (i >= 16) {
1686
- faddr += 8; /* Skip the slot for the FPSCR */
1687
- }
1688
-
1689
- pop_ok = pop_ok &&
1690
- v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
1691
- v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
1692
-
1693
- if (!pop_ok) {
1694
- break;
1695
- }
1696
-
1697
- dn = (uint64_t)shi << 32 | slo;
1698
- *aa32_vfp_dreg(env, i / 2) = dn;
1699
- }
1700
- pop_ok = pop_ok &&
1701
- v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
1702
- if (pop_ok) {
1703
- vfp_set_fpscr(env, fpscr);
1704
- }
1705
- if (!pop_ok) {
1706
- /*
1707
- * These regs are 0 if security extension present;
1708
- * otherwise merely UNKNOWN. We zero always.
1709
- */
1710
- for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
1711
- *aa32_vfp_dreg(env, i / 2) = 0;
1712
- }
1713
- vfp_set_fpscr(env, 0);
1714
- }
1715
- }
1716
- }
1717
- env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
1718
- V7M_CONTROL, FPCA, !ftype);
1719
-
1720
- /* Commit to consuming the stack frame */
1721
- frameptr += 0x20;
1722
- if (!ftype) {
1723
- frameptr += 0x48;
1724
- if (restore_s16_s31) {
1725
- frameptr += 0x40;
1726
- }
1727
- }
1728
- /*
1729
- * Undo stack alignment (the SPREALIGN bit indicates that the original
1730
- * pre-exception SP was not 8-aligned and we added a padding word to
1731
- * align it, so we undo this by ORing in the bit that increases it
1732
- * from the current 8-aligned value to the 8-unaligned value. (Adding 4
1733
- * would work too but a logical OR is how the pseudocode specifies it.)
1734
- */
1735
- if (xpsr & XPSR_SPREALIGN) {
1736
- frameptr |= 4;
1737
- }
1738
- *frame_sp_p = frameptr;
1739
- }
1740
-
1741
- xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA);
1742
- if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
1743
- xpsr_mask &= ~XPSR_GE;
1744
- }
1745
- /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
1746
- xpsr_write(env, xpsr, xpsr_mask);
1747
-
1748
- if (env->v7m.secure) {
1749
- bool sfpa = xpsr & XPSR_SFPA;
1750
-
1751
- env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
1752
- V7M_CONTROL, SFPA, sfpa);
1753
- }
1754
-
1755
- /*
1756
- * The restored xPSR exception field will be zero if we're
1757
- * resuming in Thread mode. If that doesn't match what the
1758
- * exception return excret specified then this is a UsageFault.
1759
- * v7M requires we make this check here; v8M did it earlier.
1760
- */
1761
- if (return_to_handler != arm_v7m_is_handler_mode(env)) {
1762
- /*
1763
- * Take an INVPC UsageFault by pushing the stack again;
1764
- * we know we're v7M so this is never a Secure UsageFault.
1765
- */
1766
- bool ignore_stackfaults;
1767
-
1768
- assert(!arm_feature(env, ARM_FEATURE_V8));
1769
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
1770
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1771
- ignore_stackfaults = v7m_push_stack(cpu);
1772
- qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
1773
- "failed exception return integrity check\n");
1774
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
1775
- return;
1776
- }
1777
-
1778
- /* Otherwise, we have a successful exception exit. */
1779
- arm_clear_exclusive(env);
1780
- qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
1781
-}
1782
-
1783
-static bool do_v7m_function_return(ARMCPU *cpu)
1784
-{
1785
- /*
1786
- * v8M security extensions magic function return.
1787
- * We may either:
1788
- * (1) throw an exception (longjump)
1789
- * (2) return true if we successfully handled the function return
1790
- * (3) return false if we failed a consistency check and have
1791
- * pended a UsageFault that needs to be taken now
1792
- *
1793
- * At this point the magic return value is split between env->regs[15]
1794
- * and env->thumb. We don't bother to reconstitute it because we don't
1795
- * need it (all values are handled the same way).
1796
- */
1797
- CPUARMState *env = &cpu->env;
1798
- uint32_t newpc, newpsr, newpsr_exc;
1799
-
1800
- qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
1801
-
1802
- {
1803
- bool threadmode, spsel;
1804
- TCGMemOpIdx oi;
1805
- ARMMMUIdx mmu_idx;
1806
- uint32_t *frame_sp_p;
1807
- uint32_t frameptr;
1808
-
1809
- /* Pull the return address and IPSR from the Secure stack */
1810
- threadmode = !arm_v7m_is_handler_mode(env);
1811
- spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
1812
-
1813
- frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
1814
- frameptr = *frame_sp_p;
1815
-
1816
- /*
1817
- * These loads may throw an exception (for MPU faults). We want to
1818
- * do them as secure, so work out what MMU index that is.
1819
- */
1820
- mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
1821
- oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
1822
- newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
1823
- newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
1824
-
1825
- /* Consistency checks on new IPSR */
1826
- newpsr_exc = newpsr & XPSR_EXCP;
1827
- if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
1828
- (env->v7m.exception == 1 && newpsr_exc != 0))) {
1829
- /* Pend the fault and tell our caller to take it */
1830
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
1831
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
1832
- env->v7m.secure);
1833
- qemu_log_mask(CPU_LOG_INT,
1834
- "...taking INVPC UsageFault: "
1835
- "IPSR consistency check failed\n");
1836
- return false;
1837
- }
1838
-
1839
- *frame_sp_p = frameptr + 8;
1840
- }
1841
-
1842
- /* This invalidates frame_sp_p */
1843
- switch_v7m_security_state(env, true);
1844
- env->v7m.exception = newpsr_exc;
1845
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
1846
- if (newpsr & XPSR_SFPA) {
1847
- env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
1848
- }
1849
- xpsr_write(env, 0, XPSR_IT);
1850
- env->thumb = newpc & 1;
1851
- env->regs[15] = newpc & ~1;
1852
-
1853
- qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
1854
- return true;
1855
-}
1856
-
1857
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
1858
- uint32_t addr, uint16_t *insn)
1859
-{
1860
- /*
1861
- * Load a 16-bit portion of a v7M instruction, returning true on success,
1862
- * or false on failure (in which case we will have pended the appropriate
1863
- * exception).
1864
- * We need to do the instruction fetch's MPU and SAU checks
1865
- * like this because there is no MMU index that would allow
1866
- * doing the load with a single function call. Instead we must
1867
- * first check that the security attributes permit the load
1868
- * and that they don't mismatch on the two halves of the instruction,
1869
- * and then we do the load as a secure load (ie using the security
1870
- * attributes of the address, not the CPU, as architecturally required).
1871
- */
1872
- CPUState *cs = CPU(cpu);
1873
- CPUARMState *env = &cpu->env;
1874
- V8M_SAttributes sattrs = {};
1875
- MemTxAttrs attrs = {};
1876
- ARMMMUFaultInfo fi = {};
1877
- MemTxResult txres;
1878
- target_ulong page_size;
1879
- hwaddr physaddr;
1880
- int prot;
1881
-
1882
- v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
1883
- if (!sattrs.nsc || sattrs.ns) {
1884
- /*
1885
- * This must be the second half of the insn, and it straddles a
1886
- * region boundary with the second half not being S&NSC.
1887
- */
1888
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
1889
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1890
- qemu_log_mask(CPU_LOG_INT,
1891
- "...really SecureFault with SFSR.INVEP\n");
1892
- return false;
1893
- }
1894
- if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
1895
- &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
1896
- /* the MPU lookup failed */
1897
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
1898
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
1899
- qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
1900
- return false;
1901
- }
1902
- *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
1903
- attrs, &txres);
1904
- if (txres != MEMTX_OK) {
1905
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
1906
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
1907
- qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
1908
- return false;
1909
- }
1910
- return true;
1911
-}
1912
-
1913
-static bool v7m_handle_execute_nsc(ARMCPU *cpu)
1914
-{
1915
- /*
1916
- * Check whether this attempt to execute code in a Secure & NS-Callable
1917
- * memory region is for an SG instruction; if so, then emulate the
1918
- * effect of the SG instruction and return true. Otherwise pend
1919
- * the correct kind of exception and return false.
1920
- */
1921
- CPUARMState *env = &cpu->env;
1922
- ARMMMUIdx mmu_idx;
1923
- uint16_t insn;
1924
-
1925
- /*
1926
- * We should never get here unless get_phys_addr_pmsav8() caused
1927
- * an exception for NS executing in S&NSC memory.
1928
- */
1929
- assert(!env->v7m.secure);
1930
- assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
1931
-
1932
- /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
1933
- mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
1934
-
1935
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
1936
- return false;
1937
- }
1938
-
1939
- if (!env->thumb) {
1940
- goto gen_invep;
1941
- }
1942
-
1943
- if (insn != 0xe97f) {
1944
- /*
1945
- * Not an SG instruction first half (we choose the IMPDEF
1946
- * early-SG-check option).
1947
- */
1948
- goto gen_invep;
1949
- }
1950
-
1951
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
1952
- return false;
1953
- }
1954
-
1955
- if (insn != 0xe97f) {
1956
- /*
1957
- * Not an SG instruction second half (yes, both halves of the SG
1958
- * insn have the same hex value)
1959
- */
1960
- goto gen_invep;
1961
- }
1962
-
1963
- /*
1964
- * OK, we have confirmed that we really have an SG instruction.
1965
- * We know we're NS in S memory so don't need to repeat those checks.
1966
- */
1967
- qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
1968
- ", executing it\n", env->regs[15]);
1969
- env->regs[14] &= ~1;
1970
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
1971
- switch_v7m_security_state(env, true);
1972
- xpsr_write(env, 0, XPSR_IT);
1973
- env->regs[15] += 4;
1974
- return true;
1975
-
1976
-gen_invep:
1977
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
1978
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
1979
- qemu_log_mask(CPU_LOG_INT,
1980
- "...really SecureFault with SFSR.INVEP\n");
1981
- return false;
1982
-}
1983
-
1984
-void arm_v7m_cpu_do_interrupt(CPUState *cs)
1985
-{
1986
- ARMCPU *cpu = ARM_CPU(cs);
1987
- CPUARMState *env = &cpu->env;
1988
- uint32_t lr;
1989
- bool ignore_stackfaults;
1990
-
1991
- arm_log_exception(cs->exception_index);
1992
-
1993
- /*
1994
- * For exceptions we just mark as pending on the NVIC, and let that
1995
- * handle it.
1996
- */
1997
- switch (cs->exception_index) {
1998
- case EXCP_UDEF:
1999
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2000
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
2001
- break;
2002
- case EXCP_NOCP:
2003
- {
2004
- /*
2005
- * NOCP might be directed to something other than the current
2006
- * security state if this fault is because of NSACR; we indicate
2007
- * the target security state using exception.target_el.
2008
- */
2009
- int target_secstate;
2010
-
2011
- if (env->exception.target_el == 3) {
2012
- target_secstate = M_REG_S;
2013
- } else {
2014
- target_secstate = env->v7m.secure;
2015
- }
2016
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
2017
- env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
2018
- break;
2019
- }
2020
- case EXCP_INVSTATE:
2021
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2022
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
2023
- break;
2024
- case EXCP_STKOF:
2025
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2026
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
2027
- break;
2028
- case EXCP_LSERR:
2029
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
2030
- env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
2031
- break;
2032
- case EXCP_UNALIGNED:
2033
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
2034
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
2035
- break;
2036
- case EXCP_SWI:
2037
- /* The PC already points to the next instruction. */
2038
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
2039
- break;
2040
- case EXCP_PREFETCH_ABORT:
2041
- case EXCP_DATA_ABORT:
2042
- /*
2043
- * Note that for M profile we don't have a guest facing FSR, but
2044
- * the env->exception.fsr will be populated by the code that
2045
- * raises the fault, in the A profile short-descriptor format.
2046
- */
2047
- switch (env->exception.fsr & 0xf) {
2048
- case M_FAKE_FSR_NSC_EXEC:
2049
- /*
2050
- * Exception generated when we try to execute code at an address
2051
- * which is marked as Secure & Non-Secure Callable and the CPU
2052
- * is in the Non-Secure state. The only instruction which can
2053
- * be executed like this is SG (and that only if both halves of
2054
- * the SG instruction have the same security attributes.)
2055
- * Everything else must generate an INVEP SecureFault, so we
2056
- * emulate the SG instruction here.
2057
- */
2058
- if (v7m_handle_execute_nsc(cpu)) {
2059
- return;
2060
- }
2061
- break;
2062
- case M_FAKE_FSR_SFAULT:
2063
- /*
2064
- * Various flavours of SecureFault for attempts to execute or
2065
- * access data in the wrong security state.
2066
- */
2067
- switch (cs->exception_index) {
2068
- case EXCP_PREFETCH_ABORT:
2069
- if (env->v7m.secure) {
2070
- env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
2071
- qemu_log_mask(CPU_LOG_INT,
2072
- "...really SecureFault with SFSR.INVTRAN\n");
2073
- } else {
2074
- env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
2075
- qemu_log_mask(CPU_LOG_INT,
2076
- "...really SecureFault with SFSR.INVEP\n");
2077
- }
2078
- break;
2079
- case EXCP_DATA_ABORT:
2080
- /* This must be an NS access to S memory */
2081
- env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
2082
- qemu_log_mask(CPU_LOG_INT,
2083
- "...really SecureFault with SFSR.AUVIOL\n");
2084
- break;
2085
- }
2086
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
2087
- break;
2088
- case 0x8: /* External Abort */
2089
- switch (cs->exception_index) {
2090
- case EXCP_PREFETCH_ABORT:
2091
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
2092
- qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
2093
- break;
2094
- case EXCP_DATA_ABORT:
2095
- env->v7m.cfsr[M_REG_NS] |=
2096
- (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
2097
- env->v7m.bfar = env->exception.vaddress;
2098
- qemu_log_mask(CPU_LOG_INT,
2099
- "...with CFSR.PRECISERR and BFAR 0x%x\n",
2100
- env->v7m.bfar);
2101
- break;
2102
- }
2103
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
2104
- break;
2105
- default:
2106
- /*
2107
- * All other FSR values are either MPU faults or "can't happen
2108
- * for M profile" cases.
2109
- */
2110
- switch (cs->exception_index) {
2111
- case EXCP_PREFETCH_ABORT:
2112
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
2113
- qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
2114
- break;
2115
- case EXCP_DATA_ABORT:
2116
- env->v7m.cfsr[env->v7m.secure] |=
2117
- (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
2118
- env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
2119
- qemu_log_mask(CPU_LOG_INT,
2120
- "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
2121
- env->v7m.mmfar[env->v7m.secure]);
2122
- break;
2123
- }
2124
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
2125
- env->v7m.secure);
2126
- break;
2127
- }
2128
- break;
2129
- case EXCP_BKPT:
2130
- if (semihosting_enabled()) {
2131
- int nr;
2132
- nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
2133
- if (nr == 0xab) {
2134
- env->regs[15] += 2;
2135
- qemu_log_mask(CPU_LOG_INT,
2136
- "...handling as semihosting call 0x%x\n",
2137
- env->regs[0]);
2138
- env->regs[0] = do_arm_semihosting(env);
2139
- return;
2140
- }
2141
- }
2142
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
2143
- break;
2144
- case EXCP_IRQ:
2145
- break;
2146
- case EXCP_EXCEPTION_EXIT:
2147
- if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
2148
- /* Must be v8M security extension function return */
2149
- assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
2150
- assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
2151
- if (do_v7m_function_return(cpu)) {
2152
- return;
2153
- }
2154
- } else {
2155
- do_v7m_exception_exit(cpu);
2156
- return;
2157
- }
2158
- break;
2159
- case EXCP_LAZYFP:
2160
- /*
2161
- * We already pended the specific exception in the NVIC in the
2162
- * v7m_preserve_fp_state() helper function.
2163
- */
2164
- break;
2165
- default:
2166
- cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
2167
- return; /* Never happens. Keep compiler happy. */
2168
- }
2169
-
2170
- if (arm_feature(env, ARM_FEATURE_V8)) {
2171
- lr = R_V7M_EXCRET_RES1_MASK |
2172
- R_V7M_EXCRET_DCRS_MASK;
2173
- /*
2174
- * The S bit indicates whether we should return to Secure
2175
- * or NonSecure (ie our current state).
2176
- * The ES bit indicates whether we're taking this exception
2177
- * to Secure or NonSecure (ie our target state). We set it
2178
- * later, in v7m_exception_taken().
2179
- * The SPSEL bit is also set in v7m_exception_taken() for v8M.
2180
- * This corresponds to the ARM ARM pseudocode for v8M setting
2181
- * some LR bits in PushStack() and some in ExceptionTaken();
2182
- * the distinction matters for the tailchain cases where we
2183
- * can take an exception without pushing the stack.
2184
- */
2185
- if (env->v7m.secure) {
2186
- lr |= R_V7M_EXCRET_S_MASK;
2187
- }
2188
- if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
2189
- lr |= R_V7M_EXCRET_FTYPE_MASK;
2190
- }
2191
- } else {
2192
- lr = R_V7M_EXCRET_RES1_MASK |
2193
- R_V7M_EXCRET_S_MASK |
2194
- R_V7M_EXCRET_DCRS_MASK |
2195
- R_V7M_EXCRET_FTYPE_MASK |
2196
- R_V7M_EXCRET_ES_MASK;
2197
- if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
2198
- lr |= R_V7M_EXCRET_SPSEL_MASK;
2199
- }
2200
- }
2201
- if (!arm_v7m_is_handler_mode(env)) {
2202
- lr |= R_V7M_EXCRET_MODE_MASK;
2203
- }
2204
-
2205
- ignore_stackfaults = v7m_push_stack(cpu);
2206
- v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
2207
-}
2208
-
2209
/*
2210
* Function used to synchronize QEMU's AArch64 register set with AArch32
2211
* register set. This is necessary when switching between AArch32 and AArch64
2212
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
2213
return phys_addr;
2214
}
2215
2216
-uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
2217
-{
2218
- uint32_t mask;
2219
- unsigned el = arm_current_el(env);
2220
-
2221
- /* First handle registers which unprivileged can read */
2222
-
2223
- switch (reg) {
2224
- case 0 ... 7: /* xPSR sub-fields */
2225
- mask = 0;
2226
- if ((reg & 1) && el) {
2227
- mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
2228
- }
2229
- if (!(reg & 4)) {
2230
- mask |= XPSR_NZCV | XPSR_Q; /* APSR */
2231
- if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
2232
- mask |= XPSR_GE;
2233
- }
2234
- }
2235
- /* EPSR reads as zero */
2236
- return xpsr_read(env) & mask;
2237
- break;
2238
- case 20: /* CONTROL */
2239
- {
2240
- uint32_t value = env->v7m.control[env->v7m.secure];
2241
- if (!env->v7m.secure) {
2242
- /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
2243
- value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
2244
- }
2245
- return value;
2246
- }
2247
- case 0x94: /* CONTROL_NS */
2248
- /*
2249
- * We have to handle this here because unprivileged Secure code
2250
- * can read the NS CONTROL register.
2251
- */
2252
- if (!env->v7m.secure) {
2253
- return 0;
2254
- }
2255
- return env->v7m.control[M_REG_NS] |
2256
- (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
2257
- }
2258
-
2259
- if (el == 0) {
2260
- return 0; /* unprivileged reads others as zero */
2261
- }
2262
-
2263
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
2264
- switch (reg) {
2265
- case 0x88: /* MSP_NS */
2266
- if (!env->v7m.secure) {
2267
- return 0;
2268
- }
2269
- return env->v7m.other_ss_msp;
2270
- case 0x89: /* PSP_NS */
2271
- if (!env->v7m.secure) {
2272
- return 0;
2273
- }
2274
- return env->v7m.other_ss_psp;
2275
- case 0x8a: /* MSPLIM_NS */
2276
- if (!env->v7m.secure) {
2277
- return 0;
2278
- }
2279
- return env->v7m.msplim[M_REG_NS];
2280
- case 0x8b: /* PSPLIM_NS */
2281
- if (!env->v7m.secure) {
2282
- return 0;
2283
- }
2284
- return env->v7m.psplim[M_REG_NS];
2285
- case 0x90: /* PRIMASK_NS */
2286
- if (!env->v7m.secure) {
2287
- return 0;
2288
- }
2289
- return env->v7m.primask[M_REG_NS];
2290
- case 0x91: /* BASEPRI_NS */
2291
- if (!env->v7m.secure) {
2292
- return 0;
2293
- }
2294
- return env->v7m.basepri[M_REG_NS];
2295
- case 0x93: /* FAULTMASK_NS */
2296
- if (!env->v7m.secure) {
2297
- return 0;
2298
- }
2299
- return env->v7m.faultmask[M_REG_NS];
2300
- case 0x98: /* SP_NS */
2301
- {
2302
- /*
2303
- * This gives the non-secure SP selected based on whether we're
2304
- * currently in handler mode or not, using the NS CONTROL.SPSEL.
2305
- */
2306
- bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
2307
-
2308
- if (!env->v7m.secure) {
2309
- return 0;
2310
- }
2311
- if (!arm_v7m_is_handler_mode(env) && spsel) {
2312
- return env->v7m.other_ss_psp;
2313
- } else {
2314
- return env->v7m.other_ss_msp;
2315
- }
2316
- }
2317
- default:
2318
- break;
2319
- }
2320
- }
2321
-
2322
- switch (reg) {
2323
- case 8: /* MSP */
2324
- return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
2325
- case 9: /* PSP */
2326
- return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
2327
- case 10: /* MSPLIM */
2328
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2329
- goto bad_reg;
2330
- }
2331
- return env->v7m.msplim[env->v7m.secure];
2332
- case 11: /* PSPLIM */
2333
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2334
- goto bad_reg;
2335
- }
2336
- return env->v7m.psplim[env->v7m.secure];
2337
- case 16: /* PRIMASK */
2338
- return env->v7m.primask[env->v7m.secure];
2339
- case 17: /* BASEPRI */
2340
- case 18: /* BASEPRI_MAX */
2341
- return env->v7m.basepri[env->v7m.secure];
2342
- case 19: /* FAULTMASK */
2343
- return env->v7m.faultmask[env->v7m.secure];
2344
- default:
2345
- bad_reg:
2346
- qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
2347
- " register %d\n", reg);
2348
- return 0;
2349
- }
2350
-}
2351
-
2352
-void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
2353
-{
2354
- /*
2355
- * We're passed bits [11..0] of the instruction; extract
2356
- * SYSm and the mask bits.
2357
- * Invalid combinations of SYSm and mask are UNPREDICTABLE;
2358
- * we choose to treat them as if the mask bits were valid.
2359
- * NB that the pseudocode 'mask' variable is bits [11..10],
2360
- * whereas ours is [11..8].
2361
- */
2362
- uint32_t mask = extract32(maskreg, 8, 4);
2363
- uint32_t reg = extract32(maskreg, 0, 8);
2364
- int cur_el = arm_current_el(env);
2365
-
2366
- if (cur_el == 0 && reg > 7 && reg != 20) {
2367
- /*
2368
- * only xPSR sub-fields and CONTROL.SFPA may be written by
2369
- * unprivileged code
2370
- */
2371
- return;
2372
- }
2373
-
2374
- if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
2375
- switch (reg) {
2376
- case 0x88: /* MSP_NS */
2377
- if (!env->v7m.secure) {
2378
- return;
2379
- }
2380
- env->v7m.other_ss_msp = val;
2381
- return;
2382
- case 0x89: /* PSP_NS */
2383
- if (!env->v7m.secure) {
2384
- return;
2385
- }
2386
- env->v7m.other_ss_psp = val;
2387
- return;
2388
- case 0x8a: /* MSPLIM_NS */
2389
- if (!env->v7m.secure) {
2390
- return;
2391
- }
2392
- env->v7m.msplim[M_REG_NS] = val & ~7;
2393
- return;
2394
- case 0x8b: /* PSPLIM_NS */
2395
- if (!env->v7m.secure) {
2396
- return;
2397
- }
2398
- env->v7m.psplim[M_REG_NS] = val & ~7;
2399
- return;
2400
- case 0x90: /* PRIMASK_NS */
2401
- if (!env->v7m.secure) {
2402
- return;
2403
- }
2404
- env->v7m.primask[M_REG_NS] = val & 1;
2405
- return;
2406
- case 0x91: /* BASEPRI_NS */
2407
- if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
2408
- return;
2409
- }
2410
- env->v7m.basepri[M_REG_NS] = val & 0xff;
2411
- return;
2412
- case 0x93: /* FAULTMASK_NS */
2413
- if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
2414
- return;
2415
- }
2416
- env->v7m.faultmask[M_REG_NS] = val & 1;
2417
- return;
2418
- case 0x94: /* CONTROL_NS */
2419
- if (!env->v7m.secure) {
2420
- return;
2421
- }
2422
- write_v7m_control_spsel_for_secstate(env,
2423
- val & R_V7M_CONTROL_SPSEL_MASK,
2424
- M_REG_NS);
2425
- if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
2426
- env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
2427
- env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
2428
- }
2429
- /*
2430
- * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
2431
- * RES0 if the FPU is not present, and is stored in the S bank
2432
- */
2433
- if (arm_feature(env, ARM_FEATURE_VFP) &&
2434
- extract32(env->v7m.nsacr, 10, 1)) {
2435
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
2436
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
2437
- }
2438
- return;
2439
- case 0x98: /* SP_NS */
2440
- {
2441
- /*
2442
- * This gives the non-secure SP selected based on whether we're
2443
- * currently in handler mode or not, using the NS CONTROL.SPSEL.
2444
- */
2445
- bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
2446
- bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
2447
- uint32_t limit;
2448
-
2449
- if (!env->v7m.secure) {
2450
- return;
2451
- }
2452
-
2453
- limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
2454
-
2455
- if (val < limit) {
2456
- CPUState *cs = env_cpu(env);
2457
-
2458
- cpu_restore_state(cs, GETPC(), true);
2459
- raise_exception(env, EXCP_STKOF, 0, 1);
2460
- }
2461
-
2462
- if (is_psp) {
2463
- env->v7m.other_ss_psp = val;
2464
- } else {
2465
- env->v7m.other_ss_msp = val;
2466
- }
2467
- return;
2468
- }
2469
- default:
2470
- break;
2471
- }
2472
- }
2473
-
2474
- switch (reg) {
2475
- case 0 ... 7: /* xPSR sub-fields */
2476
- /* only APSR is actually writable */
2477
- if (!(reg & 4)) {
2478
- uint32_t apsrmask = 0;
2479
-
2480
- if (mask & 8) {
2481
- apsrmask |= XPSR_NZCV | XPSR_Q;
2482
- }
2483
- if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
2484
- apsrmask |= XPSR_GE;
2485
- }
2486
- xpsr_write(env, val, apsrmask);
2487
- }
2488
- break;
2489
- case 8: /* MSP */
2490
- if (v7m_using_psp(env)) {
2491
- env->v7m.other_sp = val;
2492
- } else {
2493
- env->regs[13] = val;
2494
- }
2495
- break;
2496
- case 9: /* PSP */
2497
- if (v7m_using_psp(env)) {
2498
- env->regs[13] = val;
2499
- } else {
2500
- env->v7m.other_sp = val;
2501
- }
2502
- break;
2503
- case 10: /* MSPLIM */
2504
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2505
- goto bad_reg;
2506
- }
2507
- env->v7m.msplim[env->v7m.secure] = val & ~7;
2508
- break;
2509
- case 11: /* PSPLIM */
2510
- if (!arm_feature(env, ARM_FEATURE_V8)) {
2511
- goto bad_reg;
2512
- }
2513
- env->v7m.psplim[env->v7m.secure] = val & ~7;
2514
- break;
2515
- case 16: /* PRIMASK */
2516
- env->v7m.primask[env->v7m.secure] = val & 1;
2517
- break;
2518
- case 17: /* BASEPRI */
2519
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
2520
- goto bad_reg;
2521
- }
2522
- env->v7m.basepri[env->v7m.secure] = val & 0xff;
2523
- break;
2524
- case 18: /* BASEPRI_MAX */
2525
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
2526
- goto bad_reg;
2527
- }
2528
- val &= 0xff;
2529
- if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
2530
- || env->v7m.basepri[env->v7m.secure] == 0)) {
2531
- env->v7m.basepri[env->v7m.secure] = val;
2532
- }
2533
- break;
2534
- case 19: /* FAULTMASK */
2535
- if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
2536
- goto bad_reg;
2537
- }
2538
- env->v7m.faultmask[env->v7m.secure] = val & 1;
2539
- break;
2540
- case 20: /* CONTROL */
2541
- /*
2542
- * Writing to the SPSEL bit only has an effect if we are in
2543
- * thread mode; other bits can be updated by any privileged code.
2544
- * write_v7m_control_spsel() deals with updating the SPSEL bit in
2545
- * env->v7m.control, so we only need update the others.
2546
- * For v7M, we must just ignore explicit writes to SPSEL in handler
2547
- * mode; for v8M the write is permitted but will have no effect.
2548
- * All these bits are writes-ignored from non-privileged code,
2549
- * except for SFPA.
2550
- */
2551
- if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
2552
- !arm_v7m_is_handler_mode(env))) {
2553
- write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
2554
- }
2555
- if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
2556
- env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
2557
- env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
2558
- }
2559
- if (arm_feature(env, ARM_FEATURE_VFP)) {
2560
- /*
2561
- * SFPA is RAZ/WI from NS or if no FPU.
2562
- * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
2563
- * Both are stored in the S bank.
2564
- */
2565
- if (env->v7m.secure) {
2566
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
2567
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
2568
- }
2569
- if (cur_el > 0 &&
2570
- (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
2571
- extract32(env->v7m.nsacr, 10, 1))) {
2572
- env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
2573
- env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
2574
- }
2575
- }
2576
- break;
2577
- default:
2578
- bad_reg:
2579
- qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
2580
- " register %d\n", reg);
2581
- return;
2582
- }
2583
-}
2584
-
2585
-uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
2586
-{
2587
- /* Implement the TT instruction. op is bits [7:6] of the insn. */
2588
- bool forceunpriv = op & 1;
2589
- bool alt = op & 2;
2590
-    V8M_SAttributes sattrs = {};
-    uint32_t tt_resp;
-    bool r, rw, nsr, nsrw, mrvalid;
-    int prot;
-    ARMMMUFaultInfo fi = {};
-    MemTxAttrs attrs = {};
-    hwaddr phys_addr;
-    ARMMMUIdx mmu_idx;
-    uint32_t mregion;
-    bool targetpriv;
-    bool targetsec = env->v7m.secure;
-    bool is_subpage;
-
-    /*
-     * Work out what the security state and privilege level we're
-     * interested in is...
-     */
-    if (alt) {
-        targetsec = !targetsec;
-    }
-
-    if (forceunpriv) {
-        targetpriv = false;
-    } else {
-        targetpriv = arm_v7m_is_handler_mode(env) ||
-            !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
-    }
-
-    /* ...and then figure out which MMU index this is */
-    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
-
-    /*
-     * We know that the MPU and SAU don't care about the access type
-     * for our purposes beyond that we don't want to claim to be
-     * an insn fetch, so we arbitrarily call this a read.
-     */
-
-    /*
-     * MPU region info only available for privileged or if
-     * inspecting the other MPU state.
-     */
-    if (arm_current_el(env) != 0 || alt) {
-        /* We can ignore the return value as prot is always set */
-        pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
-                          &phys_addr, &attrs, &prot, &is_subpage,
-                          &fi, &mregion);
-        if (mregion == -1) {
-            mrvalid = false;
-            mregion = 0;
-        } else {
-            mrvalid = true;
-        }
-        r = prot & PAGE_READ;
-        rw = prot & PAGE_WRITE;
-    } else {
-        r = false;
-        rw = false;
-        mrvalid = false;
-        mregion = 0;
-    }
-
-    if (env->v7m.secure) {
-        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
-        nsr = sattrs.ns && r;
-        nsrw = sattrs.ns && rw;
-    } else {
-        sattrs.ns = true;
-        nsr = false;
-        nsrw = false;
-    }
-
-    tt_resp = (sattrs.iregion << 24) |
-        (sattrs.irvalid << 23) |
-        ((!sattrs.ns) << 22) |
-        (nsrw << 21) |
-        (nsr << 20) |
-        (rw << 19) |
-        (r << 18) |
-        (sattrs.srvalid << 17) |
-        (mrvalid << 16) |
-        (sattrs.sregion << 8) |
-        mregion;
-
-    return tt_resp;
-}
-
 #endif
 
 /* Note that signed overflow is undefined in C. The following routines are
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
     return 0;
 }
 
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
-                              bool secstate, bool priv, bool negpri)
-{
-    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
-
-    if (priv) {
-        mmu_idx |= ARM_MMU_IDX_M_PRIV;
-    }
-
-    if (negpri) {
-        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
-    }
-
-    if (secstate) {
-        mmu_idx |= ARM_MMU_IDX_M_S;
-    }
-
-    return mmu_idx;
-}
-
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
-                                                bool secstate, bool priv)
-{
-    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
-
-    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
-}
-
-/* Return the MMU index for a v7M CPU in the specified security state */
+#ifndef CONFIG_TCG
 ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
 {
-    bool priv = arm_current_el(env) != 0;
-
-    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
+    g_assert_not_reached();
 }
+#endif
 
 ARMMMUIdx arm_mmu_idx(CPUARMState *env)
 {
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
2725
new file mode 100644
32
new file mode 100644
2726
index XXXXXXX..XXXXXXX
33
index XXXXXXX..XXXXXXX
2727
--- /dev/null
34
--- /dev/null
2728
+++ b/target/arm/m_helper.c
35
+++ b/include/hw/misc/npcm7xx_mft.h
2729
@@ -XXX,XX +XXX,XX @@
36
@@ -XXX,XX +XXX,XX @@
2730
+/*
37
+/*
2731
+ * ARM generic helpers.
38
+ * Nuvoton NPCM7xx MFT Module
2732
+ *
39
+ *
2733
+ * This code is licensed under the GNU GPL v2 or later.
40
+ * Copyright 2021 Google LLC
2734
+ *
41
+ *
2735
+ * SPDX-License-Identifier: GPL-2.0-or-later
42
+ * This program is free software; you can redistribute it and/or modify it
2736
+ */
43
+ * under the terms of the GNU General Public License as published by the
44
+ * Free Software Foundation; either version 2 of the License, or
45
+ * (at your option) any later version.
46
+ *
47
+ * This program is distributed in the hope that it will be useful, but WITHOUT
48
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
49
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
50
+ * for more details.
51
+ */
52
+#ifndef NPCM7XX_MFT_H
53
+#define NPCM7XX_MFT_H
54
+
55
+#include "exec/memory.h"
56
+#include "hw/clock.h"
57
+#include "hw/irq.h"
58
+#include "hw/sysbus.h"
59
+#include "qom/object.h"
60
+
61
+/* Max Fan input number. */
62
+#define NPCM7XX_MFT_MAX_FAN_INPUT 19
63
+
64
+/*
65
+ * Number of registers in one MFT module. Don't change this without increasing
66
+ * the version_id in vmstate.
67
+ */
68
+#define NPCM7XX_MFT_NR_REGS (0x20 / sizeof(uint16_t))
69
+
70
+/*
71
+ * The MFT can take up to 4 inputs: A0, B0, A1, B1. It can measure one A and one
72
+ * B simultaneously. NPCM7XX_MFT_INASEL and NPCM7XX_MFT_INBSEL are used to
73
+ * select which A or B input are used.
74
+ */
75
+#define NPCM7XX_MFT_FANIN_COUNT 4
76
+
77
+/**
78
+ * struct NPCM7xxMFTState - Multi Functional Tachometer device state.
79
+ * @parent: System bus device.
80
+ * @iomem: Memory region through which registers are accessed.
81
+ * @clock_in: The input clock for MFT from CLK module.
82
+ * @clock_{1,2}: The counter clocks for NPCM7XX_MFT_CNT{1,2}
83
+ * @irq: The IRQ for this MFT state.
84
+ * @regs: The MMIO registers.
85
+ * @max_rpm: The maximum rpm for fans. Order: A0, B0, A1, B1.
86
+ * @duty: The duty cycles for fans, relative to NPCM7XX_PWM_MAX_DUTY.
87
+ */
88
+typedef struct NPCM7xxMFTState {
89
+ SysBusDevice parent;
90
+
91
+ MemoryRegion iomem;
92
+
93
+ Clock *clock_in;
94
+ Clock *clock_1, *clock_2;
95
+ qemu_irq irq;
96
+ uint16_t regs[NPCM7XX_MFT_NR_REGS];
97
+
98
+ uint32_t max_rpm[NPCM7XX_MFT_FANIN_COUNT];
99
+ uint32_t duty[NPCM7XX_MFT_FANIN_COUNT];
100
+} NPCM7xxMFTState;
101
+
102
+#define TYPE_NPCM7XX_MFT "npcm7xx-mft"
103
+#define NPCM7XX_MFT(obj) \
104
+ OBJECT_CHECK(NPCM7xxMFTState, (obj), TYPE_NPCM7XX_MFT)
105
+
106
+#endif /* NPCM7XX_MFT_H */
107
diff --git a/hw/misc/npcm7xx_mft.c b/hw/misc/npcm7xx_mft.c
108
new file mode 100644
109
index XXXXXXX..XXXXXXX
110
--- /dev/null
111
+++ b/hw/misc/npcm7xx_mft.c
112
@@ -XXX,XX +XXX,XX @@
113
+/*
114
+ * Nuvoton NPCM7xx MFT Module
115
+ *
116
+ * Copyright 2021 Google LLC
117
+ *
118
+ * This program is free software; you can redistribute it and/or modify it
119
+ * under the terms of the GNU General Public License as published by the
120
+ * Free Software Foundation; either version 2 of the License, or
121
+ * (at your option) any later version.
122
+ *
123
+ * This program is distributed in the hope that it will be useful, but WITHOUT
124
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
125
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
126
+ * for more details.
127
+ */
128
+
2737
+#include "qemu/osdep.h"
129
+#include "qemu/osdep.h"
130
+#include "hw/irq.h"
131
+#include "hw/qdev-clock.h"
132
+#include "hw/qdev-properties.h"
133
+#include "hw/misc/npcm7xx_mft.h"
134
+#include "hw/misc/npcm7xx_pwm.h"
135
+#include "hw/registerfields.h"
136
+#include "migration/vmstate.h"
137
+#include "qapi/error.h"
138
+#include "qapi/visitor.h"
139
+#include "qemu/bitops.h"
140
+#include "qemu/error-report.h"
141
+#include "qemu/log.h"
142
+#include "qemu/module.h"
143
+#include "qemu/timer.h"
2738
+#include "qemu/units.h"
144
+#include "qemu/units.h"
2739
+#include "target/arm/idau.h"
2740
+#include "trace.h"
145
+#include "trace.h"
2741
+#include "cpu.h"
146
+
2742
+#include "internals.h"
147
+/*
2743
+#include "exec/gdbstub.h"
148
+ * Some of the registers can only accessed via 16-bit ops and some can only
2744
+#include "exec/helper-proto.h"
149
+ * be accessed via 8-bit ops. However we mark all of them using REG16 to
2745
+#include "qemu/host-utils.h"
150
+ * simplify implementation. npcm7xx_mft_check_mem_op checks the access length
2746
+#include "sysemu/sysemu.h"
151
+ * of memory operations.
2747
+#include "qemu/bitops.h"
152
+ */
2748
+#include "qemu/crc32c.h"
153
+REG16(NPCM7XX_MFT_CNT1, 0x00);
2749
+#include "qemu/qemu-print.h"
154
+REG16(NPCM7XX_MFT_CRA, 0x02);
2750
+#include "exec/exec-all.h"
155
+REG16(NPCM7XX_MFT_CRB, 0x04);
2751
+#include <zlib.h> /* For crc32 */
156
+REG16(NPCM7XX_MFT_CNT2, 0x06);
2752
+#include "hw/semihosting/semihost.h"
157
+REG16(NPCM7XX_MFT_PRSC, 0x08);
2753
+#include "sysemu/cpus.h"
158
+REG16(NPCM7XX_MFT_CKC, 0x0a);
2754
+#include "sysemu/kvm.h"
159
+REG16(NPCM7XX_MFT_MCTRL, 0x0c);
2755
+#include "qemu/range.h"
160
+REG16(NPCM7XX_MFT_ICTRL, 0x0e);
2756
+#include "qapi/qapi-commands-target.h"
161
+REG16(NPCM7XX_MFT_ICLR, 0x10);
2757
+#include "qapi/error.h"
162
+REG16(NPCM7XX_MFT_IEN, 0x12);
2758
+#include "qemu/guest-random.h"
163
+REG16(NPCM7XX_MFT_CPA, 0x14);
2759
+#ifdef CONFIG_TCG
164
+REG16(NPCM7XX_MFT_CPB, 0x16);
2760
+#include "arm_ldst.h"
165
+REG16(NPCM7XX_MFT_CPCFG, 0x18);
2761
+#include "exec/cpu_ldst.h"
166
+REG16(NPCM7XX_MFT_INASEL, 0x1a);
2762
+#endif
167
+REG16(NPCM7XX_MFT_INBSEL, 0x1c);
2763
+
168
+
2764
+#ifdef CONFIG_USER_ONLY
169
+/* Register Fields */
2765
+
170
+#define NPCM7XX_MFT_CKC_C2CSEL BIT(3)
2766
+/* These should probably raise undefined insn exceptions. */
171
+#define NPCM7XX_MFT_CKC_C1CSEL BIT(0)
2767
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val)
172
+
2768
+{
173
+#define NPCM7XX_MFT_MCTRL_TBEN BIT(6)
2769
+ ARMCPU *cpu = env_archcpu(env);
174
+#define NPCM7XX_MFT_MCTRL_TAEN BIT(5)
2770
+
175
+#define NPCM7XX_MFT_MCTRL_TBEDG BIT(4)
2771
+ cpu_abort(CPU(cpu), "v7m_msr %d\n", reg);
176
+#define NPCM7XX_MFT_MCTRL_TAEDG BIT(3)
2772
+}
177
+#define NPCM7XX_MFT_MCTRL_MODE5 BIT(2)
2773
+
178
+
2774
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
179
+#define NPCM7XX_MFT_ICTRL_TFPND BIT(5)
2775
+{
180
+#define NPCM7XX_MFT_ICTRL_TEPND BIT(4)
2776
+ ARMCPU *cpu = env_archcpu(env);
181
+#define NPCM7XX_MFT_ICTRL_TDPND BIT(3)
2777
+
182
+#define NPCM7XX_MFT_ICTRL_TCPND BIT(2)
2778
+ cpu_abort(CPU(cpu), "v7m_mrs %d\n", reg);
183
+#define NPCM7XX_MFT_ICTRL_TBPND BIT(1)
2779
+ return 0;
184
+#define NPCM7XX_MFT_ICTRL_TAPND BIT(0)
2780
+}
185
+
2781
+
186
+#define NPCM7XX_MFT_ICLR_TFCLR BIT(5)
2782
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
187
+#define NPCM7XX_MFT_ICLR_TECLR BIT(4)
2783
+{
188
+#define NPCM7XX_MFT_ICLR_TDCLR BIT(3)
2784
+ /* translate.c should never generate calls here in user-only mode */
189
+#define NPCM7XX_MFT_ICLR_TCCLR BIT(2)
2785
+ g_assert_not_reached();
190
+#define NPCM7XX_MFT_ICLR_TBCLR BIT(1)
2786
+}
191
+#define NPCM7XX_MFT_ICLR_TACLR BIT(0)
2787
+
192
+
2788
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
193
+#define NPCM7XX_MFT_IEN_TFIEN BIT(5)
2789
+{
194
+#define NPCM7XX_MFT_IEN_TEIEN BIT(4)
2790
+ /* translate.c should never generate calls here in user-only mode */
195
+#define NPCM7XX_MFT_IEN_TDIEN BIT(3)
2791
+ g_assert_not_reached();
196
+#define NPCM7XX_MFT_IEN_TCIEN BIT(2)
2792
+}
197
+#define NPCM7XX_MFT_IEN_TBIEN BIT(1)
2793
+
198
+#define NPCM7XX_MFT_IEN_TAIEN BIT(0)
2794
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
199
+
2795
+{
200
+#define NPCM7XX_MFT_CPCFG_GET_B(rv) extract8((rv), 4, 4)
2796
+ /* translate.c should never generate calls here in user-only mode */
201
+#define NPCM7XX_MFT_CPCFG_GET_A(rv) extract8((rv), 0, 4)
2797
+ g_assert_not_reached();
202
+#define NPCM7XX_MFT_CPCFG_HIEN BIT(3)
2798
+}
203
+#define NPCM7XX_MFT_CPCFG_EQEN BIT(2)
2799
+
204
+#define NPCM7XX_MFT_CPCFG_LOEN BIT(1)
2800
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
205
+#define NPCM7XX_MFT_CPCFG_CPSEL BIT(0)
2801
+{
206
+
2802
+ /* translate.c should never generate calls here in user-only mode */
207
+#define NPCM7XX_MFT_INASEL_SELA BIT(0)
2803
+ g_assert_not_reached();
208
+#define NPCM7XX_MFT_INBSEL_SELB BIT(0)
2804
+}
209
+
2805
+
210
+/* Max CNT values of the module. The CNT value is a countdown from it. */
2806
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
211
+#define NPCM7XX_MFT_MAX_CNT 0xFFFF
2807
+{
212
+
2808
+ /* translate.c should never generate calls here in user-only mode */
213
+/* Each fan revolution should generated 2 pulses */
2809
+ g_assert_not_reached();
214
+#define NPCM7XX_MFT_PULSE_PER_REVOLUTION 2
2810
+}
215
+
2811
+
216
+typedef enum NPCM7xxMFTCaptureState {
2812
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
217
+ /* capture succeeded with a valid CNT value. */
218
+ NPCM7XX_CAPTURE_SUCCEED,
219
+ /* capture stopped prematurely due to reaching CPCFG condition. */
220
+ NPCM7XX_CAPTURE_COMPARE_HIT,
221
+ /* capture fails since it reaches underflow condition for CNT. */
222
+ NPCM7XX_CAPTURE_UNDERFLOW,
223
+} NPCM7xxMFTCaptureState;
224
+
225
+static void npcm7xx_mft_reset(NPCM7xxMFTState *s)
226
+{
227
+ int i;
228
+
229
+ /* Only registers PRSC ~ INBSEL need to be reset. */
230
+ for (i = R_NPCM7XX_MFT_PRSC; i <= R_NPCM7XX_MFT_INBSEL; ++i) {
231
+ s->regs[i] = 0;
232
+ }
233
+}
234
+
235
+static void npcm7xx_mft_clear_interrupt(NPCM7xxMFTState *s, uint8_t iclr)
2813
+{
236
+{
2814
+ /*
237
+ /*
2815
+ * The TT instructions can be used by unprivileged code, but in
238
+ * Clear bits in ICTRL where corresponding bits in iclr is 1.
2816
+ * user-only emulation we don't have the MPU.
239
+ * Both iclr and ictrl are 8-bit regs. (See npcm7xx_mft_check_mem_op)
2817
+ * Luckily since we know we are NonSecure unprivileged (and that in
2818
+ * turn means that the A flag wasn't specified), all the bits in the
2819
+ * register must be zero:
2820
+ * IREGION: 0 because IRVALID is 0
2821
+ * IRVALID: 0 because NS
2822
+ * S: 0 because NS
2823
+ * NSRW: 0 because NS
2824
+ * NSR: 0 because NS
2825
+ * RW: 0 because unpriv and A flag not set
2826
+ * R: 0 because unpriv and A flag not set
2827
+ * SRVALID: 0 because NS
2828
+ * MRVALID: 0 because unpriv and A flag not set
2829
+ * SREGION: 0 becaus SRVALID is 0
2830
+ * MREGION: 0 because MRVALID is 0
2831
+ */
240
+ */
2832
+ return 0;
241
+ s->regs[R_NPCM7XX_MFT_ICTRL] &= ~iclr;
2833
+}
242
+}
2834
+
2835
+#else
2836
+
243
+
2837
+/*
244
+/*
2838
+ * What kind of stack write are we doing? This affects how exceptions
245
+ * If the CPCFG's condition should be triggered during count down from
2839
+ * generated during the stacking are treated.
246
+ * NPCM7XX_MFT_MAX_CNT to src if compared to tgt, return the count when
2840
+ */
247
+ * the condition is triggered.
2841
+typedef enum StackingMode {
248
+ * Otherwise return -1.
2842
+ STACK_NORMAL,
249
+ * Since tgt is uint16_t it must always <= NPCM7XX_MFT_MAX_CNT.
2843
+ STACK_IGNFAULTS,
250
+ */
2844
+ STACK_LAZYFP,
251
+static int npcm7xx_mft_compare(int32_t src, uint16_t tgt, uint8_t cpcfg)
2845
+} StackingMode;
252
+{
2846
+
253
+ if (cpcfg & NPCM7XX_MFT_CPCFG_HIEN) {
2847
+static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
254
+ return NPCM7XX_MFT_MAX_CNT;
2848
+ ARMMMUIdx mmu_idx, StackingMode mode)
255
+ }
2849
+{
256
+ if ((cpcfg & NPCM7XX_MFT_CPCFG_EQEN) && (src <= tgt)) {
2850
+ CPUState *cs = CPU(cpu);
257
+ return tgt;
2851
+ CPUARMState *env = &cpu->env;
258
+ }
2852
+ MemTxAttrs attrs = {};
259
+ if ((cpcfg & NPCM7XX_MFT_CPCFG_LOEN) && (tgt > 0) && (src < tgt)) {
2853
+ MemTxResult txres;
260
+ return tgt - 1;
2854
+ target_ulong page_size;
261
+ }
2855
+ hwaddr physaddr;
262
+
2856
+ int prot;
263
+ return -1;
2857
+ ARMMMUFaultInfo fi = {};
264
+}
2858
+ bool secure = mmu_idx & ARM_MMU_IDX_M_S;
265
+
2859
+ int exc;
266
+/* Compute CNT according to corresponding fan's RPM. */
2860
+ bool exc_secure;
267
+static NPCM7xxMFTCaptureState npcm7xx_mft_compute_cnt(
2861
+
268
+ Clock *clock, uint32_t max_rpm, uint32_t duty, uint16_t tgt,
2862
+ if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr,
269
+ uint8_t cpcfg, uint16_t *cnt)
2863
+ &attrs, &prot, &page_size, &fi, NULL)) {
270
+{
2864
+ /* MPU/SAU lookup failed */
271
+ uint32_t rpm = (uint64_t)max_rpm * (uint64_t)duty / NPCM7XX_PWM_MAX_DUTY;
2865
+ if (fi.type == ARMFault_QEMU_SFault) {
272
+ int32_t count;
2866
+ if (mode == STACK_LAZYFP) {
273
+ int stopped;
2867
+ qemu_log_mask(CPU_LOG_INT,
274
+ NPCM7xxMFTCaptureState state;
2868
+ "...SecureFault with SFSR.LSPERR "
275
+
2869
+ "during lazy stacking\n");
276
+ if (rpm == 0) {
2870
+ env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
277
+ /*
2871
+ } else {
278
+ * If RPM = 0, capture won't happen. CNT will continue count down.
2872
+ qemu_log_mask(CPU_LOG_INT,
279
+ * So it's effective equivalent to have a cnt > NPCM7XX_MFT_MAX_CNT
2873
+ "...SecureFault with SFSR.AUVIOL "
280
+ */
2874
+ "during stacking\n");
281
+ count = NPCM7XX_MFT_MAX_CNT + 1;
2875
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
282
+ } else {
2876
+ }
283
+ /*
2877
+ env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
284
+ * RPM = revolution/min. The time for one revlution (in ns) is
2878
+ env->v7m.sfar = addr;
285
+ * MINUTE_TO_NANOSECOND / RPM.
2879
+ exc = ARMV7M_EXCP_SECURE;
286
+ */
2880
+ exc_secure = false;
287
+ count = clock_ns_to_ticks(clock, (60 * NANOSECONDS_PER_SECOND) /
288
+ (rpm * NPCM7XX_MFT_PULSE_PER_REVOLUTION));
289
+ }
290
+
291
+ if (count > NPCM7XX_MFT_MAX_CNT) {
292
+ count = -1;
293
+ } else {
294
+ /* The CNT is a countdown value from NPCM7XX_MFT_MAX_CNT. */
295
+ count = NPCM7XX_MFT_MAX_CNT - count;
296
+ }
297
+ stopped = npcm7xx_mft_compare(count, tgt, cpcfg);
298
+ if (stopped == -1) {
299
+ if (count == -1) {
300
+ /* Underflow */
301
+ state = NPCM7XX_CAPTURE_UNDERFLOW;
2881
+ } else {
302
+ } else {
2882
+ if (mode == STACK_LAZYFP) {
303
+ state = NPCM7XX_CAPTURE_SUCCEED;
2883
+ qemu_log_mask(CPU_LOG_INT,
2884
+ "...MemManageFault with CFSR.MLSPERR\n");
2885
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
2886
+ } else {
2887
+ qemu_log_mask(CPU_LOG_INT,
2888
+ "...MemManageFault with CFSR.MSTKERR\n");
2889
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
2890
+ }
2891
+ exc = ARMV7M_EXCP_MEM;
2892
+ exc_secure = secure;
2893
+ }
2894
+ goto pend_fault;
2895
+ }
2896
+ address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value,
2897
+ attrs, &txres);
2898
+ if (txres != MEMTX_OK) {
2899
+ /* BusFault trying to write the data */
2900
+ if (mode == STACK_LAZYFP) {
2901
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
2902
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
2903
+ } else {
2904
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
2905
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
2906
+ }
2907
+ exc = ARMV7M_EXCP_BUS;
2908
+ exc_secure = false;
2909
+ goto pend_fault;
2910
+ }
2911
+ return true;
2912
+
2913
+pend_fault:
2914
+ /*
2915
+ * By pending the exception at this point we are making
2916
+ * the IMPDEF choice "overridden exceptions pended" (see the
2917
+ * MergeExcInfo() pseudocode). The other choice would be to not
2918
+ * pend them now and then make a choice about which to throw away
2919
+ * later if we have two derived exceptions.
2920
+ * The only case when we must not pend the exception but instead
2921
+ * throw it away is if we are doing the push of the callee registers
2922
+ * and we've already generated a derived exception (this is indicated
2923
+ * by the caller passing STACK_IGNFAULTS). Even in this case we will
2924
+ * still update the fault status registers.
2925
+ */
2926
+ switch (mode) {
2927
+ case STACK_NORMAL:
2928
+ armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
2929
+ break;
2930
+ case STACK_LAZYFP:
2931
+ armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
2932
+ break;
2933
+ case STACK_IGNFAULTS:
2934
+ break;
2935
+ }
2936
+ return false;
2937
+}
2938
+
2939
+static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
2940
+ ARMMMUIdx mmu_idx)
2941
+{
2942
+ CPUState *cs = CPU(cpu);
2943
+ CPUARMState *env = &cpu->env;
2944
+ MemTxAttrs attrs = {};
2945
+ MemTxResult txres;
2946
+ target_ulong page_size;
2947
+ hwaddr physaddr;
2948
+ int prot;
2949
+ ARMMMUFaultInfo fi = {};
2950
+ bool secure = mmu_idx & ARM_MMU_IDX_M_S;
2951
+ int exc;
2952
+ bool exc_secure;
2953
+ uint32_t value;
2954
+
2955
+ if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
2956
+ &attrs, &prot, &page_size, &fi, NULL)) {
2957
+ /* MPU/SAU lookup failed */
2958
+ if (fi.type == ARMFault_QEMU_SFault) {
2959
+ qemu_log_mask(CPU_LOG_INT,
2960
+ "...SecureFault with SFSR.AUVIOL during unstack\n");
2961
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
2962
+ env->v7m.sfar = addr;
2963
+ exc = ARMV7M_EXCP_SECURE;
2964
+ exc_secure = false;
2965
+ } else {
2966
+ qemu_log_mask(CPU_LOG_INT,
2967
+ "...MemManageFault with CFSR.MUNSTKERR\n");
2968
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK;
2969
+ exc = ARMV7M_EXCP_MEM;
2970
+ exc_secure = secure;
2971
+ }
2972
+ goto pend_fault;
2973
+ }
2974
+
2975
+ value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
2976
+ attrs, &txres);
2977
+ if (txres != MEMTX_OK) {
2978
+ /* BusFault trying to read the data */
2979
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
2980
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK;
2981
+ exc = ARMV7M_EXCP_BUS;
2982
+ exc_secure = false;
2983
+ goto pend_fault;
2984
+ }
2985
+
2986
+ *dest = value;
2987
+ return true;
2988
+
2989
+pend_fault:
2990
+ /*
2991
+ * By pending the exception at this point we are making
2992
+ * the IMPDEF choice "overridden exceptions pended" (see the
2993
+ * MergeExcInfo() pseudocode). The other choice would be to not
2994
+ * pend them now and then make a choice about which to throw away
2995
+ * later if we have two derived exceptions.
2996
+ */
2997
+ armv7m_nvic_set_pending(env->nvic, exc, exc_secure);
2998
+ return false;
2999
+}
3000
+
3001
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
3002
+{
3003
+ /*
3004
+ * Preserve FP state (because LSPACT was set and we are about
3005
+ * to execute an FP instruction). This corresponds to the
3006
+ * PreserveFPState() pseudocode.
3007
+ * We may throw an exception if the stacking fails.
3008
+ */
3009
+ ARMCPU *cpu = env_archcpu(env);
3010
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
3011
+ bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
3012
+ bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
3013
+ bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
3014
+ uint32_t fpcar = env->v7m.fpcar[is_secure];
3015
+ bool stacked_ok = true;
3016
+ bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
3017
+ bool take_exception;
3018
+
3019
+ /* Take the iothread lock as we are going to touch the NVIC */
3020
+ qemu_mutex_lock_iothread();
3021
+
3022
+ /* Check the background context had access to the FPU */
3023
+ if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
3024
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
3025
+ env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
3026
+ stacked_ok = false;
3027
+ } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
3028
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
3029
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
3030
+ stacked_ok = false;
3031
+ }
3032
+
3033
+ if (!splimviol && stacked_ok) {
3034
+ /* We only stack if the stack limit wasn't violated */
3035
+ int i;
3036
+ ARMMMUIdx mmu_idx;
3037
+
3038
+ mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
3039
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
3040
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
3041
+ uint32_t faddr = fpcar + 4 * i;
3042
+ uint32_t slo = extract64(dn, 0, 32);
3043
+ uint32_t shi = extract64(dn, 32, 32);
3044
+
3045
+ if (i >= 16) {
3046
+ faddr += 8; /* skip the slot for the FPSCR */
3047
+ }
3048
+ stacked_ok = stacked_ok &&
3049
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
3050
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
3051
+ }
3052
+
3053
+ stacked_ok = stacked_ok &&
3054
+ v7m_stack_write(cpu, fpcar + 0x40,
3055
+ vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
3056
+ }
3057
+
3058
+ /*
3059
+ * We definitely pended an exception, but it's possible that it
3060
+ * might not be able to be taken now. If its priority permits us
3061
+ * to take it now, then we must not update the LSPACT or FP regs,
3062
+ * but instead jump out to take the exception immediately.
3063
+ * If it's just pending and won't be taken until the current
3064
+ * handler exits, then we do update LSPACT and the FP regs.
3065
+ */
3066
+ take_exception = !stacked_ok &&
3067
+ armv7m_nvic_can_take_pending_exception(env->nvic);
3068
+
3069
+ qemu_mutex_unlock_iothread();
3070
+
3071
+ if (take_exception) {
3072
+ raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
3073
+ }
3074
+
3075
+ env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
3076
+
3077
+ if (ts) {
3078
+ /* Clear s0 to s31 and the FPSCR */
3079
+ int i;
3080
+
3081
+ for (i = 0; i < 32; i += 2) {
3082
+ *aa32_vfp_dreg(env, i / 2) = 0;
3083
+ }
3084
+ vfp_set_fpscr(env, 0);
3085
+ }
3086
+ /*
3087
+ * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
3088
+ * unchanged.
3089
+ */
3090
+}
3091
+
3092
+/*
3093
+ * Write to v7M CONTROL.SPSEL bit for the specified security bank.
3094
+ * This may change the current stack pointer between Main and Process
3095
+ * stack pointers if it is done for the CONTROL register for the current
3096
+ * security state.
3097
+ */
3098
+static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
3099
+ bool new_spsel,
3100
+ bool secstate)
3101
+{
3102
+ bool old_is_psp = v7m_using_psp(env);
3103
+
3104
+ env->v7m.control[secstate] =
3105
+ deposit32(env->v7m.control[secstate],
3106
+ R_V7M_CONTROL_SPSEL_SHIFT,
3107
+ R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
3108
+
3109
+ if (secstate == env->v7m.secure) {
3110
+ bool new_is_psp = v7m_using_psp(env);
3111
+ uint32_t tmp;
3112
+
3113
+ if (old_is_psp != new_is_psp) {
3114
+ tmp = env->v7m.other_sp;
3115
+ env->v7m.other_sp = env->regs[13];
3116
+ env->regs[13] = tmp;
3117
+ }
3118
+ }
3119
+}
3120
+
3121
+/*
3122
+ * Write to v7M CONTROL.SPSEL bit. This may change the current
3123
+ * stack pointer between Main and Process stack pointers.
3124
+ */
3125
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
3126
+{
3127
+ write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
3128
+}
3129
+
3130
+void write_v7m_exception(CPUARMState *env, uint32_t new_exc)
3131
+{
3132
+ /*
3133
+ * Write a new value to v7m.exception, thus transitioning into or out
3134
+ * of Handler mode; this may result in a change of active stack pointer.
3135
+ */
3136
+ bool new_is_psp, old_is_psp = v7m_using_psp(env);
3137
+ uint32_t tmp;
3138
+
3139
+ env->v7m.exception = new_exc;
3140
+
3141
+ new_is_psp = v7m_using_psp(env);
3142
+
3143
+ if (old_is_psp != new_is_psp) {
3144
+ tmp = env->v7m.other_sp;
3145
+ env->v7m.other_sp = env->regs[13];
3146
+ env->regs[13] = tmp;
3147
+ }
3148
+}
3149
+
3150
+/* Switch M profile security state between NS and S */
3151
+static void switch_v7m_security_state(CPUARMState *env, bool new_secstate)
3152
+{
3153
+ uint32_t new_ss_msp, new_ss_psp;
3154
+
3155
+ if (env->v7m.secure == new_secstate) {
3156
+ return;
3157
+ }
3158
+
3159
+ /*
3160
+ * All the banked state is accessed by looking at env->v7m.secure
3161
+ * except for the stack pointer; rearrange the SP appropriately.
3162
+ */
3163
+ new_ss_msp = env->v7m.other_ss_msp;
3164
+ new_ss_psp = env->v7m.other_ss_psp;
3165
+
3166
+ if (v7m_using_psp(env)) {
3167
+ env->v7m.other_ss_psp = env->regs[13];
3168
+ env->v7m.other_ss_msp = env->v7m.other_sp;
3169
+ } else {
3170
+ env->v7m.other_ss_msp = env->regs[13];
3171
+ env->v7m.other_ss_psp = env->v7m.other_sp;
3172
+ }
3173
+
3174
+ env->v7m.secure = new_secstate;
3175
+
3176
+ if (v7m_using_psp(env)) {
3177
+ env->regs[13] = new_ss_psp;
3178
+ env->v7m.other_sp = new_ss_msp;
3179
+ } else {
3180
+ env->regs[13] = new_ss_msp;
3181
+ env->v7m.other_sp = new_ss_psp;
3182
+ }
3183
+}
3184
+
3185
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
3186
+{
3187
+ /*
3188
+ * Handle v7M BXNS:
3189
+ * - if the return value is a magic value, do exception return (like BX)
3190
+ * - otherwise bit 0 of the return value is the target security state
3191
+ */
3192
+ uint32_t min_magic;
3193
+
3194
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
3195
+ /* Covers FNC_RETURN and EXC_RETURN magic */
3196
+ min_magic = FNC_RETURN_MIN_MAGIC;
3197
+ } else {
3198
+ /* EXC_RETURN magic only */
3199
+ min_magic = EXC_RETURN_MIN_MAGIC;
3200
+ }
3201
+
3202
+ if (dest >= min_magic) {
3203
+ /*
3204
+ * This is an exception return magic value; put it where
3205
+ * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
3206
+ * Note that if we ever add gen_ss_advance() singlestep support to
3207
+ * M profile this should count as an "instruction execution complete"
3208
+ * event (compare gen_bx_excret_final_code()).
3209
+ */
3210
+ env->regs[15] = dest & ~1;
3211
+ env->thumb = dest & 1;
3212
+ HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT);
3213
+ /* notreached */
3214
+ }
3215
+
3216
+ /* translate.c should have made BXNS UNDEF unless we're secure */
3217
+ assert(env->v7m.secure);
3218
+
3219
+ if (!(dest & 1)) {
3220
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
3221
+ }
3222
+ switch_v7m_security_state(env, dest & 1);
3223
+ env->thumb = 1;
3224
+ env->regs[15] = dest & ~1;
3225
+}
3226
+
3227
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
3228
+{
3229
+ /*
3230
+ * Handle v7M BLXNS:
3231
+ * - bit 0 of the destination address is the target security state
3232
+ */
3233
+
3234
+ /* At this point regs[15] is the address just after the BLXNS */
3235
+ uint32_t nextinst = env->regs[15] | 1;
3236
+ uint32_t sp = env->regs[13] - 8;
3237
+ uint32_t saved_psr;
3238
+
3239
+ /* translate.c will have made BLXNS UNDEF unless we're secure */
3240
+ assert(env->v7m.secure);
3241
+
3242
+ if (dest & 1) {
3243
+ /*
3244
+ * Target is Secure, so this is just a normal BLX,
3245
+ * except that the low bit doesn't indicate Thumb/not.
3246
+ */
3247
+ env->regs[14] = nextinst;
3248
+ env->thumb = 1;
3249
+ env->regs[15] = dest & ~1;
3250
+ return;
3251
+ }
3252
+
3253
+ /* Target is non-secure: first push a stack frame */
3254
+ if (!QEMU_IS_ALIGNED(sp, 8)) {
3255
+ qemu_log_mask(LOG_GUEST_ERROR,
3256
+ "BLXNS with misaligned SP is UNPREDICTABLE\n");
3257
+ }
3258
+
3259
+ if (sp < v7m_sp_limit(env)) {
3260
+ raise_exception(env, EXCP_STKOF, 0, 1);
3261
+ }
3262
+
3263
+ saved_psr = env->v7m.exception;
3264
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
3265
+ saved_psr |= XPSR_SFPA;
3266
+ }
3267
+
3268
+ /* Note that these stores can throw exceptions on MPU faults */
3269
+ cpu_stl_data(env, sp, nextinst);
3270
+ cpu_stl_data(env, sp + 4, saved_psr);
3271
+
3272
+ env->regs[13] = sp;
3273
+ env->regs[14] = 0xfeffffff;
3274
+ if (arm_v7m_is_handler_mode(env)) {
3275
+ /*
3276
+ * Write a dummy value to IPSR, to avoid leaking the current secure
3277
+ * exception number to non-secure code. This is guaranteed not
3278
+ * to cause write_v7m_exception() to actually change stacks.
3279
+ */
3280
+ write_v7m_exception(env, 1);
3281
+ }
3282
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
3283
+ switch_v7m_security_state(env, 0);
3284
+ env->thumb = 1;
3285
+ env->regs[15] = dest;
3286
+}
3287
+
3288
+static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
3289
+ bool spsel)
3290
+{
3291
+ /*
3292
+ * Return a pointer to the location where we currently store the
3293
+ * stack pointer for the requested security state and thread mode.
3294
+ * This pointer will become invalid if the CPU state is updated
3295
+ * such that the stack pointers are switched around (eg changing
3296
+ * the SPSEL control bit).
3297
+ * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
3298
+ * Unlike that pseudocode, we require the caller to pass us in the
3299
+ * SPSEL control bit value; this is because we also use this
3300
+ * function in handling of pushing of the callee-saves registers
3301
+ * part of the v8M stack frame (pseudocode PushCalleeStack()),
3302
+ * and in the tailchain codepath the SPSEL bit comes from the exception
3303
+ * return magic LR value from the previous exception. The pseudocode
3304
+ * opencodes the stack-selection in PushCalleeStack(), but we prefer
3305
+ * to make this utility function generic enough to do the job.
3306
+ */
3307
+ bool want_psp = threadmode && spsel;
3308
+
3309
+ if (secure == env->v7m.secure) {
3310
+ if (want_psp == v7m_using_psp(env)) {
3311
+ return &env->regs[13];
3312
+ } else {
3313
+ return &env->v7m.other_sp;
3314
+ }
304
+ }
3315
+ } else {
305
+ } else {
3316
+ if (want_psp) {
306
+ count = stopped;
3317
+ return &env->v7m.other_ss_psp;
307
+ state = NPCM7XX_CAPTURE_COMPARE_HIT;
3318
+ } else {
308
+ }
3319
+ return &env->v7m.other_ss_msp;
309
+
3320
+ }
310
+ if (count != -1) {
3321
+ }
311
+ *cnt = count;
3322
+}
312
+ }
3323
+
313
+ trace_npcm7xx_mft_rpm(clock->canonical_path, clock_get_hz(clock),
+static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
+                                uint32_t *pvec)
+{
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxResult result;
+    uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4;
+    uint32_t vector_entry;
+    MemTxAttrs attrs = {};
+    ARMMMUIdx mmu_idx;
+    bool exc_secure;
+
+    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
+
+    /*
+     * We don't do a get_phys_addr() here because the rules for vector
+     * loads are special: they always use the default memory map, and
+     * the default memory map permits reads from all addresses.
+     * Since there's no easy way to pass through to pmsav8_mpu_lookup()
+     * that we want this special case which would always say "yes",
+     * we just do the SAU lookup here followed by a direct physical load.
+     */
+    attrs.secure = targets_secure;
+    attrs.user = false;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        V8M_SAttributes sattrs = {};
+
+        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
+        if (sattrs.ns) {
+            attrs.secure = false;
+        } else if (!targets_secure) {
+            /* NS access to S memory */
+            goto load_fail;
+        }
+    }
+
+    vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
+                                     attrs, &result);
+    if (result != MEMTX_OK) {
+        goto load_fail;
+    }
+    *pvec = vector_entry;
+    return true;
+
+load_fail:
+    /*
+     * All vector table fetch fails are reported as HardFault, with
+     * HFSR.VECTTBL and .FORCED set. (FORCED is set because
+     * technically the underlying exception is a MemManage or BusFault
+     * that is escalated to HardFault.) This is a terminal exception,
+     * so we will either take the HardFault immediately or else enter
+     * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
+     */
+    exc_secure = targets_secure ||
+        !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
+    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
+    armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
+    return false;
+}
+
+static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
+{
+    /*
+     * Return the integrity signature value for the callee-saves
+     * stack frame section. @lr is the exception return payload/LR value
+     * whose FType bit forms bit 0 of the signature if FP is present.
+     */
+    uint32_t sig = 0xfefa125a;
+
+    if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
+        sig |= 1;
+    }
+    return sig;
+}
+
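A quick note on the two possible signature values above: bit 0 distinguishes a frame with no FP context to restore. A hypothetical standalone restatement (integrity_sig is an invented name; the constant is the one used in the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical restatement of v7m_integrity_sig(): the v8M integrity
 * signature is 0xfefa125a, with bit 0 set when there is no FP state
 * in the frame (no FPU present, or EXCRET.FType set).
 */
static uint32_t integrity_sig(bool have_vfp, bool excret_ftype)
{
    uint32_t sig = 0xfefa125a;

    if (!have_vfp || excret_ftype) {
        sig |= 1;  /* 0xfefa125b: standard (no-FP) callee-saves frame */
    }
    return sig;
}
```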
+static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
+                                  bool ignore_faults)
+{
+    /*
+     * For v8M, push the callee-saves register part of the stack frame.
+     * Compare the v8M pseudocode PushCalleeStack().
+     * In the tailchaining case this may not be the current stack.
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t *frame_sp_p;
+    uint32_t frameptr;
+    ARMMMUIdx mmu_idx;
+    bool stacked_ok;
+    uint32_t limit;
+    bool want_psp;
+    uint32_t sig;
+    StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
+
+    if (dotailchain) {
+        bool mode = lr & R_V7M_EXCRET_MODE_MASK;
+        bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) ||
+            !mode;
+
+        mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv);
+        frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode,
+                                    lr & R_V7M_EXCRET_SPSEL_MASK);
+        want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK);
+        if (want_psp) {
+            limit = env->v7m.psplim[M_REG_S];
+        } else {
+            limit = env->v7m.msplim[M_REG_S];
+        }
+    } else {
+        mmu_idx = arm_mmu_idx(env);
+        frame_sp_p = &env->regs[13];
+        limit = v7m_sp_limit(env);
+    }
+
+    frameptr = *frame_sp_p - 0x28;
+    if (frameptr < limit) {
+        /*
+         * Stack limit failure: set SP to the limit value, and generate
+         * STKOF UsageFault. Stack pushes below the limit must not be
+         * performed. It is IMPDEF whether pushes above the limit are
+         * performed; we choose not to.
+         */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...STKOF during callee-saves register stacking\n");
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                env->v7m.secure);
+        *frame_sp_p = limit;
+        return true;
+    }
+
+    /*
+     * Write as much of the stack frame as we can. A write failure may
+     * cause us to pend a derived exception.
+     */
+    sig = v7m_integrity_sig(env, lr);
+    stacked_ok =
+        v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
+
+    /* Update SP regardless of whether any of the stack accesses failed. */
+    *frame_sp_p = frameptr;
+
+    return !stacked_ok;
+}
+
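The offsets in the v7m_stack_write() chain above define the 0x28-byte callee-saves frame: signature at 0, a reserved word at 0x4, then r4..r11. A hypothetical helper capturing that layout (names invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical map of the v8M callee-saves frame pushed by
 * v7m_push_callee_stack(): 0x28 bytes total, integrity signature at
 * offset 0, reserved word at 0x4, then r4..r11 at 0x8..0x24.
 */
#define CALLEE_FRAME_SIZE 0x28u

static uint32_t callee_frame_offset_of_reg(int reg /* 4..11 */)
{
    return 0x8u + 4u * (uint32_t)(reg - 4);
}
```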
+static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
+                                bool ignore_stackfaults)
+{
+    /*
+     * Do the "take the exception" parts of exception entry,
+     * but not the pushing of state to the stack. This is
+     * similar to the pseudocode ExceptionTaken() function.
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t addr;
+    bool targets_secure;
+    int exc;
+    bool push_failed = false;
+
+    armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
+    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
+                  targets_secure ? "secure" : "nonsecure", exc);
+
+    if (dotailchain) {
+        /* Sanitize LR FType and PREFIX bits */
+        if (!arm_feature(env, ARM_FEATURE_VFP)) {
+            lr |= R_V7M_EXCRET_FTYPE_MASK;
+        }
+        lr = deposit32(lr, 24, 8, 0xff);
+    }
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+            (lr & R_V7M_EXCRET_S_MASK)) {
+            /*
+             * The background code (the owner of the registers in the
+             * exception frame) is Secure. This means it may either already
+             * have or now needs to push callee-saves registers.
+             */
+            if (targets_secure) {
+                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
+                    /*
+                     * We took an exception from Secure to NonSecure
+                     * (which means the callee-saved registers got stacked)
+                     * and are now tailchaining to a Secure exception.
+                     * Clear DCRS so eventual return from this Secure
+                     * exception unstacks the callee-saved registers.
+                     */
+                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
+                }
+            } else {
+                /*
+                 * We're going to a non-secure exception; push the
+                 * callee-saves registers to the stack now, if they're
+                 * not already saved.
+                 */
+                if (lr & R_V7M_EXCRET_DCRS_MASK &&
+                    !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) {
+                    push_failed = v7m_push_callee_stack(cpu, lr, dotailchain,
+                                                        ignore_stackfaults);
+                }
+                lr |= R_V7M_EXCRET_DCRS_MASK;
+            }
+        }
+
+        lr &= ~R_V7M_EXCRET_ES_MASK;
+        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            lr |= R_V7M_EXCRET_ES_MASK;
+        }
+        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
+        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
+
+        /*
+         * Clear registers if necessary to prevent non-secure exception
+         * code being able to see register values from secure code.
+         * Where register values become architecturally UNKNOWN we leave
+         * them with their previous values.
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (!targets_secure) {
+                /*
+                 * Always clear the caller-saved registers (they have been
+                 * pushed to the stack earlier in v7m_push_stack()).
+                 * Clear callee-saved registers if the background code is
+                 * Secure (in which case these regs were saved in
+                 * v7m_push_callee_stack()).
+                 */
+                int i;
+
+                for (i = 0; i < 13; i++) {
+                    /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
+                    if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
+                        env->regs[i] = 0;
+                    }
+                }
+                /* Clear EAPSR */
+                xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
+            }
+        }
+    }
+
+    if (push_failed && !ignore_stackfaults) {
+        /*
+         * Derived exception on callee-saves register stacking:
+         * we might now want to take a different exception which
+         * targets a different security state, so try again from the top.
+         */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...derived exception on callee-saves register stacking");
+        v7m_exception_taken(cpu, lr, true, true);
+        return;
+    }
+
+    if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
+        /* Vector load failed: derived exception */
+        qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
+        v7m_exception_taken(cpu, lr, true, true);
+        return;
+    }
+
+    /*
+     * Now we've done everything that might cause a derived exception
+     * we can go ahead and activate whichever exception we're going to
+     * take (which might now be the derived exception).
+     */
+    armv7m_nvic_acknowledge_irq(env->nvic);
+
+    /* Switch to target security state -- must do this before writing SPSEL */
+    switch_v7m_security_state(env, targets_secure);
+    write_v7m_control_spsel(env, 0);
+    arm_clear_exclusive(env);
+    /* Clear SFPA and FPCA (has no effect if no FPU) */
+    env->v7m.control[M_REG_S] &=
+        ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
+    /* Clear IT bits */
+    env->condexec_bits = 0;
+    env->regs[14] = lr;
+    env->regs[15] = addr & 0xfffffffe;
+    env->thumb = addr & 1;
+}
+
+static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
+                             bool apply_splim)
+{
+    /*
+     * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
+     * that we will need later in order to do lazy FP reg stacking.
+     */
+    bool is_secure = env->v7m.secure;
+    void *nvic = env->nvic;
+    /*
+     * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
+     * are banked and we want to update the bit in the bank for the
+     * current security state; and in one case we want to specifically
+     * update the NS banked version of a bit even if we are secure.
+     */
+    uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
+    uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
+    uint32_t *fpccr = &env->v7m.fpccr[is_secure];
+    bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
+
+    env->v7m.fpcar[is_secure] = frameptr & ~0x7;
+
+    if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
+        bool splimviol;
+        uint32_t splim = v7m_sp_limit(env);
+        bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
+            (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
+
+        splimviol = !ign && frameptr < splim;
+        *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
+    }
+
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
+
+    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
+
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
+
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
+                        !arm_v7m_is_handler_mode(env));
+
+    hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
+    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
+
+    bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
+    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
+
+    mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
+    *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
+
+    ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
+    *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
+
+    monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
+    *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
+        *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
+
+        sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
+        *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
+    }
+}
+
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
+{
+    /* fptr is the value of Rn, the frame pointer we store the FP regs to */
+    bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
+    bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
+
+    assert(env->v7m.secure);
+
+    if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
+        return;
+    }
+
+    /* Check access to the coprocessor is permitted */
+    if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
+        raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
+    }
+
+    if (lspact) {
+        /* LSPACT should not be active when there is active FP state */
+        raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
+    }
+
+    if (fptr & 7) {
+        raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
+    }
+
+    /*
+     * Note that we do not use v7m_stack_write() here, because the
+     * accesses should not set the FSR bits for stacking errors if they
+     * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
+     * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
+     * and longjmp out.
+     */
+    if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
+        bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
+        int i;
+
+        for (i = 0; i < (ts ? 32 : 16); i += 2) {
+            uint64_t dn = *aa32_vfp_dreg(env, i / 2);
+            uint32_t faddr = fptr + 4 * i;
+            uint32_t slo = extract64(dn, 0, 32);
+            uint32_t shi = extract64(dn, 32, 32);
+
+            if (i >= 16) {
+                faddr += 8; /* skip the slot for the FPSCR */
+            }
+            cpu_stl_data(env, faddr, slo);
+            cpu_stl_data(env, faddr + 4, shi);
+        }
+        cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
+
+        /*
+         * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
+         * leave them unchanged, matching our choice in v7m_preserve_fp_state.
+         */
+        if (ts) {
+            for (i = 0; i < 32; i += 2) {
+                *aa32_vfp_dreg(env, i / 2) = 0;
+            }
+            vfp_set_fpscr(env, 0);
+        }
+    } else {
+        v7m_update_fpccr(env, fptr, false);
+    }
+
+    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
+}
+
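The VLSTM/VLLDM loops above share one address computation: single-precision register s&lt;i&gt; lives at fptr + 4*i, with s16..s31 shifted up by 8 bytes to leave room for the FPSCR slot at +0x40. A hypothetical helper mirroring that (fp_frame_addr is an invented name):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical mirror of the faddr computation in the VLSTM/VLLDM
 * loops: s<sreg> is stored at fptr + 4*sreg, except that s16..s31 are
 * moved up 8 bytes past the FPSCR slot (at +0x40) and its pad word.
 */
static uint32_t fp_frame_addr(uint32_t fptr, int sreg)
{
    uint32_t faddr = fptr + 4 * (uint32_t)sreg;

    if (sreg >= 16) {
        faddr += 8; /* skip the slot for the FPSCR */
    }
    return faddr;
}
```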
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
+{
+    /* fptr is the value of Rn, the frame pointer we load the FP regs from */
+    assert(env->v7m.secure);
+
+    if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
+        return;
+    }
+
+    /* Check access to the coprocessor is permitted */
+    if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
+        raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
+    }
+
+    if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
+        /* State in FP is still valid */
+        env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
+    } else {
+        bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
+        int i;
+        uint32_t fpscr;
+
+        if (fptr & 7) {
+            raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
+        }
+
+        for (i = 0; i < (ts ? 32 : 16); i += 2) {
+            uint32_t slo, shi;
+            uint64_t dn;
+            uint32_t faddr = fptr + 4 * i;
+
+            if (i >= 16) {
+                faddr += 8; /* skip the slot for the FPSCR */
+            }
+
+            slo = cpu_ldl_data(env, faddr);
+            shi = cpu_ldl_data(env, faddr + 4);
+
+            dn = (uint64_t) shi << 32 | slo;
+            *aa32_vfp_dreg(env, i / 2) = dn;
+        }
+        fpscr = cpu_ldl_data(env, fptr + 0x40);
+        vfp_set_fpscr(env, fpscr);
+    }
+
+    env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
+}
+
+static bool v7m_push_stack(ARMCPU *cpu)
+{
+    /*
+     * Do the "set up stack frame" part of exception entry,
+     * similar to pseudocode PushStack().
+     * Return true if we generate a derived exception (and so
+     * should ignore further stack faults trying to process
+     * that derived exception.)
+     */
+    bool stacked_ok = true, limitviol = false;
+    CPUARMState *env = &cpu->env;
+    uint32_t xpsr = xpsr_read(env);
+    uint32_t frameptr = env->regs[13];
+    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
+    uint32_t framesize;
+    bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
+
+    if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
+        (env->v7m.secure || nsacr_cp10)) {
+        if (env->v7m.secure &&
+            env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
+            framesize = 0xa8;
+        } else {
+            framesize = 0x68;
+        }
+    } else {
+        framesize = 0x20;
+    }
+
+    /* Align stack pointer if the guest wants that */
+    if ((frameptr & 4) &&
+        (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
+        frameptr -= 4;
+        xpsr |= XPSR_SPREALIGN;
+    }
+
+    xpsr &= ~XPSR_SFPA;
+    if (env->v7m.secure &&
+        (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
+        xpsr |= XPSR_SFPA;
+    }
+
+    frameptr -= framesize;
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        uint32_t limit = v7m_sp_limit(env);
+
+        if (frameptr < limit) {
+            /*
+             * Stack limit failure: set SP to the limit value, and generate
+             * STKOF UsageFault. Stack pushes below the limit must not be
+             * performed. It is IMPDEF whether pushes above the limit are
+             * performed; we choose not to.
+             */
+            qemu_log_mask(CPU_LOG_INT,
+                          "...STKOF during stacking\n");
+            env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                    env->v7m.secure);
+            env->regs[13] = limit;
+            /*
+             * We won't try to perform any further memory accesses but
+             * we must continue through the following code to check for
+             * permission faults during FPU state preservation, and we
+             * must update FPCCR if lazy stacking is enabled.
+             */
+            limitviol = true;
+            stacked_ok = false;
+        }
+    }
+
+    /*
+     * Write as much of the stack frame as we can. If we fail a stack
+     * write this will result in a derived exception being pended
+     * (which may be taken in preference to the one we started with
+     * if it has higher priority).
+     */
+    stacked_ok = stacked_ok &&
+        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 4, env->regs[1],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 8, env->regs[2],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 12, env->regs[3],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 16, env->regs[12],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 20, env->regs[14],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 24, env->regs[15],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
+
+    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
+        /* FPU is active, try to save its registers */
+        bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
+        bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
+
+        if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...SecureFault because LSPACT and FPCA both set\n");
+            env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        } else if (!env->v7m.secure && !nsacr_cp10) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...Secure UsageFault with CFSR.NOCP because "
+                          "NSACR.CP10 prevents stacking FP regs\n");
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
+            env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+        } else {
+            if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
+                /* Lazy stacking disabled, save registers now */
+                int i;
+                bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
+                                                 arm_current_el(env) != 0);
+
+                if (stacked_ok && !cpacr_pass) {
+                    /*
+                     * Take UsageFault if CPACR forbids access. The pseudocode
+                     * here does a full CheckCPEnabled() but we know the NSACR
+                     * check can never fail as we have already handled that.
+                     */
+                    qemu_log_mask(CPU_LOG_INT,
+                                  "...UsageFault with CFSR.NOCP because "
+                                  "CPACR.CP10 prevents stacking FP regs\n");
+                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                            env->v7m.secure);
+                    env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
+                    stacked_ok = false;
+                }
+
+                for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
+                    uint64_t dn = *aa32_vfp_dreg(env, i / 2);
+                    uint32_t faddr = frameptr + 0x20 + 4 * i;
+                    uint32_t slo = extract64(dn, 0, 32);
+                    uint32_t shi = extract64(dn, 32, 32);
+
+                    if (i >= 16) {
+                        faddr += 8; /* skip the slot for the FPSCR */
+                    }
+                    stacked_ok = stacked_ok &&
+                        v7m_stack_write(cpu, faddr, slo,
+                                        mmu_idx, STACK_NORMAL) &&
+                        v7m_stack_write(cpu, faddr + 4, shi,
+                                        mmu_idx, STACK_NORMAL);
+                }
+                stacked_ok = stacked_ok &&
+                    v7m_stack_write(cpu, frameptr + 0x60,
+                                    vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
+                if (cpacr_pass) {
+                    for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
+                        *aa32_vfp_dreg(env, i / 2) = 0;
+                    }
+                    vfp_set_fpscr(env, 0);
+                }
+            } else {
+                /* Lazy stacking enabled, save necessary info to stack later */
+                v7m_update_fpccr(env, frameptr + 0x20, true);
+            }
+        }
+    }
+
+    /*
+     * If we broke a stack limit then SP was already updated earlier;
+     * otherwise we update SP regardless of whether any of the stack
+     * accesses failed or we took some other kind of fault.
+     */
+    if (!limitviol) {
+        env->regs[13] = frameptr;
+    }
+
+    return !stacked_ok;
+}
+
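The frame-size selection at the top of v7m_push_stack() picks between three layouts. A hypothetical standalone restatement of that choice (v7m_frame_size and its parameters are invented names; the sizes are the ones in the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical restatement of the framesize choice in v7m_push_stack():
 * 0x20 bytes for an integer-only frame, 0x68 when the FP extension
 * region (s0..s15 + FPSCR) is stacked, 0xa8 when the Secure TS bit
 * additionally requires slots for s16..s31.
 */
static uint32_t v7m_frame_size(bool fpca, bool nsacr_cp10, bool secure, bool ts)
{
    if (fpca && (secure || nsacr_cp10)) {
        return (secure && ts) ? 0xa8u : 0x68u;
    }
    return 0x20u;
}
```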
+static void do_v7m_exception_exit(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    uint32_t excret;
+    uint32_t xpsr, xpsr_mask;
+    bool ufault = false;
+    bool sfault = false;
+    bool return_to_sp_process;
+    bool return_to_handler;
+    bool rettobase = false;
+    bool exc_secure = false;
+    bool return_to_secure;
+    bool ftype;
+    bool restore_s16_s31;
+
+    /*
+     * If we're not in Handler mode then jumps to magic exception-exit
+     * addresses don't have magic behaviour. However for the v8M
+     * security extensions the magic secure-function-return has to
+     * work in thread mode too, so to avoid doing an extra check in
+     * the generated code we allow exception-exit magic to also cause the
+     * internal exception and bring us here in thread mode. Correct code
+     * will never try to do this (the following insn fetch will always
+     * fault) so the overhead of having taken an unnecessary exception
+     * doesn't matter.
+     */
+    if (!arm_v7m_is_handler_mode(env)) {
+        return;
+    }
+
+    /*
+     * In the spec pseudocode ExceptionReturn() is called directly
+     * from BXWritePC() and gets the full target PC value including
+     * bit zero. In QEMU's implementation we treat it as a normal
+     * jump-to-register (which is then caught later on), and so split
+     * the target value up between env->regs[15] and env->thumb in
+     * gen_bx(). Reconstitute it.
+     */
+    excret = env->regs[15];
+    if (env->thumb) {
+        excret |= 1;
+    }
+
+    qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
+                  " previous exception %d\n",
+                  excret, env->v7m.exception);
+
+    if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
+        qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
+                      "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
+                      excret);
+    }
+
+    ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
+
+    if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
+        qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
+                      "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
+                      "if FPU not present\n",
+                      excret);
+        ftype = true;
+    }
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /*
+         * EXC_RETURN.ES validation check (R_SMFL). We must do this before
+         * we pick which FAULTMASK to clear.
+         */
+        if (!env->v7m.secure &&
+            ((excret & R_V7M_EXCRET_ES_MASK) ||
+             !(excret & R_V7M_EXCRET_DCRS_MASK))) {
+            sfault = 1;
+            /* For all other purposes, treat ES as 0 (R_HXSR) */
+            excret &= ~R_V7M_EXCRET_ES_MASK;
+        }
+        exc_secure = excret & R_V7M_EXCRET_ES_MASK;
+    }
+
+    if (env->v7m.exception != ARMV7M_EXCP_NMI) {
+        /*
+         * Auto-clear FAULTMASK on return from other than NMI.
+         * If the security extension is implemented then this only
+         * happens if the raw execution priority is >= 0; the
+         * value of the ES bit in the exception return value indicates
+         * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
+                env->v7m.faultmask[exc_secure] = 0;
+            }
+        } else {
+            env->v7m.faultmask[M_REG_NS] = 0;
+        }
+    }
+
+    switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
+                                     exc_secure)) {
+    case -1:
+        /* attempt to exit an exception that isn't active */
+        ufault = true;
+        break;
+    case 0:
+        /* still an irq active now */
+        break;
+    case 1:
+        /*
+         * We returned to base exception level, no nesting.
+         * (In the pseudocode this is written using "NestedActivation != 1"
+         * where we have 'rettobase == false'.)
+         */
+        rettobase = true;
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
+    return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
+    return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+        (excret & R_V7M_EXCRET_S_MASK);
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            /*
+             * UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
+             * we choose to take the UsageFault.
+             */
+            if ((excret & R_V7M_EXCRET_S_MASK) ||
+                (excret & R_V7M_EXCRET_ES_MASK) ||
+                !(excret & R_V7M_EXCRET_DCRS_MASK)) {
+                ufault = true;
+            }
+        }
+        if (excret & R_V7M_EXCRET_RES0_MASK) {
+            ufault = true;
+        }
+    } else {
+        /* For v7M we only recognize certain combinations of the low bits */
+        switch (excret & 0xf) {
+        case 1: /* Return to Handler */
+            break;
+        case 13: /* Return to Thread using Process stack */
+        case 9: /* Return to Thread using Main stack */
+            /*
+             * We only need to check NONBASETHRDENA for v7M, because in
+             * v8M this bit does not exist (it is RES1).
+             */
+            if (!rettobase &&
+                !(env->v7m.ccr[env->v7m.secure] &
+                  R_V7M_CCR_NONBASETHRDENA_MASK)) {
+                ufault = true;
+            }
+            break;
+        default:
+            ufault = true;
+        }
+    }
+
+    /*
+     * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
+     * We must do this before we do any kind of tailchaining, including
+     * for the derived exceptions on integrity check failures, or we will
+     * give the guest an incorrect EXCRET.SPSEL value on exception entry.
+     */
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
+
+    /*
+     * Clear scratch FP values left in caller saved registers; this
+     * must happen before any kind of tail chaining.
+     */
+    if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
+        (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
+        if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
+            env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+            qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                          "stackframe: error during lazy state deactivation\n");
+            v7m_exception_taken(cpu, excret, true, false);
+            return;
+        } else {
+            /* Clear s0..s15 and FPSCR */
+            int i;
+
+            for (i = 0; i < 16; i += 2) {
+                *aa32_vfp_dreg(env, i / 2) = 0;
+            }
+            vfp_set_fpscr(env, 0);
+        }
+    }
+
+    if (sfault) {
+        env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                      "stackframe: failed EXC_RETURN.ES validity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
+    if (ufault) {
+        /*
+         * Bad exception return: instead of popping the exception
+         * stack, directly take a usage fault on the current stack.
+         */
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+        qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                      "stackframe: failed exception return integrity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
+    /*
+     * Tailchaining: if there is currently a pending exception that
+     * is high enough priority to preempt execution at the level we're
+     * about to return to, then just directly take that exception now,
+     * avoiding an unstack-and-then-stack. Note that now we have
+     * deactivated the previous exception by calling armv7m_nvic_complete_irq()
+     * our current execution priority is already the execution priority we are
+     * returning to -- none of the state we would unstack or set based on
+     * the EXCRET value affects it.
+     */
+    if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
+        qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
+    switch_v7m_security_state(env, return_to_secure);
+
+    {
+        /*
+         * The stack pointer we should be reading the exception frame from
+         * depends on bits in the magic exception return type value (and
+         * for v8M isn't necessarily the stack pointer we will eventually
4207
+ * end up resuming execution with). Get a pointer to the location
4208
+ * in the CPU state struct where the SP we need is currently being
4209
+ * stored; we will use and modify it in place.
4210
+ * We use this limited C variable scope so we don't accidentally
4211
+ * use 'frame_sp_p' after we do something that makes it invalid.
4212
+ */
4213
+ uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
4214
+ return_to_secure,
4215
+ !return_to_handler,
4216
+ return_to_sp_process);
4217
+ uint32_t frameptr = *frame_sp_p;
4218
+ bool pop_ok = true;
4219
+ ARMMMUIdx mmu_idx;
4220
+ bool return_to_priv = return_to_handler ||
4221
+ !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
4222
+
4223
+ mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
4224
+ return_to_priv);
4225
+
4226
+ if (!QEMU_IS_ALIGNED(frameptr, 8) &&
4227
+ arm_feature(env, ARM_FEATURE_V8)) {
4228
+ qemu_log_mask(LOG_GUEST_ERROR,
4229
+ "M profile exception return with non-8-aligned SP "
4230
+ "for destination state is UNPREDICTABLE\n");
4231
+ }
4232
+
4233
+ /* Do we need to pop callee-saved registers? */
4234
+ if (return_to_secure &&
4235
+ ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
4236
+ (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
4237
+ uint32_t actual_sig;
4238
+
4239
+ pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
4240
+
4241
+ if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
4242
+ /* Take a SecureFault on the current stack */
4243
+ env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
4244
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4245
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
4246
+ "stackframe: failed exception return integrity "
4247
+ "signature check\n");
4248
+ v7m_exception_taken(cpu, excret, true, false);
4249
+ return;
4250
+ }
4251
+
4252
+ pop_ok = pop_ok &&
4253
+ v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
4254
+ v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
4255
+ v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
4256
+ v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
4257
+ v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
4258
+ v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
4259
+ v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
4260
+ v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
4261
+
4262
+ frameptr += 0x28;
4263
+ }
4264
+
4265
+ /* Pop registers */
4266
+ pop_ok = pop_ok &&
4267
+ v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
4268
+ v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
4269
+ v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
4270
+ v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
4271
+ v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
4272
+ v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
4273
+ v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
4274
+ v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
4275
+
4276
+ if (!pop_ok) {
4277
+ /*
4278
+ * v7m_stack_read() pended a fault, so take it (as a tail
4279
+ * chained exception on the same stack frame)
4280
+ */
4281
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
4282
+ v7m_exception_taken(cpu, excret, true, false);
4283
+ return;
4284
+ }
4285
+
4286
+ /*
4287
+ * Returning from an exception with a PC with bit 0 set is defined
4288
+ * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
4289
+ * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
4290
+ * the lsbit, and there are several RTOSes out there which incorrectly
4291
+ * assume the r15 in the stack frame should be a Thumb-style "lsbit
4292
+ * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
4293
+ * complain about the badly behaved guest.
4294
+ */
4295
+ if (env->regs[15] & 1) {
4296
+ env->regs[15] &= ~1U;
4297
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
4298
+ qemu_log_mask(LOG_GUEST_ERROR,
4299
+ "M profile return from interrupt with misaligned "
4300
+ "PC is UNPREDICTABLE on v7M\n");
4301
+ }
4302
+ }
4303
+
4304
+ if (arm_feature(env, ARM_FEATURE_V8)) {
4305
+ /*
4306
+ * For v8M we have to check whether the xPSR exception field
4307
+ * matches the EXCRET value for return to handler/thread
4308
+ * before we commit to changing the SP and xPSR.
4309
+ */
4310
+ bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
4311
+ if (return_to_handler != will_be_handler) {
4312
+ /*
4313
+ * Take an INVPC UsageFault on the current stack.
4314
+ * By this point we will have switched to the security state
4315
+ * for the background state, so this UsageFault will target
4316
+ * that state.
4317
+ */
4318
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
4319
+ env->v7m.secure);
4320
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4321
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
4322
+ "stackframe: failed exception return integrity "
4323
+ "check\n");
4324
+ v7m_exception_taken(cpu, excret, true, false);
4325
+ return;
4326
+ }
4327
+ }
4328
+
4329
+ if (!ftype) {
4330
+ /* FP present and we need to handle it */
4331
+ if (!return_to_secure &&
4332
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
4333
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4334
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
4335
+ qemu_log_mask(CPU_LOG_INT,
4336
+ "...taking SecureFault on existing stackframe: "
4337
+ "Secure LSPACT set but exception return is "
4338
+ "not to secure state\n");
4339
+ v7m_exception_taken(cpu, excret, true, false);
4340
+ return;
4341
+ }
4342
+
4343
+ restore_s16_s31 = return_to_secure &&
4344
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
4345
+
4346
+ if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
4347
+ /* State in FPU is still valid, just clear LSPACT */
4348
+ env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
4349
+ } else {
4350
+ int i;
4351
+ uint32_t fpscr;
4352
+ bool cpacr_pass, nsacr_pass;
4353
+
4354
+ cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
4355
+ return_to_priv);
4356
+ nsacr_pass = return_to_secure ||
4357
+ extract32(env->v7m.nsacr, 10, 1);
4358
+
4359
+ if (!cpacr_pass) {
4360
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
4361
+ return_to_secure);
4362
+ env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
4363
+ qemu_log_mask(CPU_LOG_INT,
4364
+ "...taking UsageFault on existing "
4365
+ "stackframe: CPACR.CP10 prevents unstacking "
4366
+ "FP regs\n");
4367
+ v7m_exception_taken(cpu, excret, true, false);
4368
+ return;
4369
+ } else if (!nsacr_pass) {
4370
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
4371
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
4372
+ qemu_log_mask(CPU_LOG_INT,
4373
+ "...taking Secure UsageFault on existing "
4374
+ "stackframe: NSACR.CP10 prevents unstacking "
4375
+ "FP regs\n");
4376
+ v7m_exception_taken(cpu, excret, true, false);
4377
+ return;
4378
+ }
4379
+
4380
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
4381
+ uint32_t slo, shi;
4382
+ uint64_t dn;
4383
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
4384
+
4385
+ if (i >= 16) {
4386
+ faddr += 8; /* Skip the slot for the FPSCR */
4387
+ }
4388
+
4389
+ pop_ok = pop_ok &&
4390
+ v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
4391
+ v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
4392
+
4393
+ if (!pop_ok) {
4394
+ break;
4395
+ }
4396
+
4397
+ dn = (uint64_t)shi << 32 | slo;
4398
+ *aa32_vfp_dreg(env, i / 2) = dn;
4399
+ }
4400
+ pop_ok = pop_ok &&
4401
+ v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
4402
+ if (pop_ok) {
4403
+ vfp_set_fpscr(env, fpscr);
4404
+ }
4405
+ if (!pop_ok) {
4406
+ /*
4407
+ * These regs are 0 if security extension present;
4408
+ * otherwise merely UNKNOWN. We zero always.
4409
+ */
4410
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
4411
+ *aa32_vfp_dreg(env, i / 2) = 0;
4412
+ }
4413
+ vfp_set_fpscr(env, 0);
4414
+ }
4415
+ }
4416
+ }
4417
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
4418
+ V7M_CONTROL, FPCA, !ftype);
4419
+
4420
+ /* Commit to consuming the stack frame */
4421
+ frameptr += 0x20;
4422
+ if (!ftype) {
4423
+ frameptr += 0x48;
4424
+ if (restore_s16_s31) {
4425
+ frameptr += 0x40;
4426
+ }
4427
+ }
4428
+ /*
4429
+ * Undo stack alignment (the SPREALIGN bit indicates that the original
4430
+ * pre-exception SP was not 8-aligned and we added a padding word to
4431
+ * align it, so we undo this by ORing in the bit that increases it
4432
+ * from the current 8-aligned value to the 8-unaligned value. (Adding 4
4433
+ * would work too but a logical OR is how the pseudocode specifies it.)
4434
+ */
4435
+ if (xpsr & XPSR_SPREALIGN) {
4436
+ frameptr |= 4;
4437
+ }
4438
+ *frame_sp_p = frameptr;
4439
+ }
4440
+
4441
+ xpsr_mask = ~(XPSR_SPREALIGN | XPSR_SFPA);
4442
+ if (!arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
4443
+ xpsr_mask &= ~XPSR_GE;
4444
+ }
4445
+ /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
4446
+ xpsr_write(env, xpsr, xpsr_mask);
4447
+
4448
+ if (env->v7m.secure) {
4449
+ bool sfpa = xpsr & XPSR_SFPA;
4450
+
4451
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
4452
+ V7M_CONTROL, SFPA, sfpa);
4453
+ }
4454
+
4455
+ /*
4456
+ * The restored xPSR exception field will be zero if we're
4457
+ * resuming in Thread mode. If that doesn't match what the
4458
+ * exception return excret specified then this is a UsageFault.
4459
+ * v7M requires we make this check here; v8M did it earlier.
4460
+ */
4461
+ if (return_to_handler != arm_v7m_is_handler_mode(env)) {
4462
+ /*
4463
+ * Take an INVPC UsageFault by pushing the stack again;
4464
+ * we know we're v7M so this is never a Secure UsageFault.
4465
+ */
4466
+ bool ignore_stackfaults;
4467
+
4468
+ assert(!arm_feature(env, ARM_FEATURE_V8));
4469
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
4470
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4471
+ ignore_stackfaults = v7m_push_stack(cpu);
4472
+ qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
4473
+ "failed exception return integrity check\n");
4474
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
4475
+ return;
4476
+ }
4477
+
4478
+ /* Otherwise, we have a successful exception exit. */
4479
+ arm_clear_exclusive(env);
4480
+ qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
4481
+}
4482
+
4483
+static bool do_v7m_function_return(ARMCPU *cpu)
4484
+{
4485
+ /*
4486
+ * v8M security extensions magic function return.
4487
+ * We may either:
4488
+ * (1) throw an exception (longjump)
4489
+ * (2) return true if we successfully handled the function return
4490
+ * (3) return false if we failed a consistency check and have
4491
+ * pended a UsageFault that needs to be taken now
4492
+ *
4493
+ * At this point the magic return value is split between env->regs[15]
4494
+ * and env->thumb. We don't bother to reconstitute it because we don't
4495
+ * need it (all values are handled the same way).
4496
+ */
4497
+ CPUARMState *env = &cpu->env;
4498
+ uint32_t newpc, newpsr, newpsr_exc;
4499
+
4500
+ qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
4501
+
4502
+ {
4503
+ bool threadmode, spsel;
4504
+ TCGMemOpIdx oi;
4505
+ ARMMMUIdx mmu_idx;
4506
+ uint32_t *frame_sp_p;
4507
+ uint32_t frameptr;
4508
+
4509
+ /* Pull the return address and IPSR from the Secure stack */
4510
+ threadmode = !arm_v7m_is_handler_mode(env);
4511
+ spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
4512
+
4513
+ frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
4514
+ frameptr = *frame_sp_p;
4515
+
4516
+ /*
4517
+ * These loads may throw an exception (for MPU faults). We want to
4518
+ * do them as secure, so work out what MMU index that is.
4519
+ */
4520
+ mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
4521
+ oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
4522
+ newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
4523
+ newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
4524
+
4525
+ /* Consistency checks on new IPSR */
4526
+ newpsr_exc = newpsr & XPSR_EXCP;
4527
+ if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
4528
+ (env->v7m.exception == 1 && newpsr_exc != 0))) {
4529
+ /* Pend the fault and tell our caller to take it */
4530
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
4531
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
4532
+ env->v7m.secure);
4533
+ qemu_log_mask(CPU_LOG_INT,
4534
+ "...taking INVPC UsageFault: "
4535
+ "IPSR consistency check failed\n");
4536
+ return false;
4537
+ }
4538
+
4539
+ *frame_sp_p = frameptr + 8;
4540
+ }
4541
+
4542
+ /* This invalidates frame_sp_p */
4543
+ switch_v7m_security_state(env, true);
4544
+ env->v7m.exception = newpsr_exc;
4545
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
4546
+ if (newpsr & XPSR_SFPA) {
4547
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
4548
+ }
4549
+ xpsr_write(env, 0, XPSR_IT);
4550
+ env->thumb = newpc & 1;
4551
+ env->regs[15] = newpc & ~1;
4552
+
4553
+ qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
4554
+ return true;
4555
+}
4556
+
4557
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
4558
+ uint32_t addr, uint16_t *insn)
4559
+{
4560
+ /*
4561
+ * Load a 16-bit portion of a v7M instruction, returning true on success,
4562
+ * or false on failure (in which case we will have pended the appropriate
4563
+ * exception).
4564
+ * We need to do the instruction fetch's MPU and SAU checks
4565
+ * like this because there is no MMU index that would allow
4566
+ * doing the load with a single function call. Instead we must
4567
+ * first check that the security attributes permit the load
4568
+ * and that they don't mismatch on the two halves of the instruction,
4569
+ * and then we do the load as a secure load (ie using the security
4570
+ * attributes of the address, not the CPU, as architecturally required).
4571
+ */
4572
+ CPUState *cs = CPU(cpu);
4573
+ CPUARMState *env = &cpu->env;
4574
+ V8M_SAttributes sattrs = {};
4575
+ MemTxAttrs attrs = {};
4576
+ ARMMMUFaultInfo fi = {};
4577
+ MemTxResult txres;
4578
+ target_ulong page_size;
4579
+ hwaddr physaddr;
4580
+ int prot;
4581
+
4582
+ v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
4583
+ if (!sattrs.nsc || sattrs.ns) {
4584
+ /*
4585
+ * This must be the second half of the insn, and it straddles a
4586
+ * region boundary with the second half not being S&NSC.
4587
+ */
4588
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
4589
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4590
+ qemu_log_mask(CPU_LOG_INT,
4591
+ "...really SecureFault with SFSR.INVEP\n");
4592
+ return false;
4593
+ }
4594
+ if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
4595
+ &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
4596
+ /* the MPU lookup failed */
4597
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
4598
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
4599
+ qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
4600
+ return false;
4601
+ }
4602
+ *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
4603
+ attrs, &txres);
4604
+ if (txres != MEMTX_OK) {
4605
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
4606
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
4607
+ qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
4608
+ return false;
4609
+ }
4610
+ return true;
4611
+}
4612
+
4613
+static bool v7m_handle_execute_nsc(ARMCPU *cpu)
4614
+{
4615
+ /*
4616
+ * Check whether this attempt to execute code in a Secure & NS-Callable
4617
+ * memory region is for an SG instruction; if so, then emulate the
4618
+ * effect of the SG instruction and return true. Otherwise pend
4619
+ * the correct kind of exception and return false.
4620
+ */
4621
+ CPUARMState *env = &cpu->env;
4622
+ ARMMMUIdx mmu_idx;
4623
+ uint16_t insn;
4624
+
4625
+ /*
4626
+ * We should never get here unless get_phys_addr_pmsav8() caused
4627
+ * an exception for NS executing in S&NSC memory.
4628
+ */
4629
+ assert(!env->v7m.secure);
4630
+ assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
4631
+
4632
+ /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
4633
+ mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
4634
+
4635
+ if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
4636
+ return false;
4637
+ }
4638
+
4639
+ if (!env->thumb) {
4640
+ goto gen_invep;
4641
+ }
4642
+
4643
+ if (insn != 0xe97f) {
4644
+ /*
4645
+ * Not an SG instruction first half (we choose the IMPDEF
4646
+ * early-SG-check option).
4647
+ */
4648
+ goto gen_invep;
4649
+ }
4650
+
4651
+ if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
4652
+ return false;
4653
+ }
4654
+
4655
+ if (insn != 0xe97f) {
4656
+ /*
4657
+ * Not an SG instruction second half (yes, both halves of the SG
4658
+ * insn have the same hex value)
4659
+ */
4660
+ goto gen_invep;
4661
+ }
4662
+
4663
+ /*
4664
+ * OK, we have confirmed that we really have an SG instruction.
4665
+ * We know we're NS in S memory so don't need to repeat those checks.
4666
+ */
4667
+ qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
4668
+ ", executing it\n", env->regs[15]);
4669
+ env->regs[14] &= ~1;
4670
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
4671
+ switch_v7m_security_state(env, true);
4672
+ xpsr_write(env, 0, XPSR_IT);
4673
+ env->regs[15] += 4;
4674
+ return true;
4675
+
4676
+gen_invep:
4677
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
4678
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4679
+ qemu_log_mask(CPU_LOG_INT,
4680
+ "...really SecureFault with SFSR.INVEP\n");
4681
+ return false;
4682
+}
4683
+
4684
+void arm_v7m_cpu_do_interrupt(CPUState *cs)
4685
+{
4686
+ ARMCPU *cpu = ARM_CPU(cs);
4687
+ CPUARMState *env = &cpu->env;
4688
+ uint32_t lr;
4689
+ bool ignore_stackfaults;
4690
+
4691
+ arm_log_exception(cs->exception_index);
4692
+
4693
+ /*
4694
+ * For exceptions we just mark as pending on the NVIC, and let that
4695
+ * handle it.
4696
+ */
4697
+ switch (cs->exception_index) {
4698
+ case EXCP_UDEF:
4699
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4700
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
4701
+ break;
4702
+ case EXCP_NOCP:
4703
+ {
4704
+ /*
4705
+ * NOCP might be directed to something other than the current
4706
+ * security state if this fault is because of NSACR; we indicate
4707
+ * the target security state using exception.target_el.
4708
+ */
4709
+ int target_secstate;
4710
+
4711
+ if (env->exception.target_el == 3) {
4712
+ target_secstate = M_REG_S;
4713
+ } else {
4714
+ target_secstate = env->v7m.secure;
4715
+ }
4716
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
4717
+ env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
4718
+ break;
4719
+ }
4720
+ case EXCP_INVSTATE:
4721
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4722
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
4723
+ break;
4724
+ case EXCP_STKOF:
4725
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4726
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
4727
+ break;
4728
+ case EXCP_LSERR:
4729
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4730
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
4731
+ break;
4732
+ case EXCP_UNALIGNED:
4733
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
4734
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
4735
+ break;
4736
+ case EXCP_SWI:
4737
+ /* The PC already points to the next instruction. */
4738
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
4739
+ break;
4740
+ case EXCP_PREFETCH_ABORT:
4741
+ case EXCP_DATA_ABORT:
4742
+ /*
4743
+ * Note that for M profile we don't have a guest facing FSR, but
4744
+ * the env->exception.fsr will be populated by the code that
4745
+ * raises the fault, in the A profile short-descriptor format.
4746
+ */
4747
+ switch (env->exception.fsr & 0xf) {
4748
+ case M_FAKE_FSR_NSC_EXEC:
4749
+ /*
4750
+ * Exception generated when we try to execute code at an address
4751
+ * which is marked as Secure & Non-Secure Callable and the CPU
4752
+ * is in the Non-Secure state. The only instruction which can
4753
+ * be executed like this is SG (and that only if both halves of
4754
+ * the SG instruction have the same security attributes.)
4755
+ * Everything else must generate an INVEP SecureFault, so we
4756
+ * emulate the SG instruction here.
4757
+ */
4758
+ if (v7m_handle_execute_nsc(cpu)) {
4759
+ return;
4760
+ }
363
+ }
4761
+ break;
364
+ break;
4762
+ case M_FAKE_FSR_SFAULT:
365
+
4763
+ /*
366
+ case NPCM7XX_CAPTURE_UNDERFLOW:
4764
+ * Various flavours of SecureFault for attempts to execute or
367
+ /* Underflow - TCPND */
4765
+ * access data in the wrong security state.
368
+ s->regs[R_NPCM7XX_MFT_ICTRL] |= NPCM7XX_MFT_ICTRL_TCPND;
4766
+ */
369
+ if (s->regs[R_NPCM7XX_MFT_IEN] & NPCM7XX_MFT_IEN_TCIEN) {
4767
+ switch (cs->exception_index) {
370
+ irq_level = 1;
4768
+ case EXCP_PREFETCH_ABORT:
4769
+ if (env->v7m.secure) {
4770
+ env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
4771
+ qemu_log_mask(CPU_LOG_INT,
4772
+ "...really SecureFault with SFSR.INVTRAN\n");
4773
+ } else {
4774
+ env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
4775
+ qemu_log_mask(CPU_LOG_INT,
4776
+ "...really SecureFault with SFSR.INVEP\n");
4777
+ }
4778
+ break;
4779
+ case EXCP_DATA_ABORT:
4780
+ /* This must be an NS access to S memory */
4781
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
4782
+ qemu_log_mask(CPU_LOG_INT,
4783
+ "...really SecureFault with SFSR.AUVIOL\n");
4784
+ break;
4785
+ }
371
+ }
4786
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
4787
+ break;
372
+ break;
4788
+ case 0x8: /* External Abort */
373
+
4789
+ switch (cs->exception_index) {
374
+ default:
4790
+ case EXCP_PREFETCH_ABORT:
375
+ g_assert_not_reached();
4791
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
376
+ }
4792
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
377
+ }
4793
+ break;
378
+
4794
+ case EXCP_DATA_ABORT:
379
+ /* Capture input B. */
4795
+ env->v7m.cfsr[M_REG_NS] |=
380
+ if (s->regs[R_NPCM7XX_MFT_MCTRL] & NPCM7XX_MFT_MCTRL_TBEN &&
4796
+ (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
381
+ s->regs[R_NPCM7XX_MFT_CKC] & NPCM7XX_MFT_CKC_C2CSEL) {
4797
+ env->v7m.bfar = env->exception.vaddress;
382
+ sel = s->regs[R_NPCM7XX_MFT_INBSEL] & NPCM7XX_MFT_INBSEL_SELB;
4798
+ qemu_log_mask(CPU_LOG_INT,
383
+ cpcfg = NPCM7XX_MFT_CPCFG_GET_B(s->regs[R_NPCM7XX_MFT_CPCFG]);
4799
+ "...with CFSR.PRECISERR and BFAR 0x%x\n",
384
+ state = npcm7xx_mft_compute_cnt(s->clock_2,
4800
+ env->v7m.bfar);
385
+ sel ? s->max_rpm[3] : s->max_rpm[1],
4801
+ break;
386
+ sel ? s->duty[3] : s->duty[1],
387
+ s->regs[R_NPCM7XX_MFT_CPB],
388
+ cpcfg,
389
+ &s->regs[R_NPCM7XX_MFT_CNT2]);
390
+ switch (state) {
391
+ case NPCM7XX_CAPTURE_SUCCEED:
392
+ /* Interrupt on input capture on TBn transition - TBPND */
393
+ s->regs[R_NPCM7XX_MFT_CRB] = s->regs[R_NPCM7XX_MFT_CNT2];
394
+ s->regs[R_NPCM7XX_MFT_ICTRL] |= NPCM7XX_MFT_ICTRL_TBPND;
395
+ if (s->regs[R_NPCM7XX_MFT_IEN] & NPCM7XX_MFT_IEN_TBIEN) {
396
+ irq_level = 1;
4802
+ }
397
+ }
4803
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
4804
+ break;
398
+ break;
399
+
400
+ case NPCM7XX_CAPTURE_COMPARE_HIT:
401
+ /* Compare Hit - TFPND */
402
+ s->regs[R_NPCM7XX_MFT_ICTRL] |= NPCM7XX_MFT_ICTRL_TFPND;
403
+ if (s->regs[R_NPCM7XX_MFT_IEN] & NPCM7XX_MFT_IEN_TFIEN) {
404
+ irq_level = 1;
405
+ }
406
+ break;
407
+
408
+ case NPCM7XX_CAPTURE_UNDERFLOW:
409
+ /* Underflow - TDPND */
410
+ s->regs[R_NPCM7XX_MFT_ICTRL] |= NPCM7XX_MFT_ICTRL_TDPND;
411
+ if (s->regs[R_NPCM7XX_MFT_IEN] & NPCM7XX_MFT_IEN_TDIEN) {
412
+ irq_level = 1;
413
+ }
414
+ break;
415
+
4805
+ default:
416
+ default:
4806
+ /*
417
+ g_assert_not_reached();
4807
+ * All other FSR values are either MPU faults or "can't happen
4808
+ * for M profile" cases.
4809
+ */
4810
+ switch (cs->exception_index) {
4811
+ case EXCP_PREFETCH_ABORT:
4812
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
4813
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
4814
+ break;
4815
+ case EXCP_DATA_ABORT:
4816
+ env->v7m.cfsr[env->v7m.secure] |=
4817
+ (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
4818
+ env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
4819
+ qemu_log_mask(CPU_LOG_INT,
4820
+ "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
4821
+ env->v7m.mmfar[env->v7m.secure]);
4822
+ break;
4823
+ }
4824
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
4825
+ env->v7m.secure);
4826
+ break;
4827
+ }
418
+ }
419
+ }
420
+
421
+ trace_npcm7xx_mft_capture(DEVICE(s)->canonical_path, irq_level);
422
+ qemu_set_irq(s->irq, irq_level);
423
+}
424
+
425
+/* Update clock for counters. */
426
+static void npcm7xx_mft_update_clock(void *opaque, ClockEvent event)
427
+{
428
+ NPCM7xxMFTState *s = NPCM7XX_MFT(opaque);
429
+ uint64_t prescaled_clock_period;
430
+
431
+ prescaled_clock_period = clock_get(s->clock_in) *
432
+ (s->regs[R_NPCM7XX_MFT_PRSC] + 1ULL);
433
+ trace_npcm7xx_mft_update_clock(s->clock_in->canonical_path,
434
+ s->regs[R_NPCM7XX_MFT_CKC],
435
+ clock_get(s->clock_in),
436
+ prescaled_clock_period);
437
+ /* Update clock 1 */
438
+ if (s->regs[R_NPCM7XX_MFT_CKC] & NPCM7XX_MFT_CKC_C1CSEL) {
439
+ /* Clock is prescaled. */
440
+ clock_update(s->clock_1, prescaled_clock_period);
441
+ } else {
442
+ /* Clock stopped. */
443
+ clock_update(s->clock_1, 0);
444
+ }
445
+ /* Update clock 2 */
446
+ if (s->regs[R_NPCM7XX_MFT_CKC] & NPCM7XX_MFT_CKC_C2CSEL) {
447
+ /* Clock is prescaled. */
448
+ clock_update(s->clock_2, prescaled_clock_period);
449
+ } else {
450
+ /* Clock stopped. */
451
+ clock_update(s->clock_2, 0);
452
+ }
453
+
454
+ npcm7xx_mft_capture(s);
455
+}
456
+
457
+static uint64_t npcm7xx_mft_read(void *opaque, hwaddr offset, unsigned size)
458
+{
459
+ NPCM7xxMFTState *s = NPCM7XX_MFT(opaque);
460
+ uint16_t value = 0;
461
+
462
+ switch (offset) {
463
+ case A_NPCM7XX_MFT_ICLR:
464
+ qemu_log_mask(LOG_GUEST_ERROR,
465
+ "%s: register @ 0x%04" HWADDR_PRIx " is write-only\n",
466
+ __func__, offset);
4828
+ break;
467
+ break;
4829
+ case EXCP_BKPT:
468
+
4830
+ if (semihosting_enabled()) {
469
+ default:
4831
+ int nr;
470
+ value = s->regs[offset / 2];
4832
+ nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
471
+ }
4833
+ if (nr == 0xab) {
472
+
4834
+ env->regs[15] += 2;
473
+ trace_npcm7xx_mft_read(DEVICE(s)->canonical_path, offset, value);
4835
+ qemu_log_mask(CPU_LOG_INT,
474
+ return value;
4836
+ "...handling as semihosting call 0x%x\n",
475
+}
4837
+ env->regs[0]);
476
+
4838
+ env->regs[0] = do_arm_semihosting(env);
477
+static void npcm7xx_mft_write(void *opaque, hwaddr offset,
4839
+ return;
478
+ uint64_t v, unsigned size)
4840
+ }
479
+{
4841
+ }
480
+ NPCM7xxMFTState *s = NPCM7XX_MFT(opaque);
4842
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
481
+
482
+ trace_npcm7xx_mft_write(DEVICE(s)->canonical_path, offset, v);
483
+ switch (offset) {
484
+ case A_NPCM7XX_MFT_ICLR:
485
+ npcm7xx_mft_clear_interrupt(s, v);
4843
+ break;
486
+ break;
4844
+ case EXCP_IRQ:
487
+
488
+ case A_NPCM7XX_MFT_CKC:
489
+ case A_NPCM7XX_MFT_PRSC:
490
+ s->regs[offset / 2] = v;
491
+ npcm7xx_mft_update_clock(s, ClockUpdate);
4845
+ break;
492
+ break;
4846
+ case EXCP_EXCEPTION_EXIT:
493
+
4847
+ if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
494
+ default:
4848
+            /* Must be v8M security extension function return */
+            assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
+            assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
+            if (do_v7m_function_return(cpu)) {
+                return;
+            }
+        } else {
+            do_v7m_exception_exit(cpu);
+            return;
+        }
+        break;
+    case EXCP_LAZYFP:
+        /*
+         * We already pended the specific exception in the NVIC in the
+         * v7m_preserve_fp_state() helper function.
+         */
+        break;
+    default:
+        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
+        return; /* Never happens. Keep compiler happy. */
+    }
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        lr = R_V7M_EXCRET_RES1_MASK |
+            R_V7M_EXCRET_DCRS_MASK;
+        /*
+         * The S bit indicates whether we should return to Secure
+         * or NonSecure (ie our current state).
+         * The ES bit indicates whether we're taking this exception
+         * to Secure or NonSecure (ie our target state). We set it
+         * later, in v7m_exception_taken().
+         * The SPSEL bit is also set in v7m_exception_taken() for v8M.
+         * This corresponds to the ARM ARM pseudocode for v8M setting
+         * some LR bits in PushStack() and some in ExceptionTaken();
+         * the distinction matters for the tailchain cases where we
+         * can take an exception without pushing the stack.
+         */
+        if (env->v7m.secure) {
+            lr |= R_V7M_EXCRET_S_MASK;
+        }
+        if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
+            lr |= R_V7M_EXCRET_FTYPE_MASK;
+        }
+    } else {
+        lr = R_V7M_EXCRET_RES1_MASK |
+            R_V7M_EXCRET_S_MASK |
+            R_V7M_EXCRET_DCRS_MASK |
+            R_V7M_EXCRET_FTYPE_MASK |
+            R_V7M_EXCRET_ES_MASK;
+        if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
+    }
+    if (!arm_v7m_is_handler_mode(env)) {
+        lr |= R_V7M_EXCRET_MODE_MASK;
+    }
+
+    ignore_stackfaults = v7m_push_stack(cpu);
+    v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
+}
+
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
+{
+    uint32_t mask;
+    unsigned el = arm_current_el(env);
+
+    /* First handle registers which unprivileged can read */
+
+    switch (reg) {
+    case 0 ... 7: /* xPSR sub-fields */
+        mask = 0;
+        if ((reg & 1) && el) {
+            mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
+        }
+        if (!(reg & 4)) {
+            mask |= XPSR_NZCV | XPSR_Q; /* APSR */
+            if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
+                mask |= XPSR_GE;
+            }
+        }
+        /* EPSR reads as zero */
+        return xpsr_read(env) & mask;
+        break;
+    case 20: /* CONTROL */
+    {
+        uint32_t value = env->v7m.control[env->v7m.secure];
+        if (!env->v7m.secure) {
+            /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
+            value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
+        }
+        return value;
+    }
+    case 0x94: /* CONTROL_NS */
+        /*
+         * We have to handle this here because unprivileged Secure code
+         * can read the NS CONTROL register.
+         */
+        if (!env->v7m.secure) {
+            return 0;
+        }
+        return env->v7m.control[M_REG_NS] |
+            (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
+    }
+
+    if (el == 0) {
+        return 0; /* unprivileged reads others as zero */
+    }
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        switch (reg) {
+        case 0x88: /* MSP_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.other_ss_msp;
+        case 0x89: /* PSP_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.other_ss_psp;
+        case 0x8a: /* MSPLIM_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.msplim[M_REG_NS];
+        case 0x8b: /* PSPLIM_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.psplim[M_REG_NS];
+        case 0x90: /* PRIMASK_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.primask[M_REG_NS];
+        case 0x91: /* BASEPRI_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.basepri[M_REG_NS];
+        case 0x93: /* FAULTMASK_NS */
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            return env->v7m.faultmask[M_REG_NS];
+        case 0x98: /* SP_NS */
+        {
+            /*
+             * This gives the non-secure SP selected based on whether we're
+             * currently in handler mode or not, using the NS CONTROL.SPSEL.
+             */
+            bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
+
+            if (!env->v7m.secure) {
+                return 0;
+            }
+            if (!arm_v7m_is_handler_mode(env) && spsel) {
+                return env->v7m.other_ss_psp;
+            } else {
+                return env->v7m.other_ss_msp;
+            }
+        }
+        default:
+            break;
+        }
+    }
+
+    switch (reg) {
+    case 8: /* MSP */
+        return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
+    case 9: /* PSP */
+        return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
+    case 10: /* MSPLIM */
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            goto bad_reg;
+        }
+        return env->v7m.msplim[env->v7m.secure];
+    case 11: /* PSPLIM */
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            goto bad_reg;
+        }
+        return env->v7m.psplim[env->v7m.secure];
+    case 16: /* PRIMASK */
+        return env->v7m.primask[env->v7m.secure];
+    case 17: /* BASEPRI */
+    case 18: /* BASEPRI_MAX */
+        return env->v7m.basepri[env->v7m.secure];
+    case 19: /* FAULTMASK */
+        return env->v7m.faultmask[env->v7m.secure];
+    default:
+    bad_reg:
+        qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
+                      " register %d\n", reg);
+        return 0;
+    }
+}
+
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
+{
+    /*
+     * We're passed bits [11..0] of the instruction; extract
+     * SYSm and the mask bits.
+     * Invalid combinations of SYSm and mask are UNPREDICTABLE;
+     * we choose to treat them as if the mask bits were valid.
+     * NB that the pseudocode 'mask' variable is bits [11..10],
+     * whereas ours is [11..8].
+     */
+    uint32_t mask = extract32(maskreg, 8, 4);
+    uint32_t reg = extract32(maskreg, 0, 8);
+    int cur_el = arm_current_el(env);
+
+    if (cur_el == 0 && reg > 7 && reg != 20) {
+        /*
+         * only xPSR sub-fields and CONTROL.SFPA may be written by
+         * unprivileged code
+         */
+        return;
+    }
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        switch (reg) {
+        case 0x88: /* MSP_NS */
+            if (!env->v7m.secure) {
+                return;
+            }
+            env->v7m.other_ss_msp = val;
+            return;
+        case 0x89: /* PSP_NS */
+            if (!env->v7m.secure) {
+                return;
+            }
+            env->v7m.other_ss_psp = val;
+            return;
+        case 0x8a: /* MSPLIM_NS */
+            if (!env->v7m.secure) {
+                return;
+            }
+            env->v7m.msplim[M_REG_NS] = val & ~7;
+            return;
+        case 0x8b: /* PSPLIM_NS */
+            if (!env->v7m.secure) {
+                return;
+            }
+            env->v7m.psplim[M_REG_NS] = val & ~7;
+            return;
+        case 0x90: /* PRIMASK_NS */
+            if (!env->v7m.secure) {
+                return;
+            }
+            env->v7m.primask[M_REG_NS] = val & 1;
+            return;
+        case 0x91: /* BASEPRI_NS */
+            if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
+                return;
+            }
+            env->v7m.basepri[M_REG_NS] = val & 0xff;
+            return;
+        case 0x93: /* FAULTMASK_NS */
+            if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
+                return;
+            }
+            env->v7m.faultmask[M_REG_NS] = val & 1;
+            return;
+        case 0x94: /* CONTROL_NS */
+            if (!env->v7m.secure) {
+                return;
+            }
+            write_v7m_control_spsel_for_secstate(env,
+                                                 val & R_V7M_CONTROL_SPSEL_MASK,
+                                                 M_REG_NS);
+            if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+                env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+                env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            }
+            /*
+             * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
+             * RES0 if the FPU is not present, and is stored in the S bank
+             */
+            if (arm_feature(env, ARM_FEATURE_VFP) &&
+                extract32(env->v7m.nsacr, 10, 1)) {
+                env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
+                env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
+            }
+            return;
+        case 0x98: /* SP_NS */
+        {
+            /*
+             * This gives the non-secure SP selected based on whether we're
+             * currently in handler mode or not, using the NS CONTROL.SPSEL.
+             */
+            bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
+            bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
+            uint32_t limit;
+
+            if (!env->v7m.secure) {
+                return;
+            }
+
+            limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
+
+            if (val < limit) {
+                CPUState *cs = env_cpu(env);
+
+                cpu_restore_state(cs, GETPC(), true);
+                raise_exception(env, EXCP_STKOF, 0, 1);
+            }
+
+            if (is_psp) {
+                env->v7m.other_ss_psp = val;
+            } else {
+                env->v7m.other_ss_msp = val;
+            }
+            return;
+        }
+        default:
+            break;
+        }
+    }
+
+    switch (reg) {
+    case 0 ... 7: /* xPSR sub-fields */
+        /* only APSR is actually writable */
+        if (!(reg & 4)) {
+            uint32_t apsrmask = 0;
+
+            if (mask & 8) {
+                apsrmask |= XPSR_NZCV | XPSR_Q;
+            }
+            if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
+                apsrmask |= XPSR_GE;
+            }
+            xpsr_write(env, val, apsrmask);
+        }
+        break;
+    case 8: /* MSP */
+        if (v7m_using_psp(env)) {
+            env->v7m.other_sp = val;
+        } else {
+            env->regs[13] = val;
+        }
+        break;
+    case 9: /* PSP */
+        if (v7m_using_psp(env)) {
+            env->regs[13] = val;
+        } else {
+            env->v7m.other_sp = val;
+        }
+        break;
+    case 10: /* MSPLIM */
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            goto bad_reg;
+        }
+        env->v7m.msplim[env->v7m.secure] = val & ~7;
+        break;
+    case 11: /* PSPLIM */
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            goto bad_reg;
+        }
+        env->v7m.psplim[env->v7m.secure] = val & ~7;
+        break;
+    case 16: /* PRIMASK */
+        env->v7m.primask[env->v7m.secure] = val & 1;
+        break;
+    case 17: /* BASEPRI */
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            goto bad_reg;
+        }
+        env->v7m.basepri[env->v7m.secure] = val & 0xff;
+        break;
+    case 18: /* BASEPRI_MAX */
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            goto bad_reg;
+        }
+        val &= 0xff;
+        if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
+                         || env->v7m.basepri[env->v7m.secure] == 0)) {
+            env->v7m.basepri[env->v7m.secure] = val;
+        }
+        break;
+    case 19: /* FAULTMASK */
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            goto bad_reg;
+        }
+        env->v7m.faultmask[env->v7m.secure] = val & 1;
+        break;
+    case 20: /* CONTROL */
+        /*
+         * Writing to the SPSEL bit only has an effect if we are in
+         * thread mode; other bits can be updated by any privileged code.
+         * write_v7m_control_spsel() deals with updating the SPSEL bit in
+         * env->v7m.control, so we only need update the others.
+         * For v7M, we must just ignore explicit writes to SPSEL in handler
+         * mode; for v8M the write is permitted but will have no effect.
+         * All these bits are writes-ignored from non-privileged code,
+         * except for SFPA.
+         */
+        if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
+                           !arm_v7m_is_handler_mode(env))) {
+            write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
+        }
+        if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+            env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        }
+        if (arm_feature(env, ARM_FEATURE_VFP)) {
+            /*
+             * SFPA is RAZ/WI from NS or if no FPU.
+             * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
+             * Both are stored in the S bank.
+             */
+            if (env->v7m.secure) {
+                env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+                env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
+            }
+            if (cur_el > 0 &&
+                (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
+                 extract32(env->v7m.nsacr, 10, 1))) {
+                env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
+                env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
+            }
+        }
+        break;
+    default:
+    bad_reg:
+        qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
+                      " register %d\n", reg);
+        return;
+    }
+}
+
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
+{
+    /* Implement the TT instruction. op is bits [7:6] of the insn. */
+    bool forceunpriv = op & 1;
+    bool alt = op & 2;
+    V8M_SAttributes sattrs = {};
+    uint32_t tt_resp;
+    bool r, rw, nsr, nsrw, mrvalid;
+    int prot;
+    ARMMMUFaultInfo fi = {};
+    MemTxAttrs attrs = {};
+    hwaddr phys_addr;
+    ARMMMUIdx mmu_idx;
+    uint32_t mregion;
+    bool targetpriv;
+    bool targetsec = env->v7m.secure;
+    bool is_subpage;
+
+    /*
+     * Work out what the security state and privilege level we're
+     * interested in is...
+     */
+    if (alt) {
+        targetsec = !targetsec;
+    }
+
+    if (forceunpriv) {
+        targetpriv = false;
+    } else {
+        targetpriv = arm_v7m_is_handler_mode(env) ||
+            !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
+    }
+
+    /* ...and then figure out which MMU index this is */
+    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
+
+    /*
+     * We know that the MPU and SAU don't care about the access type
+     * for our purposes beyond that we don't want to claim to be
+     * an insn fetch, so we arbitrarily call this a read.
+     */
+
+    /*
+     * MPU region info only available for privileged or if
+     * inspecting the other MPU state.
+     */
+    if (arm_current_el(env) != 0 || alt) {
+        /* We can ignore the return value as prot is always set */
+        pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
+                          &phys_addr, &attrs, &prot, &is_subpage,
+                          &fi, &mregion);
+        if (mregion == -1) {
+            mrvalid = false;
+            mregion = 0;
+        } else {
+            mrvalid = true;
+        }
+        r = prot & PAGE_READ;
+        rw = prot & PAGE_WRITE;
+    } else {
+        r = false;
+        rw = false;
+        mrvalid = false;
+        mregion = 0;
+    }
+
+    if (env->v7m.secure) {
+        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
+        nsr = sattrs.ns && r;
+        nsrw = sattrs.ns && rw;
+    } else {
+        sattrs.ns = true;
+        nsr = false;
+        nsrw = false;
+    }
+
+    tt_resp = (sattrs.iregion << 24) |
+        (sattrs.irvalid << 23) |
+        ((!sattrs.ns) << 22) |
+        (nsrw << 21) |
+        (nsr << 20) |
+        (rw << 19) |
+        (r << 18) |
+        (sattrs.srvalid << 17) |
+        (mrvalid << 16) |
+        (sattrs.sregion << 8) |
+        mregion;
+
+    return tt_resp;
+}
+
+#endif /* !CONFIG_USER_ONLY */
+
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
+                              bool secstate, bool priv, bool negpri)
+{
+    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
+
+    if (priv) {
+        mmu_idx |= ARM_MMU_IDX_M_PRIV;
+    }
+
+    if (negpri) {
+        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
+    }
+
+    if (secstate) {
+        mmu_idx |= ARM_MMU_IDX_M_S;
+    }
+
+    return mmu_idx;
+}
+
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
+                                                bool secstate, bool priv)
+{
+    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
+
+    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
+}
+
+/* Return the MMU index for a v7M CPU in the specified security state */
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
+{
+    bool priv = arm_current_el(env) != 0;
+
+    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
+}
--
2.20.1

+        s->regs[offset / 2] = v;
+        npcm7xx_mft_capture(s);
+        break;
+    }
+}
+
+static bool npcm7xx_mft_check_mem_op(void *opaque, hwaddr offset,
+                                     unsigned size, bool is_write,
+                                     MemTxAttrs attrs)
+{
+    switch (offset) {
+    /* 16-bit registers. Must be accessed with 16-bit read/write.*/
+    case A_NPCM7XX_MFT_CNT1:
+    case A_NPCM7XX_MFT_CRA:
+    case A_NPCM7XX_MFT_CRB:
+    case A_NPCM7XX_MFT_CNT2:
+    case A_NPCM7XX_MFT_CPA:
+    case A_NPCM7XX_MFT_CPB:
+        return size == 2;
+
+    /* 8-bit registers. Must be accessed with 8-bit read/write.*/
+    case A_NPCM7XX_MFT_PRSC:
+    case A_NPCM7XX_MFT_CKC:
+    case A_NPCM7XX_MFT_MCTRL:
+    case A_NPCM7XX_MFT_ICTRL:
+    case A_NPCM7XX_MFT_ICLR:
+    case A_NPCM7XX_MFT_IEN:
+    case A_NPCM7XX_MFT_CPCFG:
+    case A_NPCM7XX_MFT_INASEL:
+    case A_NPCM7XX_MFT_INBSEL:
+        return size == 1;
+
+    default:
+        /* Invalid registers. */
+        return false;
+    }
+}
+
+static void npcm7xx_mft_get_max_rpm(Object *obj, Visitor *v, const char *name,
+                                    void *opaque, Error **errp)
+{
+    visit_type_uint32(v, name, (uint32_t *)opaque, errp);
+}
+
+static void npcm7xx_mft_set_max_rpm(Object *obj, Visitor *v, const char *name,
+                                    void *opaque, Error **errp)
+{
+    NPCM7xxMFTState *s = NPCM7XX_MFT(obj);
+    uint32_t *max_rpm = opaque;
+    uint32_t value;
+
+    if (!visit_type_uint32(v, name, &value, errp)) {
+        return;
+    }
+
+    *max_rpm = value;
+    npcm7xx_mft_capture(s);
+}
+
+static void npcm7xx_mft_duty_handler(void *opaque, int n, int value)
+{
+    NPCM7xxMFTState *s = NPCM7XX_MFT(opaque);
+
+    trace_npcm7xx_mft_set_duty(DEVICE(s)->canonical_path, n, value);
+    s->duty[n] = value;
+    npcm7xx_mft_capture(s);
+}
+
+static const struct MemoryRegionOps npcm7xx_mft_ops = {
+    .read = npcm7xx_mft_read,
+    .write = npcm7xx_mft_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 2,
+        .unaligned = false,
+        .accepts = npcm7xx_mft_check_mem_op,
+    },
+};
+
+static void npcm7xx_mft_enter_reset(Object *obj, ResetType type)
+{
+    NPCM7xxMFTState *s = NPCM7XX_MFT(obj);
+
+    npcm7xx_mft_reset(s);
+}
+
+static void npcm7xx_mft_hold_reset(Object *obj)
+{
+    NPCM7xxMFTState *s = NPCM7XX_MFT(obj);
+
+    qemu_irq_lower(s->irq);
+}
+
+static void npcm7xx_mft_init(Object *obj)
+{
+    NPCM7xxMFTState *s = NPCM7XX_MFT(obj);
+    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
+    DeviceState *dev = DEVICE(obj);
+
+    memory_region_init_io(&s->iomem, obj, &npcm7xx_mft_ops, s,
+                          TYPE_NPCM7XX_MFT, 4 * KiB);
+    sysbus_init_mmio(sbd, &s->iomem);
+    sysbus_init_irq(sbd, &s->irq);
+    s->clock_in = qdev_init_clock_in(dev, "clock-in", npcm7xx_mft_update_clock,
+                                     s, ClockUpdate);
+    s->clock_1 = qdev_init_clock_out(dev, "clock1");
+    s->clock_2 = qdev_init_clock_out(dev, "clock2");
+
+    for (int i = 0; i < NPCM7XX_PWM_PER_MODULE; ++i) {
+        object_property_add(obj, "max_rpm[*]", "uint32",
+                            npcm7xx_mft_get_max_rpm,
+                            npcm7xx_mft_set_max_rpm,
+                            NULL, &s->max_rpm[i]);
+    }
+    qdev_init_gpio_in_named(dev, npcm7xx_mft_duty_handler, "duty",
+                            NPCM7XX_MFT_FANIN_COUNT);
+}
+
+static const VMStateDescription vmstate_npcm7xx_mft = {
+    .name = "npcm7xx-mft-module",
+    .version_id = 0,
+    .minimum_version_id = 0,
+    .fields = (VMStateField[]) {
+        VMSTATE_CLOCK(clock_in, NPCM7xxMFTState),
+        VMSTATE_CLOCK(clock_1, NPCM7xxMFTState),
+        VMSTATE_CLOCK(clock_2, NPCM7xxMFTState),
+        VMSTATE_UINT16_ARRAY(regs, NPCM7xxMFTState, NPCM7XX_MFT_NR_REGS),
+        VMSTATE_UINT32_ARRAY(max_rpm, NPCM7xxMFTState, NPCM7XX_MFT_FANIN_COUNT),
+        VMSTATE_UINT32_ARRAY(duty, NPCM7xxMFTState, NPCM7XX_MFT_FANIN_COUNT),
+        VMSTATE_END_OF_LIST(),
+    },
+};
+
+static void npcm7xx_mft_class_init(ObjectClass *klass, void *data)
+{
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->desc = "NPCM7xx MFT Controller";
+    dc->vmsd = &vmstate_npcm7xx_mft;
+    rc->phases.enter = npcm7xx_mft_enter_reset;
+    rc->phases.hold = npcm7xx_mft_hold_reset;
+}
+
+static const TypeInfo npcm7xx_mft_info = {
+    .name = TYPE_NPCM7XX_MFT,
+    .parent = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(NPCM7xxMFTState),
+    .class_init = npcm7xx_mft_class_init,
+    .instance_init = npcm7xx_mft_init,
+};
+
+static void npcm7xx_mft_register_type(void)
+{
+    type_register_static(&npcm7xx_mft_info);
+}
+type_init(npcm7xx_mft_register_type);
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/meson.build
+++ b/hw/misc/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_MAINSTONE', if_true: files('mst_fpga.c'))
 softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files(
   'npcm7xx_clk.c',
   'npcm7xx_gcr.c',
+  'npcm7xx_mft.c',
   'npcm7xx_pwm.c',
   'npcm7xx_rng.c',
 ))
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/trace-events
+++ b/hw/misc/trace-events
@@ -XXX,XX +XXX,XX @@ npcm7xx_clk_write(uint64_t offset, uint32_t value) "offset: 0x%04" PRIx64 " valu
 npcm7xx_gcr_read(uint64_t offset, uint32_t value) " offset: 0x%04" PRIx64 " value: 0x%08" PRIx32
 npcm7xx_gcr_write(uint64_t offset, uint32_t value) "offset: 0x%04" PRIx64 " value: 0x%08" PRIx32
 
+# npcm7xx_mft.c
+npcm7xx_mft_read(const char *name, uint64_t offset, uint16_t value) "%s: offset: 0x%04" PRIx64 " value: 0x%04" PRIx16
+npcm7xx_mft_write(const char *name, uint64_t offset, uint16_t value) "%s: offset: 0x%04" PRIx64 " value: 0x%04" PRIx16
+npcm7xx_mft_rpm(const char *clock, uint32_t clock_hz, int state, int32_t cnt, uint32_t rpm, uint32_t duty) " fan clk: %s clock_hz: %" PRIu32 ", state: %d, cnt: %" PRIi32 ", rpm: %" PRIu32 ", duty: %" PRIu32
+npcm7xx_mft_capture(const char *name, int irq_level) "%s: level: %d"
+npcm7xx_mft_update_clock(const char *name, uint16_t sel, uint64_t clock_period, uint64_t prescaled_clock_period) "%s: sel: 0x%02" PRIx16 ", period: %" PRIu64 ", prescaled: %" PRIu64
+npcm7xx_mft_set_duty(const char *name, int n, int value) "%s[%d]: %d"
+
 # npcm7xx_rng.c
 npcm7xx_rng_read(uint64_t offset, uint64_t value, unsigned size) "offset: 0x%04" PRIx64 " value: 0x%02" PRIx64 " size: %u"
 npcm7xx_rng_write(uint64_t offset, uint64_t value, unsigned size) "offset: 0x%04" PRIx64 " value: 0x%02" PRIx64 " size: %u"
--
2.20.1
From: Hao Wu <wuhaotsh@google.com>

This patch adds the recently implemented MFT device to the NPCM7XX
SoC file.

Reviewed-by: Doug Evans <dje@google.com>
Reviewed-by: Tyrone Ting <kfting@nuvoton.com>
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Message-id: 20210311180855.149764-4-wuhaotsh@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst |  2 +-
 include/hw/arm/npcm7xx.h    |  2 ++
 hw/arm/npcm7xx.c            | 45 ++++++++++++++++++++++++++++++-------
 3 files changed, 40 insertions(+), 9 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Supported devices
  * Pulse Width Modulation (PWM)
  * SMBus controller (SMBF)
  * Ethernet controller (EMC)
+ * Tachometer
 
 Missing devices
 ---------------
@@ -XXX,XX +XXX,XX @@ Missing devices
  * Peripheral SPI controller (PSPI)
  * SD/MMC host
  * PECI interface
- * Tachometer
  * PCI and PCIe root complex and bridges
  * VDM and MCTP support
  * Serial I/O expansion
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/npcm7xx.h
+++ b/include/hw/arm/npcm7xx.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/mem/npcm7xx_mc.h"
 #include "hw/misc/npcm7xx_clk.h"
 #include "hw/misc/npcm7xx_gcr.h"
+#include "hw/misc/npcm7xx_mft.h"
 #include "hw/misc/npcm7xx_pwm.h"
 #include "hw/misc/npcm7xx_rng.h"
 #include "hw/net/npcm7xx_emc.h"
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxState {
     NPCM7xxTimerCtrlState tim[3];
     NPCM7xxADCState adc;
     NPCM7xxPWMState pwm[2];
+    NPCM7xxMFTState mft[8];
     NPCM7xxOTPState key_storage;
     NPCM7xxOTPState fuse_array;
     NPCM7xxMCState mc;
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
     NPCM7XX_SMBUS15_IRQ,
     NPCM7XX_PWM0_IRQ = 93, /* PWM module 0 */
     NPCM7XX_PWM1_IRQ, /* PWM module 1 */
+    NPCM7XX_MFT0_IRQ = 96, /* MFT module 0 */
+    NPCM7XX_MFT1_IRQ, /* MFT module 1 */
+    NPCM7XX_MFT2_IRQ, /* MFT module 2 */
+    NPCM7XX_MFT3_IRQ, /* MFT module 3 */
+    NPCM7XX_MFT4_IRQ, /* MFT module 4 */
+    NPCM7XX_MFT5_IRQ, /* MFT module 5 */
+    NPCM7XX_MFT6_IRQ, /* MFT module 6 */
+    NPCM7XX_MFT7_IRQ, /* MFT module 7 */
     NPCM7XX_EMC2RX_IRQ = 114,
     NPCM7XX_EMC2TX_IRQ,
     NPCM7XX_GPIO0_IRQ = 116,
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_pwm_addr[] = {
     0xf0104000,
 };
 
+/* Register base address for each MFT Module */
+static const hwaddr npcm7xx_mft_addr[] = {
+    0xf0180000,
+    0xf0181000,
+    0xf0182000,
+    0xf0183000,
+    0xf0184000,
+    0xf0185000,
+    0xf0186000,
+    0xf0187000,
+};
+
 /* Direct memory-mapped access to each SMBus Module. */
 static const hwaddr npcm7xx_smbus_addr[] = {
     0xf0080000,
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
         object_initialize_child(obj, "pwm[*]", &s->pwm[i], TYPE_NPCM7XX_PWM);
     }
 
+    for (i = 0; i < ARRAY_SIZE(s->mft); i++) {
+        object_initialize_child(obj, "mft[*]", &s->mft[i], TYPE_NPCM7XX_MFT);
+    }
+
     for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
         object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
     }
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
         sysbus_connect_irq(sbd, i, npcm7xx_irq(s, NPCM7XX_PWM0_IRQ + i));
     }
 
+    /* MFT Modules. Cannot fail. */
+    QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_mft_addr) != ARRAY_SIZE(s->mft));
+    for (i = 0; i < ARRAY_SIZE(s->mft); i++) {
+        SysBusDevice *sbd = SYS_BUS_DEVICE(&s->mft[i]);
+
+        qdev_connect_clock_in(DEVICE(&s->mft[i]), "clock-in",
+                              qdev_get_clock_out(DEVICE(&s->clk),
+                                                 "apb4-clock"));
+        sysbus_realize(sbd, &error_abort);
+        sysbus_mmio_map(sbd, 0, npcm7xx_mft_addr[i]);
+        sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, NPCM7XX_MFT0_IRQ + i));
+    }
+
     /*
      * EMC Modules. Cannot fail.
      * The mapping of the device to its netdev backend works as follows:
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
     create_unimplemented_device("npcm7xx.peci", 0xf0100000, 4 * KiB);
     create_unimplemented_device("npcm7xx.siox[1]", 0xf0101000, 4 * KiB);
     create_unimplemented_device("npcm7xx.siox[2]", 0xf0102000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[0]", 0xf0180000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[1]", 0xf0181000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[2]", 0xf0182000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[3]", 0xf0183000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[4]", 0xf0184000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[5]", 0xf0185000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[6]", 0xf0186000, 4 * KiB);
-    create_unimplemented_device("npcm7xx.mft[7]", 0xf0187000, 4 * KiB);
     create_unimplemented_device("npcm7xx.pspi1", 0xf0200000, 4 * KiB);
     create_unimplemented_device("npcm7xx.pspi2", 0xf0201000, 4 * KiB);
     create_unimplemented_device("npcm7xx.ahbpci", 0xf0400000, 1 * MiB);
--
2.20.1
From: Hao Wu <wuhaotsh@google.com>

This patch adds fan_splitters (split IRQs) in NPCM7XX boards. Each fan
splitter corresponds to 1 PWM output and can connect to multiple fan
inputs (MFT devices).
In the NPCM7XX boards (NPCM750 EVB and Quanta GSJ), we initialize
these splitters and connect them to their corresponding modules
according to their specific device trees.

Reviewed-by: Doug Evans <dje@google.com>
Reviewed-by: Tyrone Ting <kfting@nuvoton.com>
Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210311180855.149764-5-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/npcm7xx.h | 11 ++++-
 hw/arm/npcm7xx_boards.c  | 99 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 109 insertions(+), 1 deletion(-)

diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/npcm7xx.h
+++ b/include/hw/arm/npcm7xx.h
@@ -XXX,XX +XXX,XX @@
 
 #include "hw/boards.h"
 #include "hw/adc/npcm7xx_adc.h"
+#include "hw/core/split-irq.h"
 #include "hw/cpu/a9mpcore.h"
 #include "hw/gpio/npcm7xx_gpio.h"
 #include "hw/i2c/npcm7xx_smbus.h"
@@ -XXX,XX +XXX,XX @@
 #define NPCM7XX_GIC_CPU_IF_ADDR (0xf03fe100) /* GIC within A9 */
 #define NPCM7XX_BOARD_SETUP_ADDR (0xffff1000) /* Boot ROM */
 
+#define NPCM7XX_NR_PWM_MODULES 2
+
 typedef struct NPCM7xxMachine {
     MachineState parent;
+    /*
+     * PWM fan splitter. each splitter connects to one PWM output and
+     * multiple MFT inputs.
+     */
+    SplitIRQ fan_splitter[NPCM7XX_NR_PWM_MODULES *
+                          NPCM7XX_PWM_PER_MODULE];
 } NPCM7xxMachine;
 
 #define TYPE_NPCM7XX_MACHINE MACHINE_TYPE_NAME("npcm7xx")
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxState {
     NPCM7xxCLKState clk;
     NPCM7xxTimerCtrlState tim[3];
     NPCM7xxADCState adc;
-    NPCM7xxPWMState pwm[2];
+    NPCM7xxPWMState pwm[NPCM7XX_NR_PWM_MODULES];
     NPCM7xxMFTState mft[8];
     NPCM7xxOTPState key_storage;
     NPCM7xxOTPState fuse_array;
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx_boards.c
+++ b/hw/arm/npcm7xx_boards.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/core/cpu.h"
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/loader.h"
+#include "hw/qdev-core.h"
 #include "hw/qdev-properties.h"
 #include "qapi/error.h"
 #include "qemu-common.h"
@@ -XXX,XX +XXX,XX @@ static void at24c_eeprom_init(NPCM7xxState *soc, int bus, uint8_t addr,
     i2c_slave_realize_and_unref(i2c_dev, i2c_bus, &error_abort);
 }
 
+static void npcm7xx_init_pwm_splitter(NPCM7xxMachine *machine,
+                                      NPCM7xxState *soc, const int *fan_counts)
+{
+    SplitIRQ *splitters = machine->fan_splitter;
+
+    /*
+     * PWM 0~3 belong to module 0 output 0~3.
+     * PWM 4~7 belong to module 1 output 0~3.
+     */
+    for (int i = 0; i < NPCM7XX_NR_PWM_MODULES; ++i) {
+        for (int j = 0; j < NPCM7XX_PWM_PER_MODULE; ++j) {
+            int splitter_no = i * NPCM7XX_PWM_PER_MODULE + j;
+            DeviceState *splitter;
+
+            if (fan_counts[splitter_no] < 1) {
+                continue;
+            }
+            object_initialize_child(OBJECT(machine), "fan-splitter[*]",
+                                    &splitters[splitter_no], TYPE_SPLIT_IRQ);
+            splitter = DEVICE(&splitters[splitter_no]);
+            qdev_prop_set_uint16(splitter, "num-lines",
+                                 fan_counts[splitter_no]);
+            qdev_realize(splitter, NULL, &error_abort);
+            qdev_connect_gpio_out_named(DEVICE(&soc->pwm[i]), "duty-gpio-out",
+                                        j, qdev_get_gpio_in(splitter, 0));
+        }
+    }
+}
+
+static void npcm7xx_connect_pwm_fan(NPCM7xxState *soc, SplitIRQ *splitter,
+                                    int fan_no, int output_no)
+{
+    DeviceState *fan;
+    int fan_input;
+    qemu_irq fan_duty_gpio;
+
+    g_assert(fan_no >= 0 && fan_no <= NPCM7XX_MFT_MAX_FAN_INPUT);
+    /*
+     * Fan 0~1 belong to module 0 input 0~1.
+     * Fan 2~3 belong to module 1 input 0~1.
+     * ...
+     * Fan 14~15 belong to module 7 input 0~1.
+     * Fan 16~17 belong to module 0 input 2~3.
+     * Fan 18~19 belong to module 1 input 2~3.
+     */
+    if (fan_no < 16) {
+        fan = DEVICE(&soc->mft[fan_no / 2]);
+        fan_input = fan_no % 2;
+    } else {
+        fan = DEVICE(&soc->mft[(fan_no - 16) / 2]);
+        fan_input = fan_no % 2 + 2;
+    }
+
+    /* Connect the Fan to PWM module */
+    fan_duty_gpio = qdev_get_gpio_in_named(fan, "duty", fan_input);
+    qdev_connect_gpio_out(DEVICE(splitter), output_no, fan_duty_gpio);
+}
+
 static void npcm750_evb_i2c_init(NPCM7xxState *soc)
 {
     /* lm75 temperature sensor on SVB, tmp105 is compatible */
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_i2c_init(NPCM7xxState *soc)
     i2c_slave_create_simple(npcm7xx_i2c_get_bus(soc, 6), "tmp105", 0x48);
 }
 
+static void npcm750_evb_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
+{
+    SplitIRQ *splitter = machine->fan_splitter;
+    static const int fan_counts[] = {2, 2, 2, 2, 2, 2, 2, 2};
+
+    npcm7xx_init_pwm_splitter(machine, soc, fan_counts);
+    npcm7xx_connect_pwm_fan(soc, &splitter[0], 0x00, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[0], 0x01, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[1], 0x02, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[1], 0x03, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[2], 0x04, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[2], 0x05, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[3], 0x06, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[3], 0x07, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[4], 0x08, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[4], 0x09, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[5], 0x0a, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[5], 0x0b, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[6], 0x0c, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[6], 0x0d, 1);
+    npcm7xx_connect_pwm_fan(soc, &splitter[7], 0x0e, 0);
+    npcm7xx_connect_pwm_fan(soc, &splitter[7], 0x0f, 1);
+}
+
 static void quanta_gsj_i2c_init(NPCM7xxState *soc)
 {
     /* GSJ machine have 4 max31725 temperature sensors, tmp105 is compatible. */
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_i2c_init(NPCM7xxState *soc)
     /* TODO: Add additional i2c devices. */
 }
 
+static void quanta_gsj_fan_init(NPCM7xxMachine *machine, NPCM7xxState *soc)
+{
+    SplitIRQ *splitter = machine->fan_splitter;
+    static const int fan_counts[] = {2, 2, 2, 0, 0, 0, 0, 0};
+
+    npcm7xx_init_pwm_splitter(machine, soc, fan_counts);
177
+ npcm7xx_connect_pwm_fan(soc, &splitter[0], 0x00, 0);
178
+ npcm7xx_connect_pwm_fan(soc, &splitter[0], 0x01, 1);
179
+ npcm7xx_connect_pwm_fan(soc, &splitter[1], 0x02, 0);
180
+ npcm7xx_connect_pwm_fan(soc, &splitter[1], 0x03, 1);
181
+ npcm7xx_connect_pwm_fan(soc, &splitter[2], 0x04, 0);
182
+ npcm7xx_connect_pwm_fan(soc, &splitter[2], 0x05, 1);
183
+}
184
+
185
static void npcm750_evb_init(MachineState *machine)
186
{
187
NPCM7xxState *soc;
188
@@ -XXX,XX +XXX,XX @@ static void npcm750_evb_init(MachineState *machine)
189
npcm7xx_load_bootrom(machine, soc);
190
npcm7xx_connect_flash(&soc->fiu[0], 0, "w25q256", drive_get(IF_MTD, 0, 0));
191
npcm750_evb_i2c_init(soc);
192
+ npcm750_evb_fan_init(NPCM7XX_MACHINE(machine), soc);
193
npcm7xx_load_kernel(machine, soc);
194
}
195
196
@@ -XXX,XX +XXX,XX @@ static void quanta_gsj_init(MachineState *machine)
197
npcm7xx_connect_flash(&soc->fiu[0], 0, "mx25l25635e",
198
drive_get(IF_MTD, 0, 0));
199
quanta_gsj_i2c_init(soc);
200
+ quanta_gsj_fan_init(NPCM7XX_MACHINE(machine), soc);
201
npcm7xx_load_kernel(machine, soc);
202
}
203
204
--
205
2.20.1
206
207
diff view generated by jsdifflib
1
Coverity points out (CID 1402195) that the loop in trans_VMOV_imm_dp()
1
From: Hao Wu <wuhaotsh@google.com>
2
that iterates over the destination registers in a short-vector VMOV
2
3
accidentally throws away the returned updated register number
3
This patch adds testing of PWM fan RPMs in the existing npcm7xx pwm
4
from vfp_advance_dreg(). Add the missing assignment. (We got this
4
test. It tests whether the MFT module can measure correct fan values
5
correct in trans_VMOV_imm_sp().)
5
for a PWM fan in NPCM7XX boards.
6
6
7
Fixes: 18cf951af9a27ae573a
7
Reviewed-by: Doug Evans <dje@google.com>
8
Reviewed-by: Tyrone Ting <kfting@nuvoton.com>
9
Signed-off-by: Hao Wu <wuhaotsh@google.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20210311180855.149764-6-wuhaotsh@google.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20190702105115.9465-1-peter.maydell@linaro.org
11
---
13
---
12
target/arm/translate-vfp.inc.c | 2 +-
14
tests/qtest/npcm7xx_pwm-test.c | 205 ++++++++++++++++++++++++++++++++-
13
1 file changed, 1 insertion(+), 1 deletion(-)
15
1 file changed, 199 insertions(+), 6 deletions(-)
14
16
15
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
17
diff --git a/tests/qtest/npcm7xx_pwm-test.c b/tests/qtest/npcm7xx_pwm-test.c
16
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-vfp.inc.c
19
--- a/tests/qtest/npcm7xx_pwm-test.c
18
+++ b/target/arm/translate-vfp.inc.c
20
+++ b/tests/qtest/npcm7xx_pwm-test.c
19
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
21
@@ -XXX,XX +XXX,XX @@
20
22
#define PLL_FBDV(rv) extract32((rv), 16, 12)
21
/* Set up the operands for the next iteration */
23
#define PLL_OTDV1(rv) extract32((rv), 8, 3)
22
veclen--;
24
#define PLL_OTDV2(rv) extract32((rv), 13, 3)
23
- vfp_advance_dreg(vd, delta_d);
25
+#define APB4CKDIV(rv) extract32((rv), 30, 2)
24
+ vd = vfp_advance_dreg(vd, delta_d);
26
#define APB3CKDIV(rv) extract32((rv), 28, 2)
27
#define CLK2CKDIV(rv) extract32((rv), 0, 1)
28
#define CLK4CKDIV(rv) extract32((rv), 26, 2)
29
@@ -XXX,XX +XXX,XX @@
30
31
#define MAX_DUTY 1000000
32
33
+/* MFT (PWM fan) related */
34
+#define MFT_BA(n) (0xf0180000 + ((n) * 0x1000))
35
+#define MFT_IRQ(n) (96 + (n))
36
+#define MFT_CNT1 0x00
37
+#define MFT_CRA 0x02
38
+#define MFT_CRB 0x04
39
+#define MFT_CNT2 0x06
40
+#define MFT_PRSC 0x08
41
+#define MFT_CKC 0x0a
42
+#define MFT_MCTRL 0x0c
43
+#define MFT_ICTRL 0x0e
44
+#define MFT_ICLR 0x10
45
+#define MFT_IEN 0x12
46
+#define MFT_CPA 0x14
47
+#define MFT_CPB 0x16
48
+#define MFT_CPCFG 0x18
49
+#define MFT_INASEL 0x1a
50
+#define MFT_INBSEL 0x1c
51
+
52
+#define MFT_MCTRL_ALL 0x64
53
+#define MFT_ICLR_ALL 0x3f
54
+#define MFT_IEN_ALL 0x3f
55
+#define MFT_CPCFG_EQ_MODE 0x44
56
+
57
+#define MFT_CKC_C2CSEL BIT(3)
58
+#define MFT_CKC_C1CSEL BIT(0)
59
+
60
+#define MFT_ICTRL_TFPND BIT(5)
61
+#define MFT_ICTRL_TEPND BIT(4)
62
+#define MFT_ICTRL_TDPND BIT(3)
63
+#define MFT_ICTRL_TCPND BIT(2)
64
+#define MFT_ICTRL_TBPND BIT(1)
65
+#define MFT_ICTRL_TAPND BIT(0)
66
+
67
+#define MFT_MAX_CNT 0xffff
68
+#define MFT_TIMEOUT 0x5000
69
+
70
+#define DEFAULT_RPM 19800
71
+#define DEFAULT_PRSC 255
72
+#define MFT_PULSE_PER_REVOLUTION 2
73
+
74
+#define MAX_ERROR 1
75
+
76
typedef struct PWMModule {
77
int irq;
78
uint64_t base_addr;
79
@@ -XXX,XX +XXX,XX @@ static uint64_t pwm_get_duty(QTestState *qts, int module_index, int pwm_index)
80
return pwm_qom_get(qts, path, name);
81
}
82
83
+static void mft_qom_set(QTestState *qts, int index, const char *name,
84
+ uint32_t value)
85
+{
86
+ QDict *response;
87
+ char *path = g_strdup_printf("/machine/soc/mft[%d]", index);
88
+
89
+ g_test_message("Setting properties %s of mft[%d] with value %u",
90
+ name, index, value);
91
+ response = qtest_qmp(qts, "{ 'execute': 'qom-set',"
92
+ " 'arguments': { 'path': %s, "
93
+ " 'property': %s, 'value': %u}}",
94
+ path, name, value);
95
+ /* The qom set message returns successfully. */
96
+ g_assert_true(qdict_haskey(response, "return"));
97
+}
98
+
99
static uint32_t get_pll(uint32_t con)
100
{
101
return REF_HZ * PLL_FBDV(con) / (PLL_INDV(con) * PLL_OTDV1(con)
102
* PLL_OTDV2(con));
103
}
104
105
-static uint64_t read_pclk(QTestState *qts)
106
+static uint64_t read_pclk(QTestState *qts, bool mft)
107
{
108
uint64_t freq = REF_HZ;
109
uint32_t clksel = qtest_readl(qts, CLK_BA + CLKSEL);
110
uint32_t pllcon;
111
uint32_t clkdiv1 = qtest_readl(qts, CLK_BA + CLKDIV1);
112
uint32_t clkdiv2 = qtest_readl(qts, CLK_BA + CLKDIV2);
113
+ uint32_t apbdiv = mft ? APB4CKDIV(clkdiv2) : APB3CKDIV(clkdiv2);
114
115
switch (CPUCKSEL(clksel)) {
116
case 0:
117
@@ -XXX,XX +XXX,XX @@ static uint64_t read_pclk(QTestState *qts)
118
g_assert_not_reached();
25
}
119
}
26
120
27
tcg_temp_free_i64(fd);
121
- freq >>= (CLK2CKDIV(clkdiv1) + CLK4CKDIV(clkdiv1) + APB3CKDIV(clkdiv2));
122
+ freq >>= (CLK2CKDIV(clkdiv1) + CLK4CKDIV(clkdiv1) + apbdiv);
123
124
return freq;
125
}
126
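The final shift in `read_pclk()` above combines three power-of-two dividers; the patch's change is to pick APB4's divider for the MFT clock instead of APB3's. A minimal sketch of that derivation (function name and the example divider values are assumptions, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: the peripheral clock is the selected PLL output divided by
 * CLK2CKDIV, CLK4CKDIV and the relevant APB divider, all encoded as
 * power-of-two shift counts. */
static uint64_t pclk_hz(uint64_t pll_hz, unsigned clk2div, unsigned clk4div,
                        unsigned apbdiv)
{
    return pll_hz >> (clk2div + clk4div + apbdiv);
}
```

e.g. an 800 MHz PLL with shift counts 1, 0 and 2 yields a 100 MHz peripheral clock.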
@@ -XXX,XX +XXX,XX @@ static uint32_t pwm_selector(uint32_t csr)
127
static uint64_t pwm_compute_freq(QTestState *qts, uint32_t ppr, uint32_t csr,
128
uint32_t cnr)
129
{
130
- return read_pclk(qts) / ((ppr + 1) * pwm_selector(csr) * (cnr + 1));
131
+ return read_pclk(qts, false) / ((ppr + 1) * pwm_selector(csr) * (cnr + 1));
132
}
133
134
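The frequency relation computed by `pwm_compute_freq()` above can be sketched on its own. This is an illustrative helper with an invented name; the divisors mirror the expression in the test:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the PWM output frequency: the peripheral clock divided by
 * the prescaler (PPR+1), the clock-source selector divider, and the
 * counter period (CNR+1). */
static uint64_t pwm_freq(uint64_t pclk, uint32_t ppr, uint32_t sel,
                         uint32_t cnr)
{
    return pclk / ((ppr + 1ULL) * sel * (cnr + 1));
}
```

For instance, a 16 MHz clock with no prescaling, selector 1 and a 16000-tick period gives a 1 kHz PWM output.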
static uint64_t pwm_compute_duty(uint32_t cnr, uint32_t cmr, bool inverted)
135
@@ -XXX,XX +XXX,XX @@ static void pwm_write(QTestState *qts, const TestData *td, unsigned offset,
136
qtest_writel(qts, td->module->base_addr + offset, value);
137
}
138
139
+static uint8_t mft_readb(QTestState *qts, int index, unsigned offset)
140
+{
141
+ return qtest_readb(qts, MFT_BA(index) + offset);
142
+}
143
+
144
+static uint16_t mft_readw(QTestState *qts, int index, unsigned offset)
145
+{
146
+ return qtest_readw(qts, MFT_BA(index) + offset);
147
+}
148
+
149
+static void mft_writeb(QTestState *qts, int index, unsigned offset,
150
+ uint8_t value)
151
+{
152
+ qtest_writeb(qts, MFT_BA(index) + offset, value);
153
+}
154
+
155
+static void mft_writew(QTestState *qts, int index, unsigned offset,
156
+ uint16_t value)
157
+{
158
+ return qtest_writew(qts, MFT_BA(index) + offset, value);
159
+}
160
+
161
static uint32_t pwm_read_ppr(QTestState *qts, const TestData *td)
162
{
163
return extract32(pwm_read(qts, td, PPR), ppr_base[pwm_index(td->pwm)], 8);
164
@@ -XXX,XX +XXX,XX @@ static void pwm_write_cmr(QTestState *qts, const TestData *td, uint32_t value)
165
pwm_write(qts, td, td->pwm->cmr_offset, value);
166
}
167
168
+static int mft_compute_index(const TestData *td)
169
+{
170
+ int index = pwm_module_index(td->module) * ARRAY_SIZE(pwm_list) +
171
+ pwm_index(td->pwm);
172
+
173
+ g_assert_cmpint(index, <,
174
+ ARRAY_SIZE(pwm_module_list) * ARRAY_SIZE(pwm_list));
175
+
176
+ return index;
177
+}
178
+
179
+static void mft_reset_counters(QTestState *qts, int index)
180
+{
181
+ mft_writew(qts, index, MFT_CNT1, MFT_MAX_CNT);
182
+ mft_writew(qts, index, MFT_CNT2, MFT_MAX_CNT);
183
+ mft_writew(qts, index, MFT_CRA, MFT_MAX_CNT);
184
+ mft_writew(qts, index, MFT_CRB, MFT_MAX_CNT);
185
+ mft_writew(qts, index, MFT_CPA, MFT_MAX_CNT - MFT_TIMEOUT);
186
+ mft_writew(qts, index, MFT_CPB, MFT_MAX_CNT - MFT_TIMEOUT);
187
+}
188
+
189
+static void mft_init(QTestState *qts, const TestData *td)
190
+{
191
+ int index = mft_compute_index(td);
192
+
193
+ /* Enable everything */
194
+ mft_writeb(qts, index, MFT_CKC, 0);
195
+ mft_writeb(qts, index, MFT_ICLR, MFT_ICLR_ALL);
196
+ mft_writeb(qts, index, MFT_MCTRL, MFT_MCTRL_ALL);
197
+ mft_writeb(qts, index, MFT_IEN, MFT_IEN_ALL);
198
+ mft_writeb(qts, index, MFT_INASEL, 0);
199
+ mft_writeb(qts, index, MFT_INBSEL, 0);
200
+
201
+ /* Set cpcfg to use EQ mode, same as kernel driver */
202
+ mft_writeb(qts, index, MFT_CPCFG, MFT_CPCFG_EQ_MODE);
203
+
204
+ /* Write default counters, timeout and prescaler */
205
+ mft_reset_counters(qts, index);
206
+ mft_writeb(qts, index, MFT_PRSC, DEFAULT_PRSC);
207
+
208
+ /* Write default max rpm via QMP */
209
+ mft_qom_set(qts, index, "max_rpm[0]", DEFAULT_RPM);
210
+ mft_qom_set(qts, index, "max_rpm[1]", DEFAULT_RPM);
211
+}
212
+
213
+static int32_t mft_compute_cnt(uint32_t rpm, uint64_t clk)
214
+{
215
+ uint64_t cnt;
216
+
217
+ if (rpm == 0) {
218
+ return -1;
219
+ }
220
+
221
+ cnt = clk * 60 / ((DEFAULT_PRSC + 1) * rpm * MFT_PULSE_PER_REVOLUTION);
222
+ if (cnt >= MFT_TIMEOUT) {
223
+ return -1;
224
+ }
225
+ return MFT_MAX_CNT - cnt;
226
+}
227
+
228
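The count computation in `mft_compute_cnt()` above follows from the tach signal model: with prescaler `DEFAULT_PRSC` and two pulses per revolution, the timer ticks `clk * 60 / ((PRSC+1) * rpm * 2)` times between pulses, and the counter (which counts down from `MFT_MAX_CNT`) latches the maximum minus that. A standalone sketch, with the test's constants repeated locally (the 12.5 MHz clock in the example below is an assumed value, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define MFT_MAX_CNT 0xffff
#define MFT_TIMEOUT 0x5000
#define DEFAULT_PRSC 255
#define PULSES_PER_REV 2

/* Expected capture-register value for a fan at the given RPM, or -1 if
 * the fan is stopped or too slow to measure before the timeout fires. */
static int32_t expected_capture(uint32_t rpm, uint64_t clk)
{
    uint64_t cnt;

    if (rpm == 0) {
        return -1;
    }
    cnt = clk * 60 / ((DEFAULT_PRSC + 1) * (uint64_t)rpm * PULSES_PER_REV);
    if (cnt >= MFT_TIMEOUT) {
        return -1;
    }
    return MFT_MAX_CNT - cnt;
}
```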
+static void mft_verify_rpm(QTestState *qts, const TestData *td, uint64_t duty)
229
+{
230
+ int index = mft_compute_index(td);
231
+ uint16_t cnt, cr;
232
+ uint32_t rpm = DEFAULT_RPM * duty / MAX_DUTY;
233
+ uint64_t clk = read_pclk(qts, true);
234
+ int32_t expected_cnt = mft_compute_cnt(rpm, clk);
235
+
236
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
237
+ g_test_message(
238
+ "verifying rpm for mft[%d]: clk: %lu, duty: %lu, rpm: %u, cnt: %d",
239
+ index, clk, duty, rpm, expected_cnt);
240
+
241
+ /* Verify rpm for fan A */
242
+ /* Stop capture */
243
+ mft_writeb(qts, index, MFT_CKC, 0);
244
+ mft_writeb(qts, index, MFT_ICLR, MFT_ICLR_ALL);
245
+ mft_reset_counters(qts, index);
246
+ g_assert_cmphex(mft_readw(qts, index, MFT_CNT1), ==, MFT_MAX_CNT);
247
+ g_assert_cmphex(mft_readw(qts, index, MFT_CRA), ==, MFT_MAX_CNT);
248
+ g_assert_cmphex(mft_readw(qts, index, MFT_CPA), ==,
249
+ MFT_MAX_CNT - MFT_TIMEOUT);
250
+ /* Start capture */
251
+ mft_writeb(qts, index, MFT_CKC, MFT_CKC_C1CSEL);
252
+ g_assert_true(qtest_get_irq(qts, MFT_IRQ(index)));
253
+ if (expected_cnt == -1) {
254
+ g_assert_cmphex(mft_readb(qts, index, MFT_ICTRL), ==, MFT_ICTRL_TEPND);
255
+ } else {
256
+ g_assert_cmphex(mft_readb(qts, index, MFT_ICTRL), ==, MFT_ICTRL_TAPND);
257
+ cnt = mft_readw(qts, index, MFT_CNT1);
258
+ /*
259
+ * Due to error in clock measurement and rounding, we might have a small
260
+ * error in measuring RPM.
261
+ */
262
+ g_assert_cmphex(cnt + MAX_ERROR, >=, expected_cnt);
263
+ g_assert_cmphex(cnt, <=, expected_cnt + MAX_ERROR);
264
+ cr = mft_readw(qts, index, MFT_CRA);
265
+ g_assert_cmphex(cnt, ==, cr);
266
+ }
267
+
268
+ /* Verify rpm for fan B */
269
+
270
+ qtest_irq_intercept_out(qts, "/machine/soc/a9mpcore/gic");
271
+}
272
+
273
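`mft_verify_rpm()` above derives the expected RPM by scaling the configured maximum linearly with the PWM duty cycle. A minimal sketch of that model (helper name invented, constants taken from the test's defines):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_DUTY 1000000
#define DEFAULT_RPM 19800

/* Fan model used by the test: RPM scales linearly with duty cycle,
 * reaching DEFAULT_RPM at full duty. */
static uint32_t duty_to_rpm(uint64_t duty)
{
    return DEFAULT_RPM * duty / MAX_DUTY;
}
```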
/* Check pwm registers can be reset to default value */
274
static void test_init(gconstpointer test_data)
275
{
276
const TestData *td = test_data;
277
- QTestState *qts = qtest_init("-machine quanta-gsj");
278
+ QTestState *qts = qtest_init("-machine npcm750-evb");
279
int module = pwm_module_index(td->module);
280
int pwm = pwm_index(td->pwm);
281
282
@@ -XXX,XX +XXX,XX @@ static void test_init(gconstpointer test_data)
283
static void test_oneshot(gconstpointer test_data)
284
{
285
const TestData *td = test_data;
286
- QTestState *qts = qtest_init("-machine quanta-gsj");
287
+ QTestState *qts = qtest_init("-machine npcm750-evb");
288
int module = pwm_module_index(td->module);
289
int pwm = pwm_index(td->pwm);
290
uint32_t ppr, csr, pcr;
291
@@ -XXX,XX +XXX,XX @@ static void test_oneshot(gconstpointer test_data)
292
static void test_toggle(gconstpointer test_data)
293
{
294
const TestData *td = test_data;
295
- QTestState *qts = qtest_init("-machine quanta-gsj");
296
+ QTestState *qts = qtest_init("-machine npcm750-evb");
297
int module = pwm_module_index(td->module);
298
int pwm = pwm_index(td->pwm);
299
uint32_t ppr, csr, pcr, cnr, cmr;
300
int i, j, k, l;
301
uint64_t expected_freq, expected_duty;
302
303
+ mft_init(qts, td);
304
+
305
pcr = CH_EN | CH_MOD;
306
for (i = 0; i < ARRAY_SIZE(ppr_list); ++i) {
307
ppr = ppr_list[i];
308
@@ -XXX,XX +XXX,XX @@ static void test_toggle(gconstpointer test_data)
309
==, expected_freq);
310
}
311
312
+ /* Test MFT's RPM is correct. */
313
+ mft_verify_rpm(qts, td, expected_duty);
314
+
315
/* Test inverted mode */
316
expected_duty = pwm_compute_duty(cnr, cmr, true);
317
pwm_write_pcr(qts, td, pcr | CH_INV);
28
--
318
--
29
2.20.1
319
2.20.1
30
320
31
321
1
In v8M, an attempt to return from an exception which is not
1
For a long time now the UI layer has guaranteed that the console
2
active is an illegal exception return. For this purpose,
2
surface is always 32 bits per pixel. Remove the legacy dead
3
exceptions which can configurably target either Secure or
3
code from the pl110 display device which was handling the
4
NonSecure are not considered to be active if they are
4
possibility that the console surface was some other format.
5
configured for the opposite security state for the one
6
we're trying to return from (eg attempt to return from
7
an NS NMI but NMI targets Secure). In the pseudocode this
8
is handled by IsActiveForState().
9
10
Detect this case rather than counting an active exception
11
possibly of the wrong security state as being sufficient.
12
5
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
15
Message-id: 20190617175317.27557-4-peter.maydell@linaro.org
8
Message-id: 20210211141515.8755-2-peter.maydell@linaro.org
16
---
9
---
17
hw/intc/armv7m_nvic.c | 14 +++++++++++++-
10
hw/display/pl110.c | 53 +++++++---------------------------------------
18
1 file changed, 13 insertions(+), 1 deletion(-)
11
1 file changed, 8 insertions(+), 45 deletions(-)
19
12
20
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
13
diff --git a/hw/display/pl110.c b/hw/display/pl110.c
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/armv7m_nvic.c
15
--- a/hw/display/pl110.c
23
+++ b/hw/intc/armv7m_nvic.c
16
+++ b/hw/display/pl110.c
24
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
17
@@ -XXX,XX +XXX,XX @@ static const unsigned char *idregs[] = {
25
return -1;
18
pl111_id
19
};
20
21
-#define BITS 8
22
-#include "pl110_template.h"
23
-#define BITS 15
24
-#include "pl110_template.h"
25
-#define BITS 16
26
-#include "pl110_template.h"
27
-#define BITS 24
28
-#include "pl110_template.h"
29
#define BITS 32
30
#include "pl110_template.h"
31
32
@@ -XXX,XX +XXX,XX @@ static void pl110_update_display(void *opaque)
33
PL110State *s = (PL110State *)opaque;
34
SysBusDevice *sbd;
35
DisplaySurface *surface = qemu_console_surface(s->con);
36
- drawfn* fntable;
37
drawfn fn;
38
- int dest_width;
39
int src_width;
40
int bpp_offset;
41
int first;
42
@@ -XXX,XX +XXX,XX @@ static void pl110_update_display(void *opaque)
43
44
sbd = SYS_BUS_DEVICE(s);
45
46
- switch (surface_bits_per_pixel(surface)) {
47
- case 0:
48
- return;
49
- case 8:
50
- fntable = pl110_draw_fn_8;
51
- dest_width = 1;
52
- break;
53
- case 15:
54
- fntable = pl110_draw_fn_15;
55
- dest_width = 2;
56
- break;
57
- case 16:
58
- fntable = pl110_draw_fn_16;
59
- dest_width = 2;
60
- break;
61
- case 24:
62
- fntable = pl110_draw_fn_24;
63
- dest_width = 3;
64
- break;
65
- case 32:
66
- fntable = pl110_draw_fn_32;
67
- dest_width = 4;
68
- break;
69
- default:
70
- fprintf(stderr, "pl110: Bad color depth\n");
71
- exit(1);
72
- }
73
if (s->cr & PL110_CR_BGR)
74
bpp_offset = 0;
75
else
76
@@ -XXX,XX +XXX,XX @@ static void pl110_update_display(void *opaque)
77
}
26
}
78
}
27
79
28
- ret = nvic_rettobase(s);
80
- if (s->cr & PL110_CR_BEBO)
29
+ /*
81
- fn = fntable[s->bpp + 8 + bpp_offset];
30
+ * If this is a configurable exception and it is currently
82
- else if (s->cr & PL110_CR_BEPO)
31
+ * targeting the opposite security state from the one we're trying
83
- fn = fntable[s->bpp + 16 + bpp_offset];
32
+ * to complete it for, this counts as an illegal exception return.
84
- else
33
+ * We still need to deactivate whatever vector the logic above has
85
- fn = fntable[s->bpp + bpp_offset];
34
+ * selected, though, as it might not be the same as the one for the
86
+ if (s->cr & PL110_CR_BEBO) {
35
+ * requested exception number.
87
+ fn = pl110_draw_fn_32[s->bpp + 8 + bpp_offset];
36
+ */
88
+ } else if (s->cr & PL110_CR_BEPO) {
37
+ if (!exc_is_banked(irq) && exc_targets_secure(s, irq) != secure) {
89
+ fn = pl110_draw_fn_32[s->bpp + 16 + bpp_offset];
38
+ ret = -1;
39
+ } else {
90
+ } else {
40
+ ret = nvic_rettobase(s);
91
+ fn = pl110_draw_fn_32[s->bpp + bpp_offset];
41
+ }
92
+ }
42
93
43
vec->active = 0;
94
src_width = s->cols;
44
if (vec->level) {
95
switch (s->bpp) {
96
@@ -XXX,XX +XXX,XX @@ static void pl110_update_display(void *opaque)
97
src_width <<= 2;
98
break;
99
}
100
- dest_width *= s->cols;
101
first = 0;
102
if (s->invalidate) {
103
framebuffer_update_memory_section(&s->fbsection,
104
@@ -XXX,XX +XXX,XX @@ static void pl110_update_display(void *opaque)
105
106
framebuffer_update_display(surface, &s->fbsection,
107
s->cols, s->rows,
108
- src_width, dest_width, 0,
109
+ src_width, s->cols * 4, 0,
110
s->invalidate,
111
fn, s->palette,
112
&first, &last);
45
--
113
--
46
2.20.1
114
2.20.1
47
115
48
116
New patch
1
1
The pl110_template.h header has a doubly-nested multiple-include pattern:
2
* pl110.c includes it once for each host bit depth (now always 32)
3
* every time it is included, it includes itself 6 times, to account
4
for multiple guest device pixel and byte orders
5
6
Now that we only have to deal with 32-bit host bit depths, we can move the
7
code corresponding to the outer layer of this double-nesting to be
8
directly in pl110.c and reduce the template header to a single layer
9
of nesting.
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
13
Message-id: 20210211141515.8755-3-peter.maydell@linaro.org
14
---
15
hw/display/pl110_template.h | 100 +-----------------------------------
16
hw/display/pl110.c | 79 ++++++++++++++++++++++++++++
17
2 files changed, 80 insertions(+), 99 deletions(-)
18
19
diff --git a/hw/display/pl110_template.h b/hw/display/pl110_template.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/display/pl110_template.h
22
+++ b/hw/display/pl110_template.h
23
@@ -XXX,XX +XXX,XX @@
24
*/
25
26
#ifndef ORDER
27
-
28
-#if BITS == 8
29
-#define COPY_PIXEL(to, from) *(to++) = from
30
-#elif BITS == 15 || BITS == 16
31
-#define COPY_PIXEL(to, from) do { *(uint16_t *)to = from; to += 2; } while (0)
32
-#elif BITS == 24
33
-#define COPY_PIXEL(to, from) \
34
- do { \
35
- *(to++) = from; \
36
- *(to++) = (from) >> 8; \
37
- *(to++) = (from) >> 16; \
38
- } while (0)
39
-#elif BITS == 32
40
-#define COPY_PIXEL(to, from) do { *(uint32_t *)to = from; to += 4; } while (0)
41
-#else
42
-#error unknown bit depth
43
+#error "pl110_template.h is only for inclusion by pl110.c"
44
#endif
45
46
-#undef RGB
47
-#define BORDER bgr
48
-#define ORDER 0
49
-#include "pl110_template.h"
50
-#define ORDER 1
51
-#include "pl110_template.h"
52
-#define ORDER 2
53
-#include "pl110_template.h"
54
-#undef BORDER
55
-#define RGB
56
-#define BORDER rgb
57
-#define ORDER 0
58
-#include "pl110_template.h"
59
-#define ORDER 1
60
-#include "pl110_template.h"
61
-#define ORDER 2
62
-#include "pl110_template.h"
63
-#undef BORDER
64
-
65
-static drawfn glue(pl110_draw_fn_,BITS)[48] =
66
-{
67
- glue(pl110_draw_line1_lblp_bgr,BITS),
68
- glue(pl110_draw_line2_lblp_bgr,BITS),
69
- glue(pl110_draw_line4_lblp_bgr,BITS),
70
- glue(pl110_draw_line8_lblp_bgr,BITS),
71
- glue(pl110_draw_line16_555_lblp_bgr,BITS),
72
- glue(pl110_draw_line32_lblp_bgr,BITS),
73
- glue(pl110_draw_line16_lblp_bgr,BITS),
74
- glue(pl110_draw_line12_lblp_bgr,BITS),
75
-
76
- glue(pl110_draw_line1_bbbp_bgr,BITS),
77
- glue(pl110_draw_line2_bbbp_bgr,BITS),
78
- glue(pl110_draw_line4_bbbp_bgr,BITS),
79
- glue(pl110_draw_line8_bbbp_bgr,BITS),
80
- glue(pl110_draw_line16_555_bbbp_bgr,BITS),
81
- glue(pl110_draw_line32_bbbp_bgr,BITS),
82
- glue(pl110_draw_line16_bbbp_bgr,BITS),
83
- glue(pl110_draw_line12_bbbp_bgr,BITS),
84
-
85
- glue(pl110_draw_line1_lbbp_bgr,BITS),
86
- glue(pl110_draw_line2_lbbp_bgr,BITS),
87
- glue(pl110_draw_line4_lbbp_bgr,BITS),
88
- glue(pl110_draw_line8_lbbp_bgr,BITS),
89
- glue(pl110_draw_line16_555_lbbp_bgr,BITS),
90
- glue(pl110_draw_line32_lbbp_bgr,BITS),
91
- glue(pl110_draw_line16_lbbp_bgr,BITS),
92
- glue(pl110_draw_line12_lbbp_bgr,BITS),
93
-
94
- glue(pl110_draw_line1_lblp_rgb,BITS),
95
- glue(pl110_draw_line2_lblp_rgb,BITS),
96
- glue(pl110_draw_line4_lblp_rgb,BITS),
97
- glue(pl110_draw_line8_lblp_rgb,BITS),
98
- glue(pl110_draw_line16_555_lblp_rgb,BITS),
99
- glue(pl110_draw_line32_lblp_rgb,BITS),
100
- glue(pl110_draw_line16_lblp_rgb,BITS),
101
- glue(pl110_draw_line12_lblp_rgb,BITS),
102
-
103
- glue(pl110_draw_line1_bbbp_rgb,BITS),
104
- glue(pl110_draw_line2_bbbp_rgb,BITS),
105
- glue(pl110_draw_line4_bbbp_rgb,BITS),
106
- glue(pl110_draw_line8_bbbp_rgb,BITS),
107
- glue(pl110_draw_line16_555_bbbp_rgb,BITS),
108
- glue(pl110_draw_line32_bbbp_rgb,BITS),
109
- glue(pl110_draw_line16_bbbp_rgb,BITS),
110
- glue(pl110_draw_line12_bbbp_rgb,BITS),
111
-
112
- glue(pl110_draw_line1_lbbp_rgb,BITS),
113
- glue(pl110_draw_line2_lbbp_rgb,BITS),
114
- glue(pl110_draw_line4_lbbp_rgb,BITS),
115
- glue(pl110_draw_line8_lbbp_rgb,BITS),
116
- glue(pl110_draw_line16_555_lbbp_rgb,BITS),
117
- glue(pl110_draw_line32_lbbp_rgb,BITS),
118
- glue(pl110_draw_line16_lbbp_rgb,BITS),
119
- glue(pl110_draw_line12_lbbp_rgb,BITS),
120
-};
121
-
122
-#undef BITS
123
-#undef COPY_PIXEL
124
-
125
-#else
126
-
127
#if ORDER == 0
128
#define NAME glue(glue(lblp_, BORDER), BITS)
129
#ifdef HOST_WORDS_BIGENDIAN
130
@@ -XXX,XX +XXX,XX @@ static void glue(pl110_draw_line12_,NAME)(void *opaque, uint8_t *d, const uint8_
131
#undef NAME
132
#undef SWAP_WORDS
133
#undef ORDER
134
-
135
-#endif
136
diff --git a/hw/display/pl110.c b/hw/display/pl110.c
137
index XXXXXXX..XXXXXXX 100644
138
--- a/hw/display/pl110.c
139
+++ b/hw/display/pl110.c
140
@@ -XXX,XX +XXX,XX @@ static const unsigned char *idregs[] = {
141
};
142
143
#define BITS 32
144
+#define COPY_PIXEL(to, from) do { *(uint32_t *)to = from; to += 4; } while (0)
145
+
146
+#undef RGB
147
+#define BORDER bgr
148
+#define ORDER 0
149
#include "pl110_template.h"
150
+#define ORDER 1
151
+#include "pl110_template.h"
152
+#define ORDER 2
153
+#include "pl110_template.h"
154
+#undef BORDER
155
+#define RGB
156
+#define BORDER rgb
157
+#define ORDER 0
158
+#include "pl110_template.h"
159
+#define ORDER 1
160
+#include "pl110_template.h"
161
+#define ORDER 2
162
+#include "pl110_template.h"
163
+#undef BORDER
164
+
165
+static drawfn pl110_draw_fn_32[48] = {
166
+ pl110_draw_line1_lblp_bgr32,
167
+ pl110_draw_line2_lblp_bgr32,
168
+ pl110_draw_line4_lblp_bgr32,
169
+ pl110_draw_line8_lblp_bgr32,
170
+ pl110_draw_line16_555_lblp_bgr32,
171
+ pl110_draw_line32_lblp_bgr32,
172
+ pl110_draw_line16_lblp_bgr32,
173
+ pl110_draw_line12_lblp_bgr32,
174
+
175
+ pl110_draw_line1_bbbp_bgr32,
176
+ pl110_draw_line2_bbbp_bgr32,
177
+ pl110_draw_line4_bbbp_bgr32,
178
+ pl110_draw_line8_bbbp_bgr32,
179
+ pl110_draw_line16_555_bbbp_bgr32,
180
+ pl110_draw_line32_bbbp_bgr32,
181
+ pl110_draw_line16_bbbp_bgr32,
182
+ pl110_draw_line12_bbbp_bgr32,
183
+
184
+ pl110_draw_line1_lbbp_bgr32,
185
+ pl110_draw_line2_lbbp_bgr32,
186
+ pl110_draw_line4_lbbp_bgr32,
187
+ pl110_draw_line8_lbbp_bgr32,
188
+ pl110_draw_line16_555_lbbp_bgr32,
189
+ pl110_draw_line32_lbbp_bgr32,
190
+ pl110_draw_line16_lbbp_bgr32,
191
+ pl110_draw_line12_lbbp_bgr32,
192
+
193
+ pl110_draw_line1_lblp_rgb32,
194
+ pl110_draw_line2_lblp_rgb32,
195
+ pl110_draw_line4_lblp_rgb32,
196
+ pl110_draw_line8_lblp_rgb32,
197
+ pl110_draw_line16_555_lblp_rgb32,
198
+ pl110_draw_line32_lblp_rgb32,
199
+ pl110_draw_line16_lblp_rgb32,
200
+ pl110_draw_line12_lblp_rgb32,
201
+
202
+ pl110_draw_line1_bbbp_rgb32,
203
+ pl110_draw_line2_bbbp_rgb32,
204
+ pl110_draw_line4_bbbp_rgb32,
205
+ pl110_draw_line8_bbbp_rgb32,
206
+ pl110_draw_line16_555_bbbp_rgb32,
207
+ pl110_draw_line32_bbbp_rgb32,
208
+ pl110_draw_line16_bbbp_rgb32,
209
+ pl110_draw_line12_bbbp_rgb32,
210
+
211
+ pl110_draw_line1_lbbp_rgb32,
212
+ pl110_draw_line2_lbbp_rgb32,
213
+ pl110_draw_line4_lbbp_rgb32,
214
+ pl110_draw_line8_lbbp_rgb32,
215
+ pl110_draw_line16_555_lbbp_rgb32,
216
+ pl110_draw_line32_lbbp_rgb32,
217
+ pl110_draw_line16_lbbp_rgb32,
218
+ pl110_draw_line12_lbbp_rgb32,
219
+};
220
+
221
+#undef BITS
222
+#undef COPY_PIXEL
223
+
224
225
static int pl110_enabled(PL110State *s)
226
{
227
--
228
2.20.1
229
230
New patch
1
1
BITS is always 32, so remove all uses of it from the template header,
2
by dropping the trailing '32' from the draw function names and
3
not constructing the name of rgb_to_pixel32() via the glue() macro.
4
5
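For context, `rgb_to_pixel32()` packs 8-bit colour components into the 32bpp surface format. A sketch of the assumed layout (r in bits 16-23, g in 8-15, b in 0-7); this is an illustration, not the definition from QEMU's headers:

```c
#include <assert.h>
#include <stdint.h>

/* Pack 8-bit r/g/b into a 0x00RRGGBB 32-bit pixel. */
static uint32_t rgb_to_pixel32(unsigned r, unsigned g, unsigned b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}
```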
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
7
Message-id: 20210211141515.8755-4-peter.maydell@linaro.org
8
---
9
hw/display/pl110_template.h | 20 +++----
10
hw/display/pl110.c | 113 ++++++++++++++++++------------------
11
2 files changed, 65 insertions(+), 68 deletions(-)
12
13
diff --git a/hw/display/pl110_template.h b/hw/display/pl110_template.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/display/pl110_template.h
16
+++ b/hw/display/pl110_template.h
17
@@ -XXX,XX +XXX,XX @@
18
#endif
19
20
#if ORDER == 0
21
-#define NAME glue(glue(lblp_, BORDER), BITS)
22
+#define NAME glue(lblp_, BORDER)
23
#ifdef HOST_WORDS_BIGENDIAN
24
#define SWAP_WORDS 1
25
#endif
26
#elif ORDER == 1
27
-#define NAME glue(glue(bbbp_, BORDER), BITS)
28
+#define NAME glue(bbbp_, BORDER)
29
#ifndef HOST_WORDS_BIGENDIAN
30
#define SWAP_WORDS 1
31
#endif
32
#else
33
#define SWAP_PIXELS 1
34
-#define NAME glue(glue(lbbp_, BORDER), BITS)
35
+#define NAME glue(lbbp_, BORDER)
36
#ifdef HOST_WORDS_BIGENDIAN
37
#define SWAP_WORDS 1
38
#endif
39
@@ -XXX,XX +XXX,XX @@ static void glue(pl110_draw_line16_,NAME)(void *opaque, uint8_t *d, const uint8_
40
MSB = (data & 0x1f) << 3;
41
data >>= 5;
42
#endif
43
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
44
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
45
LSB = (data & 0x1f) << 3;
46
data >>= 5;
47
g = (data & 0x3f) << 2;
48
data >>= 6;
49
MSB = (data & 0x1f) << 3;
50
data >>= 5;
51
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
52
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
53
#undef MSB
54
#undef LSB
55
width -= 2;
56
@@ -XXX,XX +XXX,XX @@ static void glue(pl110_draw_line32_,NAME)(void *opaque, uint8_t *d, const uint8_
57
g = (data >> 16) & 0xff;
58
MSB = (data >> 8) & 0xff;
59
#endif
60
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
61
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
62
#undef MSB
63
#undef LSB
64
width--;
65
@@ -XXX,XX +XXX,XX @@ static void glue(pl110_draw_line16_555_,NAME)(void *opaque, uint8_t *d, const ui
66
data >>= 5;
67
MSB = (data & 0x1f) << 3;
68
data >>= 5;
69
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
70
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
71
LSB = (data & 0x1f) << 3;
72
data >>= 5;
73
g = (data & 0x1f) << 3;
74
data >>= 5;
75
MSB = (data & 0x1f) << 3;
76
data >>= 6;
77
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
78
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
79
#undef MSB
80
#undef LSB
81
width -= 2;
82
@@ -XXX,XX +XXX,XX @@ static void glue(pl110_draw_line12_,NAME)(void *opaque, uint8_t *d, const uint8_
83
data >>= 4;
84
MSB = (data & 0xf) << 4;
85
data >>= 8;
86
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
87
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
88
LSB = (data & 0xf) << 4;
89
data >>= 4;
90
g = (data & 0xf) << 4;
91
data >>= 4;
92
MSB = (data & 0xf) << 4;
93
data >>= 8;
94
- COPY_PIXEL(d, glue(rgb_to_pixel,BITS)(r, g, b));
95
+ COPY_PIXEL(d, rgb_to_pixel32(r, g, b));
96
#undef MSB
97
#undef LSB
98
width -= 2;
99
diff --git a/hw/display/pl110.c b/hw/display/pl110.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/hw/display/pl110.c
102
+++ b/hw/display/pl110.c
103
@@ -XXX,XX +XXX,XX @@ static const unsigned char *idregs[] = {
104
pl111_id
105
};
106
107
-#define BITS 32
108
#define COPY_PIXEL(to, from) do { *(uint32_t *)to = from; to += 4; } while (0)
109
110
#undef RGB
111
@@ -XXX,XX +XXX,XX @@ static const unsigned char *idregs[] = {
112
#include "pl110_template.h"
113
#undef BORDER
114
115
-static drawfn pl110_draw_fn_32[48] = {
116
- pl110_draw_line1_lblp_bgr32,
117
- pl110_draw_line2_lblp_bgr32,
118
- pl110_draw_line4_lblp_bgr32,
119
- pl110_draw_line8_lblp_bgr32,
120
- pl110_draw_line16_555_lblp_bgr32,
121
- pl110_draw_line32_lblp_bgr32,
122
- pl110_draw_line16_lblp_bgr32,
123
- pl110_draw_line12_lblp_bgr32,
124
-
125
- pl110_draw_line1_bbbp_bgr32,
126
- pl110_draw_line2_bbbp_bgr32,
127
- pl110_draw_line4_bbbp_bgr32,
128
- pl110_draw_line8_bbbp_bgr32,
129
- pl110_draw_line16_555_bbbp_bgr32,
130
- pl110_draw_line32_bbbp_bgr32,
131
- pl110_draw_line16_bbbp_bgr32,
132
- pl110_draw_line12_bbbp_bgr32,
133
-
134
- pl110_draw_line1_lbbp_bgr32,
135
- pl110_draw_line2_lbbp_bgr32,
136
- pl110_draw_line4_lbbp_bgr32,
137
- pl110_draw_line8_lbbp_bgr32,
138
- pl110_draw_line16_555_lbbp_bgr32,
139
- pl110_draw_line32_lbbp_bgr32,
140
- pl110_draw_line16_lbbp_bgr32,
141
- pl110_draw_line12_lbbp_bgr32,
142
-
143
- pl110_draw_line1_lblp_rgb32,
144
- pl110_draw_line2_lblp_rgb32,
145
- pl110_draw_line4_lblp_rgb32,
146
- pl110_draw_line8_lblp_rgb32,
147
- pl110_draw_line16_555_lblp_rgb32,
148
- pl110_draw_line32_lblp_rgb32,
149
- pl110_draw_line16_lblp_rgb32,
150
- pl110_draw_line12_lblp_rgb32,
151
-
152
- pl110_draw_line1_bbbp_rgb32,
153
- pl110_draw_line2_bbbp_rgb32,
154
- pl110_draw_line4_bbbp_rgb32,
155
- pl110_draw_line8_bbbp_rgb32,
156
- pl110_draw_line16_555_bbbp_rgb32,
157
- pl110_draw_line32_bbbp_rgb32,
158
- pl110_draw_line16_bbbp_rgb32,
159
- pl110_draw_line12_bbbp_rgb32,
160
-
161
- pl110_draw_line1_lbbp_rgb32,
162
- pl110_draw_line2_lbbp_rgb32,
163
- pl110_draw_line4_lbbp_rgb32,
164
- pl110_draw_line8_lbbp_rgb32,
165
- pl110_draw_line16_555_lbbp_rgb32,
166
- pl110_draw_line32_lbbp_rgb32,
167
- pl110_draw_line16_lbbp_rgb32,
168
- pl110_draw_line12_lbbp_rgb32,
169
-};
170
-
171
-#undef BITS
172
#undef COPY_PIXEL
173
174
+static drawfn pl110_draw_fn_32[48] = {
175
+ pl110_draw_line1_lblp_bgr,
176
+ pl110_draw_line2_lblp_bgr,
177
+ pl110_draw_line4_lblp_bgr,
178
+ pl110_draw_line8_lblp_bgr,
179
+ pl110_draw_line16_555_lblp_bgr,
180
+ pl110_draw_line32_lblp_bgr,
181
+ pl110_draw_line16_lblp_bgr,
182
+ pl110_draw_line12_lblp_bgr,
183
+
184
+ pl110_draw_line1_bbbp_bgr,
185
+ pl110_draw_line2_bbbp_bgr,
186
+ pl110_draw_line4_bbbp_bgr,
187
+ pl110_draw_line8_bbbp_bgr,
188
+ pl110_draw_line16_555_bbbp_bgr,
189
+ pl110_draw_line32_bbbp_bgr,
190
+ pl110_draw_line16_bbbp_bgr,
191
+ pl110_draw_line12_bbbp_bgr,
192
+
193
+ pl110_draw_line1_lbbp_bgr,
194
+ pl110_draw_line2_lbbp_bgr,
195
+ pl110_draw_line4_lbbp_bgr,
196
+ pl110_draw_line8_lbbp_bgr,
197
+ pl110_draw_line16_555_lbbp_bgr,
198
+ pl110_draw_line32_lbbp_bgr,
199
+ pl110_draw_line16_lbbp_bgr,
200
+ pl110_draw_line12_lbbp_bgr,
201
+
202
+ pl110_draw_line1_lblp_rgb,
203
+ pl110_draw_line2_lblp_rgb,
204
+ pl110_draw_line4_lblp_rgb,
205
+ pl110_draw_line8_lblp_rgb,
206
+ pl110_draw_line16_555_lblp_rgb,
207
+ pl110_draw_line32_lblp_rgb,
208
+ pl110_draw_line16_lblp_rgb,
209
+ pl110_draw_line12_lblp_rgb,
210
+
211
+ pl110_draw_line1_bbbp_rgb,
212
+ pl110_draw_line2_bbbp_rgb,
213
+ pl110_draw_line4_bbbp_rgb,
214
+ pl110_draw_line8_bbbp_rgb,
215
+ pl110_draw_line16_555_bbbp_rgb,
216
+ pl110_draw_line32_bbbp_rgb,
217
+ pl110_draw_line16_bbbp_rgb,
218
+ pl110_draw_line12_bbbp_rgb,
219
+
220
+ pl110_draw_line1_lbbp_rgb,
221
+ pl110_draw_line2_lbbp_rgb,
222
+ pl110_draw_line4_lbbp_rgb,
223
+ pl110_draw_line8_lbbp_rgb,
224
+ pl110_draw_line16_555_lbbp_rgb,
225
+ pl110_draw_line32_lbbp_rgb,
226
+ pl110_draw_line16_lbbp_rgb,
227
+ pl110_draw_line12_lbbp_rgb,
228
+};
229
230
static int pl110_enabled(PL110State *s)
231
{
232
--
233
2.20.1
234
235
diff view generated by jsdifflib
New patch
1
For a long time now the UI layer has guaranteed that the console
2
surface is always 32 bits per pixel. Remove the legacy dead code
3
from the pxa2xx_lcd display device which was handling the possibility
4
that the console surface was some other format.
1
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
8
Message-id: 20210211141515.8755-5-peter.maydell@linaro.org
9
---
10
hw/display/pxa2xx_lcd.c | 79 +++++++++--------------------------------
11
1 file changed, 17 insertions(+), 62 deletions(-)
12
13
diff --git a/hw/display/pxa2xx_lcd.c b/hw/display/pxa2xx_lcd.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/display/pxa2xx_lcd.c
16
+++ b/hw/display/pxa2xx_lcd.c
17
@@ -XXX,XX +XXX,XX @@ struct PXA2xxLCDState {
18
19
int invalidated;
20
QemuConsole *con;
21
- drawfn *line_fn[2];
22
int dest_width;
23
int xres, yres;
24
int pal_for;
25
@@ -XXX,XX +XXX,XX @@ typedef struct QEMU_PACKED {
26
#define LDCMD_SOFINT    (1 << 22)
27
#define LDCMD_PAL    (1 << 26)
28
29
+#define BITS 32
30
+#include "pxa2xx_template.h"
31
+
32
/* Route internal interrupt lines to the global IC */
33
static void pxa2xx_lcdc_int_update(PXA2xxLCDState *s)
34
{
35
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_palette_parse(PXA2xxLCDState *s, int ch, int bpp)
36
}
37
}
38
39
+static inline drawfn pxa2xx_drawfn(PXA2xxLCDState *s)
40
+{
41
+ if (s->transp) {
42
+ return pxa2xx_draw_fn_32t[s->bpp];
43
+ } else {
44
+ return pxa2xx_draw_fn_32[s->bpp];
45
+ }
46
+}
47
+
48
static void pxa2xx_lcdc_dma0_redraw_rot0(PXA2xxLCDState *s,
49
hwaddr addr, int *miny, int *maxy)
50
{
51
DisplaySurface *surface = qemu_console_surface(s->con);
52
int src_width, dest_width;
53
- drawfn fn = NULL;
54
- if (s->dest_width)
55
- fn = s->line_fn[s->transp][s->bpp];
56
+ drawfn fn = pxa2xx_drawfn(s);
57
if (!fn)
58
return;
59
60
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot90(PXA2xxLCDState *s,
61
{
62
DisplaySurface *surface = qemu_console_surface(s->con);
63
int src_width, dest_width;
64
- drawfn fn = NULL;
65
- if (s->dest_width)
66
- fn = s->line_fn[s->transp][s->bpp];
67
+ drawfn fn = pxa2xx_drawfn(s);
68
if (!fn)
69
return;
70
71
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot180(PXA2xxLCDState *s,
72
{
73
DisplaySurface *surface = qemu_console_surface(s->con);
74
int src_width, dest_width;
75
- drawfn fn = NULL;
76
- if (s->dest_width) {
77
- fn = s->line_fn[s->transp][s->bpp];
78
- }
79
+ drawfn fn = pxa2xx_drawfn(s);
80
if (!fn) {
81
return;
82
}
83
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot270(PXA2xxLCDState *s,
84
{
85
DisplaySurface *surface = qemu_console_surface(s->con);
86
int src_width, dest_width;
87
- drawfn fn = NULL;
88
- if (s->dest_width) {
89
- fn = s->line_fn[s->transp][s->bpp];
90
- }
91
+ drawfn fn = pxa2xx_drawfn(s);
92
if (!fn) {
93
return;
94
}
95
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pxa2xx_lcdc = {
96
}
97
};
98
99
-#define BITS 8
100
-#include "pxa2xx_template.h"
101
-#define BITS 15
102
-#include "pxa2xx_template.h"
103
-#define BITS 16
104
-#include "pxa2xx_template.h"
105
-#define BITS 24
106
-#include "pxa2xx_template.h"
107
-#define BITS 32
108
-#include "pxa2xx_template.h"
109
-
110
static const GraphicHwOps pxa2xx_ops = {
111
.invalidate = pxa2xx_invalidate_display,
112
.gfx_update = pxa2xx_update_display,
113
@@ -XXX,XX +XXX,XX @@ PXA2xxLCDState *pxa2xx_lcdc_init(MemoryRegion *sysmem,
114
hwaddr base, qemu_irq irq)
115
{
116
PXA2xxLCDState *s;
117
- DisplaySurface *surface;
118
119
s = (PXA2xxLCDState *) g_malloc0(sizeof(PXA2xxLCDState));
120
s->invalidated = 1;
121
@@ -XXX,XX +XXX,XX @@ PXA2xxLCDState *pxa2xx_lcdc_init(MemoryRegion *sysmem,
122
memory_region_add_subregion(sysmem, base, &s->iomem);
123
124
s->con = graphic_console_init(NULL, 0, &pxa2xx_ops, s);
125
- surface = qemu_console_surface(s->con);
126
-
127
- switch (surface_bits_per_pixel(surface)) {
128
- case 0:
129
- s->dest_width = 0;
130
- break;
131
- case 8:
132
- s->line_fn[0] = pxa2xx_draw_fn_8;
133
- s->line_fn[1] = pxa2xx_draw_fn_8t;
134
- s->dest_width = 1;
135
- break;
136
- case 15:
137
- s->line_fn[0] = pxa2xx_draw_fn_15;
138
- s->line_fn[1] = pxa2xx_draw_fn_15t;
139
- s->dest_width = 2;
140
- break;
141
- case 16:
142
- s->line_fn[0] = pxa2xx_draw_fn_16;
143
- s->line_fn[1] = pxa2xx_draw_fn_16t;
144
- s->dest_width = 2;
145
- break;
146
- case 24:
147
- s->line_fn[0] = pxa2xx_draw_fn_24;
148
- s->line_fn[1] = pxa2xx_draw_fn_24t;
149
- s->dest_width = 3;
150
- break;
151
- case 32:
152
- s->line_fn[0] = pxa2xx_draw_fn_32;
153
- s->line_fn[1] = pxa2xx_draw_fn_32t;
154
- s->dest_width = 4;
155
- break;
156
- default:
157
- fprintf(stderr, "%s: Bad color depth\n", __func__);
158
- exit(1);
159
- }
160
+ s->dest_width = 4;
161
162
vmstate_register(NULL, 0, &vmstate_pxa2xx_lcdc, s);
163
164
--
165
2.20.1
166
167
New patch
1
Since the dest_width is now always 4 because the output surface is
2
32bpp, we can replace the dest_width state field with a constant.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
6
Message-id: 20210211141515.8755-6-peter.maydell@linaro.org
7
---
8
hw/display/pxa2xx_lcd.c | 20 +++++++++++---------
9
1 file changed, 11 insertions(+), 9 deletions(-)
10
11
diff --git a/hw/display/pxa2xx_lcd.c b/hw/display/pxa2xx_lcd.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/display/pxa2xx_lcd.c
14
+++ b/hw/display/pxa2xx_lcd.c
15
@@ -XXX,XX +XXX,XX @@ typedef struct QEMU_PACKED {
16
#define LDCMD_SOFINT    (1 << 22)
17
#define LDCMD_PAL    (1 << 26)
18
19
+/* Size of a pixel in the QEMU UI output surface, in bytes */
20
+#define DEST_PIXEL_WIDTH 4
21
+
22
#define BITS 32
23
#include "pxa2xx_template.h"
24
25
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot0(PXA2xxLCDState *s,
26
else if (s->bpp > pxa_lcdc_8bpp)
27
src_width *= 2;
28
29
- dest_width = s->xres * s->dest_width;
30
+ dest_width = s->xres * DEST_PIXEL_WIDTH;
31
*miny = 0;
32
if (s->invalidated) {
33
framebuffer_update_memory_section(&s->fbsection, s->sysmem,
34
addr, s->yres, src_width);
35
}
36
framebuffer_update_display(surface, &s->fbsection, s->xres, s->yres,
37
- src_width, dest_width, s->dest_width,
38
+ src_width, dest_width, DEST_PIXEL_WIDTH,
39
s->invalidated,
40
fn, s->dma_ch[0].palette, miny, maxy);
41
}
42
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot90(PXA2xxLCDState *s,
43
else if (s->bpp > pxa_lcdc_8bpp)
44
src_width *= 2;
45
46
- dest_width = s->yres * s->dest_width;
47
+ dest_width = s->yres * DEST_PIXEL_WIDTH;
48
*miny = 0;
49
if (s->invalidated) {
50
framebuffer_update_memory_section(&s->fbsection, s->sysmem,
51
addr, s->yres, src_width);
52
}
53
framebuffer_update_display(surface, &s->fbsection, s->xres, s->yres,
54
- src_width, s->dest_width, -dest_width,
55
+ src_width, DEST_PIXEL_WIDTH, -dest_width,
56
s->invalidated,
57
fn, s->dma_ch[0].palette,
58
miny, maxy);
59
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot180(PXA2xxLCDState *s,
60
src_width *= 2;
61
}
62
63
- dest_width = s->xres * s->dest_width;
64
+ dest_width = s->xres * DEST_PIXEL_WIDTH;
65
*miny = 0;
66
if (s->invalidated) {
67
framebuffer_update_memory_section(&s->fbsection, s->sysmem,
68
addr, s->yres, src_width);
69
}
70
framebuffer_update_display(surface, &s->fbsection, s->xres, s->yres,
71
- src_width, -dest_width, -s->dest_width,
72
+ src_width, -dest_width, -DEST_PIXEL_WIDTH,
73
s->invalidated,
74
fn, s->dma_ch[0].palette, miny, maxy);
75
}
76
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_lcdc_dma0_redraw_rot270(PXA2xxLCDState *s,
77
src_width *= 2;
78
}
79
80
- dest_width = s->yres * s->dest_width;
81
+ dest_width = s->yres * DEST_PIXEL_WIDTH;
82
*miny = 0;
83
if (s->invalidated) {
84
framebuffer_update_memory_section(&s->fbsection, s->sysmem,
85
addr, s->yres, src_width);
86
}
87
framebuffer_update_display(surface, &s->fbsection, s->xres, s->yres,
88
- src_width, -s->dest_width, dest_width,
89
+ src_width, -DEST_PIXEL_WIDTH, dest_width,
90
s->invalidated,
91
fn, s->dma_ch[0].palette,
92
miny, maxy);
93
@@ -XXX,XX +XXX,XX @@ PXA2xxLCDState *pxa2xx_lcdc_init(MemoryRegion *sysmem,
94
memory_region_add_subregion(sysmem, base, &s->iomem);
95
96
s->con = graphic_console_init(NULL, 0, &pxa2xx_ops, s);
97
- s->dest_width = 4;
98
99
vmstate_register(NULL, 0, &vmstate_pxa2xx_lcdc, s);
100
101
--
102
2.20.1
103
104
1
In the various helper functions for v7M/v8M instructions, use
1
Now that BITS is always 32, expand out all its uses in the template
2
the _ra versions of cpu_stl_data() and friends. Otherwise we
2
header, including removing now-useless uses of the glue() macro.
3
may get wrong behaviour or an assert() due to not being able
4
to locate the TB if there is an exception on the memory access
5
or if it performs an IO operation when in icount mode.
6
3
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
9
Message-id: 20190617175317.27557-5-peter.maydell@linaro.org
6
Message-id: 20210211141515.8755-7-peter.maydell@linaro.org
10
---
7
---
11
target/arm/m_helper.c | 21 ++++++++++++---------
8
hw/display/pxa2xx_template.h | 110 ++++++++++++++---------------------
12
1 file changed, 12 insertions(+), 9 deletions(-)
9
1 file changed, 45 insertions(+), 65 deletions(-)
13
10
14
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
11
diff --git a/hw/display/pxa2xx_template.h b/hw/display/pxa2xx_template.h
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/m_helper.c
13
--- a/hw/display/pxa2xx_template.h
17
+++ b/target/arm/m_helper.c
14
+++ b/hw/display/pxa2xx_template.h
18
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
15
@@ -XXX,XX +XXX,XX @@
19
}
16
*/
20
17
21
/* Note that these stores can throw exceptions on MPU faults */
18
# define SKIP_PIXEL(to)        to += deststep
22
- cpu_stl_data(env, sp, nextinst);
19
-#if BITS == 8
23
- cpu_stl_data(env, sp + 4, saved_psr);
20
-# define COPY_PIXEL(to, from) do { *to = from; SKIP_PIXEL(to); } while (0)
24
+ cpu_stl_data_ra(env, sp, nextinst, GETPC());
21
-#elif BITS == 15 || BITS == 16
25
+ cpu_stl_data_ra(env, sp + 4, saved_psr, GETPC());
22
-# define COPY_PIXEL(to, from) \
26
23
- do { \
27
env->regs[13] = sp;
24
- *(uint16_t *) to = from; \
28
env->regs[14] = 0xfeffffff;
25
- SKIP_PIXEL(to); \
29
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
26
- } while (0)
30
/* fptr is the value of Rn, the frame pointer we store the FP regs to */
27
-#elif BITS == 24
31
bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
28
-# define COPY_PIXEL(to, from) \
32
bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
29
- do { \
33
+ uintptr_t ra = GETPC();
30
- *(uint16_t *) to = from; \
34
31
- *(to + 2) = (from) >> 16; \
35
assert(env->v7m.secure);
32
- SKIP_PIXEL(to); \
36
33
- } while (0)
37
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
34
-#elif BITS == 32
38
* Note that we do not use v7m_stack_write() here, because the
35
# define COPY_PIXEL(to, from) \
39
* accesses should not set the FSR bits for stacking errors if they
36
do { \
40
* fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
37
*(uint32_t *) to = from; \
41
- * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
38
SKIP_PIXEL(to); \
42
+ * or AccType_LAZYFP). Faults in cpu_stl_data_ra() will throw exceptions
39
} while (0)
43
* and longjmp out.
40
-#else
44
*/
41
-# error unknown bit depth
45
if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
42
-#endif
46
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
43
47
if (i >= 16) {
44
#ifdef HOST_WORDS_BIGENDIAN
48
faddr += 8; /* skip the slot for the FPSCR */
45
# define SWAP_WORDS    1
49
}
46
@@ -XXX,XX +XXX,XX @@
50
- cpu_stl_data(env, faddr, slo);
47
#define FN_2(x)        FN(x + 1) FN(x)
51
- cpu_stl_data(env, faddr + 4, shi);
48
#define FN_4(x)        FN_2(x + 2) FN_2(x)
52
+ cpu_stl_data_ra(env, faddr, slo, ra);
49
53
+ cpu_stl_data_ra(env, faddr + 4, shi, ra);
50
-static void glue(pxa2xx_draw_line2_, BITS)(void *opaque,
54
}
51
+static void pxa2xx_draw_line2(void *opaque,
55
- cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
52
uint8_t *dest, const uint8_t *src, int width, int deststep)
56
+ cpu_stl_data_ra(env, fptr + 0x40, vfp_get_fpscr(env), ra);
53
{
57
54
uint32_t *palette = opaque;
58
/*
55
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line2_, BITS)(void *opaque,
59
* If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
56
}
60
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
57
}
61
58
62
void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
59
-static void glue(pxa2xx_draw_line4_, BITS)(void *opaque,
63
{
60
+static void pxa2xx_draw_line4(void *opaque,
64
+ uintptr_t ra = GETPC();
61
uint8_t *dest, const uint8_t *src, int width, int deststep)
65
+
62
{
66
/* fptr is the value of Rn, the frame pointer we load the FP regs from */
63
uint32_t *palette = opaque;
67
assert(env->v7m.secure);
64
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line4_, BITS)(void *opaque,
68
65
}
69
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
66
}
70
faddr += 8; /* skip the slot for the FPSCR */
67
71
}
68
-static void glue(pxa2xx_draw_line8_, BITS)(void *opaque,
72
69
+static void pxa2xx_draw_line8(void *opaque,
73
- slo = cpu_ldl_data(env, faddr);
70
uint8_t *dest, const uint8_t *src, int width, int deststep)
74
- shi = cpu_ldl_data(env, faddr + 4);
71
{
75
+ slo = cpu_ldl_data_ra(env, faddr, ra);
72
uint32_t *palette = opaque;
76
+ shi = cpu_ldl_data_ra(env, faddr + 4, ra);
73
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line8_, BITS)(void *opaque,
77
74
}
78
dn = (uint64_t) shi << 32 | slo;
75
}
79
*aa32_vfp_dreg(env, i / 2) = dn;
76
80
}
77
-static void glue(pxa2xx_draw_line16_, BITS)(void *opaque,
81
- fpscr = cpu_ldl_data(env, fptr + 0x40);
78
+static void pxa2xx_draw_line16(void *opaque,
82
+ fpscr = cpu_ldl_data_ra(env, fptr + 0x40, ra);
79
uint8_t *dest, const uint8_t *src, int width, int deststep)
83
vfp_set_fpscr(env, fpscr);
80
{
84
}
81
uint32_t data;
82
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line16_, BITS)(void *opaque,
83
data >>= 6;
84
r = (data & 0x1f) << 3;
85
data >>= 5;
86
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
87
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
88
b = (data & 0x1f) << 3;
89
data >>= 5;
90
g = (data & 0x3f) << 2;
91
data >>= 6;
92
r = (data & 0x1f) << 3;
93
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
94
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
95
width -= 2;
96
src += 4;
97
}
98
}
99
100
-static void glue(pxa2xx_draw_line16t_, BITS)(void *opaque,
101
+static void pxa2xx_draw_line16t(void *opaque,
102
uint8_t *dest, const uint8_t *src, int width, int deststep)
103
{
104
uint32_t data;
105
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line16t_, BITS)(void *opaque,
106
if (data & 1)
107
SKIP_PIXEL(dest);
108
else
109
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
110
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
111
data >>= 1;
112
b = (data & 0x1f) << 3;
113
data >>= 5;
114
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line16t_, BITS)(void *opaque,
115
if (data & 1)
116
SKIP_PIXEL(dest);
117
else
118
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
119
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
120
width -= 2;
121
src += 4;
122
}
123
}
124
125
-static void glue(pxa2xx_draw_line18_, BITS)(void *opaque,
126
+static void pxa2xx_draw_line18(void *opaque,
127
uint8_t *dest, const uint8_t *src, int width, int deststep)
128
{
129
uint32_t data;
130
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line18_, BITS)(void *opaque,
131
g = (data & 0x3f) << 2;
132
data >>= 6;
133
r = (data & 0x3f) << 2;
134
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
135
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
136
width -= 1;
137
src += 4;
138
}
139
}
140
141
/* The wicked packed format */
142
-static void glue(pxa2xx_draw_line18p_, BITS)(void *opaque,
143
+static void pxa2xx_draw_line18p(void *opaque,
144
uint8_t *dest, const uint8_t *src, int width, int deststep)
145
{
146
uint32_t data[3];
147
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line18p_, BITS)(void *opaque,
148
data[0] >>= 6;
149
r = (data[0] & 0x3f) << 2;
150
data[0] >>= 12;
151
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
152
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
153
b = (data[0] & 0x3f) << 2;
154
data[0] >>= 6;
155
g = ((data[1] & 0xf) << 4) | (data[0] << 2);
156
data[1] >>= 4;
157
r = (data[1] & 0x3f) << 2;
158
data[1] >>= 12;
159
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
160
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
161
b = (data[1] & 0x3f) << 2;
162
data[1] >>= 6;
163
g = (data[1] & 0x3f) << 2;
164
data[1] >>= 6;
165
r = ((data[2] & 0x3) << 6) | (data[1] << 2);
166
data[2] >>= 8;
167
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
168
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
169
b = (data[2] & 0x3f) << 2;
170
data[2] >>= 6;
171
g = (data[2] & 0x3f) << 2;
172
data[2] >>= 6;
173
r = data[2] << 2;
174
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
175
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
176
width -= 4;
177
}
178
}
179
180
-static void glue(pxa2xx_draw_line19_, BITS)(void *opaque,
181
+static void pxa2xx_draw_line19(void *opaque,
182
uint8_t *dest, const uint8_t *src, int width, int deststep)
183
{
184
uint32_t data;
185
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line19_, BITS)(void *opaque,
186
if (data & 1)
187
SKIP_PIXEL(dest);
188
else
189
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
190
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
191
width -= 1;
192
src += 4;
193
}
194
}
195
196
/* The wicked packed format */
197
-static void glue(pxa2xx_draw_line19p_, BITS)(void *opaque,
198
+static void pxa2xx_draw_line19p(void *opaque,
199
uint8_t *dest, const uint8_t *src, int width, int deststep)
200
{
201
uint32_t data[3];
202
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line19p_, BITS)(void *opaque,
203
if (data[0] & 1)
204
SKIP_PIXEL(dest);
205
else
206
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
207
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
208
data[0] >>= 6;
209
b = (data[0] & 0x3f) << 2;
210
data[0] >>= 6;
211
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line19p_, BITS)(void *opaque,
212
if (data[1] & 1)
213
SKIP_PIXEL(dest);
214
else
215
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
216
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
217
data[1] >>= 6;
218
b = (data[1] & 0x3f) << 2;
219
data[1] >>= 6;
220
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line19p_, BITS)(void *opaque,
221
if (data[2] & 1)
222
SKIP_PIXEL(dest);
223
else
224
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
225
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
226
data[2] >>= 6;
227
b = (data[2] & 0x3f) << 2;
228
data[2] >>= 6;
229
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line19p_, BITS)(void *opaque,
230
if (data[2] & 1)
231
SKIP_PIXEL(dest);
232
else
233
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
234
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
235
width -= 4;
236
}
237
}
238
239
-static void glue(pxa2xx_draw_line24_, BITS)(void *opaque,
240
+static void pxa2xx_draw_line24(void *opaque,
241
uint8_t *dest, const uint8_t *src, int width, int deststep)
242
{
243
uint32_t data;
244
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line24_, BITS)(void *opaque,
245
g = data & 0xff;
246
data >>= 8;
247
r = data & 0xff;
248
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
249
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
250
width -= 1;
251
src += 4;
252
}
253
}
254
255
-static void glue(pxa2xx_draw_line24t_, BITS)(void *opaque,
256
+static void pxa2xx_draw_line24t(void *opaque,
257
uint8_t *dest, const uint8_t *src, int width, int deststep)
258
{
259
uint32_t data;
260
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line24t_, BITS)(void *opaque,
261
if (data & 1)
262
SKIP_PIXEL(dest);
263
else
264
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
265
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
266
width -= 1;
267
src += 4;
268
}
269
}
270
271
-static void glue(pxa2xx_draw_line25_, BITS)(void *opaque,
272
+static void pxa2xx_draw_line25(void *opaque,
273
uint8_t *dest, const uint8_t *src, int width, int deststep)
274
{
275
uint32_t data;
276
@@ -XXX,XX +XXX,XX @@ static void glue(pxa2xx_draw_line25_, BITS)(void *opaque,
277
if (data & 1)
278
SKIP_PIXEL(dest);
279
else
280
- COPY_PIXEL(dest, glue(rgb_to_pixel, BITS)(r, g, b));
281
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
282
width -= 1;
283
src += 4;
284
}
285
}
286
287
/* Overlay planes disabled, no transparency */
288
-static drawfn glue(pxa2xx_draw_fn_, BITS)[16] =
289
+static drawfn pxa2xx_draw_fn_32[16] =
290
{
291
[0 ... 0xf] = NULL,
292
- [pxa_lcdc_2bpp] = glue(pxa2xx_draw_line2_, BITS),
293
- [pxa_lcdc_4bpp] = glue(pxa2xx_draw_line4_, BITS),
294
- [pxa_lcdc_8bpp] = glue(pxa2xx_draw_line8_, BITS),
295
- [pxa_lcdc_16bpp] = glue(pxa2xx_draw_line16_, BITS),
296
- [pxa_lcdc_18bpp] = glue(pxa2xx_draw_line18_, BITS),
297
- [pxa_lcdc_18pbpp] = glue(pxa2xx_draw_line18p_, BITS),
298
- [pxa_lcdc_24bpp] = glue(pxa2xx_draw_line24_, BITS),
299
+ [pxa_lcdc_2bpp] = pxa2xx_draw_line2,
300
+ [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
301
+ [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
302
+ [pxa_lcdc_16bpp] = pxa2xx_draw_line16,
303
+ [pxa_lcdc_18bpp] = pxa2xx_draw_line18,
304
+ [pxa_lcdc_18pbpp] = pxa2xx_draw_line18p,
305
+ [pxa_lcdc_24bpp] = pxa2xx_draw_line24,
306
};
307
308
/* Overlay planes enabled, transparency used */
309
-static drawfn glue(glue(pxa2xx_draw_fn_, BITS), t)[16] =
310
+static drawfn pxa2xx_draw_fn_32t[16] =
311
{
312
[0 ... 0xf] = NULL,
313
- [pxa_lcdc_4bpp] = glue(pxa2xx_draw_line4_, BITS),
314
- [pxa_lcdc_8bpp] = glue(pxa2xx_draw_line8_, BITS),
315
- [pxa_lcdc_16bpp] = glue(pxa2xx_draw_line16t_, BITS),
316
- [pxa_lcdc_19bpp] = glue(pxa2xx_draw_line19_, BITS),
317
- [pxa_lcdc_19pbpp] = glue(pxa2xx_draw_line19p_, BITS),
318
- [pxa_lcdc_24bpp] = glue(pxa2xx_draw_line24t_, BITS),
319
- [pxa_lcdc_25bpp] = glue(pxa2xx_draw_line25_, BITS),
320
+ [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
321
+ [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
322
+ [pxa_lcdc_16bpp] = pxa2xx_draw_line16t,
323
+ [pxa_lcdc_19bpp] = pxa2xx_draw_line19,
324
+ [pxa_lcdc_19pbpp] = pxa2xx_draw_line19p,
325
+ [pxa_lcdc_24bpp] = pxa2xx_draw_line24t,
326
+ [pxa_lcdc_25bpp] = pxa2xx_draw_line25,
327
};
328
329
-#undef BITS
330
#undef COPY_PIXEL
331
#undef SKIP_PIXEL
85
332
86
--
333
--
87
2.20.1
334
2.20.1
88
335
89
336
1
Thumb instructions in an IT block are set up to be conditionally
1
We're about to move code from the template header into pxa2xx_lcd.c.
2
executed depending on a set of condition bits encoded into the IT
2
Before doing that, make coding style fixes so checkpatch doesn't
3
bits of the CPSR/XPSR. The architecture specifies that if the
3
complain about the patch which moves the code. This commit fixes
4
condition bits are 0b1111 this means "always execute" (like 0b1110),
4
missing braces in the SKIP_PIXEL() macro definition and in if()
5
not "never execute"; we were treating it as "never execute". (See
5
statements.
6
the ConditionHolds() pseudocode in both the A-profile and M-profile
7
Arm ARM.)
8
9
This is a bit of an obscure corner case, because the only legal
10
way to get to an 0b1111 set of condbits is to do an exception
11
return which sets the XPSR/CPSR up that way. An IT instruction
12
which encodes a condition sequence that would include an 0b1111 is
13
UNPREDICTABLE, and for v8A the CONSTRAINED UNPREDICTABLE choices
14
for such an IT insn are to NOP, UNDEF, or treat 0b1111 like 0b1110.
15
Add a comment noting that we take the latter option.
16
6
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
19
Message-id: 20190617175317.27557-7-peter.maydell@linaro.org
9
Message-id: 20210211141515.8755-8-peter.maydell@linaro.org
20
---
10
---
21
target/arm/translate.c | 15 +++++++++++++--
11
hw/display/pxa2xx_template.h | 47 +++++++++++++++++++++---------------
22
1 file changed, 13 insertions(+), 2 deletions(-)
12
1 file changed, 28 insertions(+), 19 deletions(-)
23
13
24
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/hw/display/pxa2xx_template.h b/hw/display/pxa2xx_template.h
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/translate.c
16
--- a/hw/display/pxa2xx_template.h
27
+++ b/target/arm/translate.c
17
+++ b/hw/display/pxa2xx_template.h
28
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@
29
gen_nop_hint(s, (insn >> 4) & 0xf);
19
* Framebuffer format conversion routines.
30
break;
20
*/
31
}
21
32
- /* If Then. */
22
-# define SKIP_PIXEL(to)        to += deststep
33
+ /*
23
+# define SKIP_PIXEL(to) do { to += deststep; } while (0)
34
+ * IT (If-Then)
24
# define COPY_PIXEL(to, from) \
35
+ *
25
do { \
36
+ * Combinations of firstcond and mask which set up an 0b1111
26
*(uint32_t *) to = from; \
37
+ * condition are UNPREDICTABLE; we take the CONSTRAINED
27
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line16t(void *opaque,
38
+ * UNPREDICTABLE choice to treat 0b1111 the same as 0b1110,
28
data >>= 5;
39
+ * i.e. both meaning "execute always".
29
r = (data & 0x1f) << 3;
40
+ */
30
data >>= 5;
41
s->condexec_cond = (insn >> 4) & 0xe;
31
- if (data & 1)
42
s->condexec_mask = insn & 0x1f;
32
+ if (data & 1) {
43
/* No actual code generated for this insn, just setup state. */
33
SKIP_PIXEL(dest);
44
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
34
- else
45
if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
35
+ } else {
46
uint32_t cond = dc->condexec_cond;
36
COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
47
37
+ }
48
- if (cond != 0x0e) { /* Skip conditional when condition is AL. */
38
data >>= 1;
49
+ /*
39
b = (data & 0x1f) << 3;
50
+ * Conditionally skip the insn. Note that both 0xe and 0xf mean
40
data >>= 5;
51
+ * "always"; 0xf is not "never".
41
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line16t(void *opaque,
52
+ */
42
data >>= 5;
53
+ if (cond < 0x0e) {
43
r = (data & 0x1f) << 3;
54
arm_skip_unless(dc, cond);
44
data >>= 5;
         }
-        if (data & 1)
+        if (data & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         width -= 2;
         src += 4;
     }
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19(void *opaque,
         data >>= 6;
         r = (data & 0x3f) << 2;
         data >>= 6;
-        if (data & 1)
+        if (data & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         width -= 1;
         src += 4;
     }
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19p(void *opaque,
         data[0] >>= 6;
         r = (data[0] & 0x3f) << 2;
         data[0] >>= 6;
-        if (data[0] & 1)
+        if (data[0] & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         data[0] >>= 6;
         b = (data[0] & 0x3f) << 2;
         data[0] >>= 6;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19p(void *opaque,
         data[1] >>= 4;
         r = (data[1] & 0x3f) << 2;
         data[1] >>= 6;
-        if (data[1] & 1)
+        if (data[1] & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         data[1] >>= 6;
         b = (data[1] & 0x3f) << 2;
         data[1] >>= 6;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19p(void *opaque,
         data[1] >>= 6;
         r = ((data[2] & 0x3) << 6) | (data[1] << 2);
         data[2] >>= 2;
-        if (data[2] & 1)
+        if (data[2] & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         data[2] >>= 6;
         b = (data[2] & 0x3f) << 2;
         data[2] >>= 6;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19p(void *opaque,
         data[2] >>= 6;
         r = data[2] << 2;
         data[2] >>= 6;
-        if (data[2] & 1)
+        if (data[2] & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         width -= 4;
     }
 }
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line24t(void *opaque,
         data >>= 8;
         r = data & 0xff;
         data >>= 8;
-        if (data & 1)
+        if (data & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         width -= 1;
         src += 4;
     }
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line25(void *opaque,
         data >>= 8;
         r = data & 0xff;
         data >>= 8;
-        if (data & 1)
+        if (data & 1) {
             SKIP_PIXEL(dest);
-        else
+        } else {
             COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
+        }
         width -= 1;
         src += 4;
     }
 }
--
2.20.1
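Every one of the if/else bodies given braces in the patch above implements the same transparency test: in the overlay ("t", 19bpp, 25bpp) formats, the bit directly above the colour channels marks a transparent pixel, so the destination pixel is skipped instead of overwritten. A minimal standalone sketch of that test for the 25bpp layout, where `pixel25_is_transparent` is an illustrative name rather than a QEMU API:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch (not QEMU code) of the transparency check in
 * pxa2xx_draw_line25(): 8 bits each of blue, green and red occupy
 * bits 0-23, and bit 24 is the transparency flag.
 */
static int pixel25_is_transparent(uint32_t data)
{
    return (data >> 24) & 1;  /* set => SKIP_PIXEL, clear => COPY_PIXEL */
}
```

When the flag is set, the draw loop just advances `dest` (SKIP_PIXEL); otherwise it stores the converted 32-bit pixel (COPY_PIXEL).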
We're about to move code from the template header into pxa2xx_lcd.c.
Before doing that, make coding style fixes so checkpatch doesn't
complain about the patch which moves the code. This commit is
whitespace changes only:
* avoid hard-coded tabs
* fix indent on function prototypes
* no newline before open brace on array definitions

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20210211141515.8755-9-peter.maydell@linaro.org
---
 hw/display/pxa2xx_template.h | 66 +++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 34 deletions(-)
diff --git a/hw/display/pxa2xx_template.h b/hw/display/pxa2xx_template.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/pxa2xx_template.h
+++ b/hw/display/pxa2xx_template.h
@@ -XXX,XX +XXX,XX @@
 } while (0)

 #ifdef HOST_WORDS_BIGENDIAN
-# define SWAP_WORDS    1
+# define SWAP_WORDS 1
 #endif

-#define FN_2(x)        FN(x + 1) FN(x)
-#define FN_4(x)        FN_2(x + 2) FN_2(x)
+#define FN_2(x) FN(x + 1) FN(x)
+#define FN_4(x) FN_2(x + 2) FN_2(x)

-static void pxa2xx_draw_line2(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line2(void *opaque, uint8_t *dest, const uint8_t *src,
+                              int width, int deststep)
 {
     uint32_t *palette = opaque;
     uint32_t data;
     while (width > 0) {
         data = *(uint32_t *) src;
-#define FN(x)        COPY_PIXEL(dest, palette[(data >> ((x) * 2)) & 3]);
+#define FN(x) COPY_PIXEL(dest, palette[(data >> ((x) * 2)) & 3]);
 #ifdef SWAP_WORDS
         FN_4(12)
         FN_4(8)
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line2(void *opaque,
     }
 }

-static void pxa2xx_draw_line4(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line4(void *opaque, uint8_t *dest, const uint8_t *src,
+                              int width, int deststep)
 {
     uint32_t *palette = opaque;
     uint32_t data;
     while (width > 0) {
         data = *(uint32_t *) src;
-#define FN(x)        COPY_PIXEL(dest, palette[(data >> ((x) * 4)) & 0xf]);
+#define FN(x) COPY_PIXEL(dest, palette[(data >> ((x) * 4)) & 0xf]);
 #ifdef SWAP_WORDS
         FN_2(6)
         FN_2(4)
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line4(void *opaque,
     }
 }

-static void pxa2xx_draw_line8(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line8(void *opaque, uint8_t *dest, const uint8_t *src,
+                              int width, int deststep)
 {
     uint32_t *palette = opaque;
     uint32_t data;
     while (width > 0) {
         data = *(uint32_t *) src;
-#define FN(x)        COPY_PIXEL(dest, palette[(data >> (x)) & 0xff]);
+#define FN(x) COPY_PIXEL(dest, palette[(data >> (x)) & 0xff]);
 #ifdef SWAP_WORDS
         FN(24)
         FN(16)
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line8(void *opaque,
     }
 }

-static void pxa2xx_draw_line16(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line16(void *opaque, uint8_t *dest, const uint8_t *src,
+                               int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line16(void *opaque,
     }
 }

-static void pxa2xx_draw_line16t(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line16t(void *opaque, uint8_t *dest, const uint8_t *src,
+                                int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line16t(void *opaque,
     }
 }

-static void pxa2xx_draw_line18(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line18(void *opaque, uint8_t *dest, const uint8_t *src,
+                               int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line18(void *opaque,
 }

 /* The wicked packed format */
-static void pxa2xx_draw_line18p(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line18p(void *opaque, uint8_t *dest, const uint8_t *src,
+                                int width, int deststep)
 {
     uint32_t data[3];
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line18p(void *opaque,
     }
 }

-static void pxa2xx_draw_line19(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line19(void *opaque, uint8_t *dest, const uint8_t *src,
+                               int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19(void *opaque,
 }

 /* The wicked packed format */
-static void pxa2xx_draw_line19p(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line19p(void *opaque, uint8_t *dest, const uint8_t *src,
+                                int width, int deststep)
 {
     uint32_t data[3];
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line19p(void *opaque,
     }
 }

-static void pxa2xx_draw_line24(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line24(void *opaque, uint8_t *dest, const uint8_t *src,
+                               int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line24(void *opaque,
     }
 }

-static void pxa2xx_draw_line24t(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line24t(void *opaque, uint8_t *dest, const uint8_t *src,
+                                int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line24t(void *opaque,
     }
 }

-static void pxa2xx_draw_line25(void *opaque,
-                uint8_t *dest, const uint8_t *src, int width, int deststep)
+static void pxa2xx_draw_line25(void *opaque, uint8_t *dest, const uint8_t *src,
+                               int width, int deststep)
 {
     uint32_t data;
     unsigned int r, g, b;
@@ -XXX,XX +XXX,XX @@ static void pxa2xx_draw_line25(void *opaque,
 }

 /* Overlay planes disabled, no transparency */
-static drawfn pxa2xx_draw_fn_32[16] =
-{
+static drawfn pxa2xx_draw_fn_32[16] = {
     [0 ... 0xf] = NULL,
     [pxa_lcdc_2bpp] = pxa2xx_draw_line2,
     [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
@@ -XXX,XX +XXX,XX @@ static drawfn pxa2xx_draw_fn_32[16] =
 };

 /* Overlay planes enabled, transparency used */
-static drawfn pxa2xx_draw_fn_32t[16] =
-{
+static drawfn pxa2xx_draw_fn_32t[16] = {
     [0 ... 0xf] = NULL,
     [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
     [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
--
2.20.1
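For reference, pxa2xx_draw_line16(), whose prototype is reindented above, unpacks guest RGB565 pixels into the 8-bit-per-channel values fed to rgb_to_pixel32(). A self-contained sketch of that unpacking, where `unpack_rgb565` is a hypothetical name but the shifts mirror the function body:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Widen one RGB565 pixel to 0x00RRGGBB, per-pixel as in
 * pxa2xx_draw_line16(): 5 bits blue, 6 bits green, 5 bits red, each
 * shifted to the top of its 8-bit channel.
 */
static uint32_t unpack_rgb565(uint16_t data)
{
    unsigned int b = (data & 0x1f) << 3;
    data >>= 5;
    unsigned int g = (data & 0x3f) << 2;
    data >>= 6;
    unsigned int r = (data & 0x1f) << 3;
    return (r << 16) | (g << 8) | b;  /* what rgb_to_pixel32(r, g, b) packs */
}
```

Note that the low channel bits are left zero rather than replicated, so full-scale 0xffff maps to 0xf8fcf8, not 0xffffff.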
The template header is now included only once; just inline its contents
in hw/display/pxa2xx_lcd.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20210211141515.8755-10-peter.maydell@linaro.org
---
 hw/display/pxa2xx_template.h | 434 -----------------------------------
 hw/display/pxa2xx_lcd.c      | 427 +++++++++++++++++++++++++++++++++-
 2 files changed, 425 insertions(+), 436 deletions(-)
 delete mode 100644 hw/display/pxa2xx_template.h

13
diff --git a/hw/display/pxa2xx_template.h b/hw/display/pxa2xx_template.h
14
deleted file mode 100644
15
index XXXXXXX..XXXXXXX
16
--- a/hw/display/pxa2xx_template.h
17
+++ /dev/null
18
@@ -XXX,XX +XXX,XX @@
19
-/*
20
- * Intel XScale PXA255/270 LCDC emulation.
21
- *
22
- * Copyright (c) 2006 Openedhand Ltd.
23
- * Written by Andrzej Zaborowski <balrog@zabor.org>
24
- *
25
- * This code is licensed under the GPLv2.
26
- *
27
- * Framebuffer format conversion routines.
28
- */
29
-
30
-# define SKIP_PIXEL(to) do { to += deststep; } while (0)
31
-# define COPY_PIXEL(to, from) \
32
- do { \
33
- *(uint32_t *) to = from; \
34
- SKIP_PIXEL(to); \
35
- } while (0)
36
-
37
-#ifdef HOST_WORDS_BIGENDIAN
38
-# define SWAP_WORDS 1
39
-#endif
40
-
41
-#define FN_2(x) FN(x + 1) FN(x)
42
-#define FN_4(x) FN_2(x + 2) FN_2(x)
43
-
44
-static void pxa2xx_draw_line2(void *opaque, uint8_t *dest, const uint8_t *src,
45
- int width, int deststep)
46
-{
47
- uint32_t *palette = opaque;
48
- uint32_t data;
49
- while (width > 0) {
50
- data = *(uint32_t *) src;
51
-#define FN(x) COPY_PIXEL(dest, palette[(data >> ((x) * 2)) & 3]);
52
-#ifdef SWAP_WORDS
53
- FN_4(12)
54
- FN_4(8)
55
- FN_4(4)
56
- FN_4(0)
57
-#else
58
- FN_4(0)
59
- FN_4(4)
60
- FN_4(8)
61
- FN_4(12)
62
-#endif
63
-#undef FN
64
- width -= 16;
65
- src += 4;
66
- }
67
-}
68
-
69
-static void pxa2xx_draw_line4(void *opaque, uint8_t *dest, const uint8_t *src,
70
- int width, int deststep)
71
-{
72
- uint32_t *palette = opaque;
73
- uint32_t data;
74
- while (width > 0) {
75
- data = *(uint32_t *) src;
76
-#define FN(x) COPY_PIXEL(dest, palette[(data >> ((x) * 4)) & 0xf]);
77
-#ifdef SWAP_WORDS
78
- FN_2(6)
79
- FN_2(4)
80
- FN_2(2)
81
- FN_2(0)
82
-#else
83
- FN_2(0)
84
- FN_2(2)
85
- FN_2(4)
86
- FN_2(6)
87
-#endif
88
-#undef FN
89
- width -= 8;
90
- src += 4;
91
- }
92
-}
93
-
94
-static void pxa2xx_draw_line8(void *opaque, uint8_t *dest, const uint8_t *src,
95
- int width, int deststep)
96
-{
97
- uint32_t *palette = opaque;
98
- uint32_t data;
99
- while (width > 0) {
100
- data = *(uint32_t *) src;
101
-#define FN(x) COPY_PIXEL(dest, palette[(data >> (x)) & 0xff]);
102
-#ifdef SWAP_WORDS
103
- FN(24)
104
- FN(16)
105
- FN(8)
106
- FN(0)
107
-#else
108
- FN(0)
109
- FN(8)
110
- FN(16)
111
- FN(24)
112
-#endif
113
-#undef FN
114
- width -= 4;
115
- src += 4;
116
- }
117
-}
118
-
119
-static void pxa2xx_draw_line16(void *opaque, uint8_t *dest, const uint8_t *src,
120
- int width, int deststep)
121
-{
122
- uint32_t data;
123
- unsigned int r, g, b;
124
- while (width > 0) {
125
- data = *(uint32_t *) src;
126
-#ifdef SWAP_WORDS
127
- data = bswap32(data);
128
-#endif
129
- b = (data & 0x1f) << 3;
130
- data >>= 5;
131
- g = (data & 0x3f) << 2;
132
- data >>= 6;
133
- r = (data & 0x1f) << 3;
134
- data >>= 5;
135
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
136
- b = (data & 0x1f) << 3;
137
- data >>= 5;
138
- g = (data & 0x3f) << 2;
139
- data >>= 6;
140
- r = (data & 0x1f) << 3;
141
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
142
- width -= 2;
143
- src += 4;
144
- }
145
-}
146
-
147
-static void pxa2xx_draw_line16t(void *opaque, uint8_t *dest, const uint8_t *src,
148
- int width, int deststep)
149
-{
150
- uint32_t data;
151
- unsigned int r, g, b;
152
- while (width > 0) {
153
- data = *(uint32_t *) src;
154
-#ifdef SWAP_WORDS
155
- data = bswap32(data);
156
-#endif
157
- b = (data & 0x1f) << 3;
158
- data >>= 5;
159
- g = (data & 0x1f) << 3;
160
- data >>= 5;
161
- r = (data & 0x1f) << 3;
162
- data >>= 5;
163
- if (data & 1) {
164
- SKIP_PIXEL(dest);
165
- } else {
166
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
167
- }
168
- data >>= 1;
169
- b = (data & 0x1f) << 3;
170
- data >>= 5;
171
- g = (data & 0x1f) << 3;
172
- data >>= 5;
173
- r = (data & 0x1f) << 3;
174
- data >>= 5;
175
- if (data & 1) {
176
- SKIP_PIXEL(dest);
177
- } else {
178
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
179
- }
180
- width -= 2;
181
- src += 4;
182
- }
183
-}
184
-
185
-static void pxa2xx_draw_line18(void *opaque, uint8_t *dest, const uint8_t *src,
186
- int width, int deststep)
187
-{
188
- uint32_t data;
189
- unsigned int r, g, b;
190
- while (width > 0) {
191
- data = *(uint32_t *) src;
192
-#ifdef SWAP_WORDS
193
- data = bswap32(data);
194
-#endif
195
- b = (data & 0x3f) << 2;
196
- data >>= 6;
197
- g = (data & 0x3f) << 2;
198
- data >>= 6;
199
- r = (data & 0x3f) << 2;
200
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
201
- width -= 1;
202
- src += 4;
203
- }
204
-}
205
-
206
-/* The wicked packed format */
207
-static void pxa2xx_draw_line18p(void *opaque, uint8_t *dest, const uint8_t *src,
208
- int width, int deststep)
209
-{
210
- uint32_t data[3];
211
- unsigned int r, g, b;
212
- while (width > 0) {
213
- data[0] = *(uint32_t *) src;
214
- src += 4;
215
- data[1] = *(uint32_t *) src;
216
- src += 4;
217
- data[2] = *(uint32_t *) src;
218
- src += 4;
219
-#ifdef SWAP_WORDS
220
- data[0] = bswap32(data[0]);
221
- data[1] = bswap32(data[1]);
222
- data[2] = bswap32(data[2]);
223
-#endif
224
- b = (data[0] & 0x3f) << 2;
225
- data[0] >>= 6;
226
- g = (data[0] & 0x3f) << 2;
227
- data[0] >>= 6;
228
- r = (data[0] & 0x3f) << 2;
229
- data[0] >>= 12;
230
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
231
- b = (data[0] & 0x3f) << 2;
232
- data[0] >>= 6;
233
- g = ((data[1] & 0xf) << 4) | (data[0] << 2);
234
- data[1] >>= 4;
235
- r = (data[1] & 0x3f) << 2;
236
- data[1] >>= 12;
237
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
238
- b = (data[1] & 0x3f) << 2;
239
- data[1] >>= 6;
240
- g = (data[1] & 0x3f) << 2;
241
- data[1] >>= 6;
242
- r = ((data[2] & 0x3) << 6) | (data[1] << 2);
243
- data[2] >>= 8;
244
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
245
- b = (data[2] & 0x3f) << 2;
246
- data[2] >>= 6;
247
- g = (data[2] & 0x3f) << 2;
248
- data[2] >>= 6;
249
- r = data[2] << 2;
250
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
251
- width -= 4;
252
- }
253
-}
254
-
255
-static void pxa2xx_draw_line19(void *opaque, uint8_t *dest, const uint8_t *src,
256
- int width, int deststep)
257
-{
258
- uint32_t data;
259
- unsigned int r, g, b;
260
- while (width > 0) {
261
- data = *(uint32_t *) src;
262
-#ifdef SWAP_WORDS
263
- data = bswap32(data);
264
-#endif
265
- b = (data & 0x3f) << 2;
266
- data >>= 6;
267
- g = (data & 0x3f) << 2;
268
- data >>= 6;
269
- r = (data & 0x3f) << 2;
270
- data >>= 6;
271
- if (data & 1) {
272
- SKIP_PIXEL(dest);
273
- } else {
274
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
275
- }
276
- width -= 1;
277
- src += 4;
278
- }
279
-}
280
-
281
-/* The wicked packed format */
282
-static void pxa2xx_draw_line19p(void *opaque, uint8_t *dest, const uint8_t *src,
283
- int width, int deststep)
284
-{
285
- uint32_t data[3];
286
- unsigned int r, g, b;
287
- while (width > 0) {
288
- data[0] = *(uint32_t *) src;
289
- src += 4;
290
- data[1] = *(uint32_t *) src;
291
- src += 4;
292
- data[2] = *(uint32_t *) src;
293
- src += 4;
294
-# ifdef SWAP_WORDS
295
- data[0] = bswap32(data[0]);
296
- data[1] = bswap32(data[1]);
297
- data[2] = bswap32(data[2]);
298
-# endif
299
- b = (data[0] & 0x3f) << 2;
300
- data[0] >>= 6;
301
- g = (data[0] & 0x3f) << 2;
302
- data[0] >>= 6;
303
- r = (data[0] & 0x3f) << 2;
304
- data[0] >>= 6;
305
- if (data[0] & 1) {
306
- SKIP_PIXEL(dest);
307
- } else {
308
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
309
- }
310
- data[0] >>= 6;
311
- b = (data[0] & 0x3f) << 2;
312
- data[0] >>= 6;
313
- g = ((data[1] & 0xf) << 4) | (data[0] << 2);
314
- data[1] >>= 4;
315
- r = (data[1] & 0x3f) << 2;
316
- data[1] >>= 6;
317
- if (data[1] & 1) {
318
- SKIP_PIXEL(dest);
319
- } else {
320
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
321
- }
322
- data[1] >>= 6;
323
- b = (data[1] & 0x3f) << 2;
324
- data[1] >>= 6;
325
- g = (data[1] & 0x3f) << 2;
326
- data[1] >>= 6;
327
- r = ((data[2] & 0x3) << 6) | (data[1] << 2);
328
- data[2] >>= 2;
329
- if (data[2] & 1) {
330
- SKIP_PIXEL(dest);
331
- } else {
332
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
333
- }
334
- data[2] >>= 6;
335
- b = (data[2] & 0x3f) << 2;
336
- data[2] >>= 6;
337
- g = (data[2] & 0x3f) << 2;
338
- data[2] >>= 6;
339
- r = data[2] << 2;
340
- data[2] >>= 6;
341
- if (data[2] & 1) {
342
- SKIP_PIXEL(dest);
343
- } else {
344
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
345
- }
346
- width -= 4;
347
- }
348
-}
349
-
350
-static void pxa2xx_draw_line24(void *opaque, uint8_t *dest, const uint8_t *src,
351
- int width, int deststep)
352
-{
353
- uint32_t data;
354
- unsigned int r, g, b;
355
- while (width > 0) {
356
- data = *(uint32_t *) src;
357
-#ifdef SWAP_WORDS
358
- data = bswap32(data);
359
-#endif
360
- b = data & 0xff;
361
- data >>= 8;
362
- g = data & 0xff;
363
- data >>= 8;
364
- r = data & 0xff;
365
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
366
- width -= 1;
367
- src += 4;
368
- }
369
-}
370
-
371
-static void pxa2xx_draw_line24t(void *opaque, uint8_t *dest, const uint8_t *src,
372
- int width, int deststep)
373
-{
374
- uint32_t data;
375
- unsigned int r, g, b;
376
- while (width > 0) {
377
- data = *(uint32_t *) src;
378
-#ifdef SWAP_WORDS
379
- data = bswap32(data);
380
-#endif
381
- b = (data & 0x7f) << 1;
382
- data >>= 7;
383
- g = data & 0xff;
384
- data >>= 8;
385
- r = data & 0xff;
386
- data >>= 8;
387
- if (data & 1) {
388
- SKIP_PIXEL(dest);
389
- } else {
390
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
391
- }
392
- width -= 1;
393
- src += 4;
394
- }
395
-}
396
-
397
-static void pxa2xx_draw_line25(void *opaque, uint8_t *dest, const uint8_t *src,
398
- int width, int deststep)
399
-{
400
- uint32_t data;
401
- unsigned int r, g, b;
402
- while (width > 0) {
403
- data = *(uint32_t *) src;
404
-#ifdef SWAP_WORDS
405
- data = bswap32(data);
406
-#endif
407
- b = data & 0xff;
408
- data >>= 8;
409
- g = data & 0xff;
410
- data >>= 8;
411
- r = data & 0xff;
412
- data >>= 8;
413
- if (data & 1) {
414
- SKIP_PIXEL(dest);
415
- } else {
416
- COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
417
- }
418
- width -= 1;
419
- src += 4;
420
- }
421
-}
422
-
423
-/* Overlay planes disabled, no transparency */
424
-static drawfn pxa2xx_draw_fn_32[16] = {
425
- [0 ... 0xf] = NULL,
426
- [pxa_lcdc_2bpp] = pxa2xx_draw_line2,
427
- [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
428
- [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
429
- [pxa_lcdc_16bpp] = pxa2xx_draw_line16,
430
- [pxa_lcdc_18bpp] = pxa2xx_draw_line18,
431
- [pxa_lcdc_18pbpp] = pxa2xx_draw_line18p,
432
- [pxa_lcdc_24bpp] = pxa2xx_draw_line24,
433
-};
434
-
435
-/* Overlay planes enabled, transparency used */
436
-static drawfn pxa2xx_draw_fn_32t[16] = {
437
- [0 ... 0xf] = NULL,
438
- [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
439
- [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
440
- [pxa_lcdc_16bpp] = pxa2xx_draw_line16t,
441
- [pxa_lcdc_19bpp] = pxa2xx_draw_line19,
442
- [pxa_lcdc_19pbpp] = pxa2xx_draw_line19p,
443
- [pxa_lcdc_24bpp] = pxa2xx_draw_line24t,
444
- [pxa_lcdc_25bpp] = pxa2xx_draw_line25,
445
-};
446
-
447
-#undef COPY_PIXEL
448
-#undef SKIP_PIXEL
449
-
450
-#ifdef SWAP_WORDS
451
-# undef SWAP_WORDS
452
-#endif
453
diff --git a/hw/display/pxa2xx_lcd.c b/hw/display/pxa2xx_lcd.c
17
index XXXXXXX..XXXXXXX 100644
454
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/armv7m_nvic.c
455
--- a/hw/display/pxa2xx_lcd.c
19
+++ b/hw/intc/armv7m_nvic.c
456
+++ b/hw/display/pxa2xx_lcd.c
20
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
457
@@ -XXX,XX +XXX,XX @@ typedef struct QEMU_PACKED {
21
int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
458
/* Size of a pixel in the QEMU UI output surface, in bytes */
22
{
459
#define DEST_PIXEL_WIDTH 4
23
NVICState *s = (NVICState *)opaque;
460
24
- VecInfo *vec;
461
-#define BITS 32
25
+ VecInfo *vec = NULL;
462
-#include "pxa2xx_template.h"
26
int ret;
463
+/* Line drawing code to handle the various possible guest pixel formats */
27
464
+
28
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
465
+# define SKIP_PIXEL(to) do { to += deststep; } while (0)
29
466
+# define COPY_PIXEL(to, from) \
30
- if (secure && exc_is_banked(irq)) {
467
+ do { \
31
- vec = &s->sec_vectors[irq];
468
+ *(uint32_t *) to = from; \
32
- } else {
469
+ SKIP_PIXEL(to); \
33
- vec = &s->vectors[irq];
470
+ } while (0)
34
+ /*
471
+
35
+ * For negative priorities, v8M will forcibly deactivate the appropriate
472
+#ifdef HOST_WORDS_BIGENDIAN
36
+ * NMI or HardFault regardless of what interrupt we're being asked to
473
+# define SWAP_WORDS 1
37
+ * deactivate (compare the DeActivate() pseudocode). This is a guard
474
+#endif
38
+ * against software returning from NMI or HardFault with a corrupted
475
+
39
+ * IPSR and leaving the CPU in a negative-priority state.
476
+#define FN_2(x) FN(x + 1) FN(x)
40
+ * v7M does not do this, but simply deactivates the requested interrupt.
477
+#define FN_4(x) FN_2(x + 2) FN_2(x)
41
+ */
478
+
42
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_V8)) {
479
+static void pxa2xx_draw_line2(void *opaque, uint8_t *dest, const uint8_t *src,
43
+ switch (armv7m_nvic_raw_execution_priority(s)) {
480
+ int width, int deststep)
44
+ case -1:
481
+{
45
+ if (s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
482
+ uint32_t *palette = opaque;
46
+ vec = &s->vectors[ARMV7M_EXCP_HARD];
483
+ uint32_t data;
47
+ } else {
484
+ while (width > 0) {
48
+ vec = &s->sec_vectors[ARMV7M_EXCP_HARD];
485
+ data = *(uint32_t *) src;
49
+ }
486
+#define FN(x) COPY_PIXEL(dest, palette[(data >> ((x) * 2)) & 3]);
50
+ break;
487
+#ifdef SWAP_WORDS
51
+ case -2:
488
+ FN_4(12)
52
+ vec = &s->vectors[ARMV7M_EXCP_NMI];
489
+ FN_4(8)
53
+ break;
490
+ FN_4(4)
54
+ case -3:
491
+ FN_4(0)
55
+ vec = &s->sec_vectors[ARMV7M_EXCP_HARD];
492
+#else
56
+ break;
493
+ FN_4(0)
57
+ default:
494
+ FN_4(4)
58
+ break;
495
+ FN_4(8)
496
+ FN_4(12)
497
+#endif
498
+#undef FN
499
+ width -= 16;
500
+ src += 4;
501
+ }
502
+}
503
+
504
+static void pxa2xx_draw_line4(void *opaque, uint8_t *dest, const uint8_t *src,
505
+ int width, int deststep)
506
+{
507
+ uint32_t *palette = opaque;
508
+ uint32_t data;
509
+ while (width > 0) {
510
+ data = *(uint32_t *) src;
511
+#define FN(x) COPY_PIXEL(dest, palette[(data >> ((x) * 4)) & 0xf]);
512
+#ifdef SWAP_WORDS
513
+ FN_2(6)
514
+ FN_2(4)
515
+ FN_2(2)
516
+ FN_2(0)
517
+#else
518
+ FN_2(0)
519
+ FN_2(2)
520
+ FN_2(4)
521
+ FN_2(6)
522
+#endif
523
+#undef FN
524
+ width -= 8;
525
+ src += 4;
526
+ }
527
+}
528
+
529
+static void pxa2xx_draw_line8(void *opaque, uint8_t *dest, const uint8_t *src,
530
+ int width, int deststep)
531
+{
532
+ uint32_t *palette = opaque;
533
+ uint32_t data;
534
+ while (width > 0) {
535
+ data = *(uint32_t *) src;
536
+#define FN(x) COPY_PIXEL(dest, palette[(data >> (x)) & 0xff]);
537
+#ifdef SWAP_WORDS
538
+ FN(24)
539
+ FN(16)
540
+ FN(8)
541
+ FN(0)
542
+#else
543
+ FN(0)
544
+ FN(8)
545
+ FN(16)
546
+ FN(24)
547
+#endif
548
+#undef FN
549
+ width -= 4;
550
+ src += 4;
551
+ }
552
+}
553
+
554
+static void pxa2xx_draw_line16(void *opaque, uint8_t *dest, const uint8_t *src,
555
+ int width, int deststep)
556
+{
557
+ uint32_t data;
558
+ unsigned int r, g, b;
559
+ while (width > 0) {
560
+ data = *(uint32_t *) src;
561
+#ifdef SWAP_WORDS
562
+ data = bswap32(data);
563
+#endif
564
+ b = (data & 0x1f) << 3;
565
+ data >>= 5;
566
+ g = (data & 0x3f) << 2;
567
+ data >>= 6;
568
+ r = (data & 0x1f) << 3;
569
+ data >>= 5;
570
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
571
+ b = (data & 0x1f) << 3;
572
+ data >>= 5;
573
+ g = (data & 0x3f) << 2;
574
+ data >>= 6;
575
+ r = (data & 0x1f) << 3;
576
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
577
+ width -= 2;
578
+ src += 4;
579
+ }
580
+}
581
+
582
+static void pxa2xx_draw_line16t(void *opaque, uint8_t *dest, const uint8_t *src,
583
+ int width, int deststep)
584
+{
585
+ uint32_t data;
586
+ unsigned int r, g, b;
587
+ while (width > 0) {
588
+ data = *(uint32_t *) src;
589
+#ifdef SWAP_WORDS
590
+ data = bswap32(data);
591
+#endif
592
+ b = (data & 0x1f) << 3;
593
+ data >>= 5;
594
+ g = (data & 0x1f) << 3;
595
+ data >>= 5;
596
+ r = (data & 0x1f) << 3;
597
+ data >>= 5;
598
+ if (data & 1) {
599
+ SKIP_PIXEL(dest);
600
+ } else {
601
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
59
+ }
602
+ }
60
+ }
603
+ data >>= 1;
61
+
604
+ b = (data & 0x1f) << 3;
62
+ if (!vec) {
605
+ data >>= 5;
63
+ if (secure && exc_is_banked(irq)) {
606
+ g = (data & 0x1f) << 3;
64
+ vec = &s->sec_vectors[irq];
607
+ data >>= 5;
608
+ r = (data & 0x1f) << 3;
609
+ data >>= 5;
610
+ if (data & 1) {
611
+ SKIP_PIXEL(dest);
65
+ } else {
612
+ } else {
66
+ vec = &s->vectors[irq];
613
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
67
+ }
614
+ }
68
}
615
+ width -= 2;
69
616
+ src += 4;
70
trace_nvic_complete_irq(irq, secure);
617
+ }
618
+}
619
+
620
+static void pxa2xx_draw_line18(void *opaque, uint8_t *dest, const uint8_t *src,
621
+ int width, int deststep)
622
+{
623
+ uint32_t data;
624
+ unsigned int r, g, b;
625
+ while (width > 0) {
626
+ data = *(uint32_t *) src;
627
+#ifdef SWAP_WORDS
628
+ data = bswap32(data);
629
+#endif
630
+ b = (data & 0x3f) << 2;
631
+ data >>= 6;
632
+ g = (data & 0x3f) << 2;
633
+ data >>= 6;
634
+ r = (data & 0x3f) << 2;
635
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
636
+ width -= 1;
637
+ src += 4;
638
+ }
639
+}
640
+
641
+/* The wicked packed format */
642
+static void pxa2xx_draw_line18p(void *opaque, uint8_t *dest, const uint8_t *src,
643
+ int width, int deststep)
644
+{
645
+ uint32_t data[3];
646
+ unsigned int r, g, b;
647
+ while (width > 0) {
648
+ data[0] = *(uint32_t *) src;
649
+ src += 4;
650
+ data[1] = *(uint32_t *) src;
651
+ src += 4;
652
+ data[2] = *(uint32_t *) src;
653
+ src += 4;
654
+#ifdef SWAP_WORDS
655
+ data[0] = bswap32(data[0]);
656
+ data[1] = bswap32(data[1]);
657
+ data[2] = bswap32(data[2]);
658
+#endif
659
+ b = (data[0] & 0x3f) << 2;
660
+ data[0] >>= 6;
661
+ g = (data[0] & 0x3f) << 2;
662
+ data[0] >>= 6;
663
+ r = (data[0] & 0x3f) << 2;
664
+ data[0] >>= 12;
665
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
666
+ b = (data[0] & 0x3f) << 2;
667
+ data[0] >>= 6;
668
+ g = ((data[1] & 0xf) << 4) | (data[0] << 2);
669
+ data[1] >>= 4;
670
+ r = (data[1] & 0x3f) << 2;
671
+ data[1] >>= 12;
672
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
673
+ b = (data[1] & 0x3f) << 2;
674
+ data[1] >>= 6;
675
+ g = (data[1] & 0x3f) << 2;
676
+ data[1] >>= 6;
677
+ r = ((data[2] & 0x3) << 6) | (data[1] << 2);
678
+ data[2] >>= 8;
679
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
680
+ b = (data[2] & 0x3f) << 2;
681
+ data[2] >>= 6;
682
+ g = (data[2] & 0x3f) << 2;
683
+ data[2] >>= 6;
684
+ r = data[2] << 2;
685
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
686
+ width -= 4;
687
+ }
688
+}
689
+
690
+static void pxa2xx_draw_line19(void *opaque, uint8_t *dest, const uint8_t *src,
691
+ int width, int deststep)
692
+{
693
+ uint32_t data;
694
+ unsigned int r, g, b;
695
+ while (width > 0) {
696
+ data = *(uint32_t *) src;
697
+#ifdef SWAP_WORDS
698
+ data = bswap32(data);
699
+#endif
700
+ b = (data & 0x3f) << 2;
701
+ data >>= 6;
702
+ g = (data & 0x3f) << 2;
703
+ data >>= 6;
704
+ r = (data & 0x3f) << 2;
705
+ data >>= 6;
706
+ if (data & 1) {
707
+ SKIP_PIXEL(dest);
708
+ } else {
709
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
710
+ }
711
+ width -= 1;
712
+ src += 4;
713
+ }
714
+}
715
+
716
+/* The wicked packed format */
717
+static void pxa2xx_draw_line19p(void *opaque, uint8_t *dest, const uint8_t *src,
718
+ int width, int deststep)
719
+{
720
+ uint32_t data[3];
721
+ unsigned int r, g, b;
722
+ while (width > 0) {
723
+ data[0] = *(uint32_t *) src;
724
+ src += 4;
725
+ data[1] = *(uint32_t *) src;
726
+ src += 4;
727
+ data[2] = *(uint32_t *) src;
728
+ src += 4;
729
+# ifdef SWAP_WORDS
730
+ data[0] = bswap32(data[0]);
731
+ data[1] = bswap32(data[1]);
732
+ data[2] = bswap32(data[2]);
733
+# endif
734
+ b = (data[0] & 0x3f) << 2;
735
+ data[0] >>= 6;
736
+ g = (data[0] & 0x3f) << 2;
737
+ data[0] >>= 6;
738
+ r = (data[0] & 0x3f) << 2;
739
+ data[0] >>= 6;
740
+ if (data[0] & 1) {
741
+ SKIP_PIXEL(dest);
742
+ } else {
743
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
744
+ }
745
+ data[0] >>= 6;
746
+ b = (data[0] & 0x3f) << 2;
747
+ data[0] >>= 6;
748
+ g = ((data[1] & 0xf) << 4) | (data[0] << 2);
749
+ data[1] >>= 4;
750
+ r = (data[1] & 0x3f) << 2;
751
+ data[1] >>= 6;
752
+ if (data[1] & 1) {
753
+ SKIP_PIXEL(dest);
754
+ } else {
755
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
756
+ }
757
+ data[1] >>= 6;
758
+ b = (data[1] & 0x3f) << 2;
759
+ data[1] >>= 6;
760
+ g = (data[1] & 0x3f) << 2;
761
+ data[1] >>= 6;
762
+ r = ((data[2] & 0x3) << 6) | (data[1] << 2);
763
+ data[2] >>= 2;
764
+ if (data[2] & 1) {
765
+ SKIP_PIXEL(dest);
766
+ } else {
767
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
768
+ }
769
+ data[2] >>= 6;
770
+ b = (data[2] & 0x3f) << 2;
771
+ data[2] >>= 6;
772
+ g = (data[2] & 0x3f) << 2;
773
+ data[2] >>= 6;
774
+ r = data[2] << 2;
775
+ data[2] >>= 6;
776
+ if (data[2] & 1) {
777
+ SKIP_PIXEL(dest);
778
+ } else {
779
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
780
+ }
781
+ width -= 4;
782
+ }
783
+}
784
+
785
+static void pxa2xx_draw_line24(void *opaque, uint8_t *dest, const uint8_t *src,
786
+ int width, int deststep)
787
+{
788
+ uint32_t data;
789
+ unsigned int r, g, b;
790
+ while (width > 0) {
791
+ data = *(uint32_t *) src;
792
+#ifdef SWAP_WORDS
793
+ data = bswap32(data);
794
+#endif
795
+ b = data & 0xff;
796
+ data >>= 8;
797
+ g = data & 0xff;
798
+ data >>= 8;
799
+ r = data & 0xff;
800
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
801
+ width -= 1;
802
+ src += 4;
803
+ }
804
+}
805
+
806
+static void pxa2xx_draw_line24t(void *opaque, uint8_t *dest, const uint8_t *src,
807
+ int width, int deststep)
808
+{
809
+ uint32_t data;
810
+ unsigned int r, g, b;
811
+ while (width > 0) {
812
+ data = *(uint32_t *) src;
813
+#ifdef SWAP_WORDS
814
+ data = bswap32(data);
815
+#endif
816
+ b = (data & 0x7f) << 1;
817
+ data >>= 7;
818
+ g = data & 0xff;
819
+ data >>= 8;
820
+ r = data & 0xff;
821
+ data >>= 8;
822
+ if (data & 1) {
823
+ SKIP_PIXEL(dest);
824
+ } else {
825
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
826
+ }
827
+ width -= 1;
828
+ src += 4;
829
+ }
830
+}
831
+
832
+static void pxa2xx_draw_line25(void *opaque, uint8_t *dest, const uint8_t *src,
833
+ int width, int deststep)
834
+{
835
+ uint32_t data;
836
+ unsigned int r, g, b;
837
+ while (width > 0) {
838
+ data = *(uint32_t *) src;
839
+#ifdef SWAP_WORDS
840
+ data = bswap32(data);
841
+#endif
842
+ b = data & 0xff;
843
+ data >>= 8;
844
+ g = data & 0xff;
845
+ data >>= 8;
846
+ r = data & 0xff;
847
+ data >>= 8;
848
+ if (data & 1) {
849
+ SKIP_PIXEL(dest);
850
+ } else {
851
+ COPY_PIXEL(dest, rgb_to_pixel32(r, g, b));
852
+ }
853
+ width -= 1;
854
+ src += 4;
855
+ }
856
+}
857
+
858
+/* Overlay planes disabled, no transparency */
859
+static drawfn pxa2xx_draw_fn_32[16] = {
860
+ [0 ... 0xf] = NULL,
861
+ [pxa_lcdc_2bpp] = pxa2xx_draw_line2,
862
+ [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
863
+ [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
864
+ [pxa_lcdc_16bpp] = pxa2xx_draw_line16,
865
+ [pxa_lcdc_18bpp] = pxa2xx_draw_line18,
866
+ [pxa_lcdc_18pbpp] = pxa2xx_draw_line18p,
867
+ [pxa_lcdc_24bpp] = pxa2xx_draw_line24,
868
+};
869
+
870
+/* Overlay planes enabled, transparency used */
871
+static drawfn pxa2xx_draw_fn_32t[16] = {
872
+ [0 ... 0xf] = NULL,
873
+ [pxa_lcdc_4bpp] = pxa2xx_draw_line4,
874
+ [pxa_lcdc_8bpp] = pxa2xx_draw_line8,
875
+ [pxa_lcdc_16bpp] = pxa2xx_draw_line16t,
876
+ [pxa_lcdc_19bpp] = pxa2xx_draw_line19,
877
+ [pxa_lcdc_19pbpp] = pxa2xx_draw_line19p,
878
+ [pxa_lcdc_24bpp] = pxa2xx_draw_line24t,
879
+ [pxa_lcdc_25bpp] = pxa2xx_draw_line25,
880
+};
881
+
882
+#undef COPY_PIXEL
883
+#undef SKIP_PIXEL
884
+
885
+#ifdef SWAP_WORDS
886
+# undef SWAP_WORDS
887
+#endif
888
889
/* Route internal interrupt lines to the global IC */
890
static void pxa2xx_lcdc_int_update(PXA2xxLCDState *s)
71
--
2.20.1
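The paletted modes moved into pxa2xx_lcd.c by this patch (pxa2xx_draw_line2/4/8) use the FN()/FN_2()/FN_4() macros to unroll the per-pixel palette lookups within each 32-bit framebuffer word. What the unrolled expansion computes for the 8bpp case can be sketched as follows; `palette_index8` is an illustrative name, not a QEMU API:

```c
#include <assert.h>
#include <stdint.h>

/*
 * In 8bpp mode each 32-bit word carries four palette indices, one per
 * byte. FN(x) in pxa2xx_draw_line8() expands to a palette lookup of
 * (data >> x) & 0xff for x in {0, 8, 16, 24} (the order is reversed
 * when SWAP_WORDS is defined on big-endian hosts).
 */
static unsigned int palette_index8(uint32_t data, int pixel)
{
    return (data >> (pixel * 8)) & 0xff;
}
```

The 2bpp and 4bpp variants are the same idea with 2-bit and 4-bit fields, which is why they need the deeper FN_4()/FN_2() unrolling.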