target-arm queue: Eric's SMMUv3 patchset, and an array
of minor bugfixes and improvements from various others.

thanks
-- PMM

The following changes since commit c8b7e627b4269a3bc3ae41d9f420547a47e6d9b9:

  Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2018-05-04' into staging (2018-05-04 14:42:46 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180504

for you to fetch changes up to 5680740c92993e9b3f3e011f2a2c394070e33f56:

  hw/arm/virt: Introduce the iommu option (2018-05-04 18:05:52 +0100)

----------------------------------------------------------------
target-arm queue:
 * Emulate the SMMUv3 (IOMMU); one will be created in the 'virt' board
   if the commandline includes "-machine iommu=smmuv3"
 * target/arm: Implement v8M VLLDM and VLSTM
 * hw/arm: Don't fail qtest due to missing SD card in -nodefaults mode
 * Some fixes to silence Coverity false-positives
 * arm: boot: set boot_info starting from first_cpu
   (fixes a technical bug not visible in practice)
 * hw/net/smc91c111: Convert away from old_mmio
 * hw/usb/tusb6010: Convert away from old_mmio
 * hw/char/cmsdk-apb-uart.c: Accept more input after character read
 * target/arm: Make MPUIR write-ignored on OMAP, StrongARM
 * hw/arm/virt: Add linux,pci-domain property

----------------------------------------------------------------
Eric Auger (11):
      hw/arm/smmu-common: smmu base device and datatypes
      hw/arm/smmu-common: IOMMU memory region and address space setup
      hw/arm/smmu-common: VMSAv8-64 page table walk
      hw/arm/smmuv3: Wired IRQ and GERROR helpers
      hw/arm/smmuv3: Queue helpers
      hw/arm/smmuv3: Implement MMIO write operations
      hw/arm/smmuv3: Event queue recording helper
      hw/arm/smmuv3: Implement translate callback
      hw/arm/smmuv3: Abort on vfio or vhost case
      target/arm/kvm: Translate the MSI doorbell in kvm_arch_fixup_msi_route
      hw/arm/virt: Introduce the iommu option

Igor Mammedov (1):
      arm: boot: set boot_info starting from first_cpu

Jan Kiszka (1):
      hw/arm/virt: Add linux,pci-domain property

Mathew Maidment (1):
      target/arm: Correct MPUIR privilege level in register_cp_regs_for_features() conditional case

Patrick Oppenlander (1):
      hw/char/cmsdk-apb-uart.c: Accept more input after character read

Peter Maydell (3):
      hw/usb/tusb6010: Convert away from old_mmio
      hw/net/smc91c111: Convert away from old_mmio
      target/arm: Implement v8M VLLDM and VLSTM

Prem Mallappa (3):
      hw/arm/smmuv3: Skeleton
      hw/arm/virt: Add SMMUv3 to the virt board
      hw/arm/virt-acpi-build: Add smmuv3 node in IORT table

Richard Henderson (2):
      target/arm: Tidy conditions in handle_vec_simd_shri
      target/arm: Tidy condition in disas_simd_two_reg_misc

Thomas Huth (1):
      hw/arm: Don't fail qtest due to missing SD card in -nodefaults mode

 hw/arm/Makefile.objs                |    1 +
 hw/arm/smmu-internal.h              |   99 +++
 hw/arm/smmuv3-internal.h            |  621 ++++++++++++++++++
 include/hw/acpi/acpi-defs.h         |   15 +
 include/hw/arm/smmu-common.h        |  145 +++++
 include/hw/arm/smmuv3.h             |   87 +++
 include/hw/arm/virt.h               |   10 +
 hw/arm/boot.c                       |    2 +-
 hw/arm/omap1.c                      |    8 +-
 hw/arm/omap2.c                      |    8 +-
 hw/arm/pxa2xx.c                     |   15 +-
 hw/arm/smmu-common.c                |  372 +++++++++++
 hw/arm/smmuv3.c                     | 1191 +++++++++++++++++++++++++++++++++++
 hw/arm/virt-acpi-build.c            |   55 +-
 hw/arm/virt.c                       |  101 ++-
 hw/char/cmsdk-apb-uart.c            |    1 +
 hw/net/smc91c111.c                  |   54 +-
 hw/usb/tusb6010.c                   |   40 +-
 target/arm/helper.c                 |    2 +-
 target/arm/kvm.c                    |   38 +-
 target/arm/translate-a64.c          |   12 +-
 target/arm/translate.c              |   17 +-
 default-configs/aarch64-softmmu.mak |    1 +
 hw/arm/trace-events                 |   37 ++
 target/arm/trace-events             |    3 +
 25 files changed, 2868 insertions(+), 67 deletions(-)
 create mode 100644 hw/arm/smmu-internal.h
 create mode 100644 hw/arm/smmuv3-internal.h
 create mode 100644 include/hw/arm/smmu-common.h
 create mode 100644 include/hw/arm/smmuv3.h
 create mode 100644 hw/arm/smmu-common.c
 create mode 100644 hw/arm/smmuv3.c

Two small bugfixes, plus most of RTH's refactoring of cpregs
handling.

-- PMM

The following changes since commit 1fba9dc71a170b3a05b9d3272dd8ecfe7f26e215:

  Merge tag 'pull-request-2022-05-04' of https://gitlab.com/thuth/qemu into staging (2022-05-04 08:07:02 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220505

for you to fetch changes up to 99a50d1a67c602126fc2b3a4812d3000eba9bf34:

  target/arm: read access to performance counters from EL0 (2022-05-05 09:36:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Enable read access to performance counters from EL0
 * Enable SCTLR_EL1.BT0 for aarch64-linux-user
 * Refactoring of cpreg handling

----------------------------------------------------------------
Alex Zuepke (1):
      target/arm: read access to performance counters from EL0

Richard Henderson (22):
      target/arm: Enable SCTLR_EL1.BT0 for aarch64-linux-user
      target/arm: Split out cpregs.h
      target/arm: Reorg CPAccessResult and access_check_cp_reg
      target/arm: Replace sentinels with ARRAY_SIZE in cpregs.h
      target/arm: Make some more cpreg data static const
      target/arm: Reorg ARMCPRegInfo type field bits
      target/arm: Avoid bare abort() or assert(0)
      target/arm: Change cpreg access permissions to enum
      target/arm: Name CPState type
      target/arm: Name CPSecureState type
      target/arm: Drop always-true test in define_arm_vh_e2h_redirects_aliases
      target/arm: Store cpregs key in the hash table directly
      target/arm: Merge allocation of the cpreg and its name
      target/arm: Hoist computation of key in add_cpreg_to_hashtable
      target/arm: Consolidate cpreg updates in add_cpreg_to_hashtable
      target/arm: Use bool for is64 and ns in add_cpreg_to_hashtable
      target/arm: Hoist isbanked computation in add_cpreg_to_hashtable
      target/arm: Perform override check early in add_cpreg_to_hashtable
      target/arm: Reformat comments in add_cpreg_to_hashtable
      target/arm: Remove HOST_BIG_ENDIAN ifdef in add_cpreg_to_hashtable
      target/arm: Add isar predicates for FEAT_Debugv8p2
      target/arm: Add isar_feature_{aa64,any}_ras

 target/arm/cpregs.h               | 453 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h                  | 393 +++------------------------------
 hw/arm/pxa2xx.c                   |   2 +-
 hw/arm/pxa2xx_pic.c               |   2 +-
 hw/intc/arm_gicv3_cpuif.c         |   6 +-
 hw/intc/arm_gicv3_kvm.c           |   3 +-
 target/arm/cpu.c                  |  25 +--
 target/arm/cpu64.c                |   2 +-
 target/arm/cpu_tcg.c              |   5 +-
 target/arm/gdbstub.c              |   5 +-
 target/arm/helper.c               | 358 +++++++++++++-----------------
 target/arm/hvf/hvf.c              |   2 +-
 target/arm/kvm-stub.c             |   4 +-
 target/arm/kvm.c                  |   4 +-
 target/arm/machine.c              |   4 +-
 target/arm/op_helper.c            |  57 ++---
 target/arm/translate-a64.c        |  14 +-
 target/arm/translate-neon.c       |   2 +-
 target/arm/translate.c            |  13 +-
 tests/tcg/aarch64/bti-3.c         |  42 ++++
 tests/tcg/aarch64/Makefile.target |   6 +-
 21 files changed, 738 insertions(+), 664 deletions(-)
 create mode 100644 target/arm/cpregs.h
 create mode 100644 tests/tcg/aarch64/bti-3.c

From: Richard Henderson <richard.henderson@linaro.org>

This controls whether the PACI{A,B}SP instructions trap with BTYPE=3
(indirect branch from register other than x16/x17). The linux kernel
sets this in bti_enable().

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/998
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220427042312.294300-1-richard.henderson@linaro.org
[PMM: remove stray change to makefile comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c                  |  2 ++
 tests/tcg/aarch64/bti-3.c         | 42 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target |  6 ++---
 3 files changed, 47 insertions(+), 3 deletions(-)
 create mode 100644 tests/tcg/aarch64/bti-3.c
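The behaviour being enabled is easiest to see from user space: on a BTI-guarded
code page, an indirect branch through any register other than x16/x17 arrives
with BTYPE=3, and once SCTLR_EL1.BT0 is set a PACIASP landing pad is no longer
acceptable for that BTYPE. Below is a minimal sketch of that case, mirroring
the BTYPE_3 test added by this patch; it assumes a binary built with
-mbranch-protection=standard (so its pages are guarded) and is an illustration,
not part of the patch:

/* bt0_btype3.c - hypothetical demo, not part of this patch.
 * Branch indirectly through x15 (so PSTATE.BTYPE == 3) to a PACIASP
 * landing pad.  With SCTLR_EL1.BT0 set for EL0, this is expected to
 * fault (SIGILL) rather than be accepted as a valid landing pad.
 */
int main(void)
{
    asm volatile("adr x15, 1f\n\t"
                 "br  x15\n\t"        /* indirect branch, not via x16/x17 */
                 "1: hint #25\n\t"    /* PACIASP */
                 ::: "x15", "x30");
    return 0;
}
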
From: Eric Auger <eric.auger@redhat.com>

This patch implements the page table walk for VMSAv8-64.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
Message-id: 1524665762-31355-4-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-internal.h       |  99 ++++++++++++++++
 include/hw/arm/smmu-common.h |  14 +++
 hw/arm/smmu-common.c         | 222 +++++++++++++++++++++++++++++++++++
 hw/arm/trace-events          |   9 +-
 4 files changed, 343 insertions(+), 1 deletion(-)
 create mode 100644 hw/arm/smmu-internal.h
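The walk below derives everything from the granule size: each level consumes
(granule_sz - 3) bits of the IOVA, level_shift() gives the bit position where a
level's index starts, and iova_level_offset() extracts that index. A standalone
sketch of the same arithmetic for a 4KB granule and 48-bit input range (the
helper names mirror the ones added in hw/arm/smmu-internal.h; this is only an
illustration, not the patch code):

#include <stdio.h>
#include <inttypes.h>

/* Same formulas as the smmu-internal.h helpers in this patch. */
static int level_shift(int level, int granule_sz)
{
    return granule_sz + (3 - level) * (granule_sz - 3);
}

static uint64_t iova_level_offset(uint64_t iova, int inputsize,
                                  int level, int granule_sz)
{
    uint64_t inmask = (inputsize >= 64) ? ~0ULL : (1ULL << inputsize) - 1;

    return ((iova & inmask) >> level_shift(level, granule_sz)) &
           ((1ULL << (granule_sz - 3)) - 1);
}

int main(void)
{
    int granule_sz = 12, inputsize = 48;        /* 4KB granule, tsz = 16 */
    int stride = granule_sz - 3;                /* 9 IOVA bits per level  */
    int level = 4 - (inputsize - 4) / stride;   /* starting level, here 0 */
    uint64_t iova = 0x0000aabbccddeeffULL;

    for (; level <= 3; level++) {
        printf("level %d: shift %2d index 0x%03" PRIx64 "\n", level,
               level_shift(level, granule_sz),
               iova_level_offset(iova, inputsize, level, granule_sz));
    }
    return 0;
}
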
diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
20
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.c
23
+++ b/target/arm/cpu.c
24
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
25
/* Enable all PAC keys. */
26
env->cp15.sctlr_el[1] |= (SCTLR_EnIA | SCTLR_EnIB |
27
SCTLR_EnDA | SCTLR_EnDB);
28
+ /* Trap on btype=3 for PACIxSP. */
29
+ env->cp15.sctlr_el[1] |= SCTLR_BT0;
30
/* and to the FP/Neon instructions */
31
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
32
/* and to the SVE instructions */
33
diff --git a/tests/tcg/aarch64/bti-3.c b/tests/tcg/aarch64/bti-3.c
19
new file mode 100644
34
new file mode 100644
20
index XXXXXXX..XXXXXXX
35
index XXXXXXX..XXXXXXX
21
--- /dev/null
36
--- /dev/null
22
+++ b/hw/arm/smmu-internal.h
37
+++ b/tests/tcg/aarch64/bti-3.c
23
@@ -XXX,XX +XXX,XX @@
38
@@ -XXX,XX +XXX,XX @@
24
+/*
39
+/*
25
+ * ARM SMMU support - Internal API
40
+ * BTI vs PACIASP
26
+ *
27
+ * Copyright (c) 2017 Red Hat, Inc.
28
+ * Copyright (C) 2014-2016 Broadcom Corporation
29
+ * Written by Prem Mallappa, Eric Auger
30
+ *
31
+ * This program is free software; you can redistribute it and/or modify
32
+ * it under the terms of the GNU General Public License version 2 as
33
+ * published by the Free Software Foundation.
34
+ *
35
+ * This program is distributed in the hope that it will be useful,
36
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
37
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
38
+ * General Public License for more details.
39
+ *
40
+ * You should have received a copy of the GNU General Public License along
41
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
42
+ */
41
+ */
43
+
42
+
44
+#ifndef HW_ARM_SMMU_INTERNAL_H
43
+#include "bti-crt.inc.c"
45
+#define HW_ARM_SMMU_INTERNAL_H
46
+
44
+
47
+#define TBI0(tbi) ((tbi) & 0x1)
45
+static void skip2_sigill(int sig, siginfo_t *info, ucontext_t *uc)
48
+#define TBI1(tbi) ((tbi) & 0x2 >> 1)
49
+
50
+/* PTE Manipulation */
51
+
52
+#define ARM_LPAE_PTE_TYPE_SHIFT 0
53
+#define ARM_LPAE_PTE_TYPE_MASK 0x3
54
+
55
+#define ARM_LPAE_PTE_TYPE_BLOCK 1
56
+#define ARM_LPAE_PTE_TYPE_TABLE 3
57
+
58
+#define ARM_LPAE_L3_PTE_TYPE_RESERVED 1
59
+#define ARM_LPAE_L3_PTE_TYPE_PAGE 3
60
+
61
+#define ARM_LPAE_PTE_VALID (1 << 0)
62
+
63
+#define PTE_ADDRESS(pte, shift) \
64
+ (extract64(pte, shift, 47 - shift + 1) << shift)
65
+
66
+#define is_invalid_pte(pte) (!(pte & ARM_LPAE_PTE_VALID))
67
+
68
+#define is_reserved_pte(pte, level) \
69
+ ((level == 3) && \
70
+ ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_RESERVED))
71
+
72
+#define is_block_pte(pte, level) \
73
+ ((level < 3) && \
74
+ ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_PTE_TYPE_BLOCK))
75
+
76
+#define is_table_pte(pte, level) \
77
+ ((level < 3) && \
78
+ ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_PTE_TYPE_TABLE))
79
+
80
+#define is_page_pte(pte, level) \
81
+ ((level == 3) && \
82
+ ((pte & ARM_LPAE_PTE_TYPE_MASK) == ARM_LPAE_L3_PTE_TYPE_PAGE))
83
+
84
+/* access permissions */
85
+
86
+#define PTE_AP(pte) \
87
+ (extract64(pte, 6, 2))
88
+
89
+#define PTE_APTABLE(pte) \
90
+ (extract64(pte, 61, 2))
91
+
92
+/*
93
+ * TODO: At the moment all transactions are considered as privileged (EL1)
94
+ * as IOMMU translation callback does not pass user/priv attributes.
95
+ */
96
+#define is_permission_fault(ap, perm) \
97
+ (((perm) & IOMMU_WO) && ((ap) & 0x2))
98
+
99
+#define PTE_AP_TO_PERM(ap) \
100
+ (IOMMU_ACCESS_FLAG(true, !((ap) & 0x2)))
101
+
102
+/* Level Indexing */
103
+
104
+static inline int level_shift(int level, int granule_sz)
105
+{
46
+{
106
+ return granule_sz + (3 - level) * (granule_sz - 3);
47
+ uc->uc_mcontext.pc += 8;
48
+ uc->uc_mcontext.pstate = 1;
107
+}
49
+}
108
+
50
+
109
+static inline uint64_t level_page_mask(int level, int granule_sz)
51
+#define BTYPE_1() \
52
+ asm("mov %0,#1; adr x16, 1f; br x16; 1: hint #25; mov %0,#0" \
53
+ : "=r"(skipped) : : "x16", "x30")
54
+
55
+#define BTYPE_2() \
56
+ asm("mov %0,#1; adr x16, 1f; blr x16; 1: hint #25; mov %0,#0" \
57
+ : "=r"(skipped) : : "x16", "x30")
58
+
59
+#define BTYPE_3() \
60
+ asm("mov %0,#1; adr x15, 1f; br x15; 1: hint #25; mov %0,#0" \
61
+ : "=r"(skipped) : : "x15", "x30")
62
+
63
+#define TEST(WHICH, EXPECT) \
64
+ do { WHICH(); fail += skipped ^ EXPECT; } while (0)
65
+
66
+int main()
110
+{
67
+{
111
+ return ~(MAKE_64BIT_MASK(0, level_shift(level, granule_sz)));
68
+ int fail = 0;
69
+ int skipped;
70
+
71
+ /* Signal-like with SA_SIGINFO. */
72
+ signal_info(SIGILL, skip2_sigill);
73
+
74
+ /* With SCTLR_EL1.BT0 set, PACIASP is not compatible with type=3. */
75
+ TEST(BTYPE_1, 0);
76
+ TEST(BTYPE_2, 0);
77
+ TEST(BTYPE_3, 1);
78
+
79
+ return fail;
112
+}
80
+}
113
+
81
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
114
+static inline
115
+uint64_t iova_level_offset(uint64_t iova, int inputsize,
116
+ int level, int gsz)
117
+{
118
+ return ((iova & MAKE_64BIT_MASK(0, inputsize)) >> level_shift(level, gsz)) &
119
+ MAKE_64BIT_MASK(0, gsz - 3);
120
+}
121
+
122
+#endif
123
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
124
index XXXXXXX..XXXXXXX 100644
82
index XXXXXXX..XXXXXXX 100644
125
--- a/include/hw/arm/smmu-common.h
83
--- a/tests/tcg/aarch64/Makefile.target
126
+++ b/include/hw/arm/smmu-common.h
84
+++ b/tests/tcg/aarch64/Makefile.target
127
@@ -XXX,XX +XXX,XX @@ static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
85
@@ -XXX,XX +XXX,XX @@ endif
128
{
86
# BTI Tests
129
return PCI_BUILD_BDF(pci_bus_num(sdev->bus), sdev->devfn);
87
# bti-1 tests the elf notes, so we require special compiler support.
130
}
88
ifneq ($(CROSS_CC_HAS_ARMV8_BTI),)
131
+
89
-AARCH64_TESTS += bti-1
132
+/**
90
-bti-1: CFLAGS += -mbranch-protection=standard
133
+ * smmu_ptw - Perform the page table walk for a given iova / access flags
91
-bti-1: LDFLAGS += -nostdlib
134
+ * pair, according to @cfg translation config
92
+AARCH64_TESTS += bti-1 bti-3
135
+ */
93
+bti-1 bti-3: CFLAGS += -mbranch-protection=standard
136
+int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
94
+bti-1 bti-3: LDFLAGS += -nostdlib
137
+ IOMMUTLBEntry *tlbe, SMMUPTWEventInfo *info);
95
endif
138
+
96
# bti-2 tests PROT_BTI, so no special compiler support required.
139
+/**
97
AARCH64_TESTS += bti-2
140
+ * select_tt - compute which translation table shall be used according to
141
+ * the input iova and translation config and return the TT specific info
142
+ */
143
+SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova);
144
+
145
#endif /* HW_ARM_SMMU_COMMON */
146
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/hw/arm/smmu-common.c
149
+++ b/hw/arm/smmu-common.c
150
@@ -XXX,XX +XXX,XX @@
151
152
#include "qemu/error-report.h"
153
#include "hw/arm/smmu-common.h"
154
+#include "smmu-internal.h"
155
+
156
+/* VMSAv8-64 Translation */
157
+
158
+/**
159
+ * get_pte - Get the content of a page table entry located at
160
+ * @base_addr[@index]
161
+ */
162
+static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
163
+ SMMUPTWEventInfo *info)
164
+{
165
+ int ret;
166
+ dma_addr_t addr = baseaddr + index * sizeof(*pte);
167
+
168
+ /* TODO: guarantee 64-bit single-copy atomicity */
169
+ ret = dma_memory_read(&address_space_memory, addr,
170
+ (uint8_t *)pte, sizeof(*pte));
171
+
172
+ if (ret != MEMTX_OK) {
173
+ info->type = SMMU_PTW_ERR_WALK_EABT;
174
+ info->addr = addr;
175
+ return -EINVAL;
176
+ }
177
+ trace_smmu_get_pte(baseaddr, index, addr, *pte);
178
+ return 0;
179
+}
180
+
181
+/* VMSAv8-64 Translation Table Format Descriptor Decoding */
182
+
183
+/**
184
+ * get_page_pte_address - returns the L3 descriptor output address,
185
+ * ie. the page frame
186
+ * ARM ARM spec: Figure D4-17 VMSAv8-64 level 3 descriptor format
187
+ */
188
+static inline hwaddr get_page_pte_address(uint64_t pte, int granule_sz)
189
+{
190
+ return PTE_ADDRESS(pte, granule_sz);
191
+}
192
+
193
+/**
194
+ * get_table_pte_address - return table descriptor output address,
195
+ * ie. address of next level table
196
+ * ARM ARM Figure D4-16 VMSAv8-64 level0, level1, and level 2 descriptor formats
197
+ */
198
+static inline hwaddr get_table_pte_address(uint64_t pte, int granule_sz)
199
+{
200
+ return PTE_ADDRESS(pte, granule_sz);
201
+}
202
+
203
+/**
204
+ * get_block_pte_address - return block descriptor output address and block size
205
+ * ARM ARM Figure D4-16 VMSAv8-64 level0, level1, and level 2 descriptor formats
206
+ */
207
+static inline hwaddr get_block_pte_address(uint64_t pte, int level,
208
+ int granule_sz, uint64_t *bsz)
209
+{
210
+ int n = (granule_sz - 3) * (4 - level) + 3;
211
+
212
+ *bsz = 1 << n;
213
+ return PTE_ADDRESS(pte, n);
214
+}
215
+
216
+SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
217
+{
218
+ bool tbi = extract64(iova, 55, 1) ? TBI1(cfg->tbi) : TBI0(cfg->tbi);
219
+ uint8_t tbi_byte = tbi * 8;
220
+
221
+ if (cfg->tt[0].tsz &&
222
+ !extract64(iova, 64 - cfg->tt[0].tsz, cfg->tt[0].tsz - tbi_byte)) {
223
+ /* there is a ttbr0 region and we are in it (high bits all zero) */
224
+ return &cfg->tt[0];
225
+ } else if (cfg->tt[1].tsz &&
226
+ !extract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte)) {
227
+ /* there is a ttbr1 region and we are in it (high bits all one) */
228
+ return &cfg->tt[1];
229
+ } else if (!cfg->tt[0].tsz) {
230
+ /* ttbr0 region is "everything not in the ttbr1 region" */
231
+ return &cfg->tt[0];
232
+ } else if (!cfg->tt[1].tsz) {
233
+ /* ttbr1 region is "everything not in the ttbr0 region" */
234
+ return &cfg->tt[1];
235
+ }
236
+ /* in the gap between the two regions, this is a Translation fault */
237
+ return NULL;
238
+}
239
+
240
+/**
241
+ * smmu_ptw_64 - VMSAv8-64 Walk of the page tables for a given IOVA
242
+ * @cfg: translation config
243
+ * @iova: iova to translate
244
+ * @perm: access type
245
+ * @tlbe: IOMMUTLBEntry (out)
246
+ * @info: handle to an error info
247
+ *
248
+ * Return 0 on success, < 0 on error. In case of error, @info is filled
249
+ * and tlbe->perm is set to IOMMU_NONE.
250
+ * Upon success, @tlbe is filled with translated_addr and entry
251
+ * permission rights.
252
+ */
253
+static int smmu_ptw_64(SMMUTransCfg *cfg,
254
+ dma_addr_t iova, IOMMUAccessFlags perm,
255
+ IOMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
256
+{
257
+ dma_addr_t baseaddr, indexmask;
258
+ int stage = cfg->stage;
259
+ SMMUTransTableInfo *tt = select_tt(cfg, iova);
260
+ uint8_t level, granule_sz, inputsize, stride;
261
+
262
+ if (!tt || tt->disabled) {
263
+ info->type = SMMU_PTW_ERR_TRANSLATION;
264
+ goto error;
265
+ }
266
+
267
+ granule_sz = tt->granule_sz;
268
+ stride = granule_sz - 3;
269
+ inputsize = 64 - tt->tsz;
270
+ level = 4 - (inputsize - 4) / stride;
271
+ indexmask = (1ULL << (inputsize - (stride * (4 - level)))) - 1;
272
+ baseaddr = extract64(tt->ttb, 0, 48);
273
+ baseaddr &= ~indexmask;
274
+
275
+ tlbe->iova = iova;
276
+ tlbe->addr_mask = (1 << granule_sz) - 1;
277
+
278
+ while (level <= 3) {
279
+ uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
280
+ uint64_t mask = subpage_size - 1;
281
+ uint32_t offset = iova_level_offset(iova, inputsize, level, granule_sz);
282
+ uint64_t pte;
283
+ dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
284
+ uint8_t ap;
285
+
286
+ if (get_pte(baseaddr, offset, &pte, info)) {
287
+ goto error;
288
+ }
289
+ trace_smmu_ptw_level(level, iova, subpage_size,
290
+ baseaddr, offset, pte);
291
+
292
+ if (is_invalid_pte(pte) || is_reserved_pte(pte, level)) {
293
+ trace_smmu_ptw_invalid_pte(stage, level, baseaddr,
294
+ pte_addr, offset, pte);
295
+ info->type = SMMU_PTW_ERR_TRANSLATION;
296
+ goto error;
297
+ }
298
+
299
+ if (is_page_pte(pte, level)) {
300
+ uint64_t gpa = get_page_pte_address(pte, granule_sz);
301
+
302
+ ap = PTE_AP(pte);
303
+ if (is_permission_fault(ap, perm)) {
304
+ info->type = SMMU_PTW_ERR_PERMISSION;
305
+ goto error;
306
+ }
307
+
308
+ tlbe->translated_addr = gpa + (iova & mask);
309
+ tlbe->perm = PTE_AP_TO_PERM(ap);
310
+ trace_smmu_ptw_page_pte(stage, level, iova,
311
+ baseaddr, pte_addr, pte, gpa);
312
+ return 0;
313
+ }
314
+ if (is_block_pte(pte, level)) {
315
+ uint64_t block_size;
316
+ hwaddr gpa = get_block_pte_address(pte, level, granule_sz,
317
+ &block_size);
318
+
319
+ ap = PTE_AP(pte);
320
+ if (is_permission_fault(ap, perm)) {
321
+ info->type = SMMU_PTW_ERR_PERMISSION;
322
+ goto error;
323
+ }
324
+
325
+ trace_smmu_ptw_block_pte(stage, level, baseaddr,
326
+ pte_addr, pte, iova, gpa,
327
+ block_size >> 20);
328
+
329
+ tlbe->translated_addr = gpa + (iova & mask);
330
+ tlbe->perm = PTE_AP_TO_PERM(ap);
331
+ return 0;
332
+ }
333
+
334
+ /* table pte */
335
+ ap = PTE_APTABLE(pte);
336
+
337
+ if (is_permission_fault(ap, perm)) {
338
+ info->type = SMMU_PTW_ERR_PERMISSION;
339
+ goto error;
340
+ }
341
+ baseaddr = get_table_pte_address(pte, granule_sz);
342
+ level++;
343
+ }
344
+
345
+ info->type = SMMU_PTW_ERR_TRANSLATION;
346
+
347
+error:
348
+ tlbe->perm = IOMMU_NONE;
349
+ return -EINVAL;
350
+}
351
+
352
+/**
353
+ * smmu_ptw - Walk the page tables for an IOVA, according to @cfg
354
+ *
355
+ * @cfg: translation configuration
356
+ * @iova: iova to translate
357
+ * @perm: tentative access type
358
+ * @tlbe: returned entry
359
+ * @info: ptw event handle
360
+ *
361
+ * return 0 on success
362
+ */
363
+inline int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
364
+ IOMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
365
+{
366
+ if (!cfg->aa64) {
367
+ /*
368
+ * This code path is not entered as we check this while decoding
369
+ * the configuration data in the derived SMMU model.
370
+ */
371
+ g_assert_not_reached();
372
+ }
373
+
374
+ return smmu_ptw_64(cfg, iova, perm, tlbe, info);
375
+}
376
377
/**
378
* The bus number is used for lookup when SID based invalidation occurs.
379
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
380
index XXXXXXX..XXXXXXX 100644
381
--- a/hw/arm/trace-events
382
+++ b/hw/arm/trace-events
383
@@ -XXX,XX +XXX,XX @@
384
virt_acpi_setup(void) "No fw cfg or ACPI disabled. Bailing out."
385
386
# hw/arm/smmu-common.c
387
-smmu_add_mr(const char *name) "%s"
388
\ No newline at end of file
389
+smmu_add_mr(const char *name) "%s"
390
+smmu_page_walk(int stage, uint64_t baseaddr, int first_level, uint64_t start, uint64_t end) "stage=%d, baseaddr=0x%"PRIx64", first level=%d, start=0x%"PRIx64", end=0x%"PRIx64
391
+smmu_lookup_table(int level, uint64_t baseaddr, int granule_sz, uint64_t start, uint64_t end, int flags, uint64_t subpage_size) "level=%d baseaddr=0x%"PRIx64" granule=%d, start=0x%"PRIx64" end=0x%"PRIx64" flags=%d subpage_size=0x%"PRIx64
392
+smmu_ptw_level(int level, uint64_t iova, size_t subpage_size, uint64_t baseaddr, uint32_t offset, uint64_t pte) "level=%d iova=0x%"PRIx64" subpage_sz=0x%lx baseaddr=0x%"PRIx64" offset=%d => pte=0x%"PRIx64
393
+smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint32_t offset, uint64_t pte) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" offset=%d pte=0x%"PRIx64
394
+smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
395
+smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
396
+smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
397
--
2.17.0

--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Move ARMCPRegInfo and all related declarations to a new
internal header, out of the public cpu.h.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h        | 413 +++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h           | 368 ---------------------------------
 hw/arm/pxa2xx.c            |   1 +
 hw/arm/pxa2xx_pic.c        |   1 +
 hw/intc/arm_gicv3_cpuif.c  |   1 +
 hw/intc/arm_gicv3_kvm.c    |   2 +
 target/arm/cpu.c           |   1 +
 target/arm/cpu64.c         |   1 +
 target/arm/cpu_tcg.c       |   1 +
 target/arm/gdbstub.c       |   3 +-
 target/arm/helper.c        |   1 +
 target/arm/op_helper.c     |   1 +
 target/arm/translate-a64.c |   4 +-
 target/arm/translate.c     |   3 +-
 14 files changed, 427 insertions(+), 374 deletions(-)
 create mode 100644 target/arm/cpregs.h
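The header being moved describes each coprocessor/system register with an
ARMCPRegInfo and registers arrays of them via define_arm_cp_regs(). A minimal
hypothetical example of that idiom follows (the register itself is made up; at
this point in the series lists are still terminated by REGINFO_SENTINEL, which
a later patch in this queue replaces with ARRAY_SIZE-based registration):

/* A made-up constant, PL1-readable AArch32 register and its registration. */
static const ARMCPRegInfo demo_cp_reginfo[] = {
    { .name = "DEMO_IDREG", .cp = 15, .crn = 0, .crm = 3,
      .opc1 = 0, .opc2 = 7, .state = ARM_CP_STATE_AA32,
      .type = ARM_CP_CONST, .access = PL1_R, .resetvalue = 0x1234 },
    REGINFO_SENTINEL
};

static void demo_register_cp_regs(ARMCPU *cpu)
{
    define_arm_cp_regs(cpu, demo_cp_reginfo);
}
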
From: Prem Mallappa <prem.mallappa@broadcom.com>

This patch implements a skeleton for the smmuv3 device.
Datatypes and register definitions are introduced. The MMIO
region, the interrupts and the queue are initialized.

Only the MMIO read operation is implemented here.

Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1524665762-31355-5-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/Makefile.objs     |   2 +-
 hw/arm/smmuv3-internal.h | 142 +++++++++++++++
 include/hw/arm/smmuv3.h  |  87 ++++++++++
 hw/arm/smmuv3.c          | 366 +++++++++++++++++++++++++++++++++
 hw/arm/trace-events      |   3 +
 5 files changed, 599 insertions(+), 1 deletion(-)
 create mode 100644 hw/arm/smmuv3-internal.h
 create mode 100644 include/hw/arm/smmuv3.h
 create mode 100644 hw/arm/smmuv3.c
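The register layout in this patch leans on hw/registerfields.h: REG32() names a
register and its offset, FIELD() names a bit field within it, and
FIELD_DP32()/FIELD_EX32() deposit and extract those fields, which is how
smmuv3_init_regs() builds up the IDR values below. A small self-contained
sketch of that idiom, reusing the IDR0.S1P field declared in this patch (the
demo_* helpers are illustrative only, not part of the patch):

#include "qemu/osdep.h"
#include "hw/registerfields.h"

REG32(IDR0, 0x0)
    FIELD(IDR0, S1P, 1, 1)   /* bit [1]: stage 1 translation supported */

/* Deposit S1P = 1, as smmuv3_init_regs() does for each advertised feature. */
static uint32_t demo_set_s1p(uint32_t idr0)
{
    return FIELD_DP32(idr0, IDR0, S1P, 1);
}

/* Extract the same field when decoding a guest read of IDR0. */
static bool demo_has_s1p(uint32_t idr0)
{
    return FIELD_EX32(idr0, IDR0, S1P);
}
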
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
29
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/hw/arm/Makefile.objs
28
+++ b/hw/arm/Makefile.objs
29
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MPS2) += mps2-tz.o
30
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
31
obj-$(CONFIG_IOTKIT) += iotkit.o
32
obj-$(CONFIG_FSL_IMX7) += fsl-imx7.o mcimx7d-sabre.o
33
-obj-$(CONFIG_ARM_SMMUV3) += smmu-common.o
34
+obj-$(CONFIG_ARM_SMMUV3) += smmu-common.o smmuv3.o
35
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
36
new file mode 100644
30
new file mode 100644
37
index XXXXXXX..XXXXXXX
31
index XXXXXXX..XXXXXXX
38
--- /dev/null
32
--- /dev/null
39
+++ b/hw/arm/smmuv3-internal.h
33
+++ b/target/arm/cpregs.h
40
@@ -XXX,XX +XXX,XX @@
34
@@ -XXX,XX +XXX,XX @@
41
+/*
35
+/*
42
+ * ARM SMMUv3 support - Internal API
36
+ * QEMU ARM CP Register access and descriptions
43
+ *
37
+ *
44
+ * Copyright (C) 2014-2016 Broadcom Corporation
38
+ * Copyright (c) 2022 Linaro Ltd
45
+ * Copyright (c) 2017 Red Hat, Inc.
46
+ * Written by Prem Mallappa, Eric Auger
47
+ *
39
+ *
48
+ * This program is free software; you can redistribute it and/or modify
40
+ * This program is free software; you can redistribute it and/or
49
+ * it under the terms of the GNU General Public License version 2 as
41
+ * modify it under the terms of the GNU General Public License
50
+ * published by the Free Software Foundation.
42
+ * as published by the Free Software Foundation; either version 2
43
+ * of the License, or (at your option) any later version.
51
+ *
44
+ *
52
+ * This program is distributed in the hope that it will be useful,
45
+ * This program is distributed in the hope that it will be useful,
53
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
46
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
54
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
47
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
55
+ * GNU General Public License for more details.
48
+ * GNU General Public License for more details.
56
+ *
49
+ *
57
+ * You should have received a copy of the GNU General Public License along
50
+ * You should have received a copy of the GNU General Public License
58
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
51
+ * along with this program; if not, see
59
+ */
52
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
60
+
53
+ */
61
+#ifndef HW_ARM_SMMU_V3_INTERNAL_H
54
+
62
+#define HW_ARM_SMMU_V3_INTERNAL_H
55
+#ifndef TARGET_ARM_CPREGS_H
63
+
56
+#define TARGET_ARM_CPREGS_H
64
+#include "hw/arm/smmu-common.h"
57
+
65
+
58
+/*
66
+/* MMIO Registers */
59
+ * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
67
+
60
+ * special-behaviour cp reg and bits [11..8] indicate what behaviour
68
+REG32(IDR0, 0x0)
61
+ * it has. Otherwise it is a simple cp reg, where CONST indicates that
69
+ FIELD(IDR0, S1P, 1 , 1)
62
+ * TCG can assume the value to be constant (ie load at translate time)
70
+ FIELD(IDR0, TTF, 2 , 2)
63
+ * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
71
+ FIELD(IDR0, COHACC, 4 , 1)
64
+ * indicates that the TB should not be ended after a write to this register
72
+ FIELD(IDR0, ASID16, 12, 1)
65
+ * (the default is that the TB ends after cp writes). OVERRIDE permits
73
+ FIELD(IDR0, TTENDIAN, 21, 2)
66
+ * a register definition to override a previous definition for the
74
+ FIELD(IDR0, STALL_MODEL, 24, 2)
67
+ * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
75
+ FIELD(IDR0, TERM_MODEL, 26, 1)
68
+ * old must have the OVERRIDE bit set.
76
+ FIELD(IDR0, STLEVEL, 27, 2)
69
+ * ALIAS indicates that this register is an alias view of some underlying
77
+
70
+ * state which is also visible via another register, and that the other
78
+REG32(IDR1, 0x4)
71
+ * register is handling migration and reset; registers marked ALIAS will not be
79
+ FIELD(IDR1, SIDSIZE, 0 , 6)
72
+ * migrated but may have their state set by syncing of register state from KVM.
80
+ FIELD(IDR1, EVENTQS, 16, 5)
73
+ * NO_RAW indicates that this register has no underlying state and does not
81
+ FIELD(IDR1, CMDQS, 21, 5)
74
+ * support raw access for state saving/loading; it will not be used for either
82
+
75
+ * migration or KVM state synchronization. (Typically this is for "registers"
83
+#define SMMU_IDR1_SIDSIZE 16
76
+ * which are actually used as instructions for cache maintenance and so on.)
84
+#define SMMU_CMDQS 19
77
+ * IO indicates that this register does I/O and therefore its accesses
85
+#define SMMU_EVENTQS 19
78
+ * need to be marked with gen_io_start() and also end the TB. In particular,
86
+
79
+ * registers which implement clocks or timers require this.
87
+REG32(IDR2, 0x8)
80
+ * RAISES_EXC is for when the read or write hook might raise an exception;
88
+REG32(IDR3, 0xc)
81
+ * the generated code will synchronize the CPU state before calling the hook
89
+REG32(IDR4, 0x10)
82
+ * so that it is safe for the hook to call raise_exception().
90
+REG32(IDR5, 0x14)
83
+ * NEWEL is for writes to registers that might change the exception
91
+ FIELD(IDR5, OAS, 0, 3);
84
+ * level - typically on older ARM chips. For those cases we need to
92
+ FIELD(IDR5, GRAN4K, 4, 1);
85
+ * re-read the new el when recomputing the translation flags.
93
+ FIELD(IDR5, GRAN16K, 5, 1);
86
+ */
94
+ FIELD(IDR5, GRAN64K, 6, 1);
87
+#define ARM_CP_SPECIAL 0x0001
95
+
88
+#define ARM_CP_CONST 0x0002
96
+#define SMMU_IDR5_OAS 4
89
+#define ARM_CP_64BIT 0x0004
97
+
90
+#define ARM_CP_SUPPRESS_TB_END 0x0008
98
+REG32(IIDR, 0x1c)
91
+#define ARM_CP_OVERRIDE 0x0010
99
+REG32(CR0, 0x20)
92
+#define ARM_CP_ALIAS 0x0020
100
+ FIELD(CR0, SMMU_ENABLE, 0, 1)
93
+#define ARM_CP_IO 0x0040
101
+ FIELD(CR0, EVENTQEN, 2, 1)
94
+#define ARM_CP_NO_RAW 0x0080
102
+ FIELD(CR0, CMDQEN, 3, 1)
95
+#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
103
+
96
+#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
104
+REG32(CR0ACK, 0x24)
97
+#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
105
+REG32(CR1, 0x28)
98
+#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
106
+REG32(CR2, 0x2c)
99
+#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
107
+REG32(STATUSR, 0x40)
100
+#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
108
+REG32(IRQ_CTRL, 0x50)
101
+#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
109
+ FIELD(IRQ_CTRL, GERROR_IRQEN, 0, 1)
102
+#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
110
+ FIELD(IRQ_CTRL, PRI_IRQEN, 1, 1)
103
+#define ARM_CP_FPU 0x1000
111
+ FIELD(IRQ_CTRL, EVENTQ_IRQEN, 2, 1)
104
+#define ARM_CP_SVE 0x2000
112
+
105
+#define ARM_CP_NO_GDB 0x4000
113
+REG32(IRQ_CTRL_ACK, 0x54)
106
+#define ARM_CP_RAISES_EXC 0x8000
114
+REG32(GERROR, 0x60)
107
+#define ARM_CP_NEWEL 0x10000
115
+ FIELD(GERROR, CMDQ_ERR, 0, 1)
108
+/* Used only as a terminator for ARMCPRegInfo lists */
116
+ FIELD(GERROR, EVENTQ_ABT_ERR, 2, 1)
109
+#define ARM_CP_SENTINEL 0xfffff
117
+ FIELD(GERROR, PRIQ_ABT_ERR, 3, 1)
110
+/* Mask of only the flag bits in a type field */
118
+ FIELD(GERROR, MSI_CMDQ_ABT_ERR, 4, 1)
111
+#define ARM_CP_FLAG_MASK 0x1f0ff
119
+ FIELD(GERROR, MSI_EVENTQ_ABT_ERR, 5, 1)
112
+
120
+ FIELD(GERROR, MSI_PRIQ_ABT_ERR, 6, 1)
113
+/*
121
+ FIELD(GERROR, MSI_GERROR_ABT_ERR, 7, 1)
114
+ * Valid values for ARMCPRegInfo state field, indicating which of
122
+ FIELD(GERROR, MSI_SFM_ERR, 8, 1)
115
+ * the AArch32 and AArch64 execution states this register is visible in.
123
+
116
+ * If the reginfo doesn't explicitly specify then it is AArch32 only.
124
+REG32(GERRORN, 0x64)
117
+ * If the reginfo is declared to be visible in both states then a second
125
+
118
+ * reginfo is synthesised for the AArch32 view of the AArch64 register,
126
+#define A_GERROR_IRQ_CFG0 0x68 /* 64b */
119
+ * such that the AArch32 view is the lower 32 bits of the AArch64 one.
127
+REG32(GERROR_IRQ_CFG1, 0x70)
120
+ * Note that we rely on the values of these enums as we iterate through
128
+REG32(GERROR_IRQ_CFG2, 0x74)
121
+ * the various states in some places.
129
+
122
+ */
130
+#define A_STRTAB_BASE 0x80 /* 64b */
123
+enum {
131
+
124
+ ARM_CP_STATE_AA32 = 0,
132
+#define SMMU_BASE_ADDR_MASK 0xffffffffffe0
125
+ ARM_CP_STATE_AA64 = 1,
133
+
126
+ ARM_CP_STATE_BOTH = 2,
134
+REG32(STRTAB_BASE_CFG, 0x88)
127
+};
135
+ FIELD(STRTAB_BASE_CFG, FMT, 16, 2)
128
+
136
+ FIELD(STRTAB_BASE_CFG, SPLIT, 6 , 5)
129
+/*
137
+ FIELD(STRTAB_BASE_CFG, LOG2SIZE, 0 , 6)
130
+ * ARM CP register secure state flags. These flags identify security state
138
+
131
+ * attributes for a given CP register entry.
139
+#define A_CMDQ_BASE 0x90 /* 64b */
132
+ * The existence of both or neither secure and non-secure flags indicates that
140
+REG32(CMDQ_PROD, 0x98)
133
+ * the register has both a secure and non-secure hash entry. A single one of
141
+REG32(CMDQ_CONS, 0x9c)
134
+ * these flags causes the register to only be hashed for the specified
142
+ FIELD(CMDQ_CONS, ERR, 24, 7)
135
+ * security state.
143
+
136
+ * Although definitions may have any combination of the S/NS bits, each
144
+#define A_EVENTQ_BASE 0xa0 /* 64b */
137
+ * registered entry will only have one to identify whether the entry is secure
145
+REG32(EVENTQ_PROD, 0xa8)
138
+ * or non-secure.
146
+REG32(EVENTQ_CONS, 0xac)
139
+ */
147
+
140
+enum {
148
+#define A_EVENTQ_IRQ_CFG0 0xb0 /* 64b */
141
+ ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
149
+REG32(EVENTQ_IRQ_CFG1, 0xb8)
142
+ ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
150
+REG32(EVENTQ_IRQ_CFG2, 0xbc)
143
+};
151
+
144
+
152
+#define A_IDREGS 0xfd0
145
+/*
153
+
146
+ * Return true if cptype is a valid type field. This is used to try to
154
+static inline int smmu_enabled(SMMUv3State *s)
147
+ * catch errors where the sentinel has been accidentally left off the end
148
+ * of a list of registers.
149
+ */
150
+static inline bool cptype_valid(int cptype)
155
+{
151
+{
156
+ return FIELD_EX32(s->cr[0], CR0, SMMU_ENABLE);
152
+ return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
153
+ || ((cptype & ARM_CP_SPECIAL) &&
154
+ ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
157
+}
155
+}
158
+
156
+
159
+/* Command Queue Entry */
157
+/*
160
+typedef struct Cmd {
158
+ * Access rights:
161
+ uint32_t word[4];
159
+ * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
162
+} Cmd;
160
+ * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
163
+
161
+ * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
164
+/* Event Queue Entry */
162
+ * (ie any of the privileged modes in Secure state, or Monitor mode).
165
+typedef struct Evt {
163
+ * If a register is accessible in one privilege level it's always accessible
166
+ uint32_t word[8];
164
+ * in higher privilege levels too. Since "Secure PL1" also follows this rule
167
+} Evt;
165
+ * (ie anything visible in PL2 is visible in S-PL1, some things are only
168
+
166
+ * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
169
+static inline uint32_t smmuv3_idreg(int regoffset)
167
+ * terminology a little and call this PL3.
168
+ * In AArch64 things are somewhat simpler as the PLx bits line up exactly
169
+ * with the ELx exception levels.
170
+ *
171
+ * If access permissions for a register are more complex than can be
172
+ * described with these bits, then use a laxer set of restrictions, and
173
+ * do the more restrictive/complex check inside a helper function.
174
+ */
175
+#define PL3_R 0x80
176
+#define PL3_W 0x40
177
+#define PL2_R (0x20 | PL3_R)
178
+#define PL2_W (0x10 | PL3_W)
179
+#define PL1_R (0x08 | PL2_R)
180
+#define PL1_W (0x04 | PL2_W)
181
+#define PL0_R (0x02 | PL1_R)
182
+#define PL0_W (0x01 | PL1_W)
183
+
184
+/*
185
+ * For user-mode some registers are accessible to EL0 via a kernel
186
+ * trap-and-emulate ABI. In this case we define the read permissions
187
+ * as actually being PL0_R. However some bits of any given register
188
+ * may still be masked.
189
+ */
190
+#ifdef CONFIG_USER_ONLY
191
+#define PL0U_R PL0_R
192
+#else
193
+#define PL0U_R PL1_R
194
+#endif
195
+
196
+#define PL3_RW (PL3_R | PL3_W)
197
+#define PL2_RW (PL2_R | PL2_W)
198
+#define PL1_RW (PL1_R | PL1_W)
199
+#define PL0_RW (PL0_R | PL0_W)
200
+
201
+typedef enum CPAccessResult {
202
+ /* Access is permitted */
203
+ CP_ACCESS_OK = 0,
204
+ /*
205
+ * Access fails due to a configurable trap or enable which would
206
+ * result in a categorized exception syndrome giving information about
207
+ * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
208
+ * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
209
+ * PL1 if in EL0, otherwise to the current EL).
210
+ */
211
+ CP_ACCESS_TRAP = 1,
212
+ /*
213
+ * Access fails and results in an exception syndrome 0x0 ("uncategorized").
214
+ * Note that this is not a catch-all case -- the set of cases which may
215
+ * result in this failure is specifically defined by the architecture.
216
+ */
217
+ CP_ACCESS_TRAP_UNCATEGORIZED = 2,
218
+ /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
219
+ CP_ACCESS_TRAP_EL2 = 3,
220
+ CP_ACCESS_TRAP_EL3 = 4,
221
+ /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
222
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
223
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
224
+} CPAccessResult;
225
+
226
+typedef struct ARMCPRegInfo ARMCPRegInfo;
227
+
228
+/*
229
+ * Access functions for coprocessor registers. These cannot fail and
230
+ * may not raise exceptions.
231
+ */
232
+typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
233
+typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
234
+ uint64_t value);
235
+/* Access permission check functions for coprocessor registers. */
236
+typedef CPAccessResult CPAccessFn(CPUARMState *env,
237
+ const ARMCPRegInfo *opaque,
238
+ bool isread);
239
+/* Hook function for register reset */
240
+typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
241
+
242
+#define CP_ANY 0xff
243
+
244
+/* Definition of an ARM coprocessor register */
245
+struct ARMCPRegInfo {
246
+ /* Name of register (useful mainly for debugging, need not be unique) */
247
+ const char *name;
248
+ /*
249
+ * Location of register: coprocessor number and (crn,crm,opc1,opc2)
250
+ * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
251
+ * 'wildcard' field -- any value of that field in the MRC/MCR insn
252
+ * will be decoded to this register. The register read and write
253
+ * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
254
+ * used by the program, so it is possible to register a wildcard and
255
+ * then behave differently on read/write if necessary.
256
+ * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
257
+ * must both be zero.
258
+ * For AArch64-visible registers, opc0 is also used.
259
+ * Since there are no "coprocessors" in AArch64, cp is purely used as a
260
+ * way to distinguish (for KVM's benefit) guest-visible system registers
261
+ * from demuxed ones provided to preserve the "no side effects on
262
+ * KVM register read/write from QEMU" semantics. cp==0x13 is guest
263
+ * visible (to match KVM's encoding); cp==0 will be converted to
264
+ * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
265
+ */
266
+ uint8_t cp;
267
+ uint8_t crn;
268
+ uint8_t crm;
269
+ uint8_t opc0;
270
+ uint8_t opc1;
271
+ uint8_t opc2;
272
+ /* Execution state in which this register is visible: ARM_CP_STATE_* */
273
+ int state;
274
+ /* Register type: ARM_CP_* bits/values */
275
+ int type;
276
+ /* Access rights: PL*_[RW] */
277
+ int access;
278
+ /* Security state: ARM_CP_SECSTATE_* bits/values */
279
+ int secure;
280
+ /*
281
+ * The opaque pointer passed to define_arm_cp_regs_with_opaque() when
282
+ * this register was defined: can be used to hand data through to the
283
+ * register read/write functions, since they are passed the ARMCPRegInfo*.
284
+ */
285
+ void *opaque;
286
+ /*
287
+ * Value of this register, if it is ARM_CP_CONST. Otherwise, if
288
+ * fieldoffset is non-zero, the reset value of the register.
289
+ */
290
+ uint64_t resetvalue;
291
+ /*
292
+ * Offset of the field in CPUARMState for this register.
293
+ * This is not needed if either:
294
+ * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
295
+ * 2. both readfn and writefn are specified
296
+ */
297
+ ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
298
+
299
+ /*
300
+ * Offsets of the secure and non-secure fields in CPUARMState for the
301
+ * register if it is banked. These fields are only used during the static
302
+ * registration of a register. During hashing the bank associated
303
+ * with a given security state is copied to fieldoffset which is used from
304
+ * there on out.
305
+ *
306
+ * It is expected that register definitions use either fieldoffset or
307
+ * bank_fieldoffsets in the definition but not both. It is also expected
308
+ * that both bank offsets are set when defining a banked register. This
309
+ * use indicates that a register is banked.
310
+ */
311
+ ptrdiff_t bank_fieldoffsets[2];
312
+
313
+ /*
314
+ * Function for making any access checks for this register in addition to
315
+ * those specified by the 'access' permissions bits. If NULL, no extra
316
+ * checks required. The access check is performed at runtime, not at
317
+ * translate time.
318
+ */
319
+ CPAccessFn *accessfn;
320
+ /*
321
+ * Function for handling reads of this register. If NULL, then reads
322
+ * will be done by loading from the offset into CPUARMState specified
323
+ * by fieldoffset.
324
+ */
325
+ CPReadFn *readfn;
326
+ /*
327
+ * Function for handling writes of this register. If NULL, then writes
328
+ * will be done by writing to the offset into CPUARMState specified
329
+ * by fieldoffset.
330
+ */
331
+ CPWriteFn *writefn;
332
+ /*
333
+ * Function for doing a "raw" read; used when we need to copy
334
+ * coprocessor state to the kernel for KVM or out for
335
+ * migration. This only needs to be provided if there is also a
336
+ * readfn and it has side effects (for instance clear-on-read bits).
337
+ */
338
+ CPReadFn *raw_readfn;
339
+ /*
340
+ * Function for doing a "raw" write; used when we need to copy KVM
341
+ * kernel coprocessor state into userspace, or for inbound
342
+ * migration. This only needs to be provided if there is also a
343
+ * writefn and it masks out "unwritable" bits or has write-one-to-clear
344
+ * or similar behaviour.
345
+ */
346
+ CPWriteFn *raw_writefn;
347
+ /*
348
+ * Function for resetting the register. If NULL, then reset will be done
349
+ * by writing resetvalue to the field specified in fieldoffset. If
350
+ * fieldoffset is 0 then no reset will be done.
351
+ */
352
+ CPResetFn *resetfn;
353
+
354
+ /*
355
+ * "Original" writefn and readfn.
356
+ * For ARMv8.1-VHE register aliases, we overwrite the read/write
357
+ * accessor functions of various EL1/EL0 to perform the runtime
358
+ * check for which sysreg should actually be modified, and then
359
+ * forwards the operation. Before overwriting the accessors,
360
+ * the original function is copied here, so that accesses that
361
+ * really do go to the EL1/EL0 version proceed normally.
362
+ * (The corresponding EL2 register is linked via opaque.)
363
+ */
364
+ CPReadFn *orig_readfn;
365
+ CPWriteFn *orig_writefn;
366
+};
367
+
368
+/*
369
+ * Macros which are lvalues for the field in CPUARMState for the
370
+ * ARMCPRegInfo *ri.
371
+ */
372
+#define CPREG_FIELD32(env, ri) \
373
+ (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
374
+#define CPREG_FIELD64(env, ri) \
375
+ (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
376
+
377
+#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
378
+
379
+void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
380
+ const ARMCPRegInfo *regs, void *opaque);
381
+void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
382
+ const ARMCPRegInfo *regs, void *opaque);
383
+static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
170
+{
384
+{
171
+ /*
385
+ define_arm_cp_regs_with_opaque(cpu, regs, 0);
172
+ * Return the value of the Primecell/Corelink ID registers at the
173
+ * specified offset from the first ID register.
174
+ * These value indicate an ARM implementation of MMU600 p1
175
+ */
176
+ static const uint8_t smmuv3_ids[] = {
177
+ 0x04, 0, 0, 0, 0x84, 0xB4, 0xF0, 0x10, 0x0D, 0xF0, 0x05, 0xB1
178
+ };
179
+ return smmuv3_ids[regoffset / 4];
180
+}
386
+}
181
+
387
+static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
182
+#endif
183
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
184
new file mode 100644
185
index XXXXXXX..XXXXXXX
186
--- /dev/null
187
+++ b/include/hw/arm/smmuv3.h
188
@@ -XXX,XX +XXX,XX @@
189
+/*
190
+ * Copyright (C) 2014-2016 Broadcom Corporation
191
+ * Copyright (c) 2017 Red Hat, Inc.
192
+ * Written by Prem Mallappa, Eric Auger
193
+ *
194
+ * This program is free software; you can redistribute it and/or modify
195
+ * it under the terms of the GNU General Public License version 2 as
196
+ * published by the Free Software Foundation.
197
+ *
198
+ * This program is distributed in the hope that it will be useful,
199
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
200
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
201
+ * GNU General Public License for more details.
202
+ *
203
+ * You should have received a copy of the GNU General Public License along
204
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
205
+ */
206
+
207
+#ifndef HW_ARM_SMMUV3_H
208
+#define HW_ARM_SMMUV3_H
209
+
210
+#include "hw/arm/smmu-common.h"
211
+#include "hw/registerfields.h"
212
+
213
+#define TYPE_SMMUV3_IOMMU_MEMORY_REGION "smmuv3-iommu-memory-region"
214
+
215
+typedef struct SMMUQueue {
216
+ uint64_t base; /* base register */
217
+ uint32_t prod;
218
+ uint32_t cons;
219
+ uint8_t entry_size;
220
+ uint8_t log2size;
221
+} SMMUQueue;
222
+
223
+typedef struct SMMUv3State {
224
+ SMMUState smmu_state;
225
+
226
+ uint32_t features;
227
+ uint8_t sid_size;
228
+ uint8_t sid_split;
229
+
230
+ uint32_t idr[6];
231
+ uint32_t iidr;
232
+ uint32_t cr[3];
233
+ uint32_t cr0ack;
234
+ uint32_t statusr;
235
+ uint32_t irq_ctrl;
236
+ uint32_t gerror;
237
+ uint32_t gerrorn;
238
+ uint64_t gerror_irq_cfg0;
239
+ uint32_t gerror_irq_cfg1;
240
+ uint32_t gerror_irq_cfg2;
241
+ uint64_t strtab_base;
242
+ uint32_t strtab_base_cfg;
243
+ uint64_t eventq_irq_cfg0;
244
+ uint32_t eventq_irq_cfg1;
245
+ uint32_t eventq_irq_cfg2;
246
+
247
+ SMMUQueue eventq, cmdq;
248
+
249
+ qemu_irq irq[4];
250
+} SMMUv3State;
251
+
252
+typedef enum {
253
+ SMMU_IRQ_EVTQ,
254
+ SMMU_IRQ_PRIQ,
255
+ SMMU_IRQ_CMD_SYNC,
256
+ SMMU_IRQ_GERROR,
257
+} SMMUIrq;
258
+
259
+typedef struct {
260
+ /*< private >*/
261
+ SMMUBaseClass smmu_base_class;
262
+ /*< public >*/
263
+
264
+ DeviceRealize parent_realize;
265
+ DeviceReset parent_reset;
266
+} SMMUv3Class;
267
+
268
+#define TYPE_ARM_SMMUV3 "arm-smmuv3"
269
+#define ARM_SMMUV3(obj) OBJECT_CHECK(SMMUv3State, (obj), TYPE_ARM_SMMUV3)
270
+#define ARM_SMMUV3_CLASS(klass) \
271
+ OBJECT_CLASS_CHECK(SMMUv3Class, (klass), TYPE_ARM_SMMUV3)
272
+#define ARM_SMMUV3_GET_CLASS(obj) \
273
+ OBJECT_GET_CLASS(SMMUv3Class, (obj), TYPE_ARM_SMMUV3)
274
+
275
+#endif
276
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
277
new file mode 100644
278
index XXXXXXX..XXXXXXX
279
--- /dev/null
280
+++ b/hw/arm/smmuv3.c
281
@@ -XXX,XX +XXX,XX @@
282
+/*
283
+ * Copyright (C) 2014-2016 Broadcom Corporation
284
+ * Copyright (c) 2017 Red Hat, Inc.
285
+ * Written by Prem Mallappa, Eric Auger
286
+ *
287
+ * This program is free software; you can redistribute it and/or modify
288
+ * it under the terms of the GNU General Public License version 2 as
289
+ * published by the Free Software Foundation.
290
+ *
291
+ * This program is distributed in the hope that it will be useful,
292
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
293
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
294
+ * GNU General Public License for more details.
295
+ *
296
+ * You should have received a copy of the GNU General Public License along
297
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
298
+ */
299
+
300
+#include "qemu/osdep.h"
301
+#include "hw/boards.h"
302
+#include "sysemu/sysemu.h"
303
+#include "hw/sysbus.h"
304
+#include "hw/qdev-core.h"
305
+#include "hw/pci/pci.h"
306
+#include "exec/address-spaces.h"
307
+#include "trace.h"
308
+#include "qemu/log.h"
309
+#include "qemu/error-report.h"
310
+#include "qapi/error.h"
311
+
312
+#include "hw/arm/smmuv3.h"
313
+#include "smmuv3-internal.h"
314
+
315
+static void smmuv3_init_regs(SMMUv3State *s)
316
+{
388
+{
317
+ /**
389
+ define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
318
+ * IDR0: stage1 only, AArch64 only, coherent access, 16b ASID,
319
+ * multi-level stream table
320
+ */
321
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, S1P, 1); /* stage 1 supported */
322
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTF, 2); /* AArch64 PTW only */
323
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, COHACC, 1); /* IO coherent */
324
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, ASID16, 1); /* 16-bit ASID */
325
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TTENDIAN, 2); /* little endian */
326
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STALL_MODEL, 1); /* No stall */
327
+ /* terminated transaction will always be aborted/error returned */
328
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, TERM_MODEL, 1);
329
+ /* 2-level stream table supported */
330
+ s->idr[0] = FIELD_DP32(s->idr[0], IDR0, STLEVEL, 1);
331
+
332
+ s->idr[1] = FIELD_DP32(s->idr[1], IDR1, SIDSIZE, SMMU_IDR1_SIDSIZE);
333
+ s->idr[1] = FIELD_DP32(s->idr[1], IDR1, EVENTQS, SMMU_EVENTQS);
334
+ s->idr[1] = FIELD_DP32(s->idr[1], IDR1, CMDQS, SMMU_CMDQS);
335
+
336
+ /* 4K and 64K granule support */
337
+ s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
338
+ s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
339
+ s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
340
+
341
+ s->cmdq.base = deposit64(s->cmdq.base, 0, 5, SMMU_CMDQS);
342
+ s->cmdq.prod = 0;
343
+ s->cmdq.cons = 0;
344
+ s->cmdq.entry_size = sizeof(struct Cmd);
345
+ s->eventq.base = deposit64(s->eventq.base, 0, 5, SMMU_EVENTQS);
346
+ s->eventq.prod = 0;
347
+ s->eventq.cons = 0;
348
+ s->eventq.entry_size = sizeof(struct Evt);
349
+
350
+ s->features = 0;
351
+ s->sid_split = 0;
352
+}
390
+}
353
+
391
+const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
354
+static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
392
+
355
+ unsigned size, MemTxAttrs attrs)
393
+/*
394
+ * Definition of an ARM co-processor register as viewed from
395
+ * userspace. This is used for presenting sanitised versions of
396
+ * registers to userspace when emulating the Linux AArch64 CPU
397
+ * ID/feature ABI (advertised as HWCAP_CPUID).
398
+ */
399
+typedef struct ARMCPRegUserSpaceInfo {
400
+ /* Name of register */
401
+ const char *name;
402
+
403
+ /* Is the name actually a glob pattern */
404
+ bool is_glob;
405
+
406
+ /* Only some bits are exported to user space */
407
+ uint64_t exported_bits;
408
+
409
+ /* Fixed bits are applied after the mask */
410
+ uint64_t fixed_bits;
411
+} ARMCPRegUserSpaceInfo;
412
+
413
+#define REGUSERINFO_SENTINEL { .name = NULL }
414
+
415
+void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
416
+
417
+/* CPWriteFn that can be used to implement writes-ignored behaviour */
418
+void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
419
+ uint64_t value);
420
+/* CPReadFn that can be used for read-as-zero behaviour */
421
+uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
422
+
423
+/*
424
+ * CPResetFn that does nothing, for use if no reset is required even
425
+ * if fieldoffset is non zero.
426
+ */
427
+void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
428
+
429
+/*
430
+ * Return true if this reginfo struct's field in the cpu state struct
431
+ * is 64 bits wide.
432
+ */
433
+static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
356
+{
434
+{
357
+ /* not yet implemented */
435
+ return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
358
+ return MEMTX_ERROR;
359
+}
436
+}
360
+
437
+
361
+static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
438
+static inline bool cp_access_ok(int current_el,
362
+ uint64_t *data, MemTxAttrs attrs)
439
+ const ARMCPRegInfo *ri, int isread)
363
+{
440
+{
364
+ switch (offset) {
441
+ return (ri->access >> ((current_el * 2) + isread)) & 1;
365
+ case A_GERROR_IRQ_CFG0:
366
+ *data = s->gerror_irq_cfg0;
367
+ return MEMTX_OK;
368
+ case A_STRTAB_BASE:
369
+ *data = s->strtab_base;
370
+ return MEMTX_OK;
371
+ case A_CMDQ_BASE:
372
+ *data = s->cmdq.base;
373
+ return MEMTX_OK;
374
+ case A_EVENTQ_BASE:
375
+ *data = s->eventq.base;
376
+ return MEMTX_OK;
377
+ default:
378
+ *data = 0;
379
+ qemu_log_mask(LOG_UNIMP,
380
+ "%s Unexpected 64-bit access to 0x%"PRIx64" (RAZ)\n",
381
+ __func__, offset);
382
+ return MEMTX_OK;
383
+ }
384
+}
442
+}
385
+
443
+
386
+static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
444
+/* Raw read of a coprocessor register (as needed for migration, etc) */
387
+ uint64_t *data, MemTxAttrs attrs)
445
+uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
388
+{
446
+
389
+ switch (offset) {
447
+#endif /* TARGET_ARM_CPREGS_H */
390
+ case A_IDREGS ... A_IDREGS + 0x1f:
448
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
391
+ *data = smmuv3_idreg(offset - A_IDREGS);
449
index XXXXXXX..XXXXXXX 100644
392
+ return MEMTX_OK;
450
--- a/target/arm/cpu.h
393
+ case A_IDR0 ... A_IDR5:
451
+++ b/target/arm/cpu.h
394
+ *data = s->idr[(offset - A_IDR0) / 4];
452
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
395
+ return MEMTX_OK;
453
return kvmid;
396
+ case A_IIDR:
454
}
397
+ *data = s->iidr;
455
398
+ return MEMTX_OK;
456
-/* ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
399
+ case A_CR0:
457
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
400
+ *data = s->cr[0];
458
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
401
+ return MEMTX_OK;
459
- * TCG can assume the value to be constant (ie load at translate time)
402
+ case A_CR0ACK:
460
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
403
+ *data = s->cr0ack;
461
- * indicates that the TB should not be ended after a write to this register
404
+ return MEMTX_OK;
462
- * (the default is that the TB ends after cp writes). OVERRIDE permits
405
+ case A_CR1:
463
- * a register definition to override a previous definition for the
406
+ *data = s->cr[1];
464
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
407
+ return MEMTX_OK;
465
- * old must have the OVERRIDE bit set.
408
+ case A_CR2:
466
- * ALIAS indicates that this register is an alias view of some underlying
409
+ *data = s->cr[2];
467
- * state which is also visible via another register, and that the other
410
+ return MEMTX_OK;
468
- * register is handling migration and reset; registers marked ALIAS will not be
411
+ case A_STATUSR:
469
- * migrated but may have their state set by syncing of register state from KVM.
412
+ *data = s->statusr;
470
- * NO_RAW indicates that this register has no underlying state and does not
413
+ return MEMTX_OK;
471
- * support raw access for state saving/loading; it will not be used for either
414
+ case A_IRQ_CTRL:
472
- * migration or KVM state synchronization. (Typically this is for "registers"
415
+ case A_IRQ_CTRL_ACK:
473
- * which are actually used as instructions for cache maintenance and so on.)
416
+ *data = s->irq_ctrl;
474
- * IO indicates that this register does I/O and therefore its accesses
417
+ return MEMTX_OK;
475
- * need to be marked with gen_io_start() and also end the TB. In particular,
418
+ case A_GERROR:
476
- * registers which implement clocks or timers require this.
419
+ *data = s->gerror;
477
- * RAISES_EXC is for when the read or write hook might raise an exception;
420
+ return MEMTX_OK;
478
- * the generated code will synchronize the CPU state before calling the hook
421
+ case A_GERRORN:
479
- * so that it is safe for the hook to call raise_exception().
422
+ *data = s->gerrorn;
480
- * NEWEL is for writes to registers that might change the exception
423
+ return MEMTX_OK;
481
- * level - typically on older ARM chips. For those cases we need to
424
+ case A_GERROR_IRQ_CFG0: /* 64b */
482
- * re-read the new el when recomputing the translation flags.
425
+ *data = extract64(s->gerror_irq_cfg0, 0, 32);
483
- */
426
+ return MEMTX_OK;
484
-#define ARM_CP_SPECIAL 0x0001
427
+ case A_GERROR_IRQ_CFG0 + 4:
485
-#define ARM_CP_CONST 0x0002
428
+ *data = extract64(s->gerror_irq_cfg0, 32, 32);
486
-#define ARM_CP_64BIT 0x0004
429
+ return MEMTX_OK;
487
-#define ARM_CP_SUPPRESS_TB_END 0x0008
430
+ case A_GERROR_IRQ_CFG1:
488
-#define ARM_CP_OVERRIDE 0x0010
431
+ *data = s->gerror_irq_cfg1;
489
-#define ARM_CP_ALIAS 0x0020
432
+ return MEMTX_OK;
490
-#define ARM_CP_IO 0x0040
433
+ case A_GERROR_IRQ_CFG2:
491
-#define ARM_CP_NO_RAW 0x0080
434
+ *data = s->gerror_irq_cfg2;
492
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
435
+ return MEMTX_OK;
493
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
436
+ case A_STRTAB_BASE: /* 64b */
494
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
437
+ *data = extract64(s->strtab_base, 0, 32);
495
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
438
+ return MEMTX_OK;
496
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
439
+ case A_STRTAB_BASE + 4: /* 64b */
497
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
440
+ *data = extract64(s->strtab_base, 32, 32);
498
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
441
+ return MEMTX_OK;
499
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
442
+ case A_STRTAB_BASE_CFG:
500
-#define ARM_CP_FPU 0x1000
443
+ *data = s->strtab_base_cfg;
501
-#define ARM_CP_SVE 0x2000
444
+ return MEMTX_OK;
502
-#define ARM_CP_NO_GDB 0x4000
445
+ case A_CMDQ_BASE: /* 64b */
503
-#define ARM_CP_RAISES_EXC 0x8000
446
+ *data = extract64(s->cmdq.base, 0, 32);
504
-#define ARM_CP_NEWEL 0x10000
447
+ return MEMTX_OK;
505
-/* Used only as a terminator for ARMCPRegInfo lists */
448
+ case A_CMDQ_BASE + 4:
506
-#define ARM_CP_SENTINEL 0xfffff
449
+ *data = extract64(s->cmdq.base, 32, 32);
507
-/* Mask of only the flag bits in a type field */
450
+ return MEMTX_OK;
508
-#define ARM_CP_FLAG_MASK 0x1f0ff
451
+ case A_CMDQ_PROD:
509
-
452
+ *data = s->cmdq.prod;
510
-/* Valid values for ARMCPRegInfo state field, indicating which of
453
+ return MEMTX_OK;
511
- * the AArch32 and AArch64 execution states this register is visible in.
454
+ case A_CMDQ_CONS:
512
- * If the reginfo doesn't explicitly specify then it is AArch32 only.
455
+ *data = s->cmdq.cons;
513
- * If the reginfo is declared to be visible in both states then a second
456
+ return MEMTX_OK;
514
- * reginfo is synthesised for the AArch32 view of the AArch64 register,
457
+ case A_EVENTQ_BASE: /* 64b */
515
- * such that the AArch32 view is the lower 32 bits of the AArch64 one.
458
+ *data = extract64(s->eventq.base, 0, 32);
516
- * Note that we rely on the values of these enums as we iterate through
459
+ return MEMTX_OK;
517
- * the various states in some places.
460
+ case A_EVENTQ_BASE + 4: /* 64b */
518
- */
461
+ *data = extract64(s->eventq.base, 32, 32);
519
-enum {
462
+ return MEMTX_OK;
520
- ARM_CP_STATE_AA32 = 0,
463
+ case A_EVENTQ_PROD:
521
- ARM_CP_STATE_AA64 = 1,
464
+ *data = s->eventq.prod;
522
- ARM_CP_STATE_BOTH = 2,
465
+ return MEMTX_OK;
523
-};
466
+ case A_EVENTQ_CONS:
524
-
467
+ *data = s->eventq.cons;
525
-/* ARM CP register secure state flags. These flags identify security state
468
+ return MEMTX_OK;
526
- * attributes for a given CP register entry.
469
+ default:
527
- * The existence of both or neither secure and non-secure flags indicates that
470
+ *data = 0;
528
- * the register has both a secure and non-secure hash entry. A single one of
471
+ qemu_log_mask(LOG_UNIMP,
529
- * these flags causes the register to only be hashed for the specified
472
+ "%s unhandled 32-bit access at 0x%"PRIx64" (RAZ)\n",
530
- * security state.
473
+ __func__, offset);
531
- * Although definitions may have any combination of the S/NS bits, each
474
+ return MEMTX_OK;
532
- * registered entry will only have one to identify whether the entry is secure
475
+ }
533
- * or non-secure.
476
+}
534
- */
477
+
535
-enum {
478
+static MemTxResult smmu_read_mmio(void *opaque, hwaddr offset, uint64_t *data,
536
- ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
479
+ unsigned size, MemTxAttrs attrs)
537
- ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
480
+{
538
-};
481
+ SMMUState *sys = opaque;
539
-
482
+ SMMUv3State *s = ARM_SMMUV3(sys);
540
-/* Return true if cptype is a valid type field. This is used to try to
483
+ MemTxResult r;
541
- * catch errors where the sentinel has been accidentally left off the end
484
+
542
- * of a list of registers.
485
+ /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
543
- */
486
+ offset &= ~0x10000;
544
-static inline bool cptype_valid(int cptype)
487
+
545
-{
488
+ switch (size) {
546
- return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
489
+ case 8:
547
- || ((cptype & ARM_CP_SPECIAL) &&
490
+ r = smmu_readll(s, offset, data, attrs);
548
- ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
491
+ break;
549
-}
492
+ case 4:
550
-
493
+ r = smmu_readl(s, offset, data, attrs);
551
-/* Access rights:
494
+ break;
552
- * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
495
+ default:
553
- * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
496
+ r = MEMTX_ERROR;
554
- * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
497
+ break;
555
- * (ie any of the privileged modes in Secure state, or Monitor mode).
498
+ }
556
- * If a register is accessible in one privilege level it's always accessible
499
+
557
- * in higher privilege levels too. Since "Secure PL1" also follows this rule
500
+ trace_smmuv3_read_mmio(offset, *data, size, r);
558
- * (ie anything visible in PL2 is visible in S-PL1, some things are only
501
+ return r;
559
- * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
502
+}
560
- * terminology a little and call this PL3.
503
+
561
- * In AArch64 things are somewhat simpler as the PLx bits line up exactly
504
+static const MemoryRegionOps smmu_mem_ops = {
562
- * with the ELx exception levels.
505
+ .read_with_attrs = smmu_read_mmio,
563
- *
506
+ .write_with_attrs = smmu_write_mmio,
564
- * If access permissions for a register are more complex than can be
507
+ .endianness = DEVICE_LITTLE_ENDIAN,
565
- * described with these bits, then use a laxer set of restrictions, and
508
+ .valid = {
566
- * do the more restrictive/complex check inside a helper function.
509
+ .min_access_size = 4,
567
- */
510
+ .max_access_size = 8,
568
-#define PL3_R 0x80
511
+ },
569
-#define PL3_W 0x40
512
+ .impl = {
570
-#define PL2_R (0x20 | PL3_R)
513
+ .min_access_size = 4,
571
-#define PL2_W (0x10 | PL3_W)
514
+ .max_access_size = 8,
572
-#define PL1_R (0x08 | PL2_R)
515
+ },
573
-#define PL1_W (0x04 | PL2_W)
516
+};
574
-#define PL0_R (0x02 | PL1_R)
517
+
575
-#define PL0_W (0x01 | PL1_W)
518
+static void smmu_init_irq(SMMUv3State *s, SysBusDevice *dev)
576
-
519
+{
577
-/*
520
+ int i;
578
- * For user-mode some registers are accessible to EL0 via a kernel
521
+
579
- * trap-and-emulate ABI. In this case we define the read permissions
522
+ for (i = 0; i < ARRAY_SIZE(s->irq); i++) {
580
- * as actually being PL0_R. However some bits of any given register
523
+ sysbus_init_irq(dev, &s->irq[i]);
581
- * may still be masked.
524
+ }
582
- */
525
+}
583
-#ifdef CONFIG_USER_ONLY
526
+
584
-#define PL0U_R PL0_R
527
+static void smmu_reset(DeviceState *dev)
585
-#else
528
+{
586
-#define PL0U_R PL1_R
529
+ SMMUv3State *s = ARM_SMMUV3(dev);
587
-#endif
530
+ SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);
588
-
531
+
589
-#define PL3_RW (PL3_R | PL3_W)
532
+ c->parent_reset(dev);
590
-#define PL2_RW (PL2_R | PL2_W)
533
+
591
-#define PL1_RW (PL1_R | PL1_W)
534
+ smmuv3_init_regs(s);
592
-#define PL0_RW (PL0_R | PL0_W)
535
+}
593
-
536
+
594
/* Return the highest implemented Exception Level */
537
+static void smmu_realize(DeviceState *d, Error **errp)
595
static inline int arm_highest_el(CPUARMState *env)
538
+{
596
{
539
+ SMMUState *sys = ARM_SMMU(d);
597
@@ -XXX,XX +XXX,XX @@ static inline int arm_current_el(CPUARMState *env)
540
+ SMMUv3State *s = ARM_SMMUV3(sys);
598
}
541
+ SMMUv3Class *c = ARM_SMMUV3_GET_CLASS(s);
599
}
542
+ SysBusDevice *dev = SYS_BUS_DEVICE(d);
600
543
+ Error *local_err = NULL;
601
-typedef struct ARMCPRegInfo ARMCPRegInfo;
544
+
602
-
545
+ c->parent_realize(d, &local_err);
603
-typedef enum CPAccessResult {
546
+ if (local_err) {
604
- /* Access is permitted */
547
+ error_propagate(errp, local_err);
605
- CP_ACCESS_OK = 0,
548
+ return;
606
- /* Access fails due to a configurable trap or enable which would
549
+ }
607
- * result in a categorized exception syndrome giving information about
550
+
608
- * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
551
+ memory_region_init_io(&sys->iomem, OBJECT(s),
609
- * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
552
+ &smmu_mem_ops, sys, TYPE_ARM_SMMUV3, 0x20000);
610
- * PL1 if in EL0, otherwise to the current EL).
553
+
611
- */
554
+ sys->mrtypename = TYPE_SMMUV3_IOMMU_MEMORY_REGION;
612
- CP_ACCESS_TRAP = 1,
555
+
613
- /* Access fails and results in an exception syndrome 0x0 ("uncategorized").
556
+ sysbus_init_mmio(dev, &sys->iomem);
614
- * Note that this is not a catch-all case -- the set of cases which may
557
+
615
- * result in this failure is specifically defined by the architecture.
558
+ smmu_init_irq(s, dev);
616
- */
559
+}
617
- CP_ACCESS_TRAP_UNCATEGORIZED = 2,
560
+
618
- /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
561
+static const VMStateDescription vmstate_smmuv3_queue = {
619
- CP_ACCESS_TRAP_EL2 = 3,
562
+ .name = "smmuv3_queue",
620
- CP_ACCESS_TRAP_EL3 = 4,
563
+ .version_id = 1,
621
- /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
564
+ .minimum_version_id = 1,
622
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
565
+ .fields = (VMStateField[]) {
623
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
566
+ VMSTATE_UINT64(base, SMMUQueue),
624
-} CPAccessResult;
567
+ VMSTATE_UINT32(prod, SMMUQueue),
625
-
568
+ VMSTATE_UINT32(cons, SMMUQueue),
626
-/* Access functions for coprocessor registers. These cannot fail and
569
+ VMSTATE_UINT8(log2size, SMMUQueue),
627
- * may not raise exceptions.
570
+ },
628
- */
571
+};
629
-typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
572
+
630
-typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
573
+static const VMStateDescription vmstate_smmuv3 = {
631
- uint64_t value);
574
+ .name = "smmuv3",
632
-/* Access permission check functions for coprocessor registers. */
575
+ .version_id = 1,
633
-typedef CPAccessResult CPAccessFn(CPUARMState *env,
576
+ .minimum_version_id = 1,
634
- const ARMCPRegInfo *opaque,
577
+ .fields = (VMStateField[]) {
635
- bool isread);
578
+ VMSTATE_UINT32(features, SMMUv3State),
636
-/* Hook function for register reset */
579
+ VMSTATE_UINT8(sid_size, SMMUv3State),
637
-typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
580
+ VMSTATE_UINT8(sid_split, SMMUv3State),
638
-
581
+
639
-#define CP_ANY 0xff
582
+ VMSTATE_UINT32_ARRAY(cr, SMMUv3State, 3),
640
-
583
+ VMSTATE_UINT32(cr0ack, SMMUv3State),
641
-/* Definition of an ARM coprocessor register */
584
+ VMSTATE_UINT32(statusr, SMMUv3State),
642
-struct ARMCPRegInfo {
585
+ VMSTATE_UINT32(irq_ctrl, SMMUv3State),
643
- /* Name of register (useful mainly for debugging, need not be unique) */
586
+ VMSTATE_UINT32(gerror, SMMUv3State),
644
- const char *name;
587
+ VMSTATE_UINT32(gerrorn, SMMUv3State),
645
- /* Location of register: coprocessor number and (crn,crm,opc1,opc2)
588
+ VMSTATE_UINT64(gerror_irq_cfg0, SMMUv3State),
646
- * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
589
+ VMSTATE_UINT32(gerror_irq_cfg1, SMMUv3State),
647
- * 'wildcard' field -- any value of that field in the MRC/MCR insn
590
+ VMSTATE_UINT32(gerror_irq_cfg2, SMMUv3State),
648
- * will be decoded to this register. The register read and write
591
+ VMSTATE_UINT64(strtab_base, SMMUv3State),
649
- * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
592
+ VMSTATE_UINT32(strtab_base_cfg, SMMUv3State),
650
- * used by the program, so it is possible to register a wildcard and
593
+ VMSTATE_UINT64(eventq_irq_cfg0, SMMUv3State),
651
- * then behave differently on read/write if necessary.
594
+ VMSTATE_UINT32(eventq_irq_cfg1, SMMUv3State),
652
- * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
595
+ VMSTATE_UINT32(eventq_irq_cfg2, SMMUv3State),
653
- * must both be zero.
596
+
654
- * For AArch64-visible registers, opc0 is also used.
597
+ VMSTATE_STRUCT(cmdq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
655
- * Since there are no "coprocessors" in AArch64, cp is purely used as a
598
+ VMSTATE_STRUCT(eventq, SMMUv3State, 0, vmstate_smmuv3_queue, SMMUQueue),
656
- * way to distinguish (for KVM's benefit) guest-visible system registers
599
+
657
- * from demuxed ones provided to preserve the "no side effects on
600
+ VMSTATE_END_OF_LIST(),
658
- * KVM register read/write from QEMU" semantics. cp==0x13 is guest
601
+ },
659
- * visible (to match KVM's encoding); cp==0 will be converted to
602
+};
660
- * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
603
+
661
- */
604
+static void smmuv3_instance_init(Object *obj)
662
- uint8_t cp;
605
+{
663
- uint8_t crn;
606
+ /* Nothing much to do here as of now */
664
- uint8_t crm;
607
+}
665
- uint8_t opc0;
608
+
666
- uint8_t opc1;
609
+static void smmuv3_class_init(ObjectClass *klass, void *data)
667
- uint8_t opc2;
610
+{
668
- /* Execution state in which this register is visible: ARM_CP_STATE_* */
611
+ DeviceClass *dc = DEVICE_CLASS(klass);
669
- int state;
612
+ SMMUv3Class *c = ARM_SMMUV3_CLASS(klass);
670
- /* Register type: ARM_CP_* bits/values */
613
+
671
- int type;
614
+ dc->vmsd = &vmstate_smmuv3;
672
- /* Access rights: PL*_[RW] */
615
+ device_class_set_parent_reset(dc, smmu_reset, &c->parent_reset);
673
- int access;
616
+ c->parent_realize = dc->realize;
674
- /* Security state: ARM_CP_SECSTATE_* bits/values */
617
+ dc->realize = smmu_realize;
675
- int secure;
618
+}
676
- /* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
619
+
677
- * this register was defined: can be used to hand data through to the
620
+static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
678
- * register read/write functions, since they are passed the ARMCPRegInfo*.
621
+ void *data)
679
- */
622
+{
680
- void *opaque;
623
+}
681
- /* Value of this register, if it is ARM_CP_CONST. Otherwise, if
624
+
682
- * fieldoffset is non-zero, the reset value of the register.
625
+static const TypeInfo smmuv3_type_info = {
683
- */
626
+ .name = TYPE_ARM_SMMUV3,
684
- uint64_t resetvalue;
627
+ .parent = TYPE_ARM_SMMU,
685
- /* Offset of the field in CPUARMState for this register.
628
+ .instance_size = sizeof(SMMUv3State),
686
- *
629
+ .instance_init = smmuv3_instance_init,
687
- * This is not needed if either:
630
+ .class_size = sizeof(SMMUv3Class),
688
- * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
631
+ .class_init = smmuv3_class_init,
689
- * 2. both readfn and writefn are specified
632
+};
690
- */
633
+
691
- ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
634
+static const TypeInfo smmuv3_iommu_memory_region_info = {
692
-
635
+ .parent = TYPE_IOMMU_MEMORY_REGION,
693
- /* Offsets of the secure and non-secure fields in CPUARMState for the
636
+ .name = TYPE_SMMUV3_IOMMU_MEMORY_REGION,
694
- * register if it is banked. These fields are only used during the static
637
+ .class_init = smmuv3_iommu_memory_region_class_init,
695
- * registration of a register. During hashing the bank associated
638
+};
696
- * with a given security state is copied to fieldoffset which is used from
639
+
697
- * there on out.
640
+static void smmuv3_register_types(void)
698
- *
641
+{
699
- * It is expected that register definitions use either fieldoffset or
642
+ type_register(&smmuv3_type_info);
700
- * bank_fieldoffsets in the definition but not both. It is also expected
643
+ type_register(&smmuv3_iommu_memory_region_info);
701
- * that both bank offsets are set when defining a banked register. This
644
+}
702
- * use indicates that a register is banked.
645
+
703
- */
646
+type_init(smmuv3_register_types)
704
- ptrdiff_t bank_fieldoffsets[2];
647
+
705
-
648
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
706
- /* Function for making any access checks for this register in addition to
649
index XXXXXXX..XXXXXXX 100644
707
- * those specified by the 'access' permissions bits. If NULL, no extra
650
--- a/hw/arm/trace-events
708
- * checks required. The access check is performed at runtime, not at
651
+++ b/hw/arm/trace-events
709
- * translate time.
652
@@ -XXX,XX +XXX,XX @@ smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr,
710
- */
653
smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
711
- CPAccessFn *accessfn;
654
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
712
- /* Function for handling reads of this register. If NULL, then reads
655
smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
713
- * will be done by loading from the offset into CPUARMState specified
656
+
714
- * by fieldoffset.
657
+#hw/arm/smmuv3.c
715
- */
658
+smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
716
- CPReadFn *readfn;
717
- /* Function for handling writes of this register. If NULL, then writes
718
- * will be done by writing to the offset into CPUARMState specified
719
- * by fieldoffset.
720
- */
721
- CPWriteFn *writefn;
722
- /* Function for doing a "raw" read; used when we need to copy
723
- * coprocessor state to the kernel for KVM or out for
724
- * migration. This only needs to be provided if there is also a
725
- * readfn and it has side effects (for instance clear-on-read bits).
726
- */
727
- CPReadFn *raw_readfn;
728
- /* Function for doing a "raw" write; used when we need to copy KVM
729
- * kernel coprocessor state into userspace, or for inbound
730
- * migration. This only needs to be provided if there is also a
731
- * writefn and it masks out "unwritable" bits or has write-one-to-clear
732
- * or similar behaviour.
733
- */
734
- CPWriteFn *raw_writefn;
735
- /* Function for resetting the register. If NULL, then reset will be done
736
- * by writing resetvalue to the field specified in fieldoffset. If
737
- * fieldoffset is 0 then no reset will be done.
738
- */
739
- CPResetFn *resetfn;
740
-
741
- /*
742
- * "Original" writefn and readfn.
743
- * For ARMv8.1-VHE register aliases, we overwrite the read/write
744
- * accessor functions of various EL1/EL0 to perform the runtime
745
- * check for which sysreg should actually be modified, and then
746
- * forwards the operation. Before overwriting the accessors,
747
- * the original function is copied here, so that accesses that
748
- * really do go to the EL1/EL0 version proceed normally.
749
- * (The corresponding EL2 register is linked via opaque.)
750
- */
751
- CPReadFn *orig_readfn;
752
- CPWriteFn *orig_writefn;
753
-};
754
-
755
-/* Macros which are lvalues for the field in CPUARMState for the
756
- * ARMCPRegInfo *ri.
757
- */
758
-#define CPREG_FIELD32(env, ri) \
759
- (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
760
-#define CPREG_FIELD64(env, ri) \
761
- (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
762
-
763
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
764
-
765
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
766
- const ARMCPRegInfo *regs, void *opaque);
767
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
768
- const ARMCPRegInfo *regs, void *opaque);
769
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
770
-{
771
- define_arm_cp_regs_with_opaque(cpu, regs, 0);
772
-}
773
-static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
774
-{
775
- define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
776
-}
777
-const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
778
-
779
-/*
780
- * Definition of an ARM co-processor register as viewed from
781
- * userspace. This is used for presenting sanitised versions of
782
- * registers to userspace when emulating the Linux AArch64 CPU
783
- * ID/feature ABI (advertised as HWCAP_CPUID).
784
- */
785
-typedef struct ARMCPRegUserSpaceInfo {
786
- /* Name of register */
787
- const char *name;
788
-
789
- /* Is the name actually a glob pattern */
790
- bool is_glob;
791
-
792
- /* Only some bits are exported to user space */
793
- uint64_t exported_bits;
794
-
795
- /* Fixed bits are applied after the mask */
796
- uint64_t fixed_bits;
797
-} ARMCPRegUserSpaceInfo;
798
-
799
-#define REGUSERINFO_SENTINEL { .name = NULL }
800
-
801
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
802
-
803
-/* CPWriteFn that can be used to implement writes-ignored behaviour */
804
-void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
805
- uint64_t value);
806
-/* CPReadFn that can be used for read-as-zero behaviour */
807
-uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
808
-
809
-/* CPResetFn that does nothing, for use if no reset is required even
810
- * if fieldoffset is non zero.
811
- */
812
-void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
813
-
814
-/* Return true if this reginfo struct's field in the cpu state struct
815
- * is 64 bits wide.
816
- */
817
-static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
818
-{
819
- return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
820
-}
821
-
822
-static inline bool cp_access_ok(int current_el,
823
- const ARMCPRegInfo *ri, int isread)
824
-{
825
- return (ri->access >> ((current_el * 2) + isread)) & 1;
826
-}
827
-
828
-/* Raw read of a coprocessor register (as needed for migration, etc) */
829
-uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
830
-
831
/**
832
* write_list_to_cpustate
833
* @cpu: ARMCPU
834
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
835
index XXXXXXX..XXXXXXX 100644
836
--- a/hw/arm/pxa2xx.c
837
+++ b/hw/arm/pxa2xx.c
838
@@ -XXX,XX +XXX,XX @@
839
#include "qemu/cutils.h"
840
#include "qemu/log.h"
841
#include "qom/object.h"
842
+#include "target/arm/cpregs.h"
843
844
static struct {
845
hwaddr io_base;
846
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
847
index XXXXXXX..XXXXXXX 100644
848
--- a/hw/arm/pxa2xx_pic.c
849
+++ b/hw/arm/pxa2xx_pic.c
850
@@ -XXX,XX +XXX,XX @@
851
#include "hw/sysbus.h"
852
#include "migration/vmstate.h"
853
#include "qom/object.h"
854
+#include "target/arm/cpregs.h"
855
856
#define ICIP    0x00    /* Interrupt Controller IRQ Pending register */
857
#define ICMR    0x04    /* Interrupt Controller Mask register */
858
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
859
index XXXXXXX..XXXXXXX 100644
860
--- a/hw/intc/arm_gicv3_cpuif.c
861
+++ b/hw/intc/arm_gicv3_cpuif.c
862
@@ -XXX,XX +XXX,XX @@
863
#include "gicv3_internal.h"
864
#include "hw/irq.h"
865
#include "cpu.h"
866
+#include "target/arm/cpregs.h"
867
868
/*
869
* Special case return value from hppvi_index(); must be larger than
870
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
871
index XXXXXXX..XXXXXXX 100644
872
--- a/hw/intc/arm_gicv3_kvm.c
873
+++ b/hw/intc/arm_gicv3_kvm.c
874
@@ -XXX,XX +XXX,XX @@
875
#include "vgic_common.h"
876
#include "migration/blocker.h"
877
#include "qom/object.h"
878
+#include "target/arm/cpregs.h"
879
+
880
881
#ifdef DEBUG_GICV3_KVM
882
#define DPRINTF(fmt, ...) \
883
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
884
index XXXXXXX..XXXXXXX 100644
885
--- a/target/arm/cpu.c
886
+++ b/target/arm/cpu.c
887
@@ -XXX,XX +XXX,XX @@
888
#include "kvm_arm.h"
889
#include "disas/capstone.h"
890
#include "fpu/softfloat.h"
891
+#include "cpregs.h"
892
893
static void arm_cpu_set_pc(CPUState *cs, vaddr value)
894
{
895
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
896
index XXXXXXX..XXXXXXX 100644
897
--- a/target/arm/cpu64.c
898
+++ b/target/arm/cpu64.c
899
@@ -XXX,XX +XXX,XX @@
900
#include "hvf_arm.h"
901
#include "qapi/visitor.h"
902
#include "hw/qdev-properties.h"
903
+#include "cpregs.h"
904
905
906
#ifndef CONFIG_USER_ONLY
907
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
908
index XXXXXXX..XXXXXXX 100644
909
--- a/target/arm/cpu_tcg.c
910
+++ b/target/arm/cpu_tcg.c
911
@@ -XXX,XX +XXX,XX @@
912
#if !defined(CONFIG_USER_ONLY)
913
#include "hw/boards.h"
914
#endif
915
+#include "cpregs.h"
916
917
/* CPU models. These are not needed for the AArch64 linux-user build. */
918
#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
919
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
920
index XXXXXXX..XXXXXXX 100644
921
--- a/target/arm/gdbstub.c
922
+++ b/target/arm/gdbstub.c
923
@@ -XXX,XX +XXX,XX @@
924
*/
925
#include "qemu/osdep.h"
926
#include "cpu.h"
927
-#include "internals.h"
928
#include "exec/gdbstub.h"
929
+#include "internals.h"
930
+#include "cpregs.h"
931
932
typedef struct RegisterSysregXmlParam {
933
CPUState *cs;
934
diff --git a/target/arm/helper.c b/target/arm/helper.c
935
index XXXXXXX..XXXXXXX 100644
936
--- a/target/arm/helper.c
937
+++ b/target/arm/helper.c
938
@@ -XXX,XX +XXX,XX @@
939
#include "exec/cpu_ldst.h"
940
#include "semihosting/common-semi.h"
941
#endif
942
+#include "cpregs.h"
943
944
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
945
#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
946
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
947
index XXXXXXX..XXXXXXX 100644
948
--- a/target/arm/op_helper.c
949
+++ b/target/arm/op_helper.c
950
@@ -XXX,XX +XXX,XX @@
951
#include "internals.h"
952
#include "exec/exec-all.h"
953
#include "exec/cpu_ldst.h"
954
+#include "cpregs.h"
955
956
#define SIGNBIT (uint32_t)0x80000000
957
#define SIGNBIT64 ((uint64_t)1 << 63)
958
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
959
index XXXXXXX..XXXXXXX 100644
960
--- a/target/arm/translate-a64.c
961
+++ b/target/arm/translate-a64.c
962
@@ -XXX,XX +XXX,XX @@
963
#include "translate.h"
964
#include "internals.h"
965
#include "qemu/host-utils.h"
966
-
967
#include "semihosting/semihost.h"
968
#include "exec/gen-icount.h"
969
-
970
#include "exec/helper-proto.h"
971
#include "exec/helper-gen.h"
972
#include "exec/log.h"
973
-
974
+#include "cpregs.h"
975
#include "translate-a64.h"
976
#include "qemu/atomic128.h"
977
978
diff --git a/target/arm/translate.c b/target/arm/translate.c
979
index XXXXXXX..XXXXXXX 100644
980
--- a/target/arm/translate.c
981
+++ b/target/arm/translate.c
982
@@ -XXX,XX +XXX,XX @@
983
#include "qemu/bitops.h"
984
#include "arm_ldst.h"
985
#include "semihosting/semihost.h"
986
-
987
#include "exec/helper-proto.h"
988
#include "exec/helper-gen.h"
989
-
990
#include "exec/log.h"
991
+#include "cpregs.h"
992
993
994
#define ENABLE_ARCH_4T arm_dc_feature(s, ARM_FEATURE_V4T)
--
2.17.0

--
2.25.1

From: Eric Auger <eric.auger@redhat.com>

Now we have relevant helpers for queue and irq
management, let's implement MMIO write operations.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1524665762-31355-8-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 8 +-
 hw/arm/smmuv3.c | 170 +++++++++++++++++++++++++++++++++++++--
 hw/arm/trace-events | 6 ++
 3 files changed, 174 insertions(+), 10 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Rearrange the values of the enumerators of CPAccessResult
so that we may directly extract the target el. For the two
special cases in access_check_cp_reg, use CPAccessResult.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 26 ++++++++++++--------
 target/arm/op_helper.c | 56 +++++++++++++++++++++---------------------
 2 files changed, 44 insertions(+), 38 deletions(-)

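For readers skimming the second commit message above, here is a minimal
standalone sketch (plain C, not QEMU code) of the encoding that patch
introduces. The enumerator values mirror the cpregs.h hunk below, while
fail_target_el() and the main() harness are invented here purely to
illustrate how the target EL is recovered from the low two bits.

#include <assert.h>

/* Low 2 bits of a CPAccessResult encode the target EL (0 = "usual target EL"). */
typedef enum CPAccessResult {
    CP_ACCESS_OK                     = 0,
    CP_ACCESS_EL_MASK                = 3,
    CP_ACCESS_TRAP                   = (1 << 2),
    CP_ACCESS_TRAP_EL2               = CP_ACCESS_TRAP | 2,
    CP_ACCESS_TRAP_EL3               = CP_ACCESS_TRAP | 3,
    CP_ACCESS_TRAP_UNCATEGORIZED     = (2 << 2),
    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
} CPAccessResult;

/* Illustrative helper: recover the target EL without switching on every enumerator. */
static int fail_target_el(CPAccessResult res, int usual_target_el)
{
    int el = res & CP_ACCESS_EL_MASK;
    return el ? el : usual_target_el;   /* 0 means "take the usual target EL" */
}

int main(void)
{
    assert(fail_target_el(CP_ACCESS_TRAP_EL2, 1) == 2);
    assert(fail_target_el(CP_ACCESS_TRAP_UNCATEGORIZED, 1) == 1);
    return 0;
}

The same idea is visible in the op_helper.c hunk below, where the trap
path reads the target EL back with "res & CP_ACCESS_EL_MASK" instead of
a per-enumerator switch.
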
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
17
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/smmuv3-internal.h
19
--- a/target/arm/cpregs.h
20
+++ b/hw/arm/smmuv3-internal.h
20
+++ b/target/arm/cpregs.h
21
@@ -XXX,XX +XXX,XX @@ REG32(CR0, 0x20)
21
@@ -XXX,XX +XXX,XX @@ static inline bool cptype_valid(int cptype)
22
FIELD(CR0, EVENTQEN, 2, 1)
22
typedef enum CPAccessResult {
23
FIELD(CR0, CMDQEN, 3, 1)
23
/* Access is permitted */
24
24
CP_ACCESS_OK = 0,
25
+#define SMMU_CR0_RESERVED 0xFFFFFC20
26
+
25
+
27
REG32(CR0ACK, 0x24)
26
+ /*
28
REG32(CR1, 0x28)
27
+ * Combined with one of the following, the low 2 bits indicate the
29
REG32(CR2, 0x2c)
28
+ * target exception level. If 0, the exception is taken to the usual
30
@@ -XXX,XX +XXX,XX @@ static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
29
+ * target EL (EL1 or PL1 if in EL0, otherwise to the current EL).
31
return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
30
+ */
32
}
31
+ CP_ACCESS_EL_MASK = 3,
33
32
+
34
-/* public until callers get introduced */
33
/*
35
-void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask);
34
* Access fails due to a configurable trap or enable which would
36
-void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t gerrorn);
35
* result in a categorized exception syndrome giving information about
37
-
36
* the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
38
/* Queue Handling */
37
- * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
39
38
- * PL1 if in EL0, otherwise to the current EL).
40
#define Q_BASE(q) ((q)->base & SMMU_BASE_ADDR_MASK)
39
+ * 0xc or 0x18).
41
@@ -XXX,XX +XXX,XX @@ enum { /* Command completion notification */
40
*/
42
addr; \
41
- CP_ACCESS_TRAP = 1,
43
})
42
+ CP_ACCESS_TRAP = (1 << 2),
44
43
+ CP_ACCESS_TRAP_EL2 = CP_ACCESS_TRAP | 2,
45
-int smmuv3_cmdq_consume(SMMUv3State *s);
44
+ CP_ACCESS_TRAP_EL3 = CP_ACCESS_TRAP | 3,
46
+#define SMMU_FEATURE_2LVL_STE (1 << 0)
45
+
47
46
/*
48
#endif
47
* Access fails and results in an exception syndrome 0x0 ("uncategorized").
49
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
48
* Note that this is not a catch-all case -- the set of cases which may
49
* result in this failure is specifically defined by the architecture.
50
*/
51
- CP_ACCESS_TRAP_UNCATEGORIZED = 2,
52
- /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
53
- CP_ACCESS_TRAP_EL2 = 3,
54
- CP_ACCESS_TRAP_EL3 = 4,
55
- /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
56
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
57
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
58
+ CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
59
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
60
+ CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
61
} CPAccessResult;
62
63
typedef struct ARMCPRegInfo ARMCPRegInfo;
64
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
50
index XXXXXXX..XXXXXXX 100644
65
index XXXXXXX..XXXXXXX 100644
51
--- a/hw/arm/smmuv3.c
66
--- a/target/arm/op_helper.c
52
+++ b/hw/arm/smmuv3.c
67
+++ b/target/arm/op_helper.c
53
@@ -XXX,XX +XXX,XX @@
68
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
54
* @irq: irq type
69
uint32_t isread)
55
* @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
56
*/
57
-void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask)
58
+static void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq,
59
+ uint32_t gerror_mask)
60
{
70
{
61
71
const ARMCPRegInfo *ri = rip;
62
bool pulse = false;
72
+ CPAccessResult res = CP_ACCESS_OK;
63
@@ -XXX,XX +XXX,XX @@ void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask)
73
int target_el;
74
75
if (arm_feature(env, ARM_FEATURE_XSCALE) && ri->cp < 14
76
&& extract32(env->cp15.c15_cpar, ri->cp, 1) == 0) {
77
- raise_exception(env, EXCP_UDEF, syndrome, exception_target_el(env));
78
+ res = CP_ACCESS_TRAP;
79
+ goto fail;
64
}
80
}
65
}
81
66
82
/*
67
-void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
83
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
68
+static void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
84
mask &= ~((1 << 4) | (1 << 14));
69
{
85
70
uint32_t pending = s->gerror ^ s->gerrorn;
86
if (env->cp15.hstr_el2 & mask) {
71
uint32_t toggled = s->gerrorn ^ new_gerrorn;
87
- target_el = 2;
72
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
88
- goto exept;
73
s->sid_split = 0;
89
+ res = CP_ACCESS_TRAP_EL2;
74
}
90
+ goto fail;
75
91
}
76
-int smmuv3_cmdq_consume(SMMUv3State *s)
92
}
77
+static int smmuv3_cmdq_consume(SMMUv3State *s)
93
78
{
94
- if (!ri->accessfn) {
79
SMMUCmdError cmd_error = SMMU_CERROR_NONE;
95
+ if (ri->accessfn) {
80
SMMUQueue *q = &s->cmdq;
96
+ res = ri->accessfn(env, ri, isread);
81
@@ -XXX,XX +XXX,XX @@ int smmuv3_cmdq_consume(SMMUv3State *s)
82
return 0;
83
}
84
85
+static MemTxResult smmu_writell(SMMUv3State *s, hwaddr offset,
86
+ uint64_t data, MemTxAttrs attrs)
87
+{
88
+ switch (offset) {
89
+ case A_GERROR_IRQ_CFG0:
90
+ s->gerror_irq_cfg0 = data;
91
+ return MEMTX_OK;
92
+ case A_STRTAB_BASE:
93
+ s->strtab_base = data;
94
+ return MEMTX_OK;
95
+ case A_CMDQ_BASE:
96
+ s->cmdq.base = data;
97
+ s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
98
+ if (s->cmdq.log2size > SMMU_CMDQS) {
99
+ s->cmdq.log2size = SMMU_CMDQS;
100
+ }
101
+ return MEMTX_OK;
102
+ case A_EVENTQ_BASE:
103
+ s->eventq.base = data;
104
+ s->eventq.log2size = extract64(s->eventq.base, 0, 5);
105
+ if (s->eventq.log2size > SMMU_EVENTQS) {
106
+ s->eventq.log2size = SMMU_EVENTQS;
107
+ }
108
+ return MEMTX_OK;
109
+ case A_EVENTQ_IRQ_CFG0:
110
+ s->eventq_irq_cfg0 = data;
111
+ return MEMTX_OK;
112
+ default:
113
+ qemu_log_mask(LOG_UNIMP,
114
+ "%s Unexpected 64-bit access to 0x%"PRIx64" (WI)\n",
115
+ __func__, offset);
116
+ return MEMTX_OK;
117
+ }
97
+ }
118
+}
98
+ if (likely(res == CP_ACCESS_OK)) {
119
+
99
return;
120
+static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
100
}
121
+ uint64_t data, MemTxAttrs attrs)
101
122
+{
102
- switch (ri->accessfn(env, ri, isread)) {
123
+ switch (offset) {
103
- case CP_ACCESS_OK:
124
+ case A_CR0:
104
- return;
125
+ s->cr[0] = data;
105
+ fail:
126
+ s->cr0ack = data & ~SMMU_CR0_RESERVED;
106
+ switch (res & ~CP_ACCESS_EL_MASK) {
127
+ /* in case the command queue has been enabled */
107
case CP_ACCESS_TRAP:
128
+ smmuv3_cmdq_consume(s);
108
- target_el = exception_target_el(env);
129
+ return MEMTX_OK;
109
- break;
130
+ case A_CR1:
110
- case CP_ACCESS_TRAP_EL2:
131
+ s->cr[1] = data;
111
- /* Requesting a trap to EL2 when we're in EL3 is
132
+ return MEMTX_OK;
112
- * a bug in the access function.
133
+ case A_CR2:
113
- */
134
+ s->cr[2] = data;
114
- assert(arm_current_el(env) != 3);
135
+ return MEMTX_OK;
115
- target_el = 2;
136
+ case A_IRQ_CTRL:
116
- break;
137
+ s->irq_ctrl = data;
117
- case CP_ACCESS_TRAP_EL3:
138
+ return MEMTX_OK;
118
- target_el = 3;
139
+ case A_GERRORN:
119
break;
140
+ smmuv3_write_gerrorn(s, data);
120
case CP_ACCESS_TRAP_UNCATEGORIZED:
141
+ /*
121
- target_el = exception_target_el(env);
142
+ * By acknowledging the CMDQ_ERR, SW may notify cmds can
122
- syndrome = syn_uncategorized();
143
+ * be processed again
123
- break;
144
+ */
124
- case CP_ACCESS_TRAP_UNCATEGORIZED_EL2:
145
+ smmuv3_cmdq_consume(s);
125
- target_el = 2;
146
+ return MEMTX_OK;
126
- syndrome = syn_uncategorized();
147
+ case A_GERROR_IRQ_CFG0: /* 64b */
127
- break;
148
+ s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 0, 32, data);
128
- case CP_ACCESS_TRAP_UNCATEGORIZED_EL3:
149
+ return MEMTX_OK;
129
- target_el = 3;
150
+ case A_GERROR_IRQ_CFG0 + 4:
130
syndrome = syn_uncategorized();
151
+ s->gerror_irq_cfg0 = deposit64(s->gerror_irq_cfg0, 32, 32, data);
131
break;
152
+ return MEMTX_OK;
132
default:
153
+ case A_GERROR_IRQ_CFG1:
133
g_assert_not_reached();
154
+ s->gerror_irq_cfg1 = data;
134
}
155
+ return MEMTX_OK;
135
156
+ case A_GERROR_IRQ_CFG2:
136
-exept:
157
+ s->gerror_irq_cfg2 = data;
137
+ target_el = res & CP_ACCESS_EL_MASK;
158
+ return MEMTX_OK;
138
+ switch (target_el) {
159
+ case A_STRTAB_BASE: /* 64b */
139
+ case 0:
160
+ s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
140
+ target_el = exception_target_el(env);
161
+ return MEMTX_OK;
162
+ case A_STRTAB_BASE + 4:
163
+ s->strtab_base = deposit64(s->strtab_base, 32, 32, data);
164
+ return MEMTX_OK;
165
+ case A_STRTAB_BASE_CFG:
166
+ s->strtab_base_cfg = data;
167
+ if (FIELD_EX32(data, STRTAB_BASE_CFG, FMT) == 1) {
168
+ s->sid_split = FIELD_EX32(data, STRTAB_BASE_CFG, SPLIT);
169
+ s->features |= SMMU_FEATURE_2LVL_STE;
170
+ }
171
+ return MEMTX_OK;
172
+ case A_CMDQ_BASE: /* 64b */
173
+ s->cmdq.base = deposit64(s->cmdq.base, 0, 32, data);
174
+ s->cmdq.log2size = extract64(s->cmdq.base, 0, 5);
175
+ if (s->cmdq.log2size > SMMU_CMDQS) {
176
+ s->cmdq.log2size = SMMU_CMDQS;
177
+ }
178
+ return MEMTX_OK;
179
+ case A_CMDQ_BASE + 4: /* 64b */
180
+ s->cmdq.base = deposit64(s->cmdq.base, 32, 32, data);
181
+ return MEMTX_OK;
182
+ case A_CMDQ_PROD:
183
+ s->cmdq.prod = data;
184
+ smmuv3_cmdq_consume(s);
185
+ return MEMTX_OK;
186
+ case A_CMDQ_CONS:
187
+ s->cmdq.cons = data;
188
+ return MEMTX_OK;
189
+ case A_EVENTQ_BASE: /* 64b */
190
+ s->eventq.base = deposit64(s->eventq.base, 0, 32, data);
191
+ s->eventq.log2size = extract64(s->eventq.base, 0, 5);
192
+ if (s->eventq.log2size > SMMU_EVENTQS) {
193
+ s->eventq.log2size = SMMU_EVENTQS;
194
+ }
195
+ return MEMTX_OK;
196
+ case A_EVENTQ_BASE + 4:
197
+ s->eventq.base = deposit64(s->eventq.base, 32, 32, data);
198
+ return MEMTX_OK;
199
+ case A_EVENTQ_PROD:
200
+ s->eventq.prod = data;
201
+ return MEMTX_OK;
202
+ case A_EVENTQ_CONS:
203
+ s->eventq.cons = data;
204
+ return MEMTX_OK;
205
+ case A_EVENTQ_IRQ_CFG0: /* 64b */
206
+ s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 0, 32, data);
207
+ return MEMTX_OK;
208
+ case A_EVENTQ_IRQ_CFG0 + 4:
209
+ s->eventq_irq_cfg0 = deposit64(s->eventq_irq_cfg0, 32, 32, data);
210
+ return MEMTX_OK;
211
+ case A_EVENTQ_IRQ_CFG1:
212
+ s->eventq_irq_cfg1 = data;
213
+ return MEMTX_OK;
214
+ case A_EVENTQ_IRQ_CFG2:
215
+ s->eventq_irq_cfg2 = data;
216
+ return MEMTX_OK;
217
+ default:
218
+ qemu_log_mask(LOG_UNIMP,
219
+ "%s Unexpected 32-bit access to 0x%"PRIx64" (WI)\n",
220
+ __func__, offset);
221
+ return MEMTX_OK;
222
+ }
223
+}
224
+
225
static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
226
unsigned size, MemTxAttrs attrs)
227
{
228
- /* not yet implemented */
229
- return MEMTX_ERROR;
230
+ SMMUState *sys = opaque;
231
+ SMMUv3State *s = ARM_SMMUV3(sys);
232
+ MemTxResult r;
233
+
234
+ /* CONSTRAINED UNPREDICTABLE choice to have page0/1 be exact aliases */
235
+ offset &= ~0x10000;
236
+
237
+ switch (size) {
238
+ case 8:
239
+ r = smmu_writell(s, offset, data, attrs);
240
+ break;
141
+ break;
241
+ case 4:
142
+ case 2:
242
+ r = smmu_writel(s, offset, data, attrs);
143
+ assert(arm_current_el(env) != 3);
144
+ assert(arm_is_el2_enabled(env));
145
+ break;
146
+ case 3:
147
+ assert(arm_feature(env, ARM_FEATURE_EL3));
243
+ break;
148
+ break;
244
+ default:
149
+ default:
245
+ r = MEMTX_ERROR;
150
+ /* No "direct" traps to EL1 */
246
+ break;
151
+ g_assert_not_reached();
247
+ }
152
+ }
248
+
153
+
249
+ trace_smmuv3_write_mmio(offset, data, size, r);
154
raise_exception(env, EXCP_UDEF, syndrome, target_el);
250
+ return r;
251
}
155
}
252
156
253
static MemTxResult smmu_readll(SMMUv3State *s, hwaddr offset,
254
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
255
index XXXXXXX..XXXXXXX 100644
256
--- a/hw/arm/trace-events
257
+++ b/hw/arm/trace-events
258
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t con
259
smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
260
smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
261
smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
262
+smmuv3_update(bool is_empty, uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "q empty:%d prod:%d cons:%d p.wrap:%d p.cons:%d"
263
+smmuv3_update_check_cmd(int error) "cmdq not enabled or error :0x%x"
264
+smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
265
+smmuv3_write_mmio_idr(uint64_t addr, uint64_t val) "write to RO/Unimpl reg 0x%lx val64:0x%lx"
266
+smmuv3_write_mmio_evtq_cons_bef_clear(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "Before clearing interrupt prod:0x%x cons:0x%x prod.w:%d cons.w:%d"
267
+smmuv3_write_mmio_evtq_cons_after_clear(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "after clearing interrupt prod:0x%x cons:0x%x prod.w:%d cons.w:%d"
268
--
2.17.0

--
2.25.1

From: Thomas Huth <thuth@redhat.com>

When running omap1/2 or pxa2xx based ARM machines with -nodefaults,
they bail out immediately complaining about a "missing SecureDigital
device". That's not how the "default" devices in vl.c are meant to
work - it should be possible for a board to also start up without
default devices. So let's turn the error message and exit() into
a warning instead.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 1525326811-3233-1-git-send-email-thuth@redhat.com
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/omap1.c | 8 ++++----
 hw/arm/omap2.c | 8 ++++----
 hw/arm/pxa2xx.c | 15 +++++++--------
 3 files changed, 15 insertions(+), 16 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Remove a possible source of error by removing REGINFO_SENTINEL
and using ARRAY_SIZE (conveniently hidden inside a macro) to
find the end of the set of regs being registered or modified.

The space saved by not having the extra array element reduces
the executable's .data.rel.ro section by about 9k.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 53 +++++++++---------
 hw/arm/pxa2xx.c | 1 -
 hw/arm/pxa2xx_pic.c | 1 -
 hw/intc/arm_gicv3_cpuif.c | 5 --
 hw/intc/arm_gicv3_kvm.c | 1 -
 target/arm/cpu64.c | 1 -
 target/arm/cpu_tcg.c | 4 --
 target/arm/helper.c | 111 ++++++++------------------------
 8 files changed, 48 insertions(+), 129 deletions(-)

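To illustrate the second commit message above, here is a minimal
standalone sketch (plain C, not QEMU code) of the sentinel-terminated
versus length-based registration styles. RegInfo, register_one(),
register_with_sentinel() and register_with_len() are invented stand-ins;
the real ARRAY_SIZE-based macros (define_arm_cp_regs, modify_arm_cp_regs)
are in the cpregs.h hunk below.

#include <stddef.h>
#include <stdio.h>

/* Toy stand-ins for ARMCPRegInfo and its registration, for illustration only. */
typedef struct RegInfo {
    const char *name;
    int crn, crm;
} RegInfo;

static void register_one(const RegInfo *r)
{
    printf("registering %s (crn=%d crm=%d)\n", r->name, r->crn, r->crm);
}

/* Old style: walk until a sentinel entry with a NULL name terminates the array. */
static void register_with_sentinel(const RegInfo *regs)
{
    for (const RegInfo *r = regs; r->name != NULL; r++) {
        register_one(r);
    }
}

/* New style: the caller passes an explicit length, typically ARRAY_SIZE(regs),
 * so no terminator element is needed and a forgotten sentinel cannot cause
 * a walk off the end of the array. */
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
static void register_with_len(const RegInfo *regs, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        register_one(&regs[i]);
    }
}

int main(void)
{
    static const RegInfo old_style[] = {
        { .name = "L2CTLR", .crn = 9, .crm = 0 },
        { .name = NULL },                      /* sentinel, easy to forget */
    };
    static const RegInfo new_style[] = {
        { .name = "L2CTLR", .crn = 9, .crm = 0 },
    };
    register_with_sentinel(old_style);
    register_with_len(new_style, ARRAY_SIZE(new_style));
    return 0;
}

The QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0) check in the real macro
additionally rejects empty register lists at compile time.
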
diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
26
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
22
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/arm/omap1.c
28
--- a/target/arm/cpregs.h
24
+++ b/hw/arm/omap1.c
29
+++ b/target/arm/cpregs.h
25
@@ -XXX,XX +XXX,XX @@
30
@@ -XXX,XX +XXX,XX @@
26
#include "hw/arm/soc_dma.h"
31
#define ARM_CP_NO_GDB 0x4000
27
#include "sysemu/block-backend.h"
32
#define ARM_CP_RAISES_EXC 0x8000
28
#include "sysemu/blockdev.h"
33
#define ARM_CP_NEWEL 0x10000
29
+#include "sysemu/qtest.h"
34
-/* Used only as a terminator for ARMCPRegInfo lists */
30
#include "qemu/range.h"
35
-#define ARM_CP_SENTINEL 0xfffff
31
#include "hw/sysbus.h"
36
/* Mask of only the flag bits in a type field */
32
#include "qemu/cutils.h"
37
#define ARM_CP_FLAG_MASK 0x1f0ff
33
@@ -XXX,XX +XXX,XX @@ struct omap_mpu_state_s *omap310_mpu_init(MemoryRegion *system_memory,
38
34
omap_findclk(s, "dpll3"));
39
@@ -XXX,XX +XXX,XX @@ enum {
35
40
ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
36
dinfo = drive_get(IF_SD, 0, 0);
41
};
37
- if (!dinfo) {
42
38
- error_report("missing SecureDigital device");
43
-/*
39
- exit(1);
44
- * Return true if cptype is a valid type field. This is used to try to
40
+ if (!dinfo && !qtest_enabled()) {
45
- * catch errors where the sentinel has been accidentally left off the end
41
+ warn_report("missing SecureDigital device");
46
- * of a list of registers.
42
}
47
- */
43
s->mmc = omap_mmc_init(0xfffb7800, system_memory,
48
-static inline bool cptype_valid(int cptype)
44
- blk_by_legacy_dinfo(dinfo),
49
-{
45
+ dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
50
- return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
46
qdev_get_gpio_in(s->ih[1], OMAP_INT_OQN),
51
- || ((cptype & ARM_CP_SPECIAL) &&
47
&s->drq[OMAP_DMA_MMC_TX],
52
- ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
48
omap_findclk(s, "mmc_ck"));
53
-}
49
diff --git a/hw/arm/omap2.c b/hw/arm/omap2.c
54
-
50
index XXXXXXX..XXXXXXX 100644
55
/*
51
--- a/hw/arm/omap2.c
56
* Access rights:
52
+++ b/hw/arm/omap2.c
57
* We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
53
@@ -XXX,XX +XXX,XX @@
58
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
54
#include "cpu.h"
59
#define CPREG_FIELD64(env, ri) \
55
#include "sysemu/block-backend.h"
60
(*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
56
#include "sysemu/blockdev.h"
61
57
+#include "sysemu/qtest.h"
62
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
58
#include "hw/boards.h"
63
+void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, const ARMCPRegInfo *reg,
59
#include "hw/hw.h"
64
+ void *opaque);
60
#include "hw/arm/arm.h"
65
61
@@ -XXX,XX +XXX,XX @@ struct omap_mpu_state_s *omap2420_mpu_init(MemoryRegion *sysmem,
66
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
62
s->drq[OMAP24XX_DMA_GPMC]);
67
- const ARMCPRegInfo *regs, void *opaque);
63
68
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
64
dinfo = drive_get(IF_SD, 0, 0);
69
- const ARMCPRegInfo *regs, void *opaque);
65
- if (!dinfo) {
70
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
66
- error_report("missing SecureDigital device");
71
-{
67
- exit(1);
72
- define_arm_cp_regs_with_opaque(cpu, regs, 0);
68
+ if (!dinfo && !qtest_enabled()) {
73
-}
69
+ warn_report("missing SecureDigital device");
74
static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
70
}
75
{
71
s->mmc = omap2_mmc_init(omap_l4tao(s->l4, 9),
76
- define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
72
- blk_by_legacy_dinfo(dinfo),
77
+ define_one_arm_cp_reg_with_opaque(cpu, regs, NULL);
73
+ dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
78
}
74
qdev_get_gpio_in(s->ih[0], OMAP_INT_24XX_MMC_IRQ),
79
+
75
&s->drq[OMAP24XX_DMA_MMC1_TX],
80
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
76
omap_findclk(s, "mmc_fclk"), omap_findclk(s, "mmc_iclk"));
81
+ void *opaque, size_t len);
82
+
83
+#define define_arm_cp_regs_with_opaque(CPU, REGS, OPAQUE) \
84
+ do { \
85
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0); \
86
+ define_arm_cp_regs_with_opaque_len(CPU, REGS, OPAQUE, \
87
+ ARRAY_SIZE(REGS)); \
88
+ } while (0)
89
+
90
+#define define_arm_cp_regs(CPU, REGS) \
91
+ define_arm_cp_regs_with_opaque(CPU, REGS, NULL)
92
+
93
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
94
95
/*
96
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCPRegUserSpaceInfo {
97
uint64_t fixed_bits;
98
} ARMCPRegUserSpaceInfo;
99
100
-#define REGUSERINFO_SENTINEL { .name = NULL }
101
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
102
+ const ARMCPRegUserSpaceInfo *mods,
103
+ size_t mods_len);
104
105
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
106
+#define modify_arm_cp_regs(REGS, MODS) \
107
+ do { \
108
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0); \
109
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(MODS) == 0); \
110
+ modify_arm_cp_regs_with_len(REGS, ARRAY_SIZE(REGS), \
111
+ MODS, ARRAY_SIZE(MODS)); \
112
+ } while (0)
113
114
/* CPWriteFn that can be used to implement writes-ignored behaviour */
115
void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
77
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
116
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
78
index XXXXXXX..XXXXXXX 100644
117
index XXXXXXX..XXXXXXX 100644
79
--- a/hw/arm/pxa2xx.c
118
--- a/hw/arm/pxa2xx.c
80
+++ b/hw/arm/pxa2xx.c
119
+++ b/hw/arm/pxa2xx.c
81
@@ -XXX,XX +XXX,XX @@
120
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pxa_cp_reginfo[] = {
82
#include "chardev/char-fe.h"
121
{ .name = "PWRMODE", .cp = 14, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 0,
83
#include "sysemu/block-backend.h"
122
.access = PL1_RW, .type = ARM_CP_IO,
84
#include "sysemu/blockdev.h"
123
.readfn = arm_cp_read_zero, .writefn = pxa2xx_pwrmode_write },
85
+#include "sysemu/qtest.h"
124
- REGINFO_SENTINEL
86
#include "qemu/cutils.h"
125
};
87
126
88
static struct {
127
static void pxa2xx_setup_cp14(PXA2xxState *s)
89
@@ -XXX,XX +XXX,XX @@ PXA2xxState *pxa270_init(MemoryRegion *address_space,
128
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
90
s->gpio = pxa2xx_gpio_init(0x40e00000, s->cpu, s->pic, 121);
129
index XXXXXXX..XXXXXXX 100644
91
130
--- a/hw/arm/pxa2xx_pic.c
92
dinfo = drive_get(IF_SD, 0, 0);
131
+++ b/hw/arm/pxa2xx_pic.c
93
- if (!dinfo) {
132
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pxa_pic_cp_reginfo[] = {
94
- error_report("missing SecureDigital device");
133
REGINFO_FOR_PIC_CP("ICLR2", 8),
95
- exit(1);
134
REGINFO_FOR_PIC_CP("ICFP2", 9),
96
+ if (!dinfo && !qtest_enabled()) {
135
REGINFO_FOR_PIC_CP("ICPR2", 0xa),
97
+ warn_report("missing SecureDigital device");
136
- REGINFO_SENTINEL
137
};
138
139
static const MemoryRegionOps pxa2xx_pic_ops = {
140
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
141
index XXXXXXX..XXXXXXX 100644
142
--- a/hw/intc/arm_gicv3_cpuif.c
143
+++ b/hw/intc/arm_gicv3_cpuif.c
144
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
145
.readfn = icc_igrpen1_el3_read,
146
.writefn = icc_igrpen1_el3_write,
147
},
148
- REGINFO_SENTINEL
149
};
150
151
static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
152
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_hcr_reginfo[] = {
153
.readfn = ich_vmcr_read,
154
.writefn = ich_vmcr_write,
155
},
156
- REGINFO_SENTINEL
157
};
158
159
static const ARMCPRegInfo gicv3_cpuif_ich_apxr1_reginfo[] = {
160
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_ich_apxr1_reginfo[] = {
161
.readfn = ich_ap_read,
162
.writefn = ich_ap_write,
163
},
164
- REGINFO_SENTINEL
165
};
166
167
static const ARMCPRegInfo gicv3_cpuif_ich_apxr23_reginfo[] = {
168
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_ich_apxr23_reginfo[] = {
169
.readfn = ich_ap_read,
170
.writefn = ich_ap_write,
171
},
172
- REGINFO_SENTINEL
173
};
174
175
static void gicv3_cpuif_el_change_hook(ARMCPU *cpu, void *opaque)
176
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
177
.readfn = ich_lr_read,
178
.writefn = ich_lr_write,
179
},
180
- REGINFO_SENTINEL
181
};
182
define_arm_cp_regs(cpu, lr_regset);
183
}
184
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
185
index XXXXXXX..XXXXXXX 100644
186
--- a/hw/intc/arm_gicv3_kvm.c
187
+++ b/hw/intc/arm_gicv3_kvm.c
188
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
189
*/
190
.resetfn = arm_gicv3_icc_reset,
191
},
192
- REGINFO_SENTINEL
193
};
194
195
/**
196
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
197
index XXXXXXX..XXXXXXX 100644
198
--- a/target/arm/cpu64.c
199
+++ b/target/arm/cpu64.c
200
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
201
{ .name = "L2MERRSR",
202
.cp = 15, .opc1 = 3, .crm = 15,
203
.access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
204
- REGINFO_SENTINEL
205
};
206
207
static void aarch64_a57_initfn(Object *obj)
208
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
209
index XXXXXXX..XXXXXXX 100644
210
--- a/target/arm/cpu_tcg.c
211
+++ b/target/arm/cpu_tcg.c
212
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa8_cp_reginfo[] = {
213
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
214
{ .name = "L2AUXCR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 2,
215
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
216
- REGINFO_SENTINEL
217
};
218
219
static void cortex_a8_initfn(Object *obj)
220
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa9_cp_reginfo[] = {
221
.access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
222
{ .name = "TLB_ATTR", .cp = 15, .crn = 15, .crm = 7, .opc1 = 5, .opc2 = 2,
223
.access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
224
- REGINFO_SENTINEL
225
};
226
227
static void cortex_a9_initfn(Object *obj)
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa15_cp_reginfo[] = {
229
#endif
230
{ .name = "L2ECTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 3,
231
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
232
- REGINFO_SENTINEL
233
};
234
235
static void cortex_a7_initfn(Object *obj)
236
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexr5_cp_reginfo[] = {
237
.access = PL1_RW, .type = ARM_CP_CONST },
238
{ .name = "DCACHE_INVAL", .cp = 15, .opc1 = 0, .crn = 15, .crm = 5,
239
.opc2 = 0, .access = PL1_W, .type = ARM_CP_NOP },
240
- REGINFO_SENTINEL
241
};
242
243
static void cortex_r5_initfn(Object *obj)
244
diff --git a/target/arm/helper.c b/target/arm/helper.c
245
index XXXXXXX..XXXXXXX 100644
246
--- a/target/arm/helper.c
247
+++ b/target/arm/helper.c
248
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
249
.secure = ARM_CP_SECSTATE_S,
250
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
251
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
252
- REGINFO_SENTINEL
253
};
254
255
static const ARMCPRegInfo not_v8_cp_reginfo[] = {
256
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
257
{ .name = "CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY,
258
.opc1 = 0, .opc2 = CP_ANY, .access = PL1_W,
259
.type = ARM_CP_NOP | ARM_CP_OVERRIDE },
260
- REGINFO_SENTINEL
261
};
262
263
static const ARMCPRegInfo not_v6_cp_reginfo[] = {
264
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
265
*/
266
{ .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
267
.access = PL1_W, .type = ARM_CP_WFI },
268
- REGINFO_SENTINEL
269
};
270
271
static const ARMCPRegInfo not_v7_cp_reginfo[] = {
272
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
273
.opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP },
274
{ .name = "NMRR", .cp = 15, .crn = 10, .crm = 2,
275
.opc1 = 0, .opc2 = 1, .access = PL1_RW, .type = ARM_CP_NOP },
276
- REGINFO_SENTINEL
277
};
278
279
static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
280
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
281
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
282
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
283
.resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read },
284
- REGINFO_SENTINEL
285
};
286
287
typedef struct pm_event {
288
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
289
{ .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
290
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
291
.writefn = tlbimvaa_write },
292
- REGINFO_SENTINEL
293
};
294
295
static const ARMCPRegInfo v7mp_cp_reginfo[] = {
296
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7mp_cp_reginfo[] = {
297
{ .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
298
.type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
299
.writefn = tlbimvaa_is_write },
300
- REGINFO_SENTINEL
301
};
302
303
static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
304
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
305
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
306
.writefn = pmovsset_write,
307
.raw_writefn = raw_write },
308
- REGINFO_SENTINEL
309
};
310
311
static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri,
312
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo t2ee_cp_reginfo[] = {
313
{ .name = "TEEHBR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 6, .opc2 = 0,
314
.access = PL0_RW, .fieldoffset = offsetof(CPUARMState, teehbr),
315
.accessfn = teehbr_access, .resetvalue = 0 },
316
- REGINFO_SENTINEL
317
};
318
319
static const ARMCPRegInfo v6k_cp_reginfo[] = {
320
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
321
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrprw_s),
322
offsetoflow32(CPUARMState, cp15.tpidrprw_ns) },
323
.resetvalue = 0 },
324
- REGINFO_SENTINEL
325
};
326
327
#ifndef CONFIG_USER_ONLY
328
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
329
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval),
330
.writefn = gt_sec_cval_write, .raw_writefn = raw_write,
331
},
332
- REGINFO_SENTINEL
333
};
334
335
static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
336
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
337
.access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
338
.readfn = gt_virt_cnt_read,
339
},
340
- REGINFO_SENTINEL
341
};
342
343
#endif
344
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vapa_cp_reginfo[] = {
345
.access = PL1_W, .accessfn = ats_access,
346
.writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
347
#endif
348
- REGINFO_SENTINEL
349
};
350
351
/* Return basic MPU access permission bits. */
352
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
353
.fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]),
354
.writefn = pmsav7_rgnr_write,
355
.resetfn = arm_cp_reset_ignore },
356
- REGINFO_SENTINEL
357
};
358
359
static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
360
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
361
{ .name = "946_PRBS7", .cp = 15, .crn = 6, .crm = 7, .opc1 = 0,
362
.opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0,
363
.fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) },
364
- REGINFO_SENTINEL
365
};
366
367
static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
368
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
369
.access = PL1_RW, .accessfn = access_tvm_trvm,
370
.fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
371
.resetvalue = 0, },
372
- REGINFO_SENTINEL
373
};
374
375
static const ARMCPRegInfo vmsa_cp_reginfo[] = {
376
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
377
/* No offsetoflow32 -- pass the entire TCR to writefn/raw_writefn. */
378
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.tcr_el[3]),
379
offsetof(CPUARMState, cp15.tcr_el[1])} },
380
- REGINFO_SENTINEL
381
};
382
383
/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
384
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
385
{ .name = "C9", .cp = 15, .crn = 9,
386
.crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW,
387
.type = ARM_CP_CONST | ARM_CP_OVERRIDE, .resetvalue = 0 },
388
- REGINFO_SENTINEL
389
};
390
391
static void xscale_cpar_write(CPUARMState *env, const ARMCPRegInfo *ri,
392
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
393
{ .name = "XSCALE_UNLOCK_DCACHE",
394
.cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 1,
395
.access = PL1_W, .type = ARM_CP_NOP },
396
- REGINFO_SENTINEL
397
};
398
399
static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
400
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
401
.access = PL1_RW,
402
.type = ARM_CP_CONST | ARM_CP_NO_RAW | ARM_CP_OVERRIDE,
403
.resetvalue = 0 },
404
- REGINFO_SENTINEL
405
};
406
407
static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
408
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
409
{ .name = "CDSR", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 6,
410
.access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
411
.resetvalue = 0 },
412
- REGINFO_SENTINEL
413
};
414
415
static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
416
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
417
.access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
418
{ .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
419
.access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
420
- REGINFO_SENTINEL
421
};
422
423
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
424
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
425
{ .name = "TCI_DCACHE", .cp = 15, .crn = 7, .crm = 14, .opc1 = 0, .opc2 = 3,
426
.access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
427
.resetvalue = (1 << 30) },
428
- REGINFO_SENTINEL
429
};
430
431
static const ARMCPRegInfo strongarm_cp_reginfo[] = {
432
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo strongarm_cp_reginfo[] = {
433
.crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY,
434
.access = PL1_RW, .resetvalue = 0,
435
.type = ARM_CP_CONST | ARM_CP_OVERRIDE | ARM_CP_NO_RAW },
436
- REGINFO_SENTINEL
437
};
438
439
static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri)
440
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
441
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
442
offsetof(CPUARMState, cp15.ttbr1_ns) },
443
.writefn = vmsa_ttbr_write, },
444
- REGINFO_SENTINEL
445
};
446
447
static uint64_t aa64_fpcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
448
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
449
.access = PL1_RW, .accessfn = access_trap_aa32s_el1,
450
.writefn = sdcr_write,
451
.fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
452
- REGINFO_SENTINEL
453
};
454
455
/* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
456
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
457
.type = ARM_CP_CONST,
458
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
459
.access = PL2_RW, .resetvalue = 0 },
460
- REGINFO_SENTINEL
461
};
462
463
/* Ditto, but for registers which exist in ARMv8 but not v7 */
464
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
465
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
466
.access = PL2_RW,
467
.type = ARM_CP_CONST, .resetvalue = 0 },
468
- REGINFO_SENTINEL
469
};
470
471
static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
472
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
473
.cp = 15, .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
474
.access = PL2_RW,
475
.fieldoffset = offsetof(CPUARMState, cp15.hstr_el2) },
476
- REGINFO_SENTINEL
477
};
478
479
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
480
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
481
.access = PL2_RW,
482
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
483
.writefn = hcr_writehigh },
484
- REGINFO_SENTINEL
485
};
486
487
static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri,
488
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
489
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2,
490
.access = PL2_RW, .accessfn = sel2_access,
491
.fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) },
492
- REGINFO_SENTINEL
493
};
494
495
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
496
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
497
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
498
.access = PL3_W, .type = ARM_CP_NO_RAW,
499
.writefn = tlbi_aa64_vae3_write },
500
- REGINFO_SENTINEL
501
};
502
503
#ifndef CONFIG_USER_ONLY
504
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
505
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
506
.access = PL1_RW, .accessfn = access_tda,
507
.type = ARM_CP_NOP },
508
- REGINFO_SENTINEL
509
};
510
511
static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
512
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
513
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
514
{ .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
515
.access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
516
- REGINFO_SENTINEL
517
};
518
519
/* Return the exception level to which exceptions should be taken
520
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
521
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
522
.writefn = dbgbcr_write, .raw_writefn = raw_write
523
},
524
- REGINFO_SENTINEL
525
};
526
define_arm_cp_regs(cpu, dbgregs);
98
}
527
}
99
s->mmc = pxa2xx_mmci_init(address_space, 0x41100000,
528
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
100
- blk_by_legacy_dinfo(dinfo),
529
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
101
+ dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
530
.writefn = dbgwcr_write, .raw_writefn = raw_write
102
qdev_get_gpio_in(s->pic, PXA2XX_PIC_MMC),
531
},
103
qdev_get_gpio_in(s->dma, PXA2XX_RX_RQ_MMCI),
532
- REGINFO_SENTINEL
104
qdev_get_gpio_in(s->dma, PXA2XX_TX_RQ_MMCI));
533
};
105
@@ -XXX,XX +XXX,XX @@ PXA2xxState *pxa255_init(MemoryRegion *address_space, unsigned int sdram_size)
534
define_arm_cp_regs(cpu, dbgregs);
106
s->gpio = pxa2xx_gpio_init(0x40e00000, s->cpu, s->pic, 85);
107
108
dinfo = drive_get(IF_SD, 0, 0);
109
- if (!dinfo) {
110
- error_report("missing SecureDigital device");
111
- exit(1);
112
+ if (!dinfo && !qtest_enabled()) {
113
+ warn_report("missing SecureDigital device");
114
}
535
}
115
s->mmc = pxa2xx_mmci_init(address_space, 0x41100000,
536
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
116
- blk_by_legacy_dinfo(dinfo),
537
.type = ARM_CP_IO,
117
+ dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
538
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
118
qdev_get_gpio_in(s->pic, PXA2XX_PIC_MMC),
539
.raw_writefn = pmevtyper_rawwrite },
119
qdev_get_gpio_in(s->dma, PXA2XX_RX_RQ_MMCI),
540
- REGINFO_SENTINEL
120
qdev_get_gpio_in(s->dma, PXA2XX_TX_RQ_MMCI));
541
};
542
define_arm_cp_regs(cpu, pmev_regs);
543
g_free(pmevcntr_name);
544
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
545
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
546
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
547
.resetvalue = extract64(cpu->pmceid1, 32, 32) },
548
- REGINFO_SENTINEL
549
};
550
define_arm_cp_regs(cpu, v81_pmu_regs);
551
}
552
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
553
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
554
.access = PL1_R, .accessfn = access_lor_ns,
555
.type = ARM_CP_CONST, .resetvalue = 0 },
556
- REGINFO_SENTINEL
557
};
558
559
#ifdef TARGET_AARCH64
560
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
561
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
562
.access = PL1_RW, .accessfn = access_pauth,
563
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
564
- REGINFO_SENTINEL
565
};
566
567
static const ARMCPRegInfo tlbirange_reginfo[] = {
568
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
569
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
570
.access = PL3_W, .type = ARM_CP_NO_RAW,
571
.writefn = tlbi_aa64_rvae3_write },
572
- REGINFO_SENTINEL
573
};
574
575
static const ARMCPRegInfo tlbios_reginfo[] = {
576
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
577
.opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
578
.access = PL3_W, .type = ARM_CP_NO_RAW,
579
.writefn = tlbi_aa64_vae3is_write },
580
- REGINFO_SENTINEL
581
};
582
583
static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
584
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
585
.type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO,
586
.opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 1,
587
.access = PL0_R, .readfn = rndr_readfn },
588
- REGINFO_SENTINEL
589
};
590
591
#ifndef CONFIG_USER_ONLY
592
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
593
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
594
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
595
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
596
- REGINFO_SENTINEL
597
};
598
599
static const ARMCPRegInfo dcpodp_reg[] = {
600
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
601
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
602
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
603
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
604
- REGINFO_SENTINEL
605
};
606
#endif /*CONFIG_USER_ONLY*/
607
608
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
609
{ .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64,
610
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6,
611
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
612
- REGINFO_SENTINEL
613
};
614
615
static const ARMCPRegInfo mte_tco_ro_reginfo[] = {
616
{ .name = "TCO", .state = ARM_CP_STATE_AA64,
617
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
618
.type = ARM_CP_CONST, .access = PL0_RW, },
619
- REGINFO_SENTINEL
620
};
621
622
static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
623
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
624
.accessfn = aa64_zva_access,
625
#endif
626
},
627
- REGINFO_SENTINEL
628
};
629
630
#endif
631
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo predinv_reginfo[] = {
632
{ .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
633
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
634
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
635
- REGINFO_SENTINEL
636
};
637
638
static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri)
639
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ccsidr2_reginfo[] = {
640
.access = PL1_R,
641
.accessfn = access_aa64_tid2,
642
.readfn = ccsidr2_read, .type = ARM_CP_NO_RAW },
643
- REGINFO_SENTINEL
644
};
645
646
static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
647
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
648
.cp = 14, .crn = 2, .crm = 0, .opc1 = 7, .opc2 = 0,
649
.accessfn = access_joscr_jmcr,
650
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
651
- REGINFO_SENTINEL
652
};
653
654
static const ARMCPRegInfo vhe_reginfo[] = {
655
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
656
.access = PL2_RW, .accessfn = e2h_access,
657
.writefn = gt_virt_cval_write, .raw_writefn = raw_write },
658
#endif
659
- REGINFO_SENTINEL
660
};
661
662
#ifndef CONFIG_USER_ONLY
663
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
664
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
665
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
666
.writefn = ats_write64 },
667
- REGINFO_SENTINEL
668
};
669
670
static const ARMCPRegInfo ats1cp_reginfo[] = {
671
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1cp_reginfo[] = {
672
.cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
673
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
674
.writefn = ats_write },
675
- REGINFO_SENTINEL
676
};
677
#endif
678
679
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = {
680
.cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
681
.access = PL2_RW, .type = ARM_CP_CONST,
682
.resetvalue = 0 },
683
- REGINFO_SENTINEL
684
};
685
686
void register_cp_regs_for_features(ARMCPU *cpu)
687
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
688
.access = PL1_R, .type = ARM_CP_CONST,
689
.accessfn = access_aa32_tid3,
690
.resetvalue = cpu->isar.id_isar6 },
691
- REGINFO_SENTINEL
692
};
693
define_arm_cp_regs(cpu, v6_idregs);
694
define_arm_cp_regs(cpu, v6_cp_reginfo);
695
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
696
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
697
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
698
.resetvalue = cpu->pmceid1 },
699
- REGINFO_SENTINEL
700
};
701
#ifdef CONFIG_USER_ONLY
702
ARMCPRegUserSpaceInfo v8_user_idregs[] = {
703
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
704
.exported_bits = 0x000000f0ffffffff },
705
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
706
.is_glob = true },
707
- REGUSERINFO_SENTINEL
708
};
709
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
710
#endif
711
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
712
.access = PL2_RW,
713
.resetvalue = vmpidr_def,
714
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
715
- REGINFO_SENTINEL
716
};
717
define_arm_cp_regs(cpu, vpidr_regs);
718
define_arm_cp_regs(cpu, el2_cp_reginfo);
719
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
720
.access = PL2_RW, .accessfn = access_el3_aa32ns,
721
.type = ARM_CP_NO_RAW,
722
.writefn = arm_cp_write_ignore, .readfn = mpidr_read },
723
- REGINFO_SENTINEL
724
};
725
define_arm_cp_regs(cpu, vpidr_regs);
726
define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
727
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
728
.raw_writefn = raw_write, .writefn = sctlr_write,
729
.fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]),
730
.resetvalue = cpu->reset_sctlr },
731
- REGINFO_SENTINEL
732
};
733
734
define_arm_cp_regs(cpu, el3_regs);
735
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
736
{ .name = "DUMMY",
737
.cp = 15, .crn = 0, .crm = 7, .opc1 = 0, .opc2 = CP_ANY,
738
.access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },
739
- REGINFO_SENTINEL
740
};
741
ARMCPRegInfo id_v8_midr_cp_reginfo[] = {
742
{ .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH,
743
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
744
.access = PL1_R,
745
.accessfn = access_aa64_tid1,
746
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
747
- REGINFO_SENTINEL
748
};
749
ARMCPRegInfo id_cp_reginfo[] = {
750
/* These are common to v8 and pre-v8 */
751
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
752
.access = PL1_R,
753
.accessfn = access_aa32_tid1,
754
.type = ARM_CP_CONST, .resetvalue = 0 },
755
- REGINFO_SENTINEL
756
};
757
/* TLBTR is specific to VMSA */
758
ARMCPRegInfo id_tlbtr_reginfo = {
759
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
760
{ .name = "MIDR_EL1",
761
.exported_bits = 0x00000000ffffffff },
762
{ .name = "REVIDR_EL1" },
763
- REGUSERINFO_SENTINEL
764
};
765
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
766
#endif
767
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
768
arm_feature(env, ARM_FEATURE_STRONGARM)) {
769
- ARMCPRegInfo *r;
770
+ size_t i;
771
/* Register the blanket "writes ignored" value first to cover the
772
* whole space. Then update the specific ID registers to allow write
773
* access, so that they ignore writes rather than causing them to
774
* UNDEF.
775
*/
776
define_one_arm_cp_reg(cpu, &crn0_wi_reginfo);
777
- for (r = id_pre_v8_midr_cp_reginfo;
778
- r->type != ARM_CP_SENTINEL; r++) {
779
- r->access = PL1_RW;
780
+ for (i = 0; i < ARRAY_SIZE(id_pre_v8_midr_cp_reginfo); ++i) {
781
+ id_pre_v8_midr_cp_reginfo[i].access = PL1_RW;
782
}
783
- for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) {
784
- r->access = PL1_RW;
785
+ for (i = 0; i < ARRAY_SIZE(id_cp_reginfo); ++i) {
786
+ id_cp_reginfo[i].access = PL1_RW;
787
}
788
id_mpuir_reginfo.access = PL1_RW;
789
id_tlbtr_reginfo.access = PL1_RW;
790
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
791
{ .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
792
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
793
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
794
- REGINFO_SENTINEL
795
};
796
#ifdef CONFIG_USER_ONLY
797
ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
798
{ .name = "MPIDR_EL1",
799
.fixed_bits = 0x0000000080000000 },
800
- REGUSERINFO_SENTINEL
801
};
802
modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo);
803
#endif
804
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
805
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1,
806
.access = PL3_RW, .type = ARM_CP_CONST,
807
.resetvalue = 0 },
808
- REGINFO_SENTINEL
809
};
810
define_arm_cp_regs(cpu, auxcr_reginfo);
811
if (cpu_isar_feature(aa32_ac2, cpu)) {
812
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
813
.type = ARM_CP_CONST,
814
.opc0 = 3, .opc1 = 1, .crn = 15, .crm = 3, .opc2 = 0,
815
.access = PL1_R, .resetvalue = cpu->reset_cbar },
816
- REGINFO_SENTINEL
817
};
818
/* We don't implement a r/w 64 bit CBAR currently */
819
assert(arm_feature(env, ARM_FEATURE_CBAR_RO));
820
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
821
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s),
822
offsetof(CPUARMState, cp15.vbar_ns) },
823
.resetvalue = 0 },
824
- REGINFO_SENTINEL
825
};
826
define_arm_cp_regs(cpu, vbar_cp_reginfo);
827
}
828
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
829
r->writefn);
830
}
831
}
832
- /* Bad type field probably means missing sentinel at end of reg list */
833
- assert(cptype_valid(r->type));
834
+
835
for (crm = crmmin; crm <= crmmax; crm++) {
836
for (opc1 = opc1min; opc1 <= opc1max; opc1++) {
837
for (opc2 = opc2min; opc2 <= opc2max; opc2++) {
838
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
839
}
840
}
841
842
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
843
- const ARMCPRegInfo *regs, void *opaque)
844
+/* Define a whole list of registers */
845
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
846
+ void *opaque, size_t len)
847
{
848
- /* Define a whole list of registers */
849
- const ARMCPRegInfo *r;
850
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
851
- define_one_arm_cp_reg_with_opaque(cpu, r, opaque);
852
+ size_t i;
853
+ for (i = 0; i < len; ++i) {
854
+ define_one_arm_cp_reg_with_opaque(cpu, regs + i, opaque);
855
}
856
}
857
858
@@ -XXX,XX +XXX,XX @@ void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
859
* user-space cannot alter any values and dynamic values pertaining to
860
* execution state are hidden from user space view anyway.
861
*/
862
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
863
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
864
+ const ARMCPRegUserSpaceInfo *mods,
865
+ size_t mods_len)
866
{
867
- const ARMCPRegUserSpaceInfo *m;
868
- ARMCPRegInfo *r;
869
-
870
- for (m = mods; m->name; m++) {
871
+ for (size_t mi = 0; mi < mods_len; ++mi) {
872
+ const ARMCPRegUserSpaceInfo *m = mods + mi;
873
GPatternSpec *pat = NULL;
874
+
875
if (m->is_glob) {
876
pat = g_pattern_spec_new(m->name);
877
}
878
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
879
+ for (size_t ri = 0; ri < regs_len; ++ri) {
880
+ ARMCPRegInfo *r = regs + ri;
881
+
882
if (pat && g_pattern_match_string(pat, r->name)) {
883
r->type = ARM_CP_CONST;
884
r->access = PL0U_R;
121
--
885
--
122
2.17.0
886
2.25.1
123
887
124
888
From: Mathew Maidment <mathew1800@gmail.com>

The duplication of id_tlbtr_reginfo was unintentionally added within
3281af8114c6b8ead02f08b58e3c36895c1ea047 which should have been
id_mpuir_reginfo.

The effect was that for OMAP and StrongARM CPUs we would
incorrectly UNDEF writes to MPUIR rather than NOPing them.

Signed-off-by: Mathew Maidment <mathew1800@gmail.com>
Message-id: 20180501184933.37609-2-mathew1800@gmail.com
[PMM: tweak commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

These particular data structures are not modified at runtime.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
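As an aside, not part of either patch above: making tables that are never modified at runtime static const (the subject of the second message) is plain C, and lets the data live in read-only storage. A generic sketch with made-up names:

    typedef struct {
        const char *name;
        int offset;
    } RegDesc;

    /* Never written after startup, so declare it static const. */
    static const RegDesc example_regs[] = {
        { "REG_A", 0x00 },
        { "REG_B", 0x04 },
    };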
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
18
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
24
for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) {
19
.resetvalue = cpu->pmceid1 },
25
r->access = PL1_RW;
20
};
26
}
21
#ifdef CONFIG_USER_ONLY
27
- id_tlbtr_reginfo.access = PL1_RW;
22
- ARMCPRegUserSpaceInfo v8_user_idregs[] = {
28
+ id_mpuir_reginfo.access = PL1_RW;
23
+ static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
29
id_tlbtr_reginfo.access = PL1_RW;
24
{ .name = "ID_AA64PFR0_EL1",
25
.exported_bits = 0x000f000f00ff0000,
26
.fixed_bits = 0x0000000000000011 },
27
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
28
*/
29
if (arm_feature(env, ARM_FEATURE_EL3)) {
30
if (arm_feature(env, ARM_FEATURE_AARCH64)) {
31
- ARMCPRegInfo nsacr = {
32
+ static const ARMCPRegInfo nsacr = {
33
.name = "NSACR", .type = ARM_CP_CONST,
34
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
35
.access = PL1_RW, .accessfn = nsacr_access,
36
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
37
};
38
define_one_arm_cp_reg(cpu, &nsacr);
39
} else {
40
- ARMCPRegInfo nsacr = {
41
+ static const ARMCPRegInfo nsacr = {
42
.name = "NSACR",
43
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
44
.access = PL3_RW | PL1_R,
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
30
}
46
}
47
} else {
31
if (arm_feature(env, ARM_FEATURE_V8)) {
48
if (arm_feature(env, ARM_FEATURE_V8)) {
49
- ARMCPRegInfo nsacr = {
50
+ static const ARMCPRegInfo nsacr = {
51
.name = "NSACR", .type = ARM_CP_CONST,
52
.cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
53
.access = PL1_R,
54
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
55
.access = PL1_R, .type = ARM_CP_CONST,
56
.resetvalue = cpu->pmsav7_dregion << 8
57
};
58
- ARMCPRegInfo crn0_wi_reginfo = {
59
+ static const ARMCPRegInfo crn0_wi_reginfo = {
60
.name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY,
61
.opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
62
.type = ARM_CP_NOP | ARM_CP_OVERRIDE
63
};
64
#ifdef CONFIG_USER_ONLY
65
- ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
66
+ static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
67
{ .name = "MIDR_EL1",
68
.exported_bits = 0x00000000ffffffff },
69
{ .name = "REVIDR_EL1" },
70
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
71
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
72
};
73
#ifdef CONFIG_USER_ONLY
74
- ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
75
+ static const ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
76
{ .name = "MPIDR_EL1",
77
.fixed_bits = 0x0000000080000000 },
78
};
79
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
80
}
81
82
if (arm_feature(env, ARM_FEATURE_VBAR)) {
83
- ARMCPRegInfo vbar_cp_reginfo[] = {
84
+ static const ARMCPRegInfo vbar_cp_reginfo[] = {
85
{ .name = "VBAR", .state = ARM_CP_STATE_BOTH,
86
.opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0,
87
.access = PL1_RW, .writefn = vbar_write,
32
--
88
--
33
2.17.0
89
2.25.1
34
90
35
91
From: Richard Henderson <richard.henderson@linaro.org>

The (size > 3 && !is_q) condition is identical to the preceding test
of bit 3 in immh; eliminate it. For the benefit of Coverity, assert
that size is within the bounds we expect.

Fixes: Coverity CID1385846
Fixes: Coverity CID1385849
Fixes: Coverity CID1385852
Fixes: Coverity CID1385857
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180501180455.11214-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Instead of defining ARM_CP_FLAG_MASK to remove flags,
define ARM_CP_SPECIAL_MASK to isolate special cases.
Sort the specials to the low bits. Use an enum.

Split the large comment block so as to document each
value separately.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpregs.h | 130 +++++++++++++++++++++++--------------
target/arm/cpu.c | 4 +-
target/arm/helper.c | 4 +-
target/arm/translate-a64.c | 6 +-
target/arm/translate.c | 6 +-
5 files changed, 92 insertions(+), 58 deletions(-)
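As an aside, not part of the patch: the scheme described above keeps the mutually exclusive special-case codes in the low bits so a single mask isolates them, while independent flags use the higher bits. A minimal sketch with invented names:

    enum {
        CP_SPECIAL_MASK = 0x000f,  /* low bits: exclusive special codes */
        CP_NOP          = 0x0001,
        CP_WFI          = 0x0002,
        CP_CONST        = 1 << 4,  /* independent flag bits start here */
        CP_IO           = 1 << 5,
    };

    /* One AND isolates the special code; zero means "no special case". */
    static int special_of(int type)
    {
        return type & CP_SPECIAL_MASK;
    }
    /* e.g. special_of(CP_WFI | CP_IO) == CP_WFI, special_of(CP_CONST) == 0 */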
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpregs.h
25
+++ b/target/arm/cpregs.h
26
@@ -XXX,XX +XXX,XX @@
27
#define TARGET_ARM_CPREGS_H
28
29
/*
30
- * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
31
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
32
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
33
- * TCG can assume the value to be constant (ie load at translate time)
34
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
35
- * indicates that the TB should not be ended after a write to this register
36
- * (the default is that the TB ends after cp writes). OVERRIDE permits
37
- * a register definition to override a previous definition for the
38
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
39
- * old must have the OVERRIDE bit set.
40
- * ALIAS indicates that this register is an alias view of some underlying
41
- * state which is also visible via another register, and that the other
42
- * register is handling migration and reset; registers marked ALIAS will not be
43
- * migrated but may have their state set by syncing of register state from KVM.
44
- * NO_RAW indicates that this register has no underlying state and does not
45
- * support raw access for state saving/loading; it will not be used for either
46
- * migration or KVM state synchronization. (Typically this is for "registers"
47
- * which are actually used as instructions for cache maintenance and so on.)
48
- * IO indicates that this register does I/O and therefore its accesses
49
- * need to be marked with gen_io_start() and also end the TB. In particular,
50
- * registers which implement clocks or timers require this.
51
- * RAISES_EXC is for when the read or write hook might raise an exception;
52
- * the generated code will synchronize the CPU state before calling the hook
53
- * so that it is safe for the hook to call raise_exception().
54
- * NEWEL is for writes to registers that might change the exception
55
- * level - typically on older ARM chips. For those cases we need to
56
- * re-read the new el when recomputing the translation flags.
57
+ * ARMCPRegInfo type field bits:
58
*/
59
-#define ARM_CP_SPECIAL 0x0001
60
-#define ARM_CP_CONST 0x0002
61
-#define ARM_CP_64BIT 0x0004
62
-#define ARM_CP_SUPPRESS_TB_END 0x0008
63
-#define ARM_CP_OVERRIDE 0x0010
64
-#define ARM_CP_ALIAS 0x0020
65
-#define ARM_CP_IO 0x0040
66
-#define ARM_CP_NO_RAW 0x0080
67
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
68
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
69
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
70
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
71
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
72
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
73
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
74
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
75
-#define ARM_CP_FPU 0x1000
76
-#define ARM_CP_SVE 0x2000
77
-#define ARM_CP_NO_GDB 0x4000
78
-#define ARM_CP_RAISES_EXC 0x8000
79
-#define ARM_CP_NEWEL 0x10000
80
-/* Mask of only the flag bits in a type field */
81
-#define ARM_CP_FLAG_MASK 0x1f0ff
82
+enum {
83
+ /*
84
+ * Register must be handled specially during translation.
85
+ * The method is one of the values below:
86
+ */
87
+ ARM_CP_SPECIAL_MASK = 0x000f,
88
+ /* Special: no change to PE state: writes ignored, reads ignored. */
89
+ ARM_CP_NOP = 0x0001,
90
+ /* Special: sysreg is WFI, for v5 and v6. */
91
+ ARM_CP_WFI = 0x0002,
92
+ /* Special: sysreg is NZCV. */
93
+ ARM_CP_NZCV = 0x0003,
94
+ /* Special: sysreg is CURRENTEL. */
95
+ ARM_CP_CURRENTEL = 0x0004,
96
+ /* Special: sysreg is DC ZVA or similar. */
97
+ ARM_CP_DC_ZVA = 0x0005,
98
+ ARM_CP_DC_GVA = 0x0006,
99
+ ARM_CP_DC_GZVA = 0x0007,
100
+
101
+ /* Flag: reads produce resetvalue; writes ignored. */
102
+ ARM_CP_CONST = 1 << 4,
103
+ /* Flag: For ARM_CP_STATE_AA32, sysreg is 64-bit. */
104
+ ARM_CP_64BIT = 1 << 5,
105
+ /*
106
+ * Flag: TB should not be ended after a write to this register
107
+ * (the default is that the TB ends after cp writes).
108
+ */
109
+ ARM_CP_SUPPRESS_TB_END = 1 << 6,
110
+ /*
111
+ * Flag: Permit a register definition to override a previous definition
112
+ * for the same (cp, is64, crn, crm, opc1, opc2) tuple: either the new
113
+ * or the old must have the ARM_CP_OVERRIDE bit set.
114
+ */
115
+ ARM_CP_OVERRIDE = 1 << 7,
116
+ /*
117
+ * Flag: Register is an alias view of some underlying state which is also
118
+ * visible via another register, and that the other register is handling
119
+ * migration and reset; registers marked ARM_CP_ALIAS will not be migrated
120
+ * but may have their state set by syncing of register state from KVM.
121
+ */
122
+ ARM_CP_ALIAS = 1 << 8,
123
+ /*
124
+ * Flag: Register does I/O and therefore its accesses need to be marked
125
+ * with gen_io_start() and also end the TB. In particular, registers which
126
+ * implement clocks or timers require this.
127
+ */
128
+ ARM_CP_IO = 1 << 9,
129
+ /*
130
+ * Flag: Register has no underlying state and does not support raw access
131
+ * for state saving/loading; it will not be used for either migration or
132
+ * KVM state synchronization. Typically this is for "registers" which are
133
+ * actually used as instructions for cache maintenance and so on.
134
+ */
135
+ ARM_CP_NO_RAW = 1 << 10,
136
+ /*
137
+ * Flag: The read or write hook might raise an exception; the generated
138
+ * code will synchronize the CPU state before calling the hook so that it
139
+ * is safe for the hook to call raise_exception().
140
+ */
141
+ ARM_CP_RAISES_EXC = 1 << 11,
142
+ /*
143
+ * Flag: Writes to the sysreg might change the exception level - typically
144
+ * on older ARM chips. For those cases we need to re-read the new el when
145
+ * recomputing the translation flags.
146
+ */
147
+ ARM_CP_NEWEL = 1 << 12,
148
+ /*
149
+ * Flag: Access check for this sysreg is identical to accessing FPU state
150
+ * from an instruction: use translation fp_access_check().
151
+ */
152
+ ARM_CP_FPU = 1 << 13,
153
+ /*
154
+ * Flag: Access check for this sysreg is identical to accessing SVE state
155
+ * from an instruction: use translation sve_access_check().
156
+ */
157
+ ARM_CP_SVE = 1 << 14,
158
+ /* Flag: Do not expose in gdb sysreg xml. */
159
+ ARM_CP_NO_GDB = 1 << 15,
160
+};
161
162
/*
163
* Valid values for ARMCPRegInfo state field, indicating which of
164
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
165
index XXXXXXX..XXXXXXX 100644
166
--- a/target/arm/cpu.c
167
+++ b/target/arm/cpu.c
168
@@ -XXX,XX +XXX,XX @@ static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
169
ARMCPRegInfo *ri = value;
170
ARMCPU *cpu = opaque;
171
172
- if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS)) {
173
+ if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS)) {
174
return;
175
}
176
177
@@ -XXX,XX +XXX,XX @@ static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
178
ARMCPU *cpu = opaque;
179
uint64_t oldvalue, newvalue;
180
181
- if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
182
+ if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
183
return;
184
}
185
186
diff --git a/target/arm/helper.c b/target/arm/helper.c
187
index XXXXXXX..XXXXXXX 100644
188
--- a/target/arm/helper.c
189
+++ b/target/arm/helper.c
190
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
191
* multiple times. Special registers (ie NOP/WFI) are
192
* never migratable and not even raw-accessible.
193
*/
194
- if ((r->type & ARM_CP_SPECIAL)) {
195
+ if (r->type & ARM_CP_SPECIAL_MASK) {
196
r2->type |= ARM_CP_NO_RAW;
197
}
198
if (((r->crm == CP_ANY) && crm != 0) ||
199
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
200
/* Check that the register definition has enough info to handle
201
* reads and writes if they are permitted.
202
*/
203
- if (!(r->type & (ARM_CP_SPECIAL|ARM_CP_CONST))) {
204
+ if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
205
if (r->access & PL3_R) {
206
assert((r->fieldoffset ||
207
(r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) ||
19
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
208
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
20
index XXXXXXX..XXXXXXX 100644
209
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate-a64.c
210
--- a/target/arm/translate-a64.c
22
+++ b/target/arm/translate-a64.c
211
+++ b/target/arm/translate-a64.c
23
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
212
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
24
unallocated_encoding(s);
213
}
25
return;
214
26
}
215
/* Handle special cases first */
27
-
216
- switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
28
- if (size > 3 && !is_q) {
217
+ switch (ri->type & ARM_CP_SPECIAL_MASK) {
29
- unallocated_encoding(s);
218
+ case 0:
30
- return;
219
+ break;
31
- }
220
case ARM_CP_NOP:
32
+ tcg_debug_assert(size <= 3);
221
return;
33
222
case ARM_CP_NZCV:
34
if (!fp_access_check(s)) {
223
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
35
return;
224
}
225
return;
226
default:
227
- break;
228
+ g_assert_not_reached();
229
}
230
if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
231
return;
232
diff --git a/target/arm/translate.c b/target/arm/translate.c
233
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/translate.c
235
+++ b/target/arm/translate.c
236
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
237
}
238
239
/* Handle special cases first */
240
- switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
241
+ switch (ri->type & ARM_CP_SPECIAL_MASK) {
242
+ case 0:
243
+ break;
244
case ARM_CP_NOP:
245
return;
246
case ARM_CP_WFI:
247
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
248
s->base.is_jmp = DISAS_WFI;
249
return;
250
default:
251
- break;
252
+ g_assert_not_reached();
253
}
254
255
if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
36
--
256
--
37
2.17.0
257
2.25.1
38
39
From: Eric Auger <eric.auger@redhat.com>

In case the MSI is translated by an IOMMU we need to fix up the
MSI route with the translated address.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@nxp.com>
Message-id: 1524665762-31355-12-git-send-email-eric.auger@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/kvm.c | 38 +++++++++++++++++++++++++++++++++++++-
target/arm/trace-events | 3 +++
2 files changed, 40 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

Standardize on g_assert_not_reached() for "should not happen".
Retain abort() when preceded by fprintf or error_report.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 7 +++----
target/arm/hvf/hvf.c | 2 +-
target/arm/kvm-stub.c | 4 ++--
target/arm/kvm.c | 4 ++--
target/arm/machine.c | 4 ++--
target/arm/translate-a64.c | 4 ++--
target/arm/translate-neon.c | 2 +-
target/arm/translate.c | 4 ++--
8 files changed, 15 insertions(+), 16 deletions(-)
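As an aside, not part of the patch: g_assert_not_reached() is the standard GLib macro for code paths that must be impossible; unlike a bare abort() it also reports the file and line. A trivial, hypothetical example:

    #include <glib.h>

    static const char *state_name(int state)
    {
        switch (state) {
        case 0:
            return "idle";
        case 1:
            return "running";
        default:
            g_assert_not_reached();  /* impossible by construction */
        }
    }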
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
26
break;
27
default:
28
/* broken reginfo with out-of-range opc1 */
29
- assert(false);
30
- break;
31
+ g_assert_not_reached();
32
}
33
/* assert our permissions are not too lax (stricter is fine) */
34
assert((r->access & ~mask) == 0);
35
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
36
break;
37
default:
38
/* Never happens, but compiler isn't smart enough to tell. */
39
- abort();
40
+ g_assert_not_reached();
41
}
42
}
43
*prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
44
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
45
break;
46
default:
47
/* Never happens, but compiler isn't smart enough to tell. */
48
- abort();
49
+ g_assert_not_reached();
50
}
51
}
52
if (domain_prot == 3) {
53
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/hvf/hvf.c
56
+++ b/target/arm/hvf/hvf.c
57
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
58
/* we got kicked, no exit to process */
59
return 0;
60
default:
61
- assert(0);
62
+ g_assert_not_reached();
63
}
64
65
hvf_sync_vtimer(cpu);
66
diff --git a/target/arm/kvm-stub.c b/target/arm/kvm-stub.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/kvm-stub.c
69
+++ b/target/arm/kvm-stub.c
70
@@ -XXX,XX +XXX,XX @@
71
72
bool write_kvmstate_to_list(ARMCPU *cpu)
73
{
74
- abort();
75
+ g_assert_not_reached();
76
}
77
78
bool write_list_to_kvmstate(ARMCPU *cpu, int level)
79
{
80
- abort();
81
+ g_assert_not_reached();
82
}
16
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
83
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
17
index XXXXXXX..XXXXXXX 100644
84
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/kvm.c
85
--- a/target/arm/kvm.c
19
+++ b/target/arm/kvm.c
86
+++ b/target/arm/kvm.c
20
@@ -XXX,XX +XXX,XX @@
87
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu)
21
#include "sysemu/kvm.h"
88
ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &r);
22
#include "kvm_arm.h"
89
break;
23
#include "cpu.h"
90
default:
24
+#include "trace.h"
91
- abort();
25
#include "internals.h"
92
+ g_assert_not_reached();
26
#include "hw/arm/arm.h"
93
}
27
+#include "hw/pci/pci.h"
94
if (ret) {
28
#include "exec/memattrs.h"
95
ok = false;
29
#include "exec/address-spaces.h"
96
@@ -XXX,XX +XXX,XX @@ bool write_list_to_kvmstate(ARMCPU *cpu, int level)
30
#include "hw/boards.h"
97
r.addr = (uintptr_t)(cpu->cpreg_values + i);
31
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vgic_probe(void)
98
break;
32
int kvm_arch_fixup_msi_route(struct kvm_irq_routing_entry *route,
99
default:
33
uint64_t address, uint32_t data, PCIDevice *dev)
100
- abort();
34
{
101
+ g_assert_not_reached();
35
- return 0;
102
}
36
+ AddressSpace *as = pci_device_iommu_address_space(dev);
103
ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &r);
37
+ hwaddr xlat, len, doorbell_gpa;
104
if (ret) {
38
+ MemoryRegionSection mrs;
105
diff --git a/target/arm/machine.c b/target/arm/machine.c
39
+ MemoryRegion *mr;
106
index XXXXXXX..XXXXXXX 100644
40
+ int ret = 1;
107
--- a/target/arm/machine.c
41
+
108
+++ b/target/arm/machine.c
42
+ if (as == &address_space_memory) {
109
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
43
+ return 0;
110
if (kvm_enabled()) {
44
+ }
111
if (!write_kvmstate_to_list(cpu)) {
45
+
112
/* This should never fail */
46
+ /* MSI doorbell address is translated by an IOMMU */
113
- abort();
47
+
114
+ g_assert_not_reached();
48
+ rcu_read_lock();
115
}
49
+ mr = address_space_translate(as, address, &xlat, &len, true);
116
50
+ if (!mr) {
117
/*
51
+ goto unlock;
118
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
52
+ }
119
} else {
53
+ mrs = memory_region_find(mr, xlat, 1);
120
if (!write_cpustate_to_list(cpu, false)) {
54
+ if (!mrs.mr) {
121
/* This should never fail. */
55
+ goto unlock;
122
- abort();
56
+ }
123
+ g_assert_not_reached();
57
+
124
}
58
+ doorbell_gpa = mrs.offset_within_address_space;
125
}
59
+ memory_region_unref(mrs.mr);
126
60
+
127
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
61
+ route->u.msi.address_lo = doorbell_gpa;
128
index XXXXXXX..XXXXXXX 100644
62
+ route->u.msi.address_hi = doorbell_gpa >> 32;
129
--- a/target/arm/translate-a64.c
63
+
130
+++ b/target/arm/translate-a64.c
64
+ trace_kvm_arm_fixup_msi_route(address, doorbell_gpa);
131
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
65
+
132
gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
66
+ ret = 0;
133
break;
67
+
134
default:
68
+unlock:
135
- abort();
69
+ rcu_read_unlock();
136
+ g_assert_not_reached();
70
+ return ret;
137
}
138
139
write_fp_sreg(s, rd, tcg_res);
140
@@ -XXX,XX +XXX,XX @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
141
break;
142
}
143
default:
144
- abort();
145
+ g_assert_not_reached();
146
}
71
}
147
}
72
148
73
int kvm_arch_add_msi_route_post(struct kvm_irq_routing_entry *route,
149
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
74
diff --git a/target/arm/trace-events b/target/arm/trace-events
75
index XXXXXXX..XXXXXXX 100644
150
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/trace-events
151
--- a/target/arm/translate-neon.c
77
+++ b/target/arm/trace-events
152
+++ b/target/arm/translate-neon.c
78
@@ -XXX,XX +XXX,XX @@ arm_gt_tval_write(int timer, uint64_t value) "gt_tval_write: timer %d value 0x%"
153
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
79
arm_gt_ctl_write(int timer, uint64_t value) "gt_ctl_write: timer %d value 0x%" PRIx64
154
}
80
arm_gt_imask_toggle(int timer, int irqstate) "gt_ctl_write: timer %d IMASK toggle, new irqstate %d"
155
break;
81
arm_gt_cntvoff_write(uint64_t value) "gt_cntvoff_write: value 0x%" PRIx64
156
default:
82
+
157
- abort();
83
+# target/arm/kvm.c
158
+ g_assert_not_reached();
84
+kvm_arm_fixup_msi_route(uint64_t iova, uint64_t gpa) "MSI iova = 0x%"PRIx64" is translated into 0x%"PRIx64
159
}
160
if ((vd + a->stride * (nregs - 1)) > 31) {
161
/*
162
diff --git a/target/arm/translate.c b/target/arm/translate.c
163
index XXXXXXX..XXXXXXX 100644
164
--- a/target/arm/translate.c
165
+++ b/target/arm/translate.c
166
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
167
offset = 4;
168
break;
169
default:
170
- abort();
171
+ g_assert_not_reached();
172
}
173
tcg_gen_addi_i32(addr, addr, offset);
174
tmp = load_reg(s, 14);
175
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
176
offset = 0;
177
break;
178
default:
179
- abort();
180
+ g_assert_not_reached();
181
}
182
tcg_gen_addi_i32(addr, addr, offset);
183
gen_helper_set_r13_banked(cpu_env, tcg_constant_i32(mode), addr);
85
--
184
--
86
2.17.0
185
2.25.1
87
88
From: Eric Auger <eric.auger@redhat.com>

We set up the infrastructure to enumerate all the PCI devices
attached to the SMMU and create an associated IOMMU memory
region and address space.

This information is stored in SMMUDevice objects. The devices are
grouped according to the PCIBus they belong to. A hash table
indexed by the PCIBus pointer is used. Also an array indexed by
the bus number allows finding the list of SMMUDevices.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1524665762-31355-3-git-send-email-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/smmu-common.h | 8 +++++
hw/arm/smmu-common.c | 69 ++++++++++++++++++++++++++++++++++++
hw/arm/trace-events | 3 ++
3 files changed, 80 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

Create a typedef as well, and use it in ARMCPRegInfo.
This won't be perfect for debugging, but it'll nicely
display the most common cases.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpregs.h | 44 +++++++++++++++++++++++---------------------
target/arm/helper.c | 2 +-
2 files changed, 24 insertions(+), 22 deletions(-)
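As an aside, not part of the patch: the lookup scheme described in the SMMU commit message above can be sketched as a GLib hash table keyed by the bus pointer plus a lazily filled array indexed by bus number. All names below are simplified stand-ins for the real SMMUState/SMMUPciBus types:

    #include <glib.h>
    #include <stdint.h>

    typedef struct {
        uint8_t num;               /* stand-in for a PCIBus */
    } Bus;

    typedef struct {
        Bus *bus;                  /* per-bus/per-device state hangs off here */
    } BusEntry;

    typedef struct {
        GHashTable *by_busptr;     /* Bus* -> BusEntry*, filled at setup time */
        BusEntry *by_bus_num[256]; /* cache, populated lazily on first lookup */
    } State;

    static BusEntry *find_by_bus_num(State *s, uint8_t num)
    {
        if (!s->by_bus_num[num]) {
            GHashTableIter it;
            BusEntry *e;

            g_hash_table_iter_init(&it, s->by_busptr);
            while (g_hash_table_iter_next(&it, NULL, (void **)&e)) {
                if (e->bus->num == num) {
                    s->by_bus_num[num] = e;  /* bus number known now; cache it */
                    break;
                }
            }
        }
        return s->by_bus_num[num];
    }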
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/smmu-common.h
18
--- a/target/arm/cpregs.h
26
+++ b/include/hw/arm/smmu-common.h
19
+++ b/target/arm/cpregs.h
27
@@ -XXX,XX +XXX,XX @@ typedef struct {
20
@@ -XXX,XX +XXX,XX @@ enum {
28
#define ARM_SMMU_GET_CLASS(obj) \
21
* described with these bits, then use a laxer set of restrictions, and
29
OBJECT_GET_CLASS(SMMUBaseClass, (obj), TYPE_ARM_SMMU)
22
* do the more restrictive/complex check inside a helper function.
30
23
*/
31
+/* Return the SMMUPciBus handle associated to a PCI bus number */
24
-#define PL3_R 0x80
32
+SMMUPciBus *smmu_find_smmu_pcibus(SMMUState *s, uint8_t bus_num);
25
-#define PL3_W 0x40
33
+
26
-#define PL2_R (0x20 | PL3_R)
34
+/* Return the stream ID of an SMMU device */
27
-#define PL2_W (0x10 | PL3_W)
35
+static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
28
-#define PL1_R (0x08 | PL2_R)
36
+{
29
-#define PL1_W (0x04 | PL2_W)
37
+ return PCI_BUILD_BDF(pci_bus_num(sdev->bus), sdev->devfn);
30
-#define PL0_R (0x02 | PL1_R)
38
+}
31
-#define PL0_W (0x01 | PL1_W)
39
#endif /* HW_ARM_SMMU_COMMON */
32
+typedef enum {
40
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
33
+ PL3_R = 0x80,
34
+ PL3_W = 0x40,
35
+ PL2_R = 0x20 | PL3_R,
36
+ PL2_W = 0x10 | PL3_W,
37
+ PL1_R = 0x08 | PL2_R,
38
+ PL1_W = 0x04 | PL2_W,
39
+ PL0_R = 0x02 | PL1_R,
40
+ PL0_W = 0x01 | PL1_W,
41
42
-/*
43
- * For user-mode some registers are accessible to EL0 via a kernel
44
- * trap-and-emulate ABI. In this case we define the read permissions
45
- * as actually being PL0_R. However some bits of any given register
46
- * may still be masked.
47
- */
48
+ /*
49
+ * For user-mode some registers are accessible to EL0 via a kernel
50
+ * trap-and-emulate ABI. In this case we define the read permissions
51
+ * as actually being PL0_R. However some bits of any given register
52
+ * may still be masked.
53
+ */
54
#ifdef CONFIG_USER_ONLY
55
-#define PL0U_R PL0_R
56
+ PL0U_R = PL0_R,
57
#else
58
-#define PL0U_R PL1_R
59
+ PL0U_R = PL1_R,
60
#endif
61
62
-#define PL3_RW (PL3_R | PL3_W)
63
-#define PL2_RW (PL2_R | PL2_W)
64
-#define PL1_RW (PL1_R | PL1_W)
65
-#define PL0_RW (PL0_R | PL0_W)
66
+ PL3_RW = PL3_R | PL3_W,
67
+ PL2_RW = PL2_R | PL2_W,
68
+ PL1_RW = PL1_R | PL1_W,
69
+ PL0_RW = PL0_R | PL0_W,
70
+} CPAccessRights;
71
72
typedef enum CPAccessResult {
73
/* Access is permitted */
74
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
75
/* Register type: ARM_CP_* bits/values */
76
int type;
77
/* Access rights: PL*_[RW] */
78
- int access;
79
+ CPAccessRights access;
80
/* Security state: ARM_CP_SECSTATE_* bits/values */
81
int secure;
82
/*
83
diff --git a/target/arm/helper.c b/target/arm/helper.c
41
index XXXXXXX..XXXXXXX 100644
84
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/arm/smmu-common.c
85
--- a/target/arm/helper.c
43
+++ b/hw/arm/smmu-common.c
86
+++ b/target/arm/helper.c
44
@@ -XXX,XX +XXX,XX @@
87
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
45
#include "qemu/error-report.h"
88
* to encompass the generic architectural permission check.
46
#include "hw/arm/smmu-common.h"
89
*/
47
90
if (r->state != ARM_CP_STATE_AA32) {
48
+/**
91
- int mask = 0;
49
+ * The bus number is used for lookup when SID based invalidation occurs.
92
+ CPAccessRights mask;
50
+ * In that case we lazily populate the SMMUPciBus array from the bus hash
93
switch (r->opc1) {
51
+ * table. At the time the SMMUPciBus is created (smmu_find_add_as), the bus
94
case 0:
52
+ * numbers may not be always initialized yet.
95
/* min_EL EL1, but some accessible to EL0 via kernel ABI */
53
+ */
54
+SMMUPciBus *smmu_find_smmu_pcibus(SMMUState *s, uint8_t bus_num)
55
+{
56
+ SMMUPciBus *smmu_pci_bus = s->smmu_pcibus_by_bus_num[bus_num];
57
+
58
+ if (!smmu_pci_bus) {
59
+ GHashTableIter iter;
60
+
61
+ g_hash_table_iter_init(&iter, s->smmu_pcibus_by_busptr);
62
+ while (g_hash_table_iter_next(&iter, NULL, (void **)&smmu_pci_bus)) {
63
+ if (pci_bus_num(smmu_pci_bus->bus) == bus_num) {
64
+ s->smmu_pcibus_by_bus_num[bus_num] = smmu_pci_bus;
65
+ return smmu_pci_bus;
66
+ }
67
+ }
68
+ }
69
+ return smmu_pci_bus;
70
+}
71
+
72
+static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
73
+{
74
+ SMMUState *s = opaque;
75
+ SMMUPciBus *sbus = g_hash_table_lookup(s->smmu_pcibus_by_busptr, bus);
76
+ SMMUDevice *sdev;
77
+
78
+ if (!sbus) {
79
+ sbus = g_malloc0(sizeof(SMMUPciBus) +
80
+ sizeof(SMMUDevice *) * SMMU_PCI_DEVFN_MAX);
81
+ sbus->bus = bus;
82
+ g_hash_table_insert(s->smmu_pcibus_by_busptr, bus, sbus);
83
+ }
84
+
85
+ sdev = sbus->pbdev[devfn];
86
+ if (!sdev) {
87
+ char *name = g_strdup_printf("%s-%d-%d",
88
+ s->mrtypename,
89
+ pci_bus_num(bus), devfn);
90
+ sdev = sbus->pbdev[devfn] = g_new0(SMMUDevice, 1);
91
+
92
+ sdev->smmu = s;
93
+ sdev->bus = bus;
94
+ sdev->devfn = devfn;
95
+
96
+ memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
97
+ s->mrtypename,
98
+ OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
99
+ address_space_init(&sdev->as,
100
+ MEMORY_REGION(&sdev->iommu), name);
101
+ trace_smmu_add_mr(name);
102
+ g_free(name);
103
+ }
104
+
105
+ return &sdev->as;
106
+}
107
+
108
static void smmu_base_realize(DeviceState *dev, Error **errp)
109
{
110
+ SMMUState *s = ARM_SMMU(dev);
111
SMMUBaseClass *sbc = ARM_SMMU_GET_CLASS(dev);
112
Error *local_err = NULL;
113
114
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
115
error_propagate(errp, local_err);
116
return;
117
}
118
+
119
+ s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
120
+
121
+ if (s->primary_bus) {
122
+ pci_setup_iommu(s->primary_bus, smmu_find_add_as, s);
123
+ } else {
124
+ error_setg(errp, "SMMU is not attached to any PCI bus!");
125
+ }
126
}
127
128
static void smmu_base_reset(DeviceState *dev)
129
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/arm/trace-events
132
+++ b/hw/arm/trace-events
133
@@ -XXX,XX +XXX,XX @@
134
135
# hw/arm/virt-acpi-build.c
136
virt_acpi_setup(void) "No fw cfg or ACPI disabled. Bailing out."
137
+
138
+# hw/arm/smmu-common.c
139
+smmu_add_mr(const char *name) "%s"
140
\ No newline at end of file
141
--
96
--
142
2.17.0
97
2.25.1
143
144
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The patch introduces the smmu base device and class for the ARM
3
Give this enum a name and use in ARMCPRegInfo,
4
smmu. Devices for specific versions will be derived from this
4
add_cpreg_to_hashtable and define_one_arm_cp_reg_with_opaque.
5
base device.
6
5
7
We also introduce some important datatypes.
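For readers less familiar with QOM, a concrete SMMU model derives from this abstract base by naming TYPE_ARM_SMMU as its parent type. A minimal sketch of such a derived type follows; the type name and state struct here are hypothetical, only the base-class symbols come from this patch:

typedef struct MySMMUState {
    SMMUState smmu_state;          /* embed the abstract parent first */
    /* version-specific registers, queues, ... */
} MySMMUState;

static const TypeInfo my_smmu_info = {
    .name          = "my-smmu",            /* hypothetical type name */
    .parent        = TYPE_ARM_SMMU,        /* abstract base added above */
    .instance_size = sizeof(MySMMUState),
    .class_size    = sizeof(SMMUBaseClass),
};

static void my_smmu_register_types(void)
{
    type_register_static(&my_smmu_info);
}

type_init(my_smmu_register_types)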
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
10
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1524665762-31355-2-git-send-email-eric.auger@redhat.com
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-9-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
hw/arm/Makefile.objs | 1 +
12
target/arm/cpregs.h | 6 +++---
16
include/hw/arm/smmu-common.h | 123 ++++++++++++++++++++++++++++
13
target/arm/helper.c | 6 ++++--
17
hw/arm/smmu-common.c | 81 ++++++++++++++++++
14
2 files changed, 7 insertions(+), 5 deletions(-)
18
default-configs/aarch64-softmmu.mak | 1 +
19
4 files changed, 206 insertions(+)
20
create mode 100644 include/hw/arm/smmu-common.h
21
create mode 100644 hw/arm/smmu-common.c
22
15
23
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/arm/Makefile.objs
18
--- a/target/arm/cpregs.h
26
+++ b/hw/arm/Makefile.objs
19
+++ b/target/arm/cpregs.h
27
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MPS2) += mps2-tz.o
20
@@ -XXX,XX +XXX,XX @@ enum {
28
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
21
* Note that we rely on the values of these enums as we iterate through
29
obj-$(CONFIG_IOTKIT) += iotkit.o
22
* the various states in some places.
30
obj-$(CONFIG_FSL_IMX7) += fsl-imx7.o mcimx7d-sabre.o
23
*/
31
+obj-$(CONFIG_ARM_SMMUV3) += smmu-common.o
24
-enum {
32
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
25
+typedef enum {
33
new file mode 100644
26
ARM_CP_STATE_AA32 = 0,
34
index XXXXXXX..XXXXXXX
27
ARM_CP_STATE_AA64 = 1,
35
--- /dev/null
28
ARM_CP_STATE_BOTH = 2,
36
+++ b/include/hw/arm/smmu-common.h
29
-};
37
@@ -XXX,XX +XXX,XX @@
30
+} CPState;
38
+/*
31
39
+ * ARM SMMU Support
32
/*
40
+ *
33
* ARM CP register secure state flags. These flags identify security state
41
+ * Copyright (C) 2015-2016 Broadcom Corporation
34
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
42
+ * Copyright (c) 2017 Red Hat, Inc.
35
uint8_t opc1;
43
+ * Written by Prem Mallappa, Eric Auger
36
uint8_t opc2;
44
+ *
37
/* Execution state in which this register is visible: ARM_CP_STATE_* */
45
+ * This program is free software; you can redistribute it and/or modify
38
- int state;
46
+ * it under the terms of the GNU General Public License version 2 as
39
+ CPState state;
47
+ * published by the Free Software Foundation.
40
/* Register type: ARM_CP_* bits/values */
48
+ *
41
int type;
49
+ * This program is distributed in the hope that it will be useful,
42
/* Access rights: PL*_[RW] */
50
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
43
diff --git a/target/arm/helper.c b/target/arm/helper.c
51
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
44
index XXXXXXX..XXXXXXX 100644
52
+ * GNU General Public License for more details.
45
--- a/target/arm/helper.c
53
+ *
46
+++ b/target/arm/helper.c
54
+ */
47
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
48
}
49
50
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
51
- void *opaque, int state, int secstate,
52
+ void *opaque, CPState state, int secstate,
53
int crm, int opc1, int opc2,
54
const char *name)
55
{
56
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
57
* bits; the ARM_CP_64BIT* flag applies only to the AArch32 view of
58
* the register, if any.
59
*/
60
- int crm, opc1, opc2, state;
61
+ int crm, opc1, opc2;
62
int crmmin = (r->crm == CP_ANY) ? 0 : r->crm;
63
int crmmax = (r->crm == CP_ANY) ? 15 : r->crm;
64
int opc1min = (r->opc1 == CP_ANY) ? 0 : r->opc1;
65
int opc1max = (r->opc1 == CP_ANY) ? 7 : r->opc1;
66
int opc2min = (r->opc2 == CP_ANY) ? 0 : r->opc2;
67
int opc2max = (r->opc2 == CP_ANY) ? 7 : r->opc2;
68
+ CPState state;
55
+
69
+
56
+#ifndef HW_ARM_SMMU_COMMON_H
70
/* 64 bit registers have only CRm and Opc1 fields */
57
+#define HW_ARM_SMMU_COMMON_H
71
assert(!((r->type & ARM_CP_64BIT) && (r->opc2 || r->crn)));
58
+
72
/* op0 only exists in the AArch64 encodings */
59
+#include "hw/sysbus.h"
60
+#include "hw/pci/pci.h"
61
+
62
+#define SMMU_PCI_BUS_MAX 256
63
+#define SMMU_PCI_DEVFN_MAX 256
64
+
65
+#define SMMU_MAX_VA_BITS 48
66
+
67
+/*
68
+ * Page table walk error types
69
+ */
70
+typedef enum {
71
+ SMMU_PTW_ERR_NONE,
72
+ SMMU_PTW_ERR_WALK_EABT, /* Translation walk external abort */
73
+ SMMU_PTW_ERR_TRANSLATION, /* Translation fault */
74
+ SMMU_PTW_ERR_ADDR_SIZE, /* Address Size fault */
75
+ SMMU_PTW_ERR_ACCESS, /* Access fault */
76
+ SMMU_PTW_ERR_PERMISSION, /* Permission fault */
77
+} SMMUPTWEventType;
78
+
79
+typedef struct SMMUPTWEventInfo {
80
+ SMMUPTWEventType type;
81
+ dma_addr_t addr; /* fetched address that induced an abort, if any */
82
+} SMMUPTWEventInfo;
83
+
84
+typedef struct SMMUTransTableInfo {
85
+ bool disabled; /* is the translation table disabled? */
86
+ uint64_t ttb; /* TT base address */
87
+ uint8_t tsz; /* input range, ie. 2^(64 -tsz)*/
88
+ uint8_t granule_sz; /* granule page shift */
89
+} SMMUTransTableInfo;
90
+
91
+/*
92
+ * Generic structure populated by derived SMMU devices
93
+ * after decoding the configuration information and used as
94
+ * input to the page table walk
95
+ */
96
+typedef struct SMMUTransCfg {
97
+ int stage; /* translation stage */
98
+ bool aa64; /* arch64 or aarch32 translation table */
99
+ bool disabled; /* smmu is disabled */
100
+ bool bypassed; /* translation is bypassed */
101
+ bool aborted; /* translation is aborted */
102
+ uint64_t ttb; /* TT base address */
103
+ uint8_t oas; /* output address width */
104
+ uint8_t tbi; /* Top Byte Ignore */
105
+ uint16_t asid;
106
+ SMMUTransTableInfo tt[2];
107
+} SMMUTransCfg;
108
+
109
+typedef struct SMMUDevice {
110
+ void *smmu;
111
+ PCIBus *bus;
112
+ int devfn;
113
+ IOMMUMemoryRegion iommu;
114
+ AddressSpace as;
115
+} SMMUDevice;
116
+
117
+typedef struct SMMUNotifierNode {
118
+ SMMUDevice *sdev;
119
+ QLIST_ENTRY(SMMUNotifierNode) next;
120
+} SMMUNotifierNode;
121
+
122
+typedef struct SMMUPciBus {
123
+ PCIBus *bus;
124
+ SMMUDevice *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
125
+} SMMUPciBus;
126
+
127
+typedef struct SMMUState {
128
+ /* <private> */
129
+ SysBusDevice dev;
130
+ const char *mrtypename;
131
+ MemoryRegion iomem;
132
+
133
+ GHashTable *smmu_pcibus_by_busptr;
134
+ GHashTable *configs; /* cache for configuration data */
135
+ GHashTable *iotlb;
136
+ SMMUPciBus *smmu_pcibus_by_bus_num[SMMU_PCI_BUS_MAX];
137
+ PCIBus *pci_bus;
138
+ QLIST_HEAD(, SMMUNotifierNode) notifiers_list;
139
+ uint8_t bus_num;
140
+ PCIBus *primary_bus;
141
+} SMMUState;
142
+
143
+typedef struct {
144
+ /* <private> */
145
+ SysBusDeviceClass parent_class;
146
+
147
+ /*< public >*/
148
+
149
+ DeviceRealize parent_realize;
150
+
151
+} SMMUBaseClass;
152
+
153
+#define TYPE_ARM_SMMU "arm-smmu"
154
+#define ARM_SMMU(obj) OBJECT_CHECK(SMMUState, (obj), TYPE_ARM_SMMU)
155
+#define ARM_SMMU_CLASS(klass) \
156
+ OBJECT_CLASS_CHECK(SMMUBaseClass, (klass), TYPE_ARM_SMMU)
157
+#define ARM_SMMU_GET_CLASS(obj) \
158
+ OBJECT_GET_CLASS(SMMUBaseClass, (obj), TYPE_ARM_SMMU)
159
+
160
+#endif /* HW_ARM_SMMU_COMMON */
161
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
162
new file mode 100644
163
index XXXXXXX..XXXXXXX
164
--- /dev/null
165
+++ b/hw/arm/smmu-common.c
166
@@ -XXX,XX +XXX,XX @@
167
+/*
168
+ * Copyright (C) 2014-2016 Broadcom Corporation
169
+ * Copyright (c) 2017 Red Hat, Inc.
170
+ * Written by Prem Mallappa, Eric Auger
171
+ *
172
+ * This program is free software; you can redistribute it and/or modify
173
+ * it under the terms of the GNU General Public License version 2 as
174
+ * published by the Free Software Foundation.
175
+ *
176
+ * This program is distributed in the hope that it will be useful,
177
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
178
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
179
+ * GNU General Public License for more details.
180
+ *
181
+ * Author: Prem Mallappa <pmallapp@broadcom.com>
182
+ *
183
+ */
184
+
185
+#include "qemu/osdep.h"
186
+#include "sysemu/sysemu.h"
187
+#include "exec/address-spaces.h"
188
+#include "trace.h"
189
+#include "exec/target_page.h"
190
+#include "qom/cpu.h"
191
+#include "hw/qdev-properties.h"
192
+#include "qapi/error.h"
193
+
194
+#include "qemu/error-report.h"
195
+#include "hw/arm/smmu-common.h"
196
+
197
+static void smmu_base_realize(DeviceState *dev, Error **errp)
198
+{
199
+ SMMUBaseClass *sbc = ARM_SMMU_GET_CLASS(dev);
200
+ Error *local_err = NULL;
201
+
202
+ sbc->parent_realize(dev, &local_err);
203
+ if (local_err) {
204
+ error_propagate(errp, local_err);
205
+ return;
206
+ }
207
+}
208
+
209
+static void smmu_base_reset(DeviceState *dev)
210
+{
211
+ /* will be filled later on */
212
+}
213
+
214
+static Property smmu_dev_properties[] = {
215
+ DEFINE_PROP_UINT8("bus_num", SMMUState, bus_num, 0),
216
+ DEFINE_PROP_LINK("primary-bus", SMMUState, primary_bus, "PCI", PCIBus *),
217
+ DEFINE_PROP_END_OF_LIST(),
218
+};
219
+
220
+static void smmu_base_class_init(ObjectClass *klass, void *data)
221
+{
222
+ DeviceClass *dc = DEVICE_CLASS(klass);
223
+ SMMUBaseClass *sbc = ARM_SMMU_CLASS(klass);
224
+
225
+ dc->props = smmu_dev_properties;
226
+ device_class_set_parent_realize(dc, smmu_base_realize,
227
+ &sbc->parent_realize);
228
+ dc->reset = smmu_base_reset;
229
+}
230
+
231
+static const TypeInfo smmu_base_info = {
232
+ .name = TYPE_ARM_SMMU,
233
+ .parent = TYPE_SYS_BUS_DEVICE,
234
+ .instance_size = sizeof(SMMUState),
235
+ .class_data = NULL,
236
+ .class_size = sizeof(SMMUBaseClass),
237
+ .class_init = smmu_base_class_init,
238
+ .abstract = true,
239
+};
240
+
241
+static void smmu_base_register_types(void)
242
+{
243
+ type_register_static(&smmu_base_info);
244
+}
245
+
246
+type_init(smmu_base_register_types)
247
+
248
diff --git a/default-configs/aarch64-softmmu.mak b/default-configs/aarch64-softmmu.mak
249
index XXXXXXX..XXXXXXX 100644
250
--- a/default-configs/aarch64-softmmu.mak
251
+++ b/default-configs/aarch64-softmmu.mak
252
@@ -XXX,XX +XXX,XX @@ CONFIG_DDC=y
253
CONFIG_DPCD=y
254
CONFIG_XLNX_ZYNQMP=y
255
CONFIG_XLNX_ZYNQMP_ARM=y
256
+CONFIG_ARM_SMMUV3=y
257
--
73
--
258
2.17.0
74
2.25.1
259
75
260
76
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This patch implements the IOMMU Memory Region translate()
3
Give this enum a name and use in ARMCPRegInfo and add_cpreg_to_hashtable.
4
callback. Most of the code relates to the translation
4
Add the enumerator ARM_CP_SECSTATE_BOTH to clarify how 0
5
configuration decoding and checking (STE, CD).
5
is handled in define_one_arm_cp_reg_with_opaque.
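To make the translate() callback below easier to follow, this is roughly the minimal shape such a callback has before any STE/CD decoding is layered on top: an identity mapping that grants the requested permission. The function name is hypothetical; the IOMMUTLBEntry fields are the ones used by the patch:

static IOMMUTLBEntry my_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
                                        IOMMUAccessFlags flag)
{
    /* Identity-map the access; the real code first decodes the STE/CD
     * configuration and then walks the page tables to fill these fields. */
    IOMMUTLBEntry entry = {
        .target_as = &address_space_memory,
        .iova = addr,
        .translated_addr = addr,
        .addr_mask = ~(hwaddr)0,
        .perm = flag,
    };
    return entry;
}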
6
6
7
Signed-off-by: Eric Auger <eric.auger@redhat.com>
8
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
9
Message-id: 1524665762-31355-10-git-send-email-eric.auger@redhat.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-10-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/arm/smmuv3-internal.h | 160 +++++++++++++++++
12
target/arm/cpregs.h | 7 ++++---
14
hw/arm/smmuv3.c | 358 +++++++++++++++++++++++++++++++++++++++
13
target/arm/helper.c | 7 +++++--
15
hw/arm/trace-events | 9 +
14
2 files changed, 9 insertions(+), 5 deletions(-)
16
3 files changed, 527 insertions(+)
17
15
18
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/smmuv3-internal.h
18
--- a/target/arm/cpregs.h
21
+++ b/hw/arm/smmuv3-internal.h
19
+++ b/target/arm/cpregs.h
22
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUEventInfo {
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
23
21
* registered entry will only have one to identify whether the entry is secure
24
void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
22
* or non-secure.
25
23
*/
26
+/* Configuration Data */
24
-enum {
27
+
25
+typedef enum {
28
+/* STE Level 1 Descriptor */
26
+ ARM_CP_SECSTATE_BOTH = 0, /* define one cpreg for each secstate */
29
+typedef struct STEDesc {
27
ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
30
+ uint32_t word[2];
28
ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
31
+} STEDesc;
29
-};
32
+
30
+} CPSecureState;
33
+/* CD Level 1 Descriptor */
31
34
+typedef struct CDDesc {
32
/*
35
+ uint32_t word[2];
33
* Access rights:
36
+} CDDesc;
34
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
37
+
35
/* Access rights: PL*_[RW] */
38
+/* Stream Table Entry(STE) */
36
CPAccessRights access;
39
+typedef struct STE {
37
/* Security state: ARM_CP_SECSTATE_* bits/values */
40
+ uint32_t word[16];
38
- int secure;
41
+} STE;
39
+ CPSecureState secure;
42
+
40
/*
43
+/* Context Descriptor(CD) */
41
* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
44
+typedef struct CD {
42
* this register was defined: can be used to hand data through to the
45
+ uint32_t word[16];
43
diff --git a/target/arm/helper.c b/target/arm/helper.c
46
+} CD;
47
+
48
+/* STE fields */
49
+
50
+#define STE_VALID(x) extract32((x)->word[0], 0, 1)
51
+
52
+#define STE_CONFIG(x) extract32((x)->word[0], 1, 3)
53
+#define STE_CFG_S1_ENABLED(config) (config & 0x1)
54
+#define STE_CFG_S2_ENABLED(config) (config & 0x2)
55
+#define STE_CFG_ABORT(config) (!(config & 0x4))
56
+#define STE_CFG_BYPASS(config) (config == 0x4)
57
+
58
+#define STE_S1FMT(x) extract32((x)->word[0], 4 , 2)
59
+#define STE_S1CDMAX(x) extract32((x)->word[1], 27, 5)
60
+#define STE_S1STALLD(x) extract32((x)->word[2], 27, 1)
61
+#define STE_EATS(x) extract32((x)->word[2], 28, 2)
62
+#define STE_STRW(x) extract32((x)->word[2], 30, 2)
63
+#define STE_S2VMID(x) extract32((x)->word[4], 0 , 16)
64
+#define STE_S2T0SZ(x) extract32((x)->word[5], 0 , 6)
65
+#define STE_S2SL0(x) extract32((x)->word[5], 6 , 2)
66
+#define STE_S2TG(x) extract32((x)->word[5], 14, 2)
67
+#define STE_S2PS(x) extract32((x)->word[5], 16, 3)
68
+#define STE_S2AA64(x) extract32((x)->word[5], 19, 1)
69
+#define STE_S2HD(x) extract32((x)->word[5], 24, 1)
70
+#define STE_S2HA(x) extract32((x)->word[5], 25, 1)
71
+#define STE_S2S(x) extract32((x)->word[5], 26, 1)
72
+#define STE_CTXPTR(x) \
73
+ ({ \
74
+ unsigned long addr; \
75
+ addr = (uint64_t)extract32((x)->word[1], 0, 16) << 32; \
76
+ addr |= (uint64_t)((x)->word[0] & 0xffffffc0); \
77
+ addr; \
78
+ })
79
+
80
+#define STE_S2TTB(x) \
81
+ ({ \
82
+ unsigned long addr; \
83
+ addr = (uint64_t)extract32((x)->word[7], 0, 16) << 32; \
84
+ addr |= (uint64_t)((x)->word[6] & 0xfffffff0); \
85
+ addr; \
86
+ })
87
+
88
+static inline int oas2bits(int oas_field)
89
+{
90
+ switch (oas_field) {
91
+ case 0:
92
+ return 32;
93
+ case 1:
94
+ return 36;
95
+ case 2:
96
+ return 40;
97
+ case 3:
98
+ return 42;
99
+ case 4:
100
+ return 44;
101
+ case 5:
102
+ return 48;
103
+ }
104
+ return -1;
105
+}
106
+
107
+static inline int pa_range(STE *ste)
108
+{
109
+ int oas_field = MIN(STE_S2PS(ste), SMMU_IDR5_OAS);
110
+
111
+ if (!STE_S2AA64(ste)) {
112
+ return 40;
113
+ }
114
+
115
+ return oas2bits(oas_field);
116
+}
117
+
118
+#define MAX_PA(ste) ((1 << pa_range(ste)) - 1)
119
+
120
+/* CD fields */
121
+
122
+#define CD_VALID(x) extract32((x)->word[0], 30, 1)
123
+#define CD_ASID(x) extract32((x)->word[1], 16, 16)
124
+#define CD_TTB(x, sel) \
125
+ ({ \
126
+ uint64_t hi, lo; \
127
+ hi = extract32((x)->word[(sel) * 2 + 3], 0, 19); \
128
+ hi <<= 32; \
129
+ lo = (x)->word[(sel) * 2 + 2] & ~0xfULL; \
130
+ hi | lo; \
131
+ })
132
+
133
+#define CD_TSZ(x, sel) extract32((x)->word[0], (16 * (sel)) + 0, 6)
134
+#define CD_TG(x, sel) extract32((x)->word[0], (16 * (sel)) + 6, 2)
135
+#define CD_EPD(x, sel) extract32((x)->word[0], (16 * (sel)) + 14, 1)
136
+#define CD_ENDI(x) extract32((x)->word[0], 15, 1)
137
+#define CD_IPS(x) extract32((x)->word[1], 0 , 3)
138
+#define CD_TBI(x) extract32((x)->word[1], 6 , 2)
139
+#define CD_HD(x) extract32((x)->word[1], 10 , 1)
140
+#define CD_HA(x) extract32((x)->word[1], 11 , 1)
141
+#define CD_S(x) extract32((x)->word[1], 12, 1)
142
+#define CD_R(x) extract32((x)->word[1], 13, 1)
143
+#define CD_A(x) extract32((x)->word[1], 14, 1)
144
+#define CD_AARCH64(x) extract32((x)->word[1], 9 , 1)
145
+
146
+#define CDM_VALID(x) ((x)->word[0] & 0x1)
147
+
148
+static inline int is_cd_valid(SMMUv3State *s, STE *ste, CD *cd)
149
+{
150
+ return CD_VALID(cd);
151
+}
152
+
153
+/**
154
+ * tg2granule - Decodes the CD translation granule size field according
155
+ * to the ttbr in use
156
+ * @bits: TG0/1 fields
157
+ * @ttbr: ttbr index in use
158
+ */
159
+static inline int tg2granule(int bits, int ttbr)
160
+{
161
+ switch (bits) {
162
+ case 0:
163
+ return ttbr ? 0 : 12;
164
+ case 1:
165
+ return ttbr ? 14 : 16;
166
+ case 2:
167
+ return ttbr ? 12 : 14;
168
+ case 3:
169
+ return ttbr ? 16 : 0;
170
+ default:
171
+ return 0;
172
+ }
173
+}
174
+
175
+static inline uint64_t l1std_l2ptr(STEDesc *desc)
176
+{
177
+ uint64_t hi, lo;
178
+
179
+ hi = desc->word[1];
180
+ lo = desc->word[0] & ~0x1fULL;
181
+ return hi << 32 | lo;
182
+}
183
+
184
+#define L1STD_SPAN(stm) (extract32((stm)->word[0], 0, 4))
185
+
186
#endif
187
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
188
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
189
--- a/hw/arm/smmuv3.c
45
--- a/target/arm/helper.c
190
+++ b/hw/arm/smmuv3.c
46
+++ b/target/arm/helper.c
191
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
47
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
192
s->sid_split = 0;
193
}
48
}
194
49
195
+static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
50
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
196
+ SMMUEventInfo *event)
51
- void *opaque, CPState state, int secstate,
197
+{
52
+ void *opaque, CPState state,
198
+ int ret;
53
+ CPSecureState secstate,
199
+
54
int crm, int opc1, int opc2,
200
+ trace_smmuv3_get_ste(addr);
55
const char *name)
201
+ /* TODO: guarantee 64-bit single-copy atomicity */
202
+ ret = dma_memory_read(&address_space_memory, addr,
203
+ (void *)buf, sizeof(*buf));
204
+ if (ret != MEMTX_OK) {
205
+ qemu_log_mask(LOG_GUEST_ERROR,
206
+ "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
207
+ event->type = SMMU_EVT_F_STE_FETCH;
208
+ event->u.f_ste_fetch.addr = addr;
209
+ return -EINVAL;
210
+ }
211
+ return 0;
212
+
213
+}
214
+
215
+/* @ssid > 0 not supported yet */
216
+static int smmu_get_cd(SMMUv3State *s, STE *ste, uint32_t ssid,
217
+ CD *buf, SMMUEventInfo *event)
218
+{
219
+ dma_addr_t addr = STE_CTXPTR(ste);
220
+ int ret;
221
+
222
+ trace_smmuv3_get_cd(addr);
223
+ /* TODO: guarantee 64-bit single-copy atomicity */
224
+ ret = dma_memory_read(&address_space_memory, addr,
225
+ (void *)buf, sizeof(*buf));
226
+ if (ret != MEMTX_OK) {
227
+ qemu_log_mask(LOG_GUEST_ERROR,
228
+ "Cannot fetch pte at address=0x%"PRIx64"\n", addr);
229
+ event->type = SMMU_EVT_F_CD_FETCH;
230
+ event->u.f_ste_fetch.addr = addr;
231
+ return -EINVAL;
232
+ }
233
+ return 0;
234
+}
235
+
236
+/* Returns <0 if the caller has no need to continue the translation */
237
+static int decode_ste(SMMUv3State *s, SMMUTransCfg *cfg,
238
+ STE *ste, SMMUEventInfo *event)
239
+{
240
+ uint32_t config;
241
+ int ret = -EINVAL;
242
+
243
+ if (!STE_VALID(ste)) {
244
+ goto bad_ste;
245
+ }
246
+
247
+ config = STE_CONFIG(ste);
248
+
249
+ if (STE_CFG_ABORT(config)) {
250
+ cfg->aborted = true; /* abort but don't record any event */
251
+ return ret;
252
+ }
253
+
254
+ if (STE_CFG_BYPASS(config)) {
255
+ cfg->bypassed = true;
256
+ return ret;
257
+ }
258
+
259
+ if (STE_CFG_S2_ENABLED(config)) {
260
+ qemu_log_mask(LOG_UNIMP, "SMMUv3 does not support stage 2 yet\n");
261
+ goto bad_ste;
262
+ }
263
+
264
+ if (STE_S1CDMAX(ste) != 0) {
265
+ qemu_log_mask(LOG_UNIMP,
266
+ "SMMUv3 does not support multiple context descriptors yet\n");
267
+ goto bad_ste;
268
+ }
269
+
270
+ if (STE_S1STALLD(ste)) {
271
+ qemu_log_mask(LOG_UNIMP,
272
+ "SMMUv3 S1 stalling fault model not allowed yet\n");
273
+ goto bad_ste;
274
+ }
275
+ return 0;
276
+
277
+bad_ste:
278
+ event->type = SMMU_EVT_C_BAD_STE;
279
+ return -EINVAL;
280
+}
281
+
282
+/**
283
+ * smmu_find_ste - Return the stream table entry associated
284
+ * to the sid
285
+ *
286
+ * @s: smmuv3 handle
287
+ * @sid: stream ID
288
+ * @ste: returned stream table entry
289
+ * @event: handle to an event info
290
+ *
291
+ * Supports linear and 2-level stream table
292
+ * Return 0 on success, -EINVAL otherwise
293
+ */
294
+static int smmu_find_ste(SMMUv3State *s, uint32_t sid, STE *ste,
295
+ SMMUEventInfo *event)
296
+{
297
+ dma_addr_t addr;
298
+ int ret;
299
+
300
+ trace_smmuv3_find_ste(sid, s->features, s->sid_split);
301
+ /* Check SID range */
302
+ if (sid > (1 << SMMU_IDR1_SIDSIZE)) {
303
+ event->type = SMMU_EVT_C_BAD_STREAMID;
304
+ return -EINVAL;
305
+ }
306
+ if (s->features & SMMU_FEATURE_2LVL_STE) {
307
+ int l1_ste_offset, l2_ste_offset, max_l2_ste, span;
308
+ dma_addr_t strtab_base, l1ptr, l2ptr;
309
+ STEDesc l1std;
310
+
311
+ strtab_base = s->strtab_base & SMMU_BASE_ADDR_MASK;
312
+ l1_ste_offset = sid >> s->sid_split;
313
+ l2_ste_offset = sid & ((1 << s->sid_split) - 1);
314
+ l1ptr = (dma_addr_t)(strtab_base + l1_ste_offset * sizeof(l1std));
315
+ /* TODO: guarantee 64-bit single-copy atomicity */
316
+ ret = dma_memory_read(&address_space_memory, l1ptr,
317
+ (uint8_t *)&l1std, sizeof(l1std));
318
+ if (ret != MEMTX_OK) {
319
+ qemu_log_mask(LOG_GUEST_ERROR,
320
+ "Could not read L1PTR at 0X%"PRIx64"\n", l1ptr);
321
+ event->type = SMMU_EVT_F_STE_FETCH;
322
+ event->u.f_ste_fetch.addr = l1ptr;
323
+ return -EINVAL;
324
+ }
325
+
326
+ span = L1STD_SPAN(&l1std);
327
+
328
+ if (!span) {
329
+ /* l2ptr is not valid */
330
+ qemu_log_mask(LOG_GUEST_ERROR,
331
+ "invalid sid=%d (L1STD span=0)\n", sid);
332
+ event->type = SMMU_EVT_C_BAD_STREAMID;
333
+ return -EINVAL;
334
+ }
335
+ max_l2_ste = (1 << span) - 1;
336
+ l2ptr = l1std_l2ptr(&l1std);
337
+ trace_smmuv3_find_ste_2lvl(s->strtab_base, l1ptr, l1_ste_offset,
338
+ l2ptr, l2_ste_offset, max_l2_ste);
339
+ if (l2_ste_offset > max_l2_ste) {
340
+ qemu_log_mask(LOG_GUEST_ERROR,
341
+ "l2_ste_offset=%d > max_l2_ste=%d\n",
342
+ l2_ste_offset, max_l2_ste);
343
+ event->type = SMMU_EVT_C_BAD_STE;
344
+ return -EINVAL;
345
+ }
346
+ addr = l2ptr + l2_ste_offset * sizeof(*ste);
347
+ } else {
348
+ addr = s->strtab_base + sid * sizeof(*ste);
349
+ }
350
+
351
+ if (smmu_get_ste(s, addr, ste, event)) {
352
+ return -EINVAL;
353
+ }
354
+
355
+ return 0;
356
+}
357
+
358
+static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
359
+{
360
+ int ret = -EINVAL;
361
+ int i;
362
+
363
+ if (!CD_VALID(cd) || !CD_AARCH64(cd)) {
364
+ goto bad_cd;
365
+ }
366
+ if (!CD_A(cd)) {
367
+ goto bad_cd; /* SMMU_IDR0.TERM_MODEL == 1 */
368
+ }
369
+ if (CD_S(cd)) {
370
+ goto bad_cd; /* !STE_SECURE && SMMU_IDR0.STALL_MODEL == 1 */
371
+ }
372
+ if (CD_HA(cd) || CD_HD(cd)) {
373
+ goto bad_cd; /* HTTU = 0 */
374
+ }
375
+
376
+ /* we support only those at the moment */
377
+ cfg->aa64 = true;
378
+ cfg->stage = 1;
379
+
380
+ cfg->oas = oas2bits(CD_IPS(cd));
381
+ cfg->oas = MIN(oas2bits(SMMU_IDR5_OAS), cfg->oas);
382
+ cfg->tbi = CD_TBI(cd);
383
+ cfg->asid = CD_ASID(cd);
384
+
385
+ trace_smmuv3_decode_cd(cfg->oas);
386
+
387
+ /* decode data dependent on TT */
388
+ for (i = 0; i <= 1; i++) {
389
+ int tg, tsz;
390
+ SMMUTransTableInfo *tt = &cfg->tt[i];
391
+
392
+ cfg->tt[i].disabled = CD_EPD(cd, i);
393
+ if (cfg->tt[i].disabled) {
394
+ continue;
395
+ }
396
+
397
+ tsz = CD_TSZ(cd, i);
398
+ if (tsz < 16 || tsz > 39) {
399
+ goto bad_cd;
400
+ }
401
+
402
+ tg = CD_TG(cd, i);
403
+ tt->granule_sz = tg2granule(tg, i);
404
+ if ((tt->granule_sz != 12 && tt->granule_sz != 16) || CD_ENDI(cd)) {
405
+ goto bad_cd;
406
+ }
407
+
408
+ tt->tsz = tsz;
409
+ tt->ttb = CD_TTB(cd, i);
410
+ if (tt->ttb & ~(MAKE_64BIT_MASK(0, cfg->oas))) {
411
+ goto bad_cd;
412
+ }
413
+ trace_smmuv3_decode_cd_tt(i, tt->tsz, tt->ttb, tt->granule_sz);
414
+ }
415
+
416
+ event->record_trans_faults = CD_R(cd);
417
+
418
+ return 0;
419
+
420
+bad_cd:
421
+ event->type = SMMU_EVT_C_BAD_CD;
422
+ return ret;
423
+}
424
+
425
+/**
426
+ * smmuv3_decode_config - Prepare the translation configuration
427
+ * for the @mr iommu region
428
+ * @mr: iommu memory region the translation config must be prepared for
429
+ * @cfg: output translation configuration which is populated through
430
+ * the different configuration decoding steps
431
+ * @event: must be zero'ed by the caller
432
+ *
433
+ * return < 0 if the translation needs to be aborted (@event is filled
434
+ * accordingly). Return 0 otherwise.
435
+ */
436
+static int smmuv3_decode_config(IOMMUMemoryRegion *mr, SMMUTransCfg *cfg,
437
+ SMMUEventInfo *event)
438
+{
439
+ SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
440
+ uint32_t sid = smmu_get_sid(sdev);
441
+ SMMUv3State *s = sdev->smmu;
442
+ int ret = -EINVAL;
443
+ STE ste;
444
+ CD cd;
445
+
446
+ if (smmu_find_ste(s, sid, &ste, event)) {
447
+ return ret;
448
+ }
449
+
450
+ if (decode_ste(s, cfg, &ste, event)) {
451
+ return ret;
452
+ }
453
+
454
+ if (smmu_get_cd(s, &ste, 0 /* ssid */, &cd, event)) {
455
+ return ret;
456
+ }
457
+
458
+ return decode_cd(cfg, &cd, event);
459
+}
460
+
461
+static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
462
+ IOMMUAccessFlags flag)
463
+{
464
+ SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
465
+ SMMUv3State *s = sdev->smmu;
466
+ uint32_t sid = smmu_get_sid(sdev);
467
+ SMMUEventInfo event = {.type = SMMU_EVT_OK, .sid = sid};
468
+ SMMUPTWEventInfo ptw_info = {};
469
+ SMMUTransCfg cfg = {};
470
+ IOMMUTLBEntry entry = {
471
+ .target_as = &address_space_memory,
472
+ .iova = addr,
473
+ .translated_addr = addr,
474
+ .addr_mask = ~(hwaddr)0,
475
+ .perm = IOMMU_NONE,
476
+ };
477
+ int ret = 0;
478
+
479
+ if (!smmu_enabled(s)) {
480
+ goto out;
481
+ }
482
+
483
+ ret = smmuv3_decode_config(mr, &cfg, &event);
484
+ if (ret) {
485
+ goto out;
486
+ }
487
+
488
+ if (cfg.aborted) {
489
+ goto out;
490
+ }
491
+
492
+ ret = smmu_ptw(&cfg, addr, flag, &entry, &ptw_info);
493
+ if (ret) {
494
+ switch (ptw_info.type) {
495
+ case SMMU_PTW_ERR_WALK_EABT:
496
+ event.type = SMMU_EVT_F_WALK_EABT;
497
+ event.u.f_walk_eabt.addr = addr;
498
+ event.u.f_walk_eabt.rnw = flag & 0x1;
499
+ event.u.f_walk_eabt.class = 0x1;
500
+ event.u.f_walk_eabt.addr2 = ptw_info.addr;
501
+ break;
502
+ case SMMU_PTW_ERR_TRANSLATION:
503
+ if (event.record_trans_faults) {
504
+ event.type = SMMU_EVT_F_TRANSLATION;
505
+ event.u.f_translation.addr = addr;
506
+ event.u.f_translation.rnw = flag & 0x1;
507
+ }
508
+ break;
509
+ case SMMU_PTW_ERR_ADDR_SIZE:
510
+ if (event.record_trans_faults) {
511
+ event.type = SMMU_EVT_F_ADDR_SIZE;
512
+ event.u.f_addr_size.addr = addr;
513
+ event.u.f_addr_size.rnw = flag & 0x1;
514
+ }
515
+ break;
516
+ case SMMU_PTW_ERR_ACCESS:
517
+ if (event.record_trans_faults) {
518
+ event.type = SMMU_EVT_F_ACCESS;
519
+ event.u.f_access.addr = addr;
520
+ event.u.f_access.rnw = flag & 0x1;
521
+ }
522
+ break;
523
+ case SMMU_PTW_ERR_PERMISSION:
524
+ if (event.record_trans_faults) {
525
+ event.type = SMMU_EVT_F_PERMISSION;
526
+ event.u.f_permission.addr = addr;
527
+ event.u.f_permission.rnw = flag & 0x1;
528
+ }
529
+ break;
530
+ default:
531
+ g_assert_not_reached();
532
+ }
533
+ }
534
+out:
535
+ if (ret) {
536
+ qemu_log_mask(LOG_GUEST_ERROR,
537
+ "%s translation failed for iova=0x%"PRIx64"(%d)\n",
538
+ mr->parent_obj.name, addr, ret);
539
+ entry.perm = IOMMU_NONE;
540
+ smmuv3_record_event(s, &event);
541
+ } else if (!cfg.aborted) {
542
+ entry.perm = flag;
543
+ trace_smmuv3_translate(mr->parent_obj.name, sid, addr,
544
+ entry.translated_addr, entry.perm);
545
+ }
546
+
547
+ return entry;
548
+}
549
+
550
static int smmuv3_cmdq_consume(SMMUv3State *s)
551
{
56
{
552
SMMUCmdError cmd_error = SMMU_CERROR_NONE;
57
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
553
@@ -XXX,XX +XXX,XX @@ static void smmuv3_class_init(ObjectClass *klass, void *data)
58
r->secure, crm, opc1, opc2,
554
static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
59
r->name);
555
void *data)
60
break;
556
{
61
- default:
557
+ IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
62
+ case ARM_CP_SECSTATE_BOTH:
558
+
63
name = g_strdup_printf("%s_S", r->name);
559
+ imrc->translate = smmuv3_translate;
64
add_cpreg_to_hashtable(cpu, r, opaque, state,
560
}
65
ARM_CP_SECSTATE_S,
561
66
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
562
static const TypeInfo smmuv3_type_info = {
67
ARM_CP_SECSTATE_NS,
563
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
68
crm, opc1, opc2, r->name);
564
index XXXXXXX..XXXXXXX 100644
69
break;
565
--- a/hw/arm/trace-events
70
+ default:
566
+++ b/hw/arm/trace-events
71
+ g_assert_not_reached();
567
@@ -XXX,XX +XXX,XX @@ smmuv3_write_mmio_idr(uint64_t addr, uint64_t val) "write to RO/Unimpl reg 0x%lx
72
}
568
smmuv3_write_mmio_evtq_cons_bef_clear(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "Before clearing interrupt prod:0x%x cons:0x%x prod.w:%d cons.w:%d"
73
} else {
569
smmuv3_write_mmio_evtq_cons_after_clear(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "after clearing interrupt prod:0x%x cons:0x%x prod.w:%d cons.w:%d"
74
/* AArch64 registers get mapped to non-secure instance
570
smmuv3_record_event(const char *type, uint32_t sid) "%s sid=%d"
571
+smmuv3_find_ste(uint16_t sid, uint32_t features, uint16_t sid_split) "SID:0x%x features:0x%x, sid_split:0x%x"
572
+smmuv3_find_ste_2lvl(uint64_t strtab_base, uint64_t l1ptr, int l1_ste_offset, uint64_t l2ptr, int l2_ste_offset, int max_l2_ste) "strtab_base:0x%lx l1ptr:0x%"PRIx64" l1_off:0x%x, l2ptr:0x%"PRIx64" l2_off:0x%x max_l2_ste:%d"
573
+smmuv3_get_ste(uint64_t addr) "STE addr: 0x%"PRIx64
574
+smmuv3_translate_bypass(const char *n, uint16_t sid, uint64_t addr, bool is_write) "%s sid=%d bypass iova:0x%"PRIx64" is_write=%d"
575
+smmuv3_translate_in(uint16_t sid, int pci_bus_num, uint64_t strtab_base) "SID:0x%x bus:%d strtab_base:0x%"PRIx64
576
+smmuv3_get_cd(uint64_t addr) "CD addr: 0x%"PRIx64
577
+smmuv3_translate(const char *n, uint16_t sid, uint64_t iova, uint64_t translated, int perm) "%s sid=%d iova=0x%"PRIx64" translated=0x%"PRIx64" perm=0x%x"
578
+smmuv3_decode_cd(uint32_t oas) "oas=%d"
579
+smmuv3_decode_cd_tt(int i, uint32_t tsz, uint64_t ttb, uint32_t granule_sz) "TT[%d]:tsz:%d ttb:0x%"PRIx64" granule_sz:%d"
580
--
75
--
581
2.17.0
76
2.25.1
582
583
1
From: Igor Mammedov <imammedo@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Even though nothing is currently broken (since all boards
3
The new_key field is always non-zero -- drop the if.
4
use first_cpu as boot cpu), make sure that boot_info is set
5
on all CPUs.
6
If some board would like to support a heterogeneous setup (i.e.
7
init boot_info on only a subset of CPUs) in the future, it should add
8
a reasonable API to do it, instead of assigning boot_info
9
starting from some CPU through to the end of the present CPUs
10
list.
11
4
12
Ref:
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
"Message-ID: <CAFEAcA_NMWuA8WSs3cNeY6xX1kerO_uAcN_3=fK02BEhHJW86g@mail.gmail.com>"
14
15
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 1525176522-200354-5-git-send-email-imammedo@redhat.com
7
Message-id: 20220501055028.646596-11-richard.henderson@linaro.org
8
[PMM: reinstated dropped PL3_RW mask]
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
10
---
20
hw/arm/boot.c | 2 +-
11
target/arm/helper.c | 23 +++++++++++------------
21
1 file changed, 1 insertion(+), 1 deletion(-)
12
1 file changed, 11 insertions(+), 12 deletions(-)
22
13
23
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/arm/boot.c
16
--- a/target/arm/helper.c
26
+++ b/hw/arm/boot.c
17
+++ b/target/arm/helper.c
27
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
18
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
28
}
19
29
info->is_linux = is_linux;
20
for (i = 0; i < ARRAY_SIZE(aliases); i++) {
30
21
const struct E2HAlias *a = &aliases[i];
31
- for (cs = CPU(cpu); cs; cs = CPU_NEXT(cs)) {
22
- ARMCPRegInfo *src_reg, *dst_reg;
32
+ for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
23
+ ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
33
ARM_CPU(cs)->env.boot_info = info;
24
+ uint32_t *new_key;
34
}
25
+ bool ok;
35
}
26
27
if (a->feature && !a->feature(&cpu->isar)) {
28
continue;
29
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
30
g_assert(src_reg->opaque == NULL);
31
32
/* Create alias before redirection so we dup the right data. */
33
- if (a->new_key) {
34
- ARMCPRegInfo *new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
35
- uint32_t *new_key = g_memdup(&a->new_key, sizeof(uint32_t));
36
- bool ok;
37
+ new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
38
+ new_key = g_memdup(&a->new_key, sizeof(uint32_t));
39
40
- new_reg->name = a->new_name;
41
- new_reg->type |= ARM_CP_ALIAS;
42
- /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
43
- new_reg->access &= PL2_RW | PL3_RW;
44
+ new_reg->name = a->new_name;
45
+ new_reg->type |= ARM_CP_ALIAS;
46
+ /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
47
+ new_reg->access &= PL2_RW | PL3_RW;
48
49
- ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
50
- g_assert(ok);
51
- }
52
+ ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
53
+ g_assert(ok);
54
55
src_reg->opaque = dst_reg;
56
src_reg->orig_readfn = src_reg->readfn ?: raw_read;
36
--
57
--
37
2.17.0
58
2.25.1
38
39
diff view generated by jsdifflib
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We introduce helpers to read/write into the command and event
3
Cast the uint32_t key into a gpointer directly, which
4
circular queues.
4
allows us to avoid allocating storage for each key.
5
5
6
smmuv3_write_eventq and smmuv3_cmdq_consume will become static
6
Use g_hash_table_lookup when we already have a gpointer
7
in subsequent patches.
7
(e.g. for callbacks like count_cpreg), or when using
8
get_arm_cp_reginfo would require casting away const.
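This is the standard GLib pattern for small integer keys: with g_direct_hash/g_direct_equal the pointer value itself is the key, so a 32-bit register index can be packed with GUINT_TO_POINTER and no per-key allocation or key destroy function is needed. A standalone sketch, not QEMU code:

#include <glib.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Keys are the pointer values themselves: nothing to allocate or free. */
    GHashTable *regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
                                             NULL, g_free);
    uint32_t key = 0xc101;   /* e.g. an encoded coprocessor register index */

    g_hash_table_insert(regs, GUINT_TO_POINTER(key), g_strdup("some-reg"));
    printf("%s\n", (char *)g_hash_table_lookup(regs, GUINT_TO_POINTER(key)));

    g_hash_table_destroy(regs);
    return 0;
}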
8
9
9
Invalidation commands are not yet dealt with. We do not cache
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
data that need to be invalidated. This will change with vhost
11
integration.
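The queue helpers introduced below rely on the usual "extra wrap bit" trick for power-of-two rings: PROD and CONS each hold an index plus one wrap bit above it; the queue is empty when index and wrap bit both match, and full when the indexes match but the wrap bits differ. A standalone illustration of just that test (not the SMMU code itself):

#include <stdbool.h>
#include <stdint.h>

/* prod/cons carry log2size index bits plus one wrap bit above them. */
bool q_empty(uint32_t prod, uint32_t cons, unsigned log2size)
{
    uint32_t mask = (2u << log2size) - 1;     /* index bits + wrap bit */
    return (prod & mask) == (cons & mask);
}

bool q_full(uint32_t prod, uint32_t cons, unsigned log2size)
{
    uint32_t mask = (2u << log2size) - 1;
    return ((prod ^ cons) & mask) == (1u << log2size);
}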
12
13
Signed-off-by: Eric Auger <eric.auger@redhat.com>
14
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Message-id: 1524665762-31355-7-git-send-email-eric.auger@redhat.com
12
Message-id: 20220501055028.646596-12-richard.henderson@linaro.org
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
14
---
19
hw/arm/smmuv3-internal.h | 163 +++++++++++++++++++++++++++++++++++++++
15
target/arm/cpu.c | 4 ++--
20
hw/arm/smmuv3.c | 136 ++++++++++++++++++++++++++++++++
16
target/arm/gdbstub.c | 2 +-
21
hw/arm/trace-events | 5 ++
17
target/arm/helper.c | 41 ++++++++++++++++++-----------------------
22
3 files changed, 304 insertions(+)
18
3 files changed, 21 insertions(+), 26 deletions(-)
23
19
24
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
20
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
25
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/arm/smmuv3-internal.h
22
--- a/target/arm/cpu.c
27
+++ b/hw/arm/smmuv3-internal.h
23
+++ b/target/arm/cpu.c
28
@@ -XXX,XX +XXX,XX @@ static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
24
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
29
void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask);
25
ARMCPU *cpu = ARM_CPU(obj);
30
void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t gerrorn);
26
31
27
cpu_set_cpustate_pointers(cpu);
32
+/* Queue Handling */
28
- cpu->cp_regs = g_hash_table_new_full(g_int_hash, g_int_equal,
33
+
29
- g_free, cpreg_hashtable_data_destroy);
34
+#define Q_BASE(q) ((q)->base & SMMU_BASE_ADDR_MASK)
30
+ cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
35
+#define WRAP_MASK(q) (1 << (q)->log2size)
31
+ NULL, cpreg_hashtable_data_destroy);
36
+#define INDEX_MASK(q) (((1 << (q)->log2size)) - 1)
32
37
+#define WRAP_INDEX_MASK(q) ((1 << ((q)->log2size + 1)) - 1)
33
QLIST_INIT(&cpu->pre_el_change_hooks);
38
+
34
QLIST_INIT(&cpu->el_change_hooks);
39
+#define Q_CONS(q) ((q)->cons & INDEX_MASK(q))
35
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
40
+#define Q_PROD(q) ((q)->prod & INDEX_MASK(q))
41
+
42
+#define Q_CONS_ENTRY(q) (Q_BASE(q) + (q)->entry_size * Q_CONS(q))
43
+#define Q_PROD_ENTRY(q) (Q_BASE(q) + (q)->entry_size * Q_PROD(q))
44
+
45
+#define Q_CONS_WRAP(q) (((q)->cons & WRAP_MASK(q)) >> (q)->log2size)
46
+#define Q_PROD_WRAP(q) (((q)->prod & WRAP_MASK(q)) >> (q)->log2size)
47
+
48
+static inline bool smmuv3_q_full(SMMUQueue *q)
49
+{
50
+ return ((q->cons ^ q->prod) & WRAP_INDEX_MASK(q)) == WRAP_MASK(q);
51
+}
52
+
53
+static inline bool smmuv3_q_empty(SMMUQueue *q)
54
+{
55
+ return (q->cons & WRAP_INDEX_MASK(q)) == (q->prod & WRAP_INDEX_MASK(q));
56
+}
57
+
58
+static inline void queue_prod_incr(SMMUQueue *q)
59
+{
60
+ q->prod = (q->prod + 1) & WRAP_INDEX_MASK(q);
61
+}
62
+
63
+static inline void queue_cons_incr(SMMUQueue *q)
64
+{
65
+ /*
66
+ * We have to use deposit for the CONS registers to preserve
67
+ * the ERR field in the high bits.
68
+ */
69
+ q->cons = deposit32(q->cons, 0, q->log2size + 1, q->cons + 1);
70
+}
71
+
72
+static inline bool smmuv3_cmdq_enabled(SMMUv3State *s)
73
+{
74
+ return FIELD_EX32(s->cr[0], CR0, CMDQEN);
75
+}
76
+
77
+static inline bool smmuv3_eventq_enabled(SMMUv3State *s)
78
+{
79
+ return FIELD_EX32(s->cr[0], CR0, EVENTQEN);
80
+}
81
+
82
+static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
83
+{
84
+ s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
85
+}
86
+
87
+void smmuv3_write_eventq(SMMUv3State *s, Evt *evt);
88
+
89
+/* Commands */
90
+
91
+typedef enum SMMUCommandType {
92
+ SMMU_CMD_NONE = 0x00,
93
+ SMMU_CMD_PREFETCH_CONFIG ,
94
+ SMMU_CMD_PREFETCH_ADDR,
95
+ SMMU_CMD_CFGI_STE,
96
+ SMMU_CMD_CFGI_STE_RANGE,
97
+ SMMU_CMD_CFGI_CD,
98
+ SMMU_CMD_CFGI_CD_ALL,
99
+ SMMU_CMD_CFGI_ALL,
100
+ SMMU_CMD_TLBI_NH_ALL = 0x10,
101
+ SMMU_CMD_TLBI_NH_ASID,
102
+ SMMU_CMD_TLBI_NH_VA,
103
+ SMMU_CMD_TLBI_NH_VAA,
104
+ SMMU_CMD_TLBI_EL3_ALL = 0x18,
105
+ SMMU_CMD_TLBI_EL3_VA = 0x1a,
106
+ SMMU_CMD_TLBI_EL2_ALL = 0x20,
107
+ SMMU_CMD_TLBI_EL2_ASID,
108
+ SMMU_CMD_TLBI_EL2_VA,
109
+ SMMU_CMD_TLBI_EL2_VAA,
110
+ SMMU_CMD_TLBI_S12_VMALL = 0x28,
111
+ SMMU_CMD_TLBI_S2_IPA = 0x2a,
112
+ SMMU_CMD_TLBI_NSNH_ALL = 0x30,
113
+ SMMU_CMD_ATC_INV = 0x40,
114
+ SMMU_CMD_PRI_RESP,
115
+ SMMU_CMD_RESUME = 0x44,
116
+ SMMU_CMD_STALL_TERM,
117
+ SMMU_CMD_SYNC,
118
+} SMMUCommandType;
119
+
120
+static const char *cmd_stringify[] = {
121
+ [SMMU_CMD_PREFETCH_CONFIG] = "SMMU_CMD_PREFETCH_CONFIG",
122
+ [SMMU_CMD_PREFETCH_ADDR] = "SMMU_CMD_PREFETCH_ADDR",
123
+ [SMMU_CMD_CFGI_STE] = "SMMU_CMD_CFGI_STE",
124
+ [SMMU_CMD_CFGI_STE_RANGE] = "SMMU_CMD_CFGI_STE_RANGE",
125
+ [SMMU_CMD_CFGI_CD] = "SMMU_CMD_CFGI_CD",
126
+ [SMMU_CMD_CFGI_CD_ALL] = "SMMU_CMD_CFGI_CD_ALL",
127
+ [SMMU_CMD_CFGI_ALL] = "SMMU_CMD_CFGI_ALL",
128
+ [SMMU_CMD_TLBI_NH_ALL] = "SMMU_CMD_TLBI_NH_ALL",
129
+ [SMMU_CMD_TLBI_NH_ASID] = "SMMU_CMD_TLBI_NH_ASID",
130
+ [SMMU_CMD_TLBI_NH_VA] = "SMMU_CMD_TLBI_NH_VA",
131
+ [SMMU_CMD_TLBI_NH_VAA] = "SMMU_CMD_TLBI_NH_VAA",
132
+ [SMMU_CMD_TLBI_EL3_ALL] = "SMMU_CMD_TLBI_EL3_ALL",
133
+ [SMMU_CMD_TLBI_EL3_VA] = "SMMU_CMD_TLBI_EL3_VA",
134
+ [SMMU_CMD_TLBI_EL2_ALL] = "SMMU_CMD_TLBI_EL2_ALL",
135
+ [SMMU_CMD_TLBI_EL2_ASID] = "SMMU_CMD_TLBI_EL2_ASID",
136
+ [SMMU_CMD_TLBI_EL2_VA] = "SMMU_CMD_TLBI_EL2_VA",
137
+ [SMMU_CMD_TLBI_EL2_VAA] = "SMMU_CMD_TLBI_EL2_VAA",
138
+ [SMMU_CMD_TLBI_S12_VMALL] = "SMMU_CMD_TLBI_S12_VMALL",
139
+ [SMMU_CMD_TLBI_S2_IPA] = "SMMU_CMD_TLBI_S2_IPA",
140
+ [SMMU_CMD_TLBI_NSNH_ALL] = "SMMU_CMD_TLBI_NSNH_ALL",
141
+ [SMMU_CMD_ATC_INV] = "SMMU_CMD_ATC_INV",
142
+ [SMMU_CMD_PRI_RESP] = "SMMU_CMD_PRI_RESP",
143
+ [SMMU_CMD_RESUME] = "SMMU_CMD_RESUME",
144
+ [SMMU_CMD_STALL_TERM] = "SMMU_CMD_STALL_TERM",
145
+ [SMMU_CMD_SYNC] = "SMMU_CMD_SYNC",
146
+};
147
+
148
+static inline const char *smmu_cmd_string(SMMUCommandType type)
149
+{
150
+ if (type > SMMU_CMD_NONE && type < ARRAY_SIZE(cmd_stringify)) {
151
+ return cmd_stringify[type] ? cmd_stringify[type] : "UNKNOWN";
152
+ } else {
153
+ return "INVALID";
154
+ }
155
+}
156
+
157
+/* CMDQ fields */
158
+
159
+typedef enum {
160
+ SMMU_CERROR_NONE = 0,
161
+ SMMU_CERROR_ILL,
162
+ SMMU_CERROR_ABT,
163
+ SMMU_CERROR_ATC_INV_SYNC,
164
+} SMMUCmdError;
165
+
166
+enum { /* Command completion notification */
167
+ CMD_SYNC_SIG_NONE,
168
+ CMD_SYNC_SIG_IRQ,
169
+ CMD_SYNC_SIG_SEV,
170
+};
171
+
172
+#define CMD_TYPE(x) extract32((x)->word[0], 0 , 8)
173
+#define CMD_SSEC(x) extract32((x)->word[0], 10, 1)
174
+#define CMD_SSV(x) extract32((x)->word[0], 11, 1)
175
+#define CMD_RESUME_AC(x) extract32((x)->word[0], 12, 1)
176
+#define CMD_RESUME_AB(x) extract32((x)->word[0], 13, 1)
177
+#define CMD_SYNC_CS(x) extract32((x)->word[0], 12, 2)
178
+#define CMD_SSID(x) extract32((x)->word[0], 12, 20)
179
+#define CMD_SID(x) ((x)->word[1])
180
+#define CMD_VMID(x) extract32((x)->word[1], 0 , 16)
181
+#define CMD_ASID(x) extract32((x)->word[1], 16, 16)
182
+#define CMD_RESUME_STAG(x) extract32((x)->word[2], 0 , 16)
183
+#define CMD_RESP(x) extract32((x)->word[2], 11, 2)
184
+#define CMD_LEAF(x) extract32((x)->word[2], 0 , 1)
185
+#define CMD_STE_RANGE(x) extract32((x)->word[2], 0 , 5)
186
+#define CMD_ADDR(x) ({ \
187
+ uint64_t high = (uint64_t)(x)->word[3]; \
188
+ uint64_t low = extract32((x)->word[2], 12, 20); \
189
+ uint64_t addr = high << 32 | (low << 12); \
190
+ addr; \
191
+ })
192
+
193
+int smmuv3_cmdq_consume(SMMUv3State *s);
194
+
195
#endif
196
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
197
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
198
--- a/hw/arm/smmuv3.c
37
--- a/target/arm/gdbstub.c
199
+++ b/hw/arm/smmuv3.c
38
+++ b/target/arm/gdbstub.c
200
@@ -XXX,XX +XXX,XX @@ void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
39
@@ -XXX,XX +XXX,XX @@ static void arm_gen_one_xml_sysreg_tag(GString *s, DynamicGDBXMLInfo *dyn_xml,
201
trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
40
static void arm_register_sysreg_for_xml(gpointer key, gpointer value,
41
gpointer p)
42
{
43
- uint32_t ri_key = *(uint32_t *)key;
44
+ uint32_t ri_key = (uintptr_t)key;
45
ARMCPRegInfo *ri = value;
46
RegisterSysregXmlParam *param = (RegisterSysregXmlParam *)p;
47
GString *s = param->s;
48
diff --git a/target/arm/helper.c b/target/arm/helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/helper.c
51
+++ b/target/arm/helper.c
52
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
53
static void add_cpreg_to_list(gpointer key, gpointer opaque)
54
{
55
ARMCPU *cpu = opaque;
56
- uint64_t regidx;
57
- const ARMCPRegInfo *ri;
58
-
59
- regidx = *(uint32_t *)key;
60
- ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
61
+ uint32_t regidx = (uintptr_t)key;
62
+ const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
63
64
if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
65
cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
66
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
67
static void count_cpreg(gpointer key, gpointer opaque)
68
{
69
ARMCPU *cpu = opaque;
70
- uint64_t regidx;
71
const ARMCPRegInfo *ri;
72
73
- regidx = *(uint32_t *)key;
74
- ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
75
+ ri = g_hash_table_lookup(cpu->cp_regs, key);
76
77
if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
78
cpu->cpreg_array_len++;
79
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
80
81
static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
82
{
83
- uint64_t aidx = cpreg_to_kvm_id(*(uint32_t *)a);
84
- uint64_t bidx = cpreg_to_kvm_id(*(uint32_t *)b);
85
+ uint64_t aidx = cpreg_to_kvm_id((uintptr_t)a);
86
+ uint64_t bidx = cpreg_to_kvm_id((uintptr_t)b);
87
88
if (aidx > bidx) {
89
return 1;
90
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
91
for (i = 0; i < ARRAY_SIZE(aliases); i++) {
92
const struct E2HAlias *a = &aliases[i];
93
ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
94
- uint32_t *new_key;
95
bool ok;
96
97
if (a->feature && !a->feature(&cpu->isar)) {
98
continue;
99
}
100
101
- src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key);
102
- dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key);
103
+ src_reg = g_hash_table_lookup(cpu->cp_regs,
104
+ (gpointer)(uintptr_t)a->src_key);
105
+ dst_reg = g_hash_table_lookup(cpu->cp_regs,
106
+ (gpointer)(uintptr_t)a->dst_key);
107
g_assert(src_reg != NULL);
108
g_assert(dst_reg != NULL);
109
110
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
111
112
/* Create alias before redirection so we dup the right data. */
113
new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
114
- new_key = g_memdup(&a->new_key, sizeof(uint32_t));
115
116
new_reg->name = a->new_name;
117
new_reg->type |= ARM_CP_ALIAS;
118
/* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
119
new_reg->access &= PL2_RW | PL3_RW;
120
121
- ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
122
+ ok = g_hash_table_insert(cpu->cp_regs,
123
+ (gpointer)(uintptr_t)a->new_key, new_reg);
124
g_assert(ok);
125
126
src_reg->opaque = dst_reg;
127
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
128
/* Private utility function for define_one_arm_cp_reg_with_opaque():
129
* add a single reginfo struct to the hash table.
130
*/
131
- uint32_t *key = g_new(uint32_t, 1);
132
+ uint32_t key;
133
ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
134
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
135
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
136
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
137
if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
138
r2->cp = CP_REG_ARM64_SYSREG_CP;
139
}
140
- *key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
141
- r2->opc0, opc1, opc2);
142
+ key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
143
+ r2->opc0, opc1, opc2);
144
} else {
145
- *key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
146
+ key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
147
}
148
if (opaque) {
149
r2->opaque = opaque;
150
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
151
* requested.
152
*/
153
if (!(r->type & ARM_CP_OVERRIDE)) {
154
- ARMCPRegInfo *oldreg;
155
- oldreg = g_hash_table_lookup(cpu->cp_regs, key);
156
+ const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
157
if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
158
fprintf(stderr, "Register redefined: cp=%d %d bit "
159
"crn=%d crm=%d opc1=%d opc2=%d, "
160
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
161
g_assert_not_reached();
162
}
163
}
164
- g_hash_table_insert(cpu->cp_regs, key, r2);
165
+ g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
202
}
166
}
203
167
204
+static inline MemTxResult queue_read(SMMUQueue *q, void *data)
168
205
+{
169
@@ -XXX,XX +XXX,XX @@ void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
206
+ dma_addr_t addr = Q_CONS_ENTRY(q);
170
207
+
171
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp)
208
+ return dma_memory_read(&address_space_memory, addr, data, q->entry_size);
209
+}
210
+
211
+static MemTxResult queue_write(SMMUQueue *q, void *data)
212
+{
213
+ dma_addr_t addr = Q_PROD_ENTRY(q);
214
+ MemTxResult ret;
215
+
216
+ ret = dma_memory_write(&address_space_memory, addr, data, q->entry_size);
217
+ if (ret != MEMTX_OK) {
218
+ return ret;
219
+ }
220
+
221
+ queue_prod_incr(q);
222
+ return MEMTX_OK;
223
+}
224
+
225
+void smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
226
+{
227
+ SMMUQueue *q = &s->eventq;
228
+
229
+ if (!smmuv3_eventq_enabled(s)) {
230
+ return;
231
+ }
232
+
233
+ if (smmuv3_q_full(q)) {
234
+ return;
235
+ }
236
+
237
+ queue_write(q, evt);
238
+
239
+ if (smmuv3_q_empty(q)) {
240
+ smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
241
+ }
242
+}
243
+
244
static void smmuv3_init_regs(SMMUv3State *s)
245
{
172
{
246
/**
173
- return g_hash_table_lookup(cpregs, &encoded_cp);
247
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
174
+ return g_hash_table_lookup(cpregs, (gpointer)(uintptr_t)encoded_cp);
248
s->sid_split = 0;
249
}
175
}
250
176
251
+int smmuv3_cmdq_consume(SMMUv3State *s)
177
void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
252
+{
253
+ SMMUCmdError cmd_error = SMMU_CERROR_NONE;
254
+ SMMUQueue *q = &s->cmdq;
255
+ SMMUCommandType type = 0;
256
+
257
+ if (!smmuv3_cmdq_enabled(s)) {
258
+ return 0;
259
+ }
260
+ /*
261
+ * some commands depend on register values, typically CR0. In case those
262
+ * register values change while handling the command, spec says it
263
+ * is UNPREDICTABLE whether the command is interpreted under the new
264
+ * or old value.
265
+ */
266
+
267
+ while (!smmuv3_q_empty(q)) {
268
+ uint32_t pending = s->gerror ^ s->gerrorn;
269
+ Cmd cmd;
270
+
271
+ trace_smmuv3_cmdq_consume(Q_PROD(q), Q_CONS(q),
272
+ Q_PROD_WRAP(q), Q_CONS_WRAP(q));
273
+
274
+ if (FIELD_EX32(pending, GERROR, CMDQ_ERR)) {
275
+ break;
276
+ }
277
+
278
+ if (queue_read(q, &cmd) != MEMTX_OK) {
279
+ cmd_error = SMMU_CERROR_ABT;
280
+ break;
281
+ }
282
+
283
+ type = CMD_TYPE(&cmd);
284
+
285
+ trace_smmuv3_cmdq_opcode(smmu_cmd_string(type));
286
+
287
+ switch (type) {
288
+ case SMMU_CMD_SYNC:
289
+ if (CMD_SYNC_CS(&cmd) & CMD_SYNC_SIG_IRQ) {
290
+ smmuv3_trigger_irq(s, SMMU_IRQ_CMD_SYNC, 0);
291
+ }
292
+ break;
293
+ case SMMU_CMD_PREFETCH_CONFIG:
294
+ case SMMU_CMD_PREFETCH_ADDR:
295
+ case SMMU_CMD_CFGI_STE:
296
+ case SMMU_CMD_CFGI_STE_RANGE: /* same as SMMU_CMD_CFGI_ALL */
297
+ case SMMU_CMD_CFGI_CD:
298
+ case SMMU_CMD_CFGI_CD_ALL:
299
+ case SMMU_CMD_TLBI_NH_ALL:
300
+ case SMMU_CMD_TLBI_NH_ASID:
301
+ case SMMU_CMD_TLBI_NH_VA:
302
+ case SMMU_CMD_TLBI_NH_VAA:
303
+ case SMMU_CMD_TLBI_EL3_ALL:
304
+ case SMMU_CMD_TLBI_EL3_VA:
305
+ case SMMU_CMD_TLBI_EL2_ALL:
306
+ case SMMU_CMD_TLBI_EL2_ASID:
307
+ case SMMU_CMD_TLBI_EL2_VA:
308
+ case SMMU_CMD_TLBI_EL2_VAA:
309
+ case SMMU_CMD_TLBI_S12_VMALL:
310
+ case SMMU_CMD_TLBI_S2_IPA:
311
+ case SMMU_CMD_TLBI_NSNH_ALL:
312
+ case SMMU_CMD_ATC_INV:
313
+ case SMMU_CMD_PRI_RESP:
314
+ case SMMU_CMD_RESUME:
315
+ case SMMU_CMD_STALL_TERM:
316
+ trace_smmuv3_unhandled_cmd(type);
317
+ break;
318
+ default:
319
+ cmd_error = SMMU_CERROR_ILL;
320
+ qemu_log_mask(LOG_GUEST_ERROR,
321
+ "Illegal command type: %d\n", CMD_TYPE(&cmd));
322
+ break;
323
+ }
324
+ if (cmd_error) {
325
+ break;
326
+ }
327
+ /*
328
+ * We only increment the cons index after the completion of
329
+ * the command. We do that because the SYNC returns immediately
330
+ * and does not check the completion of previous commands
331
+ */
332
+ queue_cons_incr(q);
333
+ }
334
+
335
+ if (cmd_error) {
336
+ trace_smmuv3_cmdq_consume_error(smmu_cmd_string(type), cmd_error);
337
+ smmu_write_cmdq_err(s, cmd_error);
338
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_CMDQ_ERR_MASK);
339
+ }
340
+
341
+ trace_smmuv3_cmdq_consume_out(Q_PROD(q), Q_CONS(q),
342
+ Q_PROD_WRAP(q), Q_CONS_WRAP(q));
343
+
344
+ return 0;
345
+}
346
+
347
static MemTxResult smmu_write_mmio(void *opaque, hwaddr offset, uint64_t data,
348
unsigned size, MemTxAttrs attrs)
349
{
350
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
351
index XXXXXXX..XXXXXXX 100644
352
--- a/hw/arm/trace-events
353
+++ b/hw/arm/trace-events
354
@@ -XXX,XX +XXX,XX @@ smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr:
355
smmuv3_trigger_irq(int irq) "irq=%d"
356
smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
357
smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
358
+smmuv3_unhandled_cmd(uint32_t type) "Unhandled command type=%d"
359
+smmuv3_cmdq_consume(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod=%d cons=%d prod.wrap=%d cons.wrap=%d"
360
+smmuv3_cmdq_opcode(const char *opcode) "<--- %s"
361
+smmuv3_cmdq_consume_out(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "prod:%d, cons:%d, prod_wrap:%d, cons_wrap:%d "
362
+smmuv3_cmdq_consume_error(const char *cmd_name, uint8_t cmd_error) "Error on %s command execution: %d"
363
--
178
--
364
2.17.0
179
2.25.1
365
366
1
Convert the smc91c111 device away from using the old_mmio field of
1
From: Richard Henderson <richard.henderson@linaro.org>
2
MemoryRegionOps. This device is used by several Arm board models.
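To make the shape of the conversion concrete, here is a minimal standalone sketch of the same byte-composition pattern (the regs[] backing store and function names are made up for illustration; this is not the QEMU device code itself):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical byte-wide register file standing in for the device. */
    static uint8_t regs[16];

    static uint8_t mmio_readb(unsigned offset)
    {
        return regs[offset & 0xf];
    }

    static void mmio_writeb(unsigned offset, uint8_t value)
    {
        regs[offset & 0xf] = value;
    }

    /* Compose a 1/2/4-byte little-endian access out of byte accesses,
     * the same shape as a merged .read/.write MemoryRegionOps callback.
     */
    static uint64_t mmio_readfn(unsigned addr, unsigned size)
    {
        uint64_t val = 0;

        for (unsigned i = 0; i < size; i++) {
            val |= (uint64_t)mmio_readb(addr + i) << (i * 8);
        }
        return val;
    }

    static void mmio_writefn(unsigned addr, uint64_t value, unsigned size)
    {
        for (unsigned i = 0; i < size; i++) {
            mmio_writeb(addr + i, (value >> (i * 8)) & 0xff);
        }
    }

    int main(void)
    {
        mmio_writefn(0, 0x11223344, 4);
        printf("0x%llx\n", (unsigned long long)mmio_readfn(0, 2)); /* 0x3344 */
        return 0;
    }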
3
2
3
Simplify freeing cp_regs hash table entries by using a single
4
allocation for the entire value.
5
6
This fixes a theoretical bug if we were to ever free the entire
7
hash table, because we've been installing string literal constants
8
into the cpreg structure in define_arm_vh_e2h_redirects_aliases.
9
However, at present we only free entries created for AArch32
10
wildcard cpregs which get overwritten by more specific cpregs,
11
so this bug is never exposed.
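For illustration, the single-allocation trick amounts to the following standalone sketch (with a made-up RegInfo type rather than the real ARMCPRegInfo, and plain malloc/free instead of GLib):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for a reginfo structure whose name must live exactly as
     * long as the struct itself.
     */
    typedef struct RegInfo {
        const char *name;
        int crn, crm, opc1, opc2;
    } RegInfo;

    static RegInfo *reginfo_new(const RegInfo *tmpl, const char *name)
    {
        size_t name_len = strlen(name) + 1;
        /* One allocation: the struct, immediately followed by the name. */
        RegInfo *r = malloc(sizeof(*r) + name_len);

        *r = *tmpl;
        r->name = memcpy(r + 1, name, name_len);
        return r;
    }

    int main(void)
    {
        RegInfo tmpl = { .crn = 1 };
        RegInfo *r = reginfo_new(&tmpl, "SCTLR");

        printf("%s crn=%d\n", r->name, r->crn);
        free(r);   /* frees the name too -- no separate free of r->name */
        return 0;
    }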
12
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20220501055028.646596-13-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180427173611.10281-3-peter.maydell@linaro.org
7
---
17
---
8
hw/net/smc91c111.c | 54 +++++++++++++++++++++-------------------------
18
target/arm/cpu.c | 16 +---------------
9
1 file changed, 25 insertions(+), 29 deletions(-)
19
target/arm/helper.c | 10 ++++++++--
20
2 files changed, 9 insertions(+), 17 deletions(-)
10
21
11
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
22
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
12
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/net/smc91c111.c
24
--- a/target/arm/cpu.c
14
+++ b/hw/net/smc91c111.c
25
+++ b/target/arm/cpu.c
15
@@ -XXX,XX +XXX,XX @@ static uint32_t smc91c111_readb(void *opaque, hwaddr offset)
26
@@ -XXX,XX +XXX,XX @@ uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz)
16
return 0;
27
return (Aff1 << ARM_AFF1_SHIFT) | Aff0;
17
}
28
}
18
29
19
-static void smc91c111_writew(void *opaque, hwaddr offset,
30
-static void cpreg_hashtable_data_destroy(gpointer data)
20
- uint32_t value)
21
+static uint64_t smc91c111_readfn(void *opaque, hwaddr addr, unsigned size)
22
{
23
- smc91c111_writeb(opaque, offset, value & 0xff);
24
- smc91c111_writeb(opaque, offset + 1, value >> 8);
25
+ int i;
26
+ uint32_t val = 0;
27
+
28
+ for (i = 0; i < size; i++) {
29
+ val |= smc91c111_readb(opaque, addr + i) << (i * 8);
30
+ }
31
+ return val;
32
}
33
34
-static void smc91c111_writel(void *opaque, hwaddr offset,
35
- uint32_t value)
36
+static void smc91c111_writefn(void *opaque, hwaddr addr,
37
+ uint64_t value, unsigned size)
38
{
39
+ int i = 0;
40
+
41
/* 32-bit writes to offset 0xc only actually write to the bank select
42
- register (offset 0xe) */
43
- if (offset != 0xc)
44
- smc91c111_writew(opaque, offset, value & 0xffff);
45
- smc91c111_writew(opaque, offset + 2, value >> 16);
46
-}
47
+ * register (offset 0xe), so skip the first two bytes we would write.
48
+ */
49
+ if (addr == 0xc && size == 4) {
50
+ i += 2;
51
+ }
52
53
-static uint32_t smc91c111_readw(void *opaque, hwaddr offset)
54
-{
31
-{
55
- uint32_t val;
32
- /*
56
- val = smc91c111_readb(opaque, offset);
33
- * Destroy function for cpu->cp_regs hashtable data entries.
57
- val |= smc91c111_readb(opaque, offset + 1) << 8;
34
- * We must free the name string because it was g_strdup()ed in
58
- return val;
35
- * add_cpreg_to_hashtable(). It's OK to cast away the 'const'
36
- * from r->name because we know we definitely allocated it.
37
- */
38
- ARMCPRegInfo *r = data;
39
-
40
- g_free((void *)r->name);
41
- g_free(r);
59
-}
42
-}
60
-
43
-
61
-static uint32_t smc91c111_readl(void *opaque, hwaddr offset)
44
static void arm_cpu_initfn(Object *obj)
62
-{
45
{
63
- uint32_t val;
46
ARMCPU *cpu = ARM_CPU(obj);
64
- val = smc91c111_readw(opaque, offset);
47
65
- val |= smc91c111_readw(opaque, offset + 2) << 16;
48
cpu_set_cpustate_pointers(cpu);
66
- return val;
49
cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
67
+ for (; i < size; i++) {
50
- NULL, cpreg_hashtable_data_destroy);
68
+ smc91c111_writeb(opaque, addr + i,
51
+ NULL, g_free);
69
+ extract32(value, i * 8, 8));
52
70
+ }
53
QLIST_INIT(&cpu->pre_el_change_hooks);
71
}
54
QLIST_INIT(&cpu->el_change_hooks);
72
55
diff --git a/target/arm/helper.c b/target/arm/helper.c
73
static int smc91c111_can_receive_nc(NetClientState *nc)
56
index XXXXXXX..XXXXXXX 100644
74
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps smc91c111_mem_ops = {
57
--- a/target/arm/helper.c
75
/* The special case for 32 bit writes to 0xc means we can't just
58
+++ b/target/arm/helper.c
76
* set .impl.min/max_access_size to 1, unfortunately
59
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
60
* add a single reginfo struct to the hash table.
77
*/
61
*/
78
- .old_mmio = {
62
uint32_t key;
79
- .read = { smc91c111_readb, smc91c111_readw, smc91c111_readl, },
63
- ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
80
- .write = { smc91c111_writeb, smc91c111_writew, smc91c111_writel, },
64
+ ARMCPRegInfo *r2;
81
- },
65
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
82
+ .read = smc91c111_readfn,
66
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
83
+ .write = smc91c111_writefn,
67
+ size_t name_len;
84
+ .valid.min_access_size = 1,
68
+
85
+ .valid.max_access_size = 4,
69
+ /* Combine cpreg and name into one allocation. */
86
.endianness = DEVICE_NATIVE_ENDIAN,
70
+ name_len = strlen(name) + 1;
87
};
71
+ r2 = g_malloc(sizeof(*r2) + name_len);
88
72
+ *r2 = *r;
73
+ r2->name = memcpy(r2 + 1, name, name_len);
74
75
- r2->name = g_strdup(name);
76
/* Reset the secure state to the specific incoming state. This is
77
* necessary as the register may have been defined with both states.
78
*/
89
--
79
--
90
2.17.0
80
2.25.1
91
92
1
Convert the tusb6010 device away from using the old_mmio field
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of MemoryRegionOps. This device is used only in the n800 and n810
3
boards.
4
2
3
Move the computation of key to the top of the function.
4
Hoist the resolution of cp as well, as an input to the
5
computation of key.
6
7
This will be required by a subsequent patch.
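The "key" in question is just an integer encoding of the register coordinates used as the hash-table key. As a rough standalone illustration of why cp has to be resolved before the key can be computed (the field widths below are invented for the sketch and are not QEMU's actual ENCODE_CP_REG layout):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack register coordinates into one lookup key.  Field widths are
     * illustrative only.
     */
    static uint32_t encode_key(int cp, int is64, int ns,
                               int crn, int crm, int opc1, int opc2)
    {
        return ((uint32_t)cp << 16) | (is64 << 15) | (ns << 14) |
               (crn << 10) | (crm << 6) | (opc1 << 3) | opc2;
    }

    int main(void)
    {
        /* Resolve cp first (e.g. default to 15), then compute the key once. */
        int cp = 0;
        if (cp == 0) {
            cp = 15;
        }
        printf("key=0x%x\n", encode_key(cp, 0, 1, 1, 0, 0, 0));
        return 0;
    }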
8
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20220501055028.646596-14-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180427173611.10281-2-peter.maydell@linaro.org
8
---
13
---
9
hw/usb/tusb6010.c | 40 ++++++++++++++++++++++++++++++++++++----
14
target/arm/helper.c | 49 +++++++++++++++++++++++++--------------------
10
1 file changed, 36 insertions(+), 4 deletions(-)
15
1 file changed, 27 insertions(+), 22 deletions(-)
11
16
12
diff --git a/hw/usb/tusb6010.c b/hw/usb/tusb6010.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/usb/tusb6010.c
19
--- a/target/arm/helper.c
15
+++ b/hw/usb/tusb6010.c
20
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ static void tusb_async_writew(void *opaque, hwaddr addr,
21
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
17
}
22
ARMCPRegInfo *r2;
18
}
23
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
19
24
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
20
+static uint64_t tusb_async_readfn(void *opaque, hwaddr addr, unsigned size)
25
+ int cp = r->cp;
21
+{
26
size_t name_len;
22
+ switch (size) {
27
23
+ case 1:
28
+ switch (state) {
24
+ return tusb_async_readb(opaque, addr);
29
+ case ARM_CP_STATE_AA32:
25
+ case 2:
30
+ /* We assume it is a cp15 register if the .cp field is left unset. */
26
+ return tusb_async_readh(opaque, addr);
31
+ if (cp == 0 && r->state == ARM_CP_STATE_BOTH) {
27
+ case 4:
32
+ cp = 15;
28
+ return tusb_async_readw(opaque, addr);
33
+ }
29
+ default:
34
+ key = ENCODE_CP_REG(cp, is64, ns, r->crn, crm, opc1, opc2);
30
+ g_assert_not_reached();
31
+ }
32
+}
33
+
34
+static void tusb_async_writefn(void *opaque, hwaddr addr,
35
+ uint64_t value, unsigned size)
36
+{
37
+ switch (size) {
38
+ case 1:
39
+ tusb_async_writeb(opaque, addr, value);
40
+ break;
35
+ break;
41
+ case 2:
36
+ case ARM_CP_STATE_AA64:
42
+ tusb_async_writeh(opaque, addr, value);
37
+ /*
43
+ break;
38
+ * To allow abbreviation of ARMCPRegInfo definitions, we treat
44
+ case 4:
39
+ * cp == 0 as equivalent to the value for "standard guest-visible
45
+ tusb_async_writew(opaque, addr, value);
40
+ * sysreg". STATE_BOTH definitions are also always "standard sysreg"
41
+ * in their AArch64 view (the .cp value may be non-zero for the
42
+ * benefit of the AArch32 view).
43
+ */
44
+ if (cp == 0 || r->state == ARM_CP_STATE_BOTH) {
45
+ cp = CP_REG_ARM64_SYSREG_CP;
46
+ }
47
+ key = ENCODE_AA64_CP_REG(cp, r->crn, crm, r->opc0, opc1, opc2);
46
+ break;
48
+ break;
47
+ default:
49
+ default:
48
+ g_assert_not_reached();
50
+ g_assert_not_reached();
49
+ }
51
+ }
50
+}
51
+
52
+
52
static const MemoryRegionOps tusb_async_ops = {
53
/* Combine cpreg and name into one allocation. */
53
- .old_mmio = {
54
name_len = strlen(name) + 1;
54
- .read = { tusb_async_readb, tusb_async_readh, tusb_async_readw, },
55
r2 = g_malloc(sizeof(*r2) + name_len);
55
- .write = { tusb_async_writeb, tusb_async_writeh, tusb_async_writew, },
56
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
56
- },
57
}
57
+ .read = tusb_async_readfn,
58
58
+ .write = tusb_async_writefn,
59
if (r->state == ARM_CP_STATE_BOTH) {
59
+ .valid.min_access_size = 1,
60
- /* We assume it is a cp15 register if the .cp field is left unset.
60
+ .valid.max_access_size = 4,
61
- */
61
.endianness = DEVICE_NATIVE_ENDIAN,
62
- if (r2->cp == 0) {
62
};
63
- r2->cp = 15;
63
64
- }
65
-
66
#if HOST_BIG_ENDIAN
67
if (r2->fieldoffset) {
68
r2->fieldoffset += sizeof(uint32_t);
69
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
70
#endif
71
}
72
}
73
- if (state == ARM_CP_STATE_AA64) {
74
- /* To allow abbreviation of ARMCPRegInfo
75
- * definitions, we treat cp == 0 as equivalent to
76
- * the value for "standard guest-visible sysreg".
77
- * STATE_BOTH definitions are also always "standard
78
- * sysreg" in their AArch64 view (the .cp value may
79
- * be non-zero for the benefit of the AArch32 view).
80
- */
81
- if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
82
- r2->cp = CP_REG_ARM64_SYSREG_CP;
83
- }
84
- key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
85
- r2->opc0, opc1, opc2);
86
- } else {
87
- key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
88
- }
89
if (opaque) {
90
r2->opaque = opaque;
91
}
92
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
93
/* Make sure reginfo passed to helpers for wildcarded regs
94
* has the correct crm/opc1/opc2 for this reg, not CP_ANY:
95
*/
96
+ r2->cp = cp;
97
r2->crm = crm;
98
r2->opc1 = opc1;
99
r2->opc2 = opc2;
64
--
100
--
65
2.17.0
101
2.25.1
66
67
1
From: Jan Kiszka <jan.kiszka@siemens.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This allows pinning the host controller in the Linux PCI domain space.
3
Put most of the value writeback to the same place,
4
Linux requires that property to be available consistently or not at all,
4
and improve the comment that goes with them.
5
in which case the domain number becomes unstable on additions/removals.
6
Adding it here won't make a difference in practice for most setups as we
7
only expose one controller.
8
5
9
However, enabling Jailhouse on top may introduce another controller, and
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
that one would like to have a stable address as well. So the property is
11
needed for the first controller as well.
12
13
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
14
Message-id: 3301c5bc-7b47-1b0e-8ce4-30435057a276@web.de
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20220501055028.646596-15-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
10
---
18
hw/arm/virt.c | 1 +
11
target/arm/helper.c | 28 ++++++++++++----------------
19
1 file changed, 1 insertion(+)
12
1 file changed, 12 insertions(+), 16 deletions(-)
20
13
21
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/arm/virt.c
16
--- a/target/arm/helper.c
24
+++ b/hw/arm/virt.c
17
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static void create_pcie(const VirtMachineState *vms, qemu_irq *pic)
18
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
26
qemu_fdt_setprop_string(vms->fdt, nodename, "device_type", "pci");
19
*r2 = *r;
27
qemu_fdt_setprop_cell(vms->fdt, nodename, "#address-cells", 3);
20
r2->name = memcpy(r2 + 1, name, name_len);
28
qemu_fdt_setprop_cell(vms->fdt, nodename, "#size-cells", 2);
21
29
+ qemu_fdt_setprop_cell(vms->fdt, nodename, "linux,pci-domain", 0);
22
- /* Reset the secure state to the specific incoming state. This is
30
qemu_fdt_setprop_cells(vms->fdt, nodename, "bus-range", 0,
23
- * necessary as the register may have been defined with both states.
31
nr_pcie_buses - 1);
24
+ /*
32
qemu_fdt_setprop(vms->fdt, nodename, "dma-coherent", NULL, 0);
25
+ * Update fields to match the instantiation, overwiting wildcards
26
+ * such as CP_ANY, ARM_CP_STATE_BOTH, or ARM_CP_SECSTATE_BOTH.
27
*/
28
+ r2->cp = cp;
29
+ r2->crm = crm;
30
+ r2->opc1 = opc1;
31
+ r2->opc2 = opc2;
32
+ r2->state = state;
33
r2->secure = secstate;
34
+ if (opaque) {
35
+ r2->opaque = opaque;
36
+ }
37
38
if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
39
/* Register is banked (using both entries in array).
40
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
41
#endif
42
}
43
}
44
- if (opaque) {
45
- r2->opaque = opaque;
46
- }
47
- /* reginfo passed to helpers is correct for the actual access,
48
- * and is never ARM_CP_STATE_BOTH:
49
- */
50
- r2->state = state;
51
- /* Make sure reginfo passed to helpers for wildcarded regs
52
- * has the correct crm/opc1/opc2 for this reg, not CP_ANY:
53
- */
54
- r2->cp = cp;
55
- r2->crm = crm;
56
- r2->opc1 = opc1;
57
- r2->opc2 = opc2;
58
+
59
/* By convention, for wildcarded registers only the first
60
* entry is used for migration; the others are marked as
61
* ALIAS so we don't try to transfer the register
33
--
62
--
34
2.17.0
63
2.25.1
35
36
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Path analysis shows that size == 3 && !is_q has been eliminated.
3
Bool is a more appropriate type for these variables.
4
4
5
Fixes: Coverity CID1385853
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20180501180455.11214-3-richard.henderson@linaro.org
7
Message-id: 20220501055028.646596-16-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
target/arm/translate-a64.c | 6 +++++-
10
target/arm/helper.c | 4 ++--
12
1 file changed, 5 insertions(+), 1 deletion(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
13
12
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-a64.c
15
--- a/target/arm/helper.c
17
+++ b/target/arm/translate-a64.c
16
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
17
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
19
/* All 64-bit element operations can be shared with scalar 2misc */
18
*/
20
int pass;
19
uint32_t key;
21
20
ARMCPRegInfo *r2;
22
- for (pass = 0; pass < (is_q ? 2 : 1); pass++) {
21
- int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
23
+ /* Coverity claims (size == 3 && !is_q) has been eliminated
22
- int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
24
+ * from all paths leading to here.
23
+ bool is64 = r->type & ARM_CP_64BIT;
25
+ */
24
+ bool ns = secstate & ARM_CP_SECSTATE_NS;
26
+ tcg_debug_assert(is_q);
25
int cp = r->cp;
27
+ for (pass = 0; pass < 2; pass++) {
26
size_t name_len;
28
TCGv_i64 tcg_op = tcg_temp_new_i64();
29
TCGv_i64 tcg_res = tcg_temp_new_i64();
30
27
31
--
28
--
32
2.17.0
29
2.25.1
33
34
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
ARM virt machine now exposes a new "iommu" option.
3
Computing isbanked only once makes the code
4
The SMMUv3 IOMMU is instantiated using -machine virt,iommu=smmuv3.
4
a bit easier to read.
5
5
6
Signed-off-by: Eric Auger <eric.auger@redhat.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 1524665762-31355-15-git-send-email-eric.auger@redhat.com
8
Message-id: 20220501055028.646596-17-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
hw/arm/virt.c | 36 ++++++++++++++++++++++++++++++++++++
11
target/arm/helper.c | 6 ++++--
13
1 file changed, 36 insertions(+)
12
1 file changed, 4 insertions(+), 2 deletions(-)
14
13
15
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/virt.c
16
--- a/target/arm/helper.c
18
+++ b/hw/arm/virt.c
17
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static void virt_set_gic_version(Object *obj, const char *value, Error **errp)
18
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
19
bool is64 = r->type & ARM_CP_64BIT;
20
bool ns = secstate & ARM_CP_SECSTATE_NS;
21
int cp = r->cp;
22
+ bool isbanked;
23
size_t name_len;
24
25
switch (state) {
26
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
27
r2->opaque = opaque;
20
}
28
}
21
}
29
22
30
- if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
23
+static char *virt_get_iommu(Object *obj, Error **errp)
31
+ isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
24
+{
32
+ if (isbanked) {
25
+ VirtMachineState *vms = VIRT_MACHINE(obj);
33
/* Register is banked (using both entries in array).
26
+
34
* Overwriting fieldoffset as the array is only used to define
27
+ switch (vms->iommu) {
35
* banked registers but later only fieldoffset is used.
28
+ case VIRT_IOMMU_NONE:
36
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
29
+ return g_strdup("none");
30
+ case VIRT_IOMMU_SMMUV3:
31
+ return g_strdup("smmuv3");
32
+ default:
33
+ g_assert_not_reached();
34
+ }
35
+}
36
+
37
+static void virt_set_iommu(Object *obj, const char *value, Error **errp)
38
+{
39
+ VirtMachineState *vms = VIRT_MACHINE(obj);
40
+
41
+ if (!strcmp(value, "smmuv3")) {
42
+ vms->iommu = VIRT_IOMMU_SMMUV3;
43
+ } else if (!strcmp(value, "none")) {
44
+ vms->iommu = VIRT_IOMMU_NONE;
45
+ } else {
46
+ error_setg(errp, "Invalid iommu value");
47
+ error_append_hint(errp, "Valid values are none, smmuv3.\n");
48
+ }
49
+}
50
+
51
static CpuInstanceProperties
52
virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
53
{
54
@@ -XXX,XX +XXX,XX @@ static void virt_2_12_instance_init(Object *obj)
55
NULL);
56
}
37
}
57
38
58
+ /* Default disallows iommu instantiation */
39
if (state == ARM_CP_STATE_AA32) {
59
+ vms->iommu = VIRT_IOMMU_NONE;
40
- if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
60
+ object_property_add_str(obj, "iommu", virt_get_iommu, virt_set_iommu, NULL);
41
+ if (isbanked) {
61
+ object_property_set_description(obj, "iommu",
42
/* If the register is banked then we don't need to migrate or
62
+ "Set the IOMMU type. "
43
* reset the 32-bit instance in certain cases:
63
+ "Valid values are none and smmuv3",
44
*
64
+ NULL);
65
+
66
vms->memmap = a15memmap;
67
vms->irqmap = a15irqmap;
68
}
69
--
45
--
70
2.17.0
46
2.25.1
71
72
1
From: Prem Mallappa <prem.mallappa@broadcom.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add code to instantiate an smmuv3 in virt machine. A new iommu
3
Perform the override check early, so that it is still done
4
integer member is introduced in VirtMachineState to store the type
4
even when we decide to discard an unreachable cpreg.
5
of the iommu in use.
6
5
7
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
6
Use assert not printf+abort.
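As a standalone sketch of the idea (hypothetical Reg type and table, not the real cpreg code): an explicit-override rule enforced with a plain assert instead of a printf/abort pair:

    #include <assert.h>
    #include <stdio.h>

    #define REG_OVERRIDE (1 << 0)

    typedef struct Reg {
        const char *name;
        unsigned type;
    } Reg;

    /* Table indexed by key; NULL means "not defined yet". */
    static const Reg *table[16];

    static void define_reg(unsigned key, const Reg *r)
    {
        const Reg *old = table[key & 15];

        /* Overriding an existing definition must be explicitly requested. */
        if (!(r->type & REG_OVERRIDE)) {
            assert(!old || (old->type & REG_OVERRIDE));
        }
        table[key & 15] = r;
    }

    int main(void)
    {
        static const Reg a = { "A", REG_OVERRIDE };
        static const Reg b = { "B", 0 };

        define_reg(3, &a);
        define_reg(3, &b);   /* fine: the old entry allows overriding */
        printf("%s\n", table[3]->name);
        return 0;
    }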
8
Signed-off-by: Eric Auger <eric.auger@redhat.com>
7
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 1524665762-31355-13-git-send-email-eric.auger@redhat.com
10
Message-id: 20220501055028.646596-18-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
12
---
13
include/hw/arm/virt.h | 10 +++++++
13
target/arm/helper.c | 22 ++++++++--------------
14
hw/arm/virt.c | 64 ++++++++++++++++++++++++++++++++++++++++++-
14
1 file changed, 8 insertions(+), 14 deletions(-)
15
2 files changed, 73 insertions(+), 1 deletion(-)
16
15
17
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/arm/virt.h
18
--- a/target/arm/helper.c
20
+++ b/include/hw/arm/virt.h
19
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
22
21
g_assert_not_reached();
23
#define NUM_GICV2M_SPIS 64
22
}
24
#define NUM_VIRTIO_TRANSPORTS 32
23
25
+#define NUM_SMMU_IRQS 4
24
+ /* Overriding of an existing definition must be explicitly requested. */
26
25
+ if (!(r->type & ARM_CP_OVERRIDE)) {
27
#define ARCH_GICV3_MAINT_IRQ 9
26
+ const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
28
27
+ if (oldreg) {
29
@@ -XXX,XX +XXX,XX @@ enum {
28
+ assert(oldreg->type & ARM_CP_OVERRIDE);
30
VIRT_GIC_V2M,
29
+ }
31
VIRT_GIC_ITS,
32
VIRT_GIC_REDIST,
33
+ VIRT_SMMU,
34
VIRT_UART,
35
VIRT_MMIO,
36
VIRT_RTC,
37
@@ -XXX,XX +XXX,XX @@ enum {
38
VIRT_SECURE_MEM,
39
};
40
41
+typedef enum VirtIOMMUType {
42
+ VIRT_IOMMU_NONE,
43
+ VIRT_IOMMU_SMMUV3,
44
+ VIRT_IOMMU_VIRTIO,
45
+} VirtIOMMUType;
46
+
47
typedef struct MemMapEntry {
48
hwaddr base;
49
hwaddr size;
50
@@ -XXX,XX +XXX,XX @@ typedef struct {
51
bool its;
52
bool virt;
53
int32_t gic_version;
54
+ VirtIOMMUType iommu;
55
struct arm_boot_info bootinfo;
56
const MemMapEntry *memmap;
57
const int *irqmap;
58
@@ -XXX,XX +XXX,XX @@ typedef struct {
59
uint32_t clock_phandle;
60
uint32_t gic_phandle;
61
uint32_t msi_phandle;
62
+ uint32_t iommu_phandle;
63
int psci_conduit;
64
} VirtMachineState;
65
66
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/hw/arm/virt.c
69
+++ b/hw/arm/virt.c
70
@@ -XXX,XX +XXX,XX @@
71
#include "hw/smbios/smbios.h"
72
#include "qapi/visitor.h"
73
#include "standard-headers/linux/input.h"
74
+#include "hw/arm/smmuv3.h"
75
76
#define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
77
static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
78
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
79
[VIRT_FW_CFG] = { 0x09020000, 0x00000018 },
80
[VIRT_GPIO] = { 0x09030000, 0x00001000 },
81
[VIRT_SECURE_UART] = { 0x09040000, 0x00001000 },
82
+ [VIRT_SMMU] = { 0x09050000, 0x00020000 },
83
[VIRT_MMIO] = { 0x0a000000, 0x00000200 },
84
/* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
85
[VIRT_PLATFORM_BUS] = { 0x0c000000, 0x02000000 },
86
@@ -XXX,XX +XXX,XX @@ static const int a15irqmap[] = {
87
[VIRT_SECURE_UART] = 8,
88
[VIRT_MMIO] = 16, /* ...to 16 + NUM_VIRTIO_TRANSPORTS - 1 */
89
[VIRT_GIC_V2M] = 48, /* ...to 48 + NUM_GICV2M_SPIS - 1 */
90
+ [VIRT_SMMU] = 74, /* ...to 74 + NUM_SMMU_IRQS - 1 */
91
[VIRT_PLATFORM_BUS] = 112, /* ...to 112 + PLATFORM_BUS_NUM_IRQS -1 */
92
};
93
94
@@ -XXX,XX +XXX,XX @@ static void create_pcie_irq_map(const VirtMachineState *vms,
95
0x7 /* PCI irq */);
96
}
97
98
-static void create_pcie(const VirtMachineState *vms, qemu_irq *pic)
99
+static void create_smmu(const VirtMachineState *vms, qemu_irq *pic,
100
+ PCIBus *bus)
101
+{
102
+ char *node;
103
+ const char compat[] = "arm,smmu-v3";
104
+ int irq = vms->irqmap[VIRT_SMMU];
105
+ int i;
106
+ hwaddr base = vms->memmap[VIRT_SMMU].base;
107
+ hwaddr size = vms->memmap[VIRT_SMMU].size;
108
+ const char irq_names[] = "eventq\0priq\0cmdq-sync\0gerror";
109
+ DeviceState *dev;
110
+
111
+ if (vms->iommu != VIRT_IOMMU_SMMUV3 || !vms->iommu_phandle) {
112
+ return;
113
+ }
30
+ }
114
+
31
+
115
+ dev = qdev_create(NULL, "arm-smmuv3");
32
/* Combine cpreg and name into one allocation. */
116
+
33
name_len = strlen(name) + 1;
117
+ object_property_set_link(OBJECT(dev), OBJECT(bus), "primary-bus",
34
r2 = g_malloc(sizeof(*r2) + name_len);
118
+ &error_abort);
35
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
119
+ qdev_init_nofail(dev);
36
assert(!raw_accessors_invalid(r2));
120
+ sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
37
}
121
+ for (i = 0; i < NUM_SMMU_IRQS; i++) {
38
122
+ sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, pic[irq + i]);
39
- /* Overriding of an existing definition must be explicitly
123
+ }
40
- * requested.
124
+
41
- */
125
+ node = g_strdup_printf("/smmuv3@%" PRIx64, base);
42
- if (!(r->type & ARM_CP_OVERRIDE)) {
126
+ qemu_fdt_add_subnode(vms->fdt, node);
43
- const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
127
+ qemu_fdt_setprop(vms->fdt, node, "compatible", compat, sizeof(compat));
44
- if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
128
+ qemu_fdt_setprop_sized_cells(vms->fdt, node, "reg", 2, base, 2, size);
45
- fprintf(stderr, "Register redefined: cp=%d %d bit "
129
+
46
- "crn=%d crm=%d opc1=%d opc2=%d, "
130
+ qemu_fdt_setprop_cells(vms->fdt, node, "interrupts",
47
- "was %s, now %s\n", r2->cp, 32 + 32 * is64,
131
+ GIC_FDT_IRQ_TYPE_SPI, irq , GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
48
- r2->crn, r2->crm, r2->opc1, r2->opc2,
132
+ GIC_FDT_IRQ_TYPE_SPI, irq + 1, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
49
- oldreg->name, r2->name);
133
+ GIC_FDT_IRQ_TYPE_SPI, irq + 2, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI,
50
- g_assert_not_reached();
134
+ GIC_FDT_IRQ_TYPE_SPI, irq + 3, GIC_FDT_IRQ_FLAGS_EDGE_LO_HI);
51
- }
135
+
52
- }
136
+ qemu_fdt_setprop(vms->fdt, node, "interrupt-names", irq_names,
53
g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
137
+ sizeof(irq_names));
138
+
139
+ qemu_fdt_setprop_cell(vms->fdt, node, "clocks", vms->clock_phandle);
140
+ qemu_fdt_setprop_string(vms->fdt, node, "clock-names", "apb_pclk");
141
+ qemu_fdt_setprop(vms->fdt, node, "dma-coherent", NULL, 0);
142
+
143
+ qemu_fdt_setprop_cell(vms->fdt, node, "#iommu-cells", 1);
144
+
145
+ qemu_fdt_setprop_cell(vms->fdt, node, "phandle", vms->iommu_phandle);
146
+ g_free(node);
147
+}
148
+
149
+static void create_pcie(VirtMachineState *vms, qemu_irq *pic)
150
{
151
hwaddr base_mmio = vms->memmap[VIRT_PCIE_MMIO].base;
152
hwaddr size_mmio = vms->memmap[VIRT_PCIE_MMIO].size;
153
@@ -XXX,XX +XXX,XX @@ static void create_pcie(const VirtMachineState *vms, qemu_irq *pic)
154
qemu_fdt_setprop_cell(vms->fdt, nodename, "#interrupt-cells", 1);
155
create_pcie_irq_map(vms, vms->gic_phandle, irq, nodename);
156
157
+ if (vms->iommu) {
158
+ vms->iommu_phandle = qemu_fdt_alloc_phandle(vms->fdt);
159
+
160
+ create_smmu(vms, pic, pci->bus);
161
+
162
+ qemu_fdt_setprop_cells(vms->fdt, nodename, "iommu-map",
163
+ 0x0, vms->iommu_phandle, 0x0, 0x10000);
164
+ }
165
+
166
g_free(nodename);
167
}
54
}
168
55
169
--
56
--
170
2.17.0
57
2.25.1
171
172
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Let's introduce a helper function for recording an
3
Put the block comments into the current coding style.
4
event in the event queue.
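The event queue (like the command queue) is a power-of-two ring managed with producer/consumer indices plus wrap bits. A small standalone sketch of the full/empty tests that style of queue uses (illustrative only, not the device code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Ring of 2^log2size entries.  prod/cons carry the index in the low
     * bits and a wrap flag one bit above it.
     */
    typedef struct Queue {
        uint32_t prod;
        uint32_t cons;
        unsigned log2size;
    } Queue;

    static uint32_t q_idx(const Queue *q, uint32_t p)
    {
        return p & ((1u << q->log2size) - 1);
    }

    static uint32_t q_wrap(const Queue *q, uint32_t p)
    {
        return (p >> q->log2size) & 1;
    }

    static bool q_empty(const Queue *q)
    {
        return q_idx(q, q->prod) == q_idx(q, q->cons) &&
               q_wrap(q, q->prod) == q_wrap(q, q->cons);
    }

    static bool q_full(const Queue *q)
    {
        return q_idx(q, q->prod) == q_idx(q, q->cons) &&
               q_wrap(q, q->prod) != q_wrap(q, q->cons);
    }

    int main(void)
    {
        Queue q = { .prod = 0, .cons = 0, .log2size = 3 };

        printf("empty=%d full=%d\n", q_empty(&q), q_full(&q)); /* 1 0 */
        q.prod = 8;                    /* index 0 again, wrap bit now set */
        printf("empty=%d full=%d\n", q_empty(&q), q_full(&q)); /* 0 1 */
        return 0;
    }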
5
4
6
Signed-off-by: Eric Auger <eric.auger@redhat.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 1524665762-31355-9-git-send-email-eric.auger@redhat.com
7
Message-id: 20220501055028.646596-19-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
9
---
11
hw/arm/smmuv3-internal.h | 148 ++++++++++++++++++++++++++++++++++++++-
10
target/arm/helper.c | 24 +++++++++++++++---------
12
hw/arm/smmuv3.c | 108 ++++++++++++++++++++++++++--
11
1 file changed, 15 insertions(+), 9 deletions(-)
13
hw/arm/trace-events | 1 +
14
3 files changed, 249 insertions(+), 8 deletions(-)
15
12
16
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/smmuv3-internal.h
15
--- a/target/arm/helper.c
19
+++ b/hw/arm/smmuv3-internal.h
16
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ static inline void smmu_write_cmdq_err(SMMUv3State *s, uint32_t err_type)
17
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
21
s->cmdq.cons = FIELD_DP32(s->cmdq.cons, CMDQ_CONS, ERR, err_type);
18
return cpu_list;
22
}
19
}
23
20
24
-void smmuv3_write_eventq(SMMUv3State *s, Evt *evt);
21
+/*
25
-
22
+ * Private utility function for define_one_arm_cp_reg_with_opaque():
26
/* Commands */
23
+ * add a single reginfo struct to the hash table.
27
24
+ */
28
typedef enum SMMUCommandType {
25
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
29
@@ -XXX,XX +XXX,XX @@ enum { /* Command completion notification */
26
void *opaque, CPState state,
30
27
CPSecureState secstate,
31
#define SMMU_FEATURE_2LVL_STE (1 << 0)
28
int crm, int opc1, int opc2,
32
29
const char *name)
33
+/* Events */
34
+
35
+typedef enum SMMUEventType {
36
+ SMMU_EVT_OK = 0x00,
37
+ SMMU_EVT_F_UUT ,
38
+ SMMU_EVT_C_BAD_STREAMID ,
39
+ SMMU_EVT_F_STE_FETCH ,
40
+ SMMU_EVT_C_BAD_STE ,
41
+ SMMU_EVT_F_BAD_ATS_TREQ ,
42
+ SMMU_EVT_F_STREAM_DISABLED ,
43
+ SMMU_EVT_F_TRANS_FORBIDDEN ,
44
+ SMMU_EVT_C_BAD_SUBSTREAMID ,
45
+ SMMU_EVT_F_CD_FETCH ,
46
+ SMMU_EVT_C_BAD_CD ,
47
+ SMMU_EVT_F_WALK_EABT ,
48
+ SMMU_EVT_F_TRANSLATION = 0x10,
49
+ SMMU_EVT_F_ADDR_SIZE ,
50
+ SMMU_EVT_F_ACCESS ,
51
+ SMMU_EVT_F_PERMISSION ,
52
+ SMMU_EVT_F_TLB_CONFLICT = 0x20,
53
+ SMMU_EVT_F_CFG_CONFLICT ,
54
+ SMMU_EVT_E_PAGE_REQ = 0x24,
55
+} SMMUEventType;
56
+
57
+static const char *event_stringify[] = {
58
+ [SMMU_EVT_OK] = "SMMU_EVT_OK",
59
+ [SMMU_EVT_F_UUT] = "SMMU_EVT_F_UUT",
60
+ [SMMU_EVT_C_BAD_STREAMID] = "SMMU_EVT_C_BAD_STREAMID",
61
+ [SMMU_EVT_F_STE_FETCH] = "SMMU_EVT_F_STE_FETCH",
62
+ [SMMU_EVT_C_BAD_STE] = "SMMU_EVT_C_BAD_STE",
63
+ [SMMU_EVT_F_BAD_ATS_TREQ] = "SMMU_EVT_F_BAD_ATS_TREQ",
64
+ [SMMU_EVT_F_STREAM_DISABLED] = "SMMU_EVT_F_STREAM_DISABLED",
65
+ [SMMU_EVT_F_TRANS_FORBIDDEN] = "SMMU_EVT_F_TRANS_FORBIDDEN",
66
+ [SMMU_EVT_C_BAD_SUBSTREAMID] = "SMMU_EVT_C_BAD_SUBSTREAMID",
67
+ [SMMU_EVT_F_CD_FETCH] = "SMMU_EVT_F_CD_FETCH",
68
+ [SMMU_EVT_C_BAD_CD] = "SMMU_EVT_C_BAD_CD",
69
+ [SMMU_EVT_F_WALK_EABT] = "SMMU_EVT_F_WALK_EABT",
70
+ [SMMU_EVT_F_TRANSLATION] = "SMMU_EVT_F_TRANSLATION",
71
+ [SMMU_EVT_F_ADDR_SIZE] = "SMMU_EVT_F_ADDR_SIZE",
72
+ [SMMU_EVT_F_ACCESS] = "SMMU_EVT_F_ACCESS",
73
+ [SMMU_EVT_F_PERMISSION] = "SMMU_EVT_F_PERMISSION",
74
+ [SMMU_EVT_F_TLB_CONFLICT] = "SMMU_EVT_F_TLB_CONFLICT",
75
+ [SMMU_EVT_F_CFG_CONFLICT] = "SMMU_EVT_F_CFG_CONFLICT",
76
+ [SMMU_EVT_E_PAGE_REQ] = "SMMU_EVT_E_PAGE_REQ",
77
+};
78
+
79
+static inline const char *smmu_event_string(SMMUEventType type)
80
+{
81
+ if (type < ARRAY_SIZE(event_stringify)) {
82
+ return event_stringify[type] ? event_stringify[type] : "UNKNOWN";
83
+ } else {
84
+ return "INVALID";
85
+ }
86
+}
87
+
88
+/* Encode an event record */
89
+typedef struct SMMUEventInfo {
90
+ SMMUEventType type;
91
+ uint32_t sid;
92
+ bool recorded;
93
+ bool record_trans_faults;
94
+ union {
95
+ struct {
96
+ uint32_t ssid;
97
+ bool ssv;
98
+ dma_addr_t addr;
99
+ bool rnw;
100
+ bool pnu;
101
+ bool ind;
102
+ } f_uut;
103
+ struct SSIDInfo {
104
+ uint32_t ssid;
105
+ bool ssv;
106
+ } c_bad_streamid;
107
+ struct SSIDAddrInfo {
108
+ uint32_t ssid;
109
+ bool ssv;
110
+ dma_addr_t addr;
111
+ } f_ste_fetch;
112
+ struct SSIDInfo c_bad_ste;
113
+ struct {
114
+ dma_addr_t addr;
115
+ bool rnw;
116
+ } f_transl_forbidden;
117
+ struct {
118
+ uint32_t ssid;
119
+ } c_bad_substream;
120
+ struct SSIDAddrInfo f_cd_fetch;
121
+ struct SSIDInfo c_bad_cd;
122
+ struct FullInfo {
123
+ bool stall;
124
+ uint16_t stag;
125
+ uint32_t ssid;
126
+ bool ssv;
127
+ bool s2;
128
+ dma_addr_t addr;
129
+ bool rnw;
130
+ bool pnu;
131
+ bool ind;
132
+ uint8_t class;
133
+ dma_addr_t addr2;
134
+ } f_walk_eabt;
135
+ struct FullInfo f_translation;
136
+ struct FullInfo f_addr_size;
137
+ struct FullInfo f_access;
138
+ struct FullInfo f_permission;
139
+ struct SSIDInfo f_cfg_conflict;
140
+ /**
141
+ * not supported yet:
142
+ * F_BAD_ATS_TREQ
143
+ * F_BAD_ATS_TREQ
144
+ * F_TLB_CONFLICT
145
+ * E_PAGE_REQUEST
146
+ * IMPDEF_EVENTn
147
+ */
148
+ } u;
149
+} SMMUEventInfo;
150
+
151
+/* EVTQ fields */
152
+
153
+#define EVT_Q_OVERFLOW (1 << 31)
154
+
155
+#define EVT_SET_TYPE(x, v) deposit32((x)->word[0], 0 , 8 , v)
156
+#define EVT_SET_SSV(x, v) deposit32((x)->word[0], 11, 1 , v)
157
+#define EVT_SET_SSID(x, v) deposit32((x)->word[0], 12, 20, v)
158
+#define EVT_SET_SID(x, v) ((x)->word[1] = v)
159
+#define EVT_SET_STAG(x, v) deposit32((x)->word[2], 0 , 16, v)
160
+#define EVT_SET_STALL(x, v) deposit32((x)->word[2], 31, 1 , v)
161
+#define EVT_SET_PNU(x, v) deposit32((x)->word[3], 1 , 1 , v)
162
+#define EVT_SET_IND(x, v) deposit32((x)->word[3], 2 , 1 , v)
163
+#define EVT_SET_RNW(x, v) deposit32((x)->word[3], 3 , 1 , v)
164
+#define EVT_SET_S2(x, v) deposit32((x)->word[3], 7 , 1 , v)
165
+#define EVT_SET_CLASS(x, v) deposit32((x)->word[3], 8 , 2 , v)
166
+#define EVT_SET_ADDR(x, addr) \
167
+ do { \
168
+ (x)->word[5] = (uint32_t)(addr >> 32); \
169
+ (x)->word[4] = (uint32_t)(addr & 0xffffffff); \
170
+ } while (0)
171
+#define EVT_SET_ADDR2(x, addr) \
172
+ do { \
173
+ deposit32((x)->word[7], 3, 29, addr >> 16); \
174
+ deposit32((x)->word[7], 0, 16, addr & 0xffff);\
175
+ } while (0)
176
+
177
+void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *event);
178
+
179
#endif
180
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
181
index XXXXXXX..XXXXXXX 100644
182
--- a/hw/arm/smmuv3.c
183
+++ b/hw/arm/smmuv3.c
184
@@ -XXX,XX +XXX,XX @@ static MemTxResult queue_write(SMMUQueue *q, void *data)
185
return MEMTX_OK;
186
}
187
188
-void smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
189
+static MemTxResult smmuv3_write_eventq(SMMUv3State *s, Evt *evt)
190
{
30
{
191
SMMUQueue *q = &s->eventq;
31
- /* Private utility function for define_one_arm_cp_reg_with_opaque():
192
+ MemTxResult r;
32
- * add a single reginfo struct to the hash table.
193
+
33
- */
194
+ if (!smmuv3_eventq_enabled(s)) {
34
uint32_t key;
195
+ return MEMTX_ERROR;
35
ARMCPRegInfo *r2;
196
+ }
36
bool is64 = r->type & ARM_CP_64BIT;
197
+
37
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
198
+ if (smmuv3_q_full(q)) {
38
199
+ return MEMTX_ERROR;
39
isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
200
+ }
40
if (isbanked) {
201
+
41
- /* Register is banked (using both entries in array).
202
+ r = queue_write(q, evt);
42
+ /*
203
+ if (r != MEMTX_OK) {
43
+ * Register is banked (using both entries in array).
204
+ return r;
44
* Overwriting fieldoffset as the array is only used to define
205
+ }
45
* banked registers but later only fieldoffset is used.
206
+
46
*/
207
+ if (smmuv3_q_empty(q)) {
47
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
208
+ smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
48
209
+ }
49
if (state == ARM_CP_STATE_AA32) {
210
+ return MEMTX_OK;
50
if (isbanked) {
211
+}
51
- /* If the register is banked then we don't need to migrate or
212
+
52
+ /*
213
+void smmuv3_record_event(SMMUv3State *s, SMMUEventInfo *info)
53
+ * If the register is banked then we don't need to migrate or
214
+{
54
* reset the 32-bit instance in certain cases:
215
+ Evt evt;
55
*
216
+ MemTxResult r;
56
* 1) If the register has both 32-bit and 64-bit instances then we
217
57
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
218
if (!smmuv3_eventq_enabled(s)) {
58
r2->type |= ARM_CP_ALIAS;
219
return;
59
}
60
} else if ((secstate != r->secure) && !ns) {
61
- /* The register is not banked so we only want to allow migration of
62
- * the non-secure instance.
63
+ /*
64
+ * The register is not banked so we only want to allow migration
65
+ * of the non-secure instance.
66
*/
67
r2->type |= ARM_CP_ALIAS;
68
}
69
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
70
}
220
}
71
}
221
72
222
- if (smmuv3_q_full(q)) {
73
- /* By convention, for wildcarded registers only the first
223
+ EVT_SET_TYPE(&evt, info->type);
74
+ /*
224
+ EVT_SET_SID(&evt, info->sid);
75
+ * By convention, for wildcarded registers only the first
225
+
76
* entry is used for migration; the others are marked as
226
+ switch (info->type) {
77
* ALIAS so we don't try to transfer the register
227
+ case SMMU_EVT_OK:
78
* multiple times. Special registers (ie NOP/WFI) are
228
return;
79
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
229
+ case SMMU_EVT_F_UUT:
80
r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB;
230
+ EVT_SET_SSID(&evt, info->u.f_uut.ssid);
231
+ EVT_SET_SSV(&evt, info->u.f_uut.ssv);
232
+ EVT_SET_ADDR(&evt, info->u.f_uut.addr);
233
+ EVT_SET_RNW(&evt, info->u.f_uut.rnw);
234
+ EVT_SET_PNU(&evt, info->u.f_uut.pnu);
235
+ EVT_SET_IND(&evt, info->u.f_uut.ind);
236
+ break;
237
+ case SMMU_EVT_C_BAD_STREAMID:
238
+ EVT_SET_SSID(&evt, info->u.c_bad_streamid.ssid);
239
+ EVT_SET_SSV(&evt, info->u.c_bad_streamid.ssv);
240
+ break;
241
+ case SMMU_EVT_F_STE_FETCH:
242
+ EVT_SET_SSID(&evt, info->u.f_ste_fetch.ssid);
243
+ EVT_SET_SSV(&evt, info->u.f_ste_fetch.ssv);
244
+ EVT_SET_ADDR(&evt, info->u.f_ste_fetch.addr);
245
+ break;
246
+ case SMMU_EVT_C_BAD_STE:
247
+ EVT_SET_SSID(&evt, info->u.c_bad_ste.ssid);
248
+ EVT_SET_SSV(&evt, info->u.c_bad_ste.ssv);
249
+ break;
250
+ case SMMU_EVT_F_STREAM_DISABLED:
251
+ break;
252
+ case SMMU_EVT_F_TRANS_FORBIDDEN:
253
+ EVT_SET_ADDR(&evt, info->u.f_transl_forbidden.addr);
254
+ EVT_SET_RNW(&evt, info->u.f_transl_forbidden.rnw);
255
+ break;
256
+ case SMMU_EVT_C_BAD_SUBSTREAMID:
257
+ EVT_SET_SSID(&evt, info->u.c_bad_substream.ssid);
258
+ break;
259
+ case SMMU_EVT_F_CD_FETCH:
260
+ EVT_SET_SSID(&evt, info->u.f_cd_fetch.ssid);
261
+ EVT_SET_SSV(&evt, info->u.f_cd_fetch.ssv);
262
+ EVT_SET_ADDR(&evt, info->u.f_cd_fetch.addr);
263
+ break;
264
+ case SMMU_EVT_C_BAD_CD:
265
+ EVT_SET_SSID(&evt, info->u.c_bad_cd.ssid);
266
+ EVT_SET_SSV(&evt, info->u.c_bad_cd.ssv);
267
+ break;
268
+ case SMMU_EVT_F_WALK_EABT:
269
+ case SMMU_EVT_F_TRANSLATION:
270
+ case SMMU_EVT_F_ADDR_SIZE:
271
+ case SMMU_EVT_F_ACCESS:
272
+ case SMMU_EVT_F_PERMISSION:
273
+ EVT_SET_STALL(&evt, info->u.f_walk_eabt.stall);
274
+ EVT_SET_STAG(&evt, info->u.f_walk_eabt.stag);
275
+ EVT_SET_SSID(&evt, info->u.f_walk_eabt.ssid);
276
+ EVT_SET_SSV(&evt, info->u.f_walk_eabt.ssv);
277
+ EVT_SET_S2(&evt, info->u.f_walk_eabt.s2);
278
+ EVT_SET_ADDR(&evt, info->u.f_walk_eabt.addr);
279
+ EVT_SET_RNW(&evt, info->u.f_walk_eabt.rnw);
280
+ EVT_SET_PNU(&evt, info->u.f_walk_eabt.pnu);
281
+ EVT_SET_IND(&evt, info->u.f_walk_eabt.ind);
282
+ EVT_SET_CLASS(&evt, info->u.f_walk_eabt.class);
283
+ EVT_SET_ADDR2(&evt, info->u.f_walk_eabt.addr2);
284
+ break;
285
+ case SMMU_EVT_F_CFG_CONFLICT:
286
+ EVT_SET_SSID(&evt, info->u.f_cfg_conflict.ssid);
287
+ EVT_SET_SSV(&evt, info->u.f_cfg_conflict.ssv);
288
+ break;
289
+ /* rest is not implemented */
290
+ case SMMU_EVT_F_BAD_ATS_TREQ:
291
+ case SMMU_EVT_F_TLB_CONFLICT:
292
+ case SMMU_EVT_E_PAGE_REQ:
293
+ default:
294
+ g_assert_not_reached();
295
}
81
}
296
82
297
- queue_write(q, evt);
83
- /* Check that raw accesses are either forbidden or handled. Note that
298
-
84
+ /*
299
- if (smmuv3_q_empty(q)) {
85
+ * Check that raw accesses are either forbidden or handled. Note that
300
- smmuv3_trigger_irq(s, SMMU_IRQ_EVTQ, 0);
86
* we can't assert this earlier because the setup of fieldoffset for
301
+ trace_smmuv3_record_event(smmu_event_string(info->type), info->sid);
87
* banked registers has to be done first.
302
+ r = smmuv3_write_eventq(s, &evt);
88
*/
303
+ if (r != MEMTX_OK) {
304
+ smmuv3_trigger_irq(s, SMMU_IRQ_GERROR, R_GERROR_EVENTQ_ABT_ERR_MASK);
305
}
306
+ info->recorded = true;
307
}
308
309
static void smmuv3_init_regs(SMMUv3State *s)
310
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
311
index XXXXXXX..XXXXXXX 100644
312
--- a/hw/arm/trace-events
313
+++ b/hw/arm/trace-events
314
@@ -XXX,XX +XXX,XX @@ smmuv3_write_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr:
315
smmuv3_write_mmio_idr(uint64_t addr, uint64_t val) "write to RO/Unimpl reg 0x%lx val64:0x%lx"
316
smmuv3_write_mmio_evtq_cons_bef_clear(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "Before clearing interrupt prod:0x%x cons:0x%x prod.w:%d cons.w:%d"
317
smmuv3_write_mmio_evtq_cons_after_clear(uint32_t prod, uint32_t cons, uint8_t prod_wrap, uint8_t cons_wrap) "after clearing interrupt prod:0x%x cons:0x%x prod.w:%d cons.w:%d"
318
+smmuv3_record_event(const char *type, uint32_t sid) "%s sid=%d"
319
--
89
--
320
2.17.0
90
2.25.1
321
322
1
From: Prem Mallappa <prem.mallappa@broadcom.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This patch builds the smmuv3 node in the ACPI IORT table.
3
Since e03b56863d2bc, our host endian indicator is unconditionally
4
set, which means that we can use a normal C condition.
4
5
5
The RID space of the root complex, which spans 0x0-0x10000
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
maps to streamid space 0x0-0x10000 in smmuv3, which in turn
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
maps to deviceid space 0x0-0x10000 in the ITS group.
8
Message-id: 20220501055028.646596-20-richard.henderson@linaro.org
8
9
[PMM: quote correct git hash in commit message]
9
The guest must feature the IOMMU probe deferral series
10
(https://lkml.org/lkml/2017/4/10/214) which fixes streamid
11
multiple lookup. This bug is not related to the SMMU emulation.
12
13
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
14
Signed-off-by: Eric Auger <eric.auger@redhat.com>
15
Reviewed-by: Shannon Zhao <zhaoshenglong@huawei.com>
16
Message-id: 1524665762-31355-14-git-send-email-eric.auger@redhat.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
11
---
19
include/hw/acpi/acpi-defs.h | 15 ++++++++++
12
target/arm/helper.c | 9 +++------
20
hw/arm/virt-acpi-build.c | 55 ++++++++++++++++++++++++++++++++-----
13
1 file changed, 3 insertions(+), 6 deletions(-)
21
2 files changed, 63 insertions(+), 7 deletions(-)
22
14
23
diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/acpi/acpi-defs.h
17
--- a/target/arm/helper.c
26
+++ b/include/hw/acpi/acpi-defs.h
18
+++ b/target/arm/helper.c
27
@@ -XXX,XX +XXX,XX @@ struct AcpiIortItsGroup {
19
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
28
} QEMU_PACKED;
20
r2->type |= ARM_CP_ALIAS;
29
typedef struct AcpiIortItsGroup AcpiIortItsGroup;
21
}
30
22
31
+struct AcpiIortSmmu3 {
23
- if (r->state == ARM_CP_STATE_BOTH) {
32
+ ACPI_IORT_NODE_HEADER_DEF
24
-#if HOST_BIG_ENDIAN
33
+ uint64_t base_address;
25
- if (r2->fieldoffset) {
34
+ uint32_t flags;
26
- r2->fieldoffset += sizeof(uint32_t);
35
+ uint32_t reserved2;
27
- }
36
+ uint64_t vatos_address;
28
-#endif
37
+ uint32_t model;
29
+ if (HOST_BIG_ENDIAN &&
38
+ uint32_t event_gsiv;
30
+ r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
39
+ uint32_t pri_gsiv;
31
+ r2->fieldoffset += sizeof(uint32_t);
40
+ uint32_t gerr_gsiv;
32
}
41
+ uint32_t sync_gsiv;
42
+ AcpiIortIdMapping id_mapping_array[0];
43
+} QEMU_PACKED;
44
+typedef struct AcpiIortSmmu3 AcpiIortSmmu3;
45
+
46
struct AcpiIortRC {
47
ACPI_IORT_NODE_HEADER_DEF
48
AcpiIortMemoryAccess memory_properties;
49
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/hw/arm/virt-acpi-build.c
52
+++ b/hw/arm/virt-acpi-build.c
53
@@ -XXX,XX +XXX,XX @@ build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned xsdt_tbl_offset)
54
}
55
56
static void
57
-build_iort(GArray *table_data, BIOSLinker *linker)
58
+build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
59
{
60
- int iort_start = table_data->len;
61
+ int nb_nodes, iort_start = table_data->len;
62
AcpiIortIdMapping *idmap;
63
AcpiIortItsGroup *its;
64
AcpiIortTable *iort;
65
- size_t node_size, iort_length;
66
+ AcpiIortSmmu3 *smmu;
67
+ size_t node_size, iort_length, smmu_offset = 0;
68
AcpiIortRC *rc;
69
70
iort = acpi_data_push(table_data, sizeof(*iort));
71
72
+ if (vms->iommu == VIRT_IOMMU_SMMUV3) {
73
+ nb_nodes = 3; /* RC, ITS, SMMUv3 */
74
+ } else {
75
+ nb_nodes = 2; /* RC, ITS */
76
+ }
77
+
78
iort_length = sizeof(*iort);
79
- iort->node_count = cpu_to_le32(2); /* RC and ITS nodes */
80
+ iort->node_count = cpu_to_le32(nb_nodes);
81
iort->node_offset = cpu_to_le32(sizeof(*iort));
82
83
/* ITS group node */
84
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker)
85
its->its_count = cpu_to_le32(1);
86
its->identifiers[0] = 0; /* MADT translation_id */
87
88
+ if (vms->iommu == VIRT_IOMMU_SMMUV3) {
89
+ int irq = vms->irqmap[VIRT_SMMU];
90
+
91
+ /* SMMUv3 node */
92
+ smmu_offset = iort->node_offset + node_size;
93
+ node_size = sizeof(*smmu) + sizeof(*idmap);
94
+ iort_length += node_size;
95
+ smmu = acpi_data_push(table_data, node_size);
96
+
97
+ smmu->type = ACPI_IORT_NODE_SMMU_V3;
98
+ smmu->length = cpu_to_le16(node_size);
99
+ smmu->mapping_count = cpu_to_le32(1);
100
+ smmu->mapping_offset = cpu_to_le32(sizeof(*smmu));
101
+ smmu->base_address = cpu_to_le64(vms->memmap[VIRT_SMMU].base);
102
+ smmu->event_gsiv = cpu_to_le32(irq);
103
+ smmu->pri_gsiv = cpu_to_le32(irq + 1);
104
+ smmu->gerr_gsiv = cpu_to_le32(irq + 2);
105
+ smmu->sync_gsiv = cpu_to_le32(irq + 3);
106
+
107
+ /* Identity RID mapping covering the whole input RID range */
108
+ idmap = &smmu->id_mapping_array[0];
109
+ idmap->input_base = 0;
110
+ idmap->id_count = cpu_to_le32(0xFFFF);
111
+ idmap->output_base = 0;
112
+ /* output IORT node is the ITS group node (the first node) */
113
+ idmap->output_reference = cpu_to_le32(iort->node_offset);
114
+ }
115
+
116
/* Root Complex Node */
117
node_size = sizeof(*rc) + sizeof(*idmap);
118
iort_length += node_size;
119
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker)
120
idmap->input_base = 0;
121
idmap->id_count = cpu_to_le32(0xFFFF);
122
idmap->output_base = 0;
123
- /* output IORT node is the ITS group node (the first node) */
124
- idmap->output_reference = cpu_to_le32(iort->node_offset);
125
+
126
+ if (vms->iommu == VIRT_IOMMU_SMMUV3) {
127
+ /* output IORT node is the smmuv3 node */
128
+ idmap->output_reference = cpu_to_le32(smmu_offset);
129
+ } else {
130
+ /* output IORT node is the ITS group node (the first node) */
131
+ idmap->output_reference = cpu_to_le32(iort->node_offset);
132
+ }
133
134
iort->length = cpu_to_le32(iort_length);
135
136
@@ -XXX,XX +XXX,XX @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
137
138
if (its_class_name() && !vmc->no_its) {
139
acpi_add_table(table_offsets, tables_blob);
140
- build_iort(tables_blob, tables->linker);
141
+ build_iort(tables_blob, tables->linker, vms);
142
}
33
}
143
34
144
/* XSDT is pointed to by RSDP */
145
--
35
--
146
2.17.0
36
2.25.1
147
148
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We introduce some helpers to handle wired IRQs and especially
4
GERROR interrupt. SMMU writes GERROR register on GERROR event
5
and SW acks GERROR interrupts by setting GERRORn.
6
7
The wired interrupts are edge-sensitive, hence the pulse usage.
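A standalone sketch of that GERROR/GERRORN handshake (illustrative, using plain uint32_t state rather than the device registers): the device reports an error by toggling a currently-inactive GERROR bit, and software acknowledges it by toggling the matching GERRORN bit.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t gerror;     /* toggled by the device to signal an error */
    static uint32_t gerrorn;    /* toggled by software to acknowledge it    */

    /* A bit is "pending" while GERROR and GERRORN disagree. */
    static uint32_t pending(void)
    {
        return gerror ^ gerrorn;
    }

    static void report_error(uint32_t mask)
    {
        /* Only toggle bits that are not already pending. */
        uint32_t new_errors = ~pending() & mask;

        gerror ^= new_errors;
        /* A real device would pulse the GERROR IRQ here if any bit toggled. */
    }

    static void ack_errors(uint32_t new_gerrorn)
    {
        gerrorn = new_gerrorn;   /* software writes the toggled value back */
    }

    int main(void)
    {
        report_error(1u << 0);                  /* one error becomes pending */
        printf("pending=0x%x\n", pending());    /* 0x1 */
        ack_errors(gerror);                     /* ack by matching GERROR    */
        printf("pending=0x%x\n", pending());    /* 0x0 */
        return 0;
    }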
8
9
Signed-off-by: Eric Auger <eric.auger@redhat.com>
10
Signed-off-by: Prem Mallappa <prem.mallappa@broadcom.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1524665762-31355-6-git-send-email-eric.auger@redhat.com
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20220501055028.646596-24-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
7
---
15
hw/arm/smmuv3-internal.h | 14 +++++++++
8
target/arm/cpu.h | 15 +++++++++++++++
16
hw/arm/smmuv3.c | 64 ++++++++++++++++++++++++++++++++++++++++
9
1 file changed, 15 insertions(+)
17
hw/arm/trace-events | 3 ++
18
3 files changed, 81 insertions(+)
19
10
20
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/smmuv3-internal.h
13
--- a/target/arm/cpu.h
23
+++ b/hw/arm/smmuv3-internal.h
14
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ static inline uint32_t smmuv3_idreg(int regoffset)
15
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ssbs(const ARMISARegisters *id)
25
return smmuv3_ids[regoffset / 4];
16
return FIELD_EX32(id->id_pfr2, ID_PFR2, SSBS) != 0;
26
}
17
}
27
18
28
+static inline bool smmuv3_eventq_irq_enabled(SMMUv3State *s)
19
+static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
29
+{
20
+{
30
+ return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, EVENTQ_IRQEN);
21
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
31
+}
22
+}
32
+
23
+
33
+static inline bool smmuv3_gerror_irq_enabled(SMMUv3State *s)
24
/*
25
* 64-bit feature tests via id registers.
26
*/
27
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
28
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
29
}
30
31
+static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
34
+{
32
+{
35
+ return FIELD_EX32(s->irq_ctrl, IRQ_CTRL, GERROR_IRQEN);
33
+ return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
36
+}
34
+}
37
+
35
+
38
+/* public until callers get introduced */
36
static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
39
+void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask);
37
{
40
+void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t gerrorn);
38
return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
41
+
39
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_tts2uxn(const ARMISARegisters *id)
42
#endif
40
return isar_feature_aa64_tts2uxn(id) || isar_feature_aa32_tts2uxn(id);
43
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
41
}
44
index XXXXXXX..XXXXXXX 100644
42
45
--- a/hw/arm/smmuv3.c
43
+static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
46
+++ b/hw/arm/smmuv3.c
47
@@ -XXX,XX +XXX,XX @@
48
#include "hw/arm/smmuv3.h"
49
#include "smmuv3-internal.h"
50
51
+/**
52
+ * smmuv3_trigger_irq - pulse @irq if enabled and update
53
+ * GERROR register in case of GERROR interrupt
54
+ *
55
+ * @irq: irq type
56
+ * @gerror_mask: mask of gerrors to toggle (relevant if @irq is GERROR)
57
+ */
58
+void smmuv3_trigger_irq(SMMUv3State *s, SMMUIrq irq, uint32_t gerror_mask)
59
+{
44
+{
60
+
45
+ return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
61
+ bool pulse = false;
62
+
63
+ switch (irq) {
64
+ case SMMU_IRQ_EVTQ:
65
+ pulse = smmuv3_eventq_irq_enabled(s);
66
+ break;
67
+ case SMMU_IRQ_PRIQ:
68
+ qemu_log_mask(LOG_UNIMP, "PRI not yet supported\n");
69
+ break;
70
+ case SMMU_IRQ_CMD_SYNC:
71
+ pulse = true;
72
+ break;
73
+ case SMMU_IRQ_GERROR:
74
+ {
75
+ uint32_t pending = s->gerror ^ s->gerrorn;
76
+ uint32_t new_gerrors = ~pending & gerror_mask;
77
+
78
+ if (!new_gerrors) {
79
+ /* only toggle non pending errors */
80
+ return;
81
+ }
82
+ s->gerror ^= new_gerrors;
83
+ trace_smmuv3_write_gerror(new_gerrors, s->gerror);
84
+
85
+ pulse = smmuv3_gerror_irq_enabled(s);
86
+ break;
87
+ }
88
+ }
89
+ if (pulse) {
90
+ trace_smmuv3_trigger_irq(irq);
91
+ qemu_irq_pulse(s->irq[irq]);
92
+ }
93
+}
46
+}
94
+
47
+
95
+void smmuv3_write_gerrorn(SMMUv3State *s, uint32_t new_gerrorn)
48
/*
96
+{
49
* Forward to the above feature tests given an ARMCPU pointer.
97
+ uint32_t pending = s->gerror ^ s->gerrorn;
50
*/
98
+ uint32_t toggled = s->gerrorn ^ new_gerrorn;
99
+
100
+ if (toggled & ~pending) {
101
+ qemu_log_mask(LOG_GUEST_ERROR,
102
+ "guest toggles non pending errors = 0x%x\n",
103
+ toggled & ~pending);
104
+ }
105
+
106
+ /*
107
+ * We do not raise any error in case guest toggles bits corresponding
108
+ * to not active IRQs (CONSTRAINED UNPREDICTABLE)
109
+ */
110
+ s->gerrorn = new_gerrorn;
111
+
112
+ trace_smmuv3_write_gerrorn(toggled & pending, s->gerrorn);
113
+}
114
+
115
static void smmuv3_init_regs(SMMUv3State *s)
116
{
117
/**
118
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
119
index XXXXXXX..XXXXXXX 100644
120
--- a/hw/arm/trace-events
121
+++ b/hw/arm/trace-events
122
@@ -XXX,XX +XXX,XX @@ smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "base
123
124
#hw/arm/smmuv3.c
125
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
126
+smmuv3_trigger_irq(int irq) "irq=%d"
127
+smmuv3_write_gerror(uint32_t toggled, uint32_t gerror) "toggled=0x%x, new GERROR=0x%x"
128
+smmuv3_write_gerrorn(uint32_t acked, uint32_t gerrorn) "acked=0x%x, new GERRORN=0x%x"
129
--
51
--
130
2.17.0
52
2.25.1
131
132
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
At the moment, the SMMUv3 does not support notification on
3
Add the aa64 predicate for detecting RAS support from id registers.
4
TLB invalidation. So let's log an error as soon as such a notifier
4
We already have the aa32 version from the M-profile work.
5
gets enabled.
5
Add the 'any' predicate for testing both aa64 and aa32.
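These predicates are just field extractions from the ID registers compared against a minimum value. A generic standalone sketch of the pattern (made-up register value and field position, not the real ID_AA64PFR0 layout):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Extract a 4-bit ID-register field starting at 'shift'. */
    static unsigned id_field(uint64_t reg, unsigned shift)
    {
        return (reg >> shift) & 0xf;
    }

    /* Feature is present when the field is non-zero. */
    static bool feature_present(uint64_t idreg, unsigned shift)
    {
        return id_field(idreg, shift) != 0;
    }

    int main(void)
    {
        uint64_t idreg = 0x0000000000100000ull;     /* hypothetical value */

        printf("%d\n", feature_present(idreg, 20)); /* field at bit 20 -> 1 */
        printf("%d\n", feature_present(idreg, 28)); /* field at bit 28 -> 0 */
        return 0;
    }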
6
6
7
Signed-off-by: Eric Auger <eric.auger@redhat.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 1524665762-31355-11-git-send-email-eric.auger@redhat.com
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-34-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
11
---
12
hw/arm/smmuv3.c | 11 +++++++++++
12
target/arm/cpu.h | 10 ++++++++++
13
1 file changed, 11 insertions(+)
13
1 file changed, 10 insertions(+)
14
14
15
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/smmuv3.c
17
--- a/target/arm/cpu.h
18
+++ b/hw/arm/smmuv3.c
18
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ static void smmuv3_class_init(ObjectClass *klass, void *data)
19
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
20
dc->realize = smmu_realize;
20
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
21
}
21
}
22
22
23
+static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
23
+static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
24
+ IOMMUNotifierFlag old,
25
+ IOMMUNotifierFlag new)
26
+{
24
+{
27
+ if (old == IOMMU_NOTIFIER_NONE) {
25
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
28
+ warn_report("SMMUV3 does not support vhost/vfio integration yet: "
29
+ "devices of those types will not function properly");
30
+ }
31
+}
26
+}
32
+
27
+
33
static void smmuv3_iommu_memory_region_class_init(ObjectClass *klass,
28
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
34
void *data)
35
{
29
{
36
IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
30
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
37
31
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
38
imrc->translate = smmuv3_translate;
32
return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
39
+ imrc->notify_flag_changed = smmuv3_notify_flag_changed;
40
}
33
}
41
34
42
static const TypeInfo smmuv3_type_info = {
35
+static inline bool isar_feature_any_ras(const ARMISARegisters *id)
36
+{
37
+ return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
38
+}
39
+
40
/*
41
* Forward to the above feature tests given an ARMCPU pointer.
42
*/
43
--
43
--
44
2.17.0
44
2.25.1
45
46
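As a rough illustration of what the new predicates compute, the stand-alone sketch below mimics isar_feature_aa64_ras() outside of QEMU: it extracts the RAS field (bits [31:28] of ID_AA64PFR0_EL1) from a made-up ID register value and applies the same non-zero test, and the 'any' variant is then just the OR of the aa64 and aa32 results. The helper name and the sample value are hypothetical; inside QEMU the checks are normally reached through the cpu_isar_feature() wrapper rather than called directly.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS):
 * the RAS field lives in bits [31:28] of ID_AA64PFR0_EL1. */
static unsigned extract_ras(uint64_t id_aa64pfr0)
{
    return (id_aa64pfr0 >> 28) & 0xf;
}

int main(void)
{
    uint64_t id_aa64pfr0 = 0x0000000010111112ull;   /* hypothetical ID value */
    int aa64_ras = extract_ras(id_aa64pfr0) != 0;   /* the aa64 predicate */
    int aa32_ras = 0;                               /* pretend aa32 reports no RAS */
    int any_ras  = aa64_ras || aa32_ras;            /* the 'any' predicate */

    printf("FEAT_RAS: aa64=%d aa32=%d any=%d\n", aa64_ras, aa32_ras, any_ras);
    return 0;
}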
From: Patrick Oppenlander <patrick.oppenlander@gmail.com>

The character frontend needs to be notified that the uart receive buffer
is empty and ready to handle another character.

Previously, the uart only worked correctly when receiving one character
at a time.

Signed-off-by: Patrick Oppenlander <patrick.oppenlander@gmail.com>
Message-id: CAEg67GkRTw=cXei3o9hvpxG_L4zSrNzR0bFyAgny+sSEUb_kPw@mail.gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/cmsdk-apb-uart.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/char/cmsdk-apb-uart.c b/hw/char/cmsdk-apb-uart.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/cmsdk-apb-uart.c
+++ b/hw/char/cmsdk-apb-uart.c
@@ -XXX,XX +XXX,XX @@ static uint64_t uart_read(void *opaque, hwaddr offset, unsigned size)
         r = s->rxbuf;
         s->state &= ~R_STATE_RXFULL_MASK;
         cmsdk_apb_uart_update(s);
+        qemu_chr_fe_accept_input(&s->chr);
         break;
     case A_STATE:
         r = s->state;
-- 
2.17.0

From: Alex Zuepke <alex.zuepke@tum.de>

The ARMv8 manual defines that PMUSERENR_EL0.ER enables read access
to both PMXEVCNTR_EL0 and PMEVCNTR<n>_EL0 registers; however, we
only use it for PMXEVCNTR_EL0. Extend it to PMEVCNTR<n>_EL0 as well.

Signed-off-by: Alex Zuepke <alex.zuepke@tum.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220428132717.84190-1-alex.zuepke@tum.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
               .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
               .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
               .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
-              .accessfn = pmreg_access },
+              .accessfn = pmreg_access_xevcntr },
             { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
-              .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
+              .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access_xevcntr,
               .type = ARM_CP_IO,
               .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
               .raw_readfn = pmevcntr_rawread,
-- 
2.25.1
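To observe the effect of the PMEVCNTR<n>_EL0 change from EL0, a test along these lines could be used. This is a hypothetical AArch64-only snippet, not part of the patch, and it assumes privileged software has already set PMUSERENR_EL0.ER and enabled the counters; without that, the reads would still trap.

/* Hypothetical EL0 test, AArch64 only (e.g. aarch64-linux-gnu-gcc pmread.c). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t xevcntr, evcntr0;

    /* Readable at EL0 with PMUSERENR_EL0.ER=1 both before and after the patch. */
    __asm__ volatile("mrs %0, pmxevcntr_el0" : "=r"(xevcntr));
    /* Previously trapped under QEMU even with ER=1; now it should read as well. */
    __asm__ volatile("mrs %0, pmevcntr0_el0" : "=r"(evcntr0));

    printf("PMXEVCNTR_EL0=%llu PMEVCNTR0_EL0=%llu\n",
           (unsigned long long)xevcntr, (unsigned long long)evcntr0);
    return 0;
}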
Deleted patch

For v8M the instructions VLLDM and VLSTM support lazy saving
and restoring of the secure floating-point registers. Even
if the floating point extension is not implemented, these
instructions must act as NOPs in Secure state, so they can
be used as part of the secure-to-nonsecure call sequence.

Fixes: https://bugs.launchpad.net/qemu/+bug/1768295
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180503105730.5958-1-peter.maydell@linaro.org
---
 target/arm/translate.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
         /* Coprocessor. */
         if (arm_dc_feature(s, ARM_FEATURE_M)) {
             /* We don't currently implement M profile FP support,
-             * so this entire space should give a NOCP fault.
+             * so this entire space should give a NOCP fault, with
+             * the exception of the v8M VLLDM and VLSTM insns, which
+             * must be NOPs in Secure state and UNDEF in Nonsecure state.
              */
+            if (arm_dc_feature(s, ARM_FEATURE_V8) &&
+                (insn & 0xffa00f00) == 0xec200a00) {
+                /* 0b1110_1100_0x1x_xxxx_xxxx_1010_xxxx_xxxx
+                 * - VLLDM, VLSTM
+                 * We choose to UNDEF if the RAZ bits are non-zero.
+                 */
+                if (!s->v8m_secure || (insn & 0x0040f0ff)) {
+                    goto illegal_op;
+                }
+                /* Just NOP since FP support is not implemented */
+                break;
+            }
+            /* All other insns: NOCP */
             gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
                                default_exception_el(s));
             break;
-- 
2.17.0
48
diff view generated by jsdifflib