Hi; here's the latest arm pullreq. This is mostly patches from
RTH, plus a couple of other more minor things. Switching to
PCREL is the big one, hopefully should improve performance.

thanks
-- PMM

The following changes since commit 214a8da23651f2472b296b3293e619fd58d9e212:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2022-10-18 11:14:31 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221020

for you to fetch changes up to 5db899303799e49209016a93289b8694afa1449e:

  hw/ide/microdrive: Use device_cold_reset() for self-resets (2022-10-20 12:11:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * Switch to TARGET_TB_PCREL
 * More pagetable-walk refactoring preparatory to HAFDBS
 * update the cortex-a15 MIDR to latest rev
 * hw/char/pl011: fix baud rate calculation
 * hw/ide/microdrive: Use device_cold_reset() for self-resets

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: update the cortex-a15 MIDR to latest rev

Baruch Siach (1):
      hw/char/pl011: fix baud rate calculation

Peter Maydell (1):
      hw/ide/microdrive: Use device_cold_reset() for self-resets

Richard Henderson (21):
      target/arm: Enable TARGET_PAGE_ENTRY_EXTRA
      target/arm: Use probe_access_full for MTE
      target/arm: Use probe_access_full for BTI
      target/arm: Add ARMMMUIdx_Phys_{S,NS}
      target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx
      target/arm: Restrict tlb flush from vttbr_write to vmid change
      target/arm: Split out S1Translate type
      target/arm: Plumb debug into S1Translate
      target/arm: Move be test for regime into S1TranslateResult
      target/arm: Use softmmu tlbs for page table walking
      target/arm: Split out get_phys_addr_twostage
      target/arm: Use bool consistently for get_phys_addr subroutines
      target/arm: Introduce curr_insn_len
      target/arm: Change gen_goto_tb to work on displacements
      target/arm: Change gen_*set_pc_im to gen_*update_pc
      target/arm: Change gen_exception_insn* to work on displacements
      target/arm: Remove gen_exception_internal_insn pc argument
      target/arm: Change gen_jmp* to work on displacements
      target/arm: Introduce gen_pc_plus_diff for aarch64
      target/arm: Introduce gen_pc_plus_diff for aarch32
      target/arm: Enable TARGET_TB_PCREL

 target/arm/cpu-param.h         |  17 +-
 target/arm/cpu.h               |  47 ++--
 target/arm/internals.h         |   1 +
 target/arm/sve_ldst_internal.h |   1 +
 target/arm/translate-a32.h     |   2 +-
 target/arm/translate.h         |  66 ++++-
 hw/char/pl011.c                |   2 +-
 hw/ide/microdrive.c            |   8 +-
 target/arm/cpu.c               |  23 +-
 target/arm/cpu_tcg.c           |   4 +-
 target/arm/helper.c            | 155 +++++++++---
 target/arm/mte_helper.c        |  62 ++---
 target/arm/ptw.c               | 535 +++++++++++++++++++++++++----------------
 target/arm/sve_helper.c        |  54 ++---
 target/arm/tlb_helper.c        |  24 +-
 target/arm/translate-a64.c     | 220 ++++++++++-------
 target/arm/translate-m-nocp.c  |   8 +-
 target/arm/translate-mve.c     |   2 +-
 target/arm/translate-vfp.c     |  10 +-
 target/arm/translate.c         | 284 +++++++++++++---------
 20 files changed, 918 insertions(+), 607 deletions(-)
From: Baruch Siach <baruch@tkos.co.il>

The PL011 TRM says that "UARTIBRD = 0 is invalid and UARTFBRD is ignored
when this is the case". But the code looks at FBRD for the invalid case.
Fix this.

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Message-id: 1408f62a2e45665816527d4845ffde650957d5ab.1665051588.git.baruchs-c@neureality.ai
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static unsigned int pl011_get_baudrate(const PL011State *s)
 {
     uint64_t clk;

-    if (s->fbrd == 0) {
+    if (s->ibrd == 0) {
         return 0;
     }

--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The CPUTLBEntryFull structure now stores the original pte attributes, as
well as the physical address. Therefore, we no longer need a separate
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h               |  1 -
 target/arm/sve_ldst_internal.h |  1 +
 target/arm/mte_helper.c        | 62 ++++++++++------------------------
 target/arm/sve_helper.c        | 54 ++++++++++-------------------
 target/arm/tlb_helper.c        |  4 ----
 5 files changed, 36 insertions(+), 86 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
  * generic target bits directly.
  */
 #define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
-#define arm_tlb_mte_tagged(x) (typecheck_memtxattrs(x)->target_tlb_bit1)

 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_ldst_internal.h
+++ b/target/arm/sve_ldst_internal.h
@@ -XXX,XX +XXX,XX @@ typedef struct {
     void *host;
     int flags;
     MemTxAttrs attrs;
+    bool tagged;
 } SVEHostPage;

 bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
                           TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
     return tags + index;
 #else
-    uintptr_t index;
     CPUTLBEntryFull *full;
+    MemTxAttrs attrs;
     int in_page, flags;
-    ram_addr_t ptr_ra;
     hwaddr ptr_paddr, tag_paddr, xlat;
     MemoryRegion *mr;
     ARMASIdx tag_asi;
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      * valid. Indicate to probe_access_flags no-fault, then assert that
      * we received a valid page.
      */
-    flags = probe_access_flags(env, ptr, ptr_access, ptr_mmu_idx,
-                               ra == 0, &host, ra);
+    flags = probe_access_full(env, ptr, ptr_access, ptr_mmu_idx,
+                              ra == 0, &host, &full, ra);
     assert(!(flags & TLB_INVALID_MASK));

-    /*
-     * Find the CPUTLBEntryFull for ptr.  This *must* be present in the TLB
-     * because we just found the mapping.
-     * TODO: Perhaps there should be a cputlb helper that returns a
-     * matching tlb entry + iotlb entry.
-     */
-    index = tlb_index(env, ptr_mmu_idx, ptr);
-# ifdef CONFIG_DEBUG_TCG
-    {
-        CPUTLBEntry *entry = tlb_entry(env, ptr_mmu_idx, ptr);
-        target_ulong comparator = (ptr_access == MMU_DATA_LOAD
-                                   ? entry->addr_read
-                                   : tlb_addr_write(entry));
-        g_assert(tlb_hit(comparator, ptr));
-    }
-# endif
-    full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index];
-
     /* If the virtual page MemAttr != Tagged, access unchecked. */
-    if (!arm_tlb_mte_tagged(&full->attrs)) {
+    if (full->pte_attrs != 0xf0) {
         return NULL;
     }
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
         return NULL;
     }

+    /*
+     * Remember these values across the second lookup below,
+     * which may invalidate this pointer via tlb resize.
+     */
+    ptr_paddr = full->phys_addr;
+    attrs = full->attrs;
+    full = NULL;
+
     /*
      * The Normal memory access can extend to the next page.  E.g. a single
      * 8-byte access to the last byte of a page will check only the last
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      */
     in_page = -(ptr | TARGET_PAGE_MASK);
     if (unlikely(ptr_size > in_page)) {
-        void *ignore;
-        flags |= probe_access_flags(env, ptr + in_page, ptr_access,
-                                    ptr_mmu_idx, ra == 0, &ignore, ra);
+        flags |= probe_access_full(env, ptr + in_page, ptr_access,
+                                   ptr_mmu_idx, ra == 0, &host, &full, ra);
         assert(!(flags & TLB_INVALID_MASK));
     }

@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     if (unlikely(flags & TLB_WATCHPOINT)) {
         int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
         assert(ra != 0);
-        cpu_check_watchpoint(env_cpu(env), ptr, ptr_size,
-                             full->attrs, wp, ra);
+        cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
     }

-    /*
-     * Find the physical address within the normal mem space.
-     * The memory region lookup must succeed because TLB_MMIO was
-     * not set in the cputlb lookup above.
-     */
-    mr = memory_region_from_host(host, &ptr_ra);
-    tcg_debug_assert(mr != NULL);
-    tcg_debug_assert(memory_region_is_ram(mr));
-    ptr_paddr = ptr_ra;
-    do {
-        ptr_paddr += mr->addr;
-        mr = mr->container;
-    } while (mr);
-
     /* Convert to the physical address in tag space. */
     tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1);

     /* Look up the address in tag space. */
-    tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
+    tag_asi = attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
     tag_as = cpu_get_address_space(env_cpu(env), tag_asi);
     mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL,
-                                 tag_access == MMU_DATA_STORE,
-                                 full->attrs);
+                                 tag_access == MMU_DATA_STORE, attrs);

     /*
      * Note that @mr will never be NULL.  If there is nothing in the address
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
      */
     addr = useronly_clean_ptr(addr);

+#ifdef CONFIG_USER_ONLY
     flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
                                &info->host, retaddr);
+    memset(&info->attrs, 0, sizeof(info->attrs));
+    /* Require both ANON and MTE; see allocation_tag_mem(). */
+    info->tagged = (flags & PAGE_ANON) && (flags & PAGE_MTE);
+#else
+    CPUTLBEntryFull *full;
+    flags = probe_access_full(env, addr, access_type, mmu_idx, nofault,
+                              &info->host, &full, retaddr);
+    info->attrs = full->attrs;
+    info->tagged = full->pte_attrs == 0xf0;
+#endif
     info->flags = flags;

     if (flags & TLB_INVALID_MASK) {
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,

     /* Ensure that info->host[] is relative to addr, not addr + mem_off. */
     info->host -= mem_off;
-
-#ifdef CONFIG_USER_ONLY
-    memset(&info->attrs, 0, sizeof(info->attrs));
-    /* Require both MAP_ANON and PROT_MTE -- see allocation_tag_mem. */
-    arm_tlb_mte_tagged(&info->attrs) =
-        (flags & PAGE_ANON) && (flags & PAGE_MTE);
-#else
-    /*
-     * Find the iotlbentry for addr and return the transaction attributes.
-     * This *must* be present in the TLB because we just found the mapping.
-     */
-    {
-        uintptr_t index = tlb_index(env, mmu_idx, addr);
-
-# ifdef CONFIG_DEBUG_TCG
-        CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-        target_ulong comparator = (access_type == MMU_DATA_LOAD
-                                   ? entry->addr_read
-                                   : tlb_addr_write(entry));
-        g_assert(tlb_hit(comparator, addr));
-# endif
-
-        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
-        info->attrs = full->attrs;
-    }
-#endif
-
     return true;
 }
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
     intptr_t mem_off, reg_off, reg_last;

     /* Process the page only if MemAttr == Tagged. */
-    if (arm_tlb_mte_tagged(&info->page[0].attrs)) {
+    if (info->page[0].tagged) {
         mem_off = info->mem_off_first[0];
         reg_off = info->reg_off_first[0];
         reg_last = info->reg_off_split;
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
     }

     mem_off = info->mem_off_first[1];
-    if (mem_off >= 0 && arm_tlb_mte_tagged(&info->page[1].attrs)) {
+    if (mem_off >= 0 && info->page[1].tagged) {
         reg_off = info->reg_off_first[1];
         reg_last = info->reg_off_last[1];

@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
      * Disable MTE checking if the Tagged bit is not set.  Since TBI must
      * be set within MTEDESC for MTE, !mtedesc => !mte_active.
      */
-    if (!arm_tlb_mte_tagged(&info.page[0].attrs)) {
+    if (!info.page[0].tagged) {
         mtedesc = 0;
     }

@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                 cpu_check_watchpoint(env_cpu(env), addr, msize,
                                      info.attrs, BP_MEM_READ, retaddr);
             }
-            if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
+            if (mtedesc && info.tagged) {
                 mte_check(env, mtedesc, addr, retaddr);
             }
             if (unlikely(info.flags & TLB_MMIO)) {
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                          msize, info.attrs,
                                          BP_MEM_READ, retaddr);
                 }
-                if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
+                if (mtedesc && info.tagged) {
                     mte_check(env, mtedesc, addr, retaddr);
                 }
                 tlb_fn(env, &scratch, reg_off, addr, retaddr);
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                 (env_cpu(env), addr, msize) & BP_MEM_READ)) {
                 goto fault;
             }
-            if (mtedesc &&
-                arm_tlb_mte_tagged(&info.attrs) &&
-                !mte_probe(env, mtedesc, addr)) {
+            if (mtedesc && info.tagged && !mte_probe(env, mtedesc, addr)) {
                 goto fault;
             }

@@ -XXX,XX +XXX,XX @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                      info.attrs, BP_MEM_WRITE, retaddr);
             }

-            if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
+            if (mtedesc && info.tagged) {
                 mte_check(env, mtedesc, addr, retaddr);
             }
         }
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
             res.f.phys_addr &= TARGET_PAGE_MASK;
             address &= TARGET_PAGE_MASK;
         }
-        /* Notice and record tagged memory. */
-        if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
-            arm_tlb_mte_tagged(&res.f.attrs) = true;
-        }

         res.f.pte_attrs = res.cacheattrs.attrs;
         res.f.shareability = res.cacheattrs.shareability;
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present.  Also handles the FIXME about
executing from device memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h     |  9 +++++----
 target/arm/cpu.h           | 13 -------------
 target/arm/internals.h     |  1 +
 target/arm/ptw.c           |  7 ++++---
 target/arm/translate-a64.c | 21 ++++++++++-----------
 5 files changed, 20 insertions(+), 31 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
  *
  * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
  * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
- * For shareability, as in the SH field of the VMSAv8-64 PTEs.
+ * For shareability and guarded, as in the SH and GP fields respectively
+ * of the VMSAv8-64 PTEs.
  */
 # define TARGET_PAGE_ENTRY_EXTRA  \
-    uint8_t pte_attrs;            \
-    uint8_t shareability;
-
+    uint8_t pte_attrs;            \
+    uint8_t shareability;         \
+    bool guarded;
 #endif

 #define NB_MMU_MODES 8
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c.  */
 extern const uint64_t pred_esz_masks[5];

-/* Helper for the macros below, validating the argument type. */
-static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
-{
-    return x;
-}
-
-/*
- * Lvalue macros for ARM TLB bits that we must cache in the TCG TLB.
- * Using these should be a bit more self-documenting than using the
- * generic target bits directly.
- */
-#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
-
 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
  * Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
     unsigned int attrs:8;
     unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
     bool is_s2_format:1;
+    bool guarded:1;              /* guarded bit of the v8-64 PTE */
 } ARMCacheAttrs;

 /* Fields that are valid upon success. */
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
          */
         result->f.attrs.secure = false;
     }
-    /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
-    if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
-        arm_tlb_bti_gp(&result->f.attrs) = true;
+
+    /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
+    if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
+        result->f.guarded = guarded;
     }

     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
 #ifdef CONFIG_USER_ONLY
     return page_get_flags(addr) & PAGE_BTI;
 #else
+    CPUTLBEntryFull *full;
+    void *host;
     int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
-    unsigned int index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    int flags;

     /*
      * We test this immediately after reading an insn, which means
-     * that any normal page must be in the TLB.  The only exception
-     * would be for executing from flash or device memory, which
-     * does not retain the TLB entry.
-     *
-     * FIXME: Assume false for those, for now.  We could use
-     * arm_cpu_get_phys_page_attrs_debug to re-read the page
-     * table entry even for that case.
+     * that the TLB entry must be present and valid, and thus this
+     * access will never raise an exception.
      */
-    return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
+    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
+                              false, &host, &full, 0);
+    assert(!(flags & TLB_INVALID_MASK));
+
+    return full->guarded;
 #endif
60
} else {
61
error_setg(errp, "Invalid gic-version value");
62
- error_append_hint(errp, "Valid values are 3, 2, host.\n");
63
+ error_append_hint(errp, "Valid values are 3, 2, host, max.\n");
64
}
65
}
131
}
66
132
67
--
133
--
68
2.16.2
134
2.25.1
69
70
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Not yet used, but add mmu indexes for 1-1 mapping
4
to physical addresses.
5
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180309153654.13518-8-f4bug@amsat.org
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
hw/sd/sdhci.c | 4 ++--
11
target/arm/cpu-param.h | 2 +-
9
1 file changed, 2 insertions(+), 2 deletions(-)
12
target/arm/cpu.h | 7 ++++++-
13
target/arm/ptw.c | 19 +++++++++++++++++--
14
3 files changed, 24 insertions(+), 4 deletions(-)
10
15
11
diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
16
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/sd/sdhci.c
18
--- a/target/arm/cpu-param.h
14
+++ b/hw/sd/sdhci.c
19
+++ b/target/arm/cpu-param.h
15
@@ -XXX,XX +XXX,XX @@ static void sdhci_read_block_from_card(SDHCIState *s)
20
@@ -XXX,XX +XXX,XX @@
16
for (index = 0; index < blk_size; index++) {
21
bool guarded;
17
data = sdbus_read_data(&s->sdbus);
22
#endif
18
if (!FIELD_EX32(s->hostctl2, SDHC_HOSTCTL2, EXECUTE_TUNING)) {
23
19
- /* Device is not in tunning */
24
-#define NB_MMU_MODES 8
20
+ /* Device is not in tuning */
25
+#define NB_MMU_MODES 10
21
s->fifo_buffer[index] = data;
26
27
#endif
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.h
31
+++ b/target/arm/cpu.h
32
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
33
* EL2 EL2&0 +PAN
34
* EL2 (aka NS PL2)
35
* EL3 (aka S PL1)
36
+ * Physical (NS & S)
37
*
38
- * for a total of 8 different mmu_idx.
39
+ * for a total of 10 different mmu_idx.
40
*
41
* R profile CPUs have an MPU, but can use the same set of MMU indexes
42
* as A profile. They only need to distinguish EL0 and EL1 (and
43
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
44
ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
45
ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
46
47
+ /* TLBs with 1-1 mapping to the physical address spaces. */
48
+ ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
49
+ ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
50
+
51
/*
52
* These are not allocated TLBs and are used only for AT system
53
* instructions or for the first stage of an S12 page table walk.
54
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
55
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/ptw.c
57
+++ b/target/arm/ptw.c
58
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
59
case ARMMMUIdx_E3:
60
break;
61
62
+ case ARMMMUIdx_Phys_NS:
63
+ case ARMMMUIdx_Phys_S:
64
+ /* No translation for physical address spaces. */
65
+ return true;
66
+
67
default:
68
g_assert_not_reached();
69
}
70
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
71
{
72
uint8_t memattr = 0x00; /* Device nGnRnE */
73
uint8_t shareability = 0; /* non-sharable */
74
+ int r_el;
75
76
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
77
- int r_el = regime_el(env, mmu_idx);
78
+ switch (mmu_idx) {
79
+ case ARMMMUIdx_Stage2:
80
+ case ARMMMUIdx_Stage2_S:
81
+ case ARMMMUIdx_Phys_NS:
82
+ case ARMMMUIdx_Phys_S:
83
+ break;
84
85
+ default:
86
+ r_el = regime_el(env, mmu_idx);
87
if (arm_el_is_aa64(env, r_el)) {
88
int pamax = arm_pamax(env_archcpu(env));
89
uint64_t tcr = env->cp15.tcr_el[r_el];
90
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
91
shareability = 2; /* outer sharable */
22
}
92
}
93
result->cacheattrs.is_s2_format = false;
94
+ break;
23
}
95
}
24
96
25
if (FIELD_EX32(s->hostctl2, SDHC_HOSTCTL2, EXECUTE_TUNING)) {
97
result->f.phys_addr = address;
26
- /* Device is in tunning */
98
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
27
+ /* Device is in tuning */
99
is_secure = arm_is_secure_below_el3(env);
28
s->hostctl2 &= ~R_SDHC_HOSTCTL2_EXECUTE_TUNING_MASK;
100
break;
29
s->hostctl2 |= R_SDHC_HOSTCTL2_SAMPLING_CLKSEL_MASK;
101
case ARMMMUIdx_Stage2:
30
s->prnsts &= ~(SDHC_DAT_LINE_ACTIVE | SDHC_DOING_READ |
102
+ case ARMMMUIdx_Phys_NS:
103
case ARMMMUIdx_MPrivNegPri:
104
case ARMMMUIdx_MUserNegPri:
105
case ARMMMUIdx_MPriv:
106
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
107
break;
108
case ARMMMUIdx_E3:
109
case ARMMMUIdx_Stage2_S:
110
+ case ARMMMUIdx_Phys_S:
111
case ARMMMUIdx_MSPrivNegPri:
112
case ARMMMUIdx_MSUserNegPri:
113
case ARMMMUIdx_MSPriv:
31
--
114
--
32
2.16.2
115
2.25.1
33
34
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The cortex A53 TRM specifies that bits 24 and 25 of the L2CTLR register
3
We had been marking this ARM_MMU_IDX_NOTLB; move it to a real tlb.
4
specify the number of cores in the processor, not the total number of
4
Flush the tlb when invalidating stage 1+2 translations. Re-use
5
cores in the system. To report this correctly on machines with multiple
5
alle1_tlbmask() for other instances of EL1&0 + Stage2.
6
CPU clusters (ARM's big.LITTLE or Xilinx's ZynqMP) we need to allow
7
the machine to overwrite this value. To do this let's add an optional
8
property.
9
6
10
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: ef01d95c0759e88f47f22d11b14c91512a658b4f.1520018138.git.alistair.francis@xilinx.com
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
11
---
15
target/arm/cpu.h | 5 +++++
12
target/arm/cpu-param.h | 2 +-
16
target/arm/cpu.c | 6 ++++++
13
target/arm/cpu.h | 23 ++++---
17
target/arm/cpu64.c | 6 ++++--
14
target/arm/helper.c | 151 ++++++++++++++++++++++++++++++-----------
18
3 files changed, 15 insertions(+), 2 deletions(-)
15
3 files changed, 127 insertions(+), 49 deletions(-)
19
16
17
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu-param.h
20
+++ b/target/arm/cpu-param.h
21
@@ -XXX,XX +XXX,XX @@
22
bool guarded;
23
#endif
24
25
-#define NB_MMU_MODES 10
26
+#define NB_MMU_MODES 12
27
28
#endif
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
31
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
32
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
33
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
25
/* Uniprocessor system with MP extensions */
34
* EL2 (aka NS PL2)
26
bool mp_is_up;
35
* EL3 (aka S PL1)
27
36
* Physical (NS & S)
28
+ /* Specify the number of cores in this CPU cluster. Used for the L2CTLR
37
+ * Stage2 (NS & S)
29
+ * register.
38
*
39
- * for a total of 10 different mmu_idx.
40
+ * for a total of 12 different mmu_idx.
41
*
42
* R profile CPUs have an MPU, but can use the same set of MMU indexes
43
* as A profile. They only need to distinguish EL0 and EL1 (and
44
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
45
ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
46
ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
47
48
+ /*
49
+ * Used for second stage of an S12 page table walk, or for descriptor
50
+ * loads during first stage of an S1 page table walk. Note that both
51
+ * are in use simultaneously for SecureEL2: the security state for
52
+ * the S2 ptw is selected by the NS bit from the S1 ptw.
30
+ */
53
+ */
31
+ int32_t core_count;
54
+ ARMMMUIdx_Stage2 = 10 | ARM_MMU_IDX_A,
32
+
55
+ ARMMMUIdx_Stage2_S = 11 | ARM_MMU_IDX_A,
33
/* The instance init functions for implementation-specific subclasses
56
+
34
* set these fields to specify the implementation-dependent values of
57
/*
35
* various constant registers and reset values of non-constant
58
* These are not allocated TLBs and are used only for AT system
36
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
59
* instructions or for the first stage of an S12 page table walk.
60
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
61
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
62
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
63
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
64
- /*
65
- * Not allocated a TLB: used only for second stage of an S12 page
66
- * table walk, or for descriptor loads during first stage of an S1
67
- * page table walk. Note that if we ever want to have a TLB for this
68
- * then various TLB flush insns which currently are no-ops or flush
69
- * only stage 1 MMU indexes will need to change to flush stage 2.
70
- */
71
- ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
72
- ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
73
74
/*
75
* M-profile.
76
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
77
TO_CORE_BIT(E20_2),
78
TO_CORE_BIT(E20_2_PAN),
79
TO_CORE_BIT(E3),
80
+ TO_CORE_BIT(Stage2),
81
+ TO_CORE_BIT(Stage2_S),
82
83
TO_CORE_BIT(MUser),
84
TO_CORE_BIT(MPriv),
85
diff --git a/target/arm/helper.c b/target/arm/helper.c
37
index XXXXXXX..XXXXXXX 100644
86
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/cpu.c
87
--- a/target/arm/helper.c
39
+++ b/target/arm/cpu.c
88
+++ b/target/arm/helper.c
40
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
89
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
41
cs->num_ases = 1;
90
raw_write(env, ri, value);
91
}
92
93
+static int alle1_tlbmask(CPUARMState *env)
94
+{
95
+ /*
96
+ * Note that the 'ALL' scope must invalidate both stage 1 and
97
+ * stage 2 translations, whereas most other scopes only invalidate
98
+ * stage 1 translations.
99
+ */
100
+ return (ARMMMUIdxBit_E10_1 |
101
+ ARMMMUIdxBit_E10_1_PAN |
102
+ ARMMMUIdxBit_E10_0 |
103
+ ARMMMUIdxBit_Stage2 |
104
+ ARMMMUIdxBit_Stage2_S);
105
+}
106
+
107
+
108
/* IS variants of TLB operations must affect all cores */
109
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
uint64_t value)
111
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
{
113
CPUState *cs = env_cpu(env);
114
115
- tlb_flush_by_mmuidx(cs,
116
- ARMMMUIdxBit_E10_1 |
117
- ARMMMUIdxBit_E10_1_PAN |
118
- ARMMMUIdxBit_E10_0);
119
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
120
}
121
122
static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
123
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
{
125
CPUState *cs = env_cpu(env);
126
127
- tlb_flush_by_mmuidx_all_cpus_synced(cs,
128
- ARMMMUIdxBit_E10_1 |
129
- ARMMMUIdxBit_E10_1_PAN |
130
- ARMMMUIdxBit_E10_0);
131
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
132
}
133
134
135
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
136
ARMMMUIdxBit_E2);
137
}
138
139
+static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
140
+ uint64_t value)
141
+{
142
+ CPUState *cs = env_cpu(env);
143
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
144
+
145
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
146
+}
147
+
148
+static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
149
+ uint64_t value)
150
+{
151
+ CPUState *cs = env_cpu(env);
152
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
153
+
154
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
155
+}
156
+
157
static const ARMCPRegInfo cp_reginfo[] = {
158
/* Define the secure and non-secure FCSE identifier CP registers
159
* separately because there is no secure bank in V8 (no _EL3). This allows
160
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
162
/*
163
* A change in VMID to the stage2 page table (Stage2) invalidates
164
- * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
165
+ * the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
166
*/
167
if (raw_read(env, ri) != value) {
168
- uint16_t mask = ARMMMUIdxBit_E10_1 |
169
- ARMMMUIdxBit_E10_1_PAN |
170
- ARMMMUIdxBit_E10_0;
171
- tlb_flush_by_mmuidx(cs, mask);
172
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
173
raw_write(env, ri, value);
42
}
174
}
43
cpu_address_space_init(cs, ARMASIdx_NS, "cpu-memory", cs->memory);
175
}
44
+
176
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
45
+ /* No core_count specified, default to smp_cpus. */
177
}
46
+ if (cpu->core_count == -1) {
178
}
47
+ cpu->core_count = smp_cpus;
179
180
-static int alle1_tlbmask(CPUARMState *env)
181
-{
182
- /*
183
- * Note that the 'ALL' scope must invalidate both stage 1 and
184
- * stage 2 translations, whereas most other scopes only invalidate
185
- * stage 1 translations.
186
- */
187
- return (ARMMMUIdxBit_E10_1 |
188
- ARMMMUIdxBit_E10_1_PAN |
189
- ARMMMUIdxBit_E10_0);
190
-}
191
-
192
static int e2_tlbmask(CPUARMState *env)
193
{
194
return (ARMMMUIdxBit_E20_0 |
195
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
ARMMMUIdxBit_E3, bits);
197
}
198
199
+static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
200
+{
201
+ /*
202
+ * The MSB of value is the NS field, which only applies if SEL2
203
+ * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
204
+ */
205
+ return (value >= 0
206
+ && cpu_isar_feature(aa64_sel2, env_archcpu(env))
207
+ && arm_is_secure_below_el3(env)
208
+ ? ARMMMUIdxBit_Stage2_S
209
+ : ARMMMUIdxBit_Stage2);
210
+}
211
+
212
+static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
+ uint64_t value)
214
+{
215
+ CPUState *cs = env_cpu(env);
216
+ int mask = ipas2e1_tlbmask(env, value);
217
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
218
+
219
+ if (tlb_force_broadcast(env)) {
220
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
221
+ } else {
222
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
48
+ }
223
+ }
224
+}
225
+
226
+static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
227
+ uint64_t value)
228
+{
229
+ CPUState *cs = env_cpu(env);
230
+ int mask = ipas2e1_tlbmask(env, value);
231
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
232
+
233
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
234
+}
235
+
236
#ifdef TARGET_AARCH64
237
typedef struct {
238
uint64_t base;
239
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
240
241
do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
242
}
243
+
244
+static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
245
+ uint64_t value)
246
+{
247
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
248
+ tlb_force_broadcast(env));
249
+}
250
+
251
+static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
252
+ const ARMCPRegInfo *ri,
253
+ uint64_t value)
254
+{
255
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
256
+}
49
#endif
257
#endif
50
258
51
qemu_init_vcpu(cs);
259
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
52
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_properties[] = {
260
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
53
DEFINE_PROP_UINT64("mp-affinity", ARMCPU,
261
.writefn = tlbi_aa64_vae1_write },
54
mp_affinity, ARM64_AFFINITY_INVALID),
262
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
55
DEFINE_PROP_INT32("node-id", ARMCPU, node_id, CPU_UNSET_NUMA_NODE_ID),
263
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
56
+ DEFINE_PROP_INT32("core-count", ARMCPU, core_count, -1),
264
- .access = PL2_W, .type = ARM_CP_NOP },
57
DEFINE_PROP_END_OF_LIST()
265
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
58
};
266
+ .writefn = tlbi_aa64_ipas2e1is_write },
59
267
{ .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
60
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
268
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
61
index XXXXXXX..XXXXXXX 100644
269
- .access = PL2_W, .type = ARM_CP_NOP },
62
--- a/target/arm/cpu64.c
270
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
63
+++ b/target/arm/cpu64.c
271
+ .writefn = tlbi_aa64_ipas2e1is_write },
64
@@ -XXX,XX +XXX,XX @@ static inline void unset_feature(CPUARMState *env, int feature)
272
{ .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
65
#ifndef CONFIG_USER_ONLY
273
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
66
static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
274
.access = PL2_W, .type = ARM_CP_NO_RAW,
67
{
275
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
68
- /* Number of processors is in [25:24]; otherwise we RAZ */
276
.writefn = tlbi_aa64_alle1is_write },
69
- return (smp_cpus - 1) << 24;
277
{ .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
70
+ ARMCPU *cpu = arm_env_get_cpu(env);
278
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
71
+
279
- .access = PL2_W, .type = ARM_CP_NOP },
72
+ /* Number of cores is in [25:24]; otherwise we RAZ */
280
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
73
+ return (cpu->core_count - 1) << 24;
281
+ .writefn = tlbi_aa64_ipas2e1_write },
74
}
282
{ .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
75
#endif
283
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
76
284
- .access = PL2_W, .type = ARM_CP_NOP },
285
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
286
+ .writefn = tlbi_aa64_ipas2e1_write },
287
{ .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
288
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
289
.access = PL2_W, .type = ARM_CP_NO_RAW,
290
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
291
.writefn = tlbimva_hyp_is_write },
292
{ .name = "TLBIIPAS2",
293
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
294
- .type = ARM_CP_NOP, .access = PL2_W },
295
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
296
+ .writefn = tlbiipas2_hyp_write },
297
{ .name = "TLBIIPAS2IS",
298
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
299
- .type = ARM_CP_NOP, .access = PL2_W },
300
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
301
+ .writefn = tlbiipas2is_hyp_write },
302
{ .name = "TLBIIPAS2L",
303
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
304
- .type = ARM_CP_NOP, .access = PL2_W },
305
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
306
+ .writefn = tlbiipas2_hyp_write },
307
{ .name = "TLBIIPAS2LIS",
308
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
309
- .type = ARM_CP_NOP, .access = PL2_W },
310
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
311
+ .writefn = tlbiipas2is_hyp_write },
312
/* 32 bit cache operations */
313
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
314
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
315
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
316
.writefn = tlbi_aa64_rvae1_write },
317
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
318
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
319
- .access = PL2_W, .type = ARM_CP_NOP },
320
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
321
+ .writefn = tlbi_aa64_ripas2e1is_write },
322
{ .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
323
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
324
- .access = PL2_W, .type = ARM_CP_NOP },
325
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
326
+ .writefn = tlbi_aa64_ripas2e1is_write },
327
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
328
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
329
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
330
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
331
.writefn = tlbi_aa64_rvae2is_write },
332
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
333
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
334
- .access = PL2_W, .type = ARM_CP_NOP },
335
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
336
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
337
+ .writefn = tlbi_aa64_ripas2e1_write },
338
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
339
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
340
- .access = PL2_W, .type = ARM_CP_NOP },
341
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
342
+ .writefn = tlbi_aa64_ripas2e1_write },
343
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
344
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
345
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
77
--
346
--
78
2.16.2
347
2.25.1
79
80
diff view generated by jsdifflib
Deleted patch
1
From: Alistair Francis <alistair.francis@xilinx.com>
2
1
3
Set the ARM CPU core count property for the A53s attached to the Xilinx
4
ZynqMP machine.
5
6
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: fe0dd90b85ac73f9fc9548c253bededa70a07006.1520018138.git.alistair.francis@xilinx.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/arm/xlnx-zynqmp.c | 2 ++
12
1 file changed, 2 insertions(+)
13
14
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/xlnx-zynqmp.c
17
+++ b/hw/arm/xlnx-zynqmp.c
18
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
19
s->virt, "has_el2", NULL);
20
object_property_set_int(OBJECT(&s->apu_cpu[i]), GIC_BASE_ADDR,
21
"reset-cbar", &error_abort);
22
+ object_property_set_int(OBJECT(&s->apu_cpu[i]), num_apus,
23
+ "core-count", &error_abort);
24
object_property_set_bool(OBJECT(&s->apu_cpu[i]), true, "realized",
25
&err);
26
if (err) {
27
--
28
2.16.2
29
30
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Compare only the VMID field when considering whether we need to flush.
4
Acked-by: Alistair Francis <alistair.francis@xilinx.com>
4
5
Message-id: 20180309153654.13518-2-f4bug@amsat.org
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
---
9
hw/sd/sd.c | 11 ++++++++---
10
target/arm/helper.c | 4 ++--
10
1 file changed, 8 insertions(+), 3 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
11
12
12
diff --git a/hw/sd/sd.c b/hw/sd/sd.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/sd/sd.c
15
--- a/target/arm/helper.c
15
+++ b/hw/sd/sd.c
16
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ static void sd_lock_command(SDState *sd)
17
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
17
sd->card_status &= ~CARD_IS_LOCKED;
18
* A change in VMID to the stage2 page table (Stage2) invalidates
19
* the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
20
*/
21
- if (raw_read(env, ri) != value) {
22
+ if (extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
23
tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
24
- raw_write(env, ri, value);
25
}
26
+ raw_write(env, ri, value);
18
}
27
}
19
28
20
-static sd_rsp_type_t sd_normal_command(SDState *sd,
29
static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
21
- SDRequest req)
22
+static sd_rsp_type_t sd_normal_command(SDState *sd, SDRequest req)
23
{
24
uint32_t rca = 0x0000;
25
uint64_t addr = (sd->ocr & (1 << 30)) ? (uint64_t) req.arg << 9 : req.arg;
26
27
- trace_sdcard_normal_command(req.cmd, req.arg, sd_state_name(sd->state));
28
+ /* CMD55 precedes an ACMD, so we are not interested in tracing it.
29
+ * However there is no ACMD55, so we want to trace this particular case.
30
+ */
31
+ if (req.cmd != 55 || sd->expecting_acmd) {
32
+ trace_sdcard_normal_command(req.cmd, req.arg,
33
+ sd_state_name(sd->state));
34
+ }
35
36
/* Not interpreting this as an app command */
37
sd->card_status &= ~APP_CMD;
38
--
30
--
39
2.16.2
31
2.25.1
40
41
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
From the "Physical Layer Simplified Specification Version 3.01":
3
Consolidate most of the inputs and outputs of S1_ptw_translate
4
into a single structure. Plumb this through arm_ld*_ptw from
5
the controlling get_phys_addr_* routine.
4
6
5
A known data block ("Tuning block") can be used to tune sampling
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
point for tuning required hosts. [...]
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
This procedure gives the system optimal timing for each specific
9
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
8
host and card combination and compensates for static delays in
9
the timing budget including process, voltage and different PCB
10
loads and skews. [...]
11
Data block, carried by DAT[3:0], contains a pattern for tuning
12
sampling position to receive data on the CMD and DAT[3:0] line.
13
14
[based on a patch from Alistair Francis <alistair.francis@xilinx.com>
15
from qemu/xilinx tag xilinx-v2015.2]
16
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
17
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
18
Message-id: 20180309153654.13518-5-f4bug@amsat.org
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
11
---
21
hw/sd/sd.c | 29 +++++++++++++++++++++++++++++
12
target/arm/ptw.c | 140 ++++++++++++++++++++++++++---------------------
22
1 file changed, 29 insertions(+)
13
1 file changed, 79 insertions(+), 61 deletions(-)
23
14
24
diff --git a/hw/sd/sd.c b/hw/sd/sd.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
25
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/sd/sd.c
17
--- a/target/arm/ptw.c
27
+++ b/hw/sd/sd.c
18
+++ b/target/arm/ptw.c
28
@@ -XXX,XX +XXX,XX @@ static sd_rsp_type_t sd_normal_command(SDState *sd, SDRequest req)
19
@@ -XXX,XX +XXX,XX @@
29
}
20
#include "idau.h"
30
break;
21
31
22
32
+ case 19: /* CMD19: SEND_TUNING_BLOCK (SD) */
23
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
33
+ if (sd->state == sd_transfer_state) {
24
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
34
+ sd->state = sd_sendingdata_state;
25
- bool is_secure, bool s1_is_el0,
35
+ sd->data_offset = 0;
26
+typedef struct S1Translate {
36
+ return sd_r1;
27
+ ARMMMUIdx in_mmu_idx;
37
+ }
28
+ bool in_secure;
38
+ break;
29
+ bool out_secure;
30
+ hwaddr out_phys;
31
+} S1Translate;
39
+
32
+
40
case 23: /* CMD23: SET_BLOCK_COUNT */
33
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
41
switch (sd->state) {
34
+ uint64_t address,
42
case sd_transfer_state:
35
+                               MMUAccessType access_type, bool s1_is_el0,
                                GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));

@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
 }

 /* Translate a S1 pagetable walk through S2 if needed. */
-static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
-                               hwaddr addr, bool *is_secure_ptr,
-                               ARMMMUFaultInfo *fi)
+static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
+                             hwaddr addr, ARMMMUFaultInfo *fi)
 {
-    bool is_secure = *is_secure_ptr;
+    bool is_secure = ptw->in_secure;
     ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;

-    if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
+    if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
         !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
         GetPhysAddrResult s2 = {};
+        S1Translate s2ptw = {
+            .in_mmu_idx = s2_mmu_idx,
+            .in_secure = is_secure,
+        };
         uint64_t hcr;
         int ret;

-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
-                                 is_secure, false, &s2, fi);
+        ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
+                                 false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
             fi->stage2 = true;
             fi->s1ptw = true;
             fi->s1ns = !is_secure;
-            return ~0;
+            return false;
         }

         hcr = arm_hcr_el2_eff_secstate(env, is_secure);
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->stage2 = true;
             fi->s1ptw = true;
             fi->s1ns = !is_secure;
-            return ~0;
+            return false;
         }

         if (arm_is_secure_below_el3(env)) {
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             } else {
                 is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
             }
-            *is_secure_ptr = is_secure;
         } else {
             assert(!is_secure);
         }

         addr = s2.f.phys_addr;
     }
-    return addr;
+
+    ptw->out_secure = is_secure;
+    ptw->out_phys = addr;
+    return true;
 }

 /* All loads done in the course of a page table walk go through here. */
-static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
-                            ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
+                            ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
     MemTxAttrs attrs = {};
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
     AddressSpace *as;
     uint32_t data;

-    addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
-    attrs.secure = is_secure;
-    as = arm_addressspace(cs, attrs);
-    if (fi->s1ptw) {
+    if (!S1_ptw_translate(env, ptw, addr, fi)) {
         return 0;
     }
-    if (regime_translation_big_endian(env, mmu_idx)) {
+    addr = ptw->out_phys;
+    attrs.secure = ptw->out_secure;
+    as = arm_addressspace(cs, attrs);
+    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
         data = address_space_ldl_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldl_le(as, addr, attrs, &result);
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
     return 0;
 }

-static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
-                            ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
+                            ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
     MemTxAttrs attrs = {};
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
     AddressSpace *as;
     uint64_t data;

-    addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
-    attrs.secure = is_secure;
-    as = arm_addressspace(cs, attrs);
-    if (fi->s1ptw) {
+    if (!S1_ptw_translate(env, ptw, addr, fi)) {
         return 0;
     }
-    if (regime_translation_big_endian(env, mmu_idx)) {
+    addr = ptw->out_phys;
+    attrs.secure = ptw->out_secure;
+    as = arm_addressspace(cs, attrs);
+    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
         data = address_space_ldq_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldq_le(as, addr, attrs, &result);
@@ -XXX,XX +XXX,XX @@ static int simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
     return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
 }

-static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             bool is_secure, GetPhysAddrResult *result,
-                             ARMMMUFaultInfo *fi)
+static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
+                             uint32_t address, MMUAccessType access_type,
+                             GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     int level = 1;
     uint32_t table;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,

     /* Pagetable walk. */
     /* Lookup l1 descriptor. */
-    if (!get_level1_table_address(env, mmu_idx, &table, address)) {
+    if (!get_level1_table_address(env, ptw->in_mmu_idx, &table, address)) {
         /* Section translation fault if page walk is disabled by PD0 or PD1 */
         fi->type = ARMFault_Translation;
         goto do_fault;
     }
-    desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+    desc = arm_ldl_ptw(env, ptw, table, fi);
     if (fi->type != ARMFault_None) {
         goto do_fault;
     }
     type = (desc & 3);
     domain = (desc >> 5) & 0x0f;
-    if (regime_el(env, mmu_idx) == 1) {
+    if (regime_el(env, ptw->in_mmu_idx) == 1) {
         dacr = env->cp15.dacr_ns;
     } else {
         dacr = env->cp15.dacr_s;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
             /* Fine pagetable. */
             table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
         }
-        desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+        desc = arm_ldl_ptw(env, ptw, table, fi);
         if (fi->type != ARMFault_None) {
             goto do_fault;
         }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
             g_assert_not_reached();
         }
     }
-    result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+    result->f.prot = ap_to_rw_prot(env, ptw->in_mmu_idx, ap, domain_prot);
     result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
     if (!(result->f.prot & (1 << access_type))) {
         /* Access permission fault. */
@@ -XXX,XX +XXX,XX @@ do_fault:
     return true;
 }

-static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             bool is_secure, GetPhysAddrResult *result,
-                             ARMMMUFaultInfo *fi)
+static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
+                             uint32_t address, MMUAccessType access_type,
+                             GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     int level = 1;
     uint32_t table;
     uint32_t desc;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         fi->type = ARMFault_Translation;
         goto do_fault;
     }
-    desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+    desc = arm_ldl_ptw(env, ptw, table, fi);
     if (fi->type != ARMFault_None) {
         goto do_fault;
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         ns = extract32(desc, 3, 1);
         /* Lookup l2 entry. */
         table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
-        desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+        desc = arm_ldl_ptw(env, ptw, table, fi);
         if (fi->type != ARMFault_None) {
             goto do_fault;
         }
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
  * the WnR bit is never set (the caller must do this).
  *
  * @env: CPUARMState
+ * @ptw: Current and next stage parameters for the walk.
  * @address: virtual address to get physical address for
  * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
- * @mmu_idx: MMU index indicating required translation regime
- * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page
- *             table walk), must be true if this is stage 2 of a stage 1+2
+ * @s1_is_el0: if @ptw->in_mmu_idx is ARMMMUIdx_Stage2
+ *             (so this is a stage 2 page table walk),
+ *             must be true if this is stage 2 of a stage 1+2
  *             walk for an EL0 access. If @mmu_idx is anything else,
  *             @s1_is_el0 is ignored.
  * @result: set on translation success,
  * @fi: set to fault info if the translation fails
  */
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool is_secure, bool s1_is_el0,
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
+                               uint64_t address,
+                               MMUAccessType access_type, bool s1_is_el0,
                                GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
+    bool is_secure = ptw->in_secure;
     /* Read an LPAE long-descriptor translation table. */
     ARMFaultType fault_type = ARMFault_Translation;
     uint32_t level;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
     descaddr |= (address >> (stride * (4 - level))) & indexmask;
     descaddr &= ~7ULL;
     nstable = extract32(tableattrs, 4, 1);
-    descriptor = arm_ldq_ptw(env, descaddr, !nstable, mmu_idx, fi);
+    ptw->in_secure = !nstable;
+    descriptor = arm_ldq_ptw(env, ptw, descaddr, fi);
     if (fi->type != ARMFault_None) {
         goto do_fault;
     }
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                                ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
+    S1Translate ptw;

     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         int ret;
         bool ipa_secure, s2walk_secure;
         ARMCacheAttrs cacheattrs1;
-        ARMMMUIdx s2_mmu_idx;
         bool is_el0;
         uint64_t hcr;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             s2walk_secure = false;
         }

-        s2_mmu_idx = (s2walk_secure
-                      ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
+        ptw.in_mmu_idx =
+            s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+        ptw.in_secure = s2walk_secure;
         is_el0 = mmu_idx == ARMMMUIdx_E10_0;

         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         cacheattrs1 = result->cacheattrs;
         memset(result, 0, sizeof(*result));

-        ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                 s2walk_secure, is_el0, result, fi);
+        ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
+                                 is_el0, result, fi);
         fi->s2addr = ipa;

         /* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         return get_phys_addr_disabled(env, address, access_type, mmu_idx,
                                       is_secure, result, fi);
     }
+
+    ptw.in_mmu_idx = mmu_idx;
+    ptw.in_secure = is_secure;
+
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
-                                  is_secure, false, result, fi);
+        return get_phys_addr_lpae(env, &ptw, address, access_type, false,
+                                  result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
-        return get_phys_addr_v6(env, address, access_type, mmu_idx,
-                                is_secure, result, fi);
+        return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
     } else {
-        return get_phys_addr_v5(env, address, access_type, mmu_idx,
-                                is_secure, result, fi);
+        return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
     }
 }

-- 
2.25.1

diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Before using softmmu page tables for the ptw, plumb down
a debug parameter so that we can query page table entries
from gdbstub without modifying cpu state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 55 ++++++++++++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 typedef struct S1Translate {
     ARMMMUIdx in_mmu_idx;
     bool in_secure;
+    bool in_debug;
     bool out_secure;
     hwaddr out_phys;
 } S1Translate;
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         S1Translate s2ptw = {
             .in_mmu_idx = s2_mmu_idx,
             .in_secure = is_secure,
+            .in_debug = ptw->in_debug,
         };
         uint64_t hcr;
         int ret;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
     return 0;
 }

-bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool is_secure, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
+                                      target_ulong address,
+                                      MMUAccessType access_type,
+                                      GetPhysAddrResult *result,
+                                      ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-    S1Translate ptw;
+    bool is_secure = ptw->in_secure;

     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         bool is_el0;
         uint64_t hcr;

-        ret = get_phys_addr_with_secure(env, address, access_type,
-                                        s1_mmu_idx, is_secure, result, fi);
+        ptw->in_mmu_idx = s1_mmu_idx;
+        ret = get_phys_addr_with_struct(env, ptw, address, access_type,
+                                        result, fi);

         /* If S1 fails or S2 is disabled, return early. */
         if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             s2walk_secure = false;
         }

-        ptw.in_mmu_idx =
+        ptw->in_mmu_idx =
             s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
-        ptw.in_secure = s2walk_secure;
+        ptw->in_secure = s2walk_secure;
         is_el0 = mmu_idx == ARMMMUIdx_E10_0;

         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         cacheattrs1 = result->cacheattrs;
         memset(result, 0, sizeof(*result));

-        ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
+        ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
                                  is_el0, result, fi);
         fi->s2addr = ipa;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                                       is_secure, result, fi);
     }

-    ptw.in_mmu_idx = mmu_idx;
-    ptw.in_secure = is_secure;
-
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, &ptw, address, access_type, false,
+        return get_phys_addr_lpae(env, ptw, address, access_type, false,
                                   result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
-        return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
+        return get_phys_addr_v6(env, ptw, address, access_type, result, fi);
     } else {
-        return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
+        return get_phys_addr_v5(env, ptw, address, access_type, result, fi);
     }
 }

+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool is_secure, GetPhysAddrResult *result,
+                               ARMMMUFaultInfo *fi)
+{
+    S1Translate ptw = {
+        .in_mmu_idx = mmu_idx,
+        .in_secure = is_secure,
+    };
+    return get_phys_addr_with_struct(env, &ptw, address, access_type,
+                                     result, fi);
+}
+
 bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
 {
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
+    S1Translate ptw = {
+        .in_mmu_idx = arm_mmu_idx(env),
+        .in_secure = arm_is_secure(env),
+        .in_debug = true,
+    };
     GetPhysAddrResult res = {};
     ARMMMUFaultInfo fi = {};
-    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
     bool ret;

-    ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
+    ret = get_phys_addr_with_struct(env, &ptw, addr, MMU_DATA_LOAD, &res, &fi);
     *attrs = res.f.attrs;

     if (ret) {
-- 
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
     bool in_secure;
     bool in_debug;
     bool out_secure;
+    bool out_be;
     hwaddr out_phys;
 } S1Translate;

@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,

     ptw->out_secure = is_secure;
     ptw->out_phys = addr;
+    ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
     addr = ptw->out_phys;
     attrs.secure = ptw->out_secure;
     as = arm_addressspace(cs, attrs);
-    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
+    if (ptw->out_be) {
         data = address_space_ldl_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldl_le(as, addr, attrs, &result);
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
     addr = ptw->out_phys;
     attrs.secure = ptw->out_secure;
     as = arm_addressspace(cs, attrs);
-    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
+    if (ptw->out_be) {
         data = address_space_ldq_be(as, addr, attrs, &result);
     } else {
50
data = address_space_ldq_le(as, addr, attrs, &result);
204
--
51
--
205
2.16.2
52
2.25.1
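The magic/size check that target_restore_sigframe performs on the fpsimd record above is an instance of a common pattern for validating variable-length context records before trusting their payload. A minimal host-side sketch of that pattern (simplified, hypothetical struct layout — not QEMU's guest-memory accessors, and narrower vregs than the real Linux arm64 fpsimd_context):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FPSIMD_MAGIC 0x46508001u  /* fpsimd record magic, as in the arm64 ABI */

/* Every context record starts with a header giving its magic and size. */
struct record_head {
    uint32_t magic;
    uint32_t size;
};

struct fpsimd_record {
    struct record_head head;
    uint32_t fpsr;
    uint32_t fpcr;
    uint64_t vregs[64];     /* 32 Q registers, two 64-bit halves each */
};

/*
 * Validate the record before using its payload: both the magic and the
 * declared size must match, mirroring the check in target_restore_sigframe.
 * Returns 0 on success, 1 on a malformed record.
 */
static int restore_fpsimd(const void *buf, size_t len,
                          uint32_t *fpsr, uint32_t *fpcr)
{
    struct fpsimd_record rec;

    if (len < sizeof(rec)) {
        return 1;
    }
    memcpy(&rec, buf, sizeof(rec));
    if (rec.head.magic != FPSIMD_MAGIC ||
        rec.head.size != sizeof(struct fpsimd_record)) {
        return 1;
    }
    *fpsr = rec.fpsr;
    *fpcr = rec.fpcr;
    return 0;
}
```

The refactoring above keeps exactly this shape: the caller checks the header, and only a well-formed record is handed to target_restore_fpsimd_record.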
206
207
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
As an implementation choice, widening VL has zeroed the
3
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
4
previously inaccessible portion of the sve registers.
4
arm_ldq_ptw. Use probe_access_full to find the host address,
5
and if so use a host load. If the probe fails, we've got our
6
fault info already. On the off chance that page tables are not
7
in RAM, continue to use the address_space_ld* functions.
5
8
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Acked-by: Alex Bennée <alex.bennee@linaro.org>
11
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
9
Message-id: 20180303143823.27055-2-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
linux-user/aarch64/target_syscall.h | 3 +++
14
target/arm/cpu.h | 5 +
13
target/arm/cpu.h | 1 +
15
target/arm/ptw.c | 196 +++++++++++++++++++++++++---------------
14
linux-user/syscall.c | 27 ++++++++++++++++++++++++
16
target/arm/tlb_helper.c | 17 +++-
15
target/arm/cpu64.c | 41 +++++++++++++++++++++++++++++++++++++
17
3 files changed, 144 insertions(+), 74 deletions(-)
16
4 files changed, 72 insertions(+)
17
18
18
diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/linux-user/aarch64/target_syscall.h
21
+++ b/linux-user/aarch64/target_syscall.h
22
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
23
#define TARGET_MLOCKALL_MCL_CURRENT 1
24
#define TARGET_MLOCKALL_MCL_FUTURE 2
25
26
+#define TARGET_PR_SVE_SET_VL 50
27
+#define TARGET_PR_SVE_GET_VL 51
28
+
29
#endif /* AARCH64_TARGET_SYSCALL_H */
30
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
31
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.h
21
--- a/target/arm/cpu.h
33
+++ b/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
34
@@ -XXX,XX +XXX,XX @@ int arm_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
23
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
35
#ifdef TARGET_AARCH64
24
target_ulong flags2;
36
int aarch64_cpu_gdb_read_register(CPUState *cpu, uint8_t *buf, int reg);
25
} CPUARMTBFlags;
37
int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
26
38
+void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq);
27
+typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
39
#endif
28
+
40
29
typedef struct CPUArchState {
41
target_ulong do_arm_semihosting(CPUARMState *env);
30
/* Regs for current mode. */
42
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
31
uint32_t regs[16];
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
33
struct CPUBreakpoint *cpu_breakpoint[16];
34
struct CPUWatchpoint *cpu_watchpoint[16];
35
36
+ /* Optional fault info across tlb lookup. */
37
+ ARMMMUFaultInfo *tlb_fi;
38
+
39
/* Fields up to this point are cleared by a CPU reset */
40
struct {} end_reset_fields;
41
42
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
43
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
44
--- a/linux-user/syscall.c
44
--- a/target/arm/ptw.c
45
+++ b/linux-user/syscall.c
45
+++ b/target/arm/ptw.c
46
@@ -XXX,XX +XXX,XX @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
46
@@ -XXX,XX +XXX,XX @@
47
break;
47
#include "qemu/osdep.h"
48
#include "qemu/log.h"
49
#include "qemu/range.h"
50
+#include "exec/exec-all.h"
51
#include "cpu.h"
52
#include "internals.h"
53
#include "idau.h"
54
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
55
bool out_secure;
56
bool out_be;
57
hwaddr out_phys;
58
+ void *out_host;
59
} S1Translate;
60
61
static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
62
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
63
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
64
}
65
66
-static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
67
+static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
68
{
69
/*
70
* For an S1 page table walk, the stage 1 attributes are always
71
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
72
* With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
73
* when cacheattrs.attrs bit [2] is 0.
74
*/
75
- assert(cacheattrs.is_s2_format);
76
if (hcr & HCR_FWB) {
77
- return (cacheattrs.attrs & 0x4) == 0;
78
+ return (attrs & 0x4) == 0;
79
} else {
80
- return (cacheattrs.attrs & 0xc) == 0;
81
+ return (attrs & 0xc) == 0;
82
}
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
86
hwaddr addr, ARMMMUFaultInfo *fi)
87
{
88
bool is_secure = ptw->in_secure;
89
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
90
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
91
+ bool s2_phys = false;
92
+ uint8_t pte_attrs;
93
+ bool pte_secure;
94
95
- if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
96
- !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
97
- GetPhysAddrResult s2 = {};
98
- S1Translate s2ptw = {
99
- .in_mmu_idx = s2_mmu_idx,
100
- .in_secure = is_secure,
101
- .in_debug = ptw->in_debug,
102
- };
103
- uint64_t hcr;
104
- int ret;
105
+ if (!arm_mmu_idx_is_stage1_of_2(mmu_idx)
106
+ || regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
107
+ s2_mmu_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
108
+ s2_phys = true;
109
+ }
110
111
- ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
112
- false, &s2, fi);
113
- if (ret) {
114
- assert(fi->type != ARMFault_None);
115
- fi->s2addr = addr;
116
- fi->stage2 = true;
117
- fi->s1ptw = true;
118
- fi->s1ns = !is_secure;
119
- return false;
120
+ if (unlikely(ptw->in_debug)) {
121
+ /*
122
+ * From gdbstub, do not use softmmu so that we don't modify the
123
+ * state of the cpu at all, including softmmu tlb contents.
124
+ */
125
+ if (s2_phys) {
126
+ ptw->out_phys = addr;
127
+ pte_attrs = 0;
128
+ pte_secure = is_secure;
129
+ } else {
130
+ S1Translate s2ptw = {
131
+ .in_mmu_idx = s2_mmu_idx,
132
+ .in_secure = is_secure,
133
+ .in_debug = true,
134
+ };
135
+ GetPhysAddrResult s2 = { };
136
+ if (!get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
137
+ false, &s2, fi)) {
138
+ goto fail;
139
+ }
140
+ ptw->out_phys = s2.f.phys_addr;
141
+ pte_attrs = s2.cacheattrs.attrs;
142
+ pte_secure = s2.f.attrs.secure;
48
}
143
}
49
#endif
144
+ ptw->out_host = NULL;
50
+#ifdef TARGET_AARCH64
145
+ } else {
51
+ case TARGET_PR_SVE_SET_VL:
146
+ CPUTLBEntryFull *full;
52
+ /* We cannot support either PR_SVE_SET_VL_ONEXEC
147
+ int flags;
53
+ or PR_SVE_VL_INHERIT. Therefore, anything above
148
54
+ ARM_MAX_VQ results in EINVAL. */
149
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
55
+ ret = -TARGET_EINVAL;
150
- if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
56
+ if (arm_feature(cpu_env, ARM_FEATURE_SVE)
151
+ env->tlb_fi = fi;
57
+ && arg2 >= 0 && arg2 <= ARM_MAX_VQ * 16 && !(arg2 & 15)) {
152
+ flags = probe_access_full(env, addr, MMU_DATA_LOAD,
58
+ CPUARMState *env = cpu_env;
153
+ arm_to_core_mmu_idx(s2_mmu_idx),
59
+ int old_vq = (env->vfp.zcr_el[1] & 0xf) + 1;
154
+ true, &ptw->out_host, &full, 0);
60
+ int vq = MAX(arg2 / 16, 1);
155
+ env->tlb_fi = NULL;
61
+
156
+
62
+ if (vq < old_vq) {
157
+ if (unlikely(flags & TLB_INVALID_MASK)) {
63
+ aarch64_sve_narrow_vq(env, vq);
158
+ goto fail;
64
+ }
159
+ }
65
+ env->vfp.zcr_el[1] = vq - 1;
160
+ ptw->out_phys = full->phys_addr;
66
+ ret = vq * 16;
161
+ pte_attrs = full->pte_attrs;
67
+ }
162
+ pte_secure = full->attrs.secure;
68
+ break;
163
+ }
69
+ case TARGET_PR_SVE_GET_VL:
164
+
70
+ ret = -TARGET_EINVAL;
165
+ if (!s2_phys) {
71
+ if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
166
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
72
+ CPUARMState *env = cpu_env;
167
+
73
+ ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
168
+ if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
74
+ }
169
/*
75
+ break;
170
* PTW set and S1 walk touched S2 Device memory:
76
+#endif /* AARCH64 */
171
* generate Permission fault.
77
case PR_GET_SECCOMP:
172
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
78
case PR_SET_SECCOMP:
173
fi->s1ns = !is_secure;
79
/* Disable seccomp to prevent the target disabling syscalls we
174
return false;
80
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
175
}
176
-
177
- if (arm_is_secure_below_el3(env)) {
178
- /* Check if page table walk is to secure or non-secure PA space. */
179
- if (is_secure) {
180
- is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
181
- } else {
182
- is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
183
- }
184
- } else {
185
- assert(!is_secure);
186
- }
187
-
188
- addr = s2.f.phys_addr;
189
}
190
191
- ptw->out_secure = is_secure;
192
- ptw->out_phys = addr;
193
- ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
194
+ /* Check if page table walk is to secure or non-secure PA space. */
195
+ ptw->out_secure = (is_secure
196
+ && !(pte_secure
197
+ ? env->cp15.vstcr_el2 & VSTCR_SW
198
+ : env->cp15.vtcr_el2 & VTCR_NSW));
199
+ ptw->out_be = regime_translation_big_endian(env, mmu_idx);
200
return true;
201
+
202
+ fail:
203
+ assert(fi->type != ARMFault_None);
204
+ fi->s2addr = addr;
205
+ fi->stage2 = true;
206
+ fi->s1ptw = true;
207
+ fi->s1ns = !is_secure;
208
+ return false;
209
}
210
211
/* All loads done in the course of a page table walk go through here. */
212
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
213
ARMMMUFaultInfo *fi)
214
{
215
CPUState *cs = env_cpu(env);
216
- MemTxAttrs attrs = {};
217
- MemTxResult result = MEMTX_OK;
218
- AddressSpace *as;
219
uint32_t data;
220
221
if (!S1_ptw_translate(env, ptw, addr, fi)) {
222
+ /* Failure. */
223
+ assert(fi->s1ptw);
224
return 0;
225
}
226
- addr = ptw->out_phys;
227
- attrs.secure = ptw->out_secure;
228
- as = arm_addressspace(cs, attrs);
229
- if (ptw->out_be) {
230
- data = address_space_ldl_be(as, addr, attrs, &result);
231
+
232
+ if (likely(ptw->out_host)) {
233
+ /* Page tables are in RAM, and we have the host address. */
234
+ if (ptw->out_be) {
235
+ data = ldl_be_p(ptw->out_host);
236
+ } else {
237
+ data = ldl_le_p(ptw->out_host);
238
+ }
239
} else {
240
- data = address_space_ldl_le(as, addr, attrs, &result);
241
+ /* Page tables are in MMIO. */
242
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
243
+ AddressSpace *as = arm_addressspace(cs, attrs);
244
+ MemTxResult result = MEMTX_OK;
245
+
246
+ if (ptw->out_be) {
247
+ data = address_space_ldl_be(as, ptw->out_phys, attrs, &result);
248
+ } else {
249
+ data = address_space_ldl_le(as, ptw->out_phys, attrs, &result);
250
+ }
251
+ if (unlikely(result != MEMTX_OK)) {
252
+ fi->type = ARMFault_SyncExternalOnWalk;
253
+ fi->ea = arm_extabort_type(result);
254
+ return 0;
255
+ }
256
}
257
- if (result == MEMTX_OK) {
258
- return data;
259
- }
260
- fi->type = ARMFault_SyncExternalOnWalk;
261
- fi->ea = arm_extabort_type(result);
262
- return 0;
263
+ return data;
264
}
265
266
static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
267
ARMMMUFaultInfo *fi)
268
{
269
CPUState *cs = env_cpu(env);
270
- MemTxAttrs attrs = {};
271
- MemTxResult result = MEMTX_OK;
272
- AddressSpace *as;
273
uint64_t data;
274
275
if (!S1_ptw_translate(env, ptw, addr, fi)) {
276
+ /* Failure. */
277
+ assert(fi->s1ptw);
278
return 0;
279
}
280
- addr = ptw->out_phys;
281
- attrs.secure = ptw->out_secure;
282
- as = arm_addressspace(cs, attrs);
283
- if (ptw->out_be) {
284
- data = address_space_ldq_be(as, addr, attrs, &result);
285
+
286
+ if (likely(ptw->out_host)) {
287
+ /* Page tables are in RAM, and we have the host address. */
288
+ if (ptw->out_be) {
289
+ data = ldq_be_p(ptw->out_host);
290
+ } else {
291
+ data = ldq_le_p(ptw->out_host);
292
+ }
293
} else {
294
- data = address_space_ldq_le(as, addr, attrs, &result);
295
+ /* Page tables are in MMIO. */
296
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
297
+ AddressSpace *as = arm_addressspace(cs, attrs);
298
+ MemTxResult result = MEMTX_OK;
299
+
300
+ if (ptw->out_be) {
301
+ data = address_space_ldq_be(as, ptw->out_phys, attrs, &result);
302
+ } else {
303
+ data = address_space_ldq_le(as, ptw->out_phys, attrs, &result);
304
+ }
305
+ if (unlikely(result != MEMTX_OK)) {
306
+ fi->type = ARMFault_SyncExternalOnWalk;
307
+ fi->ea = arm_extabort_type(result);
308
+ return 0;
309
+ }
310
}
311
- if (result == MEMTX_OK) {
312
- return data;
313
- }
314
- fi->type = ARMFault_SyncExternalOnWalk;
315
- fi->ea = arm_extabort_type(result);
316
- return 0;
317
+ return data;
318
}
319
320
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
321
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
81
index XXXXXXX..XXXXXXX 100644
322
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/cpu64.c
323
--- a/target/arm/tlb_helper.c
83
+++ b/target/arm/cpu64.c
324
+++ b/target/arm/tlb_helper.c
84
@@ -XXX,XX +XXX,XX @@ static void aarch64_cpu_register_types(void)
325
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
85
}
326
bool probe, uintptr_t retaddr)
86
327
{
87
type_init(aarch64_cpu_register_types)
328
ARMCPU *cpu = ARM_CPU(cs);
88
+
329
- ARMMMUFaultInfo fi = {};
89
+/* The manual says that when SVE is enabled and VQ is widened the
330
GetPhysAddrResult res = {};
90
+ * implementation is allowed to zero the previously inaccessible
331
+ ARMMMUFaultInfo local_fi, *fi;
91
+ * portion of the registers. The corollary to that is that when
332
int ret;
92
+ * SVE is enabled and VQ is narrowed we are also allowed to zero
333
93
+ * the now inaccessible portion of the registers.
334
+ /*
94
+ *
335
+ * Allow S1_ptw_translate to see any fault generated here.
95
+ * The intent of this is that no predicate bit beyond VQ is ever set.
336
+ * Since this may recurse, read and clear.
96
+ * Which means that some operations on predicate registers themselves
337
+ */
97
+ * may operate on full uint64_t or even unrolled across the maximum
338
+ fi = cpu->env.tlb_fi;
98
+ * uint64_t[4]. Performing 4 bits of host arithmetic unconditionally
339
+ if (fi) {
99
+ * may well be cheaper than conditionals to restrict the operation
340
+ cpu->env.tlb_fi = NULL;
100
+ * to the relevant portion of a uint16_t[16].
341
+ } else {
101
+ *
342
+ fi = memset(&local_fi, 0, sizeof(local_fi));
102
+ * TODO: Need to call this for changes to the real system registers
103
+ * and EL state changes.
104
+ */
105
+void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
106
+{
107
+ int i, j;
108
+ uint64_t pmask;
109
+
110
+ assert(vq >= 1 && vq <= ARM_MAX_VQ);
111
+
112
+ /* Zap the high bits of the zregs. */
113
+ for (i = 0; i < 32; i++) {
114
+ memset(&env->vfp.zregs[i].d[2 * vq], 0, 16 * (ARM_MAX_VQ - vq));
115
+ }
343
+ }
116
+
344
+
117
+ /* Zap the high bits of the pregs and ffr. */
345
/*
118
+ pmask = 0;
346
* Walk the page table and (if the mapping exists) add the page
119
+ if (vq & 3) {
347
* to the TLB. On success, return true. Otherwise, if probing,
120
+ pmask = ~(-1ULL << (16 * (vq & 3)));
348
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
121
+ }
349
*/
122
+ for (j = vq / 4; j < ARM_MAX_VQ / 4; j++) {
350
ret = get_phys_addr(&cpu->env, address, access_type,
123
+ for (i = 0; i < 17; ++i) {
351
core_to_arm_mmu_idx(&cpu->env, mmu_idx),
124
+ env->vfp.pregs[i].p[j] &= pmask;
352
- &res, &fi);
125
+ }
353
+ &res, fi);
126
+ pmask = 0;
354
if (likely(!ret)) {
127
+ }
355
/*
128
+}
356
* Map a single [sub]page. Regions smaller than our declared
357
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
358
} else {
359
/* now we have a real cpu fault */
360
cpu_restore_state(cs, retaddr, true);
361
- arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
362
+ arm_deliver_fault(cpu, address, access_type, mmu_idx, fi);
363
}
364
}
365
#else
129
--
366
--
130
2.16.2
367
2.25.1
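The fast-path/slow-path split this patch gives arm_ldl_ptw and arm_ldq_ptw — probe for a host address first, do a direct host load if the page tables are in RAM, and fall back to the MMIO accessors otherwise — can be illustrated outside QEMU with a toy probe function (hypothetical names and a much simpler signature than the real probe_access_full):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy "physical memory": addresses below RAM_SIZE are backed by host RAM. */
#define RAM_SIZE 4096
static uint8_t g_ram[RAM_SIZE];

/*
 * Probe: return a host pointer when the address is backed by RAM,
 * NULL when the access would need the MMIO (slow) path.
 */
static void *probe_host(uint64_t addr)
{
    return addr + 4 <= RAM_SIZE ? &g_ram[addr] : NULL;
}

/* Slow path standing in for address_space_ldl_be/le(). */
static uint32_t mmio_ldl(uint64_t addr, int big_endian)
{
    (void)addr;
    (void)big_endian;
    return 0xefbeadde; /* dummy device value */
}

/* Fast path first: direct host load honoring page-table endianness. */
static uint32_t ldl_ptw(uint64_t addr, int big_endian)
{
    void *host = probe_host(addr);

    if (host) {
        uint32_t raw;
        memcpy(&raw, host, sizeof(raw));
        if (big_endian) {
            raw = __builtin_bswap32(raw); /* assumes a little-endian host */
        }
        return raw;
    }
    return mmio_ldl(addr, big_endian);
}
```

The real code caches the probe result (host pointer, physical address, attributes) in S1Translate so that S1_ptw_translate pays the lookup cost once and the descriptor loads stay on the fast path.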
131
132
1
From: Andrey Smirnov <andrew.smirnov@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
The following interfaces are partially or fully emulated:
4
5
* up to 2 Cortex A9 cores (SMP works with PSCI)
6
* A7 MPCORE (identical to A15 MPCORE)
7
* 4 GPTs modules
8
* 7 GPIO controllers
9
* 2 IOMUXC controllers
10
* 1 CCM module
11
* 1 SVNS module
12
* 1 SRC module
13
* 1 GPCv2 controller
14
* 4 eCSPI controllers
15
* 4 I2C controllers
16
* 7 i.MX UART controllers
17
* 2 FlexCAN controllers
18
* 2 Ethernet controllers (FEC)
19
* 3 SD controllers (USDHC)
20
* 4 WDT modules
21
* 1 SDMA module
22
* 1 GPR module
23
* 2 USBMISC modules
24
* 2 ADC modules
25
* 1 PCIe controller
26
27
Tested to boot and work with upstream Linux (4.13+) guest.
28
2
29
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
30
Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
31
[PMM: folded a couple of long lines]
5
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
32
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
33
---
7
---
34
hw/arm/Makefile.objs | 1 +
8
target/arm/ptw.c | 191 +++++++++++++++++++++++++----------------------
35
include/hw/arm/fsl-imx7.h | 222 +++++++++++++++
9
1 file changed, 100 insertions(+), 91 deletions(-)
36
hw/arm/fsl-imx7.c | 582 ++++++++++++++++++++++++++++++++++++++++
37
default-configs/arm-softmmu.mak | 1 +
38
4 files changed, 806 insertions(+)
39
create mode 100644 include/hw/arm/fsl-imx7.h
40
create mode 100644 hw/arm/fsl-imx7.c
41
10
42
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
43
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/Makefile.objs
13
--- a/target/arm/ptw.c
45
+++ b/hw/arm/Makefile.objs
14
+++ b/target/arm/ptw.c
46
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MPS2) += mps2.o
15
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
47
obj-$(CONFIG_MPS2) += mps2-tz.o
16
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
48
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
17
__attribute__((nonnull));
49
obj-$(CONFIG_IOTKIT) += iotkit.o
18
50
+obj-$(CONFIG_FSL_IMX7) += fsl-imx7.o
19
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
51
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
20
+ target_ulong address,
52
new file mode 100644
21
+ MMUAccessType access_type,
53
index XXXXXXX..XXXXXXX
22
+ GetPhysAddrResult *result,
54
--- /dev/null
23
+ ARMMMUFaultInfo *fi)
55
+++ b/include/hw/arm/fsl-imx7.h
24
+ __attribute__((nonnull));
56
@@ -XXX,XX +XXX,XX @@
25
+
57
+/*
26
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
58
+ * Copyright (c) 2018, Impinj, Inc.
27
static const uint8_t pamax_map[] = {
59
+ *
28
[0] = 32,
60
+ * i.MX7 SoC definitions
29
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
61
+ *
30
return 0;
62
+ * Author: Andrey Smirnov <andrew.smirnov@gmail.com>
31
}
63
+ *
32
64
+ * This program is free software; you can redistribute it and/or modify
33
+static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
65
+ * it under the terms of the GNU General Public License as published by
34
+ target_ulong address,
66
+ * the Free Software Foundation; either version 2 of the License, or
35
+ MMUAccessType access_type,
67
+ * (at your option) any later version.
36
+ GetPhysAddrResult *result,
68
+ *
37
+ ARMMMUFaultInfo *fi)
69
+ * This program is distributed in the hope that it will be useful,
38
+{
70
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
39
+ hwaddr ipa;
71
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
40
+ int s1_prot;
72
+ * GNU General Public License for more details.
41
+ int ret;
73
+ */
42
+ bool is_secure = ptw->in_secure;
74
+
43
+ bool ipa_secure, s2walk_secure;
75
+#ifndef FSL_IMX7_H
44
+ ARMCacheAttrs cacheattrs1;
76
+#define FSL_IMX7_H
45
+ bool is_el0;
77
+
46
+ uint64_t hcr;
78
+#include "hw/arm/arm.h"
47
+
79
+#include "hw/cpu/a15mpcore.h"
48
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type, result, fi);
80
+#include "hw/intc/imx_gpcv2.h"
49
+
81
+#include "hw/misc/imx7_ccm.h"
50
+ /* If S1 fails or S2 is disabled, return early. */
82
+#include "hw/misc/imx7_snvs.h"
51
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
83
+#include "hw/misc/imx7_gpr.h"
52
+ return ret;
84
+#include "hw/misc/imx6_src.h"
53
+ }
85
+#include "hw/misc/imx2_wdt.h"
54
+
86
+#include "hw/gpio/imx_gpio.h"
55
+ ipa = result->f.phys_addr;
87
+#include "hw/char/imx_serial.h"
56
+ ipa_secure = result->f.attrs.secure;
88
+#include "hw/timer/imx_gpt.h"
57
+ if (is_secure) {
89
+#include "hw/timer/imx_epit.h"
58
+ /* Select TCR based on the NS bit from the S1 walk. */
90
+#include "hw/i2c/imx_i2c.h"
59
+ s2walk_secure = !(ipa_secure
91
+#include "hw/gpio/imx_gpio.h"
60
+ ? env->cp15.vstcr_el2 & VSTCR_SW
92
+#include "hw/sd/sdhci.h"
61
+ : env->cp15.vtcr_el2 & VTCR_NSW);
93
+#include "hw/ssi/imx_spi.h"
62
+ } else {
94
+#include "hw/net/imx_fec.h"
63
+ assert(!ipa_secure);
95
+#include "hw/pci-host/designware.h"
64
+ s2walk_secure = false;
96
+#include "hw/usb/chipidea.h"
65
+ }
97
+#include "exec/memory.h"
66
+
98
+#include "cpu.h"
67
+ is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
99
+
68
+ ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
100
+#define TYPE_FSL_IMX7 "fsl,imx7"
69
+ ptw->in_secure = s2walk_secure;
101
+#define FSL_IMX7(obj) OBJECT_CHECK(FslIMX7State, (obj), TYPE_FSL_IMX7)
70
+
102
+
103
+enum FslIMX7Configuration {
104
+ FSL_IMX7_NUM_CPUS = 2,
105
+ FSL_IMX7_NUM_UARTS = 7,
106
+ FSL_IMX7_NUM_ETHS = 2,
107
+ FSL_IMX7_ETH_NUM_TX_RINGS = 3,
108
+ FSL_IMX7_NUM_USDHCS = 3,
109
+ FSL_IMX7_NUM_WDTS = 4,
110
+ FSL_IMX7_NUM_GPTS = 4,
111
+ FSL_IMX7_NUM_IOMUXCS = 2,
112
+ FSL_IMX7_NUM_GPIOS = 7,
113
+ FSL_IMX7_NUM_I2CS = 4,
114
+ FSL_IMX7_NUM_ECSPIS = 4,
115
+ FSL_IMX7_NUM_USBS = 3,
116
+ FSL_IMX7_NUM_ADCS = 2,
117
+};
118
+
119
+typedef struct FslIMX7State {
120
+ /*< private >*/
121
+ DeviceState parent_obj;
122
+
123
+ /*< public >*/
124
+ ARMCPU cpu[FSL_IMX7_NUM_CPUS];
125
+ A15MPPrivState a7mpcore;
126
+ IMXGPTState gpt[FSL_IMX7_NUM_GPTS];
127
+ IMXGPIOState gpio[FSL_IMX7_NUM_GPIOS];
128
+ IMX7CCMState ccm;
129
+ IMX7AnalogState analog;
130
+ IMX7SNVSState snvs;
131
+ IMXGPCv2State gpcv2;
132
+ IMXSPIState spi[FSL_IMX7_NUM_ECSPIS];
133
+ IMXI2CState i2c[FSL_IMX7_NUM_I2CS];
134
+ IMXSerialState uart[FSL_IMX7_NUM_UARTS];
135
+ IMXFECState eth[FSL_IMX7_NUM_ETHS];
136
+ SDHCIState usdhc[FSL_IMX7_NUM_USDHCS];
137
+ IMX2WdtState wdt[FSL_IMX7_NUM_WDTS];
138
+ IMX7GPRState gpr;
139
+ ChipideaState usb[FSL_IMX7_NUM_USBS];
140
+ DesignwarePCIEHost pcie;
141
+} FslIMX7State;
142
+
143
+enum FslIMX7MemoryMap {
144
+ FSL_IMX7_MMDC_ADDR = 0x80000000,
145
+ FSL_IMX7_MMDC_SIZE = 2 * 1024 * 1024 * 1024UL,
146
+
147
+ FSL_IMX7_GPIO1_ADDR = 0x30200000,
148
+ FSL_IMX7_GPIO2_ADDR = 0x30210000,
149
+ FSL_IMX7_GPIO3_ADDR = 0x30220000,
150
+ FSL_IMX7_GPIO4_ADDR = 0x30230000,
151
+ FSL_IMX7_GPIO5_ADDR = 0x30240000,
152
+ FSL_IMX7_GPIO6_ADDR = 0x30250000,
153
+ FSL_IMX7_GPIO7_ADDR = 0x30260000,
154
+
155
+ FSL_IMX7_IOMUXC_LPSR_GPR_ADDR = 0x30270000,
156
+
157
+ FSL_IMX7_WDOG1_ADDR = 0x30280000,
158
+ FSL_IMX7_WDOG2_ADDR = 0x30290000,
159
+ FSL_IMX7_WDOG3_ADDR = 0x302A0000,
160
+ FSL_IMX7_WDOG4_ADDR = 0x302B0000,
161
+
162
+ FSL_IMX7_IOMUXC_LPSR_ADDR = 0x302C0000,
163
+
164
+ FSL_IMX7_GPT1_ADDR = 0x302D0000,
165
+ FSL_IMX7_GPT2_ADDR = 0x302E0000,
166
+ FSL_IMX7_GPT3_ADDR = 0x302F0000,
167
+ FSL_IMX7_GPT4_ADDR = 0x30300000,
168
+
169
+ FSL_IMX7_IOMUXC_ADDR = 0x30330000,
170
+ FSL_IMX7_IOMUXC_GPR_ADDR = 0x30340000,
171
+ FSL_IMX7_IOMUXCn_SIZE = 0x1000,
172
+
173
+ FSL_IMX7_ANALOG_ADDR = 0x30360000,
174
+ FSL_IMX7_SNVS_ADDR = 0x30370000,
175
+ FSL_IMX7_CCM_ADDR = 0x30380000,
176
+
177
+ FSL_IMX7_SRC_ADDR = 0x30390000,
178
+ FSL_IMX7_SRC_SIZE = 0x1000,
179
+
180
+ FSL_IMX7_ADC1_ADDR = 0x30610000,
181
+ FSL_IMX7_ADC2_ADDR = 0x30620000,
182
+ FSL_IMX7_ADCn_SIZE = 0x1000,
183
+
184
+ FSL_IMX7_GPC_ADDR = 0x303A0000,
185
+
186
+ FSL_IMX7_I2C1_ADDR = 0x30A20000,
187
+ FSL_IMX7_I2C2_ADDR = 0x30A30000,
188
+ FSL_IMX7_I2C3_ADDR = 0x30A40000,
189
+ FSL_IMX7_I2C4_ADDR = 0x30A50000,
190
+
191
+ FSL_IMX7_ECSPI1_ADDR = 0x30820000,
192
+ FSL_IMX7_ECSPI2_ADDR = 0x30830000,
193
+ FSL_IMX7_ECSPI3_ADDR = 0x30840000,
194
+ FSL_IMX7_ECSPI4_ADDR = 0x30630000,
195
+
196
+ FSL_IMX7_LCDIF_ADDR = 0x30730000,
197
+ FSL_IMX7_LCDIF_SIZE = 0x1000,
198
+
199
+ FSL_IMX7_UART1_ADDR = 0x30860000,
200
+ /*
71
+ /*
201
+ * Some versions of the reference manual claim that UART2 is @
72
+ * S1 is done, now do S2 translation.
202
+ * 0x30870000, but experiments with HW + DT files in upstream
73
+ * Save the stage1 results so that we may merge prot and cacheattrs later.
203
+ * Linux kernel show that not to be true and that block is
204
+ * actually located @ 0x30890000
205
+ */
74
+ */
206
+ FSL_IMX7_UART2_ADDR = 0x30890000,
75
+ s1_prot = result->f.prot;
207
+ FSL_IMX7_UART3_ADDR = 0x30880000,
76
+ cacheattrs1 = result->cacheattrs;
208
+ FSL_IMX7_UART4_ADDR = 0x30A60000,
77
+ memset(result, 0, sizeof(*result));
209
+ FSL_IMX7_UART5_ADDR = 0x30A70000,
78
+
210
+ FSL_IMX7_UART6_ADDR = 0x30A80000,
79
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
211
+ FSL_IMX7_UART7_ADDR = 0x30A90000,
80
+ fi->s2addr = ipa;
212
+
81
+
213
+ FSL_IMX7_ENET1_ADDR = 0x30BE0000,
82
+ /* Combine the S1 and S2 perms. */
214
+ FSL_IMX7_ENET2_ADDR = 0x30BF0000,
83
+ result->f.prot &= s1_prot;
215
+
84
+
216
+ FSL_IMX7_USB1_ADDR = 0x30B10000,
85
+ /* If S2 fails, return early. */
217
+ FSL_IMX7_USBMISC1_ADDR = 0x30B10200,
86
+ if (ret) {
218
+ FSL_IMX7_USB2_ADDR = 0x30B20000,
87
+ return ret;
219
+ FSL_IMX7_USBMISC2_ADDR = 0x30B20200,
88
+ }
220
+ FSL_IMX7_USB3_ADDR = 0x30B30000,
89
+
221
+ FSL_IMX7_USBMISC3_ADDR = 0x30B30200,
90
+ /* Combine the S1 and S2 cache attributes. */
222
+ FSL_IMX7_USBMISCn_SIZE = 0x200,
91
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
223
+
92
+ if (hcr & HCR_DC) {
224
+ FSL_IMX7_USDHC1_ADDR = 0x30B40000,
93
+ /*
225
+ FSL_IMX7_USDHC2_ADDR = 0x30B50000,
94
+ * HCR.DC forces the first stage attributes to
226
+ FSL_IMX7_USDHC3_ADDR = 0x30B60000,
95
+ * Normal Non-Shareable,
227
+
96
+ * Inner Write-Back Read-Allocate Write-Allocate,
228
+ FSL_IMX7_SDMA_ADDR = 0x30BD0000,
97
+ * Outer Write-Back Read-Allocate Write-Allocate.
229
+ FSL_IMX7_SDMA_SIZE = 0x1000,
98
+ * Do not overwrite Tagged within attrs.
230
+
99
+ */
231
+ FSL_IMX7_A7MPCORE_ADDR = 0x31000000,
100
+ if (cacheattrs1.attrs != 0xf0) {
232
+ FSL_IMX7_A7MPCORE_DAP_ADDR = 0x30000000,
101
+ cacheattrs1.attrs = 0xff;
233
+
102
+ }
234
+ FSL_IMX7_PCIE_REG_ADDR = 0x33800000,
103
+ cacheattrs1.shareability = 0;
235
+ FSL_IMX7_PCIE_REG_SIZE = 16 * 1024,
104
+ }
236
+
105
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
237
+ FSL_IMX7_GPR_ADDR = 0x30340000,
106
+ result->cacheattrs);
238
+};
239
+
240
+enum FslIMX7IRQs {
241
+ FSL_IMX7_USDHC1_IRQ = 22,
242
+ FSL_IMX7_USDHC2_IRQ = 23,
243
+ FSL_IMX7_USDHC3_IRQ = 24,
244
+
245
+ FSL_IMX7_UART1_IRQ = 26,
246
+ FSL_IMX7_UART2_IRQ = 27,
247
+ FSL_IMX7_UART3_IRQ = 28,
248
+ FSL_IMX7_UART4_IRQ = 29,
249
+ FSL_IMX7_UART5_IRQ = 30,
250
+ FSL_IMX7_UART6_IRQ = 16,
251
+
252
+ FSL_IMX7_ECSPI1_IRQ = 31,
253
+ FSL_IMX7_ECSPI2_IRQ = 32,
254
+ FSL_IMX7_ECSPI3_IRQ = 33,
255
+ FSL_IMX7_ECSPI4_IRQ = 34,
256
+
257
+ FSL_IMX7_I2C1_IRQ = 35,
258
+ FSL_IMX7_I2C2_IRQ = 36,
259
+ FSL_IMX7_I2C3_IRQ = 37,
260
+ FSL_IMX7_I2C4_IRQ = 38,
261
+
262
+ FSL_IMX7_USB1_IRQ = 43,
263
+ FSL_IMX7_USB2_IRQ = 42,
264
+ FSL_IMX7_USB3_IRQ = 40,
265
+
266
+ FSL_IMX7_PCI_INTA_IRQ = 122,
267
+ FSL_IMX7_PCI_INTB_IRQ = 123,
268
+ FSL_IMX7_PCI_INTC_IRQ = 124,
269
+ FSL_IMX7_PCI_INTD_IRQ = 125,
270
+
271
+ FSL_IMX7_UART7_IRQ = 126,
272
+
273
+#define FSL_IMX7_ENET_IRQ(i, n) ((n) + ((i) ? 100 : 118))
274
+
275
+ FSL_IMX7_MAX_IRQ = 128,
276
+};
277
+
278
+#endif /* FSL_IMX7_H */
279
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/arm/fsl-imx7.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Copyright (c) 2018, Impinj, Inc.
+ *
+ * i.MX7 SoC definitions
+ *
+ * Author: Andrey Smirnov <andrew.smirnov@gmail.com>
+ *
+ * Based on hw/arm/fsl-imx6.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu-common.h"
+#include "hw/arm/fsl-imx7.h"
+#include "hw/misc/unimp.h"
+#include "sysemu/sysemu.h"
+#include "qemu/error-report.h"
+
+#define NAME_SIZE 20
+
+static void fsl_imx7_init(Object *obj)
+{
+ BusState *sysbus = sysbus_get_default();
+ FslIMX7State *s = FSL_IMX7(obj);
+ char name[NAME_SIZE];
+ int i;
+
+ if (smp_cpus > FSL_IMX7_NUM_CPUS) {
+ error_report("%s: Only %d CPUs are supported (%d requested)",
+ TYPE_FSL_IMX7, FSL_IMX7_NUM_CPUS, smp_cpus);
+ exit(1);
+ }
+
+ for (i = 0; i < smp_cpus; i++) {
+ object_initialize(&s->cpu[i], sizeof(s->cpu[i]),
+ ARM_CPU_TYPE_NAME("cortex-a7"));
+ snprintf(name, NAME_SIZE, "cpu%d", i);
+ object_property_add_child(obj, name, OBJECT(&s->cpu[i]),
+ &error_fatal);
+ }
+
+ /*
+ * A7MPCORE
+ */
+ object_initialize(&s->a7mpcore, sizeof(s->a7mpcore), TYPE_A15MPCORE_PRIV);
+ qdev_set_parent_bus(DEVICE(&s->a7mpcore), sysbus);
+ object_property_add_child(obj, "a7mpcore",
+ OBJECT(&s->a7mpcore), &error_fatal);
+
+ /*
+ * GPIOs 1 to 7
+ */
+ for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
+ object_initialize(&s->gpio[i], sizeof(s->gpio[i]),
+ TYPE_IMX_GPIO);
+ qdev_set_parent_bus(DEVICE(&s->gpio[i]), sysbus);
+ snprintf(name, NAME_SIZE, "gpio%d", i);
+ object_property_add_child(obj, name,
+ OBJECT(&s->gpio[i]), &error_fatal);
+ }
+
+ /*
+ * GPT1, 2, 3, 4
+ */
+ for (i = 0; i < FSL_IMX7_NUM_GPTS; i++) {
+ object_initialize(&s->gpt[i], sizeof(s->gpt[i]), TYPE_IMX7_GPT);
+ qdev_set_parent_bus(DEVICE(&s->gpt[i]), sysbus);
+ snprintf(name, NAME_SIZE, "gpt%d", i);
+ object_property_add_child(obj, name, OBJECT(&s->gpt[i]),
+ &error_fatal);
+ }
+
+ /*
+ * CCM
+ */
+ object_initialize(&s->ccm, sizeof(s->ccm), TYPE_IMX7_CCM);
+ qdev_set_parent_bus(DEVICE(&s->ccm), sysbus);
+ object_property_add_child(obj, "ccm", OBJECT(&s->ccm), &error_fatal);
+
+ /*
+ * Analog
+ */
+ object_initialize(&s->analog, sizeof(s->analog), TYPE_IMX7_ANALOG);
+ qdev_set_parent_bus(DEVICE(&s->analog), sysbus);
+ object_property_add_child(obj, "analog", OBJECT(&s->analog), &error_fatal);
+
+ /*
+ * GPCv2
+ */
+ object_initialize(&s->gpcv2, sizeof(s->gpcv2), TYPE_IMX_GPCV2);
+ qdev_set_parent_bus(DEVICE(&s->gpcv2), sysbus);
+ object_property_add_child(obj, "gpcv2", OBJECT(&s->gpcv2), &error_fatal);
+
+ for (i = 0; i < FSL_IMX7_NUM_ECSPIS; i++) {
+ object_initialize(&s->spi[i], sizeof(s->spi[i]), TYPE_IMX_SPI);
+ qdev_set_parent_bus(DEVICE(&s->spi[i]), sysbus_get_default());
+ snprintf(name, NAME_SIZE, "spi%d", i + 1);
+ object_property_add_child(obj, name, OBJECT(&s->spi[i]), NULL);
+ }
+
+
+ for (i = 0; i < FSL_IMX7_NUM_I2CS; i++) {
+ object_initialize(&s->i2c[i], sizeof(s->i2c[i]), TYPE_IMX_I2C);
+ qdev_set_parent_bus(DEVICE(&s->i2c[i]), sysbus_get_default());
+ snprintf(name, NAME_SIZE, "i2c%d", i + 1);
+ object_property_add_child(obj, name, OBJECT(&s->i2c[i]), NULL);
+ }
+
+ /*
+ * UART
+ */
+ for (i = 0; i < FSL_IMX7_NUM_UARTS; i++) {
+ object_initialize(&s->uart[i], sizeof(s->uart[i]), TYPE_IMX_SERIAL);
+ qdev_set_parent_bus(DEVICE(&s->uart[i]), sysbus);
+ snprintf(name, NAME_SIZE, "uart%d", i);
+ object_property_add_child(obj, name, OBJECT(&s->uart[i]),
+ &error_fatal);
+ }
+
+ /*
+ * Ethernet
+ */
+ for (i = 0; i < FSL_IMX7_NUM_ETHS; i++) {
+ object_initialize(&s->eth[i], sizeof(s->eth[i]), TYPE_IMX_ENET);
+ qdev_set_parent_bus(DEVICE(&s->eth[i]), sysbus);
+ snprintf(name, NAME_SIZE, "eth%d", i);
+ object_property_add_child(obj, name, OBJECT(&s->eth[i]),
+ &error_fatal);
+ }
+
+ /*
+ * SDHCI
+ */
+ for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
+ object_initialize(&s->usdhc[i], sizeof(s->usdhc[i]),
+ TYPE_IMX_USDHC);
+ qdev_set_parent_bus(DEVICE(&s->usdhc[i]), sysbus);
+ snprintf(name, NAME_SIZE, "usdhc%d", i);
+ object_property_add_child(obj, name, OBJECT(&s->usdhc[i]),
+ &error_fatal);
+ }
+
+ /*
+ * SNVS
+ */
+ object_initialize(&s->snvs, sizeof(s->snvs), TYPE_IMX7_SNVS);
+ qdev_set_parent_bus(DEVICE(&s->snvs), sysbus);
+ object_property_add_child(obj, "snvs", OBJECT(&s->snvs), &error_fatal);
+
+ /*
+ * Watchdog
+ */
+ for (i = 0; i < FSL_IMX7_NUM_WDTS; i++) {
+ object_initialize(&s->wdt[i], sizeof(s->wdt[i]), TYPE_IMX2_WDT);
+ qdev_set_parent_bus(DEVICE(&s->wdt[i]), sysbus);
+ snprintf(name, NAME_SIZE, "wdt%d", i);
+ object_property_add_child(obj, name, OBJECT(&s->wdt[i]),
+ &error_fatal);
+ }
+
+ /*
+ * GPR
+ */
+ object_initialize(&s->gpr, sizeof(s->gpr), TYPE_IMX7_GPR);
+ qdev_set_parent_bus(DEVICE(&s->gpr), sysbus);
+ object_property_add_child(obj, "gpr", OBJECT(&s->gpr), &error_fatal);
+
+ object_initialize(&s->pcie, sizeof(s->pcie), TYPE_DESIGNWARE_PCIE_HOST);
+ qdev_set_parent_bus(DEVICE(&s->pcie), sysbus);
+ object_property_add_child(obj, "pcie", OBJECT(&s->pcie), &error_fatal);
+
+ for (i = 0; i < FSL_IMX7_NUM_USBS; i++) {
+ object_initialize(&s->usb[i],
+ sizeof(s->usb[i]), TYPE_CHIPIDEA);
+ qdev_set_parent_bus(DEVICE(&s->usb[i]), sysbus);
+ snprintf(name, NAME_SIZE, "usb%d", i);
+ object_property_add_child(obj, name,
+ OBJECT(&s->usb[i]), &error_fatal);
+ }
+}
+
+static void fsl_imx7_realize(DeviceState *dev, Error **errp)
+{
+ FslIMX7State *s = FSL_IMX7(dev);
+ Object *o;
+ int i;
+ qemu_irq irq;
+ char name[NAME_SIZE];
+
+ for (i = 0; i < smp_cpus; i++) {
+ o = OBJECT(&s->cpu[i]);
+
+ object_property_set_int(o, QEMU_PSCI_CONDUIT_SMC,
+ "psci-conduit", &error_abort);
+
+ /* On uniprocessor, the CBAR is set to 0 */
+ if (smp_cpus > 1) {
+ object_property_set_int(o, FSL_IMX7_A7MPCORE_ADDR,
+ "reset-cbar", &error_abort);
+ }
+
+ if (i) {
+ /* Secondary CPUs start in PSCI powered-down state */
+ object_property_set_bool(o, true,
+ "start-powered-off", &error_abort);
+ }
+
+ object_property_set_bool(o, true, "realized", &error_abort);
+ }
+
+ /*
+ * A7MPCORE
+ */
+ object_property_set_int(OBJECT(&s->a7mpcore), smp_cpus, "num-cpu",
+ &error_abort);
+ object_property_set_int(OBJECT(&s->a7mpcore),
+ FSL_IMX7_MAX_IRQ + GIC_INTERNAL,
+ "num-irq", &error_abort);
+
+ object_property_set_bool(OBJECT(&s->a7mpcore), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->a7mpcore), 0, FSL_IMX7_A7MPCORE_ADDR);
+
+ for (i = 0; i < smp_cpus; i++) {
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->a7mpcore);
+ DeviceState *d = DEVICE(qemu_get_cpu(i));
+
+ irq = qdev_get_gpio_in(d, ARM_CPU_IRQ);
+ sysbus_connect_irq(sbd, i, irq);
+ irq = qdev_get_gpio_in(d, ARM_CPU_FIQ);
+ sysbus_connect_irq(sbd, i + smp_cpus, irq);
+ }
+
+ /*
+ * A7MPCORE DAP
+ */
+ create_unimplemented_device("a7mpcore-dap", FSL_IMX7_A7MPCORE_DAP_ADDR,
+ 0x100000);
+
+ /*
+ * GPT1, 2, 3, 4
+ */
+ for (i = 0; i < FSL_IMX7_NUM_GPTS; i++) {
+ static const hwaddr FSL_IMX7_GPTn_ADDR[FSL_IMX7_NUM_GPTS] = {
+ FSL_IMX7_GPT1_ADDR,
+ FSL_IMX7_GPT2_ADDR,
+ FSL_IMX7_GPT3_ADDR,
+ FSL_IMX7_GPT4_ADDR,
+ };
+
+ s->gpt[i].ccm = IMX_CCM(&s->ccm);
+ object_property_set_bool(OBJECT(&s->gpt[i]), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpt[i]), 0, FSL_IMX7_GPTn_ADDR[i]);
+ }
+
+ for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
+ static const hwaddr FSL_IMX7_GPIOn_ADDR[FSL_IMX7_NUM_GPIOS] = {
+ FSL_IMX7_GPIO1_ADDR,
+ FSL_IMX7_GPIO2_ADDR,
+ FSL_IMX7_GPIO3_ADDR,
+ FSL_IMX7_GPIO4_ADDR,
+ FSL_IMX7_GPIO5_ADDR,
+ FSL_IMX7_GPIO6_ADDR,
+ FSL_IMX7_GPIO7_ADDR,
+ };
+
+ object_property_set_bool(OBJECT(&s->gpio[i]), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0, FSL_IMX7_GPIOn_ADDR[i]);
+ }
+
+ /*
+ * IOMUXC and IOMUXC_LPSR
+ */
+ for (i = 0; i < FSL_IMX7_NUM_IOMUXCS; i++) {
+ static const hwaddr FSL_IMX7_IOMUXCn_ADDR[FSL_IMX7_NUM_IOMUXCS] = {
+ FSL_IMX7_IOMUXC_ADDR,
+ FSL_IMX7_IOMUXC_LPSR_ADDR,
+ };
+
+ snprintf(name, NAME_SIZE, "iomuxc%d", i);
+ create_unimplemented_device(name, FSL_IMX7_IOMUXCn_ADDR[i],
+ FSL_IMX7_IOMUXCn_SIZE);
+ }
+
+ /*
+ * CCM
+ */
+ object_property_set_bool(OBJECT(&s->ccm), true, "realized", &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->ccm), 0, FSL_IMX7_CCM_ADDR);
+
+ /*
+ * Analog
+ */
+ object_property_set_bool(OBJECT(&s->analog), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->analog), 0, FSL_IMX7_ANALOG_ADDR);
+
+ /*
+ * GPCv2
+ */
+ object_property_set_bool(OBJECT(&s->gpcv2), true,
+ "realized", &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpcv2), 0, FSL_IMX7_GPC_ADDR);
+
+ /* Initialize all ECSPI */
+ for (i = 0; i < FSL_IMX7_NUM_ECSPIS; i++) {
+ static const hwaddr FSL_IMX7_SPIn_ADDR[FSL_IMX7_NUM_ECSPIS] = {
+ FSL_IMX7_ECSPI1_ADDR,
+ FSL_IMX7_ECSPI2_ADDR,
+ FSL_IMX7_ECSPI3_ADDR,
+ FSL_IMX7_ECSPI4_ADDR,
+ };
+
+ static const hwaddr FSL_IMX7_SPIn_IRQ[FSL_IMX7_NUM_ECSPIS] = {
+ FSL_IMX7_ECSPI1_IRQ,
+ FSL_IMX7_ECSPI2_IRQ,
+ FSL_IMX7_ECSPI3_IRQ,
+ FSL_IMX7_ECSPI4_IRQ,
+ };
+
+ /* Initialize the SPI */
+ object_property_set_bool(OBJECT(&s->spi[i]), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->spi[i]), 0,
+ FSL_IMX7_SPIn_ADDR[i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->spi[i]), 0,
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
+ FSL_IMX7_SPIn_IRQ[i]));
+ }
+
+ for (i = 0; i < FSL_IMX7_NUM_I2CS; i++) {
+ static const hwaddr FSL_IMX7_I2Cn_ADDR[FSL_IMX7_NUM_I2CS] = {
+ FSL_IMX7_I2C1_ADDR,
+ FSL_IMX7_I2C2_ADDR,
+ FSL_IMX7_I2C3_ADDR,
+ FSL_IMX7_I2C4_ADDR,
+ };
+
+ static const hwaddr FSL_IMX7_I2Cn_IRQ[FSL_IMX7_NUM_I2CS] = {
+ FSL_IMX7_I2C1_IRQ,
+ FSL_IMX7_I2C2_IRQ,
+ FSL_IMX7_I2C3_IRQ,
+ FSL_IMX7_I2C4_IRQ,
+ };
+
+ object_property_set_bool(OBJECT(&s->i2c[i]), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->i2c[i]), 0, FSL_IMX7_I2Cn_ADDR[i]);
+
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->i2c[i]), 0,
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
+ FSL_IMX7_I2Cn_IRQ[i]));
+ }
+
+ /*
+ * UART
+ */
+ for (i = 0; i < FSL_IMX7_NUM_UARTS; i++) {
+ static const hwaddr FSL_IMX7_UARTn_ADDR[FSL_IMX7_NUM_UARTS] = {
+ FSL_IMX7_UART1_ADDR,
+ FSL_IMX7_UART2_ADDR,
+ FSL_IMX7_UART3_ADDR,
+ FSL_IMX7_UART4_ADDR,
+ FSL_IMX7_UART5_ADDR,
+ FSL_IMX7_UART6_ADDR,
+ FSL_IMX7_UART7_ADDR,
+ };
+
+ static const int FSL_IMX7_UARTn_IRQ[FSL_IMX7_NUM_UARTS] = {
+ FSL_IMX7_UART1_IRQ,
+ FSL_IMX7_UART2_IRQ,
+ FSL_IMX7_UART3_IRQ,
+ FSL_IMX7_UART4_IRQ,
+ FSL_IMX7_UART5_IRQ,
+ FSL_IMX7_UART6_IRQ,
+ FSL_IMX7_UART7_IRQ,
+ };
+
+
+ if (i < MAX_SERIAL_PORTS) {
+ qdev_prop_set_chr(DEVICE(&s->uart[i]), "chardev", serial_hds[i]);
+ }
+
+ object_property_set_bool(OBJECT(&s->uart[i]), true, "realized",
+ &error_abort);
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->uart[i]), 0, FSL_IMX7_UARTn_ADDR[i]);
+
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_UARTn_IRQ[i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->uart[i]), 0, irq);
+ }
+
+ /*
+ * Ethernet
+ */
+ for (i = 0; i < FSL_IMX7_NUM_ETHS; i++) {
+ static const hwaddr FSL_IMX7_ENETn_ADDR[FSL_IMX7_NUM_ETHS] = {
+ FSL_IMX7_ENET1_ADDR,
+ FSL_IMX7_ENET2_ADDR,
+ };
+
+ object_property_set_uint(OBJECT(&s->eth[i]), FSL_IMX7_ETH_NUM_TX_RINGS,
+ "tx-ring-num", &error_abort);
+ qdev_set_nic_properties(DEVICE(&s->eth[i]), &nd_table[i]);
+ object_property_set_bool(OBJECT(&s->eth[i]), true, "realized",
+ &error_abort);
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->eth[i]), 0, FSL_IMX7_ENETn_ADDR[i]);
+
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_ENET_IRQ(i, 0));
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->eth[i]), 0, irq);
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_ENET_IRQ(i, 3));
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->eth[i]), 1, irq);
+ }
+
+ /*
+ * USDHC
+ */
+ for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
+ static const hwaddr FSL_IMX7_USDHCn_ADDR[FSL_IMX7_NUM_USDHCS] = {
+ FSL_IMX7_USDHC1_ADDR,
+ FSL_IMX7_USDHC2_ADDR,
+ FSL_IMX7_USDHC3_ADDR,
+ };
+
+ static const int FSL_IMX7_USDHCn_IRQ[FSL_IMX7_NUM_USDHCS] = {
+ FSL_IMX7_USDHC1_IRQ,
+ FSL_IMX7_USDHC2_IRQ,
+ FSL_IMX7_USDHC3_IRQ,
+ };
+
+ object_property_set_bool(OBJECT(&s->usdhc[i]), true, "realized",
+ &error_abort);
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->usdhc[i]), 0,
+ FSL_IMX7_USDHCn_ADDR[i]);
+
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_USDHCn_IRQ[i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->usdhc[i]), 0, irq);
+ }
+
+ /*
+ * SNVS
+ */
+ object_property_set_bool(OBJECT(&s->snvs), true, "realized", &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX7_SNVS_ADDR);
+
+ /*
+ * SRC
+ */
+ create_unimplemented_device("sdma", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
+
+ /*
+ * Watchdog
+ */
+ for (i = 0; i < FSL_IMX7_NUM_WDTS; i++) {
+ static const hwaddr FSL_IMX7_WDOGn_ADDR[FSL_IMX7_NUM_WDTS] = {
+ FSL_IMX7_WDOG1_ADDR,
+ FSL_IMX7_WDOG2_ADDR,
+ FSL_IMX7_WDOG3_ADDR,
+ FSL_IMX7_WDOG4_ADDR,
+ };
+
+ object_property_set_bool(OBJECT(&s->wdt[i]), true, "realized",
+ &error_abort);
+
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->wdt[i]), 0, FSL_IMX7_WDOGn_ADDR[i]);
+ }
+
+ /*
+ * SDMA
+ */
+ create_unimplemented_device("sdma", FSL_IMX7_SDMA_ADDR, FSL_IMX7_SDMA_SIZE);
+
+
+ object_property_set_bool(OBJECT(&s->gpr), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX7_GPR_ADDR);
+
+ object_property_set_bool(OBJECT(&s->pcie), true,
+ "realized", &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->pcie), 0, FSL_IMX7_PCIE_REG_ADDR);
+
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_PCI_INTA_IRQ);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->pcie), 0, irq);
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_PCI_INTB_IRQ);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->pcie), 1, irq);
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_PCI_INTC_IRQ);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->pcie), 2, irq);
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_PCI_INTD_IRQ);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->pcie), 3, irq);
+
+
+ for (i = 0; i < FSL_IMX7_NUM_USBS; i++) {
+ static const hwaddr FSL_IMX7_USBMISCn_ADDR[FSL_IMX7_NUM_USBS] = {
+ FSL_IMX7_USBMISC1_ADDR,
+ FSL_IMX7_USBMISC2_ADDR,
+ FSL_IMX7_USBMISC3_ADDR,
+ };
+
+ static const hwaddr FSL_IMX7_USBn_ADDR[FSL_IMX7_NUM_USBS] = {
+ FSL_IMX7_USB1_ADDR,
+ FSL_IMX7_USB2_ADDR,
+ FSL_IMX7_USB3_ADDR,
+ };
+
+ static const hwaddr FSL_IMX7_USBn_IRQ[FSL_IMX7_NUM_USBS] = {
+ FSL_IMX7_USB1_IRQ,
+ FSL_IMX7_USB2_IRQ,
+ FSL_IMX7_USB3_IRQ,
+ };
+
+ object_property_set_bool(OBJECT(&s->usb[i]), true, "realized",
+ &error_abort);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->usb[i]), 0,
+ FSL_IMX7_USBn_ADDR[i]);
+
+ irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_USBn_IRQ[i]);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->usb[i]), 0, irq);
+
+ snprintf(name, NAME_SIZE, "usbmisc%d", i);
+ create_unimplemented_device(name, FSL_IMX7_USBMISCn_ADDR[i],
+ FSL_IMX7_USBMISCn_SIZE);
+ }
+
+ /*
+ * ADCs
+ */
+ for (i = 0; i < FSL_IMX7_NUM_ADCS; i++) {
+ static const hwaddr FSL_IMX7_ADCn_ADDR[FSL_IMX7_NUM_ADCS] = {
+ FSL_IMX7_ADC1_ADDR,
+ FSL_IMX7_ADC2_ADDR,
+ };
+
+ snprintf(name, NAME_SIZE, "adc%d", i);
+ create_unimplemented_device(name, FSL_IMX7_ADCn_ADDR[i],
+ FSL_IMX7_ADCn_SIZE);
+ }
+
+ /*
+ * LCD
+ */
+ create_unimplemented_device("lcdif", FSL_IMX7_LCDIF_ADDR,
+ FSL_IMX7_LCDIF_SIZE);
+}
+
+static void fsl_imx7_class_init(ObjectClass *oc, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(oc);
+
+ dc->realize = fsl_imx7_realize;
+
+ /* Reason: Uses serial_hds and nd_table in realize() directly */
+ dc->user_creatable = false;
+ dc->desc = "i.MX7 SOC";
+}
+
+static const TypeInfo fsl_imx7_type_info = {
+ .name = TYPE_FSL_IMX7,
+ .parent = TYPE_DEVICE,
+ .instance_size = sizeof(FslIMX7State),
+ .instance_init = fsl_imx7_init,
+ .class_init = fsl_imx7_class_init,
+};
+
+static void fsl_imx7_register_types(void)
+{
+ type_register_static(&fsl_imx7_type_info);
+}
+type_init(fsl_imx7_register_types)
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
index XXXXXXX..XXXXXXX 100644
--- a/default-configs/arm-softmmu.mak
+++ b/default-configs/arm-softmmu.mak
@@ -XXX,XX +XXX,XX @@ CONFIG_ALLWINNER_A10=y
CONFIG_FSL_IMX6=y
CONFIG_FSL_IMX31=y
CONFIG_FSL_IMX25=y
+CONFIG_FSL_IMX7=y

CONFIG_IMX_I2C=y
--
2.16.2

[From the 2022 pull request: the remaining hunk of the target/arm/ptw.c "split out get_phys_addr_twostage" patch, whose lines were interleaved with the patch above by the diff viewer.]

+
+ /*
+ * Check if IPA translates to secure or non-secure PA space.
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+ */
+ result->f.attrs.secure =
+ (is_secure
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+ && (ipa_secure
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
+ return 0;
+}
+
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
target_ulong address,
MMUAccessType access_type,
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
if (mmu_idx != s1_mmu_idx) {
/*
* Call ourselves recursively to do the stage 1 and then stage 2
- * translations if mmu_idx is a two-stage regime.
+ * translations if mmu_idx is a two-stage regime, and EL2 present.
+ * Otherwise, a stage1+stage2 translation is just stage 1.
*/
+ ptw->in_mmu_idx = mmu_idx = s1_mmu_idx;
if (arm_feature(env, ARM_FEATURE_EL2)) {
- hwaddr ipa;
- int s1_prot;
- int ret;
- bool ipa_secure, s2walk_secure;
- ARMCacheAttrs cacheattrs1;
- bool is_el0;
- uint64_t hcr;
-
- ptw->in_mmu_idx = s1_mmu_idx;
- ret = get_phys_addr_with_struct(env, ptw, address, access_type,
- result, fi);
-
- /* If S1 fails or S2 is disabled, return early. */
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
- is_secure)) {
- return ret;
- }
-
- ipa = result->f.phys_addr;
- ipa_secure = result->f.attrs.secure;
- if (is_secure) {
- /* Select TCR based on the NS bit from the S1 walk. */
- s2walk_secure = !(ipa_secure
- ? env->cp15.vstcr_el2 & VSTCR_SW
- : env->cp15.vtcr_el2 & VTCR_NSW);
- } else {
- assert(!ipa_secure);
- s2walk_secure = false;
- }
-
- ptw->in_mmu_idx =
- s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
- ptw->in_secure = s2walk_secure;
- is_el0 = mmu_idx == ARMMMUIdx_E10_0;
-
- /*
- * S1 is done, now do S2 translation.
- * Save the stage1 results so that we may merge
- * prot and cacheattrs later.
- */
- s1_prot = result->f.prot;
- cacheattrs1 = result->cacheattrs;
- memset(result, 0, sizeof(*result));
-
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
- is_el0, result, fi);
- fi->s2addr = ipa;
-
- /* Combine the S1 and S2 perms. */
- result->f.prot &= s1_prot;
-
- /* If S2 fails, return early. */
- if (ret) {
- return ret;
- }
-
- /* Combine the S1 and S2 cache attributes. */
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
- if (hcr & HCR_DC) {
- /*
- * HCR.DC forces the first stage attributes to
- * Normal Non-Shareable,
- * Inner Write-Back Read-Allocate Write-Allocate,
- * Outer Write-Back Read-Allocate Write-Allocate.
- * Do not overwrite Tagged within attrs.
- */
- if (cacheattrs1.attrs != 0xf0) {
- cacheattrs1.attrs = 0xff;
- }
- cacheattrs1.shareability = 0;
- }
- result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
- result->cacheattrs);
-
- /*
- * Check if IPA translates to secure or non-secure PA space.
- * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
- */
- result->f.attrs.secure =
- (is_secure
- && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
- && (ipa_secure
- || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
-
- return 0;
- } else {
- /*
- * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
- */
- mmu_idx = stage_1_mmu_idx(mmu_idx);
+ return get_phys_addr_twostage(env, ptw, address, access_type,
+ result, fi);
}
}
--
2.25.1
Add support for "-cpu max" for ARM guests. This CPU type behaves
like "-cpu host" when KVM is enabled, and like a system CPU with
the maximum possible feature set otherwise. (Note that this means
it won't be migratable across versions, as we will likely add
features to it in future.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180308130626.12393-4-peter.maydell@linaro.org
---
target/arm/cpu-qom.h | 2 ++
target/arm/cpu.c | 24 ++++++++++++++++++++++++
target/arm/cpu64.c | 21 +++++++++++++++++++++
3 files changed, 47 insertions(+)

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -XXX,XX +XXX,XX @@ struct arm_boot_info;
#define ARM_CPU_GET_CLASS(obj) \
OBJECT_GET_CLASS(ARMCPUClass, (obj), TYPE_ARM_CPU)

+#define TYPE_ARM_MAX_CPU "max-" TYPE_ARM_CPU
+
/**
* ARMCPUClass:
* @parent_realize: The parent class' realize handler.
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void pxa270c5_initfn(Object *obj)
cpu->reset_sctlr = 0x00000078;
}

+#ifndef TARGET_AARCH64
+/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
+ * otherwise, a CPU with as many features enabled as our emulation supports.
+ * The version of '-cpu max' for qemu-system-aarch64 is defined in cpu64.c;
+ * this only needs to handle 32 bits.
+ */
+static void arm_max_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ if (kvm_enabled()) {
+ kvm_arm_set_cpu_features_from_host(cpu);
+ } else {
+ cortex_a15_initfn(obj);
+ /* In future we might add feature bits here even if the
+ * real-world A15 doesn't implement them.
+ */
+ }
+}
+#endif
+
#ifdef CONFIG_USER_ONLY
static void arm_any_initfn(Object *obj)
{
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_cpus[] = {
{ .name = "pxa270-b1", .initfn = pxa270b1_initfn },
{ .name = "pxa270-c0", .initfn = pxa270c0_initfn },
{ .name = "pxa270-c5", .initfn = pxa270c5_initfn },
+#ifndef TARGET_AARCH64
+ { .name = "max", .initfn = arm_max_initfn },
+#endif
#ifdef CONFIG_USER_ONLY
{ .name = "any", .initfn = arm_any_initfn },
#endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/arm.h"
#include "sysemu/sysemu.h"
#include "sysemu/kvm.h"
+#include "kvm_arm.h"

static inline void set_feature(CPUARMState *env, int feature)
{
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
define_arm_cp_regs(cpu, cortex_a57_a53_cp_reginfo);
}

+/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
+ * otherwise, a CPU with as many features enabled as our emulation supports.
+ * The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
+ * this only needs to handle 64 bits.
+ */
+static void aarch64_max_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ if (kvm_enabled()) {
+ kvm_arm_set_cpu_features_from_host(cpu);
+ } else {
+ aarch64_a57_initfn(obj);
+ /* In future we might add feature bits here even if the
+ * real-world A57 doesn't implement them.
+ */
+ }
+}
+
#ifdef CONFIG_USER_ONLY
static void aarch64_any_initfn(Object *obj)
{
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCPUInfo {
static const ARMCPUInfo aarch64_cpus[] = {
{ .name = "cortex-a57", .initfn = aarch64_a57_initfn },
{ .name = "cortex-a53", .initfn = aarch64_a53_initfn },
+ { .name = "max", .initfn = aarch64_max_initfn },
#ifdef CONFIG_USER_ONLY
{ .name = "any", .initfn = aarch64_any_initfn },
#endif
--
2.16.2

[From the 2022 pull request: the target/arm/ptw.c bool-return-type patch, whose lines were interleaved with the patch above by the diff viewer.]

From: Richard Henderson <richard.henderson@linaro.org>

The return type of the functions is already bool, but in a few
instances we used an integer type with the return statement.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
result->f.lg_page_size = TARGET_PAGE_BITS;
result->cacheattrs.shareability = shareability;
result->cacheattrs.attrs = memattr;
- return 0;
+ return false;

static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
{
hwaddr ipa;
int s1_prot;
- int ret;
bool is_secure = ptw->in_secure;
- bool ipa_secure, s2walk_secure;
+ bool ret, ipa_secure, s2walk_secure;
ARMCacheAttrs cacheattrs1;
bool is_el0;
uint64_t hcr;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
&& (ipa_secure
|| !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));

- return 0;
+ return false;
}

static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
--
2.25.1
From: Andrey Smirnov <andrew.smirnov@gmail.com>

Implement code needed to set up emulation of MCIMX7SABRE board from
NXP. For more info about the HW see:

https://www.nxp.com/support/developer-resources/hardware-development-tools/sabre-development-system/sabre-board-for-smart-devices-based-on-the-i.mx-7dual-applications-processors:MCIMX7SABRE

Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Marcel Apfelbaum <marcel.apfelbaum@zoho.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Cc: yurovsky@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/Makefile.objs | 2 +-
hw/arm/mcimx7d-sabre.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 91 insertions(+), 1 deletion(-)
create mode 100644 hw/arm/mcimx7d-sabre.c

diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Makefile.objs
+++ b/hw/arm/Makefile.objs
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MPS2) += mps2.o
obj-$(CONFIG_MPS2) += mps2-tz.o
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
obj-$(CONFIG_IOTKIT) += iotkit.o
-obj-$(CONFIG_FSL_IMX7) += fsl-imx7.o
+obj-$(CONFIG_FSL_IMX7) += fsl-imx7.o mcimx7d-sabre.o
diff --git a/hw/arm/mcimx7d-sabre.c b/hw/arm/mcimx7d-sabre.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/arm/mcimx7d-sabre.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Copyright (c) 2018, Impinj, Inc.
+ *
+ * MCIMX7D_SABRE Board System emulation.
+ *
+ * Author: Andrey Smirnov <andrew.smirnov@gmail.com>
+ *
+ * This code is licensed under the GPL, version 2 or later.
+ * See the file `COPYING' in the top level directory.
+ *
+ * It (partially) emulates a mcimx7d_sabre board, with a Freescale
+ * i.MX7 SoC
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu-common.h"
+#include "hw/arm/fsl-imx7.h"
+#include "hw/boards.h"
+#include "sysemu/sysemu.h"
+#include "sysemu/device_tree.h"
+#include "qemu/error-report.h"
+#include "sysemu/qtest.h"
+#include "net/net.h"
+
+typedef struct {
+ FslIMX7State soc;
+ MemoryRegion ram;
+} MCIMX7Sabre;
+
+static void mcimx7d_sabre_init(MachineState *machine)
+{
+ static struct arm_boot_info boot_info;
+ MCIMX7Sabre *s = g_new0(MCIMX7Sabre, 1);
+ Object *soc;
+ int i;
+
+ if (machine->ram_size > FSL_IMX7_MMDC_SIZE) {
+ error_report("RAM size " RAM_ADDR_FMT " above max supported (%08x)",
+ machine->ram_size, FSL_IMX7_MMDC_SIZE);
+ exit(1);
+ }
+
+ boot_info = (struct arm_boot_info) {
+ .loader_start = FSL_IMX7_MMDC_ADDR,
+ .board_id = -1,
+ .ram_size = machine->ram_size,
+ .kernel_filename = machine->kernel_filename,
+ .kernel_cmdline = machine->kernel_cmdline,
+ .initrd_filename = machine->initrd_filename,
+ .nb_cpus = smp_cpus,
+ };
+
+ object_initialize(&s->soc, sizeof(s->soc), TYPE_FSL_IMX7);
+ soc = OBJECT(&s->soc);
+ object_property_add_child(OBJECT(machine), "soc", soc, &error_fatal);
+ object_property_set_bool(soc, true, "realized", &error_fatal);
+
+ memory_region_allocate_system_memory(&s->ram, NULL, "mcimx7d-sabre.ram",
+ machine->ram_size);
+ memory_region_add_subregion(get_system_memory(),
+ FSL_IMX7_MMDC_ADDR, &s->ram);
+
+ for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
+ BusState *bus;
+ DeviceState *carddev;
+ DriveInfo *di;
+ BlockBackend *blk;
+
+ di = drive_get_next(IF_SD);
+ blk = di ? blk_by_legacy_dinfo(di) : NULL;
+ bus = qdev_get_child_bus(DEVICE(&s->soc.usdhc[i]), "sd-bus");
+ carddev = qdev_create(bus, TYPE_SD_CARD);
+ qdev_prop_set_drive(carddev, "drive", blk, &error_fatal);
+ object_property_set_bool(OBJECT(carddev), true,
+ "realized", &error_fatal);
+ }
+
+ if (!qtest_enabled()) {
+ arm_load_kernel(&s->soc.cpu[0], &boot_info);
+ }
+}
+
+static void mcimx7d_sabre_machine_init(MachineClass *mc)
+{
+ mc->desc = "Freescale i.MX7 DUAL SABRE (Cortex A7)";
+ mc->init = mcimx7d_sabre_init;
+ mc->max_cpus = FSL_IMX7_NUM_CPUS;
+}
+DEFINE_MACHINE("mcimx7d-sabre", mcimx7d_sabre_machine_init)

[From the 2022 pull request: the curr_insn_len helper patch, whose lines were interleaved with the patch above by the diff viewer; it is truncated at the end of this chunk.]

From: Richard Henderson <richard.henderson@linaro.org>

A simple helper to retrieve the length of the current insn.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.h | 5 +++++
target/arm/translate-vfp.c | 2 +-
target/arm/translate.c | 5 ++---
3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline void disas_set_insn_syndrome(DisasContext *s, uint32_t syn)
s->insn_start = NULL;
}

+static inline int curr_insn_len(DisasContext *s)
+{
+ return s->base.pc_next - s->pc_curr;
+}
+
/* is_jmp field values */
#define DISAS_JUMP DISAS_TARGET_0 /* only pc was modified dynamically */
/* CPU state was modified dynamically; exit to main loop for interrupts. */
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
if (s->sme_trap_nonstreaming) {
gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
syn_smetrap(SME_ET_Streaming,
- s->base.pc_next - s->pc_curr == 2));
+ curr_insn_len(s) == 2));
return false;
}

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static ISSInfo make_issinfo(DisasContext *s, int rd, bool p, bool w)
/* ISS not valid if writeback */
if (p && !w) {
ret = rd;
- if (s->base.pc_next - s->pc_curr == 2) {
+ if (curr_insn_len(s) == 2) {
ret |= ISSIs16Bit;
55
}
56
} else {
57
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
58
/* nothing more to generate */
59
break;
60
case DISAS_WFI:
61
- gen_helper_wfi(cpu_env,
62
- tcg_constant_i32(dc->base.pc_next - dc->pc_curr));
63
+ gen_helper_wfi(cpu_env, tcg_constant_i32(curr_insn_len(dc)));
64
/*
65
* The helper doesn't necessarily throw an exception, but we
66
* must go back to the main loop to check for interrupts anyway.
131
--
67
--
132
2.16.2
68
2.25.1
133
69
134
70
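[Editor's aside, not part of the patch: the helper introduced above is just a length accessor over the translator's bookkeeping. A minimal standalone sketch — the struct below is a stand-in for QEMU's real DisasContext, with only the two fields the helper reads:]

```c
#include <assert.h>
#include <stdint.h>

/*
 * The translator records the start of the current instruction (pc_curr)
 * and the address of the next one (base.pc_next), so the instruction
 * length is simply their difference: 2 for a 16-bit Thumb insn,
 * 4 for an A32/A64 insn.
 */
typedef struct {
    struct { uint64_t pc_next; } base;  /* stand-in for DisasContextBase */
    uint64_t pc_curr;
} FakeDisasContext;

static inline int curr_insn_len(FakeDisasContext *s)
{
    return (int)(s->base.pc_next - s->pc_curr);
}
```

[With this helper in place, the open-coded `s->base.pc_next - s->pc_curr` expressions in the hunks above collapse to `curr_insn_len(s)`.]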
From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 40 ++++++++++++++++++++------------------
 target/arm/translate.c     | 10 ++++++----
 2 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
     return translator_use_goto_tb(&s->base, dest);
 }

-static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
+static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
 {
+    uint64_t dest = s->pc_curr + diff;
+
     if (use_goto_tb(s, dest)) {
         tcg_gen_goto_tb(n);
         gen_a64_set_pc_im(dest);
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
  */
 static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
 {
-    uint64_t addr = s->pc_curr + sextract32(insn, 0, 26) * 4;
+    int64_t diff = sextract32(insn, 0, 26) * 4;

     if (insn & (1U << 31)) {
         /* BL Branch with link */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)

     /* B Branch / BL Branch with link */
     reset_btype(s);
-    gen_goto_tb(s, 0, addr);
+    gen_goto_tb(s, 0, diff);
 }

 /* Compare and branch (immediate)
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
 static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
 {
     unsigned int sf, op, rt;
-    uint64_t addr;
+    int64_t diff;
     TCGLabel *label_match;
     TCGv_i64 tcg_cmp;

     sf = extract32(insn, 31, 1);
     op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
     rt = extract32(insn, 0, 5);
-    addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
+    diff = sextract32(insn, 5, 19) * 4;

     tcg_cmp = read_cpu_reg(s, rt, sf);
     label_match = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
     tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
                         tcg_cmp, 0, label_match);

-    gen_goto_tb(s, 0, s->base.pc_next);
+    gen_goto_tb(s, 0, 4);
     gen_set_label(label_match);
-    gen_goto_tb(s, 1, addr);
+    gen_goto_tb(s, 1, diff);
 }

 /* Test and branch (immediate)
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
 static void disas_test_b_imm(DisasContext *s, uint32_t insn)
 {
     unsigned int bit_pos, op, rt;
-    uint64_t addr;
+    int64_t diff;
     TCGLabel *label_match;
     TCGv_i64 tcg_cmp;

     bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
     op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
-    addr = s->pc_curr + sextract32(insn, 5, 14) * 4;
+    diff = sextract32(insn, 5, 14) * 4;
     rt = extract32(insn, 0, 5);

     tcg_cmp = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
     tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
                         tcg_cmp, 0, label_match);
     tcg_temp_free_i64(tcg_cmp);
-    gen_goto_tb(s, 0, s->base.pc_next);
+    gen_goto_tb(s, 0, 4);
     gen_set_label(label_match);
-    gen_goto_tb(s, 1, addr);
+    gen_goto_tb(s, 1, diff);
 }

 /* Conditional branch (immediate)
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
 static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
 {
     unsigned int cond;
-    uint64_t addr;
+    int64_t diff;

     if ((insn & (1 << 4)) || (insn & (1 << 24))) {
         unallocated_encoding(s);
         return;
     }
-    addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
+    diff = sextract32(insn, 5, 19) * 4;
     cond = extract32(insn, 0, 4);

     reset_btype(s);
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
         /* genuinely conditional branches */
         TCGLabel *label_match = gen_new_label();
         arm_gen_test_cc(cond, label_match);
-        gen_goto_tb(s, 0, s->base.pc_next);
+        gen_goto_tb(s, 0, 4);
         gen_set_label(label_match);
-        gen_goto_tb(s, 1, addr);
+        gen_goto_tb(s, 1, diff);
     } else {
         /* 0xe and 0xf are both "always" conditions */
-        gen_goto_tb(s, 0, addr);
+        gen_goto_tb(s, 0, diff);
     }
 }

@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
          * any pending interrupts immediately.
          */
         reset_btype(s);
-        gen_goto_tb(s, 0, s->base.pc_next);
+        gen_goto_tb(s, 0, 4);
         return;

     case 7: /* SB */
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
          * MB and end the TB instead.
          */
         tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
-        gen_goto_tb(s, 0, s->base.pc_next);
+        gen_goto_tb(s, 0, 4);
         return;

     default:
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
     switch (dc->base.is_jmp) {
     case DISAS_NEXT:
     case DISAS_TOO_MANY:
-        gen_goto_tb(dc, 1, dc->base.pc_next);
+        gen_goto_tb(dc, 1, 4);
         break;
     default:
     case DISAS_UPDATE_EXIT:
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
  * cpu_loop_exec. Any live exit_requests will be processed as we
  * enter the next TB.
  */
-static void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
+static void gen_goto_tb(DisasContext *s, int n, int diff)
 {
+    target_ulong dest = s->pc_curr + diff;
+
     if (translator_use_goto_tb(&s->base, dest)) {
         tcg_gen_goto_tb(n);
         gen_set_pc_im(s, dest);
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
          * gen_jmp();
          * on the second call to gen_jmp().
          */
-        gen_goto_tb(s, tbno, dest);
+        gen_goto_tb(s, tbno, dest - s->pc_curr);
         break;
     case DISAS_UPDATE_NOCHAIN:
     case DISAS_UPDATE_EXIT:
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
     switch (dc->base.is_jmp) {
     case DISAS_NEXT:
     case DISAS_TOO_MANY:
-        gen_goto_tb(dc, 1, dc->base.pc_next);
+        gen_goto_tb(dc, 1, curr_insn_len(dc));
         break;
     case DISAS_UPDATE_NOCHAIN:
         gen_set_pc_im(dc, dc->base.pc_next);
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
             gen_set_pc_im(dc, dc->base.pc_next);
             gen_singlestep_exception(dc);
         } else {
-            gen_goto_tb(dc, 1, dc->base.pc_next);
+            gen_goto_tb(dc, 1, curr_insn_len(dc));
         }
     }
 }
--
2.25.1

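[Editor's aside, not part of the patch: the decode changes above stop computing an absolute branch target up front and instead carry the pc-relative displacement encoded in the instruction. A standalone sketch of the A64 B/BL case — `sextract32` mirrors QEMU's bitops helper, and `branch_dest` is a hypothetical name for illustration:]

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend the 'len'-bit field starting at bit 'start' (as in QEMU's
 * sextract32): shift the field to the top, then arithmetic-shift down. */
static int64_t sextract32(uint32_t value, int start, int len)
{
    return (int32_t)(value << (32 - len - start)) >> (32 - len);
}

/* A64 B/BL: imm26 in bits [25:0] is a signed word offset from the
 * *current* instruction, so the displacement alone identifies the
 * target relative to pc_curr. */
static uint64_t branch_dest(uint64_t pc_curr, uint32_t insn)
{
    int64_t diff = sextract32(insn, 0, 26) * 4;
    return pc_curr + diff;
}
```

[Keeping `diff` rather than `pc_curr + diff` is what lets the later TARGET_TB_PCREL work avoid baking absolute addresses into translated code.]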
From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on
absolute values by passing in pc difference.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a32.h |  2 +-
 target/arm/translate.h     |  6 ++--
 target/arm/translate-a64.c | 32 +++++++++---------
 target/arm/translate-vfp.c |  2 +-
 target/arm/translate.c     | 68 ++++++++++++++++++++------------------
 5 files changed, 56 insertions(+), 54 deletions(-)

diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a32.h
+++ b/target/arm/translate-a32.h
@@ -XXX,XX +XXX,XX @@ void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop);
 TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs);
 void gen_set_cpsr(TCGv_i32 var, uint32_t mask);
 void gen_set_condexec(DisasContext *s);
-void gen_set_pc_im(DisasContext *s, target_ulong val);
+void gen_update_pc(DisasContext *s, target_long diff);
 void gen_lookup_tb(DisasContext *s);
 long vfp_reg_offset(bool dp, unsigned reg);
 long neon_full_reg_offset(unsigned reg);
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline int curr_insn_len(DisasContext *s)
  * For instructions which want an immediate exit to the main loop, as opposed
  * to attempting to use lookup_and_goto_ptr. Unlike DISAS_UPDATE_EXIT, this
  * doesn't write the PC on exiting the translation loop so you need to ensure
- * something (gen_a64_set_pc_im or runtime helper) has done so before we reach
+ * something (gen_a64_update_pc or runtime helper) has done so before we reach
  * return from cpu_tb_exec.
  */
 #define DISAS_EXIT DISAS_TARGET_9
@@ -XXX,XX +XXX,XX @@ static inline int curr_insn_len(DisasContext *s)

 #ifdef TARGET_AARCH64
 void a64_translate_init(void);
-void gen_a64_set_pc_im(uint64_t val);
+void gen_a64_update_pc(DisasContext *s, target_long diff);
 extern const TranslatorOps aarch64_translator_ops;
 #else
 static inline void a64_translate_init(void)
 {
 }

-static inline void gen_a64_set_pc_im(uint64_t val)
+static inline void gen_a64_update_pc(DisasContext *s, target_long diff)
 {
 }
 #endif
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
     }
 }

-void gen_a64_set_pc_im(uint64_t val)
+void gen_a64_update_pc(DisasContext *s, target_long diff)
 {
-    tcg_gen_movi_i64(cpu_pc, val);
+    tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)

 static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
 {
-    gen_a64_set_pc_im(pc);
+    gen_a64_update_pc(s, pc - s->pc_curr);
     gen_exception_internal(excp);
     s->base.is_jmp = DISAS_NORETURN;
 }

 static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syndrome)
 {
-    gen_a64_set_pc_im(s->pc_curr);
+    gen_a64_update_pc(s, 0);
     gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syndrome));
     s->base.is_jmp = DISAS_NORETURN;
 }
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int64_t diff)

     if (use_goto_tb(s, dest)) {
         tcg_gen_goto_tb(n);
-        gen_a64_set_pc_im(dest);
+        gen_a64_update_pc(s, diff);
         tcg_gen_exit_tb(s->base.tb, n);
         s->base.is_jmp = DISAS_NORETURN;
     } else {
-        gen_a64_set_pc_im(dest);
+        gen_a64_update_pc(s, diff);
         if (s->ss_active) {
             gen_step_complete_exception(s);
         } else {
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
         uint32_t syndrome;

         syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
-        gen_a64_set_pc_im(s->pc_curr);
+        gen_a64_update_pc(s, 0);
         gen_helper_access_check_cp_reg(cpu_env,
                                        tcg_constant_ptr(ri),
                                        tcg_constant_i32(syndrome),
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
          * The readfn or writefn might raise an exception;
          * synchronize the CPU state in case it does.
          */
-        gen_a64_set_pc_im(s->pc_curr);
+        gen_a64_update_pc(s, 0);
     }

     /* Handle special cases first */
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
         /* The pre HVC helper handles cases when HVC gets trapped
          * as an undefined insn by runtime configuration.
          */
-        gen_a64_set_pc_im(s->pc_curr);
+        gen_a64_update_pc(s, 0);
         gen_helper_pre_hvc(cpu_env);
         gen_ss_advance(s);
         gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             break;
         }
-        gen_a64_set_pc_im(s->pc_curr);
+        gen_a64_update_pc(s, 0);
         gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
         gen_ss_advance(s);
         gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
          */
         switch (dc->base.is_jmp) {
         default:
-            gen_a64_set_pc_im(dc->base.pc_next);
+            gen_a64_update_pc(dc, 4);
             /* fall through */
         case DISAS_EXIT:
         case DISAS_JUMP:
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
             break;
         default:
         case DISAS_UPDATE_EXIT:
-            gen_a64_set_pc_im(dc->base.pc_next);
+            gen_a64_update_pc(dc, 4);
             /* fall through */
         case DISAS_EXIT:
             tcg_gen_exit_tb(NULL, 0);
             break;
         case DISAS_UPDATE_NOCHAIN:
-            gen_a64_set_pc_im(dc->base.pc_next);
+            gen_a64_update_pc(dc, 4);
             /* fall through */
         case DISAS_JUMP:
             tcg_gen_lookup_and_goto_ptr();
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
         case DISAS_SWI:
             break;
         case DISAS_WFE:
-            gen_a64_set_pc_im(dc->base.pc_next);
+            gen_a64_update_pc(dc, 4);
             gen_helper_wfe(cpu_env);
             break;
         case DISAS_YIELD:
-            gen_a64_set_pc_im(dc->base.pc_next);
+            gen_a64_update_pc(dc, 4);
             gen_helper_yield(cpu_env);
             break;
         case DISAS_WFI:
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
              * This is a special case because we don't want to just halt
              * the CPU if trying to debug across a WFI.
              */
-            gen_a64_set_pc_im(dc->base.pc_next);
+            gen_a64_update_pc(dc, 4);
             gen_helper_wfi(cpu_env, tcg_constant_i32(4));
             /*
              * The helper doesn't necessarily throw an exception, but we
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
     case ARM_VFP_FPSID:
         if (s->current_el == 1) {
             gen_set_condexec(s);
-            gen_set_pc_im(s, s->pc_curr);
+            gen_update_pc(s, 0);
             gen_helper_check_hcr_el2_trap(cpu_env,
                                           tcg_constant_i32(a->rt),
                                           tcg_constant_i32(a->reg));
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)
     }
 }

-void gen_set_pc_im(DisasContext *s, target_ulong val)
+void gen_update_pc(DisasContext *s, target_long diff)
 {
-    tcg_gen_movi_i32(cpu_R[15], val);
+    tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
 }

 /* Set PC and Thumb state from var. var is marked as dead. */
@@ -XXX,XX +XXX,XX @@ static inline void gen_bxns(DisasContext *s, int rm)

     /* The bxns helper may raise an EXCEPTION_EXIT exception, so in theory
      * we need to sync state before calling it, but:
-     *  - we don't need to do gen_set_pc_im() because the bxns helper will
+     *  - we don't need to do gen_update_pc() because the bxns helper will
      *    always set the PC itself
      *  - we don't need to do gen_set_condexec() because BXNS is UNPREDICTABLE
      *    unless it's outside an IT block or the last insn in an IT block,
@@ -XXX,XX +XXX,XX @@ static inline void gen_blxns(DisasContext *s, int rm)
      * We do however need to set the PC, because the blxns helper reads it.
      * The blxns helper may throw an exception.
      */
-    gen_set_pc_im(s, s->base.pc_next);
+    gen_update_pc(s, curr_insn_len(s));
     gen_helper_v7m_blxns(cpu_env, var);
     tcg_temp_free_i32(var);
     s->base.is_jmp = DISAS_EXIT;
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
      * as an undefined insn by runtime configuration (ie before
      * the insn really executes).
      */
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     gen_helper_pre_hvc(cpu_env);
     /* Otherwise we will treat this as a real exception which
      * happens after execution of the insn. (The distinction matters
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
      * for single stepping.)
      */
     s->svc_imm = imm16;
-    gen_set_pc_im(s, s->base.pc_next);
+    gen_update_pc(s, curr_insn_len(s));
     s->base.is_jmp = DISAS_HVC;
 }

@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
     /* As with HVC, we may take an exception either before or after
      * the insn executes.
      */
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa32_smc()));
-    gen_set_pc_im(s, s->base.pc_next);
+    gen_update_pc(s, curr_insn_len(s));
     s->base.is_jmp = DISAS_SMC;
 }

 static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
 {
     gen_set_condexec(s);
-    gen_set_pc_im(s, pc);
+    gen_update_pc(s, pc - s->pc_curr);
     gen_exception_internal(excp);
     s->base.is_jmp = DISAS_NORETURN;
 }
@@ -XXX,XX +XXX,XX @@ static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
                                     uint32_t syn, TCGv_i32 tcg_el)
 {
     if (s->aarch64) {
-        gen_a64_set_pc_im(pc);
+        gen_a64_update_pc(s, pc - s->pc_curr);
     } else {
         gen_set_condexec(s);
-        gen_set_pc_im(s, pc);
+        gen_update_pc(s, pc - s->pc_curr);
     }
     gen_exception_el_v(excp, syn, tcg_el);
     s->base.is_jmp = DISAS_NORETURN;
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
 void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
 {
     if (s->aarch64) {
-        gen_a64_set_pc_im(pc);
+        gen_a64_update_pc(s, pc - s->pc_curr);
     } else {
         gen_set_condexec(s);
-        gen_set_pc_im(s, pc);
+        gen_update_pc(s, pc - s->pc_curr);
     }
     gen_exception(excp, syn);
     s->base.is_jmp = DISAS_NORETURN;
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
 static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
 {
     gen_set_condexec(s);
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syn));
     s->base.is_jmp = DISAS_NORETURN;
 }
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)

     if (translator_use_goto_tb(&s->base, dest)) {
         tcg_gen_goto_tb(n);
-        gen_set_pc_im(s, dest);
+        gen_update_pc(s, diff);
         tcg_gen_exit_tb(s->base.tb, n);
     } else {
-        gen_set_pc_im(s, dest);
+        gen_update_pc(s, diff);
         gen_goto_ptr();
     }
     s->base.is_jmp = DISAS_NORETURN;
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
 /* Jump, specifying which TB number to use if we gen_goto_tb() */
 static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
 {
+    int diff = dest - s->pc_curr;
+
     if (unlikely(s->ss_active)) {
         /* An indirect jump so that we still trigger the debug exception.  */
-        gen_set_pc_im(s, dest);
+        gen_update_pc(s, diff);
         s->base.is_jmp = DISAS_JUMP;
         return;
     }
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
          * gen_jmp();
          * on the second call to gen_jmp().
          */
-        gen_goto_tb(s, tbno, dest - s->pc_curr);
+        gen_goto_tb(s, tbno, diff);
         break;
     case DISAS_UPDATE_NOCHAIN:
     case DISAS_UPDATE_EXIT:
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
          * Avoid using goto_tb so we really do exit back to the main loop
          * and don't chain to another TB.
          */
-        gen_set_pc_im(s, dest);
+        gen_update_pc(s, diff);
         gen_goto_ptr();
         s->base.is_jmp = DISAS_NORETURN;
         break;
@@ -XXX,XX +XXX,XX @@ static void gen_msr_banked(DisasContext *s, int r, int sysm, int rn)

     /* Sync state because msr_banked() can raise exceptions */
     gen_set_condexec(s);
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     tcg_reg = load_reg(s, rn);
     gen_helper_msr_banked(cpu_env, tcg_reg,
                           tcg_constant_i32(tgtmode),
@@ -XXX,XX +XXX,XX @@ static void gen_mrs_banked(DisasContext *s, int r, int sysm, int rn)

     /* Sync state because mrs_banked() can raise exceptions */
     gen_set_condexec(s);
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     tcg_reg = tcg_temp_new_i32();
     gen_helper_mrs_banked(tcg_reg, cpu_env,
                           tcg_constant_i32(tgtmode),
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
     }

     gen_set_condexec(s);
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     gen_helper_access_check_cp_reg(cpu_env,
                                    tcg_constant_ptr(ri),
                                    tcg_constant_i32(syndrome),
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
          * synchronize the CPU state in case it does.
          */
         gen_set_condexec(s);
-        gen_set_pc_im(s, s->pc_curr);
+        gen_update_pc(s, 0);
     }

     /* Handle special cases first */
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
             unallocated_encoding(s);
             return;
         }
-        gen_set_pc_im(s, s->base.pc_next);
+        gen_update_pc(s, curr_insn_len(s));
         s->base.is_jmp = DISAS_WFI;
         return;
     default:
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
     addr = tcg_temp_new_i32();
     /* get_r13_banked() will raise an exception if called from System mode */
     gen_set_condexec(s);
-    gen_set_pc_im(s, s->pc_curr);
+    gen_update_pc(s, 0);
     gen_helper_get_r13_banked(addr, cpu_env, tcg_constant_i32(mode));
     switch (amode) {
     case 0: /* DA */
@@ -XXX,XX +XXX,XX @@ static bool trans_YIELD(DisasContext *s, arg_YIELD *a)
      * scheduling of other vCPUs.
      */
     if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
-        gen_set_pc_im(s, s->base.pc_next);
+        gen_update_pc(s, curr_insn_len(s));
         s->base.is_jmp = DISAS_YIELD;
     }
     return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_WFE(DisasContext *s, arg_WFE *a)
      * implemented so we can't sleep like WFI does.
      */
     if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
-        gen_set_pc_im(s, s->base.pc_next);
+        gen_update_pc(s, curr_insn_len(s));
         s->base.is_jmp = DISAS_WFE;
     }
     return true;
@@ -XXX,XX +XXX,XX @@ static bool trans_WFE(DisasContext *s, arg_WFE *a)
 static bool trans_WFI(DisasContext *s, arg_WFI *a)
 {
     /* For WFI, halt the vCPU until an IRQ. */
-    gen_set_pc_im(s, s->base.pc_next);
+    gen_update_pc(s, curr_insn_len(s));
     s->base.is_jmp = DISAS_WFI;
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
         (a->imm == semihost_imm)) {
         gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
     } else {
-        gen_set_pc_im(s, s->base.pc_next);
+        gen_update_pc(s, curr_insn_len(s));
         s->svc_imm = a->imm;
         s->base.is_jmp = DISAS_SWI;
     }
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
     case DISAS_TOO_MANY:
     case DISAS_UPDATE_EXIT:
     case DISAS_UPDATE_NOCHAIN:
-        gen_set_pc_im(dc, dc->base.pc_next);
+        gen_update_pc(dc, curr_insn_len(dc));
         /* fall through */
     default:
         /* FIXME: Single stepping a WFI insn will not halt the CPU. */
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
         gen_goto_tb(dc, 1, curr_insn_len(dc));
         break;
     case DISAS_UPDATE_NOCHAIN:
-        gen_set_pc_im(dc, dc->base.pc_next);
+        gen_update_pc(dc, curr_insn_len(dc));
         /* fall through */
     case DISAS_JUMP:
         gen_goto_ptr();
         break;
     case DISAS_UPDATE_EXIT:
-        gen_set_pc_im(dc, dc->base.pc_next);
+        gen_update_pc(dc, curr_insn_len(dc));
         /* fall through */
     default:
         /* indicate that the hash table must be used to find the next TB */
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
         gen_set_label(dc->condlabel);
         gen_set_condexec(dc);
         if (unlikely(dc->ss_active)) {
-            gen_set_pc_im(dc, dc->base.pc_next);
+            gen_update_pc(dc, curr_insn_len(dc));
             gen_singlestep_exception(dc);
         } else {
             gen_goto_tb(dc, 1, curr_insn_len(dc));
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h        |  5 +++--
 target/arm/translate-a64.c    | 28 ++++++++++-------------
 target/arm/translate-m-nocp.c |  6 ++---
 target/arm/translate-mve.c    |  2 +-
 target/arm/translate-vfp.c    |  6 ++---
 target/arm/translate.c        | 42 +++++++++++++++++------------------
 6 files changed, 43 insertions(+), 46 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ void arm_jump_cc(DisasCompare *cmp, TCGLabel *label);
 void arm_gen_test_cc(int cc, TCGLabel *label);
 MemOp pow2_align(unsigned i);
 void unallocated_encoding(DisasContext *s);
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
                            uint32_t syn, uint32_t target_el);
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn);
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
+                        int excp, uint32_t syn);

 /* Return state of Alternate Half-precision flag, caller frees result */
 static inline TCGv_i32 get_ahp_flag(void)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
     assert(!s->fp_access_checked);
     s->fp_access_checked = true;

-    gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
+    gen_exception_insn_el(s, 0, EXCP_UDEF,
                           syn_fp_access_trap(1, 0xe, false, 0),
                           s->fp_excp_el);
     return false;
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
         return false;
     }
     if (s->sme_trap_nonstreaming && s->is_nonstreaming) {
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn(s, 0, EXCP_UDEF,
                            syn_smetrap(SME_ET_Streaming, false));
         return false;
     }
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
             goto fail_exit;
         }
     } else if (s->sve_excp_el) {
-        gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn_el(s, 0, EXCP_UDEF,
                               syn_sve_access_trap(), s->sve_excp_el);
         goto fail_exit;
     }
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
 static bool sme_access_check(DisasContext *s)
 {
     if (s->sme_excp_el) {
-        gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn_el(s, 0, EXCP_UDEF,
                               syn_smetrap(SME_ET_AccessTrap, false),
                               s->sme_excp_el);
         return false;
@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
         return false;
     }
     if (FIELD_EX64(req, SVCR, SM) && !s->pstate_sm) {
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn(s, 0, EXCP_UDEF,
                            syn_smetrap(SME_ET_NotStreaming, false));
         return false;
     }
     if (FIELD_EX64(req, SVCR, ZA) && !s->pstate_za) {
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn(s, 0, EXCP_UDEF,
                            syn_smetrap(SME_ET_InactiveZA, false));
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static void gen_sysreg_undef(DisasContext *s, bool isread,
     } else {
         syndrome = syn_uncategorized();
     }
-    gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syndrome);
+    gen_exception_insn(s, 0, EXCP_UDEF, syndrome);
 }

 /* MRS - move from system register
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
         switch (op2_ll) {
         case 1: /* SVC */
             gen_ss_advance(s);
-            gen_exception_insn(s, s->base.pc_next, EXCP_SWI,
-                               syn_aa64_svc(imm16));
+            gen_exception_insn(s, 4, EXCP_SWI, syn_aa64_svc(imm16));
             break;
         case 2: /* HVC */
             if (s->current_el == 0) {
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
             gen_a64_update_pc(s, 0);
             gen_helper_pre_hvc(cpu_env);
             gen_ss_advance(s);
-            gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
-                                  syn_aa64_hvc(imm16), 2);
+            gen_exception_insn_el(s, 4, EXCP_HVC, syn_aa64_hvc(imm16), 2);
             break;
         case 3: /* SMC */
             if (s->current_el == 0) {
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
             gen_a64_update_pc(s, 0);
             gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
             gen_ss_advance(s);
-            gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
-                                  syn_aa64_smc(imm16), 3);
+            gen_exception_insn_el(s, 4, EXCP_SMC, syn_aa64_smc(imm16), 3);
             break;
         default:
             unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
          * Illegal execution state. This has priority over BTI
          * exceptions, but comes after instruction abort exceptions.
          */
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
+        gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
         return;
     }

@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         if (s->btype != 0
             && s->guarded_page
             && !btype_destination_ok(insn, s->bt, s->btype)) {
-            gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
-                               syn_btitrap(s->btype));
+            gen_exception_insn(s, 0, EXCP_UDEF, syn_btitrap(s->btype));
             return;
         }
     } else {
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
     tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);

     if (s->fp_excp_el != 0) {
-        gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
+        gen_exception_insn_el(s, 0, EXCP_NOCP,
                               syn_uncategorized(), s->fp_excp_el);
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_NOCP(DisasContext *s, arg_nocp *a)
     }

     if (a->cp != 10) {
-        gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized());
+        gen_exception_insn(s, 0, EXCP_NOCP, syn_uncategorized());
         return true;
     }

     if (s->fp_excp_el != 0) {
-        gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
+        gen_exception_insn_el(s, 0, EXCP_NOCP,
                               syn_uncategorized(), s->fp_excp_el);
         return true;
     }
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-mve.c
+++ b/target/arm/translate-mve.c
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
         return true;
     default:
         /* Reserved value: INVSTATE UsageFault */
-        gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
+        gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
         return false;
     }
 }
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c
+++ b/target/arm/translate-vfp.c
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
         int coproc = arm_dc_feature(s, ARM_FEATURE_V8) ? 0 : 0xa;
         uint32_t syn = syn_fp_access_trap(1, 0xe, false, coproc);

-        gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF, syn, s->fp_excp_el);
+        gen_exception_insn_el(s, 0, EXCP_UDEF, syn, s->fp_excp_el);
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
      * appear to be any insns which touch VFP which are allowed.
      */
     if (s->sme_trap_nonstreaming) {
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn(s, 0, EXCP_UDEF,
                            syn_smetrap(SME_ET_Streaming,
                                        curr_insn_len(s) == 2));
         return false;
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
      * the encoding space handled by the patterns in m-nocp.decode,
      * and for them we may need to raise NOCP here.
      */
-    gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
+    gen_exception_insn_el(s, 0, EXCP_NOCP,
                           syn_uncategorized(), s->fp_excp_el);
     return false;
 }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_exception(int excp, uint32_t syndrome)
                           tcg_constant_i32(syndrome));
 }

-static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
-                                    uint32_t syn, TCGv_i32 tcg_el)
+static void gen_exception_insn_el_v(DisasContext *s, target_long pc_diff,
+                                    int excp, uint32_t syn, TCGv_i32 tcg_el)
 {
     if (s->aarch64) {
-        gen_a64_update_pc(s, pc - s->pc_curr);
+        gen_a64_update_pc(s, pc_diff);
     } else {
         gen_set_condexec(s);
-        gen_update_pc(s, pc - s->pc_curr);
+        gen_update_pc(s, pc_diff);
     }
     gen_exception_el_v(excp, syn, tcg_el);
     s->base.is_jmp = DISAS_NORETURN;
 }

-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
                            uint32_t syn, uint32_t target_el)
 {
-    gen_exception_insn_el_v(s, pc, excp, syn, tcg_constant_i32(target_el));
+    gen_exception_insn_el_v(s, pc_diff, excp, syn,
+                            tcg_constant_i32(target_el));
 }

-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
+                        int excp, uint32_t syn)
 {
     if (s->aarch64) {
-        gen_a64_update_pc(s, pc - s->pc_curr);
+        gen_a64_update_pc(s, pc_diff);
     } else {
         gen_set_condexec(s);
-        gen_update_pc(s, pc - s->pc_curr);
+        gen_update_pc(s, pc_diff);
     }
     gen_exception(excp, syn);
     s->base.is_jmp = DISAS_NORETURN;
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
 void unallocated_encoding(DisasContext *s)
 {
     /* Unallocated and reserved encodings are uncategorized */
-    gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
+    gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
 }

 /* Force a TB lookup after an instruction that changes the CPU state. */
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
             tcg_el = tcg_constant_i32(3);
         }

-        gen_exception_insn_el_v(s, s->pc_curr, EXCP_UDEF,
+        gen_exception_insn_el_v(s, 0, EXCP_UDEF,
                                 syn_uncategorized(), tcg_el);
         tcg_temp_free_i32(tcg_el);
         return false;
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,

 undef:
     /* If we get here then some access check did not pass */
-    gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
+    gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
     return false;
 }

@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
      * For the UNPREDICTABLE cases we choose to UNDEF.
      */
     if (s->current_el == 1 && !s->ns && mode == ARM_CPU_MODE_MON) {
-        gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
-                              syn_uncategorized(), 3);
+        gen_exception_insn_el(s, 0, EXCP_UDEF, syn_uncategorized(), 3);
         return;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
      * Do the check-and-raise-exception by hand.
      */
     if (s->fp_excp_el) {
-        gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
+        gen_exception_insn_el(s, 0, EXCP_NOCP,
                               syn_uncategorized(), s->fp_excp_el);
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
         tmp = load_cpu_field(v7m.ltpsize);
         tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
         tcg_temp_free_i32(tmp);
-        gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
+        gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
         gen_set_label(skipexc);
     }

@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
      * UsageFault exception.
      */
     if (arm_dc_feature(s, ARM_FEATURE_M)) {
-        gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
+        gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
         return;
     }

@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
          * Illegal execution state. This has priority over BTI
          * exceptions, but comes after instruction abort exceptions.
          */
-        gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
+        gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
         return;
     }

@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
          * Illegal execution state. This has priority over BTI
          * exceptions, but comes after instruction abort exceptions.
          */
-        gen_exception_insn(dc, dc->pc_curr, EXCP_UDEF, syn_illegalstate());
+        gen_exception_insn(dc, 0, EXCP_UDEF, syn_illegalstate());
         return;
     }

@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
          */
         tcg_remove_ops_after(dc->insn_eci_rewind);
         dc->condjmp = 0;
-        gen_exception_insn(dc, dc->pc_curr, EXCP_INVSTATE,
-                           syn_uncategorized());
+        gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
     }

     arm_post_translate_insn(dc);
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c |  6 +++---
 target/arm/translate.c     | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
     gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
 }

-static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
+static void gen_exception_internal_insn(DisasContext *s, int excp)
 {
-    gen_a64_update_pc(s, pc - s->pc_curr);
+    gen_a64_update_pc(s, 0);
     gen_exception_internal(excp);
     s->base.is_jmp = DISAS_NORETURN;
 }
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
          * Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
          */
         if (semihosting_enabled(s->current_el == 0) && imm16 == 0xf000) {
-            gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+            gen_exception_internal_insn(s, EXCP_SEMIHOST);
         } else {
             unallocated_encoding(s);
         }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
     s->base.is_jmp = DISAS_SMC;
 }

-static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
+static void gen_exception_internal_insn(DisasContext *s, int excp)
 {
     gen_set_condexec(s);
-    gen_update_pc(s, pc - s->pc_curr);
+    gen_update_pc(s, 0);
     gen_exception_internal(excp);
     s->base.is_jmp = DISAS_NORETURN;
 }
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
      */
     if (semihosting_enabled(s->current_el != 0) &&
         (imm == (s->thumb ? 0x3c : 0xf000))) {
-        gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+        gen_exception_internal_insn(s, EXCP_SEMIHOST);
         return;
     }
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_BKPT(DisasContext *s, arg_BKPT *a)
     if (arm_dc_feature(s, ARM_FEATURE_M) &&
         semihosting_enabled(s->current_el == 0) &&
         (a->imm == 0xab)) {
-        gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+        gen_exception_internal_insn(s, EXCP_SEMIHOST);
     } else {
         gen_exception_bkpt_insn(s, syn_aa32_bkpt(a->imm, false));
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
     if (!arm_dc_feature(s, ARM_FEATURE_M) &&
         semihosting_enabled(s->current_el == 0) &&
         (a->imm == semihost_imm)) {
-        gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+        gen_exception_internal_insn(s, EXCP_SEMIHOST);
     } else {
         gen_update_pc(s, curr_insn_len(s));
         s->svc_imm = a->imm;
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static uint32_t read_pc(DisasContext *s)
     return s->pc_curr + (s->thumb ? 4 : 8);
 }

+/* The pc_curr difference for an architectural jump. */
+static target_long jmp_diff(DisasContext *s, target_long diff)
+{
+    return diff + (s->thumb ? 4 : 8);
+}
+
 /* Set a variable to the value of a CPU register. */
 void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
 {
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
  * cpu_loop_exec. Any live exit_requests will be processed as we
  * enter the next TB.
  */
-static void gen_goto_tb(DisasContext *s, int n, int diff)
+static void gen_goto_tb(DisasContext *s, int n, target_long diff)
 {
     target_ulong dest = s->pc_curr + diff;

@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
 }

 /* Jump, specifying which TB number to use if we gen_goto_tb() */
-static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
+static void gen_jmp_tb(DisasContext *s, target_long diff, int tbno)
 {
-    int diff = dest - s->pc_curr;
-
     if (unlikely(s->ss_active)) {
         /* An indirect jump so that we still trigger the debug exception. */
         gen_update_pc(s, diff);
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
     }
 }

-static inline void gen_jmp(DisasContext *s, uint32_t dest)
+static inline void gen_jmp(DisasContext *s, target_long diff)
 {
-    gen_jmp_tb(s, dest, 0);
+    gen_jmp_tb(s, diff, 0);
 }

 static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)

 static bool trans_B(DisasContext *s, arg_i *a)
 {
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
         return true;
     }
     arm_skip_unless(s, a->cond);
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

 static bool trans_BL(DisasContext *s, arg_i *a)
 {
     tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
     }
     tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
     store_cpu_field_constant(!s->thumb, thumb);
-    gen_jmp(s, (read_pc(s) & ~3) + a->imm);
+    /* This jump is computed from an aligned PC: subtract off the low bits. */
+    gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
      * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
      */
199
+
200
+ if (l->extra_base) {
201
+ /* Once we have begun an extra space, all allocations go there. */
202
+ l->extra_size += this_size;
203
+ } else if (this_size + this_loc > std_size) {
204
+ /* This allocation does not fit in the standard space. */
205
+ /* Allocate the extra record. */
206
+ l->extra_ofs = this_loc;
207
+ l->total_size += sizeof(struct target_extra_context);
208
+
209
+ /* Allocate the standard end record. */
210
+ l->std_end_ofs = l->total_size;
211
+ l->total_size += sizeof(struct target_aarch64_ctx);
212
+
213
+ /* Allocate the requested record. */
214
+ l->extra_base = this_loc = l->total_size;
215
+ l->extra_size = this_size;
216
+ }
217
+ l->total_size += this_size;
218
+
219
+ return this_loc;
220
+}
221
+
222
static void target_setup_frame(int usig, struct target_sigaction *ka,
223
target_siginfo_t *info, target_sigset_t *set,
224
CPUARMState *env)
225
{
226
- int size = offsetof(struct target_rt_sigframe, uc.tuc_mcontext.__reserved);
227
- int fpsimd_ofs, end1_ofs, fr_ofs, end2_ofs = 0;
228
- int extra_ofs = 0, extra_base = 0, extra_size = 0;
229
+ target_sigframe_layout layout = {
230
+ /* Begin with the size pointing to the reserved space. */
231
+ .total_size = offsetof(struct target_rt_sigframe,
232
+ uc.tuc_mcontext.__reserved),
233
+ };
234
+ int fpsimd_ofs, fr_ofs, sve_ofs = 0, vq = 0, sve_size = 0;
235
struct target_rt_sigframe *frame;
236
struct target_rt_frame_record *fr;
237
abi_ulong frame_addr, return_addr;
238
239
- fpsimd_ofs = size;
240
- size += sizeof(struct target_fpsimd_context);
241
- end1_ofs = size;
242
- size += sizeof(struct target_aarch64_ctx);
243
- fr_ofs = size;
244
- size += sizeof(struct target_rt_frame_record);
245
+ /* FPSIMD record is always in the standard space. */
246
+ fpsimd_ofs = alloc_sigframe_space(sizeof(struct target_fpsimd_context),
247
+ &layout);
248
249
- frame_addr = get_sigframe(ka, env);
250
+ /* SVE state needs saving only if it exists. */
251
+ if (arm_feature(env, ARM_FEATURE_SVE)) {
252
+ vq = (env->vfp.zcr_el[1] & 0xf) + 1;
253
+ sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
254
+ sve_ofs = alloc_sigframe_space(sve_size, &layout);
255
+ }
256
+
257
+ if (layout.extra_ofs) {
258
+ /* Reserve space for the extra end marker. The standard end marker
259
+ * will have been allocated when we allocated the extra record.
260
+ */
261
+ layout.extra_end_ofs
262
+ = alloc_sigframe_space(sizeof(struct target_aarch64_ctx), &layout);
263
+ } else {
264
+ /* Reserve space for the standard end marker.
265
+ * Do not use alloc_sigframe_space because we cheat
266
+ * std_size therein to reserve space for this.
267
+ */
268
+ layout.std_end_ofs = layout.total_size;
269
+ layout.total_size += sizeof(struct target_aarch64_ctx);
270
+ }
271
+
272
+ /* Reserve space for the return code. On a real system this would
273
+ * be within the VDSO. So, despite the name this is not a "real"
274
+ * record within the frame.
275
+ */
276
+ fr_ofs = layout.total_size;
277
+ layout.total_size += sizeof(struct target_rt_frame_record);
278
+
279
+ frame_addr = get_sigframe(ka, env, layout.total_size);
280
trace_user_setup_frame(env, frame_addr);
281
if (!lock_user_struct(VERIFY_WRITE, frame, frame_addr, 0)) {
282
goto give_sigsegv;
283
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
284
285
target_setup_general_frame(frame, env, set);
286
target_setup_fpsimd_record((void *)frame + fpsimd_ofs, env);
287
- if (extra_ofs) {
288
- target_setup_extra_record((void *)frame + extra_ofs,
289
- frame_addr + extra_base, extra_size);
290
+ target_setup_end_record((void *)frame + layout.std_end_ofs);
291
+ if (layout.extra_ofs) {
292
+ target_setup_extra_record((void *)frame + layout.extra_ofs,
293
+ frame_addr + layout.extra_base,
294
+ layout.extra_size);
295
+ target_setup_end_record((void *)frame + layout.extra_end_ofs);
296
}
102
}
297
- target_setup_end_record((void *)frame + end1_ofs);
103
- gen_jmp_tb(s, s->base.pc_next, 1);
298
- if (end2_ofs) {
104
+ gen_jmp_tb(s, curr_insn_len(s), 1);
299
- target_setup_end_record((void *)frame + end2_ofs);
105
300
+ if (sve_ofs) {
106
gen_set_label(nextlabel);
301
+ target_setup_sve_record((void *)frame + sve_ofs, env, vq, sve_size);
107
- gen_jmp(s, read_pc(s) + a->imm);
108
+ gen_jmp(s, jmp_diff(s, a->imm));
109
return true;
110
}
111
112
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
113
114
if (a->f) {
115
/* Loop-forever: just jump back to the loop start */
116
- gen_jmp(s, read_pc(s) - a->imm);
117
+ gen_jmp(s, jmp_diff(s, -a->imm));
118
return true;
302
}
119
}
303
120
304
/* Set up the stack frame for unwinding. */
121
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
122
tcg_temp_free_i32(decr);
123
}
124
/* Jump back to the loop start */
125
- gen_jmp(s, read_pc(s) - a->imm);
126
+ gen_jmp(s, jmp_diff(s, -a->imm));
127
128
gen_set_label(loopend);
129
if (a->tp) {
130
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
131
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
132
}
133
/* End TB, continuing to following insn */
134
- gen_jmp_tb(s, s->base.pc_next, 1);
135
+ gen_jmp_tb(s, curr_insn_len(s), 1);
136
return true;
137
}
138
139
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
140
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
141
tmp, 0, s->condlabel);
142
tcg_temp_free_i32(tmp);
143
- gen_jmp(s, read_pc(s) + a->imm);
144
+ gen_jmp(s, jmp_diff(s, a->imm));
145
return true;
146
}
147
305
--
148
--
306
2.16.2
149
2.25.1
307
308
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

The EXTRA record allows for additional space to be allocated
beyond what is currently reserved. Add code to emit and read
this record type.

Nothing uses extra space yet.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180303143823.27055-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/signal.c | 74 +++++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 63 insertions(+), 11 deletions(-)

diff --git a/linux-user/signal.c b/linux-user/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/signal.c
+++ b/linux-user/signal.c
@@ -XXX,XX +XXX,XX @@ struct target_fpsimd_context {
uint64_t vregs[32 * 2]; /* really uint128_t vregs[32] */
};

+#define TARGET_EXTRA_MAGIC 0x45585401
+
+struct target_extra_context {
+ struct target_aarch64_ctx head;
+ uint64_t datap; /* 16-byte aligned pointer to extra space cast to __u64 */
+ uint32_t size; /* size in bytes of the extra space */
+ uint32_t reserved[3];
+};
+
struct target_rt_sigframe {
struct target_siginfo info;
struct target_ucontext uc;
@@ -XXX,XX +XXX,XX @@ static void target_setup_fpsimd_record(struct target_fpsimd_context *fpsimd,
}
}

+static void target_setup_extra_record(struct target_extra_context *extra,
+ uint64_t datap, uint32_t extra_size)
+{
+ __put_user(TARGET_EXTRA_MAGIC, &extra->head.magic);
+ __put_user(sizeof(struct target_extra_context), &extra->head.size);
+ __put_user(datap, &extra->datap);
+ __put_user(extra_size, &extra->size);
+}
+
static void target_setup_end_record(struct target_aarch64_ctx *end)
{
__put_user(0, &end->magic);
@@ -XXX,XX +XXX,XX @@ static void target_restore_fpsimd_record(CPUARMState *env,
static int target_restore_sigframe(CPUARMState *env,
struct target_rt_sigframe *sf)
{
- struct target_aarch64_ctx *ctx;
+ struct target_aarch64_ctx *ctx, *extra = NULL;
struct target_fpsimd_context *fpsimd = NULL;
+ uint64_t extra_datap = 0;
+ bool used_extra = false;
+ bool err = false;

target_restore_general_frame(env, sf);

ctx = (struct target_aarch64_ctx *)sf->uc.tuc_mcontext.__reserved;
while (ctx) {
- uint32_t magic, size;
+ uint32_t magic, size, extra_size;

__get_user(magic, &ctx->magic);
__get_user(size, &ctx->size);
switch (magic) {
case 0:
if (size != 0) {
- return 1;
+ err = true;
+ goto exit;
+ }
+ if (used_extra) {
+ ctx = NULL;
+ } else {
+ ctx = extra;
+ used_extra = true;
}
- ctx = NULL;
continue;

case TARGET_FPSIMD_MAGIC:
if (fpsimd || size != sizeof(struct target_fpsimd_context)) {
- return 1;
+ err = true;
+ goto exit;
}
fpsimd = (struct target_fpsimd_context *)ctx;
break;

+ case TARGET_EXTRA_MAGIC:
+ if (extra || size != sizeof(struct target_extra_context)) {
+ err = true;
+ goto exit;
+ }
+ __get_user(extra_datap,
+ &((struct target_extra_context *)ctx)->datap);
+ __get_user(extra_size,
+ &((struct target_extra_context *)ctx)->size);
+ extra = lock_user(VERIFY_READ, extra_datap, extra_size, 0);
+ break;
+
default:
/* Unknown record -- we certainly didn't generate it.
* Did we in fact get out of sync?
*/
- return 1;
+ err = true;
+ goto exit;
}
ctx = (void *)ctx + size;
}

/* Require FPSIMD always. */
- if (!fpsimd) {
- return 1;
+ if (fpsimd) {
+ target_restore_fpsimd_record(env, fpsimd);
+ } else {
+ err = true;
}
- target_restore_fpsimd_record(env, fpsimd);

- return 0;
+ exit:
+ unlock_user(extra, extra_datap, 0);
+ return err;
}

static abi_ulong get_sigframe(struct target_sigaction *ka, CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
CPUARMState *env)
{
int size = offsetof(struct target_rt_sigframe, uc.tuc_mcontext.__reserved);
- int fpsimd_ofs, end1_ofs, fr_ofs;
+ int fpsimd_ofs, end1_ofs, fr_ofs, end2_ofs = 0;
+ int extra_ofs = 0, extra_base = 0, extra_size = 0;
struct target_rt_sigframe *frame;
struct target_rt_frame_record *fr;
abi_ulong frame_addr, return_addr;
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,

target_setup_general_frame(frame, env, set);
target_setup_fpsimd_record((void *)frame + fpsimd_ofs, env);
+ if (extra_ofs) {
+ target_setup_extra_record((void *)frame + extra_ofs,
+ frame_addr + extra_base, extra_size);
+ }
target_setup_end_record((void *)frame + end1_ofs);
+ if (end2_ofs) {
+ target_setup_end_record((void *)frame + end2_ofs);
+ }

/* Set up the stack frame for unwinding. */
fr = (void *)frame + fr_ofs;
--
2.16.2

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
}
}

+static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
+{
+ tcg_gen_movi_i64(dest, s->pc_curr + diff);
+}
+
void gen_a64_update_pc(DisasContext *s, target_long diff)
{
- tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
+ gen_pc_plus_diff(s, cpu_pc, diff);
}

/*
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)

if (insn & (1U << 31)) {
/* BL Branch with link */
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+ gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
}

/* B Branch / BL Branch with link */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
default:
goto do_unallocated;
}
- gen_a64_set_pc(s, dst);
/* BLR also needs to load return address */
if (opc == 1) {
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+ TCGv_i64 lr = cpu_reg(s, 30);
+ if (dst == lr) {
+ TCGv_i64 tmp = new_tmp_a64(s);
+ tcg_gen_mov_i64(tmp, dst);
+ dst = tmp;
+ }
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
}
+ gen_a64_set_pc(s, dst);
break;

case 8: /* BRAA */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
} else {
dst = cpu_reg(s, rn);
}
- gen_a64_set_pc(s, dst);
/* BLRAA also needs to load return address */
if (opc == 9) {
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+ TCGv_i64 lr = cpu_reg(s, 30);
+ if (dst == lr) {
+ TCGv_i64 tmp = new_tmp_a64(s);
+ tcg_gen_mov_i64(tmp, dst);
+ dst = tmp;
+ }
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
}
+ gen_a64_set_pc(s, dst);
break;

case 4: /* ERET */
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)

tcg_rt = cpu_reg(s, rt);

- clean_addr = tcg_constant_i64(s->pc_curr + imm);
+ clean_addr = new_tmp_a64(s);
+ gen_pc_plus_diff(s, clean_addr, imm);
if (is_vector) {
do_fp_ld(s, rt, clean_addr, size);
} else {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
{
unsigned int page, rd;
- uint64_t base;
- uint64_t offset;
+ int64_t offset;

page = extract32(insn, 31, 1);
/* SignExtend(immhi:immlo) -> offset */
offset = sextract64(insn, 5, 19);
offset = offset << 2 | extract32(insn, 29, 2);
rd = extract32(insn, 0, 5);
- base = s->pc_curr;

if (page) {
/* ADRP (page based) */
- base &= ~0xfff;
offset <<= 12;
+ /* The page offset is ok for TARGET_TB_PCREL. */
+ offset -= s->pc_curr & 0xfff;
}

- tcg_gen_movi_i64(cpu_reg(s, rd), base + offset);
+ gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
}

/*
--
2.25.1
Currently we query the host CPU features in the class init function
for the TYPE_ARM_HOST_CPU class, so that we can later copy them
from the class object into the instance object in the object
instance init function. This is awkward for implementing "-cpu max",
which should work like "-cpu host" for KVM but like "cpu with all
implemented features" for TCG.

Move the place where we store the information about the host CPU from
a class object to static variables in kvm.c, and then in the instance
init function call a new kvm_arm_set_cpu_features_from_host()
function which will query the host kernel if necessary and then
fill in the CPU instance fields.

This allows us to drop the special class struct and class init
function for TYPE_ARM_HOST_CPU entirely.

We can't delay the probe until realize, because the ARM
instance_post_init hook needs to look at the feature bits we
set, so we need to do it in the initfn. This is safe because
the probing doesn't affect the actual VM state (it creates a
separate scratch VM to do its testing), but the probe might fail.
Because we can't report errors in retrieving the host features
in the initfn, we check this belatedly in the realize function
(the intervening code will be able to cope with the relevant
fields in the CPU structure being zero).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180308130626.12393-2-peter.maydell@linaro.org
---
target/arm/cpu.h | 5 +++++
target/arm/kvm_arm.h | 35 ++++++++++++++++++++++++-----------
target/arm/cpu.c | 13 +++++++++++++
target/arm/kvm.c | 36 +++++++++++++++++++-----------------
target/arm/kvm32.c | 8 ++++----
target/arm/kvm64.c | 8 ++++----
6 files changed, 69 insertions(+), 36 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
/* Uniprocessor system with MP extensions */
bool mp_is_up;

+ /* True if we tried kvm_arm_host_cpu_features() during CPU instance_init
+ * and the probe failed (so we need to report the error in realize)
+ */
+ bool host_cpu_probe_failed;
+
/* Specify the number of cores in this CPU cluster. Used for the L2CTLR
* register.
*/
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
void kvm_arm_destroy_scratch_host_vcpu(int *fdarray);

#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU
-#define ARM_HOST_CPU_CLASS(klass) \
- OBJECT_CLASS_CHECK(ARMHostCPUClass, (klass), TYPE_ARM_HOST_CPU)
-#define ARM_HOST_CPU_GET_CLASS(obj) \
- OBJECT_GET_CLASS(ARMHostCPUClass, (obj), TYPE_ARM_HOST_CPU)
-
-typedef struct ARMHostCPUClass {
- /*< private >*/
- ARMCPUClass parent_class;
- /*< public >*/
+/**
+ * ARMHostCPUFeatures: information about the host CPU (identified
+ * by asking the host kernel)
+ */
+typedef struct ARMHostCPUFeatures {
uint64_t features;
uint32_t target;
const char *dtb_compatible;
-} ARMHostCPUClass;
+} ARMHostCPUFeatures;

/**
* kvm_arm_get_host_cpu_features:
@@ -XXX,XX +XXX,XX @@ typedef struct ARMHostCPUClass {
* Probe the capabilities of the host kernel's preferred CPU and fill
* in the ARMHostCPUClass struct accordingly.
*/
-bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc);
+bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);

+/**
+ * kvm_arm_set_cpu_features_from_host:
+ * @cpu: ARMCPU to set the features for
+ *
+ * Set up the ARMCPU struct fields up to match the information probed
+ * from the host CPU.
+ */
+void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);

/**
* kvm_arm_sync_mpstate_to_kvm
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pmu_init(CPUState *cs);

#else

+static inline void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
+{
+ /* This should never actually be called in the "not KVM" case,
+ * but set up the fields to indicate an error anyway.
+ */
+ cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
+ cpu->host_cpu_probe_failed = true;
+}
+
static inline int kvm_arm_vgic_probe(void)
{
return 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
int pagebits;
Error *local_err = NULL;

+ /* If we needed to query the host kernel for the CPU features
+ * then it's possible that might have failed in the initfn, but
+ * this is the first point where we can report it.
+ */
+ if (cpu->host_cpu_probe_failed) {
+ if (!kvm_enabled()) {
+ error_setg(errp, "The 'host' CPU type can only be used with KVM");
+ } else {
+ error_setg(errp, "Failed to retrieve host CPU features");
+ }
+ return;
+ }
+
cpu_exec_realizefn(cs, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {

static bool cap_has_mp_state;

+static ARMHostCPUFeatures arm_host_cpu_features;
+
int kvm_arm_vcpu_init(CPUState *cs)
{
ARMCPU *cpu = ARM_CPU(cs);
@@ -XXX,XX +XXX,XX @@ void kvm_arm_destroy_scratch_host_vcpu(int *fdarray)
}
}

-static void kvm_arm_host_cpu_class_init(ObjectClass *oc, void *data)
+void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
{
- ARMHostCPUClass *ahcc = ARM_HOST_CPU_CLASS(oc);
+ CPUARMState *env = &cpu->env;

- /* All we really need to set up for the 'host' CPU
- * is the feature bits -- we rely on the fact that the
- * various ID register values in ARMCPU are only used for
- * TCG CPUs.
- */
- if (!kvm_arm_get_host_cpu_features(ahcc)) {
- fprintf(stderr, "Failed to retrieve host CPU features!\n");
- abort();
+ if (!arm_host_cpu_features.dtb_compatible) {
+ if (!kvm_enabled() ||
+ !kvm_arm_get_host_cpu_features(&arm_host_cpu_features)) {
+ /* We can't report this error yet, so flag that we need to
+ * in arm_cpu_realizefn().
+ */
+ cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
+ cpu->host_cpu_probe_failed = true;
+ return;
+ }
}
+
+ cpu->kvm_target = arm_host_cpu_features.target;
+ cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible;
+ env->features = arm_host_cpu_features.features;
}

static void kvm_arm_host_cpu_initfn(Object *obj)
{
- ARMHostCPUClass *ahcc = ARM_HOST_CPU_GET_CLASS(obj);
ARMCPU *cpu = ARM_CPU(obj);
- CPUARMState *env = &cpu->env;

- cpu->kvm_target = ahcc->target;
- cpu->dtb_compatible = ahcc->dtb_compatible;
- env->features = ahcc->features;
+ kvm_arm_set_cpu_features_from_host(cpu);
}

static const TypeInfo host_arm_cpu_type_info = {
@@ -XXX,XX +XXX,XX @@ static const TypeInfo host_arm_cpu_type_info = {
.parent = TYPE_ARM_CPU,
#endif
.instance_init = kvm_arm_host_cpu_initfn,
- .class_init = kvm_arm_host_cpu_class_init,
- .class_size = sizeof(ARMHostCPUClass),
};

int kvm_arch_init(MachineState *ms, KVMState *s)
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ static inline void set_feature(uint64_t *features, int feature)
*features |= 1ULL << feature;
}

-bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
+bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
{
/* Identify the feature bits corresponding to the host CPU, and
* fill out the ARMHostCPUClass fields accordingly. To do this
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
return false;
}

- ahcc->target = init.target;
+ ahcf->target = init.target;

/* This is not strictly blessed by the device tree binding docs yet,
* but in practice the kernel does not care about this string so
* there is no point maintaining an KVM_ARM_TARGET_* -> string table.
*/
- ahcc->dtb_compatible = "arm,arm-v7";
+ ahcf->dtb_compatible = "arm,arm-v7";

for (i = 0; i < ARRAY_SIZE(idregs); i++) {
ret = ioctl(fdarray[2], KVM_GET_ONE_REG, &idregs[i]);
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
set_feature(&features, ARM_FEATURE_VFP4);
}

- ahcc->features = features;
+ ahcf->features = features;

return true;
}
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ static inline void unset_feature(uint64_t *features, int feature)
*features &= ~(1ULL << feature);
}

-bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
+bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
{
/* Identify the feature bits corresponding to the host CPU, and
* fill out the ARMHostCPUClass fields accordingly. To do this
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
return false;
}

- ahcc->target = init.target;
- ahcc->dtb_compatible = "arm,arm-v8";
+ ahcf->target = init.target;
+ ahcf->dtb_compatible = "arm,arm-v8";

kvm_arm_destroy_scratch_host_vcpu(fdarray);

@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUClass *ahcc)
set_feature(&features, ARM_FEATURE_AARCH64);
set_feature(&features, ARM_FEATURE_PMU);

- ahcc->features = features;
+ ahcf->features = features;

return true;
}
--
2.16.2

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 38 +++++++++++++++++++++-----------------
1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
}
}

-/* The architectural value of PC. */
-static uint32_t read_pc(DisasContext *s)
-{
- return s->pc_curr + (s->thumb ? 4 : 8);
-}
-
/* The pc_curr difference for an architectural jump. */
static target_long jmp_diff(DisasContext *s, target_long diff)
{
return diff + (s->thumb ? 4 : 8);
}

+static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
+{
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
+}
+
/* Set a variable to the value of a CPU register. */
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
{
if (reg == 15) {
- tcg_gen_movi_i32(var, read_pc(s));
+ gen_pc_plus_diff(s, var, jmp_diff(s, 0));
} else {
tcg_gen_mov_i32(var, cpu_R[reg]);
}
@@ -XXX,XX +XXX,XX @@ TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
TCGv_i32 tmp = tcg_temp_new_i32();

if (reg == 15) {
- tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
+ /*
+ * This address is computed from an aligned PC:
+ * subtract off the low bits.
+ */
+ gen_pc_plus_diff(s, tmp, jmp_diff(s, ofs - (s->pc_curr & 3)));
} else {
tcg_gen_addi_i32(tmp, cpu_R[reg], ofs);
}
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s)
/* Force a TB lookup after an instruction that changes the CPU state. */
void gen_lookup_tb(DisasContext *s)
{
- tcg_gen_movi_i32(cpu_R[15], s->base.pc_next);
+ gen_pc_plus_diff(s, cpu_R[15], curr_insn_len(s));
s->base.is_jmp = DISAS_EXIT;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_r(DisasContext *s, arg_BLX_r *a)
return false;
}
tmp = load_reg(s, a->rm);
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
gen_bx(s, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)

static bool trans_BL(DisasContext *s, arg_i *a)
{
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
gen_jmp(s, jmp_diff(s, a->imm));
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
if (s->thumb && (a->imm & 2)) {
return false;
}
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
store_cpu_field_constant(!s->thumb, thumb);
/* This jump is computed from an aligned PC: subtract off the low bits. */
gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
static bool trans_BL_BLX_prefix(DisasContext *s, arg_BL_BLX_prefix *a)
{
assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
- tcg_gen_movi_i32(cpu_R[14], read_pc(s) + (a->imm << 12));
+ gen_pc_plus_diff(s, cpu_R[14], jmp_diff(s, a->imm << 12));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_BL_suffix(DisasContext *s, arg_BL_suffix *a)

assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
tcg_gen_addi_i32(tmp, cpu_R[14], (a->imm << 1) | 1);
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
gen_bx(s, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_suffix(DisasContext *s, arg_BLX_suffix *a)
tmp = tcg_temp_new_i32();
tcg_gen_addi_i32(tmp, cpu_R[14], a->imm << 1);
tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
gen_bx(s, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
tcg_gen_add_i32(addr, addr, tmp);

gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), half ? MO_UW : MO_UB);
- tcg_temp_free_i32(addr);

tcg_gen_add_i32(tmp, tmp, tmp);
- tcg_gen_addi_i32(tmp, tmp, read_pc(s));
+ gen_pc_plus_diff(s, addr, jmp_diff(s, 0));
+ tcg_gen_add_i32(tmp, tmp, addr);
+ tcg_temp_free_i32(addr);
store_reg(s, 15, tmp);
return true;
}
--
2.25.1
From: Andrey Smirnov <andrew.smirnov@gmail.com>

Add code needed to get a functional PCI subsystem when using in
conjunction with upstream Linux guest (4.13+). Tested to work against
"e1000e" (network adapter, using MSI interrupts) as well as
"usb-ehci" (USB controller, using legacy PCI interrupts).

Based on "i.MX6 Applications Processor Reference Manual" (Document
Number: IMX6DQRM Rev. 4) as well as the corresponding driver in the Linux
kernel (circa 4.13 - 4.16, found in drivers/pci/dwc/*)

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/pci-host/Makefile.objs | 2 +
include/hw/pci-host/designware.h | 102 ++++++
include/hw/pci/pci_ids.h | 2 +
hw/pci-host/designware.c | 754 +++++++++++++++++++++++++++++++++++++++
default-configs/arm-softmmu.mak | 1 +
5 files changed, 861 insertions(+)
create mode 100644 include/hw/pci-host/designware.h
create mode 100644 hw/pci-host/designware.c

diff --git a/hw/pci-host/Makefile.objs b/hw/pci-host/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/pci-host/Makefile.objs
+++ b/hw/pci-host/Makefile.objs
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_PCI_PIIX) += piix.o
common-obj-$(CONFIG_PCI_Q35) += q35.o
common-obj-$(CONFIG_PCI_GENERIC) += gpex.o

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu-param.h | 2 +
target/arm/translate.h | 50 +++++++++++++++-
target/arm/cpu.c | 23 ++++----
target/arm/translate-a64.c | 64 +++++++++++++-------
target/arm/translate-m-nocp.c | 2 +-
target/arm/translate.c | 108 +++++++++++++++++++++++-----------
6 files changed, 178 insertions(+), 71 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
# define TARGET_PAGE_BITS_VARY
22
# define TARGET_PAGE_BITS_MIN 10
32
common-obj-$(CONFIG_PCI_XILINX) += xilinx-pcie.o
23
24
+# define TARGET_TB_PCREL 1
33
+
25
+
34
+common-obj-$(CONFIG_PCI_DESIGNWARE) += designware.o
26
/*
35
diff --git a/include/hw/pci-host/designware.h b/include/hw/pci-host/designware.h
27
* Cache the attrs and shareability fields from the page table entry.
36
new file mode 100644
28
*
37
index XXXXXXX..XXXXXXX
29
diff --git a/target/arm/translate.h b/target/arm/translate.h
38
--- /dev/null
30
index XXXXXXX..XXXXXXX 100644
39
+++ b/include/hw/pci-host/designware.h
31
--- a/target/arm/translate.h
32
+++ b/target/arm/translate.h
40
@@ -XXX,XX +XXX,XX @@
33
@@ -XXX,XX +XXX,XX @@
34
35
36
/* internal defines */
37
+
41
+/*
38
+/*
42
+ * Copyright (c) 2017, Impinj, Inc.
39
+ * Save pc_save across a branch, so that we may restore the value from
43
+ *
40
+ * before the branch at the point the label is emitted.
44
+ * Designware PCIe IP block emulation
45
+ *
46
+ * This library is free software; you can redistribute it and/or
47
+ * modify it under the terms of the GNU Lesser General Public
48
+ * License as published by the Free Software Foundation; either
49
+ * version 2 of the License, or (at your option) any later version.
50
+ *
51
+ * This library is distributed in the hope that it will be useful,
52
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
53
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
54
+ * Lesser General Public License for more details.
55
+ *
56
+ * You should have received a copy of the GNU Lesser General Public
57
+ * License along with this library; if not, see
58
+ * <http://www.gnu.org/licenses/>.
59
+ */
41
+ */
42
+typedef struct DisasLabel {
43
+ TCGLabel *label;
44
+ target_ulong pc_save;
45
+} DisasLabel;
60
+
46
+
61
+#ifndef DESIGNWARE_H
47
typedef struct DisasContext {
62
+#define DESIGNWARE_H
48
DisasContextBase base;
63
+
49
const ARMISARegisters *isar;
64
+#include "hw/hw.h"
50
65
+#include "hw/sysbus.h"
51
/* The address of the current instruction being translated. */
66
+#include "hw/pci/pci.h"
52
target_ulong pc_curr;
67
+#include "hw/pci/pci_bus.h"
53
+ /*
68
+#include "hw/pci/pcie_host.h"
54
+ * For TARGET_TB_PCREL, the full value of cpu_pc is not known
69
+#include "hw/pci/pci_bridge.h"
55
+ * (although the page offset is known). For convenience, the
70
+
56
+ * translation loop uses the full virtual address that triggered
71
+#define TYPE_DESIGNWARE_PCIE_HOST "designware-pcie-host"
57
+ * the translation, from base.pc_start through pc_curr.
72
+#define DESIGNWARE_PCIE_HOST(obj) \
58
+ * For efficiency, we do not update cpu_pc for every instruction.
73
+ OBJECT_CHECK(DesignwarePCIEHost, (obj), TYPE_DESIGNWARE_PCIE_HOST)
59
+ * Instead, pc_save has the value of pc_curr at the time of the
74
+
60
+ * last update to cpu_pc, which allows us to compute the addend
75
+#define TYPE_DESIGNWARE_PCIE_ROOT "designware-pcie-root"
61
+ * needed to bring cpu_pc current: pc_curr - pc_save.
76
+#define DESIGNWARE_PCIE_ROOT(obj) \
62
+ * If cpu_pc now contains the destination of an indirect branch,
77
+ OBJECT_CHECK(DesignwarePCIERoot, (obj), TYPE_DESIGNWARE_PCIE_ROOT)
63
+ * pc_save contains -1 to indicate that relative updates are no
78
+
64
+ * longer possible.
79
+struct DesignwarePCIERoot;
65
+ */
80
+typedef struct DesignwarePCIERoot DesignwarePCIERoot;
66
+ target_ulong pc_save;
81
+
67
target_ulong page_start;
82
+typedef struct DesignwarePCIEViewport {
68
uint32_t insn;
83
+ DesignwarePCIERoot *root;
69
/* Nonzero if this instruction has been conditionally skipped. */
84
+
70
int condjmp;
85
+ MemoryRegion cfg;
71
/* The label that will be jumped to when the instruction is skipped. */
86
+ MemoryRegion mem;
72
- TCGLabel *condlabel;
87
+
73
+ DisasLabel condlabel;
88
+ uint64_t base;
74
/* Thumb-2 conditional execution bits. */
89
+ uint64_t target;
75
int condexec_mask;
90
+ uint32_t limit;
76
int condexec_cond;
91
+ uint32_t cr[2];
77
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
92
+
78
* after decode (ie after any UNDEF checks)
93
+ bool inbound;
79
*/
94
+} DesignwarePCIEViewport;
80
bool eci_handled;
95
+
81
- /* TCG op to rewind to if this turns out to be an invalid ECI state */
96
+typedef struct DesignwarePCIEMSIBank {
82
- TCGOp *insn_eci_rewind;
97
+ uint32_t enable;
83
int sctlr_b;
98
+ uint32_t mask;
84
MemOp be_data;
99
+ uint32_t status;
85
#if !defined(CONFIG_USER_ONLY)
100
+} DesignwarePCIEMSIBank;
86
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
101
+
87
*/
102
+typedef struct DesignwarePCIEMSI {
88
uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);
103
+ uint64_t base;
89
104
+ MemoryRegion iomem;
105
+
106
+#define DESIGNWARE_PCIE_NUM_MSI_BANKS 1
107
+
108
+ DesignwarePCIEMSIBank intr[DESIGNWARE_PCIE_NUM_MSI_BANKS];
109
+} DesignwarePCIEMSI;
110
+
111
+struct DesignwarePCIERoot {
112
+ PCIBridge parent_obj;
113
+
114
+ uint32_t atu_viewport;
115
+
116
+#define DESIGNWARE_PCIE_VIEWPORT_OUTBOUND 0
117
+#define DESIGNWARE_PCIE_VIEWPORT_INBOUND 1
118
+#define DESIGNWARE_PCIE_NUM_VIEWPORTS 4
119
+
120
+ DesignwarePCIEViewport viewports[2][DESIGNWARE_PCIE_NUM_VIEWPORTS];
121
+ DesignwarePCIEMSI msi;
122
+};
123
+
124
+typedef struct DesignwarePCIEHost {
125
+ PCIHostState parent_obj;
126
+
127
+ DesignwarePCIERoot root;
128
+
129
+ struct {
130
+ AddressSpace address_space;
131
+ MemoryRegion address_space_root;
132
+
133
+ MemoryRegion memory;
134
+ MemoryRegion io;
135
+
136
+ qemu_irq irqs[4];
137
+ } pci;
138
+
139
+ MemoryRegion mmio;
140
+} DesignwarePCIEHost;
141
+
142
+#endif /* DESIGNWARE_H */
143
diff --git a/include/hw/pci/pci_ids.h b/include/hw/pci/pci_ids.h
144
index XXXXXXX..XXXXXXX 100644
145
--- a/include/hw/pci/pci_ids.h
146
+++ b/include/hw/pci/pci_ids.h
147
@@ -XXX,XX +XXX,XX @@
148
#define PCI_VENDOR_ID_VMWARE 0x15ad
149
#define PCI_DEVICE_ID_VMWARE_PVRDMA 0x0820
150
151
+#define PCI_VENDOR_ID_SYNOPSYS 0x16C3
152
+
153
#endif
154
diff --git a/hw/pci-host/designware.c b/hw/pci-host/designware.c
155
new file mode 100644
156
index XXXXXXX..XXXXXXX
157
--- /dev/null
158
+++ b/hw/pci-host/designware.c
159
@@ -XXX,XX +XXX,XX @@
160
+/*
90
+/*
161
+ * Copyright (c) 2018, Impinj, Inc.
91
+ * gen_disas_label:
162
+ *
92
+ * Create a label and cache a copy of pc_save.
163
+ * Designware PCIe IP block emulation
164
+ *
165
+ * This library is free software; you can redistribute it and/or
166
+ * modify it under the terms of the GNU Lesser General Public
167
+ * License as published by the Free Software Foundation; either
168
+ * version 2 of the License, or (at your option) any later version.
169
+ *
170
+ * This library is distributed in the hope that it will be useful,
171
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
172
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
173
+ * Lesser General Public License for more details.
174
+ *
175
+ * You should have received a copy of the GNU Lesser General Public
176
+ * License along with this library; if not, see
177
+ * <http://www.gnu.org/licenses/>.
178
+ */
93
+ */
179
+
94
+static inline DisasLabel gen_disas_label(DisasContext *s)
180
+#include "qemu/osdep.h"
181
+#include "qapi/error.h"
182
+#include "hw/pci/msi.h"
183
+#include "hw/pci/pci_bridge.h"
184
+#include "hw/pci/pci_host.h"
185
+#include "hw/pci/pcie_port.h"
186
+#include "hw/pci-host/designware.h"
187
+
188
+#define DESIGNWARE_PCIE_PORT_LINK_CONTROL 0x710
189
+#define DESIGNWARE_PCIE_PHY_DEBUG_R1 0x72C
190
+#define DESIGNWARE_PCIE_PHY_DEBUG_R1_XMLH_LINK_UP BIT(4)
191
+#define DESIGNWARE_PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C
192
+#define DESIGNWARE_PCIE_PORT_LOGIC_SPEED_CHANGE BIT(17)
193
+#define DESIGNWARE_PCIE_MSI_ADDR_LO 0x820
194
+#define DESIGNWARE_PCIE_MSI_ADDR_HI 0x824
195
+#define DESIGNWARE_PCIE_MSI_INTR0_ENABLE 0x828
196
+#define DESIGNWARE_PCIE_MSI_INTR0_MASK 0x82C
197
+#define DESIGNWARE_PCIE_MSI_INTR0_STATUS 0x830
198
+#define DESIGNWARE_PCIE_ATU_VIEWPORT 0x900
199
+#define DESIGNWARE_PCIE_ATU_REGION_INBOUND BIT(31)
200
+#define DESIGNWARE_PCIE_ATU_CR1 0x904
201
+#define DESIGNWARE_PCIE_ATU_TYPE_MEM (0x0 << 0)
202
+#define DESIGNWARE_PCIE_ATU_CR2 0x908
203
+#define DESIGNWARE_PCIE_ATU_ENABLE BIT(31)
204
+#define DESIGNWARE_PCIE_ATU_LOWER_BASE 0x90C
205
+#define DESIGNWARE_PCIE_ATU_UPPER_BASE 0x910
206
+#define DESIGNWARE_PCIE_ATU_LIMIT 0x914
207
+#define DESIGNWARE_PCIE_ATU_LOWER_TARGET 0x918
208
+#define DESIGNWARE_PCIE_ATU_BUS(x) (((x) >> 24) & 0xff)
209
+#define DESIGNWARE_PCIE_ATU_DEVFN(x) (((x) >> 16) & 0xff)
210
+#define DESIGNWARE_PCIE_ATU_UPPER_TARGET 0x91C
211
+
212
+static DesignwarePCIEHost *
213
+designware_pcie_root_to_host(DesignwarePCIERoot *root)
214
+{
95
+{
215
+ BusState *bus = qdev_get_parent_bus(DEVICE(root));
96
+ return (DisasLabel){
216
+ return DESIGNWARE_PCIE_HOST(bus->parent);
97
+ .label = gen_new_label(),
98
+ .pc_save = s->pc_save,
99
+ };
217
+}
100
+}
218
+
101
+
219
+static void designware_pcie_root_msi_write(void *opaque, hwaddr addr,
102
+/*
220
+ uint64_t val, unsigned len)
103
+ * set_disas_label:
104
+ * Emit a label and restore the cached copy of pc_save.
105
+ */
106
+static inline void set_disas_label(DisasContext *s, DisasLabel l)
221
+{
107
+{
222
+ DesignwarePCIERoot *root = DESIGNWARE_PCIE_ROOT(opaque);
108
+ gen_set_label(l.label);
223
+ DesignwarePCIEHost *host = designware_pcie_root_to_host(root);
109
+ s->pc_save = l.pc_save;
224
+
225
+ root->msi.intr[0].status |= BIT(val) & root->msi.intr[0].enable;
226
+
227
+ if (root->msi.intr[0].status & ~root->msi.intr[0].mask) {
228
+ qemu_set_irq(host->pci.irqs[0], 1);
229
+ }
230
+}
110
+}
231
+
111
+
232
+static const MemoryRegionOps designware_pci_host_msi_ops = {
112
/*
233
+ .write = designware_pcie_root_msi_write,
113
* Helpers for implementing sets of trans_* functions.
234
+ .endianness = DEVICE_LITTLE_ENDIAN,
114
* Defer the implementation of NAME to FUNC, with optional extra arguments.
235
+ .valid = {
115
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
236
+ .min_access_size = 4,
116
index XXXXXXX..XXXXXXX 100644
237
+ .max_access_size = 4,
117
--- a/target/arm/cpu.c
238
+ },
118
+++ b/target/arm/cpu.c
239
+};
119
@@ -XXX,XX +XXX,XX @@ static vaddr arm_cpu_get_pc(CPUState *cs)
120
void arm_cpu_synchronize_from_tb(CPUState *cs,
121
const TranslationBlock *tb)
122
{
123
- ARMCPU *cpu = ARM_CPU(cs);
124
- CPUARMState *env = &cpu->env;
125
-
126
- /*
127
- * It's OK to look at env for the current mode here, because it's
128
- * never possible for an AArch64 TB to chain to an AArch32 TB.
129
- */
130
- if (is_a64(env)) {
131
- env->pc = tb_pc(tb);
132
- } else {
133
- env->regs[15] = tb_pc(tb);
134
+ /* The program counter is always up to date with TARGET_TB_PCREL. */
135
+ if (!TARGET_TB_PCREL) {
136
+ CPUARMState *env = cs->env_ptr;
137
+ /*
138
+ * It's OK to look at env for the current mode here, because it's
139
+ * never possible for an AArch64 TB to chain to an AArch32 TB.
140
+ */
141
+ if (is_a64(env)) {
142
+ env->pc = tb_pc(tb);
143
+ } else {
144
+ env->regs[15] = tb_pc(tb);
145
+ }
146
}
147
}
148
#endif /* CONFIG_TCG */
149
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
150
index XXXXXXX..XXXXXXX 100644
151
--- a/target/arm/translate-a64.c
152
+++ b/target/arm/translate-a64.c
153
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
154
155
static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
156
{
157
- tcg_gen_movi_i64(dest, s->pc_curr + diff);
158
+ assert(s->pc_save != -1);
159
+ if (TARGET_TB_PCREL) {
160
+ tcg_gen_addi_i64(dest, cpu_pc, (s->pc_curr - s->pc_save) + diff);
161
+ } else {
162
+ tcg_gen_movi_i64(dest, s->pc_curr + diff);
163
+ }
164
}
165
166
void gen_a64_update_pc(DisasContext *s, target_long diff)
167
{
168
gen_pc_plus_diff(s, cpu_pc, diff);
169
+ s->pc_save = s->pc_curr + diff;
170
}
171
172
/*
173
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
174
* then loading an address into the PC will clear out any tag.
175
*/
176
gen_top_byte_ignore(s, cpu_pc, src, s->tbii);
177
+ s->pc_save = -1;
178
}
179
180
/*
181
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
182
183
static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
184
{
185
- uint64_t dest = s->pc_curr + diff;
186
-
187
- if (use_goto_tb(s, dest)) {
188
- tcg_gen_goto_tb(n);
189
- gen_a64_update_pc(s, diff);
190
+ if (use_goto_tb(s, s->pc_curr + diff)) {
191
+ /*
192
+ * For pcrel, the pc must always be up-to-date on entry to
193
+ * the linked TB, so that it can use simple additions for all
194
+ * further adjustments. For !pcrel, the linked TB is compiled
195
+ * to know its full virtual address, so we can delay the
196
+ * update to pc to the unlinked path. A long chain of links
197
+ * can thus avoid many updates to the PC.
198
+ */
199
+ if (TARGET_TB_PCREL) {
200
+ gen_a64_update_pc(s, diff);
201
+ tcg_gen_goto_tb(n);
202
+ } else {
203
+ tcg_gen_goto_tb(n);
204
+ gen_a64_update_pc(s, diff);
205
+ }
206
tcg_gen_exit_tb(s->base.tb, n);
207
s->base.is_jmp = DISAS_NORETURN;
208
} else {
209
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
210
{
211
unsigned int sf, op, rt;
212
int64_t diff;
213
- TCGLabel *label_match;
214
+ DisasLabel match;
215
TCGv_i64 tcg_cmp;
216
217
sf = extract32(insn, 31, 1);
218
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
219
diff = sextract32(insn, 5, 19) * 4;
220
221
tcg_cmp = read_cpu_reg(s, rt, sf);
222
- label_match = gen_new_label();
223
-
224
reset_btype(s);
225
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
226
- tcg_cmp, 0, label_match);
227
228
+ match = gen_disas_label(s);
229
+ tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
230
+ tcg_cmp, 0, match.label);
231
gen_goto_tb(s, 0, 4);
232
- gen_set_label(label_match);
233
+ set_disas_label(s, match);
234
gen_goto_tb(s, 1, diff);
235
}
236
237
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
238
{
239
unsigned int bit_pos, op, rt;
240
int64_t diff;
241
- TCGLabel *label_match;
242
+ DisasLabel match;
243
TCGv_i64 tcg_cmp;
244
245
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
246
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
247
248
tcg_cmp = tcg_temp_new_i64();
249
tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, rt), (1ULL << bit_pos));
250
- label_match = gen_new_label();
251
252
reset_btype(s);
240
+
253
+
241
+static void designware_pcie_root_update_msi_mapping(DesignwarePCIERoot *root)
254
+ match = gen_disas_label(s);
242
+
255
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
243
+{
256
- tcg_cmp, 0, label_match);
244
+ MemoryRegion *mem = &root->msi.iomem;
257
+ tcg_cmp, 0, match.label);
245
+ const uint64_t base = root->msi.base;
258
tcg_temp_free_i64(tcg_cmp);
246
+ const bool enable = root->msi.intr[0].enable;
259
gen_goto_tb(s, 0, 4);
247
+
260
- gen_set_label(label_match);
248
+ memory_region_set_address(mem, base);
261
+ set_disas_label(s, match);
249
+ memory_region_set_enabled(mem, enable);
262
gen_goto_tb(s, 1, diff);
250
+}
263
}
251
+
264
252
+static DesignwarePCIEViewport *
265
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
253
+designware_pcie_root_get_current_viewport(DesignwarePCIERoot *root)
266
reset_btype(s);
254
+{
267
if (cond < 0x0e) {
255
+ const unsigned int idx = root->atu_viewport & 0xF;
268
/* genuinely conditional branches */
256
+ const unsigned int dir =
269
- TCGLabel *label_match = gen_new_label();
257
+ !!(root->atu_viewport & DESIGNWARE_PCIE_ATU_REGION_INBOUND);
270
- arm_gen_test_cc(cond, label_match);
258
+ return &root->viewports[dir][idx];
271
+ DisasLabel match = gen_disas_label(s);
259
+}
272
+ arm_gen_test_cc(cond, match.label);
260
+
273
gen_goto_tb(s, 0, 4);
261
+static uint32_t
274
- gen_set_label(label_match);
262
+designware_pcie_root_config_read(PCIDevice *d, uint32_t address, int len)
275
+ set_disas_label(s, match);
263
+{
276
gen_goto_tb(s, 1, diff);
264
+ DesignwarePCIERoot *root = DESIGNWARE_PCIE_ROOT(d);
277
} else {
265
+ DesignwarePCIEViewport *viewport =
278
/* 0xe and 0xf are both "always" conditions */
266
+ designware_pcie_root_get_current_viewport(root);
279
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
267
+
280
268
+ uint32_t val;
281
dc->isar = &arm_cpu->isar;
269
+
282
dc->condjmp = 0;
270
+ switch (address) {
283
-
271
+ case DESIGNWARE_PCIE_PORT_LINK_CONTROL:
284
+ dc->pc_save = dc->base.pc_first;
285
dc->aarch64 = true;
286
dc->thumb = false;
287
dc->sctlr_b = 0;
288
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
289
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
290
{
291
DisasContext *dc = container_of(dcbase, DisasContext, base);
292
+ target_ulong pc_arg = dc->base.pc_next;
293
294
- tcg_gen_insn_start(dc->base.pc_next, 0, 0);
295
+ if (TARGET_TB_PCREL) {
296
+ pc_arg &= ~TARGET_PAGE_MASK;
297
+ }
298
+ tcg_gen_insn_start(pc_arg, 0, 0);
299
dc->insn_start = tcg_last_op();
300
}
301
302
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
303
index XXXXXXX..XXXXXXX 100644
304
--- a/target/arm/translate-m-nocp.c
305
+++ b/target/arm/translate-m-nocp.c
306
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
307
tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
308
tcg_gen_or_i32(sfpa, sfpa, aspen);
309
arm_gen_condlabel(s);
310
- tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
311
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel.label);
312
313
if (s->fp_excp_el != 0) {
314
gen_exception_insn_el(s, 0, EXCP_NOCP,
315
diff --git a/target/arm/translate.c b/target/arm/translate.c
316
index XXXXXXX..XXXXXXX 100644
317
--- a/target/arm/translate.c
318
+++ b/target/arm/translate.c
319
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
320
void arm_gen_condlabel(DisasContext *s)
321
{
322
if (!s->condjmp) {
323
- s->condlabel = gen_new_label();
324
+ s->condlabel = gen_disas_label(s);
325
s->condjmp = 1;
326
}
327
}
328
@@ -XXX,XX +XXX,XX @@ static target_long jmp_diff(DisasContext *s, target_long diff)
329
330
static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
331
{
332
- tcg_gen_movi_i32(var, s->pc_curr + diff);
333
+ assert(s->pc_save != -1);
334
+ if (TARGET_TB_PCREL) {
335
+ tcg_gen_addi_i32(var, cpu_R[15], (s->pc_curr - s->pc_save) + diff);
336
+ } else {
337
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
338
+ }
339
}
340
341
/* Set a variable to the value of a CPU register. */
342
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
343
*/
344
tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
345
s->base.is_jmp = DISAS_JUMP;
346
+ s->pc_save = -1;
347
} else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
348
/* For M-profile SP bits [1:0] are always zero */
349
tcg_gen_andi_i32(var, var, ~3);
350
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)
351
352
void gen_update_pc(DisasContext *s, target_long diff)
353
{
354
- tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
355
+ gen_pc_plus_diff(s, cpu_R[15], diff);
356
+ s->pc_save = s->pc_curr + diff;
357
}
358
359
/* Set PC and Thumb state from var. var is marked as dead. */
360
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
361
tcg_gen_andi_i32(cpu_R[15], var, ~1);
362
tcg_gen_andi_i32(var, var, 1);
363
store_cpu_field(var, thumb);
364
+ s->pc_save = -1;
365
}
366
367
/*
368
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
369
static inline void gen_bx_excret_final_code(DisasContext *s)
370
{
371
/* Generate the code to finish possible exception return and end the TB */
372
- TCGLabel *excret_label = gen_new_label();
373
+ DisasLabel excret_label = gen_disas_label(s);
374
uint32_t min_magic;
375
376
if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
377
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret_final_code(DisasContext *s)
378
}
379
380
/* Is the new PC value in the magic range indicating exception return? */
381
- tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label);
382
+ tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label.label);
383
/* No: end the TB as we would for a DISAS_JMP */
384
if (s->ss_active) {
385
gen_singlestep_exception(s);
386
} else {
387
tcg_gen_exit_tb(NULL, 0);
388
}
389
- gen_set_label(excret_label);
390
+ set_disas_label(s, excret_label);
391
/* Yes: this is an exception return.
392
* At this point in runtime env->regs[15] and env->thumb will hold
393
* the exception-return magic number, which do_v7m_exception_exit()
394
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
395
*/
396
static void gen_goto_tb(DisasContext *s, int n, target_long diff)
397
{
398
- target_ulong dest = s->pc_curr + diff;
399
-
400
- if (translator_use_goto_tb(&s->base, dest)) {
401
- tcg_gen_goto_tb(n);
402
- gen_update_pc(s, diff);
403
+ if (translator_use_goto_tb(&s->base, s->pc_curr + diff)) {
272
+ /*
404
+ /*
273
+ * Linux guest uses this register only to configure number of
405
+ * For pcrel, the pc must always be up-to-date on entry to
274
+ * PCIE lane (which in our case is irrelevant) and doesn't
406
+ * the linked TB, so that it can use simple additions for all
275
+ * really care about the value it reads from this register
407
+ * further adjustments. For !pcrel, the linked TB is compiled
408
+ * to know its full virtual address, so we can delay the
409
+ * update to pc to the unlinked path. A long chain of links
410
+ * can thus avoid many updates to the PC.
276
+ */
411
+ */
277
+ val = 0xDEADBEEF;
412
+ if (TARGET_TB_PCREL) {
278
+ break;
413
+ gen_update_pc(s, diff);
279
+
414
+ tcg_gen_goto_tb(n);
280
+ case DESIGNWARE_PCIE_LINK_WIDTH_SPEED_CONTROL:
415
+ } else {
281
+ /*
416
+ tcg_gen_goto_tb(n);
282
+ * To make sure that any code in guest waiting for speed
417
+ gen_update_pc(s, diff);
283
+ * change does not time out we always report
418
+ }
284
+ * PORT_LOGIC_SPEED_CHANGE as set
419
tcg_gen_exit_tb(s->base.tb, n);
285
+ */
420
} else {
286
+ val = DESIGNWARE_PCIE_PORT_LOGIC_SPEED_CHANGE;
421
gen_update_pc(s, diff);
287
+ break;
422
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
288
+
423
static void arm_skip_unless(DisasContext *s, uint32_t cond)
289
+ case DESIGNWARE_PCIE_MSI_ADDR_LO:
424
{
290
+ val = root->msi.base;
425
arm_gen_condlabel(s);
291
+ break;
426
- arm_gen_test_cc(cond ^ 1, s->condlabel);
292
+
427
+ arm_gen_test_cc(cond ^ 1, s->condlabel.label);
293
+ case DESIGNWARE_PCIE_MSI_ADDR_HI:
428
}
294
+ val = root->msi.base >> 32;
429
295
+ break;
430
296
+
431
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
297
+ case DESIGNWARE_PCIE_MSI_INTR0_ENABLE:
432
{
298
+ val = root->msi.intr[0].enable;
433
/* M-profile low-overhead while-loop start */
299
+ break;
434
TCGv_i32 tmp;
300
+
435
- TCGLabel *nextlabel;
301
+ case DESIGNWARE_PCIE_MSI_INTR0_MASK:
436
+ DisasLabel nextlabel;
302
+ val = root->msi.intr[0].mask;
437
303
+ break;
438
if (!dc_isar_feature(aa32_lob, s)) {
304
+
439
return false;
305
+ case DESIGNWARE_PCIE_MSI_INTR0_STATUS:
440
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
306
+ val = root->msi.intr[0].status;
441
}
307
+ break;
442
}
308
+
443
309
+ case DESIGNWARE_PCIE_PHY_DEBUG_R1:
444
- nextlabel = gen_new_label();
310
+ val = DESIGNWARE_PCIE_PHY_DEBUG_R1_XMLH_LINK_UP;
445
- tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel);
311
+ break;
446
+ nextlabel = gen_disas_label(s);
312
+
447
+ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel.label);
313
+ case DESIGNWARE_PCIE_ATU_VIEWPORT:
448
tmp = load_reg(s, a->rn);
314
+ val = root->atu_viewport;
449
store_reg(s, 14, tmp);
315
+ break;
450
if (a->size != 4) {
316
+
451
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
317
+ case DESIGNWARE_PCIE_ATU_LOWER_BASE:
452
}
318
+ val = viewport->base;
453
gen_jmp_tb(s, curr_insn_len(s), 1);
319
+ break;
454
320
+
455
- gen_set_label(nextlabel);
321
+ case DESIGNWARE_PCIE_ATU_UPPER_BASE:
456
+ set_disas_label(s, nextlabel);
322
+ val = viewport->base >> 32;
457
gen_jmp(s, jmp_diff(s, a->imm));
323
+ break;
458
return true;
324
+
459
}
325
+ case DESIGNWARE_PCIE_ATU_LOWER_TARGET:
460
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
326
+ val = viewport->target;
461
* any faster.
327
+ break;
462
*/
328
+
463
TCGv_i32 tmp;
329
+ case DESIGNWARE_PCIE_ATU_UPPER_TARGET:
464
- TCGLabel *loopend;
330
+ val = viewport->target >> 32;
465
+ DisasLabel loopend;
331
+ break;
466
bool fpu_active;
332
+
467
333
+ case DESIGNWARE_PCIE_ATU_LIMIT:
468
if (!dc_isar_feature(aa32_lob, s)) {
334
+ val = viewport->limit;
469
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
335
+ break;
470
336
+
471
if (!a->tp && dc_isar_feature(aa32_mve, s) && fpu_active) {
337
+ case DESIGNWARE_PCIE_ATU_CR1:
472
/* Need to do a runtime check for LTPSIZE != 4 */
338
+ case DESIGNWARE_PCIE_ATU_CR2: /* FALLTHROUGH */
473
- TCGLabel *skipexc = gen_new_label();
339
+ val = viewport->cr[(address - DESIGNWARE_PCIE_ATU_CR1) /
474
+ DisasLabel skipexc = gen_disas_label(s);
340
+ sizeof(uint32_t)];
475
tmp = load_cpu_field(v7m.ltpsize);
341
+ break;
476
- tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
342
+
477
+ tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc.label);
343
+ default:
478
tcg_temp_free_i32(tmp);
344
+ val = pci_default_read_config(d, address, len);
479
gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
345
+ break;
480
- gen_set_label(skipexc);
481
+ set_disas_label(s, skipexc);
482
}
483
484
if (a->f) {
485
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
486
* loop decrement value is 1. For LETP we need to calculate the decrement
487
* value from LTPSIZE.
488
*/
489
- loopend = gen_new_label();
490
+ loopend = gen_disas_label(s);
491
if (!a->tp) {
492
- tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend);
493
+ tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend.label);
494
tcg_gen_addi_i32(cpu_R[14], cpu_R[14], -1);
495
} else {
496
/*
497
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
498
tcg_gen_shl_i32(decr, tcg_constant_i32(1), decr);
499
tcg_temp_free_i32(ltpsize);
500
501
- tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend);
502
+ tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend.label);
503
504
tcg_gen_sub_i32(cpu_R[14], cpu_R[14], decr);
505
tcg_temp_free_i32(decr);
506
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
507
/* Jump back to the loop start */
508
gen_jmp(s, jmp_diff(s, -a->imm));
509
510
- gen_set_label(loopend);
511
+ set_disas_label(s, loopend);
512
if (a->tp) {
513
/* Exits from tail-pred loops must reset LTPSIZE to 4 */
514
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
515
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
516
517
arm_gen_condlabel(s);
518
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
519
- tmp, 0, s->condlabel);
520
+ tmp, 0, s->condlabel.label);
521
tcg_temp_free_i32(tmp);
522
gen_jmp(s, jmp_diff(s, a->imm));
523
return true;
524
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
525
526
dc->isar = &cpu->isar;
527
dc->condjmp = 0;
528
-
529
+ dc->pc_save = dc->base.pc_first;
530
dc->aarch64 = false;
531
dc->thumb = EX_TBFLAG_AM32(tb_flags, THUMB);
532
dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
533
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
534
*/
535
dc->eci = dc->condexec_mask = dc->condexec_cond = 0;
536
dc->eci_handled = false;
537
- dc->insn_eci_rewind = NULL;
538
if (condexec & 0xf) {
539
dc->condexec_mask = (condexec & 0xf) << 1;
540
dc->condexec_cond = condexec >> 4;
541
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
542
* fields here.
543
*/
544
uint32_t condexec_bits;
545
+ target_ulong pc_arg = dc->base.pc_next;
546
547
+ if (TARGET_TB_PCREL) {
548
+ pc_arg &= ~TARGET_PAGE_MASK;
346
+ }
549
+ }
347
+
550
if (dc->eci) {
348
+ return val;
551
condexec_bits = dc->eci << 4;
349
+}
552
} else {
350
+
553
condexec_bits = (dc->condexec_cond << 4) | (dc->condexec_mask >> 1);
351
+static uint64_t designware_pcie_root_data_access(void *opaque, hwaddr addr,
554
}
352
+ uint64_t *val, unsigned len)
555
- tcg_gen_insn_start(dc->base.pc_next, condexec_bits, 0);
353
+{
556
+ tcg_gen_insn_start(pc_arg, condexec_bits, 0);
354
+ DesignwarePCIEViewport *viewport = opaque;
557
dc->insn_start = tcg_last_op();
355
+ DesignwarePCIERoot *root = viewport->root;
558
}
356
+
559
357
+ const uint8_t busnum = DESIGNWARE_PCIE_ATU_BUS(viewport->target);
560
@@ -XXX,XX +XXX,XX @@ static bool arm_check_ss_active(DisasContext *dc)
358
+ const uint8_t devfn = DESIGNWARE_PCIE_ATU_DEVFN(viewport->target);
561
359
+ PCIBus *pcibus = pci_get_bus(PCI_DEVICE(root));
562
static void arm_post_translate_insn(DisasContext *dc)
360
+ PCIDevice *pcidev = pci_find_device(pcibus, busnum, devfn);
563
{
361
+
564
- if (dc->condjmp && !dc->base.is_jmp) {
362
+ if (pcidev) {
565
- gen_set_label(dc->condlabel);
363
+ addr &= pci_config_size(pcidev) - 1;
566
+ if (dc->condjmp && dc->base.is_jmp == DISAS_NEXT) {
364
+
567
+ if (dc->pc_save != dc->condlabel.pc_save) {
365
+ if (val) {
568
+ gen_update_pc(dc, dc->condlabel.pc_save - dc->pc_save);
366
+ pci_host_config_write_common(pcidev, addr,
569
+ }
367
+ pci_config_size(pcidev),
570
+ gen_set_label(dc->condlabel.label);
368
+ *val, len);
571
dc->condjmp = 0;
572
}
573
translator_loop_temp_check(&dc->base);
574
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
575
uint32_t pc = dc->base.pc_next;
576
uint32_t insn;
577
bool is_16bit;
578
+ /* TCG op to rewind to if this turns out to be an invalid ECI state */
579
+ TCGOp *insn_eci_rewind = NULL;
580
+ target_ulong insn_eci_pc_save = -1;
581
582
/* Misaligned thumb PC is architecturally impossible. */
583
assert((dc->base.pc_next & 1) == 0);
584
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
585
* insn" case. We will rewind to the marker (ie throwing away
586
* all the generated code) and instead emit "take exception".
587
*/
588
- dc->insn_eci_rewind = tcg_last_op();
589
+ insn_eci_rewind = tcg_last_op();
590
+ insn_eci_pc_save = dc->pc_save;
591
}
592
593
if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
594
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
595
* Insn wasn't valid for ECI/ICI at all: undo what we
596
* just generated and instead emit an exception
597
*/
598
- tcg_remove_ops_after(dc->insn_eci_rewind);
599
+ tcg_remove_ops_after(insn_eci_rewind);
600
+ dc->pc_save = insn_eci_pc_save;
601
dc->condjmp = 0;
602
gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
603
}
604
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
605
606
if (dc->condjmp) {
607
/* "Condition failed" instruction codepath for the branch/trap insn */
608
- gen_set_label(dc->condlabel);
609
+ set_disas_label(dc, dc->condlabel);
610
gen_set_condexec(dc);
611
if (unlikely(dc->ss_active)) {
612
gen_update_pc(dc, curr_insn_len(dc));
613
@@ -XXX,XX +XXX,XX @@ void restore_state_to_opc(CPUARMState *env, TranslationBlock *tb,
614
target_ulong *data)
615
{
616
if (is_a64(env)) {
617
- env->pc = data[0];
618
+ if (TARGET_TB_PCREL) {
619
+ env->pc = (env->pc & TARGET_PAGE_MASK) | data[0];
369
+ } else {
620
+ } else {
370
+ return pci_host_config_read_common(pcidev, addr,
621
+ env->pc = data[0];
371
+ pci_config_size(pcidev),
372
+ len);
373
+ }
622
+ }
374
+ }
623
env->condexec_bits = 0;
375
+
624
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
376
+ return UINT64_MAX;
625
} else {
377
+}
626
- env->regs[15] = data[0];
378
+
627
+ if (TARGET_TB_PCREL) {
379
+static uint64_t designware_pcie_root_data_read(void *opaque, hwaddr addr,
628
+ env->regs[15] = (env->regs[15] & TARGET_PAGE_MASK) | data[0];
380
+ unsigned len)
629
+ } else {
381
+{
630
+ env->regs[15] = data[0];
382
+ return designware_pcie_root_data_access(opaque, addr, NULL, len);
383
+}
384
+
385
+static void designware_pcie_root_data_write(void *opaque, hwaddr addr,
386
+ uint64_t val, unsigned len)
387
+{
388
+ designware_pcie_root_data_access(opaque, addr, &val, len);
389
+}
390
+
391
+static const MemoryRegionOps designware_pci_host_conf_ops = {
392
+ .read = designware_pcie_root_data_read,
393
+ .write = designware_pcie_root_data_write,
394
+ .endianness = DEVICE_LITTLE_ENDIAN,
395
+ .valid = {
396
+ .min_access_size = 1,
397
+ .max_access_size = 4,
398
+ },
399
+};
400
+
401
+static void designware_pcie_update_viewport(DesignwarePCIERoot *root,
402
+ DesignwarePCIEViewport *viewport)
403
+{
404
+ const uint64_t target = viewport->target;
405
+ const uint64_t base = viewport->base;
406
+ const uint64_t size = (uint64_t)viewport->limit - base + 1;
407
+ const bool enabled = viewport->cr[1] & DESIGNWARE_PCIE_ATU_ENABLE;
408
+
409
+ MemoryRegion *current, *other;
410
+
411
+ if (viewport->cr[0] == DESIGNWARE_PCIE_ATU_TYPE_MEM) {
412
+ current = &viewport->mem;
413
+ other = &viewport->cfg;
414
+ memory_region_set_alias_offset(current, target);
415
+ } else {
416
+ current = &viewport->cfg;
417
+ other = &viewport->mem;
418
+ }
419
+
420
+ /*
421
+ * An outbound viewport can be reconfigured from being MEM to CFG,
422
+ * to account for that we disable the "other" memory region that
423
+ * becomes unused due to that fact.
424
+ */
425
+ memory_region_set_enabled(other, false);
426
+ if (enabled) {
427
+ memory_region_set_size(current, size);
428
+ memory_region_set_address(current, base);
429
+ }
430
+ memory_region_set_enabled(current, enabled);
431
+}
432
+
433
+static void designware_pcie_root_config_write(PCIDevice *d, uint32_t address,
434
+ uint32_t val, int len)
435
+{
436
+ DesignwarePCIERoot *root = DESIGNWARE_PCIE_ROOT(d);
437
+ DesignwarePCIEHost *host = designware_pcie_root_to_host(root);
438
+ DesignwarePCIEViewport *viewport =
439
+ designware_pcie_root_get_current_viewport(root);
440
+
441
+ switch (address) {
442
+ case DESIGNWARE_PCIE_PORT_LINK_CONTROL:
443
+ case DESIGNWARE_PCIE_LINK_WIDTH_SPEED_CONTROL:
444
+ case DESIGNWARE_PCIE_PHY_DEBUG_R1:
445
+ /* No-op */
446
+ break;
447
+
448
+ case DESIGNWARE_PCIE_MSI_ADDR_LO:
449
+ root->msi.base &= 0xFFFFFFFF00000000ULL;
450
+ root->msi.base |= val;
451
+ break;
452
+
453
+ case DESIGNWARE_PCIE_MSI_ADDR_HI:
454
+ root->msi.base &= 0x00000000FFFFFFFFULL;
455
+ root->msi.base |= (uint64_t)val << 32;
456
+ break;
457
+
458
+ case DESIGNWARE_PCIE_MSI_INTR0_ENABLE: {
459
+ const bool update_msi_mapping = !root->msi.intr[0].enable ^ !!val;
460
+
461
+ root->msi.intr[0].enable = val;
462
+
463
+ if (update_msi_mapping) {
464
+ designware_pcie_root_update_msi_mapping(root);
465
+ }
631
+ }
466
+ break;
632
env->condexec_bits = data[1];
467
+ }
633
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
468
+
634
}
469
+ case DESIGNWARE_PCIE_MSI_INTR0_MASK:
470
+ root->msi.intr[0].mask = val;
471
+ break;
472
+
473
+ case DESIGNWARE_PCIE_MSI_INTR0_STATUS:
474
+ root->msi.intr[0].status ^= val;
475
+ if (!root->msi.intr[0].status) {
476
+ qemu_set_irq(host->pci.irqs[0], 0);
477
+ }
478
+ break;
479
+
480
+ case DESIGNWARE_PCIE_ATU_VIEWPORT:
481
+ root->atu_viewport = val;
482
+ break;
483
+
484
+ case DESIGNWARE_PCIE_ATU_LOWER_BASE:
485
+ viewport->base &= 0xFFFFFFFF00000000ULL;
486
+ viewport->base |= val;
487
+ break;
488
+
489
+ case DESIGNWARE_PCIE_ATU_UPPER_BASE:
490
+ viewport->base &= 0x00000000FFFFFFFFULL;
491
+ viewport->base |= (uint64_t)val << 32;
492
+ break;
493
+
494
+ case DESIGNWARE_PCIE_ATU_LOWER_TARGET:
495
+ viewport->target &= 0xFFFFFFFF00000000ULL;
496
+ viewport->target |= val;
497
+ break;
498
+
499
+ case DESIGNWARE_PCIE_ATU_UPPER_TARGET:
500
+ viewport->target &= 0x00000000FFFFFFFFULL;
501
+ viewport->target |= val;
502
+ break;
503
+
504
+ case DESIGNWARE_PCIE_ATU_LIMIT:
505
+ viewport->limit = val;
506
+ break;
507
+
508
+ case DESIGNWARE_PCIE_ATU_CR1:
509
+ viewport->cr[0] = val;
510
+ break;
511
+ case DESIGNWARE_PCIE_ATU_CR2:
512
+ viewport->cr[1] = val;
513
+ designware_pcie_update_viewport(root, viewport);
514
+ break;
515
+
516
+ default:
517
+ pci_bridge_write_config(d, address, val, len);
518
+ break;
519
+ }
520
+}
521
+
522
+static char *designware_pcie_viewport_name(const char *direction,
523
+ unsigned int i,
524
+ const char *type)
525
+{
526
+ return g_strdup_printf("PCI %s Viewport %u [%s]",
527
+ direction, i, type);
528
+}
529
+
530
+static void designware_pcie_root_realize(PCIDevice *dev, Error **errp)
531
+{
532
+ DesignwarePCIERoot *root = DESIGNWARE_PCIE_ROOT(dev);
533
+ DesignwarePCIEHost *host = designware_pcie_root_to_host(root);
534
+ MemoryRegion *address_space = &host->pci.memory;
535
+ PCIBridge *br = PCI_BRIDGE(dev);
536
+ DesignwarePCIEViewport *viewport;
537
+ /*
538
+ * Dummy values used for initial configuration of MemoryRegions
539
+ * that belong to a given viewport
540
+ */
541
+ const hwaddr dummy_offset = 0;
542
+ const uint64_t dummy_size = 4;
543
+ size_t i;
544
+
545
+ br->bus_name = "dw-pcie";
546
+
547
+ pci_set_word(dev->config + PCI_COMMAND,
548
+ PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
549
+
550
+ pci_config_set_interrupt_pin(dev->config, 1);
551
+ pci_bridge_initfn(dev, TYPE_PCIE_BUS);
552
+
553
+ pcie_port_init_reg(dev);
554
+
555
+ pcie_cap_init(dev, 0x70, PCI_EXP_TYPE_ROOT_PORT,
556
+ 0, &error_fatal);
557
+
558
+ msi_nonbroken = true;
559
+ msi_init(dev, 0x50, 32, true, true, &error_fatal);
560
+
561
+ for (i = 0; i < DESIGNWARE_PCIE_NUM_VIEWPORTS; i++) {
562
+ MemoryRegion *source, *destination, *mem;
563
+ const char *direction;
564
+ char *name;
565
+
566
+ viewport = &root->viewports[DESIGNWARE_PCIE_VIEWPORT_INBOUND][i];
567
+ viewport->inbound = true;
568
+ viewport->base = 0x0000000000000000ULL;
569
+ viewport->target = 0x0000000000000000ULL;
570
+ viewport->limit = UINT32_MAX;
571
+ viewport->cr[0] = DESIGNWARE_PCIE_ATU_TYPE_MEM;
572
+
573
+ source = &host->pci.address_space_root;
574
+ destination = get_system_memory();
575
+ direction = "Inbound";
576
+
577
+ /*
578
+ * Configure MemoryRegion implementing PCI -> CPU memory
579
+ * access
580
+ */
581
+ mem = &viewport->mem;
582
+ name = designware_pcie_viewport_name(direction, i, "MEM");
583
+ memory_region_init_alias(mem, OBJECT(root), name, destination,
584
+ dummy_offset, dummy_size);
585
+ memory_region_add_subregion_overlap(source, dummy_offset, mem, -1);
586
+ memory_region_set_enabled(mem, false);
587
+ g_free(name);
588
+
589
+ viewport = &root->viewports[DESIGNWARE_PCIE_VIEWPORT_OUTBOUND][i];
590
+ viewport->root = root;
591
+ viewport->inbound = false;
592
+ viewport->base = 0x0000000000000000ULL;
593
+ viewport->target = 0x0000000000000000ULL;
594
+ viewport->limit = UINT32_MAX;
595
+ viewport->cr[0] = DESIGNWARE_PCIE_ATU_TYPE_MEM;
596
+
597
+ destination = &host->pci.memory;
598
+ direction = "Outbound";
599
+ source = get_system_memory();
600
+
601
+ /*
602
+ * Configure MemoryRegion implementing CPU -> PCI memory
603
+ * access
604
+ */
605
+ mem = &viewport->mem;
606
+ name = designware_pcie_viewport_name(direction, i, "MEM");
607
+ memory_region_init_alias(mem, OBJECT(root), name, destination,
608
+ dummy_offset, dummy_size);
609
+ memory_region_add_subregion(source, dummy_offset, mem);
610
+ memory_region_set_enabled(mem, false);
611
+ g_free(name);
612
+
613
+ /*
614
+ * Configure MemoryRegion implementing access to configuration
615
+ * space
616
+ */
617
+ mem = &viewport->cfg;
618
+ name = designware_pcie_viewport_name(direction, i, "CFG");
619
+ memory_region_init_io(&viewport->cfg, OBJECT(root),
620
+ &designware_pci_host_conf_ops,
621
+ viewport, name, dummy_size);
622
+ memory_region_add_subregion(source, dummy_offset, mem);
623
+ memory_region_set_enabled(mem, false);
624
+ g_free(name);
625
+ }
626
+
627
+ /*
628
+ * If no inbound iATU windows are configured, HW defaults to
629
+ * letting inbound TLPs pass in. We emulate that by explicitly
630
+ * configuring the first inbound window to cover all of the target's
631
+ * address space.
632
+ *
633
+ * NOTE: This will not work correctly when the first
634
+ * configured inbound window is window 0
635
+ */
636
+ viewport = &root->viewports[DESIGNWARE_PCIE_VIEWPORT_INBOUND][0];
637
+ viewport->cr[1] = DESIGNWARE_PCIE_ATU_ENABLE;
638
+ designware_pcie_update_viewport(root, viewport);
639
+
640
+ memory_region_init_io(&root->msi.iomem, OBJECT(root),
641
+ &designware_pci_host_msi_ops,
642
+ root, "pcie-msi", 0x4);
643
+ /*
644
+ * We initially place the MSI interrupt I/O region at address 0 and
645
+ * disable it. It is later moved to the correct offset and enabled
646
+ * in designware_pcie_root_update_msi_mapping() as part of the
647
+ * initialization done by the guest OS
648
+ */
649
+ memory_region_add_subregion(address_space, dummy_offset, &root->msi.iomem);
650
+ memory_region_set_enabled(&root->msi.iomem, false);
651
+}
652
+
653
+static void designware_pcie_set_irq(void *opaque, int irq_num, int level)
654
+{
655
+ DesignwarePCIEHost *host = DESIGNWARE_PCIE_HOST(opaque);
656
+
657
+ qemu_set_irq(host->pci.irqs[irq_num], level);
658
+}
659
+
660
+static const char *
661
+designware_pcie_host_root_bus_path(PCIHostState *host_bridge, PCIBus *rootbus)
662
+{
663
+ return "0000:00";
664
+}
665
+
666
+static const VMStateDescription vmstate_designware_pcie_msi_bank = {
667
+ .name = "designware-pcie-msi-bank",
668
+ .version_id = 1,
669
+ .minimum_version_id = 1,
670
+ .fields = (VMStateField[]) {
671
+ VMSTATE_UINT32(enable, DesignwarePCIEMSIBank),
672
+ VMSTATE_UINT32(mask, DesignwarePCIEMSIBank),
673
+ VMSTATE_UINT32(status, DesignwarePCIEMSIBank),
674
+ VMSTATE_END_OF_LIST()
675
+ }
676
+};
677
+
678
+static const VMStateDescription vmstate_designware_pcie_msi = {
679
+ .name = "designware-pcie-msi",
680
+ .version_id = 1,
681
+ .minimum_version_id = 1,
682
+ .fields = (VMStateField[]) {
683
+ VMSTATE_UINT64(base, DesignwarePCIEMSI),
684
+ VMSTATE_STRUCT_ARRAY(intr,
685
+ DesignwarePCIEMSI,
686
+ DESIGNWARE_PCIE_NUM_MSI_BANKS,
687
+ 1,
688
+ vmstate_designware_pcie_msi_bank,
689
+ DesignwarePCIEMSIBank),
690
+ VMSTATE_END_OF_LIST()
691
+ }
692
+};
693
+
694
+static const VMStateDescription vmstate_designware_pcie_viewport = {
695
+ .name = "designware-pcie-viewport",
696
+ .version_id = 1,
697
+ .minimum_version_id = 1,
698
+ .fields = (VMStateField[]) {
699
+ VMSTATE_UINT64(base, DesignwarePCIEViewport),
700
+ VMSTATE_UINT64(target, DesignwarePCIEViewport),
701
+ VMSTATE_UINT32(limit, DesignwarePCIEViewport),
702
+ VMSTATE_UINT32_ARRAY(cr, DesignwarePCIEViewport, 2),
703
+ VMSTATE_END_OF_LIST()
704
+ }
705
+};
706
+
707
+static const VMStateDescription vmstate_designware_pcie_root = {
708
+ .name = "designware-pcie-root",
709
+ .version_id = 1,
710
+ .minimum_version_id = 1,
711
+ .fields = (VMStateField[]) {
712
+ VMSTATE_PCI_DEVICE(parent_obj, PCIBridge),
713
+ VMSTATE_UINT32(atu_viewport, DesignwarePCIERoot),
714
+ VMSTATE_STRUCT_2DARRAY(viewports,
715
+ DesignwarePCIERoot,
716
+ 2,
717
+ DESIGNWARE_PCIE_NUM_VIEWPORTS,
718
+ 1,
719
+ vmstate_designware_pcie_viewport,
720
+ DesignwarePCIEViewport),
721
+ VMSTATE_STRUCT(msi,
722
+ DesignwarePCIERoot,
723
+ 1,
724
+ vmstate_designware_pcie_msi,
725
+ DesignwarePCIEMSI),
726
+ VMSTATE_END_OF_LIST()
727
+ }
728
+};
729
+
730
+static void designware_pcie_root_class_init(ObjectClass *klass, void *data)
731
+{
732
+ PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
733
+ DeviceClass *dc = DEVICE_CLASS(klass);
734
+
735
+ set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
736
+
737
+ k->vendor_id = PCI_VENDOR_ID_SYNOPSYS;
738
+ k->device_id = 0xABCD;
739
+ k->revision = 0;
740
+ k->class_id = PCI_CLASS_BRIDGE_PCI;
741
+ k->is_bridge = true;
742
+ k->exit = pci_bridge_exitfn;
743
+ k->realize = designware_pcie_root_realize;
744
+ k->config_read = designware_pcie_root_config_read;
745
+ k->config_write = designware_pcie_root_config_write;
746
+
747
+ dc->reset = pci_bridge_reset;
748
+ /*
749
+ * PCI-facing part of the host bridge, not usable without the
750
+ * host-facing part, which can't be device_add'ed, yet.
751
+ */
752
+ dc->user_creatable = false;
753
+ dc->vmsd = &vmstate_designware_pcie_root;
754
+}
755
+
756
+static uint64_t designware_pcie_host_mmio_read(void *opaque, hwaddr addr,
757
+ unsigned int size)
758
+{
759
+ PCIHostState *pci = PCI_HOST_BRIDGE(opaque);
760
+ PCIDevice *device = pci_find_device(pci->bus, 0, 0);
761
+
762
+ return pci_host_config_read_common(device,
763
+ addr,
764
+ pci_config_size(device),
765
+ size);
766
+}
767
+
768
+static void designware_pcie_host_mmio_write(void *opaque, hwaddr addr,
769
+ uint64_t val, unsigned int size)
770
+{
771
+ PCIHostState *pci = PCI_HOST_BRIDGE(opaque);
772
+ PCIDevice *device = pci_find_device(pci->bus, 0, 0);
773
+
774
+ return pci_host_config_write_common(device,
775
+ addr,
776
+ pci_config_size(device),
777
+ val, size);
778
+}
779
+
780
+static const MemoryRegionOps designware_pci_mmio_ops = {
781
+ .read = designware_pcie_host_mmio_read,
782
+ .write = designware_pcie_host_mmio_write,
783
+ .endianness = DEVICE_LITTLE_ENDIAN,
784
+ .impl = {
785
+ /*
786
+ * Our device would not work correctly if the guest were doing
787
+ * unaligned access. This might not be a limitation on the real
788
+ * device but in practice there is no reason for a guest to access
789
+ * this device unaligned.
790
+ */
791
+ .min_access_size = 4,
792
+ .max_access_size = 4,
793
+ .unaligned = false,
794
+ },
795
+};
796
+
797
+static AddressSpace *designware_pcie_host_set_iommu(PCIBus *bus, void *opaque,
798
+ int devfn)
799
+{
800
+ DesignwarePCIEHost *s = DESIGNWARE_PCIE_HOST(opaque);
801
+
802
+ return &s->pci.address_space;
803
+}
804
+
805
+static void designware_pcie_host_realize(DeviceState *dev, Error **errp)
806
+{
807
+ PCIHostState *pci = PCI_HOST_BRIDGE(dev);
808
+ DesignwarePCIEHost *s = DESIGNWARE_PCIE_HOST(dev);
809
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
810
+ size_t i;
811
+
812
+ for (i = 0; i < ARRAY_SIZE(s->pci.irqs); i++) {
813
+ sysbus_init_irq(sbd, &s->pci.irqs[i]);
814
+ }
815
+
816
+ memory_region_init_io(&s->mmio,
817
+ OBJECT(s),
818
+ &designware_pci_mmio_ops,
819
+ s,
820
+ "pcie.reg", 4 * 1024);
821
+ sysbus_init_mmio(sbd, &s->mmio);
822
+
823
+ memory_region_init(&s->pci.io, OBJECT(s), "pcie-pio", 16);
824
+ memory_region_init(&s->pci.memory, OBJECT(s),
825
+ "pcie-bus-memory",
826
+ UINT64_MAX);
827
+
828
+ pci->bus = pci_register_root_bus(dev, "pcie",
829
+ designware_pcie_set_irq,
830
+ pci_swizzle_map_irq_fn,
831
+ s,
832
+ &s->pci.memory,
833
+ &s->pci.io,
834
+ 0, 4,
835
+ TYPE_PCIE_BUS);
836
+
837
+ memory_region_init(&s->pci.address_space_root,
838
+ OBJECT(s),
839
+ "pcie-bus-address-space-root",
840
+ UINT64_MAX);
841
+ memory_region_add_subregion(&s->pci.address_space_root,
842
+ 0x0, &s->pci.memory);
843
+ address_space_init(&s->pci.address_space,
844
+ &s->pci.address_space_root,
845
+ "pcie-bus-address-space");
846
+ pci_setup_iommu(pci->bus, designware_pcie_host_set_iommu, s);
847
+
848
+ qdev_set_parent_bus(DEVICE(&s->root), BUS(pci->bus));
849
+ qdev_init_nofail(DEVICE(&s->root));
850
+}
851
+
852
+static const VMStateDescription vmstate_designware_pcie_host = {
853
+ .name = "designware-pcie-host",
854
+ .version_id = 1,
855
+ .minimum_version_id = 1,
856
+ .fields = (VMStateField[]) {
857
+ VMSTATE_STRUCT(root,
858
+ DesignwarePCIEHost,
859
+ 1,
860
+ vmstate_designware_pcie_root,
861
+ DesignwarePCIERoot),
862
+ VMSTATE_END_OF_LIST()
863
+ }
864
+};
865
+
866
+static void designware_pcie_host_class_init(ObjectClass *klass, void *data)
867
+{
868
+ DeviceClass *dc = DEVICE_CLASS(klass);
869
+ PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(klass);
870
+
871
+ hc->root_bus_path = designware_pcie_host_root_bus_path;
872
+ dc->realize = designware_pcie_host_realize;
873
+ set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
874
+ dc->fw_name = "pci";
875
+ dc->vmsd = &vmstate_designware_pcie_host;
876
+}
877
+
878
+static void designware_pcie_host_init(Object *obj)
879
+{
880
+ DesignwarePCIEHost *s = DESIGNWARE_PCIE_HOST(obj);
881
+ DesignwarePCIERoot *root = &s->root;
882
+
883
+ object_initialize(root, sizeof(*root), TYPE_DESIGNWARE_PCIE_ROOT);
884
+ object_property_add_child(obj, "root", OBJECT(root), NULL);
885
+ qdev_prop_set_int32(DEVICE(root), "addr", PCI_DEVFN(0, 0));
886
+ qdev_prop_set_bit(DEVICE(root), "multifunction", false);
887
+}
888
+
889
+static const TypeInfo designware_pcie_root_info = {
890
+ .name = TYPE_DESIGNWARE_PCIE_ROOT,
891
+ .parent = TYPE_PCI_BRIDGE,
892
+ .instance_size = sizeof(DesignwarePCIERoot),
893
+ .class_init = designware_pcie_root_class_init,
894
+ .interfaces = (InterfaceInfo[]) {
895
+ { INTERFACE_PCIE_DEVICE },
896
+ { }
897
+ },
898
+};
899
+
900
+static const TypeInfo designware_pcie_host_info = {
901
+ .name = TYPE_DESIGNWARE_PCIE_HOST,
902
+ .parent = TYPE_PCI_HOST_BRIDGE,
903
+ .instance_size = sizeof(DesignwarePCIEHost),
904
+ .instance_init = designware_pcie_host_init,
905
+ .class_init = designware_pcie_host_class_init,
906
+};
907
+
908
+static void designware_pcie_register(void)
909
+{
910
+ type_register_static(&designware_pcie_root_info);
911
+ type_register_static(&designware_pcie_host_info);
912
+}
913
+type_init(designware_pcie_register)
914
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
915
index XXXXXXX..XXXXXXX 100644
916
--- a/default-configs/arm-softmmu.mak
917
+++ b/default-configs/arm-softmmu.mak
918
@@ -XXX,XX +XXX,XX @@ CONFIG_GPIO_KEY=y
919
CONFIG_MSF2=y
920
CONFIG_FW_CFG_DMA=y
921
CONFIG_XILINX_AXI=y
922
+CONFIG_PCI_DESIGNWARE=y
923
--
635
--
924
2.16.2
636
2.25.1
925
926
diff view generated by jsdifflib
Move the definition of the 'host' cpu type into cpu.c, where all the
other CPU types are defined. We can do this now we've decoupled it
from the KVM-specific host feature probing. This means we now create
the type unconditionally (assuming we were built with KVM support at
all), but if you try to use it without -enable-kvm this will end
up in the "host cpu probe failed and KVM not enabled" path in
arm_cpu_realizefn(), for an appropriate error message.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180308130626.12393-3-peter.maydell@linaro.org
---
 target/arm/cpu.c | 24 ++++++++++++++++++++++++
 target/arm/kvm.c | 19 -------------------
 2 files changed, 24 insertions(+), 19 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
 #endif
 }
 
+#ifdef CONFIG_KVM
+static void arm_host_initfn(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+
+ kvm_arm_set_cpu_features_from_host(cpu);
+}
+
+static const TypeInfo host_arm_cpu_type_info = {
+ .name = TYPE_ARM_HOST_CPU,
+#ifdef TARGET_AARCH64
+ .parent = TYPE_AARCH64_CPU,
+#else
+ .parent = TYPE_ARM_CPU,
+#endif
+ .instance_init = arm_host_initfn,
+};
+
+#endif
+
 static void cpu_register(const ARMCPUInfo *info)
 {
 TypeInfo type_info = {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_register_types(void)
 cpu_register(info);
 info++;
 }
+
+#ifdef CONFIG_KVM
+ type_register_static(&host_arm_cpu_type_info);
+#endif
 }
 
 type_init(arm_cpu_register_types)
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
 env->features = arm_host_cpu_features.features;
 }
 
-static void kvm_arm_host_cpu_initfn(Object *obj)
-{
- ARMCPU *cpu = ARM_CPU(obj);
-
- kvm_arm_set_cpu_features_from_host(cpu);
-}
-
-static const TypeInfo host_arm_cpu_type_info = {
- .name = TYPE_ARM_HOST_CPU,
-#ifdef TARGET_AARCH64
- .parent = TYPE_AARCH64_CPU,
-#else
- .parent = TYPE_ARM_CPU,
-#endif
- .instance_init = kvm_arm_host_cpu_initfn,
-};
-
 int kvm_arch_init(MachineState *ms, KVMState *s)
 {
 /* For ARM interrupt delivery is always asynchronous,
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init(MachineState *ms, KVMState *s)
 
 cap_has_mp_state = kvm_check_extension(s, KVM_CAP_MP_STATE);
 
- type_register_static(&host_arm_cpu_type_info);
-
 return 0;
 }
 
--
2.16.2

Currently the microdrive code uses device_legacy_reset() to reset
itself, and has its reset method call reset on the IDE bus as the
last thing it does. Switch to using device_cold_reset().

The only concrete microdrive device is the TYPE_DSCM1XXXX; it is not
command-line pluggable, so it is used only by the old pxa2xx Arm
boards 'akita', 'borzoi', 'spitz', 'terrier' and 'tosa'.

You might think that this would result in the IDE bus being
reset automatically, but it does not, because the IDEBus type
does not set the BusClass::reset method. Instead the controller
must explicitly call ide_bus_reset(). We therefore leave that
call in md_reset().

Note also that because the PCMCIA card device is a direct subclass of
TYPE_DEVICE and we don't model the PCMCIA controller-to-card
interface as a qbus, PCMCIA cards are not on any qbus and so they
don't get reset when the system is reset. The reset only happens via
the dscm1xxxx_attach() and dscm1xxxx_detach() functions during
machine creation.

Because our aim here is merely to try to get rid of calls to the
device_legacy_reset() function, we leave these other dubious
reset-related issues alone. (They all stem from this code being
absolutely ancient.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221013174042.1602926-1-peter.maydell@linaro.org
---
 hw/ide/microdrive.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/ide/microdrive.c b/hw/ide/microdrive.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ide/microdrive.c
+++ b/hw/ide/microdrive.c
@@ -XXX,XX +XXX,XX @@ static void md_attr_write(PCMCIACardState *card, uint32_t at, uint8_t value)
 case 0x00:    /* Configuration Option Register */
 s->opt = value & 0xcf;
 if (value & OPT_SRESET) {
- device_legacy_reset(DEVICE(s));
+ device_cold_reset(DEVICE(s));
 }
 md_interrupt_update(s);
 break;
@@ -XXX,XX +XXX,XX @@ static void md_common_write(PCMCIACardState *card, uint32_t at, uint16_t value)
 case 0xe:    /* Device Control */
 s->ctrl = value;
 if (value & CTRL_SRST) {
- device_legacy_reset(DEVICE(s));
+ device_cold_reset(DEVICE(s));
 }
 md_interrupt_update(s);
 break;
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_attach(PCMCIACardState *card)
 md->attr_base = pcc->cis[0x74] | (pcc->cis[0x76] << 8);
 md->io_base = 0x0;
 
- device_legacy_reset(DEVICE(md));
+ device_cold_reset(DEVICE(md));
 md_interrupt_update(md);
 
 return 0;
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_detach(PCMCIACardState *card)
 {
 MicroDriveState *md = MICRODRIVE(card);
 
- device_legacy_reset(DEVICE(md));
+ device_cold_reset(DEVICE(md));
 return 0;
 }
 
--
2.25.1
Deleted patch
Now we have a working '-cpu max', the linux-user-only
'any' CPU is pretty much the same thing, so implement it
that way.

For the moment we don't add any of the extra feature bits
to the system-emulation "max", because we don't set the
ID register bits we would need to advertise those
features as present.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180308130626.12393-5-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/arm/cpu.c | 52 +++++++++++++++++++++++++----------------------
 target/arm/cpu64.c | 59 ++++++++++++++++++++++++++----------------------------
 2 files changed, 56 insertions(+), 55 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static ObjectClass *arm_cpu_class_by_name(const char *cpu_model)
 ObjectClass *oc;
 char *typename;
 char **cpuname;
+ const char *cpunamestr;
 
 cpuname = g_strsplit(cpu_model, ",", 1);
- typename = g_strdup_printf(ARM_CPU_TYPE_NAME("%s"), cpuname[0]);
+ cpunamestr = cpuname[0];
+#ifdef CONFIG_USER_ONLY
+ /* For backwards compatibility usermode emulation allows "-cpu any",
+ * which has the same semantics as "-cpu max".
+ */
+ if (!strcmp(cpunamestr, "any")) {
+ cpunamestr = "max";
+ }
+#endif
+ typename = g_strdup_printf(ARM_CPU_TYPE_NAME("%s"), cpunamestr);
 oc = object_class_by_name(typename);
 g_strfreev(cpuname);
 g_free(typename);
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
 kvm_arm_set_cpu_features_from_host(cpu);
 } else {
 cortex_a15_initfn(obj);
- /* In future we might add feature bits here even if the
- * real-world A15 doesn't implement them.
- */
- }
-}
-#endif
-
 #ifdef CONFIG_USER_ONLY
-static void arm_any_initfn(Object *obj)
-{
- ARMCPU *cpu = ARM_CPU(obj);
- set_feature(&cpu->env, ARM_FEATURE_V8);
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
- set_feature(&cpu->env, ARM_FEATURE_NEON);
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
- set_feature(&cpu->env, ARM_FEATURE_CRC);
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
- cpu->midr = 0xffffffff;
+ /* We don't set these in system emulation mode for the moment,
+ * since we don't correctly set the ID registers to advertise them,
+ */
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
+ set_feature(&cpu->env, ARM_FEATURE_V8_AES);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
+ set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
+ set_feature(&cpu->env, ARM_FEATURE_CRC);
+ set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
+ set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
+#endif
+ }
 }
 #endif
 
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_cpus[] = {
 { .name = "max", .initfn = arm_max_initfn },
 #endif
 #ifdef CONFIG_USER_ONLY
- { .name = "any", .initfn = arm_any_initfn },
+ { .name = "any", .initfn = arm_max_initfn },
 #endif
 #endif
 { .name = NULL }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 kvm_arm_set_cpu_features_from_host(cpu);
 } else {
 aarch64_a57_initfn(obj);
- /* In future we might add feature bits here even if the
- * real-world A57 doesn't implement them.
+#ifdef CONFIG_USER_ONLY
+ /* We don't set these in system emulation mode for the moment,
+ * since we don't correctly set the ID registers to advertise them,
+ * and in some cases they're only available in AArch64 and not AArch32,
+ * whereas the architecture requires them to be present in both if
+ * present in either.
 */
+ set_feature(&cpu->env, ARM_FEATURE_V8);
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
+ set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+ set_feature(&cpu->env, ARM_FEATURE_V8_AES);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
+ set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
+ set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
+ set_feature(&cpu->env, ARM_FEATURE_CRC);
+ set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
+ set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
+ set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
+ /* For usermode -cpu max we can use a larger and more efficient DCZ
+ * blocksize since we don't have to follow what the hardware does.
+ */
+ cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
+ cpu->dcz_blocksize = 7; /* 512 bytes */
+#endif
 }
 }
 
-#ifdef CONFIG_USER_ONLY
-static void aarch64_any_initfn(Object *obj)
-{
- ARMCPU *cpu = ARM_CPU(obj);
-
- set_feature(&cpu->env, ARM_FEATURE_V8);
- set_feature(&cpu->env, ARM_FEATURE_VFP4);
- set_feature(&cpu->env, ARM_FEATURE_NEON);
- set_feature(&cpu->env, ARM_FEATURE_AARCH64);
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
- set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
- set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
- set_feature(&cpu->env, ARM_FEATURE_CRC);
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
- set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
- cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
- cpu->dcz_blocksize = 7; /* 512 bytes */
-}
-#endif
-
 typedef struct ARMCPUInfo {
 const char *name;
 void (*initfn)(Object *obj);
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
 { .name = "cortex-a57", .initfn = aarch64_a57_initfn },
 { .name = "cortex-a53", .initfn = aarch64_a53_initfn },
 { .name = "max", .initfn = aarch64_max_initfn },
-#ifdef CONFIG_USER_ONLY
- { .name = "any", .initfn = aarch64_any_initfn },
-#endif
 { .name = NULL }
 };
 
--
2.16.2
Deleted patch
Allow the virt board to support '-cpu max' in the same way
it already handles '-cpu host'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180308130626.12393-6-peter.maydell@linaro.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/arm/virt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
 ARM_CPU_TYPE_NAME("cortex-a53"),
 ARM_CPU_TYPE_NAME("cortex-a57"),
 ARM_CPU_TYPE_NAME("host"),
+ ARM_CPU_TYPE_NAME("max"),
 };
 
 static bool cpu_type_valid(const char *cpu)
--
2.16.2