The following changes since commit 003ba52a8b327180e284630b289c6ece5a3e08b9:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2023-02-16 11:16:39 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230216

for you to fetch changes up to caf01d6a435d9f4a95aeae2f9fc6cb8b889b1fb8:

  tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG (2023-02-16 16:28:53 +0000)

----------------------------------------------------------------
target-arm queue:
 * Some mostly M-profile-related code cleanups
 * avocado: Retire the boot_linux.py AArch64 TCG tests
 * hw/arm/smmuv3: Add GBPA register
 * arm/virt: don't try to spell out the accelerator
 * hw/arm: Attach PSPI module to NPCM7XX SoC
 * Some cleanup/refactoring patches aiming towards
   allowing building Arm targets without CONFIG_TCG

----------------------------------------------------------------
Alex Bennée (1):
      tests/avocado: retire the Aarch64 TCG tests from boot_linux.py

Claudio Fontana (3):
      target/arm: rename handle_semihosting to tcg_handle_semihosting
      target/arm: wrap psci call with tcg_enabled
      target/arm: wrap call to aarch64_sve_change_el in tcg_enabled()

Cornelia Huck (1):
      arm/virt: don't try to spell out the accelerator

Fabiano Rosas (7):
      target/arm: Move PC alignment check
      target/arm: Move cpregs code out of cpu.h
      tests/avocado: Skip tests that require a missing accelerator
      tests/avocado: Tag TCG tests with accel:tcg
      target/arm: Use "max" as default cpu for the virt machine with KVM
      tests/qtest: arm-cpu-features: Match tests to required accelerators
      tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG

Hao Wu (3):
      MAINTAINERS: Add myself to maintainers and remove Havard
      hw/ssi: Add Nuvoton PSPI Module
      hw/arm: Attach PSPI module to NPCM7XX SoC

Jean-Philippe Brucker (2):
      hw/arm/smmu-common: Support 64-bit addresses
      hw/arm/smmu-common: Fix TTB1 handling

Mostafa Saleh (1):
      hw/arm/smmuv3: Add GBPA register

Philippe Mathieu-Daudé (12):
      hw/intc/armv7m_nvic: Use OBJECT_DECLARE_SIMPLE_TYPE() macro
      target/arm: Simplify arm_v7m_mmu_idx_for_secstate() for user emulation
      target/arm: Reduce arm_v7m_mmu_idx_[all/for_secstate_and_priv]() scope
      target/arm: Constify ID_PFR1 on user emulation
      target/arm: Convert CPUARMState::eabi to boolean
      target/arm: Avoid resetting CPUARMState::eabi field
      target/arm: Restrict CPUARMState::gicv3state to sysemu
      target/arm: Restrict CPUARMState::arm_boot_info to sysemu
      target/arm: Restrict CPUARMState::nvic to sysemu
      target/arm: Store CPUARMState::nvic as NVICState*
      target/arm: Declare CPU <-> NVIC helpers in 'hw/intc/armv7m_nvic.h'
      hw/arm: Add missing XLNX_ZYNQMP_ARM -> USB_DWC3 Kconfig dependency

 MAINTAINERS                            |   8 +-
 docs/system/arm/nuvoton.rst            |   2 +-
 hw/arm/smmuv3-internal.h               |   7 +
 include/hw/arm/npcm7xx.h               |   2 +
 include/hw/arm/smmu-common.h           |   2 -
 include/hw/arm/smmuv3.h                |   1 +
 include/hw/intc/armv7m_nvic.h          | 128 +++++++++++++++++-
 include/hw/ssi/npcm_pspi.h             |  53 ++++++++
 linux-user/user-internals.h            |   2 +-
 target/arm/cpregs.h                    |  98 ++++++++++++++
 target/arm/cpu.h                       | 228 ++-------------------------
 target/arm/internals.h                 |  14 --
 hw/arm/npcm7xx.c                       |  25 +++-
 hw/arm/smmu-common.c                   |   4 +-
 hw/arm/smmuv3.c                        |  43 ++++++-
 hw/arm/virt.c                          |  10 +-
 hw/intc/armv7m_nvic.c                  |  38 ++----
 hw/ssi/npcm_pspi.c                     | 221 ++++++++++++++++++++++++++++
 linux-user/arm/cpu_loop.c              |   4 +-
 target/arm/cpu.c                       |   5 +-
 target/arm/cpu_tcg.c                   |   3 +
 target/arm/helper.c                    |  31 +++--
 target/arm/m_helper.c                  |  86 +++++++------
 target/arm/machine.c                   |  18 +--
 tests/qtest/arm-cpu-features.c         |  28 ++--
 hw/arm/Kconfig                         |   1 +
 hw/ssi/meson.build                     |   2 +-
 hw/ssi/trace-events                    |   5 +
 tests/avocado/avocado_qemu/__init__.py |   4 +
 tests/avocado/boot_linux.py            |  48 ++-----
 tests/avocado/boot_linux_console.py    |   1 +
 tests/avocado/machine_aarch64_virt.py  |  63 ++++++++-
 tests/avocado/reverse_debugging.py     |   8 ++
 tests/qtest/meson.build                |   4 +-
 34 files changed, 798 insertions(+), 399 deletions(-)
 create mode 100644 include/hw/ssi/npcm_pspi.h
 create mode 100644 hw/ssi/npcm_pspi.c
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Manually convert to OBJECT_DECLARE_SIMPLE_TYPE() macro,
similarly to automatic conversion from commit 8063396bf3
("Use OBJECT_DECLARE_SIMPLE_TYPE when possible").

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-2-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@
 #include "qom/object.h"

 #define TYPE_NVIC "armv7m_nvic"
-
-typedef struct NVICState NVICState;
-DECLARE_INSTANCE_CHECKER(NVICState, NVIC,
-                         TYPE_NVIC)
+OBJECT_DECLARE_SIMPLE_TYPE(NVICState, NVIC)

 /* Highest permitted number of exceptions (architectural limit) */
 #define NVIC_MAX_VECTORS 512
--
2.34.1
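
(For context: OBJECT_DECLARE_SIMPLE_TYPE() bundles up exactly the
boilerplate being deleted. Paraphrasing qom/object.h rather than quoting
it, the macro expands to roughly:

    /* OBJECT_DECLARE_SIMPLE_TYPE(NVICState, NVIC), approximately: */
    typedef struct NVICState NVICState;
    G_DEFINE_AUTOPTR_CLEANUP_FUNC(NVICState, object_unref)
    DECLARE_INSTANCE_CHECKER(NVICState, NVIC, TYPE_NVIC)

so the one-line replacement keeps the typedef and the instance checker,
and picks up the g_autoptr() cleanup declaration as a bonus.)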
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

Addresses targeting the second translation table (TTB1) in the SMMU have
all upper bits set (except for the top byte when TBI is enabled). Fix
the TTB1 check.

Reported-by: Ola Hugosson <ola.hugosson@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230214171921.1917916-3-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmu-common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
         /* there is a ttbr0 region and we are in it (high bits all zero) */
         return &cfg->tt[0];
     } else if (cfg->tt[1].tsz &&
-               !extract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte)) {
+               sextract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte) == -1) {
         /* there is a ttbr1 region and we are in it (high bits all one) */
         return &cfg->tt[1];
     } else if (!cfg->tt[0].tsz) {
--
2.34.1
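
(To see why the new check is the right one: extract64() zero-extends the
selected bit field, so !extract64(...) is only true when the upper bits
are all *zero*, which is the TTB0 condition, not the TTB1 one.
sextract64() sign-extends the field, so it reads as -1 exactly when every
selected bit is set. A self-contained sketch of the two helpers'
semantics, reimplemented here for illustration with the patch's tbi_byte
adjustment dropped for brevity; QEMU's own versions live in
include/qemu/bitops.h:

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative reimplementations matching QEMU's extract64() and
     * sextract64() for this use case. */
    static uint64_t extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    static int64_t sextract64(uint64_t value, int start, int length)
    {
        /* Shift the field up to bit 63, then arithmetic-shift back. */
        return (int64_t)(value << (64 - length - start)) >> (64 - length);
    }

    int main(void)
    {
        int tsz = 16;  /* e.g. a TTB1 region covering the top 16 VA bits */
        uint64_t iova = 0xffffbeefcafe0000ULL;  /* upper 16 bits all ones */

        /* Old test: !extract64(...) only matches "high bits all zero",
         * so a genuine TTB1 address like this one was not recognised. */
        assert(extract64(iova, 64 - tsz, tsz) != 0);

        /* New test: sign extension makes an all-ones field read as -1. */
        assert(sextract64(iova, 64 - tsz, tsz) == -1);
        return 0;
    }

)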
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-10-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         uint32_t ctrl;
     } sau;

-    void *nvic;
 #if !defined(CONFIG_USER_ONLY)
+    void *nvic;
     const struct arm_boot_info *boot_info;
     /* Store GICv3CPUState to access from this struct */
     void *gicv3state;
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-6-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 linux-user/user-internals.h | 2 +-
 target/arm/cpu.h            | 2 +-
 linux-user/arm/cpu_loop.c   | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/linux-user/user-internals.h b/linux-user/user-internals.h
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/user-internals.h
+++ b/linux-user/user-internals.h
@@ -XXX,XX +XXX,XX @@ void print_termios(void *arg);
 #ifdef TARGET_ARM
 static inline int regpairs_aligned(CPUArchState *cpu_env, int num)
 {
-    return cpu_env->eabi == 1;
+    return cpu_env->eabi;
 }
 #elif defined(TARGET_MIPS) && defined(TARGET_ABI_MIPSO32)
 static inline int regpairs_aligned(CPUArchState *cpu_env, int num) { return 1; }
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {

 #if defined(CONFIG_USER_ONLY)
     /* For usermode syscall translation. */
-    int eabi;
+    bool eabi;
 #endif

     struct CPUBreakpoint *cpu_breakpoint[16];
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/cpu_loop.c
+++ b/linux-user/arm/cpu_loop.c
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
             break;
         case EXCP_SWI:
             {
-                env->eabi = 1;
+                env->eabi = true;
                 /* system call */
                 if (env->thumb) {
                     /* Thumb is always EABI style with syscall number in r7 */
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
                      * > 0xfffff and are handled below as out-of-range.
                      */
                     n ^= ARM_SYSCALL_BASE;
-                    env->eabi = 0;
+                    env->eabi = false;
                 }
             }

--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

While dozens of files include "cpu.h", only 3 files require
these NVIC helper declarations.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-12-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h | 123 ++++++++++++++++++++++++++++++++++
 target/arm/cpu.h              | 123 ----------------------------------
 target/arm/cpu.c              |   4 +-
 target/arm/cpu_tcg.c          |   3 +
 target/arm/m_helper.c         |   3 +
 5 files changed, 132 insertions(+), 124 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ struct NVICState {
     qemu_irq sysresetreq;
 };

+/* Interface between CPU and Interrupt controller. */
+/**
+ * armv7m_nvic_set_pending: mark the specified exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Marks the specified exception as pending. Note that we will assert()
+ * if @secure is true and @irq does not specify one of the fixed set
+ * of architecturally banked exceptions.
+ */
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_derived: mark this derived exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for derived
+ * exceptions (exceptions generated in the course of trying to take
+ * a different exception).
+ */
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
+ * generated in the course of lazy stacking of FP registers.
+ */
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_get_pending_irq_info: return highest priority pending
+ * exception, and whether it targets Secure state
+ * @s: the NVIC
+ * @pirq: set to pending exception number
+ * @ptargets_secure: set to whether pending exception targets Secure
+ *
+ * This function writes the number of the highest priority pending
+ * exception (the one which would be made active by
+ * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
+ * to true if the current highest priority pending exception should
+ * be taken to Secure state, false for NS.
+ */
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
+                                      bool *ptargets_secure);
+/**
+ * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
+ * @s: the NVIC
+ *
+ * Move the current highest priority pending exception from the pending
+ * state to the active state, and update v7m.exception to indicate that
+ * it is the exception currently being handled.
+ */
+void armv7m_nvic_acknowledge_irq(NVICState *s);
+/**
+ * armv7m_nvic_complete_irq: complete specified interrupt or exception
+ * @s: the NVIC
+ * @irq: the exception number to complete
+ * @secure: true if this exception was secure
+ *
+ * Returns: -1 if the irq was not active
+ * 1 if completing this irq brought us back to base (no active irqs)
+ * 0 if there is still an irq active after this one was completed
+ * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
+ */
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
+ * @s: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Return whether an exception is "ready", i.e. whether the exception is
+ * enabled and is configured at a priority which would allow it to
+ * interrupt the current execution priority. This controls whether the
+ * RDY bit for it in the FPCCR is set.
+ */
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
+/**
+ * armv7m_nvic_raw_execution_priority: return the raw execution priority
+ * @s: the NVIC
+ *
+ * Returns: the raw execution priority as defined by the v8M architecture.
+ * This is the execution priority minus the effects of AIRCR.PRIS,
+ * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
+ * (v8M ARM ARM I_PKLD.)
+ */
+int armv7m_nvic_raw_execution_priority(NVICState *s);
+/**
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
+ * priority is negative for the specified security state.
+ * @s: the NVIC
+ * @secure: the security state to test
+ * This corresponds to the pseudocode IsReqExecPriNeg().
+ */
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
+#else
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
+{
+    return false;
+}
+#endif
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
+#else
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
+{
+    return true;
+}
+#endif
+
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);

-/* Interface between CPU and Interrupt controller. */
-#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_can_take_pending_exception(NVICState *s);
-#else
-static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
-{
-    return true;
-}
-#endif
-/**
- * armv7m_nvic_set_pending: mark the specified exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Marks the specified exception as pending. Note that we will assert()
- * if @secure is true and @irq does not specify one of the fixed set
- * of architecturally banked exceptions.
- */
-void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_set_pending_derived: mark this derived exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Similar to armv7m_nvic_set_pending(), but specifically for derived
- * exceptions (exceptions generated in the course of trying to take
- * a different exception).
- */
-void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
- * generated in the course of lazy stacking of FP registers.
- */
-void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_get_pending_irq_info: return highest priority pending
- * exception, and whether it targets Secure state
- * @s: the NVIC
- * @pirq: set to pending exception number
- * @ptargets_secure: set to whether pending exception targets Secure
- *
- * This function writes the number of the highest priority pending
- * exception (the one which would be made active by
- * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
- * to true if the current highest priority pending exception should
- * be taken to Secure state, false for NS.
- */
-void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
-                                      bool *ptargets_secure);
-/**
- * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
- * @s: the NVIC
- *
- * Move the current highest priority pending exception from the pending
- * state to the active state, and update v7m.exception to indicate that
- * it is the exception currently being handled.
- */
-void armv7m_nvic_acknowledge_irq(NVICState *s);
-/**
- * armv7m_nvic_complete_irq: complete specified interrupt or exception
- * @s: the NVIC
- * @irq: the exception number to complete
- * @secure: true if this exception was secure
- *
- * Returns: -1 if the irq was not active
- * 1 if completing this irq brought us back to base (no active irqs)
- * 0 if there is still an irq active after this one was completed
- * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
- */
-int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
- * @s: the NVIC
- * @irq: the exception number to mark pending
- * @secure: false for non-banked exceptions or for the nonsecure
- * version of a banked exception, true for the secure version of a banked
- * exception.
- *
- * Return whether an exception is "ready", i.e. whether the exception is
- * enabled and is configured at a priority which would allow it to
- * interrupt the current execution priority. This controls whether the
- * RDY bit for it in the FPCCR is set.
- */
-bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
-/**
- * armv7m_nvic_raw_execution_priority: return the raw execution priority
- * @s: the NVIC
- *
- * Returns: the raw execution priority as defined by the v8M architecture.
- * This is the execution priority minus the effects of AIRCR.PRIS,
- * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
- * (v8M ARM ARM I_PKLD.)
- */
-int armv7m_nvic_raw_execution_priority(NVICState *s);
-/**
- * armv7m_nvic_neg_prio_requested: return true if the requested execution
- * priority is negative for the specified security state.
- * @s: the NVIC
- * @secure: the security state to test
- * This corresponds to the pseudocode IsReqExecPriNeg().
- */
-#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
-#else
-static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
-{
-    return false;
-}
-#endif
-
 /* Interface for defining coprocessor registers.
  * Registers are defined in tables of arm_cp_reginfo structs
  * which are passed to define_arm_cp_regs().
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
 #if !defined(CONFIG_USER_ONLY)
 #include "hw/loader.h"
 #include "hw/boards.h"
+#ifdef CONFIG_TCG
 #include "hw/intc/armv7m_nvic.h"
-#endif
+#endif /* CONFIG_TCG */
+#endif /* !CONFIG_USER_ONLY */
 #include "sysemu/tcg.h"
 #include "sysemu/qtest.h"
 #include "sysemu/hw_accel.h"
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/boards.h"
 #endif
 #include "cpregs.h"
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG)
+#include "hw/intc/armv7m_nvic.h"
+#endif


 /* Share AArch32 -cpu max features with AArch64. */
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "semihosting/common-semi.h"
 #endif
+#if !defined(CONFIG_USER_ONLY)
+#include "hw/intc/armv7m_nvic.h"
+#endif

 static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask,
                          uint32_t reg, uint32_t val)
--
2.34.1
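
(As a usage sketch of the moved interface, with a hypothetical caller
that is not code from this series: a device model pending the non-secure
SysTick exception, and the exception-handling path that later
acknowledges and completes it, would go through roughly:

    /* Hypothetical caller; after this patch it includes
     * "hw/intc/armv7m_nvic.h" for these prototypes instead of
     * pulling them in through target/arm's cpu.h. */
    static void pend_and_run_systick(NVICState *s)
    {
        /* Mark the non-secure SysTick exception as pending... */
        armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, false);
        /* ...make it active once it is the highest-priority pending
         * exception... */
        armv7m_nvic_acknowledge_irq(s);
        /* ...and complete it when the guest returns from the handler. */
        if (armv7m_nvic_complete_irq(s, ARMV7M_EXCP_SYSTICK, false) == 1) {
            /* back at base level: no active exceptions remain */
        }
    }

)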
From: Claudio Fontana <cfontana@suse.de>

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     unsigned int cur_el = arm_current_el(env);
     int rt;

-    /*
-     * Note that new_el can never be 0. If cur_el is 0, then
-     * el0_a64 is is_a64(), else el0_a64 is ignored.
-     */
-    aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    if (tcg_enabled()) {
+        /*
+         * Note that new_el can never be 0. If cur_el is 0, then
+         * el0_a64 is is_a64(), else el0_a64 is ignored.
+         */
+        aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    }

     if (cur_el < new_el) {
         /*
320
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
321
+ .writefn = tlbi_aa64_ripas2e1is_write },
322
{ .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
323
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
324
- .access = PL2_W, .type = ARM_CP_NOP },
325
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
326
+ .writefn = tlbi_aa64_ripas2e1is_write },
327
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
328
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
329
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
330
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
331
.writefn = tlbi_aa64_rvae2is_write },
332
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
333
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
334
- .access = PL2_W, .type = ARM_CP_NOP },
335
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
336
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
337
+ .writefn = tlbi_aa64_ripas2e1_write },
338
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
339
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
340
- .access = PL2_W, .type = ARM_CP_NOP },
341
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
342
+ .writefn = tlbi_aa64_ripas2e1_write },
343
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
344
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
345
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
-    /*
-     * Note that new_el can never be 0. If cur_el is 0, then
-     * el0_a64 is is_a64(), else el0_a64 is ignored.
-     */
-    aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    if (tcg_enabled()) {
+        /*
+         * Note that new_el can never be 0. If cur_el is 0, then
+         * el0_a64 is is_a64(), else el0_a64 is ignored.
+         */
+        aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+    }

     if (cur_el < new_el) {
         /*
--
2.34.1
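A quick worked example of the address arithmetic in the tlbi_aa64_ipas2e1_write()
helpers above (illustrative only, not part of any patch): the TLBI IPAS2E1
payload carries IPA[47:12] in value[35:0], so shifting left by 12 and
sign-extending from bit 55 recovers the page address to flush. The helper
below mimics QEMU's sextract64(); the example value is made up.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same behaviour as QEMU's sextract64(): sign-extend a bitfield. */
    static int64_t sextract64_demo(uint64_t value, int start, int length)
    {
        return ((int64_t)(value << (64 - length - start))) >> (64 - length);
    }

    int main(void)
    {
        /* Hypothetical TLBI IPAS2E1 write: IPA[47:12] in value[35:0]. */
        uint64_t value = 0x0123456;
        uint64_t pageaddr = sextract64_demo(value << 12, 0, 56);

        printf("flush IPA page 0x%" PRIx64 "\n", pageaddr); /* 0x123456000 */
        return 0;
    }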
From: Claudio Fontana <cfontana@suse.de>

make it clearer from the name that this is a tcg-only function.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
  * trapped to the hypervisor in KVM.
  */
 #ifdef CONFIG_TCG
-static void handle_semihosting(CPUState *cs)
+static void tcg_handle_semihosting(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
      */
 #ifdef CONFIG_TCG
     if (cs->exception_index == EXCP_SEMIHOST) {
-        handle_semihosting(cs);
+        tcg_handle_semihosting(cs);
         return;
     }
 #endif
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Compare only the VMID field when considering whether we need to flush.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
      * A change in VMID to the stage2 page table (Stage2) invalidates
      * the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
      */
-    if (raw_read(env, ri) != value) {
+    if (extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
         tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
-        raw_write(env, ri, value);
     }
+    raw_write(env, ri, value);
 }

 static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
--
2.25.1
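The guard in the patch above is the usual changed-field idiom: XOR the old
and new register values, then extract only the field whose change matters —
here the 16-bit VMID at bit 48 of VTTBR_EL2. A self-contained sketch of the
same check (illustrative, not QEMU code; extract64_demo mirrors QEMU's
extract64):

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t extract64_demo(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    /*
     * True when two VTTBR values name different VMIDs; changes to the
     * translation-table base address alone do not require a TLB flush.
     */
    static bool vmid_changed(uint64_t old_vttbr, uint64_t new_vttbr)
    {
        return extract64_demo(old_vttbr ^ new_vttbr, 48, 16) != 0;
    }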
From: Cornelia Huck <cohuck@redhat.com>

Just use current_accel_name() directly.

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
     if (vms->secure && (kvm_enabled() || hvf_enabled())) {
         error_report("mach-virt: %s does not support providing "
                      "Security extensions (TrustZone) to the guest CPU",
-                     kvm_enabled() ? "KVM" : "HVF");
+                     current_accel_name());
         exit(1);
     }

     if (vms->virt && (kvm_enabled() || hvf_enabled())) {
         error_report("mach-virt: %s does not support providing "
                      "Virtualization extensions to the guest CPU",
-                     kvm_enabled() ? "KVM" : "HVF");
+                     current_accel_name());
         exit(1);
     }

     if (vms->mte && (kvm_enabled() || hvf_enabled())) {
         error_report("mach-virt: %s does not support providing "
                      "MTE to the guest CPU",
-                     kvm_enabled() ? "KVM" : "HVF");
+                     current_accel_name());
         exit(1);
     }

--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Consolidate most of the inputs and outputs of S1_ptw_translate
into a single structure. Plumb this through arm_ld*_ptw from
the controlling get_phys_addr_* routine.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 140 ++++++++++++++++++++++++++---------------------
 1 file changed, 79 insertions(+), 61 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 #include "idau.h"


-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool is_secure, bool s1_is_el0,
+typedef struct S1Translate {
+    ARMMMUIdx in_mmu_idx;
+    bool in_secure;
+    bool out_secure;
+    hwaddr out_phys;
+} S1Translate;
+
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
+                               uint64_t address,
+                               MMUAccessType access_type, bool s1_is_el0,
                                GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));

@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
 }

 /* Translate a S1 pagetable walk through S2 if needed. */
-static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
-                               hwaddr addr, bool *is_secure_ptr,
-                               ARMMMUFaultInfo *fi)
+static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
+                             hwaddr addr, ARMMMUFaultInfo *fi)
 {
-    bool is_secure = *is_secure_ptr;
+    bool is_secure = ptw->in_secure;
     ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;

-    if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
+    if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
         !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
         GetPhysAddrResult s2 = {};
+        S1Translate s2ptw = {
+            .in_mmu_idx = s2_mmu_idx,
+            .in_secure = is_secure,
+        };
         uint64_t hcr;
         int ret;

-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
-                                 is_secure, false, &s2, fi);
+        ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
+                                 false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
             fi->stage2 = true;
             fi->s1ptw = true;
             fi->s1ns = !is_secure;
-            return ~0;
+            return false;
         }

         hcr = arm_hcr_el2_eff_secstate(env, is_secure);
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->stage2 = true;
             fi->s1ptw = true;
             fi->s1ns = !is_secure;
-            return ~0;
+            return false;
         }

         if (arm_is_secure_below_el3(env)) {
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             } else {
                 is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
             }
-            *is_secure_ptr = is_secure;
         } else {
             assert(!is_secure);
         }

         addr = s2.f.phys_addr;
     }
-    return addr;
+
+    ptw->out_secure = is_secure;
+    ptw->out_phys = addr;
+    return true;
 }

 /* All loads done in the course of a page table walk go through here. */
-static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
-                            ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
+                            ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
     MemTxAttrs attrs = {};
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
     AddressSpace *as;
     uint32_t data;

-    addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
-    attrs.secure = is_secure;
-    as = arm_addressspace(cs, attrs);
-    if (fi->s1ptw) {
+    if (!S1_ptw_translate(env, ptw, addr, fi)) {
         return 0;
     }
-    if (regime_translation_big_endian(env, mmu_idx)) {
+    addr = ptw->out_phys;
+    attrs.secure = ptw->out_secure;
+    as = arm_addressspace(cs, attrs);
+    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
         data = address_space_ldl_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldl_le(as, addr, attrs, &result);
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
     return 0;
 }

-static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
-                            ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
+                            ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
     MemTxAttrs attrs = {};
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
     AddressSpace *as;
     uint64_t data;

-    addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
-    attrs.secure = is_secure;
-    as = arm_addressspace(cs, attrs);
-    if (fi->s1ptw) {
+    if (!S1_ptw_translate(env, ptw, addr, fi)) {
         return 0;
     }
-    if (regime_translation_big_endian(env, mmu_idx)) {
+    addr = ptw->out_phys;
+    attrs.secure = ptw->out_secure;
+    as = arm_addressspace(cs, attrs);
+    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
         data = address_space_ldq_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldq_le(as, addr, attrs, &result);
@@ -XXX,XX +XXX,XX @@ static int simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
     return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
 }

-static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             bool is_secure, GetPhysAddrResult *result,
-                             ARMMMUFaultInfo *fi)
+static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
+                             uint32_t address, MMUAccessType access_type,
+                             GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     int level = 1;
     uint32_t table;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,

     /* Pagetable walk. */
     /* Lookup l1 descriptor. */
-    if (!get_level1_table_address(env, mmu_idx, &table, address)) {
+    if (!get_level1_table_address(env, ptw->in_mmu_idx, &table, address)) {
         /* Section translation fault if page walk is disabled by PD0 or PD1 */
         fi->type = ARMFault_Translation;
         goto do_fault;
     }
-    desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+    desc = arm_ldl_ptw(env, ptw, table, fi);
     if (fi->type != ARMFault_None) {
         goto do_fault;
     }
     type = (desc & 3);
     domain = (desc >> 5) & 0x0f;
-    if (regime_el(env, mmu_idx) == 1) {
+    if (regime_el(env, ptw->in_mmu_idx) == 1) {
         dacr = env->cp15.dacr_ns;
     } else {
         dacr = env->cp15.dacr_s;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
         /* Fine pagetable. */
         table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
     }
-    desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+    desc = arm_ldl_ptw(env, ptw, table, fi);
     if (fi->type != ARMFault_None) {
         goto do_fault;
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
         g_assert_not_reached();
     }
     }
-    result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
+    result->f.prot = ap_to_rw_prot(env, ptw->in_mmu_idx, ap, domain_prot);
     result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
     if (!(result->f.prot & (1 << access_type))) {
         /* Access permission fault. */
@@ -XXX,XX +XXX,XX @@ do_fault:
     return true;
 }

-static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             bool is_secure, GetPhysAddrResult *result,
-                             ARMMMUFaultInfo *fi)
+static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
+                             uint32_t address, MMUAccessType access_type,
+                             GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     int level = 1;
     uint32_t table;
     uint32_t desc;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         fi->type = ARMFault_Translation;
         goto do_fault;
     }
-    desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+    desc = arm_ldl_ptw(env, ptw, table, fi);
     if (fi->type != ARMFault_None) {
         goto do_fault;
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
         ns = extract32(desc, 3, 1);
         /* Lookup l2 entry. */
         table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
-        desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
+        desc = arm_ldl_ptw(env, ptw, table, fi);
         if (fi->type != ARMFault_None) {
             goto do_fault;
         }
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
 * the WnR bit is never set (the caller must do this).
 *
 * @env: CPUARMState
+ * @ptw: Current and next stage parameters for the walk.
 * @address: virtual address to get physical address for
 * @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
- * @mmu_idx: MMU index indicating required translation regime
- * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page
- *             table walk), must be true if this is stage 2 of a stage 1+2
+ * @s1_is_el0: if @ptw->in_mmu_idx is ARMMMUIdx_Stage2
+ *             (so this is a stage 2 page table walk),
+ *             must be true if this is stage 2 of a stage 1+2
 *             walk for an EL0 access. If @mmu_idx is anything else,
 *             @s1_is_el0 is ignored.
 * @result: set on translation success,
 * @fi: set to fault info if the translation fails
 */
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool is_secure, bool s1_is_el0,
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
+                               uint64_t address,
+                               MMUAccessType access_type, bool s1_is_el0,
                                GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
+    bool is_secure = ptw->in_secure;
     /* Read an LPAE long-descriptor translation table. */
     ARMFaultType fault_type = ARMFault_Translation;
     uint32_t level;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
         descaddr |= (address >> (stride * (4 - level))) & indexmask;
         descaddr &= ~7ULL;
         nstable = extract32(tableattrs, 4, 1);
-        descriptor = arm_ldq_ptw(env, descaddr, !nstable, mmu_idx, fi);
+        ptw->in_secure = !nstable;
+        descriptor = arm_ldq_ptw(env, ptw, descaddr, fi);
         if (fi->type != ARMFault_None) {
             goto do_fault;
         }
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                                ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
+    S1Translate ptw;

     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             int ret;
             bool ipa_secure, s2walk_secure;
             ARMCacheAttrs cacheattrs1;
-            ARMMMUIdx s2_mmu_idx;
             bool is_el0;
             uint64_t hcr;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                 s2walk_secure = false;
             }

-            s2_mmu_idx = (s2walk_secure
-                          ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
+            ptw.in_mmu_idx =
+                s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+            ptw.in_secure = s2walk_secure;
             is_el0 = mmu_idx == ARMMMUIdx_E10_0;

             /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             cacheattrs1 = result->cacheattrs;
             memset(result, 0, sizeof(*result));

-            ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                     s2walk_secure, is_el0, result, fi);
+            ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
+                                     is_el0, result, fi);
             fi->s2addr = ipa;

             /* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
         return get_phys_addr_disabled(env, address, access_type, mmu_idx,
                                       is_secure, result, fi);
     }
+
+    ptw.in_mmu_idx = mmu_idx;
+    ptw.in_secure = is_secure;
+
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
-                                  is_secure, false, result, fi);
+        return get_phys_addr_lpae(env, &ptw, address, access_type, false,
+                                  result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
-        return get_phys_addr_v6(env, address, access_type, mmu_idx,
-                                is_secure, result, fi);
+        return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
     } else {
-        return get_phys_addr_v5(env, address, access_type, mmu_idx,
-                                is_secure, result, fi);
+        return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
     }
 }

--
2.25.1
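The value of the S1Translate refactoring above is easier to see in reduced
form: callers fill the in_* fields, the walker reports through the out_*
fields, and the bool return replaces the old convention of returning ~0 as a
fault marker. A minimal sketch of the pattern, with the walk itself stubbed
out (field names follow the patch; this is not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t hwaddr;

    typedef struct S1Translate {
        int    in_mmu_idx;   /* ARMMMUIdx in the real code */
        bool   in_secure;
        bool   out_secure;
        hwaddr out_phys;
    } S1Translate;

    /* Stand-in for S1_ptw_translate(): returns false on fault; on
     * success it reports the translation through the out_* fields. */
    static bool s1_ptw_translate_stub(S1Translate *ptw, hwaddr addr)
    {
        ptw->out_phys = addr;             /* identity map for the sketch */
        ptw->out_secure = ptw->in_secure;
        return true;
    }

    static uint32_t arm_ldl_ptw_sketch(S1Translate *ptw, hwaddr table)
    {
        if (!s1_ptw_translate_stub(ptw, table)) {
            return 0;  /* fault info was filled in by the callee */
        }
        /* ...load the descriptor from ptw->out_phys in the
         * ptw->out_secure address space... */
        return 0;
    }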
From: Mostafa Saleh <smostafa@google.com>

GBPA register can be used to globally abort all
transactions.

It is described in the SMMU manual in "6.3.14 SMMU_GBPA".
ABORT reset value is IMPLEMENTATION DEFINED, it is chosen to
be zero (Do not abort incoming transactions).

Other fields have default values of Use Incoming.

If UPDATE is not set, the write is ignored. This is the only permitted
behavior in SMMUv3.2 and later. (6.3.14.1 Update procedure)

As this patch adds a new state to the SMMU (GBPA), it is added
in a new subsection for forward migration compatibility.
GBPA is only migrated if its value is different from the reset value.
It does this to be backward migration compatible if SW didn't write
the register.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230214094009.2445653-1-smostafa@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h |  7 +++++++
 include/hw/arm/smmuv3.h  |  1 +
 hw/arm/smmuv3.c          | 43 +++++++++++++++++++++++++++++++++++++++-
 3 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ REG32(CR0ACK, 0x24)
 REG32(CR1, 0x28)
 REG32(CR2, 0x2c)
 REG32(STATUSR, 0x40)
+REG32(GBPA, 0x44)
+    FIELD(GBPA, ABORT, 20, 1)
+    FIELD(GBPA, UPDATE, 31, 1)
+
+/* Use incoming. */
+#define SMMU_GBPA_RESET_VAL 0x1000
+
 REG32(IRQ_CTRL, 0x50)
     FIELD(IRQ_CTRL, GERROR_IRQEN, 0, 1)
     FIELD(IRQ_CTRL, PRI_IRQEN, 1, 1)
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmuv3.h
+++ b/include/hw/arm/smmuv3.h
@@ -XXX,XX +XXX,XX @@ struct SMMUv3State {
     uint32_t cr[3];
     uint32_t cr0ack;
     uint32_t statusr;
+    uint32_t gbpa;
     uint32_t irq_ctrl;
     uint32_t gerror;
     uint32_t gerrorn;
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
     s->gerror = 0;
     s->gerrorn = 0;
     s->statusr = 0;
+    s->gbpa = SMMU_GBPA_RESET_VAL;
 }

 static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
     qemu_mutex_lock(&s->mutex);

     if (!smmu_enabled(s)) {
-        status = SMMU_TRANS_DISABLE;
+        if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
+            status = SMMU_TRANS_ABORT;
+        } else {
+            status = SMMU_TRANS_DISABLE;
+        }
         goto epilogue;
     }

@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
     case A_GERROR_IRQ_CFG2:
         s->gerror_irq_cfg2 = data;
         return MEMTX_OK;
+    case A_GBPA:
+        /*
+         * If UPDATE is not set, the write is ignored. This is the only
+         * permitted behavior in SMMUv3.2 and later.
+         */
+        if (data & R_GBPA_UPDATE_MASK) {
+            /* Ignore update bit as write is synchronous. */
+            s->gbpa = data & ~R_GBPA_UPDATE_MASK;
+        }
+        return MEMTX_OK;
     case A_STRTAB_BASE: /* 64b */
         s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
         return MEMTX_OK;
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
     case A_STATUSR:
         *data = s->statusr;
         return MEMTX_OK;
+    case A_GBPA:
+        *data = s->gbpa;
+        return MEMTX_OK;
     case A_IRQ_CTRL:
     case A_IRQ_CTRL_ACK:
         *data = s->irq_ctrl;
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3_queue = {
     },
 };

+static bool smmuv3_gbpa_needed(void *opaque)
+{
+    SMMUv3State *s = opaque;
+
+    /* Only migrate GBPA if it has different reset value. */
+    return s->gbpa != SMMU_GBPA_RESET_VAL;
+}
+
+static const VMStateDescription vmstate_gbpa = {
+    .name = "smmuv3/gbpa",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = smmuv3_gbpa_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(gbpa, SMMUv3State),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription vmstate_smmuv3 = {
     .name = "smmuv3",
     .version_id = 1,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3 = {

         VMSTATE_END_OF_LIST(),
     },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_gbpa,
+        NULL
+    }
 };

 static void smmuv3_instance_init(Object *obj)
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Before using softmmu page tables for the ptw, plumb down
a debug parameter so that we can query page table entries
from gdbstub without modifying cpu state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 55 ++++++++++++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 typedef struct S1Translate {
     ARMMMUIdx in_mmu_idx;
     bool in_secure;
+    bool in_debug;
     bool out_secure;
     hwaddr out_phys;
 } S1Translate;
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         S1Translate s2ptw = {
             .in_mmu_idx = s2_mmu_idx,
             .in_secure = is_secure,
+            .in_debug = ptw->in_debug,
         };
         uint64_t hcr;
         int ret;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
     return 0;
 }

-bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool is_secure, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
+                                      target_ulong address,
+                                      MMUAccessType access_type,
+                                      GetPhysAddrResult *result,
+                                      ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-    S1Translate ptw;
+    bool is_secure = ptw->in_secure;

     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             bool is_el0;
             uint64_t hcr;

-            ret = get_phys_addr_with_secure(env, address, access_type,
-                                            s1_mmu_idx, is_secure, result, fi);
+            ptw->in_mmu_idx = s1_mmu_idx;
+            ret = get_phys_addr_with_struct(env, ptw, address, access_type,
+                                            result, fi);

             /* If S1 fails or S2 is disabled, return early. */
             if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                 s2walk_secure = false;
             }

-            ptw.in_mmu_idx =
+            ptw->in_mmu_idx =
                 s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
-            ptw.in_secure = s2walk_secure;
+            ptw->in_secure = s2walk_secure;
             is_el0 = mmu_idx == ARMMMUIdx_E10_0;

             /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
             cacheattrs1 = result->cacheattrs;
             memset(result, 0, sizeof(*result));

-            ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
+            ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
                                      is_el0, result, fi);
             fi->s2addr = ipa;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
                                       is_secure, result, fi);
     }

-    ptw.in_mmu_idx = mmu_idx;
-    ptw.in_secure = is_secure;
-
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, &ptw, address, access_type, false,
+        return get_phys_addr_lpae(env, ptw, address, access_type, false,
                                   result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
-        return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
+        return get_phys_addr_v6(env, ptw, address, access_type, result, fi);
     } else {
-        return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
+        return get_phys_addr_v5(env, ptw, address, access_type, result, fi);
     }
 }

+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool is_secure, GetPhysAddrResult *result,
+                               ARMMMUFaultInfo *fi)
+{
+    S1Translate ptw = {
+        .in_mmu_idx = mmu_idx,
+        .in_secure = is_secure,
+    };
+    return get_phys_addr_with_struct(env, &ptw, address, access_type,
+                                     result, fi);
+}
+
 bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
 {
     ARMCPU *cpu = ARM_CPU(cs);
     CPUARMState *env = &cpu->env;
+    S1Translate ptw = {
+        .in_mmu_idx = arm_mmu_idx(env),
+        .in_secure = arm_is_secure(env),
+        .in_debug = true,
+    };
     GetPhysAddrResult res = {};
     ARMMMUFaultInfo fi = {};
-    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
     bool ret;

-    ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
+    ret = get_phys_addr_with_struct(env, &ptw, addr, MMU_DATA_LOAD, &res, &fi);
     *attrs = res.f.attrs;

     if (ret) {
--
2.25.1
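The point of in_debug is visible at the gdbstub entry point in the patch
above: a debug query builds its own S1Translate with in_debug = true so the
walk can read page tables without priming the softmmu TLB. A hypothetical
condensation of arm_cpu_get_phys_page_attrs_debug() showing just that call
shape (a sketch using the names from the patch, not a real QEMU function):

    /* Sketch only: how a debugger-originated lookup differs from a
     * TLB-fill-originated one.  Assumes the S1Translate type and
     * get_phys_addr_with_struct() introduced above. */
    static hwaddr debug_translate(CPUARMState *env, vaddr addr)
    {
        S1Translate ptw = {
            .in_mmu_idx = arm_mmu_idx(env),
            .in_secure  = arm_is_secure(env),
            .in_debug   = true,   /* never modify softmmu TLB contents */
        };
        GetPhysAddrResult res = {};
        ARMMMUFaultInfo fi = {};

        if (get_phys_addr_with_struct(env, &ptw, addr, MMU_DATA_LOAD,
                                      &res, &fi)) {
            return -1;            /* walk faulted; nothing to report */
        }
        return res.f.phys_addr;
    }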
From: Fabiano Rosas <farosas@suse.de>

These tests set -accel tcg, so restrict them to when TCG is present.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/meson.build | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
 # TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional
 qtests_aarch64 = \
   (cpu != 'arm' and unpack_edk2_blobs ? ['bios-tables-test'] : []) + \
-  (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
-  (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
+  (config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
+    ['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
   (config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
   (config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
   ['arm-cpu-features',
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
     bool in_secure;
     bool in_debug;
     bool out_secure;
+    bool out_be;
     hwaddr out_phys;
 } S1Translate;

@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,

     ptw->out_secure = is_secure;
     ptw->out_phys = addr;
+    ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
     addr = ptw->out_phys;
     attrs.secure = ptw->out_secure;
     as = arm_addressspace(cs, attrs);
-    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
+    if (ptw->out_be) {
         data = address_space_ldl_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldl_le(as, addr, attrs, &result);
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
     addr = ptw->out_phys;
     attrs.secure = ptw->out_secure;
     as = arm_addressspace(cs, attrs);
-    if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
+    if (ptw->out_be) {
         data = address_space_ldq_be(as, addr, attrs, &result);
     } else {
         data = address_space_ldq_le(as, addr, attrs, &result);
--
2.25.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Although the 'eabi' field is only used in user emulation where
CPU reset doesn't occur, it doesn't belong to the area to reset.
Move it after the 'end_reset_fields' for consistency.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-7-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
     ARMVectorReg zarray[ARM_MAX_VQ * 16];
 #endif

-#if defined(CONFIG_USER_ONLY)
-    /* For usermode syscall translation. */
-    bool eabi;
-#endif
-
     struct CPUBreakpoint *cpu_breakpoint[16];
     struct CPUWatchpoint *cpu_watchpoint[16];

@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
     const struct arm_boot_info *boot_info;
     /* Store GICv3CPUState to access from this struct */
     void *gicv3state;
     /* Fields up to this point are cleared by a CPU reset */
     struct {} end_reset_fields;
+#if defined(CONFIG_USER_ONLY)
+    /* For usermode syscall translation. */
+    bool eabi;
+#endif /* CONFIG_USER_ONLY */

 #ifdef TARGET_TAGGED_ADDRESSES
     /* Linux syscall tagged address support */
--
2.34.1
From: Richard Henderson <richard.henderson@linaro.org>

So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
arm_ldq_ptw. Use probe_access_full to find the host address,
and if so use a host load. If the probe fails, we've got our
fault info already. On the off chance that page tables are not
in RAM, continue to use the address_space_ld* functions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h        |   5 +
 target/arm/ptw.c        | 196 +++++++++++++++++++++++++---------------
 target/arm/tlb_helper.c |  17 +++-
 3 files changed, 144 insertions(+), 74 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
     target_ulong flags2;
 } CPUARMTBFlags;

+typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
+
 typedef struct CPUArchState {
     /* Regs for current mode. */
     uint32_t regs[16];
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
     struct CPUBreakpoint *cpu_breakpoint[16];
     struct CPUWatchpoint *cpu_watchpoint[16];

+    /* Optional fault info across tlb lookup. */
+    ARMMMUFaultInfo *tlb_fi;
+
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "qemu/log.h"
 #include "qemu/range.h"
+#include "exec/exec-all.h"
 #include "cpu.h"
 #include "internals.h"
 #include "idau.h"
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
     bool out_secure;
     bool out_be;
     hwaddr out_phys;
+    void *out_host;
 } S1Translate;

 static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
 }

-static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
+static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
 {
     /*
      * For an S1 page table walk, the stage 1 attributes are always
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
      * With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
      * when cacheattrs.attrs bit [2] is 0.
      */
-    assert(cacheattrs.is_s2_format);
     if (hcr & HCR_FWB) {
-        return (cacheattrs.attrs & 0x4) == 0;
+        return (attrs & 0x4) == 0;
     } else {
-        return (cacheattrs.attrs & 0xc) == 0;
+        return (attrs & 0xc) == 0;
     }
 }

@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
                              hwaddr addr, ARMMMUFaultInfo *fi)
 {
     bool is_secure = ptw->in_secure;
+    ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+    bool s2_phys = false;
+    uint8_t pte_attrs;
+    bool pte_secure;

-    if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
-        GetPhysAddrResult s2 = {};
-        S1Translate s2ptw = {
-            .in_mmu_idx = s2_mmu_idx,
-            .in_secure = is_secure,
-            .in_debug = ptw->in_debug,
-        };
-        uint64_t hcr;
-        int ret;
+    if (!arm_mmu_idx_is_stage1_of_2(mmu_idx)
+        || regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
+        s2_mmu_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
+        s2_phys = true;
+    }

-        ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
-                                 false, &s2, fi);
-        if (ret) {
-            assert(fi->type != ARMFault_None);
-            fi->s2addr = addr;
-            fi->stage2 = true;
-            fi->s1ptw = true;
-            fi->s1ns = !is_secure;
-            return false;
+    if (unlikely(ptw->in_debug)) {
+        /*
+         * From gdbstub, do not use softmmu so that we don't modify the
+         * state of the cpu at all, including softmmu tlb contents.
+         */
+        if (s2_phys) {
+            ptw->out_phys = addr;
+            pte_attrs = 0;
+            pte_secure = is_secure;
+        } else {
+            S1Translate s2ptw = {
+                .in_mmu_idx = s2_mmu_idx,
+                .in_secure = is_secure,
+                .in_debug = true,
+            };
+            GetPhysAddrResult s2 = { };
+            if (!get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
+                                    false, &s2, fi)) {
+                goto fail;
+            }
+            ptw->out_phys = s2.f.phys_addr;
+            pte_attrs = s2.cacheattrs.attrs;
+            pte_secure = s2.f.attrs.secure;
         }
+        ptw->out_host = NULL;
+    } else {
+        CPUTLBEntryFull *full;
+        int flags;

-        hcr = arm_hcr_el2_eff_secstate(env, is_secure);
-        if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
+        env->tlb_fi = fi;
+        flags = probe_access_full(env, addr, MMU_DATA_LOAD,
+                                  arm_to_core_mmu_idx(s2_mmu_idx),
+                                  true, &ptw->out_host, &full, 0);
+        env->tlb_fi = NULL;
+
+        if (unlikely(flags & TLB_INVALID_MASK)) {
+            goto fail;
+        }
+        ptw->out_phys = full->phys_addr;
+        pte_attrs = full->pte_attrs;
+        pte_secure = full->attrs.secure;
+    }
+
+    if (!s2_phys) {
+        uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+
+        if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
             /*
              * PTW set and S1 walk touched S2 Device memory:
              * generate Permission fault.
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
             fi->s1ns = !is_secure;
             return false;
         }
-
-        if (arm_is_secure_below_el3(env)) {
-            /* Check if page table walk is to secure or non-secure PA space. */
-            if (is_secure) {
-                is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
-            } else {
-                is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
-            }
-        } else {
-            assert(!is_secure);
-        }
-
-        addr = s2.f.phys_addr;
     }

-    ptw->out_secure = is_secure;
-    ptw->out_phys = addr;
-    ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
+    /* Check if page table walk is to secure or non-secure PA space. */
+    ptw->out_secure = (is_secure
+                       && !(pte_secure
+                            ? env->cp15.vstcr_el2 & VSTCR_SW
+                            : env->cp15.vtcr_el2 & VTCR_NSW));
+    ptw->out_be = regime_translation_big_endian(env, mmu_idx);
     return true;
+
+ fail:
+    assert(fi->type != ARMFault_None);
+    fi->s2addr = addr;
+    fi->stage2 = true;
+    fi->s1ptw = true;
+    fi->s1ns = !is_secure;
+    return false;
 }

 /* All loads done in the course of a page table walk go through here. */
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
                             ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
-    MemTxAttrs attrs = {};
-    MemTxResult result = MEMTX_OK;
-    AddressSpace *as;
     uint32_t data;

     if (!S1_ptw_translate(env, ptw, addr, fi)) {
+        /* Failure. */
+        assert(fi->s1ptw);
         return 0;
     }
-    addr = ptw->out_phys;
-    attrs.secure = ptw->out_secure;
-    as = arm_addressspace(cs, attrs);
-    if (ptw->out_be) {
-        data = address_space_ldl_be(as, addr, attrs, &result);
+
+    if (likely(ptw->out_host)) {
+        /* Page tables are in RAM, and we have the host address. */
+        if (ptw->out_be) {
+            data = ldl_be_p(ptw->out_host);
+        } else {
+            data = ldl_le_p(ptw->out_host);
+        }
     } else {
-        data = address_space_ldl_le(as, addr, attrs, &result);
+        /* Page tables are in MMIO. */
+        MemTxAttrs attrs = { .secure = ptw->out_secure };
+        AddressSpace *as = arm_addressspace(cs, attrs);
+        MemTxResult result = MEMTX_OK;
+
+        if (ptw->out_be) {
+            data = address_space_ldl_be(as, ptw->out_phys, attrs, &result);
+        } else {
+            data = address_space_ldl_le(as, ptw->out_phys, attrs, &result);
+        }
+        if (unlikely(result != MEMTX_OK)) {
+            fi->type = ARMFault_SyncExternalOnWalk;
+            fi->ea = arm_extabort_type(result);
+            return 0;
+        }
     }
-    if (result == MEMTX_OK) {
-        return data;
-    }
-    fi->type = ARMFault_SyncExternalOnWalk;
-    fi->ea = arm_extabort_type(result);
-    return 0;
+    return data;
 }

 static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
                             ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
-    MemTxAttrs attrs = {};
-    MemTxResult result = MEMTX_OK;
-    AddressSpace *as;
     uint64_t data;

     if (!S1_ptw_translate(env, ptw, addr, fi)) {
+        /* Failure. */
+        assert(fi->s1ptw);
         return 0;
     }
-    addr = ptw->out_phys;
-    attrs.secure = ptw->out_secure;
-    as = arm_addressspace(cs, attrs);
-    if (ptw->out_be) {
-        data = address_space_ldq_be(as, addr, attrs, &result);
+
+    if (likely(ptw->out_host)) {
+        /* Page tables are in RAM, and we have the host address. */
+        if (ptw->out_be) {
+            data = ldq_be_p(ptw->out_host);
+        } else {
+            data = ldq_le_p(ptw->out_host);
+        }
     } else {
-        data = address_space_ldq_le(as, addr, attrs, &result);
+        /* Page tables are in MMIO. */
+        MemTxAttrs attrs = { .secure = ptw->out_secure };
+        AddressSpace *as = arm_addressspace(cs, attrs);
+        MemTxResult result = MEMTX_OK;
+
+        if (ptw->out_be) {
+            data = address_space_ldq_be(as, ptw->out_phys, attrs, &result);
+        } else {
+            data = address_space_ldq_le(as, ptw->out_phys, attrs, &result);
+        }
+        if (unlikely(result != MEMTX_OK)) {
+            fi->type = ARMFault_SyncExternalOnWalk;
+            fi->ea = arm_extabort_type(result);
+            return 0;
+        }
     }
-    if (result == MEMTX_OK) {
-        return data;
-    }
-    fi->type = ARMFault_SyncExternalOnWalk;
-    fi->ea = arm_extabort_type(result);
-    return 0;
+    return data;
 }

 static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                       bool probe, uintptr_t retaddr)
 {
     ARMCPU *cpu = ARM_CPU(cs);
-    ARMMMUFaultInfo fi = {};
     GetPhysAddrResult res = {};
+    ARMMMUFaultInfo local_fi, *fi;
     int ret;

+    /*
+     * Allow S1_ptw_translate to see any fault generated here.
+     * Since this may recurse, read and clear.
+     */
+    fi = cpu->env.tlb_fi;
+    if (fi) {
+        cpu->env.tlb_fi = NULL;
+    } else {
+        fi = memset(&local_fi, 0, sizeof(local_fi));
+    }
+
     /*
      * Walk the page table and (if the mapping exists) add the page
      * to the TLB. On success, return true. Otherwise, if probing,
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
      */
     ret = get_phys_addr(&cpu->env, address, access_type,
                         core_to_arm_mmu_idx(&cpu->env, mmu_idx),
-                        &res, &fi);
+                        &res, fi);
     if (likely(!ret)) {
         /*
          * Map a single [sub]page. Regions smaller than our declared
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     } else {
         /* now we have a real cpu fault */
         cpu_restore_state(cs, retaddr, true);
-        arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
+        arm_deliver_fault(cpu, address, access_type, mmu_idx, fi);
     }
 }
 #else
--
2.25.1
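The fast path introduced in the patch above is a common softmmu pattern: one
probe fills the TLB (or reports the fault), and subsequent descriptor reads
become plain host loads. Reduced to a skeleton, with the MMIO fallback
elided (QEMU API names as used in the patch; this condensed helper itself is
hypothetical):

    /* Sketch of the two-level load strategy from the patch above. */
    static uint64_t load_pte_sketch(CPUARMState *env, S1Translate *ptw,
                                    hwaddr addr, ARMMMUFaultInfo *fi)
    {
        if (!S1_ptw_translate(env, ptw, addr, fi)) {
            return 0;                 /* fault info already filled in */
        }
        if (ptw->out_host) {
            /* RAM-backed page table: dereference the host pointer that
             * probe_access_full() returned, honouring walk endianness. */
            return ptw->out_be ? ldq_be_p(ptw->out_host)
                               : ldq_le_p(ptw->out_host);
        }
        /* MMIO-backed page table: fall back to the slow
         * address_space_ldq_*() path, as in arm_ldq_ptw(). */
        return 0;
    }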
1
From: Fabiano Rosas <farosas@suse.de>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move this earlier to make the next patch diff cleaner. While here
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
update the comment slightly to not give the impression that the
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
misalignment affects only TCG.
5
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
6
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
7
---
13
target/arm/machine.c | 18 +++++++++---------
8
target/arm/ptw.c | 191 +++++++++++++++++++++++++----------------------
14
1 file changed, 9 insertions(+), 9 deletions(-)
9
1 file changed, 100 insertions(+), 91 deletions(-)
15
10
16
diff --git a/target/arm/machine.c b/target/arm/machine.c
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/machine.c
13
--- a/target/arm/ptw.c
19
+++ b/target/arm/machine.c
14
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
15
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
16
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
17
__attribute__((nonnull));
18
19
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
20
+ target_ulong address,
21
+ MMUAccessType access_type,
22
+ GetPhysAddrResult *result,
23
+ ARMMMUFaultInfo *fi)
24
+ __attribute__((nonnull));
25
+
26
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
27
static const uint8_t pamax_map[] = {
28
[0] = 32,
29
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
30
return 0;
31
}
32
33
+static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
34
+ target_ulong address,
35
+ MMUAccessType access_type,
36
+ GetPhysAddrResult *result,
37
+ ARMMMUFaultInfo *fi)
38
+{
39
+ hwaddr ipa;
40
+ int s1_prot;
41
+ int ret;
42
+ bool is_secure = ptw->in_secure;
43
+ bool ipa_secure, s2walk_secure;
44
+ ARMCacheAttrs cacheattrs1;
45
+ bool is_el0;
46
+ uint64_t hcr;
47
+
48
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type, result, fi);
49
+
50
+ /* If S1 fails or S2 is disabled, return early. */
51
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
52
+ return ret;
53
+ }
54
+
55
+ ipa = result->f.phys_addr;
56
+ ipa_secure = result->f.attrs.secure;
57
+ if (is_secure) {
58
+ /* Select TCR based on the NS bit from the S1 walk. */
59
+ s2walk_secure = !(ipa_secure
60
+ ? env->cp15.vstcr_el2 & VSTCR_SW
61
+ : env->cp15.vtcr_el2 & VTCR_NSW);
62
+ } else {
63
+ assert(!ipa_secure);
64
+ s2walk_secure = false;
65
+ }
66
+
67
+ is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
68
+ ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
69
+ ptw->in_secure = s2walk_secure;
70
+
71
+ /*
72
+ * S1 is done, now do S2 translation.
73
+ * Save the stage1 results so that we may merge prot and cacheattrs later.
74
+ */
75
+ s1_prot = result->f.prot;
76
+ cacheattrs1 = result->cacheattrs;
77
+ memset(result, 0, sizeof(*result));
78
+
79
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
80
+ fi->s2addr = ipa;
81
+
82
+ /* Combine the S1 and S2 perms. */
83
+ result->f.prot &= s1_prot;
84
+
85
+ /* If S2 fails, return early. */
86
+ if (ret) {
87
+ return ret;
88
+ }
89
+
90
+ /* Combine the S1 and S2 cache attributes. */
91
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
92
+ if (hcr & HCR_DC) {
93
+ /*
94
+ * HCR.DC forces the first stage attributes to
95
+ * Normal Non-Shareable,
96
+ * Inner Write-Back Read-Allocate Write-Allocate,
97
+ * Outer Write-Back Read-Allocate Write-Allocate.
98
+ * Do not overwrite Tagged within attrs.
99
+ */
100
+ if (cacheattrs1.attrs != 0xf0) {
101
+ cacheattrs1.attrs = 0xff;
102
+ }
103
+ cacheattrs1.shareability = 0;
104
+ }
105
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
106
+ result->cacheattrs);
107
+
108
+ /*
109
+ * Check if IPA translates to secure or non-secure PA space.
110
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
111
+ */
112
+ result->f.attrs.secure =
113
+ (is_secure
114
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
115
+ && (ipa_secure
116
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
117
+
118
+ return 0;
119
+}
120
+
        }
    }

+    /*
+     * Misaligned thumb pc is architecturally impossible. Fail the
+     * incoming migration. For TCG it would trigger the assert in
+     * thumb_tr_translate_insn().
+     */
+    if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
+        return -1;
+    }
+
     hw_breakpoint_update_all(cpu);
     hw_watchpoint_update_all(cpu);

@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
         }
     }

-    /*
-     * Misaligned thumb pc is architecturally impossible.
-     * We have an assert in thumb_tr_translate_insn to verify this.
-     * Fail an incoming migrate to avoid this assert.
-     */
-    if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
-        return -1;
-    }
-
     if (!kvm_enabled()) {
         pmu_op_finish(&cpu->env);
     }
--
2.34.1

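A minimal standalone sketch of the invariant this patch moves earlier in
cpu_post_load() (the parameters stand in for the CPUARMState fields; this
is not the QEMU code itself):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * A 32-bit Thumb-state PC must be halfword-aligned, so an incoming
     * migration stream that carries an odd PC while in Thumb state is
     * rejected up front, before any further post-load fixup runs.
     */
    static bool incoming_pc_valid(bool is_a64, bool thumb, uint32_t pc)
    {
        return is_a64 || !thumb || (pc & 1) == 0;
    }

Returning -1 from cpu_post_load() is what fails the migration; doing the
check before hw_breakpoint_update_all() and hw_watchpoint_update_all()
means those are not run for a stream that is going to be rejected anyway.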
From: Hao Wu <wuhaotsh@google.com>

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Titus Rwantare <titusr@google.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230208235433.3989937-4-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst |  2 +-
 include/hw/arm/npcm7xx.h    |  2 ++
 hw/arm/npcm7xx.c            | 25 +++++++++++++++++++++++--
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Supported devices
  * SMBus controller (SMBF)
  * Ethernet controller (EMC)
  * Tachometer
+ * Peripheral SPI controller (PSPI)

 Missing devices
 ---------------
@@ -XXX,XX +XXX,XX @@ Missing devices

  * Ethernet controller (GMAC)
  * USB device (USBD)
- * Peripheral SPI controller (PSPI)
  * SD/MMC host
  * PECI interface
  * PCI and PCIe root complex and bridges
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/npcm7xx.h
+++ b/include/hw/arm/npcm7xx.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/nvram/npcm7xx_otp.h"
 #include "hw/timer/npcm7xx_timer.h"
 #include "hw/ssi/npcm7xx_fiu.h"
+#include "hw/ssi/npcm_pspi.h"
 #include "hw/usb/hcd-ehci.h"
 #include "hw/usb/hcd-ohci.h"
 #include "target/arm/cpu.h"
@@ -XXX,XX +XXX,XX @@ struct NPCM7xxState {
     NPCM7xxFIUState fiu[2];
     NPCM7xxEMCState emc[2];
     NPCM7xxSDHCIState mmc;
+    NPCMPSPIState pspi[2];
 };

 #define TYPE_NPCM7XX "npcm7xx"
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
     NPCM7XX_EMC1RX_IRQ          = 15,
     NPCM7XX_EMC1TX_IRQ,
     NPCM7XX_MMC_IRQ             = 26,
+    NPCM7XX_PSPI2_IRQ           = 28,
+    NPCM7XX_PSPI1_IRQ           = 31,
     NPCM7XX_TIMER0_IRQ          = 32, /* Timer Module 0 */
     NPCM7XX_TIMER1_IRQ,
     NPCM7XX_TIMER2_IRQ,
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_emc_addr[] = {
     0xf0826000,
 };

+/* Register base address for each PSPI Module */
+static const hwaddr npcm7xx_pspi_addr[] = {
+    0xf0200000,
+    0xf0201000,
+};
+
 static const struct {
     hwaddr regs_addr;
     uint32_t unconnected_pins;
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
         object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
     }

+    for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
+        object_initialize_child(obj, "pspi[*]", &s->pspi[i], TYPE_NPCM_PSPI);
+    }
+
     object_initialize_child(obj, "mmc", &s->mmc, TYPE_NPCM7XX_SDHCI);
 }

@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
     sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc), 0,
             npcm7xx_irq(s, NPCM7XX_MMC_IRQ));

+    /* PSPI */
+    QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_pspi_addr) != ARRAY_SIZE(s->pspi));
+    for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
+        SysBusDevice *sbd = SYS_BUS_DEVICE(&s->pspi[i]);
+        int irq = (i == 0) ? NPCM7XX_PSPI1_IRQ : NPCM7XX_PSPI2_IRQ;
+
+        sysbus_realize(sbd, &error_abort);
+        sysbus_mmio_map(sbd, 0, npcm7xx_pspi_addr[i]);
+        sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, irq));
+    }
+
     create_unimplemented_device("npcm7xx.shm",          0xc0001000,   4 * KiB);
     create_unimplemented_device("npcm7xx.vdmx",         0xe0800000,   4 * KiB);
     create_unimplemented_device("npcm7xx.pcierc",       0xe1000000,  64 * KiB);
     create_unimplemented_device("npcm7xx.peci",         0xf0100000,   4 * KiB);
     create_unimplemented_device("npcm7xx.siox[1]",      0xf0101000,   4 * KiB);
     create_unimplemented_device("npcm7xx.siox[2]",      0xf0102000,   4 * KiB);
-    create_unimplemented_device("npcm7xx.pspi1",        0xf0200000,   4 * KiB);
-    create_unimplemented_device("npcm7xx.pspi2",        0xf0201000,   4 * KiB);
     create_unimplemented_device("npcm7xx.ahbpci",       0xf0400000,   1 * MiB);
     create_unimplemented_device("npcm7xx.mcphy",        0xf05f0000,  64 * KiB);
     create_unimplemented_device("npcm7xx.gmac1",        0xf0802000,   8 * KiB);
--
2.34.1
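As a rough self-contained sketch of the wiring pattern used in
npcm7xx_realize() above (per-instance base addresses in a table kept in
sync with the device array by a compile-time check), with illustrative
names rather than the QEMU API:

    #include <stdint.h>

    typedef struct { uint64_t base; int irq; } PspiWiring;

    static const uint64_t pspi_base[] = { 0xf0200000, 0xf0201000 };
    static PspiWiring pspi[2];

    #define ARRAY_LEN(a) (sizeof(a) / sizeof((a)[0]))

    static void wire_pspi(void)
    {
        /* block-scope equivalent of QEMU_BUILD_BUG_ON */
        _Static_assert(ARRAY_LEN(pspi_base) == ARRAY_LEN(pspi),
                       "address table must match device array");

        for (unsigned i = 0; i < ARRAY_LEN(pspi); i++) {
            pspi[i].base = pspi_base[i];
            /* PSPI1 takes IRQ 31 and PSPI2 IRQ 28, as in the enum above */
            pspi[i].irq = (i == 0) ? 31 : 28;
        }
    }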
From: Philippe Mathieu-Daudé <philmd@linaro.org>

arm_v7m_mmu_idx_all() and arm_v7m_mmu_idx_for_secstate_and_priv()
are only used for system emulation in m_helper.c.
Move the definitions to avoid prototype forward declarations.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-4-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 14 --------
 target/arm/m_helper.c  | 74 +++++++++++++++++++++---------------------
 2 files changed, 37 insertions(+), 51 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)

 int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);

-/*
- * Return the MMU index for a v7M CPU with all relevant information
- * manually specified.
- */
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
-                              bool secstate, bool priv, bool negpri);
-
-/*
- * Return the MMU index for a v7M CPU in the specified security and
- * privilege state.
- */
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
-                                                bool secstate, bool priv);
-
 /* Return the MMU index for a v7M CPU in the specified security state */
 ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)

 #else /* !CONFIG_USER_ONLY */

+static ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
+                                     bool secstate, bool priv, bool negpri)
+{
+    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
+
+    if (priv) {
+        mmu_idx |= ARM_MMU_IDX_M_PRIV;
+    }
+
+    if (negpri) {
+        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
+    }
+
+    if (secstate) {
+        mmu_idx |= ARM_MMU_IDX_M_S;
+    }
+
+    return mmu_idx;
+}
+
+static ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
+                                                       bool secstate, bool priv)
+{
+    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
+
+    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
+}
+
+/* Return the MMU index for a v7M CPU in the specified security state */
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
+{
+    bool priv = arm_v7m_is_handler_mode(env) ||
+        !(env->v7m.control[secstate] & 1);
+
+    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
+}
+
 /*
  * What kind of stack write are we doing? This affects how exceptions
  * generated during the stacking are treated.
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return tt_resp;
 }

-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
-                              bool secstate, bool priv, bool negpri)
-{
-    ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
-
-    if (priv) {
-        mmu_idx |= ARM_MMU_IDX_M_PRIV;
-    }
-
-    if (negpri) {
-        mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
-    }
-
-    if (secstate) {
-        mmu_idx |= ARM_MMU_IDX_M_S;
-    }
-
-    return mmu_idx;
-}
-
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
-                                                bool secstate, bool priv)
-{
-    bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
-
-    return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
-}
-
-/* Return the MMU index for a v7M CPU in the specified security state */
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
-{
-    bool priv = arm_v7m_is_handler_mode(env) ||
-        !(env->v7m.control[secstate] & 1);
-
-    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
-}
-
 #endif /* !CONFIG_USER_ONLY */
--
2.34.1

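The functions become static and are defined before their first use, so the
header prototypes can go away. For reference, the shape of the index
composition they implement, as a self-contained sketch (the flag values are
placeholders, not QEMU's ARM_MMU_IDX_M_* definitions):

    /* illustrative flag values only */
    enum {
        MMU_IDX_M        = 0x40,
        MMU_IDX_M_PRIV   = 0x01,
        MMU_IDX_M_NEGPRI = 0x02,
        MMU_IDX_M_S      = 0x04,
    };

    /* Compose a v7M MMU index from independent state bits. */
    static int v7m_mmu_idx(int priv, int negpri, int secstate)
    {
        int mmu_idx = MMU_IDX_M;

        if (priv) {
            mmu_idx |= MMU_IDX_M_PRIV;
        }
        if (negpri) {
            mmu_idx |= MMU_IDX_M_NEGPRI;
        }
        if (secstate) {
            mmu_idx |= MMU_IDX_M_S;
        }
        return mmu_idx;
    }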
From: Fabiano Rosas <farosas@suse.de>

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/arm-cpu-features.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/arm-cpu-features.c
+++ b/tests/qtest/arm-cpu-features.c
@@ -XXX,XX +XXX,XX @@
 #define SVE_MAX_VQ 16

 #define MACHINE     "-machine virt,gic-version=max -accel tcg "
-#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm -accel tcg "
+#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
 #define QUERY_HEAD  "{ 'execute': 'query-cpu-model-expansion', " \
                     "  'arguments': { 'type': 'full', "
 #define QUERY_TAIL  "}}"
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
 {
     g_test_init(&argc, &argv, NULL);

-    qtest_add_data_func("/arm/query-cpu-model-expansion",
-                        NULL, test_query_cpu_model_expansion);
+    if (qtest_has_accel("tcg")) {
+        qtest_add_data_func("/arm/query-cpu-model-expansion",
+                            NULL, test_query_cpu_model_expansion);
+    }
+
+    if (!g_str_equal(qtest_get_arch(), "aarch64")) {
+        goto out;
+    }
+
     /*
      * For now we only run KVM specific tests with AArch64 QEMU in
      * order avoid attempting to run an AArch32 QEMU with KVM on
      * AArch64 hosts. That won't work and isn't easy to detect.
      */
-    if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
+    if (qtest_has_accel("kvm")) {
         /*
          * This tests target the 'host' CPU type, so register it only if
          * KVM is available.
          */
         qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
                             NULL, test_query_cpu_model_expansion_kvm);
-    }

-    if (g_str_equal(qtest_get_arch(), "aarch64")) {
-        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
-                            NULL, sve_tests_sve_max_vq_8);
-        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
-                            NULL, sve_tests_sve_off);
         qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
                             NULL, sve_tests_sve_off_kvm);
     }

+    if (qtest_has_accel("tcg")) {
+        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
+                            NULL, sve_tests_sve_max_vq_8);
+        qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
+                            NULL, sve_tests_sve_off);
+    }
+
+out:
     return g_test_run();
 }
--
2.34.1
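The pattern this patch applies, registering a test path only when the
accelerator it depends on is compiled into the binary, as a self-contained
sketch (has_accel() and register_test() are stand-ins for qtest_has_accel()
and qtest_add_data_func(), not the real qtest API):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool has_accel(const char *name)
    {
        /* pretend this is a TCG-only build */
        return strcmp(name, "tcg") == 0;
    }

    static void register_test(const char *path)
    {
        printf("registered %s\n", path);
    }

    int main(void)
    {
        if (has_accel("tcg")) {
            register_test("/arm/query-cpu-model-expansion");
        }
        if (has_accel("kvm")) {
            register_test("/arm/kvm/query-cpu-model-expansion");
        }
        return 0;
    }

With this structure a KVM-only binary simply never registers the
TCG-dependent tests instead of running them against a missing accelerator.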
From: Fabiano Rosas <farosas@suse.de>

This allows the tests to be skipped when TCG is not present in the QEMU
binary.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/avocado/boot_linux_console.py | 1 +
 tests/avocado/reverse_debugging.py  | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux_console.py
+++ b/tests/avocado/boot_linux_console.py
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_uboot_netbsd9(self):

     def test_aarch64_raspi3_atf(self):
         """
+        :avocado: tags=accel:tcg
         :avocado: tags=arch:aarch64
         :avocado: tags=machine:raspi3b
         :avocado: tags=cpu:cortex-a53
diff --git a/tests/avocado/reverse_debugging.py b/tests/avocado/reverse_debugging.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/reverse_debugging.py
+++ b/tests/avocado/reverse_debugging.py
@@ -XXX,XX +XXX,XX @@ def reverse_debugging(self, shift=7, args=None):
         vm.shutdown()

 class ReverseDebugging_X86_64(ReverseDebugging):
+    """
+    :avocado: tags=accel:tcg
+    """
+
     REG_PC = 0x10
     REG_CS = 0x12
     def get_pc(self, g):
@@ -XXX,XX +XXX,XX @@ def test_x86_64_pc(self):
         self.reverse_debugging()

 class ReverseDebugging_AArch64(ReverseDebugging):
+    """
+    :avocado: tags=accel:tcg
+    """
+
     REG_PC = 32

     # unidentified gitlab timeout problem
--
2.34.1

From: Fabiano Rosas <farosas@suse.de>

Now that the cortex-a15 is under CONFIG_TCG, use the 'max' cpu as the
default CPU for a KVM-only build.

Note that we cannot use 'host' here because the qtests can run without
any other accelerator (than qtest) and 'host' depends on KVM being
enabled.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     mc->minimum_page_bits = 12;
     mc->possible_cpu_arch_ids = virt_possible_cpu_arch_ids;
     mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
+#ifdef CONFIG_TCG
     mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
+#else
+    mc->default_cpu_type = ARM_CPU_TYPE_NAME("max");
+#endif
     mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
     mc->kvm_type = virt_kvm_type;
     assert(!mc->get_hotplug_handler);
--
2.34.1

diff view generated by jsdifflib
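As a standalone illustration of the displacement scheme this patch converts
to, here is a minimal sketch; the Ctx type is a hypothetical stand-in for
QEMU's DisasContext, not the real API:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in translator state; illustrative only. */
typedef struct {
    uint64_t pc_curr;   /* address of the instruction being translated */
    uint64_t pc_out;    /* PC value the generated code would store */
} Ctx;

/* Displacement-based update: callers pass an offset from pc_curr,
 * never an absolute PC, so nothing absolute is encoded at translate time. */
static void update_pc(Ctx *s, int64_t pc_diff)
{
    s->pc_out = s->pc_curr + pc_diff;
}

int main(void)
{
    Ctx s = { .pc_curr = 0x40001000 };
    update_pc(&s, 0);   /* an exception reports the current insn: diff 0 */
    printf("PC -> 0x%" PRIx64 "\n", s.pc_out);
    return 0;
}

With this shape, gen_exception_insn(s, 0, ...) pins the PC to the current
instruction without ever materializing an absolute address.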
From: Claudio Fontana <cfontana@suse.de>

For "all" builds (tcg + kvm), we want to avoid doing
the PSCI check if TCG is built in but not enabled.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
#include "hw/irq.h"
#include "sysemu/cpu-timers.h"
#include "sysemu/kvm.h"
+#include "sysemu/tcg.h"
#include "qapi/qapi-commands-machine-target.h"
#include "qapi/error.h"
#include "qemu/guest-random.h"
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
env->exception.syndrome);
}

- if (arm_is_psci_call(cpu, cs->exception_index)) {
+ if (tcg_enabled() && arm_is_psci_call(cpu, cs->exception_index)) {
arm_handle_psci_call(cpu);
qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
return;
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 6 +++---
target/arm/translate.c | 10 +++++-----
2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
}

-static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
+static void gen_exception_internal_insn(DisasContext *s, int excp)
{
- gen_a64_update_pc(s, pc - s->pc_curr);
+ gen_a64_update_pc(s, 0);
gen_exception_internal(excp);
s->base.is_jmp = DISAS_NORETURN;
}
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
* Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
*/
if (semihosting_enabled(s->current_el == 0) && imm16 == 0xf000) {
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
} else {
unallocated_encoding(s);
}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
s->base.is_jmp = DISAS_SMC;
}

-static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
+static void gen_exception_internal_insn(DisasContext *s, int excp)
{
gen_set_condexec(s);
- gen_update_pc(s, pc - s->pc_curr);
+ gen_update_pc(s, 0);
gen_exception_internal(excp);
s->base.is_jmp = DISAS_NORETURN;
}
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
*/
if (semihosting_enabled(s->current_el != 0) &&
(imm == (s->thumb ? 0x3c : 0xf000))) {
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
return;
}
}

@@ -XXX,XX +XXX,XX @@ static bool trans_BKPT(DisasContext *s, arg_BKPT *a)
if (arm_dc_feature(s, ARM_FEATURE_M) &&
semihosting_enabled(s->current_el == 0) &&
(a->imm == 0xab)) {
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
} else {
gen_exception_bkpt_insn(s, syn_aa32_bkpt(a->imm, false));
}
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
if (!arm_dc_feature(s, ARM_FEATURE_M) &&
semihosting_enabled(s->current_el == 0) &&
(a->imm == semihost_imm)) {
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
} else {
gen_update_pc(s, curr_insn_len(s));
s->svc_imm = a->imm;
--
2.25.1
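A hedged sketch of the accelerator-guard idiom the first of these two
patches applies: when TCG is compiled out, the guard macro is a constant 0
and the compiler discards the whole branch, so no TCG-only symbol is
referenced. is_psci_call() is a hypothetical stand-in for arm_is_psci_call():

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for QEMU's tcg_enabled(): a compile-time 0 when TCG is not
 * built in, letting the guarded call be eliminated as dead code. */
#ifdef CONFIG_TCG
extern bool tcg_allowed;
#define tcg_enabled() (tcg_allowed)
#else
#define tcg_enabled() 0
#endif

/* Hypothetical placeholder for arm_is_psci_call(). */
static bool is_psci_call(int excp)
{
    return excp == 42;
}

static void handle_exception(int excp)
{
    if (tcg_enabled() && is_psci_call(excp)) {
        puts("...handled as PSCI call");
        return;
    }
    puts("taking exception normally");
}

int main(void)
{
    handle_exception(42);
    handle_exception(7);
    return 0;
}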
From: Philippe Mathieu-Daudé <philmd@linaro.org>

There is no point in using a void pointer to access the NVIC.
Use the real type to avoid casting it while debugging.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-11-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 46 ++++++++++++++++++++++---------------------
hw/intc/armv7m_nvic.c | 38 ++++++++++++-----------------------
target/arm/cpu.c | 1 +
target/arm/m_helper.c | 2 +-
4 files changed, 39 insertions(+), 48 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {

typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;

+typedef struct NVICState NVICState;
+
typedef struct CPUArchState {
/* Regs for current mode. */
uint32_t regs[16];
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
} sau;

#if !defined(CONFIG_USER_ONLY)
- void *nvic;
+ NVICState *nvic;
const struct arm_boot_info *boot_info;
/* Store GICv3CPUState to access from this struct */
void *gicv3state;
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,

/* Interface between CPU and Interrupt controller. */
#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_can_take_pending_exception(void *opaque);
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
#else
-static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
{
return true;
}
#endif
/**
* armv7m_nvic_set_pending: mark the specified exception as pending
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
* if @secure is true and @irq does not specify one of the fixed set
* of architecturally banked exceptions.
*/
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_set_pending_derived: mark this derived exception as pending
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
* exceptions (exceptions generated in the course of trying to take
* a different exception).
*/
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
* Similar to armv7m_nvic_set_pending(), but specifically for exceptions
* generated in the course of lazy stacking of FP registers.
*/
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_get_pending_irq_info: return highest priority pending
* exception, and whether it targets Secure state
- * @opaque: the NVIC
+ * @s: the NVIC
* @pirq: set to pending exception number
* @ptargets_secure: set to whether pending exception targets Secure
*
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
* to true if the current highest priority pending exception should
* be taken to Secure state, false for NS.
*/
-void armv7m_nvic_get_pending_irq_info(void *opaque, int *pirq,
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
bool *ptargets_secure);
/**
* armv7m_nvic_acknowledge_irq: make highest priority pending exception active
- * @opaque: the NVIC
+ * @s: the NVIC
*
* Move the current highest priority pending exception from the pending
* state to the active state, and update v7m.exception to indicate that
* it is the exception currently being handled.
*/
-void armv7m_nvic_acknowledge_irq(void *opaque);
+void armv7m_nvic_acknowledge_irq(NVICState *s);
/**
* armv7m_nvic_complete_irq: complete specified interrupt or exception
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to complete
* @secure: true if this exception was secure
*
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
* 0 if there is still an irq active after this one was completed
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
*/
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
- * @opaque: the NVIC
+ * @s: the NVIC
* @irq: the exception number to mark pending
* @secure: false for non-banked exceptions or for the nonsecure
* version of a banked exception, true for the secure version of a banked
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
* interrupt the current execution priority. This controls whether the
* RDY bit for it in the FPCCR is set.
*/
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
/**
* armv7m_nvic_raw_execution_priority: return the raw execution priority
- * @opaque: the NVIC
+ * @s: the NVIC
*
* Returns: the raw execution priority as defined by the v8M architecture.
* This is the execution priority minus the effects of AIRCR.PRIS,
* and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
* (v8M ARM ARM I_PKLD.)
*/
-int armv7m_nvic_raw_execution_priority(void *opaque);
+int armv7m_nvic_raw_execution_priority(NVICState *s);
/**
* armv7m_nvic_neg_prio_requested: return true if the requested execution
* priority is negative for the specified security state.
- * @opaque: the NVIC
+ * @s: the NVIC
* @secure: the security state to test
* This corresponds to the pseudocode IsReqExecPriNeg().
*/
#ifndef CONFIG_USER_ONLY
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
#else
-static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
{
return false;
}
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
return MIN(running, s->exception_prio);
}

-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
{
/* Return true if the requested execution priority is negative
* for the specified security state, ie that security state
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
* mean we don't allow FAULTMASK_NS to actually make the execution
* priority negative). Compare pseudocode IsReqExcPriNeg().
*/
- NVICState *s = opaque;
-
if (s->cpu->env.v7m.faultmask[secure]) {
return true;
}
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
return false;
}

-bool armv7m_nvic_can_take_pending_exception(void *opaque)
+bool armv7m_nvic_can_take_pending_exception(NVICState *s)
{
- NVICState *s = opaque;
-
return nvic_exec_prio(s) > nvic_pending_prio(s);
}

-int armv7m_nvic_raw_execution_priority(void *opaque)
+int armv7m_nvic_raw_execution_priority(NVICState *s)
{
- NVICState *s = opaque;
-
return s->exception_prio;
}

@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
* if @secure is true and @irq does not specify one of the fixed set
* of architecturally banked exceptions.
*/
-static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
+static void armv7m_nvic_clear_pending(NVICState *s, int irq, bool secure)
{
- NVICState *s = (NVICState *)opaque;
VecInfo *vec;

assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
@@ -XXX,XX +XXX,XX @@ static void do_armv7m_nvic_set_pending(void *opaque, int irq, bool secure,
}
}

-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure)
{
- do_armv7m_nvic_set_pending(opaque, irq, secure, false);
+ do_armv7m_nvic_set_pending(s, irq, secure, false);
}

-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure)
{
- do_armv7m_nvic_set_pending(opaque, irq, secure, true);
+ do_armv7m_nvic_set_pending(s, irq, secure, true);
}

-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure)
{
/*
* Pend an exception during lazy FP stacking. This differs
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
* whether we should escalate depends on the saved context
* in the FPCCR register, not on the current state of the CPU/NVIC.
*/
- NVICState *s = (NVICState *)opaque;
bool banked = exc_is_banked(irq);
VecInfo *vec;
bool targets_secure;
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
}

/* Make pending IRQ active. */
-void armv7m_nvic_acknowledge_irq(void *opaque)
+void armv7m_nvic_acknowledge_irq(NVICState *s)
{
- NVICState *s = (NVICState *)opaque;
CPUARMState *env = &s->cpu->env;
const int pending = s->vectpending;
const int running = nvic_exec_prio(s);
@@ -XXX,XX +XXX,XX @@ static bool vectpending_targets_secure(NVICState *s)
exc_targets_secure(s, s->vectpending);
}

-void armv7m_nvic_get_pending_irq_info(void *opaque,
+void armv7m_nvic_get_pending_irq_info(NVICState *s,
int *pirq, bool *ptargets_secure)
{
- NVICState *s = (NVICState *)opaque;
const int pending = s->vectpending;
bool targets_secure;

@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
*pirq = pending;
}

-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure)
{
- NVICState *s = (NVICState *)opaque;
VecInfo *vec = NULL;
int ret = 0;

@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
return ret;
}

-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure)
{
/*
* Return whether an exception is "ready", i.e. it is enabled and is
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
* for non-banked exceptions secure is always false; for banked exceptions
* it indicates which of the exceptions is required.
*/
- NVICState *s = (NVICState *)opaque;
bool banked = exc_is_banked(irq);
VecInfo *vec;
int running = nvic_exec_prio(s);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
#if !defined(CONFIG_USER_ONLY)
#include "hw/loader.h"
#include "hw/boards.h"
+#include "hw/intc/armv7m_nvic.h"
#endif
#include "sysemu/tcg.h"
#include "sysemu/qtest.h"
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
* that we will need later in order to do lazy FP reg stacking.
*/
bool is_secure = env->v7m.secure;
- void *nvic = env->nvic;
+ NVICState *nvic = env->nvic;
/*
* Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
* are banked and we want to update the bit in the bank for the
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static uint32_t read_pc(DisasContext *s)
return s->pc_curr + (s->thumb ? 4 : 8);
}

+/* The pc_curr difference for an architectural jump. */
+static target_long jmp_diff(DisasContext *s, target_long diff)
+{
+ return diff + (s->thumb ? 4 : 8);
+}
+
/* Set a variable to the value of a CPU register. */
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
{
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
* cpu_loop_exec. Any live exit_requests will be processed as we
* enter the next TB.
*/
-static void gen_goto_tb(DisasContext *s, int n, int diff)
+static void gen_goto_tb(DisasContext *s, int n, target_long diff)
{
target_ulong dest = s->pc_curr + diff;

@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
}

/* Jump, specifying which TB number to use if we gen_goto_tb() */
-static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
+static void gen_jmp_tb(DisasContext *s, target_long diff, int tbno)
{
- int diff = dest - s->pc_curr;
-
if (unlikely(s->ss_active)) {
/* An indirect jump so that we still trigger the debug exception. */
gen_update_pc(s, diff);
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
}
}

-static inline void gen_jmp(DisasContext *s, uint32_t dest)
+static inline void gen_jmp(DisasContext *s, target_long diff)
{
- gen_jmp_tb(s, dest, 0);
+ gen_jmp_tb(s, diff, 0);
}

static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)

static bool trans_B(DisasContext *s, arg_i *a)
{
- gen_jmp(s, read_pc(s) + a->imm);
+ gen_jmp(s, jmp_diff(s, a->imm));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
return true;
}
arm_skip_unless(s, a->cond);
- gen_jmp(s, read_pc(s) + a->imm);
+ gen_jmp(s, jmp_diff(s, a->imm));
return true;
}

static bool trans_BL(DisasContext *s, arg_i *a)
{
tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
- gen_jmp(s, read_pc(s) + a->imm);
+ gen_jmp(s, jmp_diff(s, a->imm));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
}
tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
store_cpu_field_constant(!s->thumb, thumb);
- gen_jmp(s, (read_pc(s) & ~3) + a->imm);
+ /* This jump is computed from an aligned PC: subtract off the low bits. */
+ gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
* when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
*/
}
- gen_jmp_tb(s, s->base.pc_next, 1);
+ gen_jmp_tb(s, curr_insn_len(s), 1);

gen_set_label(nextlabel);
- gen_jmp(s, read_pc(s) + a->imm);
+ gen_jmp(s, jmp_diff(s, a->imm));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)

if (a->f) {
/* Loop-forever: just jump back to the loop start */
- gen_jmp(s, read_pc(s) - a->imm);
+ gen_jmp(s, jmp_diff(s, -a->imm));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
tcg_temp_free_i32(decr);
}
/* Jump back to the loop start */
- gen_jmp(s, read_pc(s) - a->imm);
+ gen_jmp(s, jmp_diff(s, -a->imm));

gen_set_label(loopend);
if (a->tp) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
}
/* End TB, continuing to following insn */
- gen_jmp_tb(s, s->base.pc_next, 1);
+ gen_jmp_tb(s, curr_insn_len(s), 1);
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
tmp, 0, s->condlabel);
tcg_temp_free_i32(tmp);
- gen_jmp(s, read_pc(s) + a->imm);
+ gen_jmp(s, jmp_diff(s, a->imm));
return true;
}
--
2.25.1
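The jmp_diff() helper added above encodes the AArch32 rule that the
architectural PC reads as the instruction address plus 8 (A32) or plus 4
(Thumb). A self-contained sketch of that arithmetic, with made-up inputs:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* imm is the decoded branch immediate; the architectural PC offset is
 * folded into the displacement from the insn address. */
static int64_t jmp_diff(bool thumb, int64_t imm)
{
    return imm + (thumb ? 4 : 8);
}

int main(void)
{
    uint32_t pc_curr = 0x8000;   /* address of the branch insn */
    int64_t imm = -16;           /* made-up backward branch */
    printf("A32 target:   0x%" PRIx32 "\n",
           (uint32_t)(pc_curr + jmp_diff(false, imm)));
    printf("Thumb target: 0x%" PRIx32 "\n",
           (uint32_t)(pc_curr + jmp_diff(true, imm)));
    return 0;
}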
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230206223502.25122-3-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/m_helper.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
return 0;
}

-#else
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
+{
+ return ARMMMUIdx_MUser;
+}
+
+#else /* !CONFIG_USER_ONLY */

/*
* What kind of stack write are we doing? This affects how exceptions
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
return tt_resp;
}

-#endif /* !CONFIG_USER_ONLY */
-
ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
bool secstate, bool priv, bool negpri)
{
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)

return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
}
+
+#endif /* !CONFIG_USER_ONLY */
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
}
}

+static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
+{
+ tcg_gen_movi_i64(dest, s->pc_curr + diff);
+}
+
void gen_a64_update_pc(DisasContext *s, target_long diff)
{
- tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
+ gen_pc_plus_diff(s, cpu_pc, diff);
}

/*
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)

if (insn & (1U << 31)) {
/* BL Branch with link */
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+ gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
}

/* B Branch / BL Branch with link */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
default:
goto do_unallocated;
}
- gen_a64_set_pc(s, dst);
/* BLR also needs to load return address */
if (opc == 1) {
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+ TCGv_i64 lr = cpu_reg(s, 30);
+ if (dst == lr) {
+ TCGv_i64 tmp = new_tmp_a64(s);
+ tcg_gen_mov_i64(tmp, dst);
+ dst = tmp;
+ }
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
}
+ gen_a64_set_pc(s, dst);
break;

case 8: /* BRAA */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
} else {
dst = cpu_reg(s, rn);
}
- gen_a64_set_pc(s, dst);
/* BLRAA also needs to load return address */
if (opc == 9) {
- tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+ TCGv_i64 lr = cpu_reg(s, 30);
+ if (dst == lr) {
+ TCGv_i64 tmp = new_tmp_a64(s);
+ tcg_gen_mov_i64(tmp, dst);
+ dst = tmp;
+ }
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
}
+ gen_a64_set_pc(s, dst);
break;

case 4: /* ERET */
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)

tcg_rt = cpu_reg(s, rt);

- clean_addr = tcg_constant_i64(s->pc_curr + imm);
+ clean_addr = new_tmp_a64(s);
+ gen_pc_plus_diff(s, clean_addr, imm);
if (is_vector) {
do_fp_ld(s, rt, clean_addr, size);
} else {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
{
unsigned int page, rd;
- uint64_t base;
- uint64_t offset;
+ int64_t offset;

page = extract32(insn, 31, 1);
/* SignExtend(immhi:immlo) -> offset */
offset = sextract64(insn, 5, 19);
offset = offset << 2 | extract32(insn, 29, 2);
rd = extract32(insn, 0, 5);
- base = s->pc_curr;

if (page) {
/* ADRP (page based) */
- base &= ~0xfff;
offset <<= 12;
+ /* The page offset is ok for TARGET_TB_PCREL. */
+ offset -= s->pc_curr & 0xfff;
}

- tcg_gen_movi_i64(cpu_reg(s, rd), base + offset);
+ gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
}

/*
--
2.25.1
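The ADRP rewrite above is equivalent to computing (PC & ~0xfff) +
(imm << 12); folding the page alignment into the displacement keeps the
operation purely PC-relative. A standalone sketch of that equivalence
(simplified: imm is already sign-extended):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* ADRP target: the immediate selects a 4KiB page, and subtracting the
 * PC's page offset makes the result (PC & ~0xfff) + (imm << 12) while
 * the generated code only ever adds a displacement to PC. */
static uint64_t adrp_target(uint64_t pc_curr, int64_t imm)
{
    int64_t offset = imm << 12;
    offset -= pc_curr & 0xfff;
    return pc_curr + offset;
}

int main(void)
{
    uint64_t pc = 0x400123456;
    printf("0x%" PRIx64 "\n", adrp_target(pc, 1));   /* 0x400124000 */
    return 0;
}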
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-5-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
}
}

+#ifndef CONFIG_USER_ONLY
/*
* We don't know until after realize whether there's a GICv3
* attached, and that is what registers the gicv3 sysregs.
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
return pfr1;
}

-#ifndef CONFIG_USER_ONLY
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1,
.access = PL1_R, .type = ARM_CP_NO_RAW,
.accessfn = access_aa32_tid3,
+#ifdef CONFIG_USER_ONLY
+ .type = ARM_CP_CONST,
+ .resetvalue = cpu->isar.id_pfr1,
+#else
+ .type = ARM_CP_NO_RAW,
+ .accessfn = access_aa32_tid3,
.readfn = id_pfr1_read,
- .writefn = arm_cp_write_ignore },
+ .writefn = arm_cp_write_ignore
+#endif
+ },
{ .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
.access = PL1_R, .type = ARM_CP_CONST,
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 38 +++++++++++++++++++++-----------------
1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
}
}

-/* The architectural value of PC. */
-static uint32_t read_pc(DisasContext *s)
-{
- return s->pc_curr + (s->thumb ? 4 : 8);
-}
-
/* The pc_curr difference for an architectural jump. */
static target_long jmp_diff(DisasContext *s, target_long diff)
{
return diff + (s->thumb ? 4 : 8);
}

+static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
+{
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
+}
+
/* Set a variable to the value of a CPU register. */
void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
{
if (reg == 15) {
- tcg_gen_movi_i32(var, read_pc(s));
+ gen_pc_plus_diff(s, var, jmp_diff(s, 0));
} else {
tcg_gen_mov_i32(var, cpu_R[reg]);
}
@@ -XXX,XX +XXX,XX @@ TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
TCGv_i32 tmp = tcg_temp_new_i32();

if (reg == 15) {
- tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
+ /*
+ * This address is computed from an aligned PC:
+ * subtract off the low bits.
+ */
+ gen_pc_plus_diff(s, tmp, jmp_diff(s, ofs - (s->pc_curr & 3)));
} else {
tcg_gen_addi_i32(tmp, cpu_R[reg], ofs);
}
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s)
/* Force a TB lookup after an instruction that changes the CPU state. */
void gen_lookup_tb(DisasContext *s)
{
- tcg_gen_movi_i32(cpu_R[15], s->base.pc_next);
+ gen_pc_plus_diff(s, cpu_R[15], curr_insn_len(s));
s->base.is_jmp = DISAS_EXIT;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_r(DisasContext *s, arg_BLX_r *a)
return false;
}
tmp = load_reg(s, a->rm);
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
gen_bx(s, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)

static bool trans_BL(DisasContext *s, arg_i *a)
{
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
gen_jmp(s, jmp_diff(s, a->imm));
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
if (s->thumb && (a->imm & 2)) {
return false;
}
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
store_cpu_field_constant(!s->thumb, thumb);
/* This jump is computed from an aligned PC: subtract off the low bits. */
gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
static bool trans_BL_BLX_prefix(DisasContext *s, arg_BL_BLX_prefix *a)
{
assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
- tcg_gen_movi_i32(cpu_R[14], read_pc(s) + (a->imm << 12));
+ gen_pc_plus_diff(s, cpu_R[14], jmp_diff(s, a->imm << 12));
return true;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_BL_suffix(DisasContext *s, arg_BL_suffix *a)

assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
tcg_gen_addi_i32(tmp, cpu_R[14], (a->imm << 1) | 1);
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
gen_bx(s, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_suffix(DisasContext *s, arg_BLX_suffix *a)
tmp = tcg_temp_new_i32();
tcg_gen_addi_i32(tmp, cpu_R[14], a->imm << 1);
tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
- tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
+ gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
gen_bx(s, tmp);
return true;
}
@@ -XXX,XX +XXX,XX @@ static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
tcg_gen_add_i32(addr, addr, tmp);

gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), half ? MO_UW : MO_UB);
- tcg_temp_free_i32(addr);

tcg_gen_add_i32(tmp, tmp, tmp);
- tcg_gen_addi_i32(tmp, tmp, read_pc(s));
+ gen_pc_plus_diff(s, addr, jmp_diff(s, 0));
+ tcg_gen_add_i32(tmp, tmp, addr);
+ tcg_temp_free_i32(addr);
store_reg(s, 15, tmp);
return true;
}
--
2.25.1
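Similarly for AArch32, add_reg_for_lit() now derives a literal-pool address
from the word-aligned architectural PC. A minimal sketch of the equivalent
arithmetic, with illustrative values:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Literal base on AArch32: the architectural PC (insn + 8 on A32,
 * insn + 4 on Thumb) rounded down to a word boundary. This equals
 * pc_curr + jmp_diff(s, ofs - (pc_curr & 3)) in the patch above. */
static uint32_t lit_addr(uint32_t pc_curr, int thumb, int32_t ofs)
{
    uint32_t arch_pc = pc_curr + (thumb ? 4 : 8);
    return (arch_pc & ~3u) + ofs;
}

int main(void)
{
    printf("A32:   0x%" PRIx32 "\n", lit_addr(0x8000, 0, 16));
    printf("Thumb: 0x%" PRIx32 "\n", lit_addr(0x8002, 1, 16));
    return 0;
}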
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-8-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {

void *nvic;
const struct arm_boot_info *boot_info;
+#if !defined(CONFIG_USER_ONLY)
/* Store GICv3CPUState to access from this struct */
void *gicv3state;
-#if defined(CONFIG_USER_ONLY)
+#else /* CONFIG_USER_ONLY */
/* For usermode syscall translation. */
bool eabi;
#endif /* CONFIG_USER_ONLY */
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230206223502.25122-9-philmd@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
} sau;

void *nvic;
- const struct arm_boot_info *boot_info;
#if !defined(CONFIG_USER_ONLY)
+ const struct arm_boot_info *boot_info;
/* Store GICv3CPUState to access from this struct */
void *gicv3state;
#else /* CONFIG_USER_ONLY */
--
2.34.1
From: Alex Bennée <alex.bennee@linaro.org>

The two TCG tests for GICv2 and GICv3 are very heavyweight distros
that take a long time to boot up, especially for an --enable-debug
build. The total code coverage they give is:

Overall coverage rate:
lines......: 11.2% (59584 of 530123 lines)
functions..: 15.0% (7436 of 49443 functions)
branches...: 6.3% (19273 of 303933 branches)

We already get pretty close to that with the machine_aarch64_virt
tests which only do one full boot (~120s vs ~600s) of alpine. We
expand the kernel+initrd boot (~8s) to test both GICs and also add an
RNG device and a block device to generate a few IRQs and exercise the
storage layer. With that we get to a coverage of:

Overall coverage rate:
lines......: 11.0% (58121 of 530123 lines)
functions..: 14.9% (7343 of 49443 functions)
branches...: 6.0% (18269 of 303933 branches)

which I feel is close enough given the massive time saving. If we want
to target any more sub-systems we can use lighter-weight, more directed
tests.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230203181632.2919715-1-alex.bennee@linaro.org
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/avocado/boot_linux.py | 48 ++++----------------
tests/avocado/machine_aarch64_virt.py | 63 ++++++++++++++++++++++++---
2 files changed, 65 insertions(+), 46 deletions(-)

diff --git a/tests/avocado/boot_linux.py b/tests/avocado/boot_linux.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/boot_linux.py
+++ b/tests/avocado/boot_linux.py
@@ -XXX,XX +XXX,XX @@ def test_pc_q35_kvm(self):
self.launch_and_wait(set_up_ssh_connection=False)


-# For Aarch64 we only boot KVM tests in CI as the TCG tests are very
-# heavyweight. There are lighter weight distros which we use in the
-# machine_aarch64_virt.py tests.
+# For Aarch64 we only boot KVM tests in CI as booting the current
+# Fedora OS in TCG tests is very heavyweight. There are lighter weight
+# distros which we use in the machine_aarch64_virt.py tests.
class BootLinuxAarch64(LinuxTest):
"""
:avocado: tags=arch:aarch64
:avocado: tags=machine:virt
- :avocado: tags=machine:gic-version=2
"""
timeout = 720

- def add_common_args(self):
- self.vm.add_args('-bios',
- os.path.join(BUILD_DIR, 'pc-bios',
- 'edk2-aarch64-code.fd'))
- self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
- self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
-
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
- def test_fedora_cloud_tcg_gicv2(self):
- """
- :avocado: tags=accel:tcg
- :avocado: tags=cpu:max
- :avocado: tags=device:gicv2
- """
- self.require_accelerator("tcg")
- self.vm.add_args("-accel", "tcg")
- self.vm.add_args("-cpu", "max,lpa2=off")
- self.vm.add_args("-machine", "virt,gic-version=2")
- self.add_common_args()
- self.launch_and_wait(set_up_ssh_connection=False)
-
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
- def test_fedora_cloud_tcg_gicv3(self):
- """
- :avocado: tags=accel:tcg
- :avocado: tags=cpu:max
- :avocado: tags=device:gicv3
- """
- self.require_accelerator("tcg")
- self.vm.add_args("-accel", "tcg")
- self.vm.add_args("-cpu", "max,lpa2=off")
- self.vm.add_args("-machine", "virt,gic-version=3")
- self.add_common_args()
- self.launch_and_wait(set_up_ssh_connection=False)
-
def test_virt_kvm(self):
"""
:avocado: tags=accel:kvm
@@ -XXX,XX +XXX,XX @@ def test_virt_kvm(self):
self.require_accelerator("kvm")
self.vm.add_args("-accel", "kvm")
self.vm.add_args("-machine", "virt,gic-version=host")
- self.add_common_args()
+ self.vm.add_args('-bios',
+ os.path.join(BUILD_DIR, 'pc-bios',
+ 'edk2-aarch64-code.fd'))
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
+ self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
self.launch_and_wait(set_up_ssh_connection=False)


diff --git a/tests/avocado/machine_aarch64_virt.py b/tests/avocado/machine_aarch64_virt.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/machine_aarch64_virt.py
+++ b/tests/avocado/machine_aarch64_virt.py
@@ -XXX,XX +XXX,XX @@

import time
import os
+import logging

from avocado_qemu import QemuSystemTest
from avocado_qemu import wait_for_console_pattern
from avocado_qemu import exec_command
from avocado_qemu import BUILD_DIR
+from avocado.utils import process
+from avocado.utils.path import find_command

class Aarch64VirtMachine(QemuSystemTest):
KERNEL_COMMON_COMMAND_LINE = 'printk.time=0 '
@@ -XXX,XX +XXX,XX @@ def test_alpine_virt_tcg_gic_max(self):
self.wait_for_console_pattern('Welcome to Alpine Linux 3.16')


- def test_aarch64_virt(self):
+ def common_aarch64_virt(self, machine):
"""
- :avocado: tags=arch:aarch64
- :avocado: tags=machine:virt
- :avocado: tags=accel:tcg
- :avocado: tags=cpu:max
+ Common code to launch basic virt machine with kernel+initrd
+ and a scratch disk.
"""
+ logger = logging.getLogger('aarch64_virt')
+
kernel_url = ('https://fileserver.linaro.org/s/'
'z6B2ARM7DQT3HWN/download')
-
kernel_hash = 'ed11daab50c151dde0e1e9c9cb8b2d9bd3215347'
kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)

@@ -XXX,XX +XXX,XX @@ def test_aarch64_virt(self):
'console=ttyAMA0')
self.require_accelerator("tcg")
self.vm.add_args('-cpu', 'max,pauth-impdef=on',
+ '-machine', machine,
'-accel', 'tcg',
'-kernel', kernel_path,
'-append', kernel_command_line)
+
+ # A RNG offers an easy way to generate a few IRQs
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
+ self.vm.add_args('-object',
+ 'rng-random,id=rng0,filename=/dev/urandom')
+
+ # Also add a scratch block device
+ logger.info('creating scratch qcow2 image')
+ image_path = os.path.join(self.workdir, 'scratch.qcow2')
+ qemu_img = os.path.join(BUILD_DIR, 'qemu-img')
+ if not os.path.exists(qemu_img):
+ qemu_img = find_command('qemu-img', False)
+ if qemu_img is False:
+ self.cancel('Could not find "qemu-img", which is required to '
+ 'create the temporary qcow2 image')
+ cmd = '%s create -f qcow2 %s 8M' % (qemu_img, image_path)
+ process.run(cmd)
+
+ # Add the device
+ self.vm.add_args('-blockdev',
+ f"driver=qcow2,file.driver=file,file.filename={image_path},node-name=scratch")
+ self.vm.add_args('-device',
+ 'virtio-blk-device,drive=scratch')
+
self.vm.launch()
self.wait_for_console_pattern('Welcome to Buildroot')
time.sleep(0.1)
exec_command(self, 'root')
time.sleep(0.1)
+ exec_command(self, 'dd if=/dev/hwrng of=/dev/vda bs=512 count=4')
+ time.sleep(0.1)
+ exec_command(self, 'md5sum /dev/vda')
+ time.sleep(0.1)
+ exec_command(self, 'cat /proc/interrupts')
+ time.sleep(0.1)
exec_command(self, 'cat /proc/self/maps')
time.sleep(0.1)
+
+ def test_aarch64_virt_gicv3(self):
+ """
+ :avocado: tags=arch:aarch64
+ :avocado: tags=machine:virt
+ :avocado: tags=accel:tcg
+ :avocado: tags=cpu:max
+ """
+ self.common_aarch64_virt("virt,gic-version=3")
+
+ def test_aarch64_virt_gicv2(self):
+ """
+ :avocado: tags=arch:aarch64
+ :avocado: tags=machine:virt
+ :avocado: tags=accel:tcg
+ :avocado: tags=cpu:max
+ """
+ self.common_aarch64_virt("virt,gic-version=2")
--
2.34.1
From: Philippe Mathieu-Daudé <philmd@linaro.org>

Since commit acc0b8b05a, when running the ZynqMP ZCU102 board with
a QEMU configured using --without-default-devices, we get:

$ qemu-system-aarch64 -M xlnx-zcu102
qemu-system-aarch64: missing object type 'usb_dwc3'
Abort trap: 6

Fix by adding the missing Kconfig dependency.

Fixes: acc0b8b05a ("hw/arm/xlnx-zynqmp: Connect ZynqMP's USB controllers")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230216092327.2203-1-philmd@linaro.org
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/Kconfig
+++ b/hw/arm/Kconfig
@@ -XXX,XX +XXX,XX @@ config XLNX_ZYNQMP_ARM
select XLNX_CSU_DMA
select XLNX_ZYNQMP
select XLNX_ZDMA
+ select USB_DWC3

config XLNX_VERSAL
bool
--
2.34.1
From: Hao Wu <wuhaotsh@google.com>

Havard has not been working on the Nuvoton systems for a while
and won't be able to do any work on them in the future. So I'll
take over maintaining the Nuvoton systems from him.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Acked-by: Havard Skinnemoen <hskinnemoen@google.com>
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
Message-id: 20230208235433.3989937-2-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/net/mv88w8618_eth.h
F: docs/system/arm/musicpal.rst

Nuvoton NPCM7xx
-M: Havard Skinnemoen <hskinnemoen@google.com>
M: Tyrone Ting <kfting@nuvoton.com>
+M: Hao Wu <wuhaotsh@google.com>
L: qemu-arm@nongnu.org
S: Supported
F: hw/*/npcm7xx*
--
2.34.1
Deleted patch
1
From: Hao Wu <wuhaotsh@google.com>
2
1
3
Nuvoton's PSPI is a general purpose SPI module which enables
4
connections to SPI-based peripheral devices.
5
6
Signed-off-by: Hao Wu <wuhaotsh@google.com>
7
Reviewed-by: Chris Rauer <crauer@google.com>
8
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
9
Message-id: 20230208235433.3989937-3-wuhaotsh@google.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
MAINTAINERS | 6 +-
13
include/hw/ssi/npcm_pspi.h | 53 +++++++++
14
hw/ssi/npcm_pspi.c | 221 +++++++++++++++++++++++++++++++++++++
15
hw/ssi/meson.build | 2 +-
16
hw/ssi/trace-events | 5 +
17
5 files changed, 283 insertions(+), 4 deletions(-)
18
create mode 100644 include/hw/ssi/npcm_pspi.h
19
create mode 100644 hw/ssi/npcm_pspi.c
20
21
diff --git a/MAINTAINERS b/MAINTAINERS
22
index XXXXXXX..XXXXXXX 100644
23
--- a/MAINTAINERS
24
+++ b/MAINTAINERS
25
@@ -XXX,XX +XXX,XX @@ M: Tyrone Ting <kfting@nuvoton.com>
26
M: Hao Wu <wuhaotsh@google.com>
27
L: qemu-arm@nongnu.org
28
S: Supported
29
-F: hw/*/npcm7xx*
30
-F: include/hw/*/npcm7xx*
31
-F: tests/qtest/npcm7xx*
32
+F: hw/*/npcm*
33
+F: include/hw/*/npcm*
34
+F: tests/qtest/npcm*
35
F: pc-bios/npcm7xx_bootrom.bin
36
F: roms/vbootrom
37
F: docs/system/arm/nuvoton.rst
38
diff --git a/include/hw/ssi/npcm_pspi.h b/include/hw/ssi/npcm_pspi.h
39
new file mode 100644
40
index XXXXXXX..XXXXXXX
41
--- /dev/null
42
+++ b/include/hw/ssi/npcm_pspi.h
43
@@ -XXX,XX +XXX,XX @@
44
+/*
45
+ * Nuvoton Peripheral SPI Module
46
+ *
47
+ * Copyright 2023 Google LLC
48
+ *
49
+ * This program is free software; you can redistribute it and/or modify it
50
+ * under the terms of the GNU General Public License as published by the
51
+ * Free Software Foundation; either version 2 of the License, or
52
+ * (at your option) any later version.
53
+ *
54
+ * This program is distributed in the hope that it will be useful, but WITHOUT
55
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
56
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
57
+ * for more details.
58
+ */
59
+#ifndef NPCM_PSPI_H
60
+#define NPCM_PSPI_H
61
+
62
+#include "hw/ssi/ssi.h"
63
+#include "hw/sysbus.h"
64
+
65
+/*
66
+ * Number of registers in our device state structure. Don't change this without
67
+ * incrementing the version_id in the vmstate.
68
+ */
69
+#define NPCM_PSPI_NR_REGS 3
70
+
71
+/**
72
+ * NPCMPSPIState - Device state for one Flash Interface Unit.
73
+ * @parent: System bus device.
74
+ * @mmio: Memory region for register access.
75
+ * @spi: The SPI bus mastered by this controller.
76
+ * @regs: Register contents.
77
+ * @irq: The interrupt request queue for this module.
78
+ *
79
+ * Each PSPI has a shared bank of registers, and controls up to four chip
80
+ * selects. Each chip select has a dedicated memory region which may be used to
81
+ * read and write the flash connected to that chip select as if it were memory.
82
+ */
83
+typedef struct NPCMPSPIState {
84
+ SysBusDevice parent;
85
+
86
+ MemoryRegion mmio;
87
+
88
+ SSIBus *spi;
89
+ uint16_t regs[NPCM_PSPI_NR_REGS];
90
+ qemu_irq irq;
91
+} NPCMPSPIState;
92
+
93
+#define TYPE_NPCM_PSPI "npcm-pspi"
94
+OBJECT_DECLARE_SIMPLE_TYPE(NPCMPSPIState, NPCM_PSPI)
95
+
96
+#endif /* NPCM_PSPI_H */
97
diff --git a/hw/ssi/npcm_pspi.c b/hw/ssi/npcm_pspi.c
98
new file mode 100644
99
index XXXXXXX..XXXXXXX
100
--- /dev/null
101
+++ b/hw/ssi/npcm_pspi.c
102
@@ -XXX,XX +XXX,XX @@
103
+/*
104
+ * Nuvoton NPCM Peripheral SPI Module (PSPI)
105
+ *
106
+ * Copyright 2023 Google LLC
107
+ *
108
+ * This program is free software; you can redistribute it and/or modify it
109
+ * under the terms of the GNU General Public License as published by the
110
+ * Free Software Foundation; either version 2 of the License, or
111
+ * (at your option) any later version.
112
+ *
113
+ * This program is distributed in the hope that it will be useful, but WITHOUT
114
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
115
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
116
+ * for more details.
117
+ */
118
+
119
+#include "qemu/osdep.h"
120
+
121
+#include "hw/irq.h"
122
+#include "hw/registerfields.h"
123
+#include "hw/ssi/npcm_pspi.h"
124
+#include "migration/vmstate.h"
125
+#include "qapi/error.h"
126
+#include "qemu/error-report.h"
127
+#include "qemu/log.h"
128
+#include "qemu/module.h"
129
+#include "qemu/units.h"
130
+
131
+#include "trace.h"
132
+
133
+REG16(PSPI_DATA, 0x0)
134
+REG16(PSPI_CTL1, 0x2)
135
+ FIELD(PSPI_CTL1, SPIEN, 0, 1)
136
+ FIELD(PSPI_CTL1, MOD, 2, 1)
137
+ FIELD(PSPI_CTL1, EIR, 5, 1)
138
+ FIELD(PSPI_CTL1, EIW, 6, 1)
139
+ FIELD(PSPI_CTL1, SCM, 7, 1)
140
+ FIELD(PSPI_CTL1, SCIDL, 8, 1)
141
+ FIELD(PSPI_CTL1, SCDV, 9, 7)
142
+REG16(PSPI_STAT, 0x4)
143
+ FIELD(PSPI_STAT, BSY, 0, 1)
144
+ FIELD(PSPI_STAT, RBF, 1, 1)
145
+
146
+static void npcm_pspi_update_irq(NPCMPSPIState *s)
147
+{
148
+ int level = 0;
149
+
150
+ /* Only fire IRQ when the module is enabled. */
151
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, SPIEN)) {
152
+ /* Update interrupt as BSY is cleared. */
153
+ if ((!FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, BSY)) &&
154
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIW)) {
155
+ level = 1;
156
+ }
157
+
158
+ /* Update interrupt as RBF is set. */
159
+ if (FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, RBF) &&
160
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIR)) {
161
+ level = 1;
162
+ }
163
+ }
164
+ qemu_set_irq(s->irq, level);
165
+}
166
+
167
+static uint16_t npcm_pspi_read_data(NPCMPSPIState *s)
168
+{
169
+ uint16_t value = s->regs[R_PSPI_DATA];
170
+
171
+ /* Clear stat bits as the value are read out. */
172
+ s->regs[R_PSPI_STAT] = 0;
173
+
174
+ return value;
175
+}
176
+
177
+static void npcm_pspi_write_data(NPCMPSPIState *s, uint16_t data)
178
+{
179
+ uint16_t value = 0;
180
+
181
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, MOD)) {
182
+ value = ssi_transfer(s->spi, extract16(data, 8, 8)) << 8;
183
+ }
184
+ value |= ssi_transfer(s->spi, extract16(data, 0, 8));
185
+ s->regs[R_PSPI_DATA] = value;
186
+
187
+ /* Mark data as available */
188
+ s->regs[R_PSPI_STAT] = R_PSPI_STAT_BSY_MASK | R_PSPI_STAT_RBF_MASK;
189
+}
190
+
191
+/* Control register read handler. */
192
+static uint64_t npcm_pspi_ctrl_read(void *opaque, hwaddr addr,
193
+ unsigned int size)
194
+{
195
+ NPCMPSPIState *s = opaque;
196
+ uint16_t value;
197
+
198
+ switch (addr) {
199
+ case A_PSPI_DATA:
200
+ value = npcm_pspi_read_data(s);
201
+ break;
202
+
203
+ case A_PSPI_CTL1:
204
+ value = s->regs[R_PSPI_CTL1];
205
+ break;
206
+
207
+ case A_PSPI_STAT:
208
+ value = s->regs[R_PSPI_STAT];
209
+ break;
210
+
211
+ default:
212
+ qemu_log_mask(LOG_GUEST_ERROR,
213
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
214
+ DEVICE(s)->canonical_path, addr);
215
+ return 0;
216
+ }
217
+ trace_npcm_pspi_ctrl_read(DEVICE(s)->canonical_path, addr, value);
218
+ npcm_pspi_update_irq(s);
219
+
220
+ return value;
221
+}
222
+
+/* Control register write handler. */
+static void npcm_pspi_ctrl_write(void *opaque, hwaddr addr, uint64_t v,
+                                 unsigned int size)
+{
+    NPCMPSPIState *s = opaque;
+    uint16_t value = v;
+
+    trace_npcm_pspi_ctrl_write(DEVICE(s)->canonical_path, addr, value);
+
+    switch (addr) {
+    case A_PSPI_DATA:
+        npcm_pspi_write_data(s, value);
+        break;
+
+    case A_PSPI_CTL1:
+        s->regs[R_PSPI_CTL1] = value;
+        break;
+
+    case A_PSPI_STAT:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: write to read-only register PSPI_STAT: 0x%08"
+                      PRIx64 "\n", DEVICE(s)->canonical_path, v);
+        break;
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: write to invalid offset 0x%" PRIx64 "\n",
+                      DEVICE(s)->canonical_path, addr);
+        return;
+    }
+    npcm_pspi_update_irq(s);
+}
+
+static const MemoryRegionOps npcm_pspi_ctrl_ops = {
+    .read = npcm_pspi_ctrl_read,
+    .write = npcm_pspi_ctrl_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 2,
+        .unaligned = false,
+    },
+    .impl = {
+        .min_access_size = 2,
+        .max_access_size = 2,
+        .unaligned = false,
+    },
+};
+
+static void npcm_pspi_enter_reset(Object *obj, ResetType type)
+{
+    NPCMPSPIState *s = NPCM_PSPI(obj);
+
+    trace_npcm_pspi_enter_reset(DEVICE(obj)->canonical_path, type);
+    memset(s->regs, 0, sizeof(s->regs));
+}
+
+static void npcm_pspi_realize(DeviceState *dev, Error **errp)
+{
+    NPCMPSPIState *s = NPCM_PSPI(dev);
+    SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+    Object *obj = OBJECT(dev);
+
+    s->spi = ssi_create_bus(dev, "pspi");
+    memory_region_init_io(&s->mmio, obj, &npcm_pspi_ctrl_ops, s,
+                          "mmio", 4 * KiB);
+    sysbus_init_mmio(sbd, &s->mmio);
+    sysbus_init_irq(sbd, &s->irq);
+}
+
+static const VMStateDescription vmstate_npcm_pspi = {
+    .name = "npcm-pspi",
+    .version_id = 0,
+    .minimum_version_id = 0,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT16_ARRAY(regs, NPCMPSPIState, NPCM_PSPI_NR_REGS),
+        VMSTATE_END_OF_LIST(),
+    },
+};
+
+static void npcm_pspi_class_init(ObjectClass *klass, void *data)
+{
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->desc = "NPCM Peripheral SPI Module";
+    dc->realize = npcm_pspi_realize;
+    dc->vmsd = &vmstate_npcm_pspi;
+    rc->phases.enter = npcm_pspi_enter_reset;
+}
+
+static const TypeInfo npcm_pspi_types[] = {
+    {
+        .name = TYPE_NPCM_PSPI,
+        .parent = TYPE_SYS_BUS_DEVICE,
+        .instance_size = sizeof(NPCMPSPIState),
+        .class_init = npcm_pspi_class_init,
+    },
+};
+DEFINE_TYPES(npcm_pspi_types);
diff --git a/hw/ssi/meson.build b/hw/ssi/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/meson.build
+++ b/hw/ssi/meson.build
@@ -XXX,XX +XXX,XX @@
 softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_smc.c'))
 softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('mss-spi.c'))
-softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c'))
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c', 'npcm_pspi.c'))
 softmmu_ss.add(when: 'CONFIG_PL022', if_true: files('pl022.c'))
 softmmu_ss.add(when: 'CONFIG_SIFIVE_SPI', if_true: files('sifive_spi.c'))
 softmmu_ss.add(when: 'CONFIG_SSI', if_true: files('ssi.c'))
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/trace-events
+++ b/hw/ssi/trace-events
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset:
 npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
 npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
 
+# npcm_pspi.c
+npcm_pspi_enter_reset(const char *id, int reset_type) "%s reset type: %d"
+npcm_pspi_ctrl_read(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
+npcm_pspi_ctrl_write(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
+
 # ibex_spi_host.c
 
 ibex_spi_host_reset(const char *msg) "%s"
--
2.34.1
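A side note on the data path in npcm_pspi_write_data() above: in 16-bit
mode (CTL1.MOD set) the shift register is emulated as two byte-wide SSI
transfers, high byte first, and the two reply bytes are reassembled in the
same order. A minimal standalone sketch of that decomposition (my
illustration, not part of the patch; ssi_xfer() is a hypothetical stand-in
for QEMU's ssi_transfer(), modelled here as a slave that inverts each byte):

#include <stdint.h>
#include <stdio.h>

/* Pretend peripheral: echoes each byte back inverted. */
static uint8_t ssi_xfer(uint8_t out)
{
    return out ^ 0xff;
}

static uint16_t pspi_transfer(uint16_t data, int mod_16bit)
{
    uint16_t value = 0;

    if (mod_16bit) {
        /* High byte goes out first and lands in the high result byte. */
        value = (uint16_t)(ssi_xfer((uint8_t)(data >> 8)) << 8);
    }
    /* Low byte always transfers, in both 8-bit and 16-bit modes. */
    value |= ssi_xfer((uint8_t)(data & 0xff));
    return value;
}

int main(void)
{
    printf("8-bit:  0x%04x\n", pspi_transfer(0x1234, 0)); /* 0x00cb */
    printf("16-bit: 0x%04x\n", pspi_transfer(0x1234, 1)); /* 0xedcb */
    return 0;
}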
Deleted patch
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

Addresses targeting the second translation table (TTB1) in the SMMU have
all upper bits set. Ensure the IOMMU region covers all 64 bits.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230214171921.1917916-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h | 2 --
 hw/arm/smmu-common.c         | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -XXX,XX +XXX,XX @@
 #define SMMU_PCI_DEVFN_MAX 256
 #define SMMU_PCI_DEVFN(sid) (sid & 0xFF)
 
-#define SMMU_MAX_VA_BITS 48
-
 /*
  * Page table walk error types
  */
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
 
     memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
                              s->mrtypename,
-                             OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
+                             OBJECT(s), name, UINT64_MAX);
     address_space_init(&sdev->as,
                        MEMORY_REGION(&sdev->iommu), name);
     trace_smmu_add_mr(name);
--
2.34.1
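The arithmetic behind the dropped patch above is easy to check: a TTB1
virtual address has its upper bits set, so it can never fall inside an
IOMMU region sized 1ULL << 48, which is why the region had to grow to
UINT64_MAX. A self-contained check (my illustration, not QEMU code; the
example IOVA values are made up):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t region_48 = 1ULL << 48;            /* old SMMU_MAX_VA_BITS size */
    uint64_t ttb0_iova = 0x0000000012340000ULL; /* upper bits clear: TTB0 */
    uint64_t ttb1_iova = 0xffffffffabcd0000ULL; /* upper bits set: TTB1 */

    printf("TTB0 iova inside 48-bit region: %s\n",
           ttb0_iova < region_48 ? "yes" : "no");  /* yes */
    printf("TTB1 iova inside 48-bit region: %s\n",
           ttb1_iova < region_48 ? "yes" : "no");  /* no: needs UINT64_MAX */
    return 0;
}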
From: Fabiano Rosas <farosas@suse.de>

Since commit cf7c6d1004 ("target/arm: Split out cpregs.h") we now have
a cpregs.h header which is more suitable for this code.

Code moved verbatim.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 98 +++++++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h    | 91 -----------------------------------------
 2 files changed, 98 insertions(+), 91 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_CP_SME = 1 << 19,
 };
 
+/*
+ * Interface for defining coprocessor registers.
+ * Registers are defined in tables of arm_cp_reginfo structs
+ * which are passed to define_arm_cp_regs().
+ */
+
+/*
+ * When looking up a coprocessor register we look for it
+ * via an integer which encodes all of:
+ *  coprocessor number
+ *  Crn, Crm, opc1, opc2 fields
+ *  32 or 64 bit register (ie is it accessed via MRC/MCR
+ *    or via MRRC/MCRR?)
+ *  non-secure/secure bank (AArch32 only)
+ * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
+ * (In this case crn and opc2 should be zero.)
+ * For AArch64, there is no 32/64 bit size distinction;
+ * instead all registers have a 2 bit op0, 3 bit op1 and op2,
+ * and 4 bit CRn and CRm. The encoding patterns are chosen
+ * to be easy to convert to and from the KVM encodings, and also
+ * so that the hashtable can contain both AArch32 and AArch64
+ * registers (to allow for interprocessing where we might run
+ * 32 bit code on a 64 bit core).
+ */
+/*
+ * This bit is private to our hashtable cpreg; in KVM register
+ * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
+ * in the upper bits of the 64 bit ID.
+ */
+#define CP_REG_AA64_SHIFT 28
+#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
+
+/*
+ * To enable banking of coprocessor registers depending on ns-bit we
+ * add a bit to distinguish between secure and non-secure cpregs in the
+ * hashtable.
+ */
+#define CP_REG_NS_SHIFT 29
+#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
+
+#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2)   \
+    ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) |   \
+     ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
+
+#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
+    (CP_REG_AA64_MASK |                                 \
+     ((cp) << CP_REG_ARM_COPROC_SHIFT) |                \
+     ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) |         \
+     ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) |         \
+     ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) |         \
+     ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) |         \
+     ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
+
+/*
+ * Convert a full 64 bit KVM register ID to the truncated 32 bit
+ * version used as a key for the coprocessor register hashtable
+ */
+static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
+{
+    uint32_t cpregid = kvmid;
+    if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
+        cpregid |= CP_REG_AA64_MASK;
+    } else {
+        if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
+            cpregid |= (1 << 15);
+        }
+
+        /*
+         * KVM is always non-secure so add the NS flag on AArch32 register
+         * entries.
+         */
+        cpregid |= 1 << CP_REG_NS_SHIFT;
+    }
+    return cpregid;
+}
+
+/*
+ * Convert a truncated 32 bit hashtable key into the full
+ * 64 bit KVM register ID.
+ */
+static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
+{
+    uint64_t kvmid;
+
+    if (cpregid & CP_REG_AA64_MASK) {
+        kvmid = cpregid & ~CP_REG_AA64_MASK;
+        kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
+    } else {
+        kvmid = cpregid & ~(1 << 15);
+        if (cpregid & (1 << 15)) {
+            kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
+        } else {
+            kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
+        }
+    }
+    return kvmid;
+}
+
 /*
  * Valid values for ARMCPRegInfo state field, indicating which of
  * the AArch32 and AArch64 execution states this register is visible in.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);
 
-/* Interface for defining coprocessor registers.
- * Registers are defined in tables of arm_cp_reginfo structs
- * which are passed to define_arm_cp_regs().
- */
-
-/* When looking up a coprocessor register we look for it
- * via an integer which encodes all of:
- *  coprocessor number
- *  Crn, Crm, opc1, opc2 fields
- *  32 or 64 bit register (ie is it accessed via MRC/MCR
- *    or via MRRC/MCRR?)
- *  non-secure/secure bank (AArch32 only)
- * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
- * (In this case crn and opc2 should be zero.)
- * For AArch64, there is no 32/64 bit size distinction;
- * instead all registers have a 2 bit op0, 3 bit op1 and op2,
- * and 4 bit CRn and CRm. The encoding patterns are chosen
- * to be easy to convert to and from the KVM encodings, and also
- * so that the hashtable can contain both AArch32 and AArch64
- * registers (to allow for interprocessing where we might run
- * 32 bit code on a 64 bit core).
- */
-/* This bit is private to our hashtable cpreg; in KVM register
- * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
- * in the upper bits of the 64 bit ID.
- */
-#define CP_REG_AA64_SHIFT 28
-#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
-
-/* To enable banking of coprocessor registers depending on ns-bit we
- * add a bit to distinguish between secure and non-secure cpregs in the
- * hashtable.
- */
-#define CP_REG_NS_SHIFT 29
-#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
-
-#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2)   \
-    ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) |   \
-     ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
-
-#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
-    (CP_REG_AA64_MASK |                                 \
-     ((cp) << CP_REG_ARM_COPROC_SHIFT) |                \
-     ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) |         \
-     ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) |         \
-     ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) |         \
-     ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) |         \
-     ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
-
-/* Convert a full 64 bit KVM register ID to the truncated 32 bit
- * version used as a key for the coprocessor register hashtable
- */
-static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
-{
-    uint32_t cpregid = kvmid;
-    if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
-        cpregid |= CP_REG_AA64_MASK;
-    } else {
-        if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
-            cpregid |= (1 << 15);
-        }
-
-        /* KVM is always non-secure so add the NS flag on AArch32 register
-         * entries.
-         */
-        cpregid |= 1 << CP_REG_NS_SHIFT;
-    }
-    return cpregid;
-}
-
-/* Convert a truncated 32 bit hashtable key into the full
- * 64 bit KVM register ID.
- */
-static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
-{
-    uint64_t kvmid;
-
-    if (cpregid & CP_REG_AA64_MASK) {
-        kvmid = cpregid & ~CP_REG_AA64_MASK;
-        kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
-    } else {
-        kvmid = cpregid & ~(1 << 15);
-        if (cpregid & (1 << 15)) {
-            kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
-        } else {
-            kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
-        }
-    }
-    return kvmid;
-}
-
 /* Return the highest implemented Exception Level */
 static inline int arm_highest_el(CPUARMState *env)
 {
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h        |   2 +
 target/arm/translate.h        |  50 +++-
 target/arm/cpu.c              |  23 ++----
 target/arm/translate-a64.c    |  64 +++++++++++++-------
 target/arm/translate-m-nocp.c |   2 +-
 target/arm/translate.c        | 108 +++++++++++++++++++++++-----------
 6 files changed, 178 insertions(+), 71 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_VARY
 # define TARGET_PAGE_BITS_MIN 10
 
+# define TARGET_TB_PCREL 1
+
 /*
  * Cache the attrs and shareability fields from the page table entry.
  *
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@
 
 
 /* internal defines */
+
+/*
+ * Save pc_save across a branch, so that we may restore the value from
+ * before the branch at the point the label is emitted.
+ */
+typedef struct DisasLabel {
+    TCGLabel *label;
+    target_ulong pc_save;
+} DisasLabel;
+
 typedef struct DisasContext {
     DisasContextBase base;
     const ARMISARegisters *isar;
 
     /* The address of the current instruction being translated. */
     target_ulong pc_curr;
+    /*
+     * For TARGET_TB_PCREL, the full value of cpu_pc is not known
+     * (although the page offset is known). For convenience, the
+     * translation loop uses the full virtual address that triggered
+     * the translation, from base.pc_start through pc_curr.
+     * For efficiency, we do not update cpu_pc for every instruction.
+     * Instead, pc_save has the value of pc_curr at the time of the
+     * last update to cpu_pc, which allows us to compute the addend
+     * needed to bring cpu_pc current: pc_curr - pc_save.
+     * If cpu_pc now contains the destination of an indirect branch,
+     * pc_save contains -1 to indicate that relative updates are no
+     * longer possible.
+     */
+    target_ulong pc_save;
     target_ulong page_start;
     uint32_t insn;
     /* Nonzero if this instruction has been conditionally skipped. */
     int condjmp;
     /* The label that will be jumped to when the instruction is skipped. */
-    TCGLabel *condlabel;
+    DisasLabel condlabel;
     /* Thumb-2 conditional execution bits. */
     int condexec_mask;
     int condexec_cond;
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
      * after decode (ie after any UNDEF checks)
      */
     bool eci_handled;
-    /* TCG op to rewind to if this turns out to be an invalid ECI state */
-    TCGOp *insn_eci_rewind;
     int sctlr_b;
     MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
  */
 uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);
 
+/*
+ * gen_disas_label:
+ * Create a label and cache a copy of pc_save.
+ */
+static inline DisasLabel gen_disas_label(DisasContext *s)
+{
+    return (DisasLabel){
+        .label = gen_new_label(),
+        .pc_save = s->pc_save,
+    };
+}
+
+/*
+ * set_disas_label:
+ * Emit a label and restore the cached copy of pc_save.
+ */
+static inline void set_disas_label(DisasContext *s, DisasLabel l)
+{
+    gen_set_label(l.label);
+    s->pc_save = l.pc_save;
+}
+
 /*
  * Helpers for implementing sets of trans_* functions.
  * Defer the implementation of NAME to FUNC, with optional extra arguments.
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static vaddr arm_cpu_get_pc(CPUState *cs)
 void arm_cpu_synchronize_from_tb(CPUState *cs,
                                  const TranslationBlock *tb)
 {
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-
-    /*
-     * It's OK to look at env for the current mode here, because it's
-     * never possible for an AArch64 TB to chain to an AArch32 TB.
-     */
-    if (is_a64(env)) {
-        env->pc = tb_pc(tb);
-    } else {
-        env->regs[15] = tb_pc(tb);
+    /* The program counter is always up to date with TARGET_TB_PCREL. */
+    if (!TARGET_TB_PCREL) {
+        CPUARMState *env = cs->env_ptr;
+        /*
+         * It's OK to look at env for the current mode here, because it's
+         * never possible for an AArch64 TB to chain to an AArch32 TB.
+         */
+        if (is_a64(env)) {
+            env->pc = tb_pc(tb);
+        } else {
+            env->regs[15] = tb_pc(tb);
+        }
     }
 }
 #endif /* CONFIG_TCG */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
 
 static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
 {
-    tcg_gen_movi_i64(dest, s->pc_curr + diff);
+    assert(s->pc_save != -1);
+    if (TARGET_TB_PCREL) {
+        tcg_gen_addi_i64(dest, cpu_pc, (s->pc_curr - s->pc_save) + diff);
+    } else {
+        tcg_gen_movi_i64(dest, s->pc_curr + diff);
+    }
 }
 
 void gen_a64_update_pc(DisasContext *s, target_long diff)
 {
     gen_pc_plus_diff(s, cpu_pc, diff);
+    s->pc_save = s->pc_curr + diff;
 }
 
 /*
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
      * then loading an address into the PC will clear out any tag.
      */
     gen_top_byte_ignore(s, cpu_pc, src, s->tbii);
+    s->pc_save = -1;
 }
 
 /*
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
 
 static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
 {
-    uint64_t dest = s->pc_curr + diff;
-
-    if (use_goto_tb(s, dest)) {
-        tcg_gen_goto_tb(n);
-        gen_a64_update_pc(s, diff);
+    if (use_goto_tb(s, s->pc_curr + diff)) {
+        /*
+         * For pcrel, the pc must always be up-to-date on entry to
+         * the linked TB, so that it can use simple additions for all
+         * further adjustments. For !pcrel, the linked TB is compiled
+         * to know its full virtual address, so we can delay the
+         * update to pc to the unlinked path. A long chain of links
+         * can thus avoid many updates to the PC.
+         */
+        if (TARGET_TB_PCREL) {
+            gen_a64_update_pc(s, diff);
+            tcg_gen_goto_tb(n);
+        } else {
+            tcg_gen_goto_tb(n);
+            gen_a64_update_pc(s, diff);
+        }
         tcg_gen_exit_tb(s->base.tb, n);
         s->base.is_jmp = DISAS_NORETURN;
     } else {
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
 {
     unsigned int sf, op, rt;
     int64_t diff;
-    TCGLabel *label_match;
+    DisasLabel match;
     TCGv_i64 tcg_cmp;
 
     sf = extract32(insn, 31, 1);
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
     diff = sextract32(insn, 5, 19) * 4;
 
     tcg_cmp = read_cpu_reg(s, rt, sf);
-    label_match = gen_new_label();
-
     reset_btype(s);
-    tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
-                        tcg_cmp, 0, label_match);
 
+    match = gen_disas_label(s);
+    tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
+                        tcg_cmp, 0, match.label);
     gen_goto_tb(s, 0, 4);
-    gen_set_label(label_match);
+    set_disas_label(s, match);
     gen_goto_tb(s, 1, diff);
 }
 
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
 {
     unsigned int bit_pos, op, rt;
     int64_t diff;
-    TCGLabel *label_match;
+    DisasLabel match;
     TCGv_i64 tcg_cmp;
 
     bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
 
     tcg_cmp = tcg_temp_new_i64();
     tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, rt), (1ULL << bit_pos));
-    label_match = gen_new_label();
 
     reset_btype(s);
+
+    match = gen_disas_label(s);
     tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
-                        tcg_cmp, 0, label_match);
+                        tcg_cmp, 0, match.label);
     tcg_temp_free_i64(tcg_cmp);
     gen_goto_tb(s, 0, 4);
-    gen_set_label(label_match);
+    set_disas_label(s, match);
     gen_goto_tb(s, 1, diff);
 }
 
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
     reset_btype(s);
     if (cond < 0x0e) {
         /* genuinely conditional branches */
-        TCGLabel *label_match = gen_new_label();
-        arm_gen_test_cc(cond, label_match);
+        DisasLabel match = gen_disas_label(s);
+        arm_gen_test_cc(cond, match.label);
         gen_goto_tb(s, 0, 4);
-        gen_set_label(label_match);
+        set_disas_label(s, match);
         gen_goto_tb(s, 1, diff);
     } else {
         /* 0xe and 0xf are both "always" conditions */
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 
     dc->isar = &arm_cpu->isar;
     dc->condjmp = 0;
-
+    dc->pc_save = dc->base.pc_first;
     dc->aarch64 = true;
     dc->thumb = false;
     dc->sctlr_b = 0;
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
 static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
 {
     DisasContext *dc = container_of(dcbase, DisasContext, base);
+    target_ulong pc_arg = dc->base.pc_next;
 
-    tcg_gen_insn_start(dc->base.pc_next, 0, 0);
+    if (TARGET_TB_PCREL) {
+        pc_arg &= ~TARGET_PAGE_MASK;
+    }
+    tcg_gen_insn_start(pc_arg, 0, 0);
     dc->insn_start = tcg_last_op();
 }
 
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-m-nocp.c
+++ b/target/arm/translate-m-nocp.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
     tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
     tcg_gen_or_i32(sfpa, sfpa, aspen);
     arm_gen_condlabel(s);
-    tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
+    tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel.label);
 
     if (s->fp_excp_el != 0) {
         gen_exception_insn_el(s, 0, EXCP_NOCP,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
 void arm_gen_condlabel(DisasContext *s)
 {
     if (!s->condjmp) {
-        s->condlabel = gen_new_label();
+        s->condlabel = gen_disas_label(s);
         s->condjmp = 1;
     }
 }
@@ -XXX,XX +XXX,XX @@ static target_long jmp_diff(DisasContext *s, target_long diff)
 
 static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
 {
-    tcg_gen_movi_i32(var, s->pc_curr + diff);
+    assert(s->pc_save != -1);
+    if (TARGET_TB_PCREL) {
+        tcg_gen_addi_i32(var, cpu_R[15], (s->pc_curr - s->pc_save) + diff);
+    } else {
+        tcg_gen_movi_i32(var, s->pc_curr + diff);
+    }
 }
 
 /* Set a variable to the value of a CPU register. */
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
          */
         tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
         s->base.is_jmp = DISAS_JUMP;
+        s->pc_save = -1;
     } else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
         /* For M-profile SP bits [1:0] are always zero */
         tcg_gen_andi_i32(var, var, ~3);
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)
 
 void gen_update_pc(DisasContext *s, target_long diff)
 {
-    tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
+    gen_pc_plus_diff(s, cpu_R[15], diff);
+    s->pc_save = s->pc_curr + diff;
 }
 
 /* Set PC and Thumb state from var. var is marked as dead. */
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
     tcg_gen_andi_i32(cpu_R[15], var, ~1);
     tcg_gen_andi_i32(var, var, 1);
     store_cpu_field(var, thumb);
+    s->pc_save = -1;
 }
 
 /*
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
 static inline void gen_bx_excret_final_code(DisasContext *s)
 {
     /* Generate the code to finish possible exception return and end the TB */
-    TCGLabel *excret_label = gen_new_label();
+    DisasLabel excret_label = gen_disas_label(s);
     uint32_t min_magic;
 
     if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret_final_code(DisasContext *s)
     }
 
     /* Is the new PC value in the magic range indicating exception return? */
-    tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label);
+    tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label.label);
     /* No: end the TB as we would for a DISAS_JMP */
     if (s->ss_active) {
         gen_singlestep_exception(s);
     } else {
         tcg_gen_exit_tb(NULL, 0);
     }
-    gen_set_label(excret_label);
+    set_disas_label(s, excret_label);
     /* Yes: this is an exception return.
      * At this point in runtime env->regs[15] and env->thumb will hold
      * the exception-return magic number, which do_v7m_exception_exit()
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
  */
 static void gen_goto_tb(DisasContext *s, int n, target_long diff)
 {
-    target_ulong dest = s->pc_curr + diff;
-
-    if (translator_use_goto_tb(&s->base, dest)) {
-        tcg_gen_goto_tb(n);
-        gen_update_pc(s, diff);
+    if (translator_use_goto_tb(&s->base, s->pc_curr + diff)) {
+        /*
+         * For pcrel, the pc must always be up-to-date on entry to
+         * the linked TB, so that it can use simple additions for all
+         * further adjustments. For !pcrel, the linked TB is compiled
+         * to know its full virtual address, so we can delay the
+         * update to pc to the unlinked path. A long chain of links
+         * can thus avoid many updates to the PC.
+         */
+        if (TARGET_TB_PCREL) {
+            gen_update_pc(s, diff);
+            tcg_gen_goto_tb(n);
+        } else {
+            tcg_gen_goto_tb(n);
+            gen_update_pc(s, diff);
+        }
         tcg_gen_exit_tb(s->base.tb, n);
     } else {
         gen_update_pc(s, diff);
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
 static void arm_skip_unless(DisasContext *s, uint32_t cond)
 {
     arm_gen_condlabel(s);
-    arm_gen_test_cc(cond ^ 1, s->condlabel);
+    arm_gen_test_cc(cond ^ 1, s->condlabel.label);
 }
 
 
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
 {
     /* M-profile low-overhead while-loop start */
     TCGv_i32 tmp;
-    TCGLabel *nextlabel;
+    DisasLabel nextlabel;
 
     if (!dc_isar_feature(aa32_lob, s)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
         }
     }
 
-    nextlabel = gen_new_label();
-    tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel);
+    nextlabel = gen_disas_label(s);
+    tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel.label);
     tmp = load_reg(s, a->rn);
     store_reg(s, 14, tmp);
     if (a->size != 4) {
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
     }
     gen_jmp_tb(s, curr_insn_len(s), 1);
 
-    gen_set_label(nextlabel);
+    set_disas_label(s, nextlabel);
     gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
      * any faster.
      */
     TCGv_i32 tmp;
-    TCGLabel *loopend;
+    DisasLabel loopend;
     bool fpu_active;
 
     if (!dc_isar_feature(aa32_lob, s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
 
     if (!a->tp && dc_isar_feature(aa32_mve, s) && fpu_active) {
         /* Need to do a runtime check for LTPSIZE != 4 */
-        TCGLabel *skipexc = gen_new_label();
+        DisasLabel skipexc = gen_disas_label(s);
         tmp = load_cpu_field(v7m.ltpsize);
-        tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
+        tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc.label);
         tcg_temp_free_i32(tmp);
         gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
-        gen_set_label(skipexc);
+        set_disas_label(s, skipexc);
     }
 
     if (a->f) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
      * loop decrement value is 1. For LETP we need to calculate the decrement
      * value from LTPSIZE.
      */
-    loopend = gen_new_label();
+    loopend = gen_disas_label(s);
     if (!a->tp) {
-        tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend);
+        tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend.label);
         tcg_gen_addi_i32(cpu_R[14], cpu_R[14], -1);
     } else {
         /*
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
         tcg_gen_shl_i32(decr, tcg_constant_i32(1), decr);
         tcg_temp_free_i32(ltpsize);
 
-        tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend);
+        tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend.label);
 
         tcg_gen_sub_i32(cpu_R[14], cpu_R[14], decr);
         tcg_temp_free_i32(decr);
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
     /* Jump back to the loop start */
     gen_jmp(s, jmp_diff(s, -a->imm));
 
-    gen_set_label(loopend);
+    set_disas_label(s, loopend);
     if (a->tp) {
         /* Exits from tail-pred loops must reset LTPSIZE to 4 */
         store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
 
     arm_gen_condlabel(s);
     tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
-                        tmp, 0, s->condlabel);
+                        tmp, 0, s->condlabel.label);
     tcg_temp_free_i32(tmp);
     gen_jmp(s, jmp_diff(s, a->imm));
     return true;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
 
     dc->isar = &cpu->isar;
     dc->condjmp = 0;
-
+    dc->pc_save = dc->base.pc_first;
     dc->aarch64 = false;
     dc->thumb = EX_TBFLAG_AM32(tb_flags, THUMB);
     dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
      */
     dc->eci = dc->condexec_mask = dc->condexec_cond = 0;
     dc->eci_handled = false;
-    dc->insn_eci_rewind = NULL;
     if (condexec & 0xf) {
         dc->condexec_mask = (condexec & 0xf) << 1;
         dc->condexec_cond = condexec >> 4;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
      * fields here.
      */
     uint32_t condexec_bits;
+    target_ulong pc_arg = dc->base.pc_next;
 
+    if (TARGET_TB_PCREL) {
+        pc_arg &= ~TARGET_PAGE_MASK;
+    }
     if (dc->eci) {
         condexec_bits = dc->eci << 4;
     } else {
         condexec_bits = (dc->condexec_cond << 4) | (dc->condexec_mask >> 1);
     }
-    tcg_gen_insn_start(dc->base.pc_next, condexec_bits, 0);
+    tcg_gen_insn_start(pc_arg, condexec_bits, 0);
     dc->insn_start = tcg_last_op();
 }
 
@@ -XXX,XX +XXX,XX @@ static bool arm_check_ss_active(DisasContext *dc)
 
 static void arm_post_translate_insn(DisasContext *dc)
 {
-    if (dc->condjmp && !dc->base.is_jmp) {
-        gen_set_label(dc->condlabel);
+    if (dc->condjmp && dc->base.is_jmp == DISAS_NEXT) {
+        if (dc->pc_save != dc->condlabel.pc_save) {
+            gen_update_pc(dc, dc->condlabel.pc_save - dc->pc_save);
+        }
+        gen_set_label(dc->condlabel.label);
         dc->condjmp = 0;
     }
     translator_loop_temp_check(&dc->base);
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
     uint32_t pc = dc->base.pc_next;
     uint32_t insn;
     bool is_16bit;
+    /* TCG op to rewind to if this turns out to be an invalid ECI state */
+    TCGOp *insn_eci_rewind = NULL;
+    target_ulong insn_eci_pc_save = -1;
 
     /* Misaligned thumb PC is architecturally impossible. */
     assert((dc->base.pc_next & 1) == 0);
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
          * insn" case. We will rewind to the marker (ie throwing away
          * all the generated code) and instead emit "take exception".
          */
-        dc->insn_eci_rewind = tcg_last_op();
+        insn_eci_rewind = tcg_last_op();
+        insn_eci_pc_save = dc->pc_save;
     }
 
     if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
          * Insn wasn't valid for ECI/ICI at all: undo what we
          * just generated and instead emit an exception
          */
-        tcg_remove_ops_after(dc->insn_eci_rewind);
+        tcg_remove_ops_after(insn_eci_rewind);
+        dc->pc_save = insn_eci_pc_save;
         dc->condjmp = 0;
         gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
     }
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
 
     if (dc->condjmp) {
         /* "Condition failed" instruction codepath for the branch/trap insn */
-        gen_set_label(dc->condlabel);
+        set_disas_label(dc, dc->condlabel);
         gen_set_condexec(dc);
         if (unlikely(dc->ss_active)) {
             gen_update_pc(dc, curr_insn_len(dc));
@@ -XXX,XX +XXX,XX @@ void restore_state_to_opc(CPUARMState *env, TranslationBlock *tb,
                           target_ulong *data)
 {
     if (is_a64(env)) {
-        env->pc = data[0];
+        if (TARGET_TB_PCREL) {
+            env->pc = (env->pc & TARGET_PAGE_MASK) | data[0];
+        } else {
+            env->pc = data[0];
+        }
         env->condexec_bits = 0;
         env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
     } else {
-        env->regs[15] = data[0];
+        if (TARGET_TB_PCREL) {
+            env->regs[15] = (env->regs[15] & TARGET_PAGE_MASK) | data[0];
+        } else {
+            env->regs[15] = data[0];
+        }
         env->condexec_bits = data[1];
         env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
     }
--
2.25.1
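The pc_save/DisasLabel bookkeeping in the patch above is easiest to see in
miniature: pc_save is snapshotted when a label is created and restored when
the label is emitted, so both code paths into the label agree on how far
the emitted PC lags behind pc_curr, and a relative add stays correct. A
self-contained toy model (mine; it only mimics the bookkeeping and prints
pseudo-ops, it is not TCG):

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t pc_curr;  /* address of the insn being translated */
    uint64_t pc_save;  /* value the emitted code last stored to the PC */
} Ctx;

typedef struct {
    int label;
    uint64_t pc_save;  /* snapshot, as in the patch's DisasLabel */
} Label;

static void update_pc(Ctx *s, int64_t diff)
{
    assert(s->pc_save != (uint64_t)-1);
    /* pcrel flavour: a relative add keeps the code page-independent */
    printf("  addi pc, pc, %" PRId64 "\n",
           (int64_t)(s->pc_curr - s->pc_save) + diff);
    s->pc_save = s->pc_curr + diff;
}

static Label new_label(Ctx *s, int id)
{
    return (Label){ .label = id, .pc_save = s->pc_save };
}

static void set_label(Ctx *s, Label l)
{
    printf("L%d:\n", l.label);
    s->pc_save = l.pc_save;  /* every path reaches here with this pc_save */
}

int main(void)
{
    Ctx s = { .pc_curr = 0x1008, .pc_save = 0x1000 };
    Label skip = new_label(&s, 1);

    printf("  brcond ... -> L1\n");
    update_pc(&s, 4);      /* taken path: pc := pc_curr + 4 */
    printf("  exit_tb\n");
    set_label(&s, skip);   /* restore pc_save for the fallthrough path */
    update_pc(&s, 8);
    printf("  exit_tb\n");
    return 0;
}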
From: Fabiano Rosas <farosas@suse.de>

If a test was tagged with the "accel" tag and the specified
accelerator is not present in the qemu binary, cancel the test.

We can now write tests without explicit calls to require_accelerator,
just the tag is enough.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/avocado/avocado_qemu/__init__.py | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tests/avocado/avocado_qemu/__init__.py b/tests/avocado/avocado_qemu/__init__.py
index XXXXXXX..XXXXXXX 100644
--- a/tests/avocado/avocado_qemu/__init__.py
+++ b/tests/avocado/avocado_qemu/__init__.py
@@ -XXX,XX +XXX,XX @@ def setUp(self):
 
         super().setUp('qemu-system-')
 
+        accel_required = self._get_unique_tag_val('accel')
+        if accel_required:
+            self.require_accelerator(accel_required)
+
         self.machine = self.params.get('machine',
                                        default=self._get_unique_tag_val('machine'))
--
2.34.1

Currently the microdrive code uses device_legacy_reset() to reset
itself, and has its reset method call reset on the IDE bus as the
last thing it does. Switch to using device_cold_reset().

The only concrete microdrive device is the TYPE_DSCM1XXXX; it is not
command-line pluggable, so it is used only by the old pxa2xx Arm
boards 'akita', 'borzoi', 'spitz', 'terrier' and 'tosa'.

You might think that this would result in the IDE bus being
reset automatically, but it does not, because the IDEBus type
does not set the BusClass::reset method. Instead the controller
must explicitly call ide_bus_reset(). We therefore leave that
call in md_reset().

Note also that because the PCMCIA card device is a direct subclass of
TYPE_DEVICE and we don't model the PCMCIA controller-to-card
interface as a qbus, PCMCIA cards are not on any qbus and so they
don't get reset when the system is reset. The reset only happens via
the dscm1xxxx_attach() and dscm1xxxx_detach() functions during
machine creation.

Because our aim here is merely to try to get rid of calls to the
device_legacy_reset() function, we leave these other dubious
reset-related issues alone. (They all stem from this code being
absolutely ancient.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221013174042.1602926-1-peter.maydell@linaro.org
---
 hw/ide/microdrive.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/ide/microdrive.c b/hw/ide/microdrive.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ide/microdrive.c
+++ b/hw/ide/microdrive.c
@@ -XXX,XX +XXX,XX @@ static void md_attr_write(PCMCIACardState *card, uint32_t at, uint8_t value)
     case 0x00:    /* Configuration Option Register */
         s->opt = value & 0xcf;
         if (value & OPT_SRESET) {
-            device_legacy_reset(DEVICE(s));
+            device_cold_reset(DEVICE(s));
         }
         md_interrupt_update(s);
         break;
@@ -XXX,XX +XXX,XX @@ static void md_common_write(PCMCIACardState *card, uint32_t at, uint16_t value)
     case 0xe:    /* Device Control */
         s->ctrl = value;
         if (value & CTRL_SRST) {
-            device_legacy_reset(DEVICE(s));
+            device_cold_reset(DEVICE(s));
         }
         md_interrupt_update(s);
         break;
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_attach(PCMCIACardState *card)
     md->attr_base = pcc->cis[0x74] | (pcc->cis[0x76] << 8);
     md->io_base = 0x0;
 
-    device_legacy_reset(DEVICE(md));
+    device_cold_reset(DEVICE(md));
     md_interrupt_update(md);
 
     return 0;
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_detach(PCMCIACardState *card)
 {
     MicroDriveState *md = MICRODRIVE(card);
 
-    device_legacy_reset(DEVICE(md));
+    device_cold_reset(DEVICE(md));
     return 0;
 }
--
2.25.1
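On the reset semantics the patch above relies on: a cold reset walks a
device's child buses, but propagation only does something if the bus has a
reset handler registered, and IDEBus does not -- hence md_reset() keeps
its explicit ide_bus_reset() call. A self-contained toy model (mine, not
QEMU's implementation; one child bus only, for brevity):

#include <stdio.h>

struct bus;

typedef struct dev {
    const char *name;
    void (*reset)(struct dev *d);
    struct bus *child_bus;
} Dev;

typedef struct bus {
    const char *name;
    void (*reset)(struct bus *b);  /* IDEBus leaves this NULL */
} Bus;

/* Old behaviour: run the device's own handler only. */
static void legacy_reset(Dev *d)
{
    d->reset(d);
}

/* Cold reset: also visit the child bus, if it has a handler. */
static void cold_reset(Dev *d)
{
    d->reset(d);
    if (d->child_bus && d->child_bus->reset) {
        d->child_bus->reset(d->child_bus);
    }
}

static void md_handler(Dev *d)
{
    printf("%s: registers cleared\n", d->name);
    printf("%s: ide_bus_reset() called explicitly\n", d->name);
}

int main(void)
{
    Bus ide = { .name = "ide.0", .reset = NULL };  /* no BusClass::reset */
    Dev md = { .name = "dscm1xxxx", .reset = md_handler, .child_bus = &ide };

    cold_reset(&md);    /* bus handler is NULL, so nothing extra happens */
    legacy_reset(&md);  /* old behaviour: device handler only */
    return 0;
}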