Hi; here's the latest round of arm patches. I have included also
my patchset for the RTC devices to avoid keeping time_t and
time_t diffs in 32-bit variables.

thanks
-- PMM

The following changes since commit 156618d9ea67f2f2e31d9dedd97f2dcccbe6808c:

  Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2023-08-30 09:20:27 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230831

for you to fetch changes up to e73b8bb8a3e9a162f70e9ffbf922d4fafc96bbfb:

  hw/arm: Set number of MPU regions correctly for an505, an521, an524 (2023-08-31 11:07:02 +0100)

----------------------------------------------------------------
target-arm queue:
 * Some of the preliminary patches for Cortex-A710 support
 * i.MX7 and i.MX6UL refactoring
 * Implement SRC device for i.MX7
 * Catch illegal-exception-return from EL3 with bad NSE/NS
 * Use 64-bit offsets for holding time_t differences in RTC devices
 * Model correct number of MPU regions for an505, an521, an524 boards

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: properly document FEAT_CRC32

Jean-Christophe Dubois (6):
      Remove i.MX7 IOMUX GPR device from i.MX6UL
      Refactor i.MX6UL processor code
      Add i.MX6UL missing devices.
      Refactor i.MX7 processor code
      Add i.MX7 missing TZ devices and memory regions
      Add i.MX7 SRC device implementation

Peter Maydell (8):
      target/arm: Catch illegal-exception-return from EL3 with bad NSE/NS
      hw/rtc/m48t59: Use 64-bit arithmetic in set_alarm()
      hw/rtc/twl92230: Use int64_t for sec_offset and alm_sec
      hw/rtc/aspeed_rtc: Use 64-bit offset for holding time_t difference
      rtc: Use time_t for passing and returning time offsets
      target/arm: Do all "ARM_FEATURE_X implies Y" checks in post_init
      hw/arm/armv7m: Add mpu-ns-regions and mpu-s-regions properties
      hw/arm: Set number of MPU regions correctly for an505, an521, an524

Richard Henderson (9):
      target/arm: Reduce dcz_blocksize to uint8_t
      target/arm: Allow cpu to configure GM blocksize
      target/arm: Support more GM blocksizes
      target/arm: When tag memory is not present, set MTE=1
      target/arm: Introduce make_ccsidr64
      target/arm: Apply access checks to neoverse-n1 special registers
      target/arm: Apply access checks to neoverse-v1 special registers
      target/arm: Suppress FEAT_TRBE (Trace Buffer Extension)
      target/arm: Implement FEAT_HPDS2 as a no-op

 docs/system/arm/emulation.rst | 2 +
 include/hw/arm/armsse.h | 5 +
 include/hw/arm/armv7m.h | 8 +
 include/hw/arm/fsl-imx6ul.h | 158 ++++++++++++++++---
 include/hw/arm/fsl-imx7.h | 338 ++++++++++++++++++++++++++++++-----------
 include/hw/misc/imx7_src.h | 66 ++++++++
 include/hw/rtc/aspeed_rtc.h | 2 +-
 include/sysemu/rtc.h | 4 +-
 target/arm/cpregs.h | 2 +
 target/arm/cpu.h | 5 +-
 target/arm/internals.h | 6 -
 target/arm/tcg/translate.h | 2 +
 hw/arm/armsse.c | 16 ++
 hw/arm/armv7m.c | 21 +++
 hw/arm/fsl-imx6ul.c | 174 +++++++++++++--------
 hw/arm/fsl-imx7.c | 201 +++++++++++++++++++-----
 hw/arm/mps2-tz.c | 29 ++++
 hw/misc/imx7_src.c | 276 +++++++++++++++++++++++++++++++++
 hw/rtc/aspeed_rtc.c | 5 +-
 hw/rtc/m48t59.c | 2 +-
 hw/rtc/twl92230.c | 4 +-
 softmmu/rtc.c | 4 +-
 target/arm/cpu.c | 207 ++++++++++++++-----------
 target/arm/helper.c | 15 +-
 target/arm/tcg/cpu32.c | 2 +-
 target/arm/tcg/cpu64.c | 102 +++++++++----
 target/arm/tcg/helper-a64.c | 9 ++
 target/arm/tcg/mte_helper.c | 90 ++++++++---
 target/arm/tcg/translate-a64.c | 5 +-
 hw/misc/meson.build | 1 +
 hw/misc/trace-events | 4 +
 31 files changed, 1393 insertions(+), 372 deletions(-)
 create mode 100644 include/hw/misc/imx7_src.h
 create mode 100644 hw/misc/imx7_src.c
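One item in this queue is the RTC conversion to 64-bit time_t offsets. As a rough, standalone illustration of the failure mode being addressed (hypothetical example code, not taken from the series; the variable names are invented and a 64-bit time_t is assumed):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Offset between guest RTC time and host time, as an RTC device might keep it. */
    static int32_t offset_32;   /* old style: truncates once the difference exceeds ~68 years */
    static int64_t offset_64;   /* what the series moves the RTC devices towards */

    int main(void)
    {
        time_t guest = 4102444800LL;   /* guest sets its RTC to 2100-01-01 00:00:00 UTC */
        time_t host  = 1693440000LL;   /* host clock: 2023-08-31 00:00:00 UTC */

        offset_32 = (int32_t)(guest - host);   /* difference is > INT32_MAX, so it wraps */
        offset_64 = guest - host;              /* preserved */

        printf("32-bit offset: %d\n", (int)offset_32);
        printf("64-bit offset: %lld\n", (long long)offset_64);
        return 0;
    }

With a 32-bit offset the stored value wraps and the emulated RTC silently drifts; widening the stored offsets (and the helper APIs that pass them around) to time_t/int64_t avoids this.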
From: Richard Henderson <richard.henderson@linaro.org>

This value is only 4 bits wide.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
     bool prop_lpa2;

     /* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
-    uint32_t dcz_blocksize;
+    uint8_t dcz_blocksize;
+
     uint64_t rvbar_prop; /* Property/input signals. */

     /* Configurable aspects of GIC cpu interface (which is part of the CPU) */
--
2.34.1

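Both DCZID_EL0.BS and GMID_EL1.BS use the encoding mentioned in the comment above: log2 of the block size in words. A quick standalone illustration (plain C sketch; the example values are the ones used later in this series, dcz_blocksize 7 = 512 bytes and gm_blocksize 6 = 256 bytes):

    #include <stdint.h>
    #include <stdio.h>

    /* Block size in bytes for a log2(words) field such as DCZID_EL0.BS or
     * GMID_EL1.BS: a word is 4 bytes, so bytes = 4 << field. */
    static unsigned blocksize_bytes(uint8_t log2_words)
    {
        return 4u << log2_words;
    }

    int main(void)
    {
        printf("dcz_blocksize 7 -> %u bytes\n", blocksize_bytes(7)); /* 512, as set for -cpu max */
        printf("gm_blocksize  6 -> %u bytes\n", blocksize_bytes(6)); /* 256, as set for -cpu max */
        return 0;
    }

Since the architectural field is only 4 bits, uint8_t is already more than wide enough, which is all this patch changes.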
From: Richard Henderson <richard.henderson@linaro.org>

Previously we hard-coded the blocksize with GMID_EL1_BS.
But the value we choose for -cpu max does not match the
value that cortex-a710 uses.

Mirror the way we handle dcz_blocksize.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230811214031.171020-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 2 ++
 target/arm/internals.h | 6 -----
 target/arm/tcg/translate.h | 2 ++
 target/arm/helper.c | 11 +++++---
 target/arm/tcg/cpu64.c | 1 +
 target/arm/tcg/mte_helper.c | 46 ++++++++++++++++++++++------------
 target/arm/tcg/translate-a64.c | 5 ++--
 7 files changed, 45 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/cpu.h
26
+++ b/target/arm/cpu.h
27
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
28
29
/* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
30
uint8_t dcz_blocksize;
31
+ /* GM blocksize, in log_2(words), ie low 4 bits of GMID_EL0 */
32
+ uint8_t gm_blocksize;
33
34
uint64_t rvbar_prop; /* Property/input signals. */
35
36
diff --git a/target/arm/internals.h b/target/arm/internals.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/internals.h
39
+++ b/target/arm/internals.h
40
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs);
41
42
#endif /* !CONFIG_USER_ONLY */
43
44
-/*
45
- * The log2 of the words in the tag block, for GMID_EL1.BS.
46
- * The is the maximum, 256 bytes, which manipulates 64-bits of tags.
47
- */
48
-#define GMID_EL1_BS 6
49
-
50
/*
51
* SVE predicates are 1/8 the size of SVE vectors, and cannot use
52
* the same simd_desc() encoding due to restrictions on size.
53
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/tcg/translate.h
56
+++ b/target/arm/tcg/translate.h
57
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
58
int8_t btype;
59
/* A copy of cpu->dcz_blocksize. */
60
uint8_t dcz_blocksize;
61
+ /* A copy of cpu->gm_blocksize. */
62
+ uint8_t gm_blocksize;
63
/* True if this page is guarded. */
64
bool guarded_page;
65
/* Bottom two bits of XScale c15_cpar coprocessor access control reg */
66
diff --git a/target/arm/helper.c b/target/arm/helper.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/helper.c
69
+++ b/target/arm/helper.c
70
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
71
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 6,
72
.access = PL1_RW, .accessfn = access_mte,
73
.fieldoffset = offsetof(CPUARMState, cp15.gcr_el1) },
74
- { .name = "GMID_EL1", .state = ARM_CP_STATE_AA64,
75
- .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 4,
76
- .access = PL1_R, .accessfn = access_aa64_tid5,
77
- .type = ARM_CP_CONST, .resetvalue = GMID_EL1_BS },
78
{ .name = "TCO", .state = ARM_CP_STATE_AA64,
79
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
80
.type = ARM_CP_NO_RAW,
81
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
82
* then define only a RAZ/WI version of PSTATE.TCO.
83
*/
84
if (cpu_isar_feature(aa64_mte, cpu)) {
85
+ ARMCPRegInfo gmid_reginfo = {
86
+ .name = "GMID_EL1", .state = ARM_CP_STATE_AA64,
87
+ .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 4,
88
+ .access = PL1_R, .accessfn = access_aa64_tid5,
89
+ .type = ARM_CP_CONST, .resetvalue = cpu->gm_blocksize,
90
+ };
91
+ define_one_arm_cp_reg(cpu, &gmid_reginfo);
92
define_arm_cp_regs(cpu, mte_reginfo);
93
define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
94
} else if (cpu_isar_feature(aa64_mte_insn_reg, cpu)) {
95
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/target/arm/tcg/cpu64.c
98
+++ b/target/arm/tcg/cpu64.c
99
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
100
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
101
cpu->dcz_blocksize = 7; /* 512 bytes */
102
#endif
103
+ cpu->gm_blocksize = 6; /* 256 bytes */
104
105
cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
106
cpu->sme_vq.supported = SVE_VQ_POW2_MAP;
107
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/tcg/mte_helper.c
110
+++ b/target/arm/tcg/mte_helper.c
111
@@ -XXX,XX +XXX,XX @@ void HELPER(st2g_stub)(CPUARMState *env, uint64_t ptr)
19
}
112
}
20
}
113
}
21
114
22
+static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
115
-#define LDGM_STGM_SIZE (4 << GMID_EL1_BS)
23
+{
116
-
24
+ long off = neon_element_offset(reg, ele, memop);
117
uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
25
+
118
{
26
+ switch (memop) {
119
int mmu_idx = cpu_mmu_index(env, false);
27
+ case MO_Q:
120
uintptr_t ra = GETPC();
28
+ tcg_gen_ld_i64(dest, cpu_env, off);
121
+ int gm_bs = env_archcpu(env)->gm_blocksize;
122
+ int gm_bs_bytes = 4 << gm_bs;
123
void *tag_mem;
124
125
- ptr = QEMU_ALIGN_DOWN(ptr, LDGM_STGM_SIZE);
126
+ ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
127
128
/* Trap if accessing an invalid page. */
129
tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_LOAD,
130
- LDGM_STGM_SIZE, MMU_DATA_LOAD,
131
- LDGM_STGM_SIZE / (2 * TAG_GRANULE), ra);
132
+ gm_bs_bytes, MMU_DATA_LOAD,
133
+ gm_bs_bytes / (2 * TAG_GRANULE), ra);
134
135
/* The tag is squashed to zero if the page does not support tags. */
136
if (!tag_mem) {
137
return 0;
138
}
139
140
- QEMU_BUILD_BUG_ON(GMID_EL1_BS != 6);
141
/*
142
- * We are loading 64-bits worth of tags. The ordering of elements
143
- * within the word corresponds to a 64-bit little-endian operation.
144
+ * The ordering of elements within the word corresponds to
145
+ * a little-endian operation.
146
*/
147
- return ldq_le_p(tag_mem);
148
+ switch (gm_bs) {
149
+ case 6:
150
+ /* 256 bytes -> 16 tags -> 64 result bits */
151
+ return ldq_le_p(tag_mem);
152
+ default:
153
+ /* cpu configured with unsupported gm blocksize. */
154
+ g_assert_not_reached();
155
+ }
156
}
157
158
void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
159
{
160
int mmu_idx = cpu_mmu_index(env, false);
161
uintptr_t ra = GETPC();
162
+ int gm_bs = env_archcpu(env)->gm_blocksize;
163
+ int gm_bs_bytes = 4 << gm_bs;
164
void *tag_mem;
165
166
- ptr = QEMU_ALIGN_DOWN(ptr, LDGM_STGM_SIZE);
167
+ ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
168
169
/* Trap if accessing an invalid page. */
170
tag_mem = allocation_tag_mem(env, mmu_idx, ptr, MMU_DATA_STORE,
171
- LDGM_STGM_SIZE, MMU_DATA_LOAD,
172
- LDGM_STGM_SIZE / (2 * TAG_GRANULE), ra);
173
+ gm_bs_bytes, MMU_DATA_LOAD,
174
+ gm_bs_bytes / (2 * TAG_GRANULE), ra);
175
176
/*
177
* Tag store only happens if the page support tags,
178
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
179
return;
180
}
181
182
- QEMU_BUILD_BUG_ON(GMID_EL1_BS != 6);
183
/*
184
- * We are storing 64-bits worth of tags. The ordering of elements
185
- * within the word corresponds to a 64-bit little-endian operation.
186
+ * The ordering of elements within the word corresponds to
187
+ * a little-endian operation.
188
*/
189
- stq_le_p(tag_mem, val);
190
+ switch (gm_bs) {
191
+ case 6:
192
+ stq_le_p(tag_mem, val);
29
+ break;
193
+ break;
30
+ default:
194
+ default:
195
+ /* cpu configured with unsupported gm blocksize. */
31
+ g_assert_not_reached();
196
+ g_assert_not_reached();
32
+ }
197
+ }
33
+}
34
+
35
static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
36
{
37
long off = neon_element_offset(reg, ele, memop);
38
@@ -XXX,XX +XXX,XX @@ static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
39
}
40
}
198
}
41
199
42
+static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop)
200
void HELPER(stzgm_tags)(CPUARMState *env, uint64_t ptr, uint64_t val)
43
+{
201
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
44
+ long off = neon_element_offset(reg, ele, memop);
202
index XXXXXXX..XXXXXXX 100644
45
+
203
--- a/target/arm/tcg/translate-a64.c
46
+ switch (memop) {
204
+++ b/target/arm/tcg/translate-a64.c
47
+ case MO_64:
205
@@ -XXX,XX +XXX,XX @@ static bool trans_STGM(DisasContext *s, arg_ldst_tag *a)
48
+ tcg_gen_st_i64(src, cpu_env, off);
206
gen_helper_stgm(cpu_env, addr, tcg_rt);
49
+ break;
50
+ default:
51
+ g_assert_not_reached();
52
+ }
53
+}
54
+
55
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
56
{
57
TCGv_ptr ret = tcg_temp_new_ptr();
58
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/translate-neon.c.inc
61
+++ b/target/arm/translate-neon.c.inc
62
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a,
63
for (pass = 0; pass < a->q + 1; pass++) {
64
TCGv_i64 tmp = tcg_temp_new_i64();
65
66
- neon_load_reg64(tmp, a->vm + pass);
67
+ read_neon_element64(tmp, a->vm, pass, MO_64);
68
fn(tmp, cpu_env, tmp, constimm);
69
- neon_store_reg64(tmp, a->vd + pass);
70
+ write_neon_element64(tmp, a->vd, pass, MO_64);
71
tcg_temp_free_i64(tmp);
72
}
73
tcg_temp_free_i64(constimm);
74
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
75
rd = tcg_temp_new_i32();
76
77
/* Load both inputs first to avoid potential overwrite if rm == rd */
78
- neon_load_reg64(rm1, a->vm);
79
- neon_load_reg64(rm2, a->vm + 1);
80
+ read_neon_element64(rm1, a->vm, 0, MO_64);
81
+ read_neon_element64(rm2, a->vm, 1, MO_64);
82
83
shiftfn(rm1, rm1, constimm);
84
narrowfn(rd, cpu_env, rm1);
85
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
86
tcg_gen_shli_i64(tmp, tmp, a->shift);
87
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
88
}
89
- neon_store_reg64(tmp, a->vd);
90
+ write_neon_element64(tmp, a->vd, 0, MO_64);
91
92
widenfn(tmp, rm1);
93
tcg_temp_free_i32(rm1);
94
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
95
tcg_gen_shli_i64(tmp, tmp, a->shift);
96
tcg_gen_andi_i64(tmp, tmp, ~widen_mask);
97
}
98
- neon_store_reg64(tmp, a->vd + 1);
99
+ write_neon_element64(tmp, a->vd, 1, MO_64);
100
tcg_temp_free_i64(tmp);
101
return true;
102
}
103
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
104
rm_64 = tcg_temp_new_i64();
105
106
if (src1_wide) {
107
- neon_load_reg64(rn0_64, a->vn);
108
+ read_neon_element64(rn0_64, a->vn, 0, MO_64);
109
} else {
207
} else {
110
TCGv_i32 tmp = tcg_temp_new_i32();
208
MMUAccessType acc = MMU_DATA_STORE;
111
read_neon_element32(tmp, a->vn, 0, MO_32);
209
- int size = 4 << GMID_EL1_BS;
112
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
210
+ int size = 4 << s->gm_blocksize;
113
* avoid incorrect results if a narrow input overlaps with the result.
211
114
*/
212
clean_addr = clean_data_tbi(s, addr);
115
if (src1_wide) {
213
tcg_gen_andi_i64(clean_addr, clean_addr, -size);
116
- neon_load_reg64(rn1_64, a->vn + 1);
214
@@ -XXX,XX +XXX,XX @@ static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a)
117
+ read_neon_element64(rn1_64, a->vn, 1, MO_64);
215
gen_helper_ldgm(tcg_rt, cpu_env, addr);
118
} else {
216
} else {
119
TCGv_i32 tmp = tcg_temp_new_i32();
217
MMUAccessType acc = MMU_DATA_LOAD;
120
read_neon_element32(tmp, a->vn, 1, MO_32);
218
- int size = 4 << GMID_EL1_BS;
121
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
219
+ int size = 4 << s->gm_blocksize;
122
rm = tcg_temp_new_i32();
220
123
read_neon_element32(rm, a->vm, 1, MO_32);
221
clean_addr = clean_data_tbi(s, addr);
124
222
tcg_gen_andi_i64(clean_addr, clean_addr, -size);
125
- neon_store_reg64(rn0_64, a->vd);
223
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
126
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
224
dc->cp_regs = arm_cpu->cp_regs;
127
225
dc->features = env->features;
128
widenfn(rm_64, rm);
226
dc->dcz_blocksize = arm_cpu->dcz_blocksize;
129
tcg_temp_free_i32(rm);
227
+ dc->gm_blocksize = arm_cpu->gm_blocksize;
130
opfn(rn1_64, rn1_64, rm_64);
228
131
- neon_store_reg64(rn1_64, a->vd + 1);
229
#ifdef CONFIG_USER_ONLY
132
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
230
/* In sve_probe_page, we assume TBI is enabled. */
133
134
tcg_temp_free_i64(rn0_64);
135
tcg_temp_free_i64(rn1_64);
136
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
137
rd0 = tcg_temp_new_i32();
138
rd1 = tcg_temp_new_i32();
139
140
- neon_load_reg64(rn_64, a->vn);
141
- neon_load_reg64(rm_64, a->vm);
142
+ read_neon_element64(rn_64, a->vn, 0, MO_64);
143
+ read_neon_element64(rm_64, a->vm, 0, MO_64);
144
145
opfn(rn_64, rn_64, rm_64);
146
147
narrowfn(rd0, rn_64);
148
149
- neon_load_reg64(rn_64, a->vn + 1);
150
- neon_load_reg64(rm_64, a->vm + 1);
151
+ read_neon_element64(rn_64, a->vn, 1, MO_64);
152
+ read_neon_element64(rm_64, a->vm, 1, MO_64);
153
154
opfn(rn_64, rn_64, rm_64);
155
156
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
157
/* Don't store results until after all loads: they might overlap */
158
if (accfn) {
159
tmp = tcg_temp_new_i64();
160
- neon_load_reg64(tmp, a->vd);
161
+ read_neon_element64(tmp, a->vd, 0, MO_64);
162
accfn(tmp, tmp, rd0);
163
- neon_store_reg64(tmp, a->vd);
164
- neon_load_reg64(tmp, a->vd + 1);
165
+ write_neon_element64(tmp, a->vd, 0, MO_64);
166
+ read_neon_element64(tmp, a->vd, 1, MO_64);
167
accfn(tmp, tmp, rd1);
168
- neon_store_reg64(tmp, a->vd + 1);
169
+ write_neon_element64(tmp, a->vd, 1, MO_64);
170
tcg_temp_free_i64(tmp);
171
} else {
172
- neon_store_reg64(rd0, a->vd);
173
- neon_store_reg64(rd1, a->vd + 1);
174
+ write_neon_element64(rd0, a->vd, 0, MO_64);
175
+ write_neon_element64(rd1, a->vd, 1, MO_64);
176
}
177
178
tcg_temp_free_i64(rd0);
179
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
180
181
if (accfn) {
182
TCGv_i64 t64 = tcg_temp_new_i64();
183
- neon_load_reg64(t64, a->vd);
184
+ read_neon_element64(t64, a->vd, 0, MO_64);
185
accfn(t64, t64, rn0_64);
186
- neon_store_reg64(t64, a->vd);
187
- neon_load_reg64(t64, a->vd + 1);
188
+ write_neon_element64(t64, a->vd, 0, MO_64);
189
+ read_neon_element64(t64, a->vd, 1, MO_64);
190
accfn(t64, t64, rn1_64);
191
- neon_store_reg64(t64, a->vd + 1);
192
+ write_neon_element64(t64, a->vd, 1, MO_64);
193
tcg_temp_free_i64(t64);
194
} else {
195
- neon_store_reg64(rn0_64, a->vd);
196
- neon_store_reg64(rn1_64, a->vd + 1);
197
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
198
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
199
}
200
tcg_temp_free_i64(rn0_64);
201
tcg_temp_free_i64(rn1_64);
202
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
203
right = tcg_temp_new_i64();
204
dest = tcg_temp_new_i64();
205
206
- neon_load_reg64(right, a->vn);
207
- neon_load_reg64(left, a->vm);
208
+ read_neon_element64(right, a->vn, 0, MO_64);
209
+ read_neon_element64(left, a->vm, 0, MO_64);
210
tcg_gen_extract2_i64(dest, right, left, a->imm * 8);
211
- neon_store_reg64(dest, a->vd);
212
+ write_neon_element64(dest, a->vd, 0, MO_64);
213
214
tcg_temp_free_i64(left);
215
tcg_temp_free_i64(right);
216
@@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a)
217
destright = tcg_temp_new_i64();
218
219
if (a->imm < 8) {
220
- neon_load_reg64(right, a->vn);
221
- neon_load_reg64(middle, a->vn + 1);
222
+ read_neon_element64(right, a->vn, 0, MO_64);
223
+ read_neon_element64(middle, a->vn, 1, MO_64);
224
tcg_gen_extract2_i64(destright, right, middle, a->imm * 8);
225
- neon_load_reg64(left, a->vm);
226
+ read_neon_element64(left, a->vm, 0, MO_64);
227
tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8);
228
} else {
229
- neon_load_reg64(right, a->vn + 1);
230
- neon_load_reg64(middle, a->vm);
231
+ read_neon_element64(right, a->vn, 1, MO_64);
232
+ read_neon_element64(middle, a->vm, 0, MO_64);
233
tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8);
234
- neon_load_reg64(left, a->vm + 1);
235
+ read_neon_element64(left, a->vm, 1, MO_64);
236
tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8);
237
}
238
239
- neon_store_reg64(destright, a->vd);
240
- neon_store_reg64(destleft, a->vd + 1);
241
+ write_neon_element64(destright, a->vd, 0, MO_64);
242
+ write_neon_element64(destleft, a->vd, 1, MO_64);
243
244
tcg_temp_free_i64(destright);
245
tcg_temp_free_i64(destleft);
246
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
247
248
if (accfn) {
249
TCGv_i64 tmp64 = tcg_temp_new_i64();
250
- neon_load_reg64(tmp64, a->vd + pass);
251
+ read_neon_element64(tmp64, a->vd, pass, MO_64);
252
accfn(rd_64, tmp64, rd_64);
253
tcg_temp_free_i64(tmp64);
254
}
255
- neon_store_reg64(rd_64, a->vd + pass);
256
+ write_neon_element64(rd_64, a->vd, pass, MO_64);
257
tcg_temp_free_i64(rd_64);
258
}
259
return true;
260
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
261
rd0 = tcg_temp_new_i32();
262
rd1 = tcg_temp_new_i32();
263
264
- neon_load_reg64(rm, a->vm);
265
+ read_neon_element64(rm, a->vm, 0, MO_64);
266
narrowfn(rd0, cpu_env, rm);
267
- neon_load_reg64(rm, a->vm + 1);
268
+ read_neon_element64(rm, a->vm, 1, MO_64);
269
narrowfn(rd1, cpu_env, rm);
270
write_neon_element32(rd0, a->vd, 0, MO_32);
271
write_neon_element32(rd1, a->vd, 1, MO_32);
272
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
273
274
widenfn(rd, rm0);
275
tcg_gen_shli_i64(rd, rd, 8 << a->size);
276
- neon_store_reg64(rd, a->vd);
277
+ write_neon_element64(rd, a->vd, 0, MO_64);
278
widenfn(rd, rm1);
279
tcg_gen_shli_i64(rd, rd, 8 << a->size);
280
- neon_store_reg64(rd, a->vd + 1);
281
+ write_neon_element64(rd, a->vd, 1, MO_64);
282
283
tcg_temp_free_i64(rd);
284
tcg_temp_free_i32(rm0);
285
@@ -XXX,XX +XXX,XX @@ static bool trans_VSWP(DisasContext *s, arg_2misc *a)
286
rm = tcg_temp_new_i64();
287
rd = tcg_temp_new_i64();
288
for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
289
- neon_load_reg64(rm, a->vm + pass);
290
- neon_load_reg64(rd, a->vd + pass);
291
- neon_store_reg64(rm, a->vd + pass);
292
- neon_store_reg64(rd, a->vm + pass);
293
+ read_neon_element64(rm, a->vm, pass, MO_64);
294
+ read_neon_element64(rd, a->vd, pass, MO_64);
295
+ write_neon_element64(rm, a->vd, pass, MO_64);
296
+ write_neon_element64(rd, a->vm, pass, MO_64);
297
}
298
tcg_temp_free_i64(rm);
299
tcg_temp_free_i64(rd);
--
2.34.1

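For reference, the blocksize arithmetic that replaces the old fixed LDGM_STGM_SIZE in the patch above is easy to check by hand. A hypothetical standalone sketch (not QEMU code; it assumes TAG_GRANULE is 16 bytes, i.e. one 4-bit allocation tag per 16 bytes, matching LOG2_TAG_GRANULE = 4):

    #include <stdio.h>

    #define LOG2_TAG_GRANULE 4                        /* assumed: one tag per 16 bytes */
    #define TAG_GRANULE      (1 << LOG2_TAG_GRANULE)

    int main(void)
    {
        /* gm_bs is GMID_EL1.BS: log2 of the LDGM/STGM block size in words. */
        for (int gm_bs = 3; gm_bs <= 6; gm_bs++) {
            int gm_bs_bytes = 4 << gm_bs;                    /* bytes covered per LDGM/STGM */
            int tags = gm_bs_bytes / TAG_GRANULE;            /* 4-bit tags transferred */
            int tag_bytes = gm_bs_bytes / (2 * TAG_GRANULE); /* bytes of tag memory touched */
            printf("BS=%d: %3d bytes -> %2d tags -> %2d result bits (%d tag byte%s)\n",
                   gm_bs, gm_bs_bytes, tags, tags * 4, tag_bytes, tag_bytes == 1 ? "" : "s");
        }
        return 0;
    }

The BS=6 row reproduces the "256 bytes -> 16 tags -> 64 result bits" case that the old code hard-coded; the next patch fills in the smaller sizes.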
From: Richard Henderson <richard.henderson@linaro.org>

Support all of the easy GM block sizes.
Use direct memory operations, since the pointers are aligned.

While BS=2 (16 bytes, 1 tag) is a legal setting, that requires
an atomic store of one nibble. This is not difficult, but there
is also no point in supporting it until required.

Note that cortex-a710 sets GM blocksize to match its cacheline
size of 64 bytes. I expect many implementations will also
match the cacheline, which makes 16 bytes very unlikely.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 18 +++++++++---
 target/arm/tcg/mte_helper.c | 56 +++++++++++++++++++++++++++++++------
 2 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
25
--- a/target/arm/cpu.c
18
+++ b/target/arm/translate.c
26
+++ b/target/arm/cpu.c
19
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg64(TCGv_i64 var, int reg)
27
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
20
tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
28
ID_PFR1, VIRTUALIZATION, 0);
21
}
29
}
22
30
23
-static inline void neon_load_reg32(TCGv_i32 var, int reg)
31
+ if (cpu_isar_feature(aa64_mte, cpu)) {
24
+static inline void vfp_load_reg32(TCGv_i32 var, int reg)
32
+ /*
25
{
33
+ * The architectural range of GM blocksize is 2-6, however qemu
26
tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
34
+ * doesn't support blocksize of 2 (see HELPER(ldgm)).
27
}
35
+ */
28
36
+ if (tcg_enabled()) {
29
-static inline void neon_store_reg32(TCGv_i32 var, int reg)
37
+ assert(cpu->gm_blocksize >= 3 && cpu->gm_blocksize <= 6);
30
+static inline void vfp_store_reg32(TCGv_i32 var, int reg)
38
+ }
31
{
39
+
32
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
40
#ifndef CONFIG_USER_ONLY
33
}
41
- if (cpu->tag_memory == NULL && cpu_isar_feature(aa64_mte, cpu)) {
34
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
42
/*
43
* Disable the MTE feature bits if we do not have tag-memory
44
* provided by the machine.
45
*/
46
- cpu->isar.id_aa64pfr1 =
47
- FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
48
- }
49
+ if (cpu->tag_memory == NULL) {
50
+ cpu->isar.id_aa64pfr1 =
51
+ FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
52
+ }
53
#endif
54
+ }
55
56
if (tcg_enabled()) {
57
/*
58
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
35
index XXXXXXX..XXXXXXX 100644
59
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-vfp.c.inc
60
--- a/target/arm/tcg/mte_helper.c
37
+++ b/target/arm/translate-vfp.c.inc
61
+++ b/target/arm/tcg/mte_helper.c
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
62
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
39
frn = tcg_temp_new_i32();
63
int gm_bs = env_archcpu(env)->gm_blocksize;
40
frm = tcg_temp_new_i32();
64
int gm_bs_bytes = 4 << gm_bs;
41
dest = tcg_temp_new_i32();
65
void *tag_mem;
42
- neon_load_reg32(frn, rn);
66
+ uint64_t ret;
43
- neon_load_reg32(frm, rm);
67
+ int shift;
44
+ vfp_load_reg32(frn, rn);
68
45
+ vfp_load_reg32(frm, rm);
69
ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
46
switch (a->cc) {
70
47
case 0: /* eq: Z */
71
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
48
tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero,
72
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
73
/*
50
if (sz == 1) {
74
* The ordering of elements within the word corresponds to
51
tcg_gen_andi_i32(dest, dest, 0xffff);
75
- * a little-endian operation.
52
}
76
+ * a little-endian operation. Computation of shift comes from
53
- neon_store_reg32(dest, rd);
77
+ *
54
+ vfp_store_reg32(dest, rd);
78
+ * index = address<LOG2_TAG_GRANULE+3:LOG2_TAG_GRANULE>
55
tcg_temp_free_i32(frn);
79
+ * data<index*4+3:index*4> = tag
56
tcg_temp_free_i32(frm);
80
+ *
57
tcg_temp_free_i32(dest);
81
+ * Because of the alignment of ptr above, BS=6 has shift=0.
58
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
82
+ * All memory operations are aligned. Defer support for BS=2,
59
TCGv_i32 tcg_res;
83
+ * requiring insertion or extraction of a nibble, until we
60
tcg_op = tcg_temp_new_i32();
84
+ * support a cpu that requires it.
61
tcg_res = tcg_temp_new_i32();
85
*/
62
- neon_load_reg32(tcg_op, rm);
86
switch (gm_bs) {
63
+ vfp_load_reg32(tcg_op, rm);
87
+ case 3:
64
if (sz == 1) {
88
+ /* 32 bytes -> 2 tags -> 8 result bits */
65
gen_helper_rinth(tcg_res, tcg_op, fpst);
89
+ ret = *(uint8_t *)tag_mem;
66
} else {
90
+ break;
67
gen_helper_rints(tcg_res, tcg_op, fpst);
91
+ case 4:
68
}
92
+ /* 64 bytes -> 4 tags -> 16 result bits */
69
- neon_store_reg32(tcg_res, rd);
93
+ ret = cpu_to_le16(*(uint16_t *)tag_mem);
70
+ vfp_store_reg32(tcg_res, rd);
94
+ break;
71
tcg_temp_free_i32(tcg_op);
95
+ case 5:
72
tcg_temp_free_i32(tcg_res);
96
+ /* 128 bytes -> 8 tags -> 32 result bits */
73
}
97
+ ret = cpu_to_le32(*(uint32_t *)tag_mem);
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
98
+ break;
75
gen_helper_vfp_tould(tcg_res, tcg_double, tcg_shift, fpst);
99
case 6:
76
}
100
/* 256 bytes -> 16 tags -> 64 result bits */
77
tcg_gen_extrl_i64_i32(tcg_tmp, tcg_res);
101
- return ldq_le_p(tag_mem);
78
- neon_store_reg32(tcg_tmp, rd);
102
+ return cpu_to_le64(*(uint64_t *)tag_mem);
79
+ vfp_store_reg32(tcg_tmp, rd);
103
default:
80
tcg_temp_free_i32(tcg_tmp);
104
- /* cpu configured with unsupported gm blocksize. */
81
tcg_temp_free_i64(tcg_res);
105
+ /*
82
tcg_temp_free_i64(tcg_double);
106
+ * CPU configured with unsupported/invalid gm blocksize.
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
107
+ * This is detected early in arm_cpu_realizefn.
84
TCGv_i32 tcg_single, tcg_res;
108
+ */
85
tcg_single = tcg_temp_new_i32();
86
tcg_res = tcg_temp_new_i32();
87
- neon_load_reg32(tcg_single, rm);
88
+ vfp_load_reg32(tcg_single, rm);
89
if (sz == 1) {
90
if (is_signed) {
91
gen_helper_vfp_toslh(tcg_res, tcg_single, tcg_shift, fpst);
92
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
93
gen_helper_vfp_touls(tcg_res, tcg_single, tcg_shift, fpst);
94
}
95
}
96
- neon_store_reg32(tcg_res, rd);
97
+ vfp_store_reg32(tcg_res, rd);
98
tcg_temp_free_i32(tcg_res);
99
tcg_temp_free_i32(tcg_single);
100
}
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a)
102
if (a->l) {
103
/* VFP to general purpose register */
104
tmp = tcg_temp_new_i32();
105
- neon_load_reg32(tmp, a->vn);
106
+ vfp_load_reg32(tmp, a->vn);
107
tcg_gen_andi_i32(tmp, tmp, 0xffff);
108
store_reg(s, a->rt, tmp);
109
} else {
110
/* general purpose register to VFP */
111
tmp = load_reg(s, a->rt);
112
tcg_gen_andi_i32(tmp, tmp, 0xffff);
113
- neon_store_reg32(tmp, a->vn);
114
+ vfp_store_reg32(tmp, a->vn);
115
tcg_temp_free_i32(tmp);
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
119
if (a->l) {
120
/* VFP to general purpose register */
121
tmp = tcg_temp_new_i32();
122
- neon_load_reg32(tmp, a->vn);
123
+ vfp_load_reg32(tmp, a->vn);
124
if (a->rt == 15) {
125
/* Set the 4 flag bits in the CPSR. */
126
gen_set_nzcv(tmp);
127
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
128
} else {
129
/* general purpose register to VFP */
130
tmp = load_reg(s, a->rt);
131
- neon_store_reg32(tmp, a->vn);
132
+ vfp_store_reg32(tmp, a->vn);
133
tcg_temp_free_i32(tmp);
134
}
135
136
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
137
if (a->op) {
138
/* fpreg to gpreg */
139
tmp = tcg_temp_new_i32();
140
- neon_load_reg32(tmp, a->vm);
141
+ vfp_load_reg32(tmp, a->vm);
142
store_reg(s, a->rt, tmp);
143
tmp = tcg_temp_new_i32();
144
- neon_load_reg32(tmp, a->vm + 1);
145
+ vfp_load_reg32(tmp, a->vm + 1);
146
store_reg(s, a->rt2, tmp);
147
} else {
148
/* gpreg to fpreg */
149
tmp = load_reg(s, a->rt);
150
- neon_store_reg32(tmp, a->vm);
151
+ vfp_store_reg32(tmp, a->vm);
152
tcg_temp_free_i32(tmp);
153
tmp = load_reg(s, a->rt2);
154
- neon_store_reg32(tmp, a->vm + 1);
155
+ vfp_store_reg32(tmp, a->vm + 1);
156
tcg_temp_free_i32(tmp);
157
}
158
159
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
160
if (a->op) {
161
/* fpreg to gpreg */
162
tmp = tcg_temp_new_i32();
163
- neon_load_reg32(tmp, a->vm * 2);
164
+ vfp_load_reg32(tmp, a->vm * 2);
165
store_reg(s, a->rt, tmp);
166
tmp = tcg_temp_new_i32();
167
- neon_load_reg32(tmp, a->vm * 2 + 1);
168
+ vfp_load_reg32(tmp, a->vm * 2 + 1);
169
store_reg(s, a->rt2, tmp);
170
} else {
171
/* gpreg to fpreg */
172
tmp = load_reg(s, a->rt);
173
- neon_store_reg32(tmp, a->vm * 2);
174
+ vfp_store_reg32(tmp, a->vm * 2);
175
tcg_temp_free_i32(tmp);
176
tmp = load_reg(s, a->rt2);
177
- neon_store_reg32(tmp, a->vm * 2 + 1);
178
+ vfp_store_reg32(tmp, a->vm * 2 + 1);
179
tcg_temp_free_i32(tmp);
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
183
tmp = tcg_temp_new_i32();
184
if (a->l) {
185
gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
186
- neon_store_reg32(tmp, a->vd);
187
+ vfp_store_reg32(tmp, a->vd);
188
} else {
189
- neon_load_reg32(tmp, a->vd);
190
+ vfp_load_reg32(tmp, a->vd);
191
gen_aa32_st16(s, tmp, addr, get_mem_index(s));
192
}
193
tcg_temp_free_i32(tmp);
194
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
195
tmp = tcg_temp_new_i32();
196
if (a->l) {
197
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
198
- neon_store_reg32(tmp, a->vd);
199
+ vfp_store_reg32(tmp, a->vd);
200
} else {
201
- neon_load_reg32(tmp, a->vd);
202
+ vfp_load_reg32(tmp, a->vd);
203
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
204
}
205
tcg_temp_free_i32(tmp);
206
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
207
if (a->l) {
208
/* load */
209
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
210
- neon_store_reg32(tmp, a->vd + i);
211
+ vfp_store_reg32(tmp, a->vd + i);
212
} else {
213
/* store */
214
- neon_load_reg32(tmp, a->vd + i);
215
+ vfp_load_reg32(tmp, a->vd + i);
216
gen_aa32_st32(s, tmp, addr, get_mem_index(s));
217
}
218
tcg_gen_addi_i32(addr, addr, offset);
219
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
220
fd = tcg_temp_new_i32();
221
fpst = fpstatus_ptr(FPST_FPCR);
222
223
- neon_load_reg32(f0, vn);
224
- neon_load_reg32(f1, vm);
225
+ vfp_load_reg32(f0, vn);
226
+ vfp_load_reg32(f1, vm);
227
228
for (;;) {
229
if (reads_vd) {
230
- neon_load_reg32(fd, vd);
231
+ vfp_load_reg32(fd, vd);
232
}
233
fn(fd, f0, f1, fpst);
234
- neon_store_reg32(fd, vd);
235
+ vfp_store_reg32(fd, vd);
236
237
if (veclen == 0) {
238
break;
239
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
240
veclen--;
241
vd = vfp_advance_sreg(vd, delta_d);
242
vn = vfp_advance_sreg(vn, delta_d);
243
- neon_load_reg32(f0, vn);
244
+ vfp_load_reg32(f0, vn);
245
if (delta_m) {
246
vm = vfp_advance_sreg(vm, delta_m);
247
- neon_load_reg32(f1, vm);
248
+ vfp_load_reg32(f1, vm);
249
}
250
}
251
252
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn,
253
fd = tcg_temp_new_i32();
254
fpst = fpstatus_ptr(FPST_FPCR_F16);
255
256
- neon_load_reg32(f0, vn);
257
- neon_load_reg32(f1, vm);
258
+ vfp_load_reg32(f0, vn);
259
+ vfp_load_reg32(f1, vm);
260
261
if (reads_vd) {
262
- neon_load_reg32(fd, vd);
263
+ vfp_load_reg32(fd, vd);
264
}
265
fn(fd, f0, f1, fpst);
266
- neon_store_reg32(fd, vd);
267
+ vfp_store_reg32(fd, vd);
268
269
tcg_temp_free_i32(f0);
270
tcg_temp_free_i32(f1);
271
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
272
f0 = tcg_temp_new_i32();
273
fd = tcg_temp_new_i32();
274
275
- neon_load_reg32(f0, vm);
276
+ vfp_load_reg32(f0, vm);
277
278
for (;;) {
279
fn(fd, f0);
280
- neon_store_reg32(fd, vd);
281
+ vfp_store_reg32(fd, vd);
282
283
if (veclen == 0) {
284
break;
285
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
286
/* single source one-many */
287
while (veclen--) {
288
vd = vfp_advance_sreg(vd, delta_d);
289
- neon_store_reg32(fd, vd);
290
+ vfp_store_reg32(fd, vd);
291
}
292
break;
293
}
294
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
295
veclen--;
296
vd = vfp_advance_sreg(vd, delta_d);
297
vm = vfp_advance_sreg(vm, delta_m);
298
- neon_load_reg32(f0, vm);
299
+ vfp_load_reg32(f0, vm);
300
}
301
302
tcg_temp_free_i32(f0);
303
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
304
}
305
306
f0 = tcg_temp_new_i32();
307
- neon_load_reg32(f0, vm);
308
+ vfp_load_reg32(f0, vm);
309
fn(f0, f0);
310
- neon_store_reg32(f0, vd);
311
+ vfp_store_reg32(f0, vd);
312
tcg_temp_free_i32(f0);
313
314
return true;
315
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
316
vm = tcg_temp_new_i32();
317
vd = tcg_temp_new_i32();
318
319
- neon_load_reg32(vn, a->vn);
320
- neon_load_reg32(vm, a->vm);
321
+ vfp_load_reg32(vn, a->vn);
322
+ vfp_load_reg32(vm, a->vm);
323
if (neg_n) {
324
/* VFNMS, VFMS */
325
gen_helper_vfp_negh(vn, vn);
326
}
327
- neon_load_reg32(vd, a->vd);
328
+ vfp_load_reg32(vd, a->vd);
329
if (neg_d) {
330
/* VFNMA, VFNMS */
331
gen_helper_vfp_negh(vd, vd);
332
}
333
fpst = fpstatus_ptr(FPST_FPCR_F16);
334
gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst);
335
- neon_store_reg32(vd, a->vd);
336
+ vfp_store_reg32(vd, a->vd);
337
338
tcg_temp_free_ptr(fpst);
339
tcg_temp_free_i32(vn);
340
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d)
341
vm = tcg_temp_new_i32();
342
vd = tcg_temp_new_i32();
343
344
- neon_load_reg32(vn, a->vn);
345
- neon_load_reg32(vm, a->vm);
346
+ vfp_load_reg32(vn, a->vn);
347
+ vfp_load_reg32(vm, a->vm);
348
if (neg_n) {
349
/* VFNMS, VFMS */
350
gen_helper_vfp_negs(vn, vn);
351
}
352
- neon_load_reg32(vd, a->vd);
353
+ vfp_load_reg32(vd, a->vd);
354
if (neg_d) {
355
/* VFNMA, VFNMS */
356
gen_helper_vfp_negs(vd, vd);
357
}
358
fpst = fpstatus_ptr(FPST_FPCR);
359
gen_helper_vfp_muladds(vd, vn, vm, vd, fpst);
360
- neon_store_reg32(vd, a->vd);
361
+ vfp_store_reg32(vd, a->vd);
362
363
tcg_temp_free_ptr(fpst);
364
tcg_temp_free_i32(vn);
365
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a)
366
}
367
368
fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm));
369
- neon_store_reg32(fd, a->vd);
370
+ vfp_store_reg32(fd, a->vd);
371
tcg_temp_free_i32(fd);
372
return true;
373
}
374
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
375
fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
376
377
for (;;) {
378
- neon_store_reg32(fd, vd);
379
+ vfp_store_reg32(fd, vd);
380
381
if (veclen == 0) {
382
break;
383
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a)
384
vd = tcg_temp_new_i32();
385
vm = tcg_temp_new_i32();
386
387
- neon_load_reg32(vd, a->vd);
388
+ vfp_load_reg32(vd, a->vd);
389
if (a->z) {
390
tcg_gen_movi_i32(vm, 0);
391
} else {
392
- neon_load_reg32(vm, a->vm);
393
+ vfp_load_reg32(vm, a->vm);
394
}
395
396
if (a->e) {
397
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
398
vd = tcg_temp_new_i32();
399
vm = tcg_temp_new_i32();
400
401
- neon_load_reg32(vd, a->vd);
402
+ vfp_load_reg32(vd, a->vd);
403
if (a->z) {
404
tcg_gen_movi_i32(vm, 0);
405
} else {
406
- neon_load_reg32(vm, a->vm);
407
+ vfp_load_reg32(vm, a->vm);
408
}
409
410
if (a->e) {
411
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a)
412
/* The T bit tells us if we want the low or high 16 bits of Vm */
413
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
414
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp_mode);
415
- neon_store_reg32(tmp, a->vd);
416
+ vfp_store_reg32(tmp, a->vd);
417
tcg_temp_free_i32(ahp_mode);
418
tcg_temp_free_ptr(fpst);
419
tcg_temp_free_i32(tmp);
420
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
421
ahp_mode = get_ahp_flag();
422
tmp = tcg_temp_new_i32();
423
424
- neon_load_reg32(tmp, a->vm);
425
+ vfp_load_reg32(tmp, a->vm);
426
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp_mode);
427
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
428
tcg_temp_free_i32(ahp_mode);
429
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a)
430
}
431
432
tmp = tcg_temp_new_i32();
433
- neon_load_reg32(tmp, a->vm);
434
+ vfp_load_reg32(tmp, a->vm);
435
fpst = fpstatus_ptr(FPST_FPCR_F16);
436
gen_helper_rinth(tmp, tmp, fpst);
437
- neon_store_reg32(tmp, a->vd);
438
+ vfp_store_reg32(tmp, a->vd);
439
tcg_temp_free_ptr(fpst);
440
tcg_temp_free_i32(tmp);
441
return true;
442
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
443
}
444
445
tmp = tcg_temp_new_i32();
446
- neon_load_reg32(tmp, a->vm);
447
+ vfp_load_reg32(tmp, a->vm);
448
fpst = fpstatus_ptr(FPST_FPCR);
449
gen_helper_rints(tmp, tmp, fpst);
450
- neon_store_reg32(tmp, a->vd);
451
+ vfp_store_reg32(tmp, a->vd);
452
tcg_temp_free_ptr(fpst);
453
tcg_temp_free_i32(tmp);
454
return true;
455
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a)
456
}
457
458
tmp = tcg_temp_new_i32();
459
- neon_load_reg32(tmp, a->vm);
460
+ vfp_load_reg32(tmp, a->vm);
461
fpst = fpstatus_ptr(FPST_FPCR_F16);
462
tcg_rmode = tcg_const_i32(float_round_to_zero);
463
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
464
gen_helper_rinth(tmp, tmp, fpst);
465
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
466
- neon_store_reg32(tmp, a->vd);
467
+ vfp_store_reg32(tmp, a->vd);
468
tcg_temp_free_ptr(fpst);
469
tcg_temp_free_i32(tcg_rmode);
470
tcg_temp_free_i32(tmp);
471
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
472
}
473
474
tmp = tcg_temp_new_i32();
475
- neon_load_reg32(tmp, a->vm);
476
+ vfp_load_reg32(tmp, a->vm);
477
fpst = fpstatus_ptr(FPST_FPCR);
478
tcg_rmode = tcg_const_i32(float_round_to_zero);
479
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
480
gen_helper_rints(tmp, tmp, fpst);
481
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
482
- neon_store_reg32(tmp, a->vd);
483
+ vfp_store_reg32(tmp, a->vd);
484
tcg_temp_free_ptr(fpst);
485
tcg_temp_free_i32(tcg_rmode);
486
tcg_temp_free_i32(tmp);
487
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a)
488
}
489
490
tmp = tcg_temp_new_i32();
491
- neon_load_reg32(tmp, a->vm);
492
+ vfp_load_reg32(tmp, a->vm);
493
fpst = fpstatus_ptr(FPST_FPCR_F16);
494
gen_helper_rinth_exact(tmp, tmp, fpst);
495
- neon_store_reg32(tmp, a->vd);
496
+ vfp_store_reg32(tmp, a->vd);
497
tcg_temp_free_ptr(fpst);
498
tcg_temp_free_i32(tmp);
499
return true;
500
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a)
501
}
502
503
tmp = tcg_temp_new_i32();
504
- neon_load_reg32(tmp, a->vm);
505
+ vfp_load_reg32(tmp, a->vm);
506
fpst = fpstatus_ptr(FPST_FPCR);
507
gen_helper_rints_exact(tmp, tmp, fpst);
508
- neon_store_reg32(tmp, a->vd);
509
+ vfp_store_reg32(tmp, a->vd);
510
tcg_temp_free_ptr(fpst);
511
tcg_temp_free_i32(tmp);
512
return true;
513
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
514
515
vm = tcg_temp_new_i32();
516
vd = tcg_temp_new_i64();
517
- neon_load_reg32(vm, a->vm);
518
+ vfp_load_reg32(vm, a->vm);
519
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
520
neon_store_reg64(vd, a->vd);
521
tcg_temp_free_i32(vm);
522
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
523
vm = tcg_temp_new_i64();
524
neon_load_reg64(vm, a->vm);
525
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
526
- neon_store_reg32(vd, a->vd);
527
+ vfp_store_reg32(vd, a->vd);
528
tcg_temp_free_i32(vd);
529
tcg_temp_free_i64(vm);
530
return true;
531
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
532
}
533
534
vm = tcg_temp_new_i32();
535
- neon_load_reg32(vm, a->vm);
536
+ vfp_load_reg32(vm, a->vm);
537
fpst = fpstatus_ptr(FPST_FPCR_F16);
538
if (a->s) {
539
/* i32 -> f16 */
540
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a)
541
/* u32 -> f16 */
542
gen_helper_vfp_uitoh(vm, vm, fpst);
543
}
544
- neon_store_reg32(vm, a->vd);
545
+ vfp_store_reg32(vm, a->vd);
546
tcg_temp_free_i32(vm);
547
tcg_temp_free_ptr(fpst);
548
return true;
549
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
550
}
551
552
vm = tcg_temp_new_i32();
553
- neon_load_reg32(vm, a->vm);
554
+ vfp_load_reg32(vm, a->vm);
555
fpst = fpstatus_ptr(FPST_FPCR);
556
if (a->s) {
557
/* i32 -> f32 */
558
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
559
/* u32 -> f32 */
560
gen_helper_vfp_uitos(vm, vm, fpst);
561
}
562
- neon_store_reg32(vm, a->vd);
563
+ vfp_store_reg32(vm, a->vd);
564
tcg_temp_free_i32(vm);
565
tcg_temp_free_ptr(fpst);
566
return true;
567
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
568
569
vm = tcg_temp_new_i32();
570
vd = tcg_temp_new_i64();
571
- neon_load_reg32(vm, a->vm);
572
+ vfp_load_reg32(vm, a->vm);
573
fpst = fpstatus_ptr(FPST_FPCR);
574
if (a->s) {
575
/* i32 -> f64 */
576
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
577
vd = tcg_temp_new_i32();
578
neon_load_reg64(vm, a->vm);
579
gen_helper_vjcvt(vd, vm, cpu_env);
580
- neon_store_reg32(vd, a->vd);
581
+ vfp_store_reg32(vd, a->vd);
582
tcg_temp_free_i64(vm);
583
tcg_temp_free_i32(vd);
584
return true;
585
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
586
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
587
588
vd = tcg_temp_new_i32();
589
- neon_load_reg32(vd, a->vd);
590
+ vfp_load_reg32(vd, a->vd);
591
592
fpst = fpstatus_ptr(FPST_FPCR_F16);
593
shift = tcg_const_i32(frac_bits);
594
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
595
g_assert_not_reached();
109
g_assert_not_reached();
596
}
110
}
597
111
+ shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
598
- neon_store_reg32(vd, a->vd);
112
+ return ret << shift;
599
+ vfp_store_reg32(vd, a->vd);
113
}
600
tcg_temp_free_i32(vd);
114
601
tcg_temp_free_i32(shift);
115
void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
602
tcg_temp_free_ptr(fpst);
116
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
603
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
117
int gm_bs = env_archcpu(env)->gm_blocksize;
604
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
118
int gm_bs_bytes = 4 << gm_bs;
605
119
void *tag_mem;
606
vd = tcg_temp_new_i32();
120
+ int shift;
607
- neon_load_reg32(vd, a->vd);
121
608
+ vfp_load_reg32(vd, a->vd);
122
ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
609
123
610
fpst = fpstatus_ptr(FPST_FPCR);
124
@@ -XXX,XX +XXX,XX @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
611
shift = tcg_const_i32(frac_bits);
125
return;
612
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
613
g_assert_not_reached();
614
}
126
}
615
127
616
- neon_store_reg32(vd, a->vd);
128
- /*
617
+ vfp_store_reg32(vd, a->vd);
129
- * The ordering of elements within the word corresponds to
618
tcg_temp_free_i32(vd);
130
- * a little-endian operation.
619
tcg_temp_free_i32(shift);
131
- */
620
tcg_temp_free_ptr(fpst);
132
+ /* See LDGM for comments on BS and on shift. */
621
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
133
+ shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
622
134
+ val >>= shift;
623
fpst = fpstatus_ptr(FPST_FPCR_F16);
135
switch (gm_bs) {
624
vm = tcg_temp_new_i32();
136
+ case 3:
625
- neon_load_reg32(vm, a->vm);
137
+ /* 32 bytes -> 2 tags -> 8 result bits */
626
+ vfp_load_reg32(vm, a->vm);
138
+ *(uint8_t *)tag_mem = val;
627
139
+ break;
628
if (a->s) {
140
+ case 4:
629
if (a->rz) {
141
+ /* 64 bytes -> 4 tags -> 16 result bits */
630
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a)
142
+ *(uint16_t *)tag_mem = cpu_to_le16(val);
631
gen_helper_vfp_touih(vm, vm, fpst);
143
+ break;
632
}
144
+ case 5:
633
}
145
+ /* 128 bytes -> 8 tags -> 32 result bits */
634
- neon_store_reg32(vm, a->vd);
146
+ *(uint32_t *)tag_mem = cpu_to_le32(val);
635
+ vfp_store_reg32(vm, a->vd);
147
+ break;
636
tcg_temp_free_i32(vm);
148
case 6:
637
tcg_temp_free_ptr(fpst);
149
- stq_le_p(tag_mem, val);
638
return true;
150
+ /* 256 bytes -> 16 tags -> 64 result bits */
639
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
151
+ *(uint64_t *)tag_mem = cpu_to_le64(val);
640
152
break;
641
fpst = fpstatus_ptr(FPST_FPCR);
153
default:
642
vm = tcg_temp_new_i32();
154
/* cpu configured with unsupported gm blocksize. */
643
- neon_load_reg32(vm, a->vm);
644
+ vfp_load_reg32(vm, a->vm);
645
646
if (a->s) {
647
if (a->rz) {
648
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
649
gen_helper_vfp_touis(vm, vm, fpst);
650
}
651
}
652
- neon_store_reg32(vm, a->vd);
653
+ vfp_store_reg32(vm, a->vd);
654
tcg_temp_free_i32(vm);
655
tcg_temp_free_ptr(fpst);
656
return true;
657
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
658
gen_helper_vfp_touid(vd, vm, fpst);
659
}
660
}
661
- neon_store_reg32(vd, a->vd);
662
+ vfp_store_reg32(vd, a->vd);
663
tcg_temp_free_i32(vd);
664
tcg_temp_free_i64(vm);
665
tcg_temp_free_ptr(fpst);
666
@@ -XXX,XX +XXX,XX @@ static bool trans_VINS(DisasContext *s, arg_VINS *a)
667
/* Insert low half of Vm into high half of Vd */
668
rm = tcg_temp_new_i32();
669
rd = tcg_temp_new_i32();
670
- neon_load_reg32(rm, a->vm);
671
- neon_load_reg32(rd, a->vd);
672
+ vfp_load_reg32(rm, a->vm);
673
+ vfp_load_reg32(rd, a->vd);
674
tcg_gen_deposit_i32(rd, rd, rm, 16, 16);
675
- neon_store_reg32(rd, a->vd);
676
+ vfp_store_reg32(rd, a->vd);
677
tcg_temp_free_i32(rm);
678
tcg_temp_free_i32(rd);
679
return true;
680
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOVX(DisasContext *s, arg_VINS *a)
681
682
/* Set Vd to high half of Vm */
683
rm = tcg_temp_new_i32();
684
- neon_load_reg32(rm, a->vm);
685
+ vfp_load_reg32(rm, a->vm);
686
tcg_gen_shri_i32(rm, rm, 16);
687
- neon_store_reg32(rm, a->vd);
688
+ vfp_store_reg32(rm, a->vd);
689
tcg_temp_free_i32(rm);
690
return true;
691
}
692
--
155
--
693
2.20.1
156
2.34.1
694
695
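As a side note on the STGM hunk above: the block-size arithmetic behind the "32 bytes -> 2 tags -> 8 result bits" comments can be checked in isolation. A minimal standalone sketch, assuming the usual 16-byte tag granule (LOG2_TAG_GRANULE == 4) and 4 bits per allocation tag; the helper name is illustrative, not part of the patch:

#include <stdio.h>

#define LOG2_TAG_GRANULE 4                   /* 16-byte tag granule (assumption) */

/* Reproduce the "N bytes -> N tags -> N result bits" comments above. */
static void gm_geometry(int gm_bs)
{
    int gm_bs_bytes = 4 << gm_bs;                 /* block covered by one LDGM/STGM */
    int tags = gm_bs_bytes >> LOG2_TAG_GRANULE;   /* one tag per 16-byte granule */
    printf("gm_bs=%d: %3d bytes -> %2d tags -> %2d result bits\n",
           gm_bs, gm_bs_bytes, tags, tags * 4);   /* 4 bits per tag */
}

int main(void)
{
    for (int gm_bs = 3; gm_bs <= 6; gm_bs++) {
        gm_geometry(gm_bs);        /* prints 32->2->8 up to 256->16->64 */
    }
    return 0;
}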
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In both cases, we can sink the write-back and perform
3
When the cpu supports MTE, but the system does not, reduce cpu
4
the accumulate into the normal destination temps.
4
support to user instructions at EL0 instead of completely
5
disabling MTE. If we encounter a cpu implementation which does
6
something else, we can revisit this setting.
5
7
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20230811214031.171020-5-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
12
---
11
target/arm/translate-neon.c.inc | 23 +++++++++--------------
13
target/arm/cpu.c | 7 ++++---
12
1 file changed, 9 insertions(+), 14 deletions(-)
14
1 file changed, 4 insertions(+), 3 deletions(-)
13
15
14
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
16
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate-neon.c.inc
18
--- a/target/arm/cpu.c
17
+++ b/target/arm/translate-neon.c.inc
19
+++ b/target/arm/cpu.c
18
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
20
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
19
if (accfn) {
21
20
tmp = tcg_temp_new_i64();
22
#ifndef CONFIG_USER_ONLY
21
read_neon_element64(tmp, a->vd, 0, MO_64);
23
/*
22
- accfn(tmp, tmp, rd0);
24
- * Disable the MTE feature bits if we do not have tag-memory
23
- write_neon_element64(tmp, a->vd, 0, MO_64);
25
- * provided by the machine.
24
+ accfn(rd0, tmp, rd0);
26
+ * If we do not have tag-memory provided by the machine,
25
read_neon_element64(tmp, a->vd, 1, MO_64);
27
+ * reduce MTE support to instructions enabled at EL0.
26
- accfn(tmp, tmp, rd1);
28
+ * This matches Cortex-A710 BROADCASTMTE input being LOW.
27
- write_neon_element64(tmp, a->vd, 1, MO_64);
29
*/
28
+ accfn(rd1, tmp, rd1);
30
if (cpu->tag_memory == NULL) {
29
tcg_temp_free_i64(tmp);
31
cpu->isar.id_aa64pfr1 =
30
- } else {
32
- FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
31
- write_neon_element64(rd0, a->vd, 0, MO_64);
33
+ FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 1);
32
- write_neon_element64(rd1, a->vd, 1, MO_64);
34
}
35
#endif
33
}
36
}
34
35
+ write_neon_element64(rd0, a->vd, 0, MO_64);
36
+ write_neon_element64(rd1, a->vd, 1, MO_64);
37
tcg_temp_free_i64(rd0);
38
tcg_temp_free_i64(rd1);
39
40
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
41
if (accfn) {
42
TCGv_i64 t64 = tcg_temp_new_i64();
43
read_neon_element64(t64, a->vd, 0, MO_64);
44
- accfn(t64, t64, rn0_64);
45
- write_neon_element64(t64, a->vd, 0, MO_64);
46
+ accfn(rn0_64, t64, rn0_64);
47
read_neon_element64(t64, a->vd, 1, MO_64);
48
- accfn(t64, t64, rn1_64);
49
- write_neon_element64(t64, a->vd, 1, MO_64);
50
+ accfn(rn1_64, t64, rn1_64);
51
tcg_temp_free_i64(t64);
52
- } else {
53
- write_neon_element64(rn0_64, a->vd, 0, MO_64);
54
- write_neon_element64(rn1_64, a->vd, 1, MO_64);
55
}
56
+
57
+ write_neon_element64(rn0_64, a->vd, 0, MO_64);
58
+ write_neon_element64(rn1_64, a->vd, 1, MO_64);
59
tcg_temp_free_i64(rn0_64);
60
tcg_temp_free_i64(rn1_64);
61
return true;
62
--
37
--
63
2.20.1
38
2.34.1
64
65
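For the one-line MTE change above, the effect on the ID register is easiest to see as a plain bit-field deposit. A sketch only, assuming ID_AA64PFR1.MTE is the 4-bit field at bits [11:8] (0 = no MTE, 1 = instruction-only MTE, 2 and up = full MTE); QEMU's FIELD_DP64() performs the equivalent deposit:

#include <stdint.h>
#include <stdio.h>

/* Assumption: ID_AA64PFR1.MTE lives at bits [11:8]. */
#define MTE_SHIFT   8
#define MTE_LENGTH  4

static uint64_t deposit_mte(uint64_t id_aa64pfr1, unsigned val)
{
    uint64_t mask = ((UINT64_C(1) << MTE_LENGTH) - 1) << MTE_SHIFT;
    return (id_aa64pfr1 & ~mask) | ((uint64_t)val << MTE_SHIFT);
}

int main(void)
{
    uint64_t reg = deposit_mte(0, 2);   /* cpu model advertises full MTE */

    printf("no tag memory, old behaviour: %#llx\n",
           (unsigned long long)deposit_mte(reg, 0));  /* MTE removed entirely */
    printf("no tag memory, new behaviour: %#llx\n",
           (unsigned long long)deposit_mte(reg, 1));  /* EL0 instructions only */
    return 0;
}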
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This will shortly have users outside of translate-neon.c.inc.
3
Do not hard-code the constants for Neoverse V1.
4
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20230811214031.171020-6-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
9
---
10
target/arm/translate.c | 20 ++++++++++++++++++++
10
target/arm/tcg/cpu64.c | 48 ++++++++++++++++++++++++++++--------------
11
target/arm/translate-neon.c.inc | 19 -------------------
11
1 file changed, 32 insertions(+), 16 deletions(-)
12
2 files changed, 20 insertions(+), 19 deletions(-)
13
12
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
15
--- a/target/arm/tcg/cpu64.c
17
+++ b/target/arm/translate.c
16
+++ b/target/arm/tcg/cpu64.c
18
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
17
@@ -XXX,XX +XXX,XX @@
19
return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
18
#include "qemu/module.h"
20
}
19
#include "qapi/visitor.h"
21
20
#include "hw/qdev-properties.h"
22
+/*
21
+#include "qemu/units.h"
23
+ * Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
22
#include "internals.h"
24
+ * where 0 is the least significant end of the register.
23
#include "cpregs.h"
25
+ */
24
26
+static long neon_element_offset(int reg, int element, MemOp size)
25
+static uint64_t make_ccsidr64(unsigned assoc, unsigned linesize,
26
+ unsigned cachesize)
27
+{
27
+{
28
+ int element_size = 1 << size;
28
+ unsigned lg_linesize = ctz32(linesize);
29
+ int ofs = element * element_size;
29
+ unsigned sets;
30
+#ifdef HOST_WORDS_BIGENDIAN
30
+
31
+ /*
31
+ /*
32
+ * Calculate the offset assuming fully little-endian,
32
+ * The 64-bit CCSIDR_EL1 format is:
33
+ * then XOR to account for the order of the 8-byte units.
33
+ * [55:32] number of sets - 1
34
+ * [23:3] associativity - 1
35
+ * [2:0] log2(linesize) - 4
36
+ * so 0 == 16 bytes, 1 == 32 bytes, 2 == 64 bytes, etc
34
+ */
37
+ */
35
+ if (element_size < 8) {
38
+ assert(assoc != 0);
36
+ ofs ^= 8 - element_size;
39
+ assert(is_power_of_2(linesize));
37
+ }
40
+ assert(lg_linesize >= 4 && lg_linesize <= 7 + 4);
38
+#endif
41
+
39
+ return neon_full_reg_offset(reg) + ofs;
42
+ /* sets * associativity * linesize == cachesize. */
43
+ sets = cachesize / (assoc * linesize);
44
+ assert(cachesize % (assoc * linesize) == 0);
45
+
46
+ return ((uint64_t)(sets - 1) << 32)
47
+ | ((assoc - 1) << 3)
48
+ | (lg_linesize - 4);
40
+}
49
+}
41
+
50
+
42
static inline long vfp_reg_offset(bool dp, unsigned reg)
51
static void aarch64_a35_initfn(Object *obj)
43
{
52
{
44
if (dp) {
53
ARMCPU *cpu = ARM_CPU(obj);
45
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
54
@@ -XXX,XX +XXX,XX @@ static void aarch64_neoverse_v1_initfn(Object *obj)
46
index XXXXXXX..XXXXXXX 100644
55
* The Neoverse-V1 r1p2 TRM lists 32-bit format CCSIDR_EL1 values,
47
--- a/target/arm/translate-neon.c.inc
56
* but also says it implements CCIDX, which means they should be
48
+++ b/target/arm/translate-neon.c.inc
57
* 64-bit format. So we here use values which are based on the textual
49
@@ -XXX,XX +XXX,XX @@ static inline int neon_3same_fp_size(DisasContext *s, int x)
58
- * information in chapter 2 of the TRM (and on the fact that
50
#include "decode-neon-ls.c.inc"
59
- * sets * associativity * linesize == cachesize).
51
#include "decode-neon-shared.c.inc"
60
- *
52
61
- * The 64-bit CCSIDR_EL1 format is:
53
-/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
62
- * [55:32] number of sets - 1
54
- * where 0 is the least significant end of the register.
63
- * [23:3] associativity - 1
55
- */
64
- * [2:0] log2(linesize) - 4
56
-static inline long
65
- * so 0 == 16 bytes, 1 == 32 bytes, 2 == 64 bytes, etc
57
-neon_element_offset(int reg, int element, MemOp size)
66
- *
58
-{
67
- * L1: 4-way set associative 64-byte line size, total size 64K,
59
- int element_size = 1 << size;
68
- * so sets is 256.
60
- int ofs = element * element_size;
69
+ * information in chapter 2 of the TRM:
61
-#ifdef HOST_WORDS_BIGENDIAN
70
*
62
- /* Calculate the offset assuming fully little-endian,
71
+ * L1: 4-way set associative 64-byte line size, total size 64K.
63
- * then XOR to account for the order of the 8-byte units.
72
* L2: 8-way set associative, 64 byte line size, either 512K or 1MB.
64
- */
73
- * We pick 1MB, so this has 2048 sets.
65
- if (element_size < 8) {
74
- *
66
- ofs ^= 8 - element_size;
75
* L3: No L3 (this matches the CLIDR_EL1 value).
67
- }
76
*/
68
-#endif
77
- cpu->ccsidr[0] = 0x000000ff0000001aull; /* 64KB L1 dcache */
69
- return neon_full_reg_offset(reg) + ofs;
78
- cpu->ccsidr[1] = 0x000000ff0000001aull; /* 64KB L1 icache */
70
-}
79
- cpu->ccsidr[2] = 0x000007ff0000003aull; /* 1MB L2 cache */
71
-
80
+ cpu->ccsidr[0] = make_ccsidr64(4, 64, 64 * KiB); /* L1 dcache */
72
static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
81
+ cpu->ccsidr[1] = cpu->ccsidr[0]; /* L1 icache */
73
{
82
+ cpu->ccsidr[2] = make_ccsidr64(8, 64, 1 * MiB); /* L2 cache */
74
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
83
84
/* From 3.2.115 SCTLR_EL3 */
85
cpu->reset_sctlr = 0x30c50838;
75
--
86
--
76
2.20.1
87
2.34.1
77
78
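The CCSIDR_EL1 encoding described in the comment above can be checked by hand; this standalone sketch reproduces the two hard-coded constants that the patch replaces (the ctz-based log2 of the line size mirrors the patch, __builtin_ctz stands in for ctz32):

#include <stdint.h>
#include <stdio.h>

/* Same layout as the 64-bit CCSIDR_EL1 comment above:
 * [55:32] sets - 1, [23:3] associativity - 1, [2:0] log2(linesize) - 4.
 */
static uint64_t make_ccsidr64(unsigned assoc, unsigned linesize, unsigned cachesize)
{
    unsigned lg_linesize = __builtin_ctz(linesize);
    unsigned sets = cachesize / (assoc * linesize);

    return ((uint64_t)(sets - 1) << 32) | ((assoc - 1) << 3) | (lg_linesize - 4);
}

int main(void)
{
    /* L1: 4-way, 64-byte lines, 64KB -> 256 sets -> 0x000000ff0000001a */
    printf("L1: %#018llx\n", (unsigned long long)make_ccsidr64(4, 64, 64 * 1024));
    /* L2: 8-way, 64-byte lines, 1MB -> 2048 sets -> 0x000007ff0000003a */
    printf("L2: %#018llx\n", (unsigned long long)make_ccsidr64(8, 64, 1024 * 1024));
    return 0;
}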
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This seems a bit more readable than using offsetof CPU_DoubleU.
3
Access to many of the special registers is enabled or disabled
4
by ACTLR_EL[23], which we implement as constant 0, which means
5
that all writes outside EL3 should trap.
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20230811214031.171020-7-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/translate.c | 13 ++++---------
12
target/arm/cpregs.h | 2 ++
11
1 file changed, 4 insertions(+), 9 deletions(-)
13
target/arm/helper.c | 4 ++--
14
target/arm/tcg/cpu64.c | 46 +++++++++++++++++++++++++++++++++---------
15
3 files changed, 41 insertions(+), 11 deletions(-)
12
16
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
17
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
19
--- a/target/arm/cpregs.h
16
+++ b/target/arm/translate.c
20
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ static long neon_element_offset(int reg, int element, MemOp size)
21
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
18
return neon_full_reg_offset(reg) + ofs;
22
void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
23
#endif
24
25
+CPAccessResult access_tvm_trvm(CPUARMState *, const ARMCPRegInfo *, bool);
26
+
27
#endif /* TARGET_ARM_CPREGS_H */
28
diff --git a/target/arm/helper.c b/target/arm/helper.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/helper.c
31
+++ b/target/arm/helper.c
32
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
19
}
33
}
20
34
21
-static inline long vfp_reg_offset(bool dp, unsigned reg)
35
/* Check for traps from EL1 due to HCR_EL2.TVM and HCR_EL2.TRVM. */
22
+/* Return the offset of a VFP Dreg (dp = true) or VFP Sreg (dp = false). */
36
-static CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri,
23
+static long vfp_reg_offset(bool dp, unsigned reg)
37
- bool isread)
38
+CPAccessResult access_tvm_trvm(CPUARMState *env, const ARMCPRegInfo *ri,
39
+ bool isread)
24
{
40
{
25
if (dp) {
41
if (arm_current_el(env) == 1) {
26
- return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
42
uint64_t trap = isread ? HCR_TRVM : HCR_TVM;
27
+ return neon_element_offset(reg, 0, MO_64);
43
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
28
} else {
44
index XXXXXXX..XXXXXXX 100644
29
- long ofs = offsetof(CPUARMState, vfp.zregs[reg >> 2].d[(reg >> 1) & 1]);
45
--- a/target/arm/tcg/cpu64.c
30
- if (reg & 1) {
46
+++ b/target/arm/tcg/cpu64.c
31
- ofs += offsetof(CPU_DoubleU, l.upper);
47
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
32
- } else {
48
/* TODO: Add A64FX specific HPC extension registers */
33
- ofs += offsetof(CPU_DoubleU, l.lower);
34
- }
35
- return ofs;
36
+ return neon_element_offset(reg >> 1, reg & 1, MO_32);
37
}
38
}
49
}
39
50
51
+static CPAccessResult access_actlr_w(CPUARMState *env, const ARMCPRegInfo *r,
52
+ bool read)
53
+{
54
+ if (!read) {
55
+ int el = arm_current_el(env);
56
+
57
+ /* Because ACTLR_EL2 is constant 0, writes below EL2 trap to EL2. */
58
+ if (el < 2 && arm_is_el2_enabled(env)) {
59
+ return CP_ACCESS_TRAP_EL2;
60
+ }
61
+ /* Because ACTLR_EL3 is constant 0, writes below EL3 trap to EL3. */
62
+ if (el < 3 && arm_feature(env, ARM_FEATURE_EL3)) {
63
+ return CP_ACCESS_TRAP_EL3;
64
+ }
65
+ }
66
+ return CP_ACCESS_OK;
67
+}
68
+
69
static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
70
{ .name = "ATCR_EL1", .state = ARM_CP_STATE_AA64,
71
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 7, .opc2 = 0,
72
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
73
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
74
+ /* Traps and enables are the same as for TCR_EL1. */
75
+ .accessfn = access_tvm_trvm, .fgt = FGT_TCR_EL1, },
76
{ .name = "ATCR_EL2", .state = ARM_CP_STATE_AA64,
77
.opc0 = 3, .opc1 = 4, .crn = 15, .crm = 7, .opc2 = 0,
78
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
79
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
80
.access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
81
{ .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
82
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 0,
83
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
84
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
85
+ .accessfn = access_actlr_w },
86
{ .name = "CPUACTLR2_EL1", .state = ARM_CP_STATE_AA64,
87
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 1,
88
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
89
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
90
+ .accessfn = access_actlr_w },
91
{ .name = "CPUACTLR3_EL1", .state = ARM_CP_STATE_AA64,
92
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 2,
93
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
94
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
95
+ .accessfn = access_actlr_w },
96
/*
97
* Report CPUCFR_EL1.SCU as 1, as we do not implement the DSU
98
* (and in particular its system registers).
99
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
100
.access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 4 },
101
{ .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
102
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 4,
103
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0x961563010 },
104
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0x961563010,
105
+ .accessfn = access_actlr_w },
106
{ .name = "CPUPCR_EL3", .state = ARM_CP_STATE_AA64,
107
.opc0 = 3, .opc1 = 6, .crn = 15, .crm = 8, .opc2 = 1,
108
.access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
109
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo neoverse_n1_cp_reginfo[] = {
110
.access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
111
{ .name = "CPUPWRCTLR_EL1", .state = ARM_CP_STATE_AA64,
112
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 7,
113
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
114
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
115
+ .accessfn = access_actlr_w },
116
{ .name = "ERXPFGCDN_EL1", .state = ARM_CP_STATE_AA64,
117
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 2,
118
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
119
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
120
+ .accessfn = access_actlr_w },
121
{ .name = "ERXPFGCTL_EL1", .state = ARM_CP_STATE_AA64,
122
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 1,
123
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
124
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
125
+ .accessfn = access_actlr_w },
126
{ .name = "ERXPFGF_EL1", .state = ARM_CP_STATE_AA64,
127
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 2, .opc2 = 0,
128
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
129
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
130
+ .accessfn = access_actlr_w },
131
};
132
133
static void define_neoverse_n1_cp_reginfo(ARMCPU *cpu)
40
--
134
--
41
2.20.1
135
2.34.1
42
43
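Spelled out, the access_actlr_w() rule added above is: reads are never trapped here, and a write traps to the lowest exception level above the caller whose ACTLR is modeled as constant 0. A behavioural sketch only; the real function returns QEMU's CPAccessResult values and uses arm_is_el2_enabled()/arm_feature():

#include <stdio.h>

typedef enum { ACCESS_OK, TRAP_EL2, TRAP_EL3 } Access;

/* Mirror of the write-side decision in access_actlr_w(). */
static Access actlr_write_access(int el, int el2_enabled, int have_el3)
{
    if (el < 2 && el2_enabled) {
        return TRAP_EL2;     /* ACTLR_EL2 is constant 0, so EL0/EL1 writes trap */
    }
    if (el < 3 && have_el3) {
        return TRAP_EL3;     /* ACTLR_EL3 is constant 0, so EL2 writes trap */
    }
    return ACCESS_OK;
}

int main(void)
{
    printf("EL1 write, EL2+EL3 present: %d\n", actlr_write_access(1, 1, 1)); /* TRAP_EL2 */
    printf("EL2 write, EL3 present:     %d\n", actlr_write_access(2, 1, 1)); /* TRAP_EL3 */
    printf("EL3 write:                  %d\n", actlr_write_access(3, 1, 1)); /* ACCESS_OK */
    return 0;
}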
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The only uses of this function are for loading VFP
3
There is only one additional EL1 register modeled, which
4
double-precision values, and have nothing to do with NEON.
4
also needs to use access_actlr_w.
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230811214031.171020-8-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/translate.c | 8 ++--
11
target/arm/tcg/cpu64.c | 3 ++-
12
target/arm/translate-vfp.c.inc | 84 +++++++++++++++++-----------------
12
1 file changed, 2 insertions(+), 1 deletion(-)
13
2 files changed, 46 insertions(+), 46 deletions(-)
14
13
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
16
--- a/target/arm/tcg/cpu64.c
18
+++ b/target/arm/translate.c
17
+++ b/target/arm/tcg/cpu64.c
19
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
18
@@ -XXX,XX +XXX,XX @@ static void define_neoverse_n1_cp_reginfo(ARMCPU *cpu)
20
}
19
static const ARMCPRegInfo neoverse_v1_cp_reginfo[] = {
21
}
20
{ .name = "CPUECTLR2_EL1", .state = ARM_CP_STATE_AA64,
22
21
.opc0 = 3, .opc1 = 0, .crn = 15, .crm = 1, .opc2 = 5,
23
-static inline void neon_load_reg64(TCGv_i64 var, int reg)
22
- .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
24
+static inline void vfp_load_reg64(TCGv_i64 var, int reg)
23
+ .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0,
25
{
24
+ .accessfn = access_actlr_w },
26
- tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
25
{ .name = "CPUPPMCR_EL3", .state = ARM_CP_STATE_AA64,
27
+ tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(true, reg));
26
.opc0 = 3, .opc1 = 6, .crn = 15, .crm = 2, .opc2 = 0,
28
}
27
.access = PL3_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
29
30
-static inline void neon_store_reg64(TCGv_i64 var, int reg)
31
+static inline void vfp_store_reg64(TCGv_i64 var, int reg)
32
{
33
- tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg));
34
+ tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(true, reg));
35
}
36
37
static inline void vfp_load_reg32(TCGv_i32 var, int reg)
38
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate-vfp.c.inc
41
+++ b/target/arm/translate-vfp.c.inc
42
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
43
tcg_gen_ext_i32_i64(nf, cpu_NF);
44
tcg_gen_ext_i32_i64(vf, cpu_VF);
45
46
- neon_load_reg64(frn, rn);
47
- neon_load_reg64(frm, rm);
48
+ vfp_load_reg64(frn, rn);
49
+ vfp_load_reg64(frm, rm);
50
switch (a->cc) {
51
case 0: /* eq: Z */
52
tcg_gen_movcond_i64(TCG_COND_EQ, dest, zf, zero,
53
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
54
tcg_temp_free_i64(tmp);
55
break;
56
}
57
- neon_store_reg64(dest, rd);
58
+ vfp_store_reg64(dest, rd);
59
tcg_temp_free_i64(frn);
60
tcg_temp_free_i64(frm);
61
tcg_temp_free_i64(dest);
62
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
63
TCGv_i64 tcg_res;
64
tcg_op = tcg_temp_new_i64();
65
tcg_res = tcg_temp_new_i64();
66
- neon_load_reg64(tcg_op, rm);
67
+ vfp_load_reg64(tcg_op, rm);
68
gen_helper_rintd(tcg_res, tcg_op, fpst);
69
- neon_store_reg64(tcg_res, rd);
70
+ vfp_store_reg64(tcg_res, rd);
71
tcg_temp_free_i64(tcg_op);
72
tcg_temp_free_i64(tcg_res);
73
} else {
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
75
tcg_double = tcg_temp_new_i64();
76
tcg_res = tcg_temp_new_i64();
77
tcg_tmp = tcg_temp_new_i32();
78
- neon_load_reg64(tcg_double, rm);
79
+ vfp_load_reg64(tcg_double, rm);
80
if (is_signed) {
81
gen_helper_vfp_tosld(tcg_res, tcg_double, tcg_shift, fpst);
82
} else {
83
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
84
tmp = tcg_temp_new_i64();
85
if (a->l) {
86
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
87
- neon_store_reg64(tmp, a->vd);
88
+ vfp_store_reg64(tmp, a->vd);
89
} else {
90
- neon_load_reg64(tmp, a->vd);
91
+ vfp_load_reg64(tmp, a->vd);
92
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
93
}
94
tcg_temp_free_i64(tmp);
95
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
96
if (a->l) {
97
/* load */
98
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
99
- neon_store_reg64(tmp, a->vd + i);
100
+ vfp_store_reg64(tmp, a->vd + i);
101
} else {
102
/* store */
103
- neon_load_reg64(tmp, a->vd + i);
104
+ vfp_load_reg64(tmp, a->vd + i);
105
gen_aa32_st64(s, tmp, addr, get_mem_index(s));
106
}
107
tcg_gen_addi_i32(addr, addr, offset);
108
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
109
fd = tcg_temp_new_i64();
110
fpst = fpstatus_ptr(FPST_FPCR);
111
112
- neon_load_reg64(f0, vn);
113
- neon_load_reg64(f1, vm);
114
+ vfp_load_reg64(f0, vn);
115
+ vfp_load_reg64(f1, vm);
116
117
for (;;) {
118
if (reads_vd) {
119
- neon_load_reg64(fd, vd);
120
+ vfp_load_reg64(fd, vd);
121
}
122
fn(fd, f0, f1, fpst);
123
- neon_store_reg64(fd, vd);
124
+ vfp_store_reg64(fd, vd);
125
126
if (veclen == 0) {
127
break;
128
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
129
veclen--;
130
vd = vfp_advance_dreg(vd, delta_d);
131
vn = vfp_advance_dreg(vn, delta_d);
132
- neon_load_reg64(f0, vn);
133
+ vfp_load_reg64(f0, vn);
134
if (delta_m) {
135
vm = vfp_advance_dreg(vm, delta_m);
136
- neon_load_reg64(f1, vm);
137
+ vfp_load_reg64(f1, vm);
138
}
139
}
140
141
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
142
f0 = tcg_temp_new_i64();
143
fd = tcg_temp_new_i64();
144
145
- neon_load_reg64(f0, vm);
146
+ vfp_load_reg64(f0, vm);
147
148
for (;;) {
149
fn(fd, f0);
150
- neon_store_reg64(fd, vd);
151
+ vfp_store_reg64(fd, vd);
152
153
if (veclen == 0) {
154
break;
155
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
156
/* single source one-many */
157
while (veclen--) {
158
vd = vfp_advance_dreg(vd, delta_d);
159
- neon_store_reg64(fd, vd);
160
+ vfp_store_reg64(fd, vd);
161
}
162
break;
163
}
164
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
165
veclen--;
166
vd = vfp_advance_dreg(vd, delta_d);
167
vd = vfp_advance_dreg(vm, delta_m);
168
- neon_load_reg64(f0, vm);
169
+ vfp_load_reg64(f0, vm);
170
}
171
172
tcg_temp_free_i64(f0);
173
@@ -XXX,XX +XXX,XX @@ static bool do_vfm_dp(DisasContext *s, arg_VFMA_dp *a, bool neg_n, bool neg_d)
174
vm = tcg_temp_new_i64();
175
vd = tcg_temp_new_i64();
176
177
- neon_load_reg64(vn, a->vn);
178
- neon_load_reg64(vm, a->vm);
179
+ vfp_load_reg64(vn, a->vn);
180
+ vfp_load_reg64(vm, a->vm);
181
if (neg_n) {
182
/* VFNMS, VFMS */
183
gen_helper_vfp_negd(vn, vn);
184
}
185
- neon_load_reg64(vd, a->vd);
186
+ vfp_load_reg64(vd, a->vd);
187
if (neg_d) {
188
/* VFNMA, VFNMS */
189
gen_helper_vfp_negd(vd, vd);
190
}
191
fpst = fpstatus_ptr(FPST_FPCR);
192
gen_helper_vfp_muladdd(vd, vn, vm, vd, fpst);
193
- neon_store_reg64(vd, a->vd);
194
+ vfp_store_reg64(vd, a->vd);
195
196
tcg_temp_free_ptr(fpst);
197
tcg_temp_free_i64(vn);
198
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
199
fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
200
201
for (;;) {
202
- neon_store_reg64(fd, vd);
203
+ vfp_store_reg64(fd, vd);
204
205
if (veclen == 0) {
206
break;
207
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
208
vd = tcg_temp_new_i64();
209
vm = tcg_temp_new_i64();
210
211
- neon_load_reg64(vd, a->vd);
212
+ vfp_load_reg64(vd, a->vd);
213
if (a->z) {
214
tcg_gen_movi_i64(vm, 0);
215
} else {
216
- neon_load_reg64(vm, a->vm);
217
+ vfp_load_reg64(vm, a->vm);
218
}
219
220
if (a->e) {
221
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
222
tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));
223
vd = tcg_temp_new_i64();
224
gen_helper_vfp_fcvt_f16_to_f64(vd, tmp, fpst, ahp_mode);
225
- neon_store_reg64(vd, a->vd);
226
+ vfp_store_reg64(vd, a->vd);
227
tcg_temp_free_i32(ahp_mode);
228
tcg_temp_free_ptr(fpst);
229
tcg_temp_free_i32(tmp);
230
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
231
tmp = tcg_temp_new_i32();
232
vm = tcg_temp_new_i64();
233
234
- neon_load_reg64(vm, a->vm);
235
+ vfp_load_reg64(vm, a->vm);
236
gen_helper_vfp_fcvt_f64_to_f16(tmp, vm, fpst, ahp_mode);
237
tcg_temp_free_i64(vm);
238
tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
239
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
240
}
241
242
tmp = tcg_temp_new_i64();
243
- neon_load_reg64(tmp, a->vm);
244
+ vfp_load_reg64(tmp, a->vm);
245
fpst = fpstatus_ptr(FPST_FPCR);
246
gen_helper_rintd(tmp, tmp, fpst);
247
- neon_store_reg64(tmp, a->vd);
248
+ vfp_store_reg64(tmp, a->vd);
249
tcg_temp_free_ptr(fpst);
250
tcg_temp_free_i64(tmp);
251
return true;
252
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
253
}
254
255
tmp = tcg_temp_new_i64();
256
- neon_load_reg64(tmp, a->vm);
257
+ vfp_load_reg64(tmp, a->vm);
258
fpst = fpstatus_ptr(FPST_FPCR);
259
tcg_rmode = tcg_const_i32(float_round_to_zero);
260
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
261
gen_helper_rintd(tmp, tmp, fpst);
262
gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst);
263
- neon_store_reg64(tmp, a->vd);
264
+ vfp_store_reg64(tmp, a->vd);
265
tcg_temp_free_ptr(fpst);
266
tcg_temp_free_i64(tmp);
267
tcg_temp_free_i32(tcg_rmode);
268
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
269
}
270
271
tmp = tcg_temp_new_i64();
272
- neon_load_reg64(tmp, a->vm);
273
+ vfp_load_reg64(tmp, a->vm);
274
fpst = fpstatus_ptr(FPST_FPCR);
275
gen_helper_rintd_exact(tmp, tmp, fpst);
276
- neon_store_reg64(tmp, a->vd);
277
+ vfp_store_reg64(tmp, a->vd);
278
tcg_temp_free_ptr(fpst);
279
tcg_temp_free_i64(tmp);
280
return true;
281
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
282
vd = tcg_temp_new_i64();
283
vfp_load_reg32(vm, a->vm);
284
gen_helper_vfp_fcvtds(vd, vm, cpu_env);
285
- neon_store_reg64(vd, a->vd);
286
+ vfp_store_reg64(vd, a->vd);
287
tcg_temp_free_i32(vm);
288
tcg_temp_free_i64(vd);
289
return true;
290
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
291
292
vd = tcg_temp_new_i32();
293
vm = tcg_temp_new_i64();
294
- neon_load_reg64(vm, a->vm);
295
+ vfp_load_reg64(vm, a->vm);
296
gen_helper_vfp_fcvtsd(vd, vm, cpu_env);
297
vfp_store_reg32(vd, a->vd);
298
tcg_temp_free_i32(vd);
299
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
300
/* u32 -> f64 */
301
gen_helper_vfp_uitod(vd, vm, fpst);
302
}
303
- neon_store_reg64(vd, a->vd);
304
+ vfp_store_reg64(vd, a->vd);
305
tcg_temp_free_i32(vm);
306
tcg_temp_free_i64(vd);
307
tcg_temp_free_ptr(fpst);
308
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
309
310
vm = tcg_temp_new_i64();
311
vd = tcg_temp_new_i32();
312
- neon_load_reg64(vm, a->vm);
313
+ vfp_load_reg64(vm, a->vm);
314
gen_helper_vjcvt(vd, vm, cpu_env);
315
vfp_store_reg32(vd, a->vd);
316
tcg_temp_free_i64(vm);
317
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
318
frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm);
319
320
vd = tcg_temp_new_i64();
321
- neon_load_reg64(vd, a->vd);
322
+ vfp_load_reg64(vd, a->vd);
323
324
fpst = fpstatus_ptr(FPST_FPCR);
325
shift = tcg_const_i32(frac_bits);
326
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
327
g_assert_not_reached();
328
}
329
330
- neon_store_reg64(vd, a->vd);
331
+ vfp_store_reg64(vd, a->vd);
332
tcg_temp_free_i64(vd);
333
tcg_temp_free_i32(shift);
334
tcg_temp_free_ptr(fpst);
335
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
336
fpst = fpstatus_ptr(FPST_FPCR);
337
vm = tcg_temp_new_i64();
338
vd = tcg_temp_new_i32();
339
- neon_load_reg64(vm, a->vm);
340
+ vfp_load_reg64(vm, a->vm);
341
342
if (a->s) {
343
if (a->rz) {
344
--
28
--
345
2.20.1
29
2.34.1
346
347
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
We can then use this to improve VMOV (scalar to gp) and
3
Like FEAT_TRF (Self-hosted Trace Extension), suppress tracing
4
VMOV (gp to scalar) so that we simply perform the memory
4
external to the cpu, which is out of scope for QEMU.
5
operation that we wanted, rather than inserting or
6
extracting from a 32-bit quantity.
7
8
These were the last uses of neon_load/store_reg, so remove them.
9
5
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230811214031.171020-10-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
10
---
15
target/arm/translate.c | 50 +++++++++++++-----------
11
target/arm/cpu.c | 3 +++
16
target/arm/translate-vfp.c.inc | 71 +++++-----------------------------
12
1 file changed, 3 insertions(+)
17
2 files changed, 37 insertions(+), 84 deletions(-)
18
13
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
16
--- a/target/arm/cpu.c
22
+++ b/target/arm/translate.c
17
+++ b/target/arm/cpu.c
23
@@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg)
18
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
24
* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
19
/* FEAT_SPE (Statistical Profiling Extension) */
25
* where 0 is the least significant end of the register.
20
cpu->isar.id_aa64dfr0 =
26
*/
21
FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMSVER, 0);
27
-static long neon_element_offset(int reg, int element, MemOp size)
22
+ /* FEAT_TRBE (Trace Buffer Extension) */
28
+static long neon_element_offset(int reg, int element, MemOp memop)
23
+ cpu->isar.id_aa64dfr0 =
29
{
24
+ FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, TRACEBUFFER, 0);
30
- int element_size = 1 << size;
25
/* FEAT_TRF (Self-hosted Trace Extension) */
31
+ int element_size = 1 << (memop & MO_SIZE);
26
cpu->isar.id_aa64dfr0 =
32
int ofs = element * element_size;
27
FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, TRACEFILT, 0);
33
#ifdef HOST_WORDS_BIGENDIAN
34
/*
35
@@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg)
36
}
37
}
38
39
-static TCGv_i32 neon_load_reg(int reg, int pass)
40
-{
41
- TCGv_i32 tmp = tcg_temp_new_i32();
42
- tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32));
43
- return tmp;
44
-}
45
-
46
-static void neon_store_reg(int reg, int pass, TCGv_i32 var)
47
-{
48
- tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32));
49
- tcg_temp_free_i32(var);
50
-}
51
-
52
static inline void neon_load_reg64(TCGv_i64 var, int reg)
53
{
54
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
55
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
56
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
57
}
58
59
-static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
60
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop)
61
{
62
- long off = neon_element_offset(reg, ele, size);
63
+ long off = neon_element_offset(reg, ele, memop);
64
65
- switch (size) {
66
- case MO_32:
67
+ switch (memop) {
68
+ case MO_SB:
69
+ tcg_gen_ld8s_i32(dest, cpu_env, off);
70
+ break;
71
+ case MO_UB:
72
+ tcg_gen_ld8u_i32(dest, cpu_env, off);
73
+ break;
74
+ case MO_SW:
75
+ tcg_gen_ld16s_i32(dest, cpu_env, off);
76
+ break;
77
+ case MO_UW:
78
+ tcg_gen_ld16u_i32(dest, cpu_env, off);
79
+ break;
80
+ case MO_UL:
81
+ case MO_SL:
82
tcg_gen_ld_i32(dest, cpu_env, off);
83
break;
84
default:
85
@@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
86
}
87
}
88
89
-static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
90
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop)
91
{
92
- long off = neon_element_offset(reg, ele, size);
93
+ long off = neon_element_offset(reg, ele, memop);
94
95
- switch (size) {
96
+ switch (memop) {
97
+ case MO_8:
98
+ tcg_gen_st8_i32(src, cpu_env, off);
99
+ break;
100
+ case MO_16:
101
+ tcg_gen_st16_i32(src, cpu_env, off);
102
+ break;
103
case MO_32:
104
tcg_gen_st_i32(src, cpu_env, off);
105
break;
106
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/translate-vfp.c.inc
109
+++ b/target/arm/translate-vfp.c.inc
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
111
{
112
/* VMOV scalar to general purpose register */
113
TCGv_i32 tmp;
114
- int pass;
115
- uint32_t offset;
116
117
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
118
- if (a->size == 2
119
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
120
+ if (a->size == MO_32
121
? !dc_isar_feature(aa32_fpsp_v2, s)
122
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
123
return false;
124
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
125
return false;
126
}
127
128
- offset = a->index << a->size;
129
- pass = extract32(offset, 2, 1);
130
- offset = extract32(offset, 0, 2) * 8;
131
-
132
if (!vfp_access_check(s)) {
133
return true;
134
}
135
136
- tmp = neon_load_reg(a->vn, pass);
137
- switch (a->size) {
138
- case 0:
139
- if (offset) {
140
- tcg_gen_shri_i32(tmp, tmp, offset);
141
- }
142
- if (a->u) {
143
- gen_uxtb(tmp);
144
- } else {
145
- gen_sxtb(tmp);
146
- }
147
- break;
148
- case 1:
149
- if (a->u) {
150
- if (offset) {
151
- tcg_gen_shri_i32(tmp, tmp, 16);
152
- } else {
153
- gen_uxth(tmp);
154
- }
155
- } else {
156
- if (offset) {
157
- tcg_gen_sari_i32(tmp, tmp, 16);
158
- } else {
159
- gen_sxth(tmp);
160
- }
161
- }
162
- break;
163
- case 2:
164
- break;
165
- }
166
+ tmp = tcg_temp_new_i32();
167
+ read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN));
168
store_reg(s, a->rt, tmp);
169
170
return true;
171
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
172
static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
173
{
174
/* VMOV general purpose register to scalar */
175
- TCGv_i32 tmp, tmp2;
176
- int pass;
177
- uint32_t offset;
178
+ TCGv_i32 tmp;
179
180
- /* SIZE == 2 is a VFP instruction; otherwise NEON. */
181
- if (a->size == 2
182
+ /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */
183
+ if (a->size == MO_32
184
? !dc_isar_feature(aa32_fpsp_v2, s)
185
: !arm_dc_feature(s, ARM_FEATURE_NEON)) {
186
return false;
187
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
188
return false;
189
}
190
191
- offset = a->index << a->size;
192
- pass = extract32(offset, 2, 1);
193
- offset = extract32(offset, 0, 2) * 8;
194
-
195
if (!vfp_access_check(s)) {
196
return true;
197
}
198
199
tmp = load_reg(s, a->rt);
200
- switch (a->size) {
201
- case 0:
202
- tmp2 = neon_load_reg(a->vn, pass);
203
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 8);
204
- tcg_temp_free_i32(tmp2);
205
- break;
206
- case 1:
207
- tmp2 = neon_load_reg(a->vn, pass);
208
- tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 16);
209
- tcg_temp_free_i32(tmp2);
210
- break;
211
- case 2:
212
- break;
213
- }
214
- neon_store_reg(a->vn, pass, tmp);
215
+ write_neon_element32(tmp, a->vn, a->index, a->size);
216
+ tcg_temp_free_i32(tmp);
217
218
return true;
219
}
220
--
28
--
221
2.20.1
29
2.34.1
222
223
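The point of the VMOV rework above, in plain C terms: instead of loading the containing 32-bit lane and extracting the byte or halfword by hand, the element is read with a load of the right size and signedness. An illustrative sketch, not the TCG code itself; the manual extraction in the "old" shape is also where host-layout assumptions tend to hide:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Old shape: fetch the whole 32-bit lane, then shift, mask and sign-extend. */
static int read_s8_old(const void *qreg, int index)
{
    uint32_t lane;
    memcpy(&lane, (const uint8_t *)qreg + (index / 4) * 4, sizeof(lane));
    /* little-endian in-lane layout assumed here */
    return (int8_t)((lane >> ((index % 4) * 8)) & 0xff);
}

/* New shape: one sized, signed load of the element itself. */
static int read_s8_new(const int8_t *qreg, int index)
{
    return qreg[index];
}

int main(void)
{
    int8_t q[16] = { 0 };
    q[5] = -3;
    printf("old: %d, new: %d\n", read_s8_old(q, 5), read_s8_new(q, 5));
    return 0;
}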
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This function makes it clear that we're talking about the whole
3
This feature allows the operating system to set TCR_ELx.HWU*
4
register, and not the 32-bit piece at index 0. This fixes a bug
4
to allow the implementation to use the PBHA bits from the
5
when running on a big-endian host.
5
block and page descriptors for IMPLEMENTATION DEFINED
6
purposes. Since QEMU has no need to use these bits, we may
7
simply ignore them.
6
8
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20230811214031.171020-11-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
target/arm/translate.c | 8 ++++++
14
docs/system/arm/emulation.rst | 1 +
13
target/arm/translate-neon.c.inc | 44 ++++++++++++++++-----------------
15
target/arm/tcg/cpu32.c | 2 +-
14
target/arm/translate-vfp.c.inc | 2 +-
16
target/arm/tcg/cpu64.c | 2 +-
15
3 files changed, 31 insertions(+), 23 deletions(-)
17
3 files changed, 3 insertions(+), 2 deletions(-)
16
18
17
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
18
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate.c
21
--- a/docs/system/arm/emulation.rst
20
+++ b/target/arm/translate.c
22
+++ b/docs/system/arm/emulation.rst
21
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
23
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
22
unallocated_encoding(s);
24
- FEAT_HAFDBS (Hardware management of the access flag and dirty bit state)
23
}
25
- FEAT_HCX (Support for the HCRX_EL2 register)
24
26
- FEAT_HPDS (Hierarchical permission disables)
25
+/*
27
+- FEAT_HPDS2 (Translation table page-based hardware attributes)
26
+ * Return the offset of a "full" NEON Dreg.
28
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
27
+ */
29
- FEAT_IDST (ID space trap handling)
28
+static long neon_full_reg_offset(unsigned reg)
30
- FEAT_IESB (Implicit error synchronization event)
29
+{
31
diff --git a/target/arm/tcg/cpu32.c b/target/arm/tcg/cpu32.c
30
+ return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]);
31
+}
32
+
33
static inline long vfp_reg_offset(bool dp, unsigned reg)
34
{
35
if (dp) {
36
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
37
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/translate-neon.c.inc
33
--- a/target/arm/tcg/cpu32.c
39
+++ b/target/arm/translate-neon.c.inc
34
+++ b/target/arm/tcg/cpu32.c
40
@@ -XXX,XX +XXX,XX @@ neon_element_offset(int reg, int element, MemOp size)
35
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
41
ofs ^= 8 - element_size;
36
cpu->isar.id_mmfr3 = t;
42
}
37
43
#endif
38
t = cpu->isar.id_mmfr4;
44
- return neon_reg_offset(reg, 0) + ofs;
39
- t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
45
+ return neon_full_reg_offset(reg) + ofs;
40
+ t = FIELD_DP32(t, ID_MMFR4, HPDS, 2); /* FEAT_HPDS2 */
46
}
41
t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
47
42
t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
48
static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
43
t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX */
49
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
44
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
50
* We cannot write 16 bytes at once because the
51
* destination is unaligned.
52
*/
53
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
54
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
55
8, 8, tmp);
56
- tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0),
57
- neon_reg_offset(vd, 0), 8, 8);
58
+ tcg_gen_gvec_mov(0, neon_full_reg_offset(vd + 1),
59
+ neon_full_reg_offset(vd), 8, 8);
60
} else {
61
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0),
62
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd),
63
vec_size, vec_size, tmp);
64
}
65
tcg_gen_addi_i32(addr, addr, 1 << size);
66
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
67
static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn)
68
{
69
int vec_size = a->q ? 16 : 8;
70
- int rd_ofs = neon_reg_offset(a->vd, 0);
71
- int rn_ofs = neon_reg_offset(a->vn, 0);
72
- int rm_ofs = neon_reg_offset(a->vm, 0);
73
+ int rd_ofs = neon_full_reg_offset(a->vd);
74
+ int rn_ofs = neon_full_reg_offset(a->vn);
75
+ int rm_ofs = neon_full_reg_offset(a->vm);
76
77
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
78
return false;
79
@@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn)
80
{
81
/* Handle a 2-reg-shift insn which can be vectorized. */
82
int vec_size = a->q ? 16 : 8;
83
- int rd_ofs = neon_reg_offset(a->vd, 0);
84
- int rm_ofs = neon_reg_offset(a->vm, 0);
85
+ int rd_ofs = neon_full_reg_offset(a->vd);
86
+ int rm_ofs = neon_full_reg_offset(a->vm);
87
88
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
89
return false;
90
@@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a,
91
{
92
/* FP operations in 2-reg-and-shift group */
93
int vec_size = a->q ? 16 : 8;
94
- int rd_ofs = neon_reg_offset(a->vd, 0);
95
- int rm_ofs = neon_reg_offset(a->vm, 0);
96
+ int rd_ofs = neon_full_reg_offset(a->vd);
97
+ int rm_ofs = neon_full_reg_offset(a->vm);
98
TCGv_ptr fpst;
99
100
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
101
@@ -XXX,XX +XXX,XX @@ static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a,
102
return true;
103
}
104
105
- reg_ofs = neon_reg_offset(a->vd, 0);
106
+ reg_ofs = neon_full_reg_offset(a->vd);
107
vec_size = a->q ? 16 : 8;
108
imm = asimd_imm_const(a->imm, a->cmode, a->op);
109
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a)
111
return true;
112
}
113
114
- tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0),
115
- neon_reg_offset(a->vn, 0),
116
- neon_reg_offset(a->vm, 0),
117
+ tcg_gen_gvec_3_ool(neon_full_reg_offset(a->vd),
118
+ neon_full_reg_offset(a->vn),
119
+ neon_full_reg_offset(a->vm),
120
16, 16, 0, fn_gvec);
121
return true;
122
}
123
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
124
{
125
/* Two registers and a scalar, using gvec */
126
int vec_size = a->q ? 16 : 8;
127
- int rd_ofs = neon_reg_offset(a->vd, 0);
128
- int rn_ofs = neon_reg_offset(a->vn, 0);
129
+ int rd_ofs = neon_full_reg_offset(a->vd);
130
+ int rn_ofs = neon_full_reg_offset(a->vn);
131
int rm_ofs;
132
int idx;
133
TCGv_ptr fpstatus;
134
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a,
135
/* a->vm is M:Vm, which encodes both register and index */
136
idx = extract32(a->vm, a->size + 2, 2);
137
a->vm = extract32(a->vm, 0, a->size + 2);
138
- rm_ofs = neon_reg_offset(a->vm, 0);
139
+ rm_ofs = neon_full_reg_offset(a->vm);
140
141
fpstatus = fpstatus_ptr(a->size == 1 ? FPST_STD_F16 : FPST_STD);
142
tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpstatus,
143
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
144
return true;
145
}
146
147
- tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0),
148
+ tcg_gen_gvec_dup_mem(a->size, neon_full_reg_offset(a->vd),
149
neon_element_offset(a->vm, a->index, a->size),
150
a->q ? 16 : 8, a->q ? 16 : 8);
151
return true;
152
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
153
static bool do_2misc_vec(DisasContext *s, arg_2misc *a, GVecGen2Fn *fn)
154
{
155
int vec_size = a->q ? 16 : 8;
156
- int rd_ofs = neon_reg_offset(a->vd, 0);
157
- int rm_ofs = neon_reg_offset(a->vm, 0);
158
+ int rd_ofs = neon_full_reg_offset(a->vd);
159
+ int rm_ofs = neon_full_reg_offset(a->vm);
160
161
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
162
return false;
163
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
164
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
165
--- a/target/arm/translate-vfp.c.inc
46
--- a/target/arm/tcg/cpu64.c
166
+++ b/target/arm/translate-vfp.c.inc
47
+++ b/target/arm/tcg/cpu64.c
167
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
48
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
168
}
49
t = FIELD_DP64(t, ID_AA64MMFR1, HAFDBS, 2); /* FEAT_HAFDBS */
169
50
t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
170
tmp = load_reg(s, a->rt);
51
t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
171
- tcg_gen_gvec_dup_i32(size, neon_reg_offset(a->vn, 0),
52
- t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
172
+ tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(a->vn),
53
+ t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 2); /* FEAT_HPDS2 */
173
vec_size, vec_size, tmp);
54
t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
174
tcg_temp_free_i32(tmp);
55
t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 3); /* FEAT_PAN3 */
175
56
t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
176
--
57
--
177
2.20.1
58
2.34.1
178
179
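The host-endianness correction that neon_element_offset() relies on (and that neon_full_reg_offset() above makes harder to misuse) is worth spelling out: element offsets are computed as if storage were fully little-endian and then XOR-adjusted on big-endian hosts, per the "XOR to account for the order of the 8-byte units" comment. A small standalone sketch:

#include <stdio.h>

/* Offset of element `ele`, each `element_size` bytes, within a Dreg that is
 * stored as a host-order 64-bit unit.
 */
static int element_offset(int ele, int element_size, int big_endian_host)
{
    int ofs = ele * element_size;          /* little-endian layout */
    if (big_endian_host && element_size < 8) {
        ofs ^= 8 - element_size;           /* flip position inside the 64-bit unit */
    }
    return ofs;
}

int main(void)
{
    /* Byte elements of one Dreg: LE host gives 0..7, BE host gives 7..0. */
    for (int ele = 0; ele < 8; ele++) {
        printf("byte %d: LE ofs %d, BE ofs %d\n",
               ele, element_offset(ele, 1, 0), element_offset(ele, 1, 1));
    }
    return 0;
}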
1
From: AlexChen <alex.chen@huawei.com>
1
From: Alex Bennée <alex.bennee@linaro.org>
2
2
3
In exynos4210_fimd_update(), the pointer s is dereferenced before
3
This is a mandatory feature for Armv8.1 architectures but we don't
4
being checked for validity, which may lead to a NULL pointer dereference.
4
state the feature clearly in our emulation list. Also include
5
So move the assignment to global_width after checking that s is valid.
5
FEAT_CRC32 comment in aarch64_max_tcg_initfn for ease of grepping.
6
6
7
Reported-by: Euler Robot <euler.robot@huawei.com>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Signed-off-by: Alex Chen <alex.chen@huawei.com>
8
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20230824075406.1515566-1-alex.bennee@linaro.org
10
Message-id: 5F9F8D88.9030102@huawei.com
10
Cc: qemu-stable@nongnu.org
11
Message-Id: <20230222110104.3996971-1-alex.bennee@linaro.org>
12
[PMM: pluralize 'instructions' in docs]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
14
---
13
hw/display/exynos4210_fimd.c | 4 +++-
15
docs/system/arm/emulation.rst | 1 +
14
1 file changed, 3 insertions(+), 1 deletion(-)
16
target/arm/tcg/cpu64.c | 2 +-
17
2 files changed, 2 insertions(+), 1 deletion(-)
15
18
16
diff --git a/hw/display/exynos4210_fimd.c b/hw/display/exynos4210_fimd.c
19
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
17
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/display/exynos4210_fimd.c
21
--- a/docs/system/arm/emulation.rst
19
+++ b/hw/display/exynos4210_fimd.c
22
+++ b/docs/system/arm/emulation.rst
20
@@ -XXX,XX +XXX,XX @@ static void exynos4210_fimd_update(void *opaque)
23
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
21
bool blend = false;
24
- FEAT_BBM at level 2 (Translation table break-before-make levels)
22
uint8_t *host_fb_addr;
25
- FEAT_BF16 (AArch64 BFloat16 instructions)
23
bool is_dirty = false;
26
- FEAT_BTI (Branch Target Identification)
24
- const int global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
27
+- FEAT_CRC32 (CRC32 instructions)
25
+ int global_width;
28
- FEAT_CSV2 (Cache speculation variant 2)
26
29
- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
27
if (!s || !s->console || !s->enabled ||
30
- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
28
surface_bits_per_pixel(qemu_console_surface(s->console)) == 0) {
31
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
29
return;
32
index XXXXXXX..XXXXXXX 100644
30
}
33
--- a/target/arm/tcg/cpu64.c
31
+
34
+++ b/target/arm/tcg/cpu64.c
32
+ global_width = (s->vidtcon[2] & FIMD_VIDTCON2_SIZE_MASK) + 1;
35
@@ -XXX,XX +XXX,XX @@ void aarch64_max_tcg_initfn(Object *obj)
33
exynos4210_update_resolution(s);
36
t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
34
surface = qemu_console_surface(s->console);
37
t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
35
38
t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
39
- t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
40
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1); /* FEAT_CRC32 */
41
t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
42
t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
43
t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
36
--
44
--
37
2.20.1
45
2.34.1
38
46
39
47
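The exynos4210_fimd defect above is the classic "dereference in the initializer, guard afterwards" pattern. A reduced stand-in of the bug shape and the fix; the struct, field and mask names here are illustrative, not the driver's real ones:

#include <stdio.h>

struct fimd_state { unsigned vidtcon2; };   /* stand-in for the device state */
#define SIZE_MASK 0x7ff                      /* stand-in for FIMD_VIDTCON2_SIZE_MASK */

/* Before: s is dereferenced in the initializer, ahead of the guard. */
static int width_before(const struct fimd_state *s)
{
    const int global_width = (s->vidtcon2 & SIZE_MASK) + 1;   /* crash if s == NULL */
    if (!s) {
        return -1;
    }
    return global_width;
}

/* After: guard first, dereference only once s is known to be valid. */
static int width_after(const struct fimd_state *s)
{
    int global_width;
    if (!s) {
        return -1;
    }
    global_width = (s->vidtcon2 & SIZE_MASK) + 1;
    return global_width;
}

int main(void)
{
    struct fimd_state s = { .vidtcon2 = 799 };
    printf("before(valid): %d, after(valid): %d, after(NULL): %d\n",
           width_before(&s), width_after(&s), width_after(NULL));
    return 0;
}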
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic.
3
i.MX7 IOMUX GPR device is not equivalent to i.MX6UL IOMUXC GPR device.
4
This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN):
4
In particular, register 22 is not present on i.MX6UL and this is actually
5
the only register that is really emulated in the i.MX7 IOMUX GPR device.
5
6
6
CID 1432363 (#1 of 1): Unintentional integer overflow:
7
Note: The i.MX6UL code is actually also implementing the IOMUX GPR device
8
as an unimplemented device at the same bus address and the two instantiations
9
were actually colliding. So we go back to the unimplemented device for now.
7
10
8
overflow_before_widen:
11
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
9
Potentially overflowing expression 1 << scale with type int
12
Message-id: 48681bf51ee97646479bb261bee19abebbc8074e.1692964892.git.jcd@tribudubois.net
10
(32 bits, signed) is evaluated using 32-bit arithmetic, and
11
then used in a context that expects an expression of type
12
hwaddr (64 bits, unsigned).
13
14
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Acked-by: Eric Auger <eric.auger@redhat.com>
16
Message-id: 20201030144617.1535064-1-philmd@redhat.com
17
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
15
---
20
hw/arm/smmuv3.c | 3 ++-
16
include/hw/arm/fsl-imx6ul.h | 2 --
21
1 file changed, 2 insertions(+), 1 deletion(-)
17
hw/arm/fsl-imx6ul.c | 11 -----------
18
2 files changed, 13 deletions(-)
22
19
23
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
20
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
24
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/arm/smmuv3.c
22
--- a/include/hw/arm/fsl-imx6ul.h
26
+++ b/hw/arm/smmuv3.c
23
+++ b/include/hw/arm/fsl-imx6ul.h
27
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@
28
*/
25
#include "hw/misc/imx6ul_ccm.h"
29
26
#include "hw/misc/imx6_src.h"
30
#include "qemu/osdep.h"
27
#include "hw/misc/imx7_snvs.h"
31
+#include "qemu/bitops.h"
28
-#include "hw/misc/imx7_gpr.h"
32
#include "hw/irq.h"
29
#include "hw/intc/imx_gpcv2.h"
33
#include "hw/sysbus.h"
30
#include "hw/watchdog/wdt_imx2.h"
34
#include "migration/vmstate.h"
31
#include "hw/gpio/imx_gpio.h"
35
@@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
32
@@ -XXX,XX +XXX,XX @@ struct FslIMX6ULState {
36
scale = CMD_SCALE(cmd);
33
IMX6SRCState src;
37
num = CMD_NUM(cmd);
34
IMX7SNVSState snvs;
38
ttl = CMD_TTL(cmd);
35
IMXGPCv2State gpcv2;
39
- num_pages = (num + 1) * (1 << (scale));
36
- IMX7GPRState gpr;
40
+ num_pages = (num + 1) * BIT_ULL(scale);
37
IMXSPIState spi[FSL_IMX6UL_NUM_ECSPIS];
38
IMXI2CState i2c[FSL_IMX6UL_NUM_I2CS];
39
IMXSerialState uart[FSL_IMX6UL_NUM_UARTS];
40
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/arm/fsl-imx6ul.c
43
+++ b/hw/arm/fsl-imx6ul.c
44
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
45
*/
46
object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
47
48
- /*
49
- * GPR
50
- */
51
- object_initialize_child(obj, "gpr", &s->gpr, TYPE_IMX7_GPR);
52
-
53
/*
54
* GPIOs 1 to 5
55
*/
56
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
57
FSL_IMX6UL_WDOGn_IRQ[i]));
41
}
58
}
42
59
43
if (type == SMMU_CMD_TLBI_NH_VA) {
60
- /*
61
- * GPR
62
- */
63
- sysbus_realize(SYS_BUS_DEVICE(&s->gpr), &error_abort);
64
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX6UL_IOMUXC_GPR_ADDR);
65
-
66
/*
67
* SDMA
68
*/
44
--
69
--
45
2.20.1
70
2.34.1
46
47
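The overflow the smmuv3 patch above fixes can be reproduced in isolation: with a 32-bit `1 << scale`, the multiplication wraps before it is widened to hwaddr, whereas BIT_ULL() keeps the whole computation in 64 bits. A minimal sketch, with BIT_ULL defined locally and an unsigned 32-bit "before" case so the example itself stays well-defined:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define BIT_ULL(nr) (1ULL << (nr))

int main(void)
{
    unsigned num = 16, scale = 28;     /* stand-ins for CMD_NUM()/CMD_SCALE() */

    /* 32-bit arithmetic: the product wraps before being widened. */
    uint32_t narrow = (num + 1) * (UINT32_C(1) << scale);

    /* 64-bit arithmetic via BIT_ULL(): the full product survives. */
    uint64_t wide = (num + 1) * BIT_ULL(scale);

    printf("narrow = %#" PRIx32 "\n", narrow);   /* 0x10000000 (wrapped) */
    printf("wide   = %#" PRIx64 "\n", wide);     /* 0x110000000 */
    return 0;
}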
1
The randomness tests in the NPCM7xx RNG test fail intermittently
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
but fairly frequently. On my machine running the test in a loop:
3
while QTEST_QEMU_BINARY=./qemu-system-aarch64 ./tests/qtest/npcm7xx_rng-test; do true; done
4
2
5
will fail in less than a minute with an error like:
3
* Add Addr and size definition for most i.MX6UL devices in i.MX6UL header file.
6
ERROR:../../tests/qtest/npcm7xx_rng-test.c:256:test_first_byte_runs:
4
* Use those newly defined named constants whenever possible.
7
assertion failed (calc_runs_p(buf.l, sizeof(buf) * BITS_PER_BYTE) > 0.01): (0.00286205989 > 0.01)
5
* Standardize the way we init a family of unimplemented devices
6
- SAI
7
- PWM
8
- CAN
9
* Add/rework few comments
8
10
9
(Failures have been observed on all 4 of the randomness tests,
11
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
10
not just first_byte_runs.)
12
Message-id: d579043fbd4e4b490370783fda43fc02c8e9be75.1692964892.git.jcd@tribudubois.net
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
include/hw/arm/fsl-imx6ul.h | 156 +++++++++++++++++++++++++++++++-----
17
hw/arm/fsl-imx6ul.c | 147 ++++++++++++++++++++++-----------
18
2 files changed, 232 insertions(+), 71 deletions(-)
11
19
12
It's not clear why these tests are failing like this, but intermittent
20
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
13
failures make CI and merge testing awkward, so disable running them
14
unless a developer specifically sets QEMU_TEST_FLAKY_RNG_TESTS when
15
running the test suite, until we work out the cause.
16
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
19
Message-id: 20201102152454.8287-1-peter.maydell@linaro.org
20
Reviewed-by: Havard Skinnemoen <hskinnemoen@google.com>
21
---
22
tests/qtest/npcm7xx_rng-test.c | 14 ++++++++++----
23
1 file changed, 10 insertions(+), 4 deletions(-)
24
25
diff --git a/tests/qtest/npcm7xx_rng-test.c b/tests/qtest/npcm7xx_rng-test.c
26
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
27
--- a/tests/qtest/npcm7xx_rng-test.c
22
--- a/include/hw/arm/fsl-imx6ul.h
28
+++ b/tests/qtest/npcm7xx_rng-test.c
23
+++ b/include/hw/arm/fsl-imx6ul.h
29
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
24
@@ -XXX,XX +XXX,XX @@
30
25
#include "exec/memory.h"
31
qtest_add_func("npcm7xx_rng/enable_disable", test_enable_disable);
26
#include "cpu.h"
32
qtest_add_func("npcm7xx_rng/rosel", test_rosel);
27
#include "qom/object.h"
33
- qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
28
+#include "qemu/units.h"
34
- qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
29
35
- qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
30
#define TYPE_FSL_IMX6UL "fsl-imx6ul"
36
- qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
31
OBJECT_DECLARE_SIMPLE_TYPE(FslIMX6ULState, FSL_IMX6UL)
32
@@ -XXX,XX +XXX,XX @@ enum FslIMX6ULConfiguration {
33
FSL_IMX6UL_NUM_ADCS = 2,
34
FSL_IMX6UL_NUM_USB_PHYS = 2,
35
FSL_IMX6UL_NUM_USBS = 2,
36
+ FSL_IMX6UL_NUM_SAIS = 3,
37
+ FSL_IMX6UL_NUM_CANS = 2,
38
+ FSL_IMX6UL_NUM_PWMS = 4,
39
};
40
41
struct FslIMX6ULState {
42
@@ -XXX,XX +XXX,XX @@ struct FslIMX6ULState {
43
44
enum FslIMX6ULMemoryMap {
45
FSL_IMX6UL_MMDC_ADDR = 0x80000000,
46
- FSL_IMX6UL_MMDC_SIZE = 2 * 1024 * 1024 * 1024UL,
47
+ FSL_IMX6UL_MMDC_SIZE = (2 * GiB),
48
49
FSL_IMX6UL_QSPI1_MEM_ADDR = 0x60000000,
50
- FSL_IMX6UL_EIM_ALIAS_ADDR = 0x58000000,
51
- FSL_IMX6UL_EIM_CS_ADDR = 0x50000000,
52
- FSL_IMX6UL_AES_ENCRYPT_ADDR = 0x10000000,
53
- FSL_IMX6UL_QSPI1_RX_ADDR = 0x0C000000,
54
+ FSL_IMX6UL_QSPI1_MEM_SIZE = (256 * MiB),
55
56
- /* AIPS-2 */
57
+ FSL_IMX6UL_EIM_ALIAS_ADDR = 0x58000000,
58
+ FSL_IMX6UL_EIM_ALIAS_SIZE = (128 * MiB),
59
+
60
+ FSL_IMX6UL_EIM_CS_ADDR = 0x50000000,
61
+ FSL_IMX6UL_EIM_CS_SIZE = (128 * MiB),
62
+
63
+ FSL_IMX6UL_AES_ENCRYPT_ADDR = 0x10000000,
64
+ FSL_IMX6UL_AES_ENCRYPT_SIZE = (1 * MiB),
65
+
66
+ FSL_IMX6UL_QSPI1_RX_ADDR = 0x0C000000,
67
+ FSL_IMX6UL_QSPI1_RX_SIZE = (32 * MiB),
68
+
69
+ /* AIPS-2 Begin */
70
FSL_IMX6UL_UART6_ADDR = 0x021FC000,
71
+
72
FSL_IMX6UL_I2C4_ADDR = 0x021F8000,
73
+
74
FSL_IMX6UL_UART5_ADDR = 0x021F4000,
75
FSL_IMX6UL_UART4_ADDR = 0x021F0000,
76
FSL_IMX6UL_UART3_ADDR = 0x021EC000,
77
FSL_IMX6UL_UART2_ADDR = 0x021E8000,
78
+
79
FSL_IMX6UL_WDOG3_ADDR = 0x021E4000,
80
+
81
FSL_IMX6UL_QSPI_ADDR = 0x021E0000,
82
+ FSL_IMX6UL_QSPI_SIZE = 0x500,
83
+
84
FSL_IMX6UL_SYS_CNT_CTRL_ADDR = 0x021DC000,
85
+ FSL_IMX6UL_SYS_CNT_CTRL_SIZE = (16 * KiB),
86
+
87
FSL_IMX6UL_SYS_CNT_CMP_ADDR = 0x021D8000,
88
+ FSL_IMX6UL_SYS_CNT_CMP_SIZE = (16 * KiB),
89
+
90
FSL_IMX6UL_SYS_CNT_RD_ADDR = 0x021D4000,
91
+ FSL_IMX6UL_SYS_CNT_RD_SIZE = (16 * KiB),
92
+
93
FSL_IMX6UL_TZASC_ADDR = 0x021D0000,
94
+ FSL_IMX6UL_TZASC_SIZE = (16 * KiB),
95
+
96
FSL_IMX6UL_PXP_ADDR = 0x021CC000,
97
+ FSL_IMX6UL_PXP_SIZE = (16 * KiB),
98
+
99
FSL_IMX6UL_LCDIF_ADDR = 0x021C8000,
100
+ FSL_IMX6UL_LCDIF_SIZE = 0x100,
101
+
102
FSL_IMX6UL_CSI_ADDR = 0x021C4000,
103
+ FSL_IMX6UL_CSI_SIZE = 0x100,
104
+
105
FSL_IMX6UL_CSU_ADDR = 0x021C0000,
106
+ FSL_IMX6UL_CSU_SIZE = (16 * KiB),
107
+
108
FSL_IMX6UL_OCOTP_CTRL_ADDR = 0x021BC000,
109
+ FSL_IMX6UL_OCOTP_CTRL_SIZE = (4 * KiB),
110
+
111
FSL_IMX6UL_EIM_ADDR = 0x021B8000,
112
+ FSL_IMX6UL_EIM_SIZE = 0x100,
113
+
114
FSL_IMX6UL_SIM2_ADDR = 0x021B4000,
115
+
116
FSL_IMX6UL_MMDC_CFG_ADDR = 0x021B0000,
117
+ FSL_IMX6UL_MMDC_CFG_SIZE = (4 * KiB),
118
+
119
FSL_IMX6UL_ROMCP_ADDR = 0x021AC000,
120
+ FSL_IMX6UL_ROMCP_SIZE = 0x300,
121
+
122
FSL_IMX6UL_I2C3_ADDR = 0x021A8000,
123
FSL_IMX6UL_I2C2_ADDR = 0x021A4000,
124
FSL_IMX6UL_I2C1_ADDR = 0x021A0000,
125
+
126
FSL_IMX6UL_ADC2_ADDR = 0x0219C000,
127
FSL_IMX6UL_ADC1_ADDR = 0x02198000,
128
+ FSL_IMX6UL_ADCn_SIZE = 0x100,
129
+
130
FSL_IMX6UL_USDHC2_ADDR = 0x02194000,
131
FSL_IMX6UL_USDHC1_ADDR = 0x02190000,
132
- FSL_IMX6UL_SIM1_ADDR = 0x0218C000,
133
- FSL_IMX6UL_ENET1_ADDR = 0x02188000,
134
- FSL_IMX6UL_USBO2_USBMISC_ADDR = 0x02184800,
135
- FSL_IMX6UL_USBO2_USB_ADDR = 0x02184000,
136
- FSL_IMX6UL_USBO2_PL301_ADDR = 0x02180000,
137
- FSL_IMX6UL_AIPS2_CFG_ADDR = 0x0217C000,
138
- FSL_IMX6UL_CAAM_ADDR = 0x02140000,
139
- FSL_IMX6UL_A7MPCORE_DAP_ADDR = 0x02100000,
140
141
- /* AIPS-1 */
142
+ FSL_IMX6UL_SIM1_ADDR = 0x0218C000,
143
+ FSL_IMX6UL_SIMn_SIZE = (16 * KiB),
144
+
145
+ FSL_IMX6UL_ENET1_ADDR = 0x02188000,
146
+
147
+ FSL_IMX6UL_USBO2_USBMISC_ADDR = 0x02184800,
148
+ FSL_IMX6UL_USBO2_USB1_ADDR = 0x02184000,
149
+ FSL_IMX6UL_USBO2_USB2_ADDR = 0x02184200,
150
+
151
+ FSL_IMX6UL_USBO2_PL301_ADDR = 0x02180000,
152
+ FSL_IMX6UL_USBO2_PL301_SIZE = (16 * KiB),
153
+
154
+ FSL_IMX6UL_AIPS2_CFG_ADDR = 0x0217C000,
155
+ FSL_IMX6UL_AIPS2_CFG_SIZE = 0x100,
156
+
157
+ FSL_IMX6UL_CAAM_ADDR = 0x02140000,
158
+ FSL_IMX6UL_CAAM_SIZE = (16 * KiB),
159
+
160
+ FSL_IMX6UL_A7MPCORE_DAP_ADDR = 0x02100000,
161
+ FSL_IMX6UL_A7MPCORE_DAP_SIZE = (4 * KiB),
162
+ /* AIPS-2 End */
163
+
164
+ /* AIPS-1 Begin */
165
FSL_IMX6UL_PWM8_ADDR = 0x020FC000,
166
FSL_IMX6UL_PWM7_ADDR = 0x020F8000,
167
FSL_IMX6UL_PWM6_ADDR = 0x020F4000,
168
FSL_IMX6UL_PWM5_ADDR = 0x020F0000,
169
+
170
FSL_IMX6UL_SDMA_ADDR = 0x020EC000,
171
+ FSL_IMX6UL_SDMA_SIZE = 0x300,
172
+
173
FSL_IMX6UL_GPT2_ADDR = 0x020E8000,
174
+
175
FSL_IMX6UL_IOMUXC_GPR_ADDR = 0x020E4000,
176
+ FSL_IMX6UL_IOMUXC_GPR_SIZE = 0x40,
177
+
178
FSL_IMX6UL_IOMUXC_ADDR = 0x020E0000,
179
+ FSL_IMX6UL_IOMUXC_SIZE = 0x700,
180
+
181
FSL_IMX6UL_GPC_ADDR = 0x020DC000,
182
+
183
FSL_IMX6UL_SRC_ADDR = 0x020D8000,
184
+
185
FSL_IMX6UL_EPIT2_ADDR = 0x020D4000,
186
FSL_IMX6UL_EPIT1_ADDR = 0x020D0000,
187
+
188
FSL_IMX6UL_SNVS_HP_ADDR = 0x020CC000,
189
+
190
FSL_IMX6UL_USBPHY2_ADDR = 0x020CA000,
191
- FSL_IMX6UL_USBPHY2_SIZE = (4 * 1024),
192
FSL_IMX6UL_USBPHY1_ADDR = 0x020C9000,
193
- FSL_IMX6UL_USBPHY1_SIZE = (4 * 1024),
194
+
195
FSL_IMX6UL_ANALOG_ADDR = 0x020C8000,
196
+ FSL_IMX6UL_ANALOG_SIZE = 0x300,
197
+
198
FSL_IMX6UL_CCM_ADDR = 0x020C4000,
199
+
200
FSL_IMX6UL_WDOG2_ADDR = 0x020C0000,
201
FSL_IMX6UL_WDOG1_ADDR = 0x020BC000,
202
+
203
FSL_IMX6UL_KPP_ADDR = 0x020B8000,
204
+ FSL_IMX6UL_KPP_SIZE = 0x10,
205
+
206
FSL_IMX6UL_ENET2_ADDR = 0x020B4000,
207
+
208
FSL_IMX6UL_SNVS_LP_ADDR = 0x020B0000,
209
+ FSL_IMX6UL_SNVS_LP_SIZE = (16 * KiB),
210
+
211
FSL_IMX6UL_GPIO5_ADDR = 0x020AC000,
212
FSL_IMX6UL_GPIO4_ADDR = 0x020A8000,
213
FSL_IMX6UL_GPIO3_ADDR = 0x020A4000,
214
FSL_IMX6UL_GPIO2_ADDR = 0x020A0000,
215
FSL_IMX6UL_GPIO1_ADDR = 0x0209C000,
216
+
217
FSL_IMX6UL_GPT1_ADDR = 0x02098000,
218
+
219
FSL_IMX6UL_CAN2_ADDR = 0x02094000,
220
FSL_IMX6UL_CAN1_ADDR = 0x02090000,
221
+ FSL_IMX6UL_CANn_SIZE = (4 * KiB),
222
+
223
FSL_IMX6UL_PWM4_ADDR = 0x0208C000,
224
FSL_IMX6UL_PWM3_ADDR = 0x02088000,
225
FSL_IMX6UL_PWM2_ADDR = 0x02084000,
226
FSL_IMX6UL_PWM1_ADDR = 0x02080000,
227
+ FSL_IMX6UL_PWMn_SIZE = 0x20,
228
+
229
FSL_IMX6UL_AIPS1_CFG_ADDR = 0x0207C000,
230
+ FSL_IMX6UL_AIPS1_CFG_SIZE = (16 * KiB),
231
+
232
FSL_IMX6UL_BEE_ADDR = 0x02044000,
233
+ FSL_IMX6UL_BEE_SIZE = (16 * KiB),
234
+
235
FSL_IMX6UL_TOUCH_CTRL_ADDR = 0x02040000,
236
+ FSL_IMX6UL_TOUCH_CTRL_SIZE = 0x100,
237
+
238
FSL_IMX6UL_SPBA_ADDR = 0x0203C000,
239
+ FSL_IMX6UL_SPBA_SIZE = 0x100,
240
+
241
FSL_IMX6UL_ASRC_ADDR = 0x02034000,
242
+ FSL_IMX6UL_ASRC_SIZE = 0x100,
243
+
244
FSL_IMX6UL_SAI3_ADDR = 0x02030000,
245
FSL_IMX6UL_SAI2_ADDR = 0x0202C000,
246
FSL_IMX6UL_SAI1_ADDR = 0x02028000,
247
+ FSL_IMX6UL_SAIn_SIZE = 0x200,
248
+
249
FSL_IMX6UL_UART8_ADDR = 0x02024000,
250
FSL_IMX6UL_UART1_ADDR = 0x02020000,
251
FSL_IMX6UL_UART7_ADDR = 0x02018000,
252
+
253
FSL_IMX6UL_ECSPI4_ADDR = 0x02014000,
254
FSL_IMX6UL_ECSPI3_ADDR = 0x02010000,
255
FSL_IMX6UL_ECSPI2_ADDR = 0x0200C000,
256
FSL_IMX6UL_ECSPI1_ADDR = 0x02008000,
257
+
258
FSL_IMX6UL_SPDIF_ADDR = 0x02004000,
259
+ FSL_IMX6UL_SPDIF_SIZE = 0x100,
260
+ /* AIPS-1 End */
261
+
262
+ FSL_IMX6UL_BCH_ADDR = 0x01808000,
263
+ FSL_IMX6UL_BCH_SIZE = 0x200,
264
+
265
+ FSL_IMX6UL_GPMI_ADDR = 0x01806000,
266
+ FSL_IMX6UL_GPMI_SIZE = 0x200,
267
268
FSL_IMX6UL_APBH_DMA_ADDR = 0x01804000,
269
- FSL_IMX6UL_APBH_DMA_SIZE = (32 * 1024),
270
+ FSL_IMX6UL_APBH_DMA_SIZE = (4 * KiB),
271
272
FSL_IMX6UL_A7MPCORE_ADDR = 0x00A00000,
273
274
FSL_IMX6UL_OCRAM_ALIAS_ADDR = 0x00920000,
275
- FSL_IMX6UL_OCRAM_ALIAS_SIZE = 0x00060000,
276
+ FSL_IMX6UL_OCRAM_ALIAS_SIZE = (384 * KiB),
277
+
278
FSL_IMX6UL_OCRAM_MEM_ADDR = 0x00900000,
279
- FSL_IMX6UL_OCRAM_MEM_SIZE = 0x00020000,
280
+ FSL_IMX6UL_OCRAM_MEM_SIZE = (128 * KiB),
281
+
282
FSL_IMX6UL_CAAM_MEM_ADDR = 0x00100000,
283
- FSL_IMX6UL_CAAM_MEM_SIZE = 0x00008000,
284
+ FSL_IMX6UL_CAAM_MEM_SIZE = (32 * KiB),
285
+
286
FSL_IMX6UL_ROM_ADDR = 0x00000000,
287
- FSL_IMX6UL_ROM_SIZE = 0x00018000,
288
+ FSL_IMX6UL_ROM_SIZE = (96 * KiB),
289
};
290
291
enum FslIMX6ULIRQs {
292
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
293
index XXXXXXX..XXXXXXX 100644
294
--- a/hw/arm/fsl-imx6ul.c
295
+++ b/hw/arm/fsl-imx6ul.c
296
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
297
object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
298
299
/*
300
- * GPIOs 1 to 5
301
+ * GPIOs
302
*/
303
for (i = 0; i < FSL_IMX6UL_NUM_GPIOS; i++) {
304
snprintf(name, NAME_SIZE, "gpio%d", i);
305
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
306
}
307
308
/*
309
- * GPT 1, 2
310
+ * GPTs
311
*/
312
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
313
snprintf(name, NAME_SIZE, "gpt%d", i);
314
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
315
}
316
317
/*
318
- * EPIT 1, 2
319
+ * EPITs
320
*/
321
for (i = 0; i < FSL_IMX6UL_NUM_EPITS; i++) {
322
snprintf(name, NAME_SIZE, "epit%d", i + 1);
323
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
324
}
325
326
/*
327
- * eCSPI
328
+ * eCSPIs
329
*/
330
for (i = 0; i < FSL_IMX6UL_NUM_ECSPIS; i++) {
331
snprintf(name, NAME_SIZE, "spi%d", i + 1);
332
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
333
}
334
335
/*
336
- * I2C
337
+ * I2Cs
338
*/
339
for (i = 0; i < FSL_IMX6UL_NUM_I2CS; i++) {
340
snprintf(name, NAME_SIZE, "i2c%d", i + 1);
341
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
342
}
343
344
/*
345
- * UART
346
+ * UARTs
347
*/
348
for (i = 0; i < FSL_IMX6UL_NUM_UARTS; i++) {
349
snprintf(name, NAME_SIZE, "uart%d", i);
350
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
351
}
352
353
/*
354
- * Ethernet
355
+ * Ethernets
356
*/
357
for (i = 0; i < FSL_IMX6UL_NUM_ETHS; i++) {
358
snprintf(name, NAME_SIZE, "eth%d", i);
359
object_initialize_child(obj, name, &s->eth[i], TYPE_IMX_ENET);
360
}
361
362
- /* USB */
37
+ /*
363
+ /*
38
+ * These tests fail intermittently; only run them on explicit
364
+ * USB PHYs
39
+ * request until we figure out why.
40
+ */
365
+ */
41
+ if (getenv("QEMU_TEST_FLAKY_RNG_TESTS")) {
366
for (i = 0; i < FSL_IMX6UL_NUM_USB_PHYS; i++) {
42
+ qtest_add_func("npcm7xx_rng/continuous/monobit", test_continuous_monobit);
367
snprintf(name, NAME_SIZE, "usbphy%d", i);
43
+ qtest_add_func("npcm7xx_rng/continuous/runs", test_continuous_runs);
368
object_initialize_child(obj, name, &s->usbphy[i], TYPE_IMX_USBPHY);
44
+ qtest_add_func("npcm7xx_rng/first_byte/monobit", test_first_byte_monobit);
369
}
45
+ qtest_add_func("npcm7xx_rng/first_byte/runs", test_first_byte_runs);
370
+
371
+ /*
372
+ * USBs
373
+ */
374
for (i = 0; i < FSL_IMX6UL_NUM_USBS; i++) {
375
snprintf(name, NAME_SIZE, "usb%d", i);
376
object_initialize_child(obj, name, &s->usb[i], TYPE_CHIPIDEA);
377
}
378
379
/*
380
- * SDHCI
381
+ * SDHCIs
382
*/
383
for (i = 0; i < FSL_IMX6UL_NUM_USDHCS; i++) {
384
snprintf(name, NAME_SIZE, "usdhc%d", i);
385
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
386
}
387
388
/*
389
- * Watchdog
390
+ * Watchdogs
391
*/
392
for (i = 0; i < FSL_IMX6UL_NUM_WDTS; i++) {
393
snprintf(name, NAME_SIZE, "wdt%d", i);
394
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
395
* A7MPCORE DAP
396
*/
397
create_unimplemented_device("a7mpcore-dap", FSL_IMX6UL_A7MPCORE_DAP_ADDR,
398
- 0x100000);
399
+ FSL_IMX6UL_A7MPCORE_DAP_SIZE);
400
401
/*
402
- * GPT 1, 2
403
+ * GPTs
404
*/
405
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
406
static const hwaddr FSL_IMX6UL_GPTn_ADDR[FSL_IMX6UL_NUM_GPTS] = {
407
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
408
}
409
410
/*
411
- * EPIT 1, 2
412
+ * EPITs
413
*/
414
for (i = 0; i < FSL_IMX6UL_NUM_EPITS; i++) {
415
static const hwaddr FSL_IMX6UL_EPITn_ADDR[FSL_IMX6UL_NUM_EPITS] = {
416
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
417
}
418
419
/*
420
- * GPIO
421
+ * GPIOs
422
*/
423
for (i = 0; i < FSL_IMX6UL_NUM_GPIOS; i++) {
424
static const hwaddr FSL_IMX6UL_GPIOn_ADDR[FSL_IMX6UL_NUM_GPIOS] = {
425
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
426
}
427
428
/*
429
- * IOMUXC and IOMUXC_GPR
430
+ * IOMUXC
431
*/
432
- for (i = 0; i < 1; i++) {
433
- static const hwaddr FSL_IMX6UL_IOMUXCn_ADDR[FSL_IMX6UL_NUM_IOMUXCS] = {
434
- FSL_IMX6UL_IOMUXC_ADDR,
435
- FSL_IMX6UL_IOMUXC_GPR_ADDR,
436
- };
437
-
438
- snprintf(name, NAME_SIZE, "iomuxc%d", i);
439
- create_unimplemented_device(name, FSL_IMX6UL_IOMUXCn_ADDR[i], 0x4000);
440
- }
441
+ create_unimplemented_device("iomuxc", FSL_IMX6UL_IOMUXC_ADDR,
442
+ FSL_IMX6UL_IOMUXC_SIZE);
443
+ create_unimplemented_device("iomuxc_gpr", FSL_IMX6UL_IOMUXC_GPR_ADDR,
444
+ FSL_IMX6UL_IOMUXC_GPR_SIZE);
445
446
/*
447
* CCM
448
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
449
sysbus_realize(SYS_BUS_DEVICE(&s->gpcv2), &error_abort);
450
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpcv2), 0, FSL_IMX6UL_GPC_ADDR);
451
452
- /* Initialize all ECSPI */
453
+ /*
454
+ * ECSPIs
455
+ */
456
for (i = 0; i < FSL_IMX6UL_NUM_ECSPIS; i++) {
457
static const hwaddr FSL_IMX6UL_SPIn_ADDR[FSL_IMX6UL_NUM_ECSPIS] = {
458
FSL_IMX6UL_ECSPI1_ADDR,
459
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
460
}
461
462
/*
463
- * I2C
464
+ * I2Cs
465
*/
466
for (i = 0; i < FSL_IMX6UL_NUM_I2CS; i++) {
467
static const hwaddr FSL_IMX6UL_I2Cn_ADDR[FSL_IMX6UL_NUM_I2CS] = {
468
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
469
}
470
471
/*
472
- * UART
473
+ * UARTs
474
*/
475
for (i = 0; i < FSL_IMX6UL_NUM_UARTS; i++) {
476
static const hwaddr FSL_IMX6UL_UARTn_ADDR[FSL_IMX6UL_NUM_UARTS] = {
477
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
478
}
479
480
/*
481
- * Ethernet
482
+ * Ethernets
483
*
484
* We must use two loops since phy_connected affects the other interface
485
* and we have to set all properties before calling sysbus_realize().
486
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
487
FSL_IMX6UL_ENETn_TIMER_IRQ[i]));
488
}
489
490
- /* USB */
491
+ /*
492
+ * USB PHYs
493
+ */
494
for (i = 0; i < FSL_IMX6UL_NUM_USB_PHYS; i++) {
495
+ static const hwaddr
496
+ FSL_IMX6UL_USB_PHYn_ADDR[FSL_IMX6UL_NUM_USB_PHYS] = {
497
+ FSL_IMX6UL_USBPHY1_ADDR,
498
+ FSL_IMX6UL_USBPHY2_ADDR,
499
+ };
500
+
501
sysbus_realize(SYS_BUS_DEVICE(&s->usbphy[i]), &error_abort);
502
sysbus_mmio_map(SYS_BUS_DEVICE(&s->usbphy[i]), 0,
503
- FSL_IMX6UL_USBPHY1_ADDR + i * 0x1000);
504
+ FSL_IMX6UL_USB_PHYn_ADDR[i]);
505
}
506
507
+ /*
508
+ * USBs
509
+ */
510
for (i = 0; i < FSL_IMX6UL_NUM_USBS; i++) {
511
+ static const hwaddr FSL_IMX6UL_USB02_USBn_ADDR[FSL_IMX6UL_NUM_USBS] = {
512
+ FSL_IMX6UL_USBO2_USB1_ADDR,
513
+ FSL_IMX6UL_USBO2_USB2_ADDR,
514
+ };
515
+
516
static const int FSL_IMX6UL_USBn_IRQ[] = {
517
FSL_IMX6UL_USB1_IRQ,
518
FSL_IMX6UL_USB2_IRQ,
519
};
520
+
521
sysbus_realize(SYS_BUS_DEVICE(&s->usb[i]), &error_abort);
522
sysbus_mmio_map(SYS_BUS_DEVICE(&s->usb[i]), 0,
523
- FSL_IMX6UL_USBO2_USB_ADDR + i * 0x200);
524
+ FSL_IMX6UL_USB02_USBn_ADDR[i]);
525
sysbus_connect_irq(SYS_BUS_DEVICE(&s->usb[i]), 0,
526
qdev_get_gpio_in(DEVICE(&s->a7mpcore),
527
FSL_IMX6UL_USBn_IRQ[i]));
528
}
529
530
/*
531
- * USDHC
532
+ * USDHCs
533
*/
534
for (i = 0; i < FSL_IMX6UL_NUM_USDHCS; i++) {
535
static const hwaddr FSL_IMX6UL_USDHCn_ADDR[FSL_IMX6UL_NUM_USDHCS] = {
536
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
537
sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX6UL_SNVS_HP_ADDR);
538
539
/*
540
- * Watchdog
541
+ * Watchdogs
542
*/
543
for (i = 0; i < FSL_IMX6UL_NUM_WDTS; i++) {
544
static const hwaddr FSL_IMX6UL_WDOGn_ADDR[FSL_IMX6UL_NUM_WDTS] = {
545
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
546
FSL_IMX6UL_WDOG2_ADDR,
547
FSL_IMX6UL_WDOG3_ADDR,
548
};
549
+
550
static const int FSL_IMX6UL_WDOGn_IRQ[FSL_IMX6UL_NUM_WDTS] = {
551
FSL_IMX6UL_WDOG1_IRQ,
552
FSL_IMX6UL_WDOG2_IRQ,
553
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
554
/*
555
* SDMA
556
*/
557
- create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR, 0x4000);
558
+ create_unimplemented_device("sdma", FSL_IMX6UL_SDMA_ADDR,
559
+ FSL_IMX6UL_SDMA_SIZE);
560
561
/*
562
- * SAI (Audio SSI (Synchronous Serial Interface))
563
+ * SAIs (Audio SSI (Synchronous Serial Interface))
564
*/
565
- create_unimplemented_device("sai1", FSL_IMX6UL_SAI1_ADDR, 0x4000);
566
- create_unimplemented_device("sai2", FSL_IMX6UL_SAI2_ADDR, 0x4000);
567
- create_unimplemented_device("sai3", FSL_IMX6UL_SAI3_ADDR, 0x4000);
568
+ for (i = 0; i < FSL_IMX6UL_NUM_SAIS; i++) {
569
+ static const hwaddr FSL_IMX6UL_SAIn_ADDR[FSL_IMX6UL_NUM_SAIS] = {
570
+ FSL_IMX6UL_SAI1_ADDR,
571
+ FSL_IMX6UL_SAI2_ADDR,
572
+ FSL_IMX6UL_SAI3_ADDR,
573
+ };
574
+
575
+ snprintf(name, NAME_SIZE, "sai%d", i);
576
+ create_unimplemented_device(name, FSL_IMX6UL_SAIn_ADDR[i],
577
+ FSL_IMX6UL_SAIn_SIZE);
46
+ }
578
+ }
47
579
48
qtest_start("-machine npcm750-evb");
580
/*
49
ret = g_test_run();
581
- * PWM
582
+ * PWMs
583
*/
584
- create_unimplemented_device("pwm1", FSL_IMX6UL_PWM1_ADDR, 0x4000);
585
- create_unimplemented_device("pwm2", FSL_IMX6UL_PWM2_ADDR, 0x4000);
586
- create_unimplemented_device("pwm3", FSL_IMX6UL_PWM3_ADDR, 0x4000);
587
- create_unimplemented_device("pwm4", FSL_IMX6UL_PWM4_ADDR, 0x4000);
588
+ for (i = 0; i < FSL_IMX6UL_NUM_PWMS; i++) {
589
+ static const hwaddr FSL_IMX6UL_PWMn_ADDR[FSL_IMX6UL_NUM_PWMS] = {
590
+ FSL_IMX6UL_PWM1_ADDR,
591
+ FSL_IMX6UL_PWM2_ADDR,
592
+ FSL_IMX6UL_PWM3_ADDR,
593
+ FSL_IMX6UL_PWM4_ADDR,
594
+ };
595
+
596
+ snprintf(name, NAME_SIZE, "pwm%d", i);
597
+ create_unimplemented_device(name, FSL_IMX6UL_PWMn_ADDR[i],
598
+ FSL_IMX6UL_PWMn_SIZE);
599
+ }
600
601
/*
602
* Audio ASRC (asynchronous sample rate converter)
603
*/
604
- create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR, 0x4000);
605
+ create_unimplemented_device("asrc", FSL_IMX6UL_ASRC_ADDR,
606
+ FSL_IMX6UL_ASRC_SIZE);
607
608
/*
609
- * CAN
610
+ * CANs
611
*/
612
- create_unimplemented_device("can1", FSL_IMX6UL_CAN1_ADDR, 0x4000);
613
- create_unimplemented_device("can2", FSL_IMX6UL_CAN2_ADDR, 0x4000);
614
+ for (i = 0; i < FSL_IMX6UL_NUM_CANS; i++) {
615
+ static const hwaddr FSL_IMX6UL_CANn_ADDR[FSL_IMX6UL_NUM_CANS] = {
616
+ FSL_IMX6UL_CAN1_ADDR,
617
+ FSL_IMX6UL_CAN2_ADDR,
618
+ };
619
+
620
+ snprintf(name, NAME_SIZE, "can%d", i);
621
+ create_unimplemented_device(name, FSL_IMX6UL_CANn_ADDR[i],
622
+ FSL_IMX6UL_CANn_SIZE);
623
+ }
624
625
/*
626
* APHB_DMA
627
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
628
};
629
630
snprintf(name, NAME_SIZE, "adc%d", i);
631
- create_unimplemented_device(name, FSL_IMX6UL_ADCn_ADDR[i], 0x4000);
632
+ create_unimplemented_device(name, FSL_IMX6UL_ADCn_ADDR[i],
633
+ FSL_IMX6UL_ADCn_SIZE);
634
}
635
636
/*
637
* LCD
638
*/
639
- create_unimplemented_device("lcdif", FSL_IMX6UL_LCDIF_ADDR, 0x4000);
640
+ create_unimplemented_device("lcdif", FSL_IMX6UL_LCDIF_ADDR,
641
+ FSL_IMX6UL_LCDIF_SIZE);
642
643
/*
644
* ROM memory
50
--
645
--
51
2.20.1
646
2.34.1
52
53
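
For reference, the opt-in mechanism used by the RNG test patch above, reduced to a skeleton: the statistical tests are only registered when the developer exports the environment variable, so a plain CI run never executes them. This assumes the single-instance libqtest helpers (libqtest-single.h); the test names and bodies are placeholders, and only the getenv() gate and the registration/run calls mirror the real test.

    #include "qemu/osdep.h"
    #include "libqtest-single.h"

    static void test_stable(void)
    {
        /* deterministic checks that are always safe to run */
    }

    static void test_flaky_statistics(void)
    {
        /* statistical checks that occasionally fail by design */
    }

    int main(int argc, char **argv)
    {
        int ret;

        g_test_init(&argc, &argv, NULL);

        qtest_add_func("example/stable", test_stable);

        /* Only run the flaky statistics when explicitly requested. */
        if (getenv("QEMU_TEST_FLAKY_RNG_TESTS")) {
            qtest_add_func("example/flaky", test_flaky_statistics);
        }

        qtest_start("-machine npcm750-evb");
        ret = g_test_run();
        qtest_end();

        return ret;
    }
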
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so
3
* Add TZASC as an unimplemented device.
4
that SVE will not trap to EL3.
4
- Allow bare-metal applications to access this (unimplemented) device
5
* Add CSU as an unimplemented device.
6
- Allow bare-metal applications to access this (unimplemented) device
7
* Add 4 missing PWM devices
5
8
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
9
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20201030151541.11976-1-remi@remlab.net
11
Message-id: 59e4dc56e14eccfefd379275ec19048dff9c10b3.1692964892.git.jcd@tribudubois.net
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
13
---
11
hw/arm/boot.c | 3 +++
14
include/hw/arm/fsl-imx6ul.h | 2 +-
12
1 file changed, 3 insertions(+)
15
hw/arm/fsl-imx6ul.c | 16 ++++++++++++++++
16
2 files changed, 17 insertions(+), 1 deletion(-)
13
17
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
18
diff --git a/include/hw/arm/fsl-imx6ul.h b/include/hw/arm/fsl-imx6ul.h
15
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/boot.c
20
--- a/include/hw/arm/fsl-imx6ul.h
17
+++ b/hw/arm/boot.c
21
+++ b/include/hw/arm/fsl-imx6ul.h
18
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
22
@@ -XXX,XX +XXX,XX @@ enum FslIMX6ULConfiguration {
19
if (cpu_isar_feature(aa64_mte, cpu)) {
23
FSL_IMX6UL_NUM_USBS = 2,
20
env->cp15.scr_el3 |= SCR_ATA;
24
FSL_IMX6UL_NUM_SAIS = 3,
21
}
25
FSL_IMX6UL_NUM_CANS = 2,
22
+ if (cpu_isar_feature(aa64_sve, cpu)) {
26
- FSL_IMX6UL_NUM_PWMS = 4,
23
+ env->cp15.cptr_el[3] |= CPTR_EZ;
27
+ FSL_IMX6UL_NUM_PWMS = 8,
24
+ }
28
};
25
/* AArch64 kernels never boot in secure mode */
29
26
assert(!info->secure_boot);
30
struct FslIMX6ULState {
27
/* This hook is only supported for AArch32 currently:
31
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/arm/fsl-imx6ul.c
34
+++ b/hw/arm/fsl-imx6ul.c
35
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
36
FSL_IMX6UL_PWM2_ADDR,
37
FSL_IMX6UL_PWM3_ADDR,
38
FSL_IMX6UL_PWM4_ADDR,
39
+ FSL_IMX6UL_PWM5_ADDR,
40
+ FSL_IMX6UL_PWM6_ADDR,
41
+ FSL_IMX6UL_PWM7_ADDR,
42
+ FSL_IMX6UL_PWM8_ADDR,
43
};
44
45
snprintf(name, NAME_SIZE, "pwm%d", i);
46
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_realize(DeviceState *dev, Error **errp)
47
create_unimplemented_device("lcdif", FSL_IMX6UL_LCDIF_ADDR,
48
FSL_IMX6UL_LCDIF_SIZE);
49
50
+ /*
51
+ * CSU
52
+ */
53
+ create_unimplemented_device("csu", FSL_IMX6UL_CSU_ADDR,
54
+ FSL_IMX6UL_CSU_SIZE);
55
+
56
+ /*
57
+ * TZASC
58
+ */
59
+ create_unimplemented_device("tzasc", FSL_IMX6UL_TZASC_ADDR,
60
+ FSL_IMX6UL_TZASC_SIZE);
61
+
62
/*
63
* ROM memory
64
*/
28
--
65
--
29
2.20.1
66
2.34.1
30
67
31
68
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
We can use proper widening loads to extend 32-bit inputs,
3
* Add Addr and size definitions for all i.MX7 devices in the i.MX7 header file (a short style example follows this list).
4
and skip the "widenfn" step.
4
* Use those newly defined named constants whenever possible.
5
* Standardize the way we init a family of unimplemented devices
6
- SAI
7
- PWM
8
- CAN
9
* Add/rework a few comments
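
As in the earlier i.MX6UL rework, the header side of this patch leans on "qemu/units.h" so sizes read as KiB/MiB instead of raw hex, next to their base addresses. A two-entry sketch of the style (the addresses and sizes are values this patch defines; the enum name is illustrative only):

    #include "qemu/osdep.h"
    #include "qemu/units.h"    /* KiB, MiB, GiB */

    enum ExampleIMX7MemoryMap {
        EXAMPLE_QSPI1_MEM_ADDR = 0x60000000,
        EXAMPLE_QSPI1_MEM_SIZE = (256 * MiB),

        EXAMPLE_OCRAM_MEM_ADDR = 0x00900000,
        EXAMPLE_OCRAM_MEM_SIZE = (128 * KiB),
    };

Pairing every _ADDR with an explicit _SIZE is what later lets the unimplemented-device and memory-region calls drop their hard-coded lengths.
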
5
10
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
7
Message-id: 20201030022618.785675-12-richard.henderson@linaro.org
12
Message-id: 59e195d33e4d486a8d131392acd46633c8c10ed7.1692964892.git.jcd@tribudubois.net
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
15
---
11
target/arm/translate.c | 6 +++
16
include/hw/arm/fsl-imx7.h | 330 ++++++++++++++++++++++++++++----------
12
target/arm/translate-neon.c.inc | 66 ++++++++++++++++++---------------
17
hw/arm/fsl-imx7.c | 130 ++++++++++-----
13
2 files changed, 43 insertions(+), 29 deletions(-)
18
2 files changed, 335 insertions(+), 125 deletions(-)
14
19
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.c
22
--- a/include/hw/arm/fsl-imx7.h
18
+++ b/target/arm/translate.c
23
+++ b/include/hw/arm/fsl-imx7.h
19
@@ -XXX,XX +XXX,XX @@ static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop)
24
@@ -XXX,XX +XXX,XX @@
20
long off = neon_element_offset(reg, ele, memop);
25
#include "hw/misc/imx7_ccm.h"
21
26
#include "hw/misc/imx7_snvs.h"
22
switch (memop) {
27
#include "hw/misc/imx7_gpr.h"
23
+ case MO_SL:
28
-#include "hw/misc/imx6_src.h"
24
+ tcg_gen_ld32s_i64(dest, cpu_env, off);
29
#include "hw/watchdog/wdt_imx2.h"
25
+ break;
30
#include "hw/gpio/imx_gpio.h"
26
+ case MO_UL:
31
#include "hw/char/imx_serial.h"
27
+ tcg_gen_ld32u_i64(dest, cpu_env, off);
32
@@ -XXX,XX +XXX,XX @@
28
+ break;
33
#include "hw/usb/chipidea.h"
29
case MO_Q:
34
#include "cpu.h"
30
tcg_gen_ld_i64(dest, cpu_env, off);
35
#include "qom/object.h"
31
break;
36
+#include "qemu/units.h"
32
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
37
38
#define TYPE_FSL_IMX7 "fsl-imx7"
39
OBJECT_DECLARE_SIMPLE_TYPE(FslIMX7State, FSL_IMX7)
40
@@ -XXX,XX +XXX,XX @@ enum FslIMX7Configuration {
41
FSL_IMX7_NUM_ECSPIS = 4,
42
FSL_IMX7_NUM_USBS = 3,
43
FSL_IMX7_NUM_ADCS = 2,
44
+ FSL_IMX7_NUM_SAIS = 3,
45
+ FSL_IMX7_NUM_CANS = 2,
46
+ FSL_IMX7_NUM_PWMS = 4,
47
};
48
49
struct FslIMX7State {
50
@@ -XXX,XX +XXX,XX @@ struct FslIMX7State {
51
52
enum FslIMX7MemoryMap {
53
FSL_IMX7_MMDC_ADDR = 0x80000000,
54
- FSL_IMX7_MMDC_SIZE = 2 * 1024 * 1024 * 1024UL,
55
+ FSL_IMX7_MMDC_SIZE = (2 * GiB),
56
57
- FSL_IMX7_GPIO1_ADDR = 0x30200000,
58
- FSL_IMX7_GPIO2_ADDR = 0x30210000,
59
- FSL_IMX7_GPIO3_ADDR = 0x30220000,
60
- FSL_IMX7_GPIO4_ADDR = 0x30230000,
61
- FSL_IMX7_GPIO5_ADDR = 0x30240000,
62
- FSL_IMX7_GPIO6_ADDR = 0x30250000,
63
- FSL_IMX7_GPIO7_ADDR = 0x30260000,
64
+ FSL_IMX7_QSPI1_MEM_ADDR = 0x60000000,
65
+ FSL_IMX7_QSPI1_MEM_SIZE = (256 * MiB),
66
67
- FSL_IMX7_IOMUXC_LPSR_GPR_ADDR = 0x30270000,
68
+ FSL_IMX7_PCIE1_MEM_ADDR = 0x40000000,
69
+ FSL_IMX7_PCIE1_MEM_SIZE = (256 * MiB),
70
71
- FSL_IMX7_WDOG1_ADDR = 0x30280000,
72
- FSL_IMX7_WDOG2_ADDR = 0x30290000,
73
- FSL_IMX7_WDOG3_ADDR = 0x302A0000,
74
- FSL_IMX7_WDOG4_ADDR = 0x302B0000,
75
+ FSL_IMX7_QSPI1_RX_BUF_ADDR = 0x34000000,
76
+ FSL_IMX7_QSPI1_RX_BUF_SIZE = (32 * MiB),
77
78
- FSL_IMX7_IOMUXC_LPSR_ADDR = 0x302C0000,
79
+ /* PCIe Peripherals */
80
+ FSL_IMX7_PCIE_REG_ADDR = 0x33800000,
81
82
- FSL_IMX7_GPT1_ADDR = 0x302D0000,
83
- FSL_IMX7_GPT2_ADDR = 0x302E0000,
84
- FSL_IMX7_GPT3_ADDR = 0x302F0000,
85
- FSL_IMX7_GPT4_ADDR = 0x30300000,
86
+ /* MMAP Peripherals */
87
+ FSL_IMX7_DMA_APBH_ADDR = 0x33000000,
88
+ FSL_IMX7_DMA_APBH_SIZE = 0x8000,
89
90
- FSL_IMX7_IOMUXC_ADDR = 0x30330000,
91
- FSL_IMX7_IOMUXC_GPR_ADDR = 0x30340000,
92
- FSL_IMX7_IOMUXCn_SIZE = 0x1000,
93
+ /* GPV configuration */
94
+ FSL_IMX7_GPV6_ADDR = 0x32600000,
95
+ FSL_IMX7_GPV5_ADDR = 0x32500000,
96
+ FSL_IMX7_GPV4_ADDR = 0x32400000,
97
+ FSL_IMX7_GPV3_ADDR = 0x32300000,
98
+ FSL_IMX7_GPV2_ADDR = 0x32200000,
99
+ FSL_IMX7_GPV1_ADDR = 0x32100000,
100
+ FSL_IMX7_GPV0_ADDR = 0x32000000,
101
+ FSL_IMX7_GPVn_SIZE = (1 * MiB),
102
103
- FSL_IMX7_OCOTP_ADDR = 0x30350000,
104
- FSL_IMX7_OCOTP_SIZE = 0x10000,
105
+ /* Arm Peripherals */
106
+ FSL_IMX7_A7MPCORE_ADDR = 0x31000000,
107
108
- FSL_IMX7_ANALOG_ADDR = 0x30360000,
109
- FSL_IMX7_SNVS_ADDR = 0x30370000,
110
- FSL_IMX7_CCM_ADDR = 0x30380000,
111
+ /* AIPS-3 Begin */
112
113
- FSL_IMX7_SRC_ADDR = 0x30390000,
114
- FSL_IMX7_SRC_SIZE = 0x1000,
115
+ FSL_IMX7_ENET2_ADDR = 0x30BF0000,
116
+ FSL_IMX7_ENET1_ADDR = 0x30BE0000,
117
118
- FSL_IMX7_ADC1_ADDR = 0x30610000,
119
- FSL_IMX7_ADC2_ADDR = 0x30620000,
120
- FSL_IMX7_ADCn_SIZE = 0x1000,
121
+ FSL_IMX7_SDMA_ADDR = 0x30BD0000,
122
+ FSL_IMX7_SDMA_SIZE = (4 * KiB),
123
124
- FSL_IMX7_PWM1_ADDR = 0x30660000,
125
- FSL_IMX7_PWM2_ADDR = 0x30670000,
126
- FSL_IMX7_PWM3_ADDR = 0x30680000,
127
- FSL_IMX7_PWM4_ADDR = 0x30690000,
128
- FSL_IMX7_PWMn_SIZE = 0x10000,
129
+ FSL_IMX7_EIM_ADDR = 0x30BC0000,
130
+ FSL_IMX7_EIM_SIZE = (4 * KiB),
131
132
- FSL_IMX7_PCIE_PHY_ADDR = 0x306D0000,
133
- FSL_IMX7_PCIE_PHY_SIZE = 0x10000,
134
+ FSL_IMX7_QSPI_ADDR = 0x30BB0000,
135
+ FSL_IMX7_QSPI_SIZE = 0x8000,
136
137
- FSL_IMX7_GPC_ADDR = 0x303A0000,
138
+ FSL_IMX7_SIM2_ADDR = 0x30BA0000,
139
+ FSL_IMX7_SIM1_ADDR = 0x30B90000,
140
+ FSL_IMX7_SIMn_SIZE = (4 * KiB),
141
+
142
+ FSL_IMX7_USDHC3_ADDR = 0x30B60000,
143
+ FSL_IMX7_USDHC2_ADDR = 0x30B50000,
144
+ FSL_IMX7_USDHC1_ADDR = 0x30B40000,
145
+
146
+ FSL_IMX7_USB3_ADDR = 0x30B30000,
147
+ FSL_IMX7_USBMISC3_ADDR = 0x30B30200,
148
+ FSL_IMX7_USB2_ADDR = 0x30B20000,
149
+ FSL_IMX7_USBMISC2_ADDR = 0x30B20200,
150
+ FSL_IMX7_USB1_ADDR = 0x30B10000,
151
+ FSL_IMX7_USBMISC1_ADDR = 0x30B10200,
152
+ FSL_IMX7_USBMISCn_SIZE = 0x200,
153
+
154
+ FSL_IMX7_USB_PL301_ADDR = 0x30AD0000,
155
+ FSL_IMX7_USB_PL301_SIZE = (64 * KiB),
156
+
157
+ FSL_IMX7_SEMAPHORE_HS_ADDR = 0x30AC0000,
158
+ FSL_IMX7_SEMAPHORE_HS_SIZE = (64 * KiB),
159
+
160
+ FSL_IMX7_MUB_ADDR = 0x30AB0000,
161
+ FSL_IMX7_MUA_ADDR = 0x30AA0000,
162
+ FSL_IMX7_MUn_SIZE = (KiB),
163
+
164
+ FSL_IMX7_UART7_ADDR = 0x30A90000,
165
+ FSL_IMX7_UART6_ADDR = 0x30A80000,
166
+ FSL_IMX7_UART5_ADDR = 0x30A70000,
167
+ FSL_IMX7_UART4_ADDR = 0x30A60000,
168
+
169
+ FSL_IMX7_I2C4_ADDR = 0x30A50000,
170
+ FSL_IMX7_I2C3_ADDR = 0x30A40000,
171
+ FSL_IMX7_I2C2_ADDR = 0x30A30000,
172
+ FSL_IMX7_I2C1_ADDR = 0x30A20000,
173
+
174
+ FSL_IMX7_CAN2_ADDR = 0x30A10000,
175
+ FSL_IMX7_CAN1_ADDR = 0x30A00000,
176
+ FSL_IMX7_CANn_SIZE = (4 * KiB),
177
+
178
+ FSL_IMX7_AIPS3_CONF_ADDR = 0x309F0000,
179
+ FSL_IMX7_AIPS3_CONF_SIZE = (64 * KiB),
180
181
FSL_IMX7_CAAM_ADDR = 0x30900000,
182
- FSL_IMX7_CAAM_SIZE = 0x40000,
183
+ FSL_IMX7_CAAM_SIZE = (256 * KiB),
184
185
- FSL_IMX7_CAN1_ADDR = 0x30A00000,
186
- FSL_IMX7_CAN2_ADDR = 0x30A10000,
187
- FSL_IMX7_CANn_SIZE = 0x10000,
188
+ FSL_IMX7_SPBA_ADDR = 0x308F0000,
189
+ FSL_IMX7_SPBA_SIZE = (4 * KiB),
190
191
- FSL_IMX7_I2C1_ADDR = 0x30A20000,
192
- FSL_IMX7_I2C2_ADDR = 0x30A30000,
193
- FSL_IMX7_I2C3_ADDR = 0x30A40000,
194
- FSL_IMX7_I2C4_ADDR = 0x30A50000,
195
+ FSL_IMX7_SAI3_ADDR = 0x308C0000,
196
+ FSL_IMX7_SAI2_ADDR = 0x308B0000,
197
+ FSL_IMX7_SAI1_ADDR = 0x308A0000,
198
+ FSL_IMX7_SAIn_SIZE = (4 * KiB),
199
200
- FSL_IMX7_ECSPI1_ADDR = 0x30820000,
201
- FSL_IMX7_ECSPI2_ADDR = 0x30830000,
202
- FSL_IMX7_ECSPI3_ADDR = 0x30840000,
203
- FSL_IMX7_ECSPI4_ADDR = 0x30630000,
204
-
205
- FSL_IMX7_LCDIF_ADDR = 0x30730000,
206
- FSL_IMX7_LCDIF_SIZE = 0x1000,
207
-
208
- FSL_IMX7_UART1_ADDR = 0x30860000,
209
+ FSL_IMX7_UART3_ADDR = 0x30880000,
210
/*
211
* Some versions of the reference manual claim that UART2 is @
212
* 0x30870000, but experiments with HW + DT files in upstream
213
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
214
* actually located @ 0x30890000
215
*/
216
FSL_IMX7_UART2_ADDR = 0x30890000,
217
- FSL_IMX7_UART3_ADDR = 0x30880000,
218
- FSL_IMX7_UART4_ADDR = 0x30A60000,
219
- FSL_IMX7_UART5_ADDR = 0x30A70000,
220
- FSL_IMX7_UART6_ADDR = 0x30A80000,
221
- FSL_IMX7_UART7_ADDR = 0x30A90000,
222
+ FSL_IMX7_UART1_ADDR = 0x30860000,
223
224
- FSL_IMX7_SAI1_ADDR = 0x308A0000,
225
- FSL_IMX7_SAI2_ADDR = 0x308B0000,
226
- FSL_IMX7_SAI3_ADDR = 0x308C0000,
227
- FSL_IMX7_SAIn_SIZE = 0x10000,
228
+ FSL_IMX7_ECSPI3_ADDR = 0x30840000,
229
+ FSL_IMX7_ECSPI2_ADDR = 0x30830000,
230
+ FSL_IMX7_ECSPI1_ADDR = 0x30820000,
231
+ FSL_IMX7_ECSPIn_SIZE = (4 * KiB),
232
233
- FSL_IMX7_ENET1_ADDR = 0x30BE0000,
234
- FSL_IMX7_ENET2_ADDR = 0x30BF0000,
235
+ /* AIPS-3 End */
236
237
- FSL_IMX7_USB1_ADDR = 0x30B10000,
238
- FSL_IMX7_USBMISC1_ADDR = 0x30B10200,
239
- FSL_IMX7_USB2_ADDR = 0x30B20000,
240
- FSL_IMX7_USBMISC2_ADDR = 0x30B20200,
241
- FSL_IMX7_USB3_ADDR = 0x30B30000,
242
- FSL_IMX7_USBMISC3_ADDR = 0x30B30200,
243
- FSL_IMX7_USBMISCn_SIZE = 0x200,
244
+ /* AIPS-2 Begin */
245
246
- FSL_IMX7_USDHC1_ADDR = 0x30B40000,
247
- FSL_IMX7_USDHC2_ADDR = 0x30B50000,
248
- FSL_IMX7_USDHC3_ADDR = 0x30B60000,
249
+ FSL_IMX7_AXI_DEBUG_MON_ADDR = 0x307E0000,
250
+ FSL_IMX7_AXI_DEBUG_MON_SIZE = (64 * KiB),
251
252
- FSL_IMX7_SDMA_ADDR = 0x30BD0000,
253
- FSL_IMX7_SDMA_SIZE = 0x1000,
254
+ FSL_IMX7_PERFMON2_ADDR = 0x307D0000,
255
+ FSL_IMX7_PERFMON1_ADDR = 0x307C0000,
256
+ FSL_IMX7_PERFMONn_SIZE = (64 * KiB),
257
+
258
+ FSL_IMX7_DDRC_ADDR = 0x307A0000,
259
+ FSL_IMX7_DDRC_SIZE = (4 * KiB),
260
+
261
+ FSL_IMX7_DDRC_PHY_ADDR = 0x30790000,
262
+ FSL_IMX7_DDRC_PHY_SIZE = (4 * KiB),
263
+
264
+ FSL_IMX7_TZASC_ADDR = 0x30780000,
265
+ FSL_IMX7_TZASC_SIZE = (64 * KiB),
266
+
267
+ FSL_IMX7_MIPI_DSI_ADDR = 0x30760000,
268
+ FSL_IMX7_MIPI_DSI_SIZE = (4 * KiB),
269
+
270
+ FSL_IMX7_MIPI_CSI_ADDR = 0x30750000,
271
+ FSL_IMX7_MIPI_CSI_SIZE = 0x4000,
272
+
273
+ FSL_IMX7_LCDIF_ADDR = 0x30730000,
274
+ FSL_IMX7_LCDIF_SIZE = 0x8000,
275
+
276
+ FSL_IMX7_CSI_ADDR = 0x30710000,
277
+ FSL_IMX7_CSI_SIZE = (4 * KiB),
278
+
279
+ FSL_IMX7_PXP_ADDR = 0x30700000,
280
+ FSL_IMX7_PXP_SIZE = 0x4000,
281
+
282
+ FSL_IMX7_EPDC_ADDR = 0x306F0000,
283
+ FSL_IMX7_EPDC_SIZE = (4 * KiB),
284
+
285
+ FSL_IMX7_PCIE_PHY_ADDR = 0x306D0000,
286
+ FSL_IMX7_PCIE_PHY_SIZE = (4 * KiB),
287
+
288
+ FSL_IMX7_SYSCNT_CTRL_ADDR = 0x306C0000,
289
+ FSL_IMX7_SYSCNT_CMP_ADDR = 0x306B0000,
290
+ FSL_IMX7_SYSCNT_RD_ADDR = 0x306A0000,
291
+
292
+ FSL_IMX7_PWM4_ADDR = 0x30690000,
293
+ FSL_IMX7_PWM3_ADDR = 0x30680000,
294
+ FSL_IMX7_PWM2_ADDR = 0x30670000,
295
+ FSL_IMX7_PWM1_ADDR = 0x30660000,
296
+ FSL_IMX7_PWMn_SIZE = (4 * KiB),
297
+
298
+ FSL_IMX7_FlEXTIMER2_ADDR = 0x30650000,
299
+ FSL_IMX7_FlEXTIMER1_ADDR = 0x30640000,
300
+ FSL_IMX7_FLEXTIMERn_SIZE = (4 * KiB),
301
+
302
+ FSL_IMX7_ECSPI4_ADDR = 0x30630000,
303
+
304
+ FSL_IMX7_ADC2_ADDR = 0x30620000,
305
+ FSL_IMX7_ADC1_ADDR = 0x30610000,
306
+ FSL_IMX7_ADCn_SIZE = (4 * KiB),
307
+
308
+ FSL_IMX7_AIPS2_CONF_ADDR = 0x305F0000,
309
+ FSL_IMX7_AIPS2_CONF_SIZE = (64 * KiB),
310
+
311
+ /* AIPS-2 End */
312
+
313
+ /* AIPS-1 Begin */
314
+
315
+ FSL_IMX7_CSU_ADDR = 0x303E0000,
316
+ FSL_IMX7_CSU_SIZE = (64 * KiB),
317
+
318
+ FSL_IMX7_RDC_ADDR = 0x303D0000,
319
+ FSL_IMX7_RDC_SIZE = (4 * KiB),
320
+
321
+ FSL_IMX7_SEMAPHORE2_ADDR = 0x303C0000,
322
+ FSL_IMX7_SEMAPHORE1_ADDR = 0x303B0000,
323
+ FSL_IMX7_SEMAPHOREn_SIZE = (4 * KiB),
324
+
325
+ FSL_IMX7_GPC_ADDR = 0x303A0000,
326
+
327
+ FSL_IMX7_SRC_ADDR = 0x30390000,
328
+ FSL_IMX7_SRC_SIZE = (4 * KiB),
329
+
330
+ FSL_IMX7_CCM_ADDR = 0x30380000,
331
+
332
+ FSL_IMX7_SNVS_HP_ADDR = 0x30370000,
333
+
334
+ FSL_IMX7_ANALOG_ADDR = 0x30360000,
335
+
336
+ FSL_IMX7_OCOTP_ADDR = 0x30350000,
337
+ FSL_IMX7_OCOTP_SIZE = 0x10000,
338
+
339
+ FSL_IMX7_IOMUXC_GPR_ADDR = 0x30340000,
340
+ FSL_IMX7_IOMUXC_GPR_SIZE = (4 * KiB),
341
+
342
+ FSL_IMX7_IOMUXC_ADDR = 0x30330000,
343
+ FSL_IMX7_IOMUXC_SIZE = (4 * KiB),
344
+
345
+ FSL_IMX7_KPP_ADDR = 0x30320000,
346
+ FSL_IMX7_KPP_SIZE = (4 * KiB),
347
+
348
+ FSL_IMX7_ROMCP_ADDR = 0x30310000,
349
+ FSL_IMX7_ROMCP_SIZE = (4 * KiB),
350
+
351
+ FSL_IMX7_GPT4_ADDR = 0x30300000,
352
+ FSL_IMX7_GPT3_ADDR = 0x302F0000,
353
+ FSL_IMX7_GPT2_ADDR = 0x302E0000,
354
+ FSL_IMX7_GPT1_ADDR = 0x302D0000,
355
+
356
+ FSL_IMX7_IOMUXC_LPSR_ADDR = 0x302C0000,
357
+ FSL_IMX7_IOMUXC_LPSR_SIZE = (4 * KiB),
358
+
359
+ FSL_IMX7_WDOG4_ADDR = 0x302B0000,
360
+ FSL_IMX7_WDOG3_ADDR = 0x302A0000,
361
+ FSL_IMX7_WDOG2_ADDR = 0x30290000,
362
+ FSL_IMX7_WDOG1_ADDR = 0x30280000,
363
+
364
+ FSL_IMX7_IOMUXC_LPSR_GPR_ADDR = 0x30270000,
365
+
366
+ FSL_IMX7_GPIO7_ADDR = 0x30260000,
367
+ FSL_IMX7_GPIO6_ADDR = 0x30250000,
368
+ FSL_IMX7_GPIO5_ADDR = 0x30240000,
369
+ FSL_IMX7_GPIO4_ADDR = 0x30230000,
370
+ FSL_IMX7_GPIO3_ADDR = 0x30220000,
371
+ FSL_IMX7_GPIO2_ADDR = 0x30210000,
372
+ FSL_IMX7_GPIO1_ADDR = 0x30200000,
373
+
374
+ FSL_IMX7_AIPS1_CONF_ADDR = 0x301F0000,
375
+ FSL_IMX7_AIPS1_CONF_SIZE = (64 * KiB),
376
377
- FSL_IMX7_A7MPCORE_ADDR = 0x31000000,
378
FSL_IMX7_A7MPCORE_DAP_ADDR = 0x30000000,
379
+ FSL_IMX7_A7MPCORE_DAP_SIZE = (1 * MiB),
380
381
- FSL_IMX7_PCIE_REG_ADDR = 0x33800000,
382
- FSL_IMX7_PCIE_REG_SIZE = 16 * 1024,
383
+ /* AIPS-1 End */
384
385
- FSL_IMX7_GPR_ADDR = 0x30340000,
386
+ FSL_IMX7_EIM_CS0_ADDR = 0x28000000,
387
+ FSL_IMX7_EIM_CS0_SIZE = (128 * MiB),
388
389
- FSL_IMX7_DMA_APBH_ADDR = 0x33000000,
390
- FSL_IMX7_DMA_APBH_SIZE = 0x2000,
391
+ FSL_IMX7_OCRAM_PXP_ADDR = 0x00940000,
392
+ FSL_IMX7_OCRAM_PXP_SIZE = (32 * KiB),
393
+
394
+ FSL_IMX7_OCRAM_EPDC_ADDR = 0x00920000,
395
+ FSL_IMX7_OCRAM_EPDC_SIZE = (128 * KiB),
396
+
397
+ FSL_IMX7_OCRAM_MEM_ADDR = 0x00900000,
398
+ FSL_IMX7_OCRAM_MEM_SIZE = (128 * KiB),
399
+
400
+ FSL_IMX7_TCMU_ADDR = 0x00800000,
401
+ FSL_IMX7_TCMU_SIZE = (32 * KiB),
402
+
403
+ FSL_IMX7_TCML_ADDR = 0x007F8000,
404
+ FSL_IMX7_TCML_SIZE = (32 * KiB),
405
+
406
+ FSL_IMX7_OCRAM_S_ADDR = 0x00180000,
407
+ FSL_IMX7_OCRAM_S_SIZE = (32 * KiB),
408
+
409
+ FSL_IMX7_CAAM_MEM_ADDR = 0x00100000,
410
+ FSL_IMX7_CAAM_MEM_SIZE = (32 * KiB),
411
+
412
+ FSL_IMX7_ROM_ADDR = 0x00000000,
413
+ FSL_IMX7_ROM_SIZE = (96 * KiB),
414
};
415
416
enum FslIMX7IRQs {
417
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
33
index XXXXXXX..XXXXXXX 100644
418
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-neon.c.inc
419
--- a/hw/arm/fsl-imx7.c
35
+++ b/target/arm/translate-neon.c.inc
420
+++ b/hw/arm/fsl-imx7.c
36
@@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a)
421
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
37
static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
422
char name[NAME_SIZE];
38
NeonGenWidenFn *widenfn,
423
int i;
39
NeonGenTwo64OpFn *opfn,
424
40
- bool src1_wide)
425
+ /*
41
+ int src1_mop, int src2_mop)
426
+ * CPUs
42
{
427
+ */
43
/* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */
428
for (i = 0; i < MIN(ms->smp.cpus, FSL_IMX7_NUM_CPUS); i++) {
44
TCGv_i64 rn0_64, rn1_64, rm_64;
429
snprintf(name, NAME_SIZE, "cpu%d", i);
45
- TCGv_i32 rm;
430
object_initialize_child(obj, name, &s->cpu[i],
46
431
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
47
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
432
TYPE_A15MPCORE_PRIV);
48
return false;
433
49
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
434
/*
50
return false;
435
- * GPIOs 1 to 7
51
}
436
+ * GPIOs
52
437
*/
53
- if (!widenfn || !opfn) {
438
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
54
+ if (!opfn) {
439
snprintf(name, NAME_SIZE, "gpio%d", i);
55
/* size == 3 case, which is an entirely different insn group */
440
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
56
return false;
441
}
57
}
442
58
443
/*
59
- if ((a->vd & 1) || (src1_wide && (a->vn & 1))) {
444
- * GPT1, 2, 3, 4
60
+ if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) {
445
+ * GPTs
61
return false;
446
*/
62
}
447
for (i = 0; i < FSL_IMX7_NUM_GPTS; i++) {
63
448
snprintf(name, NAME_SIZE, "gpt%d", i);
64
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
449
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
65
rn1_64 = tcg_temp_new_i64();
450
*/
66
rm_64 = tcg_temp_new_i64();
451
object_initialize_child(obj, "gpcv2", &s->gpcv2, TYPE_IMX_GPCV2);
67
452
68
- if (src1_wide) {
453
+ /*
69
- read_neon_element64(rn0_64, a->vn, 0, MO_64);
454
+ * ECSPIs
70
+ if (src1_mop >= 0) {
455
+ */
71
+ read_neon_element64(rn0_64, a->vn, 0, src1_mop);
456
for (i = 0; i < FSL_IMX7_NUM_ECSPIS; i++) {
72
} else {
457
snprintf(name, NAME_SIZE, "spi%d", i + 1);
73
TCGv_i32 tmp = tcg_temp_new_i32();
458
object_initialize_child(obj, name, &s->spi[i], TYPE_IMX_SPI);
74
read_neon_element32(tmp, a->vn, 0, MO_32);
459
}
75
widenfn(rn0_64, tmp);
460
76
tcg_temp_free_i32(tmp);
461
-
77
}
462
+ /*
78
- rm = tcg_temp_new_i32();
463
+ * I2Cs
79
- read_neon_element32(rm, a->vm, 0, MO_32);
464
+ */
80
+ if (src2_mop >= 0) {
465
for (i = 0; i < FSL_IMX7_NUM_I2CS; i++) {
81
+ read_neon_element64(rm_64, a->vm, 0, src2_mop);
466
snprintf(name, NAME_SIZE, "i2c%d", i + 1);
82
+ } else {
467
object_initialize_child(obj, name, &s->i2c[i], TYPE_IMX_I2C);
83
+ TCGv_i32 tmp = tcg_temp_new_i32();
468
}
84
+ read_neon_element32(tmp, a->vm, 0, MO_32);
469
85
+ widenfn(rm_64, tmp);
470
/*
86
+ tcg_temp_free_i32(tmp);
471
- * UART
472
+ * UARTs
473
*/
474
for (i = 0; i < FSL_IMX7_NUM_UARTS; i++) {
475
snprintf(name, NAME_SIZE, "uart%d", i);
476
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
477
}
478
479
/*
480
- * Ethernet
481
+ * Ethernets
482
*/
483
for (i = 0; i < FSL_IMX7_NUM_ETHS; i++) {
484
snprintf(name, NAME_SIZE, "eth%d", i);
485
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
486
}
487
488
/*
489
- * SDHCI
490
+ * SDHCIs
491
*/
492
for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
493
snprintf(name, NAME_SIZE, "usdhc%d", i);
494
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
495
object_initialize_child(obj, "snvs", &s->snvs, TYPE_IMX7_SNVS);
496
497
/*
498
- * Watchdog
499
+ * Watchdogs
500
*/
501
for (i = 0; i < FSL_IMX7_NUM_WDTS; i++) {
502
snprintf(name, NAME_SIZE, "wdt%d", i);
503
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
504
*/
505
object_initialize_child(obj, "gpr", &s->gpr, TYPE_IMX7_GPR);
506
507
+ /*
508
+ * PCIE
509
+ */
510
object_initialize_child(obj, "pcie", &s->pcie, TYPE_DESIGNWARE_PCIE_HOST);
511
512
+ /*
513
+ * USBs
514
+ */
515
for (i = 0; i < FSL_IMX7_NUM_USBS; i++) {
516
snprintf(name, NAME_SIZE, "usb%d", i);
517
object_initialize_child(obj, name, &s->usb[i], TYPE_CHIPIDEA);
518
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
519
return;
520
}
521
522
+ /*
523
+ * CPUs
524
+ */
525
for (i = 0; i < smp_cpus; i++) {
526
o = OBJECT(&s->cpu[i]);
527
528
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
529
* A7MPCORE DAP
530
*/
531
create_unimplemented_device("a7mpcore-dap", FSL_IMX7_A7MPCORE_DAP_ADDR,
532
- 0x100000);
533
+ FSL_IMX7_A7MPCORE_DAP_SIZE);
534
535
/*
536
- * GPT1, 2, 3, 4
537
+ * GPTs
538
*/
539
for (i = 0; i < FSL_IMX7_NUM_GPTS; i++) {
540
static const hwaddr FSL_IMX7_GPTn_ADDR[FSL_IMX7_NUM_GPTS] = {
541
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
542
FSL_IMX7_GPTn_IRQ[i]));
543
}
544
545
+ /*
546
+ * GPIOs
547
+ */
548
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
549
static const hwaddr FSL_IMX7_GPIOn_ADDR[FSL_IMX7_NUM_GPIOS] = {
550
FSL_IMX7_GPIO1_ADDR,
551
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
552
/*
553
* IOMUXC and IOMUXC_LPSR
554
*/
555
- for (i = 0; i < FSL_IMX7_NUM_IOMUXCS; i++) {
556
- static const hwaddr FSL_IMX7_IOMUXCn_ADDR[FSL_IMX7_NUM_IOMUXCS] = {
557
- FSL_IMX7_IOMUXC_ADDR,
558
- FSL_IMX7_IOMUXC_LPSR_ADDR,
559
- };
560
-
561
- snprintf(name, NAME_SIZE, "iomuxc%d", i);
562
- create_unimplemented_device(name, FSL_IMX7_IOMUXCn_ADDR[i],
563
- FSL_IMX7_IOMUXCn_SIZE);
564
- }
565
+ create_unimplemented_device("iomuxc", FSL_IMX7_IOMUXC_ADDR,
566
+ FSL_IMX7_IOMUXC_SIZE);
567
+ create_unimplemented_device("iomuxc_lspr", FSL_IMX7_IOMUXC_LPSR_ADDR,
568
+ FSL_IMX7_IOMUXC_LPSR_SIZE);
569
570
/*
571
* CCM
572
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
573
sysbus_realize(SYS_BUS_DEVICE(&s->gpcv2), &error_abort);
574
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpcv2), 0, FSL_IMX7_GPC_ADDR);
575
576
- /* Initialize all ECSPI */
577
+ /*
578
+ * ECSPIs
579
+ */
580
for (i = 0; i < FSL_IMX7_NUM_ECSPIS; i++) {
581
static const hwaddr FSL_IMX7_SPIn_ADDR[FSL_IMX7_NUM_ECSPIS] = {
582
FSL_IMX7_ECSPI1_ADDR,
583
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
584
FSL_IMX7_SPIn_IRQ[i]));
585
}
586
587
+ /*
588
+ * I2Cs
589
+ */
590
for (i = 0; i < FSL_IMX7_NUM_I2CS; i++) {
591
static const hwaddr FSL_IMX7_I2Cn_ADDR[FSL_IMX7_NUM_I2CS] = {
592
FSL_IMX7_I2C1_ADDR,
593
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
594
}
595
596
/*
597
- * UART
598
+ * UARTs
599
*/
600
for (i = 0; i < FSL_IMX7_NUM_UARTS; i++) {
601
static const hwaddr FSL_IMX7_UARTn_ADDR[FSL_IMX7_NUM_UARTS] = {
602
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
603
}
604
605
/*
606
- * Ethernet
607
+ * Ethernets
608
*
609
* We must use two loops since phy_connected affects the other interface
610
* and we have to set all properties before calling sysbus_realize().
611
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
612
}
613
614
/*
615
- * USDHC
616
+ * USDHCs
617
*/
618
for (i = 0; i < FSL_IMX7_NUM_USDHCS; i++) {
619
static const hwaddr FSL_IMX7_USDHCn_ADDR[FSL_IMX7_NUM_USDHCS] = {
620
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
621
* SNVS
622
*/
623
sysbus_realize(SYS_BUS_DEVICE(&s->snvs), &error_abort);
624
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX7_SNVS_ADDR);
625
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->snvs), 0, FSL_IMX7_SNVS_HP_ADDR);
626
627
/*
628
* SRC
629
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
630
create_unimplemented_device("src", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
631
632
/*
633
- * Watchdog
634
+ * Watchdogs
635
*/
636
for (i = 0; i < FSL_IMX7_NUM_WDTS; i++) {
637
static const hwaddr FSL_IMX7_WDOGn_ADDR[FSL_IMX7_NUM_WDTS] = {
638
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
639
create_unimplemented_device("caam", FSL_IMX7_CAAM_ADDR, FSL_IMX7_CAAM_SIZE);
640
641
/*
642
- * PWM
643
+ * PWMs
644
*/
645
- create_unimplemented_device("pwm1", FSL_IMX7_PWM1_ADDR, FSL_IMX7_PWMn_SIZE);
646
- create_unimplemented_device("pwm2", FSL_IMX7_PWM2_ADDR, FSL_IMX7_PWMn_SIZE);
647
- create_unimplemented_device("pwm3", FSL_IMX7_PWM3_ADDR, FSL_IMX7_PWMn_SIZE);
648
- create_unimplemented_device("pwm4", FSL_IMX7_PWM4_ADDR, FSL_IMX7_PWMn_SIZE);
649
+ for (i = 0; i < FSL_IMX7_NUM_PWMS; i++) {
650
+ static const hwaddr FSL_IMX7_PWMn_ADDR[FSL_IMX7_NUM_PWMS] = {
651
+ FSL_IMX7_PWM1_ADDR,
652
+ FSL_IMX7_PWM2_ADDR,
653
+ FSL_IMX7_PWM3_ADDR,
654
+ FSL_IMX7_PWM4_ADDR,
655
+ };
656
+
657
+ snprintf(name, NAME_SIZE, "pwm%d", i);
658
+ create_unimplemented_device(name, FSL_IMX7_PWMn_ADDR[i],
659
+ FSL_IMX7_PWMn_SIZE);
87
+ }
660
+ }
88
661
89
- widenfn(rm_64, rm);
662
/*
90
- tcg_temp_free_i32(rm);
663
- * CAN
91
opfn(rn0_64, rn0_64, rm_64);
664
+ * CANs
92
665
*/
93
/*
666
- create_unimplemented_device("can1", FSL_IMX7_CAN1_ADDR, FSL_IMX7_CANn_SIZE);
94
* Load second pass inputs before storing the first pass result, to
667
- create_unimplemented_device("can2", FSL_IMX7_CAN2_ADDR, FSL_IMX7_CANn_SIZE);
95
* avoid incorrect results if a narrow input overlaps with the result.
668
+ for (i = 0; i < FSL_IMX7_NUM_CANS; i++) {
96
*/
669
+ static const hwaddr FSL_IMX7_CANn_ADDR[FSL_IMX7_NUM_CANS] = {
97
- if (src1_wide) {
670
+ FSL_IMX7_CAN1_ADDR,
98
- read_neon_element64(rn1_64, a->vn, 1, MO_64);
671
+ FSL_IMX7_CAN2_ADDR,
99
+ if (src1_mop >= 0) {
672
+ };
100
+ read_neon_element64(rn1_64, a->vn, 1, src1_mop);
673
+
101
} else {
674
+ snprintf(name, NAME_SIZE, "can%d", i);
102
TCGv_i32 tmp = tcg_temp_new_i32();
675
+ create_unimplemented_device(name, FSL_IMX7_CANn_ADDR[i],
103
read_neon_element32(tmp, a->vn, 1, MO_32);
676
+ FSL_IMX7_CANn_SIZE);
104
widenfn(rn1_64, tmp);
105
tcg_temp_free_i32(tmp);
106
}
107
- rm = tcg_temp_new_i32();
108
- read_neon_element32(rm, a->vm, 1, MO_32);
109
+ if (src2_mop >= 0) {
110
+ read_neon_element64(rm_64, a->vm, 1, src2_mop);
111
+ } else {
112
+ TCGv_i32 tmp = tcg_temp_new_i32();
113
+ read_neon_element32(tmp, a->vm, 1, MO_32);
114
+ widenfn(rm_64, tmp);
115
+ tcg_temp_free_i32(tmp);
116
+ }
677
+ }
117
678
118
write_neon_element64(rn0_64, a->vd, 0, MO_64);
679
/*
119
680
- * SAI (Audio SSI (Synchronous Serial Interface))
120
- widenfn(rm_64, rm);
681
+ * SAIs (Audio SSI (Synchronous Serial Interface))
121
- tcg_temp_free_i32(rm);
682
*/
122
opfn(rn1_64, rn1_64, rm_64);
683
- create_unimplemented_device("sai1", FSL_IMX7_SAI1_ADDR, FSL_IMX7_SAIn_SIZE);
123
write_neon_element64(rn1_64, a->vd, 1, MO_64);
684
- create_unimplemented_device("sai2", FSL_IMX7_SAI2_ADDR, FSL_IMX7_SAIn_SIZE);
124
685
- create_unimplemented_device("sai2", FSL_IMX7_SAI3_ADDR, FSL_IMX7_SAIn_SIZE);
125
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
686
+ for (i = 0; i < FSL_IMX7_NUM_SAIS; i++) {
126
return true;
687
+ static const hwaddr FSL_IMX7_SAIn_ADDR[FSL_IMX7_NUM_SAIS] = {
688
+ FSL_IMX7_SAI1_ADDR,
689
+ FSL_IMX7_SAI2_ADDR,
690
+ FSL_IMX7_SAI3_ADDR,
691
+ };
692
+
693
+ snprintf(name, NAME_SIZE, "sai%d", i);
694
+ create_unimplemented_device(name, FSL_IMX7_SAIn_ADDR[i],
695
+ FSL_IMX7_SAIn_SIZE);
696
+ }
697
698
/*
699
* OCOTP
700
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
701
create_unimplemented_device("ocotp", FSL_IMX7_OCOTP_ADDR,
702
FSL_IMX7_OCOTP_SIZE);
703
704
+ /*
705
+ * GPR
706
+ */
707
sysbus_realize(SYS_BUS_DEVICE(&s->gpr), &error_abort);
708
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX7_GPR_ADDR);
709
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpr), 0, FSL_IMX7_IOMUXC_GPR_ADDR);
710
711
+ /*
712
+ * PCIE
713
+ */
714
sysbus_realize(SYS_BUS_DEVICE(&s->pcie), &error_abort);
715
sysbus_mmio_map(SYS_BUS_DEVICE(&s->pcie), 0, FSL_IMX7_PCIE_REG_ADDR);
716
717
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
718
irq = qdev_get_gpio_in(DEVICE(&s->a7mpcore), FSL_IMX7_PCI_INTD_IRQ);
719
sysbus_connect_irq(SYS_BUS_DEVICE(&s->pcie), 3, irq);
720
721
-
722
+ /*
723
+ * USBs
724
+ */
725
for (i = 0; i < FSL_IMX7_NUM_USBS; i++) {
726
static const hwaddr FSL_IMX7_USBMISCn_ADDR[FSL_IMX7_NUM_USBS] = {
727
FSL_IMX7_USBMISC1_ADDR,
728
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
729
*/
730
create_unimplemented_device("pcie-phy", FSL_IMX7_PCIE_PHY_ADDR,
731
FSL_IMX7_PCIE_PHY_SIZE);
732
+
127
}
733
}
128
734
129
-#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \
735
static Property fsl_imx7_properties[] = {
130
+#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \
131
static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \
132
{ \
133
static NeonGenWidenFn * const widenfn[] = { \
134
gen_helper_neon_widen_##S##8, \
135
gen_helper_neon_widen_##S##16, \
136
- tcg_gen_##EXT##_i32_i64, \
137
- NULL, \
138
+ NULL, NULL, \
139
}; \
140
static NeonGenTwo64OpFn * const addfn[] = { \
141
gen_helper_neon_##OP##l_u16, \
142
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
143
tcg_gen_##OP##_i64, \
144
NULL, \
145
}; \
146
- return do_prewiden_3d(s, a, widenfn[a->size], \
147
- addfn[a->size], SRC1WIDE); \
148
+ int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \
149
+ return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \
150
+ SRC1WIDE ? MO_Q : narrow_mop, \
151
+ narrow_mop); \
152
}
153
154
-DO_PREWIDEN(VADDL_S, s, ext, add, false)
155
-DO_PREWIDEN(VADDL_U, u, extu, add, false)
156
-DO_PREWIDEN(VSUBL_S, s, ext, sub, false)
157
-DO_PREWIDEN(VSUBL_U, u, extu, sub, false)
158
-DO_PREWIDEN(VADDW_S, s, ext, add, true)
159
-DO_PREWIDEN(VADDW_U, u, extu, add, true)
160
-DO_PREWIDEN(VSUBW_S, s, ext, sub, true)
161
-DO_PREWIDEN(VSUBW_U, u, extu, sub, true)
162
+DO_PREWIDEN(VADDL_S, s, add, false, MO_SIGN)
163
+DO_PREWIDEN(VADDL_U, u, add, false, 0)
164
+DO_PREWIDEN(VSUBL_S, s, sub, false, MO_SIGN)
165
+DO_PREWIDEN(VSUBL_U, u, sub, false, 0)
166
+DO_PREWIDEN(VADDW_S, s, add, true, MO_SIGN)
167
+DO_PREWIDEN(VADDW_U, u, add, true, 0)
168
+DO_PREWIDEN(VSUBW_S, s, sub, true, MO_SIGN)
169
+DO_PREWIDEN(VSUBW_U, u, sub, true, 0)
170
171
static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
172
NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn)
173
--
736
--
174
2.20.1
737
2.34.1
175
176
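
A note on the Neon prewidening patch that ends above: the new MO_SL/MO_UL cases in read_neon_element64() let a 32-bit element be loaded directly into a 64-bit temporary with the right extension, instead of loading 32 bits and then calling a separate widen helper. In plain C the distinction being encoded is simply this (standalone sketch, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t elem = 0xfffffff5u;              /* -11 when treated as signed */
        int64_t  wide_s = (int64_t)(int32_t)elem; /* MO_SL: sign-extend */
        uint64_t wide_u = (uint64_t)elem;         /* MO_UL: zero-extend */

        printf("signed=%lld unsigned=%llu\n",
               (long long)wide_s, (unsigned long long)wide_u);
        return 0;
    }

Doing the extension as part of the load is what allows the widenfn table entries for the 32-bit case to be dropped.
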
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
HCR should be applied when NS is set, not when it is cleared.
3
* Add TZASC as an unimplemented device.
4
- Allow bare-metal applications to access this (unimplemented) device
5
* Add CSU as an unimplemented device.
6
- Allow bare-metal applications to access this (unimplemented) device
7
* Add various memory segments (a minimal mapping sketch follows this list)
8
- OCRAM
9
- OCRAM EPDC
10
- OCRAM PXP
11
- OCRAM S
12
- ROM
13
- CAAM
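
Each of the segments listed above is wired up the same way: a MemoryRegion is initialised as RAM (or as ROM for the ROM and CAAM areas) and then mapped into the system address space at its fixed base. A minimal sketch of that pattern, using the OCRAM values this patch relies on; keeping the region in a file-scope static instead of the SoC state struct is a simplification for the sketch:

    #include "qemu/osdep.h"
    #include "qemu/units.h"
    #include "qapi/error.h"
    #include "exec/memory.h"
    #include "exec/address-spaces.h"

    #define EXAMPLE_OCRAM_MEM_ADDR 0x00900000
    #define EXAMPLE_OCRAM_MEM_SIZE (128 * KiB)

    static MemoryRegion example_ocram;

    static void example_map_ocram(void)
    {
        /* Back the region with host RAM and abort on allocation failure. */
        memory_region_init_ram(&example_ocram, NULL, "example.ocram",
                               EXAMPLE_OCRAM_MEM_SIZE, &error_abort);

        /* Make it visible to the guest at its fixed physical address. */
        memory_region_add_subregion(get_system_memory(),
                                    EXAMPLE_OCRAM_MEM_ADDR, &example_ocram);
    }
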
4
14
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
15
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
17
Message-id: f887a3483996ba06d40bd62ffdfb0ecf68621987.1692964892.git.jcd@tribudubois.net
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
19
---
9
target/arm/helper.c | 5 ++---
20
include/hw/arm/fsl-imx7.h | 7 +++++
10
1 file changed, 2 insertions(+), 3 deletions(-)
21
hw/arm/fsl-imx7.c | 63 +++++++++++++++++++++++++++++++++++++++
22
2 files changed, 70 insertions(+)
11
23
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
13
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
26
--- a/include/hw/arm/fsl-imx7.h
15
+++ b/target/arm/helper.c
27
+++ b/include/hw/arm/fsl-imx7.h
16
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
@@ -XXX,XX +XXX,XX @@ struct FslIMX7State {
17
29
IMX7GPRState gpr;
18
/*
30
ChipideaState usb[FSL_IMX7_NUM_USBS];
19
* Non-IS variants of TLB operations are upgraded to
31
DesignwarePCIEHost pcie;
20
- * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
32
+ MemoryRegion rom;
21
+ * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to
33
+ MemoryRegion caam;
22
* force broadcast of these operations.
34
+ MemoryRegion ocram;
23
*/
35
+ MemoryRegion ocram_epdc;
24
static bool tlb_force_broadcast(CPUARMState *env)
36
+ MemoryRegion ocram_pxp;
25
{
37
+ MemoryRegion ocram_s;
26
- return (env->cp15.hcr_el2 & HCR_FB) &&
38
+
27
- arm_current_el(env) == 1 && arm_is_secure_below_el3(env);
39
uint32_t phy_num[FSL_IMX7_NUM_ETHS];
28
+ return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB);
40
bool phy_connected[FSL_IMX7_NUM_ETHS];
41
};
42
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/fsl-imx7.c
45
+++ b/hw/arm/fsl-imx7.c
46
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
47
create_unimplemented_device("pcie-phy", FSL_IMX7_PCIE_PHY_ADDR,
48
FSL_IMX7_PCIE_PHY_SIZE);
49
50
+ /*
51
+ * CSU
52
+ */
53
+ create_unimplemented_device("csu", FSL_IMX7_CSU_ADDR,
54
+ FSL_IMX7_CSU_SIZE);
55
+
56
+ /*
57
+ * TZASC
58
+ */
59
+ create_unimplemented_device("tzasc", FSL_IMX7_TZASC_ADDR,
60
+ FSL_IMX7_TZASC_SIZE);
61
+
62
+ /*
63
+ * OCRAM memory
64
+ */
65
+ memory_region_init_ram(&s->ocram, NULL, "imx7.ocram",
66
+ FSL_IMX7_OCRAM_MEM_SIZE,
67
+ &error_abort);
68
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_MEM_ADDR,
69
+ &s->ocram);
70
+
71
+ /*
72
+ * OCRAM EPDC memory
73
+ */
74
+ memory_region_init_ram(&s->ocram_epdc, NULL, "imx7.ocram_epdc",
75
+ FSL_IMX7_OCRAM_EPDC_SIZE,
76
+ &error_abort);
77
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_EPDC_ADDR,
78
+ &s->ocram_epdc);
79
+
80
+ /*
81
+ * OCRAM PXP memory
82
+ */
83
+ memory_region_init_ram(&s->ocram_pxp, NULL, "imx7.ocram_pxp",
84
+ FSL_IMX7_OCRAM_PXP_SIZE,
85
+ &error_abort);
86
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_PXP_ADDR,
87
+ &s->ocram_pxp);
88
+
89
+ /*
90
+ * OCRAM_S memory
91
+ */
92
+ memory_region_init_ram(&s->ocram_s, NULL, "imx7.ocram_s",
93
+ FSL_IMX7_OCRAM_S_SIZE,
94
+ &error_abort);
95
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_OCRAM_S_ADDR,
96
+ &s->ocram_s);
97
+
98
+ /*
99
+ * ROM memory
100
+ */
101
+ memory_region_init_rom(&s->rom, OBJECT(dev), "imx7.rom",
102
+ FSL_IMX7_ROM_SIZE, &error_abort);
103
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_ROM_ADDR,
104
+ &s->rom);
105
+
106
+ /*
107
+ * CAAM memory
108
+ */
109
+ memory_region_init_rom(&s->caam, OBJECT(dev), "imx7.caam",
110
+ FSL_IMX7_CAAM_MEM_SIZE, &error_abort);
111
+ memory_region_add_subregion(get_system_memory(), FSL_IMX7_CAAM_MEM_ADDR,
112
+ &s->caam);
29
}
113
}
30
114
31
static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
115
static Property fsl_imx7_properties[] = {
32
--
116
--
33
2.20.1
117
2.34.1
34
118
35
119
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Model these off the aa64 read/write_vec_element functions.
3
The SRC device is normally used to start the secondary CPU.
4
Use it within translate-neon.c.inc. The new functions do
4
5
not allocate or free temps, so this rearranges the calling
5
When running Linux directly, QEMU is emulating a PSCI interface that U-Boot
6
code a bit.
6
installs at boot time, and therefore the fact that the SRC device is
7
7
unimplemented is hidden, as QEMU responds directly to PSCI requests without
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
using the SRC device.
9
Message-id: 20201030022618.785675-6-richard.henderson@linaro.org
9
10
But if you try to run a more bare metal application (maybe uboot itself),
11
then it is not possible to start the secondary CPU as the SRC is an
12
unimplemented device.
13
14
This patch adds the ability to start the secondary CPU through the SRC
15
device so that you can use this feature in bare metal applications.
16
17
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Message-id: ce9a0162defd2acee5dc7f8a674743de0cded569.1692964892.git.jcd@tribudubois.net
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
21
---
13
target/arm/translate.c | 26 ++++
22
include/hw/arm/fsl-imx7.h | 3 +-
14
target/arm/translate-neon.c.inc | 256 ++++++++++++++++++++------------
23
include/hw/misc/imx7_src.h | 66 +++++++++
15
2 files changed, 183 insertions(+), 99 deletions(-)
24
hw/arm/fsl-imx7.c | 8 +-
16
25
hw/misc/imx7_src.c | 276 +++++++++++++++++++++++++++++++++++++
17
diff --git a/target/arm/translate.c b/target/arm/translate.c
26
hw/misc/meson.build | 1 +
27
hw/misc/trace-events | 4 +
28
6 files changed, 356 insertions(+), 2 deletions(-)
29
create mode 100644 include/hw/misc/imx7_src.h
30
create mode 100644 hw/misc/imx7_src.c
31
32
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
18
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/translate.c
34
--- a/include/hw/arm/fsl-imx7.h
20
+++ b/target/arm/translate.c
35
+++ b/include/hw/arm/fsl-imx7.h
21
@@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg)
36
@@ -XXX,XX +XXX,XX @@
22
tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
37
#include "hw/misc/imx7_ccm.h"
23
}
38
#include "hw/misc/imx7_snvs.h"
24
39
#include "hw/misc/imx7_gpr.h"
25
+static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size)
40
+#include "hw/misc/imx7_src.h"
26
+{
41
#include "hw/watchdog/wdt_imx2.h"
27
+ long off = neon_element_offset(reg, ele, size);
42
#include "hw/gpio/imx_gpio.h"
28
+
43
#include "hw/char/imx_serial.h"
29
+ switch (size) {
44
@@ -XXX,XX +XXX,XX @@ struct FslIMX7State {
30
+ case MO_32:
45
IMX7CCMState ccm;
31
+ tcg_gen_ld_i32(dest, cpu_env, off);
46
IMX7AnalogState analog;
47
IMX7SNVSState snvs;
48
+ IMX7SRCState src;
49
IMXGPCv2State gpcv2;
50
IMXSPIState spi[FSL_IMX7_NUM_ECSPIS];
51
IMXI2CState i2c[FSL_IMX7_NUM_I2CS];
52
@@ -XXX,XX +XXX,XX @@ enum FslIMX7MemoryMap {
53
FSL_IMX7_GPC_ADDR = 0x303A0000,
54
55
FSL_IMX7_SRC_ADDR = 0x30390000,
56
- FSL_IMX7_SRC_SIZE = (4 * KiB),
57
58
FSL_IMX7_CCM_ADDR = 0x30380000,
59
60
diff --git a/include/hw/misc/imx7_src.h b/include/hw/misc/imx7_src.h
61
new file mode 100644
62
index XXXXXXX..XXXXXXX
63
--- /dev/null
64
+++ b/include/hw/misc/imx7_src.h
65
@@ -XXX,XX +XXX,XX @@
66
+/*
67
+ * IMX7 System Reset Controller
68
+ *
69
+ * Copyright (C) 2023 Jean-Christophe Dubois <jcd@tribudubois.net>
70
+ *
71
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
72
+ * See the COPYING file in the top-level directory.
73
+ */
74
+
75
+#ifndef IMX7_SRC_H
76
+#define IMX7_SRC_H
77
+
78
+#include "hw/sysbus.h"
79
+#include "qemu/bitops.h"
80
+#include "qom/object.h"
81
+
82
+#define SRC_SCR 0
83
+#define SRC_A7RCR0 1
84
+#define SRC_A7RCR1 2
85
+#define SRC_M4RCR 3
86
+#define SRC_ERCR 5
87
+#define SRC_HSICPHY_RCR 7
88
+#define SRC_USBOPHY1_RCR 8
89
+#define SRC_USBOPHY2_RCR 9
90
+#define SRC_MPIPHY_RCR 10
91
+#define SRC_PCIEPHY_RCR 11
92
+#define SRC_SBMR1 22
93
+#define SRC_SRSR 23
94
+#define SRC_SISR 26
95
+#define SRC_SIMR 27
96
+#define SRC_SBMR2 28
97
+#define SRC_GPR1 29
98
+#define SRC_GPR2 30
99
+#define SRC_GPR3 31
100
+#define SRC_GPR4 32
101
+#define SRC_GPR5 33
102
+#define SRC_GPR6 34
103
+#define SRC_GPR7 35
104
+#define SRC_GPR8 36
105
+#define SRC_GPR9 37
106
+#define SRC_GPR10 38
107
+#define SRC_MAX 39
108
+
109
+/* SRC_A7SCR1 */
110
+#define R_CORE1_ENABLE_SHIFT 1
111
+#define R_CORE1_ENABLE_LENGTH 1
112
+/* SRC_A7SCR0 */
113
+#define R_CORE1_RST_SHIFT 5
114
+#define R_CORE1_RST_LENGTH 1
115
+#define R_CORE0_RST_SHIFT 4
116
+#define R_CORE0_RST_LENGTH 1
117
+
118
+#define TYPE_IMX7_SRC "imx7.src"
119
+OBJECT_DECLARE_SIMPLE_TYPE(IMX7SRCState, IMX7_SRC)
120
+
121
+struct IMX7SRCState {
122
+ /* <private> */
123
+ SysBusDevice parent_obj;
124
+
125
+ /* <public> */
126
+ MemoryRegion iomem;
127
+
128
+ uint32_t regs[SRC_MAX];
129
+};
130
+
131
+#endif /* IMX7_SRC_H */
132
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/hw/arm/fsl-imx7.c
135
+++ b/hw/arm/fsl-imx7.c
136
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_init(Object *obj)
137
*/
138
object_initialize_child(obj, "gpcv2", &s->gpcv2, TYPE_IMX_GPCV2);
139
140
+ /*
141
+ * SRC
142
+ */
143
+ object_initialize_child(obj, "src", &s->src, TYPE_IMX7_SRC);
144
+
145
/*
146
* ECSPIs
147
*/
148
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
149
/*
150
* SRC
151
*/
152
- create_unimplemented_device("src", FSL_IMX7_SRC_ADDR, FSL_IMX7_SRC_SIZE);
153
+ sysbus_realize(SYS_BUS_DEVICE(&s->src), &error_abort);
154
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->src), 0, FSL_IMX7_SRC_ADDR);
155
156
/*
157
* Watchdogs
158
diff --git a/hw/misc/imx7_src.c b/hw/misc/imx7_src.c
159
new file mode 100644
160
index XXXXXXX..XXXXXXX
161
--- /dev/null
162
+++ b/hw/misc/imx7_src.c
163
@@ -XXX,XX +XXX,XX @@
164
+/*
165
+ * IMX7 System Reset Controller
166
+ *
167
+ * Copyright (c) 2023 Jean-Christophe Dubois <jcd@tribudubois.net>
168
+ *
169
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
170
+ * See the COPYING file in the top-level directory.
171
+ *
172
+ */
173
+
174
+#include "qemu/osdep.h"
175
+#include "hw/misc/imx7_src.h"
176
+#include "migration/vmstate.h"
177
+#include "qemu/bitops.h"
178
+#include "qemu/log.h"
179
+#include "qemu/main-loop.h"
180
+#include "qemu/module.h"
181
+#include "target/arm/arm-powerctl.h"
182
+#include "hw/core/cpu.h"
183
+#include "hw/registerfields.h"
184
+
185
+#include "trace.h"
186
+
187
+static const char *imx7_src_reg_name(uint32_t reg)
188
+{
189
+ static char unknown[20];
190
+
191
+ switch (reg) {
192
+ case SRC_SCR:
193
+ return "SRC_SCR";
194
+ case SRC_A7RCR0:
195
+ return "SRC_A7RCR0";
196
+ case SRC_A7RCR1:
197
+ return "SRC_A7RCR1";
198
+ case SRC_M4RCR:
199
+ return "SRC_M4RCR";
200
+ case SRC_ERCR:
201
+ return "SRC_ERCR";
202
+ case SRC_HSICPHY_RCR:
203
+ return "SRC_HSICPHY_RCR";
204
+ case SRC_USBOPHY1_RCR:
205
+ return "SRC_USBOPHY1_RCR";
206
+ case SRC_USBOPHY2_RCR:
207
+ return "SRC_USBOPHY2_RCR";
208
+ case SRC_PCIEPHY_RCR:
209
+ return "SRC_PCIEPHY_RCR";
210
+ case SRC_SBMR1:
211
+ return "SRC_SBMR1";
212
+ case SRC_SRSR:
213
+ return "SRC_SRSR";
214
+ case SRC_SISR:
215
+ return "SRC_SISR";
216
+ case SRC_SIMR:
217
+ return "SRC_SIMR";
218
+ case SRC_SBMR2:
219
+ return "SRC_SBMR2";
220
+ case SRC_GPR1:
221
+ return "SRC_GPR1";
222
+ case SRC_GPR2:
223
+ return "SRC_GPR2";
224
+ case SRC_GPR3:
225
+ return "SRC_GPR3";
226
+ case SRC_GPR4:
227
+ return "SRC_GPR4";
228
+ case SRC_GPR5:
229
+ return "SRC_GPR5";
230
+ case SRC_GPR6:
231
+ return "SRC_GPR6";
232
+ case SRC_GPR7:
233
+ return "SRC_GPR7";
234
+ case SRC_GPR8:
235
+ return "SRC_GPR8";
236
+ case SRC_GPR9:
237
+ return "SRC_GPR9";
238
+ case SRC_GPR10:
239
+ return "SRC_GPR10";
240
+ default:
241
+ sprintf(unknown, "%u ?", reg);
242
+ return unknown;
243
+ }
244
+}
245
+
246
+static const VMStateDescription vmstate_imx7_src = {
247
+ .name = TYPE_IMX7_SRC,
248
+ .version_id = 1,
249
+ .minimum_version_id = 1,
250
+ .fields = (VMStateField[]) {
251
+ VMSTATE_UINT32_ARRAY(regs, IMX7SRCState, SRC_MAX),
252
+ VMSTATE_END_OF_LIST()
253
+ },
254
+};
255
+
256
+static void imx7_src_reset(DeviceState *dev)
257
+{
258
+ IMX7SRCState *s = IMX7_SRC(dev);
259
+
260
+ memset(s->regs, 0, sizeof(s->regs));
261
+
262
+ /* Set reset values */
263
+ s->regs[SRC_SCR] = 0xA0;
264
+ s->regs[SRC_SRSR] = 0x1;
265
+ s->regs[SRC_SIMR] = 0x1F;
266
+}
267
+
268
+static uint64_t imx7_src_read(void *opaque, hwaddr offset, unsigned size)
269
+{
270
+ uint32_t value = 0;
271
+ IMX7SRCState *s = (IMX7SRCState *)opaque;
272
+ uint32_t index = offset >> 2;
273
+
274
+ if (index < SRC_MAX) {
275
+ value = s->regs[index];
276
+ } else {
277
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
278
+ HWADDR_PRIx "\n", TYPE_IMX7_SRC, __func__, offset);
279
+ }
280
+
281
+ trace_imx7_src_read(imx7_src_reg_name(index), value);
282
+
283
+ return value;
284
+}
285
+
286
+
287
+/*
288
+ * The reset is asynchronous so we need to defer clearing the reset
289
+ * bit until the work is completed.
290
+ */
291
+
292
+struct SRCSCRResetInfo {
293
+ IMX7SRCState *s;
294
+ uint32_t reset_bit;
295
+};
296
+
297
+static void imx7_clear_reset_bit(CPUState *cpu, run_on_cpu_data data)
298
+{
299
+ struct SRCSCRResetInfo *ri = data.host_ptr;
300
+ IMX7SRCState *s = ri->s;
301
+
302
+ assert(qemu_mutex_iothread_locked());
303
+
304
+ s->regs[SRC_A7RCR0] = deposit32(s->regs[SRC_A7RCR0], ri->reset_bit, 1, 0);
305
+
306
+ trace_imx7_src_write(imx7_src_reg_name(SRC_A7RCR0), s->regs[SRC_A7RCR0]);
307
+
308
+ g_free(ri);
309
+}
310
+
311
+static void imx7_defer_clear_reset_bit(uint32_t cpuid,
312
+ IMX7SRCState *s,
313
+ uint32_t reset_shift)
314
+{
315
+ struct SRCSCRResetInfo *ri;
316
+ CPUState *cpu = arm_get_cpu_by_id(cpuid);
317
+
318
+ if (!cpu) {
319
+ return;
320
+ }
321
+
322
+ ri = g_new(struct SRCSCRResetInfo, 1);
323
+ ri->s = s;
324
+ ri->reset_bit = reset_shift;
325
+
326
+ async_run_on_cpu(cpu, imx7_clear_reset_bit, RUN_ON_CPU_HOST_PTR(ri));
327
+}
328
+
329
+
330
+static void imx7_src_write(void *opaque, hwaddr offset, uint64_t value,
331
+ unsigned size)
332
+{
333
+ IMX7SRCState *s = (IMX7SRCState *)opaque;
334
+ uint32_t index = offset >> 2;
335
+ long unsigned int change_mask;
336
+ uint32_t current_value = value;
337
+
338
+ if (index >= SRC_MAX) {
339
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
340
+ HWADDR_PRIx "\n", TYPE_IMX7_SRC, __func__, offset);
341
+ return;
342
+ }
343
+
344
+ trace_imx7_src_write(imx7_src_reg_name(SRC_A7RCR0), s->regs[SRC_A7RCR0]);
345
+
346
+ change_mask = s->regs[index] ^ (uint32_t)current_value;
347
+
348
+ switch (index) {
349
+ case SRC_A7RCR0:
350
+ if (FIELD_EX32(change_mask, CORE0, RST)) {
351
+ arm_reset_cpu(0);
352
+ imx7_defer_clear_reset_bit(0, s, R_CORE0_RST_SHIFT);
353
+ }
354
+ if (FIELD_EX32(change_mask, CORE1, RST)) {
355
+ arm_reset_cpu(1);
356
+ imx7_defer_clear_reset_bit(1, s, R_CORE1_RST_SHIFT);
357
+ }
358
+ s->regs[index] = current_value;
359
+ break;
360
+ case SRC_A7RCR1:
361
+ /*
362
+ * On real hardware when the system reset controller starts a
363
+ * secondary CPU it runs through some boot ROM code which reads
364
+ * the SRC_GPRX registers controlling the start address and branches
365
+ * to it.
366
+ * Here we are taking a short cut and branching directly to the
367
+ * requested address (we don't want to run the boot ROM code inside
368
+ * QEMU)
369
+ */
370
+ if (FIELD_EX32(change_mask, CORE1, ENABLE)) {
371
+ if (FIELD_EX32(current_value, CORE1, ENABLE)) {
372
+ /* CORE 1 is brought up */
373
+ arm_set_cpu_on(1, s->regs[SRC_GPR3], s->regs[SRC_GPR4],
374
+ 3, false);
375
+ } else {
376
+ /* CORE 1 is shut down */
377
+ arm_set_cpu_off(1);
378
+ }
379
+ /* We clear the reset bits as the processor changed state */
380
+ imx7_defer_clear_reset_bit(1, s, R_CORE1_RST_SHIFT);
381
+ clear_bit(R_CORE1_RST_SHIFT, &change_mask);
382
+ }
383
+ s->regs[index] = current_value;
32
+ break;
384
+ break;
33
+ default:
385
+ default:
34
+ g_assert_not_reached();
386
+ s->regs[index] = current_value;
387
+ break;
35
+ }
388
+ }
36
+}
389
+}
37
+
390
+
38
+static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size)
391
+static const struct MemoryRegionOps imx7_src_ops = {
39
+{
392
+ .read = imx7_src_read,
40
+ long off = neon_element_offset(reg, ele, size);
393
+ .write = imx7_src_write,
41
+
394
+ .endianness = DEVICE_NATIVE_ENDIAN,
42
+ switch (size) {
395
+ .valid = {
43
+ case MO_32:
396
+ /*
44
+ tcg_gen_st_i32(src, cpu_env, off);
397
+ * Our device would not work correctly if the guest was doing
45
+ break;
398
+ * unaligned access. This might not be a limitation on the real
46
+ default:
399
+ * device but in practice there is no reason for a guest to access
47
+ g_assert_not_reached();
400
+ * this device unaligned.
48
+ }
401
+ */
49
+}
402
+ .min_access_size = 4,
50
+
403
+ .max_access_size = 4,
51
static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
404
+ .unaligned = false,
52
{
405
+ },
53
TCGv_ptr ret = tcg_temp_new_ptr();
406
+};
54
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
407
+
408
+static void imx7_src_realize(DeviceState *dev, Error **errp)
409
+{
410
+ IMX7SRCState *s = IMX7_SRC(dev);
411
+
412
+ memory_region_init_io(&s->iomem, OBJECT(dev), &imx7_src_ops, s,
413
+ TYPE_IMX7_SRC, 0x1000);
414
+ sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
415
+}
416
+
417
+static void imx7_src_class_init(ObjectClass *klass, void *data)
418
+{
419
+ DeviceClass *dc = DEVICE_CLASS(klass);
420
+
421
+ dc->realize = imx7_src_realize;
422
+ dc->reset = imx7_src_reset;
423
+ dc->vmsd = &vmstate_imx7_src;
424
+ dc->desc = "i.MX7 System Reset Controller";
425
+}
426
+
427
+static const TypeInfo imx7_src_info = {
428
+ .name = TYPE_IMX7_SRC,
429
+ .parent = TYPE_SYS_BUS_DEVICE,
430
+ .instance_size = sizeof(IMX7SRCState),
431
+ .class_init = imx7_src_class_init,
432
+};
433
+
434
+static void imx7_src_register_types(void)
435
+{
436
+ type_register_static(&imx7_src_info);
437
+}
438
+
439
+type_init(imx7_src_register_types)
440
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
55
index XXXXXXX..XXXXXXX 100644
441
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/translate-neon.c.inc
442
--- a/hw/misc/meson.build
57
+++ b/target/arm/translate-neon.c.inc
443
+++ b/hw/misc/meson.build
58
@@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn)
444
@@ -XXX,XX +XXX,XX @@ system_ss.add(when: 'CONFIG_IMX', if_true: files(
59
* early. Since Q is 0 there are always just two passes, so instead
445
'imx6_src.c',
60
* of a complicated loop over each pass we just unroll.
446
'imx6ul_ccm.c',
61
*/
447
'imx7_ccm.c',
62
- tmp = neon_load_reg(a->vn, 0);
448
+ 'imx7_src.c',
63
- tmp2 = neon_load_reg(a->vn, 1);
449
'imx7_gpr.c',
64
+ tmp = tcg_temp_new_i32();
450
'imx7_snvs.c',
65
+ tmp2 = tcg_temp_new_i32();
451
'imx_ccm.c',
66
+ tmp3 = tcg_temp_new_i32();
452
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
67
+
453
index XXXXXXX..XXXXXXX 100644
68
+ read_neon_element32(tmp, a->vn, 0, MO_32);
454
--- a/hw/misc/trace-events
69
+ read_neon_element32(tmp2, a->vn, 1, MO_32);
455
+++ b/hw/misc/trace-events
70
fn(tmp, tmp, tmp2);
456
@@ -XXX,XX +XXX,XX @@ ccm_clock_freq(uint32_t clock, uint32_t freq) "(Clock = %d) = %d"
71
- tcg_temp_free_i32(tmp2);
457
ccm_read_reg(const char *reg_name, uint32_t value) "reg[%s] <= 0x%" PRIx32
72
458
ccm_write_reg(const char *reg_name, uint32_t value) "reg[%s] => 0x%" PRIx32
73
- tmp3 = neon_load_reg(a->vm, 0);
459
74
- tmp2 = neon_load_reg(a->vm, 1);
460
+# imx7_src.c
75
+ read_neon_element32(tmp3, a->vm, 0, MO_32);
461
+imx7_src_read(const char *reg_name, uint32_t value) "reg[%s] => 0x%" PRIx32
76
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
462
+imx7_src_write(const char *reg_name, uint32_t value) "reg[%s] <= 0x%" PRIx32
77
fn(tmp3, tmp3, tmp2);
463
+
78
- tcg_temp_free_i32(tmp2);
464
# iotkit-sysinfo.c
79
465
iotkit_sysinfo_read(uint64_t offset, uint64_t data, unsigned size) "IoTKit SysInfo read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
80
- neon_store_reg(a->vd, 0, tmp);
466
iotkit_sysinfo_write(uint64_t offset, uint64_t data, unsigned size) "IoTKit SysInfo write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
81
- neon_store_reg(a->vd, 1, tmp3);
82
+ write_neon_element32(tmp, a->vd, 0, MO_32);
83
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
84
+
85
+ tcg_temp_free_i32(tmp);
86
+ tcg_temp_free_i32(tmp2);
87
+ tcg_temp_free_i32(tmp3);
88
return true;
89
}
90
91
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
92
* 2-reg-and-shift operations, size < 3 case, where the
93
* helper needs to be passed cpu_env.
94
*/
95
- TCGv_i32 constimm;
96
+ TCGv_i32 constimm, tmp;
97
int pass;
98
99
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
100
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a,
101
* by immediate using the variable shift operations.
102
*/
103
constimm = tcg_const_i32(dup_const(a->size, a->shift));
104
+ tmp = tcg_temp_new_i32();
105
106
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
107
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
108
+ read_neon_element32(tmp, a->vm, pass, MO_32);
109
fn(tmp, cpu_env, tmp, constimm);
110
- neon_store_reg(a->vd, pass, tmp);
111
+ write_neon_element32(tmp, a->vd, pass, MO_32);
112
}
113
+ tcg_temp_free_i32(tmp);
114
tcg_temp_free_i32(constimm);
115
return true;
116
}
117
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a,
118
constimm = tcg_const_i64(-a->shift);
119
rm1 = tcg_temp_new_i64();
120
rm2 = tcg_temp_new_i64();
121
+ rd = tcg_temp_new_i32();
122
123
/* Load both inputs first to avoid potential overwrite if rm == rd */
124
neon_load_reg64(rm1, a->vm);
125
neon_load_reg64(rm2, a->vm + 1);
126
127
shiftfn(rm1, rm1, constimm);
128
- rd = tcg_temp_new_i32();
129
narrowfn(rd, cpu_env, rm1);
130
- neon_store_reg(a->vd, 0, rd);
131
+ write_neon_element32(rd, a->vd, 0, MO_32);
132
133
shiftfn(rm2, rm2, constimm);
134
- rd = tcg_temp_new_i32();
135
narrowfn(rd, cpu_env, rm2);
136
- neon_store_reg(a->vd, 1, rd);
137
+ write_neon_element32(rd, a->vd, 1, MO_32);
138
139
+ tcg_temp_free_i32(rd);
140
tcg_temp_free_i64(rm1);
141
tcg_temp_free_i64(rm2);
142
tcg_temp_free_i64(constimm);
143
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
144
constimm = tcg_const_i32(imm);
145
146
/* Load all inputs first to avoid potential overwrite */
147
- rm1 = neon_load_reg(a->vm, 0);
148
- rm2 = neon_load_reg(a->vm, 1);
149
- rm3 = neon_load_reg(a->vm + 1, 0);
150
- rm4 = neon_load_reg(a->vm + 1, 1);
151
+ rm1 = tcg_temp_new_i32();
152
+ rm2 = tcg_temp_new_i32();
153
+ rm3 = tcg_temp_new_i32();
154
+ rm4 = tcg_temp_new_i32();
155
+ read_neon_element32(rm1, a->vm, 0, MO_32);
156
+ read_neon_element32(rm2, a->vm, 1, MO_32);
157
+ read_neon_element32(rm3, a->vm, 2, MO_32);
158
+ read_neon_element32(rm4, a->vm, 3, MO_32);
159
rtmp = tcg_temp_new_i64();
160
161
shiftfn(rm1, rm1, constimm);
162
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
163
tcg_temp_free_i32(rm2);
164
165
narrowfn(rm1, cpu_env, rtmp);
166
- neon_store_reg(a->vd, 0, rm1);
167
+ write_neon_element32(rm1, a->vd, 0, MO_32);
168
+ tcg_temp_free_i32(rm1);
169
170
shiftfn(rm3, rm3, constimm);
171
shiftfn(rm4, rm4, constimm);
172
@@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a,
173
174
narrowfn(rm3, cpu_env, rtmp);
175
tcg_temp_free_i64(rtmp);
176
- neon_store_reg(a->vd, 1, rm3);
177
+ write_neon_element32(rm3, a->vd, 1, MO_32);
178
+ tcg_temp_free_i32(rm3);
179
return true;
180
}
181
182
@@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a,
183
widen_mask = dup_const(a->size + 1, widen_mask);
184
}
185
186
- rm0 = neon_load_reg(a->vm, 0);
187
- rm1 = neon_load_reg(a->vm, 1);
188
+ rm0 = tcg_temp_new_i32();
189
+ rm1 = tcg_temp_new_i32();
190
+ read_neon_element32(rm0, a->vm, 0, MO_32);
191
+ read_neon_element32(rm1, a->vm, 1, MO_32);
192
tmp = tcg_temp_new_i64();
193
194
widenfn(tmp, rm0);
195
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
196
if (src1_wide) {
197
neon_load_reg64(rn0_64, a->vn);
198
} else {
199
- TCGv_i32 tmp = neon_load_reg(a->vn, 0);
200
+ TCGv_i32 tmp = tcg_temp_new_i32();
201
+ read_neon_element32(tmp, a->vn, 0, MO_32);
202
widenfn(rn0_64, tmp);
203
tcg_temp_free_i32(tmp);
204
}
205
- rm = neon_load_reg(a->vm, 0);
206
+ rm = tcg_temp_new_i32();
207
+ read_neon_element32(rm, a->vm, 0, MO_32);
208
209
widenfn(rm_64, rm);
210
tcg_temp_free_i32(rm);
211
@@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a,
212
if (src1_wide) {
213
neon_load_reg64(rn1_64, a->vn + 1);
214
} else {
215
- TCGv_i32 tmp = neon_load_reg(a->vn, 1);
216
+ TCGv_i32 tmp = tcg_temp_new_i32();
217
+ read_neon_element32(tmp, a->vn, 1, MO_32);
218
widenfn(rn1_64, tmp);
219
tcg_temp_free_i32(tmp);
220
}
221
- rm = neon_load_reg(a->vm, 1);
222
+ rm = tcg_temp_new_i32();
223
+ read_neon_element32(rm, a->vm, 1, MO_32);
224
225
neon_store_reg64(rn0_64, a->vd);
226
227
@@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a,
228
229
narrowfn(rd1, rn_64);
230
231
- neon_store_reg(a->vd, 0, rd0);
232
- neon_store_reg(a->vd, 1, rd1);
233
+ write_neon_element32(rd0, a->vd, 0, MO_32);
234
+ write_neon_element32(rd1, a->vd, 1, MO_32);
235
236
+ tcg_temp_free_i32(rd0);
237
+ tcg_temp_free_i32(rd1);
238
tcg_temp_free_i64(rn_64);
239
tcg_temp_free_i64(rm_64);
240
241
@@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a,
242
rd0 = tcg_temp_new_i64();
243
rd1 = tcg_temp_new_i64();
244
245
- rn = neon_load_reg(a->vn, 0);
246
- rm = neon_load_reg(a->vm, 0);
247
+ rn = tcg_temp_new_i32();
248
+ rm = tcg_temp_new_i32();
249
+ read_neon_element32(rn, a->vn, 0, MO_32);
250
+ read_neon_element32(rm, a->vm, 0, MO_32);
251
opfn(rd0, rn, rm);
252
- tcg_temp_free_i32(rn);
253
- tcg_temp_free_i32(rm);
254
255
- rn = neon_load_reg(a->vn, 1);
256
- rm = neon_load_reg(a->vm, 1);
257
+ read_neon_element32(rn, a->vn, 1, MO_32);
258
+ read_neon_element32(rm, a->vm, 1, MO_32);
259
opfn(rd1, rn, rm);
260
tcg_temp_free_i32(rn);
261
tcg_temp_free_i32(rm);
262
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
263
264
static inline TCGv_i32 neon_get_scalar(int size, int reg)
265
{
266
- TCGv_i32 tmp;
267
- if (size == 1) {
268
- tmp = neon_load_reg(reg & 7, reg >> 4);
269
+ TCGv_i32 tmp = tcg_temp_new_i32();
270
+ if (size == MO_16) {
271
+ read_neon_element32(tmp, reg & 7, reg >> 4, MO_32);
272
if (reg & 8) {
273
gen_neon_dup_high16(tmp);
274
} else {
275
gen_neon_dup_low16(tmp);
276
}
277
} else {
278
- tmp = neon_load_reg(reg & 15, reg >> 4);
279
+ read_neon_element32(tmp, reg & 15, reg >> 4, MO_32);
280
}
281
return tmp;
282
}
283
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
284
* perform an accumulation operation of that result into the
285
* destination.
286
*/
287
- TCGv_i32 scalar;
288
+ TCGv_i32 scalar, tmp;
289
int pass;
290
291
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
292
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a,
293
}
294
295
scalar = neon_get_scalar(a->size, a->vm);
296
+ tmp = tcg_temp_new_i32();
297
298
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
299
- TCGv_i32 tmp = neon_load_reg(a->vn, pass);
300
+ read_neon_element32(tmp, a->vn, pass, MO_32);
301
opfn(tmp, tmp, scalar);
302
if (accfn) {
303
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
304
+ TCGv_i32 rd = tcg_temp_new_i32();
305
+ read_neon_element32(rd, a->vd, pass, MO_32);
306
accfn(tmp, rd, tmp);
307
tcg_temp_free_i32(rd);
308
}
309
- neon_store_reg(a->vd, pass, tmp);
310
+ write_neon_element32(tmp, a->vd, pass, MO_32);
311
}
312
+ tcg_temp_free_i32(tmp);
313
tcg_temp_free_i32(scalar);
314
return true;
315
}
316
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
317
* performs a kind of fused op-then-accumulate using a helper
318
* function that takes all of rd, rn and the scalar at once.
319
*/
320
- TCGv_i32 scalar;
321
+ TCGv_i32 scalar, rn, rd;
322
int pass;
323
324
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
325
@@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a,
326
}
327
328
scalar = neon_get_scalar(a->size, a->vm);
329
+ rn = tcg_temp_new_i32();
330
+ rd = tcg_temp_new_i32();
331
332
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
333
- TCGv_i32 rn = neon_load_reg(a->vn, pass);
334
- TCGv_i32 rd = neon_load_reg(a->vd, pass);
335
+ read_neon_element32(rn, a->vn, pass, MO_32);
336
+ read_neon_element32(rd, a->vd, pass, MO_32);
337
opfn(rd, cpu_env, rn, scalar, rd);
338
- tcg_temp_free_i32(rn);
339
- neon_store_reg(a->vd, pass, rd);
340
+ write_neon_element32(rd, a->vd, pass, MO_32);
341
}
342
+ tcg_temp_free_i32(rn);
343
+ tcg_temp_free_i32(rd);
344
tcg_temp_free_i32(scalar);
345
346
return true;
347
@@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a,
348
scalar = neon_get_scalar(a->size, a->vm);
349
350
/* Load all inputs before writing any outputs, in case of overlap */
351
- rn = neon_load_reg(a->vn, 0);
352
+ rn = tcg_temp_new_i32();
353
+ read_neon_element32(rn, a->vn, 0, MO_32);
354
rn0_64 = tcg_temp_new_i64();
355
opfn(rn0_64, rn, scalar);
356
- tcg_temp_free_i32(rn);
357
358
- rn = neon_load_reg(a->vn, 1);
359
+ read_neon_element32(rn, a->vn, 1, MO_32);
360
rn1_64 = tcg_temp_new_i64();
361
opfn(rn1_64, rn, scalar);
362
tcg_temp_free_i32(rn);
363
@@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a)
364
return false;
365
}
366
n <<= 3;
367
+ tmp = tcg_temp_new_i32();
368
if (a->op) {
369
- tmp = neon_load_reg(a->vd, 0);
370
+ read_neon_element32(tmp, a->vd, 0, MO_32);
371
} else {
372
- tmp = tcg_temp_new_i32();
373
tcg_gen_movi_i32(tmp, 0);
374
}
375
- tmp2 = neon_load_reg(a->vm, 0);
376
+ tmp2 = tcg_temp_new_i32();
377
+ read_neon_element32(tmp2, a->vm, 0, MO_32);
378
ptr1 = vfp_reg_ptr(true, a->vn);
379
tmp4 = tcg_const_i32(n);
380
gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4);
381
- tcg_temp_free_i32(tmp);
382
+
383
if (a->op) {
384
- tmp = neon_load_reg(a->vd, 1);
385
+ read_neon_element32(tmp, a->vd, 1, MO_32);
386
} else {
387
- tmp = tcg_temp_new_i32();
388
tcg_gen_movi_i32(tmp, 0);
389
}
390
- tmp3 = neon_load_reg(a->vm, 1);
391
+ tmp3 = tcg_temp_new_i32();
392
+ read_neon_element32(tmp3, a->vm, 1, MO_32);
393
gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4);
394
+ tcg_temp_free_i32(tmp);
395
tcg_temp_free_i32(tmp4);
396
tcg_temp_free_ptr(ptr1);
397
- neon_store_reg(a->vd, 0, tmp2);
398
- neon_store_reg(a->vd, 1, tmp3);
399
- tcg_temp_free_i32(tmp);
400
+
401
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
402
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
403
+ tcg_temp_free_i32(tmp2);
404
+ tcg_temp_free_i32(tmp3);
405
return true;
406
}
407
408
@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a)
409
static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
410
{
411
int pass, half;
412
+ TCGv_i32 tmp[2];
413
414
if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
415
return false;
416
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
417
return true;
418
}
419
420
- for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
421
- TCGv_i32 tmp[2];
422
+ tmp[0] = tcg_temp_new_i32();
423
+ tmp[1] = tcg_temp_new_i32();
424
425
+ for (pass = 0; pass < (a->q ? 2 : 1); pass++) {
426
for (half = 0; half < 2; half++) {
427
- tmp[half] = neon_load_reg(a->vm, pass * 2 + half);
428
+ read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32);
429
switch (a->size) {
430
case 0:
431
tcg_gen_bswap32_i32(tmp[half], tmp[half]);
432
@@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a)
433
g_assert_not_reached();
434
}
435
}
436
- neon_store_reg(a->vd, pass * 2, tmp[1]);
437
- neon_store_reg(a->vd, pass * 2 + 1, tmp[0]);
438
+ write_neon_element32(tmp[1], a->vd, pass * 2, MO_32);
439
+ write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32);
440
}
441
+
442
+ tcg_temp_free_i32(tmp[0]);
443
+ tcg_temp_free_i32(tmp[1]);
444
return true;
445
}
446
447
@@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a,
448
rm0_64 = tcg_temp_new_i64();
449
rm1_64 = tcg_temp_new_i64();
450
rd_64 = tcg_temp_new_i64();
451
- tmp = neon_load_reg(a->vm, pass * 2);
452
+
453
+ tmp = tcg_temp_new_i32();
454
+ read_neon_element32(tmp, a->vm, pass * 2, MO_32);
455
widenfn(rm0_64, tmp);
456
- tcg_temp_free_i32(tmp);
457
- tmp = neon_load_reg(a->vm, pass * 2 + 1);
458
+ read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32);
459
widenfn(rm1_64, tmp);
460
tcg_temp_free_i32(tmp);
461
+
462
opfn(rd_64, rm0_64, rm1_64);
463
tcg_temp_free_i64(rm0_64);
464
tcg_temp_free_i64(rm1_64);
465
@@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a,
466
narrowfn(rd0, cpu_env, rm);
467
neon_load_reg64(rm, a->vm + 1);
468
narrowfn(rd1, cpu_env, rm);
469
- neon_store_reg(a->vd, 0, rd0);
470
- neon_store_reg(a->vd, 1, rd1);
471
+ write_neon_element32(rd0, a->vd, 0, MO_32);
472
+ write_neon_element32(rd1, a->vd, 1, MO_32);
473
+ tcg_temp_free_i32(rd0);
474
+ tcg_temp_free_i32(rd1);
475
tcg_temp_free_i64(rm);
476
return true;
477
}
478
@@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a)
479
}
480
481
rd = tcg_temp_new_i64();
482
+ rm0 = tcg_temp_new_i32();
483
+ rm1 = tcg_temp_new_i32();
484
485
- rm0 = neon_load_reg(a->vm, 0);
486
- rm1 = neon_load_reg(a->vm, 1);
487
+ read_neon_element32(rm0, a->vm, 0, MO_32);
488
+ read_neon_element32(rm1, a->vm, 1, MO_32);
489
490
widenfn(rd, rm0);
491
tcg_gen_shli_i64(rd, rd, 8 << a->size);
492
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a)
493
494
fpst = fpstatus_ptr(FPST_STD);
495
ahp = get_ahp_flag();
496
- tmp = neon_load_reg(a->vm, 0);
497
+ tmp = tcg_temp_new_i32();
498
+ read_neon_element32(tmp, a->vm, 0, MO_32);
499
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
500
- tmp2 = neon_load_reg(a->vm, 1);
501
+ tmp2 = tcg_temp_new_i32();
502
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
503
gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp);
504
tcg_gen_shli_i32(tmp2, tmp2, 16);
505
tcg_gen_or_i32(tmp2, tmp2, tmp);
506
- tcg_temp_free_i32(tmp);
507
- tmp = neon_load_reg(a->vm, 2);
508
+ read_neon_element32(tmp, a->vm, 2, MO_32);
509
gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
510
- tmp3 = neon_load_reg(a->vm, 3);
511
- neon_store_reg(a->vd, 0, tmp2);
512
+ tmp3 = tcg_temp_new_i32();
513
+ read_neon_element32(tmp3, a->vm, 3, MO_32);
514
+ write_neon_element32(tmp2, a->vd, 0, MO_32);
515
+ tcg_temp_free_i32(tmp2);
516
gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp);
517
tcg_gen_shli_i32(tmp3, tmp3, 16);
518
tcg_gen_or_i32(tmp3, tmp3, tmp);
519
- neon_store_reg(a->vd, 1, tmp3);
520
+ write_neon_element32(tmp3, a->vd, 1, MO_32);
521
+ tcg_temp_free_i32(tmp3);
522
tcg_temp_free_i32(tmp);
523
tcg_temp_free_i32(ahp);
524
tcg_temp_free_ptr(fpst);
525
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a)
526
fpst = fpstatus_ptr(FPST_STD);
527
ahp = get_ahp_flag();
528
tmp3 = tcg_temp_new_i32();
529
- tmp = neon_load_reg(a->vm, 0);
530
- tmp2 = neon_load_reg(a->vm, 1);
531
+ tmp2 = tcg_temp_new_i32();
532
+ tmp = tcg_temp_new_i32();
533
+ read_neon_element32(tmp, a->vm, 0, MO_32);
534
+ read_neon_element32(tmp2, a->vm, 1, MO_32);
535
tcg_gen_ext16u_i32(tmp3, tmp);
536
gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
537
- neon_store_reg(a->vd, 0, tmp3);
538
+ write_neon_element32(tmp3, a->vd, 0, MO_32);
539
tcg_gen_shri_i32(tmp, tmp, 16);
540
gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp);
541
- neon_store_reg(a->vd, 1, tmp);
542
- tmp3 = tcg_temp_new_i32();
543
+ write_neon_element32(tmp, a->vd, 1, MO_32);
544
+ tcg_temp_free_i32(tmp);
545
tcg_gen_ext16u_i32(tmp3, tmp2);
546
gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
547
- neon_store_reg(a->vd, 2, tmp3);
548
+ write_neon_element32(tmp3, a->vd, 2, MO_32);
549
+ tcg_temp_free_i32(tmp3);
550
tcg_gen_shri_i32(tmp2, tmp2, 16);
551
gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp);
552
- neon_store_reg(a->vd, 3, tmp2);
553
+ write_neon_element32(tmp2, a->vd, 3, MO_32);
554
+ tcg_temp_free_i32(tmp2);
555
tcg_temp_free_i32(ahp);
556
tcg_temp_free_ptr(fpst);
557
558
@@ -XXX,XX +XXX,XX @@ DO_2M_CRYPTO(SHA256SU0, aa32_sha2, 2)
559
560
static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
561
{
562
+ TCGv_i32 tmp;
563
int pass;
564
565
/* Handle a 2-reg-misc operation by iterating 32 bits at a time */
566
@@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn)
567
return true;
568
}
569
570
+ tmp = tcg_temp_new_i32();
571
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
572
- TCGv_i32 tmp = neon_load_reg(a->vm, pass);
573
+ read_neon_element32(tmp, a->vm, pass, MO_32);
574
fn(tmp, tmp);
575
- neon_store_reg(a->vd, pass, tmp);
576
+ write_neon_element32(tmp, a->vd, pass, MO_32);
577
}
578
+ tcg_temp_free_i32(tmp);
579
580
return true;
581
}
582
@@ -XXX,XX +XXX,XX @@ static bool trans_VTRN(DisasContext *s, arg_2misc *a)
583
return true;
584
}
585
586
- if (a->size == 2) {
587
+ tmp = tcg_temp_new_i32();
588
+ tmp2 = tcg_temp_new_i32();
589
+ if (a->size == MO_32) {
590
for (pass = 0; pass < (a->q ? 4 : 2); pass += 2) {
591
- tmp = neon_load_reg(a->vm, pass);
592
- tmp2 = neon_load_reg(a->vd, pass + 1);
593
- neon_store_reg(a->vm, pass, tmp2);
594
- neon_store_reg(a->vd, pass + 1, tmp);
595
+ read_neon_element32(tmp, a->vm, pass, MO_32);
596
+ read_neon_element32(tmp2, a->vd, pass + 1, MO_32);
597
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
598
+ write_neon_element32(tmp, a->vd, pass + 1, MO_32);
599
}
600
} else {
601
for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
602
- tmp = neon_load_reg(a->vm, pass);
603
- tmp2 = neon_load_reg(a->vd, pass);
604
- if (a->size == 0) {
605
+ read_neon_element32(tmp, a->vm, pass, MO_32);
606
+ read_neon_element32(tmp2, a->vd, pass, MO_32);
607
+ if (a->size == MO_8) {
608
gen_neon_trn_u8(tmp, tmp2);
609
} else {
610
gen_neon_trn_u16(tmp, tmp2);
611
}
612
- neon_store_reg(a->vm, pass, tmp2);
613
- neon_store_reg(a->vd, pass, tmp);
614
+ write_neon_element32(tmp2, a->vm, pass, MO_32);
615
+ write_neon_element32(tmp, a->vd, pass, MO_32);
616
}
617
}
618
+ tcg_temp_free_i32(tmp);
619
+ tcg_temp_free_i32(tmp2);
620
return true;
621
}
622
--
467
--
623
2.20.1
468
2.34.1
624
625
1
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
1
The architecture requires (R_TYTWB) that an attempt to return from EL3
2
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
2
when SCR_EL3.{NSE,NS} are {1,0} is an illegal exception return. (This
3
This is incorrect when the security state being queried is not the
3
enforces that the CPU can't ever be executing below EL3 with the
4
current one, because arm_current_el() uses the current security state
4
NSE,NS bits indicating an invalid security state.)
5
to determine which of the banked CONTROL.nPRIV bits to look at.
6
The effect was that if (for instance) Secure state was in privileged
7
mode but Non-Secure was not then we would return the wrong MMU index.
8
5
9
The only places where we are using this function in a way that could
6
We were missing this check; add it.
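
For context, a minimal sketch of the security-state encodings behind this
check (encodings per the FEAT_RME description; the bit positions below are
illustrative rather than QEMU's actual SCR_* definitions):

#include <stdbool.h>
#include <stdint.h>

#define SCR_NS   (1ULL << 0)     /* illustrative positions only */
#define SCR_NSE  (1ULL << 62)

/* SCR_EL3.{NSE,NS}: {0,0} Secure, {0,1} Non-secure, {1,1} Realm,
 * {1,0} reserved -- the reserved combination is what an exception
 * return from EL3 must now treat as an illegal exception return. */
static bool scr_security_state_valid(uint64_t scr_el3)
{
    return (scr_el3 & (SCR_NS | SCR_NSE)) != SCR_NSE;
}
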
10
trigger this bug are for the stack loads during a v8M function-return
11
and for the instruction fetch of a v8M SG insn.
12
13
Fix the bug by expanding out the M-profile version of the
14
arm_current_el() logic inline so it can use the passed in secstate
15
rather than env->v7m.secure.
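
A minimal sketch of the privilege rule being open-coded here (types and
names are illustrative, not QEMU's own):

#include <stdbool.h>
#include <stdint.h>

/* M-profile execution is privileged when in Handler mode, or when the
 * queried security state's banked CONTROL.nPRIV bit (bit 0) is clear. */
static bool v7m_priv_for_secstate(bool handler_mode,
                                  const uint32_t control_banked[2],
                                  bool secstate)
{
    return handler_mode || !(control_banked[secstate] & 1);
}
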
16
7
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
10
Message-id: 20230807150618.101357-1-peter.maydell@linaro.org
20
---
11
---
21
target/arm/m_helper.c | 3 ++-
12
target/arm/tcg/helper-a64.c | 9 +++++++++
22
1 file changed, 2 insertions(+), 1 deletion(-)
13
1 file changed, 9 insertions(+)
23
14
24
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
15
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
25
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/m_helper.c
17
--- a/target/arm/tcg/helper-a64.c
27
+++ b/target/arm/m_helper.c
18
+++ b/target/arm/tcg/helper-a64.c
28
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
19
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
29
/* Return the MMU index for a v7M CPU in the specified security state */
20
spsr &= ~PSTATE_SS;
30
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
21
}
31
{
22
32
- bool priv = arm_current_el(env) != 0;
23
+ /*
33
+ bool priv = arm_v7m_is_handler_mode(env) ||
24
+ * FEAT_RME forbids return from EL3 with an invalid security state.
34
+ !(env->v7m.control[secstate] & 1);
25
+ * We don't need an explicit check for FEAT_RME here because we enforce
35
26
+ * in scr_write() that you can't set the NSE bit without it.
36
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
27
+ */
37
}
28
+ if (cur_el == 3 && (env->cp15.scr_el3 & (SCR_NS | SCR_NSE)) == SCR_NSE) {
29
+ goto illegal_return;
30
+ }
31
+
32
new_el = el_from_spsr(spsr);
33
if (new_el == -1) {
34
goto illegal_return;
38
--
35
--
39
2.20.1
36
2.34.1
40
41
diff view generated by jsdifflib
1
Sphinx 3.2 is pickier than earlier versions about the option:: markup,
1
In the m48t59 device we almost always use 64-bit arithmetic when
2
and complains about our usage in qemu-option-trace.rst:
2
dealing with time_t deltas. The one exception is in set_alarm(),
3
3
which currently uses a plain 'int' to hold the difference between two
4
../../docs/qemu-option-trace.rst.inc:4:Malformed option description
4
time_t values. Switch to int64_t instead to avoid any possible
5
'[enable=]PATTERN', should look like "opt", "-opt args", "--opt args",
5
overflow issues.
6
"/opt args" or "+opt args"
7
8
In this file, we're really trying to document the different parts of
9
the top-level --trace option, which qemu-nbd.rst and qemu-img.rst
10
have already introduced with an option:: markup. So it's not right
11
to use option:: here anyway. Switch to a different markup
12
(definition lists) which gives about the same formatted output.
13
14
(Unlike option::, this markup doesn't produce index entries; but
15
at the moment we don't do anything much with indexes anyway, and
16
in any case I think it doesn't make much sense to have individual
17
index entries for the sub-parts of the --trace option.)
18
6
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
21
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
22
Message-id: 20201030174700.7204-3-peter.maydell@linaro.org
23
---
9
---
24
docs/qemu-option-trace.rst.inc | 6 +++---
10
hw/rtc/m48t59.c | 2 +-
25
1 file changed, 3 insertions(+), 3 deletions(-)
11
1 file changed, 1 insertion(+), 1 deletion(-)
26
12
27
diff --git a/docs/qemu-option-trace.rst.inc b/docs/qemu-option-trace.rst.inc
13
diff --git a/hw/rtc/m48t59.c b/hw/rtc/m48t59.c
28
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
29
--- a/docs/qemu-option-trace.rst.inc
15
--- a/hw/rtc/m48t59.c
30
+++ b/docs/qemu-option-trace.rst.inc
16
+++ b/hw/rtc/m48t59.c
31
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static void alarm_cb (void *opaque)
32
18
33
Specify tracing options.
19
static void set_alarm(M48t59State *NVRAM)
34
20
{
35
-.. option:: [enable=]PATTERN
21
- int diff;
36
+``[enable=]PATTERN``
22
+ int64_t diff;
37
23
if (NVRAM->alrm_timer != NULL) {
38
Immediately enable events matching *PATTERN*
24
timer_del(NVRAM->alrm_timer);
39
(either event name or a globbing pattern). This option is only
25
diff = qemu_timedate_diff(&NVRAM->alarm) - NVRAM->time_offset;
40
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
41
42
Use :option:`-trace help` to print a list of names of trace points.
43
44
-.. option:: events=FILE
45
+``events=FILE``
46
47
Immediately enable events listed in *FILE*.
48
The file must contain one event name (as listed in the ``trace-events-all``
49
@@ -XXX,XX +XXX,XX @@ Specify tracing options.
50
available if QEMU has been compiled with the ``simple``, ``log`` or
51
``ftrace`` tracing backend.
52
53
-.. option:: file=FILE
54
+``file=FILE``
55
56
Log output traces to *FILE*.
57
This option is only available if QEMU has been compiled with
58
--
26
--
59
2.20.1
27
2.34.1
60
28
61
29
1
The helper functions for performing the udot/sdot operations against
1
In the twl92230 device, use int64_t for the two state fields
2
a scalar were not using an address-swizzling macro when converting
2
sec_offset and alm_sec, because we set these to values that
3
the index of the scalar element into a pointer into the vm array.
3
are either time_t or differences between two time_t values.
4
This had no effect on little-endian hosts but meant we generated
5
incorrect results on big-endian hosts.
6
4
7
For these insns, the index is indexing over groups of 4 8-bit values,
5
These fields aren't saved in vmstate anywhere, so we can
8
so 32 bits per indexed entity, and H4() is therefore what we want.
6
safely widen them.
9
(For Neon the only possible input indexes are 0 and 1.)
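
A minimal sketch of what the element-index swizzle does (the guard macro
name and the big-endian XOR constants here are assumptions mirroring
QEMU's H2()/H4() helpers; see the vector helper headers for the
authoritative definitions):

#include <stdint.h>

/* Vector registers are stored as host-endian 64-bit chunks, so on a
 * big-endian host adjacent sub-elements within a chunk sit at mirrored
 * offsets and the element index needs fixing up. */
#ifdef BIG_ENDIAN_HOST          /* illustrative guard name */
#define H2(x)  ((x) ^ 3)        /* 16-bit elements */
#define H4(x)  ((x) ^ 1)        /* 32-bit elements */
#else
#define H2(x)  (x)
#define H4(x)  (x)
#endif

/* For VUDOT/VSDOT (scalar) each index names one group of four bytes,
 * i.e. one 32-bit lane, hence H4(): */
static const int8_t *scalar_group(const int8_t *vm, intptr_t index)
{
    return vm + H4(index) * 4;
}
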
10
7
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
15
---
10
---
16
target/arm/vec_helper.c | 4 ++--
11
hw/rtc/twl92230.c | 4 ++--
17
1 file changed, 2 insertions(+), 2 deletions(-)
12
1 file changed, 2 insertions(+), 2 deletions(-)
18
13
19
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
14
diff --git a/hw/rtc/twl92230.c b/hw/rtc/twl92230.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/vec_helper.c
16
--- a/hw/rtc/twl92230.c
22
+++ b/target/arm/vec_helper.c
17
+++ b/hw/rtc/twl92230.c
23
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
18
@@ -XXX,XX +XXX,XX @@ struct MenelausState {
24
intptr_t index = simd_data(desc);
19
struct tm tm;
25
uint32_t *d = vd;
20
struct tm new;
26
int8_t *n = vn;
21
struct tm alm;
27
- int8_t *m_indexed = (int8_t *)vm + index * 4;
22
- int sec_offset;
28
+ int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;
23
- int alm_sec;
29
24
+ int64_t sec_offset;
30
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
25
+ int64_t alm_sec;
31
* Otherwise opr_sz is a multiple of 16.
26
int next_comp;
32
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc)
27
} rtc;
33
intptr_t index = simd_data(desc);
28
uint16_t rtc_next_vmstate;
34
uint32_t *d = vd;
35
uint8_t *n = vn;
36
- uint8_t *m_indexed = (uint8_t *)vm + index * 4;
37
+ uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4;
38
39
/* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
40
* Otherwise opr_sz is a multiple of 16.
41
--
29
--
42
2.20.1
30
2.34.1
43
31
44
32
diff view generated by jsdifflib
1
The kerneldoc script currently emits Sphinx markup for a macro with
1
In the aspeed_rtc device we store a difference between two time_t
2
arguments that uses the c:function directive. This is correct for
2
values in an 'int'. This is not really correct when time_t could
3
Sphinx versions earlier than Sphinx 3, where c:macro doesn't allow
3
be 64 bits. Enlarge the field to 'int64_t'.
4
documentation of macros with arguments and c:function is not picky
5
about the syntax of what it is passed. However, in Sphinx 3 the
6
c:macro directive was enhanced to support macros with arguments,
7
and c:function was made more picky about what syntax it accepted.
8
4
9
When kerneldoc is told that it needs to produce output for Sphinx
5
This is a migration compatibility break for the aspeed boards.
10
3 or later, make it emit c:function only for functions and c:macro
6
While we are changing the vmstate, remove the accidental
11
for macros with arguments. We assume that anything with a return
7
duplicate of the offset field.
12
type is a function and anything without is a macro.
13
14
This fixes the Sphinx error:
15
16
/home/petmay01/linaro/qemu-from-laptop/qemu/docs/../include/qom/object.h:155:Error in declarator
17
If declarator-id with parameters (e.g., 'void f(int arg)'):
18
Invalid C declaration: Expected identifier in nested name. [error at 25]
19
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
20
-------------------------^
21
If parenthesis in noptr-declarator (e.g., 'void (*f(int arg))(double)'):
22
Error in declarator or parameters
23
Invalid C declaration: Expecting "(" in parameters. [error at 39]
24
DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME)
25
---------------------------------------^
26
8
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
10
Reviewed-by: Cédric Le Goater <clg@kaod.org>
29
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
30
Message-id: 20201030174700.7204-2-peter.maydell@linaro.org
31
---
11
---
32
scripts/kernel-doc | 18 +++++++++++++++++-
12
include/hw/rtc/aspeed_rtc.h | 2 +-
33
1 file changed, 17 insertions(+), 1 deletion(-)
13
hw/rtc/aspeed_rtc.c | 5 ++---
14
2 files changed, 3 insertions(+), 4 deletions(-)
34
15
35
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
16
diff --git a/include/hw/rtc/aspeed_rtc.h b/include/hw/rtc/aspeed_rtc.h
36
index XXXXXXX..XXXXXXX 100755
17
index XXXXXXX..XXXXXXX 100644
37
--- a/scripts/kernel-doc
18
--- a/include/hw/rtc/aspeed_rtc.h
38
+++ b/scripts/kernel-doc
19
+++ b/include/hw/rtc/aspeed_rtc.h
39
@@ -XXX,XX +XXX,XX @@ sub output_function_rst(%) {
20
@@ -XXX,XX +XXX,XX @@ struct AspeedRtcState {
40
    output_highlight_rst($args{'purpose'});
21
qemu_irq irq;
41
    $start = "\n\n**Syntax**\n\n ``";
22
42
} else {
23
uint32_t reg[0x18];
43
-    print ".. c:function:: ";
24
- int offset;
44
+ if ((split(/\./, $sphinx_version))[0] >= 3) {
25
+ int64_t offset;
45
+ # Sphinx 3 and later distinguish macros and functions and
26
46
+ # complain if you use c:function with something that's not
27
};
47
+ # syntactically valid as a function declaration.
28
48
+ # We assume that anything with a return type is a function
29
diff --git a/hw/rtc/aspeed_rtc.c b/hw/rtc/aspeed_rtc.c
49
+ # and anything without is a macro.
30
index XXXXXXX..XXXXXXX 100644
50
+ if ($args{'functiontype'} ne "") {
31
--- a/hw/rtc/aspeed_rtc.c
51
+ print ".. c:function:: ";
32
+++ b/hw/rtc/aspeed_rtc.c
52
+ } else {
33
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps aspeed_rtc_ops = {
53
+ print ".. c:macro:: ";
34
54
+ }
35
static const VMStateDescription vmstate_aspeed_rtc = {
55
+ } else {
36
.name = TYPE_ASPEED_RTC,
56
+ # Older Sphinx don't support documenting macros that take
37
- .version_id = 1,
57
+ # arguments with c:macro, and don't complain about the use
38
+ .version_id = 2,
58
+ # of c:function for this.
39
.fields = (VMStateField[]) {
59
+ print ".. c:function:: ";
40
VMSTATE_UINT32_ARRAY(reg, AspeedRtcState, 0x18),
60
+ }
41
- VMSTATE_INT32(offset, AspeedRtcState),
42
- VMSTATE_INT32(offset, AspeedRtcState),
43
+ VMSTATE_INT64(offset, AspeedRtcState),
44
VMSTATE_END_OF_LIST()
61
}
45
}
62
if ($args{'functiontype'} ne "") {
46
};
63
    $start .= $args{'functiontype'} . " " . $args{'function'} . " (";
64
--
47
--
65
2.20.1
48
2.34.1
66
49
67
50
1
On some hosts (eg Ubuntu Bionic) pkg-config returns a set of
1
The functions qemu_get_timedate() and qemu_timedate_diff() take
2
libraries for gio-2.0 which don't actually work when compiling
2
and return a time offset as an integer. Coverity points out that
3
statically. (Specifically, the returned library string includes
3
means that when an RTC device implementation holds an offset
4
-lmount, but not -lblkid which -lmount depends upon, so linking
4
as a time_t, as the m48t59 does, the time_t will get truncated.
5
fails due to missing symbols.)
5
(CID 1507157, 1517772).
6
6
7
Check that the libraries work, and don't enable gio if they don't,
7
The functions work with time_t internally, so make them use that type
8
in the same way we do for gnutls.
8
in their APIs.
9
10
Note that this won't help any Y2038 issues where either the device
11
model itself is keeping the offset in a 32-bit integer, or where the
12
hardware under emulation has Y2038 or other rollover problems. If we
13
missed any cases of the former then hopefully Coverity will warn us
14
about them since after this patch we'd be truncating a time_t in
15
assignments from qemu_timedate_diff().
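
A small worked example of the truncation being flagged (assuming a 64-bit
time_t host; the wrapped value is implementation-defined, so it is only
noted in a comment):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* e.g. guest RTC set to 2100-01-01 against a 1970 reference clock */
    time_t diff = 4102444800;   /* seconds from 1970-01-01 to 2100-01-01 UTC */
    int truncated = (int)diff;  /* doesn't fit in 32 bits; wraps on typical hosts */

    printf("as time_t: %lld, as int: %d\n", (long long)diff, truncated);
    return 0;
}
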
9
16
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
18
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20200928160402.7961-1-peter.maydell@linaro.org
14
---
19
---
15
configure | 10 +++++++++-
20
include/sysemu/rtc.h | 4 ++--
16
1 file changed, 9 insertions(+), 1 deletion(-)
21
softmmu/rtc.c | 4 ++--
22
2 files changed, 4 insertions(+), 4 deletions(-)
17
23
18
diff --git a/configure b/configure
24
diff --git a/include/sysemu/rtc.h b/include/sysemu/rtc.h
19
index XXXXXXX..XXXXXXX 100755
25
index XXXXXXX..XXXXXXX 100644
20
--- a/configure
26
--- a/include/sysemu/rtc.h
21
+++ b/configure
27
+++ b/include/sysemu/rtc.h
22
@@ -XXX,XX +XXX,XX @@ if test "$static" = yes && test "$mingw32" = yes; then
28
@@ -XXX,XX +XXX,XX @@
23
fi
29
* The behaviour of the clock whose value this function returns will
24
30
* depend on the -rtc command line option passed by the user.
25
if $pkg_config --atleast-version=$glib_req_ver gio-2.0; then
31
*/
26
- gio=yes
32
-void qemu_get_timedate(struct tm *tm, int offset);
27
gio_cflags=$($pkg_config --cflags gio-2.0)
33
+void qemu_get_timedate(struct tm *tm, time_t offset);
28
gio_libs=$($pkg_config --libs gio-2.0)
34
29
gdbus_codegen=$($pkg_config --variable=gdbus_codegen gio-2.0)
35
/**
30
if [ ! -x "$gdbus_codegen" ]; then
36
* qemu_timedate_diff: Return difference between a struct tm and the RTC
31
gdbus_codegen=
37
@@ -XXX,XX +XXX,XX @@ void qemu_get_timedate(struct tm *tm, int offset);
32
fi
38
* a timestamp one hour further ahead than the current RTC time
33
+ # Check that the libraries actually work -- Ubuntu 18.04 ships
39
* then this function will return 3600.
34
+ # with pkg-config --static --libs data for gio-2.0 that is missing
40
*/
35
+ # -lblkid and will give a link error.
41
-int qemu_timedate_diff(struct tm *tm);
36
+ write_c_skeleton
42
+time_t qemu_timedate_diff(struct tm *tm);
37
+ if compile_prog "" "gio_libs" ; then
43
38
+ gio=yes
44
#endif
39
+ else
45
diff --git a/softmmu/rtc.c b/softmmu/rtc.c
40
+ gio=no
46
index XXXXXXX..XXXXXXX 100644
41
+ fi
47
--- a/softmmu/rtc.c
42
else
48
+++ b/softmmu/rtc.c
43
gio=no
49
@@ -XXX,XX +XXX,XX @@ static time_t qemu_ref_timedate(QEMUClockType clock)
44
fi
50
return value;
51
}
52
53
-void qemu_get_timedate(struct tm *tm, int offset)
54
+void qemu_get_timedate(struct tm *tm, time_t offset)
55
{
56
time_t ti = qemu_ref_timedate(rtc_clock);
57
58
@@ -XXX,XX +XXX,XX @@ void qemu_get_timedate(struct tm *tm, int offset)
59
}
60
}
61
62
-int qemu_timedate_diff(struct tm *tm)
63
+time_t qemu_timedate_diff(struct tm *tm)
64
{
65
time_t seconds;
66
45
--
67
--
46
2.20.1
68
2.34.1
47
69
48
70
1
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
1
Where architecturally one ARM_FEATURE_X flag implies another
2
meant we were using the H4() address swizzler macro rather than the
2
ARM_FEATURE_Y, we allow the CPU init function to only set X, and then
3
H2() which is required for 2-byte data. This had no effect on
3
set Y for it. Currently we do this in two places -- we set a few
4
little-endian hosts but meant we put the result data into the
4
flags in arm_cpu_post_init() because we need them to decide which
5
destination Dreg in the wrong order on big-endian hosts.
5
properties to create on the CPU object, and then we do the rest in
6
arm_cpu_realizefn(). However, this is fragile, because it's easy to
7
add a new property and not notice that this means that an X-implies-Y
8
check now has to move from realize to post-init.
9
10
As a specific example, the pmsav7-dregion property is conditional
11
on ARM_FEATURE_PMSA && ARM_FEATURE_V7, which means it won't appear
12
on the Cortex-M33 and -M55, because they set ARM_FEATURE_V8 and
13
rely on V8-implies-V7, which doesn't happen until the realizefn.
14
15
Move all of these X-implies-Y checks into a new function, which
16
we call at the top of arm_cpu_post_init(), so the feature bits
17
are available at that point.
18
19
This does now give us the reverse issue, that if there's a feature
20
bit which is enabled or disabled by the setting of a property then
21
X-implies-Y features that are dependent on that property need to
22
be in realize, not in this new function. But the only one of those
23
is the "EL3 implies VBAR" which is already in the right place, so
24
putting things this way round seems better to me.
6
25
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
27
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
28
Message-id: 20230724174335.2150499-2-peter.maydell@linaro.org
10
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
11
---
29
---
12
target/arm/vec_helper.c | 8 ++++----
30
target/arm/cpu.c | 179 +++++++++++++++++++++++++----------------------
13
1 file changed, 4 insertions(+), 4 deletions(-)
31
1 file changed, 97 insertions(+), 82 deletions(-)
14
32
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
33
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/vec_helper.c
35
--- a/target/arm/cpu.c
18
+++ b/target/arm/vec_helper.c
36
+++ b/target/arm/cpu.c
19
@@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t)
37
@@ -XXX,XX +XXX,XX @@ unsigned int gt_cntfrq_period_ns(ARMCPU *cpu)
20
r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \
38
NANOSECONDS_PER_SECOND / cpu->gt_cntfrq_hz : 1;
21
r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \
39
}
22
\
40
23
- d[H4(0)] = r0; \
41
+static void arm_cpu_propagate_feature_implications(ARMCPU *cpu)
24
- d[H4(1)] = r1; \
42
+{
25
- d[H4(2)] = r2; \
43
+ CPUARMState *env = &cpu->env;
26
- d[H4(3)] = r3; \
44
+ bool no_aa32 = false;
27
+ d[H2(0)] = r0; \
45
+
28
+ d[H2(1)] = r1; \
46
+ /*
29
+ d[H2(2)] = r2; \
47
+ * Some features automatically imply others: set the feature
30
+ d[H2(3)] = r3; \
48
+ * bits explicitly for these cases.
49
+ */
50
+
51
+ if (arm_feature(env, ARM_FEATURE_M)) {
52
+ set_feature(env, ARM_FEATURE_PMSA);
53
+ }
54
+
55
+ if (arm_feature(env, ARM_FEATURE_V8)) {
56
+ if (arm_feature(env, ARM_FEATURE_M)) {
57
+ set_feature(env, ARM_FEATURE_V7);
58
+ } else {
59
+ set_feature(env, ARM_FEATURE_V7VE);
60
+ }
61
+ }
62
+
63
+ /*
64
+ * There exist AArch64 cpus without AArch32 support. When KVM
65
+ * queries ID_ISAR0_EL1 on such a host, the value is UNKNOWN.
66
+ * Similarly, we cannot check ID_AA64PFR0 without AArch64 support.
67
+ * As a general principle, we also do not make ID register
68
+ * consistency checks anywhere unless using TCG, because only
69
+ * for TCG would a consistency-check failure be a QEMU bug.
70
+ */
71
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
72
+ no_aa32 = !cpu_isar_feature(aa64_aa32, cpu);
73
+ }
74
+
75
+ if (arm_feature(env, ARM_FEATURE_V7VE)) {
76
+ /*
77
+ * v7 Virtualization Extensions. In real hardware this implies
78
+ * EL2 and also the presence of the Security Extensions.
79
+ * For QEMU, for backwards-compatibility we implement some
80
+ * CPUs or CPU configs which have no actual EL2 or EL3 but do
81
+ * include the various other features that V7VE implies.
82
+ * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
83
+ * Security Extensions is ARM_FEATURE_EL3.
84
+ */
85
+ assert(!tcg_enabled() || no_aa32 ||
86
+ cpu_isar_feature(aa32_arm_div, cpu));
87
+ set_feature(env, ARM_FEATURE_LPAE);
88
+ set_feature(env, ARM_FEATURE_V7);
89
+ }
90
+ if (arm_feature(env, ARM_FEATURE_V7)) {
91
+ set_feature(env, ARM_FEATURE_VAPA);
92
+ set_feature(env, ARM_FEATURE_THUMB2);
93
+ set_feature(env, ARM_FEATURE_MPIDR);
94
+ if (!arm_feature(env, ARM_FEATURE_M)) {
95
+ set_feature(env, ARM_FEATURE_V6K);
96
+ } else {
97
+ set_feature(env, ARM_FEATURE_V6);
98
+ }
99
+
100
+ /*
101
+ * Always define VBAR for V7 CPUs even if it doesn't exist in
102
+ * non-EL3 configs. This is needed by some legacy boards.
103
+ */
104
+ set_feature(env, ARM_FEATURE_VBAR);
105
+ }
106
+ if (arm_feature(env, ARM_FEATURE_V6K)) {
107
+ set_feature(env, ARM_FEATURE_V6);
108
+ set_feature(env, ARM_FEATURE_MVFR);
109
+ }
110
+ if (arm_feature(env, ARM_FEATURE_V6)) {
111
+ set_feature(env, ARM_FEATURE_V5);
112
+ if (!arm_feature(env, ARM_FEATURE_M)) {
113
+ assert(!tcg_enabled() || no_aa32 ||
114
+ cpu_isar_feature(aa32_jazelle, cpu));
115
+ set_feature(env, ARM_FEATURE_AUXCR);
116
+ }
117
+ }
118
+ if (arm_feature(env, ARM_FEATURE_V5)) {
119
+ set_feature(env, ARM_FEATURE_V4T);
120
+ }
121
+ if (arm_feature(env, ARM_FEATURE_LPAE)) {
122
+ set_feature(env, ARM_FEATURE_V7MP);
123
+ }
124
+ if (arm_feature(env, ARM_FEATURE_CBAR_RO)) {
125
+ set_feature(env, ARM_FEATURE_CBAR);
126
+ }
127
+ if (arm_feature(env, ARM_FEATURE_THUMB2) &&
128
+ !arm_feature(env, ARM_FEATURE_M)) {
129
+ set_feature(env, ARM_FEATURE_THUMB_DSP);
130
+ }
131
+}
132
+
133
void arm_cpu_post_init(Object *obj)
134
{
135
ARMCPU *cpu = ARM_CPU(obj);
136
137
- /* M profile implies PMSA. We have to do this here rather than
138
- * in realize with the other feature-implication checks because
139
- * we look at the PMSA bit to see if we should add some properties.
140
+ /*
141
+ * Some features imply others. Figure this out now, because we
142
+ * are going to look at the feature bits in deciding which
143
+ * properties to add.
144
*/
145
- if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
146
- set_feature(&cpu->env, ARM_FEATURE_PMSA);
147
- }
148
+ arm_cpu_propagate_feature_implications(cpu);
149
150
if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
151
arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
152
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
153
CPUARMState *env = &cpu->env;
154
int pagebits;
155
Error *local_err = NULL;
156
- bool no_aa32 = false;
157
158
/* Use pc-relative instructions in system-mode */
159
#ifndef CONFIG_USER_ONLY
160
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
161
cpu->isar.id_isar3 = u;
31
}
162
}
32
163
33
DO_NEON_PAIRWISE(neon_padd, add)
164
- /* Some features automatically imply others: */
165
- if (arm_feature(env, ARM_FEATURE_V8)) {
166
- if (arm_feature(env, ARM_FEATURE_M)) {
167
- set_feature(env, ARM_FEATURE_V7);
168
- } else {
169
- set_feature(env, ARM_FEATURE_V7VE);
170
- }
171
- }
172
-
173
- /*
174
- * There exist AArch64 cpus without AArch32 support. When KVM
175
- * queries ID_ISAR0_EL1 on such a host, the value is UNKNOWN.
176
- * Similarly, we cannot check ID_AA64PFR0 without AArch64 support.
177
- * As a general principle, we also do not make ID register
178
- * consistency checks anywhere unless using TCG, because only
179
- * for TCG would a consistency-check failure be a QEMU bug.
180
- */
181
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
182
- no_aa32 = !cpu_isar_feature(aa64_aa32, cpu);
183
- }
184
-
185
- if (arm_feature(env, ARM_FEATURE_V7VE)) {
186
- /* v7 Virtualization Extensions. In real hardware this implies
187
- * EL2 and also the presence of the Security Extensions.
188
- * For QEMU, for backwards-compatibility we implement some
189
- * CPUs or CPU configs which have no actual EL2 or EL3 but do
190
- * include the various other features that V7VE implies.
191
- * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
192
- * Security Extensions is ARM_FEATURE_EL3.
193
- */
194
- assert(!tcg_enabled() || no_aa32 ||
195
- cpu_isar_feature(aa32_arm_div, cpu));
196
- set_feature(env, ARM_FEATURE_LPAE);
197
- set_feature(env, ARM_FEATURE_V7);
198
- }
199
- if (arm_feature(env, ARM_FEATURE_V7)) {
200
- set_feature(env, ARM_FEATURE_VAPA);
201
- set_feature(env, ARM_FEATURE_THUMB2);
202
- set_feature(env, ARM_FEATURE_MPIDR);
203
- if (!arm_feature(env, ARM_FEATURE_M)) {
204
- set_feature(env, ARM_FEATURE_V6K);
205
- } else {
206
- set_feature(env, ARM_FEATURE_V6);
207
- }
208
-
209
- /* Always define VBAR for V7 CPUs even if it doesn't exist in
210
- * non-EL3 configs. This is needed by some legacy boards.
211
- */
212
- set_feature(env, ARM_FEATURE_VBAR);
213
- }
214
- if (arm_feature(env, ARM_FEATURE_V6K)) {
215
- set_feature(env, ARM_FEATURE_V6);
216
- set_feature(env, ARM_FEATURE_MVFR);
217
- }
218
- if (arm_feature(env, ARM_FEATURE_V6)) {
219
- set_feature(env, ARM_FEATURE_V5);
220
- if (!arm_feature(env, ARM_FEATURE_M)) {
221
- assert(!tcg_enabled() || no_aa32 ||
222
- cpu_isar_feature(aa32_jazelle, cpu));
223
- set_feature(env, ARM_FEATURE_AUXCR);
224
- }
225
- }
226
- if (arm_feature(env, ARM_FEATURE_V5)) {
227
- set_feature(env, ARM_FEATURE_V4T);
228
- }
229
- if (arm_feature(env, ARM_FEATURE_LPAE)) {
230
- set_feature(env, ARM_FEATURE_V7MP);
231
- }
232
- if (arm_feature(env, ARM_FEATURE_CBAR_RO)) {
233
- set_feature(env, ARM_FEATURE_CBAR);
234
- }
235
- if (arm_feature(env, ARM_FEATURE_THUMB2) &&
236
- !arm_feature(env, ARM_FEATURE_M)) {
237
- set_feature(env, ARM_FEATURE_THUMB_DSP);
238
- }
239
240
/*
241
* We rely on no XScale CPU having VFP so we can use the same bits in the
34
--
242
--
35
2.20.1
243
2.34.1
36
37
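
The ordering problem described in the cpu.c patch above can be reduced to
a small sketch. These are made-up feature names, not the real QEMU enum;
the point is simply that the implication pass must run before any code
that looks at the feature bits to decide which properties to create.

  /* Derive all implied feature bits in one pass. */
  enum {
      FEAT_M    = 1u << 0,
      FEAT_PMSA = 1u << 1,
      FEAT_V7   = 1u << 2,
      FEAT_V8   = 1u << 3,
  };

  static unsigned propagate_implications(unsigned feats)
  {
      if (feats & FEAT_M) {
          feats |= FEAT_PMSA;   /* M-profile implies PMSA */
      }
      if (feats & FEAT_V8) {
          feats |= FEAT_V7;     /* v8 implies v7 */
      }
      return feats;
  }

If this only runs at realize time, a property that is conditional on an
implied bit (the pmsav7-dregion example in the commit message) is never
created for CPUs that only set the implying bit.
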
1
From: AlexChen <alex.chen@huawei.com>
1
M-profile CPUs generally allow configuration of the number of MPU
2
regions that they have. We don't currently model this, so our
3
implementations of some of the board models provide CPUs with the
4
wrong number of regions. RTOSes like Zephyr that hardcode the
5
expected number of regions may therefore not run on the model if they
6
are set up to run on real hardware.
2
7
3
In omap_lcd_interrupts(), the pointer omap_lcd is dereferenced before
8
Add properties mpu-ns-regions and mpu-s-regions to the ARMV7M object,
4
being checked for validity, which may lead to a NULL pointer dereference.
9
matching the ability of hardware to configure the number of Secure
5
So move the assignment to surface to after the check that omap_lcd is valid,
10
and NonSecure regions separately. Our actual CPU implementation
6
and move surface_bits_per_pixel(surface) to after the surface assignment.
11
doesn't currently support that, and it happens that none of the MPS
12
boards we model set the number of regions differently for Secure vs
13
NonSecure, so we provide an interface to the boards and SoCs that
14
won't need to change if we ever do add that functionality in future,
15
but make it an error to configure the two properties to different
16
values.
7
17
8
Reported-by: Euler Robot <euler.robot@huawei.com>
18
(The property name on the CPU is the somewhat misnamed-for-M-profile
9
Signed-off-by: AlexChen <alex.chen@huawei.com>
19
"pmsav7-dregion", so we don't follow that naming convention for
10
Message-id: 5F9CDB8A.9000001@huawei.com
20
the properties here. The TRM doesn't say what the CPU configuration
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
21
variable names are, so we pick something, and follow the lowercase
22
convention we already have for properties here.)
23
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
26
Message-id: 20230724174335.2150499-3-peter.maydell@linaro.org
13
---
27
---
14
hw/display/omap_lcdc.c | 10 +++++++---
28
include/hw/arm/armv7m.h | 8 ++++++++
15
1 file changed, 7 insertions(+), 3 deletions(-)
29
hw/arm/armv7m.c | 21 +++++++++++++++++++++
30
2 files changed, 29 insertions(+)
16
31
17
diff --git a/hw/display/omap_lcdc.c b/hw/display/omap_lcdc.c
32
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
18
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/display/omap_lcdc.c
34
--- a/include/hw/arm/armv7m.h
20
+++ b/hw/display/omap_lcdc.c
35
+++ b/include/hw/arm/armv7m.h
21
@@ -XXX,XX +XXX,XX @@ static void omap_lcd_interrupts(struct omap_lcd_panel_s *s)
36
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(ARMv7MState, ARMV7M)
22
static void omap_update_display(void *opaque)
37
* + Property "vfp": enable VFP (forwarded to CPU object)
23
{
38
* + Property "dsp": enable DSP (forwarded to CPU object)
24
struct omap_lcd_panel_s *omap_lcd = (struct omap_lcd_panel_s *) opaque;
39
* + Property "enable-bitband": expose bitbanded IO
25
- DisplaySurface *surface = qemu_console_surface(omap_lcd->con);
40
+ * + Property "mpu-ns-regions": number of Non-Secure MPU regions (forwarded
26
+ DisplaySurface *surface;
41
+ * to CPU object pmsav7-dregion property; default is whatever the default
27
draw_line_func draw_line;
42
+ * for the CPU is)
28
int size, height, first, last;
43
+ * + Property "mpu-s-regions": number of Secure MPU regions (default is
29
int width, linesize, step, bpp, frame_offset;
44
+ * whatever the default for the CPU is; must currently be set to the same
30
hwaddr frame_base;
45
+ * value as mpu-ns-regions if the CPU implements the Security Extension)
31
46
* + Clock input "refclk" is the external reference clock for the systick timers
32
- if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable ||
47
* + Clock input "cpuclk" is the main CPU clock
33
- !surface_bits_per_pixel(surface)) {
48
*/
34
+ if (!omap_lcd || omap_lcd->plm == 1 || !omap_lcd->enable) {
49
@@ -XXX,XX +XXX,XX @@ struct ARMv7MState {
50
Object *idau;
51
uint32_t init_svtor;
52
uint32_t init_nsvtor;
53
+ uint32_t mpu_ns_regions;
54
+ uint32_t mpu_s_regions;
55
bool enable_bitband;
56
bool start_powered_off;
57
bool vfp;
58
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/hw/arm/armv7m.c
61
+++ b/hw/arm/armv7m.c
62
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
63
}
64
}
65
66
+ /*
67
+ * Real M-profile hardware can be configured with a different number of
68
+ * MPU regions for Secure vs NonSecure. QEMU's CPU implementation doesn't
69
+ * support that yet, so catch attempts to select that.
70
+ */
71
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
72
+ s->mpu_ns_regions != s->mpu_s_regions) {
73
+ error_setg(errp,
74
+ "mpu-ns-regions and mpu-s-regions properties must have the same value");
35
+ return;
75
+ return;
36
+ }
76
+ }
77
+ if (s->mpu_ns_regions != UINT_MAX &&
78
+ object_property_find(OBJECT(s->cpu), "pmsav7-dregion")) {
79
+ if (!object_property_set_uint(OBJECT(s->cpu), "pmsav7-dregion",
80
+ s->mpu_ns_regions, errp)) {
81
+ return;
82
+ }
83
+ }
37
+
84
+
38
+ surface = qemu_console_surface(omap_lcd->con);
85
/*
39
+ if (!surface_bits_per_pixel(surface)) {
86
* Tell the CPU where the NVIC is; it will fail realize if it doesn't
40
return;
87
* have one. Similarly, tell the NVIC where its CPU is.
41
}
88
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
89
false),
90
DEFINE_PROP_BOOL("vfp", ARMv7MState, vfp, true),
91
DEFINE_PROP_BOOL("dsp", ARMv7MState, dsp, true),
92
+ DEFINE_PROP_UINT32("mpu-ns-regions", ARMv7MState, mpu_ns_regions, UINT_MAX),
93
+ DEFINE_PROP_UINT32("mpu-s-regions", ARMv7MState, mpu_s_regions, UINT_MAX),
94
DEFINE_PROP_END_OF_LIST(),
95
};
42
96
43
--
97
--
44
2.20.1
98
2.34.1
45
99
46
100
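
The shape of the omap_lcdc fix above, reduced to a generic sketch with
hypothetical types (not the QEMU console API): validate the source object
first, only then fetch the pointer derived from it, and only then use it.

  struct surface { int bpp; };
  struct panel   { int enable; struct surface *surf; };

  static int panel_drawable(struct panel *p)
  {
      struct surface *s;

      if (!p || !p->enable) {      /* check the panel before deriving */
          return 0;
      }
      s = p->surf;                 /* safe: p is known to be valid */
      if (!s || !s->bpp) {         /* then check what the lookup returned */
          return 0;
      }
      return 1;
  }
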
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
The IoTKit, SSE200 and SSE300 all default to 8 MPU regions. The
2
2
MPS2/MPS3 FPGA images don't override these except in the case of
3
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the
3
AN547, which uses 16 MPU regions.
4
future HCR_EL2.TLOR when S-EL2 is enabled.
4
5
5
Define properties on the ARMSSE object for the MPU regions (using the
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
6
same names as the documented RTL configuration settings, and
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
following the pattern we already have for this device of using
8
all-caps names as the RTL does), and set them in the board code.
9
10
We don't actually need to override the default except on AN547,
11
but it's simpler code to have the board code set them always
12
rather than tracking which board subtypes want to set them to
13
a non-default value separately from what that value is.
14
15
The overall effect is that for mps2-an505, mps2-an521 and mps3-an524
16
we now correctly use 8 MPU regions, while mps3-an547 stays at its
17
current 16 regions.
18
19
It's possible some guest code wrongly depended on the previous
20
incorrectly modeled number of memory regions. (Such guest code
21
should ideally check the number of regions via the MPU_TYPE
22
register.) The old behaviour can be obtained with additional
23
-global arguments to QEMU:
24
25
For mps2-an521 and mps3-an524:
26
-global sse-200.CPU0_MPU_NS=16 -global sse-200.CPU0_MPU_S=16 -global sse-200.CPU1_MPU_NS=16 -global sse-200.CPU1_MPU_S=16
27
28
For mps2-an505:
29
-global sse-200.CPU0_MPU_NS=16 -global sse-200.CPU0_MPU_S=16
30
31
NB that the way the implementation allows this use of -global
32
is slightly fragile: if the board code explicitly sets the
33
properties on the sse-200 object, this overrides the -global
34
command line option. So we rely on:
35
- the boards that need fixing all happen to use the SSE defaults
36
- we can write the board code to only set the property if it
37
is different from the default, rather than having all boards
38
explicitly set the property
39
- the board that does need to use a non-default value happens
40
to need to set it to the same value (16) we previously used
41
This works, but there are some kinds of refactoring of the
42
mps2-tz.c code that would break the support for -global here.
43
44
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1772
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
46
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
47
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
48
Message-id: 20230724174335.2150499-4-peter.maydell@linaro.org
9
---
49
---
10
target/arm/helper.c | 19 +++++--------------
50
include/hw/arm/armsse.h | 5 +++++
11
1 file changed, 5 insertions(+), 14 deletions(-)
51
hw/arm/armsse.c | 16 ++++++++++++++++
12
52
hw/arm/mps2-tz.c | 29 +++++++++++++++++++++++++++++
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
53
3 files changed, 50 insertions(+)
54
55
diff --git a/include/hw/arm/armsse.h b/include/hw/arm/armsse.h
14
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
57
--- a/include/hw/arm/armsse.h
16
+++ b/target/arm/helper.c
58
+++ b/include/hw/arm/armsse.h
17
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
59
@@ -XXX,XX +XXX,XX @@
60
* (matching the hardware) is that for CPU0 in an IoTKit and CPU1 in an
61
* SSE-200 both are present; CPU0 in an SSE-200 has neither.
62
* Since the IoTKit has only one CPU, it does not have the CPU1_* properties.
63
+ * + QOM properties "CPU0_MPU_NS", "CPU0_MPU_S", "CPU1_MPU_NS" and "CPU1_MPU_S"
64
+ * which set the number of MPU regions on the CPUs. If there is only one
65
+ * CPU the CPU1 properties are not present.
66
* + Named GPIO inputs "EXP_IRQ" 0..n are the expansion interrupts for CPU 0,
67
* which are wired to its NVIC lines 32 .. n+32
68
* + Named GPIO inputs "EXP_CPU1_IRQ" 0..n are the expansion interrupts for
69
@@ -XXX,XX +XXX,XX @@ struct ARMSSE {
70
uint32_t exp_numirq;
71
uint32_t sram_addr_width;
72
uint32_t init_svtor;
73
+ uint32_t cpu_mpu_ns[SSE_MAX_CPUS];
74
+ uint32_t cpu_mpu_s[SSE_MAX_CPUS];
75
bool cpu_fpu[SSE_MAX_CPUS];
76
bool cpu_dsp[SSE_MAX_CPUS];
77
};
78
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/hw/arm/armsse.c
81
+++ b/hw/arm/armsse.c
82
@@ -XXX,XX +XXX,XX @@ static Property iotkit_properties[] = {
83
DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
84
DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], true),
85
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], true),
86
+ DEFINE_PROP_UINT32("CPU0_MPU_NS", ARMSSE, cpu_mpu_ns[0], 8),
87
+ DEFINE_PROP_UINT32("CPU0_MPU_S", ARMSSE, cpu_mpu_s[0], 8),
88
DEFINE_PROP_END_OF_LIST()
89
};
90
91
@@ -XXX,XX +XXX,XX @@ static Property sse200_properties[] = {
92
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], false),
93
DEFINE_PROP_BOOL("CPU1_FPU", ARMSSE, cpu_fpu[1], true),
94
DEFINE_PROP_BOOL("CPU1_DSP", ARMSSE, cpu_dsp[1], true),
95
+ DEFINE_PROP_UINT32("CPU0_MPU_NS", ARMSSE, cpu_mpu_ns[0], 8),
96
+ DEFINE_PROP_UINT32("CPU0_MPU_S", ARMSSE, cpu_mpu_s[0], 8),
97
+ DEFINE_PROP_UINT32("CPU1_MPU_NS", ARMSSE, cpu_mpu_ns[1], 8),
98
+ DEFINE_PROP_UINT32("CPU1_MPU_S", ARMSSE, cpu_mpu_s[1], 8),
99
DEFINE_PROP_END_OF_LIST()
100
};
101
102
@@ -XXX,XX +XXX,XX @@ static Property sse300_properties[] = {
103
DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
104
DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], true),
105
DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], true),
106
+ DEFINE_PROP_UINT32("CPU0_MPU_NS", ARMSSE, cpu_mpu_ns[0], 8),
107
+ DEFINE_PROP_UINT32("CPU0_MPU_S", ARMSSE, cpu_mpu_s[0], 8),
108
DEFINE_PROP_END_OF_LIST()
109
};
110
111
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
112
return;
113
}
114
}
115
+ if (!object_property_set_uint(cpuobj, "mpu-ns-regions",
116
+ s->cpu_mpu_ns[i], errp)) {
117
+ return;
118
+ }
119
+ if (!object_property_set_uint(cpuobj, "mpu-s-regions",
120
+ s->cpu_mpu_s[i], errp)) {
121
+ return;
122
+ }
123
124
if (i > 0) {
125
memory_region_add_subregion_overlap(&s->cpu_container[i], 0,
126
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/hw/arm/mps2-tz.c
129
+++ b/hw/arm/mps2-tz.c
130
@@ -XXX,XX +XXX,XX @@ struct MPS2TZMachineClass {
131
int uart_overflow_irq; /* number of the combined UART overflow IRQ */
132
uint32_t init_svtor; /* init-svtor setting for SSE */
133
uint32_t sram_addr_width; /* SRAM_ADDR_WIDTH setting for SSE */
134
+ uint32_t cpu0_mpu_ns; /* CPU0_MPU_NS setting for SSE */
135
+ uint32_t cpu0_mpu_s; /* CPU0_MPU_S setting for SSE */
136
+ uint32_t cpu1_mpu_ns; /* CPU1_MPU_NS setting for SSE */
137
+ uint32_t cpu1_mpu_s; /* CPU1_MPU_S setting for SSE */
138
const RAMInfo *raminfo;
139
const char *armsse_type;
140
uint32_t boot_ram_size; /* size of ram at address 0; 0 == find in raminfo */
141
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_TYPE(MPS2TZMachineState, MPS2TZMachineClass, MPS2TZ_MACHINE)
142
#define MPS3_DDR_SIZE (2 * GiB)
18
#endif
143
#endif
19
144
20
/* Shared logic between LORID and the rest of the LOR* registers.
145
+/* For cpu{0,1}_mpu_{ns,s}, means "leave at SSE's default value" */
21
- * Secure state has already been delt with.
146
+#define MPU_REGION_DEFAULT UINT32_MAX
22
+ * Secure state exclusion has already been dealt with.
147
+
23
*/
148
static const uint32_t an505_oscclk[] = {
24
-static CPAccessResult access_lor_ns(CPUARMState *env)
149
40000000,
25
+static CPAccessResult access_lor_ns(CPUARMState *env,
150
24580000,
26
+ const ARMCPRegInfo *ri, bool isread)
151
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
152
OBJECT(system_memory), &error_abort);
153
qdev_prop_set_uint32(iotkitdev, "EXP_NUMIRQ", mmc->numirq);
154
qdev_prop_set_uint32(iotkitdev, "init-svtor", mmc->init_svtor);
155
+ if (mmc->cpu0_mpu_ns != MPU_REGION_DEFAULT) {
156
+ qdev_prop_set_uint32(iotkitdev, "CPU0_MPU_NS", mmc->cpu0_mpu_ns);
157
+ }
158
+ if (mmc->cpu0_mpu_s != MPU_REGION_DEFAULT) {
159
+ qdev_prop_set_uint32(iotkitdev, "CPU0_MPU_S", mmc->cpu0_mpu_s);
160
+ }
161
+ if (object_property_find(OBJECT(iotkitdev), "CPU1_MPU_NS")) {
162
+ if (mmc->cpu1_mpu_ns != MPU_REGION_DEFAULT) {
163
+ qdev_prop_set_uint32(iotkitdev, "CPU1_MPU_NS", mmc->cpu1_mpu_ns);
164
+ }
165
+ if (mmc->cpu1_mpu_s != MPU_REGION_DEFAULT) {
166
+ qdev_prop_set_uint32(iotkitdev, "CPU1_MPU_S", mmc->cpu1_mpu_s);
167
+ }
168
+ }
169
qdev_prop_set_uint32(iotkitdev, "SRAM_ADDR_WIDTH", mmc->sram_addr_width);
170
qdev_connect_clock_in(iotkitdev, "MAINCLK", mms->sysclk);
171
qdev_connect_clock_in(iotkitdev, "S32KCLK", mms->s32kclk);
172
@@ -XXX,XX +XXX,XX @@ static void mps2tz_class_init(ObjectClass *oc, void *data)
27
{
173
{
28
int el = arm_current_el(env);
174
MachineClass *mc = MACHINE_CLASS(oc);
29
175
IDAUInterfaceClass *iic = IDAU_INTERFACE_CLASS(oc);
30
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_ns(CPUARMState *env)
176
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_CLASS(oc);
31
return CP_ACCESS_OK;
177
178
mc->init = mps2tz_common_init;
179
mc->reset = mps2_machine_reset;
180
iic->check = mps2_tz_idau_check;
181
+
182
+ /* Most machines leave these at the SSE defaults */
183
+ mmc->cpu0_mpu_ns = MPU_REGION_DEFAULT;
184
+ mmc->cpu0_mpu_s = MPU_REGION_DEFAULT;
185
+ mmc->cpu1_mpu_ns = MPU_REGION_DEFAULT;
186
+ mmc->cpu1_mpu_s = MPU_REGION_DEFAULT;
32
}
187
}
33
188
34
-static CPAccessResult access_lorid(CPUARMState *env, const ARMCPRegInfo *ri,
189
static void mps2tz_set_default_ram_info(MPS2TZMachineClass *mmc)
35
- bool isread)
190
@@ -XXX,XX +XXX,XX @@ static void mps3tz_an547_class_init(ObjectClass *oc, void *data)
36
-{
191
mmc->numirq = 96;
37
- if (arm_is_secure_below_el3(env)) {
192
mmc->uart_overflow_irq = 48;
38
- /* Access ok in secure mode. */
193
mmc->init_svtor = 0x00000000;
39
- return CP_ACCESS_OK;
194
+ mmc->cpu0_mpu_s = mmc->cpu0_mpu_ns = 16;
40
- }
195
mmc->sram_addr_width = 21;
41
- return access_lor_ns(env);
196
mmc->raminfo = an547_raminfo;
42
-}
197
mmc->armsse_type = TYPE_SSE300;
43
-
44
static CPAccessResult access_lor_other(CPUARMState *env,
45
const ARMCPRegInfo *ri, bool isread)
46
{
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
48
/* Access denied in secure mode. */
49
return CP_ACCESS_TRAP;
50
}
51
- return access_lor_ns(env);
52
+ return access_lor_ns(env, ri, isread);
53
}
54
55
/*
56
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
57
.type = ARM_CP_CONST, .resetvalue = 0 },
58
{ .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
59
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
60
- .access = PL1_R, .accessfn = access_lorid,
61
+ .access = PL1_R, .accessfn = access_lor_ns,
62
.type = ARM_CP_CONST, .resetvalue = 0 },
63
REGINFO_SENTINEL
64
};
65
--
198
--
66
2.20.1
199
2.34.1
67
200
68
201
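
As the armsse commit message above notes, guest code should ideally read
the region count from the MPU_TYPE register rather than hardcoding it.
A rough bare-metal sketch for v7-M/v8-M follows; the register address and
DREGION field come from the architecture, the helper name is made up:

  #include <stdint.h>

  /* MPU_TYPE lives at 0xE000ED90; bits [15:8] (DREGION) give the number
   * of supported MPU regions. */
  static inline uint32_t mpu_num_regions(void)
  {
      volatile uint32_t *mpu_type = (volatile uint32_t *)0xE000ED90u;
      return (*mpu_type >> 8) & 0xFFu;
  }
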
Deleted patch
1
If we're using the capstone disassembler, disassembly of a run of
2
instructions more than 32 bytes long disassembles the wrong data for
3
instructions beyond the 32 byte mark:
4
1
5
(qemu) xp /16x 0x100
6
0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000
7
0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000
8
0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574
9
0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000
10
(qemu) xp /16i 0x100
11
0x00000100: 00000005 andeq r0, r0, r5
12
0x00000104: 54410001 strbpl r0, [r1], #-1
13
0x00000108: 00000001 andeq r0, r0, r1
14
0x0000010c: 00001000 andeq r1, r0, r0
15
0x00000110: 00000000 andeq r0, r0, r0
16
0x00000114: 00000004 andeq r0, r0, r4
17
0x00000118: 54410002 strbpl r0, [r1], #-2
18
0x0000011c: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
19
0x00000120: 54410001 strbpl r0, [r1], #-1
20
0x00000124: 00000001 andeq r0, r0, r1
21
0x00000128: 00001000 andeq r1, r0, r0
22
0x0000012c: 00000000 andeq r0, r0, r0
23
0x00000130: 00000004 andeq r0, r0, r4
24
0x00000134: 54410002 strbpl r0, [r1], #-2
25
0x00000138: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c
26
0x0000013c: 00000000 andeq r0, r0, r0
27
28
Here the disassembly of 0x120..0x13f is using the data that is in
29
0x104..0x123.
30
31
This is caused by passing the wrong value to the read_memory_func().
32
The intention is that at this point in the loop the 'cap_buf' buffer
33
already contains 'csize' bytes of data for the instruction at guest
34
addr 'pc', and we want to read in an extra 'tsize' bytes. Those
35
extra bytes are therefore at 'pc + csize', not 'pc'. On the first
36
time through the loop 'csize' happens to be zero, so the initial read
37
of 32 bytes into cap_buf is correct and as long as the disassembly
38
never needs to read more data we return the correct information.
39
40
Use the correct guest address in the call to read_memory_func().
41
42
Cc: qemu-stable@nongnu.org
43
Fixes: https://bugs.launchpad.net/qemu/+bug/1900779
44
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
45
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
46
Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
47
---
48
disas/capstone.c | 2 +-
49
1 file changed, 1 insertion(+), 1 deletion(-)
50
51
diff --git a/disas/capstone.c b/disas/capstone.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/disas/capstone.c
54
+++ b/disas/capstone.c
55
@@ -XXX,XX +XXX,XX @@ bool cap_disas_monitor(disassemble_info *info, uint64_t pc, int count)
56
57
/* Make certain that we can make progress. */
58
assert(tsize != 0);
59
- info->read_memory_func(pc, cap_buf + csize, tsize, info);
60
+ info->read_memory_func(pc + csize, cap_buf + csize, tsize, info);
61
csize += tsize;
62
63
if (cs_disasm_iter(handle, &cbuf, &csize, &pc, insn)) {
64
--
65
2.20.1
66
67
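
The invariant behind the one-line capstone fix above, as a sketch with a
hypothetical callback type rather than the real disassemble_info API:
cap_buf already holds csize bytes starting at guest address pc, so the
next tsize bytes have to be fetched from pc + csize, not from pc again.

  typedef void (*read_mem_fn)(unsigned long addr, unsigned char *dst,
                              unsigned len);

  static void top_up(read_mem_fn read_mem, unsigned long pc,
                     unsigned char *buf, unsigned csize, unsigned tsize)
  {
      read_mem(pc + csize, buf + csize, tsize);  /* append, don't re-read pc */
  }
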
Deleted patch
1
In gicv3_init_cpuif() we copy the ARMCPU gicv3_maintenance_interrupt
2
into the GICv3CPUState struct's maintenance_irq field. This will
3
only work if the board happens to have already wired up the CPU
4
maintenance IRQ before the GIC was realized. Unfortunately this is
5
not the case for the 'virt' board, and so the value that gets copied
6
is NULL (since a qemu_irq is really a pointer to an IRQState struct
7
under the hood). The effect is that the CPU interface code never
8
actually raises the maintenance interrupt line.
9
1
10
Instead, since the GICv3CPUState has a pointer to the CPUState, make
11
the dereference at the point where we want to raise the interrupt, to
12
avoid an implicit requirement on board code to wire things up in a
13
particular order.
14
15
Reported-by: Jose Martins <josemartins90@gmail.com>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20201009153904.28529-1-peter.maydell@linaro.org
18
Reviewed-by: Luc Michel <luc@lmichel.fr>
19
---
20
include/hw/intc/arm_gicv3_common.h | 1 -
21
hw/intc/arm_gicv3_cpuif.c | 5 ++---
22
2 files changed, 2 insertions(+), 4 deletions(-)
23
24
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/hw/intc/arm_gicv3_common.h
27
+++ b/include/hw/intc/arm_gicv3_common.h
28
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
29
qemu_irq parent_fiq;
30
qemu_irq parent_virq;
31
qemu_irq parent_vfiq;
32
- qemu_irq maintenance_irq;
33
34
/* Redistributor */
35
uint32_t level; /* Current IRQ level */
36
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/intc/arm_gicv3_cpuif.c
39
+++ b/hw/intc/arm_gicv3_cpuif.c
40
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
41
int irqlevel = 0;
42
int fiqlevel = 0;
43
int maintlevel = 0;
44
+ ARMCPU *cpu = ARM_CPU(cs->cpu);
45
46
idx = hppvi_index(cs);
47
trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx);
48
@@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
49
50
qemu_set_irq(cs->parent_vfiq, fiqlevel);
51
qemu_set_irq(cs->parent_virq, irqlevel);
52
- qemu_set_irq(cs->maintenance_irq, maintlevel);
53
+ qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel);
54
}
55
56
static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
58
&& cpu->gic_num_lrs) {
59
int j;
60
61
- cs->maintenance_irq = cpu->gicv3_maintenance_interrupt;
62
-
63
cs->num_list_regs = cpu->gic_num_lrs;
64
cs->vpribits = cpu->gic_vpribits;
65
cs->vprebits = cpu->gic_vprebits;
66
--
67
2.20.1
68
69
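
The failure mode in the GIC patch above, shown as a hypothetical sketch
(not the QEMU types): a qemu_irq is just a pointer, so copying it at init
time snapshots NULL if the board only wires it up later, while looking it
up through the owner at the point of use sees the final value.

  typedef void (*irq_fn)(int level);

  struct cpu   { irq_fn maintenance_irq; };
  struct gicv3 { struct cpu *cpu; irq_fn cached_irq; };

  static void gic_init(struct gicv3 *s, struct cpu *c)
  {
      s->cpu = c;
      s->cached_irq = c->maintenance_irq;   /* may still be NULL here */
  }

  static void gic_raise(struct gicv3 *s, int level)
  {
      /* the fix: dereference at the point of use, not the cached copy */
      if (s->cpu->maintenance_irq) {
          s->cpu->maintenance_irq(level);
      }
  }
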